Update README.md
README.md CHANGED
@@ -1621,10 +1621,6 @@ language:
 
 # bge-small-en-v1.5-quant
 
-<div>
-<img src="https://huggingface.co/zeroshot/bge-small-en-v1.5-quant/resolve/main/latency.png" alt="latency" width="600" style="display:inline-block; margin-right:10px;"/>
-</div>
-
 This is the quantized (INT8) ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization.
 
 Current list of compressed bge ONNX models: