Update README.md
README.md CHANGED
@@ -1621,9 +1621,6 @@ language:
 
 # bge-small-en-v1.5-quant
 
-<img src="https://huggingface.co/zeroshot/bge-small-en-v1.5-quant/blob/main/latency.png" alt="latency" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
-
-
 This is the quantized (INT8) ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization.
 
 Current list of compressed bge ONNX models:
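For reference, a minimal usage sketch for the exported model, not taken from the card itself: it loads the ONNX graph directly with `onnxruntime` instead of DeepSparse, and it assumes the quantized weights are saved locally as `model.onnx`, that the repo id is `zeroshot/bge-small-en-v1.5-quant` (as in the image URL above), and that the first graph output is the token-level hidden states, pooled via the CLS token and L2-normalized as is conventional for BGE embedding models.

```python
# Hypothetical sketch: run the INT8 ONNX embedding model with onnxruntime.
# Assumptions: tokenizer is available from the Hub repo, the exported file is
# "model.onnx", and output 0 of the graph is the last hidden state.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zeroshot/bge-small-en-v1.5-quant")
session = ort.InferenceSession("model.onnx")  # assumed local path to the export

sentences = ["This is an example sentence to embed."]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

# Feed only the inputs the graph actually declares (input_ids, attention_mask, ...).
graph_inputs = {i.name for i in session.get_inputs()}
onnx_inputs = {name: enc[name] for name in enc if name in graph_inputs}

# Assumed: first output is the token-level hidden states [batch, seq, dim].
last_hidden = session.run(None, onnx_inputs)[0]

# BGE-style pooling: take the [CLS] token embedding and L2-normalize it.
cls = last_hidden[:, 0]
embeddings = cls / np.linalg.norm(cls, axis=1, keepdims=True)
print(embeddings.shape)
```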