Update README.md
README.md CHANGED
@@ -730,18 +730,15 @@ model-index:

 # bge-base-en-v1.5-quant

-| [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant) | Quantization (INT8) |
-| [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
-| [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant) | Quantization (INT8) |
-| [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
-| [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant) | Quantization (INT8) |
+<div>
+  <img src="https://huggingface.co/zeroshot/bge-base-en-v1.5-quant/resolve/main/bge-base-latency.png" alt="latency" width="500" style="display:inline-block; margin-right:10px;"/>
+</div>
+
+[DeepSparse](https://github.com/neuralmagic/deepsparse) improves latency by 3X on a 10-core laptop and by up to 5X on a 16-core AWS instance.
+
+## Usage
+
+This is the quantized (INT8) ONNX variant of the [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model, accelerated with [Sparsify](https://github.com/neuralmagic/sparsify) for quantization and [DeepSparseSentenceTransformers](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers) for inference.

 ```bash
 pip install -U deepsparse-nightly[sentence_transformers]

@@ -766,9 +763,4 @@ for sentence, embedding in zip(sentences, embeddings):
     print("")
 ```

-For
-For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
+For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
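
The lines collapsed between the two hunks are the Python usage snippet that the `for sentence, embedding in zip(sentences, embeddings):` context line and the trailing `print("")` belong to. A minimal sketch of that snippet, assuming the `DeepSparseSentenceTransformer` class from `deepsparse.sentence_transformers` and sentence-transformers-style `encode()` semantics; the example sentences and the `export=False` argument are illustrative assumptions, not taken from the diff:

```python
# Sketch of the usage block the diff context points at; the names and arguments
# below (DeepSparseSentenceTransformer, export=False) are assumptions, not shown in the hunk.
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

# Load the quantized ONNX model from the Hugging Face Hub with the DeepSparse runtime.
model = DeepSparseSentenceTransformer("zeroshot/bge-base-en-v1.5-quant", export=False)

# Placeholder sentences to embed.
sentences = [
    "This framework generates embeddings for each input sentence.",
    "Sentences are passed as a list of strings.",
]

# encode() mirrors the sentence-transformers API and returns one vector per sentence.
embeddings = model.encode(sentences)

for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding shape:", embedding.shape)
    print("")
```

If the sketch is faithful to the base model, each embedding should be a 768-dimensional vector, matching bge-base-en-v1.5.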
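
Beyond printing shapes, the usual next step with a bge-style embeddings model is to compare vectors with cosine similarity. A short, self-contained sketch under the same assumptions as above (the `cosine_similarity` helper and the query/passage strings are hypothetical, not from the model card):

```python
import numpy as np
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Same assumed loading call as in the sketch above.
model = DeepSparseSentenceTransformer("zeroshot/bge-base-en-v1.5-quant", export=False)

query = "What does DeepSparse accelerate?"
passage = "DeepSparse runs quantized and sparse models efficiently on CPUs."
query_vec, passage_vec = model.encode([query, passage])

print("cosine similarity:", cosine_similarity(query_vec, passage_vec))
```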