---
license: mit
language:
  - en
tags:
  - sparse
  - sparsity
  - quantized
  - onnx
  - embeddings
  - int8
---

This is the quantized (INT8) ONNX variant of the bge-base-en-v1.5 embedding model, created with the DeepSparse Optimum pipeline for ONNX export/inference and Neural Magic's Sparsify for one-shot quantization.
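To illustrate what INT8 quantization means here, the sketch below shows symmetric per-tensor quantization: each float tensor is mapped to 8-bit integers plus a single scale factor, which shrinks model size and enables faster integer inference at a small accuracy cost. This is a minimal NumPy illustration of the general technique, not the actual Sparsify one-shot procedure (which calibrates scales from sample data and quantizes per-layer).

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step (= scale) of the original.
```

The INT8 tensor uses a quarter of the memory of FP32, and the reconstruction error is bounded by the scale, which is why one-shot quantization typically costs only a small amount of embedding quality.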

Current list of sparse and quantized bge ONNX models:

- zeroshot/bge-large-en-v1.5-sparse
- zeroshot/bge-large-en-v1.5-quant
- zeroshot/bge-base-en-v1.5-sparse
- zeroshot/bge-base-en-v1.5-quant
- zeroshot/bge-small-en-v1.5-sparse
- zeroshot/bge-small-en-v1.5-quant
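All of these models produce dense sentence embeddings, which are typically compared with cosine similarity for retrieval or semantic search. A minimal sketch, assuming the embeddings have already been computed (by whatever inference pipeline you use) and are available as NumPy arrays:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors for illustration; a real bge-base embedding is 768-dimensional.
query_emb = np.array([0.1, 0.3, 0.5], dtype=np.float32)
doc_emb   = np.array([0.2, 0.6, 1.0], dtype=np.float32)

score = cosine_similarity(query_emb, doc_emb)
```

Here the document vector is a scalar multiple of the query vector, so the score is 1.0; in practice, higher scores indicate more semantically similar texts.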

For general questions about these models or the sparsification methods behind them, reach out to the engineering team on our community Slack.