Lin-K76 committed · verified · Commit e88d9ec · 1 Parent(s): 96e85de

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -19,12 +19,12 @@ tags:
 - **Version:** 1.0
 - **Model Developers:** Neural Magic
 
- Quantized version of [Mistral-7B-Instruct-v0.3](mistralai/Mistral-7B-Instruct-v0.3).
+ Quantized version of [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
 It achieves an average score of 65.85 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 66.33.
 
 ### Model Optimizations
 
- This model was obtained by quantizing the weights and activations of [Mistral-7B-Instruct-v0.3](mistralai/Mistral-7B-Instruct-v0.3) to FP8 data type, ready for inference with vLLM >= 0.5.0.
+ This model was obtained by quantizing the weights and activations of [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) to FP8 data type, ready for inference with vLLM >= 0.5.0.
 This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
 
 Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the FP8 representations of the quantized weights and activations.
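
The last paragraph of the card describes symmetric per-channel FP8 quantization of the linear weights. Below is a minimal sketch of that scaling scheme, assuming PyTorch with `torch.float8_e4m3fn` support (PyTorch >= 2.1); it only illustrates per-output-channel symmetric scaling and is not the exact recipe used to produce this checkpoint.

```python
import torch

def quantize_fp8_per_channel(weight: torch.Tensor):
    """Illustrative symmetric per-channel quantization of a linear weight to FP8 (E4M3).

    One scale per output dimension (row) maps the higher-precision weight into the
    representable FP8 range; dequantization is weight_fp8.float() * scale.
    """
    fp8_max = torch.finfo(torch.float8_e4m3fn).max            # ~448 for E4M3
    # One scale per output channel, symmetric around zero.
    scale = weight.abs().amax(dim=1, keepdim=True) / fp8_max
    scale = scale.clamp(min=1e-12)                            # avoid division by zero
    weight_fp8 = (weight / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return weight_fp8, scale

# Example: quantize a random linear weight and check the reconstruction error.
w = torch.randn(4096, 4096)
w_fp8, s = quantize_fp8_per_channel(w)
w_hat = w_fp8.float() * s
print((w - w_hat).abs().mean())
```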
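
The card also states the checkpoint is ready for inference with vLLM >= 0.5.0, which detects the FP8 quantization from the checkpoint config. A minimal usage sketch follows; the repository id `neuralmagic/Mistral-7B-Instruct-v0.3-FP8` and the prompt are illustrative assumptions, so substitute the actual model id of this repo.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Assumed repo id for illustration; replace with this repository's model id.
MODEL_ID = "neuralmagic/Mistral-7B-Instruct-v0.3-FP8"

# Format a single-turn instruction with the Mistral chat template.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Give me a short introduction to large language models."}],
    tokenize=False,
    add_generation_prompt=True,
)

# Load the FP8 checkpoint and generate.
llm = LLM(model=MODEL_ID)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```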