tags:
- mistral-small
- fp8
- vllm
---

# Mistral-Small-24B-Instruct-2501-FP8-Dynamic

## Model Overview
- **Model Architecture:** Mistral-Small-24B-Instruct-2501
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 3/1/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501).
It achieves a flexible-extract filter score of 0.9030 on the [GSM8k](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/gsm8k) task, whereas the unquantized model achieves a flexible-extract filter score of 0.9060.

### Model Optimizations

This model was obtained by quantizing the weights and activations to the FP8 data type, making it ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
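
As a rough back-of-the-envelope check of the ~50% figure (weights only, ignoring activations and KV cache; the 24B parameter count is approximate):

```python
params = 24e9                # approximate parameter count of the 24B model
bf16_gb = params * 2 / 1e9   # 16-bit weights: 2 bytes per parameter -> ~48 GB
fp8_gb = params * 1 / 1e9    # 8-bit weights: 1 byte per parameter -> ~24 GB
print(f"BF16 ~ {bf16_gb:.0f} GB, FP8 ~ {fp8_gb:.0f} GB")
```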

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "nm-testing/Mistral-Small-24B-Instruct-2501-FP8-Dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

# Apply the model's chat template to each conversation before generation
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
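
For reference, a minimal serving sketch (the port shown is vLLM's default; the flags are illustrative, not prescriptive):

```
vllm serve nm-testing/Mistral-Small-24B-Instruct-2501-FP8-Dynamic --max-model-len 4096
```

Once the server is up, it can be queried with any OpenAI-compatible client, for example:

```
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nm-testing/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",
    "messages": [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}]
  }'
```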
64
+
65
+ ## Creation
66
+
67
+ This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
68
+
69
+
```python
import argparse
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot


def main():
    parser = argparse.ArgumentParser(description="Quantize a transformer model to FP8")
    parser.add_argument("--model_id", type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "mistralai/Mistral-Small-24B-Instruct-2501")')
    parser.add_argument("--save_path", type=str, default=".",
                        help="Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic")
    args = parser.parse_args()

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Configure the quantization algorithm and scheme
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
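
Assuming the snippet is saved as `quantize.py` (a hypothetical filename), it can be invoked as:

```
python quantize.py --model_id mistralai/Mistral-Small-24B-Instruct-2501 --save_path ./quantized
```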

## Evaluation

The optimized model was evaluated on the GSM8k task, achieving a flexible-extract filter score of 0.9030 ± 0.0082 and a strict-match filter score of 0.8976 ± 0.0083, whereas the unquantized model achieves a flexible-extract filter score of 0.9060 ± 0.0080 and a strict-match filter score of 0.8992 ± 0.0083.

Evaluations were carried out using the following commands.

For the quantized model:
```
lm_eval \
  --model vllm \
  --model_args pretrained="nm-testing/Mistral-Small-24B-Instruct-2501-FP8-Dynamic",add_bos_token=True \
  --tasks gsm8k \
  --batch_size auto
```

For the unquantized model:
```
lm_eval \
  --model vllm \
  --model_args pretrained="mistralai/Mistral-Small-24B-Instruct-2501",add_bos_token=True \
  --tasks gsm8k \
  --batch_size auto
```

### Accuracy

#### GSM8k evaluation scores for the optimized model

|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.9030|± |0.0082|
| | |strict-match | 5|exact_match|↑ |0.8976|± |0.0083|

#### GSM8k evaluation scores for the unquantized model

|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.9060|± |0.0080|
| | |strict-match | 5|exact_match|↑ |0.8992|± |0.0083|