---
license: mit
tags:
- deepseek
- int4
- vllm
- llmcompressor
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
---

# DeepSeek-R1-Distill-Llama-70B-quantized.w4a16

## Model Overview
- **Model Architecture:** LlamaForCausalLM
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
- **Release Date:** 2/7/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B).

### Model Optimizations

This model was obtained by quantizing the weights of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) to the INT4 data type.
This optimization reduces the number of bits per parameter from 16 to 4, cutting the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformer blocks are quantized.
Weights are quantized using a symmetric per-group scheme, with group size 128.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
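
As a rough sanity check on that figure, the sketch below estimates the weight footprint for a nominal 70B-parameter model; it is illustrative only and ignores the per-group scales, zero-points, and any layers kept in 16-bit precision (such as `lm_head`), which add a few percent back.

```python
# Back-of-the-envelope weight-memory estimate for a nominal 70B-parameter model.
# Ignores group scales/zero-points and unquantized layers, so real sizes are slightly larger.
params = 70e9

bf16_gb = params * 16 / 8 / 1e9  # 16 bits per parameter -> ~140 GB
int4_gb = params * 4 / 8 / 1e9   # 4 bits per parameter  -> ~35 GB

reduction = 1 - int4_gb / bf16_gb
print(f"BF16: ~{bf16_gb:.0f} GB, INT4: ~{int4_gb:.0f} GB, reduction: ~{reduction:.0%}")  # ~75%
```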

## Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

number_gpus = 1
model_name = "neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16"

tokenizer = AutoTokenizer.from_pretrained(model_name)
sampling_params = SamplingParams(temperature=0.6, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
llm = LLM(model=model_name, tensor_parallel_size=number_gpus, trust_remote_code=True)

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
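
As an illustrative sketch of that serving path (the launch command, port, and placeholder `api_key` below are assumptions, not requirements of this model):

```python
# Start an OpenAI-compatible server in a separate shell, for example:
#   vllm serve neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16
# Then query it with the standard OpenAI client:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM does not require a real key by default
response = client.chat.completions.create(
    model="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```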

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map

# Load model
model_stub = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
model_name = model_stub.split("/")[-1]

num_samples = 3072
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_stub)

device_map = calculate_offload_device_map(
    model_stub,
    reserve_for_hessians=True,
    num_gpus=2,
    torch_dtype="auto",
)

model = AutoModelForCausalLM.from_pretrained(
    model_stub,
    device_map=device_map,
    torch_dtype="auto",
)

# Load and preprocess the calibration dataset into chat-formatted text
def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.map(preprocess_fn)

# Configure the quantization algorithm and scheme (GPTQ, INT4 weights with group size 128)
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=["lm_head"],
    dampening_frac=0.1,
)

# Apply quantization
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w4a16"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
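
As a quick, optional sanity check (not part of the original recipe), the saved checkpoint's `config.json` should advertise a compressed-tensors quantization config; the key names below reflect what llm-compressor writes at the time of writing:

```python
# Hypothetical post-save check on the checkpoint written above.
import json
import os

with open(os.path.join(save_path, "config.json")) as f:
    config = json.load(f)

quant_config = config.get("quantization_config", {})
print(quant_config.get("quant_method"))  # expected: "compressed-tensors"
```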

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), using the following commands:

OpenLLM Leaderboard V1:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

OpenLLM Leaderboard V2:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --tasks leaderboard \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

### Accuracy

Recovery is reported as the quantized model's score as a percentage of the baseline model's score.
163
+ <table>
164
+ <thead>
165
+ <tr>
166
+ <th>Category</th>
167
+ <th>Metric</th>
168
+ <th>deepseek-ai/DeepSeek-R1-Distill-Llama-70B</th>
169
+ <th>neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16</th>
170
+ <th>Recovery</th>
171
+ </tr>
172
+ </thead>
173
+ <tbody>
174
+ <tr>
175
+ <td rowspan="7"><b>OpenLLM V1</b></td>
176
+ <td>ARC-Challenge (Acc-Norm, 25-shot)</td>
177
+ <td>63.65</td>
178
+ <td>63.31</td>
179
+ <td>99.5%</td>
180
+ </tr>
181
+ <tr>
182
+ <td>GSM8K (Strict-Match, 5-shot)</td>
183
+ <td>93.03</td>
184
+ <td>93.03</td>
185
+ <td>100.0%</td>
186
+ </tr>
187
+ <tr>
188
+ <td>HellaSwag (Acc-Norm, 10-shot)</td>
189
+ <td>84.85</td>
190
+ <td>84.43</td>
191
+ <td>99.5%</td>
192
+ </tr>
193
+ <tr>
194
+ <td>MMLU (Acc, 5-shot)</td>
195
+ <td>78.04</td>
196
+ <td>77.15</td>
197
+ <td>98.9%</td>
198
+ </tr>
199
+ <tr>
200
+ <td>TruthfulQA (MC2, 0-shot)</td>
201
+ <td>56.67</td>
202
+ <td>57.79</td>
203
+ <td>102.0%</td>
204
+ </tr>
205
+ <tr>
206
+ <td>Winogrande (Acc, 5-shot)</td>
207
+ <td>78.22</td>
208
+ <td>79.48</td>
209
+ <td>101.6%</td>
210
+ </tr>
211
+ <tr>
212
+ <td><b>Average Score</b></td>
213
+ <td><b>75.74</b></td>
214
+ <td><b>75.86</b></td>
215
+ <td><b>100.2%</b></td>
216
+ </tr>
217
+ <tr>
218
+ <td rowspan="7"><b>OpenLLM V2</b></td>
219
+ <td>IFEval (Inst Level Strict Acc, 0-shot)</td>
220
+ <td>43.15</td>
221
+ <td>42.08</td>
222
+ <td>97.5%</td>
223
+ </tr>
224
+ <tr>
225
+ <td>BBH (Acc-Norm, 3-shot)</td>
226
+ <td>64.32</td>
227
+ <td>63.91</td>
228
+ <td>99.4%</td>
229
+ </tr>
230
+ <tr>
231
+ <td>Math-Hard (Exact-Match, 4-shot)</td>
232
+ <td>35.04</td>
233
+ <td>37.81</td>
234
+ <td>107.9%</td>
235
+ </tr>
236
+ <tr>
237
+ <td>GPQA (Acc-Norm, 0-shot)</td>
238
+ <td>37.15</td>
239
+ <td>36.64</td>
240
+ <td>98.6%</td>
241
+ </tr>
242
+ <tr>
243
+ <td>MUSR (Acc-Norm, 0-shot)</td>
244
+ <td>42.89</td>
245
+ <td>42.49</td>
246
+ <td>99.1%</td>
247
+ </tr>
248
+ <tr>
249
+ <td>MMLU-Pro (Acc, 5-shot)</td>
250
+ <td>47.22</td>
251
+ <td>45.78</td>
252
+ <td>96.9%</td>
253
+ </tr>
254
+ <tr>
255
+ <td><b>Average Score</b></td>
256
+ <td><b>44.96</b></td>
257
+ <td><b>44.78</b></td>
258
+ <td><b>99.6%</b></td>
259
+ </tr>
260
+ <tr>
261
+ <td rowspan="4"><b>Coding</b></td>
262
+ <td>HumanEval (pass@1)</td>
263
+ <td>81.10</td>
264
+ <td>80.20</td>
265
+ <td><b>98.9%</b></td>
266
+ </tr>
267
+ <tr>
268
+ <td>HumanEval (pass@10)</td>
269
+ <td>87.60</td>
270
+ <td>89.30</td>
271
+ <td>101.9%</td>
272
+ </tr>
273
+ <tr>
274
+ <td>HumanEval+ (pass@10)</td>
275
+ <td>75.20</td>
276
+ <td>73.00</td>
277
+ <td>97.1%</td>
278
+ </tr>
279
+ <tr>
280
+ <td>HumanEval+ (pass@10)</td>
281
+ <td>83.10</td>
282
+ <td>83.70</td>
283
+ <td>100.7%</td>
284
+ </tr>
285
+ </tbody>
286
+ </table>