shubhrapandit committed
Commit db3d325 · verified · 1 Parent(s): 269a11e

Update README.md

Files changed (1): README.md (+295, -1)
---
tags:
- vllm
- vision
- w4a16
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2-VL-72B-Instruct
library_name: transformers
---

# Qwen2-VL-72B-Instruct-quantized-w4a16

## Model Overview
- **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
  - **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
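
The quantization scheme is recorded in the checkpoint itself. As a minimal sketch (assuming the checkpoint exposes a compressed-tensors style `quantization_config` in its `config.json`, which is typical for llm-compressor outputs but not stated in this card), the declared weight and activation settings can be inspected without loading the full model:

```python
# Hypothetical check, not part of this card: read only config.json and print the
# quantization metadata to confirm 4-bit weights / 16-bit activations.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
    trust_remote_code=True,
)
print(getattr(config, "quantization_config", "no quantization_config found"))
```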

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs using the Qwen2-VL chat format
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
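
As a minimal serving sketch (the port, request parameters, and image URL below are illustrative assumptions, not values from this card), the server can be started with `vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --max-model-len 4096` and then queried with any OpenAI-compatible client:

```python
# Hypothetical client-side example; assumes the vLLM server above is reachable on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the content of this image?"},
            # any publicly reachable image URL works here; this one is just an example
            {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg"}},
        ],
    }],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```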

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.

<details>
  <summary>Model Creation Code</summary>

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# ... (middle of the creation script not shown in this diff) ...

output = model.generate(input_ids, max_new_tokens=20)
print(processor.decode(output[0]))
print("==========================================")
```
</details>

## Evaluation

<details>
  <summary>Evaluation Commands</summary>

```
```

</details>

### Accuracy

## Inference Performance

This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm).

<details>
  <summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```

</details>

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>6.5</td>
      <td>77</td>
      <td>4.6</td>
      <td>110</td>
      <td>4.4</td>
      <td>113</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.85</td>
      <td>7.2</td>
      <td>139</td>
      <td>4.9</td>
      <td>206</td>
      <td>4.8</td>
      <td>211</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>3.32</td>
      <td>10.0</td>
      <td>202</td>
      <td>5.0</td>
      <td>398</td>
      <td>4.8</td>
      <td>419</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>4.4</td>
      <td>66</td>
      <td>3.0</td>
      <td>97</td>
      <td>2.9</td>
      <td>99</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.79</td>
      <td>4.7</td>
      <td>119</td>
      <td>3.3</td>
      <td>173</td>
      <td>3.2</td>
      <td>177</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.60</td>
      <td>6.4</td>
      <td>172</td>
      <td>4.3</td>
      <td>253</td>
      <td>4.2</td>
      <td>259</td>
    </tr>
  </tbody>
</table>

### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>0.3</td>
      <td>169</td>
      <td>1.1</td>
      <td>538</td>
      <td>1.2</td>
      <td>595</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.84</td>
      <td>0.6</td>
      <td>293</td>
      <td>2.0</td>
      <td>1021</td>
      <td>2.3</td>
      <td>1135</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.73</td>
      <td>0.6</td>
      <td>314</td>
      <td>3.2</td>
      <td>1591</td>
      <td>4.0</td>
      <td>2019</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>0.5</td>
      <td>137</td>
      <td>1.2</td>
      <td>356</td>
      <td>1.3</td>
      <td>377</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.70</td>
      <td>0.8</td>
      <td>236</td>
      <td>2.2</td>
      <td>623</td>
      <td>2.4</td>
      <td>669</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.35</td>
      <td>1.3</td>
      <td>350</td>
      <td>3.3</td>
      <td>910</td>
      <td>3.6</td>
      <td>994</td>
    </tr>
  </tbody>
</table>
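
The "Average Cost Reduction" column appears to be the mean ratio of the quantized model's QPD to the baseline's QPD across the three workloads. The short sketch below (an interpretation of the tables above, not a formula given in this card) closely reproduces the reported values for the w4a16 model on A100 hardware:

```python
# Hypothetical reading of the tables above: average cost reduction as the mean
# per-workload ratio of quantized QPD to baseline QPD.
def average_cost_reduction(quantized_qpd, baseline_qpd):
    ratios = [q / b for q, b in zip(quantized_qpd, baseline_qpd)]
    return sum(ratios) / len(ratios)

# Single-stream, w4a16 (A100x1) vs. unquantized baseline (A100x4)
print(round(average_cost_reduction([202, 398, 419], [77, 110, 113]), 2))     # ~3.32
# Multi-stream, w4a16 vs. unquantized baseline on A100
print(round(average_cost_reduction([314, 1591, 2019], [169, 538, 595]), 2))  # ~2.74
```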