Commit cb25b6a (verified) by shubhrapandit · Parent: 183210a

Create README.md

Files changed (1): README.md (+460 lines)

---
tags:
- vllm
- vision
- w4a16
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
---

# Qwen2.5-VL-3B-Instruct-quantized.w4a16

## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-3B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
  - **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2. Only the weights of the linear operators in the language model's transformer blocks are quantized; the vision encoder and `lm_head` keep their original precision, and activations remain in FP16.
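
The sketch below is illustrative only (it is not the llm-compressor implementation, and the helper name is made up); it shows what symmetric, group-wise INT4 weight quantization with `group_size=128` does to a single weight matrix. The GPTQ recipe used for this model (see Creation below) additionally adjusts the remaining weights to compensate for the rounding error, and per-group scales limit the influence of outlier weights compared to a single per-tensor scale.

```python
import torch

def fake_quantize_w4a16(weight: torch.Tensor, group_size: int = 128):
    """Illustrative symmetric, group-wise INT4 fake-quantization of a 2D weight."""
    out_features, in_features = weight.shape
    grouped = weight.reshape(out_features, in_features // group_size, group_size)
    # one scale per group of 128 input channels; symmetric INT4 uses codes in [-8, 7]
    scales = grouped.abs().amax(dim=-1, keepdim=True) / 7.0
    codes = torch.clamp(torch.round(grouped / scales), -8, 7)
    # at inference the INT4 codes are dequantized back to 16-bit on the fly (W4A16)
    dequant = (codes * scales).reshape(out_features, in_features)
    return codes.to(torch.int8), scales, dequant

w = torch.randn(256, 1024)  # stand-in for one linear layer's weight
codes, scales, w_dq = fake_quantize_w4a16(w)
print("mean absolute rounding error:", (w - w_dq).abs().mean().item())
```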

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs (Qwen2.5-VL chat-format prompt with a single image placeholder)
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
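
For example, the model could be hosted with `vllm serve` and queried through the standard OpenAI Python client; the endpoint URL, placeholder API key, and image URL below are illustrative.

```python
# Launch the server first, e.g.:
#   vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --max-model-len 4096
from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},  # illustrative URL
                {"type": "text", "text": "What is the content of this image?"},
            ],
        }
    ],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```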

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.

<details>
<summary>Model Creation Code</summary>

```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from compressed_tensors.quantization import (
    ActivationOrdering,
    QuantizationArgs,
    QuantizationScheme,
    QuantizationStrategy,
    QuantizationType,
)

# Load model.
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"

model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac = 0.01

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess: encode the calibration image as a base64 data URI
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# GPTQ recipe: INT4, symmetric, group-wise (group_size=128) weight-only quantization
# of the language model's linear layers; the vision tower and lm_head are ignored.
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder=ActivationOrdering.WEIGHT,
            ),
        ),
    },
    sequential_targets=["Qwen2_5_VLDecoderLayer"],
    ignore=["lm_head", "re:visual.*"],
    update_size=NUM_CALIBRATION_SAMPLES,
    dampening_frac=dampening_frac,
)

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w4a16"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
</details>
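
As a quick, hedged sanity check (the exact key names written by llm-compressor can vary between versions), the quantization scheme recorded in the saved checkpoint can be inspected from its `config.json`:

```python
import json
import os

save_dir = "Qwen2.5-VL-3B-Instruct-quantized.w4a16"  # SAVE_DIR from the snippet above

with open(os.path.join(save_dir, "config.json")) as f:
    config = json.load(f)

# llm-compressor serializes the compression scheme into the model config;
# expect to see 4-bit, group_size=128, symmetric weight quantization here.
print(json.dumps(config.get("quantization_config", {}), indent=2))
```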

## Evaluation

The model was evaluated on the OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) benchmarks, and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

<details>
<summary>Evaluation Commands</summary>

```
```

</details>

### Accuracy

## Inference Performance

This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm).

<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```

</details>

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody style="text-align: center">
    <tr>
      <th rowspan="3" valign="top">A6000x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>3.1</td>
      <td>1454</td>
      <td>1.8</td>
      <td>2546</td>
      <td>1.7</td>
      <td>2610</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
      <td>1.27</td>
      <td>2.6</td>
      <td>1708</td>
      <td>1.3</td>
      <td>3340</td>
      <td>1.3</td>
      <td>3459</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.57</td>
      <td>2.4</td>
      <td>1886</td>
      <td>1.0</td>
      <td>4409</td>
      <td>1.0</td>
      <td>4409</td>
    </tr>
    <tr>
      <th rowspan="3" valign="top">A100x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>2.2</td>
      <td>920</td>
      <td>1.3</td>
      <td>1603</td>
      <td>1.2</td>
      <td>1636</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
      <td>1.09</td>
      <td>2.1</td>
      <td>975</td>
      <td>1.2</td>
      <td>1743</td>
      <td>1.1</td>
      <td>1814</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.20</td>
      <td>2.0</td>
      <td>1011</td>
      <td>1.0</td>
      <td>2015</td>
      <td>1.0</td>
      <td>2012</td>
    </tr>
    <tr>
      <th rowspan="3" valign="top">H100x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>1.5</td>
      <td>740</td>
      <td>0.9</td>
      <td>1221</td>
      <td>0.9</td>
      <td>1276</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
      <td>1.06</td>
      <td>1.4</td>
      <td>768</td>
      <td>0.9</td>
      <td>1276</td>
      <td>0.8</td>
      <td>1399</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.24</td>
      <td>0.9</td>
      <td>1219</td>
      <td>0.9</td>
      <td>1270</td>
      <td>0.8</td>
      <td>1304</td>
    </tr>
  </tbody>
</table>

### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody style="text-align: center">
    <tr>
      <th rowspan="3" valign="top">A6000x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>0.5</td>
      <td>2405</td>
      <td>2.6</td>
      <td>11889</td>
      <td>2.9</td>
      <td>12909</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
      <td>1.26</td>
      <td>0.6</td>
      <td>2725</td>
      <td>3.4</td>
      <td>15162</td>
      <td>3.9</td>
      <td>17673</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.39</td>
      <td>0.6</td>
      <td>2548</td>
      <td>3.9</td>
      <td>17437</td>
      <td>4.7</td>
      <td>21223</td>
    </tr>
    <tr>
      <th rowspan="3" valign="top">A100x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>0.8</td>
      <td>1663</td>
      <td>3.9</td>
      <td>7899</td>
      <td>4.4</td>
      <td>8924</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
      <td>1.06</td>
      <td>0.9</td>
      <td>1734</td>
      <td>4.2</td>
      <td>8488</td>
      <td>4.7</td>
      <td>9548</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.10</td>
      <td>0.9</td>
      <td>1775</td>
      <td>4.2</td>
      <td>8540</td>
      <td>5.1</td>
      <td>10318</td>
    </tr>
    <tr>
      <th rowspan="3" valign="top">H100x1</th>
      <th>Qwen/Qwen2.5-VL-3B-Instruct</th>
      <td></td>
      <td>1.1</td>
      <td>1188</td>
      <td>4.3</td>
      <td>4656</td>
      <td>4.3</td>
      <td>4676</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
      <td>1.15</td>
      <td>1.4</td>
      <td>1570</td>
      <td>4.3</td>
      <td>4676</td>
      <td>4.8</td>
      <td>5220</td>
    </tr>
    <tr>
      <th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
      <td>1.96</td>
      <td>4.2</td>
      <td>4598</td>
      <td>4.1</td>
      <td>4505</td>
      <td>4.4</td>
      <td>4838</td>
    </tr>
  </tbody>
</table>