---
tags:
- vllm
- vision
- w4a16
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---

# Qwen2.5-VL-7B-Instruct-quantized-w4a16

## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-7B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
  - **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs using the Qwen2.5-VL chat format
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. A minimal client sketch is shown below.
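The following sketch assumes the server was started with `vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16` on the default port; the image URL is a placeholder.

```python
# Start the server first, e.g.:
#   vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 --max-model-len 4096
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the API key is unused but required by the client
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16",
    messages=[{
        "role": "user",
        "content": [
            # placeholder image URL; replace with a real, reachable image
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```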
## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.

**Model Creation Code**

```python
import base64
from io import BytesIO

import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from compressed_tensors.quantization import (
    QuantizationArgs,
    QuantizationType,
    QuantizationStrategy,
    ActivationOrdering,
    QuantizationScheme,
)

# Load model.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)

dampening_frac = 0.01

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# W4A16 GPTQ recipe: 4-bit symmetric weights with group size 128,
# skipping the LM head and the vision tower.
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder=ActivationOrdering.WEIGHT,
            ),
        ),
    },
    sequential_targets=["Qwen2_5_VLDecoderLayer"],
    ignore=["lm_head", "re:visual.*"],
    update_size=NUM_CALIBRATION_SAMPLES,
    dampening_frac=dampening_frac,
)

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w4a16"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
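After quantization, a quick sample generation can confirm the compressed model still produces coherent text before uploading. A minimal sketch, reusing the `model` and `processor` objects from the script above (the prompt is illustrative):

```python
# Sanity check: greedy-decode a short text-only prompt with the quantized model.
sample = processor(text="The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**sample, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True))
```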
## Evaluation

The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
**Evaluation Commands**

### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

```
vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```

### Text-based Tasks
#### MMLU

```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```

#### MGSM

```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir
```
### Accuracy
| Category | Metric | Qwen/Qwen2.5-VL-7B-Instruct | Qwen2.5-VL-7B-Instruct-quantized.w4a16 | Recovery (%) |
|----------|--------|------------------------------|-----------------------------------------|--------------|
| Vision | MMMU (val, CoT)<br>explicit_prompt_relaxed_correctness | 52.00 | 51.11 | 98.29% |
| Vision | VQAv2 (val)<br>vqa_match | 75.59 | 73.90 | 97.76% |
| Vision | DocVQA (val)<br>anls | 94.27 | 94.13 | 99.85% |
| Vision | ChartQA (test, CoT)<br>anywhere_in_answer_relaxed_correctness | 86.44 | 85.64 | 99.07% |
| Vision | Mathvista (testmini, CoT)<br>explicit_prompt_relaxed_correctness | 69.47 | 67.17 | 96.69% |
| Vision | **Average Score** | **75.95** | **74.79** | **98.47%** |
| Text | MGSM (CoT) | 58.72 | 56.44 | 96.12% |
| Text | MMLU (5-shot) | 71.09 | 68.67 | 96.59% |
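Recovery is the quantized model's score as a percentage of the baseline score; for example, on MMMU, 51.11 / 52.00 × 100 ≈ 98.29%.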
## Inference Performance

This model achieves up to 2.35x speedup in single-stream deployment and up to 2.02x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
**Benchmarking Command**

```
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
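For example, the document visual question answering profile used below (1680W x 2240H images, 64 prompt tokens, 128 generated tokens) could be emulated as follows; one image per request is an assumption here:

```
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=64,generated_tokens=128,images=1,width=1680,height=2240 --max-seconds 120 --backend aiohttp_server
```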
### Single-stream performance (measured with vLLM version 0.7.2)

Use case profiles (Image Size (WxH) / prompt tokens / generation tokens):
- Document Visual Question Answering (DocVQA): 1680W x 2240H / 64 / 128
- Visual Reasoning: 640W x 480H / 128 / 128
- Image Captioning: 480W x 360H / 0 / 128

| Hardware | Model | Average Cost Reduction | DocVQA<br>Latency (s) | DocVQA<br>QPD | Visual Reasoning<br>Latency (s) | Visual Reasoning<br>QPD | Image Captioning<br>Latency (s) | Image Captioning<br>QPD |
|----------|-------|------------------------|------|------|------|------|------|------|
| A6000x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 4.9 | 912 | 3.2 | 1386 | 3.1 | 1431 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.50 | 3.6 | 1248 | 2.1 | 2163 | 2.0 | 2237 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 2.05 | 3.3 | 1351 | 1.4 | 3252 | 1.4 | 3321 |
| A100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 2.8 | 707 | 1.7 | 1162 | 1.7 | 1198 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.24 | 2.4 | 851 | 1.4 | 1454 | 1.3 | 1512 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.49 | 2.2 | 912 | 1.1 | 1791 | 1.0 | 1950 |
| H100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 2.0 | 557 | 1.2 | 919 | 1.2 | 941 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic | 1.28 | 1.6 | 698 | 0.9 | 1181 | 0.9 | 1219 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.28 | 1.6 | 686 | 0.9 | 1191 | 0.9 | 1228 |

**QPD:** Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
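As an illustration of how QPD relates to latency (assuming, hypothetically, an on-demand A6000 price of roughly $0.80/hr): a 4.9 s single-stream latency allows about 3600 / 4.9 ≈ 735 queries per hour, or 735 / 0.80 ≈ 918 queries per dollar, in line with the 912 reported for the baseline model above.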
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

Use case profiles (Image Size (WxH) / prompt tokens / generation tokens):
- Document Visual Question Answering (DocVQA): 1680W x 2240H / 64 / 128
- Visual Reasoning: 640W x 480H / 128 / 128
- Image Captioning: 480W x 360H / 0 / 128

| Hardware | Model | Average Cost Reduction | DocVQA<br>Max throughput (QPS) | DocVQA<br>QPD | Visual Reasoning<br>Max throughput (QPS) | Visual Reasoning<br>QPD | Image Captioning<br>Max throughput (QPS) | Image Captioning<br>QPD |
|----------|-------|------------------------|------|------|------|------|------|------|
| A6000x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.4 | 1837 | 1.5 | 6846 | 1.7 | 7638 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.41 | 0.5 | 2297 | 2.3 | 10137 | 2.5 | 11472 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.60 | 0.4 | 1828 | 2.7 | 12254 | 3.4 | 15477 |
| A100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.7 | 1347 | 2.6 | 5221 | 3.0 | 6122 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.27 | 0.8 | 1639 | 3.4 | 6851 | 3.9 | 7918 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.21 | 0.7 | 1314 | 3.0 | 5983 | 4.6 | 9206 |
| H100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.9 | 969 | 3.1 | 3358 | 3.3 | 3615 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic | 1.29 | 1.2 | 1331 | 3.8 | 4109 | 4.2 | 4598 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.28 | 1.2 | 1298 | 3.8 | 4190 | 4.2 | 4573 |

**QPS:** Queries per second.

**QPD:** Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).