---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---
# Qwen2.5-VL-7B-Instruct-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-VL-7B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.
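As a rough numeric illustration (not the llm-compressor internals), FP8 dynamic quantization stores weights with scales computed once offline, while activation scales are recomputed at runtime from each batch of tokens; the toy `quantize_fp8` helper below is hypothetical and only shows the idea.

```python
import torch

# Largest representable value of the FP8 E4M3 format used for weights and activations.
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0

def quantize_fp8(x: torch.Tensor, dim: int = -1):
    """Scale each row of x into the FP8 range and cast to float8_e4m3fn."""
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-12) / FP8_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale

weight = torch.randn(16, 64)         # toy Linear weight (out_features x in_features)
acts = torch.randn(4, 64)            # toy activations, one row per token
w_q, w_scale = quantize_fp8(weight)  # weight scales: computed once, stored with the model
a_q, a_scale = quantize_fp8(acts)    # activation scales: recomputed per token at runtime
err = (dequantize_fp8(w_q, w_scale) - weight).abs().max().item()
print(f"max weight reconstruction error: {err:.4f}")
```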
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
    # Qwen2.5-VL chat format with a single image placeholder
    "prompt": ("<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
               f"{question}<|im_end|>\n<|im_start|>assistant\n"),
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
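Rather than hand-writing the chat-formatted prompt, the processor's chat template can build it. The sketch below assumes the repository ships the standard Qwen2.5-VL processor and chat template.

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }
]
# Renders the same <|im_start|>/<|vision_start|> structure used in the example above.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
print(prompt)
```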
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
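As a minimal sketch, assuming the server is started with `vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic` on the default port, it can then be queried with the OpenAI Python client; the image URL below is only a placeholder.

```python
from openai import OpenAI

# Assumes a server launched separately, e.g.:
#   vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic --max-model-len 4096
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is the content of this image?"},
                # Placeholder URL; replace with a reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```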
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from llmcompressor.modifiers.quantization import QuantizationModifier
# Load model.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Recipe
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["Qwen2_5_VLDecoderLayer"],
        # Keep the LM head and the vision encoder in their original precision.
        ignore=["re:.*lm_head", "re:visual.*"],
    ),
]
SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"
# Perform oneshot
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR,
)
# Save the processor so the quantized checkpoint is usable on its own.
processor.save_pretrained(SAVE_DIR)
```
</details>
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
<details>
<summary>Evaluation Commands</summary>
```
```
</details>
### Accuracy
## Inference Performance
This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128 prompt/generation tokens</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128 prompt/generation tokens</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128 prompt/generation tokens</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.8</td>
<td>707</td>
<td>1.7</td>
<td>1162</td>
<td>1.7</td>
<td>1198</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.24</td>
<td>2.4</td>
<td>851</td>
<td>1.4</td>
<td>1454</td>
<td>1.3</td>
<td>1512</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.49</td>
<td>2.2</td>
<td>912</td>
<td>1.1</td>
<td>1791</td>
<td>1.0</td>
<td>1950</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.0</td>
<td>557</td>
<td>1.2</td>
<td>919</td>
<td>1.2</td>
<td>941</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.28</td>
<td>1.6</td>
<td>698</td>
<td>0.9</td>
<td>1181</td>
<td>0.9</td>
<td>1219</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.6</td>
<td>686</td>
<td>0.9</td>
<td>1191</td>
<td>0.9</td>
<td>1228</td>
</tr>
</tbody>
</table>
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128 prompt/generation tokens</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128 prompt/generation tokens</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128 prompt/generation tokens</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.7</td>
<td>1347</td>
<td>2.6</td>
<td>5221</td>
<td>3.0</td>
<td>6122</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>0.8</td>
<td>1639</td>
<td>3.4</td>
<td>6851</td>
<td>3.9</td>
<td>7918</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.21</td>
<td>0.7</td>
<td>1314</td>
<td>3.0</td>
<td>5983</td>
<td>4.6</td>
<td>9206</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.9</td>
<td>969</td>
<td>3.1</td>
<td>3358</td>
<td>3.3</td>
<td>3615</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.29</td>
<td>1.2</td>
<td>1331</td>
<td>3.8</td>
<td>4109</td>
<td>4.2</td>
<td>4598</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.2</td>
<td>1298</td>
<td>3.8</td>
<td>4190</td>
<td>4.2</td>
<td>4573</td>
</tr>
</tbody>
</table>