shubhrapandit committed (verified)
Commit e351f1e · Parent(s): 9d89e98

Update README.md

Files changed (1): README.md (+106 −1)
README.md CHANGED
@@ -178,18 +178,123 @@ oneshot(
## Evaluation

- The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
+ The model was evaluated with [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks, using the following commands:

<details>
<summary>Evaluation Commands</summary>
+
+ ### Vision Tasks
+ - vqav2
+ - docvqa
+ - mathvista
+ - mmmu
+ - chartqa
+
+ ```
+ vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

+ python -m eval.run eval_vllm \
+   --model_name neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 \
+   --url http://0.0.0.0:8000 \
+   --output_dir ~/tmp \
+   --eval_name <vision_task_name>
```
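`--eval_name` takes one task at a time, so scoring all five vision benchmarks means repeating the second command. A minimal driver sketch, assuming the `vllm serve` process above is already up on port 8000; the `/health` probe and the loop are illustrative, not part of the commands in the card:

```
#!/usr/bin/env bash
# Sketch: score every vision benchmark listed above against the running vLLM server.
set -euo pipefail

MODEL=neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8   # model under evaluation
URL=http://0.0.0.0:8000                                   # endpoint from `vllm serve`

# Block until the server reports healthy (vLLM's OpenAI-compatible server exposes /health).
until curl -sf "$URL/health" > /dev/null; do sleep 5; done

# Run each mistral-evals task in turn, writing results under ~/tmp.
for task in vqav2 docvqa mathvista mmmu chartqa; do
  python -m eval.run eval_vllm \
    --model_name "$MODEL" \
    --url "$URL" \
    --output_dir ~/tmp \
    --eval_name "$task"
done
```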
+
+ ### Text-based Tasks
+ #### MMLU
+
+ ```
+ lm_eval \
+   --model vllm \
+   --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
+   --tasks mmlu \
+   --num_fewshot 5 \
+   --batch_size auto \
+   --output_path output_dir
+ ```
+
+ #### MGSM
+
```
+ lm_eval \
+   --model vllm \
+   --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
+   --tasks mgsm_cot_native \
+   --num_fewshot 0 \
+   --batch_size auto \
+   --output_path output_dir
+ ```
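Both text commands leave `<model_name>` and `<n>` as placeholders. As one possible instantiation (assuming the quantized checkpoint named in the accuracy table below and a single GPU, so `tensor_parallel_size=1`), the MMLU command becomes:

```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```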
</details>

### Accuracy

+ <table>
+   <thead>
+     <tr>
+       <th>Category</th>
+       <th>Metric</th>
+       <th>Qwen/Qwen2.5-VL-7B-Instruct</th>
+       <th>Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
+       <th>Recovery (%)</th>
+     </tr>
+   </thead>
+   <tbody>
+     <tr>
+       <td rowspan="6"><b>Vision</b></td>
+       <td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
+       <td>52.00</td>
+       <td>52.33</td>
+       <td>100.63%</td>
+     </tr>
+     <tr>
+       <td>VQAv2 (val)<br><i>vqa_match</i></td>
+       <td>75.59</td>
+       <td>75.46</td>
+       <td>99.83%</td>
+     </tr>
+     <tr>
+       <td>DocVQA (val)<br><i>anls</i></td>
+       <td>94.27</td>
+       <td>94.09</td>
+       <td>99.81%</td>
+     </tr>
+     <tr>
+       <td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
+       <td>86.44</td>
+       <td>86.16</td>
+       <td>99.68%</td>
+     </tr>
+     <tr>
+       <td>MathVista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
+       <td>69.47</td>
+       <td>70.47</td>
+       <td>101.44%</td>
+     </tr>
+     <tr>
+       <td><b>Average Score</b></td>
+       <td><b>75.95</b></td>
+       <td><b>75.90</b></td>
+       <td><b>99.93%</b></td>
+     </tr>
+     <tr>
+       <td rowspan="2"><b>Text</b></td>
+       <td>MGSM (CoT)</td>
+       <td>58.72</td>
+       <td>59.92</td>
+       <td>102.04%</td>
+     </tr>
+     <tr>
+       <td>MMLU (5-shot)</td>
+       <td>71.09</td>
+       <td>70.57</td>
+       <td>99.27%</td>
+     </tr>
+   </tbody>
+ </table>
+
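Recovery in the table is the quantized score divided by the baseline score, times 100. A quick sanity check of the reported values; the `recovery` helper is illustrative only, not part of any tooling referenced above:

```
# Recovery (%) = quantized score / baseline score * 100
recovery() { awk -v q="$1" -v b="$2" 'BEGIN { printf "%.2f%%\n", q / b * 100 }'; }

recovery 52.33 52.00   # MMMU  -> 100.63%
recovery 75.46 75.59   # VQAv2 -> 99.83%
recovery 70.57 71.09   # MMLU  -> 99.27%
```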
## Inference Performance

300