shubhrapandit committed · verified · Commit e11def4 · 1 Parent(s): 2ea9797

Update README.md

Files changed (1): README.md (+29 −29)
README.md CHANGED

@@ -312,7 +312,7 @@ lm_eval \
 ## Inference Performance
 
 
-This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
+This model achieves up to 3.95x speedup in single-stream deployment and up to 6.6x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
 The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
 
 <details>
@@ -414,10 +414,10 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <td>5.66</td>
 <td>4.3</td>
 <td>252</td>
-<td>4.3</td>
-<td>252</td>
-<td>1.0</td>
-<td>1065</td>
+<td>4.4</td>
+<td>251</td>
+<td>4.2</td>
+<td>259</td>
 </tr>
 </tbody>
 </table>
@@ -465,22 +465,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <tr>
 <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8</td>
 <td>1.80</td>
-<td>1.2</td>
-<td>578</td>
-<td>4.0</td>
-<td>2040</td>
-<td>4.6</td>
-<td>2266</td>
+<td>0.6</td>
+<td>289</td>
+<td>2.0</td>
+<td>1020</td>
+<td>2.3</td>
+<td>1133</td>
 </tr>
 <tr>
 <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
 <td>2.75</td>
-<td>2.8</td>
-<td>1364</td>
-<td>12.8</td>
-<td>6352</td>
-<td>16.4</td>
-<td>8148</td>
+<td>0.7</td>
+<td>341</td>
+<td>3.2</td>
+<td>1588</td>
+<td>4.1</td>
+<td>2037</td>
 </tr>
 <tr>
 <th rowspan="3" valign="top">H100x4</th>
@@ -496,22 +496,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <tr>
 <td>neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic</td>
 <td>1.73</td>
-<td>1.8</td>
-<td>479</td>
-<td>4.4</td>
-<td>1203</td>
-<td>4.8</td>
-<td>1296</td>
+<td>0.9</td>
+<td>247</td>
+<td>2.2</td>
+<td>621</td>
+<td>2.4</td>
+<td>669</td>
 </tr>
 <tr>
 <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
 <td>8.27</td>
-<td>13.2</td>
-<td>3652</td>
-<td>13.2</td>
-<td>3652</td>
-<td>99.2</td>
-<td>27108</td>
+<td>3.3</td>
+<td>913</td>
+<td>3.3</td>
+<td>898</td>
+<td>3.6</td>
+<td>991</td>
 </tr>
 </tbody>
 </table>
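Speedup factors like the "up to 3.95x" single-stream and "up to 6.6x" multi-stream figures this commit fills in are conventionally ratios against the unquantized baseline: a latency ratio for single-stream, a throughput ratio for multi-stream. A minimal sketch of that arithmetic, using hypothetical placeholder numbers rather than values from the benchmark tables:

```python
# Sketch of how speedup factors are typically derived from benchmarks.
# All numbers below are hypothetical placeholders, not measured results.

def single_stream_speedup(baseline_latency_s: float, optimized_latency_s: float) -> float:
    """Single-stream speedup: ratio of per-request latencies (lower latency wins)."""
    return baseline_latency_s / optimized_latency_s

def multi_stream_speedup(baseline_qps: float, optimized_qps: float) -> float:
    """Multi-stream (asynchronous) speedup: ratio of sustained throughput."""
    return optimized_qps / baseline_qps

if __name__ == "__main__":
    # Hypothetical: baseline takes 7.9 s per request, quantized takes 2.0 s.
    print(round(single_stream_speedup(7.9, 2.0), 2))    # 3.95
    # Hypothetical: baseline sustains 100 QPS, quantized sustains 660 QPS.
    print(round(multi_stream_speedup(100.0, 660.0), 1))  # 6.6
```

Note the two metrics move in opposite directions: a speedup is baseline/optimized for latency but optimized/baseline for throughput, which is why both are reported separately per deployment mode.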