shubhrapandit committed
Commit 0d2af68 · verified · 1 Parent(s): bbbee32

Update README.md

Files changed (1): README.md (+25 −25)

README.md CHANGED
```diff
@@ -307,7 +307,7 @@ lm_eval \
 ## Inference Performance
 
 
-This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
+This model achieves up to 3.7x speedup in single-stream deployment and up to 3.3x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
 The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
 
 <details>
@@ -459,22 +459,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <tr>
 <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
 <td>1.84</td>
-<td>1.2</td>
-<td>586</td>
-<td>4.0</td>
-<td>2042</td>
-<td>4.6</td>
-<td>2270</td>
+<td>0.6</td>
+<td>293</td>
+<td>2.0</td>
+<td>1021</td>
+<td>2.3</td>
+<td>1135</td>
 </tr>
 <tr>
 <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
 <td>2.73</td>
-<td>2.4</td>
-<td>1256</td>
-<td>12.8</td>
-<td>6364</td>
-<td>16.0</td>
-<td>8076</td>
+<td>0.6</td>
+<td>314</td>
+<td>3.2</td>
+<td>1591</td>
+<td>4.0</td>
+<td>2019</td>
 </tr>
 <tr>
 <th rowspan="3" valign="top">H100x4</td>
@@ -490,22 +490,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
 <tr>
 <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
 <td>1.70</td>
-<td>1.6</td>
-<td>457</td>
-<td>4.4</td>
-<td>1207</td>
-<td>4.8</td>
-<td>1296</td>
+<td>0.8</td>
+<td>236</td>
+<td>2.2</td>
+<td>623</td>
+<td>2.4</td>
+<td>669</td>
 </tr>
 <tr>
 <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
 <td>2.35</td>
-<td>5.2</td>
-<td>1400</td>
-<td>13.2</td>
-<td>3640</td>
-<td>14.4</td>
-<td>3976</td>
+<td>1.3</td>
+<td>350</td>
+<td>3.3</td>
+<td>910</td>
+<td>3.6</td>
+<td>994</td>
 </tr>
 </tbody>
 </table>
```
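Headline figures like the "up to 3.7x speedup" this commit fills in are typically best-case ratios of baseline to quantized latency across hardware and use-case configurations. A minimal sketch of that computation, assuming single-stream latency measurements per configuration — the numbers below are hypothetical placeholders, not the README's benchmark data:

```python
# Hedged sketch, not the authors' actual benchmarking script.
# Single-stream latency in seconds per query: {config: (baseline_s, quantized_s)}.
# All values are hypothetical placeholders for illustration.
single_stream = {
    "A100x4": (4.4, 1.2),  # hypothetical
    "H100x4": (2.4, 0.8),  # hypothetical
}

def max_speedup(latencies):
    """Per-config speedup is baseline / quantized; report the best case."""
    return max(base / quant for base, quant in latencies.values())

print(f"up to {max_speedup(single_stream):.1f}x speedup in single-stream deployment")
```

The same reduction over the multi-stream throughput measurements (quantized / baseline queries-per-second) would yield the asynchronous-deployment figure.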