Update README.md
README.md CHANGED
```diff
@@ -307,7 +307,7 @@ lm_eval \
 ## Inference Performance
 
 
-This model achieves up to
+This model achieves up to 3.7x speedup in single-stream deployment and up to 3.3x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
 The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
 
 <details>
@@ -459,22 +459,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
  <tr>
   <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
   <td>1.84</td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
+   <td>0.6</td>
+   <td>293</td>
+   <td>2.0</td>
+   <td>1021</td>
+   <td>2.3</td>
+   <td>1135</td>
  </tr>
  <tr>
   <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
   <td>2.73</td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
+   <td>0.6</td>
+   <td>314</td>
+   <td>3.2</td>
+   <td>1591</td>
+   <td>4.0</td>
+   <td>2019</td>
  </tr>
  <tr>
   <th rowspan="3" valign="top">H100x4</td>
@@ -490,22 +490,22 @@ The following performance benchmarks were conducted with [vLLM](https://docs.vll
  <tr>
   <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
   <td>1.70</td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>4
-   <td>
+   <td>0.8</td>
+   <td>236</td>
+   <td>2.2</td>
+   <td>623</td>
+   <td>2.4</td>
+   <td>669</td>
  </tr>
  <tr>
   <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
   <td>2.35</td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
-   <td>
+   <td>1.3</td>
+   <td>350</td>
+   <td>3.3</td>
+   <td>910</td>
+   <td>3.6</td>
+   <td>994</td>
  </tr>
  </tbody>
 </table>
```
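The speedup columns being filled in above are conventionally latency ratios: the unquantized baseline's per-request latency divided by the quantized model's latency on the same workload. A minimal sketch of that arithmetic (the numbers below are illustrative, not taken from the benchmark tables):

```python
def speedup(baseline_latency_s: float, optimized_latency_s: float) -> float:
    """Speedup as the ratio of baseline latency to optimized latency
    measured on the same workload (lower latency -> higher speedup)."""
    return baseline_latency_s / optimized_latency_s


# Illustrative only: a request the baseline serves in 9.25 s,
# served in 2.5 s by the quantized model, is a 3.7x speedup.
print(round(speedup(9.25, 2.5), 2))  # 3.7
```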