Performance leap: TGI processes 3x more tokens, 13x faster than vLLM on long prompts. Zero config!
3x more tokens: By reducing our memory footprint, we’re able to ingest many more tokens, and more dynamically, than before. A single L4 (24GB) can handle 30k tokens on llama 3.1-8B, while vLLM barely reaches 10k. A lot of work went into reducing the footprint of the runtime, and its effects are most visible in smaller, constrained environments.
13x faster: On long prompts (200k+ tokens), a conversation reply takes 27.5s in vLLM, while it takes only 2s in TGI. How so? We keep the initial conversation around, so when a new reply comes in, we can answer almost instantly. The overhead of the lookup is ~5µs. Thanks to @Daniël de Kok for the beast data structure.
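The real data structure is more refined than this, but a minimal sketch of the idea, matching a new request against previously cached conversations so only the new suffix needs a prefill pass, could look like the following. All names here are illustrative, not TGI internals:

```python
# Minimal sketch of prefix caching over token ids (illustrative; not TGI's actual structure).
# Idea: when a follow-up request arrives, find the longest cached prefix so that only
# the new suffix of the conversation needs a prefill pass.

class _Node:
    __slots__ = ("children", "block")

    def __init__(self):
        self.children = {}   # token id -> _Node
        self.block = None    # handle to the kv-cache block computed for this position

class PrefixCache:
    def __init__(self):
        self.root = _Node()

    def insert(self, tokens, blocks):
        """Remember which kv-cache blocks back an already-processed token sequence."""
        node = self.root
        for tok, blk in zip(tokens, blocks):
            node = node.children.setdefault(tok, _Node())
            node.block = blk

    def longest_prefix(self, tokens):
        """Return (matched_length, blocks) for the longest cached prefix of `tokens`."""
        node, blocks = self.root, []
        for tok in tokens:
            node = node.children.get(tok)
            if node is None:
                break
            blocks.append(node.block)
        return len(blocks), blocks
```

Only the tokens beyond the matched prefix need to be prefilled, which is why replying inside an existing 200k-token conversation can be near-instant; the production implementation is more sophisticated (and the lookup is what costs ~5µs), but the principle is the same.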
Zero config: That’s it. Remove all the flags you are using and you’re likely to get the best performance. By evaluating the hardware and the model, TGI automatically selects values that give the best performance. In production, we don’t use any flags in our deployments anymore. We kept all existing flags around; they may come in handy in niche scenarios.
To ensure accurate and reliable results, we employed a robust benchmarking protocol that addresses common pitfalls in performance evaluation. In particular, rather than measuring throughput over a fixed time window, we measure the total time each engine needs to fully process a fixed set of requests, which avoids boundary effects.
Note: A boundary effect is when benchmark results become flaky because they depend on fine details of the engine being benchmarked. For instance, imagine a system ingesting a constant 10 RPS, where the benchmark’s single final request arrives 0.1s before the end of the 30s window and takes a full 10s to process. Because that single query isn’t parallelized with others, the benchmark measures 7.5 RPS instead of the expected 10. A very slightly slower engine would receive that request 0.1s after the window closed, the request would be discarded by the benchmark, and the slower system would be measured as faster.
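To make the arithmetic in that note concrete, here is the same example worked out with purely illustrative numbers:

```python
# Back-of-the-envelope illustration of the boundary effect described in the note.
window_s = 30.0            # length of the benchmark window
rate_rps = 10.0            # constant ingestion rate
last_arrival_s = 29.9      # the final request lands 0.1s before the window closes
last_duration_s = 10.0     # ...and takes a full 10s, unparallelized with other requests

accepted = window_s * rate_rps                       # 300 requests accepted in the window
wall_clock = last_arrival_s + last_duration_s        # everything only finishes at ~39.9s
print(f"measured: {accepted / wall_clock:.1f} RPS")  # ~7.5 RPS instead of the expected 10
```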
For more details on benchmarking in general, we recommend the k6 documentation: https://grafana.com/docs/k6/latest/.
We selected a handful of scenarios to simplify the picture; they seem to accurately reflect a larger trend.
Small scenario: This scenario consists of the first 200 requests from the orca dataset being prompted to the model. The 200 requests total 8k tokens together and are representative of conversation starters. Prefix caching has very limited impact in this scenario, and we feel it’s a relatively balanced benchmark for simple use cases.
Long scenario: This scenario consists of 20 requests totalling 200k prompt tokens, which essentially ask for summaries of large chunks of text. In practice this is useful when you repeatedly feed large chunks of code, business data or documents to the model and ask simple questions about them (summarization, classification, or where to find some data). This scenario is the closest to what a lot of professional use cases seem to be doing, by including a lot of information in the prompt itself. Those very long conversations are the ones that benefit the most from our recent changes, since they enable ever larger prompts and ever faster caching.
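As an illustration (not the benchmark script itself, which is load_tests/long.js), a long-scenario style request against a local TGI instance through its OpenAI-compatible chat route could look like the sketch below; the file name, port and payload values are placeholders:

```python
# Sketch of a "long scenario" request: a huge shared context plus a short question.
# Endpoint, port and payload values are illustrative; the real benchmark uses load_tests/long.js.
import requests

with open("big_document.txt") as f:   # e.g. a large code base, report or document dump
    context = f.read()                # tens of thousands of tokens

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [
            {"role": "user", "content": context + "\n\nSummarize the document above."}
        ],
        "max_tokens": 200,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
# Asking a second question over the same context reuses the cached prefix,
# so only the new tokens at the end of the conversation need to be prefilled.
```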
We ran the benchmarks on three hardware setups:

L4: a single L4 (24GB), which represents small or even home compute capabilities. We tested meta-llama/Meta-Llama-3.1-8B-Instruct on it.

4xL4: a beefier deployment, usually used either to serve very large request volumes for 8B models (the ones under test) or to comfortably handle 30GB-class models. For this benchmark we tested meta-llama/Meta-Llama-3.1-8B-Instruct.

8xH100: one of the beefiest deployments possible. We tested meta-llama/Meta-Llama-3.1-70B-Instruct as it’s the most representative model of this size. Llama 3.3 wasn’t released at the time of benchmarking (it’s the exact same model, so it doesn’t make any difference).

The commands to run the benchmarks are as follows:
cd text-generation-inference/load_tests
make prepare_orca
python long.py
TGI: text-generation-launcher --model-id $MODEL_ID --num-shard $N --port 8000
(or docker variant)
vLLM: vllm serve $MODEL_ID --tensor-parallel-size $N --enable-prefix-caching
(or docker variant)
Small: MODEL_ID=$MODEL_ID HOST=localhost:8000 k6 run load_tests/common.js
Long: MODEL_ID=$MODEL_ID HOST=localhost:8000 k6 run load_tests/long.js
Our benchmarking results show significant performance gains, with a 13x speedup over vLLM with prefix caching, and up to 30x speedup without prefix caching. These results are consistent with our production data and demonstrate the effectiveness of our optimized LLM architecture.
Raw results
| 2nd run | Test | TGI v3 (time in s) | vLLM (s) | Number of requests |
| --- | --- | --- | --- | --- |
| Llama 3.1 8b | Small test - L4 - 8B | 17.5 | 19.9 | 200 |
| Llama 3.1 8b | Long test* - L4 - 8B | 53 | 57 | 10 |
| Llama 3.1 8b | Small test - 4xL4 - 8B | 4.8 | 6 | 200 |
| Llama 3.1 8b | Long test - 4xL4 - 8B | 3.2 | 12.5 | 20 |
| Llama 3.1 70b | Small test - 8xH100 - 70B | 6.2 | 7.4 | 200 |
| Llama 3.1 70b | Long test - 8xH100 - 70B | 2 | 27.5 | 20 |
| 1st run | Test | TGI (s) | vLLM (s) | Number of requests |
| --- | --- | --- | --- | --- |
| Llama 3.1 8b | Small test - L4 | 19.9 | 19.9 | 200 |
| Llama 3.1 8b | Long test (10) - L4 | 49.8 | 55 | 10 |
| Llama 3.1 8b | Small test - 4xL4 | 13 | 12.6 | 200 |
| Llama 3.1 8b | Long test - 4xL4 | 47 | 50.3 | 20 |
| Llama 3.1 70b | Small test - 8xH100 | 7.5 | 7.6 | 200 |
| Llama 3.1 70b | Long test - 8xH100 | 12.1 | 28.3 | 20 |
While our results are promising, there are some caveats to consider. If the kv-cache is too constrained, you can lower --max-total-tokens to reduce the impact of individual queries. You can also use more GPUs or larger GPUs in order to increase the size of the kv-cache.
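To get an intuition for how the kv-cache budget translates into prompt capacity, here is a rough, simplified estimate for llama 3.1-8B in f16 on a single 24GB L4. The numbers are illustrative only; it ignores activations, CUDA context and runtime overhead, which is why the practical figure is closer to the 30k tokens quoted above:

```python
# Rough kv-cache sizing for llama 3.1-8B in f16 (illustrative; not TGI's allocator).
layers, kv_heads, head_dim = 32, 8, 128                      # llama 3.1-8B uses grouped-query attention
kv_bytes_per_token = layers * kv_heads * head_dim * 2 * 2    # K and V, 2 bytes each (f16)
print(kv_bytes_per_token)                                    # 131072 bytes = 128 KiB per token

gpu_bytes = 24e9                                             # a single L4
weight_bytes = 16e9                                          # ~8B parameters in f16
kv_budget = gpu_bytes - weight_bytes                         # what's left for the kv-cache and everything else
print(int(kv_budget // kv_bytes_per_token))                  # ~61k tokens as a theoretical upper bound
```

Lowering --max-total-tokens bounds how much of that budget a single query can claim, which is what the caveat above is about.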
Our performance gains can be attributed to several key factors. Our new kernels, flashinfer and flashdecoding, offer improved performance at large prompt lengths and enable more efficient scheduling. We also removed the prompt logits calculation: logits for llama 3.1-8B take 25.6GB (= 100k tokens × 128k vocabulary × 2 bytes in f16; see the quick check below), which is more than the full model, which is 16GB. In general we do not need the logits of every prompt token, so we simply removed them and stopped exposing them to users by default. We think this is acceptable since they are mostly used by researchers. You can enable them again in your deployments by using the --enable-prefill-logprobs flag, but you will experience a reduced maximum prompt size.
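The arithmetic behind that 25.6GB figure, as a quick sanity check (nothing TGI-specific here):

```python
# One f16 logit per (prompt token, vocabulary entry) is what made prompt logits so expensive.
prompt_tokens = 100_000
vocab_size = 128_000       # llama 3.1 vocabulary
bytes_f16 = 2
print(prompt_tokens * vocab_size * bytes_f16 / 1e9)  # 25.6 GB, vs ~16GB for the model weights
```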
While we’ve made significant progress, there are still opportunities for improvement.
By sharing our benchmarking methodology, results, and technical insights, we aim to contribute to the ongoing development of more efficient and effective LLMs.