How fast is Llama-2-13b on Inferentia2? Let's find out!
For this benchmark we will use the following configurations:
| Model type     | batch_size | sequence_length |
|----------------|------------|-----------------|
| Llama2 13b BS1 | 1          | 4096            |
| Llama2 13b BS4 | 4          | 4096            |
| Llama2 13b BS8 | 8          | 4096            |
Note: all models are compiled to use all the cores available on the inf2.48xlarge instance.
Note: please refer to the Inferentia2 product page for details on the available instances.
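As a reference, the sketch below shows how one of these configurations could be compiled with optimum-neuron. It is a minimal sketch, assuming an inf2.48xlarge host with the Neuron SDK and optimum-neuron installed and access to the meta-llama/Llama-2-13b-chat-hf checkpoint; the output directory name is illustrative.

```python
# Minimal sketch: compile Llama-2-13b for Inferentia2 with optimum-neuron.
# Assumes an inf2.48xlarge host with the Neuron SDK and optimum[neuron] installed;
# the arguments mirror the "Llama2 13b BS1" configuration from the table above.
from optimum.neuron import NeuronModelForCausalLM

compiler_args = {"num_cores": 24, "auto_cast_type": "fp16"}  # use all available NeuronCores
input_shapes = {"batch_size": 1, "sequence_length": 4096}

model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",  # gated checkpoint, access required
    export=True,                       # trigger Neuron compilation at load time
    **compiler_args,
    **input_shapes,
)
model.save_pretrained("llama-2-13b-neuron-bs1")  # illustrative output path
```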
To evaluate the models, we generate tokens up to a total sequence length of 1024, starting from 256, 512, or 768 input tokens (i.e. we generate 768, 512, or 256 new tokens, respectively).
The encoding time, or time to first token, is the time required to process the input tokens and generate the first output token. It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens.
We test the encoding time for increasing context sizes: 256 input tokens corresponds roughly to a typical Q/A usage, while 768 is more typical of a Retrieval Augmented Generation (RAG) use-case.
Encoding time is expressed in seconds.
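For illustration, the time to first token can be approximated by timing the generation of a single new token from prompts of increasing length. This is a minimal sketch, not the exact benchmark harness; it assumes the model compiled above and uses a dummy prompt built from repeated BOS tokens.

```python
# Minimal sketch: measure encoding time (time to first token) for several context sizes.
import time

import torch
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model = NeuronModelForCausalLM.from_pretrained("llama-2-13b-neuron-bs1")  # compiled above
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

def encoding_time(input_tokens: int, batch_size: int = 1) -> float:
    # Dummy batch of `input_tokens` tokens per sequence; the content is irrelevant here.
    input_ids = torch.full((batch_size, input_tokens), tokenizer.bos_token_id, dtype=torch.long)
    attention_mask = torch.ones_like(input_ids)
    start = time.perf_counter()
    # Generating exactly one new token times prompt processing + the first output token.
    model.generate(input_ids=input_ids, attention_mask=attention_mask,
                   max_new_tokens=1, do_sample=False)
    return time.perf_counter() - start

for context in (256, 512, 768):
    print(f"{context} input tokens: {encoding_time(context):.2f} s")
```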
The end-to-end latency corresponds to the total time to reach a sequence length of 1024 tokens.
It therefore includes the encoding and generation time.
Latency is expressed in seconds.
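Similarly, a minimal sketch of the end-to-end measurement, reusing the model and tokenizer loaded above and forcing generation until the 1024-token total is reached:

```python
# Minimal sketch: end-to-end latency to reach a total sequence length of 1024 tokens.
# Reuses `model` and `tokenizer` from the previous snippet.
import time

import torch

def e2e_latency(input_tokens: int = 256, total_length: int = 1024, batch_size: int = 1) -> float:
    input_ids = torch.full((batch_size, input_tokens), tokenizer.bos_token_id, dtype=torch.long)
    attention_mask = torch.ones_like(input_ids)
    start = time.perf_counter()
    # min_length == max_length forces generation up to the full 1024 tokens (no early EOS stop).
    model.generate(input_ids=input_ids, attention_mask=attention_mask,
                   min_length=total_length, max_length=total_length, do_sample=False)
    return time.perf_counter() - start

print(f"End-to-end latency (256 -> 1024 tokens): {e2e_latency():.2f} s")
```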
We adopt the same convention as other benchmarks to evaluate the throughput: we divide the total number of tokens, both input and output, by the end-to-end latency.
In other words, we divide batch_size * sequence_length by the end-to-end latency to obtain the number of tokens per second.
Throughput is expressed in tokens/second.
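As a quick illustration of the convention (the numbers below are made up, not benchmark results):

```python
# Minimal sketch: throughput following the convention above,
# i.e. total tokens (input + output) divided by the end-to-end latency.
def throughput(batch_size: int, sequence_length: int, latency_s: float) -> float:
    return batch_size * sequence_length / latency_s

# Illustrative numbers only: a batch of 4 sequences of 1024 tokens completed in 20.0 seconds.
print(f"{throughput(4, 1024, 20.0):.1f} tokens/s")  # 204.8 tokens/s
```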