Lighteval allows you to use `vllm` as a backend, which provides great speedups.
To use it, simply change the `model_args` to reflect the arguments you want to pass to vLLM.
```bash
lighteval vllm \
    "pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=float16" \
    "leaderboard|truthfulqa:mc|0|0"
```
`vllm` is able to distribute the model across multiple GPUs using data parallelism, pipeline parallelism, or tensor parallelism.
You can choose the parallelism method by setting it in the `model_args`.
For example, if you have 4 GPUs, you can split the model across them using tensor parallelism:
```bash
export VLLM_WORKER_MULTIPROC_METHOD=spawn && lighteval vllm \
    "pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=float16,tensor_parallel_size=4" \
    "leaderboard|truthfulqa:mc|0|0"
```
Or, if your model fits on a single GPU, you can use data parallelism to speed up the evaluation:
```bash
lighteval vllm \
    "pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=float16,data_parallel_size=4" \
    "leaderboard|truthfulqa:mc|0|0"
```
The available arguments for the `vllm` backend can be found in the `VLLMModelConfig`.
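Several of these arguments can be combined in a single `model_args` string. The following is only an illustrative sketch: `revision` and `max_model_length` are assumed here to be accepted `VLLMModelConfig` keys, so check the config reference for the exact names.

```bash
# Sketch: revision and max_model_length are assumed VLLMModelConfig keys; verify against the config reference
lighteval vllm \
    "pretrained=HuggingFaceH4/zephyr-7b-beta,revision=main,dtype=float16,max_model_length=4096,tensor_parallel_size=2" \
    "leaderboard|truthfulqa:mc|0|0"
```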
In case of OOM issues, you might need to reduce the context size of the model as well as the `gpu_memory_utilisation` parameter.
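For example, a run with a smaller context window and a lower memory fraction might look like the sketch below, assuming `max_model_length` is the argument that controls the context size:

```bash
# Sketch for OOM mitigation: max_model_length is assumed to control the context size;
# gpu_memory_utilisation lowers the fraction of GPU memory reserved by vLLM
lighteval vllm \
    "pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=float16,max_model_length=2048,gpu_memory_utilisation=0.5" \
    "leaderboard|truthfulqa:mc|0|0"
```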