Quicktour

We recommend using the --help flag to get more information about the available options for each command:

lighteval --help

Lighteval can be used with a few different commands.

Accelerate

Evaluate a model on a GPU

To evaluate GPT-2 on the Truthful QA benchmark, run:

lighteval accelerate \
     "pretrained=gpt2" \
     "leaderboard|truthfulqa:mc|0|0"

Here, the second positional argument specifies the tasks to run: either a comma-separated list of supported tasks from the tasks_list, each in the format:

{suite}|{task}|{num_few_shot}|{0 or 1 to automatically reduce `num_few_shot` if prompt is too long}

or a file path like examples/tasks/recommended_set.txt which specifies multiple task configurations.
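For instance, several tasks can be passed inline or collected in a file, one specification per line (the file name and the second task below are illustrative; check the tasks_list for exact identifiers):

```shell
# Hypothetical tasks file: one {suite}|{task}|{num_few_shot}|{truncate} spec per line
cat > my_tasks.txt <<'EOF'
leaderboard|truthfulqa:mc|0|0
leaderboard|arc:challenge|25|0
EOF

# Evaluate several tasks at once, either inline...
lighteval accelerate \
    "pretrained=gpt2" \
    "leaderboard|truthfulqa:mc|0|0,leaderboard|arc:challenge|25|0"

# ...or via the file:
lighteval accelerate \
    "pretrained=gpt2" \
    my_tasks.txt
```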

Task details can be found in the files implementing them.

Evaluate a model on one or more GPUs

Data parallelism

To evaluate a model on one or more GPUs, first create a multi-GPU config by running:

accelerate config

You can then evaluate a model using data parallelism on 8 GPUs as follows:

accelerate launch --multi_gpu --num_processes=8 -m \
    lighteval accelerate \
    "pretrained=gpt2" \
    "leaderboard|truthfulqa:mc|0|0"

Here, the optional --override_batch_size flag defines the batch size per device, so the effective batch size is override_batch_size * num_gpus.
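As a sketch (the batch size value here is arbitrary), each of the 8 GPUs would process 4 examples per step, for an effective batch size of 32:

```shell
# 8 processes, batch size 4 per device -> effective batch size 8 * 4 = 32
accelerate launch --multi_gpu --num_processes=8 -m \
    lighteval accelerate \
    "pretrained=gpt2" \
    "leaderboard|truthfulqa:mc|0|0" \
    --override_batch_size 4
```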

Pipeline parallelism

To evaluate a model using pipeline parallelism on 2 or more GPUs, run:

lighteval accelerate \
    "pretrained=gpt2,model_parallel=True" \
    "leaderboard|truthfulqa:mc|0|0"

This will automatically use accelerate to distribute the model across the GPUs.

Both data and pipeline parallelism can be combined by setting model_parallel=True and using accelerate to distribute the data across the GPUs.
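For example, on an 8-GPU node, a plausible combination (process count assumed; adjust to your hardware) is 2-way pipeline parallelism inside each model replica with 4 data-parallel replicas launched by accelerate:

```shell
# 4 data-parallel processes, each sharding the model across 2 GPUs
accelerate launch --multi_gpu --num_processes=4 -m \
    lighteval accelerate \
    "pretrained=gpt2,model_parallel=True" \
    "leaderboard|truthfulqa:mc|0|0"
```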

Model Arguments

The model-args argument takes a string representing a list of model arguments. The allowed arguments vary depending on the backend you use (vllm or accelerate).
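The string is a comma-separated list of key=value pairs. A sketch for the accelerate backend (the extra argument names here are assumptions; check `lighteval accelerate --help` for the supported keys):

```shell
# Hypothetical argument string: pretrained model, revision, and dtype
lighteval accelerate \
    "pretrained=gpt2,revision=main,dtype=float16" \
    "leaderboard|truthfulqa:mc|0|0"
```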

Accelerate

VLLM

Nanotron

To evaluate a model trained with nanotron on a single GPU, run:

Nanotron models cannot be evaluated without torchrun.

torchrun --standalone --nnodes=1 --nproc-per-node=1 \
    src/lighteval/__main__.py nanotron \
    --checkpoint-config-path ../nanotron/checkpoints/10/config.yaml \
    --lighteval-config-path examples/nanotron/lighteval_config_override_template.yaml

The nproc-per-node argument should match the data, tensor, and pipeline parallelism configured in the lighteval_config_override_template.yaml file. That is: nproc-per-node = data_parallelism * tensor_parallelism * pipeline_parallelism.
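For example, with 2-way data, tensor, and pipeline parallelism (values chosen for illustration), torchrun needs 2 * 2 * 2 = 8 processes:

```shell
# nproc-per-node = data_parallelism * tensor_parallelism * pipeline_parallelism
dp=2; tp=2; pp=2
echo $((dp * tp * pp))   # -> 8
```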
