Litellm as backend

Lighteval lets you use LiteLLM, a backend that allows you to call all LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.).

Documentation for the available APIs and compatible endpoints can be found in the LiteLLM documentation: https://docs.litellm.ai/docs/providers.

Quick use

lighteval endpoint litellm \
    "gpt-3.5-turbo" \
    "lighteval|gsm8k|0|0"

Using a config file

LiteLLM allows generation with any OpenAI-compatible endpoint. For example, you can evaluate a model running on a local vLLM server.
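
As an illustration, assuming vLLM is installed, a local OpenAI-compatible server can be started with a command like the following (a sketch; the model name and port are just examples):

vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --port 8000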

To point Lighteval at such an endpoint, you will need to use a config file like the following:

model:
  base_params:
    # the "openai/" prefix tells LiteLLM to treat this as an OpenAI-compatible endpoint
    model_name: "openai/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
    base_url: "URL OF THE ENDPOINT YOU WANT TO USE"
    api_key: "" # remove or keep empty if the endpoint does not require a key
  generation:
    temperature: 0.5
    max_new_tokens: 256
    stop_tokens: [""]
    top_p: 0.9
    seed: 0
    repetition_penalty: 1.0
    frequency_penalty: 0.0
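
You can then run the evaluation by passing the path to this config file in place of the model name (a sketch; the file path is an example):

lighteval endpoint litellm \
    path/to/litellm_model.yaml \
    "lighteval|gsm8k|0|0"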