Now that we have a finetuned model, produced either with full SFT or with LoRA SFT, we should evaluate it on standard benchmarks.
Automatic benchmarks serve as standardized tools for evaluating language models across different tasks and capabilities. While they provide a useful starting point for understanding model performance, it’s important to recognize that they represent only one piece of a comprehensive evaluation strategy.
Automatic benchmarks typically consist of curated datasets with predefined tasks and evaluation metrics. These benchmarks aim to assess various aspects of model capability, from basic language understanding to complex reasoning. The key advantage of using automatic benchmarks is their standardization - they allow for consistent comparison across different models and provide reproducible results.
However, it’s crucial to understand that benchmark performance doesn’t always translate directly to real-world effectiveness. A model that excels at academic benchmarks may still struggle with specific domain applications or practical use cases.
MMLU (Massive Multitask Language Understanding) tests knowledge across 57 subjects, from science to humanities. While comprehensive, it may not reflect the depth of expertise needed for specific domains. TruthfulQA evaluates a model’s tendency to reproduce common misconceptions, though it can’t capture all forms of misinformation.
BBH (Big Bench Hard) and GSM8K focus on complex reasoning tasks. BBH tests logical thinking and planning, while GSM8K specifically targets mathematical problem-solving. These benchmarks help assess analytical capabilities but may not capture the nuanced reasoning required in real-world scenarios.
HELM provides a holistic evaluation framework, while WinoGrande tests common sense through pronoun disambiguation. These benchmarks offer insights into language processing capabilities but may not fully represent the complexity of natural conversation or domain-specific terminology.
Many organizations have developed alternative evaluation methods to address the limitations of standard benchmarks:
Using one language model to evaluate another’s outputs has become increasingly popular. This approach can provide more nuanced feedback than traditional metrics, though it comes with its own biases and limitations.
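As a rough illustration, an LLM-as-judge loop can be sketched in a few lines. The judge model, prompt template, and 1-5 rubric below are illustrative assumptions, not part of any particular evaluation library:

```python
# Minimal LLM-as-judge sketch. The judge model and rating rubric are
# illustrative choices; swap in whatever judge and criteria fit your use case.
from transformers import pipeline

judge = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

JUDGE_PROMPT = (
    "You are grading an answer to a question.\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Rate the answer from 1 (wrong) to 5 (fully correct) and briefly justify the score.\n"
    "Rating:"
)

def judge_answer(question: str, answer: str) -> str:
    prompt = JUDGE_PROMPT.format(question=question, answer=answer)
    output = judge(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
    return output[len(prompt):].strip()  # keep only the judge's verdict

print(judge_answer("Which organ produces insulin?", "The pancreas."))
```

Because the judge is itself a language model, its scores inherit its own biases, so they are best treated as one signal alongside other metrics.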
Platforms like the LMSYS Chatbot Arena pit models against each other in head-to-head comparisons, with human voters ranking the responses. This can reveal strengths and weaknesses that might not be apparent in traditional benchmarks.
Organizations often develop internal benchmark suites tailored to their specific needs and use cases. These might include domain-specific knowledge tests or evaluation scenarios that mirror actual deployment conditions.
While standard benchmarks provide a useful baseline, they shouldn’t be your only evaluation method. Here’s how to develop a more comprehensive approach:
Start with relevant standard benchmarks to establish a baseline and enable comparison with other models.
Identify the specific requirements and challenges of your use case. What tasks will your model actually perform? What kinds of errors would be most problematic?
Develop custom evaluation datasets that reflect your actual use case. These might include real user queries from your domain, common edge cases you have encountered, and examples of particularly challenging scenarios, as in the sketch below.
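A custom evaluation set can start out as simple as a list of question-answer pairs plus a scoring function. The questions, answers, and lenient containment metric below are purely illustrative:

```python
# Illustrative domain evaluation set; the items and the metric are made-up
# examples, not a real benchmark.
domain_eval_set = [
    {"question": "Which vitamin deficiency causes scurvy?", "answer": "vitamin c"},
    {"question": "Which organ produces insulin?", "answer": "pancreas"},
]

def answer_accuracy(generate, dataset):
    """`generate` is any callable that maps a question string to a model answer."""
    hits = 0
    for example in dataset:
        prediction = generate(example["question"]).strip().lower()
        # count a hit when the reference answer appears in the model output
        hits += example["answer"] in prediction
    return hits / len(dataset)
```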
Consider implementing a multi-layered evaluation strategy that combines automated metrics for quick feedback, human review for nuanced judgments, and domain-expert evaluation for specialized applications.
In this section, we will implement evaluation for our finetuned model. We can use lighteval, which ships with a wide range of built-in tasks, to evaluate the model on standard benchmarks. We just need to define the tasks we want to run and the parameters for the evaluation.
LightEval tasks are defined using a specific format:
{suite}|{task}|{num_few_shot}|{auto_reduce}
| Parameter | Description |
|---|---|
| `suite` | The benchmark suite (e.g., ‘mmlu’, ‘truthfulqa’) |
| `task` | Specific task within the suite (e.g., ‘abstract_algebra’) |
| `num_few_shot` | Number of examples to include in the prompt (0 for zero-shot) |
| `auto_reduce` | Whether to automatically reduce few-shot examples if the prompt is too long (0 or 1) |
Example: "mmlu|abstract_algebra|0|0"
evaluates on MMLU’s abstract algebra task with zero-shot inference.
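For instance, varying the last two fields changes the prompting behavior (these task strings are illustrative):

```
mmlu|abstract_algebra|0|0   # zero-shot
mmlu|abstract_algebra|5|0   # 5-shot, keep all examples even if the prompt gets long
mmlu|abstract_algebra|5|1   # 5-shot, automatically reduce examples if the prompt is too long
```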
Let’s set up an evaluation pipeline for our finetuned model. We will evaluate the model on a set of subtasks related to the domain of medicine.
Here’s a complete example of evaluating on automatic benchmarks relevant to a specific domain using Lighteval with the vLLM backend:
lighteval vllm \
    "pretrained=your-model-name" \
    "mmlu|anatomy|0|0,mmlu|high_school_biology|0|0,mmlu|high_school_chemistry|0|0,mmlu|professional_medicine|0|0" \
    --max-samples 40 \
    --output-dir "./results" \
    --save-details
Results are displayed in a tabular format showing:
| Task |Version|Metric|Value | |Stderr|
|----------------------------------------|------:|------|-----:|---|-----:|
|all | |acc |0.3333|± |0.1169|
|leaderboard:mmlu:_average:5 | |acc |0.3400|± |0.1121|
|leaderboard:mmlu:anatomy:5 | 0|acc |0.4500|± |0.1141|
|leaderboard:mmlu:high_school_biology:5 | 0|acc |0.1500|± |0.0819|
Lighteval also includes a Python API for more detailed evaluation tasks, which is useful for manipulating results in a more flexible way. Check out the Lighteval documentation for more information.
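Even without the Python API, you can inspect the saved results directly. The sketch below assumes the default output layout (a `results_*.json` file under the output directory with a top-level `"results"` mapping of task name to metric dict); adjust the path and metric keys to the lighteval version you are using:

```python
# Hedged sketch: load the most recent results file and print per-task accuracy.
# The file layout and metric keys are assumptions that may vary by version.
import glob
import json

result_files = sorted(glob.glob("./results/**/results_*.json", recursive=True))
with open(result_files[-1]) as f:  # most recent run
    data = json.load(f)

for task, metrics in data["results"].items():
    accuracy = metrics.get("acc", metrics.get("acc_norm"))
    print(f"{task:45s} acc={accuracy}")
```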
✏️ Try it out! Evaluate your finetuned model on a specific task in lighteval.