With the plethora of large language models (LLMs) and chatbots being released week after week, often with grandiose performance claims, it can be hard to separate the genuine progress being made by the open-source community from the noise, and to identify which model is the current state of the art.
We wrote a release blog here to explain why we introduced this leaderboard!
Tasks
We evaluate models on 6 key benchmarks using the EleutherAI Language Model Evaluation Harness, a unified framework to test generative language models on a large number of different evaluation tasks.
IFEval (https://arxiv.org/abs/2311.07911) – IFEval is a dataset designed to test a model's ability to follow explicit instructions, such as "include keyword x" or "use format y." The focus is on the model's adherence to formatting instructions rather than the content generated, allowing for the use of strict and rigorous metrics.
BBH (Big Bench Hard) (https://arxiv.org/abs/2210.09261) – A subset of 23 challenging tasks from the BigBench dataset to evaluate language models. The tasks use objective metrics, are highly difficult, and have sufficient sample sizes for statistical significance. They include multistep arithmetic, algorithmic reasoning (e.g., boolean expressions, SVG shapes), language understanding (e.g., sarcasm detection, name disambiguation), and world knowledge. BBH performance correlates well with human preferences, providing valuable insights into model capabilities.
MATH (https://arxiv.org/abs/2103.03874) – MATH is a compilation of high-school-level competition problems gathered from several sources, formatted consistently using LaTeX for equations and Asymptote for figures. Generations must fit a very specific output format (see the answer-extraction sketch just after this list). We keep only level 5 MATH questions and call it MATH Lvl 5.
GPQA (Graduate-Level Google-Proof Q&A Benchmark) (https://arxiv.org/abs/2311.12022) – GPQA is a highly challenging knowledge dataset with questions crafted by PhD-level domain experts in fields like biology, physics, and chemistry. These questions are designed to be difficult for laypersons but relatively easy for experts. The dataset has undergone multiple rounds of validation to ensure both difficulty and factual accuracy. Access to GPQA is restricted through gating mechanisms to minimize the risk of data contamination. Consequently, we do not provide plain text examples from this dataset, as requested by the authors.
MuSR (Multistep Soft Reasoning) (https://arxiv.org/abs/2310.16049) – MuSR is a new dataset consisting of algorithmically generated complex problems, each around 1,000 words in length. The problems include murder mysteries, object placement questions, and team allocation optimizations. Solving these problems requires models to integrate reasoning with long-range context parsing. Few models achieve better than random performance on this dataset.
MMLU-PRO (Massive Multitask Language Understanding - Professional) (https://arxiv.org/abs/2406.01574) – MMLU-Pro is a refined version of the MMLU dataset, which has been a standard for multiple-choice knowledge assessment. Recent research identified issues with the original MMLU, such as noisy data (some unanswerable questions) and decreasing difficulty due to advances in model capabilities and increased data contamination. MMLU-Pro addresses these issues by presenting models with 10 choices instead of 4, requiring reasoning on more questions, and undergoing expert review to reduce noise. As a result, MMLU-Pro is of higher quality and currently more challenging than the original.
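To make the MATH output-format constraint concrete, here is a minimal sketch of pulling a final answer out of a generation, assuming the answer is wrapped in LaTeX \boxed{...} as in the original MATH solutions. The function name and the brace-matching logic are illustrative assumptions; the harness applies its own extraction and normalization rules.

def extract_boxed_answer(generation: str):
    """Return the contents of the last \\boxed{...} in a generation (sketch only)."""
    start = generation.rfind(r"\boxed{")
    if start == -1:
        return None
    i = start + len(r"\boxed{")
    depth = 1
    answer = []
    while i < len(generation):
        ch = generation[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        answer.append(ch)
        i += 1
    return "".join(answer) if depth == 0 else None

print(extract_boxed_answer(r"... so the result is \boxed{\frac{1}{2}}."))  # prints \frac{1}{2}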
For all these evaluations, a higher score is better. We chose these benchmarks because they test reasoning and general knowledge across a wide variety of fields, in 0-shot and few-shot settings.
If a model's name contains "Flagged", this indicates it has been flagged by the community and should probably be ignored! Clicking the link will redirect you to the discussion about the model.
Reproducibility
To reproduce our results, you can use our fork of lm_eval, as not all of our PRs have been merged into the upstream harness yet.
git clone git@github.com:huggingface/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout main
pip install -e .
lm-eval --model_args="pretrained=<your_model>,revision=<your_model_revision>,dtype=<model_dtype>" --tasks=leaderboard --batch_size=auto --output_path=<output_path>
Attention: For instruction-tuned models, add the --apply_chat_template and --fewshot_as_multiturn options.
Note: You can expect results to vary slightly for different batch sizes because of padding.
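Once the command above has finished, the scores land in a JSON file under the chosen output path. A minimal sketch for inspecting them, assuming the harness wrote a results_*.json file somewhere below <output_path> (the exact directory layout can vary between harness versions):

import glob
import json

# Find the most recent results file written by lm-eval under the output path.
results_files = sorted(glob.glob("<output_path>/**/results_*.json", recursive=True))
with open(results_files[-1]) as f:
    data = json.load(f)

# Print the metrics for every leaderboard task.
for task, metrics in data["results"].items():
    if task.startswith("leaderboard"):
        print(task, metrics)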
Task Evaluations and Parameters
IFEval:
Task: âIFEvalâ
Measure: Strict Accuracy at Instance and Prompt Levels (inst_level_strict_acc,none and prompt_level_strict_acc,none)
Shots: 0-shot for both Instance-Level Strict Accuracy and Prompt-Level Strict Accuracy
num_choices: 0 for both Strict Accuracy at Instance and Prompt Levels.
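A minimal sketch of how the two strict-accuracy measures above relate, assuming each prompt comes with a list of per-instruction pass/fail booleans from IFEval's verifiers (the variable names are illustrative, not the harness API):

# Per prompt, whether each explicit instruction was followed (illustrative data).
follows = [
    [True, True],          # prompt 1: both instructions satisfied
    [True, False, True],   # prompt 2: one instruction missed
    [False],               # prompt 3: instruction missed
]

# Prompt level: a prompt only counts if every instruction in it is satisfied.
prompt_level_strict_acc = sum(all(p) for p in follows) / len(follows)

# Instance level: each instruction counts on its own, across all prompts.
inst_level_strict_acc = sum(sum(p) for p in follows) / sum(len(p) for p in follows)

print(prompt_level_strict_acc)  # 1/3 ≈ 0.33
print(inst_level_strict_acc)    # 4/6 ≈ 0.67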
Big Bench Hard (BBH):
Overview Task: âBBHâ
Shots: 3-shot for each subtask
Measure: Normalized Accuracy across all subtasks (acc_norm,none)
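For illustration, a minimal sketch of combining the per-subtask normalized accuracies into a single BBH number with an unweighted mean; the subtask names and scores below are made up, and the leaderboard's own aggregation and baseline normalization may differ:

# Made-up per-subtask acc_norm values keyed by harness task name (illustrative).
subtask_scores = {
    "leaderboard_bbh_boolean_expressions": 0.81,
    "leaderboard_bbh_causal_judgement": 0.57,
    "leaderboard_bbh_date_understanding": 0.64,
}

bbh_average = sum(subtask_scores.values()) / len(subtask_scores)
print(f"BBH average acc_norm: {bbh_average:.3f}")  # 0.673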