# llama-r

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 25.64 |
| ARC (25-shot)         | 21.59 |
| HellaSwag (10-shot)   | 30.18 |
| MMLU (5-shot)         | 26.13 |
| TruthfulQA (0-shot)   | 45.38 |
| Winogrande (5-shot)   | 52.17 |
| GSM8K (5-shot)        | 0.61  |
| DROP (3-shot)         | 3.46  |
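
As a sanity check, the Avg. row appears to be the simple arithmetic mean of the seven benchmark scores. A minimal sketch, using the rounded values from the table above:

```python
# Recompute the average from the per-benchmark scores in the table.
# Note: the mean of these rounded values is 25.65, slightly off from the
# reported 25.64, presumably because the leaderboard averages the
# unrounded per-task scores.
scores = {
    "ARC (25-shot)": 21.59,
    "HellaSwag (10-shot)": 30.18,
    "MMLU (5-shot)": 26.13,
    "TruthfulQA (0-shot)": 45.38,
    "Winogrande (5-shot)": 52.17,
    "GSM8K (5-shot)": 0.61,
    "DROP (3-shot)": 3.46,
}

average = sum(scores.values()) / len(scores)
print(f"Mean of rounded scores: {average:.2f}")  # 25.65
```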