# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 50.97 |
| ARC (25-shot) | 62.03 |
| HellaSwag (10-shot) | 83.8 |
| MMLU (5-shot) | 58.39 |
| TruthfulQA (0-shot) | 49.92 |
| Winogrande (5-shot) | 77.27 |
| GSM8K (5-shot) | 12.43 |
| DROP (3-shot) | 12.96 |
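The reported average can be reproduced directly: it is the unweighted mean of the seven benchmark scores listed above. A minimal check:

```python
# Unweighted mean of the seven Open LLM Leaderboard scores reported above.
scores = {
    "ARC (25-shot)": 62.03,
    "HellaSwag (10-shot)": 83.8,
    "MMLU (5-shot)": 58.39,
    "TruthfulQA (0-shot)": 49.92,
    "Winogrande (5-shot)": 77.27,
    "GSM8K (5-shot)": 12.43,
    "DROP (3-shot)": 12.96,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 50.97, matching the "Avg." row
```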