Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr.
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
README.md CHANGED
@@ -236,3 +236,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot)              |84.85|
 |GSM8k (5-shot)                   |71.34|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-4x7b-v4)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |76.23|
+|AI2 Reasoning Challenge (25-Shot)|72.53|
+|HellaSwag (10-Shot)              |88.85|
+|MMLU (5-Shot)                    |64.53|
+|TruthfulQA (0-shot)              |75.30|
+|Winogrande (5-shot)              |84.85|
+|GSM8k (5-shot)                   |71.34|
+
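For readers who want more than the summary table, the "Detailed results" link added above points at a per-task details dataset on the Hub. Below is a minimal sketch of pulling one task's results with the `datasets` library; the config name (`harness_gsm8k_5`) and split (`latest`) are assumptions based on how other Open LLM Leaderboard details repositories are laid out, not something this PR specifies, so check the dataset card for the exact names.

```python
# Sketch: inspect the detailed GSM8k results behind the summary table above.
# The config and split names are assumptions (typical leaderboard details layout);
# verify them on the dataset page if loading fails.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-4x7b-v4",
    "harness_gsm8k_5",  # assumed config: one per benchmark / shot setting
    split="latest",     # assumed split: details repos usually keep dated runs plus "latest"
)

# Columns typically include the prompt, model prediction, and per-example metrics.
print(details.column_names)
print(details[0])
```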