Adding Evaluation Results #2
opened by leaderboard-pr-bot

README.md CHANGED
@@ -124,4 +124,17 @@ print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
 
 ## Citation
 
-If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
+If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RWKV__rwkv-4-3b-pile)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 31.0  |
+| ARC (25-shot)         | 36.01 |
+| HellaSwag (10-shot)   | 59.66 |
+| MMLU (5-shot)         | 24.67 |
+| TruthfulQA (0-shot)   | 32.14 |
+| Winogrande (5-shot)   | 58.33 |
+| GSM8K (5-shot)        | 0.68  |
+| DROP (3-shot)         | 5.52  |
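
For readers skimming the diff: the `Avg.` row in the added table is the unweighted mean of the seven benchmark scores. A minimal sanity check, using only the values from the table above (not part of the PR):

```python
# Verify that the "Avg." row equals the unweighted mean of the
# seven benchmark scores added in this PR.
scores = {
    "ARC (25-shot)": 36.01,
    "HellaSwag (10-shot)": 59.66,
    "MMLU (5-shot)": 24.67,
    "TruthfulQA (0-shot)": 32.14,
    "Winogrande (5-shot)": 58.33,
    "GSM8K (5-shot)": 0.68,
    "DROP (3-shot)": 5.52,
}
print(round(sum(scores.values()) / len(scores), 1))  # -> 31.0
```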
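
The per-task outputs behind these summary numbers live in the linked details dataset. A minimal sketch for seeing what that repo contains via `huggingface_hub` (not part of the PR; only the repo id comes from the link above, and the file layout noted in the comment is an assumption):

```python
# List the files in the details dataset referenced in the PR.
# The repo id is taken from the "Detailed results" link; the exact
# per-benchmark file layout is assumed, so inspect the output.
from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files(
    "open-llm-leaderboard/details_RWKV__rwkv-4-3b-pile",
    repo_type="dataset",
)
for f in files:
    print(f)
```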