legolasyiu committed on
Commit 68cb9d9 · verified · 1 Parent(s): a826306

Adding Evaluation Results (#2)


- Adding Evaluation Results (bfe47edc6568fcd7015c2fb970fdcc335ef3186d)

Files changed (1)
  1. README.md +113 -5
README.md CHANGED
@@ -1,16 +1,111 @@
 ---
-base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
+language:
+- en
+license: llama3.2
 tags:
 - text-generation-inference
 - transformers
 - unsloth
 - llama
 - trl
-license: llama3.2
-language:
-- en
+base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
 datasets:
 - openai/gsm8k
+model-index:
+- name: ReasoningCore-3B-0
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 73.41
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 22.17
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 15.86
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.02
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 2.56
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 24.14
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/ReasoningCore-3B-0
+      name: Open LLM Leaderboard
 ---
 
 A better version is available: [ReasoningCore-3B-RE1-V2](https://huggingface.co/EpistemeAI/ReasoningCore-3B-RE1-V2)
@@ -177,4 +272,17 @@ For further details, questions, or feedback, please email [email protected]
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/EpistemeAI__ReasoningCore-3B-0-details)
+
+| Metric             |Value|
+|--------------------|----:|
+|Avg.                |23.53|
+|IFEval (0-Shot)     |73.41|
+|BBH (3-Shot)        |22.17|
+|MATH Lvl 5 (4-Shot) |15.86|
+|GPQA (0-shot)       | 3.02|
+|MuSR (0-shot)       | 2.56|
+|MMLU-PRO (5-shot)   |24.14|
+
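As a sanity check on the metrics table this commit adds, the leaderboard's `Avg.` row is the arithmetic mean of the six benchmark scores. A minimal, stdlib-only sketch that parses the table and verifies this (the `parse_metrics` helper is illustrative, not part of the repository):

```python
import re

# The metrics table added to the README by this commit, copied verbatim.
TABLE = """
| Metric             |Value|
|--------------------|----:|
|Avg.                |23.53|
|IFEval (0-Shot)     |73.41|
|BBH (3-Shot)        |22.17|
|MATH Lvl 5 (4-Shot) |15.86|
|GPQA (0-shot)       | 3.02|
|MuSR (0-shot)       | 2.56|
|MMLU-PRO (5-shot)   |24.14|
"""

def parse_metrics(table: str) -> dict[str, float]:
    """Parse a two-column markdown metrics table into {metric: value}."""
    metrics = {}
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) != 2:
            continue
        name, value = cells
        # Skip the header row and the |----:| alignment separator row.
        if name == "Metric" or set(name) <= {"-", ":"}:
            continue
        metrics[name] = float(value)
    return metrics

scores = parse_metrics(TABLE)
benchmarks = {k: v for k, v in scores.items() if k != "Avg."}
# Avg. is the mean of the six per-benchmark scores, rounded to 2 decimals.
assert round(sum(benchmarks.values()) / len(benchmarks), 2) == scores["Avg."]
```

Running this confirms the reported average (141.16 / 6 ≈ 23.53) is internally consistent with the six individual scores.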