prithivMLmods committed 2e84ead (verified) · 1 Parent(s): b677094

Adding Evaluation Results


This is an automated PR created with [this space](https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard).

It adds evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions
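For readers who want these numbers programmatically rather than from the rendered card, here is a minimal sketch using the `huggingface_hub` Python client to read back the `model-index` metadata that this PR adds. The client and its `ModelCard.load` API are real; the repo id is the model this PR targets, and the loop structure simply mirrors the YAML shown in the diff below.

```python
# Sketch: read the model-index metadata this PR adds to the card.
# Assumes `pip install huggingface_hub`; network access is required.
from huggingface_hub import ModelCard

card = ModelCard.load("prithivMLmods/Bellatrix-1.5B-xElite")
meta = card.data.to_dict()

# model-index holds one results entry per (task, dataset) pair.
for entry in meta.get("model-index", []):
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result.get("metrics", []):
            label = metric.get("name", metric["type"])
            print(f"{dataset}: {label} = {metric['value']}")
```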

Files changed (1): README.md (+114 −1)
README.md CHANGED
```diff
@@ -6,6 +6,105 @@ library_name: transformers
 base_model:
 - Qwen/Qwen2.5-1.5B-Instruct
 pipeline_tag: text-generation
+model-index:
+- name: Bellatrix-1.5B-xElite
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 19.64
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 9.49
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 12.61
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.8
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.44
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 7.3
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBellatrix-1.5B-xElite
+      name: Open LLM Leaderboard
 ---
 <pre align="center">
  ____ ____ __ __ __ ____ ____ ____ _ _
@@ -105,4 +204,18 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 - May inadvertently generate inappropriate or harmful content if safeguards are not applied, particularly in sensitive applications.
 
 9. **Real-Time Usability:**
-- Latency in inference time could limit its effectiveness in real-time applications or when scaling to large user bases.
+- Latency in inference time could limit its effectiveness in real-time applications or when scaling to large user bases.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Bellatrix-1.5B-xElite-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FBellatrix-1.5B-xElite&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric             |Value (%)|
+|-------------------|--------:|
+|**Average**        |     9.55|
+|IFEval (0-Shot)    |    19.64|
+|BBH (3-Shot)       |     9.49|
+|MATH Lvl 5 (4-Shot)|    12.61|
+|GPQA (0-shot)      |     3.80|
+|MuSR (0-shot)      |     4.44|
+|MMLU-PRO (5-shot)  |     7.30|
+
```
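As a sanity check on the table above, the **Average** row is consistent with the unweighted mean of the six benchmark scores. A quick sketch, using only the values from the diff:

```python
# Quick check: the "Average" row matches the unweighted mean
# of the six benchmark scores reported in the table above.
scores = {
    "IFEval (0-Shot)": 19.64,
    "BBH (3-Shot)": 9.49,
    "MATH Lvl 5 (4-Shot)": 12.61,
    "GPQA (0-shot)": 3.80,
    "MuSR (0-shot)": 4.44,
    "MMLU-PRO (5-shot)": 7.30,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # -> 9.55
```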