Tags: Transformers · GGUF · English · llama · text-generation-inference · unsloth · Eval Results · Inference Endpoints · conversational
Quazim0t0 committed (verified) · commit 1dff65d · 1 parent: c2b7ff5

Update README.md

Files changed (1): README.md (+112 −4)
README.md CHANGED
@@ -1,17 +1,112 @@
 ---
-base_model: unsloth/phi-4-unsloth-bnb-4bit
+language:
+- en
+license: apache-2.0
 tags:
 - text-generation-inference
 - transformers
 - unsloth
 - llama
 - gguf
-license: apache-2.0
-language:
-- en
+base_model: unsloth/phi-4-unsloth-bnb-4bit
 datasets:
 - Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B
 - ServiceNow-AI/R1-Distill-SFT
+model-index:
+- name: ThinkPhi1.1-Tensors
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 39.08
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 49.14
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 0.0
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 6.49
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.28
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.42
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Quazim0t0/ThinkPhi1.1-Tensors
+      name: Open LLM Leaderboard
 ---
 
 # Uploaded model
@@ -24,3 +119,16 @@ datasets:
 
 
 If using this model for Open WebUI, here is a simple function to organize the model's responses: https://openwebui.com/f/quaz93/phithink/
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Quazim0t0__ThinkPhi1.1-Tensors-details).
+
+| Metric              | Value |
+|---------------------|------:|
+| Avg.                | 24.90 |
+| IFEval (0-Shot)     | 39.08 |
+| BBH (3-Shot)        | 49.14 |
+| MATH Lvl 5 (4-Shot) |  0.00 |
+| GPQA (0-shot)       |  6.49 |
+| MuSR (0-shot)       | 11.28 |
+| MMLU-PRO (5-shot)   | 43.42 |
+
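The `Avg.` row in the added table is the arithmetic mean of the six benchmark scores. A minimal sketch that recomputes it from the values in this commit:

```python
# Recompute the Open LLM Leaderboard average from the six scores added above.
scores = {
    "IFEval (0-Shot)": 39.08,
    "BBH (3-Shot)": 49.14,
    "MATH Lvl 5 (4-Shot)": 0.00,
    "GPQA (0-shot)": 6.49,
    "MuSR (0-shot)": 11.28,
    "MMLU-PRO (5-shot)": 43.42,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints: Avg. = 24.90
```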
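Since the card is tagged `gguf`, the quantized weights can also be run locally (for example behind the Open WebUI setup linked above). A minimal sketch using llama-cpp-python; the GGUF filename here is a placeholder, not a name confirmed by this repo:

```python
# Local inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename below is hypothetical; substitute the actual GGUF file from the repo.
from llama_cpp import Llama

llm = Llama(model_path="ThinkPhi1.1-Tensors.Q4_K_M.gguf", n_ctx=4096)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain step by step: what is 17 * 24?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```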