Serlop committed on
Commit fff0e58 · verified · 1 Parent(s): 8162ce1

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +157 -0

README.md ADDED
@@ -0,0 +1,157 @@
---
language:
- en
- fr
- es
- pt
license: other
library_name: transformers
tags:
- mergekit
- merge
- falcon3
- llama-cpp
- gguf-my-repo
base_model: suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
model-index:
- name: Falcon3-Jessi-v0.4-7B-Slerp
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 76.76
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 37.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 34.59
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.28
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 20.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 34.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp
      name: Open LLM Leaderboard
---

# Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF
This model was converted to GGUF format from [`suayptalha/Falcon3-Jessi-v0.4-7B-Slerp`](https://huggingface.co/suayptalha/Falcon3-Jessi-v0.4-7B-Slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/suayptalha/Falcon3-Jessi-v0.4-7B-Slerp) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF --hf-file falcon3-jessi-v0.4-7b-slerp-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF --hf-file falcon3-jessi-v0.4-7b-slerp-q6_k.gguf -c 2048
```
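
Once the server is up, it speaks HTTP; a minimal request sketch, assuming the default host and port (`localhost:8080`) and the OpenAI-compatible chat endpoint that recent llama.cpp builds expose:

```shell
# Query the running llama-server via its OpenAI-compatible API.
# Assumes defaults: localhost, port 8080, endpoint /v1/chat/completions.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 64
  }'
```

The response is a JSON object whose generated text lives under `choices[0].message.content`.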

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
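
Recent llama.cpp versions have migrated the build system from Make to CMake, so if `make` fails on a current checkout, a CMake build with curl support enabled looks roughly like this (a sketch; flag names reflect the current upstream build options):

```shell
# From inside the llama.cpp directory: configure with curl support, then build.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
# Binaries (llama-cli, llama-server, ...) land in build/bin/
```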

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF --hf-file falcon3-jessi-v0.4-7b-slerp-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF --hf-file falcon3-jessi-v0.4-7b-slerp-q6_k.gguf -c 2048
```
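
If you prefer Python, the same GGUF file can be loaded with the third-party `llama-cpp-python` bindings (`pip install llama-cpp-python`). A minimal sketch; note that the first call downloads the Q6_K file (several GB) from this repo via `huggingface_hub`:

```python
from llama_cpp import Llama

# Download (on first use) and load the quantized model from this repo.
llm = Llama.from_pretrained(
    repo_id="Serlop/Falcon3-Jessi-v0.4-7B-Slerp-Q6_K-GGUF",
    filename="falcon3-jessi-v0.4-7b-slerp-q6_k.gguf",
    n_ctx=2048,  # context length, matching the llama-server example above
)

# Plain text completion, same prompt as the CLI example.
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```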