smalinin committed
Commit 2b067de · 1 Parent(s): 57852b7

Add model files

This view is limited to 50 files because it contains too many changes. See raw diff.
Llama-3.1-Storm-8B_q4f32_1-webgpu.wasm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fa0007c9d50252f3132138cf415301c20129291f9420e9fae44e61d6670304a
+ size 4786825
README.md CHANGED
@@ -1,3 +1,470 @@
- ---
- license: llama3.1
- ---
+ ---
+ language:
+ - en
+ - de
+ - fr
+ - it
+ - pt
+ - hi
+ - es
+ - th
+ license: llama3.1
+ library_name: transformers
+ tags:
+ - llama-3.1
+ - conversational
+ - instruction following
+ - reasoning
+ - function calling
+ - mergekit
+ - finetuning
+ - axolotl
+ pipeline_tag: text-generation
+ model-index:
+ - name: Llama-3.1-Storm-8B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 80.51
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 31.49
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 16.62
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 10.18
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 9.12
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 31.15
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B
+       name: Open LLM Leaderboard
+ ---
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg)
+
+ Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
+
+ **🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
+
+ **🚀 Ollama:** `ollama run ajindal/llama3.1-storm:8b`
+
+ ## TL;DR
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png)
+
+ We present [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B), a model that significantly outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) across diverse benchmarks, as shown in the performance comparison plot in the next section. Our approach consists of three key steps (illustrative sketches of steps 2 and 3 follow this list):
+ 1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g., 70B, 405B).**
+ 2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules. In our work, 50% of layers are frozen.
+ 3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using the [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. This merging method produces a blended model whose characteristics are smoothly interpolated from both parent models, so the result captures the essence of both. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves over Llama-3.1-8B-Instruct across 10 diverse benchmarks covering instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
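+
+ To make steps 2 and 3 concrete, here are two minimal, hypothetical sketches in PyTorch. They are illustrations under stated assumptions, not the actual training or merging code behind Llama-3.1-Storm-8B. First, Spectrum-style selective freezing; the high-SNR module-name patterns below are placeholders (Spectrum derives the selection from a per-module SNR analysis):
+
+ ```python
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
+
+ # Placeholder: name patterns for the high-SNR modules that Spectrum would keep trainable.
+ trainable_patterns = ["layers.30.", "layers.31.", "lm_head"]
+
+ for name, param in model.named_parameters():
+     # Freeze everything except the selected high-SNR modules.
+     param.requires_grad = any(pat in name for pat in trainable_patterns)
+ ```
+
+ Second, SLERP applied to a single pair of weight tensors; an actual merge (the tags above mention `mergekit`) runs this kind of interpolation over entire checkpoints:
+
+ ```python
+ import torch
+
+ def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
+     """Spherical linear interpolation between two same-shape weight tensors."""
+     a, b = w_a.flatten().float(), w_b.flatten().float()
+     dot = torch.clamp((a / a.norm()) @ (b / b.norm()), -1.0, 1.0)
+     omega = torch.arccos(dot)  # angle between the two weight directions
+     so = torch.sin(omega)
+     if so.abs() < 1e-8:  # nearly parallel weights: fall back to linear interpolation
+         return ((1 - t) * a + t * b).reshape(w_a.shape)
+     out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
+     return out.reshape(w_a.shape)
+ ```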
+
+ ## 🏆 Introducing Llama-3.1-Storm-8B
+ [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
+
+ As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves on Meta-Llama-3.1-8B-Instruct across various benchmarks: instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), knowledge-driven QA ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), reduced hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and function calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
+
+ We also benchmarked our model against the recently published [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B), which is likewise built on top of Llama-3.1-8B-Instruct. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**; Hermes-3-Llama-3.1-8B surpasses Llama-3.1-Storm-8B on MuSR, and the two models perform comparably on BBH.
+
+ ## Llama-3.1-Storm-8B Model Strengths
+ Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
+
+ | Model Strength | Relevant Benchmarks |
+ |---|---|
+ | 🎯 Improved Instruction Following | IFEval Strict (+3.93%) |
+ | 🌐 Enhanced Knowledge-Driven Question Answering | GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) |
+ | 🧠 Better Reasoning | ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) |
+ | 🤖 Superior Agentic Capabilities | BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) |
+ | 🚫 Reduced Hallucinations | TruthfulQA (+9%) |
+
+ **Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
+
+ ## Llama-3.1-Storm-8B Models
+ 1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
+ 2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
+ 3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
+ 4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
+
+ ## 💻 How to Use the Model
+ The examples below load the model in `bfloat16`, the dtype used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, which is the recommended way to run the model for the best results.
+
+ ### Installation
+ ```bash
+ pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
+ ```
+
+ Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate usage with simple hands-on examples:
+
+ ### Conversational Use-case
+ #### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
+ ##### Using `transformers.pipeline()` API
+ ```python
+ import transformers
+ import torch
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What is 2+2?"}
+ ]
+
+ outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
+ print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
+ ```
+
+ ##### Using `model.generate()` API
+ ```bash
+ pip install flash_attn==2.6.3
+ ```
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, LlamaForCausalLM
+
+ # Apply the Llama-3.1 chat template
+ def format_prompt(user_query):
+     template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
+     return template.format(user_query)
+
+
+ model_id = 'akjindal53244/Llama-3.1-Storm-8B'
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = LlamaForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     load_in_8bit=False,
+     load_in_4bit=False,
+     use_flash_attention_2=True
+ )
+
+ # Build the final input prompt after applying the chat template
+ prompt = format_prompt("What is 2+2?")
+
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
+ generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
+ response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(response)  # Expected Output: '2 + 2 = 4'
+ ```
+
+ #### Use with [vLLM](https://github.com/vllm-project/vllm)
+ ```python
+ from vllm import LLM, SamplingParams
+ from transformers import AutoTokenizer
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
+ num_gpus = 1
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
+ sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What is 2+2?"}
+ ]
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
+ ```
+
+ #### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
+ ```bash
+ pip install 'litgpt[all]'
+ litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
+ ```
+
+ ```python
+ from litgpt import LLM
+
+ llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
+ llm.generate("What do Llamas eat?")
+ ```
+
+ ### Function Calling Use-case
+
+ [**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.
+
+ #### Prompt Format for Function Calling
+ Llama-3.1-Storm-8B was trained with a specific system prompt for function calling:
+ ```
+ You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
+
+ Here are the available functions:
+ <tools>LIST_OF_TOOLS</tools>
+
+ For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
+ <tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
+ ```
+ The above system prompt should be used with `LIST_OF_TOOLS` replaced by the actual tool definitions, as the next example demonstrates.
+
+ #### Use with [vLLM](https://github.com/vllm-project/vllm)
+ ```python
+ import json
+ from vllm import LLM, SamplingParams
+ from transformers import AutoTokenizer
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
+ num_gpus = 1
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
+ sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
+
+
+ def create_system_prompt(tools_list):
+     # Literal braces in the example output are escaped ({{ }}) so that
+     # str.format() only fills the <tools>{}</tools> placeholder.
+     system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
+
+ Here are the available functions:
+ <tools>{}</tools>
+
+ For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
+ <tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""
+
+     # Convert the tools list to a string representation
+     tools_str = json.dumps(tools_list, ensure_ascii=False)
+     # Format the system prompt with the tools list
+     system_prompt = system_prompt_format.format(tools_str)
+     return system_prompt
+
+
+ # Example tools list
+ tools_list = [
+     {
+         "name": "peers",
+         "description": "Retrieves a list of company peers given a stock symbol.",
+         "parameters": {
+             "symbol": {
+                 "description": "The stock symbol for the company.",
+                 "type": "str",
+                 "default": ""
+             }
+         }
+     },
+     {
+         "name": "web_chain_details",
+         "description": "python",
+         "parameters": {
+             "chain_slug": {
+                 "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
+                 "type": "str",
+                 "default": "ethereum"
+             }
+         }
+     }
+ ]
+
+ # Create the system prompt with the tools list
+ system_prompt = create_system_prompt(tools_list)
+
+ messages = [
+     {"role": "system", "content": system_prompt},
+     {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
+ ]
+
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
+ ```
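+
+ Since tool calls come back as plain text wrapped in `<tool_call>...</tool_call>` tags, the caller still has to parse them before dispatching any function. Below is a minimal, illustrative parser; the helper name and error handling are our own additions, not part of the model or any library API:
+
+ ```python
+ import json
+ import re
+
+ def extract_tool_calls(text: str) -> list:
+     """Collect every JSON payload wrapped in <tool_call>...</tool_call> tags."""
+     calls = []
+     for payload in re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL):
+         try:
+             calls.append(json.loads(payload))
+         except json.JSONDecodeError:
+             # The model sometimes emits single-quoted dicts (see the expected
+             # output above); repair or skip such payloads as needed.
+             pass
+     return calls
+ ```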
+
+ #### Use with [Ollama](https://ollama.com/)
+ ```python
+ import ollama
+
+ tools = [{
+     'type': 'function',
+     'function': {
+         'name': 'get_current_weather',
+         'description': 'Get the current weather for a city',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'city': {
+                     'type': 'string',
+                     'description': 'The name of the city',
+                 },
+             },
+             'required': ['city'],
+         },
+     },
+ },
+ {
+     'type': 'function',
+     'function': {
+         'name': 'get_places_to_visit',
+         'description': 'Get places to visit in a city',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'city': {
+                     'type': 'string',
+                     'description': 'The name of the city',
+                 },
+             },
+             'required': ['city'],
+         },
+     },
+ },
+ ]
+
+ response = ollama.chat(
+     model='ajindal/llama3.1-storm:8b',
+     messages=[
+         {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
+         {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
+     ],
+     tools=tools
+ )
+
+ print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
+ ```
+
+ ## Alignment Note
+ While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
+
+ ## Cite Our Work
+ ```
+ @misc {ashvini_kumar_jindal_2024,
+     author    = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
+     title     = { Llama-3.1-Storm-8B },
+     year      = 2024,
+     url       = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
+     doi       = { 10.57967/hf/2902 },
+     publisher = { Hugging Face }
+ }
+ ```
+
+ ## Support Our Work
+ With three team members spread across three different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and four other competitions in the finance and Arabic-LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
+
+ **Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/akjindal53244__Llama-3.1-Storm-8B-details).
+
+ | Metric              | Value |
+ |---------------------|------:|
+ | Avg.                | 29.84 |
+ | IFEval (0-Shot)     | 80.51 |
+ | BBH (3-Shot)        | 31.49 |
+ | MATH Lvl 5 (4-Shot) | 16.62 |
+ | GPQA (0-shot)       | 10.18 |
+ | MuSR (0-shot)       |  9.12 |
+ | MMLU-PRO (5-shot)   | 31.15 |
mlc-chat-config.json ADDED
@@ -0,0 +1,91 @@
+ {
+     "version": "0.1.0",
+     "model_type": "llama",
+     "quantization": "q4f32_1",
+     "model_config": {
+         "hidden_size": 4096,
+         "intermediate_size": 14336,
+         "num_attention_heads": 32,
+         "num_hidden_layers": 32,
+         "rms_norm_eps": 1e-05,
+         "vocab_size": 128256,
+         "tie_word_embeddings": false,
+         "position_embedding_base": 500000.0,
+         "rope_scaling": {
+             "factor": 8.0,
+             "low_freq_factor": 1.0,
+             "high_freq_factor": 4.0,
+             "original_max_position_embeddings": 8192,
+             "rope_type": "llama3"
+         },
+         "context_window_size": 131072,
+         "prefill_chunk_size": 2042,
+         "num_key_value_heads": 8,
+         "head_dim": 128,
+         "tensor_parallel_shards": 1,
+         "pipeline_parallel_stages": 1,
+         "max_batch_size": 128,
+         "disaggregation": false
+     },
+     "vocab_size": 128256,
+     "context_window_size": 131072,
+     "sliding_window_size": -1,
+     "prefill_chunk_size": 2042,
+     "attention_sink_size": -1,
+     "tensor_parallel_shards": 1,
+     "pipeline_parallel_stages": 1,
+     "temperature": 0.6,
+     "presence_penalty": 0.0,
+     "frequency_penalty": 0.0,
+     "repetition_penalty": 1.0,
+     "top_p": 0.9,
+     "tokenizer_files": [
+         "tokenizer.json",
+         "tokenizer_config.json"
+     ],
+     "tokenizer_info": {
+         "token_postproc_method": "byte_level",
+         "prepend_space_in_encode": false,
+         "strip_space_in_decode": false
+     },
+     "conv_template": {
+         "name": "llama-3_1",
+         "system_template": "<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>",
+         "system_message": "You are a helpful, respectful and honest assistant.",
+         "system_prefix_token_ids": [
+             128000
+         ],
+         "add_role_after_system_message": true,
+         "roles": {
+             "user": "<|start_header_id|>user",
+             "assistant": "<|start_header_id|>assistant",
+             "tool": "<|start_header_id|>ipython"
+         },
+         "role_templates": {
+             "user": "{user_message}",
+             "assistant": "{assistant_message}",
+             "tool": "{tool_message}"
+         },
+         "messages": [],
+         "seps": [
+             "<|eot_id|>"
+         ],
+         "role_content_sep": "<|end_header_id|>\n\n",
+         "role_empty_sep": "<|end_header_id|>\n\n",
+         "stop_str": [],
+         "stop_token_ids": [
+             128001,
+             128008,
+             128009
+         ],
+         "function_string": "",
+         "use_function_calling": false
+     },
+     "pad_token_id": 0,
+     "bos_token_id": 128000,
+     "eos_token_id": [
+         128001,
+         128008,
+         128009
+     ]
+ }
ndarray-cache-b16.json ADDED
The diff for this file is too large to render. See raw diff
 
ndarray-cache.json ADDED
The diff for this file is too large to render. See raw diff
 
params_shard_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f910a7c2c549253fcdc78f9ec58dcd4c413f69663b21e23e79e2f2e67b653165
+ size 262668288
params_shard_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:498ff6b5b11129f430623e072fc577e668efe2a010142986fdb5363aa1e65c72
+ size 262668288
params_shard_10.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e9af502aa30fc6814f0ab13e2c585d2266dbee94c235ec30b2a09fdf8750d4e
+ size 29360128
params_shard_100.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd13ebe0b854db834ad48df35c93d6c500d9667fa3f6ac7f2a0c0d8e06001c5a
+ size 33046528
params_shard_101.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0dc5de6120172e85749244426930203719dff6dc5a8a3d57b2e0dd0dc3ecb203
+ size 29360128
params_shard_102.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b762c3649f9d19944d6a8cb41d4cc869fb8f970712a2af8c1d88c8f0c0be84dd
+ size 58720256
params_shard_103.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb45e90fadc5ff1b124d7dcbb19b056a08d1f521481f6d8c3efdeb779637c993
+ size 22036480
params_shard_104.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90ea24a93db2bffe5ab1f84ce0088b4bd0e19fe546311cc9cbd2b48399ac4257
+ size 29360128
params_shard_105.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b34ee606e65d652e1c198afbc5c8244857054dab918f34891ca42f3071f24bd
+ size 58720256
params_shard_106.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a8e449883d8dad4892266f2fb758f33f72baf77448f46959be3e1c30e74a6d2
+ size 27271168
params_shard_107.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dcf9400882d92c31f8e9bc33c589a40470fc4bd7f1ee760e536322454a99541
+ size 30949376
params_shard_11.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68fff464b5a9bfd4fbac6a27a1534412ab0a508567bd1570703a47429ce3ce95
+ size 58720256
params_shard_12.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df4d2c73ab2936469da98cdb115df5a2c8e0c5730dbeac69c5b8ded9309d7a29
+ size 22036480
params_shard_13.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c30ba54d20c6365900d3a345a27f82fa75af9a8831d274fc7bbbd7758d1a2860
+ size 29360128
params_shard_14.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b083df307598640ce4474210740b93707a771c944e786206b036ebdd5f7f0160
+ size 58720256
params_shard_15.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d458ef5fcfc7588f160ba9f01812c6b32e0d982b2d7193a3f66d08729f5535d
+ size 27271168
params_shard_16.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:783ea21be010924fed7dacf0f298dc764c410785c18eb4167c66f5da455ab313
+ size 29360128
params_shard_17.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df997b1322e478f02dfdffd17aad1a8ff95a2e287e99a5e4da3f8244e1d3b3e8
+ size 30949376
params_shard_18.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fece203c58301867f21ab588c37f9792efdeb3c4960e943b32d2ec17a912d2fd
+ size 58720256
params_shard_19.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24d3d0f042dd6f6c6ba119d38a6385530859e2c93e7634c4b05233d2f753e84f
+ size 25174016
params_shard_2.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef54b4a3fbb69baeaa73eb38ab9efa8c8ffc48958910c2ac3ac045ca3ba3caab
+ size 32833536
params_shard_20.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbd3079a036e754cfa4d6373224162747bb1b5e654a4e8381b97ab18439025bb
+ size 29360128
params_shard_21.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4bbdad77cf509bb774b4443641c7aa0c6c76ce1de873a57b5c6182cce8cdaf6
+ size 58720256
params_shard_22.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5edfdfbf75d789f3d0ab5d0d86e406d8ef96d43f317b0184e1a0987148047ba4
+ size 33046528
params_shard_23.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66a2c3e609d07848c8cbb9c29bd6bff13fb06461b08aff248310b0400d8394c9
+ size 29360128
params_shard_24.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7531f7a037c1138a59b8e3a20093eb96515b35f7f978597cd0a6dbb0e44d957e
+ size 58720256
params_shard_25.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d3192d73ded2f53e68ed659fc92cd6e2a07afc5122f692c760aba6c64c9fd76
+ size 22036480
params_shard_26.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b1058e3d721f45ac76f4d45886796de060834588fbbb1deedefb3c2f7c5ea62
+ size 29360128
params_shard_27.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a7d0a0502f0199e539359f3ffa8c9432aebbf7ea0c37f9c21dc435d458e9728
+ size 58720256
params_shard_28.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9aca5c5987950e2e19dba2701eb21cf31f752f0d008931f82d6acac5ac9a780
+ size 27271168
params_shard_29.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c76f61ec9664a6def72c5fb14931242616bce4ec8916768805d6a6647ffac0f
+ size 29360128
params_shard_3.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e160294e017a92895c19cbbc196f83eff133b1ca732494c387a9c99e2475bd33
+ size 29360128
params_shard_30.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c95be0ad356c19e3ba8e49dec430f4421b26d200a1322928fba614cf10d7765
+ size 30949376
params_shard_31.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d82cdae817b12944872c2b35825750cf09aec1e5ea340419c4ee2dd2ae46bb1
+ size 58720256
params_shard_32.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8014732d825c6957ec9c32a1430dc3ebdee066e7f8fecaf30ea445617cfcb67e
+ size 25174016
params_shard_33.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0dd1c635bfe57d6c7cf7591e8065fc226b8086517b33708c7f688bbe410923f1
+ size 29360128
params_shard_34.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b58637875e08ddedac391d370a119533dcda1bd06c53dffb2ce20fdc8a6b9e3
+ size 58720256
params_shard_35.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f0b17d9e076b5a2207e026b40b4d451eeddb343f67ff0107b303987cba9eeb6
+ size 33046528
params_shard_36.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5d546bca36529c4f28e00167ca1221c0de89c26e550365405992c60a6586a9f
+ size 29360128
params_shard_37.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5aac8a6bf28a92bd8f4c3bfde080e662be12d8b9f1c7802bfd5d77a597213fbc
+ size 58720256
params_shard_38.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f16a4bf423ce9afbd6a8d55a1fc6b3d6b2b3f6e85ff0a9fe8fd58b336dc82ad9
+ size 22036480
params_shard_39.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdfc7460086514da58f8fdb33adbfff73b5a315160d5fa96194c2c87a848b951
+ size 29360128
params_shard_4.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e473de85228a4242c10cd54ade2aae473d620c9082a846aa5ca637e7f70225d6
+ size 32841728
params_shard_40.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d1f63a34be30ce9b5d70f11ff508c3d6d72f0647c16708798ebbb5c2bbc1a0bb
+ size 58720256
params_shard_41.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40b239332182f640ae593443712bff4f1d95ef05de1974e24571cac93f6a45ba
+ size 27271168