avemio-digital committed on
Commit
f7dd6ce
verified
1 Parent(s): 56a70cc

Update README.md

Files changed (1)
  1. README.md +214 -271
README.md CHANGED
@@ -1,271 +1,214 @@
1
- ---
2
- language:
3
- - en
4
- - fr
5
- - de
6
- - es
7
- - it
8
- - pt
9
- - ru
10
- - zh
11
- - ja
12
- license: apache-2.0
13
- base_model: avemio-digital/Mistral-Nemo-Base-2407_CPT_with_2epochs
14
- extra_gated_description: If you want to learn more about how we process your personal
15
- data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
16
- ---
17
-
18
- # Model Card for Mistral-Nemo-Instruct-2407
19
-
20
- The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.
21
-
22
- For more details about this model please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).
23
-
24
- ## Key features
25
- - Released under the **Apache 2 License**
26
- - Pre-trained and instructed versions
27
- - Trained with a **128k context window**
28
- - Trained on a large proportion of **multilingual and code data**
29
- - Drop-in replacement of Mistral 7B
30
-
31
- ## Model Architecture
32
- Mistral Nemo is a transformer model, with the following architecture choices:
33
- - **Layers:** 40
34
- - **Dim:** 5,120
35
- - **Head dim:** 128
36
- - **Hidden dim:** 14,336
37
- - **Activation Function:** SwiGLU
38
- - **Number of heads:** 32
39
- - **Number of kv-heads:** 8 (GQA)
40
- - **Vocabulary size:** 2**17 ~= 128k
41
- - **Rotary embeddings (theta = 1M)**
42
-
43
- ## Metrics
44
-
45
- ### Main Benchmarks
46
-
47
- | Benchmark | Score |
48
- | --- | --- |
49
- | HellaSwag (0-shot) | 83.5% |
50
- | Winogrande (0-shot) | 76.8% |
51
- | OpenBookQA (0-shot) | 60.6% |
52
- | CommonSenseQA (0-shot) | 70.4% |
53
- | TruthfulQA (0-shot) | 50.3% |
54
- | MMLU (5-shot) | 68.0% |
55
- | TriviaQA (5-shot) | 73.8% |
56
- | NaturalQuestions (5-shot) | 31.2% |
57
-
58
- ### Multilingual Benchmarks (MMLU)
59
-
60
- | Language | Score |
61
- | --- | --- |
62
- | French | 62.3% |
63
- | German | 62.7% |
64
- | Spanish | 64.6% |
65
- | Italian | 61.3% |
66
- | Portuguese | 63.3% |
67
- | Russian | 59.2% |
68
- | Chinese | 59.0% |
69
- | Japanese | 59.0% |
70
-
71
- ## Usage
72
-
73
- The model can be used with three different frameworks
74
-
75
- - [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#mistral-inference)
76
- - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
77
- - [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)
78
-
79
- ### Mistral Inference
80
-
81
- #### Install
82
-
83
- It is recommended to use `mistralai/Mistral-Nemo-Instruct-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
84
-
85
- ```
86
- pip install mistral_inference
87
- ```
88
-
89
- #### Download
90
-
91
- ```py
92
- from huggingface_hub import snapshot_download
93
- from pathlib import Path
94
-
95
- mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
96
- mistral_models_path.mkdir(parents=True, exist_ok=True)
97
-
98
- snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
99
- ```
100
-
101
- #### Chat
102
-
103
- After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
104
-
105
- ```
106
- mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256 --temperature 0.35
107
- ```
108
-
109
- *E.g.* Try out something like:
110
- ```
111
- How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
112
- ```
113
-
114
- #### Instruct following
115
-
116
- ```py
117
- from mistral_inference.transformer import Transformer
118
- from mistral_inference.generate import generate
119
-
120
- from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
121
- from mistral_common.protocol.instruct.messages import UserMessage
122
- from mistral_common.protocol.instruct.request import ChatCompletionRequest
123
-
124
- tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
125
- model = Transformer.from_folder(mistral_models_path)
126
-
127
- prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."
128
-
129
- completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])
130
-
131
- tokens = tokenizer.encode_chat_completion(completion_request).tokens
132
-
133
- out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
134
- result = tokenizer.decode(out_tokens[0])
135
-
136
- print(result)
137
- ```
138
-
139
- #### Function calling
140
-
141
- ```py
142
- from mistral_common.protocol.instruct.tool_calls import Function, Tool
143
- from mistral_inference.transformer import Transformer
144
- from mistral_inference.generate import generate
145
-
146
- from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
147
- from mistral_common.protocol.instruct.messages import UserMessage
148
- from mistral_common.protocol.instruct.request import ChatCompletionRequest
149
-
150
-
151
- tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
152
- model = Transformer.from_folder(mistral_models_path)
153
-
154
- completion_request = ChatCompletionRequest(
155
- tools=[
156
- Tool(
157
- function=Function(
158
- name="get_current_weather",
159
- description="Get the current weather",
160
- parameters={
161
- "type": "object",
162
- "properties": {
163
- "location": {
164
- "type": "string",
165
- "description": "The city and state, e.g. San Francisco, CA",
166
- },
167
- "format": {
168
- "type": "string",
169
- "enum": ["celsius", "fahrenheit"],
170
- "description": "The temperature unit to use. Infer this from the users location.",
171
- },
172
- },
173
- "required": ["location", "format"],
174
- },
175
- )
176
- )
177
- ],
178
- messages=[
179
- UserMessage(content="What's the weather like today in Paris?"),
180
- ],
181
- )
182
-
183
- tokens = tokenizer.encode_chat_completion(completion_request).tokens
184
-
185
- out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
186
- result = tokenizer.decode(out_tokens[0])
187
-
188
- print(result)
189
- ```
190
-
191
- ### Transformers
192
-
193
- > [!IMPORTANT]
194
- > NOTE: Until a new release has been made, you need to install transformers from source:
195
- > ```sh
196
- > pip install git+https://github.com/huggingface/transformers.git
197
- > ```
198
-
199
- If you want to use Hugging Face `transformers` to generate text, you can do something like this.
200
-
201
- ```py
202
- from transformers import pipeline
203
-
204
- messages = [
205
- {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
206
- {"role": "user", "content": "Who are you?"},
207
- ]
208
- chatbot = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407", max_new_tokens=128)
209
- chatbot(messages)
210
- ```
211
-
212
- ## Function calling with `transformers`
213
-
214
- To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
215
- [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
216
- in the `transformers` docs for more information.
217
-
218
- ```python
219
- from transformers import AutoModelForCausalLM, AutoTokenizer
220
- import torch
221
-
222
- model_id = "mistralai/Mistral-Nemo-Instruct-2407"
223
- tokenizer = AutoTokenizer.from_pretrained(model_id)
224
-
225
- def get_current_weather(location: str, format: str):
226
- """
227
- Get the current weather
228
-
229
- Args:
230
- location: The city and state, e.g. San Francisco, CA
231
- format: The temperature unit to use. Infer this from the users location. (choices: ["celsius", "fahrenheit"])
232
- """
233
- pass
234
-
235
- conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
236
- tools = [get_current_weather]
237
-
238
- # format and tokenize the tool use prompt
239
- inputs = tokenizer.apply_chat_template(
240
- conversation,
241
- tools=tools,
242
- add_generation_prompt=True,
243
- return_dict=True,
244
- return_tensors="pt",
245
- )
246
-
247
- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
248
-
249
- inputs.to(model.device)
250
- outputs = model.generate(**inputs, max_new_tokens=1000)
251
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
252
- ```
253
-
254
- Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
255
- results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
256
- see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
257
- and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
258
- exactly 9 alphanumeric characters.
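- As a hedged sketch only (following the format shown in the `transformers` chat-templating docs; the ID is randomly generated here and the weather value is made up), appending a tool call and its matching result to the conversation could look like:
- 
- ```python
- import random, string
- 
- # 9-character alphanumeric tool call ID, as required by the template
- tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))
- 
- # the model's tool call ...
- conversation.append({
-     "role": "assistant",
-     "tool_calls": [{
-         "type": "function",
-         "id": tool_call_id,
-         "function": {"name": "get_current_weather",
-                      "arguments": {"location": "Paris, France", "format": "celsius"}},
-     }],
- })
- 
- # ... and the tool result, carrying the same ID
- conversation.append({
-     "role": "tool",
-     "tool_call_id": tool_call_id,
-     "name": "get_current_weather",
-     "content": "22",
- })
- ```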
259
-
260
- > [!TIP]
261
- > Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
262
-
263
- ## Limitations
264
-
265
- The Mistral Nemo Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
266
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
267
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
268
-
269
- ## The Mistral AI Team
270
-
271
- Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
 
1
+ ---
2
+ license: mit
3
+ datasets:
4
+ - avemio/GRAG-CPT-HESSIAN-AI
5
+ - avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
6
+ language:
7
+ - en
8
+ - de
9
+ base_model:
10
+ - avemio/GRAG-LLAMA-3.1-8B-CPT-HESSIAN-AI
11
+ pipeline_tag: question-answering
12
+ tags:
13
+ - German
14
+ - RAG
15
+ - Retrieval
16
+ - Question-Answering
17
+ - Summarization
18
+ - Reasoning
19
+ ---
20
+
21
+
22
+ <img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="margin-left: auto; margin-right: auto; display: block;"/>
23
+
24
+
25
+ # Model Card for GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI
26
+
27
+ <!-- Provide a quick summary of what the model is/does. -->
28
+
29
+ **GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions that drive German research collaboration in business-focused Generative AI by 2025.
30
+
31
+ Our GRAG-LLAMA-SFT model is trained on the **[GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) dataset**.
32
+
33
+ ## Model Details
34
+
35
+ The core models released in this batch are the following:
36
+ | Model | Training Tokens |
37
+ |------|--------|
38
+ | [GRAG-LLAMA-CPT](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-CPT-HESSIAN-AI) | 507.47 million |
39
+ | [GRAG-LLAMA-SFT](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI) | 2.03 billion |
40
+ | [GRAG-LLAMA-ORPO](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | 2.0577 billion |
41
+ ### Model Description
42
+
43
+ <!-- Provide a longer summary of what this model is. -->
44
+
45
+ - **Developed by:** Avemio AI Team
46
+ - **Supported by:** Hessian AI
47
+ - **Model type:** a Transformer-style autoregressive language model.
48
+ - **Language(s) (NLP):** German, English
49
+ - **License:** The code and model are released under Apache 2.0.
50
+ - **Contact:** [[email protected]](mailto:[email protected])
51
+
52
+
53
+ ### Model Sources
54
+
55
+ <!-- Provide the basic links for the model. -->
56
+
57
+ - **Project Page:**
58
+ - **Repositories:**
59
+ - Training:
60
+ - Evaluation code:
61
+ - **Technical blog post:**
62
+ <!-- - **Press release:** TODO -->
63
+
64
+ ## Uses
65
+
66
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
67
+
68
+ ### Inference
69
+ Quickly get inference running with the following required installation:
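+ A typical environment (versions are not pinned by this card) needs `transformers`, `torch`, and `accelerate` for `device_map="auto"`:
+ 
+ ```sh
+ pip install -U transformers torch accelerate
+ ```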
70
+ Now, proceed as usual with HuggingFace:
71
+ ```python
72
+ from transformers import AutoModelForCausalLM, AutoTokenizer
73
+
74
+ model_name = "avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"
75
+
76
+ model = AutoModelForCausalLM.from_pretrained(
77
+ model_name,
78
+ torch_dtype="auto",
79
+ device_map="auto"
80
+ )
81
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
82
+ im_end_token_id = tokenizer.convert_tokens_to_ids('<|im_end|>')
83
+ im_start_token_id = tokenizer.convert_tokens_to_ids('<|im_start|>')
84
+
85
+ messages = [
86
+ {"role": "system", "content": "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine 脺berlegungen zur L枚sung des Problems."},
87
+ {"role": "user", "content": "Ferdinand steht vor der Herausforderung, eine faire Besuchsregelung f眉r seine drei Kinder zu finden, die den Bed眉rfnissen jedes einzelnen Kindes gerecht wird. Jedes Kind hat unterschiedliche Vorlieben und Bed眉rfnisse, die in den Besuchsplan integriert werden m眉ssen. Er muss sicherstellen, dass die Regelung sowohl den Interessen der Kinder als auch den rechtlichen Vorgaben entspricht. Ferdinand hat eine Woche Zeit, um einen Vorschlag zu erarbeiten, den er mit seinem Anwalt besprechen kann."}
88
+ ]
89
+ text = tokenizer.apply_chat_template(
90
+ messages,
91
+ tokenize=False,
92
+ add_generation_prompt=True
93
+ )
94
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
95
+
96
+ generated_ids = model.generate(
97
+ **model_inputs,
98
+ max_length=2024,
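+ # note: temperature, top_k and top_p below have no effect while do_sample=False (greedy decoding)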
99
+ temperature=0.01,
100
+ do_sample=False,
101
+ #bos_token_id=im_start_token_id,
102
+ eos_token_id=im_end_token_id,
103
+ pad_token_id=tokenizer.eos_token_id,
104
+ repetition_penalty=1.1,
105
+ num_return_sequences=1,
106
+ top_k=40,
107
+ top_p=0.95,
108
+ )
109
+ generated_ids = [
110
+ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
111
+ ]
112
+
113
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
114
+
115
+ ```
116
+
117
118
+
119
+ ### Fine-tuning
120
+ We are providing a comprehensive Google Colab notebook to guide users through the process of fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings.
121
+ [Colab-Notebook](https://colab.research.google.com/drive/1U6aP3vIkABaCm7doGV1waHgTLvXNGbBp?usp=sharing).
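+ For orientation, here is a minimal LoRA-style sketch of supervised fine-tuning with `peft` and the Hugging Face `Trainer`. It illustrates the general setup, not the notebook's exact recipe: the target modules, sequence length, hyperparameters, and toy conversation are assumptions standing in for the GRAG-SFT data.
+ 
+ ```python
+ import torch
+ from datasets import Dataset
+ from peft import LoraConfig, get_peft_model
+ from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                           DataCollatorForLanguageModeling, Trainer, TrainingArguments)
+ 
+ model_name = "avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
+ 
+ # Train small LoRA adapters instead of all 8B weights (target modules are illustrative).
+ model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
+ 
+ # Toy training sample; in practice, render each GRAG-SFT conversation with the chat template like this.
+ conversation = [
+     {"role": "system", "content": "Beantworte die Frage anhand des gegebenen Kontexts."},
+     {"role": "user", "content": "Kontext: ... Frage: ..."},
+     {"role": "assistant", "content": "..."},
+ ]
+ text = tokenizer.apply_chat_template(conversation, tokenize=False)
+ train_data = Dataset.from_list([dict(tokenizer(text, truncation=True, max_length=4096))])
+ 
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="grag-llama-sft-lora", per_device_train_batch_size=1,
+                            gradient_accumulation_steps=8, num_train_epochs=1,
+                            learning_rate=2e-4, bf16=True, logging_steps=1),
+     train_dataset=train_data,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM: labels mirror inputs
+ )
+ trainer.train()
+ ```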
122
+
123
+ ## Evaluation
124
+
125
+ <!-- This section describes the evaluation protocols and provides the results. -->
126
+ The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.
127
+
128
+ Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.
129
+
130
+ - **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
131
+ - **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
132
+ - **Instruction following:** This metric assessed the model's ability to follow specific instructions provided for each task.
133
+ - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
134
+
135
+
136
+ | Metric | [Vanilla-Phi-3.5-Mini-4B](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [GRAG-PHI-SFT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) | [GRAG-PHI-ORPO](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | [GRAG-PHI-MERGED]() | GPT-3.5-TURBO |
137
+ |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
138
+ | **Average_language_quality** | 85.88 | 89.61 | 89.1 | | |
139
+ | **extraction_recall_weighted_overall_score** | 35.2 | 52.3 | 48.8 | | |
140
+ | **qa_multiple_references_weighted_overall_score** | 65.3 | 71.0 | 74.0 | | |
141
+ | **qa_without_time_difference_weighted_overall_score** | 71.5 | 85.6 | 85.6 | | |
142
+ | **qa_with_time_difference_weighted_overall_score** | 65.3 | 87.9 | 85.4 | | |
143
+ | **reasoning_weighted_overall_score** | 69.4 | 71.5 | 73.4 | | |
144
+ | **relevant_context_weighted_overall_score** | 71.3 | 69.1 | 65.5 | | |
145
+ | **summarizations_weighted_overall_score** | 73.8 | 81.6 | 80.3 | | |
146
+
147
+ ## Model Details
148
+
149
+ ### Data
150
+ For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.
151
+
152
+ #### Description
153
+ The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM Function Calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper [“Scaling Synthetic Data Creation with 1,000,000,000 Personas”](https://arxiv.org/abs/2406.20094) to generate a diverse set of reasoning tasks across various domains.
154
+ This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.
155
+
156
+ #### Task Instruction Format
157
+ The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions often wrapped in XML tags that meta-define expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.
158
+ The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.
159
+ Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.
160
+ The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
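+ As an illustration only (the exact field names and schema of the training data may differ), a context payload and citation marker of this kind could look like:
+ 
+ ```python
+ import json
+ 
+ # Illustrative context items: numeric ids, source, text, and timestamps, as described above.
+ context = [
+     {"id": 1, "source": "Wikipedia: Brandenburger Tor", "text": "Das Brandenburger Tor wurde 1791 fertiggestellt.", "timestamp": "2023-05-11"},
+     {"id": 2, "source": "Wikipedia: Brandenburger Tor", "text": "Es steht am Pariser Platz in Berlin.", "timestamp": "2021-08-02"},
+ ]
+ 
+ # The retrieved context is passed to the model as JSON inside the prompt ...
+ user_message = (
+     "Kontext:\n" + json.dumps(context, ensure_ascii=False, indent=2)
+     + "\n\nFrage: Wann wurde das Brandenburger Tor fertiggestellt?"
+ )
+ 
+ # ... and answers reference passages by their numeric index, e.g. via a JSON citation marker:
+ citation_marker = {"citations": [1]}
+ ```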
161
+
162
+ ### Architecture
163
+
164
+
165
+ | Parameter | GRAG-LLAMA-SFT |
166
+ |-----------------------|-----------------------------------------------------------------------------------------------|
167
+ | **d_model** | 3072 |
168
+ | **num heads** | 32 |
169
+ | **num layers** | 32 |
170
+ | **MLP ratio** | 3.5 |
171
+ | **LayerNorm type** | RMSNorm |
172
+ | **pos embeddings** | RoPE |
173
+ | **attention variant**| Standard Multi-Head Self Attention |
174
+ | **biases** | none |
175
+ | **block type** | sequential |
176
+ | **activation** | SiLU |
177
+ | **sequence length** | 131072 |
178
+ | **weight dtype** | bfloat16 |
179
+
180
+ ### Hyperparameters
181
+
182
+
183
+ | Parameter | GRAG-LLAMA-SFT |
184
+ |---------------------------|--------------------|
185
+ | **warmup steps** | 50 |
186
+ | **peak LR** | 5.0E-07 |
187
+ | **weight decay** | 0.1 |
188
+ | **LR schedule** | linear |
189
+ | **gradient reduce dtype** | FP32 |
190
+ | **optimizer state dtype** | FP32 |
191
+
192
+ ## Environmental Impact
193
+
194
+ GRAG-PHI-SFT, running on 8 NVIDIA A100 GPUs for 5 days, has an approximate power consumption as follows (8 GPUs × 120 hours ≈ 960 GPU-hours, i.e. an average of roughly 300 W per GPU):
195
+
196
+ It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.
197
+
198
+ | Model | GPU Type | Power Consumption From GPUs |
199
+ |----------------|---------------------|-----------------------------|
200
+ | GRAG-PHI-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh |
201
+ ## Bias, Risks, and Limitations
202
+
203
+ Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
204
+ Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
205
+
206
+ Beyond that, many statements from GRAG-LLAMA-SFT, as from any LLM, will often not be true, so they should be checked.
207
+
208
+
209
+
210
+
211
+ ## Model Card Contact
212
+
213
+
214
+ For errors in this model card, please contact [grag@avemio.digital](mailto:grag@avemio.digital).