---
license: apache-2.0
datasets:
- avemio/GRAG-CPT-HESSIAN-AI
- avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
language:
- en
- de
base_model:
- avemio/GRAG-NEMO-12B-CPT-HESSIAN-AI
pipeline_tag: question-answering
tags:
- German
- RAG
- Retrieval
- Question-Answering
- Summarization
- Reasoning
---
<img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="display: block; margin-left: auto; margin-right: auto;"/>
# GRAG-NEMO-12B-SFT-HESSIAN-AI
<!-- Provide a quick summary of what the model is/does. -->
**GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions that advance German research collaboration in business-focused generative AI by 2025.
Our GRAG-NEMO-SFT model is trained on the **[GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) dataset.**
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens |
|------|--------|
| [GRAG-NEMO-CPT](https://huggingface.co/avemio/GRAG-NEMO-12B-CPT-HESSIAN-AI) | 507.47 million |
| [GRAG-NEMO-SFT](https://huggingface.co/avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI) | 2.03 billion |
| [GRAG-NEMO-ORPO](https://huggingface.co/avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI) | 2.0577 billion |
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** [[email protected]](mailto:[email protected])
### Model Sources
<!-- Provide the basic links for the model. -->
- **Training Study:** [Training Study](https://avemio.digital/wp-content/uploads/2025/01/GRAG-TRAINING-STUDY-Advancing-German-Language-AI-with-hessian-AI.pdf)
- **Repositories:**
- Training: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing)
- Evaluation code:
- [GRAG-LLM-HARD-BENCHMARK](https://github.com/avemio-digital/GRAG-LLM-HARD-BENCHMARK.git)
- [GRAG-LLM-EASY-BENCHMARK](https://github.com/avemio-digital/GRAG-LLM-EASY-BENCHMARK.git)
- **Technical blog post:**
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running by first installing the required dependencies.
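The exact package set is not pinned in this card; a minimal setup matching the code below (`accelerate` backs `device_map="auto"`) would be:
```bash
pip install -U transformers accelerate torch
```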
Then proceed as usual with Hugging Face's `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The model uses ChatML-style special tokens to delimit conversation turns.
im_end_token_id = tokenizer.convert_tokens_to_ids('<|im_end|>')
im_start_token_id = tokenizer.convert_tokens_to_ids('<|im_start|>')

messages = [
    {"role": "system", "content": "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine Überlegungen zur Lösung des Problems."},
    {"role": "user", "content": "Ferdinand steht vor der Herausforderung, eine faire Besuchsregelung für seine drei Kinder zu finden, die den Bedürfnissen jedes einzelnen Kindes gerecht wird. Jedes Kind hat unterschiedliche Vorlieben und Bedürfnisse, die in den Besuchsplan integriert werden müssen. Er muss sicherstellen, dass die Regelung sowohl den Interessen der Kinder als auch den rechtlichen Vorgaben entspricht. Ferdinand hat eine Woche Zeit, um einen Vorschlag zu erarbeiten, den er mit seinem Anwalt besprechen kann."}
]

# add_generation_prompt=True appends the assistant turn header so the
# model starts generating its answer rather than continuing the user turn.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_length=2024,
    do_sample=False,  # greedy decoding; temperature/top_k/top_p only apply when do_sample=True
    #bos_token_id=im_start_token_id,
    eos_token_id=im_end_token_id,
    pad_token_id=tokenizer.eos_token_id,
    repetition_penalty=1.1,
    num_return_sequences=1,
)

# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Fine-tuning
We provide a comprehensive Google Colab notebook that guides users through fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.
Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.
- **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
- **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
- **Instruction following:** This metric assessed the model's ability to follow specific instructions provided for each task.
- **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
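The exact weighting scheme is defined in the evaluation repositories linked above, not in this card. As a rough illustration only, an overall score of this kind can be computed as a weighted mean of the three per-metric scores; the weights below are placeholders, not the ones used in the benchmark:

```python
def overall_score(language_quality: float,
                  correctness: float,
                  instruction_following: float,
                  weights: tuple[float, float, float] = (0.25, 0.5, 0.25)) -> float:
    """Weighted mean of the three metric scores on a 0-100 scale.

    The weights here are illustrative placeholders; the actual weighting
    lives in the GRAG-LLM-*-BENCHMARK repositories.
    """
    w_lang, w_corr, w_instr = weights
    return (w_lang * language_quality
            + w_corr * correctness
            + w_instr * instruction_following)

# Example with hypothetical subset scores:
print(overall_score(89.6, 85.0, 90.0))  # -> 87.4
```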
| Metric | [Vanilla-Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) | **[GRAG-NEMO-SFT](https://huggingface.co/avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI)** | [GRAG-NEMO-ORPO](https://huggingface.co/avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI) | GPT-3.5-TURBO |
|------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|----------------|
| Average Language Quality | 85.88 | **89.61** | 89.1 | 91.86 |
| **OVERALL SCORES (weighted):** | | | | |
| extraction_recall | 35.2 | **52.3** | 48.8 | 87.2 |
| qa_multiple_references | 65.3 | **71.0** | 74.0 | 77.2 |
| qa_without_time_difference | 71.5 | **85.6** | 85.6 | 83.1 |
| qa_with_time_difference | 65.3 | **87.9** | 85.4 | 83.2 |
| relevant_context | 71.3 | **69.1** | 65.5 | 89.5 |
| summarizations | 73.8 | **81.6** | 80.3 | 86.9 |
## Training Details
### Data
For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.
#### Description
The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on Hugging Face (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by Tencent's paper [“Scaling Synthetic Data Creation with 1,000,000,000 Personas”](https://arxiv.org/abs/2406.20094) and covers a diverse set of reasoning tasks across various domains.
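As a minimal sketch of the knowledge-graph structure described above (all class and field names here are illustrative, not taken from the actual training pipeline), each Question-Answer node links to both supporting and distractor context passages from the same page:

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """A passage from a Wikipedia page; names are illustrative only."""
    context_id: str
    text: str
    relevant: bool  # whether this passage actually supports the answer

@dataclass
class QANode:
    question: str
    answer: str
    contexts: list[ContextNode] = field(default_factory=list)

# One training example: the model must answer from the relevant
# context while ignoring the distractor drawn from the same page.
example = QANode(
    question="Wann wurde die Universität Heidelberg gegründet?",
    answer="1386.",
    contexts=[
        ContextNode("ctx-1", "Die Universität Heidelberg wurde 1386 gegründet...", relevant=True),
        ContextNode("ctx-2", "Heidelberg liegt am Neckar...", relevant=False),
    ],
)
```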
This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.
#### Task Instruction Format
The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions, often wrapped in XML tags, that define the expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.
The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.
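For illustration, a context record of this kind might look as follows; the exact field names are not published in this card and are assumptions:

```python
# Hypothetical context record as it might be passed to the model;
# field names are illustrative assumptions, not the official schema.
context_record = {
    "id": "wiki-ctx-0042",                # unique identifier
    "source": "Universität Heidelberg (de.wikipedia.org)",
    "text": "Die Universität Heidelberg wurde 1386 gegründet...",
    "timestamp": "2024-11-30T12:00:00Z",  # used for time-difference tasks
    "metadata": {"task": "qa_with_time_difference"},
}
```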
Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.
The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
### Architecture
| Parameter | GRAG-NEMO-SFT |
|-----------------------|-----------------------------------------------------------------------------------------------|
| **d_model** | 5120 |
| **num heads** | 32 |
| **num layers** | 40 |
| **MLP ratio** | 2.8 |
| **LayerNorm type** | RMSNorm |
| **pos embeddings** | RoPE |
| **attention variant**| Standard Multi-Head Self Attention |
| **biases** | none |
| **block type** | sequential |
| **activation** | SiLU |
| **sequence length** | 1024000 |
| **weight dtype** | bfloat16 |
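These values can be cross-checked against the published model configuration; a quick sketch, using the standard `transformers` config attributes for Mistral-family models:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI")

# Standard config attributes corresponding to the table above.
print(config.hidden_size)              # d_model
print(config.num_attention_heads)      # num heads
print(config.num_hidden_layers)        # num layers
print(config.max_position_embeddings)  # sequence length
print(config.torch_dtype)              # weight dtype
```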
### Hyperparameters
| Parameter | GRAG-NEMO-SFT |
|---------------------------|--------------------|
| **warmup steps** | 50 |
| **peak LR** | 5.0E-07 |
| **weight decay** | 0.1 |
| **LR schedule** | linear |
| **gradient reduce dtype** | FP32 |
| **optimizer state dtype** | FP32 |
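As a minimal sketch, these hyperparameters map onto the standard `transformers` `TrainingArguments` roughly as follows; batch size and other unlisted settings are placeholders, and the FP32 gradient-reduce and optimizer-state dtypes are typically configured in the distributed backend rather than here:

```python
from transformers import TrainingArguments

# Values from the table above; everything else is a placeholder.
training_args = TrainingArguments(
    output_dir="grag-nemo-sft",     # placeholder path
    warmup_steps=50,                # warmup steps
    learning_rate=5.0e-7,           # peak LR
    weight_decay=0.1,               # weight decay
    lr_scheduler_type="linear",     # LR schedule
    bf16=True,                      # bfloat16 weights
    per_device_train_batch_size=1,  # placeholder
)
```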
## Environmental Impact
GRAG-NEMO-SFT was trained on 40 NVIDIA A100 GPUs for 9 days; its approximate power consumption is listed below.
Note that actual power consumption may vary depending on the specific workload and operational conditions; for accurate measurements, dedicated power monitoring tools are recommended.
| Model | GPU Type | Power Consumption From GPUs |
|----------------|---------------------|-----------------------------|
| GRAG-NEMO-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.02592 MWh |
## Bias, Risks, and Limitations
Like any base or fine-tuned language model without safety filtering, these models can easily be prompted to generate harmful or otherwise sensitive content.
Such content can also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks of applying this technology.
Furthermore, outputs from GRAG-NEMO-SFT, as from any LLM, are often factually unreliable and should be verified before use.
## Model Card Contact
For errors in this model card, please contact [[email protected]](mailto:[email protected]).
## The GRAG AI Team
- [Marcel Rosiak](https://de.linkedin.com/in/marcel-rosiak)
- [Soumya Paul](https://de.linkedin.com/in/soumya-paul-1636a68a)
- [Siavash Mollaebrahim](https://de.linkedin.com/in/siavash-mollaebrahim-4084b5153?trk=people-guest_people_search-card)
- [Zain ul Haq](https://de.linkedin.com/in/zain-ul-haq-31ba35196)