avemio-digital committed on
Commit
48de833
verified
1 Parent(s): d8edc46

Update README.md

Files changed (1)
  1. README.md +64 -24
README.md CHANGED
@@ -3,6 +3,7 @@ license: mit
3
  datasets:
4
  - avemio/GRAG-CPT-HESSIAN-AI
5
  - avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
 
6
  language:
7
  - en
8
  - de
@@ -22,13 +23,13 @@ tags:
22
  <img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
23
 
24
 
25
- # Model Card for GRAG-NEMO-12B-SFT-HESSIAN-AI
26
 
27
  <!-- Provide a quick summary of what the model is/does. -->
28
 
29
  **GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.
30
 
31
- Our GRAG-LLAMA-SFT model are trained on this **[GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) dataset.**
32
 
33
  ## Model Details
34
 
@@ -133,37 +134,76 @@ Four evaluation metrics were employed across all subsets: language quality, over
133
  - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
134
 
135
 
136
- | Metric | [Vanilla-Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) | **[GRAG-NEMO-SFT](https://huggingface.co/avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI)** | [GRAG-NEMO-ORPO](https://huggingface.co/avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI) | [GRAG-NEMO-MERGED]() | GPT-3.5-TURBO |
137
  |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
138
- | Average Language Quality | 85.88 | **89.61** | 89.1 | | |
139
  | **OVERALL SCORES (weighted):** | | | | | |
140
- | extraction_recall | 35.2 | **52.3** | 48.8 | | |
141
- | qa_multiple_references | 65.3 | **71.0** | 74.0 | | |
142
- | qa_without_time_difference | 71.5 | **85.6** | 85.6 | | |
143
- | qa_with_time_difference | 65.3 | **87.9** | 85.4 | | |
144
- | reasoning | 69.4 | **71.5** | 73.4 | | |
145
- | relevant_context | 71.3 | **69.1** | 65.5 | | |
146
- | summarizations | 73.8 | **81.6** | 80.3 | | |
147
 
148
  ## Model Details
149
 
150
  ### Data
151
- For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.
152
 
153
- #### Description
154
- The SFT tasks represent a focused approach to enhance model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function calling dataset, which was derived and extended from Salesforce's XLAM Function calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by the paper from Tencent (["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094)), to generate a diverse set of reasoning tasks across various domains.
155
- This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.
156
 
157
- #### Task Instruction Format
158
- The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions often wrapped in XML tags that meta-define expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.
159
- The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.
160
- Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.
161
- The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
162
 
163
  ### Architecture
164
 
165
 
166
- | Parameter | GRAG-NEMO-SFT |
167
  |-----------------------|-----------------------------------------------------------------------------------------------|
168
  | **d_model** | 5120 |
169
  | **num heads** | 32 |
@@ -181,7 +221,7 @@ The implementation of these tasks within RAG systems can significantly improve o
181
  ### Hyperparameters
182
 
183
 
184
- | Parameter | GRAG-NEMO-SFT |
185
  |---------------------------|--------------------|
186
  | **warmup steps** | 50 |
187
  | **peak LR** | 5.0E-07 |
@@ -198,13 +238,13 @@ It's important to note that the actual power consumption may vary depending on t
198
 
199
  | Model | GPU Type | Power Consumption From GPUs |
200
  |----------------|---------------------|-----------------------------|
201
- | GRAG-NEMO-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh |
202
  ## Bias, Risks, and Limitations
203
 
204
  Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
205
  Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
206
 
207
- Otherwise, many facts from GRAG-NEMO-SFT or any LLM will often not be true, so they should be checked.
208
 
209
 
210
 
 
3
  datasets:
4
  - avemio/GRAG-CPT-HESSIAN-AI
5
  - avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
6
+ - avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI
7
  language:
8
  - en
9
  - de
 
23
  <img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
24
 
25
 
26
+ # Model Card for GRAG-NEMO-12B-ORPO-HESSIAN-AI
27
 
28
  <!-- Provide a quick summary of what the model is/does. -->
29
 
30
  **GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.
31
 
32
+ Our GRAG-MISTRAL-NEMO-ORPO model is trained on the **[GRAG-ORPO](https://huggingface.co/datasets/avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI)** dataset.
33
 
34
  ## Model Details
35
 
 
134
  - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
135
 
136
 
137
+ | Metric | [Vanilla-Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) | [GRAG-NEMO-SFT](https://huggingface.co/avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI) | **[GRAG-NEMO-ORPO](https://huggingface.co/avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI)** | [GRAG-NEMO-MERGED]() | GPT-3.5-TURBO |
138
  |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
139
+ | Average Language Quality | 85.88 | 89.61 | **89.1** | | |
140
  | **OVERALL SCORES (weighted):** | | | | | |
141
+ | extraction_recall | 35.2 | 52.3 | **48.8** | | |
142
+ | qa_multiple_references | 65.3 | 71.0 | **74.0** | | |
143
+ | qa_without_time_difference | 71.5 | 85.6 | **85.6** | | |
144
+ | qa_with_time_difference | 65.3 | 87.9 | **85.4** | | |
145
+ | reasoning | 69.4 | 71.5 | **73.4** | | |
146
+ | relevant_context | 71.3 | 69.1 | **65.5** | | |
147
+ | summarizations | 73.8 | 81.6 | **80.3** | | |
148
 
149
  ## Model Details
150
 
151
  ### Data
152
+ For training data details, please see the [GRAG-ORPO-Dataset](https://huggingface.co/datasets/avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI) documentation.
153
+
154
+ The ORPO Tasks Dataset represents a specialized collection for fine-tuning language models with a focus on RAG-specific capabilities.
155
+
156
+ The subsets for this training step are derived from three different sources:
157
+ - **SauerkrautLM Preference Datasets**:
158
+ - [SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO): a high-quality German instruction-response dataset designed for Preference Optimization training, consisting of 3,305 instruction-response pairs. Rather than being merged from existing German datasets, it was carefully created through a sophisticated augmentation process, transforming curated English instructions and responses into culturally adapted German content. Each pair includes comprehensive quality metrics and rejected responses for preference training.
159
+ - [SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO): a specialized dataset designed for training language models in function-calling irrelevance detection using Preference Optimization. It consists of 2,000 carefully evaluated instruction-response pairs, curated to help models recognize situations where function calls are unnecessary and direct responses are more appropriate.
160
+ - **Hard Reasoning DE & EN**: synthetic generation inspired by Tencent's paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094).
161
+ - **Multi-Turn-QA**: Developed by Avemio AG, this dataset builds upon and enhances the German Wikipedia dump provided by Cohere ([wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)), expanding it with synthetic examples and structured tasks to create a robust training resource.
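ORPO trains on preference pairs: each prompt is paired with a preferred (chosen) and a dispreferred (rejected) response. As a purely illustrative sketch (the field names below are assumptions, not the actual schema of the GRAG-ORPO-ShareGPT-HESSIAN-AI dataset), one such record and a minimal validation check might look like:

```python
# Hypothetical sketch of an ORPO preference record; the real
# GRAG-ORPO-ShareGPT-HESSIAN-AI schema may differ.
def is_valid_preference_record(record: dict) -> bool:
    """Check that a record carries a prompt plus distinct chosen/rejected responses."""
    has_prompt = bool(record.get("prompt", "").strip())
    chosen, rejected = record.get("chosen"), record.get("rejected")
    # ORPO needs both a preferred and a dispreferred completion, and they must differ.
    return has_prompt and bool(chosen) and bool(rejected) and chosen != rejected

example = {
    "prompt": "Fasse den folgenden Kontext in zwei Saetzen zusammen: ...",
    "chosen": "Der Kontext beschreibt ...",  # preferred response
    "rejected": "Ich weiss es nicht.",       # dispreferred response
}

print(is_valid_preference_record(example))  # True
```

The rejected responses mentioned in the SauerkrautLM subsets above fill exactly this `rejected` role during preference training.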
162
+
163
+ ### Data Subsets
164
+
165
+ | Subset | Examples per Task |
166
+ |-------|------------------|
167
+ | SauerkrautLM-Fermented-GER-DPO | 3.31k |
168
+ | SauerkrautLM-Fermented-Irrelevance-GER-DPO | 2k |
169
+ | hard-reasoning-de | 3.19k |
170
+ | hard-reasoning-en | 1.97k |
171
+ | multi-turn-qa | 3.2k |
172
+
173
+
174
+ ### Source Data: SauerkrautLM
175
+ [SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO)
176
+
177
+ [SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO)
178
+
179
+ ### Source Data: Hard-Reasoning DE & EN
180
+ - Base: ([proj-Persona/PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub))
181
+ - Enhancement: Synthetic data generation by Avemio AG
182
+ - Quality: Automatic validation and curation of examples by open-source LLMs
183
+
184
+ ### Methodology: Reasoning-DE & Reasoning-EN
185
+ - Providing persona descriptions and rewriting them in a similar style with a different focus area and name, in German or English
186
+ - Generating simple logical problems from persona-specific viewpoints and language
187
+ - Generating approaches, thinking steps, and solutions, each verified separately by Llama-3.1-405B-Instruct
188
+ - Quality assurance and validation
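The persona-driven generation loop above can be sketched as a simple prompt pipeline. This is illustrative only: the prompt wording, function name, and the `generate` callable are assumptions standing in for whatever LLM calls produced the real data.

```python
# Illustrative sketch of the persona-based reasoning-task pipeline.
# `generate` stands in for the LLM call used in the actual data generation.
def build_reasoning_task(persona: str, language: str, generate) -> dict:
    # Step 1: rewrite the persona with a different focus area and name.
    rewritten = generate(f"Rewrite this persona in {language} with a "
                         f"different focus area and name: {persona}")
    # Step 2: derive a simple logical problem from the persona's viewpoint.
    problem = generate(f"Write a simple logical problem from the viewpoint "
                       f"and language of: {rewritten}")
    # Step 3: produce approach, thinking steps, and solution; in the real
    # pipeline these were verified separately by Llama-3.1-405B-Instruct.
    solution = generate(f"Give the approach, thinking steps, and solution "
                        f"for: {problem}")
    return {"persona": rewritten, "problem": problem, "solution": solution}

# Toy stand-in generator so the sketch runs end to end.
task = build_reasoning_task("a Hessian beekeeper", "German",
                            lambda p: f"[generated for: {p[:30]}...]")
print(sorted(task))  # ['persona', 'problem', 'solution']
```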
189
+
190
+ ### Source Data: Multi-Turn-QA
191
+ - Base: ([cohere/wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings))
192
+ - Enhancement: Synthetic data generation by Avemio AG
193
+ - Quality: Automatic validation and curation of examples by open-source LLMs
194
+
195
+ ### Methodology: Multi-Turn-QA
196
+ 1. Extraction of base content from German Wikipedia
197
+ 2. Enhancement through synthetic example generation
198
+ 3. Structure addition for specific task types
199
+ 4. Quality assurance and validation
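The four steps above can be expressed as a minimal pipeline sketch. It is illustrative only: the function names and the toy filtering rule are assumptions, not the actual tooling used to build the dataset.

```python
# Illustrative sketch of the Multi-Turn-QA construction pipeline.
def extract(pages):
    """Step 1: pull base passages from the Wikipedia dump."""
    return [p["text"] for p in pages if p.get("text")]

def enhance(passages):
    """Step 2: pair each passage with a synthetic follow-up question."""
    return [{"context": t, "turns": [f"Frage zu: {t[:40]}"]} for t in passages]

def structure(examples):
    """Step 3: tag each example with its task type."""
    return [dict(e, task="multi-turn-qa") for e in examples]

def validate(examples):
    """Step 4: keep only examples with a context and at least one turn."""
    return [e for e in examples if e["context"] and e["turns"]]

pages = [{"text": "Die Wartburg ist eine Burg in Thueringen."}, {"text": ""}]
dataset = validate(structure(enhance(extract(pages))))
print(len(dataset))  # 1
```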
200

201

202
 
203
  ### Architecture
204
 
205
 
206
+ | Parameter | GRAG-NEMO-ORPO |
207
  |-----------------------|-----------------------------------------------------------------------------------------------|
208
  | **d_model** | 5120 |
209
  | **num heads** | 32 |
 
221
  ### Hyperparameters
222
 
223
 
224
+ | Parameter | GRAG-NEMO-ORPO |
225
  |---------------------------|--------------------|
226
  | **warmup steps** | 50 |
227
  | **peak LR** | 5.0E-07 |
 
238
 
239
  | Model | GPU Type | Power Consumption From GPUs |
240
  |----------------|---------------------|-----------------------------|
241
+ | GRAG-NEMO-ORPO | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh |
242
  ## Bias, Risks, and Limitations
243
 
244
  Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
245
  Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
246
 
247
+ As with any LLM, statements generated by GRAG-NEMO-ORPO may be factually incorrect, so outputs should be verified.
248
 
249
 
250