avemio-digital
committed on
Update README.md

README.md CHANGED
@@ -148,61 +148,24 @@ Four evaluation metrics were employed across all subsets: language quality, over
## Model Details

### Data

For training data details, please see the [GRAG-

- **Multi-Turn-QA**: Developed by Avemio AG, this dataset builds upon and enhances the German Wikipedia dump provided by Cohere ([wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)), expanding it with synthetic examples and structured tasks to create a robust training resource.

### Data Subsets

| Subset | Examples per Task |
|--------|-------------------|
| SauerkrautLM-Fermented-GER-DPO | 3.31k |
| SauerkrautLM-Fermented-Irrelevance-GER-DPO | 2k |
| hard-reasoning-de | 3.19k |
| hard-reasoning-en | 1.97k |
| multi-turn-qa | 3.2k |

### Source Data: SauerkrautLM

[SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO)

[SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO)

### Source Data: Hard-Reasoning DE & EN

- Base: [proj-Persona/PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub)
- Enhancement: synthetic data generation by Avemio AG
- Quality: automatic validation and curation of examples by open-source LLMs

### Methodology: Reasoning-DE & Reasoning-EN

- Providing persona descriptions and rewriting them in a similar style, with a different focus area and name, in German or English
- Generating simple logical problems from persona-specific views and language
- Generating approaches, thinking steps, and solutions, each verified separately by Llama-3.1-405B-Instruct
- Quality assurance and validation

### Source Data: Multi-Turn-QA

- Base: [cohere/wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)
- Enhancement: synthetic data generation by Avemio AG
- Quality: automatic validation and curation of examples by open-source LLMs

### Methodology: Multi-Turn-QA

1. Extraction of base content from German Wikipedia
2. Enhancement through synthetic example generation
3. Structure addition for specific task types
4. Quality assurance and validation

### Architecture

| Parameter | GRAG-NEMO- |
|-----------------------|------------|
| **d_model** | 5120 |
| **num heads** | 32 |
@@ -220,7 +183,7 @@ The subsets for this training step are derived from 3 different sources:
### Hyperparameters

| Parameter | GRAG-NEMO- |
|---------------------------|--------------------|
| **warmup steps** | 50 |
| **peak LR** | 5.0E-07 |
@@ -237,7 +200,7 @@ It's important to note that the actual power consumption may vary depending on t
| Model | GPU Type | Power Consumption From GPUs |
|----------------|---------------------|-----------------------------|
| GRAG-NEMO-

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
## Model Details

### Data

For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.

#### Description

The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph in which question-answer nodes were connected to both relevant and irrelevant context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function-call results and final-answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094) to generate a diverse set of reasoning tasks across various domains.
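The knowledge-graph structure described above can be sketched as a minimal data structure. The field names and sample texts below are illustrative assumptions, not the dataset's published schema:

```python
# Hypothetical sketch of the training knowledge graph: a question-answer
# node linked to both relevant and irrelevant context nodes taken from the
# same Wikipedia page. All field names and texts are illustrative only.
qa_node = {
    "question": "Wann wurde die Universitaet Heidelberg gegruendet?",
    "answer": "Die Universitaet Heidelberg wurde 1386 gegruendet.",
    "contexts": [
        {"id": 0, "text": "Die Ruprecht-Karls-Universitaet wurde 1386 gestiftet.", "relevant": True},
        {"id": 1, "text": "Das Heidelberger Schloss ist eine bekannte Ruine.", "relevant": False},
    ],
}

# A training example would require the model to answer using only the
# relevant context nodes while ignoring the irrelevant ones.
relevant_ids = [c["id"] for c in qa_node["contexts"] if c["relevant"]]
```

The mix of relevant and irrelevant neighbors is what makes the retrieval setting challenging: the model must learn to discriminate, not merely to copy.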

This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.

#### Task Instruction Format

The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions, often wrapped in XML tags, that meta-define expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.

The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.
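A context payload of the kind described might look as follows. The exact key names are not published in this card, so `id`, `text`, `timestamp`, and `metadata` are assumptions chosen to match the prose:

```python
import json

# Hypothetical context record in the standardized JSON structure described
# above: unique identifier, source text, timestamp, task-specific metadata.
# The key names are illustrative assumptions, not the documented schema.
context_record = {
    "id": 42,
    "text": "Berlin ist die Hauptstadt der Bundesrepublik Deutschland.",
    "timestamp": "2022-12-01T00:00:00Z",
    "metadata": {"task": "multi-turn-qa", "source": "wikipedia-22-12-de"},
}

# Serialized form, as it could be passed straight from a RAG retriever
# into the prompt without extra formatting steps.
payload = json.dumps(context_record, ensure_ascii=False)
```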

Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.
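The referencing scheme (numerical context indices combined with JSON-formatted citation markers) could be rendered along these lines. The marker syntax shown is an assumption for illustration, not the format actually used in the datasets:

```python
import json

# Illustrative rendering of an answer that cites context 0 through a
# JSON-formatted citation marker keyed by the numerical context index.
# The exact marker syntax is an assumption, not taken from the card.
def cite(context_id: int) -> str:
    return json.dumps({"cite": context_id})

answer = f"Berlin ist die Hauptstadt Deutschlands. {cite(0)}"
```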

The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
### Architecture

| Parameter | GRAG-NEMO-SFT |
|-----------------------|---------------|
| **d_model** | 5120 |
| **num heads** | 32 |
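For orientation, assuming the usual convention that the per-head dimension is `d_model / num_heads` (the card does not state it explicitly), the two values above imply:

```python
# Per-head dimension implied by the architecture table, under the common
# convention head_dim = d_model / num_heads (an assumption, not stated
# in the card itself).
d_model = 5120
num_heads = 32
head_dim = d_model // num_heads  # 5120 / 32 = 160
```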
### Hyperparameters

| Parameter | GRAG-NEMO-SFT |
|---------------------------|---------------|
| **warmup steps** | 50 |
| **peak LR** | 5.0E-07 |
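The warmup values above can be read as a schedule sketch. The card does not state the warmup shape or the post-warmup decay, so the linear ramp and flat tail below are assumptions for illustration:

```python
# Sketch of a learning-rate schedule consistent with the table above:
# linear warmup over 50 steps to a peak LR of 5.0e-07. The linear shape
# and the flat tail after warmup are assumptions; the card specifies
# only the warmup step count and the peak value.
def lr_at_step(step: int, warmup_steps: int = 50, peak_lr: float = 5.0e-07) -> float:
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr
```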

| Model | GPU Type | Power Consumption From GPUs |
|----------------|---------------------|-----------------------------|
| GRAG-NEMO-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh |
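As a rough sanity check not taken from the card: assuming an A100 SXM board power of about 0.4 kW, the 0.288 MWh figure corresponds to roughly 720 GPU-hours of full-power operation:

```python
# Back-of-envelope conversion of the energy figure in the table above.
# The 0.4 kW A100 board power is an assumed value, not from the card,
# and actual draw varies with utilization.
energy_mwh = 0.288
assumed_gpu_kw = 0.4
gpu_hours = energy_mwh * 1000 / assumed_gpu_kw  # roughly 720 GPU-hours
```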

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.