Text Generation
Safetensors
English
llama
samsja committed on
Commit 3b8d48b · verified · 1 Parent(s): d139ddf

Update README.md

Files changed (1)
  1. README.md +0 -53
README.md CHANGED
@@ -6,17 +6,6 @@ datasets:
  - PrimeIntellect/StackV1-popular
  - mlfoundations/dclm-baseline-1.0-parquet
  - open-web-math/open-web-math
- - MaziyarPanahi/open-perfectblend-fixed
- - mlabonne/orca-agentinstruct-1M-v1-cleaned
- - Post-training-Data-Flywheel/AutoIF-instruct-61k
- - Team-ACE/ToolACE
- - MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt
- - ServiceNow-AI/M2Lingual
- - AI-MO/NuminaMath-TIR
- - allenai/tulu-3-sft-personas-code
- - tulu-3-sft-personas-math
- - tulu-3-sft-personas-math-grade
- - tulu-3-sft-personas-algebra
  language:
  - en
  pipeline_tag: text-generation
@@ -87,48 +76,6 @@ print(pipe("What is prime intellect ?"))
  - **Optimizer**: Diloco/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
 
 
- ## Post-training
-
- The post-training has been handled by [arcee](https://huggingface.co/arcee-ai).
-
- We applied several post-training techniques to enhance INTELLECT-1's capabilities and task-specific performance. Our post-training methodology consisted of three main phases.
-
- First, we conducted an extensive series of 16 Supervised Fine-Tuning (SFT) trainings, with individual runs ranging from 1 to 3.3 billion tokens each. The most successful configuration used 2.4 billion training tokens over 3 epochs. We used [MergeKit](https://github.com/arcee-ai/mergekit), [EvolKit](https://github.com/arcee-ai/EvolKit), and [DistillKit](https://github.com/arcee-ai/DistillKit) from Arcee AI to combine the models, generate the datasets, and distill the logits, respectively. For training data, we used a diverse set of high-quality datasets:
-
- ## Post-training
-
- After completing the globally distributed pretraining phase, we applied several post-training techniques to enhance INTELLECT-1's capabilities and task-specific performance. Our post-training methodology consisted of three main phases.
-
- First, we conducted an extensive series of 16 Supervised Fine-Tuning (SFT) trainings, with individual runs ranging from 1 to 3.3 billion tokens each. The most successful configuration used 2.4 billion training tokens over 3 epochs. We used MergeKit, EvolKit, and DistillKit from Arcee AI to combine the models, generate the datasets, and distill the logits, respectively. For training data, we used a diverse set of high-quality datasets:
-
- 1. **New Datasets** (released with INTELLECT-1):
- - arcee-ai/EvolKit-75k (generated via EvolKit)
- - arcee-ai/Llama-405B-Logits
- - arcee-ai/The-Tomb
-
- 2. **Instruction Following**:
- - [mlabonne/open-perfectblend-fixed](https://huggingface.co/datasets/MaziyarPanahi/open-perfectblend-fixed) (generalist capabilities)
- - [microsoft/orca-agentinstruct-1M-v1-cleaned](https://huggingface.co/datasets/mlabonne/orca-agentinstruct-1M-v1-cleaned) (Chain-of-Thought)
- - [Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs](https://huggingface.co/datasets/Post-training-Data-Flywheel/AutoIF-instruct-61k)
-
- 3. **Domain-Specific**:
- - [Team-ACE/ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE) (function calling)
- - [Synthia coder](https://huggingface.co/datasets/MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt) (programming)
- - [ServiceNow-AI/M2Lingual](https://huggingface.co/datasets/ServiceNow-AI/M2Lingual) (multilingual)
- - [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) (mathematics)
-
- 4. **Tulu-3 Persona Datasets**:
- - [allenai/tulu-3-sft-personas-code](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-code)
- - [allenai/tulu-3-sft-personas-math](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math)
- - [allenai/tulu-3-sft-personas-math-grade](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math-grade)
- - [allenai/tulu-3-sft-personas-algebra](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-algebra)
-
- Second, we executed 8 distinct Direct Preference Optimization (DPO) runs with various combinations of datasets to enhance specific performance metrics and align the model with human preferences. A key advantage in our post-training process was INTELLECT-1's use of the Llama-3 tokenizer, which allowed us to utilize logits from Llama-3.1-405B to heal and maintain precision during the post-training process via DistillKit.
-
- Finally, we performed 16 strategic merges between candidate models using MergeKit to create superior combined models that leverage the strengths of different training runs. During the post-training phase, we observed that when using a ChatML template without an explicit BOS (begin-of-sequence) token, the initial loss was approximately 15. However, when switching to the Llama 3.1 chat template, the loss for these trainings started much lower at approximately 1.1, indicating better alignment with the underlying Llama 3 tokenizer.
-
- The combination of these post-training techniques resulted in significant improvements in various benchmarks, particularly in knowledge retrieval, grade school math, instruction following, and reasoning.
-
  **Performance on benchmarks**
 
 
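The optimizer line above refers to DiLoCo/LocalSGD: many local AdamW steps between infrequent synchronizations, with an outer Nesterov SGD step applied to the resulting weight deltas. The single-worker sketch below only illustrates that structure; the `make_outer_state` and `diloco_round` names, the hyperparameters, and the Hugging Face-style `model(input_ids=..., labels=...).loss` interface are assumptions, not the code used to train INTELLECT-1.

```python
import torch


def make_outer_state(model, outer_lr=0.7):
    """Keep a copy of the last globally synchronized weights together with the
    outer Nesterov-SGD optimizer that updates them once per round."""
    outer_params = [p.detach().clone() for p in model.parameters()]
    outer_opt = torch.optim.SGD(outer_params, lr=outer_lr, momentum=0.9, nesterov=True)
    return outer_params, outer_opt


def diloco_round(model, batches, outer_params, outer_opt,
                 inner_lr=4e-4, inner_steps=500):
    """One DiLoCo/LocalSGD round: many local AdamW steps, then a single outer
    step on the pseudo-gradient (synced weights minus locally updated weights)."""
    inner_opt = torch.optim.AdamW(model.parameters(), lr=inner_lr)
    for step, batch in enumerate(batches):
        if step == inner_steps:
            break
        # Assumes a Hugging Face-style causal LM that returns .loss
        loss = model(input_ids=batch, labels=batch).loss
        loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()

    # The weight delta since the last sync acts as the "gradient" for the
    # outer optimizer; in the multi-node run this delta is all-reduced first.
    for p_outer, p_local in zip(outer_params, model.parameters()):
        p_outer.grad = p_outer - p_local.detach()
    outer_opt.step()
    outer_opt.zero_grad()

    # Copy the updated global weights back into the local replica.
    with torch.no_grad():
        for p_outer, p_local in zip(outer_params, model.parameters()):
            p_local.copy_(p_outer)
```

A training loop would call `make_outer_state` once and `diloco_round` repeatedly; because the pseudo-gradients are exchanged only once per window of inner steps, communication stays far below that of per-step data parallelism.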
 
 
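The removed post-training section notes that INTELLECT-1's use of the Llama-3 tokenizer made it possible to reuse logits from Llama-3.1-405B via DistillKit. As a rough sketch of logit distillation in general (not DistillKit's specific recipe), the usual term is a temperature-softened KL divergence between teacher and student distributions:

```python
import torch.nn.functional as F


def logit_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Generic knowledge-distillation term on logits flattened to
    (num_tokens, vocab_size): KL(teacher || student) on temperature-softened
    distributions, scaled by T^2 to keep gradient magnitudes comparable."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```

This only works because both models share a vocabulary; with a different tokenizer the teacher and student distributions would not line up token by token, which is the point the removed text makes about the Llama-3 tokenizer.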
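The section also mentions eight Direct Preference Optimization (DPO) runs. For reference, the standard DPO objective on per-sequence log-probabilities is compact enough to write out; this is the textbook formulation, not necessarily the exact variant used in those runs:

```python
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    """Standard DPO loss: widen the chosen-vs-rejected log-prob margin of the
    policy relative to a frozen reference model."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    reference_margin = reference_chosen_logps - reference_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - reference_margin)).mean()
```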
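Finally, the removed text reports that post-training with a ChatML template and no explicit BOS token started at a loss of roughly 15, while the Llama 3.1 chat template started near 1.1. A quick way to see what a template actually prepends is to render it with `apply_chat_template`; the checkpoint name below is only illustrative:

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; any model built on the Llama-3 tokenizer behaves similarly.
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

messages = [{"role": "user", "content": "What is prime intellect ?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

print(prompt)  # does the rendered prompt begin with the BOS token?
print(tokenizer.bos_token, tokenizer.bos_token_id)
```

If the rendered prompt does not begin with the tokenizer's BOS token, fine-tuning starts from a token stream the model never saw during pretraining, which is consistent with the much higher initial loss reported for the ChatML setup.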