Update README.md

---
library_name: transformers
license: apache-2.0
datasets:
- DAMO-NLP-SG/Mistral-7B-LongPO-512K-tokenized
base_model:
- DAMO-NLP-SG/Mistral-7B-LongPO-128K
---

# LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization

This repo provides the checkpoint of Mistral-7B-LongPO-512K from our paper "LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization".

(Note that this is an experimental version (for rebuttal purposes) that may not have been fully tuned or trained on sufficient data to reach convergence.)
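
For reference, a minimal loading sketch using standard `transformers` APIs is below; the model id, dtype, and generation settings are illustrative assumptions, not settings prescribed by the paper:

```python
# Minimal usage sketch (assumptions: the model id matches this repo's EXP
# checkpoint and flash-attention-2 is installed; settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-SG/Mistral-7B-LongPO-512K-EXP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",                       # keep the checkpoint's native dtype
    device_map="auto",                        # shard across available GPUs
    attn_implementation="flash_attention_2",  # advisable for very long inputs
)

# Long-context QA style prompt: document first, question after.
messages = [{"role": "user", "content": "<very long document> ...\n\nQuestion: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```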

## Highlights of LongPO

- Self-evolving long-context alignment without annotations from humans or superior LLMs.
- Extends context length while preserving alignment, in a single stage.
- No degradation of short-context capabilities.

## Models and Training Data

| Models | Base Model | Training Data | # Data Samples |
| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | -------------- |
| [Mistral-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-128K) | Mistral-7B-Instruct-v0.2 | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-128K-tokenized) | 45K |
| [Qwen2.5-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K) | Qwen2.5-7B-Instruct | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K-tokenized) | 32K |
| [Mistral-7B-LongPO-256K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-256K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-256K-tokenized) | 16K |
| [Mistral-7B-LongPO-512K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-512K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-512K-tokenized) | 2.5K |

\* indicates an experimental version (for rebuttal purposes) that may not have been fully tuned or trained on sufficient data to reach convergence.

## Training Process

1. Prompt a short-context instruct LLM (e.g., Mistral-7B-Instruct-v0.2) to self-generate short-to-long preference data, as illustrated in [data_prepare](data_prepare/readme.md).
2. Replace the (Flash) Attention module with Ulysses (Flash) Attention via a monkey patch to enable sequence parallelism (see the sketch after the training script).
3. Train with our custom LongPO trainer, `LongPOMTLMUlyssesTrainer`.
4. Run the training script (using Mistral-7B-Instruct-v0.2 as an example):

```bash
export training_length=131072
export gradient_accumulation_steps=8
export batch_size=1

accelerate launch \
  --config_file playground/accelerate_single_node_zero3.yaml \
  train/train_longpo.py \
  --model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
  --ref_model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
  --data_path /path/to/data \
  --bf16 True \
  --run_name mistral_longpo \
  --report_to wandb \
  --output_dir path/to/save \
  --num_train_epochs 1 \
  --per_device_train_batch_size $batch_size \
  --gradient_accumulation_steps $gradient_accumulation_steps \
  --save_strategy "steps" \
  --save_steps 500 \
  --evaluation_strategy "no" \
  --learning_rate 5e-7 \
  --weight_decay 0. \
  --warmup_ratio 0.1 \
  --lr_scheduler_type "cosine" \
  --optim "rmsprop" \
  --logging_steps 1 \
  --tf32 True \
  --model_max_length $training_length \
  --gradient_checkpointing True \
  --do_train True \
  --do_eval False \
  --do_predict False \
  --seed 42 \
  --use_sequence_parallel True \
  --dpo_beta 0.01 \
  --dpo_lambda 0.01 \
  --rope_theta 10000000
```
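
The `--rope_theta 10000000` flag raises the RoPE base frequency so the positional encoding covers the 131,072-token training length. As for the monkey patch in step 2, the snippet below sketches only the patching mechanism itself; `ulysses_flash_attn_forward` is a hypothetical placeholder, not the repo's actual sequence-parallel implementation:

```python
# Sketch of the monkey-patch mechanism from step 2 (illustrative only:
# `ulysses_flash_attn_forward` stands in for a real Ulysses-style
# sequence-parallel attention forward).
import transformers.models.mistral.modeling_mistral as mistral_modeling

_original_forward = mistral_modeling.MistralAttention.forward

def ulysses_flash_attn_forward(self, *args, **kwargs):
    # A real Ulysses forward would all-to-all-exchange attention heads and
    # sequence shards across ranks before attention, then reverse the exchange
    # afterwards; here we simply delegate so the sketch stays runnable.
    return _original_forward(self, *args, **kwargs)

# After this assignment, every MistralAttention instance (including those in
# an already-constructed model) routes through the patched forward.
mistral_modeling.MistralAttention.forward = ulysses_flash_attn_forward
```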

## Evaluation

### InfiniteBench

| Model | Train/Claimed Length | En.Sum | En.QA | En.MC | AVG. |
| ---------------- | -------------------- | ------ | ------ | ------ | ------ |
| GPT-4-128K | 128K | 14.73 | 22.44 | 67.25 | 34.81 |
| Qwen2-72B | 128K | 24.32ᵇ | 7.03ᵇ | 72.05ᵇ | 34.47ᵇ |
| LLaMA 3.1-70B | 128K | 33.55ᵇ | 36.08ᵇ | 69.00ᵇ | 46.21ᵇ |
| LLaMA 3.1-8B | 128K | 28.06ᵇ | 30.47ᵇ | 58.08ᵇ | 38.87ᵇ |
| GLM-4-9B | 128K | 14.84ᵇ | 9.51ᵇ | 67.25ᵇ | 30.53ᵇ |
| GLM-4-9B-1M | 1M | 28.3 | 9.7 | 68.6 | 35.53 |
| LWM-7B-1M | 1M | 4.33ᵇ | 0.0ᵇ | 3.06ᵇ | 2.46ᵇ |
| YaRN-Mistral-7B | 128K | 9.09 | 9.55 | 27.95 | 15.53 |
| Mistral-7B | 32K | 22.13 | 4.93 | 14.41 | 13.82 |
| - SFT | 128K | 23.44 | 13.45 | 53.21 | 30.03 |
| - DPO | 128K | 15.21 | 10.34 | 48.14 | 25.56 |
| - LongPO (iter1) | 128K | 27.05 | 23.51 | 67.25 | 39.27 |
| - LongPO (iter2) | 256K | 28.16 | 24.43 | 66.35 | 39.65 |
| - LongPO (iter3) | 512K | 29.10 | 27.85 | 66.67 | 41.21 |
| Qwen2.5-7B | 128K | 22.89 | 6.08 | 52.4 | 27.12 |
| - LongPO (iter1) | 128K | 32.06 | 17.32 | 72.05 | 40.48 |

- Our results are evaluated with greedy decoding.
- Baseline results marked with ᵇ were evaluated by us; unmarked baseline results are taken from their official reports.

### RULER

| Model | NIAH | VT | AGG | QA | AVG (13 tasks) |
| ------------------------ | ----- | ----- | ----- | ----- | -------------- |
| Qwen2.5-7B-Instruct | 82.10 | 80.09 | 74.50 | 54.30 | 76.50 |
| Qwen2.5-7B-LongPO-128K | 95.82 | 89.71 | 78.67 | 59.40 | 87.11 |
| Mistral-7B-Instruct-v0.2 | 72.60 | 74.40 | 64.40 | 52.20 | 68.40 |
| Mistral-7B-LongPO-128K | 96.88 | 96.49 | 71.55 | 64.81 | 88.02 |
| Mistral-7B-LongPO-256K | 96.80 | 97.00 | 69.14 | 64.87 | 87.65 |
| Mistral-7B-LongPO-512K | 97.28 | 97.48 | 69.22 | 64.92 | 88.00 |

### Short Context

| Model | MMLU | ARC-C | HellaSwag | Winogrande | Avg. |
| -------------------------- | ----- | ----- | --------- | ---------- | ----- |
| Mistral-7B-Instruct-v0.2 | 59.15 | 59.26 | 83.20 | 78.40 | 70.00 |
| Mistral-7B-LongPO-128K | 59.99 | 59.34 | 82.99 | 78.53 | 70.21 |
| Mistral-7B-LongPO-256K-EXP | 59.47 | 60.28 | 83.14 | 78.14 | 70.26 |
| Mistral-7B-LongPO-512K-EXP | 59.51 | 60.58 | 82.87 | 77.66 | 70.16 |
| Qwen2.5-7B-Instruct | 74.28 | 67.15 | 81.41 | 74.66 | 74.38 |
| Qwen2.5-7B-LongPO-128K | 73.64 | 65.70 | 80.82 | 74.98 | 73.79 |