jncraton committed
Commit 5223a60 · verified · 1 Parent(s): d758d6c

Upload folder using huggingface_hub

Files changed (7)
  1. README.md +144 -0
  2. config.json +7 -0
  3. model.bin +3 -0
  4. special_tokens_map.json +23 -0
  5. tokenizer.json +0 -0
  6. tokenizer_config.json +41 -0
  7. vocabulary.json +0 -0
README.md ADDED
@@ -0,0 +1,144 @@
+ ---
+ license: mit
+ tags:
+ - nlp
+ - math
+ language:
+ - en
+ pipeline_tag: text-generation
+ ---
+
+
+ <h1 align="center">
+ Rho-1: Not All Tokens Are What You Need
+ </h1>
+
+
+ <p align="center">
+ <a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
+ <a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
+ <a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
+ <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
+ </p>
+
+ <p align="center">
+ <img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
+ <br>
+ <em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
+ </p>
+
+
+ ## 🔥 News
+
+ - [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
+ - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
+ - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
+ - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
+ - [2024/04/11] Rho-1 paper and repo released.
+
+
+
+ ## 💡 Introduction
+
+ Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
+
+
+ ### Selective Language Modeling (SLM)
+
+ <p align="center">
+ <img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
+ <br>
+ <em>Figure 2:
+ <b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
+ <b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
+ <b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss to useful and clean tokens.</em>
+ </p>
+
+ <p align="center">
+ <img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
+ <br>
+ <em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
+ SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
+ It involves three steps:
+ (Step 1) Initially, train a reference model on high-quality data.
+ (Step 2) Then, score each token's loss in a corpus using the reference model.
+ (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
+ </p>
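+
+ A rough PyTorch sketch of the token-selection idea above (not the official implementation; the function name `selective_lm_loss`, the `keep_ratio` argument, and the two Hugging Face-style causal LMs are illustrative assumptions):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def selective_lm_loss(train_model, ref_model, input_ids, keep_ratio=0.6):
+     """Sketch of SLM: train only on tokens whose loss most exceeds the reference loss.
+     (Step 1, training ref_model on high-quality data, is assumed to be done already.)"""
+     labels = input_ids[:, 1:]                        # next-token targets
+
+     def token_losses(model):
+         logits = model(input_ids).logits[:, :-1]     # align predictions with targets
+         return F.cross_entropy(
+             logits.reshape(-1, logits.size(-1)), labels.reshape(-1),
+             reduction="none",
+         ).view(labels.shape)
+
+     with torch.no_grad():
+         ref_loss = token_losses(ref_model)           # Step 2: score each token with the reference model
+     train_loss = token_losses(train_model)
+
+     excess = train_loss.detach() - ref_loss          # Step 3: rank tokens by excess loss
+     k = max(1, int(keep_ratio * excess.numel()))
+     threshold = excess.flatten().topk(k).values.min()
+     mask = (excess >= threshold).float()             # keep only the selected tokens in the loss
+     return (train_loss * mask).sum() / mask.sum()
+ ```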
+ <!-- results: -->
+
+ ### Evaluation Results
+
+ Base models (Few-shot CoT):
+
+ | **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
+ |:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
+ | 1-2B Base Models | | | | | | | | |
+ | Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
+ | Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
+ | DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
+ | [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
+ | >= 7B Base Models | | | | | | | | |
+ | Mistral | 7B | - | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
+ | Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
+ | LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
+ | InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
+ | DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
+ | [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
+
+
+ [Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
+
+ | **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
+ |------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
+ | gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
+ | gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
+ | Open-Source Small Models | | | | | | | | | | |
+ | MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
+ | ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
+ | ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
+ | DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
+ | [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
+ | [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
+
+
+ ## 🚀 Quick Start
+
+
+ ### Evaluation
+
+ ```sh
+ git clone git@github.com:microsoft/rho.git
+ cd rho-1/math-evaluation-harness
+ ```
+
+ Base model few-shot evaluation:
+
+ ```sh
+ bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
+ ```
+
+ SFT model (code-interpreter) evaluation:
+
+ ```sh
+ bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
+ ```
+
+ Our reproduced outputs are provided in `rho-1/outputs.zip`.
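+
+ For quick ad-hoc generation outside the evaluation harness, a minimal 🤗 Transformers sketch (assuming the standard `AutoModelForCausalLM`/`AutoTokenizer` loading path; the prompt format below is only an illustration, not the official few-shot template):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "microsoft/rho-math-1b-v0.1"   # or the 7B / interpreter variants
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ prompt = "Question: What is 15% of 240?\nAnswer:"   # illustrative prompt
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```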
+
+
+
+ ## ☕️ Citation
+
+ If you find this repository helpful, please consider citing our paper:
+
+ ```
+ @misc{lin2024rho1,
+ title={Rho-1: Not All Tokens Are What You Need},
+ author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
+ year={2024},
+ eprint={2404.07965},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "bos_token": "<s>",
+ "eos_token": "</s>",
+ "layer_norm_epsilon": 1e-05,
+ "multi_query_attention": true,
+ "unk_token": "<unk>"
+ }
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6eb737a1e04d6a2c9fa2d02bd8f4eda29461fdec2b2fa8bdd01428482175caac
+ size 1102178826
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,41 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": false,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff