Upload 10 files
Browse files
- .gitattributes +1 -0
- LICENSE +21 -0
- README.md +114 -0
- config.json +3 -0
- generation_config.json +3 -0
- model.safetensors.index.json +3 -0
- output.safetensors +3 -0
- special_tokens_map.json +3 -0
- tokenizer.json +3 -0
- tokenizer_config.json +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Agentica

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md
ADDED
@@ -0,0 +1,114 @@
---
license: mit
library_name: transformers
datasets:
- AI-MO/NuminaMath-CoT
- KbsdJames/Omni-MATH
- RUC-AIBOX/STILL-3-Preview-RL-Data
- hendrycks/competition_math
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---

<div align="center">
<span style="font-family: default; font-size: 1.5em;">DeepScaleR-1.5B-Preview</span>
<div>
🚀 Democratizing Reinforcement Learning for LLMs 🌟
</div>
</div>
<br>
<div align="center" style="line-height: 1;">
  <a href="https://github.com/agentica-project/deepscaler" style="margin: 2px;">
    <img alt="Code" src="https://img.shields.io/badge/DeepScaleR-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2" target="_blank" style="margin: 2px;">
    <img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://x.com/Agentica_/status/1889006266661617779" style="margin: 2px;">
    <img alt="X.ai" src="https://img.shields.io/badge/Agentica-white?style=for-the-badge&logo=X&logoColor=000&color=000&labelColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/agentica-org" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/Agentica-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## DeepScaleR Overview
DeepScaleR-1.5B-Preview is a language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B using distributed reinforcement learning (RL) to scale to long context lengths. The model achieves 43.1% Pass@1 accuracy on AIME 2024, a roughly 15-point absolute improvement over the base model (28.8%), and surpasses OpenAI's O1-Preview with just 1.5B parameters.

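For a quick local test with the `transformers` library named in the metadata, here is a minimal, illustrative sketch; the repo id and sampling settings are assumptions on our part rather than values prescribed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentica-org/DeepScaleR-1.5B-Preview"  # assumed repo id for this upload
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer ships a chat template (see tokenizer_config.json), so we use it.
messages = [{"role": "user", "content": "Find the sum of all positive divisors of 36."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
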
## Data
Our training dataset consists of approximately 40,000 unique problem-answer pairs compiled from the following sources (a loading sketch follows the list):
- AIME problems (1984-2023)
- AMC problems (prior to 2023)
- Omni-MATH dataset
- Still dataset

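The component datasets listed in the metadata are available on the Hub; below is a minimal sketch of pulling two of them with the `datasets` library. Splits and fields are whatever each Hub repo provides and may differ from the curated ~40K training set:

```python
from datasets import load_dataset

# Dataset ids are taken from the model-card metadata above.
numina = load_dataset("AI-MO/NuminaMath-CoT")
omni = load_dataset("KbsdJames/Omni-MATH")
print(numina)
print(omni)
```
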
## Training Recipe
We employ DeepSeek's Group Relative Policy Optimization (GRPO), a simplified RL algorithm that extends PPO by:
- Normalizing the advantage over all samples generated from the same prompt (see the sketch after this list).
- Applying KL-divergence regularization on top of PPO's surrogate loss to prevent significant policy drift.

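A minimal sketch of the group-relative advantage computation, for intuition only; it is not the exact implementation in our Verl fork:

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantages for one prompt.

    `rewards` holds the scalar reward of each sample drawn from the same
    prompt; each sample's advantage is its reward normalized by the group
    mean and standard deviation, so no learned value function is needed.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 8 samples from one prompt, 3 of them correct (reward 1).
print(grpo_advantages(np.array([1, 0, 0, 1, 0, 0, 1, 0], dtype=float)))
```
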
**Reward Function**: Our reward function is simple but effective:
- 1 for correct answers that pass LaTeX/Sympy checks
- 0 for incorrect or improperly formatted answers
- Note: there are no partial rewards (e.g., from process reward models, PRMs) and no intermediate feedback. A minimal sketch of such a binary reward follows this list.

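The sketch below illustrates the shape of such a binary reward, assuming the final answer is wrapped in `\boxed{...}` and compared symbolically with SymPy; the actual checker in the deepscaler repo handles many more answer formats:

```python
from sympy import simplify
from sympy.parsing.latex import parse_latex  # needs antlr4-python3-runtime installed

def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in `text`, handling nested braces."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len(r"\boxed{"), 1, []
    while i < len(text) and depth:
        c = text[i]
        depth += (c == "{") - (c == "}")
        if depth:
            out.append(c)
        i += 1
    return "".join(out) if depth == 0 else None

def binary_reward(response: str, gold_latex: str) -> float:
    """1.0 if the boxed answer is symbolically equal to the gold answer, else 0.0."""
    pred_latex = extract_boxed(response)
    if pred_latex is None:
        return 0.0  # improperly formatted -> no reward, matching the recipe above
    try:
        return 1.0 if simplify(parse_latex(pred_latex) - parse_latex(gold_latex)) == 0 else 0.0
    except Exception:
        return 0.0

print(binary_reward(r"... so the answer is \boxed{\frac{1}{2}}.", r"\frac{1}{2}"))
```
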
**Iterative Context Lengthening**: A key challenge in scaling RL for reasoning is compute cost. Our approach trains models with progressively longer contexts as the model improves, which reduces both monetary cost and end-to-end training time (the schedule is summarized in the sketch after this list):
- Initial 8K context (steps 0-1040):
  - 22.9% -> 33% Pass@1 on AIME 2024
  - Trained on 8 A100-80GB GPUs, batch size = (prompts) x (samples/prompt) = 128 x 8 = 1024
- Extended to 16K context (steps 1040-1520):
  - 33% -> 43% Pass@1 on AIME 2024
  - Trained on 32 A100-80GB GPUs, batch size = (prompts) x (samples/prompt) = 128 x 16 = 2048
- Further extended to 24K context (step 1520+):
  - 38% -> 43% Pass@1 on AIME 2024
  - Trained on 32 A100-80GB GPUs, batch size = (prompts) x (samples/prompt) = 128 x 16 = 2048
  - Significant improvements within <200 steps

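For reference, the same schedule written out as data; this is illustrative only, with the step boundaries, GPU counts, and batch sizes copied from the list above:

```python
# (max_context_tokens, step_range, gpus, prompts_per_step, samples_per_prompt)
STAGES = [
    (8 * 1024,  (0, 1040),    8,  128,  8),
    (16 * 1024, (1040, 1520), 32, 128, 16),
    (24 * 1024, (1520, None), 32, 128, 16),
]
for ctx, steps, gpus, prompts, k in STAGES:
    print(f"{ctx}-token context, steps {steps}: {prompts * k} samples/step on {gpus} GPUs")
```
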
A more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2).

## Evaluation
We report Pass@1 accuracy averaged over 16 samples for each problem (a sketch of this metric follows the table).

| Model | AIME 2024 | MATH 500 | AMC 2023 | Minerva Math | OlympiadBench | Avg. |
|-------|-----------|----------|----------|--------------|---------------|------|
| 2.5-7B-Instruct | 13.3 | 79.8 | 50.6 | 34.6 | 40.7 | 43.8 |
| rStar-Math-7B | 26.7 | 78.4 | 47.5 | - | 47.1 | - |
| Eurus-2-7B-PRIME | 26.7 | 79.2 | 57.8 | 38.6 | 42.1 | 48.9 |
| Qwen2.5-7B-SimpleRL | 26.7 | 82.4 | 62.5 | <strong>39.7</strong> | 43.3 | 50.9 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| Still-1.5B | 32.5 | 84.4 | 66.7 | 29.0 | 45.4 | 51.6 |
| <strong>DeepScaleR-1.5B-Preview</strong> | <strong>43.1</strong> | <strong>87.8</strong> | <strong>73.6</strong> | 30.2 | <strong>50.0</strong> | <strong>57.0</strong> |
| O1-Preview | 40.0 | 81.4 | - | - | - | - |

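"Pass@1 averaged over 16 samples" is the per-problem fraction of correct samples, averaged over problems. A minimal sketch of the metric (the grading itself is the LaTeX/SymPy check described above):

```python
import numpy as np

def avg_pass_at_1(correct: np.ndarray) -> float:
    """`correct` has shape (num_problems, num_samples) with 0/1 entries,
    e.g. 16 samples per problem: take the per-problem accuracy, then
    average over problems."""
    return float(correct.mean(axis=1).mean())

# Toy example: 3 problems x 16 samples of 0/1 correctness labels.
rng = np.random.default_rng(0)
print(avg_pass_at_1(rng.integers(0, 2, size=(3, 16))))
```
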
## Serving DeepScaleR
Our model can be served using popular high-performance inference systems:
- vLLM
- Hugging Face Text Generation Inference (TGI)
- SGLang
- TensorRT-LLM

All of these systems support the OpenAI Chat Completions API format; a vLLM-based example is sketched below.

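As an illustration, here is a client-side call against a vLLM OpenAI-compatible endpoint. The server command, port, repo id, and sampling settings are assumptions made for this sketch, and the other listed systems expose the same API shape:

```python
# Start a server first, for example:
#   vllm serve agentica-org/DeepScaleR-1.5B-Preview --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="agentica-org/DeepScaleR-1.5B-Preview",
    messages=[{"role": "user", "content": "What is 7 * 8 + 12? Think step by step."}],
    temperature=0.6,   # illustrative sampling settings
    max_tokens=4096,   # leave room for long chains of thought
)
print(response.choices[0].message.content)
```
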
## License
This project is released under the MIT License, reflecting our commitment to open and accessible AI development.
We believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.
This permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.

## Acknowledgement
- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source RLHF library.
- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).

## Citation
```bibtex
@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}
```
config.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d5b7f26c5a01d9a5b0a1fc75d8f60fe9ef0d284636bdb4d4bd0a894da3fe0ce
size 1151
generation_config.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8885016cae80b4ceae316ab78ef62825b302a985b7f364270d47135b3b48e5ea
size 181
model.safetensors.index.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:182b576f521e239d797460755941a1e4c8ffb838a83b0a6aa7dc449f942498df
size 27751
output.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d9f87aefa9a72ffd58e80bc52909f8fc722e08aaaf8d7c7d4c4707041b21092
size 1307760356
special_tokens_map.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59cda48bbe8bab9d61ffb410e6e3c07b6d98bff73cee7c88ff8b51f95f21ab1c
size 485
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20ddafc659ba90242154b55275402edeca0715e5dbb30f56815a4ce081f4893
size 11422778
tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b869a935677e8f9dc3896cb69982de89843c1bbff27194eb83542c0e3f82babc
size 6754