---
license: mit
library_name: transformers
datasets:
- AI-MO/NuminaMath-CoT
- KbsdJames/Omni-MATH
- RUC-AIBOX/STILL-3-Preview-RL-Data
- hendrycks/competition_math
language:
- en
base_model: agentica-org/DeepScaleR-1.5B-Preview
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF
This model was converted to GGUF format from [`agentica-org/DeepScaleR-1.5B-Preview`](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) for more details on the model.

---
DeepScaleR-1.5B-Preview is a language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 43.1% Pass@1 accuracy on AIME 2024, a 14.3-point improvement over the base model (28.8%), surpassing OpenAI's o1-preview performance with just 1.5B parameters.
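
Pass@1 here is the expected accuracy of a single sampled solution per problem; when k samples are drawn per problem, it is typically estimated as the mean per-sample correctness. A minimal, hypothetical Python sketch of that estimate (not the evaluation harness behind the numbers above):

```python
# Estimate Pass@1 from k sampled attempts per problem (illustrative sketch).
# `results[i][j]` is True if sample j for problem i was graded correct.
def pass_at_1(results: list[list[bool]]) -> float:
    # Average per-sample accuracy within each problem, then across problems.
    per_problem = [sum(samples) / len(samples) for samples in results]
    return sum(per_problem) / len(per_problem)

# Example: 2 problems, 4 samples each -> (0.5 + 0.25) / 2 = 0.375
print(pass_at_1([[True, False, True, False], [False, False, True, False]]))
```
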
## Data

Our training dataset consists of approximately 40,000 unique problem-answer pairs compiled from the following sources (a loading sketch follows the list):


- AIME problems (1984-2023)
- AMC problems (prior to 2023)
- Omni-MATH dataset
- STILL dataset
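
These sources are also listed in this card's metadata. A minimal sketch of pulling them with the Hugging Face `datasets` library (the `train` split is an assumption; some datasets may need extra arguments such as a config name):

```python
# Sketch: load the source datasets listed in this card's metadata.
# NOTE: split and field names are assumptions; check each dataset's card.
from datasets import load_dataset

sources = [
    "AI-MO/NuminaMath-CoT",
    "KbsdJames/Omni-MATH",
    "RUC-AIBOX/STILL-3-Preview-RL-Data",
    "hendrycks/competition_math",
]

for name in sources:
    ds = load_dataset(name, split="train")  # assumed split
    print(name, len(ds))
```
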

## Training Recipe

We employ DeepSeek's Group Relative Policy Optimization (GRPO), a simplified RL algorithm that extends PPO by:

- Normalizing the advantage estimate over all samples generated from the same prompt (a minimal sketch follows this list).
- Applying KL-divergence regularization on top of PPO's surrogate loss to prevent significant policy drift.
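
Since the group-relative advantage is the heart of GRPO, it is small enough to sketch. This is an illustrative Python sketch under stated assumptions (one scalar reward per sampled completion, all from the same prompt), not the authors' training code:

```python
import math

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each sample's reward by the mean/std of all samples
    drawn from the same prompt (the 'group' in GRPO)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) + 1e-8  # epsilon guards against zero variance
    return [(r - mean) / std for r in rewards]

# 8 samples for one prompt under the 0/1 reward described below:
print(group_relative_advantages([1, 0, 0, 1, 0, 0, 0, 0]))
```
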


**Reward Function**: Our reward function is simple but effective:

- 1 for correct answers that pass LaTeX/Sympy checks (sketched below)
- 0 for incorrect or improperly formatted answers
- Note: no partial rewards (such as PRMs) or intermediate feedback are used.
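
A binary check of this shape can be approximated with Sympy's LaTeX parser. This is a hedged illustration, not the grader used in training (which presumably handles far more answer formats); note that `parse_latex` also requires the `antlr4-python3-runtime` package:

```python
# Sketch: 0/1 reward from a Sympy equivalence check on LaTeX answers.
# Assumption: each answer is a single parseable LaTeX expression; the
# grader used for training is more robust than this.
from sympy import simplify
from sympy.parsing.latex import parse_latex

def reward(model_answer: str, reference: str) -> int:
    try:
        diff = simplify(parse_latex(model_answer) - parse_latex(reference))
        return 1 if diff == 0 else 0
    except Exception:
        return 0  # malformed or improperly formatted answers score 0

print(reward(r"\frac{1}{2}", r"0.5"))  # prints 1: equivalent values
```
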


**Iterative Context Lengthening**: A key challenge in scaling RL for reasoning is compute cost. Our approach trains with progressively longer contexts as the model improves, saving both money and end-to-end training time:

- Initial 8K context (steps 0-1040):
  - 22.9% -> 33% Pass@1 on AIME 2024
  - Trained on 8 A100-80GB GPUs, batch size = (prompts) * (samples/prompt) = 128 * 8 = 1024
- Extended to 16K context (steps 1040-1520):
  - 33% -> 38% Pass@1 on AIME 2024
  - Trained on 32 A100-80GB GPUs, batch size = (prompts) * (samples/prompt) = 128 * 16 = 2048
- Further extended to 24K context (step 1520+):
  - 38% -> 43% Pass@1 on AIME 2024
  - Trained on 32 A100-80GB GPUs, batch size = (prompts) * (samples/prompt) = 128 * 16 = 2048
  - Significant improvements within fewer than 200 steps (the stage schedule is sketched as data below)
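
For reference, the schedule above written down as data; a minimal sketch in which the field names and the exact token counts behind "8K/16K/24K" are assumptions, not values from the training code:

```python
# Context-lengthening schedule from the recipe above.
# Assumptions: field names and exact context token counts are illustrative.
stages = [
    {"context": 8192,  "steps": (0, 1040),    "gpus": 8,  "prompts": 128, "samples_per_prompt": 8},
    {"context": 16384, "steps": (1040, 1520), "gpus": 32, "prompts": 128, "samples_per_prompt": 16},
    {"context": 24576, "steps": (1520, None), "gpus": 32, "prompts": 128, "samples_per_prompt": 16},
]

for s in stages:
    print(f"{s['context']} ctx -> batch size {s['prompts'] * s['samples_per_prompt']}")
```
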




A more detailed description of the training recipe can be found in our blog post.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -c 2048
```
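
Once the server is up, it can be queried over HTTP. A minimal sketch, assuming a recent llama-server build (which exposes an OpenAI-compatible `/v1/chat/completions` endpoint) listening on the default port 8080:

```python
# Sketch: query a local llama-server via its OpenAI-compatible endpoint.
# Assumptions: server running on the default port 8080 with this model loaded.
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "What is 7 * 8?"}],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```
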

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -c 2048
```