---
license: mit
library_name: transformers
datasets:
  - AI-MO/NuminaMath-CoT
  - KbsdJames/Omni-MATH
  - RUC-AIBOX/STILL-3-Preview-RL-Data
  - hendrycks/competition_math
language:
  - en
base_model: agentica-org/DeepScaleR-1.5B-Preview
tags:
  - llama-cpp
  - gguf-my-repo
---

# Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF

This model was converted to GGUF format from agentica-org/DeepScaleR-1.5B-Preview using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


DeepScaleR-1.5B-Preview is a language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 43.1% Pass@1 accuracy on AIME 2024, a 14.3-point improvement over the base model (28.8%), surpassing OpenAI's O1-Preview performance with just 1.5B parameters.

## Data

Our training dataset consists of approximately 40,000 unique problem-answer pairs compiled from:

- AIME problems (1984-2023)
- AMC problems (prior to 2023)
- Omni-MATH dataset
- Still dataset

## Training Recipe

We employ DeepSeek's Group Relative Policy Optimization (GRPO), a simplified RL algorithm that extends PPO by:

- Normalizing the advantage function over all samples generated from the same prompt (see the sketch after this list).
- Applying KL divergence regularization on top of PPO's surrogate loss to prevent significant policy drift.
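
As a rough illustration (not the actual training code), group-relative advantage normalization can be sketched as follows; `rewards` holds the scores of all samples drawn from a single prompt:

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize rewards across all samples generated from the same prompt.

    GRPO replaces PPO's learned value baseline with group statistics:
    each sample's advantage is its reward minus the group mean, scaled
    by the group standard deviation (eps guards against a zero std when
    every sample in the group gets the same reward).
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 8 samples for one prompt with binary rewards
print(group_relative_advantages(np.array([1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0])))
```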

Reward Function: Our reward function is simple but effective:

- 1 for correct answers passing LaTeX/Sympy checks
- 0 for incorrect or improperly formatted answers

Note: no partial rewards (such as PRMs) or intermediate feedback are used.
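
A minimal sketch of what such a binary reward might look like (illustrative only; the exact checker is not published in this card). It uses SymPy's LaTeX parser, which requires the `antlr4-python3-runtime` package:

```python
from sympy import simplify
from sympy.parsing.latex import parse_latex

def reward(model_answer: str, reference_answer: str) -> float:
    """Return 1.0 if the two LaTeX answers are symbolically equal, else 0.0.

    No partial credit: malformed LaTeX and wrong values both score 0.
    """
    try:
        diff = simplify(parse_latex(model_answer) - parse_latex(reference_answer))
        return 1.0 if diff == 0 else 0.0
    except Exception:
        return 0.0

print(reward(r"\frac{1}{2}", r"0.5"))  # 1.0
print(reward(r"\frac{1}{3}", r"0.5"))  # 0.0
```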

Iterative Context Lengthening: A key challenge in scaling RL for reasoning is compute cost. Our approach trains models with progressively longer contexts as the model improves, saving both monetary cost and end-to-end training time (a schematic of the schedule follows the list):

- Initial 8K context (steps 0-1040): 22.9% -> 33% Pass@1 on AIME 2024. Trained on 8 A100-80GB GPUs; batch size = (prompts) * (samples/prompt) = 128 * 8 = 1024.
- Extended to 16K (steps 1040-1520): 33% -> 43% Pass@1 on AIME 2024. Trained on 32 A100-80GB GPUs; batch size = (prompts) * (samples/prompt) = 128 * 16 = 2048.
- Further extended to 24K (step 1520+): 38% -> 43% Pass@1 on AIME 2024. Trained on 32 A100-80GB GPUs; batch size = (prompts) * (samples/prompt) = 128 * 16 = 2048. Significant improvements within <200 steps.
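
Purely to make the schedule concrete, here is a hypothetical sketch of the staged context-window configuration described above (names and structure are illustrative, not the actual training config):

```python
# Hypothetical staged-training schedule mirroring the numbers above.
CONTEXT_SCHEDULE = [
    # (start_step, end_step, max_context_tokens, num_gpus, prompts, samples_per_prompt)
    (0,    1040, 8_192,  8,  128, 8),   # batch = 128 * 8  = 1024
    (1040, 1520, 16_384, 32, 128, 16),  # batch = 128 * 16 = 2048
    (1520, None, 24_576, 32, 128, 16),  # batch = 128 * 16 = 2048
]

def context_for_step(step: int) -> int:
    """Look up the max context length in effect at a given training step."""
    for start, end, ctx, *_ in CONTEXT_SCHEDULE:
        if step >= start and (end is None or step < end):
            return ctx
    raise ValueError(f"step {step} not covered by schedule")

print(context_for_step(500))   # 8192
print(context_for_step(1600))  # 24576
```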

A more detailed description of the training recipe can be found in our blog post.


## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -c 2048
```
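
Once `llama-server` is running (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint. A minimal Python example, assuming the default host and port:

```python
import requests

# llama-server exposes an OpenAI-compatible API; adjust host/port if you
# launched the server with different settings.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is 12 * 13?"}],
        "max_tokens": 256,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```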

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/DeepScaleR-1.5B-Preview-Q5_K_S-GGUF --hf-file deepscaler-1.5b-preview-q5_k_s.gguf -c 2048
```