arxiv:2601.04809

SCALER: Synthetic Scalable Adaptive Learning Environment for Reasoning

Published on Jan 8 · Submitted by Caijun Xu on Jan 15
Abstract

AI-generated summary: SCALER is a reinforcement learning framework that maintains effective training signals for language models through adaptive environment design and multi-environment strategies, enabling sustained performance improvements in reasoning tasks.

Reinforcement learning (RL) offers a principled way to enhance the reasoning capabilities of large language models, yet its effectiveness hinges on training signals that remain informative as models evolve. In practice, RL progress often slows when task difficulty becomes poorly aligned with model capability, or when training is dominated by a narrow set of recurring problem patterns. To jointly address these issues, we propose SCALER (Synthetic sCalable Adaptive Learning Environment for Reasoning), a framework that sustains effective learning signals through adaptive environment design. SCALER introduces a scalable synthesis pipeline that converts real-world programming problems into verifiable reasoning environments with controllable difficulty and unbounded instance generation, enabling RL training beyond finite datasets while preserving strong correctness guarantees. Building on this, SCALER further employs an adaptive multi-environment RL strategy that dynamically adjusts instance difficulty and curates the active set of environments to track the model's capability frontier and maintain distributional diversity. This co-adaptation prevents reward sparsity, mitigates overfitting to narrow task patterns, and supports sustained improvement throughout training. Extensive experiments show that SCALER consistently outperforms dataset-based RL baselines across diverse reasoning benchmarks and exhibits more stable, long-horizon training dynamics.

Community

Paper author · Paper submitter

Scalable Environment Synthesis
Given a programming problem (statement + reference solution), SCALER synthesizes a reasoning environment with the following properties (a toy sketch follows the list):

  • Verifiability: deterministic oracle / unit tests provide correctness signals.
  • Difficulty control: explicit scale parameters discretized into difficulty levels.
  • Unbounded instance generation: randomized testcase generation yields unlimited training instances.
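To make these three properties concrete, here is a minimal, hypothetical sketch of such an environment. The interface below is not from the paper: the class name `SortingEnv`, the level table, and the parameter ranges are illustrative assumptions, with a toy sorting task standing in for a real synthesized programming problem.

```python
import random

class SortingEnv:
    """Toy stand-in for a SCALER-style synthesized environment (illustrative only)."""

    # Assumed discretization: difficulty level -> range of instance sizes.
    LEVELS = {0: (3, 5), 1: (6, 10), 2: (11, 20), 3: (21, 50)}

    def __init__(self, difficulty: int = 0, seed: int | None = None):
        self.difficulty = difficulty
        self.rng = random.Random(seed)

    def sample(self) -> list[int]:
        """Unbounded instance generation: each call draws a fresh random test case."""
        lo, hi = self.LEVELS[self.difficulty]
        return [self.rng.randint(-100, 100) for _ in range(self.rng.randint(lo, hi))]

    def verify(self, instance: list[int], answer: list[int]) -> bool:
        """Verifiability: a deterministic oracle (the reference solution) checks answers."""
        return answer == sorted(instance)

env = SortingEnv(difficulty=1, seed=0)
instance = env.sample()
reward = 1.0 if env.verify(instance, sorted(instance)) else 0.0  # binary correctness signal
```

A real synthesized environment would wrap the problem statement and run the model's program against generated test cases, but the contract is the same: sample an instance at a requested difficulty, then score the model's answer with a deterministic check.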

Adaptive Multi-Environment RL

SCALER sustains learning signals at two levels, sketched in code after the list:

  • In-environment difficulty controller: keeps sampling near a target success regime.
  • Environment curation: maintains an active set of environments and replaces saturated or uninformative ones to preserve diversity and support long-horizon improvement.
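Continuing the toy sketch above, here is one plausible implementation of the two levels; none of it is the authors' code. The target success rate (0.5), the window size, and the saturation/starvation thresholds are assumed values chosen for illustration.

```python
from collections import deque

TARGET = 0.5          # assumed target success regime
SATURATED = 0.95      # assumed threshold: too easy even at max difficulty
UNINFORMATIVE = 0.05  # assumed threshold: too hard even at min difficulty

class DifficultyController:
    """In-environment level: nudge difficulty so the recent success rate
    stays near TARGET (a simple proportional rule, assumed for illustration)."""

    def __init__(self, env, window: int = 64):
        self.env = env                      # e.g., the SortingEnv sketched above
        self.history = deque(maxlen=window)

    def update(self, success: bool) -> None:
        self.history.append(success)
        rate = self.success_rate()
        if rate > TARGET + 0.1 and self.env.difficulty < max(self.env.LEVELS):
            self.env.difficulty += 1        # too easy: move up a level
        elif rate < TARGET - 0.1 and self.env.difficulty > 0:
            self.env.difficulty -= 1        # too hard: move down a level

    def success_rate(self) -> float:
        return sum(self.history) / max(len(self.history), 1)

def curate(controllers: list, reserve: list) -> None:
    """Environment level: replace environments that stay trivial or unsolvable
    even at their difficulty extremes with freshly synthesized ones."""
    for i, ctrl in enumerate(controllers):
        rate = ctrl.success_rate()
        stuck_easy = rate > SATURATED and ctrl.env.difficulty == max(ctrl.env.LEVELS)
        stuck_hard = rate < UNINFORMATIVE and ctrl.env.difficulty == 0
        if (stuck_easy or stuck_hard) and reserve:
            controllers[i] = DifficultyController(reserve.pop())
```

In a full training loop, update() would be called after every rollout with the binary verifier reward, and curate() periodically; together they keep sampling near the model's capability frontier while refreshing the active set.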

