arXiv:2509.06949

Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models

Published on Sep 8
· Submitted by Lingaaaaaaa on Sep 9
#3 Paper of the day
Abstract

AI-generated summary

TraceRL enhances diffusion language models with trajectory-aware reinforcement learning, improving reasoning performance on complex tasks and enabling flexible sampling.

We propose TraceRL, a trajectory-aware reinforcement learning framework for diffusion language models (DLMs) that incorporates preferred inference trajectories into post-training and is applicable across different architectures. Equipped with a diffusion-based value model that enhances training stability, TraceRL yields improved reasoning performance on complex math and coding tasks. It can also be applied to adapt block-specific models to larger blocks, which improves sampling flexibility. Employing TraceRL, we derive a series of state-of-the-art diffusion language models, collectively named TraDo. Although smaller than 7B-scale AR models, TraDo-4B-Instruct still consistently outperforms them on complex math reasoning tasks. TraDo-8B-Instruct achieves relative accuracy improvements of 6.1% over Qwen2.5-7B-Instruct and 51.3% over Llama3.1-8B-Instruct on mathematical reasoning benchmarks. Through curriculum learning, we also derive the first long-CoT DLM, which outperforms Qwen2.5-7B-Instruct on MATH500 with an 18.1% relative accuracy gain. To facilitate reproducible research and practical applications, we release a comprehensive open-source framework for building, training, and deploying diffusion LLMs across diverse architectures. The framework integrates accelerated KV-cache techniques and inference engines for both inference and reinforcement learning, and includes implementations of various supervised fine-tuning and RL methods for mathematics, coding, and general tasks. Code and Models: https://github.com/Gen-Verse/dLLM-RL
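To make the "trajectory-aware" idea concrete, below is a minimal sketch of a policy-gradient update that credits each step of a masked-diffusion decoding trajectory and uses a value head as a baseline. This is not the TraceRL implementation from dLLM-RL; the toy model, fixed unmasking order, reward, and all names (`ToyDiffusionLM`, `trace_rl_step`) are illustrative assumptions.

```python
# Illustrative sketch only: trajectory-aware policy gradient for a toy masked-diffusion LM.
# Not the TraceRL / dLLM-RL implementation; model, reward, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, SEQ_LEN, STEPS = 32, 0, 8, 4

class ToyDiffusionLM(nn.Module):
    """Toy bidirectional denoiser: predicts a token distribution for every position."""
    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        self.body = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(d, VOCAB)
        self.value = nn.Linear(d, 1)   # value head used as a baseline (stand-in for a value model)

    def forward(self, tokens):
        h = self.body(self.emb(tokens))
        return self.head(h), self.value(h).mean(dim=1).squeeze(-1)

def rollout(model, batch=2):
    """Iteratively unmask positions, recording the decoding trajectory (the 'trace')."""
    x = torch.full((batch, SEQ_LEN), MASK_ID)
    trace = []  # list of (tokens_before_step, positions_decoded, sampled_tokens)
    per_step = SEQ_LEN // STEPS
    for s in range(STEPS):
        with torch.no_grad():
            logits, _ = model(x)
        pos = torch.arange(s * per_step, (s + 1) * per_step)  # fixed order for simplicity
        sampled = torch.distributions.Categorical(logits=logits[:, pos]).sample()
        trace.append((x.clone(), pos, sampled))
        x[:, pos] = sampled
    return x, trace

def trace_rl_step(model, opt, reward_fn):
    """One REINFORCE-style update that weights each trajectory step by (reward - value)."""
    x, trace = rollout(model)
    reward = reward_fn(x)  # e.g. 1.0 if the final answer is correct, per sequence
    loss = 0.0
    for tokens_before, pos, sampled in trace:
        logits, value = model(tokens_before)
        logp = F.log_softmax(logits[:, pos], dim=-1)
        logp_a = logp.gather(-1, sampled.unsqueeze(-1)).squeeze(-1).sum(-1)
        advantage = (reward - value).detach()   # baseline reduces gradient variance
        loss = loss - (advantage * logp_a).mean() + F.mse_loss(value, reward)
    opt.zero_grad(); loss.backward(); opt.step()
    return reward.mean().item()

if __name__ == "__main__":
    model = ToyDiffusionLM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy reward: fraction of positions decoded to token id 1.
    print(trace_rl_step(model, opt, lambda x: (x == 1).float().mean(dim=-1)))
```

In the actual framework, the decoding order would come from the model's own preferred inference trajectory (e.g. confidence-based unmasking within blocks) and the reward from verifiable math or code outcomes; this sketch only shows where per-step trajectory information enters the policy-gradient objective.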

Community

Paper submitter


Yo, the arXiv page appears to be broken. It reads:
“File unavailable for 2509.06949
HTML or source was not provided to generate HTML or a PDF.”


Please wait for the arXiv update (within one hour), thank you.


Models citing this paper 3

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 1