arxiv:2603.22458

MinerU-Diffusion: Rethinking Document OCR as Inverse Rendering via Diffusion Decoding

Published on Mar 23
· Submitted by taesiri on Mar 25
#1 Paper of the day
Abstract

MinerU-Diffusion is a diffusion-based framework that replaces autoregressive decoding with parallel diffusion denoising for document OCR, improving robustness and decoding speed.

AI-generated summary

Optical character recognition (OCR) has evolved from line-level transcription to structured document parsing, requiring models to recover long-form sequences containing layout, tables, and formulas. Despite recent advances in vision-language models, most existing systems rely on autoregressive decoding, which introduces sequential latency and amplifies error propagation in long documents. In this work, we revisit document OCR from an inverse rendering perspective, arguing that left-to-right causal generation is an artifact of serialization rather than an intrinsic property of the task. Motivated by this insight, we propose MinerU-Diffusion, a unified diffusion-based framework that replaces autoregressive sequential decoding with parallel diffusion denoising under visual conditioning. MinerU-Diffusion employs a block-wise diffusion decoder and an uncertainty-driven curriculum learning strategy to enable stable training and efficient long-sequence inference. Extensive experiments demonstrate that MinerU-Diffusion consistently improves robustness while achieving up to 3.2x faster decoding compared to autoregressive baselines. Evaluations on the proposed Semantic Shuffle benchmark further confirm its reduced dependence on linguistic priors and stronger visual OCR capability.
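The parallel-denoising idea in the summary can be illustrated with a toy, MaskGIT-style iterative decoder: every masked position is predicted in parallel on each pass, and only the least-confident predictions are re-masked for the next pass. This is a generic sketch of confidence-based parallel refinement, not the paper's actual decoder; `probs_fn`, the `MASK` sentinel, and the linear re-masking schedule are all illustrative assumptions.

```python
import numpy as np

def parallel_denoise(probs_fn, seq_len, n_steps=4):
    """Toy confidence-based parallel decoding loop.

    probs_fn(tokens) returns per-position probabilities of shape
    (seq_len, vocab); here it stands in for a visually conditioned
    denoiser. All masked positions are predicted at once, then the
    least-confident predictions are re-masked, so the sequence is
    refined in a few parallel passes rather than token by token.
    """
    MASK = -1
    tokens = np.full(seq_len, MASK)
    for step in range(n_steps):
        masked = tokens == MASK
        if not masked.any():
            break
        probs = probs_fn(tokens)           # one parallel forward pass
        pred = probs.argmax(axis=1)        # most likely token per position
        conf = probs.max(axis=1)           # its confidence
        # Commit every masked prediction in parallel ...
        tokens = np.where(masked, pred, tokens)
        # ... then re-mask the least-confident ones for the next pass.
        n_remask = int(masked.sum() * (1 - (step + 1) / n_steps))
        if n_remask > 0:
            order = [i for i in np.argsort(conf) if masked[i]]
            tokens[order[:n_remask]] = MASK
    return tokens
```

With a deterministic `probs_fn` that always favors a target sequence, the loop converges to that sequence within `n_steps` passes.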

Community

Paper submitter

Proposes MinerU-Diffusion, a diffusion-based inverse-rendering OCR that replaces autoregressive decoding with parallel denoising for long documents, improving robustness and speeding inference.

Could you please use this video as the cover?

The block-wise diffusion decoder, with bidirectional attention within each block and limited cross-block links, is the real trick here: it lets you parallelize long sequences without losing global layout cues. Anchoring blocks to a coarse autoregressive scaffold preserves structure while keeping compute growth near-linear in sequence length. I'd be curious how sensitive boundary performance is to block size, especially on irregular layouts like merged cells or multi-row headers, since boundary misalignment could become a bottleneck. A targeted ablation separating the contribution of boundary refinement from within-block denoising would help pin down where the gains come from. Btw, the arxivlens breakdown helped me parse the method details and gives a nice, accessible walkthrough: https://arxivlens.com/PaperView/Details/mineru-diffusion-rethinking-document-ocr-as-inverse-rendering-via-diffusion-decoding-4133-33d0c85c
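The attention pattern described above (bidirectional within a block, limited links to earlier blocks) can be sketched as a boolean mask; `block_size` and `anchor_links` are hypothetical parameters for illustration, not values from the paper.

```python
import numpy as np

def blockwise_mask(seq_len, block_size, anchor_links=1):
    """Build a boolean attention mask (True = may attend).

    - Tokens attend bidirectionally within their own block.
    - Each block additionally attends to up to `anchor_links`
      preceding blocks, a crude stand-in for the limited
      cross-block links anchored to a coarse scaffold.
    """
    block_id = np.arange(seq_len) // block_size
    q = block_id[:, None]   # query token's block index
    k = block_id[None, :]   # key token's block index
    within = q == k                                  # full bidirectional attention inside a block
    cross = (q - k >= 1) & (q - k <= anchor_links)   # limited links to earlier blocks only
    return within | cross
```

Because the mask is block-diagonal plus a thin band of earlier-block links, the number of attended positions per token stays bounded, which is what gives the near-linear compute growth discussed above.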


Models citing this paper: 1

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 8