arxiv:2503.11576

SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion

Published on Mar 14
· Submitted by andito on Mar 17
#3 Paper of the day

Abstract

We introduce SmolDocling, an ultra-compact vision-language model targeting end-to-end document conversion. Our model comprehensively processes entire pages by generating DocTags, a new universal markup format that captures all page elements in their full context with location. Unlike existing approaches that rely on large foundational models, or ensemble solutions that rely on handcrafted pipelines of multiple specialized models, SmolDocling offers end-to-end conversion that accurately captures the content, structure, and spatial location of document elements in a 256M-parameter vision-language model. SmolDocling exhibits robust performance in correctly reproducing document features such as code listings, tables, equations, charts, lists, and more across a diverse range of document types including business documents, academic papers, technical reports, patents, and forms -- significantly extending beyond the commonly observed focus on scientific papers. Additionally, we contribute novel publicly sourced datasets for charts, tables, equations, and code recognition. Experimental results demonstrate that SmolDocling competes with other vision-language models up to 27 times larger in size, while substantially reducing computational requirements. The model is currently available; the datasets will be publicly available soon.
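To make the DocTags idea concrete, here is a minimal parsing sketch. The exact DocTags vocabulary and location-token format are not specified in this abstract, so the tag names and the `<loc_N>` bounding-box tokens below are illustrative assumptions, not the model's actual output grammar; the sketch only shows how a tag-plus-location markup can be turned back into structured page elements.

```python
import re
from dataclasses import dataclass

@dataclass
class PageElement:
    kind: str              # element type, e.g. "title", "text" (assumed names)
    bbox: tuple            # (x1, y1, x2, y2) in a page-grid coordinate system
    content: str           # the element's textual content

# Matches elements of the assumed form:
#   <text><loc_10><loc_20><loc_90><loc_40>Hello world</text>
TAG_RE = re.compile(
    r"<(?P<kind>\w+)>"
    r"<loc_(?P<x1>\d+)><loc_(?P<y1>\d+)><loc_(?P<x2>\d+)><loc_(?P<y2>\d+)>"
    r"(?P<content>.*?)"
    r"</(?P=kind)>",
    re.DOTALL,
)

def parse_doctags(doc: str) -> list[PageElement]:
    """Parse a DocTags-like string into a list of located page elements."""
    return [
        PageElement(
            kind=m["kind"],
            bbox=(int(m["x1"]), int(m["y1"]), int(m["x2"]), int(m["y2"])),
            content=m["content"].strip(),
        )
        for m in TAG_RE.finditer(doc)
    ]

# Hypothetical model output for a page with a title and one paragraph.
sample = (
    "<title><loc_12><loc_8><loc_88><loc_14>SmolDocling</title>"
    "<text><loc_12><loc_20><loc_88><loc_60>An ultra-compact VLM.</text>"
)
elements = parse_doctags(sample)
```

Because every element carries its own bounding box, downstream consumers can reconstruct reading order and layout from a single flat token stream, which is what lets one compact model replace a multi-stage pipeline.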

Community

Paper author Paper submitter

Happy to reply to any questions :)

·

Hello 🤗
The project sounds great. I'd like to ask: which languages will it support for documents?

Did you train your model from scratch or finetune an existing one?

Best

·
Paper author

We trained everything from scratch! But it was a long road:
SmolLM2 -> SmolVLM -> SmolDocling.
I was personally involved in all three projects but only "trained" SmolVLM.

Hello @taxiraph ,
We fine-tuned over Hugging Face's SmolVLM 256M.
https://huggingface.co/blog/smolervlm




Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 2

Collections including this paper 12