---
pipeline_tag: image-text-to-text
library_name: monkeyocr
language:
- zh
- en
tags:
- OCR
---

# MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm

[![arXiv](https://img.shields.io/badge/Arxiv-MonkeyOCR-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2506.05218) [![HuggingFace](https://img.shields.io/badge/HuggingFace%20Weights-black.svg?logo=HuggingFace)](https://huggingface.co/echo840/MonkeyOCR) [![GitHub issues](https://img.shields.io/github/issues/Yuliang-Liu/MonkeyOCR?color=critical&label=Issues)](https://github.com/Yuliang-Liu/MonkeyOCR/issues?q=is%3Aopen+is%3Aissue) [![GitHub closed issues](https://img.shields.io/github/issues-closed/Yuliang-Liu/MonkeyOCR?color=success&label=Issues)](https://github.com/Yuliang-Liu/MonkeyOCR/issues?q=is%3Aissue+is%3Aclosed) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/Yuliang-Liu/MonkeyOCR/blob/main/LICENSE.txt) [![GitHub views](https://komarev.com/ghpvc/?username=Yuliang-Liu&repo=MonkeyOCR&color=brightgreen&label=Views)](https://github.com/Yuliang-Liu/MonkeyOCR)
> **MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm**
> Zhang Li, Yuliang Liu, Qiang Liu, Zhiyin Ma, Ziyang Zhang, Shuo Zhang, Zidun Guo, Jiarui Zhang, Xinyu Wang, Xiang Bai
[![arXiv](https://img.shields.io/badge/Arxiv-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2506.05218) [![Source_code](https://img.shields.io/badge/Code-Available-white)](README.md) [![Model Weight](https://img.shields.io/badge/HuggingFace-gray)](https://huggingface.co/echo840/MonkeyOCR) [![Model Weight](https://img.shields.io/badge/ModelScope-green)](https://modelscope.cn/models/l1731396519/MonkeyOCR) [![Public Courses](https://img.shields.io/badge/Openbayes-yellow)](https://openbayes.com/console/public/tutorials/91ESrGvEvBq) [![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlabmonkey.xyz:7685/)

## Introduction

MonkeyOCR adopts a Structure-Recognition-Relation (SRR) triplet paradigm, which simplifies the multi-tool pipeline of modular approaches while avoiding the inefficiency of using large multimodal models for full-page document processing. A minimal conceptual sketch of this three-stage flow follows the highlights below.

1. MonkeyOCR-pro-1.2B surpasses MonkeyOCR-3B by 7.4% on Chinese documents.
2. MonkeyOCR-pro-1.2B delivers approximately a 36% speed improvement over MonkeyOCR-pro-3B, with an approximately 1.6% drop in performance.
3. On olmOCR-Bench, MonkeyOCR-pro-1.2B outperforms Nanonets-OCR-3B by 7.3%.
4. On OmniDocBench, MonkeyOCR-pro-3B achieves the best overall performance on both English and Chinese documents, outperforming even closed-source and extra-large open-source VLMs such as Gemini 2.0-Flash, Gemini 2.5-Pro, Qwen2.5-VL-72B, GPT-4o, and InternVL3-78B. See the detailed results below.
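To make the SRR flow concrete, here is a purely illustrative Python sketch. The three stage functions below are dummy stand-ins, not MonkeyOCR's actual API; for real parsing, use `parse.py` as described in the Quick Start below.

```python
# Purely illustrative sketch of the Structure-Recognition-Relation (SRR)
# triplet paradigm. The three stages are dummy stand-ins, NOT the MonkeyOCR
# API; use parse.py (see Quick Start) for real document parsing.
from dataclasses import dataclass

@dataclass
class Block:
    kind: str          # "text" | "formula" | "table"
    bbox: tuple        # (x0, y0, x1, y1)
    content: str = ""

def detect_structure(page) -> list[Block]:
    # Stage 1 (Structure): a layout detector returns typed blocks with boxes.
    return [Block("text", (0, 0, 100, 20)), Block("table", (0, 30, 100, 80))]

def recognize(block: Block) -> Block:
    # Stage 2 (Recognition): an OCR/formula/table model fills in the content.
    block.content = f"<recognized {block.kind}>"
    return block

def order_blocks(blocks: list[Block]) -> list[Block]:
    # Stage 3 (Relation): reading-order prediction; here simply top-to-bottom.
    return sorted(blocks, key=lambda b: b.bbox[1])

def parse_page(page) -> str:
    # Assemble the final markdown from the ordered, recognized blocks.
    blocks = [recognize(b) for b in detect_structure(page)]
    return "\n\n".join(b.content for b in order_blocks(blocks))

print(parse_page(page=None))
```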
### Comparing MonkeyOCR with closed-source and extra-large open-source VLMs

![Comparison of MonkeyOCR with closed-source and extra-large open-source VLMs](EKhkhY.png)

## Inference Speed (Pages/s) on Different GPUs and [PDF](https://drive.google.com/drive/folders/1geumlJmVY7UUKdr8324sYZ0FHSAElh7m?usp=sharing) Page Counts

| Model | GPU | 50 Pages | 100 Pages | 300 Pages | 500 Pages | 1000 Pages |
|---|---|---|---|---|---|---|
| MonkeyOCR-pro-3B | 3090 | 0.492 | 0.484 | 0.497 | 0.492 | 0.496 |
| | A6000 | 0.585 | 0.587 | 0.609 | 0.598 | 0.608 |
| | H800 | 0.923 | 0.768 | 0.897 | 0.930 | 0.891 |
| | 4090 | 0.972 | 0.969 | 1.006 | 0.986 | 1.006 |
| MonkeyOCR-pro-1.2B | 3090 | 0.615 | 0.660 | 0.677 | 0.687 | 0.683 |
| | A6000 | 0.709 | 0.786 | 0.825 | 0.829 | 0.825 |
| | H800 | 0.965 | 1.082 | 1.101 | 1.145 | 1.015 |
| | 4090 | 1.194 | 1.314 | 1.436 | 1.442 | 1.434 |
## VLM OCR Speed (Pages/s) on Different GPUs and [PDF](https://drive.google.com/drive/folders/1geumlJmVY7UUKdr8324sYZ0FHSAElh7m?usp=sharing) Page Counts
| Model | GPU | 50 Pages | 100 Pages | 300 Pages | 500 Pages | 1000 Pages |
|---|---|---|---|---|---|---|
| MonkeyOCR-pro-3B | 3090 | 0.705 | 0.680 | 0.711 | 0.700 | 0.724 |
| | A6000 | 0.885 | 0.860 | 0.915 | 0.892 | 0.934 |
| | H800 | 1.371 | 1.135 | 1.339 | 1.433 | 1.509 |
| | 4090 | 1.321 | 1.300 | 1.384 | 1.343 | 1.410 |
| MonkeyOCR-pro-1.2B | 3090 | 0.919 | 1.086 | 1.166 | 1.182 | 1.199 |
| | A6000 | 1.177 | 1.361 | 1.506 | 1.525 | 1.569 |
| | H800 | 1.466 | 1.719 | 1.763 | 1.875 | 1.650 |
| | 4090 | 1.759 | 1.987 | 2.260 | 2.345 | 2.415 |
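To translate the throughput figures above into rough wall-clock estimates, the small helper below divides page count by pages/s; the numbers plugged in are taken from the 4090 rows of the end-to-end inference table. This is illustrative only, since real runs also include model loading and I/O overhead.

```python
# Rough wall-clock estimates derived from the pages/s tables above.
# Illustrative only: real runs also pay model-loading and I/O overhead.

def estimated_seconds(num_pages: int, pages_per_second: float) -> float:
    return num_pages / pages_per_second

# End-to-end inference table, 4090 rows, 1000-page PDF:
print(f"MonkeyOCR-pro-1.2B: {estimated_seconds(1000, 1.434):.0f} s")  # ~697 s (~11.6 min)
print(f"MonkeyOCR-pro-3B:   {estimated_seconds(1000, 1.006):.0f} s")  # ~994 s (~16.6 min)
```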
## Supported Hardware

Because only a limited range of GPU types is available to us, we cannot guarantee exact hardware requirements. We have tested the model on GPUs such as the 3090, 4090, A6000, H800, A100, and even a 4060 with 8 GB of VRAM (sufficient for deploying the quantized 3B and 1.2B models). We are very grateful for the feedback and contributions from the open-source community, who have also successfully run the model on [50-series GPUs](https://github.com/Yuliang-Liu/MonkeyOCR/issues/90), [H200](https://github.com/Yuliang-Liu/MonkeyOCR/issues/151), [L20](https://github.com/Yuliang-Liu/MonkeyOCR/issues/133), [V100](https://github.com/Yuliang-Liu/MonkeyOCR/issues/144), [2080 Ti](https://github.com/Yuliang-Liu/MonkeyOCR/pull/1), and [NPUs](https://github.com/Yuliang-Liu/MonkeyOCR/pull/226/files).

## News

* `2025.07.10` 🚀 We release [MonkeyOCR-pro-1.2B](https://huggingface.co/echo840/MonkeyOCR-pro-1.2B), a leaner and faster model that outperforms our previous 3B version in accuracy, speed, and efficiency.
* `2025.06.12` 🚀 Our model is trending on [Hugging Face](https://huggingface.co/models?sort=trending). Thanks for the love!
* `2025.06.05` 🚀 We release [MonkeyOCR](https://huggingface.co/echo840/MonkeyOCR), a document parsing model for English and Chinese.

# Quick Start

## Local Installation

### 1. Install MonkeyOCR

See the [installation guide](https://github.com/Yuliang-Liu/MonkeyOCR/blob/main/docs/install_cuda_pp.md#install-with-cuda-support) to set up your environment.

### 2. Download Model Weights

Download our model from Hugging Face (for a programmatic alternative, see the `huggingface_hub` sketch after the inference commands below):

```bash
pip install huggingface_hub
python tools/download_model.py -n MonkeyOCR-pro-3B  # or MonkeyOCR
```

You can also download our model from ModelScope:

```bash
pip install modelscope
python tools/download_model.py -t modelscope -n MonkeyOCR-pro-3B  # or MonkeyOCR
```

### 3. Inference

You can parse a file or a directory containing PDFs or images using the following commands:

```bash
# Replace input_path with the path to a PDF, an image, or a directory

# End-to-end parsing
python parse.py input_path

# Group files in a directory into batches of at most 20 total pages
python parse.py input_path -g 20

# Single-task recognition (outputs markdown only)
python parse.py input_path -t text/formula/table

# Parse PDFs in input_path and split results by pages
python parse.py input_path -s

# Specify output directory and model config file
python parse.py input_path -o ./output -c config.yaml
```
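As mentioned in step 2 above, you can also fetch the weights programmatically with `huggingface_hub`. The `local_dir` below is an assumption for illustration; check `tools/download_model.py` or your model config for where the weights are actually expected.

```python
# Programmatic alternative to tools/download_model.py (step 2 above).
# NOTE: local_dir is an ASSUMED target directory for illustration; check
# the download script / model config for the expected weight location.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="echo840/MonkeyOCR",  # or another MonkeyOCR repo on the Hub
    local_dir="model_weight",     # assumed target directory
)
```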
**More usage examples:**

```bash
# Single file processing
python parse.py input.pdf                           # Parse single PDF file
python parse.py input.pdf -o ./output               # Parse with custom output dir
python parse.py input.pdf -s                        # Parse PDF with page splitting
python parse.py image.jpg                           # Parse single image file

# Single-task recognition
python parse.py image.jpg -t text                   # Text recognition from image
python parse.py image.jpg -t formula                # Formula recognition from image
python parse.py image.jpg -t table                  # Table recognition from image
python parse.py document.pdf -t text                # Text recognition from all PDF pages

# Folder processing (all files individually)
python parse.py /path/to/folder                     # Parse all files in folder
python parse.py /path/to/folder -s                  # Parse with page splitting
python parse.py /path/to/folder -t text             # Single-task recognition for all files

# Multi-file grouping (batch processing by page count)
python parse.py /path/to/folder -g 5                # Group files with max 5 total pages
python parse.py /path/to/folder -g 10 -s            # Group files with page splitting
python parse.py /path/to/folder -g 8 -t text        # Group files for single-task recognition

# Advanced configurations
python parse.py input.pdf -c model_configs.yaml     # Custom model configuration
python parse.py /path/to/folder -g 15 -s -o ./out   # Group files, split pages, custom output
python parse.py input.pdf --pred-abandon            # Enable predicting abandoned elements
python parse.py /path/to/folder -g 10 -m            # Group files and merge text blocks in output
```
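If you want to drive `parse.py` from your own code, for example to walk a directory tree that mixes PDFs and images, a small wrapper like the following works. It is a minimal sketch that uses only the CLI flags documented above and assumes it is run from the MonkeyOCR repository root.

```python
# Minimal sketch: drive parse.py programmatically over a directory tree.
# Assumes this script runs from the MonkeyOCR repository root and uses
# only the documented CLI flags shown above.
import subprocess
from pathlib import Path

def parse_tree(root: str, out_dir: str = "./output") -> None:
    exts = {".pdf", ".jpg", ".jpeg", ".png"}
    for path in sorted(Path(root).rglob("*")):
        if path.suffix.lower() in exts:
            subprocess.run(
                ["python", "parse.py", str(path), "-o", out_dir],
                check=True,  # raise if parsing a file fails
            )

if __name__ == "__main__":
    parse_tree("/path/to/folder")
```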
**Output results**

MonkeyOCR mainly generates three types of output files:

1. **Processed markdown file** (`your.md`): The final parsed document content in markdown format, containing text, formulas, tables, and other structured elements.
2. **Layout results** (`your_layout.pdf`): The layout results drawn on the original PDF.
3. **Intermediate block results** (`your_middle.json`): A JSON file containing detailed information about all detected blocks, including:
   - Block coordinates and positions
   - Block content and type information
   - Relationship information between blocks

These files provide both the final formatted output and detailed intermediate results for further analysis or processing.
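As a starting point for post-processing, the sketch below loads a `*_middle.json` file and prints a per-block summary. The key names used here (`blocks`, `type`, `bbox`) are assumptions for illustration; inspect your own output file to confirm the actual schema.

```python
# Sketch: summarize the intermediate block results (*_middle.json).
# The key names ("blocks", "type", "bbox") are ASSUMPTIONS; inspect your
# own output file to confirm the actual schema.
import json

def summarize_blocks(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        middle = json.load(f)
    for i, block in enumerate(middle.get("blocks", [])):
        kind = block.get("type", "unknown")
        bbox = block.get("bbox")
        print(f"block {i}: type={kind}, bbox={bbox}")

summarize_blocks("output/your_middle.json")
```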
### 4. Gradio Demo

```bash
python demo/demo_gradio.py
```

Once the demo is running, you can access it at http://localhost:7860.

### 5. FastAPI

You can start the MonkeyOCR FastAPI service with the following command:

```bash
uvicorn api.main:app --port 8000
```

Once the API service is running, you can access the API documentation at http://localhost:8000/docs to explore the available endpoints. For a programmatic example of calling the service, see the client sketch just before the Benchmark Results section below.

> [!TIP]
> To improve API concurrency performance, consider configuring the inference backend as `lmdeploy_queue` or `vllm_queue`.

## Docker Deployment

1. Navigate to the `docker` directory:

   ```bash
   cd docker
   ```

2. **Prerequisite:** Ensure NVIDIA GPU support is available in Docker (via `nvidia-docker2`). If GPU support is not enabled, run the following to set up the environment:

   ```bash
   bash env.sh
   ```

3. Build the Docker image:

   ```bash
   docker compose build monkeyocr
   ```

   > [!IMPORTANT]
   >
   > If your GPU is from the 20/30/40-series, V100, L20/L40, or similar, please build the patched Docker image for LMDeploy compatibility:
   >
   > ```bash
   > docker compose build monkeyocr-fix
   > ```
   >
   > Otherwise, you may encounter the following error: `triton.runtime.errors.OutOfResources: out of resource: shared memory`

4. Run the container with the Gradio demo (accessible on port 7860):

   ```bash
   docker compose up monkeyocr-demo
   ```

   Alternatively, start an interactive development environment:

   ```bash
   docker compose run --rm monkeyocr-dev
   ```

5. Run the FastAPI service (accessible on port 7861):

   ```bash
   docker compose up monkeyocr-api
   ```

   Once the API service is running, you can access the API documentation at http://localhost:7861/docs to explore the available endpoints.

## Windows Support

See the [Windows support guide](docs/windows_support.md) for details.

## Quantization

This model can be quantized using AWQ. Follow the instructions in the [quantization guide](docs/Quantization.md).
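Referring back to the FastAPI service described in step 5 above, the snippet below sketches how a client might upload a document for parsing. The `/parse` route and the `file` field are assumptions for illustration only; consult http://localhost:8000/docs for the service's actual endpoints and parameters.

```python
# Sketch of a client for the MonkeyOCR FastAPI service started above.
# The "/parse" route and "file" field are ASSUMED for illustration;
# consult http://localhost:8000/docs for the real endpoint names.
import requests

def parse_document(path: str, base_url: str = "http://localhost:8000") -> str:
    with open(path, "rb") as f:
        resp = requests.post(f"{base_url}/parse", files={"file": f})
    resp.raise_for_status()
    return resp.text  # inspect the response schema via /docs

print(parse_document("input.pdf"))
```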
## Benchmark Results

Here are the evaluation results of our model on OmniDocBench. MonkeyOCR-3B uses DocLayout-YOLO as the structure detection model, while MonkeyOCR-3B* uses our trained structure detection model with improved Chinese performance.

### 1. The end-to-end evaluation results of different tasks

| Model Type | Method | Overall Edit (EN) | Overall Edit (ZH) | Text Edit (EN) | Text Edit (ZH) | Formula Edit (EN) | Formula Edit (ZH) | Table TEDS (EN) | Table TEDS (ZH) | Table Edit (EN) | Table Edit (ZH) | Read Order Edit (EN) | Read Order Edit (ZH) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pipeline Tools | MinerU | 0.150 | 0.357 | 0.061 | 0.215 | 0.278 | 0.577 | 78.6 | 62.1 | 0.180 | 0.344 | 0.079 | 0.292 |
| | Marker | 0.336 | 0.556 | 0.080 | 0.315 | 0.530 | 0.883 | 67.6 | 49.2 | 0.619 | 0.685 | 0.114 | 0.340 |
| | Mathpix | 0.191 | 0.365 | 0.105 | 0.384 | 0.306 | 0.454 | 77.0 | 67.1 | 0.243 | 0.320 | 0.108 | 0.304 |
| | Docling | 0.589 | 0.909 | 0.416 | 0.987 | 0.999 | 1 | 61.3 | 25.0 | 0.627 | 0.810 | 0.313 | 0.837 |
| | Pix2Text | 0.320 | 0.528 | 0.138 | 0.356 | 0.276 | 0.611 | 73.6 | 66.2 | 0.584 | 0.645 | 0.281 | 0.499 |
| | Unstructured | 0.586 | 0.716 | 0.198 | 0.481 | 0.999 | 1 | 0 | 0.06 | 1 | 0.998 | 0.145 | 0.387 |
| | OpenParse | 0.646 | 0.814 | 0.681 | 0.974 | 0.996 | 1 | 64.8 | 27.5 | 0.284 | 0.639 | 0.595 | 0.641 |
| | PP-StructureV3 | 0.145 | 0.206 | 0.058 | 0.088 | 0.295 | 0.535 | - | - | 0.159 | 0.109 | 0.069 | 0.091 |
| Expert VLMs | GOT-OCR | 0.287 | 0.411 | 0.189 | 0.315 | 0.360 | 0.528 | 53.2 | 47.2 | 0.459 | 0.520 | 0.141 | 0.280 |
| | Nougat | 0.452 | 0.973 | 0.365 | 0.998 | 0.488 | 0.941 | 39.9 | 0 | 0.572 | 1.000 | 0.382 | 0.954 |
| | Mistral OCR | 0.268 | 0.439 | 0.072 | 0.325 | 0.318 | 0.495 | 75.8 | 63.6 | 0.600 | 0.650 | 0.083 | 0.284 |
| | OLMOCR-sglang | 0.326 | 0.469 | 0.097 | 0.293 | 0.455 | 0.655 | 68.1 | 61.3 | 0.608 | 0.652 | 0.145 | 0.277 |
| | SmolDocling-256M | 0.493 | 0.816 | 0.262 | 0.838 | 0.753 | 0.997 | 44.9 | 16.5 | 0.729 | 0.907 | 0.227 | 0.522 |
| | Dolphin | 0.206 | 0.306 | 0.107 | 0.197 | 0.447 | 0.580 | 77.3 | 67.2 | 0.180 | 0.285 | 0.091 | 0.162 |
| | MinerU 2 | 0.139 | 0.240 | 0.047 | 0.109 | 0.297 | 0.536 | 82.5 | 79.0 | 0.141 | 0.195 | 0.069 | 0.118 |
| | OCRFlux | 0.195 | 0.281 | 0.064 | 0.183 | 0.379 | 0.613 | 71.6 | 81.3 | 0.253 | 0.139 | 0.086 | 0.187 |
| General VLMs | GPT4o | 0.233 | 0.399 | 0.144 | 0.409 | 0.425 | 0.606 | 72.0 | 62.9 | 0.234 | 0.329 | 0.128 | 0.251 |
| | Qwen2.5-VL-7B | 0.312 | 0.406 | 0.157 | 0.228 | 0.351 | 0.574 | 76.4 | 72.2 | 0.588 | 0.619 | 0.149 | 0.203 |
| | InternVL3-8B | 0.314 | 0.383 | 0.134 | 0.218 | 0.417 | 0.563 | 66.1 | 73.1 | 0.586 | 0.564 | 0.118 | 0.186 |
| Mix | MonkeyOCR-3B [Weight] | 0.140 | 0.297 | 0.058 | 0.185 | 0.238 | 0.506 | 80.2 | 77.7 | 0.170 | 0.253 | 0.093 | 0.244 |
| | MonkeyOCR-3B* [Weight] | 0.154 | 0.277 | 0.073 | 0.134 | 0.255 | 0.529 | 78.2 | 76.2 | 0.182 | 0.262 | 0.105 | 0.183 |
| | MonkeyOCR-pro-3B [Weight] | 0.138 | 0.206 | 0.067 | 0.107 | 0.246 | 0.421 | 81.5 | 87.5 | 0.139 | 0.111 | 0.100 | 0.185 |
| | MonkeyOCR-pro-1.2B [Weight] | 0.153 | 0.223 | 0.066 | 0.123 | 0.272 | 0.449 | 76.5 | 83.7 | 0.176 | 0.131 | 0.097 | 0.187 |
### 2. The end-to-end text recognition performance across 9 PDF page types

| Model Type | Model | Book | Slides | Financial Report | Textbook | Exam Paper | Magazine | Academic Papers | Notes | Newspaper | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Pipeline Tools | MinerU | 0.055 | 0.124 | 0.033 | 0.102 | 0.159 | 0.072 | 0.025 | 0.984 | 0.171 | 0.206 |
| | Marker | 0.074 | 0.340 | 0.089 | 0.319 | 0.452 | 0.153 | 0.059 | 0.651 | 0.192 | 0.274 |
| | Mathpix | 0.131 | 0.220 | 0.202 | 0.216 | 0.278 | 0.147 | 0.091 | 0.634 | 0.690 | 0.300 |
| Expert VLMs | GOT-OCR | 0.111 | 0.222 | 0.067 | 0.132 | 0.204 | 0.198 | 0.179 | 0.388 | 0.771 | 0.267 |
| | Nougat | 0.734 | 0.958 | 1.000 | 0.820 | 0.930 | 0.830 | 0.214 | 0.991 | 0.871 | 0.806 |
| | Dolphin | 0.091 | 0.131 | 0.057 | 0.146 | 0.231 | 0.121 | 0.074 | 0.363 | 0.307 | 0.177 |
| | OCRFlux | 0.068 | 0.125 | 0.092 | 0.102 | 0.119 | 0.083 | 0.047 | 0.223 | 0.536 | 0.149 |
| General VLMs | GPT4o | 0.157 | 0.163 | 0.348 | 0.187 | 0.281 | 0.173 | 0.146 | 0.607 | 0.751 | 0.316 |
| | Qwen2.5-VL-7B | 0.148 | 0.053 | 0.111 | 0.137 | 0.189 | 0.117 | 0.134 | 0.204 | 0.706 | 0.205 |
| | InternVL3-8B | 0.163 | 0.056 | 0.107 | 0.109 | 0.129 | 0.100 | 0.159 | 0.150 | 0.681 | 0.188 |
| Mix | MonkeyOCR-3B [Weight] | 0.046 | 0.120 | 0.024 | 0.100 | 0.129 | 0.086 | 0.024 | 0.643 | 0.131 | 0.155 |
| | MonkeyOCR-3B* [Weight] | 0.054 | 0.203 | 0.038 | 0.112 | 0.138 | 0.111 | 0.032 | 0.194 | 0.136 | 0.120 |
| | MonkeyOCR-pro-3B [Weight] | 0.084 | 0.129 | 0.060 | 0.090 | 0.107 | 0.073 | 0.050 | 0.171 | 0.107 | 0.100 |
| | MonkeyOCR-pro-1.2B [Weight] | 0.087 | 0.142 | 0.059 | 0.093 | 0.115 | 0.085 | 0.045 | 0.226 | 0.122 | 0.112 |
### 3. The evaluation results of olmOCR-Bench

| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi Column | Long Tiny Text | Base | Overall |
|---|---|---|---|---|---|---|---|---|---|
| GOT OCR | 52.7 | 52.0 | 0.2 | 22.1 | 93.6 | 42.0 | 29.9 | 94.0 | 48.3 ± 1.1 |
| Marker | 76.0 | 57.9 | 57.6 | 27.8 | 84.9 | 72.9 | 84.6 | 99.1 | 70.1 ± 1.1 |
| MinerU | 75.4 | 47.4 | 60.9 | 17.3 | 96.6 | 59.0 | 39.1 | 96.6 | 61.5 ± 1.1 |
| Mistral OCR | 77.2 | 67.5 | 60.6 | 29.3 | 93.6 | 71.3 | 77.1 | 99.4 | 72.0 ± 1.1 |
| Nanonets OCR | 67.0 | 68.6 | 77.7 | 39.5 | 40.7 | 69.9 | 53.4 | 99.3 | 64.5 ± 1.1 |
| GPT-4o (No Anchor) | 51.5 | 75.5 | 69.1 | 40.9 | 94.2 | 68.9 | 54.1 | 96.7 | 68.9 ± 1.1 |
| GPT-4o (Anchored) | 53.5 | 74.5 | 70.0 | 40.7 | 93.8 | 69.3 | 60.6 | 96.8 | 69.9 ± 1.1 |
| Gemini Flash 2 (No Anchor) | 32.1 | 56.3 | 61.4 | 27.8 | 48.0 | 58.7 | 84.4 | 94.0 | 57.8 ± 1.1 |
| Gemini Flash 2 (Anchored) | 54.5 | 56.1 | 72.1 | 34.2 | 64.7 | 61.5 | 71.5 | 95.6 | 63.8 ± 1.2 |
| Qwen 2 VL (No Anchor) | 19.7 | 31.7 | 24.2 | 17.1 | 88.9 | 8.3 | 6.8 | 55.5 | 31.5 ± 0.9 |
| Qwen 2.5 VL (No Anchor) | 63.1 | 65.7 | 67.3 | 38.6 | 73.6 | 68.3 | 49.1 | 98.3 | 65.5 ± 1.2 |
| olmOCR v0.1.75 (No Anchor) | 71.5 | 71.4 | 71.4 | 42.8 | 94.1 | 77.7 | 71.0 | 97.8 | 74.7 ± 1.1 |
| olmOCR v0.1.75 (Anchored) | 74.9 | 71.2 | 71.0 | 42.2 | 94.5 | 78.3 | 73.3 | 98.3 | 75.5 ± 1.0 |
| MonkeyOCR-pro-3B [Weight] | 83.8 | 68.8 | 74.6 | 36.1 | 91.2 | 76.6 | 80.1 | 95.3 | 75.8 ± 1.0 |
| MonkeyOCR-pro-1.2B [Weight] | 80.5 | 62.9 | 71.1 | 32.9 | 92.2 | 68.3 | 74.0 | 92.6 | 71.8 ± 1.1 |
## Visualization Demo

Get a quick hands-on experience with our demo: http://vlrlabmonkey.xyz:7685 (the latest model is available for selection).

> Our demo is simple and easy to use:
>
> 1. Upload a PDF or image.
> 2. Click “Parse (解析)” to let the model perform structure detection, content recognition, and relationship prediction on the input document. The final output will be a markdown-formatted version of the document.
> 3. Select a prompt and click “Test by prompt” to let the model perform content recognition on the image based on the selected prompt.

### Example for formula document

![Formula document example](7jVLgB.jpg)

### Example for table document

![Table document example](7jcOaa.png)

### Example for newspaper

![Newspaper example](7jcP5V.png)

### Example for financial report

![Financial report example](7jc10I.png)
![Financial report example](7jcRCL.png)

## Citing MonkeyOCR

If you wish to refer to the baseline results published here, please use the following BibTeX entry:

```BibTeX
@misc{li2025monkeyocrdocumentparsingstructurerecognitionrelation,
      title={MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm},
      author={Zhang Li and Yuliang Liu and Qiang Liu and Zhiyin Ma and Ziyang Zhang and Shuo Zhang and Zidun Guo and Jiarui Zhang and Xinyu Wang and Xiang Bai},
      year={2025},
      eprint={2506.05218},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.05218},
}
```

## Acknowledgments

We would like to thank [MinerU](https://github.com/opendatalab/MinerU), [DocLayout-YOLO](https://github.com/opendatalab/DocLayout-YOLO), [PyMuPDF](https://github.com/pymupdf/PyMuPDF), [layoutreader](https://github.com/ppaanngggg/layoutreader), [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [LMDeploy](https://github.com/InternLM/lmdeploy), [PP-StructureV3](https://github.com/PaddlePaddle/PaddleOCR), [PP-DocLayout_plus-L](https://huggingface.co/PaddlePaddle/PP-DocLayout_plus-L), and [InternVL3](https://github.com/OpenGVLab/InternVL) for providing base code and models, as well as their contributions to this field. We also thank [M6Doc](https://github.com/HCIILAB/M6Doc), [DocLayNet](https://github.com/DS4SD/DocLayNet), [CDLA](https://github.com/buptlihang/CDLA), [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery), [DocGenome](https://github.com/Alpha-Innovator/DocGenome), [PubTabNet](https://github.com/ibm-aur-nlp/PubTabNet), and [UniMER-1M](https://github.com/opendatalab/UniMERNet) for providing valuable datasets. We also thank everyone who contributed to this open-source effort.

## Limitations

Currently, MonkeyOCR does not yet fully support photographed text, handwritten content, Traditional Chinese characters, or multilingual text. We plan to add support for these in future public releases. Additionally, our model is deployed on a single GPU, so if too many users upload files at the same time, issues like “This application is currently busy” may occur. The processing time shown on the demo page does not reflect computation time alone; it also includes result uploading and other overhead, so it may be longer during periods of high traffic. The inference speeds of MonkeyOCR, MinerU, and Qwen2.5-VL-7B were measured on an H800 GPU.

## Copyright

Please don’t hesitate to share your valuable feedback; it is a key motivation that drives us to continuously improve our framework. Note: our model is intended for academic research and non-commercial use only. If you are interested in a faster (smaller) or stronger model, please contact us at xbai@hust.edu.cn or ylliu@hust.edu.cn.