InduOCRBench
[📜 arXiv] | [Dataset (🤗Hugging Face)]
News
- [2026-04] InduOCRBench paper accepted to ACL 2026 Industry Track. Dataset released.
📖 Introduction
InduOCRBench is an OCR benchmark for industrial RAG systems, covering 11 challenging document types observed in real-world enterprise workflows. It addresses the gap between traditional character-level OCR metrics and actual downstream RAG utility, evaluating OCR robustness in terms of both transcription fidelity and end-to-end retrieval performance.
Key Features:
- Real-world scenarios: Data sampled from 10,000 documents spanning 12 industries.
- Scale and diversity: Contains 570 PDF documents and 3,402 pages covering 11 challenge types + 1 Normal category.
- High-quality annotations: Fine-grained Hybrid Markdown annotations (Markdown + HTML tables + LaTeX formulas + style tags), with a 3-stage human-in-the-loop quality control achieving 98% accuracy.
- Dual evaluation tracks: OCR fidelity (character/structure metrics) and RAG impact (end-to-end retrieval + generation accuracy).
Key findings:
- Models achieving near-perfect scores on standard benchmarks such as OmniDocBench decline sharply on InduOCRBench (e.g., PP-StructureV3 drops 26.4 points, PaddleOCR-VL drops 14.7 points).
- High OCR accuracy does not necessarily translate into strong downstream RAG performance: VisualStyle documents achieve 82.9% OCR accuracy yet only 52.8% RAG accuracy, a 30.1-point discrepancy.
- OCR-induced information loss is a strong and stable upstream limiting factor across all OCR-first RAG architectures.
The benchmark releases two evaluation tracks:
- OCR Fidelity Evaluation: character/structure-level metrics comparing OCR output to ground-truth Markdown (`ocr_data/`).
- RAG Impact Evaluation: end-to-end pipeline evaluation measuring how OCR quality affects retrieval and answer accuracy (`RAG_eval/`).
📊 Dataset Statistics
| Statistic | Value |
|---|---|
| Total documents | 570 |
| Total pages | 3,402 |
| Document categories | 11 challenge types + 1 Normal |
| QA pairs (RAG eval) | 2,071 |
| Annotation format | Hybrid Markdown (Markdown + HTML tables + LaTeX formulas + style) |
The 11 challenge document types:
- ComplexBackground
- HighPixel
- UltraLong
- MultiColumn
- UltraWide
- HistoryBooks
- Handwriting
- MultiFont
- VisualStyle
- Watermark
- CrosspageTable
📂 Dataset Structure
The OCR data is stored in the `ocr_data` directory, containing the original files and annotations in two formats; the RAG evaluation data lives in `RAG_eval/`:
```
InduOCRBench/
├── ocr_data/
│   ├── pdf.zip          # Original PDF documents (570 files, 3,402 pages)
│   ├── md.zip           # [Recommended] Ground-truth Markdown for OCR evaluation
│   └── md_original.zip  # Full-fidelity annotations preserving all visual style tags
│
├── RAG_eval/
│   ├── QA_pairs.jsonl   # QA pairs for RAG pipeline evaluation
│   └── doc_md/          # Ground-truth Markdown files referenced by QA_pairs.jsonl
│
├── README.md
└── README_zh-CN.md
```
- `md_original`: Full-fidelity Markdown annotations preserving all visual style tags (e.g., font, color, alignment, layout). Suitable for studies requiring high-fidelity document reconstruction.
- `md`: Style-stripped Markdown annotations containing only textual content. This version serves as the standard Ground Truth for OCR evaluation to ensure fair comparison.
- `doc_md`: Hybrid Markdown annotations for RAG construction. Style information is preserved for VisualStyle documents and removed for other document types. This version is the standard Ground Truth for RAG indexing and QA evaluation.
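A minimal loading sketch, assuming the zips have been downloaded into `ocr_data/` as in the tree above:

```python
import zipfile
from pathlib import Path

# Extract the style-stripped ground truth (the standard GT for OCR evaluation).
with zipfile.ZipFile("ocr_data/md.zip") as zf:
    zf.extractall("ocr_data/md")

# Map each ground-truth Markdown file name to its contents.
ground_truth = {
    p.name: p.read_text(encoding="utf-8")
    for p in Path("ocr_data/md").rglob("*.md")
}
print(f"Loaded {len(ground_truth)} ground-truth files")
```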
🚀 OCR Evaluation
This benchmark uses the md2md method from OmniDocBench for evaluation. For details, see: https://github.com/opendatalab/OmniDocBench/tree/main
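For intuition only, the sketch below computes a normalized edit-distance similarity of the kind underlying the EDS-style text metrics in the table below. It is not the official md2md implementation, which additionally scores tables, formulas, and reading order; use the OmniDocBench tooling for reportable numbers.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def text_similarity(pred: str, gt: str) -> float:
    """Normalized edit-distance similarity in [0, 1] (1 = identical)."""
    if not pred and not gt:
        return 1.0
    return 1.0 - edit_distance(pred, gt) / max(len(pred), len(gt))
```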
Evaluation Results
| Model Type | Methods | Size | Overall↑ | TextEDS↑ | FormulaCDM↑ | TableTEDS↑ | TableTEDS-S↑ | ReadOrderEDS↑ |
|---|---|---|---|---|---|---|---|---|
| Specialized VLMs | PaddleOCR-VL-1.5 | 0.9B | 79.01 | 88.33 | 75.3 | 73.41 | 77.27 | 85.3 |
| | PaddleOCR-VL | 0.9B | 78.24 | 88.1 | 74.6 | 72.03 | 75.87 | 85.6 |
| | Logics-Parsing-v2 | 4B | 75.71 | 84.94 | 72.3 | 69.90 | 76.17 | 88.9 |
| | MinerU2.5-Pro | 1.2B | 74.47 | 81.63 | 75.8 | 65.99 | 70.46 | 79.1 |
| | FireRed-OCR | 2B | 74.09 | 87.9 | 72.4 | 61.98 | 66.45 | 85.8 |
| | MinerU2.5 | 1.2B | 72.50 | 81.8 | 75.4 | 60.31 | 63.10 | 84.4 |
| | GLM-OCR | 0.9B | 68.64 | 63.18 | 72.1 | 70.64 | 76.72 | 77.2 |
| | hunyuan-ocr | 0.9B | 68.08 | 86.1 | 65.6 | 52.53 | 58.34 | 85.7 |
| | deepseek-ocr | 1.2B | 61.46 | 75.5 | 61.8 | 47.07 | 49.31 | 81.8 |
| General VLMs | Gemini-2.5 Pro | - | 74.53 | 83.1 | 77.2 | 63.29 | 67.28 | 81.1 |
| | Qwen3-VL-235B | 235B | 70.91 | 83.3 | 74.8 | 54.63 | 59.43 | 82.1 |
| | Ovis2.6-30B-A3B | 30B | 59.34 | 60.2 | 65.8 | 52.03 | 57.00 | 64.4 |
| | GPT-4o | - | 52.01 | 60.8 | 58.1 | 37.15 | 43.83 | 70.0 |
| Pipeline Tools | Mineru2-pipeline | - | 66.54 | 80.1 | 63.2 | 56.32 | 62.05 | 81.3 |
| | PP-StructureV3 | - | 60.32 | 78.2 | 53.7 | 49.07 | 62.06 | 79.1 |
Evaluation Setup
To ensure maximum fairness, please follow these settings:
- Ground Truth: please use the Markdown files extracted from `ocr_data/md.zip` as the benchmark.
- Metric: use the `md2md` evaluation metric to calculate similarity scores.

Note: although `md_original` is provided, standard leaderboard evaluations should uniformly use the data under the `md` directory to align with the evaluation standards.
📝 Usage
1. Download and extract the data:

   ```bash
   cd ocr_data
   unzip pdf.zip
   unzip md.zip
   ```

2. Run your model to perform inference on the documents in the `pdf` directory, generating prediction results in Markdown format (a minimal inference-loop sketch follows this list).
3. Use the evaluation script to compare your prediction results with the Ground Truth under the `md` directory.
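The loop below is a minimal sketch of step 2, assuming a hypothetical `predict` callable that maps a PDF path to a Markdown string; swap in your own model wrapper.

```python
from pathlib import Path

def run_inference(pdf_dir: str, out_dir: str, predict) -> None:
    """Run an OCR model over every PDF and write one Markdown prediction per file.

    `predict` is any callable mapping a PDF path to a Markdown string;
    it is a placeholder, not part of the benchmark.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for pdf_path in sorted(Path(pdf_dir).glob("*.pdf")):
        markdown = predict(pdf_path)
        (out / f"{pdf_path.stem}.md").write_text(markdown, encoding="utf-8")

# Example: run_inference("ocr_data/pdf", "predictions", my_model.predict)
```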
RAG Impact Evaluation
The RAG evaluation track measures how OCR quality affects end-to-end retrieval-augmented generation performance, going beyond character-level metrics to capture structural and semantic preservation.
RAG Evaluation Data
The `RAG_eval/` directory contains:
- `QA_pairs.jsonl`: 2,071 QA pairs covering all 11 document challenge types.
- `doc_md/`: Ground-truth Markdown files for RAG indexing, using the `doc_md` format (style information preserved for VisualStyle document types, removed for others).
Each QA entry has the following fields:
```json
{
  "doc_type": "cross_page_table",
  "filename": "cross_page_table_1.md",
  "title": "Document title",
  "file_path": "RAG_eval/doc_md/cross_page_table_1.md",
  "question_category": "...",
  "question": "...",
  "answer": "...",
  "evidence": "..."
}
```
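A small sketch for loading and grouping the QA pairs, assuming the paths shown above:

```python
import json
from pathlib import Path

# Load the 2,071 QA pairs, one JSON object per line.
qa_pairs = [
    json.loads(line)
    for line in Path("RAG_eval/QA_pairs.jsonl").read_text(encoding="utf-8").splitlines()
    if line.strip()
]

# Group questions by document challenge type.
by_type: dict[str, list[dict]] = {}
for qa in qa_pairs:
    by_type.setdefault(qa["doc_type"], []).append(qa)

for doc_type, items in sorted(by_type.items()):
    print(f"{doc_type}: {len(items)} questions")
```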
RAG Pipeline Setup
We adopt the FlashRAG Naive pipeline with the following configuration:
| Component | Setting |
|---|---|
| Embedding | BGE-M3 |
| Retrieval | Dense, Flat index, top-100 |
| Reranking | BGE-Rerank-V2-M3, top-10 |
| Generation | ChatGPT-5 |
| Chunking | HTML tree structure, max 256 tokens |
| Evaluation | RAGAS framework (GPT-OSS-120B as judge) |
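The Chunking row above packs content along the HTML tree into chunks of at most 256 tokens. The sketch below illustrates one plausible greedy packing over top-level HTML blocks; whitespace splitting stands in for the pipeline's real tokenizer, so treat it as an approximation rather than the exact configuration.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def chunk_html(html: str, max_tokens: int = 256) -> list[str]:
    """Greedily pack top-level HTML blocks into chunks of <= max_tokens.

    Whole blocks (e.g. a <table>) are kept intact so structure survives
    chunking; a single oversized block becomes its own chunk.
    """
    soup = BeautifulSoup(html, "html.parser")
    blocks = [str(el) for el in soup.children if str(el).strip()]
    chunks, current, count = [], [], 0
    for block in blocks:
        n = len(block.split())  # crude token count via whitespace splitting
        if current and count + n > max_tokens:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(block)
        count += n
    if current:
        chunks.append("\n".join(current))
    return chunks
```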
RAG Evaluation Metrics
- Context Recall: Measures whether retrieved passages contain evidence supporting the ground-truth answer.
- Answer Accuracy: Evaluates the correctness of the generated answer relative to the ground truth.
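As a quick sanity check before running the full LLM-judged evaluation, a crude string-containment proxy for Context Recall might look like the sketch below. It is illustrative only; the official track uses the RAGAS framework with an LLM judge.

```python
def context_recall_proxy(evidence: str, retrieved: list[str]) -> float:
    """Crude proxy: 1.0 if the annotated evidence string appears verbatim
    (after whitespace/case normalization) in any retrieved chunk, else 0.0."""
    norm = " ".join(evidence.split()).lower()
    return float(any(norm in " ".join(c.split()).lower() for c in retrieved))
```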
Key RAG Findings
| Document Type | OCR Accuracy | RAG Accuracy | Gap |
|---|---|---|---|
| VisualStyle | 82.9% | 52.8% | -30.1 pts (blind spot) |
| CrosspageTable | 40.7% | 63.8% | +23.1 pts (LLM compensates) |
| UltraWide | 28.1% | 49.1% | low-low (structural failure) |
| MultiFont | 97.2% | 97.5% | ≈0 (aligned) |
High OCR accuracy does not guarantee strong RAG performance. VisualStyle documents demonstrate the largest OCR–RAG discrepancy: despite 82.9% character-level accuracy, only 52.8% RAG accuracy is achieved because OCR strips visual formatting cues (strikethroughs, color emphasis) that encode critical semantics.
📄 License
This project is released under an open-source license. Please comply with relevant laws and regulations when using this dataset. The data is for research and academic purposes only.
Acknowledgement
- Thanks to OmniDocBench for OCR metric calculation.
- Thanks to FlashRAG for the RAG pipeline framework.
- Thanks to ragas for RAG metrics evaluation.
🖊️ Citation
If you use InduOCRBench in your research, please consider citing:
```bibtex
@misc{induocrbench,
      title={When Good OCR Is Not Enough: Benchmarking OCR Robustness for Retrieval-Augmented Generation},
      author={Lin Sun and Wangdexian and Jingang Huang and Linglin Zhang and Change Jia and Zhengwei Cheng and Xiangzheng Zhang},
      year={2026},
      eprint={2605.00911},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2605.00911},
}
```
