<h1 align="center">InduOCRBench</h1>
<div align="center">
English | <a href="https://huggingface.co/datasets/qihoo360/InduOCRBench/blob/main/README_zh-CN.md">简体中文</a>
</div>
[\[📜 arXiv\]](https://arxiv.org/abs/2605.00911) | [[Dataset (🤗Hugging Face)]](https://huggingface.co/datasets/qihoo360/InduOCRBench)
---
## News
- **[2026-04]** InduOCRBench paper accepted to ACL 2026 Industry Track. Dataset released.
---
## 📖 Introduction
**InduOCRBench** is an OCR benchmark for industrial RAG systems, covering 11 challenging document types observed in real-world enterprise workflows. It addresses the gap between traditional character-level OCR metrics and actual downstream RAG utility, evaluating OCR robustness in terms of both transcription fidelity and end-to-end retrieval performance.
**Key Features:**
- **Real-world scenarios**: Data sampled from 10,000 documents spanning 12 industries.
- **Scale and diversity**: Contains **570** PDF documents and **3,402** pages covering **11** challenge types + 1 Normal category.
- **High-quality annotations**: Fine-grained Hybrid Markdown annotations (Markdown + HTML tables + LaTeX formulas + style tags), with a 3-stage human-in-the-loop quality control achieving 98% accuracy.
- **Dual evaluation tracks**: OCR fidelity (character/structure metrics) and RAG impact (end-to-end retrieval + generation accuracy).
**Key findings:**
- Models achieving near-perfect scores on standard benchmarks (OmniDocBench) decline sharply on InduOCRBench (e.g., PP-StructureV3 drops 26.4 points, PaddleOCR-VL drops 14.7 points).
- High OCR accuracy does **not** necessarily translate into strong downstream RAG performance. `VisualStyle` documents achieve 82.9% OCR accuracy yet only 52.8% RAG accuracy — a 30.1-point discrepancy.
- OCR-induced information loss is a strong and stable upstream limiting factor across all OCR-first RAG architectures.
**The benchmark provides two evaluation tracks:**
1. **OCR Fidelity Evaluation** — character/structure-level metrics comparing OCR output to ground-truth Markdown (`ocr_data/`).
2. **RAG Impact Evaluation** — end-to-end pipeline evaluation measuring how OCR quality affects retrieval and answer accuracy (`RAG_eval/`).
---
## 📊 Dataset Statistics
| Statistic | Value |
|---|---|
| Total documents | 570 |
| Total pages | 3,402 |
| Document categories | 11 challenge types + 1 Normal |
| QA pairs (RAG eval) | 2,071 |
| Annotation format | Hybrid Markdown (Markdown + HTML tables + LaTeX formulas + style) |
The 11 challenge document types are:
- ComplexBackground
- HighPixel
- UltraLong
- MultiColumn
- UltraWide
- HistoryBooks
- Handwriting
- MultiFont
- VisualStyle
- Watermark
- CrosspageTable
---
## 📂 Dataset Structure
The OCR data is stored in the `ocr_data` directory (original PDFs plus annotations in two formats), and the RAG evaluation data in `RAG_eval`:
```text
InduOCRBench/
├── ocr_data/
│   ├── pdf.zip           # Original PDF documents (570 files, 3,402 pages)
│   ├── md.zip            # [Recommended] Ground-truth Markdown for OCR evaluation
│   └── md_original.zip   # Full-fidelity annotations preserving all visual style tags
│
├── RAG_eval/
│   ├── QA_pairs.jsonl    # QA pairs for RAG pipeline evaluation
│   └── doc_md/           # Ground-truth Markdown files referenced by QA_pairs.jsonl
│
├── README.md
└── README_zh-CN.md
```
- **md_original**: Full-fidelity Markdown annotations preserving all visual style tags (e.g., font, color, alignment, layout). Suitable for studies requiring high-fidelity document reconstruction.
- **md**: Style-stripped Markdown annotations containing only textual content. This version serves as the standard Ground Truth for OCR evaluation to ensure fair comparison.
- **doc_md**: Hybrid Markdown annotations for RAG construction. Style information is preserved for VisualStyle documents while removed for other document types. This version is designated as the standard Ground Truth for RAG indexing and QA evaluation.
## 🚀 OCR Evaluation
This benchmark uses the `md2md` method from **OmniDocBench** for evaluation. For details, see: https://github.com/opendatalab/OmniDocBench/tree/main
### Evaluation Result
<div align="center">
Table 1. OCR fidelity evaluation on InduOCRBench using md2md metrics
</div>
<table style="width:100%; border-collapse: collapse;">
<thead>
<tr>
<th>Model Type</th>
<th>Methods</th>
<th>Size</th>
<th>Overall↑</th>
<th>Text<sup>EDS</sup>↑</th>
<th>Formula<sup>CDM</sup>↑</th>
<th>Table<sup>TEDS</sup>↑</th>
<th>Table<sup>TEDS-S</sup>↑</th>
<th>Read Order<sup>EDS</sup>↑</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="9"><strong>Specialized</strong><br><strong>VLMs</strong></td>
<td>PaddleOCR-VL-1.5</td>
<td>0.9B</td>
<td><strong>79.01</strong></td>
<td><strong>88.33</strong></td>
<td>75.3</td>
<td><strong>73.41</strong></td>
<td><strong>77.27</strong></td>
<td>85.3</td>
</tr>
<tr>
<td>PaddleOCR-VL</td>
<td>0.9B</td>
<td><ins>78.24</ins></td>
<td><ins>88.1</ins></td>
<td>74.6</td>
<td><ins>72.03</ins></td>
<td>75.87</td>
<td>85.6</td>
</tr>
<tr>
<td>Logics-Parsing-v2</td>
<td>4B</td>
<td>75.71</td>
<td>84.94</td>
<td>72.3</td>
<td>69.90</td>
<td>76.17</td>
<td><strong>88.9</strong></td>
</tr>
<tr>
<td>MinerU2.5-Pro</td>
<td>1.2B</td>
<td>74.47</td>
<td>81.63</td>
<td><ins>75.8</ins></td>
<td>65.99</td>
<td>70.46</td>
<td>79.1</td>
</tr>
<tr>
<td>FireRed-OCR</td>
<td>2B</td>
<td>74.09</td>
<td>87.9</td>
<td>72.4</td>
<td>61.98</td>
<td>66.45</td>
<td><ins>85.8</ins></td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>72.50</td>
<td>81.8</td>
<td>75.4</td>
<td>60.31</td>
<td>63.10</td>
<td>84.4</td>
</tr>
<tr>
<td>GLM-OCR</td>
<td>0.9B</td>
<td>68.64</td>
<td>63.18</td>
<td>72.1</td>
<td>70.64</td>
<td><ins>76.72</ins></td>
<td>77.2</td>
</tr>
<tr>
<td>hunyuan-ocr</td>
<td>0.9B</td>
<td>68.08</td>
<td>86.1</td>
<td>65.6</td>
<td>52.53</td>
<td>58.34</td>
<td>85.7</td>
</tr>
<tr>
<td>deepseek-ocr</td>
<td>1.2B</td>
<td>61.46</td>
<td>75.5</td>
<td>61.8</td>
<td>47.07</td>
<td>49.31</td>
<td>81.8</td>
</tr>
<tr>
<td rowspan="4"><strong>General</strong><br><strong>VLMs</strong></td>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>74.53</td>
<td>83.1</td>
<td><strong>77.2</strong></td>
<td>63.29</td>
<td>67.28</td>
<td>81.1</td>
</tr>
<tr>
<td>Qwen3-VL-235B</td>
<td>235B</td>
<td>70.91</td>
<td>83.3</td>
<td>74.8</td>
<td>54.63</td>
<td>59.43</td>
<td>82.1</td>
</tr>
<tr>
<td>Ovis2.6-30B-A3B</td>
<td>30B</td>
<td>59.34</td>
<td>60.2</td>
<td>65.8</td>
<td>52.03</td>
<td>57.00</td>
<td>64.4</td>
</tr>
<tr>
<td>GPT-4o</td>
<td>-</td>
<td>52.01</td>
<td>60.8</td>
<td>58.1</td>
<td>37.15</td>
<td>43.83</td>
<td>70.0</td>
</tr>
<tr>
<td rowspan="2"><strong>Pipeline</strong><br><strong>Tools</strong></td>
<td>Mineru2-pipeline</td>
<td>-</td>
<td>66.54</td>
<td>80.1</td>
<td>63.2</td>
<td>56.32</td>
<td>62.05</td>
<td>81.3</td>
</tr>
<tr>
<td>PP-StructureV3</td>
<td>-</td>
<td>60.32</td>
<td>78.2</td>
<td>53.7</td>
<td>49.07</td>
<td>62.06</td>
<td>79.1</td>
</tr>
</tbody>
</table>
### Evaluation Setup
To ensure maximum fairness, please follow these settings:
1. **Ground Truth**: Please use the Markdown files extracted from `ocr_data/md.zip` as the benchmark.
2. **Metric**: Use the `md2md` evaluation metric to calculate similarity scores.
> Note: Although `md_original` is provided, standard leaderboard evaluations should consistently use the data under the `md` directory so that results remain comparable.
## 📝 Usage
1. Download and extract the data:
   ```bash
   cd ocr_data
   unzip pdf.zip
   unzip md.zip
   ```
2. Run your model to perform inference on the documents in the `pdf` directory, generating prediction results in Markdown format.
3. Use the evaluation script to compare your prediction results with the Ground Truth under the `md` directory; a rough similarity sketch follows below.
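The official fidelity scores are computed with OmniDocBench's `md2md` evaluation. As a quick sanity check before running the full toolkit, the sketch below computes a normalized character-level edit-distance similarity between one predicted Markdown file and its ground truth (in the spirit of the Text<sup>EDS</sup> column); the file paths are hypothetical and the result is not comparable to the leaderboard numbers.

```python
# Rough sanity-check sketch, NOT the official OmniDocBench md2md evaluation.
# Computes 1 - normalized Levenshtein distance between predicted and
# ground-truth Markdown. The file paths below are hypothetical.
from pathlib import Path

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def eds_similarity(pred_path: str, gt_path: str) -> float:
    """1 - edit distance / max length, in [0, 1]; higher is better."""
    pred = Path(pred_path).read_text(encoding="utf-8")
    gt = Path(gt_path).read_text(encoding="utf-8")
    denom = max(len(pred), len(gt)) or 1
    return 1.0 - levenshtein(pred, gt) / denom

print(eds_similarity("predictions/sample_1.md", "md/sample_1.md"))
```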
## RAG Impact Evaluation
The RAG evaluation track measures how OCR quality affects end-to-end retrieval-augmented generation performance, going beyond character-level metrics to capture structural and semantic preservation.
### RAG Evaluation Data
The `RAG_eval/` directory contains:
- **`QA_pairs.jsonl`**: 2,071 QA pairs covering all 11 document challenge types.
- **`doc_md/`**: Ground-truth Markdown files for RAG indexing, using the `doc_md` format (style information preserved for VisualStyle document types, removed for others).
Each QA entry has the following fields:
```json
{
"doc_type": "cross_page_table",
"filename": "cross_page_table_1.md",
"title": "Document title",
"file_path": "RAG_eval/doc_md/cross_page_table_1.md",
"question_category": "...",
"question": "...",
"answer": "...",
"evidence": "..."
}
```
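A minimal loader that reads `QA_pairs.jsonl` and groups entries by challenge type might look like this (the grouping is only illustrative):

```python
# Minimal sketch: load the QA pairs and count questions per challenge type.
import json
from collections import defaultdict

qa_by_type = defaultdict(list)
with open("RAG_eval/QA_pairs.jsonl", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        qa_by_type[entry["doc_type"]].append(entry)

for doc_type, entries in sorted(qa_by_type.items()):
    print(f"{doc_type}: {len(entries)} questions")
```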
### RAG Pipeline Setup
We adopt the [FlashRAG](https://github.com/RUC-NLPIR/FlashRAG) Naive pipeline with the following configuration:
| Component | Setting |
|---|---|
| Embedding | BGE-M3 |
| Retrieval | Dense, Flat index, top-100 |
| Reranking | BGE-Rerank-V2-M3, top-10 |
| Generation | ChatGPT-5 |
| Chunking | HTML tree structure, max 256 tokens (simplified sketch below) |
| Evaluation | RAGAS framework (GPT-OSS-120B as judge) |
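The actual chunking walks the HTML tree structure inside FlashRAG; the sketch below is a simplified stand-in that splits Markdown at headings and blank lines, then packs blocks greedily up to roughly 256 whitespace tokens (the real budget is tokenizer-specific).

```python
# Simplified stand-in for the "max 256 tokens" chunking step; the benchmark's
# pipeline chunks along the HTML tree via FlashRAG, and token counts there are
# tokenizer-specific. Whitespace tokens are used here only as an approximation.
import re

MAX_TOKENS = 256

def chunk_markdown(md_text: str) -> list[str]:
    # Split into blocks at blank lines or right before a heading line.
    blocks = [b.strip() for b in re.split(r"\n\s*\n|\n(?=#)", md_text) if b.strip()]
    chunks, current, current_len = [], [], 0
    for block in blocks:
        n_tokens = len(block.split())
        # Start a new chunk when the budget would be exceeded
        # (oversized single blocks are kept whole in this sketch).
        if current and current_len + n_tokens > MAX_TOKENS:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(block)
        current_len += n_tokens
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```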
### RAG Evaluation Metrics
- **Context Recall**: Measures whether retrieved passages contain evidence supporting the ground-truth answer (a simple string-matching proxy is sketched after this list).
- **Answer Accuracy**: Evaluates the correctness of the generated answer relative to the ground truth.
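Both metrics are judged by an LLM through RAGAS in the official setup. For a quick, non-authoritative check on retrieval alone, a crude string-matching proxy over the annotated `evidence` field can be used:

```python
# Crude proxy, NOT the RAGAS Context Recall metric (which uses an LLM judge):
# checks whether the annotated evidence span appears verbatim in any retrieved
# chunk after whitespace normalization.
import re

def evidence_recalled(evidence: str, retrieved_chunks: list[str]) -> bool:
    norm = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    target = norm(evidence)
    return any(target in norm(chunk) for chunk in retrieved_chunks)
```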
### Key RAG Findings
| Document Type | OCR Accuracy | RAG Accuracy | Gap |
|---|---|---|---|
| VisualStyle | 82.9% | 52.8% | -30.1 pts (blind spot) |
| CrosspageTable | 40.7% | 63.8% | +23.1 pts (LLM compensates) |
| UltraWide | 28.1% | 49.1% | low-low (structural failure) |
| MultiFont | 97.2% | 97.5% | ≈0 (aligned) |
**High OCR accuracy does not guarantee strong RAG performance.** VisualStyle documents demonstrate the largest OCR–RAG discrepancy: despite 82.9% character-level accuracy, only 52.8% RAG accuracy is achieved because OCR strips visual formatting cues (strikethroughs, color emphasis) that encode critical semantics.
## 📄 License
This project is released under an open-source license. Please comply with relevant laws and regulations when using this dataset. The data is for research and academic purposes only.
## Acknowledgement
- Thanks to [OmniDocBench](https://github.com/opendatalab/OmniDocBench) for the OCR metric calculation.
- Thanks to [FlashRAG](https://github.com/RUC-NLPIR/FlashRAG) for the RAG pipeline framework.
- Thanks to [ragas](https://github.com/vibrantlabsai/ragas) for the RAG metric evaluation.
## 🖊️ Citation
If you use InduOCRBench in your research, please consider citing:
```bibtex
@misc{induocrbench,
  title={When Good OCR Is Not Enough: Benchmarking OCR Robustness for Retrieval-Augmented Generation},
  author={Lin Sun and Wangdexian and Jingang Huang and Linglin Zhang and Change Jia and Zhengwei Cheng and Xiangzheng Zhang},
  year={2026},
  eprint={2605.00911},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2605.00911},
}
``` |