<h1 align="center">InduOCRBench</h1>

<div align="center">

English | <a href="./README_zh-CN.md">简体中文</a>

[\[📜 arXiv\]]() | [\[🤗 Dataset (Hugging Face)\]]()

</div>

---

## News

- **[2026-04]** The InduOCRBench paper was accepted to the ACL 2026 Industry Track. Dataset released.

---

<p align="center">
  <img src="assets/InduOCRBench_overview.png" alt="InduOCRBench overview">
</p>

---

## 📖 Introduction

**InduOCRBench** is an OCR benchmark for industrial RAG systems, covering 11 challenging document types observed in real-world enterprise workflows. It addresses the gap between traditional character-level OCR metrics and actual downstream RAG utility by evaluating OCR robustness in terms of both transcription fidelity and end-to-end retrieval performance.

**Key features:**

- **Real-world scenarios**: Data sampled from 10,000 documents spanning 12 industries.
- **Scale and diversity**: **570** PDF documents and **3,402** pages covering **11** challenge types plus 1 Normal category.
- **High-quality annotations**: Fine-grained Hybrid Markdown annotations (Markdown + HTML tables + LaTeX formulas + style tags), with a 3-stage human-in-the-loop quality-control process achieving 98% accuracy.
- **Dual evaluation tracks**: OCR fidelity (character/structure metrics) and RAG impact (end-to-end retrieval and generation accuracy).

**Key findings:**

- Models that achieve near-perfect scores on standard benchmarks such as OmniDocBench decline sharply on InduOCRBench (e.g., PP-StructureV3 drops 26.4 points and PaddleOCR-VL drops 14.7 points).
- High OCR accuracy does **not** necessarily translate into strong downstream RAG performance: `VisualStyle` documents reach 82.9% OCR accuracy yet only 52.8% RAG accuracy, a 30.1-point discrepancy.
- OCR-induced information loss is a strong and stable upstream limiting factor across all OCR-first RAG architectures.

**The benchmark provides two evaluation tracks:**

1. **OCR Fidelity Evaluation**: character/structure-level metrics comparing OCR output against ground-truth Markdown (`ocr_data/`).
2. **RAG Impact Evaluation**: end-to-end pipeline evaluation measuring how OCR quality affects retrieval and answer accuracy (`RAG_eval/`).

---

## 📊 Dataset Statistics

| Statistic | Value |
|---|---|
| Total documents | 570 |
| Total pages | 3,402 |
| Industries covered | 12 |
| Document categories | 11 challenge types + 1 Normal |
| QA pairs (RAG eval) | 2,071 |
| Annotation format | Hybrid Markdown (Markdown + HTML tables + LaTeX formulas + style tags) |

The 11 challenge document types:

- ComplexBackground
- HighPixel
- UltraLong
- MultiColumn
- UltraWide
- HistoryBooks
- Handwriting
- MultiFont
- VisualStyle
- Watermark
- CrosspageTable

---

## 📂 Dataset Structure

The OCR track lives under `ocr_data/` (original files plus annotations in two formats) and the RAG track under `RAG_eval/`:

```text
InduOCRBench/
├── ocr_data/
│   ├── pdf.zip          # Original PDF documents (570 files, 3,402 pages)
│   ├── md.zip           # [Recommended] Ground-truth Markdown for OCR evaluation
│   └── md_original.zip  # Full-fidelity annotations preserving all visual style tags
├── RAG_eval/
│   ├── QA_pairs.jsonl   # QA pairs for RAG pipeline evaluation
│   └── doc_md/          # Ground-truth Markdown files referenced by QA_pairs.jsonl
├── README.md
└── README_zh-CN.md
```

- **md_original**: Full-fidelity Markdown annotations preserving all visual style tags (e.g., font, color, alignment, layout). Suitable for studies requiring high-fidelity document reconstruction.
- **md**: Style-stripped Markdown annotations containing only textual content. This version serves as the standard ground truth for OCR evaluation, ensuring fair comparison.
- **doc_md**: Hybrid Markdown annotations for RAG construction. Style information is preserved for VisualStyle documents and removed for the other document types. This version is the standard ground truth for RAG indexing and QA evaluation.
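
To make the layout concrete, here is a minimal Python sketch that pairs each PDF with its ground-truth Markdown after the archives are extracted. The `pdf/` and `md/` directory names are assumptions inferred from the archive names.

```python
from pathlib import Path

# Assumed layout after `unzip pdf.zip` and `unzip md.zip` inside ocr_data/:
# one ground-truth .md file per document, named after its source PDF.
PDF_DIR = Path("ocr_data/pdf")
MD_DIR = Path("ocr_data/md")

def load_pairs():
    """Yield (pdf_path, ground_truth_markdown) for every annotated document."""
    for pdf_path in sorted(PDF_DIR.glob("*.pdf")):
        md_path = MD_DIR / f"{pdf_path.stem}.md"
        if not md_path.exists():  # skip documents without a matching annotation
            continue
        yield pdf_path, md_path.read_text(encoding="utf-8")

if __name__ == "__main__":
    pairs = list(load_pairs())
    print(f"Loaded {len(pairs)} PDF / ground-truth pairs")
```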

## 🚀 OCR Evaluation

This benchmark uses the `md2md` method from **OmniDocBench** for evaluation. For details, see: https://github.com/opendatalab/OmniDocBench/tree/main

### Evaluation Result

<div align="center">
Table 1. OCR fidelity evaluation on InduOCRBench using md2md metrics. <strong>Bold</strong> marks the best and <ins>underlined</ins> the second-best score in each column.
</div>

<table style="width:100%; border-collapse: collapse;">
  <thead>
    <tr>
      <th>Model Type</th>
      <th>Methods</th>
      <th>Size</th>
      <th>Overall&#x2191;</th>
      <th>Text<sup>EDS</sup>&#x2191;</th>
      <th>Formula<sup>CDM</sup>&#x2191;</th>
      <th>Table<sup>TEDS</sup>&#x2191;</th>
      <th>Table<sup>TEDS-S</sup>&#x2191;</th>
      <th>Read Order<sup>EDS</sup>&#x2191;</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="9"><strong>Specialized</strong><br><strong>VLMs</strong></td>
      <td>PaddleOCR-VL-1.5</td>
      <td>0.9B</td>
      <td><strong>79.01</strong></td>
      <td><strong>88.33</strong></td>
      <td>75.3</td>
      <td><strong>73.41</strong></td>
      <td><strong>77.27</strong></td>
      <td>85.3</td>
    </tr>
    <tr>
      <td>PaddleOCR-VL</td>
      <td>0.9B</td>
      <td><ins>78.24</ins></td>
      <td><ins>88.1</ins></td>
      <td>74.6</td>
      <td><ins>72.03</ins></td>
      <td>75.87</td>
      <td>85.6</td>
    </tr>
    <tr>
      <td>Logics-Parsing-v2</td>
      <td>4B</td>
      <td>75.71</td>
      <td>84.94</td>
      <td>72.3</td>
      <td>69.90</td>
      <td>76.17</td>
      <td><strong>88.9</strong></td>
    </tr>
    <tr>
      <td>MinerU2.5-Pro</td>
      <td>1.2B</td>
      <td>74.47</td>
      <td>81.63</td>
      <td><ins>75.8</ins></td>
      <td>65.99</td>
      <td>70.46</td>
      <td>79.1</td>
    </tr>
    <tr>
      <td>FireRed-OCR</td>
      <td>2B</td>
      <td>74.09</td>
      <td>87.9</td>
      <td>72.4</td>
      <td>61.98</td>
      <td>66.45</td>
      <td><ins>85.8</ins></td>
    </tr>
    <tr>
      <td>MinerU2.5</td>
      <td>1.2B</td>
      <td>72.50</td>
      <td>81.8</td>
      <td>75.4</td>
      <td>60.31</td>
      <td>63.10</td>
      <td>84.4</td>
    </tr>
    <tr>
      <td>GLM-OCR</td>
      <td>0.9B</td>
      <td>68.64</td>
      <td>63.18</td>
      <td>72.1</td>
      <td>70.64</td>
      <td><ins>76.72</ins></td>
      <td>77.2</td>
    </tr>
    <tr>
      <td>Hunyuan-OCR</td>
      <td>0.9B</td>
      <td>68.08</td>
      <td>86.1</td>
      <td>65.6</td>
      <td>52.53</td>
      <td>58.34</td>
      <td>85.7</td>
    </tr>
    <tr>
      <td>DeepSeek-OCR</td>
      <td>1.2B</td>
      <td>61.46</td>
      <td>75.5</td>
      <td>61.8</td>
      <td>47.07</td>
      <td>49.31</td>
      <td>81.8</td>
    </tr>
    <tr>
      <td rowspan="4"><strong>General</strong><br><strong>VLMs</strong></td>
      <td>Gemini-2.5 Pro</td>
      <td>-</td>
      <td>74.53</td>
      <td>83.1</td>
      <td><strong>77.2</strong></td>
      <td>63.29</td>
      <td>67.28</td>
      <td>81.1</td>
    </tr>
    <tr>
      <td>Qwen3-VL-235B</td>
      <td>235B</td>
      <td>70.91</td>
      <td>83.3</td>
      <td>74.8</td>
      <td>54.63</td>
      <td>59.43</td>
      <td>82.1</td>
    </tr>
    <tr>
      <td>Ovis2.6-30B-A3B</td>
      <td>30B</td>
      <td>59.34</td>
      <td>60.2</td>
      <td>65.8</td>
      <td>52.03</td>
      <td>57.00</td>
      <td>64.4</td>
    </tr>
    <tr>
      <td>GPT-4o</td>
      <td>-</td>
      <td>52.01</td>
      <td>60.8</td>
      <td>58.1</td>
      <td>37.15</td>
      <td>43.83</td>
      <td>70.0</td>
    </tr>
    <tr>
      <td rowspan="2"><strong>Pipeline</strong><br><strong>Tools</strong></td>
      <td>MinerU2-pipeline</td>
      <td>-</td>
      <td>66.54</td>
      <td>80.1</td>
      <td>63.2</td>
      <td>56.32</td>
      <td>62.05</td>
      <td>81.3</td>
    </tr>
    <tr>
      <td>PP-StructureV3</td>
      <td>-</td>
      <td>60.32</td>
      <td>78.2</td>
      <td>53.7</td>
      <td>49.07</td>
      <td>62.06</td>
      <td>79.1</td>
    </tr>
  </tbody>
</table>

### Evaluation Setup

To ensure a fair comparison, please follow these settings:

1. **Ground truth**: Use the Markdown files extracted from `ocr_data/md.zip` as the reference.
2. **Metric**: Use the `md2md` evaluation method to compute similarity scores.

> Note: Although `md_original` is provided, standard leaderboard evaluations should uniformly use the data under the `md` directory so that results remain comparable.

## 📝 Usage

1. Download and extract the data:

```bash
cd ocr_data
unzip pdf.zip
unzip md.zip
```

2. Run your model on the documents in the `pdf` directory to generate predictions in Markdown format.

3. Use the evaluation script to compare your predictions against the ground truth under the `md` directory, as sketched below.
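
For a quick sanity check before running the full OmniDocBench `md2md` evaluation, a simplified edit-distance similarity (the idea behind the Text<sup>EDS</sup> metric) can be computed directly. This is only a rough proxy, not the official metric, and it assumes your predictions mirror the ground-truth file names under a hypothetical `pred/` directory.

```python
from pathlib import Path

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (O(len(a) * len(b)))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def eds(pred: str, gt: str) -> float:
    """Normalized edit-distance similarity in [0, 1]."""
    if not pred and not gt:
        return 1.0
    return 1.0 - edit_distance(pred, gt) / max(len(pred), len(gt))

gt_dir, pred_dir = Path("ocr_data/md"), Path("pred")  # pred/ is hypothetical
scores = [
    eds((pred_dir / p.name).read_text(encoding="utf-8"),
        p.read_text(encoding="utf-8"))
    for p in sorted(gt_dir.glob("*.md")) if (pred_dir / p.name).exists()
]
if scores:
    print(f"mean EDS over {len(scores)} files: {sum(scores) / len(scores):.4f}")
```

Official leaderboard numbers should still come from the OmniDocBench `md2md` pipeline, which also scores formulas, tables, and reading order.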

## RAG Impact Evaluation

The RAG evaluation track measures how OCR quality affects end-to-end retrieval-augmented generation performance, going beyond character-level metrics to capture structural and semantic preservation.

### RAG Evaluation Data

The `RAG_eval/` directory contains:

- **`QA_pairs.jsonl`**: 2,071 QA pairs covering all 11 document challenge types.
- **`doc_md/`**: Ground-truth Markdown files for RAG indexing, in the `doc_md` format (style information preserved for VisualStyle documents, removed for the others).

Each QA entry has the following fields:

```json
{
  "doc_type": "cross_page_table",
  "filename": "cross_page_table_1.md",
  "title": "Document title",
  "file_path": "RAG_eval/doc_md/cross_page_table_1.md",
  "question_category": "...",
  "question": "...",
  "answer": "...",
  "evidence": "..."
}
```
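
Reading the file is plain jsonl parsing; a minimal sketch that loads all entries and tallies them by challenge type:

```python
import json
from collections import Counter
from pathlib import Path

qa_path = Path("RAG_eval/QA_pairs.jsonl")
entries = [json.loads(line)
           for line in qa_path.read_text(encoding="utf-8").splitlines()
           if line.strip()]

by_type = Counter(e["doc_type"] for e in entries)
print(f"{len(entries)} QA pairs across {len(by_type)} doc types")
for doc_type, n in by_type.most_common():
    print(f"  {doc_type}: {n}")

# file_path links each question to the ground-truth document to index:
print(entries[0]["question"], "->", entries[0]["file_path"])
```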

### RAG Pipeline Setup

We adopt the [FlashRAG](https://github.com/RUC-NLPIR/FlashRAG) Naive pipeline with the following configuration:

| Component | Setting |
|---|---|
| Embedding | BGE-M3 |
| Retrieval | Dense, Flat index, top-100 |
| Reranking | BGE-Rerank-V2-M3, top-10 |
| Generation | ChatGPT-5 |
| Chunking | HTML tree structure, max 256 tokens |
| Evaluation | RAGAS framework (GPT-OSS-120B as judge) |
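
The chunking step is structure-aware: documents are split along their HTML/Markdown element tree so that tables and headings are not cut mid-element, subject to the 256-token budget. The sketch below is a simplified stand-in (block-level splitting with a whitespace token proxy), not the exact FlashRAG chunker.

```python
def chunk_document(markdown: str, max_tokens: int = 256) -> list[str]:
    """Greedy structure-aware chunking: keep whole blocks (paragraphs,
    headings, HTML tables) intact and open a new chunk whenever adding
    the next block would exceed the token budget. Oversized blocks
    (e.g., a long table) are kept whole rather than cut mid-element."""
    blocks = [b for b in markdown.split("\n\n") if b.strip()]
    chunks: list[str] = []
    current: list[str] = []
    current_len = 0
    for block in blocks:
        n = len(block.split())  # whitespace tokens as a rough proxy
        if current and current_len + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(block)
        current_len += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```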

### RAG Evaluation Metrics

- **Context Recall**: Measures whether the retrieved passages contain the evidence supporting the ground-truth answer.
- **Answer Accuracy**: Evaluates the correctness of the generated answer relative to the ground truth.
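
Both scores can be computed with the ragas library. The sketch below uses the classic v0.1-style API, with `answer_correctness` standing in for answer accuracy; metric names and import paths vary across ragas versions, so treat it as illustrative only.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, context_recall

# One row per QA pair: the pipeline's answer, the retrieved passages,
# and the gold answer taken from QA_pairs.jsonl.
data = Dataset.from_dict({
    "question": ["..."],      # question text from QA_pairs.jsonl
    "answer": ["..."],        # answer generated by the RAG pipeline
    "contexts": [["..."]],    # passages kept after retrieval + rerank
    "ground_truth": ["..."],  # gold answer from QA_pairs.jsonl
})

result = evaluate(data, metrics=[context_recall, answer_correctness])
print(result)
```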

### Key RAG Findings

| Document Type | OCR Accuracy | RAG Accuracy | Gap |
|---|---|---|---|
| VisualStyle | 82.9% | 52.8% | -30.1 pts (blind spot) |
| CrosspageTable | 40.7% | 63.8% | +23.1 pts (LLM compensates) |
| UltraWide | 28.1% | 49.1% | +21.0 pts (low on both; structural failure) |
| MultiFont | 97.2% | 97.5% | ≈0 (aligned) |

**High OCR accuracy does not guarantee strong RAG performance.** VisualStyle documents show the largest OCR-RAG discrepancy: despite 82.9% character-level accuracy, they reach only 52.8% RAG accuracy because OCR strips the visual formatting cues (strikethroughs, color emphasis) that encode critical semantics.

## 📄 License

This project is released under an open-source license. Please comply with applicable laws and regulations when using this dataset; the data is intended for research and academic purposes only.

## Acknowledgement

- Thanks to [OmniDocBench](https://github.com/opendatalab/OmniDocBench) for the OCR metric calculation.
- Thanks to [FlashRAG](https://github.com/RUC-NLPIR/FlashRAG) for the RAG pipeline framework.
- Thanks to [ragas](https://github.com/vibrantlabsai/ragas) for the RAG metric evaluation.

## 🖊️ Citation

If you use InduOCRBench in your research, please consider citing:

```bibtex
@misc{induocrbench,
  title={When Good OCR Is Not Enough: Benchmarking OCR Robustness for Retrieval-Augmented Generation},
  author={Lin Sun and Wangdexian and Jingang Huang and Linglin Zhang and Change Jia and Zhengwei Cheng and Xiangzheng Zhang},
  year={2026},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://github.com/Qihoo360/InduOCRBench},
}
```