---
license: other
pretty_name: RPC-Bench
task_categories:
- question-answering
language:
- en
tags:
- research-paper
- document-understanding
- multimodal
- benchmark
- llm
- vlm
---
<div align="center">
# RPC-Bench: A Fine-grained Benchmark for Research Paper Comprehension
</div>
<p align="center">
🌐 <a href="https://rpc-bench.github.io/" target="_blank">Project Page</a> •
💻 <a href="https://github.com/RPC-Bench/PRC-Bench" target="_blank">GitHub</a> •
📄 <a href="https://arxiv.org/abs/2601.14289" target="_blank">Paper</a> •
🧩 <a href="https://community.modelscope.cn/" target="_blank">ModelScope</a>
</p>
<div align="center">
<img src="assets/pipeline.png" width="100%" />
</div>
RPC-Bench is a fine-grained benchmark for research paper comprehension. It is built from review-rebuttal exchanges of high-quality academic papers and supports both text-only and visual evaluation through complementary paper representations.
## Data Structure
RPC-Bench is split into `train`, `dev`, and `test` subsets. Each split follows the directory layout shown below, and every paper is indexed in `manifest.jsonl`.
- `md/` contains Markdown files parsed from each paper by MinerU. These files provide the text input for LLM-oriented evaluation.
- `parse/` contains the full MinerU parsing outputs for each paper, including structured layout and content artifacts.
- `pdf/` contains the original paper PDFs.
- `vlm/` contains page images rendered from the PDFs with PyMuPDF at 200 DPI for VLM-oriented evaluation.
```text
RPC-Bench/
├── README.md
├── manifest.jsonl
├── parse/
│   ├── train/
│   │   └── <paper_id>/
│   ├── dev/
│   │   └── <paper_id>/
│   └── test/
│       └── <paper_id>/
├── md/
│   ├── train/
│   │   └── <paper_id>/
│   │       └── <paper_id>.md
│   ├── dev/
│   │   └── <paper_id>/
│   │       └── <paper_id>.md
│   └── test/
│       └── <paper_id>/
│           └── <paper_id>.md
├── pdf/
│   ├── train/
│   │   └── <paper_id>.pdf
│   ├── dev/
│   │   └── <paper_id>.pdf
│   └── test/
│       └── <paper_id>.pdf
└── vlm/
    ├── train/
    │   └── <paper_id>/
    ├── dev/
    │   └── <paper_id>/
    └── test/
        └── <paper_id>/
```
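For orientation, here is a minimal sketch of how one might walk the dataset starting from `manifest.jsonl`. The field names `paper_id` and `split` are assumptions about the manifest schema, not confirmed by this card; adjust them to the actual keys in your copy.

```python
import json
from pathlib import Path

ROOT = Path("RPC-Bench")

# Assumed manifest schema: one JSON object per line with
# "paper_id" and "split" keys (hypothetical; check your copy).
with open(ROOT / "manifest.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

for rec in records:
    split, pid = rec["split"], rec["paper_id"]
    md_path = ROOT / "md" / split / pid / f"{pid}.md"   # parsed Markdown text
    pdf_path = ROOT / "pdf" / split / f"{pid}.pdf"      # original PDF
    page_dir = ROOT / "vlm" / split / pid               # rendered page images
    if md_path.exists():
        text = md_path.read_text(encoding="utf-8")      # LLM-oriented input
        print(pid, len(text))
```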
## Practical Uses
RPC-Bench can be used to evaluate paper-centric systems that require holistic document understanding rather than local snippet matching.
- Research paper comprehension: evaluate models on full-paper understanding, including core concepts, methods, and experimental findings.
- Long-context evaluation: test whether longer context windows or long-context architectures improve document-level reasoning.
- Multimodal reasoning: assess models that combine textual evidence with page-level figures, tables, and diagrams in the original PDF layout (see the rendering sketch after this list).
- RAG system diagnosis: probe retrieval, chunking, and evidence-fusion strategies for paper-centric workflows beyond snippet-level retrieval accuracy.
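The 200 DPI page images in `vlm/` can be reproduced, or re-rendered at a different resolution, from the PDFs in `pdf/` with PyMuPDF. A minimal sketch follows; the output naming scheme (`page_000.png`, …) is an assumption for illustration, not the dataset's documented convention.

```python
import fitz  # PyMuPDF
from pathlib import Path

def render_pages(pdf_path: str, out_dir: str, dpi: int = 200) -> None:
    """Rasterize each page of a PDF to a PNG at the given DPI."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with fitz.open(pdf_path) as doc:
        for i, page in enumerate(doc):
            pix = page.get_pixmap(dpi=dpi)  # render page to a pixmap
            pix.save(str(out / f"page_{i:03d}.png"))  # assumed naming scheme

# Example: re-render one test paper at the card's stated 200 DPI.
# render_pages("RPC-Bench/pdf/test/<paper_id>.pdf", "rendered/<paper_id>")
```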
## Citation
```bibtex
@article{chen2026rpc,
  title={RPC-Bench: A Fine-grained Benchmark for Research Paper Comprehension},
  author={Chen, Yelin and Zhang, Fanjin and Sun, Suping and Pang, Yunhe and Wang, Yuanchun and Song, Jian and Li, Xiaoyan and Hou, Lei and Zhao, Shu and Tang, Jie and others},
  journal={arXiv preprint arXiv:2601.14289},
  year={2026}
}
```