---
pretty_name: Paper2Thesis
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
---

# Paper2Thesis

> **Anonymized release for NeurIPS 2026 Datasets and Benchmarks Track review.**
> The non-anonymous version, including author information and a permanent DOI, will be released upon acceptance.

## Overview

Paper2Thesis is a benchmark for **extreme-length multi-document synthesis**. Each instance maps a set of input arXiv research papers to a target arXiv PhD thesis. The task requires generating a thesis-scale document that integrates multiple papers into a coherent, structured narrative.

This benchmark targets a regime beyond standard text generation, involving **long-context reasoning**, **cross-document integration**, and **structured generation**.

---

## Task

Each example is defined as:

`[input_paper_ids] → target_thesis_id`

The objective is to synthesize the input papers into a unified thesis-level document. Unlike summarization, the task requires **expansion**, **restructuring**, and **integration** across multiple sources.

---

## Dataset Structure

The dataset is provided in JSONL format:

    paper2thesis/
      data/
        train.jsonl
        validation.jsonl
        test.jsonl

Each line in a JSONL file corresponds to one example.

Example:
    {
      "example_id": "p2t_000033",
      "input_paper_ids": ["2208.09377", "2503.22525", "2405.08703"],
      "target_thesis_id": "2508.07998",
      "field": "astro-ph.SR",
      "target_year": 2025,
      "num_input_papers": 3,
      "input_total_words": 85000,
      "target_total_words": 100000,
      "target_page_count": 180,
      "input_versions": ["v1", "v1", "v1"],
      "target_version": "v1",
      "input_licenses": ["...", "...", "..."],
      "target_license": "..."
    }
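
Assuming the directory layout and field names shown above (the function name `load_split` is illustrative, not part of the release), a split can be read with only the standard library:

```python
import json

def load_split(path):
    """Read one JSONL split file into a list of example dicts."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                examples.append(json.loads(line))
    return examples
```

For example, `len(load_split("data/test.jsonl"))` gives the number of test examples.
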

---

## Construction

The data are derived from arXiv.

Pipeline:

1. Scan arXiv submission comments for explicit terms such as "PhD thesis" or "dissertation"
2. Enforce a single-author constraint and a minimum length of 100 pages
3. Verify source availability (LaTeX or Word)
4. Identify self-authored publications within each thesis's bibliography via author surname matching
5. Map candidate chapter-forming papers to their arXiv records, first extracting explicit arXiv IDs from the citation strings and then falling back to a title-match search
6. Manually select the chapter-forming papers from these candidates
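
Step 1 can be approximated with a pattern match over the arXiv `comments` metadata field; this sketch is illustrative, not necessarily the exact filter used to construct the dataset:

```python
import re

# Case-insensitive cues for thesis submissions in arXiv comment strings
# (illustrative patterns; the actual pipeline's filter may differ).
THESIS_PATTERN = re.compile(
    r"\b(phd thesis|ph\.?\s?d\.?\s+thesis|dissertation)\b",
    re.IGNORECASE,
)

def looks_like_thesis(comment):
    """Return True if an arXiv comment string suggests a PhD thesis."""
    return bool(THESIS_PATTERN.search(comment or ""))
```
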
---

## Splits

The dataset is split at the **thesis level**:

- Each target thesis appears in exactly one split
- All associated input papers remain within the same split
- No input paper identifier appears in more than one split

This design prevents document-level leakage.
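
These invariants can be checked directly from the identifier fields; a minimal sketch, assuming the JSONL schema shown earlier:

```python
import json

def split_ids(path):
    """Collect target-thesis and input-paper IDs from one split file."""
    theses, papers = set(), set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                example = json.loads(line)
                theses.add(example["target_thesis_id"])
                papers.update(example["input_paper_ids"])
    return theses, papers

def check_disjoint(paths):
    """Raise AssertionError if any thesis or input-paper ID is shared across splits."""
    seen_theses, seen_papers = set(), set()
    for path in paths:
        theses, papers = split_ids(path)
        assert not (theses & seen_theses), f"thesis leakage in {path}"
        assert not (papers & seen_papers), f"input-paper leakage in {path}"
        seen_theses |= theses
        seen_papers |= papers
```
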
---

## Statistics

Typical characteristics:

- Input papers per example: ~4.73 (mean)
- Total input length per example: ~60k words
- Target thesis length: ~70k words

At this scale, a single example exceeds the context window of most current models.
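
Such aggregates can be recomputed from the released metadata alone; a one-function sketch over the numeric fields of a split (field names as in the example record above):

```python
import json

def mean_field(path, field):
    """Average a numeric metadata field over one JSONL split file."""
    with open(path, encoding="utf-8") as f:
        values = [json.loads(line)[field] for line in f if line.strip()]
    return sum(values) / len(values)
```

For instance, `mean_field("data/train.jsonl", "num_input_papers")` would reproduce the mean paper count for the training split.
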
---

## Usage

This dataset is intended for:

- Long-context language modeling
- Multi-document synthesis
- Scientific writing generation
- Structure and discourse modeling at document scale

---

## Validation

A validation script is provided to check dataset integrity.

It checks:

- JSON format validity
- Presence of required fields
- arXiv ID format
- Duplicate examples
- Duplicate target theses
- Input-paper overlap across splits
- Consistency of `num_input_papers` with the length of `input_paper_ids`
- Presence of license fields

Run:

    python scripts/validate_ids.py --data_dir data/
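
Several of the listed checks are straightforward to sketch. The snippet below validates one record; the ID regex covers only post-2007 `YYMM.NNNNN` arXiv identifiers (an assumption — old-style IDs such as `astro-ph/0601001` would need a second pattern), and the required-field list is abbreviated:

```python
import re

NEW_STYLE_ID = re.compile(r"^\d{4}\.\d{4,5}$")  # e.g. "2208.09377"
REQUIRED = ("example_id", "input_paper_ids", "target_thesis_id", "num_input_papers")

def validate_record(example):
    """Return a list of human-readable problems found in one example dict."""
    problems = []
    for field in REQUIRED:
        if field not in example:
            problems.append(f"missing field: {field}")
    for paper_id in example.get("input_paper_ids", []):
        if not NEW_STYLE_ID.match(paper_id):
            problems.append(f"malformed arXiv ID: {paper_id}")
    if "num_input_papers" in example and "input_paper_ids" in example:
        if example["num_input_papers"] != len(example["input_paper_ids"]):
            problems.append("num_input_papers does not match input_paper_ids")
    return problems
```
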
---

## Data Access

This dataset does **not** include the full text of papers or theses.

The release contains only:

- arXiv identifiers
- metadata
- dataset splits

To retrieve PDFs:

    python scripts/download_arxiv_pdfs.py --input data/test.jsonl --out_dir local_pdfs/

To retrieve LaTeX/source files:

    python scripts/download_arxiv_sources.py --input data/test.jsonl --out_dir local_sources/
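
If the bundled scripts are unavailable, PDFs can also be fetched directly from arXiv's export mirror. A minimal sketch (the delay follows arXiv's guidance to rate-limit automated requests; slashes in old-style IDs are replaced so they form valid filenames):

```python
import time
import urllib.request
from pathlib import Path

def download_pdfs(arxiv_ids, out_dir, delay=3.0):
    """Fetch PDFs from export.arxiv.org, pausing `delay` seconds between requests."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for arxiv_id in arxiv_ids:
        target = out / (arxiv_id.replace("/", "_") + ".pdf")
        if target.exists():  # skip files already downloaded
            continue
        url = f"https://export.arxiv.org/pdf/{arxiv_id}"
        urllib.request.urlretrieve(url, target)
        time.sleep(delay)  # be polite to arXiv's servers
```
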
---

## Licensing

This dataset (metadata and scripts) is released under the
Creative Commons Attribution 4.0 International (CC BY 4.0) license.

The dataset does not redistribute arXiv PDFs, source files, or extracted text.

All referenced arXiv documents remain subject to their original licenses, including:

- the arXiv Non-Exclusive Distribution License
- Creative Commons licenses (e.g., CC BY, CC BY-SA)

Users are responsible for complying with the license terms of each document.

---

## Citation

Author and citation information have been redacted to preserve anonymity during peer review. A complete citation will be provided upon acceptance.