---
task_categories:
  - any-to-any
language:
  - en
  - zh
size_categories:
  - n<1K
---

# PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation

[🌐 Homepage] [📖 Paper] [💻 Code]

This repository hosts the PresentBench benchmark dataset.

## 📄 Abstract

Slides serve as a critical medium for conveying information in presentation-oriented scenarios such as academia, education, and business. Despite their importance, creating high-quality slide decks remains time-consuming and cognitively demanding. Recent advances in generative models, such as Nano Banana Pro, have made automated slide generation increasingly feasible. However, existing evaluations of slide generation are often coarse-grained and rely on holistic judgments, making it difficult to accurately assess model capabilities or track meaningful advances in the field. In practice, the lack of fine-grained, verifiable evaluation criteria poses a critical bottleneck for both research and real-world deployment.

In this paper, we propose PresentBench, a fine-grained, rubric-based benchmark for evaluating automated real-world slide generation. It contains 238 evaluation instances, each supplemented with background materials required for slide creation. Moreover, we manually design an average of 54.1 checklist items per instance, each formulated as a binary question, to enable fine-grained, instance-specific evaluation of the generated slide decks.

Extensive experiments show that PresentBench provides more reliable evaluation results than existing methods, and exhibits significantly stronger alignment with human preferences. Furthermore, our benchmark reveals that NotebookLM significantly outperforms other slide generation methods, highlighting substantial recent progress in this domain.

πŸ† Leaderboard

Comparative results across five domains. The highest score in each column is shown in **bold**, and the second-highest in *italics*.

| Method | Total | Academia | Advertising | Education | Economics | Talk |
|---|---|---|---|---|---|---|
| NotebookLM | **62.5** | **68.6** | **54.9** | **55.0** | **58.2** | **69.2** |
| Manus 1.6 | *57.8* | *64.0* | *52.4* | 50.7 | *52.8* | *63.0* |
| Tiangong | 54.7 | 59.2 | 44.5 | *53.7* | 46.5 | 59.8 |
| Zhipu | 53.6 | 57.5 | 41.0 | 52.5 | 47.6 | 59.0 |
| PPTAgent v2 | 50.2 | 53.3 | 46.7 | 46.1 | 46.1 | 56.6 |
| Gamma | 49.2 | 54.4 | 46.7 | 47.8 | 35.1 | 56.3 |
| Doubao | 48.0 | 50.3 | 42.9 | 45.4 | 44.0 | 54.7 |
| Qwen | 35.9 | 39.4 | 31.9 | 36.6 | 26.5 | 38.6 |

πŸ—‚οΈ Dataset Structure

Domains under `<dataset_root>/` include (non-exhaustive):

- `academia/`
- `advertising/`
- `economics/`
- `education/`
- `talk/`

Each leaf case typically looks like:

- `material.pdf|material.md|material_N.md|material_N.pdf` – source documents (PDFs, text, etc.)
- `generation_task/` – prompts and evaluation configuration:
  - `generation_prompt.md`
  - `judge_prompt.json`
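
The layout above can be read with a short helper. Below is a minimal sketch: the file names follow the structure listed here, but the internal schema of `judge_prompt.json` is not documented in this card, so it is loaded as opaque JSON.

```python
import json
from pathlib import Path


def load_case(case_dir: str) -> dict:
    """Load one PresentBench case: source materials plus its prompts.

    Assumes the directory layout described above; judge_prompt.json is
    returned as parsed-but-uninterpreted JSON.
    """
    case = Path(case_dir)
    # material.pdf / material.md / material_N.{md,pdf} source documents
    materials = sorted(
        p for p in case.iterdir()
        if p.name.startswith("material") and p.suffix in {".pdf", ".md"}
    )
    gen_dir = case / "generation_task"
    prompt = (gen_dir / "generation_prompt.md").read_text(encoding="utf-8")
    judge = json.loads((gen_dir / "judge_prompt.json").read_text(encoding="utf-8"))
    return {"materials": materials, "generation_prompt": prompt, "judge_prompt": judge}
```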

βš™οΈ Usage

To evaluate slide generation systems with this dataset, please follow the evaluation pipeline and scripts provided in the code repository (e.g., environment setup, data preparation, inference, and evaluation).
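
Since each checklist item is a binary question, one natural aggregation (an assumption for illustration, not the official scoring script) maps a deck's yes/no verdicts to the 0-100 scores reported in the leaderboard as the percentage of items satisfied:

```python
def rubric_score(answers: list[bool]) -> float:
    """Aggregate binary checklist verdicts into a 0-100 score.

    Each entry is one checklist item's verdict for a generated deck;
    the score is the percentage of items satisfied. This mirrors the
    benchmark's fine-grained, instance-specific rubric, though the
    exact aggregation used by the authors may differ.
    """
    if not answers:
        raise ValueError("empty checklist")
    return 100.0 * sum(answers) / len(answers)
```

For example, a deck satisfying 41 of 54 checklist items would score about 75.9.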

## 📜 Licensing Information

The PresentBench benchmark aggregates background materials collected from multiple public sources. Each source remains governed by its own original license and terms of use.

- **Data Source Licenses:** Users must strictly comply with the licensing terms and conditions of each original background-material source included in this benchmark. We recommend carefully reviewing the original license for each source before use.

- **Prompts and Evaluation Rubrics:** The task instructions and evaluation checklists are created by us. To the extent that we hold any related intellectual property rights, these contributions are made available under the Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC-4.0) license.

- **Copyright Concerns:** This benchmark is compiled for academic research purposes. If you believe any content in PresentBench infringes upon your copyright, please contact us immediately at chen.xs.gm[at]gmail.com. We will promptly review and address the matter, including removal of the concerned content upon verification.

## 📚 Citation

BibTeX:

```bibtex
@article{chen2026presentbench,
  title={PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation},
  author={Chen, Xin-Sheng and Zhu, Jiayu and Li, Pei-lin and Wang, Hanzheng and Yang, Shuojin and Guo, Meng-Hao},
  journal={arXiv preprint arXiv:2603.07244},
  year={2026}
}
```