---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 11563543600
      num_examples: 16450
    - name: test
      num_bytes: 1638512366
      num_examples: 3159
  download_size: 15933142593
  dataset_size: 13202055966
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# TabComp 📊

A Benchmark for OCR-Free Visual Table Reading Comprehension

This dataset accompanies the paper *TabComp: A Dataset for Visual Table Reading Comprehension*.

TabComp evaluates Vision-Language Models (VLMs) on their ability to read, understand, and reason over table images without relying on OCR, using generative question answering.

πŸ” Why TabComp?

Modern VLMs perform well on general VQA but struggle with tables, which require:

  • Structured reasoning across rows/columns
  • Understanding layout + text jointly
  • Multi-step inference over semi-structured data

πŸ‘‰ TabComp isolates this challenge and provides a focused benchmark for table understanding.

πŸ“Š Dataset Overview

  • Images: 3,318 table images
  • QA pairs: 19,610
  • Answer type: Generative (natural language)
  • Domain: Industrial documents
  • Text types: Printed + handwritten

## Task Definition

Given:

- A table image
- A question

Generate:

- A natural language answer requiring table comprehension

## 🧠 What Makes It Challenging?

- ❌ No OCR signals
- ✅ Dense textual and structural information
- ✅ Long-range dependencies across table cells
- ✅ Generative answers (not extractive spans)

πŸ“ Data Format

πŸ† Leaderboard (Baseline Results) Performance on TabComp (generative metrics):

Model Setting B-4 ↑ ROUGE-L ↑ BERTScore ↑ METEOR ↑
Donut-base Fine-tuned 42.69 37.29 83.38 60.14
Donut-base End-to-end 28.59 32.24 85.06 47.19
Donut-proto Fine-tuned 6.49 17.84 73.26 19.80
Donut-proto End-to-end 34.87 37.02 87.74 56.49
UReader Zero-shot 28.14 37.64 88.04 20.71

Full metrics (BLEU-1/2/3/4, CIDEr) are available in the paper.
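ROUGE-L in the table above scores the longest common subsequence (LCS) between a generated answer and the reference. A minimal pure-Python sketch, assuming whitespace tokenization and the F1 variant; the reported scores use standard metric implementations, so treat this only as an illustration of what the metric measures:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-L: F1 over the LCS of lowercased whitespace tokens."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# An exact match scores 1.0; a partial overlap scores between 0 and 1.
score = rouge_l_f1("total is 1,250", "the total is 1,250")
```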

We welcome:

- New model evaluations
- Error analyses
- Extensions to multilingual and multi-table settings

## Contact

For collaboration, contact Somraj Gautam at gautam.8@iitj.ac.in.