Improve dataset card: add metadata, paper/code links, and evaluation instructions

#2 by nielsr HF Staff - opened

Files changed (1)
  1. README.md +83 -2
README.md CHANGED
@@ -1,6 +1,87 @@
  ---
  license: apache-2.0
  ---
  ## Citation
- If you find this dataset useful, please cite our paper:
- https://arxiv.org/abs/2602.04802
  ---
  license: apache-2.0
+ task_categories:
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - multimodal
+ - vlm
+ - benchmark
+ - visualized-text
  ---
+
+ # VISTA-Bench
+
+ [**Paper**](https://arxiv.org/abs/2602.04802) | [**GitHub**](https://github.com/QingAnLiu/VISTA-Bench)
+
+ VISTA-Bench is a systematic benchmark spanning multimodal perception, reasoning, and unimodal understanding. It evaluates **visualized text understanding** by contrasting **pure-text** and **visualized-text (VT)** questions under controlled rendering conditions.
+
+ ## Dataset Summary
+
+ Existing benchmarks predominantly focus on pure-text queries, but in real-world scenarios language frequently appears as visualized text embedded in images. VISTA-Bench evaluates whether current Vision-Language Models (VLMs) handle the two formats comparably. Extensive evaluation reveals a pronounced modality gap: models that perform well on pure-text queries often degrade substantially when equivalent semantic content is presented as visualized text.
+
+ - **Size:** 1,500 instances
+ - **Composition:** Predominantly multiple-choice questions (MCQ), with a small portion of open-ended queries
+ - **Task Taxonomy:**
+   - **Unimodal Knowledge:** 500 instances
+   - **Multimodal Knowledge:** 400 instances
+   - **Multimodal Perception:** 300 instances
+   - **Multimodal Reasoning:** 300 instances
+
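The headline comparison behind the modality gap is the per-model drop from pure-text to VT accuracy. A minimal sketch of that bookkeeping, for illustration only (`modality_gap` and the example scores are hypothetical, not part of the dataset or toolkit):

```python
def modality_gap(acc_text: dict, acc_vt: dict) -> dict:
    """Per-model accuracy drop (percentage points) when the same
    questions are rendered as visualized text (VT) instead of pure text."""
    return {
        model: round(acc_text[model] - acc_vt[model], 2)
        for model in acc_text
        if model in acc_vt  # only compare models evaluated in both settings
    }

# Hypothetical scores, for illustration only.
gaps = modality_gap(
    {"model_a": 78.4, "model_b": 70.1},  # pure-text accuracy (%)
    {"model_a": 61.0, "model_b": 65.5},  # VT accuracy (%)
)
```

A large positive gap indicates the model degrades substantially when equivalent content is presented as an image rather than as text.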
+ ## Repository Structure
+
+ ```text
+ VISTA-Bench/
+ ├─ images/             # original images (for multimodal instances)
+ ├─ questions/          # rendered question/option images (VT setting)
+ ├─ VLMEvalKit/         # evaluation toolkit
+ ├─ VISTA-Bench.tsv     # dataset index
+ └─ VISTA-Bench-VT.tsv  # dataset index (visualized text variant)
+ ```
+
+ ## Evaluation (VLMEvalKit)
+
+ VISTA-Bench is evaluated using `VLMEvalKit`. Before running evaluation, it is recommended to convert the TSV file(s) into a normalized format with absolute image paths.
+
+ ### 1) Convert TSV to normalized paths
+
+ Use the provided helper script to normalize the paths:
+
+ ```bash
+ python VISTA-Bench/VLMEvalKit/utils/convert_data_file.py \
+     --in VISTA-Bench/VISTA-Bench.tsv \
+     --out VISTA-Bench/VISTA-Bench_norm.tsv \
+     --image-prefix /ABS/PATH/TO/VISTA-Bench
+ ```
+
+ ### 2) Run evaluation
+
+ **Pure-text setting:**
+ ```bash
+ python VISTA-Bench/VLMEvalKit/run.py \
+     --data VISTA-Bench_norm \
+     --model llava_v1.5_7b \
+     --verbose
+ ```
+
+ **Visualized-text (VT) setting:**
+ ```bash
+ python VISTA-Bench/VLMEvalKit/run.py \
+     --data VISTA-Bench-VT \
+     --model llava_v1.5_7b \
+     --verbose
+ ```
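Before launching a run, it can help to sanity-check the index file produced by the conversion step. A minimal sketch using pandas (the TSV column layout is not documented here, so the code only reports what it finds rather than assuming column names):

```python
import pandas as pd


def inspect_index(tsv_path: str) -> dict:
    """Load a VISTA-Bench index TSV and report its row count and columns."""
    df = pd.read_csv(tsv_path, sep="\t")
    return {"rows": len(df), "columns": list(df.columns)}
```

Comparing the row count against the documented 1,500 instances is a quick way to confirm the file converted cleanly.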
+
  ## Citation
+
+ If you find this dataset useful, please cite the following paper:
+
+ ```bibtex
+ @article{liu2026vistabench,
+   title={VISTA-Bench: Do Vision-Language Models Really Understand Visualized Text as Well as Pure Text?},
+   author={Liu, Qing'an and Feng, Juntong and Wang, Yuhao and Han, Xinzhe and Cheng, Yujie and Zhu, Yue and Diao, Haiwen and Zhuge, Yunzhi and Lu, Huchuan},
+   journal={arXiv preprint arXiv:2602.04802},
+   year={2026}
+ }
+ ```