Volavion committed · verified
Commit 1459f0b · 1 Parent(s): c1e94ad

Add dataset card for FineSightBench-Large v1.0

Files changed (1): README.md  +165 -28
README.md CHANGED
@@ -1,34 +1,171 @@
 ---
-configs:
-- config_name: default
-  data_files:
-  - split: perception
-    path: data/perception-*
-  - split: reasoning
-    path: data/reasoning-*
-dataset_info:
-  features:
-  - name: image
-    dtype: image
-  - name: image_id
-    dtype: string
-  - name: task_type
-    dtype: string
-  - name: question
-    dtype: string
-  - name: answer
-    dtype: string
-  - name: difficulty
-    dtype: string
-  - name: metadata
-    dtype: string
-  splits:
   - name: perception
-    num_bytes: 87693559
     num_examples: 42000
   - name: reasoning
-    num_bytes: 126526373
     num_examples: 39200
-  download_size: 7116750355
-  dataset_size: 214219932
 ---
 ---
+language:
+- en
+license: apache-2.0
+task_categories:
+- visual-question-answering
+- image-classification
+- image-to-text
+pretty_name: FineSightBench-Large
+size_categories:
+- 10K<n<100K
+tags:
+- VLM-evaluation
+- fine-grained-visual-perception
+- fine-grained-visual-reasoning
+- text-in-the-wild
+- scene-text-recognition
+splits:
 - name: perception
   num_examples: 42000
 - name: reasoning
   num_examples: 39200
 ---

# FineSightBench-Large

**FineSightBench-Large** is a **10× scaled** edition of [FineSightBench](https://huggingface.co/datasets/Volavion/FineSightBench) — identical task design, difficulty sweep, answer schemas, and image regimes, with every base sample count multiplied by ten for higher statistical power and more robust per-(task, size, count) evaluation.

**FineSightBench** is a fine-grained visual benchmark for evaluating Vision-Language Models (VLMs) on pixel-level perception and reasoning tasks. It combines two complementary image regimes:

1. **Synthetic canvas** — controlled white-background images with precisely sized geometric/semantic targets (letters, animals, shapes, blocks, dots).
2. **Text in the wild** (SynthText-style) — English words rendered onto real natural-scene photographs from the [SynthText](https://github.com/ankush-me/SynthText) `bg_img` set, with **pixel-accurate control of character cap-height**.

All images are **448 × 448 px**. The primary difficulty axis is the **target pixel size** (cap-height for text), swept over `[4, 8, 12, 16, 24, 32, 48]` and mapped to `extreme / hard / medium / easy` (see the difficulty table below).

## Dataset Summary

| Split | #Samples | #Task types | Regimes |
|-------|---------:|:-----------:|---------|
| `perception` | 42 000 | 6 | synthetic canvas + text-in-the-wild |
| `reasoning` | 39 200 | 6 | synthetic canvas + text-in-the-wild |

## Dataset Structure

### `perception` split — 42 000 samples

Single-target identification tasks. Each of the 6 tasks has 7 000 samples (1 000 samples per pixel size × 7 sizes), for 42 000 in total.

| `task_type` | Description | Source |
|-------------|-------------|--------|
| `letter_recognition` | Identify a rendered uppercase letter (A–Z) | synthetic canvas |
| `animal_recognition` | Identify an animal silhouette (cat/dog/fish/bird/rabbit/turtle) | synthetic canvas |
| `shape_recognition` | Identify a geometric shape (circle/triangle/square/star/diamond/pentagon/hexagon/cross) | synthetic canvas |
| `block_recognition` | Detect / count square blocks | synthetic canvas |
| `color_block_recognition` | Identify the color of a block | synthetic canvas |
| `text_recognition` | Read a single English word overlaid on a natural scene | **text in the wild** |

### `reasoning` split — 39 200 samples

Chain-reasoning tasks requiring counting, ordering, and spatial reasoning across multiple targets.

| `task_type` | Description | Source |
|-------------|-------------|--------|
| `spatial_chain` | List all objects left→right or top→bottom | synthetic canvas |
| `comparison_chain` | List all objects smallest→largest by size | synthetic canvas |
| `counting_chain` | Count objects per type + total | synthetic canvas |
| `blur_chain` | Count objects on a blurred/textured background | synthetic canvas |
| `text_reading_chain` | Read multiple overlaid words in left→right / top→bottom order | **text in the wild** |
| `text_counting_chain` | Total word count + # words containing a queried letter | **text in the wild** |

### Difficulty levels

| Difficulty | Target / cap-height |
|------------|---------------------|
| `extreme` | ≤ 5 px |
| `hard` | 6–12 px |
| `medium` | 13–24 px |
| `easy` | 25–48 px |

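This mapping is easy to apply programmatically; a minimal sketch (the function name is illustrative, not part of the dataset tooling):

```python
def difficulty_for_size(px: int) -> str:
    """Map a target pixel size / cap-height to its difficulty bucket."""
    if px <= 5:
        return "extreme"
    if px <= 12:
        return "hard"
    if px <= 24:
        return "medium"
    return "easy"  # 25-48 px

# The seven swept sizes fall into the buckets as:
# 4 -> extreme | 8, 12 -> hard | 16, 24 -> medium | 32, 48 -> easy
```
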
## Fields

| Field | Type | Description |
|-------|------|-------------|
| `image` | Image | 448×448 PNG |
| `image_id` | string | Unique identifier (encodes task, size, count) |
| `task_type` | string | See tables above |
| `question` | string | Prompt for the VLM (asks for a structured JSON answer) |
| `answer` | string | Ground-truth answer, JSON-encoded (see below) |
| `difficulty` | string | `easy` / `medium` / `hard` / `extreme` |
| `metadata` | string | JSON with canvas size, target pixel size, positions, colors, bounding boxes, sub-answers, etc. |

### Answer schemas (examples)

| Task | Answer JSON |
|------|-------------|
| `letter_recognition` | `{"letter": "H"}` |
| `animal_recognition` | `{"animal": "rabbit"}` |
| `shape_recognition` | `{"shape": "triangle"}` |
| `color_block_recognition` | `{"color": "blue"}` |
| `text_recognition` | `{"word": "HOME"}` |
| `spatial_chain` | `{"objects": ["red A", "blue K", ...]}` |
| `comparison_chain` | `{"objects": ["blue dog", "magenta bird"]}` |
| `counting_chain` | `{"counts": {"red": 2, "blue": 1}, "total": 3}` |
| `blur_chain` | `{"counts": {"circle": 1, "square": 2}, "total": 3}` |
| `text_reading_chain` | `{"words": ["HOME", "CITY", "EXIT"]}` |
| `text_counting_chain` | `{"total": 6, "with_letter": 3}` |

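Because `answer` is a JSON string, scoring usually starts by parsing both sides; a minimal exact-match sketch, assuming your model also emits a JSON string in the schema above (`prediction` is a stand-in for your model's output):

```python
import json

def exact_match(prediction: str, answer: str) -> bool:
    """Compare a model's JSON output against the ground-truth JSON answer."""
    try:
        pred = json.loads(prediction)
    except json.JSONDecodeError:
        return False  # malformed model output counts as wrong
    return pred == json.loads(answer)

# e.g. for a letter_recognition sample:
# exact_match('{"letter": "H"}', sample["answer"])
```
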
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Volavion/FineSightBench-Large")
print(ds)
# DatasetDict({
#     perception: Dataset({features: [...], num_rows: 42000}),
#     reasoning: Dataset({features: [...], num_rows: 39200})
# })

sample = ds["perception"][0]
sample["image"].show()
print(sample["question"])
print(sample["answer"])  # JSON string, e.g. '{"letter": "A"}'
```

Filter by task or difficulty:

```python
text_subset = ds["perception"].filter(lambda x: x["task_type"] == "text_recognition")
extreme = ds["perception"].filter(lambda x: x["difficulty"] == "extreme")
```

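Combining the pieces above, a per-(task, difficulty) accuracy breakdown takes only a few more lines; a sketch that reuses the `exact_match` helper from the answer-schema section and assumes a hypothetical `predict(image, question)` wrapper around your own VLM:

```python
from collections import defaultdict

correct = defaultdict(int)
total = defaultdict(int)

for sample in ds["perception"]:
    key = (sample["task_type"], sample["difficulty"])
    prediction = predict(sample["image"], sample["question"])  # your VLM call (hypothetical)
    total[key] += 1
    correct[key] += exact_match(prediction, sample["answer"])

for key in sorted(total):
    print(f"{key}: {correct[key] / total[key]:.3f}")
```
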
## Design Philosophy

* **Pixel size is the primary difficulty axis.** Targets (objects or characters) are rendered at exact cap-heights across `[4, 8, 12, 16, 24, 32, 48]` px so that the same semantic task can be probed from *easily readable* to *near-imperceptible* scales on a single fixed 448×448 canvas.
* **Controlled composition.** Every sample exposes pixel-precise target positions, bounding boxes, colors (with RGB), and sub-answers in `metadata`, enabling per-task, per-size, per-color, and positional analyses (see the sketch after this list).
* **Two image regimes.** The synthetic canvas removes distribution confounders, while the SynthText-style text-in-the-wild regime stresses models with the same text task on varied real photographs.

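Since `metadata` is itself a JSON string, per-sample analysis starts with a parse; a short sketch (the exact key names are not documented on this card, so inspect the parsed dict before relying on any of them):

```python
import json

# `sample` as loaded in the Usage section above
meta = json.loads(sample["metadata"])
print(sorted(meta.keys()))  # inspect the actual field names first

# Hypothetical example: read a size-like field for per-size analysis.
# "target_size" is an assumed key name; adapt it after inspecting meta.keys().
target_px = meta.get("target_size")
```
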
## Generation

Generated with the [FineSightBench repository](https://github.com/Volavion/FineSightBench):

```bash
# 10× base counts (perception: --num-per-config 1000, reasoning: N_PER_CONFIG=200)
python scripts/generate_large_dataset.py  # FSB_LARGE_SCALE=10 by default
```

**Text-in-the-wild backgrounds**: the first ~1 500 JPEGs from the SynthText `bg_img.tar.gz` set ([mirror](https://thor.robots.ox.ac.uk/scenetext/preproc/bg_img.tar.gz)) are center-cropped and resized to 448×448. Text glyphs use system sans-serif fonts; the cap-height is calibrated per render to match the requested pixel size exactly.

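The crop-and-resize step is standard image preprocessing; a minimal PIL sketch of how it could look (an illustration, not the repository's exact code):

```python
from PIL import Image

def center_crop_resize(path: str, size: int = 448) -> Image.Image:
    """Center-crop an image to a square, then resize to size x size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```
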
## Citation

If you use FineSightBench, please cite the repository and the SynthText background source:

```bibtex
@misc{finesightbench_large2026,
  title = {FineSightBench-Large: 10$\times$ Scaled Fine-grained Visual Perception \& Reasoning Benchmark for VLMs},
  year  = {2026},
  url   = {https://huggingface.co/datasets/Volavion/FineSightBench-Large}
}

@inproceedings{Gupta16,
  author    = {A. Gupta and A. Vedaldi and A. Zisserman},
  title     = {Synthetic Data for Text Localisation in Natural Images},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2016}
}
```

## License

Apache-2.0 covers the FineSightBench benchmark code, annotations, and synthetic images. The natural-scene backgrounds for the text-in-the-wild tasks are derived from the SynthText `bg_img` set; refer to the original SynthText dataset for the background-image license/terms.