Volavion committed on
Commit 6bd211f · verified · 1 Parent(s): ef882b9

Add dataset card (v1.1 with text-in-the-wild tasks)

Files changed (1)
  1. README.md +123 -71
README.md CHANGED
@@ -1,98 +1,107 @@
  ---
  language:
- - en
  license: apache-2.0
  task_categories:
- - visual-question-answering
- - image-classification
  pretty_name: FineSightBench
  size_categories:
- - 1K<n<10K
  tags:
- - VLM-evaluation
- - fine-grained-visual-perception
- - fine-grained-visual-reasoning
  splits:
- - name: perception
- num_examples: 3500
- - name: reasoning
- num_examples: 2520
- dataset_info:
- features:
- - name: image
- dtype: image
- - name: image_id
- dtype: string
- - name: task_type
- dtype: string
- - name: question
- dtype: string
- - name: answer
- dtype: string
- - name: difficulty
- dtype: string
- - name: metadata
- dtype: string
- splits:
  - name: perception
- num_bytes: 9514691
  num_examples: 4200
  - name: reasoning
- num_bytes: 16327640
  num_examples: 3920
- download_size: 707864892
- dataset_size: 25842331
- configs:
- - config_name: default
- data_files:
- - split: perception
- path: data/perception-*
- - split: reasoning
- path: data/reasoning-*
  ---

  # FineSightBench

- FineSightBench is a fine-grained visual benchmark for evaluating Vision Language Models (VLMs) on pixel-level perception and reasoning tasks.

  ## Dataset Structure

- The dataset consists of two splits:

- ### `perception` (3 500 images)

- Fine-grained single-target identification tasks at 8 pixel-size difficulty levels.

- | Task | Description |
- |------|-------------|
- | Color Identification | Identify the color of a target object |
- | Letter Recognition | Identify a rendered letter |
- | Animal Recognition | Identify an animal silhouette |
- | Shape Recognition | Identify a geometric shape |
- | Dot Color Recognition | Identify the color of a tiny dot |

- ### `reasoning` (2 520 images)

- Chain-reasoning tasks requiring counting, ordering, and spatial reasoning.

- | Task | Description |
- |------|-------------|
- | `spatial_chain` | List objects left→right or top→bottom |
- | `comparison_chain` | List objects smallest→largest by size |
- | `counting_chain` | Count objects per type + total |
- | `blur_chain` | Count objects on blurred background |

  ## Fields

  | Field | Type | Description |
  |-------|------|-------------|
- | `image` | Image | 448×448 PNG canvas |
- | `image_id` | string | Unique identifier |
- | `task_type` | string | Task category |
- | `question` | string | Prompt for the VLM |
- | `answer` | string | Ground-truth answer (JSON string for reasoning) |
- | `difficulty` | string | `easy` / `medium` / `hard` / `extreme` |
- | `metadata` | string | JSON with canvas_size, pixel_size, targets list |

  ## Usage

@@ -102,17 +111,60 @@ from datasets import load_dataset
  ds = load_dataset("Volavion/FineSightBench")
  print(ds)
  # DatasetDict({
- # perception: Dataset({features: [...], num_rows: 3500}),
- # reasoning: Dataset({features: [...], num_rows: 2520})
  # })

  sample = ds["perception"][0]
  sample["image"].show()
  print(sample["question"])
- print(sample["answer"])
  ```

- ## Canvas Design

- All images are 448×448 pixels on a white background.
- Object pixel sizes range from **3 px** (extreme) to **48 px** (easy), controlling task difficulty.

  ---
  language:
+ - en
  license: apache-2.0
  task_categories:
+ - visual-question-answering
+ - image-classification
+ - image-to-text
  pretty_name: FineSightBench
  size_categories:
+ - 1K<n<10K
  tags:
+ - VLM-evaluation
+ - fine-grained-visual-perception
+ - fine-grained-visual-reasoning
+ - text-in-the-wild
+ - scene-text-recognition
  splits:
  - name: perception
  num_examples: 4200
  - name: reasoning
  num_examples: 3920
  ---

  # FineSightBench

+ **FineSightBench** is a fine-grained visual benchmark for evaluating Vision-Language Models (VLMs) on pixel-level perception and reasoning tasks. It combines two complementary image regimes:
+
+ 1. **Synthetic canvas** — controlled white-background images with precisely sized geometric/semantic targets (letters, animals, shapes, blocks, dots).
+ 2. **Text in the wild** (SynthText-style) — English words rendered onto real natural-scene photographs from the [SynthText](https://github.com/ankush-me/SynthText) `bg_img` set, with **pixel-accurate control of character cap-height**.
+
+ All images are **448 × 448 px**. The primary difficulty axis is the **target pixel size** (cap-height for text), swept over `[4, 8, 12, 16, 24, 32, 48]` and mapped to `extreme / hard / medium / easy`.
+
+ ## Dataset Summary
+
+ | Split | #Samples | #Task types | Regimes |
+ |-------|---------:|:-----------:|---------|
+ | `perception` | 4 200 | 6 | synthetic canvas + text-in-the-wild |
+ | `reasoning` | 3 920 | 6 | synthetic canvas + text-in-the-wild |

  ## Dataset Structure

+ ### `perception` split (4 200 samples)

+ Single-target identification tasks: 700 samples per task, i.e. 100 samples per pixel size × 7 pixel sizes.

+ | `task_type` | Description | Source |
+ |-------------|-------------|--------|
+ | `letter_recognition` | Identify a rendered uppercase letter (A–Z) | synthetic canvas |
+ | `animal_recognition` | Identify an animal silhouette (cat/dog/fish/bird/rabbit/turtle) | synthetic canvas |
+ | `shape_recognition` | Identify a geometric shape (circle/triangle/square/star/diamond/pentagon/hexagon/cross) | synthetic canvas |
+ | `block_recognition` | Detect / count square blocks | synthetic canvas |
+ | `color_block_recognition` | Identify the color of a block | synthetic canvas |
+ | `text_recognition` | Read a single English word overlaid on a natural scene | **text in the wild** |

+ ### `reasoning` split (3 920 samples)

+ Chain-reasoning tasks requiring counting, ordering, and spatial reasoning across multiple targets.

+ | `task_type` | Description | Source |
+ |-------------|-------------|--------|
+ | `spatial_chain` | List all objects left→right or top→bottom | synthetic canvas |
+ | `comparison_chain` | List all objects smallest→largest by size | synthetic canvas |
+ | `counting_chain` | Count objects per type + total | synthetic canvas |
+ | `blur_chain` | Count objects on a blurred/textured background | synthetic canvas |
+ | `text_reading_chain` | Read multiple overlaid words in left→right / top→bottom order | **text in the wild** |
+ | `text_counting_chain` | Total word count + number of words containing a queried letter | **text in the wild** |

+ ### Difficulty levels
+
+ | Difficulty | Target pixel size / cap-height |
+ |------------|--------------------------------|
+ | `extreme` | ≤ 5 px |
+ | `hard` | 6–12 px |
+ | `medium` | 13–24 px |
+ | `easy` | 25–48 px |
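
For programmatic use, the size→difficulty mapping in the table above can be expressed as a small helper. This is an illustrative sketch only (the dataset already ships a `difficulty` column, and the helper name is not from the repository); it assumes `extreme` covers every size at or below 5 px:

```python
# Illustrative helper (not from the FineSightBench repo): map a target pixel
# size (cap-height for text) to the difficulty label from the table above.
def difficulty_from_pixel_size(px: int) -> str:
    if px <= 5:
        return "extreme"   # covers the 4 px setting from the sweep
    if px <= 12:
        return "hard"
    if px <= 24:
        return "medium"
    return "easy"          # 25-48 px

assert [difficulty_from_pixel_size(p) for p in (4, 8, 16, 32)] == \
       ["extreme", "hard", "medium", "easy"]
```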

  ## Fields

  | Field | Type | Description |
  |-------|------|-------------|
+ | `image` | Image | 448×448 PNG |
+ | `image_id` | string | Unique identifier (encodes task, size, count) |
+ | `task_type` | string | See tables above |
+ | `question` | string | Prompt for the VLM (asks for a structured JSON answer) |
+ | `answer` | string | Ground-truth answer. JSON-encoded (see below) |
+ | `difficulty` | string | `easy` / `medium` / `hard` / `extreme` |
+ | `metadata` | string | JSON with canvas size, target pixel size, positions, colors, bounding boxes, sub-answers, etc. |
+
+ ### Answer schemas (examples)
+
+ | Task | Answer JSON |
+ |------|-------------|
+ | `letter_recognition` | `{"letter": "H"}` |
+ | `animal_recognition` | `{"animal": "rabbit"}` |
+ | `shape_recognition` | `{"shape": "triangle"}` |
+ | `color_block_recognition` | `{"color": "blue"}` |
+ | `text_recognition` | `{"word": "HOME"}` |
+ | `spatial_chain` | `{"objects": ["red A", "blue K", ...]}` |
+ | `comparison_chain` | `{"objects": ["blue dog", "magenta bird"]}` |
+ | `counting_chain` | `{"counts": {"red": 2, "blue": 1}, "total": 3}` |
+ | `blur_chain` | `{"counts": {"circle": 1, "square": 2}, "total": 3}` |
+ | `text_reading_chain` | `{"words": ["HOME", "CITY", "EXIT"]}` |
+ | `text_counting_chain` | `{"total": 6, "with_letter": 3}` |
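
Both `answer` and `metadata` are stored as JSON strings, so decode them with `json.loads` before scoring. A minimal sketch using the schemas above (field access follows the tables; nothing beyond `datasets` and the standard library is required):

```python
import json
from datasets import load_dataset

ds = load_dataset("Volavion/FineSightBench")

ex = ds["reasoning"][0]
answer = json.loads(ex["answer"])      # e.g. {"counts": {"red": 2, "blue": 1}, "total": 3}
meta = json.loads(ex["metadata"])      # canvas size, target pixel size, boxes, ...

if ex["task_type"] == "counting_chain":
    print(answer["counts"], answer["total"])
elif ex["task_type"] == "text_reading_chain":
    print(answer["words"])             # words in left→right / top→bottom order
```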

  ## Usage

  ds = load_dataset("Volavion/FineSightBench")
  print(ds)
  # DatasetDict({
+ # perception: Dataset({features: [...], num_rows: 4200}),
+ # reasoning: Dataset({features: [...], num_rows: 3920})
  # })

  sample = ds["perception"][0]
  sample["image"].show()
  print(sample["question"])
+ print(sample["answer"])  # JSON string, e.g. '{"letter": "A"}'
+ ```
+
+ Filter by task or difficulty:
+
+ ```python
+ text_subset = ds["perception"].filter(lambda x: x["task_type"] == "text_recognition")
+ extreme = ds["perception"].filter(lambda x: x["difficulty"] == "extreme")
+ ```
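
From there, per-difficulty accuracy can be aggregated in a few lines. The sketch below assumes you supply a `predict(image, question)` callable for your own VLM (it is not part of this dataset) and that it returns a JSON string following the answer schemas above:

```python
import json
from collections import defaultdict

def exact_match(pred_json: str, gold_json: str) -> bool:
    # Compare predicted and ground-truth answers as parsed JSON objects.
    try:
        return json.loads(pred_json) == json.loads(gold_json)
    except json.JSONDecodeError:
        return False

def accuracy_by_difficulty(split, predict):
    # `predict` is a user-supplied callable: (PIL image, question str) -> JSON string.
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in split:
        totals[ex["difficulty"]] += 1
        hits[ex["difficulty"]] += exact_match(predict(ex["image"], ex["question"]), ex["answer"])
    return {d: hits[d] / totals[d] for d in totals}
```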

+ ## Design Philosophy
+
+ * **Pixel size is the primary difficulty axis.** Targets (objects or characters) are rendered at exact pixel sizes (cap-heights for text) across `[4, 8, 12, 16, 24, 32, 48]` px, so the same semantic task can be probed from *easily readable* to *near-imperceptible* scales on a single fixed 448×448 canvas.
+ * **Controlled composition.** Every sample exposes pixel-precise target positions, bounding boxes, colors (with RGB values), and sub-answers in `metadata`, enabling per-task, per-size, per-color, and positional analyses.
+ * **Two image regimes.** The synthetic canvas removes distribution confounders, while the SynthText-style text-in-the-wild regime stresses models with the same text task on varied real photographs.
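
As an example of the metadata-driven analyses this enables, the snippet below tallies perception samples by target pixel size. It is a sketch: the exact `metadata` key name is an assumption (the card only lists the kinds of information stored), so adjust `"pixel_size"` to whatever key your copy actually contains:

```python
import json
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Volavion/FineSightBench")

sizes = Counter()
for ex in ds["perception"]:
    meta = json.loads(ex["metadata"])
    sizes[meta.get("pixel_size")] += 1   # hypothetical key name; see note above
print(sizes.most_common())
```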

+ ## Generation
+
+ Generated with the [FineSightBench repository](https://github.com/Volavion/FineSightBench):
+
+ ```bash
+ python generate.py perception --output data/full_perception
+ python generate.py reasoning --output data/full_reasoning
+ python generate.py textwild --split all --output data  # merges text-in-the-wild tasks
+ ```
+
+ **Text-in-the-wild backgrounds**: the first ~1 500 JPEGs from the SynthText `bg_img.tar.gz` set ([mirror](https://thor.robots.ox.ac.uk/scenetext/preproc/bg_img.tar.gz)) are center-cropped and resized to 448×448. Text glyphs use system sans-serif fonts; cap-height is calibrated per render to match the requested pixel size exactly.
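
The center-crop-and-resize step for backgrounds can be reproduced with Pillow. This is a minimal sketch under the stated 448×448 target, not the repository's exact code, and the file path is a placeholder:

```python
from PIL import Image

def center_crop_resize(path: str, size: int = 448) -> Image.Image:
    # Center-crop a background JPEG to a square, then resize to size x size.
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)
    left, top = (w - s) // 2, (h - s) // 2
    return img.crop((left, top, left + s, top + s)).resize((size, size), Image.LANCZOS)

bg = center_crop_resize("bg_img/some_background.jpg")  # placeholder path
```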
+
+ ## Citation
+
+ If you use FineSightBench, please cite the repository and the SynthText background source:
+
+ ```bibtex
+ @misc{finesightbench2026,
+   title = {FineSightBench: Fine-grained Visual Perception \& Reasoning Benchmark for VLMs},
+   year  = {2026},
+   url   = {https://huggingface.co/datasets/Volavion/FineSightBench}
+ }
+
+ @inproceedings{Gupta16,
+   author    = {A. Gupta and A. Vedaldi and A. Zisserman},
+   title     = {Synthetic Data for Text Localisation in Natural Images},
+   booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
+   year      = {2016}
+ }
  ```

+ ## License

+ Apache-2.0 for the FineSightBench benchmark code, annotations, and synthetic images. The natural-scene backgrounds for the text-in-the-wild tasks are derived from the SynthText `bg_img` set; please refer to the original SynthText dataset for the background-image license/terms.