---
license: cc-by-4.0
---

# VistaQA: Benchmarking Joint Visual Question Answering and Pixel-Level Evidence

## Overview

VistaQA is a benchmark for the joint evaluation of free-form answer correctness and pixel-level visual evidence alignment in visual question answering. It contains 1,157 expert-curated samples across six task types and six visual domains, spanning tasks from perception to compositional and relational reasoning. Each sample requires both a textual answer and the segmentation masks that support it. The benchmark also includes hallucination-aware samples in which no valid visual evidence exists.

---

## Dataset Structure

```
VistaQA/
├── 1.jpg
├── 1.json
├── 2.jpg
├── 2.json
└── ...
```

Each image file (`.jpg` or `.png`) is paired with a `.json` annotation file that shares the same file ID.
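
As a minimal loading sketch (assuming a local copy of the dataset in a `VistaQA/` directory; the paths and variable names are illustrative, not part of the release), samples can be enumerated by pairing each annotation file with the image of the same ID:

```python
import json
from pathlib import Path

data_dir = Path("VistaQA")  # hypothetical local path to the dataset

samples = []
for ann_path in sorted(data_dir.glob("*.json")):
    # Each annotation file shares its ID with an image file (.jpg or .png).
    image_path = ann_path.with_suffix(".jpg")
    if not image_path.exists():
        image_path = ann_path.with_suffix(".png")
    with open(ann_path) as f:
        annotation = json.load(f)
    samples.append((image_path, annotation))

print(f"Loaded {len(samples)} samples")
```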

---

## Annotation Format (Example)

```json
{
  "image": {
    "image_id": 979,
    "width": 1500,
    "height": 2060,
    "file_name": "979.jpg"
  },
  "question": "how many windows on the building are not partially occluded by the balusters?",
  "answer": "there are 13 windows that not partially occluded by the balusters.",
  "task_type": "counting",
  "task_domain": "outdoor",
  "num_instances": 13,
  "hallucination": 0,
  "annotations": [
    {
      "id": 523353741,
      "segmentation": {
        "size": [2060, 1500],
        "counts": "l][T1n0g4TOPe1e6I6M3N10000O2O0000000000000O100001O00000000000000000000000000000000000000000000000001O0000000000O1000000001O0000O2O00000000000001O0O1000000000O101O0001O00O1000O1001N10000O1O100O1O1O1O1O1N2O1O1O1_N^VNXLbi1f3cVNVL^i1i3gVNRLZi1m3SWNfKoh1X4TWNfKlh15ZVNV3k0dLmh1OdVNV3`0kLmh1JoVNR35SMQi1BZWNo2F^Mjj14VTN]1Q1^Nmj1OZTNHKV1P1SOoj1IjTNl0;ZO\\l1:hSNE`n1O2Lcejb1"
      },
      "bbox": [578.0, 636.0, 112.0, 228.0],
      "area": 23791
    }
  ]
}
```

Note: For brevity, only one of the 13 masks is shown.

### Field Descriptions

- **image**: Metadata for the associated image (file name, image ID, width, height)
- **question**: Free-form visual question about the image
- **answer**: Ground-truth answer (free-form text)
- **task_type**: Type of reasoning (attribute, identification, OCR, counting, spatial, comparison)
- **task_domain**: Domain category (AV, indoor, outdoor, robotics, math, science)
- **num_instances**: Number of visual-evidence mask instances
- **hallucination**: Indicates whether valid visual evidence exists (0 = evidence present, 1 = no valid evidence)
- **annotations**: Segmentation mask(s) representing the supporting visual evidence (see the decoding sketch below)
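
The `segmentation` field stores each mask as a run-length encoding with `size` and `counts` entries. Assuming these follow the COCO-style compressed RLE convention (an assumption based on the format above, not something stated by the release), a mask could be decoded roughly as follows:

```python
import json

from pycocotools import mask as mask_utils  # assumes pycocotools is installed

# Load one annotation file (hypothetical local path).
with open("VistaQA/979.json") as f:
    sample = json.load(f)

for ann in sample["annotations"]:
    rle = ann["segmentation"]  # {"size": [height, width], "counts": "..."}
    # decode() expects the counts string as bytes for compressed RLE.
    rle = {"size": rle["size"], "counts": rle["counts"].encode("utf-8")}
    binary_mask = mask_utils.decode(rle)  # numpy array of shape (height, width)
    print(ann["id"], binary_mask.shape, int(binary_mask.sum()))
```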