---
license: cc-by-4.0
---

# VistaQA: Benchmarking Joint Visual Question Answering and Pixel-Level Evidence

## Overview

VistaQA is a benchmark for jointly evaluating free-form answer correctness and pixel-level visual evidence alignment in visual question answering. It contains 1,157 expert-curated samples across six task types and six visual domains, spanning perception to compositional and relational reasoning. Each sample requires both a textual answer and the corresponding segmentation masks that support the prediction. The benchmark also includes hallucination-aware samples in which no valid visual evidence exists.

---

## Dataset Structure

```
VistaQA/
├── 1.jpg
├── 1.json
├── 2.jpg
├── 2.json
└── ...
```

Each image file (`.jpg` or `.png`) is paired with a `.json` annotation file sharing the same file ID.

---

## Annotation Format (Example)

```json
{
  "image": {
    "image_id": 979,
    "width": 1500,
    "height": 2060,
    "file_name": "979.jpg"
  },
  "question": "how many windows on the building are not partially occluded by the balusters?",
  "answer": "there are 13 windows that not partially occluded by the balusters.",
  "task_type": "counting",
  "task_domain": "outdoor",
  "num_instances": 13,
  "hallucination": 0,
  "annotations": [
    {
      "id": 523353741,
      "segmentation": {
        "size": [2060, 1500],
        "counts": "l][T1n0g4TOPe1e6I6M3N10000O2O0000000000000O100001O00000000000000000000000000000000000000000000000001O0000000000O1000000001O0000O2O00000000000001O0O1000000000O101O0001O00O1000O1001N10000O1O100O1O1O1O1O1N2O1O1O1_N^VNXLbi1f3cVNVL^i1i3gVNRLZi1m3SWNfKoh1X4TWNfKlh15ZVNV3k0dLmh1OdVNV3`0kLmh1JoVNR35SMQi1BZWNo2F^Mjj14VTN]1Q1^Nmj1OZTNHKV1P1SOoj1IjTNl0;ZO\\l1:hSNE`n1O2Lcejb1"
      },
      "bbox": [578.0, 636.0, 112.0, 228.0],
      "area": 23791
    }
  ]
}
```

Note: For brevity, only one of the 13 masks is shown.

### Field Descriptions

- **image**: Metadata for the associated image (`image_id`, `width`, `height`, `file_name`)
- **question**: Free-form question posed about the image
- **answer**: Ground-truth answer (free-form)
- **task_type**: Type of reasoning (attribute, identification, OCR, counting, spatial, comparison)
- **task_domain**: Domain category (AV, indoor, outdoor, robotics, math, science)
- **num_instances**: Number of visual-evidence mask instances
- **hallucination**: Whether valid visual evidence exists (0 = evidence present, 1 = no valid evidence)
- **annotations**: Segmentation mask(s) representing the supporting evidence
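---

## Loading a Sample

The `segmentation` field appears to follow the COCO compressed run-length encoding (RLE) convention (`size` = `[height, width]`, `counts` = encoded string), so masks can be decoded with `pycocotools`. Below is a minimal loading sketch, not an official loader, assuming the dataset is unpacked under `./VistaQA` and that `pycocotools` and `Pillow` are installed (neither is shipped with the benchmark):

```python
import json
from pathlib import Path

from PIL import Image
from pycocotools import mask as mask_utils  # decodes COCO-style RLE

root = Path("VistaQA")

# Annotation files and images share the same file ID (e.g., 979.json / 979.jpg).
for ann_path in sorted(root.glob("*.json")):
    sample = json.loads(ann_path.read_text())
    image = Image.open(root / sample["image"]["file_name"])

    print(sample["question"], "->", sample["answer"])

    # Hallucination-aware samples (hallucination == 1) carry no valid evidence masks.
    if sample["hallucination"] == 1:
        continue

    for ann in sample["annotations"]:
        rle = dict(ann["segmentation"])
        # pycocotools expects the counts string as bytes.
        if isinstance(rle["counts"], str):
            rle["counts"] = rle["counts"].encode("ascii")
        binary_mask = mask_utils.decode(rle)  # uint8 array of shape (height, width)
        assert binary_mask.shape == (sample["image"]["height"], sample["image"]["width"])
        print(f"  mask {ann['id']}: bbox={ann['bbox']}, area={int(binary_mask.sum())}")
```

If the masks are indeed COCO-style RLE, `binary_mask.sum()` should match the annotation's `area` field, which makes for a quick sanity check on the decoding.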