---
license: cc-by-4.0
pretty_name: hpXgvFBl7ZxO
size_categories:
- n<1K
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- active-perception
- active-vision
- multimodal
- benchmark
- synthetic
- mllm-evaluation
- visual-reasoning
configs:
- config_name: default
  data_files:
  - split: test
    path: data/manifest.json
---
# ActiveVision
An exam for active observers — a benchmark for diagnosing whether multimodal large language models can iteratively look at an image during reasoning, instead of compressing it into a fixed embedding once.
## What's in this archive
| Path | Contents |
|---|---|
| `data/images/` | 85 photorealistic PNGs, the released benchmark images. |
| `data/manifest.json` | Canonical index: one record per instance with `id`, `task`, `category`, `image`, `image_sha256`, `image_source_filename`, `question`, `answer`. |
| `data/annotations/<task>.jsonl` | Per-task verification metadata (5 records per file × 17 tasks = 85). Includes the structural ground truth used to compute each answer (region adjacency, arrow chains, traversal paths, Hausdorff distances, etc.); see the loading sketch after this table. |
| `code/<category>/<task>/creation.py` | Seedable, deterministic generator. The released images at v0.4 are produced at `--difficulty 4`. |
| `code/<category>/<task>/creation.md` | Per-task design and anti-shortcut spec (where present). |
| `code/<category>/<task>/data.json` | Per-task definition: shared question text and answer format. |
| `code/gpt_image_prompts.json` | One gpt-image-2 image-edit prompt per task, used to re-render the matplotlib structural draft as a photorealistic variant while preserving the discriminative structure. |
| `code/scope.md` | Project specification: the three task families and the six shortcut classes the design defeats. |
| `croissant.json` | Croissant 1.0 + Croissant-RAI 1.0 metadata for this dataset. |
| `LICENSE` | CC BY 4.0. |
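The per-task annotation files can be read line by line. A minimal sketch, assuming each line of a `.jsonl` file is one standalone JSON object and that the archive root matches the path used in the Loading section below:

```python
import json, pathlib

# Hedged sketch: collect the verification metadata per task. The fields inside
# each record are task-specific (region adjacency, arrow chains, traversal
# paths, Hausdorff distances, ...), so inspect them per task before use.
root = pathlib.Path("data_neurips2026")
annotations = {
    path.stem: [json.loads(line) for line in path.read_text().splitlines() if line.strip()]
    for path in sorted((root / "data" / "annotations").glob("*.jsonl"))
}
print(sum(len(recs) for recs in annotations.values()), "annotation records")  # expected: 85
```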
## Statistics
- 85 instances, 17 tasks, 3 task families.
- Distributed Scanning (25 instances, 5 tasks): attribute_group_counting, bounded_faces_counting, counting_connected_components, counting_regions, tangled_loops.
- Sequential Traversal (25 instances, 5 tasks): arrow_chain, color_zone_sequence, line_intersections, maze, traverse_ordering.
- Visual Attribute Transfer (35 instances, 7 tasks): constellation_match_count, contour_silhouette_count, spot_the_contour_diff, spot_the_field_diff, spot_the_signal_diff, spot_the_stroke_diff, stroke_gesture_count.
## Loading

```python
import json, pathlib

root = pathlib.Path("data_neurips2026")
manifest = json.loads((root / "data" / "manifest.json").read_text())

for item in manifest:
    image_path = root / item["image"]
    question = item["question"]
    gold = item["answer"]
```
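Optionally, the `image_sha256` field in each manifest record can be used to verify file integrity. A minimal sketch, continuing from the snippet above and assuming the stored value is the lowercase hex SHA-256 digest of the raw image bytes:

```python
import hashlib

# Assumption: image_sha256 is the hex digest of the raw PNG bytes on disk.
# root and manifest come from the loading snippet above.
for item in manifest:
    digest = hashlib.sha256((root / item["image"]).read_bytes()).hexdigest()
    if digest != item["image_sha256"]:
        raise ValueError(f"checksum mismatch for {item['id']}")
```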
## Generation pipeline
The pipeline is the artifact. Every benchmark image is produced in two deterministic stages:
- Geometric draft (matplotlib). `creation.py --seed S --difficulty 4` lays out a structural specification (region partition, arrow positions, maze graph, brush-stroke field, etc.) and computes the answer in closed form. Output: a plain matplotlib PNG.
- Photorealistic re-render (gpt-image-2). The matplotlib draft is sent to OpenAI gpt-image-2 via the image-edit endpoint, with a per-task prompt from `code/gpt_image_prompts.json` that preserves silhouettes, positions, counts, and labels but replaces the surface material with a photorealistic style (stones on sand, hedge maze from above, starfield, etc.).
The released benchmark contains only the Stage-2 images. Held-out splits with unpublished seeds and additional difficulties can be regenerated from the included generators.
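A minimal regeneration sketch, assuming each `creation.py` accepts the `--seed` and `--difficulty` flags described above and writes its own Stage-1 draft (output locations are generator-specific, so check each script first); producing Stage-2 images additionally requires access to gpt-image-2:

```python
import pathlib, subprocess, sys

# Hedged sketch: run every Stage-1 generator with an unpublished seed.
# The seed value here is illustrative, not one used for the released split.
code_root = pathlib.Path("data_neurips2026") / "code"
for creation in sorted(code_root.glob("*/*/creation.py")):
    subprocess.run(
        [sys.executable, str(creation), "--seed", "20260101", "--difficulty", "4"],
        check=True,
    )
```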
## Responsible AI
See `croissant.json` for the full RAI block. Headlines:
- Synthetic only: 100% synthetic. No human subjects, no PII, no real-world events.
- Use cases: testing and validation. Not for training.
- Limitations: small evaluation set; adversarial-by-design (not predictive of general vision-language ability); photorealistic re-renders depend on a closed-source service.
- License: CC BY 4.0.
## Validating the Croissant file
Before submission, validate `croissant.json` with the official Croissant validator at:
https://huggingface.co/spaces/JoaquinVanschoren/croissant-checker
(Run well in advance of any submission deadline — the doc warns of heavy load near deadlines.)
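For a quick local check before using the Space, a minimal sketch assuming the `mlcroissant` package is installed (constructing a `Dataset` parses the JSON-LD and raises on validation errors):

```python
import mlcroissant as mlc

# Hedged local sanity check; the validator Space above remains the
# authoritative check before submission.
dataset = mlc.Dataset(jsonld="croissant.json")
print("croissant.json parsed OK:", dataset.metadata.name)
```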
## Citation
Anonymous. ActiveVision: An Exam for Active Observers.
NeurIPS 2026 Datasets and Benchmarks Track (under review).