# Active Vision Benchmark
Today's multimodal large language models (MLLMs) achieve strong performance on many vision-language tasks, but they typically process images as fixed embeddings. Human reasoning, by contrast, is often active: perception is continuously guided by intermediate reasoning. In psychology, active observers can readily solve tasks that are ill-posed for passive observers. In this paper, we investigate whether MLLMs can exhibit a similar form of active observation. We introduce a benchmark that requires iterative visual inspection, including distributed scanning, sequential traversal, and visual attribute transfer, where evidence must be accumulated across spatial locations and reasoning steps. State-of-the-art MLLMs show substantial performance drops on these tasks, and attention analysis reveals unstable and limited use of visual tokens during reasoning. These results suggest that current MLLMs lack robust active visual observation, motivating new methods and architectures for iterative, perception-driven reasoning.
### Distributed Scanning
Tasks that require the model to inspect multiple spatially separated regions and aggregate locally identifiable evidence across the image. The main challenge is exhaustive visual coverage rather than global structural inference, as the answer is obtained by repeatedly finding and accumulating relevant local signals.
### Sequential Traversal
Tasks that require the model to follow a path, line, wire, or other connected structure step by step while maintaining intermediate state. The answer depends on ordered inspection along the structure, such as tracing a route, identifying visited elements, or counting events encountered during traversal.
### Visual Attribute Transfer
Tasks that require the model to extract a fine-grained visual property from one region and match or apply it to another region. The transferred property is primarily visual rather than linguistic, such as length, curvature, thickness, or spacing, and the task tests whether the model can preserve and compare such information across separated parts of the image.
### Requirement
A task qualifies for the benchmark only if it forces the model to *actively look at the image during reasoning*. Concretely, every task must defeat each of the following shortcuts.
**Shortcut 1 — One-shot perception, then pure text reasoning.**
The model summarises the entire image in a single pass into a compact symbolic description (e.g. an adjacency list, a coordinate table, a grid of arrows), then solves the task by reasoning over that description without looking again. To block this, an instance must contain too much fine-grained visual state to be losslessly extracted in one pass at the resolutions models actually see. For example, a *Color Zone Sequence* image carries a continuous smooth curve whose region membership changes dozens of times along its length; verbalising every crossing in advance is itself the task. A *Connectivity Spotting* graph is dense enough that the model would have to verbalise the entire connectivity structure — at which point it is solving the task, not pre-extracting it. Difficulty should scale with image complexity (more zones, more arrows, more crossings) so that the one-shot description grows past what the model can reliably hold.
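As a rough illustration of this scaling argument, the sketch below estimates how large the one-shot symbolic description grows with instance complexity. The tier sizes and per-item token costs are illustrative assumptions, not the benchmark's actual generation parameters.

```python
# Illustrative only: estimate how large a one-shot symbolic description of an
# instance becomes as complexity grows. Tier sizes and per-item token costs are
# assumptions for the sake of the argument, not the benchmark's real settings.
from dataclasses import dataclass

TOKENS_PER_CROSSING = 12  # rough cost of verbalising one zone crossing
TOKENS_PER_EDGE = 8       # rough cost of verbalising one graph edge

@dataclass
class DifficultyTier:
    zones: int
    crossings: int
    edges: int

    def one_shot_tokens(self) -> int:
        # Size of the description Shortcut 1 would need to pre-extract.
        return self.crossings * TOKENS_PER_CROSSING + self.edges * TOKENS_PER_EDGE

TIERS = {
    "easy":   DifficultyTier(zones=4,  crossings=10, edges=15),
    "medium": DifficultyTier(zones=8,  crossings=25, edges=40),
    "hard":   DifficultyTier(zones=12, crossings=60, edges=90),
}

for name, tier in TIERS.items():
    print(f"{name}: ~{tier.one_shot_tokens()} tokens to verbalise up front")
```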
**Shortcut 2 — Write code to solve the task end-to-end.**
The model emits Python that runs OCR, edge detection, blob counting, or vector tracing on the raw image and returns the answer without further reasoning. To block this, the visual primitives must require human-style perception that off-the-shelf CV libraries do not solve cleanly: smoothly anti-aliased turtle curves rather than crisp lines, color-only region boundaries with no contour to detect, hand-placed arrows pointing in arbitrary directions, weighted-graph layouts whose edges are styled rather than thresholded. The answer should also depend on a continuous tracing or counting decision (which arrow comes next, which region is visited 4th) that scripted pipelines get wrong on the soft, irregular renderings used here.
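For concreteness, a toy rendering sketch along these lines is shown below (assuming matplotlib). It is not the benchmark's generation code, only an illustration of the rendering style described: a smoothly anti-aliased curve drawn over color-only zones with no crisp contours for a scripted edge detector to latch onto.

```python
# Toy rendering sketch (assumed, not the benchmark's actual pipeline):
# a smoothly anti-aliased curve over color-only zones with no drawn borders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, ax = plt.subplots(figsize=(6, 6), dpi=100)

# Color-only zone boundaries: a soft background field, no contour lines.
field = rng.normal(size=(8, 8))
ax.imshow(field, extent=(0, 1, 0, 1), cmap="Pastel1", interpolation="bicubic")

# A smooth, anti-aliased curve rather than a crisp polyline.
t = np.linspace(0, 1, 2000)
x = 0.5 + 0.4 * np.sin(7 * t) * t
y = 0.5 + 0.4 * np.cos(5 * t) * (1 - t)
ax.plot(x, y, color="dimgray", linewidth=2.5, antialiased=True, solid_capstyle="round")

ax.set_axis_off()
fig.savefig("instance.png", bbox_inches="tight", pad_inches=0)
```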
**Shortcut 3 — Statistical priors / answer-distribution leakage.**
Without looking, the model guesses the modal answer for the task type (e.g. "shortest paths are usually 3 hops", "arrow chains usually terminate at A"). To block this, per-task answer distributions should be flat, with no correlation between trivially extractable image features (region count, canvas size, color palette) and the answer. Each task's annotations should be auditable for this.
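A minimal audit sketch is given below, assuming per-task annotations are stored as JSONL records with scalar fields such as `task`, `answer`, `region_count`, `canvas_size`, and `palette_size`. The file name, schema, and thresholds are illustrative assumptions, not the released format.

```python
# Hedged audit sketch: checks answer-distribution flatness per task and flags
# trivially extractable features that map to a single answer. The JSONL schema
# (field names, scalar feature values) is an assumption, not the released format.
import json
from collections import Counter, defaultdict

def audit(path: str) -> None:
    per_task = defaultdict(list)
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            per_task[rec["task"]].append(rec)

    for task, recs in per_task.items():
        answers = Counter(r["answer"] for r in recs)
        modal_share = answers.most_common(1)[0][1] / len(recs)
        print(f"{task}: modal answer share = {modal_share:.2f}")  # flat means close to 1/len(answers)

        for feat in ("region_count", "canvas_size", "palette_size"):
            by_value = defaultdict(Counter)
            for r in recs:
                by_value[r[feat]][r["answer"]] += 1
            # A feature value that always co-occurs with one answer is a leak.
            leaky = [v for v, c in by_value.items() if len(c) == 1 and sum(c.values()) > 3]
            if leaky:
                print(f"  possible leakage: {feat} values {leaky} predict a single answer")

audit("annotations.jsonl")  # path is an assumed placeholder
```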
**Shortcut 4 — Gestalt heuristics instead of step-by-step tracing.**
The model glances at the image and uses a learned visual prior — "the green arrow's circle is on the left, the red terminus is on the right, so go right" — without actually executing the traversal. This is especially dangerous on Sequential Traversal: the model can short-circuit a 6-hop arrow chain by interpolating between the start and end positions. To block this, instances insert decoys and detours so that the geometrically "obvious" path is wrong, require more than half the canvas to be inspected before the answer is determined, and make the traversal long enough that intermediate state (a mental running count, current-region tracking) is needed.
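One way to enforce these constraints at generation time is sketched below. The hop and coverage thresholds, the grid-cell coverage estimate, and the waypoint representation are all assumptions for illustration, not the benchmark's actual validator.

```python
# Illustrative generation-time check (assumed, not the actual validator):
# reject traversals that are too short to need intermediate state or that
# can be answered without sweeping most of the canvas.
import numpy as np

MIN_HOPS = 6        # long enough that a running count / current-region state is needed
MIN_COVERAGE = 0.5  # fraction of coarse grid cells the traversal must pass through

def traversal_is_hard_enough(waypoints: np.ndarray, canvas: tuple[int, int],
                             cell: int = 64) -> bool:
    """waypoints: (N, 2) pixel coordinates along the ground-truth traversal."""
    if len(waypoints) - 1 < MIN_HOPS:
        return False
    # Coarse coverage estimate: which grid cells the traversal passes through.
    visited = set()
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, 50):
            x, y = (1 - t) * a + t * b
            visited.add((int(x) // cell, int(y) // cell))
    total = (canvas[0] // cell) * (canvas[1] // cell)
    return len(visited) / total > MIN_COVERAGE
```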
**Shortcut 5 — Memorisation of the released benchmark.**
Once the dataset is public, models can be fine-tuned on it directly. To block this, the generation pipeline is the artifact. We release `creation.py` with a seed protocol so that *new* unseen instances can be regenerated by evaluators, and we keep a small held-out split with seeds never published. Difficulty scaling (more zones, longer chains) also means evaluators can request harder splits than the released set.
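A hedged sketch of the evaluator-side regeneration step is shown below. The `generate_instance` name, its signature, and the `save` call are hypothetical placeholders for whatever interface `creation.py` actually exposes.

```python
# Hypothetical regeneration sketch: the import, function signature, and save()
# call are placeholders -- consult creation.py for the actual seed protocol.
from creation import generate_instance  # hypothetical interface

EVALUATOR_SEEDS = range(10_000, 10_050)  # evaluator-chosen, assumed disjoint from released seeds

for seed in EVALUATOR_SEEDS:
    instance = generate_instance(task="color_zone_sequence",  # task identifier is an assumed name
                                 seed=seed,
                                 difficulty="hard")           # harder than the released split
    instance.save(f"private_split/{seed}.png")
```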
**Shortcut 6 — Agent-tool zoom / crop / OCR loops.**
A tool-using agent can repeatedly crop, upscale, or OCR sub-regions until the answer falls out, without exercising the model's own active visual reasoning. To block this in tool-using settings, the canvas must already be at a resolution where a single zoom does not localise the answer, the answer should depend on integrating evidence across non-adjacent regions, and any single crop must omit information that another crop also needs.
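A simple proxy for this last constraint is sketched below: the joint bounding box of all answer-relevant regions should exceed what any single zoomed crop can cover. The crop fraction, the box representation, and the check itself are illustrative assumptions rather than the benchmark's actual validation logic.

```python
# Illustrative check (assumed): evidence regions must be spread out enough that
# no single crop at a reasonable zoom level contains them all.
import numpy as np

MAX_CROP_FRACTION = 0.25  # a single zoomed crop covers at most this share of the canvas (assumed)

def needs_multiple_crops(evidence_boxes: np.ndarray, canvas: tuple[int, int]) -> bool:
    """evidence_boxes: (K, 4) array of (x0, y0, x1, y1) answer-relevant regions in pixels."""
    x0, y0 = evidence_boxes[:, 0].min(), evidence_boxes[:, 1].min()
    x1, y1 = evidence_boxes[:, 2].max(), evidence_boxes[:, 3].max()
    joint_area = (x1 - x0) * (y1 - y0)
    return joint_area > MAX_CROP_FRACTION * canvas[0] * canvas[1]
```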