SleepWalk: A Three-Tier Benchmark for Stress-Testing Instruction-Guided Vision-Language Navigation
Abstract
SleepWalk is a benchmark for evaluating vision-language models' ability to predict spatially coherent, executable trajectories in 3D environments based on textual instructions and visual observations.
Vision-Language Models (VLMs) have advanced rapidly in multimodal perception and language understanding, yet it remains unclear whether they can reliably ground language into spatially coherent, plausibly executable actions in 3D digital environments. We introduce SleepWalk, a benchmark for evaluating instruction-grounded trajectory prediction in single-scene 3D worlds generated from textual scene descriptions and filtered for navigability. Unlike prior navigation benchmarks centered on long-range exploration across rooms, SleepWalk targets localized, interaction-centric embodied reasoning: given rendered visual observations and a natural-language instruction, a model must predict a trajectory that respects scene geometry, avoids collisions, and terminates at an action-compatible location. The benchmark covers diverse indoor and outdoor environments and organizes tasks into three tiers of spatial and temporal difficulty, enabling fine-grained analysis of grounding under increasing compositional complexity. Using a standardized pointwise judge-based evaluation protocol, we evaluate three frontier VLMs on 2,472 curated 3D environments with nine instructions per scene. Results reveal systematic failures in grounded spatial reasoning, especially under occlusion, interaction constraints, and multi-step instructions: performance drops as task difficulty increases. Overall, current VLMs only partially succeed in producing trajectories that are simultaneously spatially coherent, plausibly executable, and aligned with intended actions. By exposing these failures in a controlled yet scalable setting, SleepWalk provides a critical benchmark for advancing grounded multimodal reasoning, embodied planning, vision-language navigation, and action-capable agents in 3D environments.
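To make the abstract's validity criteria concrete (respect scene geometry, avoid collisions, terminate at an action-compatible location), here is a minimal sketch of the kind of geometric check they imply. This is illustrative rather than the paper's implementation: it assumes trajectories arrive as 2D waypoint arrays in world coordinates and that each scene exposes a boolean occupancy grid; `trajectory_is_plausible`, `cell_size`, and `goal_tol` are hypothetical names.

```python
import numpy as np

def trajectory_is_plausible(traj, occupancy, goal_xy,
                            cell_size=0.1, goal_tol=0.5):
    """Check a predicted trajectory against a scene occupancy grid.

    traj:      (N, 2) waypoints in world coordinates (meters)
    occupancy: (H, W) boolean grid, True where the scene is blocked
    goal_xy:   (2,) target location implied by the instruction
    """
    traj = np.asarray(traj, dtype=float)
    # Map waypoints to grid cells (grid origin assumed at world (0, 0)).
    cells = np.floor(traj / cell_size).astype(int)
    rows, cols = cells[:, 1], cells[:, 0]
    in_bounds = ((rows >= 0) & (rows < occupancy.shape[0]) &
                 (cols >= 0) & (cols < occupancy.shape[1]))
    if not in_bounds.all():
        return False  # path leaves the navigable scene
    # Collision check on waypoints only; a fuller check would also
    # sample points along each segment between consecutive waypoints.
    if occupancy[rows, cols].any():
        return False  # path intersects scene geometry
    # Termination check: end near the action-compatible location.
    return np.linalg.norm(traj[-1] - np.asarray(goal_xy)) <= goal_tol
```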
Community
SleepWalk introduces a scalable single-scene 3D benchmark that tests whether VLMs can translate natural-language instructions into continuous, collision-aware, interaction-compatible trajectories, revealing that current frontier models still fail at precise spatial grounding despite producing plausible-looking paths.
➡️ Key Highlights of SleepWalk:
🧭 Single-Scene 3D Trajectory Benchmark: Introduces SleepWalk, built from 2,472 curated 3D environments generated from text with Hunyuan3D-3.0, targeting localized, object-centric embodied reasoning rather than long-horizon room-to-room VLN.
🧩 Three-Tier Instruction Stress Test: Generates nine instructions per scene with Qwen3-8B-VL, split into easy, medium, and hard tiers that progressively test single-goal localization, compositional spatial grounding, and multi-step interaction-aware planning.
⚖️ Pointwise Judge-Based Evaluation: Proposes a standardized GPT-5-mini judge protocol scoring trajectories on start-location consistency, goal satisfaction, obstacle avoidance, and trajectory efficiency (a minimal scoring sketch follows this list), showing GPT-5-mini leads among tested models but still degrades sharply on harder interaction-heavy tasks.
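As a concrete illustration of the pointwise protocol, the sketch below aggregates per-criterion judge scores for a single trajectory (judged in isolation, with no pairwise comparison) into one scalar. It is a minimal sketch under stated assumptions, not the paper's evaluation code: the criterion keys, the [0, 1] score scale, and equal weighting are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical criterion keys mirroring the four axes above; the paper's
# exact judge prompts, score scale, and weighting are not specified here.
CRITERIA = (
    "start_location_consistency",
    "goal_satisfaction",
    "obstacle_avoidance",
    "trajectory_efficiency",
)

@dataclass
class JudgeVerdict:
    """Per-criterion scores in [0, 1] parsed from one judge response."""
    scores: dict

def pointwise_score(verdict, weights=None):
    """Collapse one pointwise verdict into a scalar via a weighted mean."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(weights[c] for c in CRITERIA)
    return sum(weights[c] * verdict.scores[c] for c in CRITERIA) / total

# Example: correct start and goal, but the path clips an obstacle
# and takes a detour.
verdict = JudgeVerdict(scores={
    "start_location_consistency": 1.0,
    "goal_satisfaction": 0.9,
    "obstacle_avoidance": 0.4,
    "trajectory_efficiency": 0.6,
})
print(round(pointwise_score(verdict), 3))  # 0.725
```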