# World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning

Paper • Code • Dataset

## Overview
This repository provides a demo dataset for the paper:
World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning
Wanyue Zhang et al., 2026
### Motivation
Vision-Language Models (VLMs) excel at static visual understanding but struggle with dynamic spatial reasoning, such as predicting how a scene changes under actions (e.g., moving forward, turning).
### Key Idea
We introduce World2VLM, a framework that uses world models as training-time teachers to distill spatial imagination into VLMs, enabling them to reason about future views and action consequences without external simulation at inference time.
This repository contains:
- A compact demo dataset showcasing the data construction pipeline
- Representative trajectory-based supervision samples
- Examples of 8 dynamic spatial reasoning task types
Note: The full dataset will be released soon.
## What is World2VLM?
World2VLM trains VLMs to mentally simulate the world by learning from world-model-generated transitions:
- Input: an image + an action (e.g., move forward)
- World model: generates the future view
- Output: structured supervision for reasoning
This enables two key capabilities:
- Inverse reasoning: infer the action from image changes
- Forward reasoning: predict what happens after an action
Unlike prior work, no world model is needed at inference time.
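As a rough sketch of the two directions, hypothetical prompts (not the paper's exact templates) might look like this:

```python
# Hypothetical prompt templates for the two reasoning directions.
# The trained VLM answers these directly; no world model is called.

# Forward reasoning: P(outcome | before, action)
forward_prompt = (
    "<image>\n"
    "If the camera moves forward and turns left, "
    "describe what the scene will look like."
)

# Inverse reasoning: P(action | before, after)
inverse_prompt = (
    "<image> <image>\n"
    "What camera action transformed the first view into the second?"
)
```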
## Dataset Structure
```
data-demo/
├── README.md
├── SVC-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── SVC-SimulatedScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── HY-WorldPlay-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
└── HY-WorldPlay-SimulatedScene-demo
    ├── tasks_demo.jsonl
    └── scenes/demo_scene/...
```
## Included Demo Subsets
We provide four compact subsets covering:
| Teacher Model | Scene Type | Description |
|---|---|---|
| SVC | Real Scene | Camera-conditioned view synthesis |
| SVC | Simulated Scene | Synthetic environment transitions |
| HY-WorldPlay | Real Scene | Action-conditioned world dynamics |
| HY-WorldPlay | Simulated Scene | Long-horizon simulated trajectories |
Each subset includes:
- A trajectory bundle (images + metadata)
- A `tasks_demo.jsonl` file with structured supervision
## Data Format

Each line in `tasks_demo.jsonl` represents one training example.

### Common Fields
- `task_type`: one of 8 spatial reasoning tasks (`A1`–`A4`, `D1`–`D4`)
- `messages`: a two-turn conversation (user prompt, then target answer)
- `images`: relative paths to the referenced images
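For illustration, a single line might look like the following. This is a hypothetical sketch built from the fields above; the exact keys, message schema, and paths in the released files may differ:

```json
{
  "task_type": "A2",
  "messages": [
    {"role": "user", "content": "<image> <image> In which direction did the camera turn between these two views?"},
    {"role": "assistant", "content": "The camera turned left."}
  ],
  "images": ["scenes/demo_scene/view_before.png", "scenes/demo_scene/view_after.png"]
}
```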
## Task Suite (8 Types)
World2VLM defines a bidirectional task suite:
### Motion-Centric (A-series)
| Task | Description |
|---|---|
| A1 | Motion distance estimation |
| A2 | Motion orientation estimation |
| A3 | Multi-step motion prediction |
| A4 | Action-sequence verification |
### Object-Centric (D-series)
| Task | Description |
|---|---|
| D1 | Post-action bounding box prediction |
| D2 | Post-action visibility detection |
| D3 | Cross-view action inference |
| D4 | Object consistency across views |
Together, these tasks require:
- Understanding camera motion
- Tracking object transformations
- Reasoning about viewpoint changes
## Data Construction Pipeline
The dataset is generated using world models as teachers:
1. Start from an anchor image
2. Sample an egocentric action sequence
3. Generate future views via world models
4. Convert transitions into:
   - Forward tasks (predict outcomes)
   - Inverse tasks (recover actions)

This yields structured supervision of the form:

- `P(action | before, after)` (inverse)
- `P(outcome | before, action)` (forward)
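For concreteness, here is a minimal sketch of that loop in Python. Everything in it is illustrative: the action vocabulary, the `world_model.generate` interface (standing in for SVC or HY-WorldPlay as teacher), and the prompt wording are assumptions, not the paper's actual pipeline code.

```python
import random

# Illustrative action vocabulary; the real pipeline samples egocentric actions.
ACTIONS = ["move forward", "move backward", "turn left", "turn right"]

def build_supervision(anchor_image: str, world_model, num_steps: int = 3) -> list[dict]:
    """Turn one world-model rollout into forward and inverse examples.

    `world_model.generate(image_path, action)` is an assumed interface
    that returns the path of the imagined next view.
    """
    examples = []
    before = anchor_image
    for action in (random.choice(ACTIONS) for _ in range(num_steps)):
        after = world_model.generate(before, action)  # teacher imagines the future view
        # Forward task: P(outcome | before, action)
        examples.append({
            "task_type": "forward",
            "messages": [
                {"role": "user",
                 "content": f"<image> What will you see after you {action}?"},
                {"role": "assistant",
                 "content": f"<answer derived from the imagined view {after}>"},
            ],
            "images": [before],
        })
        # Inverse task: P(action | before, after)
        examples.append({
            "task_type": "inverse",
            "messages": [
                {"role": "user",
                 "content": "<image> <image> Which action turned the first view into the second?"},
                {"role": "assistant", "content": action},
            ],
            "images": [before, after],
        })
        before = after  # chain transitions for multi-step trajectories
    return examples
```

Note that the world model appears only in this construction loop; the VLM trains on the resulting JSONL examples alone, which is why no simulator is needed at inference time.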
## Key Features
- Spatial imagination distilled into VLMs
- Bidirectional reasoning supervision
- Multi-task structured dataset
- No world model needed at inference
- Supports both real and simulated scenes
## Why This Matters
World2VLM addresses a core limitation:

- VLMs fail at mental simulation.
- World models can simulate, but are expensive to run at inference.

Our solution: train VLMs to internalize world-model reasoning.
This leads to:
- Better dynamic spatial reasoning
- Lower inference cost
- Improved performance on spatial reasoning benchmarks
## Notes
- This repo contains demo-scale data only
- The full dataset (~100K samples) will be released soon
- The demo is intended for:
  - Format inspection (see the loading snippet below)
  - Pipeline understanding
  - Task design exploration
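For instance, each demo subset can be inspected with the standard library alone. This assumes the directory layout and field names described above:

```python
import json
from collections import Counter
from pathlib import Path

# Load one demo subset; swap in any of the four subset directories.
subset = Path("data-demo/SVC-RealScene-demo")
with open(subset / "tasks_demo.jsonl", encoding="utf-8") as f:
    tasks = [json.loads(line) for line in f]

# How many examples of each of the 8 task types does the demo contain?
print(Counter(t["task_type"] for t in tasks))

# Peek at the first example's conversation and image paths.
print(tasks[0]["messages"])
print(tasks[0]["images"])
```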
## Citation
If you find this work useful, please cite:
```bibtex
@misc{zhang2026world2vlmdistillingworldmodel,
  title={World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning},
  author={Wanyue Zhang and Wenxiang Wu and Wang Xu and Jiaxin Luo and Helu Zhi and Yibin Huang and Shuo Ren and Zitao Liu and Jiajun Zhang},
  year={2026},
  eprint={2604.26934},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.26934},
}
```
## Acknowledgements
We thank the community for advances in:
- World Models (e.g., [SVC](https://arxiv.org/abs/2503.14489), [HY-WorldPlay](https://arxiv.org/abs/2412.03603))
- Vision-Language Models
- Spatial reasoning benchmarks
## Contact
For questions or collaborations, please open an issue or contact the authors via the paper.