---
license: apache-2.0
---
# World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning
<p align="center">
<a href="https://arxiv.org/abs/2604.26934">๐ Paper</a> โข
<a href="https://github.com/WanyueZhang-ai/World2VLM">๐ป Code</a> โข
<a href="https://huggingface.co/datasets/WanyueZhang/World2VLM">๐ค Dataset</a>
</p>
---
## Overview
This repository provides a **demo dataset** for the paper:
> **World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning**
> *Wanyue Zhang et al., 2026*
**Motivation**
Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning).
**Key Idea**
We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs, enabling them to reason about **future views and action consequences without external simulation at inference time**.
**This repository** contains:
- A **compact demo dataset** showcasing the data construction pipeline
- Representative **trajectory-based supervision samples**
- Examples of **8 dynamic spatial reasoning task types**
**Note:** The **full dataset will be released soon**.
---
## What is World2VLM?
World2VLM trains VLMs to **mentally simulate the world** by learning from world-model-generated transitions:
- Input: an image + an action (e.g., move forward)
- World model: generates the future view
- Output: structured supervision for reasoning
This enables two key capabilities:
- **Inverse reasoning**: infer the action from image changes
- **Forward reasoning**: predict what happens after an action
Unlike prior work, **no world model is needed at inference time**.
---
## Dataset Structure
```bash
data-demo/
├── README.md
├── SVC-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── SVC-SimulatedScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── HY-WorldPlay-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
└── HY-WorldPlay-SimulatedScene-demo
    ├── tasks_demo.jsonl
    └── scenes/demo_scene/...
```
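For a quick look at the demo, the four subsets can be enumerated with a few lines of Python (a minimal sketch, assuming the data has been downloaded into a local `data-demo/` folder laid out as above):
```python
from pathlib import Path

ROOT = Path("data-demo")  # adjust to wherever the demo data was downloaded
SUBSETS = [
    "SVC-RealScene-demo",
    "SVC-SimulatedScene-demo",
    "HY-WorldPlay-RealScene-demo",
    "HY-WorldPlay-SimulatedScene-demo",
]

for subset in SUBSETS:
    jsonl_path = ROOT / subset / "tasks_demo.jsonl"
    with jsonl_path.open(encoding="utf-8") as f:
        n_samples = sum(1 for line in f if line.strip())  # one JSON record per line
    print(f"{subset}: {n_samples} samples")
```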
## Included Demo Subsets
We provide **four compact subsets** covering:
| Teacher Model | Scene Type | Description |
|--------------|------------|-------------|
| SVC | Real Scene | Camera-conditioned view synthesis |
| SVC | Simulated Scene | Synthetic environment transitions |
| HY-WorldPlay | Real Scene | Action-conditioned world dynamics |
| HY-WorldPlay | Simulated Scene | Long-horizon simulated trajectories |
Each subset includes:
- A **trajectory bundle** (images + metadata)
- A **`tasks_demo.jsonl`** file with structured supervision
---
## Data Format
Each line in `tasks_demo.jsonl` represents one training example.
### Common Fields
- `task_type`
One of 8 spatial reasoning tasks: `A1–A4`, `D1–D4`
- `messages`
A two-turn conversation:
- User prompt
- Target answer
- `images`
Relative paths to referenced images
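A minimal sketch of reading a single example and inspecting the fields above (only the field names are taken from this section; the exact prompt and answer strings differ per task and subset):
```python
import json
from pathlib import Path

path = Path("data-demo/SVC-RealScene-demo/tasks_demo.jsonl")
with path.open(encoding="utf-8") as f:
    example = json.loads(f.readline())

print(example["task_type"])        # one of the 8 task types (A1-A4, D1-D4)
print(example["images"])           # relative paths to the referenced images
for turn in example["messages"]:   # two turns: user prompt, then target answer
    print(turn)
```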
---
## Task Suite (8 Types)
World2VLM defines a **bidirectional task suite**:
### Motion-Centric (A-series)
| Task | Description |
|------|-------------|
| A1 | Motion distance estimation |
| A2 | Motion orientation estimation |
| A3 | Multi-step motion prediction |
| A4 | Action-sequence verification |
### Object-Centric (D-series)
| Task | Description |
|------|-------------|
| D1 | Post-action bounding box prediction |
| D2 | Post-action visibility detection |
| D3 | Cross-view action inference |
| D4 | Object consistency across views |
These tasks jointly require:
- Understanding **camera motion**
- Tracking **object transformations**
- Reasoning about **viewpoint changes**
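Because every record carries a `task_type`, the spread of the 8 task types across the demo can be checked directly (a small sketch reusing the directory layout and field names documented above):
```python
import json
from collections import Counter
from pathlib import Path

counts = Counter()
for jsonl_path in Path("data-demo").glob("*/tasks_demo.jsonl"):
    with jsonl_path.open(encoding="utf-8") as f:
        counts.update(json.loads(line)["task_type"] for line in f if line.strip())

print(dict(sorted(counts.items())))  # expected keys: A1-A4 and D1-D4
```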
---
## Data Construction Pipeline
The dataset is generated using **world models as teachers**:
1. Start from an **anchor image**
2. Sample an **egocentric action sequence**
3. Generate **future views** via world models
4. Convert transitions into:
- Forward tasks (predict outcomes)
- Inverse tasks (recover actions)
This yields structured supervision of the form:
- `P(action | before, after)` (inverse)
- `P(outcome | before, action)` (forward)
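To make these two forms concrete, the sketch below shows how a single world-model transition (before view, action, generated after view) could be packaged into one inverse and one forward record in the `messages`/`images` format described earlier. The message structure (`role`/`content` keys), the `<image>` placeholders, the prompt wording, and the task-type assignments are illustrative assumptions, not the authors' exact pipeline:
```python
def transition_to_records(before_img, after_img, action, outcome_description):
    """Hypothetical sketch: turn one (before, action, after) transition into
    an inverse and a forward supervision record."""
    # Inverse supervision: P(action | before, after)
    inverse = {
        "task_type": "D3",  # e.g. cross-view action inference
        "messages": [
            {"role": "user",
             "content": "<image> <image> Which action transforms the first view into the second?"},
            {"role": "assistant", "content": action},
        ],
        "images": [before_img, after_img],
    }
    # Forward supervision: P(outcome | before, action)
    forward = {
        "task_type": "A3",  # e.g. multi-step motion prediction
        "messages": [
            {"role": "user",
             "content": f"<image> After executing '{action}', how does the view change?"},
            {"role": "assistant", "content": outcome_description},
        ],
        "images": [before_img],
    }
    return inverse, forward
```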
---
## Key Features
- **Spatial imagination distilled into VLMs**
- **Bidirectional reasoning supervision**
- **Multi-task structured dataset**
- **No world model needed at inference**
- Supports both **real and simulated scenes**
---
## Why This Matters
World2VLM addresses a core limitation:
> VLMs fail at mental simulation.
> World models can simulate, but they are expensive.
**Our solution:**
Train VLMs to *internalize* world-model reasoning.
This leads to:
- Better **dynamic spatial reasoning**
- Lower **inference cost**
- Improved performance on spatial reasoning benchmarks
---
## Notes
- This repo contains **demo-scale data only**
- Full dataset (~100K samples) will be released soon
- Demo is intended for:
  - Format inspection
  - Pipeline understanding
  - Task design exploration
---
## Citation
If you find this work useful, please cite:
```bibtex
@misc{zhang2026world2vlmdistillingworldmodel,
title={World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning},
author={Wanyue Zhang and Wenxiang Wu and Wang Xu and Jiaxin Luo and Helu Zhi and Yibin Huang and Shuo Ren and Zitao Liu and Jiajun Zhang},
year={2026},
eprint={2604.26934},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2604.26934},
}
```
---
## Acknowledgements
We thank the community for advances in:
* World Models (e.g., [SVC](https://arxiv.org/abs/2503.14489), [HY-WorldPlay](https://arxiv.org/abs/2412.03603))
* Vision-Language Models
* Spatial reasoning benchmarks
---
## Contact
For questions or collaborations, please open an issue or contact the authors via the paper.
---