---
license: apache-2.0
---
# 🌍 World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning

<p align="center">
  <a href="https://arxiv.org/abs/2604.26934">๐Ÿ“„ Paper</a> โ€ข
  <a href="https://github.com/WanyueZhang-ai/World2VLM">๐Ÿ’ป Code</a> โ€ข
  <a href="https://huggingface.co/datasets/WanyueZhang/World2VLM">๐Ÿค— Dataset</a>
</p>

---

## ✨ Overview

This repository provides a **demo dataset** for the paper:

> **World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning**  
> *Wanyue Zhang et al., 2026*

🔍 **Motivation**  
Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning). 

💡 **Key Idea**  
We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs, enabling them to reason about **future views and action consequences without external simulation at inference time**.

📦 **This repository** contains:
- A **compact demo dataset** showcasing the data construction pipeline
- Representative **trajectory-based supervision samples**
- Examples of **8 dynamic spatial reasoning task types**

⚠️ The **full dataset will be released soon**.

---

## 🧠 What is World2VLM?

World2VLM trains VLMs to **mentally simulate the world** by learning from world-model-generated transitions:

- Input: an image + an action (e.g., move forward)
- World model: generates the future view
- Output: structured supervision for reasoning

This enables two key capabilities:
- 🔁 **Inverse reasoning**: infer the action from image changes  
- 🔮 **Forward reasoning**: predict what happens after an action  

Unlike prior work, **no world model is needed at inference time**.

---

## 📂 Dataset Structure

```bash
data-demo/
├── README.md
├── SVC-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── SVC-SimulatedScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── HY-WorldPlay-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
└── HY-WorldPlay-SimulatedScene-demo
    ├── tasks_demo.jsonl
    └── scenes/demo_scene/...
```
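
A minimal sketch for inspecting one of the demo subsets with plain Python. The local path and the use of the standard `json` module are assumptions based on the layout above; only the directory and file names come from this repository.

```python
import json
from pathlib import Path

# Pick any of the four subsets shown in the tree above.
subset_dir = Path("data-demo/SVC-RealScene-demo")

# Each line of tasks_demo.jsonl is one JSON-encoded training example.
records = []
with open(subset_dir / "tasks_demo.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

print(f"Loaded {len(records)} examples")
print(records[0]["task_type"])  # e.g. one of A1-A4 / D1-D4
print(records[0]["images"])     # relative paths under scenes/
```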

## 🔍 Included Demo Subsets

We provide **four compact subsets** covering:

| Teacher Model | Scene Type | Description |
|--------------|------------|-------------|
| SVC | Real Scene | Camera-conditioned view synthesis |
| SVC | Simulated Scene | Synthetic environment transitions |
| HY-WorldPlay | Real Scene | Action-conditioned world dynamics |
| HY-WorldPlay | Simulated Scene | Long-horizon simulated trajectories |

Each subset includes:
- 🎬 A **trajectory bundle** (images + metadata)
- 📝 A **`tasks_demo.jsonl`** file with structured supervision

---

## 🧾 Data Format

Each line in `tasks_demo.jsonl` represents one training example.

### Common Fields

- `task_type`  
  One of 8 spatial reasoning tasks: `A1`–`A4`, `D1`–`D4`

- `messages`  
  A two-turn conversation:
  - User prompt  
  - Target answer  

- `images`  
  Relative paths to referenced images
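
For orientation, the sketch below shows the rough shape of a single record. Only the three top-level keys follow the field list above; the `role`/`content` message layout, the prompt wording, and the image paths are illustrative assumptions, not taken from the actual files.

```python
# Illustrative record shape; all values are hypothetical.
example = {
    "task_type": "A2",  # one of A1-A4 / D1-D4
    "messages": [       # two-turn conversation: user prompt, then target answer
        {"role": "user",
         "content": "<image> <image> How did the camera move between these two views?"},
        {"role": "assistant",
         "content": "The camera turned left and moved slightly forward."},
    ],
    "images": [         # relative paths to the referenced images
        "scenes/demo_scene/frame_000.png",
        "scenes/demo_scene/frame_005.png",
    ],
}
```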

---

## 🧩 Task Suite (8 Types)

World2VLM defines a **bidirectional task suite**:

### 🔁 Motion-Centric (A-series)

| Task | Description |
|------|-------------|
| A1 | Motion distance estimation |
| A2 | Motion orientation estimation |
| A3 | Multi-step motion prediction |
| A4 | Action-sequence verification |

### 🎯 Object-Centric (D-series)

| Task | Description |
|------|-------------|
| D1 | Post-action bounding box prediction |
| D2 | Post-action visibility detection |
| D3 | Cross-view action inference |
| D4 | Object consistency across views |

💡 These tasks jointly enforce:
- Understanding **camera motion**
- Tracking **object transformations**
- Reasoning about **viewpoint changes**
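
To see how the demo examples are spread across these eight task types, a quick tally can be run over the `records` list from the loading sketch above; the key name `task_type` comes from the Data Format section, and everything else is standard-library code.

```python
from collections import Counter

# `records` is the list loaded in the earlier snippet.
by_task = Counter(r["task_type"] for r in records)
for task in ["A1", "A2", "A3", "A4", "D1", "D2", "D3", "D4"]:
    print(task, by_task.get(task, 0))
```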

---

## ⚙️ Data Construction Pipeline

The dataset is generated using **world models as teachers**:

1. 🖼️ Start from an **anchor image**  
2. 🎮 Sample an **egocentric action sequence**  
3. 🌍 Generate **future views** via world models  
4. 🧠 Convert transitions into:
   - Forward tasks (predict outcomes)  
   - Inverse tasks (recover actions)  

This yields structured supervision of the form:

- `P(action | before, after)` (inverse)  
- `P(outcome | before, action)` (forward)  

---

## 🚀 Key Features

- 🧠 **Spatial imagination distilled into VLMs**
- 🔄 **Bidirectional reasoning supervision**
- 🧩 **Multi-task structured dataset**
- ⚡ **No world model needed at inference**
- 🌍 Supports both **real and simulated scenes**

---

## 📊 Why This Matters

World2VLM addresses a core limitation:

> ❌ VLMs fail at mental simulation  
> ✅ World models can simulate, but they are expensive to run  

👉 **Our solution:**  
Train VLMs to *internalize* world-model reasoning.

This leads to:
- Better **dynamic spatial reasoning**
- Lower **inference cost**
- Improved performance on spatial reasoning benchmarks

---

## 📝 Notes

- This repo contains **demo-scale data only**
- Full dataset (~100K samples) will be released soon
- Demo is intended for:
  - 🔍 Format inspection  
  - 🧪 Pipeline understanding  
  - 🧠 Task design exploration  

---
## 📚 Citation

If you find this work useful, please cite:

```
@misc{zhang2026world2vlmdistillingworldmodel,
      title={World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning}, 
      author={Wanyue Zhang and Wenxiang Wu and Wang Xu and Jiaxin Luo and Helu Zhi and Yibin Huang and Shuo Ren and Zitao Liu and Jiajun Zhang},
      year={2026},
      eprint={2604.26934},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.26934}, 
}
```

---
## 🤝 Acknowledgements

We thank the community for advances in:

* 🌍 World Models (e.g., [SVC](https://arxiv.org/abs/2503.14489), [HY-WorldPlay](https://arxiv.org/abs/2412.03603))
* 🤖 Vision-Language Models
* 🧠 Spatial reasoning benchmarks

---
## 📬 Contact

For questions or collaborations, please open an issue or contact the authors via the paper.

---