---
license: apache-2.0
---
# World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning

<p align="center">
<a href="https://arxiv.org/abs/2604.26934">Paper</a> •
<a href="https://github.com/WanyueZhang-ai/World2VLM">Code</a> •
<a href="https://huggingface.co/datasets/WanyueZhang/World2VLM">Dataset</a>
</p>

---
## Overview

This repository provides a **demo dataset** for the paper:

> **World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning**
> *Wanyue Zhang et al., 2026*

**Motivation**
Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning).

**Key Idea**
We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs, enabling them to reason about **future views and action consequences without external simulation at inference time**.

**This repository** contains:
- A **compact demo dataset** showcasing the data construction pipeline
- Representative **trajectory-based supervision samples**
- Examples of **8 dynamic spatial reasoning task types**

The **full dataset will be released soon**.

---
## What is World2VLM?

World2VLM trains VLMs to **mentally simulate the world** by learning from world-model-generated transitions:

- Input: an image + an action (e.g., move forward)
- World model: generates the future view
- Output: structured supervision for reasoning

This enables two key capabilities:
- **Inverse reasoning**: infer the action from image changes
- **Forward reasoning**: predict what happens after an action

Unlike prior work, **no world model is needed at inference time**.

---
## Dataset Structure

```bash
data-demo/
├── README.md
├── SVC-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── SVC-SimulatedScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── HY-WorldPlay-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
└── HY-WorldPlay-SimulatedScene-demo
    ├── tasks_demo.jsonl
    └── scenes/demo_scene/...
```
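To fetch the demo locally, one option is the Hugging Face Hub client. The snippet below is a minimal sketch assuming the layout above; `local_dir` is just an example destination.

```python
# Minimal sketch: fetch the demo dataset with huggingface_hub.
# Assumes the repository layout shown above; local_dir is an example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="WanyueZhang/World2VLM",
    repo_type="dataset",
    local_dir="./data-demo",
)
print("Demo data downloaded to:", local_path)
```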
## Included Demo Subsets

We provide **four compact subsets** covering:

| Teacher Model | Scene Type | Description |
|---------------|------------|-------------|
| SVC | Real Scene | Camera-conditioned view synthesis |
| SVC | Simulated Scene | Synthetic environment transitions |
| HY-WorldPlay | Real Scene | Action-conditioned world dynamics |
| HY-WorldPlay | Simulated Scene | Long-horizon simulated trajectories |

Each subset includes:
- A **trajectory bundle** (images + metadata)
- A **`tasks_demo.jsonl`** file with structured supervision

---
## Data Format

Each line in `tasks_demo.jsonl` represents one training example.

### Common Fields

- `task_type`
  One of 8 spatial reasoning tasks: `A1`–`A4`, `D1`–`D4`

- `messages`
  A two-turn conversation:
  - User prompt
  - Target answer

- `images`
  Relative paths to referenced images

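As a quick format check, the sketch below loads the first record from one subset's `tasks_demo.jsonl` and prints the common fields. The file path follows the directory layout above; only the field names listed here are taken from the card, anything beyond that is an assumption.

```python
# Minimal sketch: inspect one record from a demo subset's tasks_demo.jsonl.
# The path follows the directory layout above; field names come from the
# "Common Fields" list, and anything beyond that is an assumption.
import json

with open("data-demo/SVC-RealScene-demo/tasks_demo.jsonl", encoding="utf-8") as f:
    example = json.loads(f.readline())

print(example["task_type"])  # one of A1-A4 / D1-D4
print(example["images"])     # relative paths to the referenced images
print(example["messages"])   # two-turn conversation: user prompt + target answer
```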
---

## Task Suite (8 Types)

World2VLM defines a **bidirectional task suite**:

### Motion-Centric (A-series)

| Task | Description |
|------|-------------|
| A1 | Motion distance estimation |
| A2 | Motion orientation estimation |
| A3 | Multi-step motion prediction |
| A4 | Action-sequence verification |

### Object-Centric (D-series)

| Task | Description |
|------|-------------|
| D1 | Post-action bounding box prediction |
| D2 | Post-action visibility detection |
| D3 | Cross-view action inference |
| D4 | Object consistency across views |

These tasks jointly enforce:
- Understanding **camera motion**
- Tracking **object transformations**
- Reasoning about **viewpoint changes**
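The `A`/`D` prefix of `task_type` also makes it easy to separate the two halves of the suite; the sketch below groups demo records into motion-centric and object-centric sets (purely illustrative, assuming the JSONL format described in the Data Format section).

```python
# Illustrative sketch: split demo records into the A-series (motion-centric)
# and D-series (object-centric) groups using the task_type prefix.
# Assumes the tasks_demo.jsonl format described under "Data Format".
import json
from collections import defaultdict

groups = defaultdict(list)
with open("data-demo/SVC-RealScene-demo/tasks_demo.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        series = "motion-centric" if record["task_type"].startswith("A") else "object-centric"
        groups[series].append(record)

for series, records in groups.items():
    print(series, len(records))
```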
---

## Data Construction Pipeline

The dataset is generated using **world models as teachers**:

1. Start from an **anchor image**
2. Sample an **egocentric action sequence**
3. Generate **future views** via world models
4. Convert transitions into:
   - Forward tasks (predict outcomes)
   - Inverse tasks (recover actions)

This yields structured supervision of the form:

- `P(action | before, after)` (inverse)
- `P(outcome | before, action)` (forward)
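As a rough illustration of this step (not the authors' released pipeline), the sketch below turns one world-model transition into an inverse record and a forward record; all field names and prompt wording here are hypothetical.

```python
# Hypothetical sketch: convert one world-model transition
# (before image, action, generated after image) into an inverse and a
# forward supervision record. Field names and prompts are illustrative only.

def build_records(before_img: str, action: str, after_img: str) -> list[dict]:
    inverse = {  # P(action | before, after)
        "direction": "inverse",
        "images": [before_img, after_img],
        "messages": [
            {"role": "user", "content": "What action was taken between these two views?"},
            {"role": "assistant", "content": action},
        ],
    }
    forward = {  # P(outcome | before, action)
        "direction": "forward",
        "images": [before_img],
        "messages": [
            {"role": "user", "content": f"After the action '{action}', how does the view change?"},
            {"role": "assistant", "content": f"See the generated view: {after_img}."},
        ],
    }
    return [inverse, forward]


print(build_records("scenes/demo_scene/frame_000.png", "move forward",
                    "scenes/demo_scene/frame_001.png"))
```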
---

## Key Features

- **Spatial imagination distilled into VLMs**
- **Bidirectional reasoning supervision**
- **Multi-task structured dataset**
- **No world model needed at inference**
- Supports both **real and simulated scenes**

---
## Why This Matters

World2VLM addresses a core limitation:

> VLMs fail at mental simulation.
> World models can simulate, but they are expensive to run.

**Our solution:**
Train VLMs to *internalize* world-model reasoning.

This leads to:
- Better **dynamic spatial reasoning**
- Lower **inference cost**
- Improved performance on spatial reasoning benchmarks

---
## Notes

- This repo contains **demo-scale data only**
- The full dataset (~100K samples) will be released soon
- The demo is intended for:
  - Format inspection
  - Pipeline understanding
  - Task design exploration

---
## Citation

If you find this work useful, please cite:

```bibtex
@misc{zhang2026world2vlmdistillingworldmodel,
  title={World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning},
  author={Wanyue Zhang and Wenxiang Wu and Wang Xu and Jiaxin Luo and Helu Zhi and Yibin Huang and Shuo Ren and Zitao Liu and Jiajun Zhang},
  year={2026},
  eprint={2604.26934},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.26934},
}
```

---
## Acknowledgements

We thank the community for advances in:

- World Models (e.g., [SVC](https://arxiv.org/abs/2503.14489), [HY-WorldPlay](https://arxiv.org/abs/2412.03603))
- Vision-Language Models
- Spatial reasoning benchmarks

---

## Contact

For questions or collaborations, please open an issue or contact the authors via the paper.