Update README.md
README.md

This repository provides a **demo dataset** for the paper:
> *Wanyue Zhang et al., 2026*

🔍 **Motivation**

Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning).

💡 **Key Idea**

We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs, enabling them to reason about **future views and action consequences without external simulation at inference time**.

📦 **This repository** contains:

- A **compact demo dataset** showcasing the data construction pipeline