Modalities: Image · Size: < 1K · Libraries: Datasets
WanyueZhang committed (verified) · Commit 31842c6 · Parent: dae56d9

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -19,10 +19,10 @@ This repository provides a **demo dataset** for the paper:
 > *Wanyue Zhang et al., 2026*
 
 🔍 **Motivation**
-Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning). [oai_citation:0‡2604.26934v1.pdf](sediment://file_000000007620720b9a6b178b8a8dc3c9)
+Vision-Language Models (VLMs) excel at static visual understanding but struggle with **dynamic spatial reasoning**, such as predicting how a scene changes under actions (e.g., moving forward, turning).
 
 💡 **Key Idea**
-We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs—enabling them to reason about **future views and action consequences without external simulation at inference time**. [oai_citation:1‡2604.26934v1.pdf](sediment://file_000000007620720b9a6b178b8a8dc3c9)
+We introduce **World2VLM**, a framework that uses **world models as training-time teachers** to distill *spatial imagination* into VLMs—enabling them to reason about **future views and action consequences without external simulation at inference time**.
 
 📦 **This repository** contains:
 - A **compact demo dataset** showcasing the data construction pipeline
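To make the "world models as training-time teachers" idea in the README concrete, here is a minimal PyTorch sketch of one distillation step. Everything in it is a hypothetical illustration: `FrozenWorldModel`, `StudentVLMHead`, the feature dimension, and the discrete action set are assumptions, not the paper's or this repository's actual architecture or training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenWorldModel(nn.Module):
    """Hypothetical teacher: maps (current-view features, action) to
    imagined future-view features. Used only at training time."""
    def __init__(self, dim=256, n_actions=4):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, dim)
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, view_feat, action):
        x = torch.cat([view_feat, self.action_emb(action)], dim=-1)
        return self.net(x)

class StudentVLMHead(nn.Module):
    """Hypothetical student head on top of a VLM's vision features:
    learns to predict the future view itself, so no simulator is
    needed at inference time."""
    def __init__(self, dim=256, n_actions=4):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, dim)
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, view_feat, action):
        x = torch.cat([view_feat, self.action_emb(action)], dim=-1)
        return self.net(x)

teacher = FrozenWorldModel().eval()
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher stays frozen

student = StudentVLMHead()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# One distillation step on dummy data: the teacher "imagines" the
# future view under an action; the student learns to match it.
view_feat = torch.randn(8, 256)       # current-view features (e.g., from a vision encoder)
action = torch.randint(0, 4, (8,))    # discrete actions: move forward, turn left, ...

with torch.no_grad():
    target = teacher(view_feat, action)  # training-time-only imagination

opt.zero_grad()
pred = student(view_feat, action)
loss = F.mse_loss(pred, target)
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```

The point of the sketch is the division of labor: the world model appears only inside `torch.no_grad()` during training, while the student alone handles action-conditioned prediction at inference, matching the README's "without external simulation at inference time" claim.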