---
license: apache-2.0
---

๐ŸŒ World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning

📄 Paper • 💻 Code • 🤗 Dataset


✨ Overview

This repository provides a demo dataset for the paper:

World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning
Wanyue Zhang et al., 2026

๐Ÿ” Motivation
Vision-Language Models (VLMs) excel at static visual understanding but struggle with dynamic spatial reasoning, such as predicting how a scene changes under actions (e.g., moving forward, turning).

💡 Key Idea
We introduce World2VLM, a framework that uses world models as training-time teachers to distill spatial imagination into VLMs, enabling them to reason about future views and action consequences without external simulation at inference time.

📦 This repository contains:

  • A compact demo dataset showcasing the data construction pipeline
  • Representative trajectory-based supervision samples
  • Examples of 8 dynamic spatial reasoning task types

โš ๏ธ The full dataset will be released soon.


🧠 What is World2VLM?

World2VLM trains VLMs to mentally simulate the world by learning from world-model-generated transitions:

  • Input: an image + an action (e.g., move forward)
  • World model: generates the future view
  • Output: structured supervision for reasoning

This enables two key capabilities:

  • ๐Ÿ” Inverse reasoning: infer the action from image changes
  • ๐Ÿ”ฎ Forward reasoning: predict what happens after an action

Unlike prior work, no world model is needed at inference time.


📂 Dataset Structure

data-demo/
├── README.md
├── SVC-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── SVC-SimulatedScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
├── HY-WorldPlay-RealScene-demo
│   ├── tasks_demo.jsonl
│   └── scenes/demo_scene/...
└── HY-WorldPlay-SimulatedScene-demo
    ├── tasks_demo.jsonl
    └── scenes/demo_scene/...
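
A minimal sketch for fetching the demo files locally with huggingface_hub; the repo id below is assumed from this dataset card and may need adjusting.

```python
# Minimal sketch: download the demo dataset files with huggingface_hub.
from huggingface_hub import snapshot_download

# NOTE: repo_id is an assumption based on this card; replace it if the actual id differs.
local_dir = snapshot_download(repo_id="WanyueZhang/World2VLM", repo_type="dataset")
print(local_dir)  # local path containing the data-demo/ contents
```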

๐Ÿ” Included Demo Subsets

We provide four compact subsets covering:

| Teacher Model | Scene Type | Description |
| --- | --- | --- |
| SVC | Real Scene | Camera-conditioned view synthesis |
| SVC | Simulated Scene | Synthetic environment transitions |
| HY-WorldPlay | Real Scene | Action-conditioned world dynamics |
| HY-WorldPlay | Simulated Scene | Long-horizon simulated trajectories |

Each subset includes:

  • 🎬 A trajectory bundle (images + metadata)
  • 📄 A tasks_demo.jsonl file with structured supervision

🧾 Data Format

Each line in tasks_demo.jsonl represents one training example; a minimal loading sketch follows the field list below.

Common Fields

  • task_type
    One of 8 spatial reasoning tasks: A1–A4, D1–D4

  • messages
    A two-turn conversation:

    • User prompt
    • Target answer
  • images
    Relative paths to referenced images
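
A minimal loading sketch, assuming the directory layout above and plain JSON-lines records with exactly these fields:

```python
import json
from pathlib import Path

# Any of the four demo subsets; the path assumes the layout shown under "Dataset Structure".
subset_dir = Path("data-demo/SVC-RealScene-demo")

with open(subset_dir / "tasks_demo.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        print(example["task_type"])   # one of A1-A4 / D1-D4
        print(example["messages"])    # two-turn conversation: user prompt + target answer
        # image paths are stored relative to the subset directory
        print([str(subset_dir / p) for p in example["images"]])
        break  # peek at the first example only
```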


🧩 Task Suite (8 Types)

World2VLM defines a bidirectional task suite:

๐Ÿ” Motion-Centric (A-series)

| Task | Description |
| --- | --- |
| A1 | Motion distance estimation |
| A2 | Motion orientation estimation |
| A3 | Multi-step motion prediction |
| A4 | Action-sequence verification |

🎯 Object-Centric (D-series)

| Task | Description |
| --- | --- |
| D1 | Post-action bounding box prediction |
| D2 | Post-action visibility detection |
| D3 | Cross-view action inference |
| D4 | Object consistency across views |

💡 These tasks jointly enforce:

  • Understanding camera motion
  • Tracking object transformations
  • Reasoning about viewpoint changes

โš™๏ธ Data Construction Pipeline

The dataset is generated using world models as teachers:

  1. ๐Ÿ–ผ๏ธ Start from an anchor image
  2. ๐ŸŽฎ Sample an egocentric action sequence
  3. ๐ŸŒ Generate future views via world models
  4. ๐Ÿง  Convert transitions into:
    • Forward tasks (predict outcomes)
    • Inverse tasks (recover actions)

This yields structured supervision of the form (see the sketch after this list):

  • P(action | before, after) (inverse)
  • P(outcome | before, action) (forward)
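
The snippet below is an illustrative sketch of step 4 only: packaging a single world-model transition into one inverse and one forward record. The message schema, prompt wording, and image conventions are placeholders for exposition, not the exact templates used to build the released data.

```python
# Illustrative only: turn one transition (before image, action, generated after image)
# into a P(action | before, after) record and a P(outcome | before, action) record.
def make_task_pair(before_img: str, action: str, after_img: str, outcome: str):
    inverse = {
        "task_type": "A2",  # e.g., motion orientation estimation
        "messages": [
            {"role": "user", "content": "Comparing the two views, what camera action was taken?"},
            {"role": "assistant", "content": action},
        ],
        "images": [before_img, after_img],
    }
    forward = {
        "task_type": "D2",  # e.g., post-action visibility detection
        "messages": [
            {"role": "user", "content": f"If the camera performs '{action}', how does the view change?"},
            {"role": "assistant", "content": outcome},  # answer derived from the generated future view
        ],
        "images": [before_img],  # assumption: forward tasks condition on the 'before' view only
    }
    return inverse, forward
```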

🚀 Key Features

  • 🧠 Spatial imagination distilled into VLMs
  • 🔄 Bidirectional reasoning supervision
  • 🧩 Multi-task structured dataset
  • ⚡ No world model needed at inference
  • 🌍 Supports both real and simulated scenes

📊 Why This Matters

World2VLM addresses a core limitation:

โŒ VLMs fail at mental simulation
โœ… World models can simulateโ€”but are expensive

👉 Our solution:
Train VLMs to internalize world-model reasoning.

This leads to:

  • Better dynamic spatial reasoning
  • Lower inference cost
  • Improved performance on spatial reasoning benchmarks

๐Ÿ“ Notes

  • This repo contains demo-scale data only
  • Full dataset (~100K samples) will be released soon
  • Demo is intended for:
    • ๐Ÿ” Format inspection
    • ๐Ÿงช Pipeline understanding
    • ๐Ÿง  Task design exploration

📚 Citation

If you find this work useful, please cite:

@misc{zhang2026world2vlmdistillingworldmodel,
      title={World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning}, 
      author={Wanyue Zhang and Wenxiang Wu and Wang Xu and Jiaxin Luo and Helu Zhi and Yibin Huang and Shuo Ren and Zitao Liu and Jiajun Zhang},
      year={2026},
      eprint={2604.26934},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.26934}, 
}

๐Ÿค Acknowledgements

We thank the community for the prior advances this work builds on.


📬 Contact

For questions or collaborations, please open an issue or contact the authors via the paper.