EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models
Ground truth data for EWMBench evaluation, extracted and restructured for direct use by the wm-evaluation-harness framework.
Extracted from agibot-world/EWMBench (gt_dataset.tar). Original dataset by AgibotTech.
```
{task_id}/
├── {episode_id}/
│   ├── prompt/
│   │   ├── init_frame.png   # Initial frame for generation
│   │   └── prompt.txt       # Task description
│   └── video/
│       ├── frame_00000.jpg  # Ground truth frame sequence
│       ├── frame_00001.jpg
│       └── ...
```
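The layout above can be traversed with the standard library alone. A minimal sketch (the `iter_episodes` helper is hypothetical, not part of the harness) that pairs each episode's prompt with its ground-truth frame sequence:

```python
from pathlib import Path


def iter_episodes(root):
    """Yield (task_id, episode_id, prompt_text, frame_paths) per episode.

    Assumes the {task_id}/{episode_id}/{prompt,video}/ layout shown above.
    """
    for task_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for ep_dir in sorted(p for p in task_dir.iterdir() if p.is_dir()):
            # Task description lives in prompt/prompt.txt
            prompt = (ep_dir / "prompt" / "prompt.txt").read_text().strip()
            # Ground-truth frames are zero-padded, so lexical sort == temporal order
            frames = sorted((ep_dir / "video").glob("frame_*.jpg"))
            yield task_dir.name, ep_dir.name, prompt, frames
```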
| Item | Count |
|---|---|
| Tasks | 7 (367, 392, 497, 511, 543, 558, 574) |
| Episodes per task | 3 |
| Total episodes | 21 |
| Total files | 3,959 |
| Size | ~129 MB |
This dataset is auto-downloaded by the framework when referenced as a HuggingFace dataset repo ID:
```yaml
# configs/ewmbench_hf.yaml
evaluation:
  gt_dir: Physis-AI/wm-eval-gt-ewmbench  # auto-download
  benchmarks:
    ewmbench:
      metrics: [psnr, ssim]
```

```bash
wm-eval evaluate --config configs/ewmbench_hf.yaml --videos <your_pred_dir>
```
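For reference, PSNR (one of the metrics configured above) scores a predicted frame against its ground-truth frame via mean squared error. A pure-Python sketch for 8-bit pixel values, shown only to illustrate the metric (the harness's own implementation may differ):

```python
import math


def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher is better; identical inputs (zero MSE) give infinity.
    """
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val**2 / mse)
```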
Or used programmatically:
```python
from wm_eval.evaluation.models.registry import ModelRegistry

registry = ModelRegistry()
gt_path = registry.resolve("Physis-AI/wm-eval-gt-ewmbench", repo_type="dataset")
```
This dataset follows the original CC BY-NC-SA 4.0 license from AgibotTech/EWMBench.
```bibtex
@article{ewmbench2025,
  title={EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models},
  author={AgibotTech},
  year={2025},
  eprint={2505.09694},
  archivePrefix={arXiv}
}
```