xyzhang368 and nielsr (HF Staff) committed

Commit 34bd8a8 · Parent: f4b741c

Add dataset card for RLA-WM (#1)

Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1): README.md (added, +46 -0)
---
task_categories:
- robotics
---

# Learning Visual Feature-Based World Models via Residual Latent Action

This repository contains the dataset artifacts for the paper [Learning Visual Feature-Based World Models via Residual Latent Action](https://huggingface.co/papers/2605.07079).

[Project page](https://mlzxy.github.io/rla-wm) | [GitHub](https://github.com/mlzxy/rla-wm)

## Dataset Description

The primary dataset included here is **Maniskill3DWorld**, which consists of multi-modal ManiSkill trajectories. It is designed for 3D and multi-view research and includes:
- 7-camera RGB, depth, and masks
- Animated robot meshes
- Voxel point clouds

## Sample Usage

You can download and extract the ManiSkill dataset using the Hugging Face CLI:

```bash
# Create the data directory
mkdir -p data && cd data

# Download the dataset
hf download xyzhang368/RLA-WM --repo-type dataset --include "maniskill.tar" --local-dir .

# Extract the data
tar -xf maniskill.tar
```
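If you prefer to do the same thing from Python, the download can go through `huggingface_hub.hf_hub_download` and extraction through the stdlib `tarfile` module. This is a sketch, not part of the official tooling: the helper names are hypothetical, and the `data` output directory mirrors the CLI example above.

```python
# Hypothetical Python equivalent of the CLI commands above.
import tarfile
from pathlib import Path


def extract_archive(archive: str, dest: str) -> Path:
    """Unpack a .tar archive into dest, creating the directory if needed."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(out)
    return out


def fetch_maniskill(dest: str = "data") -> Path:
    """Download maniskill.tar from the RLA-WM dataset repo and extract it."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    archive = hf_hub_download(
        repo_id="xyzhang368/RLA-WM",
        repo_type="dataset",
        filename="maniskill.tar",
    )
    return extract_archive(archive, dest)


# fetch_maniskill()  # downloads and unpacks into ./data
```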

## Citation

```bibtex
@article{zhang2026learning,
  title={{Learning Visual Feature-Based World Models via Residual Latent Action}},
  author={Zhang, Xinyu and Xu, Zhengtong and Tao, Yutian and Wang, Yeping and She, Yu and Boularias, Abdeslam},
  journal={arXiv preprint arXiv:2605.07079},
  year={2026},
  eprint={2605.07079},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```