nielsr HF Staff committed on
Commit 4960586 · verified · 1 Parent(s): 2a4b93a

Add dataset card and link to paper


This PR improves the dataset card for the LIBERO multi-view features dataset by adding relevant metadata and linking it to the original paper, project page, and GitHub repository. This ensures the dataset is correctly indexed and provides context for users within the Hugging Face Hub.

Files changed (1)
  1. README.md +27 -0
README.md ADDED
@@ -0,0 +1,27 @@
+ ---
+ task_categories:
+ - robotics
+ ---
+
+ # Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation
+
+ This repository contains the multi-view features dataset used in the paper "[Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation](https://huggingface.co/papers/2605.11832)".
+
+ - **Project Page:** [https://junjxiao.github.io/Multi-view-VLA.github.io/](https://junjxiao.github.io/Multi-view-VLA.github.io/)
+ - **GitHub Repository:** [https://github.com/junjxiao/Multi-view-VLA](https://github.com/junjxiao/Multi-view-VLA)
+
+ ## Description
+
+ This dataset provides multi-view latent priors extracted for robotic manipulation benchmarks, specifically LIBERO. The associated research addresses challenges in Vision-Language-Action (VLA) models by leveraging multi-view diffusion models to synthesize latent novel views, helping to resolve the depth ambiguity of monocular inputs.
+
+ ## Citation
+
+ If you find this dataset or the associated work useful, please cite:
+
+ ```bibtex
+ @article{xiao2026learning,
+   title={Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation},
+   author={Junjin Xiao and Dongyang Li and Yandan Yang and Shuang Zeng and Tong Lin and Xinyuan Chang and Feng Xiong and Mu Xu and Xing Wei and Zhiheng Ma and Qing Zhang and Wei-Shi Zheng},
+   year={2026},
+   journal={arXiv:2605.11832},
+ }
+ ```