---
task_categories:
- robotics
---
# Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation
This repository contains the multi-view features dataset used in the paper "[Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation](https://huggingface.co/papers/2605.11832)".
- **Project Page:** [https://junjxiao.github.io/Multi-view-VLA.github.io/](https://junjxiao.github.io/Multi-view-VLA.github.io/)
- **GitHub Repository:** [https://github.com/junjxiao/Multi-view-VLA](https://github.com/junjxiao/Multi-view-VLA)
## Description
This dataset provides multi-view latent priors extracted for robotic manipulation benchmarks, specifically LIBERO. The associated research addresses challenges in Vision-Language-Action (VLA) models by leveraging multi-view diffusion models to synthesize latent novel views, which helps resolve the depth ambiguity inherent in monocular inputs.
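## Usage

A minimal sketch for fetching the dataset files with `huggingface_hub`. The repository ID below is inferred from this repository's name and may need adjusting; the actual download call is shown but left commented so the helper can be imported without network access.

```python
# Sketch: download this dataset's files from the Hugging Face Hub.
# Assumption: the dataset lives at "junjin0/libero_mv_feats"; adjust if needed.
from huggingface_hub import snapshot_download

REPO_ID = "junjin0/libero_mv_feats"  # assumed namespace/name of this repo


def fetch_features(local_dir: str = "libero_mv_feats") -> str:
    """Download all files from the dataset repo into local_dir and return its path."""
    return snapshot_download(repo_id=REPO_ID, repo_type="dataset", local_dir=local_dir)


# To actually download (requires network access and an HF token for gated repos):
# path = fetch_features()
# print(path)
```

`snapshot_download` caches files locally, so repeated calls only fetch files that changed on the Hub.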
## Citation
If you find this dataset or the associated work useful, please cite:
```bibtex
@article{xiao2026learning,
  title={Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation},
  author={Junjin Xiao and Dongyang Li and Yandan Yang and Shuang Zeng and Tong Lin and Xinyuan Chang and Feng Xiong and Mu Xu and Xing Wei and Zhiheng Ma and Qing Zhang and Wei-Shi Zheng},
  journal={arXiv preprint arXiv:2605.11832},
  year={2026},
}
```