Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation
This repository contains the multi-view features dataset used in the paper "Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation".
It provides multi-view latent priors extracted for robotic manipulation benchmarks, specifically LIBERO. The associated research addresses challenges in Vision-Language-Action (VLA) models by leveraging multi-view diffusion models to synthesize latent novel views, which helps resolve the depth ambiguity inherent in monocular inputs.
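A minimal sketch of fetching the features to a local directory with the `huggingface_hub` client is shown below. The repository id and target directory are placeholders (the dataset's actual id and file layout are not specified in this card), so replace them with the real values before use.

```python
# Minimal sketch: download the multi-view features to a local folder.
# The repo_id below is a placeholder, not this dataset's actual id.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<org>/<multi-view-features-dataset>",  # placeholder id
    repo_type="dataset",                            # this is a dataset repo, not a model
    local_dir="./multiview_latent_priors",          # where the files will be stored locally
)
print(f"Dataset files downloaded to: {local_path}")
```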
If you find this dataset or the associated work useful, please cite:
@article{xiao2026learning,
  title={Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation},
  author={Junjin Xiao and Dongyang Li and Yandan Yang and Shuang Zeng and Tong Lin and Xinyuan Chang and Feng Xiong and Mu Xu and Xing Wei and Zhiheng Ma and Qing Zhang and Wei-Shi Zheng},
  journal={arXiv preprint arXiv:2605.11832},
  year={2026}
}