Language-Action Pre-Training Enables Zero-Shot Cross-Embodiment Transfer
🌐 Website: https://lap-vla.github.io/
📄 Paper: https://arxiv.org/abs/2602.10556
💻 Code: https://github.com/lihzha/lap
You can download the LAP-3B-Libero checkpoint directly from the Hugging Face Hub.
Install the Hugging Face Hub CLI:
pip install -U huggingface_hub
Download the checkpoint to the expected directory:
hf download lihzha/LAP-3B-Libero --local-dir ./checkpoints/lap_libero
After downloading, the checkpoint will be located at:
./checkpoints/lap_libero
This matches the default path expected by the LAP codebase.
You can also download the checkpoint programmatically:
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="lihzha/LAP-3B-Libero",
    local_dir="./checkpoints/lap_libero",
)
LAP-3B-Libero is a Vision-Language-Action (VLA) model fine-tuned from LAP-3B on the LIBERO benchmark.
If you use LAP-3B-Libero in your research, please cite:
@article{zha2026lap,
  title={LAP: Language-Action Pre-Training Enables Zero-Shot Cross-Embodiment Transfer},
  author={Zha, Lihan and Hancock, Asher and Zhang, Mingtong and Yin, Tenny and Huang, Yixuan and Shah, Dhruv and Ren, Allen Z. and Majumdar, Anirudha},
  journal={arXiv preprint arXiv:2602.10556},
  year={2026}
}