LAP-3B

Language-Action Pre-Training Enables Zero-Shot Cross-Embodiment Transfer

🌐 Website: https://lap-vla.github.io/
📄 Paper: https://arxiv.org/abs/2602.10556
💻 Code: https://github.com/lihzha/lap

Download

You can download the LAP checkpoint directly from the Hugging Face Hub.

Using the Hugging Face CLI (recommended)

Install the Hugging Face Hub CLI:

pip install -U huggingface_hub

Download the checkpoint to the expected directory:

hf download lihzha/LAP-3B-Libero --local-dir ./checkpoints/lap_libero

After downloading, the checkpoint will be located at:

./checkpoints/lap_libero

This matches the default path expected by the LAP codebase.

Alternative: Python API

You can also download the checkpoint programmatically:

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="lihzha/LAP-3B-Libero",
    local_dir="./checkpoints/lap_libero"
)
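Whichever download method you use, a quick sanity check confirms the checkpoint landed where the LAP codebase expects it. The directory path below comes from this card; the helper itself is just a sketch (a real checkpoint directory would contain the model weight files):

```python
from pathlib import Path

# Default checkpoint location expected by the LAP codebase (see above).
CKPT_DIR = Path("./checkpoints/lap_libero")

def checkpoint_present(path: Path = CKPT_DIR) -> bool:
    """Return True if the checkpoint directory exists and is non-empty."""
    return path.is_dir() and any(path.iterdir())

if __name__ == "__main__":
    if checkpoint_present():
        print(f"Checkpoint found at {CKPT_DIR.resolve()}")
    else:
        print(f"No checkpoint at {CKPT_DIR.resolve()}; run the download first.")
```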

Model Summary

LAP-3B-Libero is a Vision-Language-Action (VLA) model obtained by fine-tuning the LAP-3B base model on the LIBERO benchmark.

Citation

If you use LAP-3B-Libero in your research, please cite:

@article{zha2026lap,
  title={LAP: Language-Action Pre-Training Enables Zero-Shot Cross-Embodiment Transfer},
  author={Zha, Lihan and Hancock, Asher and Zhang, Mingtong and Yin, Tenny and Huang, Yixuan and Shah, Dhruv and Ren, Allen Z. and Majumdar, Anirudha},
  journal={arXiv preprint arXiv:2602.10556},
  year={2026}
}