# LingBot-VA: Causal World Modeling for Robot Control

https://github.com/user-attachments/assets/cec7b7a6-953b-4fa4-8f1a-47efc1fce547

## 💫 Meet **LingBot-VA**!

We've built an autoregressive (AR) diffusion framework for simultaneous world modeling and action! 🤖✨ **LingBot-VA** focuses on:

- **Autoregressive Video-Action World Modeling**: Architecturally unifies visual dynamics prediction and action inference within a single interleaved sequence while maintaining their conceptual distinction.
- **High-Efficiency Execution**: A dual-stream Mixture-of-Transformers (MoT) architecture with asynchronous execution and KV caching.
- **Long-Horizon Performance and Generalization**: Substantial improvements in sample efficiency, long-horizon success rates, and generalization to novel scenes.

# 🚀 News

- **[2026-01-29]** Weights and code for the shared-backbone version released! Please stay tuned for our separated version!

---

# 📦 Model Download

- **Pretrained Checkpoints for Post-Training**

| Model Name | Hugging Face Repository | ModelScope Repository | Description |
| :--- | :--- | :--- | :--- |
| lingbot-va-base | [🤗 robbyant/lingbot-va-base](https://huggingface.co/robbyant/lingbot-va-base) | [🤖 Robbyant/lingbot-va-base](https://modelscope.cn/models/Robbyant/lingbot-va-base) | LingBot-VA w/ shared backbone |
| lingbot-va-posttrain-robotwin | [🤗 robbyant/lingbot-va-posttrain-robotwin](https://huggingface.co/robbyant/lingbot-va-posttrain-robotwin) | [🤖 Robbyant/lingbot-va-posttrain-robotwin](https://modelscope.cn/models/Robbyant/lingbot-va-posttrain-robotwin) | LingBot-VA-Posttrain-Robotwin w/ shared backbone |

---

# 🛠️ Quick Start

## Installation

**Requirements**

- Python == 3.10.16
- PyTorch == 2.9.0
- CUDA 12.6

```bash
pip install torch==2.9.0 torchvision==0.24.0 torchaudio==2.9.0 --index-url https://download.pytorch.org/whl/cu126
pip install websockets einops diffusers==0.36.0 transformers==4.55.2 accelerate msgpack opencv-python matplotlib ftfy easydict
pip install flash-attn --no-build-isolation
```

## Deploying LingBot-VA for Inference

LingBot-VA supports both standalone execution and a server-client architecture that separates the model environment from the simulation environment. By isolating dependencies, this design avoids package clashes and supports distributed inference across GPUs, clusters, and other devices.
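For intuition, here is a minimal sketch of what a custom client in this style could look like, assuming msgpack-encoded messages over a websocket (both `websockets` and `msgpack` appear in the install list above). The URI, port, and message fields are illustrative assumptions, not LingBot-VA's actual wire format; for real evaluation, use the launch scripts under `evaluation/robotwin/` described below.

```python
# Minimal client sketch, assuming a msgpack-over-websockets protocol.
# The endpoint and payload schema below are hypothetical placeholders.
import asyncio

import msgpack
import numpy as np
import websockets


async def infer_once(uri: str, frame: np.ndarray, instruction: str):
    async with websockets.connect(uri, max_size=None) as ws:
        # Send one observation (hypothetical field names).
        payload = {
            "image": frame.tobytes(),
            "shape": list(frame.shape),
            "instruction": instruction,
        }
        await ws.send(msgpack.packb(payload, use_bin_type=True))
        # Assume the server replies with a predicted action chunk.
        return msgpack.unpackb(await ws.recv(), raw=False)


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
    actions = asyncio.run(infer_once("ws://127.0.0.1:8000", frame, "adjust_bottle"))
    print(actions)
```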
### Evaluation on RoboTwin-2.0

**Preparing the Environment**

You can follow the official instructions from the original RoboTwin-2.0 repository: [https://robotwin-platform.github.io/doc/usage/robotwin-install.html](https://robotwin-platform.github.io/doc/usage/robotwin-install.html)

In summary:

1. Install the Vulkan runtime:

   ```bash
   sudo apt install libvulkan1 mesa-vulkan-drivers vulkan-tools
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/RoboTwin-Platform/RoboTwin.git && cd RoboTwin
   ```

3. Modify `script/requirements.txt`:

   ```
   transforms3d==0.4.2
   sapien==3.0.0b1
   scipy==1.10.1
   mplib==0.2.1
   gymnasium==0.29.1
   trimesh==4.4.3
   open3d==0.18.0
   imageio==2.34.2
   pydantic
   zarr
   openai
   huggingface_hub==0.36.2
   h5py
   # For Description Generation
   azure==4.0.0
   azure-ai-inference
   pyglet<2
   wandb
   moviepy
   imageio
   termcolor
   av
   matplotlib
   ffmpeg
   ```

4. Modify line 8 of `script/_install.sh` to:

   ```bash
   pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable" --no-build-isolation
   ```

5. Run:

   ```bash
   bash script/_install.sh
   ```

6. Run:

   ```bash
   bash script/_download_assets.sh
   ```

**Deploying the Inference Server**

```bash
# single GPU
bash evaluation/robotwin/launch_server.sh

# multi-GPU
bash evaluation/robotwin/launch_server_multigpus.sh
```

**Executing the Inference Client**

```bash
# single GPU
task_name="adjust_bottle"
save_root="results/"
bash evaluation/robotwin/launch_client.sh ${save_root} ${task_name}

# multi-GPU
save_root="results/"
task_group_id=0
bash evaluation/robotwin/launch_client_multigpus.sh ${save_root} ${task_group_id}
```

Experiment results will be saved in `/path/to/your/RoboTwin/${save_root}`. Please note that an `eval_result` folder is also generated; this is a native RoboTwin output, identical to the contents of the results folder, and can be safely ignored.

Note that the inference server and client must be deployed on the same machine. When launching the multi-GPU client, we padded the original 50 tasks to 56 via duplication and partitioned them into 7 groups to align with the 8-GPU configuration of our inference node. You can specify `task_group_id` (0-6) to select a particular group for inference. For the detailed grouping configuration, please refer to `evaluation/robotwin/launch_client_multigpus.sh`.

### Run Image-to-Video-Action Generation

We also provide a script for image-to-video-action generation:

```bash
NGPU=1 CONFIG_NAME='robotwin_i2av' bash script/run_launch_va_server_sync.sh
```
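As background for the asynchronous execution mentioned in the feature list above, the sketch below shows the generic overlap pattern: the model predicts the next action chunk while the robot is still executing the current one. This is an illustration of the idea only, not LingBot-VA's implementation; `predict_chunk` and `execute` are hypothetical stand-ins.

```python
# Generic async-execution sketch: inference for the next action chunk
# overlaps with execution of the current one via a bounded queue.
import queue
import threading


def predict_chunk(obs):
    """Hypothetical model call returning a chunk of future actions."""
    return [f"action_{obs}_{i}" for i in range(8)]


def execute(action):
    """Hypothetical robot step."""
    print("executing", action)


chunks: queue.Queue = queue.Queue(maxsize=1)  # stay at most one chunk ahead


def inference_loop(observations):
    # Producer: blocks whenever the executor already has a chunk queued.
    for obs in observations:
        chunks.put(predict_chunk(obs))
    chunks.put(None)  # signal end of episode


producer = threading.Thread(target=inference_loop, args=(range(3),))
producer.start()

# Consumer: drains the current chunk while the next one is predicted
# in the background thread.
while (chunk := chunks.get()) is not None:
    for action in chunk:
        execute(action)
producer.join()
```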

---

# 📊 Performance

We evaluate our model on both simulation benchmarks and real-world scenarios, achieving state-of-the-art performance.

## Simulation Evaluation

- **RoboTwin 2.0**

We are the first to push RoboTwin 2.0 performance past the 90% mark!

* All metrics are reported in percentage (%). Higher values are bolded.

| Method (average over 50 tasks) | Easy SR (%) | Hard SR (%) |
| :--- | :---: | :---: |
| X-VLA | 72.9 | 72.8 |
| π0 | 65.9 | 58.4 |
| π0.5 | 82.7 | 76.8 |
| Motus | 88.7 | 87.0 |
| LingBot-VA (Ours) | **92.9** (+4.2) | **91.6** (+4.6) |
- **LIBERO**

* All metrics are reported in percentage (%). Higher values are bolded.

| Method | Spatial | Object | Goal | Long | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: |
| π0 | 96.8 | 98.8 | 95.8 | 85.2 | 94.1 |
| π0.5 | **98.8** | 98.2 | **98.0** | 92.4 | 96.9 |
| OpenVLA | 84.7 | 88.4 | 79.2 | 53.7 | 76.5 |
| X-VLA | 98.2 | 98.6 | 97.8 | 97.6 | 98.1 |
| LingBot-VA (Ours) | 98.5 ± 0.3 | **99.6 ± 0.3** | 97.2 ± 0.2 | **98.5 ± 0.5** | **98.5** |
## Real-world Deployment

We evaluate six manipulation tasks across three categories: long-horizon tasks (Make Breakfast, Pick Screws), precision tasks (Insert Tube, Unpack Delivery), and deformable & articulated object manipulation (Fold Clothes, Fold Pants). Our method achieves state-of-the-art performance on both metrics (Progress Score and Success Rate) with only 50 trials per task, substantially outperforming the strong π0.5 baseline.
- **Progress Score (PS)**: the average score across all trials divided by the maximum possible score, expressed as a percentage: `PS = Average_Progress / Max_Steps × 100%`
- **Success Rate (SR)**: the number of successful trials divided by the total number of trials, expressed as a percentage: `SR = Successful_Trials / N × 100%`
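As a quick sanity check of the two formulas, here is a small worked example; the per-trial scores and `max_steps` are made-up numbers, not data from our evaluation:

```python
# Worked example of the PS and SR formulas above (illustrative numbers).
scores = [4, 3, 4, 2, 4]  # per-trial progress, each out of max_steps
max_steps = 4             # maximum possible score per trial

ps = sum(scores) / len(scores) / max_steps * 100              # Progress Score
sr = sum(s == max_steps for s in scores) / len(scores) * 100  # Success Rate
print(f"PS = {ps:.1f}%, SR = {sr:.1f}%")  # -> PS = 85.0%, SR = 60.0%
```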

* All metrics are reported in percentage (%). Higher values are bolded.

| Method | Make Breakfast (PS / SR) | Pick Screws (PS / SR) | Insert Tube (PS / SR) | Unpack Delivery (PS / SR) | Fold Clothes (PS / SR) | Fold Pants (PS / SR) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| π0.5 | 73.0 / 70.0 | 74.0 / 50.0 | 79.2 / 30.0 | 73.0 / 25.0 | **62.9** / 30.0 | 30.0 / 30.0 |
| LingBot-VA (Ours) | **97.0** / **75.0** | **82.5** / **70.0** | **85.8** / **40.0** | **84.5** / **65.0** | 48.8 / **35.0** | **76.7** / **70.0** |
# 🪪 License

This project is released under the Apache License 2.0. See the [LICENSE](LICENSE.txt) file for details.

# 📚 Citation

```bibtex
@article{lingbot-va2026,
  title={Causal World Modeling for Robot Control},
  author={Li, Lin and Zhang, Qihang and Luo, Yiming and Yang, Shuai and Wang, Ruilin and Han, Fei and Yu, Mingrui and Gao, Zelin and Xue, Nan and Zhu, Xing and Shen, Yujun and Xu, Yinghao},
  journal={arXiv preprint arXiv:2601.21998},
  year={2026}
}
```

# 🧩 Acknowledgments

This work builds upon several excellent open-source projects:

- [Wan-Video](https://github.com/Wan-Video) - Vision transformer backbone
- [MoT](https://github.com/facebookresearch/Mixture-of-Transformers) - Mixture-of-Transformers architecture
- The broader open-source computer vision and robotics communities

---

For questions, discussions, or collaborations:

- **Issues**: Open an [issue](https://github.com/robbyant/lingbot-va/issues) on GitHub
- **Email**: Contact Dr. [Qihang Zhang](https://zqh0253.github.io/) (liuhuan.zqh@antgroup.com) or Dr. [Lin Li](https://lilin-hitcrt.github.io/) (fengchang.ll@antgroup.com)