---
language:
- en
- zh
tags:
- robotics
- manipulation
- vla
- trajectory-data
- multimodal
- vision-language-action
license: other
task_categories:
- robotics
- reinforcement-learning
multimodal: vision+language+action
dataset_info:
  features:
  - name: rgb_images
    dtype: image
    description: Multi-view RGB images
  - name: slam_poses
    sequence: float32
    description: SLAM pose trajectories
  - name: vive_poses
    sequence: float32
    description: Vive tracking system poses
  - name: point_clouds
    sequence: float32
    description: Time-of-Flight point cloud data
  - name: clamp_data
    sequence: float32
    description: Clamp sensor readings
  - name: merged_trajectory
    sequence: float32
    description: Fused trajectory data
configs:
- config_name: default
  data_files: "**/*"
---
# FastUMI Pro Dataset
![FastUMI](https://img.shields.io/badge/FastUMI-Pro-brightgreen) ![Dataset](https://img.shields.io/badge/Dataset-150K-blue) ![VLA](https://img.shields.io/badge/VLA-Ready-orange)

**Enterprise-grade Robotic Manipulation Dataset for the Universal Manipulation Interface**

[Project Homepage](https://fastumi.com/pro/) | [FastUMI Home](https://fastumi.com) | [Example Data](https://huggingface.co/datasets/FastUMIPro/example_data_fastumi_pro_raw)
## 📖 Overview

FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for general-purpose robotic manipulation tasks, designed to support hardware-agnostic, scalable, and efficient data collection and model training. The project provides:

- Physical prototype systems
- Complete data collection codebase
- Standardized data formats and utilities
- Tools for real-world manipulation learning research

## 🚀 Features

### FastUMI Pro Enhancements

- ✅ **Higher-precision trajectory data**
- ✅ **Diverse embodiment support** for true "one brain, multiple forms"
- ✅ **Enterprise-ready** pipeline with end-to-end data processing

### FastUMI-150K

- ~150,000 real-world manipulation trajectories
- Used by research partners for large-scale VLA (Vision-Language-Action) model training
- Demonstrated significant multi-task generalization capabilities

## 📊 Model Performance

**VLA Model Results**: [TBD]

## 🛠️ Toolchain

### Core Tools

| Tool | Description | Link |
|------|-------------|------|
| **Single-Arm Demo Replay** | Single-arm data replay code | [GitHub](https://github.com/Loki-Lu/FastUMI_replay_singleARM) |
| **Dual-Arm Demo Replay** | Dual-arm data replay code | [GitHub](https://github.com/Loki-Lu/FastUMI_replay_dualARM) |
| **Hardware SDK** | FastUMI hardware development kit | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Hardware_SDK) |
| **Monitor Tool** | Real-time device monitoring | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Monitor_Tool) |
| **Data Collection** | Data collection utilities | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Data_Collection) |

### Research & Applications

- **Paper**: [MLM: Learning Multi-task Loco-Manipulation Whole-Body Control for Quadruped Robot with Arm](https://arxiv.org/abs/2508.10538)
- **Tutorial**: PI0 (FastUMI Data Lightweight Adaptation, Version V0) full pipeline

## 📥 Data Download

### Example Dataset

```bash
# Direct download (may be slow in some regions)
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/
```

### Mirror Download (Recommended)

```bash
# Set the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com

# Download via the mirror
huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
```
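If you prefer to script the download, the minimal sketch below uses `huggingface_hub.snapshot_download` to fetch the same example dataset. The `~/fastumi_data` target directory simply mirrors the CLI commands above, and routing through the mirror via `HF_ENDPOINT` is optional.

```python
import os
from pathlib import Path

# Optional: use the mirror endpoint (must be set before importing huggingface_hub).
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

from huggingface_hub import snapshot_download

# Download the example dataset repository into ~/fastumi_data (example path).
local_dir = Path.home() / "fastumi_data"
snapshot_download(
    repo_id="FastUMIPro/example_data_fastumi_pro_raw",
    repo_type="dataset",
    local_dir=local_dir,
)
print(f"Dataset downloaded to {local_dir}")
```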
## 📁 Data Structure

Each session represents an independent operation "episode" containing observation data and action sequences.

### Directory Structure

```text
session_001/
└── device_label_xv_serial/
    └── session_timestamp/
        ├── RGB_Images/
        │   ├── timestamps.csv
        │   └── Frames/
        │       ├── frame_000001.jpg
        │       └── ...
        ├── SLAM_Poses/
        │   └── slam_raw.txt
        ├── Vive_Poses/
        │   └── vive_data_tum.txt
        ├── ToF_PointClouds/
        │   ├── timestamps.csv
        │   └── PointClouds/
        │       └── pointcloud_000001.pcd
        ├── Clamp_Data/
        │   └── clamp_data_tum.txt
        └── Merged_Trajectory/
            ├── merged_trajectory.txt
            └── merge_stats.txt
```

## Data Specifications

| Data Type | Path | Shape | Type | Description |
| :--- | :--- | :--- | :--- | :--- |
| **RGB Images** | `session_XXX/RGB_Images/Video.MP4` | `(frames, 1080, 1920, 3)` | `uint8` | Camera video data, 60 FPS |
| **SLAM Poses** | `session_XXX/SLAM_Poses/slam_raw.txt` | `(timestamps, 7)` | `float` | UMI end-effector poses |
| **Vive Poses** | `session_XXX/Vive_Poses/vive_data_tum.txt` | `(timestamps, 7)` | `float` | Vive base station poses |
| **ToF PointClouds** | `session_XXX/PointClouds/pointcloud_...pcd` | — | `pcd format` | Time-of-Flight point cloud data |
| **Clamp Data** | `session_XXX/Clamp_Data/clamp_data_tum.txt` | `(timestamps, 1)` | `float` | Gripper spacing (mm) |
| **Merged Trajectory** | `session_XXX/Merged_Trajectory/merged_trajectory.txt` | `(timestamps, 7)` | `float` | Fused trajectory (Vive/UMI selected based on velocity) |

### Pose Data Format

All pose data (SLAM, Vive, Merged) follow the same format:

| Column Name | Description |
| :--- | :--- |
| **Timestamp** | Unix timestamp of the trajectory data |
| **Pos X** | X-coordinate of position (meters) |
| **Pos Y** | Y-coordinate of position (meters) |
| **Pos Z** | Z-coordinate of position (meters) |
| **Q_X** | X-component of orientation quaternion |
| **Q_Y** | Y-component of orientation quaternion |
| **Q_Z** | Z-component of orientation quaternion |
| **Q_W** | W-component of orientation quaternion |

A minimal sketch for parsing these files is provided in the appendix at the end of this document.

## 🔄 Data Conversion

[TBD - Data conversion methods will be added here]

## 🤝 Collaboration

The FastUMI Pro dataset is available for research collaboration. The full FastUMI-150K dataset has been provided to partner research teams for large-scale model training.

## 📞 Contact

For questions or suggestions, please contact the development team:

- Lead: Ding Yan
- Email: dingyan@lumosbot.tech
- WeChat: Duke_dingyan
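## Appendix: Parsing Pose Files

The following is a minimal sketch, referenced from the Pose Data Format section above, for reading the TUM-style trajectory files (`slam_raw.txt`, `vive_data_tum.txt`, `merged_trajectory.txt`) and the clamp readings. It assumes NumPy and the directory layout shown earlier; the helper names and the `~/fastumi_data` path are illustrative, not part of an official FastUMI toolchain.

```python
from pathlib import Path

import numpy as np


def load_tum_poses(path):
    """Read a TUM-style pose file: one sample per line,
    `timestamp x y z qx qy qz qw` (positions in meters)."""
    data = np.atleast_2d(np.loadtxt(path, comments="#"))
    # timestamps, positions (x, y, z), quaternions (qx, qy, qz, qw)
    return data[:, 0], data[:, 1:4], data[:, 4:8]


def load_clamp_data(path):
    """Read clamp readings: `timestamp spacing_mm` per line."""
    data = np.atleast_2d(np.loadtxt(path, comments="#"))
    return data[:, 0], data[:, 1]


# Example: iterate over the merged trajectories of every session under ~/fastumi_data.
root = Path("~/fastumi_data").expanduser()
for traj_file in sorted(root.rglob("Merged_Trajectory/merged_trajectory.txt")):
    timestamps, positions, quaternions = load_tum_poses(traj_file)
    print(f"{traj_file}: {len(timestamps)} poses, first position {positions[0]}")
```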