# FastUMI Pro Dataset

## Project Description

FastUMI Pro is the upgraded enterprise edition of FastUMI, offering a streamlined, end-to-end data acquisition and transformation system for corporate users. FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for universal robot manipulation tasks, supporting hardware-agnostic, scalable, and efficient data collection and model training. The project provides physical prototype systems, complete data collection code, standardized data formats, and utility tools to facilitate real-world manipulation learning research.

## Dataset Overview

FastUMI Pro builds upon FastUMI with enhanced features:

- Higher-precision trajectory data
- Support for more diverse robot embodiments, enabling true "one brain, many forms" applications
- Leading data scale and coverage in the field

The original FastUMI project open-sourced FastUMI-150K, containing approximately 150,000 real-world manipulation trajectories, which was first provided to selected research partners for training large-scale VLA (Vision-Language-Action) models.

## Quick Start

### Download Example Data

```bash
# Original command (may be slow in some regions)
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/

# Mirror acceleration solution
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
```

## Data Structure

FastUMI Pro uses a raw format containing the various types of raw sensor data, which can be easily converted to other formats. The raw format facilitates querying and validating original sensor outputs for rapid problem identification.

```
DATA/
└── device_label_xv_serial/
    └── session_timestamp/
        ├── RGB_Images/
        │   ├── timestamps.csv
        │   └── Frames/
        │       ├── frame_000001.jpg
        │       ├── frame_000002.jpg
        │       └── ...
        ├── SLAM_Poses/
        │   └── slam_raw.txt
        ├── Vive_Poses/
        │   └── vive_data_tum.txt
        ├── ToF_PointClouds/
        │   ├── timestamps.csv
        │   └── PointClouds/
        │       ├── pointcloud_000001.pcd
        │       ├── pointcloud_000002.pcd
        │       └── ...
        ├── Clamp_Data/
        │   └── clamp_data_tum.txt
        └── Merged_Trajectory/
            ├── merged_trajectory.txt
            └── merge_stats.txt
```

### Directory Descriptions

- `session_xxx`: Individual data collection session
- `RGB_Images`: Frame images supporting multiple viewpoints; both Images and Videos are supported
- `SLAM_Poses`: UMI pose data
- `Vive_Poses`: Vive tracking system pose data
- `ToF_PointClouds`: Time-of-Flight raw point cloud data (depth)
- `Merged_Trajectory`: Merged trajectory data

### Data Specifications

**Attributes**

- `sim`:
  - `False`: Real-environment data
  - `True`: Simulation data

**Observations**

- `observations/images/`: Camera image data
  - Default camera name: `front`
  - Shape: `(frames, 1920, 1080, 3)`
  - Data type: `uint8`
  - Compression: gzip (level 4)
- `observations/qpos`:
  - Type: Floating-point dataset
  - Shape: `(timesteps, 7)`
  - Meaning: Robot end-effector position + quaternion orientation
  - Order: `[Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]`

**Actions**

- Type: Floating-point dataset
- Shape: `(timesteps, 7)`
- Meaning: Actions (same structure as `qpos`, typically mirroring `qpos`)
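As a quick sanity check on the raw session layout shown above, a session can be inspected directly from disk. The sketch below is a minimal, illustrative loader: the session path is a placeholder, and the column layout of `merged_trajectory.txt` is assumed to follow the standard TUM trajectory format (`timestamp tx ty tz qx qy qz qw`), as the `*_tum.txt` file names suggest.

```python
import csv
from pathlib import Path

import numpy as np


def load_session(session_dir):
    """Load the raw sensor streams of one collection session.

    Assumes the directory layout documented above. The merged
    trajectory is assumed to be whitespace-separated TUM format
    (timestamp tx ty tz qx qy qz qw), one sample per line.
    """
    session = Path(session_dir).expanduser()

    # Frame timestamps: one row per image in RGB_Images/Frames/
    # (a header row, if present, would need to be skipped)
    with open(session / "RGB_Images" / "timestamps.csv", newline="") as f:
        frame_timestamps = list(csv.reader(f))

    # Sorted frame paths, aligned with the timestamp rows by index
    frames = sorted((session / "RGB_Images" / "Frames").glob("frame_*.jpg"))

    # Merged trajectory as an (N, 8) float array under the TUM assumption
    merged = np.loadtxt(session / "Merged_Trajectory" / "merged_trajectory.txt")

    return frame_timestamps, frames, merged


if __name__ == "__main__":
    # Example path only; substitute a real device/session directory
    ts, frames, traj = load_session(
        "~/fastumi_data/device_label_xv_serial/session_timestamp"
    )
    print(f"{len(frames)} frames, merged trajectory shape {traj.shape}")
```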
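Once sessions are converted to HDF5 (see Data Conversion below), an episode following the specification above can be read lazily with `h5py`. This is a sketch under assumptions: the file name `episode_0.hdf5` and the `action` key are illustrative (the spec lists "Actions" without naming the key), while the observation keys follow the specification directly.

```python
import h5py

# Minimal sketch for reading one converted episode. File name and the
# `action` key name are assumptions; observation keys follow the spec.
with h5py.File("episode_0.hdf5", "r") as f:
    is_sim = f.attrs["sim"]                  # False for real-environment data
    images = f["observations/images/front"]  # (frames, 1920, 1080, 3), uint8, gzip-4
    qpos = f["observations/qpos"]            # (timesteps, 7): xyz + quaternion (x, y, z, w)
    actions = f["action"]                    # (timesteps, 7), same layout as qpos

    print(is_sim, images.shape, qpos.shape, actions.shape)
    first_pose = qpos[0]                     # reads a single timestep from disk
```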
## Data Conversion

One-click export to specific formats is supported via the web toolchain, or conversion between formats using tools such as:

- Any4lerobot: GitHub - Tavish9/any4lerobot

Supported conversion paths:

- hdf5 → lerobot v3.0
- hdf5 → lerobot (Pi0) v2.0
- hdf5 → rlds

## Model Performance

Preliminary experiments show that models trained on this dataset demonstrate significant multi-task generalization capabilities in universal manipulation tasks:

- **VLA models**: Models such as Pi0, which combine language understanding with action-planning capabilities, exhibit strong generalization and execution stability in multi-task, language-conditioned control
- **VA models**: Classical visual control architectures such as ACT and DP also show significant improvements, with enhanced robustness in complex operation sequences, under viewpoint perturbations, and in fine motion tracking

## Related Links

- Project Homepage: https://fastumi.com/pro/
- FastUMI Project: https://fastumi.com
- Hugging Face Dataset: https://huggingface.co/datasets/IPE...
- Research Paper: [2409.19499] FastUMI: A Scalable and...
- Open Source Toolchain:
  - Demo Replay: GitHub - Loki-Lu/FastUMI_replay_sin...
  - Dual-arm Demo: GitHub - Loki-Lu/FastUMI_replay_du...
  - Hardware SDK: GitHub - FastUMIRobotics/FastUMI_...
  - Monitoring Tools: GitHub - FastUMIRobotics/FastUMI_...
  - Data Collection Tools: GitHub - FastUMIRobotics/FastUMI_...

## Related Research

- [2508.10538] MLM: Learning Multi-ta...
- Pi0 (FastUMI lightweight data adaptation, version V0) full tutorial: PIO (FastUMI lightweight data adaptation, version V0)…

## Citation

If you use this dataset in your research, please cite the relevant papers:

```bibtex
@article{fastumi2024,
  title={FastUMI: A Scalable and Hardware-Agnostic Framework for Robot Manipulation Learning},
  author={FastUMI Team},
  journal={arXiv preprint},
  year={2024}
}
```

## Contact

For questions or suggestions, please contact the development team:

- Lead: [Name]
- Email: [Email Address]
- WeChat: [WeChat ID]