---
license: cc-by-nc-4.0
task_categories:
  - robotics
tags:
  - lerobot
  - hand-pose
  - humanoid
  - manipulation
  - 6dof
  - mediapipe
  - egocentric
  - imitation-learning
size_categories:
  - 10K<n<100K
language:
  - en
pretty_name: Dynamic Intelligence - Humanoid Robot Training Dataset
---

# Dynamic Intelligence — Humanoid Robot Training Dataset

A first-person (egocentric) video dataset of human hand manipulation, designed for training humanoid robot policies via imitation learning. Each episode captures a person performing an everyday household task — folding clothes, moving dishes, opening doors — filmed with a head-mounted iPhone using its camera and built-in LiDAR depth sensor.

The dataset pairs each video with frame-level 3D hand tracking and camera pose data, giving learning algorithms both the visual input and the corresponding spatial trajectories they need to reproduce the demonstrated behavior on a robot.


## How it works

**Recording setup.** A person wears an iPhone 13 Pro on their head (using a head mount). The phone runs the Record3D app, which simultaneously captures:

- RGB video at 30 FPS
- Depth maps via the LiDAR sensor
- 6-DoF camera pose from ARKit (position + orientation of the phone in the room)

**Processing pipeline.** After recording, each episode goes through an offline pipeline:

1. **Hand detection:** MediaPipe detects 2D hand landmarks in every RGB frame.
2. **3D reconstruction:** The 2D landmarks are projected into 3D space using the corresponding depth map, producing real-world XYZ positions (in cm) relative to the camera.
3. **Action computation:** Frame-to-frame deltas are computed for both the camera pose and hand positions, representing the "actions" a robot would need to take.
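
Step 2 is a standard pinhole back-projection. The sketch below illustrates the idea only; the intrinsics (`fx`, `fy`, `cx`, `cy`) and pixel values are placeholders, not the dataset's actual calibration:

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Lift a 2D pixel landmark (u, v) with its depth (in meters)
    into camera-frame XYZ, converted to centimeters."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z]) * 100.0  # meters -> cm

# Hypothetical intrinsics and wrist pixel, for illustration only
fx = fy = 1000.0
cx, cy = 960.0, 540.0
wrist_cm = backproject(1100.0, 600.0, 0.45, fx, fy, cx, cy)
```

With these toy numbers the wrist lands at roughly (6.3, 2.7, 45.0) cm in the camera frame, matching the X-right / Y-down / Z-forward convention described below.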

**Result.** Each episode contains a synchronized video and a parquet file with per-frame 3D observations and actions, formatted for the LeRobot framework.


## Dataset overview

| Field | Value |
| --- | --- |
| Episodes | 145 |
| Total data frames | ~59,000 |
| Video FPS | 30 |
| Tasks | 12 household manipulation tasks |
| Format | LeRobot v2.0 |
| Sensor | iPhone 13 Pro (RGB + LiDAR + ARKit) |
| Perspective | Egocentric (head-mounted) |

## Tasks

| # | Task instruction | Episodes | Count |
| --- | --- | --- | --- |
| 1 | Fold the t-shirt on the bed. | 0–7 | 8 |
| 2 | Pick up the two items on the floor and put them on the bed. | 8–17 | 10 |
| 3 | Fold the jeans on the bed. | 18–27 | 10 |
| 4 | Fold the underwear on the table. | 28–37 | 10 |
| 5 | Put the pillow in its correct place. | 38–47 | 10 |
| 6 | Place the tableware on the kitchen counter. | 48–57 | 10 |
| 7 | Get out of the room and close the door behind you. | 58–66 | 9 |
| 8 | Put the sandals in the right place. | 67–76 | 10 |
| 9 | Put the cleaning cloth in the laundry basket. | 77–86 | 10 |
| 10 | Screw the cap back on the bottle. | 87–95 | 9 |
| 11 | Tuck the chairs into the table. | 96–126 | 31 |
| 12 | Put the dishes in the sink. | 127–144 | 18 |

## What's in the data

Each episode has two files: a video (.mp4) and a parquet table with one row per tracked frame.

### Observations (what the robot sees)

| Column | Shape | Unit | Description |
| --- | --- | --- | --- |
| `observation.camera_pose` | float[6] | cm, degrees | Position (x, y, z) and orientation (roll, pitch, yaw) of the head-mounted camera in the room. Comes from ARKit's visual-inertial odometry. |
| `observation.left_hand` | float[9] | cm | 3D positions of 3 keypoints on the left hand: wrist, thumb tip, and index fingertip (x, y, z each). |
| `observation.right_hand` | float[9] | cm | 3D positions of 3 keypoints on the right hand: wrist, index fingertip, and middle fingertip (x, y, z each). |

### Actions (what the robot should do)

| Column | Shape | Description |
| --- | --- | --- |
| `action.camera_delta` | float[6] | Frame-to-frame change in camera pose (dx, dy, dz, droll, dpitch, dyaw). Represents head movement. |
| `action.left_hand_delta` | float[9] | Frame-to-frame change in left hand keypoint positions. |
| `action.right_hand_delta` | float[9] | Frame-to-frame change in right hand keypoint positions. |
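
The delta columns follow directly from the observation columns. A minimal NumPy sketch on toy poses, assuming the convention that the delta at frame t is `pose[t+1] - pose[t]` with a zero pad on the final frame (check the pipeline code for the actual convention used here):

```python
import numpy as np

# Toy camera poses for three consecutive frames: (x, y, z, roll, pitch, yaw)
camera_pose = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.5, 0.0, 2.0, 0.0],
    [1.5, 0.2, 0.5, 0.0, 2.0, 1.0],
])

# Delta at frame t = pose[t+1] - pose[t]; the last frame has no
# successor, so pad it with zeros.
camera_delta = np.vstack([np.diff(camera_pose, axis=0),
                          np.zeros((1, 6))])
```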

### Metadata columns

| Column | Type | Description |
| --- | --- | --- |
| `episode_index` | int | Which episode (0–144) |
| `frame_index` | int | Frame number within the episode |
| `timestamp` | float | Time in seconds from episode start |
| `language_instruction` | string | Natural language task description (same for all frames in an episode) |
| `next.done` | bool | Whether this is the last frame of the episode |
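
Episode boundaries can be recovered either from `episode_index` or from `next.done`. A small pandas sketch on toy rows:

```python
import pandas as pd

# Toy metadata for two short episodes
df = pd.DataFrame({
    "episode_index": [0, 0, 0, 1, 1],
    "frame_index":   [0, 1, 2, 0, 1],
    "timestamp":     [0.0, 1 / 30, 2 / 30, 0.0, 1 / 30],
    "next.done":     [False, False, True, False, True],
})

# Episode lengths from episode_index
lengths = df.groupby("episode_index").size()

# Equivalently, episode ends are the rows where next.done is True
ends = df.index[df["next.done"]].tolist()
```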

### Coordinate system

All 3D positions are relative to the camera:

- X → right
- Y → down
- Z → forward (into the scene)

Hand values of [0, 0, 0] mean the hand was not detected in that frame (e.g. out of view or occluded).
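
When consuming the data, these sentinel frames are easy to mask out. A short NumPy sketch with toy values (array name is illustrative):

```python
import numpy as np

# Toy left-hand observations: 3 keypoints x (x, y, z) = 9 values per frame
left_hand = np.array([
    [12.0, -3.0, 40.0,  14.0, -1.0, 38.0,  13.0, -2.0, 37.0],
    [0.0] * 9,          # hand not detected in this frame
    [11.5, -2.8, 39.0,  13.8, -0.9, 37.5,  12.9, -1.9, 36.8],
])

# A frame is valid if any coordinate is nonzero
valid = np.any(left_hand != 0.0, axis=1)
tracked = left_hand[valid]
```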


## File structure

```
├── data/
│   ├── chunk-000/          # Parquet files for episodes 0–99
│   └── chunk-001/          # Parquet files for episodes 100–144
├── videos/
│   ├── chunk-000/rgb/      # MP4 videos for episodes 0–99
│   └── chunk-001/rgb/      # MP4 videos for episodes 100–144
├── meta/
│   ├── info.json           # LeRobot dataset config
│   └── stats.json          # Column statistics (min/max/mean/std)
└── README.md
```

## Quick start

### With LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

sample = dataset[0]
print(sample["language_instruction"])        # "Fold the t-shirt on the bed."
print(sample["observation.camera_pose"])     # tensor of shape [6]
print(sample["action.left_hand_delta"])      # tensor of shape [9]
```

### Direct download

```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="data/chunk-000/episode_000000.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(f"{len(df)} frames")
print(df[["timestamp", "observation.camera_pose", "language_instruction"]].head())
```
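
Each observation cell holds a flat vector, so it is often convenient to reshape it into (keypoints, xyz). A sketch on toy rows standing in for a loaded parquet table (the keypoint ordering follows the tables above):

```python
import numpy as np
import pandas as pd

# Toy stand-in for one episode's parquet table
df = pd.DataFrame({
    "observation.right_hand": [
        np.array([10.0, -2.0, 35.0,  12.0, 0.0, 33.0,  11.0, -1.0, 32.0]),
        np.array([10.2, -2.1, 35.1,  12.1, 0.1, 33.2,  11.1, -1.1, 32.1]),
    ],
})

# Stack per-frame vectors into shape (frames, keypoints, xyz)
right = np.stack(df["observation.right_hand"].to_list()).reshape(-1, 3, 3)
wrist_path = right[:, 0, :]   # wrist is the first keypoint
```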

## Visualizer

Browse episodes interactively: DI Hand Pose Sample Dataset Viewer

The viewer shows the egocentric video alongside time-series plots of camera pose and hand positions, so you can see exactly what the person was doing and how the tracking data aligns with the video.


## Citation

```bibtex
@dataset{dynamic_intelligence_2025,
  author = {Dynamic Intelligence},
  title = {Humanoid Robot Training Dataset: Egocentric Hand Manipulation Demonstrations},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
}
```

## Contact

**Organization:** Dynamic Intelligence
**Email:** shayan@dynamicintelligence.company