---
license: apache-2.0
task_categories:
- robotics
- video-classification
tags:
- lerobot
- tactile
- multimodal
- egocentric
- manipulation
- imu
- stereo
- robotics
pretty_name: Human Archive Multisensory Dataset Samples
size_categories:
- 100K<n<1M
---

The helper below (named `extract_finger_taxels` here for illustration) groups the raw 256-element tactile array into per-finger taxels:

```python
import numpy as np


def extract_finger_taxels(tactile_256: np.ndarray, hand: str = "left") -> dict:
    """Extract per-finger taxels from the raw 256-element array.

    Args:
        tactile_256: Shape (256,) array of pressure values.
        hand: "left" or "right".

    Returns:
        Dict mapping finger name to (12,) array [4 phalanges x 3 taxels].
    """
    if hand == "left":
        mapping = {
            "thumb":  [30,29,28, 14,13,12, 254,253,252, 238,237,236],
            "index":  [27,26,25, 11,10,9, 251,250,249, 235,234,233],
            "middle": [24,23,22, 8,7,6, 248,247,246, 232,231,230],
            "ring":   [21,20,19, 5,4,3, 245,244,243, 229,228,227],
            "pinky":  [18,17,16, 2,1,0, 242,241,240, 226,225,224],
        }
    else:
        mapping = {
            "thumb":  [239,238,237, 255,254,253, 15,14,13, 31,30,29],
            "index":  [236,235,234, 252,251,250, 12,11,10, 28,27,26],
            "middle": [233,232,231, 249,248,247, 9,8,7, 25,24,23],
            "ring":   [230,229,228, 246,245,244, 6,5,4, 22,21,20],
            "pinky":  [227,226,225, 243,242,241, 3,2,1, 19,18,17],
        }
    return {name: tactile_256[idx] for name, idx in mapping.items()}
```

### 3. Body IMUs (8 streams)

IMU sensors are distributed across the upper body, providing acceleration and angular velocity data.
| Feature | Shape | Channels | Placement |
|---|---|---|---|
| `observation.imu.head` | (9,) | accel(3) + gyro(3) + mag(3) | On the camera, ~2 inches in front of the forehead |
| `observation.imu.chest` | (6,) | accel(3) + gyro(3) | Center of the sternum |
| `observation.imu.left_bicep` | (6,) | accel(3) + gyro(3) | Outer surface of the left upper arm, midway between shoulder and elbow |
| `observation.imu.right_bicep` | (6,) | accel(3) + gyro(3) | Outer surface of the right upper arm, midway between shoulder and elbow |
| `observation.imu.left_forearm` | (6,) | accel(3) + gyro(3) | Outer surface of the left forearm, midway between elbow and wrist |
| `observation.imu.right_forearm` | (6,) | accel(3) + gyro(3) | Outer surface of the right forearm, midway between elbow and wrist |
| `observation.imu.left_hand` | (4,) | quaternion(4) | Back of the left hand (from glove) |
| `observation.imu.right_hand` | (4,) | quaternion(4) | Back of the right hand (from glove) |

For the 6-axis IMUs, the channel layout is `[accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z]`. The head IMU includes 3 additional magnetometer channels. The hand IMUs provide orientation quaternions `[qx, qy, qz, qw]`. All IMU data has been resampled to 30 fps to align with video frames, using sample-and-hold interpolation from the original variable-rate sensor streams.

### 4. Temporal Alignment

All sensor streams are synchronized to the video frame clock at 30 fps. Cross-modal alignment error is less than 33 ms (less than 1 frame). Variable-rate sensors (tactile gloves, BLE IMUs) are resampled to 30 fps using sample-and-hold: each video frame carries the most recent sensor reading available at that timestamp.

### 5. Hand Motion Capture from Tactile Data

The tactile data can be used to derive per-finger bend (curl) values, providing a form of hand motion capture without an optical tracking system.
The method sums the pressure across the fingertip phalanges of each finger to estimate how curled it is:

```python
import numpy as np


def compute_finger_bend(tactile_256: np.ndarray, hand: str = "left") -> np.ndarray:
    """Compute a bend value (0 = open, higher = curled) for each finger.

    Sums 6 taxels across the first 3 phalanges of each finger.
    Returns shape (5,) array: [thumb, index, middle, ring, pinky].
    """
    if hand == "left":
        finger_indices = {
            "thumb": [30, 29, 14, 13, 254, 253],
            "index": [27, 26, 11, 10, 251, 250],
            "middle": [24, 23, 8, 7, 248, 247],
            "ring": [21, 20, 5, 4, 245, 244],
            "pinky": [18, 17, 2, 1, 242, 241],
        }
    else:
        finger_indices = {
            "thumb": [239, 238, 255, 254, 15, 14],
            "index": [236, 235, 252, 251, 12, 11],
            "middle": [233, 232, 249, 248, 9, 8],
            "ring": [230, 229, 246, 245, 6, 5],
            "pinky": [227, 226, 243, 242, 3, 2],
        }
    return np.array([tactile_256[idx].sum() for idx in finger_indices.values()])


def normalize_finger_bend(
    bend_raw: np.ndarray,
    open_hand: np.ndarray,
    closed_fist: np.ndarray,
) -> np.ndarray:
    """Normalize raw bend values to 0.0 (open) – 1.0 (fully curled).

    Requires calibration frames: one with the hand open, one with a closed fist.
    """
    range_ = closed_fist - open_hand
    range_[range_ == 0] = 1  # avoid division by zero
    normalized = (bend_raw - open_hand) / range_
    return np.clip(normalized, 0.0, 1.0)
```

To calibrate, capture one frame with the hand fully open and one with a closed fist. The normalized value for each finger then maps directly to joint angle: 0.0 corresponds to fully extended and 1.0 to fully curled. All joints in a finger chain (MCP, PIP, DIP) can use the same normalized value, applying a rotation of `-pi/2 * normalized` per joint for a simple kinematic model.

Combined with the hand IMU quaternion (for wrist orientation) and the per-finger bend values (for finger curl), this provides 6-DOF wrist pose plus 5-DOF finger articulation per hand.
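The simple kinematic model described above can be sketched as follows; `finger_joint_angles` is a hypothetical helper name, and the equal-angle distribution across MCP/PIP/DIP is the simplification stated in the text, not a measured hand model.

```python
import numpy as np


def finger_joint_angles(normalized_bend: np.ndarray) -> np.ndarray:
    """Map normalized bend values (5,) to joint angles (5, 3) in radians.

    Each finger's MCP, PIP, and DIP joints share the same normalized
    value, each rotated by -pi/2 * normalized.
    """
    per_joint = -np.pi / 2 * normalized_bend        # (5,) angle per joint
    return np.tile(per_joint[:, None], (1, 3))      # (5, 3): [MCP, PIP, DIP]
```

With this convention, a fully open finger (0.0) yields 0 rad at every joint and a fully curled finger (1.0) yields -pi/2 rad at each of its three joints.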
---

## Loading the Dataset

### Prerequisites

```bash
pip install huggingface_hub pandas pyarrow
```

### Download

```bash
huggingface-cli login --token YOUR_TOKEN
huggingface-cli download humanarchive/HA-Multi-Samples \
  --repo-type dataset \
  --local-dir ~/HA-Multi-Samples
```

### Loading in Python

```python
import json
from pathlib import Path

import pandas as pd

DATASET_DIR = Path("~/HA-Multi-Samples").expanduser()

# Load metadata
with open(DATASET_DIR / "meta" / "info.json") as f:
    info = json.load(f)
with open(DATASET_DIR / "meta" / "stats.json") as f:
    stats = json.load(f)

tasks = pd.read_parquet(DATASET_DIR / "meta" / "tasks.parquet")
episodes = pd.read_parquet(DATASET_DIR / "meta" / "episodes" / "chunk-000" / "file-000.parquet")

# Load all sensor data
data = pd.read_parquet(DATASET_DIR / "data" / "chunk-000" / "file-000.parquet")

print(f"Frames: {len(data):,}")
print(f"Episodes: {len(episodes)}")
print(f"Tasks: {list(tasks['task'])}")
print(f"Columns: {list(data.columns)}")
```

### Accessing a Single Episode

Sensor columns are stored as nested arrays (each cell contains a numpy array), not as flattened individual columns.
```python
import numpy as np

episode_id = 0
ep = data[data["episode_index"] == episode_id].reset_index(drop=True)

# Timestamps in seconds
timestamps = ep["timestamp"].values

# Tactile data — shape (N, 256)
left_tactile = np.stack(ep["observation.tactile.left"].values)    # (N, 256)
right_tactile = np.stack(ep["observation.tactile.right"].values)  # (N, 256)

# Hand IMU — shape (N, 12)
left_hand_imu = np.stack(ep["observation.tactile.left_glove_imu"].values)    # (N, 12)
right_hand_imu = np.stack(ep["observation.tactile.right_glove_imu"].values)  # (N, 12)

# Body IMUs
chest_imu = np.stack(ep["observation.imu.chest"].values)         # (N, 6)
head_imu = np.stack(ep["observation.imu.head"].values)           # (N, 9)
left_bicep = np.stack(ep["observation.imu.left_bicep"].values)   # (N, 6)
right_bicep = np.stack(ep["observation.imu.right_bicep"].values) # (N, 6)

# Video path for this episode
video_path = (
    DATASET_DIR / "videos" / "observation.images.egocentric"
    / "chunk-000" / f"file-{episode_id:03d}.mp4"
)
```

### Playing Videos

```bash
# Play a single camera view for episode 0 (macOS)
open ~/HA-Multi-Samples/videos/observation.images.egocentric/chunk-000/file-000.mp4

# Play with ffplay (if ffmpeg is installed)
ffplay ~/HA-Multi-Samples/videos/observation.images.chest/chunk-000/file-005.mp4
```

---

## Per-Episode Reference

| Episode | Task | Environment | Frames | Duration |
|---|---|---|---|---|
| 0 | Cooking | Kitchen | 26,469 | 14.7 min |
| 1 | Cleaning | Living room | 17,792 | 9.9 min |
| 2 | Cleaning | Living room | 38,395 | 21.3 min |
| 3 | Folding and cleaning | Bedroom | 36,747 | 20.4 min |
| 4 | Placing shoes | Hallway | 1,855 | 1.0 min |
| 5 | Cleaning | Bathroom | 9,147 | 5.1 min |
| 6 | Cleaning | Office | 6,842 | 3.8 min |
| 7 | Cleaning | Bedroom | 17,165 | 9.5 min |
| 8 | Folding and cleaning | Bedroom | 9,061 | 5.0 min |
| 9 | Cleaning | Bathroom | 10,974 | 6.1 min |
| 10 | Cleaning | Bedroom | 9,378 | 5.2 min |
| 11 | Folding clothes | Bedroom | 19,023 | 10.6 min |
| 12 | Cleaning | Kitchen | 17,684 | 9.8 min |
| 13 | Cleaning | Living room | 1,925 | 1.1 min |
| 14 | Cleaning | Bedroom | 15,399 | 8.6 min |
| 15 | Folding clothes | Bedroom | 902 | 0.5 min |
| 16 | Folding clothes | Bedroom | 1,193 | 0.7 min |
| 17 | Folding clothes | Bedroom | 3,001 | 1.7 min |
| 18 | Folding clothes | Bedroom | 675 | 0.4 min |
| 19 | Folding clothes | Bedroom | 706 | 0.4 min |
| 20 | Cleaning | Bedroom | 670 | 0.4 min |
| 21 | Cleaning | Bathroom | 12,838 | 7.1 min |
| 22 | Cleaning | Hallway | 2,603 | 1.4 min |
| 23 | Cooking | Kitchen | 36,822 | 20.5 min |
| 24 | Cooking | Kitchen | 3,772 | 2.1 min |
| 25 | Cooking | Kitchen | 10,386 | 5.8 min |
| 26 | Cleaning | Bedroom | 5,622 | 3.1 min |
| 27 | Cleaning | Bedroom | 10,024 | 5.6 min |
| 28 | Cleaning | Bedroom | 1,656 | 0.9 min |
| 29 | Cleaning | Kitchen | 8,522 | 4.7 min |
| 30 | Cleaning | Kitchen | 6,173 | 3.4 min |
| 31 | Cleaning | Bedroom | 21,041 | 11.7 min |
| 32 | Ironing | Bedroom | 1,909 | 1.1 min |
| 33 | Ironing | Bedroom | 27,703 | 15.4 min |
| 34 | Ironing | Bedroom | 14,642 | 8.1 min |
| 35 | Ironing | Bedroom | 11,914 | 6.6 min |

---

## Feature Reference

Complete list of columns in `data/chunk-000/file-000.parquet`. Scalar columns store one value per row. Array columns store a numpy array per row (access with `np.stack(df["column"].values)` to get a 2D matrix).
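As a toy illustration of this nested-array convention (the DataFrame below is synthetic, not loaded from the dataset):

```python
import numpy as np
import pandas as pd

# Synthetic frame mimicking the storage layout: one (6,) array per cell.
df = pd.DataFrame({
    "observation.imu.chest": [np.zeros(6, dtype=np.float32) for _ in range(4)],
})

# Each cell is an ndarray; stacking yields a regular 2D matrix.
chest = np.stack(df["observation.imu.chest"].values)
print(chest.shape)  # (4, 6)
```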
| Column | Type | Shape | Description |
|---|---|---|---|
| `index` | int64 | scalar | Global frame index across the entire dataset |
| `episode_index` | int64 | scalar | Episode number (0–35) |
| `frame_index` | int64 | scalar | Frame number within the episode |
| `timestamp` | float32 | scalar | Time in seconds from episode start |
| `task_index` | int64 | scalar | Task ID (see tasks.parquet) |
| `observation.tactile.left` | float32[] | (256,) | Left hand tactile pressure |
| `observation.tactile.right` | float32[] | (256,) | Right hand tactile pressure |
| `observation.tactile.left_glove_imu` | float32[] | (12,) | Left hand IMU (quaternion + accel/gyro) |
| `observation.tactile.right_glove_imu` | float32[] | (12,) | Right hand IMU (quaternion + accel/gyro) |
| `observation.imu.head` | float32[] | (9,) | Head IMU (accel + gyro + mag) |
| `observation.imu.chest` | float32[] | (6,) | Chest IMU (accel + gyro) |
| `observation.imu.left_bicep` | float32[] | (6,) | Left upper arm IMU |
| `observation.imu.right_bicep` | float32[] | (6,) | Right upper arm IMU |
| `observation.imu.left_forearm` | float32[] | (6,) | Left forearm IMU |
| `observation.imu.right_forearm` | float32[] | (6,) | Right forearm IMU |
| `observation.imu.left_hand` | float32[] | (4,) | Left hand quaternion |
| `observation.imu.right_hand` | float32[] | (4,) | Right hand quaternion |

---

## Format

This dataset uses the [LeRobot v3.0](https://github.com/huggingface/lerobot) chunked format. Key conventions:

- Video files are stored separately from sensor data, referenced by episode index
- Sensor data is stored in Parquet files with one row per frame
- All modalities are time-aligned at 30 fps
- Episodes are independent recording segments; `frame_index` resets to 0 at the start of each episode
- `meta/stats.json` contains per-feature min, max, mean, and standard deviation computed across the full dataset
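One common use of the statistics in `meta/stats.json` is z-score normalization before training. The sketch below assumes each feature's stats entry exposes per-channel `mean` and `std` lists; check the actual key layout in the file before relying on it.

```python
import numpy as np


def zscore(values: np.ndarray, feature_stats: dict) -> np.ndarray:
    """Normalize an (N, D) feature matrix using dataset-wide mean/std."""
    mean = np.asarray(feature_stats["mean"], dtype=np.float32)
    std = np.asarray(feature_stats["std"], dtype=np.float32)
    std = np.where(std == 0, 1.0, std)  # guard against constant channels
    return (values - mean) / std


# Example with made-up stats for a 6-channel IMU feature:
stats_entry = {"mean": [0.0] * 6, "std": [1.0] * 6}
imu = np.random.randn(100, 6).astype(np.float32)
normalized = zscore(imu, stats_entry)  # same shape, per-channel normalized
```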