
UniHand Human Data Preview (Customized LeRobot v3 Format)

This is a preview of the UniHand human data in the customized LeRobot v3 format.

We customized the LeRobot v3 format to better fit the structure of our human data and to facilitate downstream usage. The dataset layout and file formats are designed to be intuitive and efficient for common use cases.

1. Dataset Layout

{dataset_root}/
  data/
    chunk-000/file-000000.parquet
    chunk-000/file-000001.parquet
    ...
  videos/
    ego-view/
      chunk-000/file-000000.mp4
      chunk-000/file-000001.mp4
      ...
  meta/
    info.json
    tasks_instruction.jsonl
    tasks_description.jsonl
    episodes/
      instruction/
        chunk-000/file-000000.parquet
        ...
      description/
        chunk-000/file-000000.parquet
        ...

2. meta/info.json

meta/info.json stores dataset-level metadata.

Example:

{
  "base_dataset": "arctic",
  "dataset_variants": ["arctic", "arctic_aug", "arctic_aug2"],
  "fps": 30.0,
  "action_stride": 1,
  "robot_type": "human hand",
  "layout": "file-centric-split-mirror-v4",
  "mirror_video_transform": "horizontal_flip",
  "mirror_text_transform_version": "left-right-clockwise-v1",
  "num_files": 1641,
  "num_episodes": 229814
}

Common keys:

  • base_dataset: name of the source dataset
  • dataset_variants: exported variants of the base dataset
  • fps: default frame rate
  • action_stride: step size used when measuring action horizons
  • robot_type: embodiment type
  • layout: dataset layout identifier
  • mirror_video_transform: video transform used for mirrored variants
  • mirror_text_transform_version: text transform version used for mirrored variants
  • num_files: number of exported files
  • num_episodes: total number of episode rows

How to use it:

  • read layout as the dataset layout identifier
  • read action_stride to interpret action horizons
  • read fps as the default frame rate if you need dataset-level timing metadata
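A minimal sketch of reading these keys with the standard json module. For illustration it parses the example shown above from a string; in practice you would load meta/info.json from your local dataset root instead.

```python
import json

# In practice: info = json.load(open(f"{dataset_root}/meta/info.json"))
# where dataset_root is the local path to your copy of the dataset.
# Here we parse the example from above.
info_text = """
{
  "base_dataset": "arctic",
  "dataset_variants": ["arctic", "arctic_aug", "arctic_aug2"],
  "fps": 30.0,
  "action_stride": 1,
  "robot_type": "human hand",
  "layout": "file-centric-split-mirror-v4",
  "mirror_video_transform": "horizontal_flip",
  "mirror_text_transform_version": "left-right-clockwise-v1",
  "num_files": 1641,
  "num_episodes": 229814
}
"""
info = json.loads(info_text)

fps = info["fps"]                      # dataset-level default frame rate
action_stride = info["action_stride"]  # unit for the valid.*_horizon columns
```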

3. meta/tasks_instruction.jsonl

This file is the text registry for instruction / prediction samples.

Each line is one JSON object:

{"task_index": 0, "task": "Pick up the object ..."}

Fields:

  • task_index: integer task id
  • task: task text

How to use it:

  • if you are reading from meta/episodes/instruction/..., use this file to resolve task_index
  • task_index is contiguous and starts from 0 within this file
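A minimal sketch of building a task_index -> text registry from this file. The two lines and their texts below are hypothetical stand-ins; in practice you would iterate over the open meta/tasks_instruction.jsonl file rather than a string.

```python
import json
from io import StringIO

# Hypothetical file content in the tasks_instruction.jsonl format.
jsonl_text = (
    '{"task_index": 0, "task": "Pick up the object."}\n'
    '{"task_index": 1, "task": "Put the object down."}\n'
)

# In practice: for line in open(f"{dataset_root}/meta/tasks_instruction.jsonl"): ...
tasks = {}
for line in StringIO(jsonl_text):
    rec = json.loads(line)
    tasks[rec["task_index"]] = rec["task"]

# task_index is contiguous from 0, so a plain list is also a valid registry.
assert sorted(tasks) == list(range(len(tasks)))
```

The same loader works unchanged for meta/tasks_description.jsonl, since both registries share the one-JSON-object-per-line format.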

4. meta/tasks_description.jsonl

This file is the text registry for description samples.

Each line is one JSON object:

{"task_index": 0, "task": "The hands lift the object ..."}

Fields:

  • task_index: integer task id
  • task: text description

How to use it:

  • if you are reading from meta/episodes/description/..., use this file to resolve task_index
  • task_index is contiguous and starts from 0 within this file

5. meta/episodes/instruction/**/*.parquet

These parquet shards store instruction / prediction episode rows.

Each row describes one temporal slice inside one exported file.

Common fields:

  • file_id
  • start_timestep
  • end_timestep
  • embodiment
  • task_index
  • row_id

Meaning:

  • file_id: file-level id used to locate motion parquet and video
  • start_timestep: inclusive start frame
  • end_timestep: exclusive end frame
  • embodiment: effective embodiment for this row
  • task_index: text id in meta/tasks_instruction.jsonl
  • row_id: stable row id

How to use it:

  • read one row
  • resolve text from meta/tasks_instruction.jsonl
  • resolve motion/video from file_id
  • slice the file timeline with [start_timestep, end_timestep)
  • each parquet shard contains many episode rows, and those rows may reference many different file_id values
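Because one shard mixes rows that point at many different file_id values, it is often useful to group rows per file before touching motion or video data. A sketch with hypothetical rows (in practice the rows would come from pandas.read_parquet on a shard under meta/episodes/instruction/):

```python
from collections import defaultdict

# Hypothetical episode rows as loaded from one instruction shard.
rows = [
    {"file_id": 5, "start_timestep": 0,  "end_timestep": 60,  "task_index": 0, "row_id": 0},
    {"file_id": 5, "start_timestep": 60, "end_timestep": 120, "task_index": 1, "row_id": 1},
    {"file_id": 9, "start_timestep": 0,  "end_timestep": 30,  "task_index": 0, "row_id": 2},
]

# One shard can reference many file_id values; grouping rows by file
# lets you open each motion parquet / video once and serve all its rows.
by_file = defaultdict(list)
for r in rows:
    by_file[r["file_id"]].append(r)
```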

6. meta/episodes/description/**/*.parquet

These parquet shards store description episode rows.

The row structure is the same as the instruction split:

  • file_id
  • start_timestep
  • end_timestep
  • embodiment
  • task_index
  • row_id

How to use it:

  • read one row
  • resolve text from meta/tasks_description.jsonl
  • resolve motion/video from file_id
  • slice the file timeline with [start_timestep, end_timestep)
  • each parquet shard contains many episode rows, and those rows may reference many different file_id values

7. data/chunk-xxx/file-xxxxxx.parquet

Each motion parquet stores frame-level hand motion for one file_id.

Common per-frame columns:

  • camera_c2w
  • left.trans_w
  • left.rot_axis_angle_w
  • left.theta
  • left.beta
  • right.trans_w
  • right.rot_axis_angle_w
  • right.theta
  • right.beta
  • valid.left_horizon
  • valid.right_horizon
  • valid.joint_horizon

Meaning:

  • camera_c2w: flattened camera-to-world transform
  • left/right.trans_w: wrist translation in world frame
  • left/right.rot_axis_angle_w: wrist rotation in axis-angle form
  • left/right.theta: MANO pose parameters
  • left/right.beta: MANO shape parameters
  • valid.*_horizon: future validity horizon for action extraction

How to use it:

  • locate the file from file_id
  • load the parquet
  • use frame indices in the global file timeline
  • if an episode row specifies [start_timestep, end_timestep), use only frames in that half-open interval
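As one concrete example, the flattened camera_c2w column can be reshaped back into a square matrix. The values below are a hypothetical identity transform, and the 16-element row-major 4x4 layout is an assumption; verify the flattening convention against your data before relying on it.

```python
import math

# Hypothetical flattened camera-to-world transform for a single frame.
# Assumption: row-major flattened 4x4 homogeneous matrix (16 values).
camera_c2w_flat = [1.0, 0.0, 0.0, 0.0,
                   0.0, 1.0, 0.0, 0.0,
                   0.0, 0.0, 1.0, 0.0,
                   0.0, 0.0, 0.0, 1.0]

# Recover the square matrix from the flat list.
n = int(math.isqrt(len(camera_c2w_flat)))
camera_c2w = [camera_c2w_flat[i * n:(i + 1) * n] for i in range(n)]
```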

Path rule:

data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet

8. videos/ego-view/chunk-xxx/file-xxxxxx.mp4

These mp4 files store the ego-view video aligned with the motion parquet.

How to resolve the path:

  1. Read file_id.
  2. Use:
videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4
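The two path rules (sections 7 and 8) are direct format strings over file_id, so they can be wrapped as small helpers:

```python
def motion_path(file_id: int) -> str:
    # data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet
    return f"data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet"

def video_path(file_id: int) -> str:
    # videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4
    return f"videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4"
```

For example, file_id 1234 lives in chunk-001 under both trees, since 1234 // 1000 == 1.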

9. How To Read One Sample

Instruction / prediction sample

  1. Read one row from meta/episodes/instruction/**/*.parquet.
  2. Use task_index to look up text in meta/tasks_instruction.jsonl.
  3. Use file_id to locate the motion parquet in data/....
  4. Use file_id to locate the ego-view video in videos/ego-view/....
  5. Restrict the valid episode range to start_timestep <= t < end_timestep.
  6. Read the frame(s) you need from the motion parquet.
  7. Read the aligned video frame(s) using the same file-global frame index.
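The seven steps above can be sketched end to end. All values below are hypothetical in-memory stand-ins; steps 6-7 additionally need a parquet reader (e.g. pandas.read_parquet) and a video reader, which are not shown here.

```python
# Step 1: one episode row (hypothetical values).
episode = {"file_id": 42, "start_timestep": 10, "end_timestep": 40,
           "embodiment": "human hand", "task_index": 3, "row_id": 0}

# Step 2: resolve text via a registry built from meta/tasks_instruction.jsonl.
instruction_registry = {3: "Pick up the object."}   # hypothetical text
text = instruction_registry[episode["task_index"]]

# Steps 3-4: locate motion parquet and ego-view video from file_id.
fid = episode["file_id"]
motion = f"data/chunk-{fid // 1000:03d}/file-{fid:06d}.parquet"
video = f"videos/ego-view/chunk-{fid // 1000:03d}/file-{fid:06d}.mp4"

# Step 5: restrict to start_timestep <= t < end_timestep (half-open).
frames = range(episode["start_timestep"], episode["end_timestep"])

# Steps 6-7: read these file-global indices from the motion parquet and
# the mp4; the same index t addresses aligned motion and pixels.
```

The description sample follows the identical flow, swapping in meta/episodes/description/ and meta/tasks_description.jsonl.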

Description sample

  1. Read one row from meta/episodes/description/**/*.parquet.
  2. Use task_index to look up text in meta/tasks_description.jsonl.
  3. Use file_id to locate the motion parquet in data/....
  4. Use file_id to locate the ego-view video in videos/ego-view/....
  5. Restrict the valid episode range to start_timestep <= t < end_timestep.
  6. Read the frame(s) you need from the motion parquet.
  7. Read the aligned video frame(s) using the same file-global frame index.

10. Valid Horizon

The valid.*_horizon columns are used to check whether a timestep can support future action extraction.

Use:

  • valid.left_horizon for left-hand rows
  • valid.right_horizon for right-hand rows
  • valid.joint_horizon for bimanual rows

The horizon is measured in units of meta/info.json["action_stride"].

If your base timestep is t and your future chunk needs K stride-steps, require:

selected_horizon[t] >= K

and also require all sampled future timesteps to stay inside the episode range:

t + K * action_stride < end_timestep
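Both conditions together can be sketched as a single validity check. The numeric values below are hypothetical; joint_horizon_t stands in for valid.joint_horizon at timestep t (use the left/right column for single-hand rows).

```python
# Hypothetical values for one candidate sample.
action_stride = 1     # from meta/info.json
K = 16                # future chunk length, in stride-steps
t = 20                # base timestep
end_timestep = 60     # exclusive episode end from the episode row
joint_horizon_t = 30  # valid.joint_horizon at timestep t

# Valid iff the horizon covers K stride-steps AND every sampled future
# timestep stays inside the half-open episode range.
ok = (joint_horizon_t >= K) and (t + K * action_stride < end_timestep)
```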