Datasets:
Upload folder using huggingface_hub
- human_data/EgoDex_lerobot_v3.tar +3 -0
- human_data/Human_Data.md +264 -0
- human_data/arctic_lerobot_v3.tar +3 -0
- human_data/dex-ycb_lerobot_v3.tar +3 -0
- human_data/fpha_lerobot_v3.tar +3 -0
- human_data/h2o_lerobot_v3.tar +3 -0
- human_data/hoi4d_lerobot_v3.tar +3 -0
- human_data/oakink2_lerobot_v3.tar +3 -0
- human_data/taco_lerobot_v3.tar +3 -0
- human_data/taste-rob_lerobot_v3.tar +3 -0
human_data/EgoDex_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24c3d3cb8bf1bf3f785d4985624a8b916a53ddeac9f669f4b7b5ad6f7d7383f4
size 52179435520
human_data/Human_Data.md
ADDED
@@ -0,0 +1,264 @@
# UniHand Human Data Preview (Customized LeRobot v3 Format)

This is a preview of the UniHand human data in our customized LeRobot v3 format.

We customized the LeRobot v3 format to better fit the structure of our human data and to facilitate downstream usage. The dataset layout and file formats are designed to be intuitive and efficient for common use cases.

## 1. Dataset Layout

```text
{dataset_root}/
  data/
    chunk-000/file-000000.parquet
    chunk-000/file-000001.parquet
    ...
  videos/
    ego-view/
      chunk-000/file-000000.mp4
      chunk-000/file-000001.mp4
      ...
  meta/
    info.json
    tasks_instruction.jsonl
    tasks_description.jsonl
    episodes/
      instruction/
        chunk-000/file-000000.parquet
        ...
      description/
        chunk-000/file-000000.parquet
        ...
```

## 2. `meta/info.json`

`meta/info.json` stores dataset-level metadata.

Example:

```json
{
  "base_dataset": "arctic",
  "dataset_variants": ["arctic", "arctic_aug", "arctic_aug2"],
  "fps": 30.0,
  "action_stride": 1,
  "robot_type": "human hand",
  "layout": "file-centric-split-mirror-v4",
  "mirror_video_transform": "horizontal_flip",
  "mirror_text_transform_version": "left-right-clockwise-v1",
  "num_files": 1641,
  "num_episodes": 229814
}
```

Common keys:

- `base_dataset`
- `dataset_variants`
- `fps`
- `action_stride`
- `robot_type`
- `layout`
- `mirror_video_transform`
- `mirror_text_transform_version`
- `num_files`
- `num_episodes`

How to use it:

- read `layout` as the dataset layout identifier
- read `action_stride` to interpret action horizons
- read `fps` as the default frame rate if you need dataset-level timing metadata

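Loading this file takes only a few lines; here is a minimal sketch (the `load_info` helper and the stand-in dataset root are ours, not part of the dataset):

```python
import json
import tempfile
from pathlib import Path

def load_info(dataset_root: Path) -> dict:
    """Read dataset-level metadata from meta/info.json."""
    return json.loads((dataset_root / "meta" / "info.json").read_text())

# Demo against a stand-in root; the keys mirror the example above.
root = Path(tempfile.mkdtemp())
(root / "meta").mkdir()
(root / "meta" / "info.json").write_text(json.dumps({
    "base_dataset": "arctic",
    "fps": 30.0,
    "action_stride": 1,
    "layout": "file-centric-split-mirror-v4",
}))

info = load_info(root)
fps = info["fps"]                      # default frame rate
action_stride = info["action_stride"]  # unit for action horizons
```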
## 3. `meta/tasks_instruction.jsonl`

This file is the text registry for instruction / prediction samples.

Each line is one JSON object:

```json
{"task_index": 0, "task": "Pick up the object ..."}
```

Fields:

- `task_index`: integer task id
- `task`: task text

How to use it:

- if you are reading from `meta/episodes/instruction/...`, use this file to resolve `task_index`
- `task_index` is contiguous and starts from `0` within this file

## 4. `meta/tasks_description.jsonl`

This file is the text registry for description samples.

Each line is one JSON object:

```json
{"task_index": 0, "task": "The hands lift the object ..."}
```

Fields:

- `task_index`: integer task id
- `task`: text description

How to use it:

- if you are reading from `meta/episodes/description/...`, use this file to resolve `task_index`
- `task_index` is contiguous and starts from `0` within this file

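Both registries share the same shape, so one loader covers them; a sketch assuming an inline two-line sample in place of the real `meta/tasks_instruction.jsonl` / `meta/tasks_description.jsonl`:

```python
import json
import tempfile
from pathlib import Path

def load_task_registry(path: Path) -> dict[int, str]:
    """Map task_index -> task text from a tasks_*.jsonl file."""
    registry = {}
    for line in path.read_text().splitlines():
        if line.strip():
            rec = json.loads(line)
            registry[rec["task_index"]] = rec["task"]
    return registry

# Demo on a temporary stand-in file.
sample = Path(tempfile.mkstemp(suffix=".jsonl")[1])
sample.write_text(
    '{"task_index": 0, "task": "Pick up the object ..."}\n'
    '{"task_index": 1, "task": "The hands lift the object ..."}\n'
)
registry = load_task_registry(sample)
```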
## 5. `meta/episodes/instruction/**/*.parquet`

These parquet shards store instruction / prediction episode rows.

Each row describes one temporal slice inside one exported file.

Common fields:

- `file_id`
- `start_timestep`
- `end_timestep`
- `embodiment`
- `task_index`
- `row_id`

Meaning:

- `file_id`: file-level id used to locate the motion parquet and video
- `start_timestep`: inclusive start frame
- `end_timestep`: exclusive end frame
- `embodiment`: effective embodiment for this row
- `task_index`: text id in `meta/tasks_instruction.jsonl`
- `row_id`: stable row id

How to use it:

- read one row
- resolve text from `meta/tasks_instruction.jsonl`
- resolve motion/video from `file_id`
- slice the file timeline with `[start_timestep, end_timestep)`
- each parquet shard contains many episode rows, and those rows may reference many different `file_id` values

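The per-row slicing logic is simple; a sketch with a hypothetical row (in practice the row would come from reading one shard, e.g. via `pandas.read_parquet`):

```python
# Hypothetical episode row; in practice something like:
#   row = pandas.read_parquet("meta/episodes/instruction/chunk-000/file-000000.parquet").iloc[i]
row = {
    "file_id": 12,
    "start_timestep": 30,
    "end_timestep": 90,       # exclusive
    "embodiment": "human hand",
    "task_index": 0,
    "row_id": 1234,
}

# Frames belonging to this episode slice: [start_timestep, end_timestep)
episode_frames = range(row["start_timestep"], row["end_timestep"])
num_frames = len(episode_frames)  # frames 30..89 inclusive
```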
## 6. `meta/episodes/description/**/*.parquet`

These parquet shards store description episode rows.

The row structure is the same as the instruction split:

- `file_id`
- `start_timestep`
- `end_timestep`
- `embodiment`
- `task_index`
- `row_id`

How to use it:

- read one row
- resolve text from `meta/tasks_description.jsonl`
- resolve motion/video from `file_id`
- slice the file timeline with `[start_timestep, end_timestep)`
- each parquet shard contains many episode rows, and those rows may reference many different `file_id` values

## 7. `data/chunk-xxx/file-xxxxxx.parquet`

Each motion parquet stores frame-level hand motion for one `file_id`.

Common per-frame columns:

- `camera_c2w`
- `left.trans_w`
- `left.rot_axis_angle_w`
- `left.theta`
- `left.beta`
- `right.trans_w`
- `right.rot_axis_angle_w`
- `right.theta`
- `right.beta`
- `valid.left_horizon`
- `valid.right_horizon`
- `valid.joint_horizon`

Meaning:

- `camera_c2w`: flattened camera-to-world transform
- `left/right.trans_w`: wrist translation in the world frame
- `left/right.rot_axis_angle_w`: wrist rotation in axis-angle form
- `left/right.theta`: MANO pose parameters
- `left/right.beta`: MANO shape parameters
- `valid.*_horizon`: future validity horizon for action extraction

How to use it:

- locate the file from `file_id`
- load the parquet
- use frame indices in the global file timeline
- if an episode row is `(start_timestep, end_timestep)`, only use frames in that interval

Path rule:

```text
data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet
```

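The path rule above translates directly into an f-string; a minimal sketch (the helper name is ours):

```python
def motion_parquet_path(file_id: int) -> str:
    """Apply the path rule: 1000 files per chunk, zero-padded names."""
    return f"data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet"

print(motion_parquet_path(0))     # data/chunk-000/file-000000.parquet
print(motion_parquet_path(1234))  # data/chunk-001/file-001234.parquet
```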
## 8. `videos/ego-view/chunk-xxx/file-xxxxxx.mp4`

These mp4 files store the ego-view video aligned with the motion parquet.

How to resolve the path:

1. Read `file_id`.
2. Use:

```text
videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4
```

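The video path follows the same chunk rule as the motion parquet; a sketch (the helper is ours, and the decoding comment assumes OpenCV, though any frame-accurate reader works):

```python
def ego_video_path(file_id: int) -> str:
    """Same chunk rule as the motion parquet, under videos/ego-view/."""
    return f"videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4"

# Frame-accurate decoding of file-global frame t is decoder-specific;
# with OpenCV it would look roughly like:
#   cap = cv2.VideoCapture(ego_video_path(file_id))
#   cap.set(cv2.CAP_PROP_POS_FRAMES, t)
#   ok, frame = cap.read()
```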
## 9. How To Read One Sample

### Instruction / prediction sample

1. Read one row from `meta/episodes/instruction/**/*.parquet`.
2. Use `task_index` to look up text in `meta/tasks_instruction.jsonl`.
3. Use `file_id` to locate the motion parquet in `data/...`.
4. Use `file_id` to locate the ego-view video in `videos/ego-view/...`.
5. Restrict the valid episode range to `start_timestep <= t < end_timestep`.
6. Read the frame(s) you need from the motion parquet.
7. Read the aligned video frame(s) using the same file-global frame index.

### Description sample

1. Read one row from `meta/episodes/description/**/*.parquet`.
2. Use `task_index` to look up text in `meta/tasks_description.jsonl`.
3. Use `file_id` to locate the motion parquet in `data/...`.
4. Use `file_id` to locate the ego-view video in `videos/ego-view/...`.
5. Restrict the valid episode range to `start_timestep <= t < end_timestep`.
6. Read the frame(s) you need from the motion parquet.
7. Read the aligned video frame(s) using the same file-global frame index.

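The steps above can be wired together in a small helper; `build_sample`, the inline row, and the inline registry are hypothetical stand-ins (actual rows come from the episode parquet shards, and the text registry from the jsonl files):

```python
from dataclasses import dataclass

@dataclass
class Sample:
    text: str         # resolved task text
    motion_path: str  # data/.../file-XXXXXX.parquet
    video_path: str   # videos/ego-view/.../file-XXXXXX.mp4
    start: int        # inclusive first frame
    end: int          # exclusive end frame

def build_sample(row: dict, registry: dict[int, str]) -> Sample:
    """Steps 1-5: resolve text and file paths, keep the frame interval.
    Actually reading parquet/video frames (steps 6-7) is left to the caller."""
    fid = row["file_id"]
    chunk = f"chunk-{fid // 1000:03d}"
    return Sample(
        text=registry[row["task_index"]],
        motion_path=f"data/{chunk}/file-{fid:06d}.parquet",
        video_path=f"videos/ego-view/{chunk}/file-{fid:06d}.mp4",
        start=row["start_timestep"],
        end=row["end_timestep"],
    )

sample = build_sample(
    {"file_id": 7, "start_timestep": 0, "end_timestep": 120, "task_index": 0},
    {0: "Pick up the object ..."},
)
```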
## 10. Valid Horizon

The `valid.*_horizon` columns are used to check whether a timestep can support future action extraction.

Use:

- `valid.left_horizon` for left-hand rows
- `valid.right_horizon` for right-hand rows
- `valid.joint_horizon` for bimanual rows

The horizon is measured in units of `meta/info.json["action_stride"]`.

If your base timestep is `t` and your future chunk needs `K` stride-steps, require:

```text
selected_horizon[t] >= K
```

and also require all sampled future timesteps to stay inside the episode range:

```text
t + K * action_stride < end_timestep
```

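Both conditions fit in one predicate; a sketch assuming the appropriate `valid.*_horizon` column for the row's embodiment has already been selected:

```python
def can_extract_action_chunk(t: int, end_timestep: int, horizon_at_t: int,
                             K: int, action_stride: int) -> bool:
    """True iff t has enough valid horizon for K stride-steps AND every
    sampled future timestep stays inside the episode range (end exclusive)."""
    return horizon_at_t >= K and t + K * action_stride < end_timestep

# With action_stride=1, K=16, end_timestep=100 and a sufficient horizon,
# t=83 is the last timestep that still fits a full chunk (83 + 16 = 99 < 100).
ok_83 = can_extract_action_chunk(83, 100, horizon_at_t=16, K=16, action_stride=1)
ok_84 = can_extract_action_chunk(84, 100, horizon_at_t=16, K=16, action_stride=1)
```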
human_data/arctic_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67a7024d6837c0591bfe2788b46747d14981ad4cc9e87b1354b482db8edcdad1
size 1854044160
human_data/dex-ycb_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb0b0a05ccc9bca7b154da1c789318e5457bb8aebd55c14722f477fd54308c0c
size 448972800
human_data/fpha_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50f0051926f8526e35f1765b2ab4afe108f7af344635a6b0da636aadbdffc01a
size 423669760
human_data/h2o_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ad415c5655cbea190ff2de99042d376cd60376c52f5ded12c1baa1bd44b8a86
size 1136128000
human_data/hoi4d_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81e99d8434f7ca4b82cdc8d0a836f43872069f27646f3cd5955c99bcb61fd78e
size 3987230720
human_data/oakink2_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e19cd14eadfb3b2f5853c4cb644040b81e73ff312bcb4a55be7e55220c42baec
size 2605649920
human_data/taco_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c27de5b51966ab1d71249a32707d11ec1f7abd74a897b59f92b7577281a57bc5
size 3165767680
human_data/taste-rob_lerobot_v3.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6699b6f132dfe8af78fd0e90644a16fb7119f85cb51f57179ec3c1fde8d9126
size 2631372800