Commit c28315b (verified) · committed by VadExylos · Parent: 7c821e3

Update README.md

Files changed (1): README.md (+85 -43)

README.md CHANGED

tags:
- pick-and-place
- multi-view
- vr-teleoperation
- human-in-the-loop
- human-seeded
- synthetic
- sim-to-real
- visual-domain-randomization
- domain-randomization
- franka
- panda
- exylos
- parquet
- time-series
- trajectories
- state-action
- phase-annotations
- failure-recovery
---

# Exylos Pick-and-Place Sample

> A human-in-the-loop, multi-view robot manipulation dataset captured through consumer VR and procedurally expanded with visual domain randomization into transfer-oriented pick-and-place episodes. Delivered in a LeRobot-compatible structure.

<video controls autoplay loop muted src="https://huggingface.co/datasets/ExylosAi/pick_and_place_sample/resolve/main/preview.mp4" width="720"></video>

---

## Visualize episodes interactively

Open this dataset in the official LeRobot Dataset Visualizer to browse individual episodes, inspect camera streams, and view trajectories in your browser:

**[Open in LeRobot Visualizer](https://huggingface.co/spaces/lerobot/visualize_dataset?dataset=ExylosAi%2Fpick_and_place_sample)**

---

## Why this dataset is different

Most public manipulation datasets come from one of two sources: real-robot teleoperation farms, which are slow and expensive, or pure simulation, which is cheap but often weak for transfer. This sample comes from a third path:

1. **Human-in-the-loop VR capture.** A human performs the task in an immersive virtual environment using a standard VR headset. Their motion provides task intent, manipulation timing, and correction behavior, while the system retargets the demonstration onto a virtual Franka Panda robot embodiment.
2. **Procedural expansion with visual domain randomization.** Seed demonstrations are multiplied into physics-consistent variations that change object poses, distractors, mild occlusions, lighting conditions, camera configurations, object materials, and environment appearance.
3. **Packaging for direct inspection and training.** The output is delivered in a LeRobot-compatible structure, with synchronized multi-view video, state and action streams, phase-level annotations, quality scores, and success/failure metadata.

The result is human-seeded, scaled, and labeled robot-manipulation data that is closer to what policy training needs, without requiring every trajectory to be collected on a physical robot.

This public release is intentionally compact. It is meant as an **inspection sample**: robotics teams can evaluate the format, modalities, visual variation, annotation schema, and trajectory quality before discussing larger productized skill packs.

---
 
 
## At a glance

| Property | Value |
|---|---|
| Episodes | 50 |
| Total frames | 21,412 |
| Task | Pick up an object from the workspace and place it into a container |
| Robot embodiment | Franka Emika Panda, 7-DoF arm + parallel gripper |
| Camera views | 5 synchronized RGB streams |
| Video | 30 FPS, H.264, 1280 × 960 |
| Robot state | 9-dimensional |
| Action vector | 9-dimensional |
| Trajectories | Synchronized robot state + action streams per frame |
| Outcome mix | 30 success episodes, 20 failure episodes |
| Failure reasons | 6 slip/drop failures, 14 operator-abort failures |
| Correction coverage | 16 episodes include correction phases or nonzero correction counts |
| Phase-level annotations | approach, grasp, transport, place, retract, correction |
| Episode-level metadata | Success/failure outcome, failure reason, duration, frozen-frame count, quality scores, derived metrics |
| Visual variation | Object pose, distractors, mild occlusions, lighting, camera configuration, object material, environment appearance |
| Format | LeRobot-compatible Parquet + MP4 |
| License | Apache 2.0 |

---

## What is included

Each episode bundles synchronized robot, video, and annotation signals:

- **Robot state trajectories**: the full 9D robot state stream over time.
- **Action trajectories**: the 9D control/action signal at each frame.
- **Multi-view RGB video**: five synchronized camera streams (wrist, front, left, top, right).
- **Per-frame indexing**: timestamp, frame index, episode index, global index, task index, terminal state, and terminal success flag.
- **Episode-level metadata**: task identity, success/failure outcome, failure reason, duration, frozen-frame count, quality scores, and derived execution metrics.
- **Phase-level annotations**: frame-range segment boundaries for approach, grasp, transport, place, retract, and correction phases.
- **Correction and failure semantics**: selected episodes include wrong-object, slip/drop, placement-error, retry, and correction/recovery signals in annotations and metrics.
 
### Camera views

```text
observation.images.wrist_cam
observation.images.front_cam
observation.images.left_cam
observation.images.top_cam
observation.images.right_cam
```
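
As a quick standalone sanity check, any single camera stream can be opened with OpenCV. A minimal sketch; the path follows the repository layout shown below:

```python
import cv2

# Minimal sketch: confirm frame rate and length of one camera stream.
cap = cv2.VideoCapture(
    "videos/chunk-000/observation.images.wrist_cam/episode_000000.mp4"
)
print(cap.get(cv2.CAP_PROP_FPS))               # expected: 30.0
print(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))  # frames in this episode
cap.release()
```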
 
### Core trajectory fields

```text
observation.state
action
timestamp
frame_index
episode_index
index
task_index
next.done
next.success
```
 
### Annotation fields

```text
episode_id
success
task_success
failure_reason
duration_sec
frozen_frames
phase_annotations
scores
derived
raw_measurements
scorer_id
```

The `phase_annotations` field contains phase names, frame ranges, execution quality, and task-alignment labels. The `scores`, `derived`, and `raw_measurements` fields provide quality and diagnostic metrics such as path efficiency, grasp precision, placement accuracy, temporal efficiency, motion smoothness, corrective movement score, correction count, correction duration, discontinuity count, and kinematic headroom.
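
To make the schema concrete, here is a minimal filtering sketch. It assumes `annotations.json` holds one record per episode with the keys above; the `"slip_drop"` and `"correction"` string values are illustrative placeholders, not confirmed identifiers.

```python
import json

# Minimal sketch: select episodes by outcome and phase content.
# Assumes annotations.json is a list of per-episode records with the
# fields listed above; "slip_drop" and "correction" are placeholder
# values, not confirmed identifiers.
with open("annotations.json") as f:
    episodes = json.load(f)

slip_drops = [ep["episode_id"] for ep in episodes
              if not ep["success"] and ep["failure_reason"] == "slip_drop"]

corrected = [ep["episode_id"] for ep in episodes
             if any(ph.get("phase") == "correction"
                    for ph in ep["phase_annotations"])]

print(f"{len(slip_drops)} slip/drop failures, {len(corrected)} episodes with corrections")
```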

---

## Quick start

The dataset follows LeRobot dataset conventions and can be loaded with the `lerobot` library:

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load the sample directly from the Hugging Face Hub.
dataset = LeRobotDataset("ExylosAi/pick_and_place_sample")

print(dataset.num_episodes)  # 50
print(dataset.num_frames)    # 21412

# Each item is a dict of synchronized tensors: observation.state,
# action, the five observation.images.* streams, and frame indexing.
frame = dataset[0]
print(frame["observation.state"].shape)
```
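
Since `LeRobotDataset` behaves like a standard PyTorch dataset, batching works with a plain `DataLoader`. A minimal sketch, continuing from the snippet above and omitting `delta_timestamps` and transforms:

```python
import torch

# Minimal sketch: batch synchronized frames with a standard DataLoader.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
print(batch["observation.state"].shape)  # torch.Size([32, 9])
print(batch["action"].shape)             # torch.Size([32, 9])
```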
 
You can also browse the raw Parquet and MP4 files directly under the **Files** tab.

---

## Repository structure

```text
README.md
LICENSE_1.txt
info.json
annotations.json
tasks.jsonl
episodes.jsonl
episodes_stats.jsonl
preview.mp4
preview.gif
data/
  chunk-000/
    episode_000000.parquet
    episode_000001.parquet
    ...
videos/
  chunk-000/
    observation.images.wrist_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.front_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.left_cam/
      ...
    observation.images.top_cam/
      ...
    observation.images.right_cam/
      ...
```
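
The per-episode Parquet files can also be inspected without `lerobot`. A minimal sketch with `pandas`, assuming the columns follow the core trajectory fields listed above:

```python
import pandas as pd

# Minimal sketch: inspect one episode's trajectory table directly.
df = pd.read_parquet("data/chunk-000/episode_000000.parquet")

print(df.columns.tolist())                # observation.state, action, timestamp, ...
print(len(df))                            # frames in this episode
print(bool(df["next.success"].iloc[-1]))  # terminal success flag
```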
 
 
---

## Intended use

This sample is suitable for:

- Inspecting the Exylos data format and annotation schema.
- Testing LeRobot-compatible training and data-loading pipelines.
- Running quick imitation-learning experiments on a narrow pick-and-place task.
- Evaluating synchronized multi-view RGB, state/action trajectories, and phase-level annotations.
- Examining visual domain randomization and procedural variation in a compact manipulation sample.
- Reviewing success, failure, slip/drop, operator-abort, and correction/recovery examples.

For larger production-scale skill packs, including broader object families, configurable embodiments, denser masks, custom evaluation logic, or higher episode volumes, visit [exylos.ai](https://exylos.ai) or contact us directly.

---

## Out-of-scope

- This sample does not target a specific real-world deployment cell or production line.
- It does not include dense per-frame semantic or instance masks.
- It does not include a held-out benchmark split tuned for leaderboard-style evaluation.
- It does not provide dense per-frame 6-DoF object-pose labels as a standalone object-state stream.

---

## About Exylos

Exylos is an early-stage robotics data company. We capture human manipulation demonstrations in consumer VR and procedurally expand them into physics-consistent, transfer-oriented training episodes with visual domain randomization. Datasets are delivered in a LeRobot-compatible structure or adapted to client pipelines.

If you are a robotics or applied-ML team and want to discuss a custom skill pack for your embodiment and task, reach out at **contact@exylos.ai** or visit [exylos.ai](https://exylos.ai).
 
 
---

## Citation

If you use this dataset in research or in a public technical report, please cite this repository.

---

## License

Released under the **Apache License 2.0**. This sample is intentionally permissive so robotics and ML teams can inspect, load, test, and commercially evaluate the format without licensing friction. You are free to use this dataset for both research and commercial purposes, subject to the standard Apache 2.0 attribution requirements. See `LICENSE_1.txt` in this repository for full terms.

---
 
## Contact

- Email: contact@exylos.ai
- LinkedIn: [Exylos on LinkedIn](https://www.linkedin.com/company/exylos-ai/)

For questions specific to this dataset, including format, schema, or fields, please open a discussion in the **Community** tab on this repository.