---
license: apache-2.0
language:
- en
pretty_name: "Exylos Pick-and-Place Sample"
size_categories:
- n<1K
task_categories:
- robotics
tags:
- lerobot
- robot-learning
- imitation-learning
- manipulation
- pick-and-place
- multi-view
- vr-teleoperation
- human-in-the-loop
- human-seeded
- synthetic
- sim-to-real
- visual-domain-randomization
- domain-randomization
- franka
- panda
- exylos
- parquet
- time-series
- trajectories
- state-action
- phase-annotations
- failure-recovery
---
# Exylos Pick-and-Place Sample
> A human-in-the-loop, multi-view robot manipulation dataset captured through consumer VR and procedurally expanded with visual domain randomization into transfer-oriented pick-and-place episodes. Delivered in a LeRobot-compatible structure.
<video controls autoplay loop muted src="https://huggingface.co/datasets/ExylosAi/pick_and_place_sample/resolve/main/preview.mp4" width="720"></video>
---
## Visualize episodes interactively
Open this dataset in the official LeRobot Dataset Visualizer to browse individual episodes, inspect camera streams, and view trajectories in your browser:
**[Open in LeRobot Visualizer](https://huggingface.co/spaces/lerobot/visualize_dataset?dataset=ExylosAi%2Fpick_and_place_sample)**
---
## Why this dataset is different
Most public manipulation datasets come from one of two sources: real-robot teleoperation farms, which are slow and expensive, or pure simulation, which is cheap but often weak for transfer. This sample comes from a third path:
1. **Human-in-the-loop VR capture.** A human performs the task in an immersive virtual environment using a standard VR headset. Their motion provides task intent, manipulation timing, and correction behavior, while the system retargets the demonstration onto a virtual Franka Panda robot embodiment.
2. **Procedurally expanded with visual domain randomization.** Seed demonstrations are expanded into physics-consistent variations with changing object poses, distractors, mild occlusions, lighting conditions, camera configurations, object materials, and environment appearance.
3. **Packaged for direct inspection and training.** The output is delivered in a LeRobot-compatible structure, with synchronized multi-view video, state and action streams, phase-level annotations, quality scores, and success/failure metadata.
The result is human-seeded, scaled, and labeled robot-manipulation data that is closer to what policy training needs, without requiring every trajectory to be collected on a physical robot.
This public release is intentionally compact. It is meant as an **inspection sample**: robotics teams can evaluate the format, modalities, visual variation, annotation schema, and trajectory quality before discussing larger productized skill packs.
---
## Dataset summary
| Property | Value |
|---|---|
| Episodes | 50 |
| Total frames | 21,412 |
| Task | Pick up an object from the workspace and place it into a container |
| Robot embodiment | Franka Emika Panda, 7-DoF arm + parallel gripper |
| Camera views | 5 synchronized RGB streams |
| Video | 30 FPS, H.264, 1280 x 960 |
| Robot state | 9-dimensional |
| Action vector | 9-dimensional |
| Trajectories | Synchronized robot state + action streams per frame |
| Outcome mix | 30 success episodes, 20 failure episodes |
| Failure reasons | 6 slip/drop failures, 14 operator-abort failures |
| Correction coverage | 16 episodes include correction phases or nonzero correction counts |
| Phase-level annotations | approach, grasp, transport, place, retract, correction |
| Episode-level metadata | success/failure outcome, failure reason, duration, frozen-frame count, quality scores, derived metrics |
| Visual variation | Object pose, distractors, mild occlusions, lighting, camera configuration, object material, and environment appearance variation |
| Format | LeRobot-compatible Parquet + MP4 |
| License | Apache 2.0 |
---
## What is included
Each episode bundles synchronized robot, video, and annotation signals:
- **Robot state trajectories**: the full 9D robot state stream over time.
- **Action trajectories**: the 9D control/action signal at each frame.
- **Multi-view RGB video**: five synchronized camera streams (wrist, front, left, top, and right).
- **Per-frame indexing**: timestamp, frame index, episode index, global index, task index, terminal state, and terminal success flag.
- **Episode-level metadata**: task identity, success/failure outcome, failure reason, duration, frozen-frame count, quality scores, and derived execution metrics.
- **Phase-level annotations**: frame-range segment boundaries for approach, grasp, transport, place, retract, and correction phases.
- **Correction and failure semantics**: selected episodes include wrong-object, slip/drop, placement-error, retry, and correction/recovery signals in annotations and metrics.
### Camera views
```text
observation.images.wrist_cam
observation.images.front_cam
observation.images.left_cam
observation.images.top_cam
observation.images.right_cam
```
### Core trajectory fields
```text
observation.state
action
timestamp
frame_index
episode_index
index
task_index
next.done
next.success
```
### Annotation fields
```text
episode_id
success
task_success
failure_reason
duration_sec
frozen_frames
phase_annotations
scores
derived
raw_measurements
scorer_id
```
The `phase_annotations` field contains phase names, frame ranges, execution quality, and task-alignment labels. The `scores`, `derived`, and `raw_measurements` fields provide quality and diagnostic metrics such as path efficiency, grasp precision, placement accuracy, temporal efficiency, motion smoothness, corrective movement score, correction count, correction duration, discontinuity count, and kinematic headroom.
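For example, to pull the outcome and phase segments for each episode out of `annotations.json`, something like the following should work. This is a sketch only: it assumes the top-level container is a list of per-episode dicts with the field names listed above, which is best confirmed against the file itself.

```python
import json

# Load the episode-level annotation records shipped with the dataset
with open("annotations.json") as f:
    annotations = json.load(f)

# Print each episode's outcome and its phase segments.
# Assumption: annotations is a list of per-episode dicts keyed by the
# annotation fields listed above (episode_id, success, failure_reason,
# phase_annotations, ...).
for ep in annotations:
    outcome = "success" if ep["success"] else ep["failure_reason"]
    print(ep["episode_id"], outcome)
    for phase in ep["phase_annotations"]:
        print("  ", phase)
```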
---
## Quick start
The dataset follows LeRobot dataset conventions and can be loaded with the `lerobot` library:
```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Download (on first use) and index the dataset from the Hugging Face Hub
dataset = LeRobotDataset("ExylosAi/pick_and_place_sample")

# Inspect the first frame of the first episode
sample = dataset[0]
print(sample.keys())
print(sample["observation.state"].shape)  # 9-dimensional robot state
print(sample["action"].shape)             # 9-dimensional action vector
```
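Each sample also exposes the five camera streams under the keys listed above. A minimal sketch for grabbing one frame, assuming LeRobot's default decoding into channel-first image tensors:

```python
# Access one camera frame from the same sample dict; the key names match
# the camera-view list above. Shape is (C, H, W), e.g. (3, 960, 1280)
# for this dataset's 1280 x 960 streams.
wrist = sample["observation.images.wrist_cam"]
print(wrist.shape, wrist.dtype)
```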
You can also browse the raw Parquet and MP4 files directly under the **Files** tab.
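If you would rather skip the `lerobot` dependency, each episode's trajectory table can be read directly with pandas. A sketch, assuming a recent `huggingface_hub` is installed so that `hf://` paths resolve; column names follow the core trajectory fields listed above:

```python
import pandas as pd

# Read one episode's per-frame table straight from the Hub via the
# hf:// filesystem (provided by huggingface_hub's fsspec integration)
df = pd.read_parquet(
    "hf://datasets/ExylosAi/pick_and_place_sample/data/chunk-000/episode_000000.parquet"
)

print(df.columns.tolist())          # observation.state, action, timestamp, ...
print(len(df), "frames")
print(df["next.success"].iloc[-1])  # terminal success flag for this episode
```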
---
## Repository structure
```text
README.md
LICENSE_1.txt
info.json
annotations.json
tasks.jsonl
episodes.jsonl
episodes_stats.jsonl
preview.mp4
preview.gif
data/
  chunk-000/
    episode_000000.parquet
    episode_000001.parquet
    ...
videos/
  chunk-000/
    observation.images.wrist_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.front_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.left_cam/
      ...
    observation.images.top_cam/
      ...
    observation.images.right_cam/
      ...
```
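To fetch individual files without cloning the whole repository, the standard `hf_hub_download` API from `huggingface_hub` can target single paths. A sketch; both filenames mirror the layout shown above:

```python
from huggingface_hub import hf_hub_download

# Download one episode's trajectory table and one camera video into the
# local Hugging Face cache, returning the resolved local paths
parquet_path = hf_hub_download(
    repo_id="ExylosAi/pick_and_place_sample",
    repo_type="dataset",
    filename="data/chunk-000/episode_000000.parquet",
)
video_path = hf_hub_download(
    repo_id="ExylosAi/pick_and_place_sample",
    repo_type="dataset",
    filename="videos/chunk-000/observation.images.wrist_cam/episode_000000.mp4",
)
print(parquet_path, video_path, sep="\n")
```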
---
## Intended use
This sample is suitable for:
- Inspecting the Exylos data format and annotation schema.
- Testing LeRobot-compatible training and data-loading pipelines.
- Quick imitation-learning experiments on a narrow pick-and-place task.
- Evaluating synchronized multi-view RGB, state/action trajectories, and phase-level annotations.
- Inspecting visual domain randomization and procedural variation in a compact manipulation sample.
- Reviewing success, failure, slip/drop, operator-abort, and correction/recovery examples.
For larger production-scale skill packs, including broader object families, configurable embodiments, denser masks, custom evaluation logic, or higher episode volumes, visit [exylos.ai](https://exylos.ai) or contact us directly.
---
## Out-of-scope
- This sample does not target a specific real-world deployment cell or production line.
- It does not include dense per-frame semantic or instance masks.
- It does not include a held-out benchmark split tuned for leaderboard-style evaluation.
- It does not provide dense per-frame 6DoF object pose labels as a standalone object-state stream.
---
## About Exylos
Exylos is an early-stage robotics data company. We capture human manipulation demonstrations in consumer VR and procedurally expand them into physics-consistent, transfer-oriented training episodes with visual domain randomization. Datasets are delivered in a LeRobot-compatible structure or adapted to client pipelines.
If you are a robotics or applied-ML team and want to discuss a custom skill pack for your embodiment and task, reach out at **contact@exylos.ai** or visit [exylos.ai](https://exylos.ai).
---
## Citation
If you use this dataset in research or in a public technical report, please cite it as:
```bibtex
@misc{exylos_picknplace_sample_2026,
  title        = {Exylos Pick-and-Place Sample: A Multi-View, VR-Captured Manipulation Dataset},
  author       = {Exylos},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/ExylosAi/pick_and_place_sample}},
  note         = {LeRobot-compatible dataset}
}
```
---
## License
Released under the **Apache License 2.0**. This sample is intentionally permissive so robotics and ML teams can inspect, load, test, and commercially evaluate the format without licensing friction. You are free to use this dataset for both research and commercial purposes, subject to the standard Apache 2.0 attribution requirements. See `LICENSE_1.txt` in this repository for full terms.
---
## Contact
- Website: [exylos.ai](https://exylos.ai)
- Email: contact@exylos.ai
- LinkedIn: [Exylos on LinkedIn](https://www.linkedin.com/company/exylos-ai/)
For questions specific to this dataset, including format, schema, or fields, please open a discussion in the **Community** tab on this repository.
|