---
title: EXYLOS
emoji: 🤖
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
---

<div align="center">
  <h1>EXYLOS</h1>
  <p><strong>Robot-ready skill datasets for manipulation policy learning and evaluation.</strong></p>
</div>

---

## What we do

EXYLOS builds structured robot manipulation datasets for imitation learning, VLA models, policy training, and evaluation.

Raw videos show what happened, but policy learning also needs synchronized actions, states, task metadata, outcomes, and failure context. EXYLOS turns human-seeded manipulation workflows performed in our simulations into train-ready episodes with multi-view observations, trajectories, annotations, success/failure labels, and quality diagnostics.

---

## Public sample datasets

This organization hosts compact sample datasets for checking the schema, loading data, inspecting trajectories, and evaluating whether the EXYLOS format fits a robotics ML stack.

| Dataset | Status | Contents |
|---|---:|---|
| [`ExylosAi/pick_and_place_sample`](https://huggingface.co/datasets/ExylosAi/pick_and_place_sample) | Available | 50 pick-and-place episodes, 21,412 frames, 5 RGB views, 9D Panda state/action, 30 success and 20 failure episodes. |
| [`ExylosAi/table_spill_cleanup_bimanual`](https://huggingface.co/datasets/ExylosAi/table_spill_cleanup_bimanual) | Available | 50 bimanual spill-cleanup episodes, 67,742 frames, 6 RGB views, 18D dual-Panda state/action, 35 success and 15 failure episodes. |

More contact-rich and recovery-heavy samples are planned.

---

## Dataset format

EXYLOS samples are packaged to be compatible with the LeRobot ecosystem whenever possible. A typical dataset contains:

```text
README.md
LICENSE.txt
annotations.json
meta/
  info.json
  tasks.jsonl
  episodes.jsonl
  episodes_stats.jsonl
data/
  chunk-000/
    episode_000000.parquet
    episode_000001.parquet
videos/
  chunk-000/
    observation.images.<camera_name>/
      episode_000000.mp4
      episode_000001.mp4
```

Core signals:

| Category | Examples |
|---|---|
| Visual observations | Synchronized RGB wrist and scene views |
| Action and state | Robot state, action vectors, timestamps, frame indices |
| Labels | Success, failure reason, terminal flags, collisions, aborts, retries |
| Annotations | Phase boundaries, hand labels, object notes, scores, derived metrics |
| Metadata | Task description, duration, splits, feature schema, validation stats |

Exact fields vary by dataset, so each repository includes a dataset-specific card.
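Given the layout above, per-episode metadata can be inspected with the standard library alone. The field names in this sketch (`episode_index`, `length`, `tasks`) follow common LeRobot conventions and are illustrative, not the authoritative EXYLOS schema:

```python
import json
from io import StringIO

def load_episodes(jsonl_stream):
    """Parse a LeRobot-style episodes.jsonl stream into a list of dicts."""
    return [json.loads(line) for line in jsonl_stream if line.strip()]

# Synthetic two-episode sample; real field names may differ per dataset card.
sample = StringIO(
    '{"episode_index": 0, "length": 412, "tasks": ["pick and place"]}\n'
    '{"episode_index": 1, "length": 388, "tasks": ["pick and place"]}\n'
)
episodes = load_episodes(sample)
total_frames = sum(ep["length"] for ep in episodes)
print(len(episodes), total_frames)  # 2 800
```

The same parser works on a real `meta/episodes.jsonl` by passing `open("meta/episodes.jsonl")` instead of the synthetic stream.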

---

## Why EXYLOS datasets are different

- **Structured, not raw:** episodes include synchronized video, actions, state, metadata, annotations, and quality checks.
- **Skill-oriented:** each dataset is organized around a manipulation workflow rather than unrelated clips.
- **Failure-aware:** samples include failed attempts, aborts, collisions, incomplete task executions, and recovery-relevant labels when available.
- **LeRobot-oriented:** data follows a LeRobot-style layout and is stored in open formats such as MP4, Parquet, JSON, and JSONL.
- **Transfer-minded:** workflows are captured from human intent in consumer VR and procedurally expanded, with added visual domain randomization for broader policy-learning experiments.
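As a concrete illustration of the failure-aware labeling, the sketch below aggregates hypothetical per-episode outcome records into summary statistics. The `success` and `failure_reason` field names are assumptions for illustration; check each dataset card for the actual schema:

```python
from collections import Counter

# Hypothetical per-episode outcome records; field names are illustrative,
# not the authoritative EXYLOS schema.
episodes = [
    {"episode_index": 0, "success": True,  "failure_reason": None},
    {"episode_index": 1, "success": False, "failure_reason": "collision"},
    {"episode_index": 2, "success": False, "failure_reason": "aborted"},
    {"episode_index": 3, "success": True,  "failure_reason": None},
]

n_success = sum(ep["success"] for ep in episodes)
reasons = Counter(ep["failure_reason"] for ep in episodes if not ep["success"])
print(n_success, dict(reasons))
```

Splitting episodes this way is what makes the samples usable for recovery-focused experiments as well as plain imitation learning.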

---

## Intended use

Public samples are suitable for:

- inspecting EXYLOS schema and annotation conventions
- testing LeRobot-compatible loaders and training pipelines
- running small imitation-learning experiments
- reviewing multi-view video, trajectories, and phase annotations
- evaluating whether a custom EXYLOS skill pack would fit your workflow

They are compact inspection datasets, not complete production-scale benchmarks.

---

## Commercial datasets and custom skill packs

EXYLOS can generate custom robot-ready skill datasets for pick-and-place, bimanual manipulation, spill cleanup, sorting, binning, object rearrangement, failure recovery, and evaluation/regression sets.

Commercial deliveries can add depth, segmentation masks, object states, event labels, custom cameras, larger episode volumes, stricter QA, and internal-pipeline packaging.

---

## License

Current public samples are released under Apache 2.0. Please check each dataset card and license file before using a sample in research, demos, training, or commercial workflows.

---

## Contact

- Website: [exylos.ai](https://exylos.ai)
- Email: contact@exylos.ai
- LinkedIn: [Exylos on LinkedIn](https://www.linkedin.com/company/exylos-ai/)

If you need structured skill data, send us the target task, robot, modalities, format, evaluation criteria, and timeline.