Exylos Bimanual Table Spill Cleanup Sample

A human-in-the-loop, multi-view bimanual robot manipulation dataset for tabletop spill cleanup: removing distractor objects and wiping spilled liquid from the surface with a sponge. Delivered in a LeRobot-compatible structure with synchronized video, state/action trajectories, phase annotations, and success/failure labels.


Why this dataset is different

Most public manipulation datasets come from one of two sources: real-robot teleoperation farms, which are slow and expensive, or pure simulation, which is cheap but often weak for transfer. This sample comes from a third path:

  1. Human-in-the-loop VR capture. A human performs the task in an immersive virtual environment using a standard VR headset. Their motion provides task intent, manipulation timing, bimanual coordination, and correction behavior, while the system retargets the demonstration onto a virtual Franka Panda robot embodiment.
  2. Procedural expansion with visual domain randomization. Seed demonstrations are expanded into physics-consistent variations spanning object poses, distractors, mild occlusions, lighting conditions, camera configurations, object materials, and environment appearance.
  3. Packaged for direct inspection and training. The output is delivered in a LeRobot-compatible structure, with synchronized multi-view video, dual-arm state and action streams, phase-level annotations, quality scores, and success/failure metadata.

The result is human-seeded, scaled, and labeled robot-manipulation data that is closer to what policy training needs, without requiring every trajectory to be collected on a physical robot.

This public release is intentionally compact. It is meant as an inspection sample: robotics teams can evaluate the format, modalities, visual variation, annotation schema, bimanual trajectory quality, and failure semantics before discussing larger productized skill packs.


Dataset summary

Episodes: 50
Total frames: 67,742
Total duration: 2,256.37 seconds (about 37.6 minutes)
Task: Remove distractor objects and wipe the spilled liquid from the tabletop with a sponge
Robot embodiment: Bimanual Franka Emika Panda setup, two 7-DoF arms + parallel grippers
Camera views: 6 synchronized RGB streams
Video: 30 FPS, H.264, 1280 x 960
Robot state: 18-dimensional
Action vector: 18-dimensional
Trajectories: Synchronized dual-arm robot state + action streams per frame
Outcome mix: 35 success episodes, 15 failure episodes
Failure reasons: 12 cleanup-incomplete failures, 3 operator-abort failures
Frozen frames: 8,177 across 35 episodes
Phase-level annotations: approach, grasp, transport, place, retract, clean_surface, clean_swipe_pass, collision, handover, task_attempt
Episode-level metadata: success/failure outcome, failure reason, duration, frozen-frame count, quality scores, derived metrics
Visual variation: Object pose, distractors, mild occlusions, lighting, camera configuration, object material, and environment appearance
Format: LeRobot-compatible Parquet + MP4
License: Apache 2.0

What is included

Each episode bundles synchronized robot, video, and annotation signals:

  • Robot state trajectories: the full 18D bimanual robot state stream over time, covering left and right Panda arm joints plus gripper finger joints.
  • Action trajectories: the 18D control/action signal at each frame for the same bimanual motor ordering.
  • Multi-view RGB video: six synchronized camera streams (front, left, right, top, left wrist, and right wrist).
  • Per-frame indexing: timestamp, frame index, episode index, global index, task index, terminal state, and terminal success flag.
  • Episode-level metadata: task identity, success/failure outcome, failure reason, duration, frozen-frame count, quality scores, and derived execution metrics.
  • Phase-level annotations: frame-range segment boundaries for object approach, grasp, transport, placement, retraction, wiping passes, collision events, handovers, and failed task attempts.
  • Failure semantics: selected episodes include cleanup-incomplete, operator-abort, collision, wrong-object, and task-attempt failure signals in annotations and metrics.
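
For a first structural check, a single episode's trajectory table can be opened with plain pandas before wiring up a full training pipeline. A minimal sketch, assuming the repository has been downloaded locally (for example with huggingface_hub); the episode path follows the repository structure shown further below:

# Minimal sketch: inspect one episode's trajectory table with pandas.
import pandas as pd

episode_path = "data/chunk-000/episode_000000.parquet"  # local path after download
df = pd.read_parquet(episode_path)

print(df.columns.tolist())       # observation.state, action, timestamp, ...
print(len(df), "frames")         # per-frame rows for this episode
print(float(df["timestamp"].iloc[-1]), "seconds")  # approximate episode duration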

Camera views

observation.images.front_cam
observation.images.left_cam
observation.images.right_cam
observation.images.top_cam
observation.images.wrist_cam_l
observation.images.wrist_cam_r
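
To sanity-check a single view, the per-episode MP4 can be decoded frame by frame with OpenCV. A minimal sketch, again assuming a local download; the path follows the repository structure below:

# Minimal sketch: count decoded frames for one camera view with OpenCV.
import cv2

video_path = "videos/chunk-000/observation.images.front_cam/episode_000000.mp4"
cap = cv2.VideoCapture(video_path)
n_frames = 0
while True:
    ok, frame = cap.read()  # frame is a 960x1280x3 BGR array on success
    if not ok:
        break
    n_frames += 1
cap.release()
print(n_frames, "frames decoded")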

Core trajectory fields

observation.state
action
timestamp
frame_index
episode_index
index
task_index
next.done
next.success

State and action motor order

left_panda_joint1
left_panda_joint2
left_panda_joint3
left_panda_joint4
left_panda_joint5
left_panda_joint6
left_panda_joint7
left_panda_finger_joint1
left_panda_finger_joint2
right_panda_joint1
right_panda_joint2
right_panda_joint3
right_panda_joint4
right_panda_joint5
right_panda_joint6
right_panda_joint7
right_panda_finger_joint1
right_panda_finger_joint2
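
Because the motor order is fixed (left arm, left gripper, right arm, right gripper), an 18D state or action vector splits into named parts by simple slicing. A minimal sketch of that convention:

# Minimal sketch: split an 18D state/action vector by the motor order above.
import numpy as np

def split_bimanual(vec):
    vec = np.asarray(vec)
    assert vec.shape[-1] == 18
    return {
        "left_arm":      vec[..., 0:7],    # left_panda_joint1..7
        "left_gripper":  vec[..., 7:9],    # left_panda_finger_joint1..2
        "right_arm":     vec[..., 9:16],   # right_panda_joint1..7
        "right_gripper": vec[..., 16:18],  # right_panda_finger_joint1..2
    }

parts = split_bimanual(np.zeros(18))
print({name: part.shape for name, part in parts.items()})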

Annotation fields

episode_id
success
task_success
failure_reason
duration_sec
frozen_frames
phase_annotations
scores
derived
raw_measurements
scorer_id

The phase_annotations field contains phase names, frame ranges, execution quality, task-alignment labels, hand labels, and optional notes such as grasped object, released object, collision object, or operator-abort context.

The scores, derived, and raw_measurements fields provide quality and diagnostic metrics such as path efficiency, grasp precision, placement accuracy, temporal efficiency, motion smoothness, corrective movement score, kinematic headroom, composite score, confidence, path ratio, discontinuity count, and low-frequency motion power.
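
The exact JSON layout is easiest to confirm by loading annotations.json directly. A minimal sketch that tallies outcomes and phase labels, assuming the file is a list of per-episode records with the fields above; the per-segment key names inside phase_annotations are an assumption, so adjust after inspecting one record:

# Minimal sketch: tally outcomes and phase labels from annotations.json.
import json
from collections import Counter

with open("annotations.json") as f:
    episodes = json.load(f)

# failure_reason is assumed empty or absent on successful episodes.
outcomes = Counter(ep.get("failure_reason") or "success" for ep in episodes)
print(outcomes)

# "phase" is an assumed key for each segment's phase name.
phases = Counter(
    seg["phase"]
    for ep in episodes
    for seg in ep.get("phase_annotations", [])
)
print(phases)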


Repository structure

README.md
LICENSE_1.txt
annotations.json
meta/
  info.json
  tasks.jsonl
  episodes.jsonl
  episodes_stats.jsonl
data/
  chunk-000/
    episode_000000.parquet
    episode_000001.parquet
    ...
videos/
  chunk-000/
    observation.images.front_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.left_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    observation.images.right_cam/
      ...
    observation.images.top_cam/
      ...
    observation.images.wrist_cam_l/
      ...
    observation.images.wrist_cam_r/
      ...
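
For training, the dataset is meant to stream straight into LeRobot-compatible pipelines. A minimal loading sketch; note that the LeRobotDataset import path has moved between lerobot releases, so match it to your installed version:

# Minimal sketch: load the dataset through LeRobot (import path varies by version).
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("ExylosAi/table_spill_cleanup_bimanual")
print(ds.num_episodes, "episodes at", ds.fps, "fps")

sample = ds[0]  # dict with observation.state, action, camera frames, indices, ...
print(sample["observation.state"].shape)  # 18D bimanual state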

Intended use

This sample is suitable for:

  • Inspecting the Exylos data format and annotation schema.
  • Testing LeRobot-compatible training and data-loading pipelines.
  • Quick imitation-learning experiments on a narrow bimanual spill-cleanup task.
  • Evaluating synchronized multi-view RGB, dual-arm state/action trajectories, and phase-level annotations.
  • Inspecting visual domain randomization and procedural variation in a compact manipulation sample.
  • Reviewing success, cleanup-incomplete, operator-abort, collision, wrong-object, and task-attempt failure examples.

For larger production-scale skill packs, including broader object families, configurable embodiments, denser masks, custom evaluation logic, or higher episode volumes, visit exylos.ai or contact us directly.


Out-of-scope

  • This sample does not target a specific real-world deployment cell or production line.
  • It does not include dense per-frame semantic or instance masks.
  • It does not include a held-out benchmark split tuned for leaderboard-style evaluation.
  • It does not provide dense per-frame 6DoF object pose labels as a standalone object-state stream.
  • It is a compact inspection sample rather than a broad benchmark for all spill-cleanup behaviors.

About Exylos

Exylos is an early-stage robotics data company. We capture human manipulation demonstrations in consumer VR and procedurally expand them into physics-consistent, transfer-oriented training episodes with visual domain randomization. Datasets are delivered in LeRobot-compatible structure or adapted to client pipelines.

If you are a robotics or applied-ML team and want to discuss a custom skill pack for your embodiment and task, reach out at contact@exylos.ai or visit exylos.ai.


Citation

If you use this dataset in research or in a public technical report, please cite it as:

@misc{exylos_bimanual_table_spill_cleanup_sample_2026,
  title        = {Exylos Bimanual Table Spill Cleanup Sample: A Multi-View, VR-Captured Sponge-Wiping Dataset},
  author       = {Exylos},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/ExylosAi/table_spill_cleanup_bimanual}},
  note         = {LeRobot-compatible dataset}
}

License

Released under the Apache License 2.0. This sample is intentionally permissive so robotics and ML teams can inspect, load, test, and commercially evaluate the format without licensing friction. You are free to use this dataset for both research and commercial purposes, subject to the standard Apache 2.0 attribution requirements. See LICENSE_1.txt in this repository for full terms.


Contact

For questions specific to this dataset, including format, schema, or fields, please open a discussion in the Community tab on this repository.
