---
license: mit
pretty_name: Robotics Bowel Retraction
viewer: false
tags:
- robotics
- surgical-robotics
- imitation-learning
- minimally-invasive-surgery
- endoscopic-vision
---
# Robotics Bowel Retraction Dataset
Dataset release for **Supervised Mixture-of-Experts for Surgical Grasping and Retraction**, [arXiv:2601.21971](https://arxiv.org/abs/2601.21971).
This dataset supports learning autonomous surgical-assistant policies for collaborative bowel grasping and retraction. In each demonstration, a surgeon-controlled instrument visually indicates a target bowel segment and the robot assistant performs grasping, retraction, and tension maintenance using stereo endoscopic observations.
## Dataset Description
The data were collected in an OpenHELP open-body phantom setup with two UR5e robotic arms: one static arm holding a stereo endoscope and one arm controlling a laparoscopic bowel grasper through a mechatronic interface. The demonstrations contain stereo endoscopic image pairs, gripper state, and instrument-tip position in the camera coordinate frame.
The associated paper uses this data to train and evaluate a supervised Mixture-of-Experts extension to Action Chunking Transformer (ACT) for a five-phase surgical manipulation task:
1. Idle
2. Approach and grasp
3. Hold
4. Retract
5. Maintain tension
## Dataset Size
This release contains 165 demonstrations across the fixed-viewpoint and random-viewpoint splits. Demonstration counts are computed from episode folders containing `episode.csv`. Synchronized sample counts are the totals of non-header rows across the corresponding `episode.csv` files.
| Split | Source folders | Demonstrations | Synchronized samples | Approx. raw size | Archive size |
| --- | --- | ---: | ---: | ---: | ---: |
| fixed_viewpoint | `v1/`, `v2/` | 120 | 14,837 | 138 GB | 127 GB |
| random_viewpoint | `multicam/multi_camera_exp/`, `multicam/multi_camera_exp2/` | 45 | 8,051 | 65 GB | 62 GB |
| total | all released folders | 165 | 22,888 | 203 GB | 189 GB |
Folder-level breakdown:
| Folder | Split | Demonstrations | Synchronized samples | Approx. raw size |
| --- | --- | ---: | ---: | ---: |
| `v1/` | fixed_viewpoint | 60 | 7,577 | 58 GB |
| `v2/` | fixed_viewpoint | 60 | 7,260 | 80 GB |
| `multicam/multi_camera_exp/` | random_viewpoint | 15 | 2,280 | 19 GB |
| `multicam/multi_camera_exp2/` | random_viewpoint | 30 | 5,771 | 46 GB |
## Release Files
This release contains two compressed archives:
| Archive | Split | Contents |
| --- | --- | --- |
| `bowel_retraction_fixed_viewpoint.tar.zst` | fixed_viewpoint | `v1/`, `v2/` |
| `bowel_retraction_random_viewpoint.tar.zst` | random_viewpoint | `multicam/multi_camera_exp/`, `multicam/multi_camera_exp2/` |
Archive member paths are relative to the dataset root and preserve the raw source folders. For example, files inside the fixed-viewpoint archive are stored as `v1/...` and `v2/...`, not as absolute tar paths.
`v1` and `v2` are historical raw folder names for two fixed-viewpoint recording days. They are not dataset version numbers. Together, these two folders form the fixed-viewpoint split used in the paper.
Note: some CSV files, especially `episode.csv`, include path columns such as `frameLeftRectifiedPath` and `frameRightRectifiedPath` that contain the original absolute acquisition paths from the NCT storage system. Before using the dataset with loaders that read these paths directly, replace the original prefixes with your local extraction path.
If both archives are extracted into a dataset root such as `/path/to/robotics_bowel_grasping`, the expected local layout is:
```text
/path/to/robotics_bowel_grasping/
v1/
v2/
multicam/
multi_camera_exp/
multi_camera_exp2/
```
Apply these path-prefix replacements in the CSV files:
```text
/mnt/cluster/datasets/bowel_retraction/v1/ -> /path/to/robotics_bowel_grasping/v1/
/mnt/cluster/datasets/bowel_retraction/v2/ -> /path/to/robotics_bowel_grasping/v2/
/mnt/cluster/datasets/bowel_retraction/multi_camera_exp/ -> /path/to/robotics_bowel_grasping/multicam/multi_camera_exp/
/mnt/cluster/datasets/bowel_retraction/multi_camera_exp2/ -> /path/to/robotics_bowel_grasping/multicam/multi_camera_exp2/
```
Example helper script:
```python
from pathlib import Path

root = Path("/path/to/robotics_bowel_grasping")

# Map each original acquisition prefix to its local counterpart.
replacements = {
    "/mnt/cluster/datasets/bowel_retraction/v1/": f"{root}/v1/",
    "/mnt/cluster/datasets/bowel_retraction/v2/": f"{root}/v2/",
    "/mnt/cluster/datasets/bowel_retraction/multi_camera_exp/": f"{root}/multicam/multi_camera_exp/",
    "/mnt/cluster/datasets/bowel_retraction/multi_camera_exp2/": f"{root}/multicam/multi_camera_exp2/",
}

# Rewrite every episode.csv in place.
for csv_path in root.rglob("episode.csv"):
    text = csv_path.read_text()
    for old, new in replacements.items():
        text = text.replace(old, new)
    csv_path.write_text(text)
```
## Episode CSV Schema
Each episode folder contains an `episode.csv` file. This is the synchronized table used for learning. Other CSV files in the raw folders contain unsynchronized sensor logs and are not required for the standard learning setup.
The `episode.csv` columns are:
| Column | Description |
| --- | --- |
| `frameLeftRectifiedPath` | Path to the left rectified endoscopic RGB image. These paths use the original acquisition prefix and should be rewritten after extraction. |
| `frameRightRectifiedPath` | Path to the right rectified endoscopic RGB image. These paths use the original acquisition prefix and should be rewritten after extraction. |
| `base_to_camera_quat_x` | Quaternion x component for the rotation from robot base frame to camera frame. |
| `base_to_camera_quat_y` | Quaternion y component for the rotation from robot base frame to camera frame. |
| `base_to_camera_quat_z` | Quaternion z component for the rotation from robot base frame to camera frame. |
| `base_to_camera_quat_w` | Quaternion w component for the rotation from robot base frame to camera frame. |
| `open` | Boolean gripper state. `True` means open gripper and `False` means closed gripper. |
| `relative_tip_position_x` | Tool-tip x position in meters, relative to the tool-tip position when the robot controller was started, where the initial position is `(0, 0, 0)`. |
| `relative_tip_position_y` | Tool-tip y position in meters, relative to the tool-tip position when the robot controller was started, where the initial position is `(0, 0, 0)`. |
| `relative_tip_position_z` | Tool-tip z position in meters, relative to the tool-tip position when the robot controller was started, where the initial position is `(0, 0, 0)`. |
| `camera_frame_tip_position_x` | Tool-tip x position in meters expressed in the camera-oriented frame. The rotational component is valid, but no calibrated translation vector is provided. |
| `camera_frame_tip_position_y` | Tool-tip y position in meters expressed in the camera-oriented frame. The rotational component is valid, but no calibrated translation vector is provided. |
| `camera_frame_tip_position_z` | Tool-tip z position in meters expressed in the camera-oriented frame. The rotational component is valid, but no calibrated translation vector is provided. |
| `task_phase` | Zero-indexed task phase label: `0` idle, `1` approach and grasp, `2` hold, `3` retract, `4` maintain tension. |
| `language` | Language instruction associated with the task. |
Additional release files:
- `manifest.csv`: archive-level manifest.
- `SHA256SUMS`: SHA256 checksums for the `.tar.zst` archives.
- `.gitattributes`: Git LFS patterns for archive storage.
- `LICENSE`: MIT License.
## Verification
Verify archive checksums with:
```bash
sha256sum -c SHA256SUMS
```
Inspect archive contents with:
```bash
tar --use-compress-program="zstd -d" -tf bowel_retraction_fixed_viewpoint.tar.zst
tar --use-compress-program="zstd -d" -tf bowel_retraction_random_viewpoint.tar.zst
```
## Citation
If you use this dataset or the associated code, please cite:
```bibtex
@misc{mazza2026surgicalmoe,
  title  = {Supervised Mixture-of-Experts for Surgical Grasping and Retraction},
  author = {Mazza, Lorenzo and Rodriguez, Ariel and Younis, Rayan and Lelis, Martin and Hellig, Ortrun and Li, Chenpan and Bodenstedt, Sebastian and Wagner, Martin and Speidel, Stefanie},
  year   = {2026},
  url    = {https://surgical-moe-project.github.io/rss-paper/}
}
```
## License
This dataset release is distributed under the MIT License.
## Intended Use
This dataset is intended for research in surgical robotics, imitation learning, autonomous assistance, and endoscopic vision-based robot control. It is not a clinically validated system and should not be used for patient care or clinical decision-making.