---
license: apache-2.0
task_categories:
- robotics
tags:
- lerobot
- underwater-robotics
- simulation
- vla
- manipulation
- navigation
pretty_name: USIM
size_categories:
- 100K<n<1M
---

# USIM: Underwater Simulation Dataset for Vision-Language-Action Models

[![Paper](https://img.shields.io/badge/arXiv-2510.07869-B31B1B.svg)](https://arxiv.org/abs/2510.07869)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

## TL;DR

USIM is a large-scale underwater robot manipulation and navigation dataset collected in the [Stonefish](https://github.com/patrykcieslak/stonefish) physics simulator. It contains **2,275 episodes** (1,750 train + 525 test) across **20 tasks** in 9 underwater scenarios, formatted in the [LeRobot v2.1](https://github.com/huggingface/lerobot) format with dual-camera video recordings.
## Dataset Description

USIM is introduced in the paper **"USIM and U0: A Vision-Language-Action Dataset and Model for General Underwater Robots"**. It is designed to train and evaluate Vision-Language-Action (VLA) models for autonomous underwater robots operating in diverse subsea environments.

### Key Features

- **Diverse underwater scenarios**: shallow ocean, underwater factory, industrial pool, subsea pipeline, shipwreck sites, lake environments, and open sea
- **Dual-camera observation**: ego (front-facing) and wrist (end-effector) camera views at 240×320 resolution
- **Rich proprioceptive state**: 29-dimensional state vector including joint positions, thruster PWM, velocities, IMU data, DVL, and pressure readings
- **20 tasks** spanning grasping, navigation, tracking, and transporting

### Robot Platform

The robot is a BlueROV2 underwater vehicle equipped with a 4-DOF robotic arm and a scaled-down Robotiq gripper, simulated in the [Stonefish](https://github.com/patrykcieslak/stonefish) physics engine.
## Dataset Structure

This repository contains two independent LeRobot v2.1 datasets:

```
usim/
├── train/                 # Training split (1,750 episodes)
│   ├── meta/
│   │   ├── info.json
│   │   ├── tasks.jsonl
│   │   ├── episodes.jsonl
│   │   ├── episodes_stats.jsonl
│   │   └── modality.json
│   ├── data/
│   │   ├── chunk-000/
│   │   └── chunk-001/
│   └── videos/
│       ├── chunk-000/
│       │   ├── observation.images.ego/
│       │   └── observation.images.wrist/
│       └── chunk-001/
│           ├── observation.images.ego/
│           └── observation.images.wrist/
├── test/                  # Test split (525 episodes)
│   ├── meta/
│   ├── data/
│   │   └── chunk-000/
│   └── videos/
│       └── chunk-000/
│           ├── observation.images.ego/
│           └── observation.images.wrist/
└── README.md
```
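The `meta/` files are plain JSON/JSONL, so they can be inspected without installing LeRobot. A minimal sketch of parsing `tasks.jsonl`, assuming the standard v2.1 schema (`{"task_index": …, "task": …}` per line); the inline sample below is illustrative, not copied from the actual file:

```python
import json

# Illustrative stand-in for the contents of train/meta/tasks.jsonl;
# the task strings come from the task tables below.
sample_tasks_jsonl = """\
{"task_index": 0, "task": "Pick up the pipe"}
{"task_index": 1, "task": "Pick up the red cylinder"}
"""

# Each line of a .jsonl file is an independent JSON object.
tasks = {}
for line in sample_tasks_jsonl.splitlines():
    record = json.loads(line)
    tasks[record["task_index"]] = record["task"]

print(tasks[0])  # -> Pick up the pipe
```

The same pattern works for `episodes.jsonl` and `episodes_stats.jsonl`, which are also one JSON object per line.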
## Supported Tasks

The dataset covers 20 tasks with 9 distinct language instructions, grouped into 4 categories:

### Grasping
| Task Code | Instruction | Scenario |
|-----------|-------------|----------|
| pick_pipe0_shallow | Pick up the pipe | Shallow ocean |
| pick_pipe1_shallow | Pick up the pipe | Shallow ocean |
| pick_pipe0_factory | Pick up the pipe | Underwater factory |
| pick_pipe1_factory | Pick up the pipe | Underwater factory |
| pick_red_shallow | Pick up the red cylinder | Shallow ocean |
| pick_redx_shallow | Pick up the red cylinder | Shallow ocean (multi-blue distractors) |
| pick_red_factory | Pick up the red cylinder | Underwater factory |
| pick_redx_factory | Pick up the red cylinder | Underwater factory (multi-blue distractors) |
| pick_blue_shallow | Pick up the blue cylinder | Shallow ocean |
| pick_bluex_shallow | Pick up the blue cylinder | Shallow ocean (multi-red distractors) |
| pick_blue_factory | Pick up the blue cylinder | Underwater factory |
| pick_bluex_factory | Pick up the blue cylinder | Underwater factory (multi-red distractors) |

### Navigation
| Task Code | Instruction | Scenario |
|-----------|-------------|----------|
| goto_charge_station | Go to the charge station | Lake with equipment |
| goto_water_tower | Go to the water tower | Lake with rocks |
| scan_ship_modern | Scan the ship | Modern shipwreck |
| scan_ship_ancient | Scan the ship | Ancient shipwreck |
| inspect_pipeline_pool | Inspect the pipeline | Industrial pool with pipelines |
| inspect_pipeline_sea | Inspect the pipeline | Subsea pipeline |

### Tracking
| Task Code | Instruction | Scenario |
|-----------|-------------|----------|
| follow_boat | Follow the boat | Open sea |

### Transporting
| Task Code | Instruction | Scenario |
|-----------|-------------|----------|
| transfer_red_shallow | Pick up the red cylinder and transfer it to the box | Shallow ocean |
## Data Statistics

### Overall

| Metric | Train | Test | Total |
|--------|-------|------|-------|
| Episodes | 1,750 | 525 | 2,275 |
| Frames | 696,990 | 208,605 | 905,595 |
| Videos | 3,500 | 1,050 | 4,550 |

### Per-Task Breakdown

| Task | Train Episodes | Train Frames | Test Episodes | Test Frames |
|------|---------------|--------------|---------------|-------------|
| follow_boat | 50 | 18,061 | 15 | 5,026 |
| goto_charge_station | 100 | 13,371 | 30 | 4,437 |
| goto_water_tower | 100 | 29,505 | 30 | 9,084 |
| inspect_pipeline_pool | 50 | 29,609 | 15 | 8,828 |
| inspect_pipeline_sea | 50 | 33,884 | 15 | 10,156 |
| pick_blue_factory | 100 | 38,038 | 30 | 11,857 |
| pick_blue_shallow | 100 | 35,953 | 30 | 11,371 |
| pick_bluex_factory | 100 | 38,461 | 30 | 11,505 |
| pick_bluex_shallow | 100 | 38,486 | 30 | 10,843 |
| pick_pipe0_factory | 100 | 38,683 | 30 | 10,942 |
| pick_pipe0_shallow | 100 | 37,205 | 30 | 11,411 |
| pick_pipe1_factory | 100 | 36,997 | 30 | 11,113 |
| pick_pipe1_shallow | 100 | 37,025 | 30 | 10,963 |
| pick_red_factory | 100 | 37,829 | 30 | 11,645 |
| pick_red_shallow | 100 | 36,914 | 30 | 10,990 |
| pick_redx_factory | 100 | 38,455 | 30 | 11,433 |
| pick_redx_shallow | 100 | 36,428 | 30 | 10,398 |
| scan_ship_ancient | 50 | 37,046 | 15 | 11,008 |
| scan_ship_modern | 50 | 33,868 | 15 | 10,285 |
| transfer_red_shallow | 100 | 51,172 | 30 | 15,310 |
| **Total** | **1,750** | **696,990** | **525** | **208,605** |
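As a quick consistency check, the per-task rows sum to the overall train totals (the counts below are copied from the table, in row order):

```python
# Train episode and frame counts, one entry per task row above.
train_episodes = [50, 100, 100, 50, 50, 100, 100, 100, 100, 100,
                  100, 100, 100, 100, 100, 100, 100, 50, 50, 100]
train_frames = [18061, 13371, 29505, 29609, 33884, 38038, 35953, 38461,
                38486, 38683, 37205, 36997, 37025, 37829, 36914, 38455,
                36428, 37046, 33868, 51172]

assert sum(train_episodes) == 1750    # matches the Overall table
assert sum(train_frames) == 696990    # matches the Overall table

avg_frames = sum(train_frames) / sum(train_episodes)  # average episode length
```

At 10 FPS, the average train episode (~398 frames) works out to roughly 40 seconds.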
## Data Schema

Both `train/` and `test/` follow the [LeRobot v2.1](https://github.com/huggingface/lerobot) format. Each episode is stored as a Parquet file with the following features:

### Observation

| Feature | Dtype | Shape | Description |
|---------|-------|-------|-------------|
| `observation.images.ego` | video | (240, 320, 3) | Front-facing ego camera RGB video |
| `observation.images.wrist` | video | (240, 320, 3) | Wrist-mounted end-effector camera RGB video |
| `observation.state` | float32 | (29,) | Robot proprioceptive state vector |

#### State Vector Breakdown (29-dim)

Index ranges are half-open, `[start, end)`:

| Component | Indices | Dim | Description |
|-----------|---------|-----|-------------|
| `joint_pos` | 0–5 | 5 | Arm joint positions |
| `pwm` | 5–13 | 8 | Thruster PWM values |
| `joint_v` | 13–18 | 5 | Arm joint velocities |
| `dvl_v` | 18–21 | 3 | Doppler Velocity Log velocity |
| `imu_av` | 21–24 | 3 | IMU angular velocity |
| `imu_la` | 24–27 | 3 | IMU linear acceleration |
| `pressure` | 27–28 | 1 | Pressure sensor reading |
| `dvl_h` | 28–29 | 1 | DVL altitude |

The dimensions sum to 29 (5 + 8 + 5 + 3 + 3 + 3 + 1 + 1).
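Given the half-open ranges above, unpacking a state vector is a matter of slicing. A minimal sketch, using a plain Python list as a stand-in for the real float32 array (`STATE_LAYOUT` and `unpack_state` are illustrative names, not part of the dataset tooling):

```python
# Boundaries of each component in the 29-dim state vector (half-open ranges).
STATE_LAYOUT = {
    "joint_pos": (0, 5),    # arm joint positions
    "pwm": (5, 13),         # thruster PWM values
    "joint_v": (13, 18),    # arm joint velocities
    "dvl_v": (18, 21),      # DVL velocity
    "imu_av": (21, 24),     # IMU angular velocity
    "imu_la": (24, 27),     # IMU linear acceleration
    "pressure": (27, 28),   # pressure sensor reading
    "dvl_h": (28, 29),      # DVL altitude
}

def unpack_state(state):
    """Split a flat 29-dim state into named components."""
    assert len(state) == 29
    return {name: state[lo:hi] for name, (lo, hi) in STATE_LAYOUT.items()}

parts = unpack_state(list(range(29)))  # dummy state for illustration
print(len(parts["pwm"]))  # -> 8
```

The 13-dim `action` vector can be unpacked the same way with its own layout table.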
### Action

| Feature | Dtype | Shape | Description |
|---------|-------|-------|-------------|
| `action` | float32 | (13,) | Robot action command |

#### Action Breakdown (13-dim)

Index ranges are half-open, `[start, end)`:

| Component | Indices | Dim | Description |
|-----------|---------|-----|-------------|
| `joint_pos` | 0–5 | 5 | Arm target joint positions |
| `pwm` | 5–13 | 8 | Thruster PWM commands |
### Additional Features

| Feature | Dtype | Shape | Description |
|---------|-------|-------|-------------|
| `target_pos` | float32 | (6,) | Target pose in robot local frame (x, y, z, roll, pitch, yaw) |
| `timestamp` | float32 | (1,) | Frame timestamp in seconds |
| `frame_index` | int64 | (1,) | Frame index within episode |
| `episode_index` | int64 | (1,) | Episode index |
| `index` | int64 | (1,) | Global frame index |
| `task_index` | int64 | (1,) | Task index (maps to `tasks.jsonl`) |

### Video Metadata

| Property | Value |
|----------|-------|
| Resolution | 240 × 320 |
| Codec | AV1 |
| Pixel Format | YUV420P |
| FPS | 10 |
| Channels | 3 (RGB) |
| Audio | No |
## Loading the Dataset

### Using LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load the training split
train_dataset = LeRobotDataset("Vincent2025hello/usim", root="train")

# Load the test split
test_dataset = LeRobotDataset("Vincent2025hello/usim", root="test")

# Iterate over frames (a LeRobot dataset yields one frame per item, as torch tensors)
for frame in train_dataset:
    ego_image = frame["observation.images.ego"]      # (3, 240, 320) image tensor
    wrist_image = frame["observation.images.wrist"]  # (3, 240, 320) image tensor
    state = frame["observation.state"]               # (29,) state tensor
    action = frame["action"]                         # (13,) action tensor
    task_index = int(frame["task_index"])
    print(f"Task: {train_dataset.meta.tasks[task_index]}")
```
### Using Hugging Face Datasets

```python
from datasets import load_dataset

# Load the tabular features from the repository
# (camera streams are stored separately as MP4 files under videos/)
dataset = load_dataset("Vincent2025hello/usim")
```
## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{gu2025usimu0visionlanguageactiondataset,
  title={USIM and U0: A Vision-Language-Action Dataset and Model for General Underwater Robots},
  author={Junwen Gu and Zhiheng Wu and Pengxuan Si and Shuang Qiu and Yukai Feng and Luoyang Sun and Laien Luo and Lianyi Yu and Jian Wang and Zhengxing Wu},
  year={2025},
  eprint={2510.07869},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2510.07869},
}
```
## License

This dataset is released under the [Apache 2.0 License](https://opensource.org/licenses/Apache-2.0).

## Acknowledgements

- [Stonefish](https://github.com/patrykcieslak/stonefish) — Physics-based underwater simulator
- [stonefish_ros](https://github.com/patrykcieslak/stonefish_ros) — ROS interface for Stonefish
- [LeRobot](https://github.com/huggingface/lerobot) — Dataset format and loading utilities