---
license: cc-by-4.0
task_categories:
- robotics
- image-to-image
language:
- en
tags:
- robotics
- navigation
- imitation-learning
- vision-language-action
- isaac-sim
- nova-carter
- differential-drive
- language-conditioned
- behavior-cloning
- simulation
- object-approach
- depth
- segmentation
pretty_name: MiniVLA-Nav v1
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
source_datasets:
- original
---

# MiniVLA-Nav v1

**A Multi-Scene Simulation Dataset for Language-Conditioned Robot Navigation**

<!-- > Ali Al-Bustami · Department of Robotics Engineering (Thesis Project) -->

---

## Demo

<video src="assets/montage_all_scenes.mp4" controls width="100%">All-scenes montage</video>

*Nova Carter navigating to named objects across all four Isaac Sim environments.*

---

## Dataset Summary

MiniVLA-Nav v1 is a simulation dataset for the **Language-Conditioned Object Approach (LCOA)** task: given a short natural-language instruction, an NVIDIA Nova Carter differential-drive robot must navigate to the named object and stop within 1 m of it. Data were collected inside four photorealistic NVIDIA Isaac Sim 5.1 environments (Office, Hospital, Full Warehouse, Warehouse with Multiple Shelves).

Each of the **425 episodes** (current snapshot; full collection budget: 2,000) pairs a language instruction with synchronized per-timestep multimodal observations:

| Modality | Resolution / Shape | Format |
|---|---|---|
| Front RGB | 640 × 640 × 3, uint8 | PNG |
| Metric depth | 640 × 640, float32 (metres) | NumPy |
| Instance segmentation | 640 × 640, uint16 | PNG |
| Continuous actions (v, ω) | T × 2, float32 | NumPy |
| Tokenized actions (7 × 7 bins) | T × 2, int16 | NumPy |
| Robot poses (x, y, z, qw, qx, qy, qz) | T × 7, float32 | NumPy |

All sensors operate at **60 Hz** (physics Δt = 1/60 s).

---

## Supported Tasks

- **Language-Conditioned Object Approach (LCOA)** — given a natural-language goal and front RGB-D observations, predict continuous (v, ω) commands or discrete 7 × 7 action tokens that drive a differential-drive robot to within 1 m of the named object.
- **Behavior Cloning / Imitation Learning** — dense per-step expert labels enable direct supervised training.
- **OOD Generalization** — structured evaluation splits test robustness to paraphrased instruction templates and held-out object categories.

---

## Multimodal Observations

Each timestep provides synchronized RGB, metric depth (float32, metres), and instance segmentation. The composites below show RGB (left) and a depth colormap (right) from a mid-episode step.

| Office | Hospital |
|:---:|:---:|
| ![RGB+D office](assets/rgbd_office.png) | ![RGB+D hospital](assets/rgbd_hospital.png) |

| Full Warehouse | Warehouse (Multi-Shelf) |
|:---:|:---:|
| ![RGB+D full warehouse](assets/rgbd_full_warehouse.png) | ![RGB+D warehouse shelves](assets/rgbd_warehouse_shelves.png) |

**Depth strip** — consecutive frames from an office episode, showing depth (metres) as the robot approaches the target:

![Depth strip office](assets/depth_strip_office.png)
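
Composites like these can be reproduced directly from the raw files. A minimal sketch, where matplotlib and the `turbo` colormap are illustrative choices and `ep_000000` is a hypothetical episode ID:

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

ep = "v1/episodes/ep_000000"  # hypothetical episode ID
rgb = np.array(Image.open(f"{ep}/rgb_front/0.png"))
depth = np.load(f"{ep}/depth_front/0.npy")  # float32, metres

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 5))
ax0.imshow(rgb)
ax0.set_title("RGB")
im = ax1.imshow(depth, cmap="turbo")  # colormap choice is arbitrary
ax1.set_title("Depth (m)")
fig.colorbar(im, ax=ax1, fraction=0.046)
for ax in (ax0, ax1):
    ax.axis("off")
plt.savefig("rgbd_composite.png", bbox_inches="tight")
```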

---

## Scenes

Four photorealistic Isaac Sim environments, each with curated seen/held-out object categories:

### Office
![Contact sheet — Office](assets/contact_office.png)

### Hospital
![Contact sheet — Hospital](assets/contact_hospital.png)

### Full Warehouse
![Contact sheet — Full Warehouse](assets/contact_full_warehouse.png)

### Warehouse (Multiple Shelves)
![Contact sheet — Warehouse Multi-Shelf](assets/contact_warehouse_multiple_shelves.png)

| Scene | Episodes | Seen Categories | Held-out Categories |
|---|---|---|---|
| Office | 281 | chair, sofa, table, monitor, plant, trash\_can | fire\_extinguisher, whiteboard |
| Hospital | 22 | chair, trash\_can | fire\_extinguisher, whiteboard |
| Full Warehouse | 54 | shelf, rack | barrel |
| Warehouse (Multi-Shelf) | 68 | shelf, rack | barrel |

---

## Object Categories

12 categories total — 9 seen during training, 3 held out for OOD evaluation.

**Category samples** (barrel is held-out):

| chair | monitor | table | trash can |
|:---:|:---:|:---:|:---:|
| ![chair](assets/sample_chair.png) | ![monitor](assets/sample_monitor.png) | ![table](assets/sample_table.png) | ![trash can](assets/sample_trash_can.png) |

| rack | crate | shelf | barrel (OOD) |
|:---:|:---:|:---:|:---:|
| ![rack](assets/sample_rack.png) | ![crate](assets/sample_crate.png) | ![shelf](assets/sample_shelf.png) | ![barrel](assets/sample_barrel.png) |

**Held-out (OOD):** fire\_extinguisher, whiteboard, barrel — these appear only in the `test_ood_obj` split.

---

## Object Category Demo

<video src="assets/montage_office_categories.mp4" controls width="100%">Office categories montage</video>

*All object categories navigated to in the Office scene.*

---

## Dataset Structure

```
v1/
├── dataset_meta.json                        # Global metadata (scenes, camera, action space, splits)
├── assets/                                  # README visual assets
├── splits/
│   ├── train_id.txt                         # 261 episode IDs
│   ├── val_id.txt                           # 41 episode IDs
│   ├── test_id.txt                          # 50 episode IDs
│   ├── test_ood_obj.txt                     # 37 episode IDs (held-out object categories)
│   └── test_ood_lang.txt                    # 36 episode IDs (paraphrase-OOD templates)
├── targets_office.yaml                      # Per-scene object catalogs (3-D centroids)
├── targets_hospital.yaml
├── targets_full_warehouse.yaml
├── targets_warehouse_multiple_shelves.yaml
└── episodes/
    └── ep_{N:06d}/
        ├── meta.json                        # Full episode metadata
        ├── rgb_front/{t}.png                # 640×640 RGB frame at step t
        ├── depth_front/{t}.npy              # 640×640 float32 depth (m) at step t
        ├── seg_front/{t}.png                # 640×640 uint16 instance segmentation at step t
        ├── actions_continuous.npy           # (T, 2) float32 — (v_t, ω_t)
        ├── actions_tokens.npy               # (T, 2) int16 — discretized 7×7 tokens
        └── poses.npy                        # (T, 7) float32 — (x, y, z, qw, qx, qy, qz)
```

### Episode Metadata (`meta.json`)

Each episode's sidecar JSON records the full configuration:

```json
{
  "episode_id": "ep_000321",
  "scene_id": "full_warehouse.usd",
  "goal": {
    "target_category": "crate",
    "target_id": "crate_038",
    "goal_position_xyz_m": [-15.08, 10.77, 2.93]
  },
  "instruction": {
    "text": "Go to the crate.",
    "template_id": "train_01"
  },
  "spawn": { "tier": "mid", "spawn_to_target_dist_m": 3.574 },
  "rollout": {
    "num_steps": 219,
    "terminated_by": "success",
    "success": true,
    "collision_count": 0,
    "final_ne_m": 0.966,
    "trajectory_length_m": 2.61
  }
}
```
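
Because every episode carries this sidecar, dataset-level bookkeeping reduces to a scan over the `meta.json` files. A minimal sketch using only the fields shown above:

```python
import json
from collections import Counter
from pathlib import Path

root = Path("v1")

# Tally episodes per scene and per target category from the sidecar files.
scenes, categories = Counter(), Counter()
for meta_path in sorted(root.glob("episodes/ep_*/meta.json")):
    meta = json.loads(meta_path.read_text())
    scenes[meta["scene_id"]] += 1
    categories[meta["goal"]["target_category"]] += 1

print(scenes.most_common())
print(categories.most_common())
```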

---

## Splits

| Split | Episodes | Description |
|---|---|---|
| `train_id` | 261 | Seen objects, seen instruction templates |
| `val_id` | 41 | Seen objects, seen templates (validation) |
| `test_id` | 50 | Seen objects, seen templates (held-out test) |
| `test_ood_obj` | 37 | **Held-out object categories** (fire extinguisher, whiteboard, barrel) |
| `test_ood_lang` | 36 | **Paraphrase-OOD** instruction templates |
| **Total** | **425** | Current snapshot; full collection budget: 2,000 |

---

## Language Instructions

Instructions are generated from slot-fill templates with `{object}` and `{color}` placeholders.

**18 training templates** (T1–T18), examples:
- "Go to the {object}."
- "Drive to the {object} and stop."
- "Approach the {object}."
- "Navigate to the {object}."
- "Your destination is the {object}."

**12 paraphrase-OOD templates** (O1–O12), examples:
- "Make your way to the {object}."
- "Proceed to the {object}."
- "Find the {object} and come to a stop."
- "Close in on the {object}."

> **Note:** Color-slot templates are suppressed in v1 — all targets carry `color=unknown` because USD assets do not expose material-color attributes through a standard prim API. Active pool: 13 training + 10 paraphrase-OOD templates.
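
Instantiating a template is a plain slot substitution. A minimal sketch; the dictionary keys follow the T1–T18 naming, and the underscore-to-space handling is an assumption about how stored category names map into instruction text:

```python
# Slot-fill instantiation: one template + one target category -> one instruction.
TRAIN_TEMPLATES = {
    "T1": "Go to the {object}.",
    "T2": "Drive to the {object} and stop.",
    "T3": "Approach the {object}.",
}

def instantiate(template_id: str, category: str) -> str:
    # Categories are stored with underscores (e.g. "trash_can");
    # instructions use natural spacing.
    return TRAIN_TEMPLATES[template_id].format(object=category.replace("_", " "))

print(instantiate("T2", "trash_can"))  # Drive to the trash can and stop.
```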

---

## Task Definition

**LCOA formulation:** Given an instruction $\ell$ and observations $o_t = (I_t^\text{RGB}, D_t)$, output actions $a_t = (v_t, \omega_t)$ such that the robot stops within $r_\text{success} = 1.0$ m of the target object centroid.

**Action space:**
- Continuous: $(v, \omega) \in [0, 1]$ m/s × $[-1.5, 1.5]$ rad/s
- Tokenized: each dimension quantized to 7 uniform bins → 49-token vocabulary (see the sketch below)
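
A minimal sketch of this tokenization, assuming uniform bins over the stated ranges with bin centres used for reconstruction; check `dataset_meta.json` for the authoritative bin definition:

```python
import numpy as np

N_BINS = 7
V_RANGE = (0.0, 1.0)    # v in [0, 1] m/s
W_RANGE = (-1.5, 1.5)   # omega in [-1.5, 1.5] rad/s

def quantize(x: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Map values in [lo, hi] to integer bin indices 0..N_BINS-1."""
    frac = (np.clip(x, lo, hi) - lo) / (hi - lo)
    return np.minimum((frac * N_BINS).astype(np.int16), N_BINS - 1)

def tokenize(actions: np.ndarray) -> np.ndarray:
    """(T, 2) float32 (v, omega) -> (T, 2) int16 tokens."""
    return np.stack([quantize(actions[:, 0], *V_RANGE),
                     quantize(actions[:, 1], *W_RANGE)], axis=1)

def detokenize(tokens: np.ndarray) -> np.ndarray:
    """Inverse map: bin index -> bin centre."""
    def centres(lo, hi):
        return lo + (np.arange(N_BINS) + 0.5) * (hi - lo) / N_BINS
    return np.stack([centres(*V_RANGE)[tokens[:, 0]],
                     centres(*W_RANGE)[tokens[:, 1]]], axis=1)

# A joint 49-way vocabulary index, if needed, is v_tok * 7 + w_tok.
```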

**Episode termination:**
- **Success** — within 1 m of the goal and stationary for ≥ 5 consecutive steps
- **Collision** — stall detected (no forward progress for ≥ 16 steps near an obstacle)
- **Timeout** — 1,000 steps reached without success

Only successful episodes are retained in the dataset.
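
The success criterion translates directly into code. A sketch, assuming a small speed threshold as the stationarity test (the exact threshold used during collection is not specified here):

```python
import numpy as np

def is_success(dists: np.ndarray, speeds: np.ndarray,
               radius: float = 1.0, hold: int = 5,
               eps: float = 1e-3) -> bool:
    """dists: (T,) distance to goal; speeds: (T,) |v| per step."""
    near = dists <= radius
    still = speeds <= eps          # assumed stationarity test
    ok = near & still
    # Success requires `hold` consecutive qualifying steps.
    run = 0
    for flag in ok:
        run = run + 1 if flag else 0
        if run >= hold:
            return True
    return False
```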

---

## Spawn Tiers

Trajectory diversity is ensured by sampling spawn points from three distance tiers:

| Tier | Weight | Radius |
|---|---|---|
| Near | 30% | 1.5–3.5 m from target |
| Mid | 40% | 3.5–7.0 m from target |
| Far | 30% | Global curated floor points |

The Pearson correlation between spawn distance and trajectory length is **r = 0.94**.
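
A sketch of how such tiered sampling can be reproduced; the sampler below is illustrative, with the Far tier drawing from a caller-supplied list of curated floor-point distances:

```python
import numpy as np

rng = np.random.default_rng(42)  # dataset seed, per Collection Setup

def sample_spawn_distance(far_dists: np.ndarray) -> tuple[str, float]:
    """Pick a tier by weight, then a spawn-to-target distance within it."""
    tier = rng.choice(["near", "mid", "far"], p=[0.30, 0.40, 0.30])
    if tier == "near":
        return tier, float(rng.uniform(1.5, 3.5))
    if tier == "mid":
        return tier, float(rng.uniform(3.5, 7.0))
    return tier, float(rng.choice(far_dists))  # curated floor points
```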

---

## Expert Controller

The data-collection expert is a proportional controller driven by pixel-level target visibility in the instance segmentation mask (sketched below):

- **Target visible (≥ 32 px):** angular correction from the mask centroid column + depth-based speed
- **Target not visible:** bearing-only proportional law from the known goal position
- **Obstacle avoidance:** speed clamped when depth in the central foreground crop < 0.25 m
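
A minimal sketch of the visible-target branch; the gains, field-of-view value, depth-to-speed mapping, and sign convention are illustrative assumptions, not the recorded controller's constants:

```python
import numpy as np

K_ANG = 1.5      # assumed proportional gain on bearing error
FOV_RAD = 1.2    # assumed horizontal field of view (rad)

def expert_step(seg: np.ndarray, depth: np.ndarray, target_id: int):
    """One control step when the target is visible: returns (v, omega) or None."""
    ys, xs = np.nonzero(seg == target_id)
    if xs.size < 32:                      # visibility threshold from above
        return None                       # fall back to the bearing-only law
    # Angular correction from the mask centroid column (sign convention assumed).
    err = (xs.mean() / seg.shape[1] - 0.5) * FOV_RAD
    omega = float(np.clip(-K_ANG * err, -1.5, 1.5))
    # Depth-based speed: slow down as the target gets close.
    target_depth = float(np.median(depth[ys, xs]))
    v = float(np.clip(0.5 * (target_depth - 1.0), 0.0, 1.0))
    # Obstacle clamp: stop if anything in the central foreground crop is too close.
    h, w = depth.shape
    crop = depth[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    if np.nanmin(crop) < 0.25:
        v = 0.0
    return v, omega
```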

---

## Rollout Statistics

| Split | N | Mean NE (m) | Mean TL (m) | Mean Steps |
|---|---|---|---|---|
| train\_id | 261 | 0.967 | 2.75 | 197.6 |
| val\_id | 41 | 0.967 | 2.83 | 205.6 |
| test\_id | 50 | 0.966 | 2.74 | 190.6 |
| test\_ood\_obj | 37 | 0.967 | 2.38 | 174.7 |
| test\_ood\_lang | 36 | 0.967 | 3.07 | 229.7 |

NE = final navigation error (distance to goal at termination); TL = trajectory length.
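
These columns can be recomputed from the sidecar metadata. A sketch using the `meta.json` fields shown earlier:

```python
import json
import numpy as np
from pathlib import Path

root = Path("v1")

def split_stats(split: str):
    """Recompute (N, mean NE, mean TL, mean steps) for one split."""
    ids = (root / "splits" / f"{split}.txt").read_text().split()
    ne, tl, steps = [], [], []
    for ep_id in ids:
        r = json.loads((root / "episodes" / ep_id / "meta.json").read_text())["rollout"]
        ne.append(r["final_ne_m"])
        tl.append(r["trajectory_length_m"])
        steps.append(r["num_steps"])
    return len(ids), np.mean(ne), np.mean(tl), np.mean(steps)

print(split_stats("train_id"))  # expect roughly (261, 0.967, 2.75, 197.6)
```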

---

## Collection Setup

| Property | Value |
|---|---|
| Simulator | NVIDIA Isaac Sim 5.1.0-rc.19 |
| Robot | NVIDIA Nova Carter (differential-drive) |
| Camera | front\_hawk/right stereo camera |
| Physics rate | 60 Hz (Δt = 1/60 s) |
| Image resolution | 640 × 640 px |
| Random seed | 42 |
| Generation date | 2026-04-22 |

---

## Loading the Dataset

```python
import json
import numpy as np
from pathlib import Path
from PIL import Image

root = Path("v1")

# Load a split file (one episode ID per line).
with open(root / "splits" / "train_id.txt") as f:
    train_ids = [line.strip() for line in f]

# Load an episode.
ep_dir = root / "episodes" / train_ids[0]
meta = json.loads((ep_dir / "meta.json").read_text())

instruction = meta["instruction"]["text"]              # e.g. "Go to the monitor."
actions = np.load(ep_dir / "actions_continuous.npy")   # (T, 2) float32
tokens = np.load(ep_dir / "actions_tokens.npy")        # (T, 2) int16
poses = np.load(ep_dir / "poses.npy")                  # (T, 7) float32

# Load frame t=0.
rgb = np.array(Image.open(ep_dir / "rgb_front" / "0.png"))    # (640, 640, 3)
depth = np.load(ep_dir / "depth_front" / "0.npy")             # (640, 640), metres
seg = np.array(Image.open(ep_dir / "seg_front" / "0.png"))    # (640, 640), instance IDs
```
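
Building supervised training examples then amounts to pairing frame *t* with action row *t*. A minimal per-step iterator, assuming one frame per action row (the tuple layout is one reasonable choice, not a prescribed format):

```python
import json
import numpy as np
from pathlib import Path
from PIL import Image

def iter_steps(ep_dir: Path):
    """Yield (instruction, rgb, depth, action) tuples for one episode."""
    meta = json.loads((ep_dir / "meta.json").read_text())
    text = meta["instruction"]["text"]
    acts = np.load(ep_dir / "actions_continuous.npy")      # (T, 2)
    for t in range(acts.shape[0]):
        rgb_t = np.array(Image.open(ep_dir / "rgb_front" / f"{t}.png"))
        depth_t = np.load(ep_dir / "depth_front" / f"{t}.npy")
        yield text, rgb_t, depth_t, acts[t]
```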

---

## Citation

If you use MiniVLA-Nav v1 in your research, please cite:

```bibtex
@misc{albustami2026minivlanav,
  title  = {{MiniVLA-Nav v1}: A Multi-Scene Simulation Dataset for
            Language-Conditioned Robot Navigation},
  author = {Al-Bustami, Ali},
  year   = {2026},
  note   = {Thesis project, Department of Robotics Engineering}
}
```

---

## License

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

---

## Contact

Ali Al-Bustami — alialbustami@gmail.com