alibustami committed
Commit 05932a8 · verified · 1 Parent(s): 75c7cfd

Fix video embeds: use <video> tags + correct repo ID

Files changed (1)
  1. README.md +135 -252
README.md CHANGED
@@ -2,347 +2,236 @@
  license: cc-by-4.0
  task_categories:
  - robotics
- - image-to-image
  language:
  - en
  tags:
  - robotics
  - navigation
- - imitation-learning
- - vision-language-action
  - isaac-sim
  - nova-carter
- - differential-drive
  - language-conditioned
- - behavior-cloning
- - simulation
- - object-approach
- - depth
- - segmentation
- pretty_name: MiniVLA-Nav v1
  size_categories:
  - 1K<n<10K
- multilinguality:
- - monolingual
- source_datasets:
- - original
  ---
 
  # MiniVLA-Nav v1
 
- **A Multi-Scene Simulation Dataset for Language-Conditioned Robot Navigation**
-
- <!-- > Ali Al-Bustami · Department of Robotics Engineering (Thesis Project) -->
 
  ---
 
  ## Demo
 
- <video src="assets/montage_all_scenes.mp4" controls width="100%">All-scenes montage</video>
 
- *Nova Carter navigating to named objects across all four Isaac Sim environments.*
 
- ---
 
- ## Dataset Summary
 
- MiniVLA-Nav v1 is a simulation dataset for the **Language-Conditioned Object Approach (LCOA)** task: given a short natural-language instruction, an NVIDIA Nova Carter differential-drive robot must navigate to the named object and stop within 1 m. Data were collected inside four photorealistic NVIDIA Isaac Sim 5.1 environments (Office, Hospital, Full Warehouse, Warehouse with Multiple Shelves).
 
- Each of the **1,174 episodes** pairs a language instruction with per-timestep, synchronized multimodal observations:
 
- | Modality | Resolution / Shape | Format |
- |---|---|---|
- | Front RGB | 640 × 640 × 3, uint8 | PNG |
- | Metric depth | 640 × 640, float32 (metres) | NumPy |
- | Instance segmentation | 640 × 640, uint16 | PNG |
- | Continuous actions (v, ω) | T × 2, float32 | NumPy |
- | Tokenized actions (7×7) | T × 2, int16 | NumPy |
- | Robot poses (x, y, z, qw, qx, qy, qz) | T × 7, float32 | NumPy |
-
- All sensors operate at **60 Hz** (physics Δt = 1/60 s).
 
  ---
 
- ## Supported Tasks
 
- - **Language-Conditioned Object Approach (LCOA)** — given a natural-language goal and front RGB-D observations, predict continuous (v, ω) or discrete 7×7 action tokens to drive a differential-drive robot within 1 m of the named object.
- - **Behaviour Cloning / Imitation Learning** — dense per-step expert labels enable direct supervised training.
- - **OOD Generalisation** — structured evaluation splits test template-paraphrase and object-category out-of-distribution robustness.
 
  ---
 
- ## Multimodal Observations
-
- Each timestep provides synchronized RGB, metric depth (float32, metres), and instance segmentation. The composites below show RGB (left) and depth colormap (right) from a mid-episode step.
-
- | Office | Hospital |
- |:---:|:---:|
- | ![RGB+D office](assets/rgbd_office.png) | ![RGB+D hospital](assets/rgbd_hospital.png) |
-
- | Full Warehouse | Warehouse (Multi-Shelf) |
- |:---:|:---:|
- | ![RGB+D full warehouse](assets/rgbd_full_warehouse.png) | ![RGB+D warehouse shelves](assets/rgbd_warehouse_shelves.png) |
-
- **Depth strip** — consecutive frames from an office episode, showing depth (metres) as the robot approaches the target:
 
- ![Depth strip office](assets/depth_strip_office.png)
 
  ---
 
- ## Scenes
-
- Four photorealistic Isaac Sim environments, each with curated seen/held-out object categories:
-
- ### Office
- ![Contact sheet — Office](assets/contact_office.png)
 
- ### Hospital
- ![Contact sheet — Hospital](assets/contact_hospital.png)
 
- ### Full Warehouse
- ![Contact sheet — Full Warehouse](assets/contact_full_warehouse.png)
 
- ### Warehouse (Multiple Shelves)
- ![Contact sheet — Warehouse Multi-Shelf](assets/contact_warehouse_multiple_shelves.png)
 
- | Scene | Episodes | Seen Categories | Held-out Categories |
- |---|---|---|---|
- | Office | 281 | chair, sofa, table, monitor, plant, trash\_can | fire\_extinguisher, whiteboard |
- | Hospital | 22 | chair, trash\_can | fire\_extinguisher, whiteboard |
- | Full Warehouse | 54 | shelf, rack | barrel |
- | Warehouse (Multi-Shelf) | 68 | shelf, rack | barrel |
 
  ---
 
- ## Object Categories
 
- 12 categories total — 9 seen during training, 3 held out for OOD evaluation.
-
- **Seen categories:**
 
- | chair | monitor | table | trash can |
- |:---:|:---:|:---:|:---:|
- | ![chair](assets/sample_chair.png) | ![monitor](assets/sample_monitor.png) | ![table](assets/sample_table.png) | ![trash can](assets/sample_trash_can.png) |
 
- | rack | crate | shelf | barrel (OOD) |
- |:---:|:---:|:---:|:---:|
- | ![rack](assets/sample_rack.png) | ![crate](assets/sample_crate.png) | ![shelf](assets/sample_shelf.png) | ![barrel](assets/sample_barrel.png) |
 
- **Held-out (OOD):** fire\_extinguisher, whiteboard, barrel — appear only in the `test_ood_obj` split.
 
  ---
 
- ## Object Category Demo
 
- <video src="assets/montage_office_categories.mp4" controls width="100%">Office categories montage</video>
 
- *All object categories navigated to in the Office scene.*
 
  ---
 
- ## Dataset Structure
 
  ```
- v1/
- ├── dataset_meta.json                        # Global metadata (scenes, camera, action space, splits)
- ├── assets/                                  # README visual assets
- ├── splits/
- │   ├── train_id.txt                         # 261 episode IDs
- │   ├── val_id.txt                           # 41 episode IDs
- │   ├── test_id.txt                          # 50 episode IDs
- │   ├── test_ood_obj.txt                     # 37 episode IDs (held-out object categories)
- │   └── test_ood_lang.txt                    # 36 episode IDs (paraphrase OOD templates)
- ├── targets_office.yaml                      # Per-scene object catalogs (3-D centroids)
- ├── targets_hospital.yaml
- ├── targets_full_warehouse.yaml
- ├── targets_warehouse_multiple_shelves.yaml
- └── episodes/
-     └── ep_{N:06d}/
-         ├── meta.json                        # Full episode metadata
-         ├── rgb_front/{t}.png                # 640×640 RGB frame at step t
-         ├── depth_front/{t}.npy              # 640×640 float32 depth (m) at step t
-         ├── seg_front/{t}.png                # 640×640 uint16 instance segmentation at step t
-         ├── actions_continuous.npy           # (T, 2) float32 — (v_t, ω_t)
-         ├── actions_tokens.npy               # (T, 2) int16 — discretized 7×7 tokens
-         └── poses.npy                        # (T, 7) float32 — (x, y, z, qw, qx, qy, qz)
  ```
 
- ### Episode Metadata (`meta.json`)
-
- Each episode's sidecar JSON records the full configuration:
 
  ```json
  {
-   "episode_id": "ep_000321",
-   "scene_id": "full_warehouse.usd",
-   "goal": {
-     "target_category": "crate",
-     "target_id": "crate_038",
-     "goal_position_xyz_m": [-15.08, 10.77, 2.93]
-   },
-   "instruction": {
-     "text": "Go to the crate.",
-     "template_id": "train_01"
-   },
-   "spawn": { "tier": "mid", "spawn_to_target_dist_m": 3.574 },
    "rollout": {
-     "num_steps": 219,
      "terminated_by": "success",
      "success": true,
      "collision_count": 0,
-     "final_ne_m": 0.966,
-     "trajectory_length_m": 2.61
    }
  }
  ```
 
  ---
 
- ## Splits
-
- | Split | Episodes | Description |
- |---|---|---|
- | `train_id` | 261 | Seen objects, seen instruction templates |
- | `val_id` | 41 | Seen objects, seen templates (validation) |
- | `test_id` | 50 | Seen objects, seen templates (held-out test) |
- | `test_ood_obj` | 37 | **Held-out object categories** (fire extinguisher, whiteboard, barrel) |
- | `test_ood_lang` | 36 | **Paraphrase OOD** instruction templates |
- | **Total** | **425** | (current snapshot; full budget: 2,000) |
-
- ---
-
- ## Language Instructions
-
- Instructions are generated from slot-fill templates with `{object}` and `{color}` placeholders.
 
- **18 training templates** (T1–T18), examples:
- - "Go to the {object}."
- - "Drive to the {object} and stop."
- - "Approach the {object}."
- - "Navigate to the {object}."
- - "Your destination is the {object}."
-
- **12 paraphrase-OOD templates** (O1–O12), examples:
- - "Make your way to the {object}."
- - "Proceed to the {object}."
- - "Find the {object} and come to a stop."
- - "Close in on the {object}."
-
- > **Note:** Color-slot templates are suppressed in v1 — all targets carry `color=unknown` because USD assets do not expose material-color attributes through a standard prim API. Active pool: 13 train + 10 paraphrase-OOD templates.
 
  ---
 
- ## Task Definition
-
- **LCOA formulation:** Given instruction $\ell$ and observations $o_t = (I_t^\text{RGB}, D_t)$, output actions $a_t = (v_t, \omega_t)$ such that the robot stops within $r_\text{success} = 1.0$ m of the target object centroid.
-
- **Action space:**
- - Continuous: $(v, \omega) \in [0, 1]$ m/s × $[-1.5, 1.5]$ rad/s
- - Tokenized: each dimension quantized to 7 uniform bins → 49-token vocabulary
-
- **Episode termination:**
- - **Success** — within 1 m and stationary for ≥ 5 consecutive steps
- - **Collision** — stall detected (no forward progress for ≥ 16 steps near obstacle)
- - **Timeout** — 1,000 steps reached without success
 
- Only successful episodes are retained in the dataset; a minimal termination check is sketched below.
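
For concreteness, the termination rules above can be read as the following minimal check. The function name, its inputs, and the stationarity threshold are illustrative assumptions, not the collector's actual code:

```python
import numpy as np

def classify_termination(dist_to_goal_m, recent_speeds, stalled_steps, step,
                         r_success=1.0, stop_steps=5, stall_limit=16, max_steps=1000):
    """Illustrative LCOA termination check: success / collision / timeout."""
    speeds = np.abs(np.asarray(recent_speeds))
    # Success: inside the 1.0 m radius and (near-)stationary for >= 5 consecutive steps.
    if dist_to_goal_m <= r_success and len(speeds) >= stop_steps and speeds[-stop_steps:].max() < 1e-3:
        return "success"
    # Collision: stall detected (no forward progress for >= 16 steps near an obstacle).
    if stalled_steps >= stall_limit:
        return "collision"
    # Timeout: 1,000-step budget exhausted without success.
    if step >= max_steps:
        return "timeout"
    return None  # episode continues
```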
 
 
 
  ---
 
- ## Spawn Tiers
-
- Trajectory diversity is ensured through three distance tiers:
-
- | Tier | Weight | Radius |
- |---|---|---|
- | Near | 30% | 1.5–3.5 m from target |
- | Mid | 40% | 3.5–7.0 m from target |
- | Far | 30% | Global curated floor points |
-
- Pearson correlation between spawn distance and trajectory length: **r = 0.94**.
 
- ---
 
 
 
- ## Expert Controller
 
- The data-collection expert is a proportional controller using pixel-level target visibility from the instance segmentation mask (see the sketch after this list):
 
- - **Target visible (≥ 32 px):** angular correction from the mask centroid column, plus depth-based speed
- - **Target not visible:** bearing-only proportional law from the known goal position
- - **Obstacle avoidance:** speed clamped when depth in the central foreground crop < 0.25 m
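
A minimal sketch of that control law follows. The gains, the speed/depth scaling, and the crop geometry are assumptions for illustration, not the collector's exact implementation:

```python
import numpy as np

def expert_action(seg, depth, target_id, goal_bearing_rad,
                  k_ang=2.0, v_max=1.0, w_max=1.5):
    """Proportional expert: steer by mask centroid if visible, else by goal bearing."""
    mask = seg == target_id
    if mask.sum() >= 32:  # target visible (>= 32 px in the instance mask)
        cols = np.nonzero(mask)[1]
        err = cols.mean() / seg.shape[1] - 0.5           # centroid offset from image centre
        w = float(np.clip(-k_ang * err, -w_max, w_max))  # angular correction
        v = float(np.clip(depth[mask].mean() / 5.0, 0.0, v_max))  # slow down when close
    else:                 # bearing-only proportional law from the known goal position
        w = float(np.clip(k_ang * goal_bearing_rad, -w_max, w_max))
        v = 0.5 * v_max
    # Obstacle guard: clamp speed when central foreground depth drops below 0.25 m.
    h, wd = depth.shape
    if np.nanmin(depth[int(0.6 * h):, int(0.4 * wd):int(0.6 * wd)]) < 0.25:
        v = 0.0
    return v, w
```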
 
  ---
-
- ## Rollout Statistics
-
- | Split | N | Mean NE (m) | Mean TL (m) | Mean Steps |
- |---|---|---|---|---|
- | train\_id | 261 | 0.967 | 2.75 | 197.6 |
- | val\_id | 41 | 0.967 | 2.83 | 205.6 |
- | test\_id | 50 | 0.966 | 2.74 | 190.6 |
- | test\_ood\_obj | 37 | 0.967 | 2.38 | 174.7 |
- | test\_ood\_lang | 36 | 0.967 | 3.07 | 229.7 |
-
- NE = final navigation error (distance to goal at termination). TL = trajectory length.
-
- ---
 
- ## Collection Setup
 
- | Property | Value |
  |---|---|
- | Simulator | NVIDIA Isaac Sim 5.1.0-rc.19 |
- | Robot | NVIDIA Nova Carter (differential-drive) |
- | Camera | front\_hawk/right stereo camera |
- | Physics rate | 60 Hz (Δt = 1/60 s) |
- | Image resolution | 640 × 640 px |
- | Random seed | 42 |
- | Generation date | 2026-04-22 |
-
- ---
-
- ## Loading the Dataset
-
- ```python
- import json
- import numpy as np
- from pathlib import Path
- from PIL import Image
-
- root = Path("v1")
-
- # Load split
- with open(root / "splits" / "train_id.txt") as f:
-     train_ids = [line.strip() for line in f]
-
- # Load an episode
- ep_dir = root / "episodes" / train_ids[0]
- meta = json.loads((ep_dir / "meta.json").read_text())
-
- instruction = meta["instruction"]["text"]                      # "Go to the monitor."
- actions = np.load(ep_dir / "actions_continuous.npy")           # (T, 2) float32
- tokens = np.load(ep_dir / "actions_tokens.npy")                # (T, 2) int16
- poses = np.load(ep_dir / "poses.npy")                          # (T, 7) float32
-
- # Load frame t=0
- rgb = np.array(Image.open(ep_dir / "rgb_front" / "0.png"))     # (640, 640, 3)
- depth = np.load(ep_dir / "depth_front" / "0.npy")              # (640, 640) metres
- seg = np.array(Image.open(ep_dir / "seg_front" / "0.png"))     # (640, 640) instance IDs
- ```
 
  ---
 
  ## Citation
 
- If you use MiniVLA-Nav v1 in your research, please cite:
-
  ```bibtex
- @article{albustami2026minivlanav,
-   title  = {{MiniVLA-Nav v1}: A Multi-Scene Simulation Dataset for
-             Language-Conditioned Robot Navigation},
-   author = {Al-Bustami, Ali},
-   year   = {2026},
-   note   = {Thesis project, Department of Robotics Engineering}
  }
  ```
 
@@ -350,10 +239,4 @@ If you use MiniVLA-Nav v1 in your research, please cite:
 
  ## License
 
- This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
-
- ---
-
- ## Contact
-
- Ali Al-Bustami — alialbustami@gmail.com
  license: cc-by-4.0
  task_categories:
  - robotics
+ - visual-question-answering
  language:
  - en
  tags:
  - robotics
  - navigation
+ - vla
  - isaac-sim
  - nova-carter
  - language-conditioned
+ - embodied-ai
  size_categories:
  - 1K<n<10K
  ---
 
  # MiniVLA-Nav v1
 
+ **A language-conditioned navigation dataset for vision-language-action (VLA) model training and evaluation**, generated entirely in NVIDIA Isaac Sim 5.1 with a Nova Carter differential-drive robot across four photo-realistic scenes.
 
  ---
 
  ## Demo
 
+ <div align="center">
 
+ <video controls autoplay loop muted playsinline width="720">
+   <source src="https://huggingface.co/datasets/alibustami/miniVLA-Nav/resolve/main/assets/videos/montage_all_scenes.mp4" type="video/mp4">
+ </video>
 
+ *2×2 montage — Office · Hospital · Warehouse (Full) · Warehouse (Shelves)*
 
+ <br><br>
 
+ <video controls autoplay loop muted playsinline width="720">
+   <source src="https://huggingface.co/datasets/alibustami/miniVLA-Nav/resolve/main/assets/videos/montage_office_categories.mp4" type="video/mp4">
+ </video>
 
+ *Diverse goal categories in the Office scene*
 
+ </div>
 
  ---
 
+ ## Overview
 
+ | Property | Value |
+ |---|---|
+ | **Total episodes** | 1,174 |
+ | **Success rate** | 100 % (failed rollouts discarded) |
+ | **Scenes** | 4 (Office, Hospital, Full Warehouse, Warehouse Shelves) |
+ | **Robot** | NVIDIA Nova Carter (differential drive) |
+ | **Simulator** | Isaac Sim 5.1.0 |
+ | **Sensors** | 640×640 RGB + Depth + Instance Segmentation |
+ | **Action space** | Linear velocity *v* ∈ [0, 1] m/s · Angular velocity *ω* ∈ [−1.5, 1.5] rad/s |
+ | **Max steps / episode** | 1,000 |
+ | **Success radius** | 1.0 m |
+ | **License** | CC-BY 4.0 |
 
  ---
 
+ ## Scenes & Episode Counts
 
+ | Scene | Episodes | Seen Categories | Held-out Categories |
+ |---|---|---|---|
+ | Office | 700 | chair, sofa, table, monitor, plant, trash_can | fire_extinguisher, whiteboard |
+ | Hospital | 52 | chair, trash_can | fire_extinguisher, whiteboard |
+ | Full Warehouse | 354 | shelf, rack | barrel |
+ | Warehouse (Shelves) | 68 | shelf, rack | barrel |
 
  ---
 
+ ## Splits
 
+ | Split | Episodes | Description |
+ |---|---|---|
+ | `train_id` | 716 | In-distribution training |
+ | `val_id` | 114 | In-distribution validation |
+ | `test_id` | 121 | In-distribution test |
+ | `test_ood_lang` | 122 | Novel instruction templates (OOD language) |
+ | `test_ood_obj` | 101 | Novel object categories (OOD objects) |
 
+ ---
 
 
+ ## Spawn Tiers
 
+ | Tier | Range | Proportion |
+ |---|---|---|
+ | Near | 1.5–3.5 m | ~55 % |
+ | Mid | 3.5–7.0 m | ~44 % |
+ | Far | Global free points | ~1 % |
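
As a sketch of how these proportions could drive spawning, here is a tier-weighted sampler consistent with the table; the `free_points` input and the exact use of the weights are assumptions, not the generation code:

```python
import math
import random

# Tier table mirroring the proportions above (~55 / ~44 / ~1 %).
TIERS = [("near", 0.55, (1.5, 3.5)), ("mid", 0.44, (3.5, 7.0)), ("far", 0.01, None)]

def sample_spawn(target_xy, free_points, rng=random):
    """Pick a tier by weight, then a spawn point in that tier's ring (or a free point)."""
    name, _, radius = rng.choices(TIERS, weights=[w for _, w, _ in TIERS], k=1)[0]
    if radius is None:  # far tier: any curated free floor point
        return name, rng.choice(free_points)
    r = rng.uniform(*radius)                 # near/mid: ring around the target
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return name, (target_xy[0] + r * math.cos(theta),
                  target_xy[1] + r * math.sin(theta))
```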
 
 
  ---
 
+ ## Scene Previews
 
+ ### Office
+ <table><tr>
+   <td><img src="assets/contact_sheets/contact_office.png" width="480"/></td>
+ </tr></table>
 
+ ### Hospital
+ <table><tr>
+   <td><img src="assets/contact_sheets/contact_hospital.png" width="480"/></td>
+ </tr></table>
 
+ ### Warehouse (Full)
+ <table><tr>
+   <td><img src="assets/contact_sheets/contact_full_warehouse.png" width="480"/></td>
+ </tr></table>
 
+ ### Warehouse (Shelves)
+ <table><tr>
+   <td><img src="assets/contact_sheets/contact_warehouse_multiple_shelves.png" width="480"/></td>
+ </tr></table>
 
  ---
 
+ ## Cinematic Captures (Multi-Camera)
 
+ Each demo episode is recorded simultaneously from **4 static camera angles** in addition to the robot's front camera.
 
+ | Scene | View | Video |
+ |---|---|---|
+ | Office | 4-up composite | [office/ep_000002/4up_cinematic.mp4](assets/cinematic/office/ep_000002/4up_cinematic.mp4) |
+ | Hospital | 4-up composite | [hospital/ep_000846/4up_cinematic.mp4](assets/cinematic/hospital/ep_000846/4up_cinematic.mp4) |
+ | Warehouse (Full) | 4-up composite | [full_warehouse/ep_000909/4up_cinematic.mp4](assets/cinematic/full_warehouse/ep_000909/4up_cinematic.mp4) |
+ | Warehouse (Shelves) | 4-up composite | [warehouse_shelves/ep_000399/4up_cinematic.mp4](assets/cinematic/warehouse_shelves/ep_000399/4up_cinematic.mp4) |
 
  ---
 
+ ## Episode Structure
+
+ Each episode is stored under `data/v1/episodes/ep_XXXXXX/`:
 
  ```
+ ep_000001/
+ ├── meta.json                  # Full episode metadata
+ ├── rgb_front/                 # 640×640 RGB frames (PNG)
+ │   ├── 000000.png
+ │   └── ...
+ ├── depth_front/               # 640×640 depth maps (float32 NPY, metres)
+ │   └── ...
+ ├── seg_front/                 # Instance segmentation masks (uint16 PNG)
+ │   └── ...
+ ├── actions_continuous.npy     # (N, 2) [v, ω] per step
+ ├── actions_tokens.npy         # Tokenised action sequences
+ └── poses.npy                  # (N, 7) — [x, y, z, qw, qx, qy, qz]
  ```
 
+ ### `meta.json` fields
 
  ```json
  {
+   "episode_id": "ep_000001",
+   "scene_id": "office.usd",
+   "robot": { "name": "nova_carter", "prim_path": "/World/nova_carter" },
+   "task": { "type": "language_conditioned_object_approach", "success_radius_m": 1.0 },
+   "goal": { "target_category": "monitor", "goal_position_xyz_m": [...] },
+   "instruction": { "text": "Move toward the monitor.", "split": "train_id" },
+   "spawn": { "tier": "near", "spawn_to_target_dist_m": 1.662 },
    "rollout": {
+     "num_steps": 58,
      "terminated_by": "success",
      "success": true,
      "collision_count": 0,
+     "final_ne_m": 0.96,
+     "trajectory_length_m": 0.70
    }
  }
  ```
 
  ---
 
+ ## Language Templates
 
+ **18 training templates** (e.g. `"Go to the {object}."`, `"Move toward the {color} {object}."`) and **12 OOD templates** (e.g. `"Make your way to the {object}."`, `"Park next to the {object}."`) cover a wide range of natural-language phrasings; a slot-fill sketch follows.
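
The sketch below shows how such templates expand; the helper and its fallback behaviour are assumptions (the previous card notes that colour-slot templates are suppressed when the target colour is unknown):

```python
import random

TRAIN_TEMPLATES = ["Go to the {object}.", "Move toward the {color} {object}."]

def make_instruction(obj, color=None, rng=random):
    """Fill a randomly chosen template with the target category (and colour, if known)."""
    template = rng.choice(TRAIN_TEMPLATES)
    if "{color}" in template and color is None:
        template = "Go to the {object}."  # fall back when no colour is available
    return template.format(object=obj, color=color)

print(make_instruction("monitor"))  # e.g. "Go to the monitor."
```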
 
  ---
 
+ ## Action Space
 
+ Actions are continuous `[v, ω]` tuples discretised into a 7×7 token grid (see the sketch after this list):
+ - **v** ∈ [0.0, 1.0] m/s (7 bins)
+ - **ω** ∈ [−1.5, 1.5] rad/s (7 bins)
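
A minimal quantisation sketch for this grid; uniform bin edges and bin-centre decoding are assumptions consistent with the 7-bins-per-dimension description (per-dimension tokens match the two-column `actions_tokens.npy`):

```python
V_RANGE, W_RANGE, BINS = (0.0, 1.0), (-1.5, 1.5), 7

def to_bin(x, lo, hi, bins=BINS):
    """Map a continuous value onto one of `bins` uniform bins."""
    idx = int((x - lo) / (hi - lo) * bins)
    return min(max(idx, 0), bins - 1)  # clamp boundary values into range

def tokenize(v, w):
    return to_bin(v, *V_RANGE), to_bin(w, *W_RANGE)

def detokenize(tv, tw):
    """Decode a token pair back to the centres of its (v, ω) bins."""
    v = V_RANGE[0] + (tv + 0.5) * (V_RANGE[1] - V_RANGE[0]) / BINS
    w = W_RANGE[0] + (tw + 0.5) * (W_RANGE[1] - W_RANGE[0]) / BINS
    return v, w

print(tokenize(0.5, 0.0))  # (3, 3): the centre cell of the 7×7 grid
```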
 
  ---
 
+ ## Quick-Load Example
 
+ ```python
+ import json
+ from pathlib import Path
+
+ import numpy as np
+ from PIL import Image
+
+ EP = Path("data/v1/episodes/ep_000001")
+ meta = json.loads((EP / "meta.json").read_text())
+ actions = np.load(EP / "actions_continuous.npy")  # (N, 2)
+ poses = np.load(EP / "poses.npy")                 # (N, 7)
+
+ frame0 = Image.open(EP / "rgb_front" / "000000.png")
+
+ print(meta["instruction"]["text"])  # "Move toward the monitor."
+ print(actions.shape, poses.shape)   # (58, 2) (58, 7)
+ ```
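
Extending the example to splits and the remaining modalities, a sketch that assumes a `splits/` folder under the `data/v1` root and six-digit frame names for depth/segmentation, mirroring the `v1/` tree in the previous card and the RGB naming above:

```python
from pathlib import Path

import numpy as np
from PIL import Image

ROOT = Path("data/v1")  # assumed dataset root

# Split files list one episode ID per line, e.g. "ep_000001".
train_ids = (ROOT / "splits" / "train_id.txt").read_text().split()

ep = ROOT / "episodes" / train_ids[0]
depth0 = np.load(ep / "depth_front" / "000000.npy")           # (640, 640) float32, metres
seg0 = np.array(Image.open(ep / "seg_front" / "000000.png"))  # (640, 640) uint16 instance IDs
```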
 
  ---
 
+ ## HuggingFace Dataset Card Assets
 
+ Publication-quality assets are in `assets/`:
 
+ | Path | Contents |
  |---|---|
+ | `assets/videos/` | 18 trajectory videos (H.264, 15 fps) + 2 montage grids |
+ | `assets/cinematic/` | 60 multi-camera videos across 12 demo episodes |
+ | `assets/contact_sheets/` | 4 scene contact-sheet PNGs (4 rows × 5 key frames) |
+ | `assets/frames/` | 166 individual key-frame PNGs (RGB + RGB-D pairs, depth strips) |
 
  ---
 
  ## Citation
 
  ```bibtex
+ @dataset{albustami2026minivla,
+   author    = {Albustami, Ali},
+   title     = {{MiniVLA-Nav}: A Language-Conditioned Navigation Dataset
+                for VLA Training in Isaac Sim},
+   year      = {2026},
+   publisher = {HuggingFace},
+   url       = {https://huggingface.co/datasets/alibustami/miniVLA-Nav}
  }
  ```
 
 
 
  ## License
 
+ [Creative Commons Attribution 4.0 International (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)