---
language:
  - en
license: cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - robotics
pretty_name: MiniVLA-Nav v1
tags:
  - robotics
  - navigation
  - imitation-learning
  - vision-language-action
  - isaac-sim
  - nova-carter
  - differential-drive
  - language-conditioned
  - behavior-cloning
  - simulation
  - object-approach
  - depth
  - segmentation
configs:
  - config_name: default
    data_files: metadata.parquet
---

# MiniVLA-Nav v1

**A Multi-Scene Simulation Dataset for Language-Conditioned Robot Navigation**

Paper: https://huggingface.co/papers/2605.00397

## Demo

Nova Carter navigating to named objects across all four Isaac Sim environments.


## Dataset Summary

MiniVLA-Nav v1 is a simulation dataset for the Language-Conditioned Object Approach (LCOA) task: given a short natural-language instruction, an NVIDIA Nova Carter differential-drive robot must navigate to the named object and stop within 1 m. Data were collected inside four photorealistic NVIDIA Isaac Sim 5.1 environments (Office, Hospital, Full Warehouse, Warehouse with Multiple Shelves).

Each of the 1,174 episodes pairs a language instruction with per-timestep, synchronized multimodal observations:

| Modality | Resolution / Shape | Format |
|---|---|---|
| Front RGB | 640 × 640 × 3, uint8 | PNG |
| Metric depth | 640 × 640, float32 (metres) | NumPy |
| Instance segmentation | 640 × 640, uint16 | PNG |
| Continuous actions (v, ω) | T × 2, float32 | NumPy |
| Tokenized actions (7 × 7) | T × 2, int16 | NumPy |
| Robot poses (x, y, z, qw, qx, qy, qz) | T × 7, float32 | NumPy |

All sensors operate at 60 Hz (physics Δt = 1/60 s).


## Supported Tasks

- **Language-Conditioned Object Approach (LCOA)** — given a natural-language goal and front RGB-D observations, predict continuous (v, ω) or discrete 7×7 action tokens to drive a differential-drive robot within 1 m of the named object.
- **Behaviour Cloning / Imitation Learning** — dense per-step expert labels enable direct supervised training (see the sketch below).
- **OOD Generalisation** — structured evaluation splits test template-paraphrase and object-category out-of-distribution robustness.
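
As one concrete way to use the dense per-step labels, here is a minimal behaviour-cloning sketch in PyTorch. The tiny CNN policy, its hyperparameters, and the dummy tensors are illustrative assumptions (the dataset ships no model), and language conditioning is omitted for brevity.

```python
# Minimal behaviour-cloning sketch (hypothetical toy model, not part of the dataset).
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Toy CNN mapping a front RGB frame to continuous (v, omega)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # predicts (v, omega)

    def forward(self, rgb):
        return self.head(self.backbone(rgb))

policy = TinyPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Dummy batch standing in for (rgb_front[t], actions_continuous[t]) pairs.
rgb = torch.rand(8, 3, 640, 640)       # normalized RGB frames
expert_actions = torch.rand(8, 2)      # expert (v, omega) labels

pred = policy(rgb)
loss = nn.functional.mse_loss(pred, expert_actions)
loss.backward()
optimizer.step()
```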

## Multimodal Observations

Each timestep provides synchronized RGB, metric depth (float32, metres), and instance segmentation. The composites below show RGB (left) and depth colormap (right) from a mid-episode step.

*(RGB + depth composites — Office, Hospital, Full Warehouse, Warehouse (Multi-Shelf).)*

Depth strip — consecutive frames from an office episode, showing depth (metres) as the robot approaches the target:

*(Depth strip — Office episode.)*


## Scenes

Four photorealistic Isaac Sim environments, each with curated seen/held-out object categories:

### Office

*(Contact sheet — Office)*

### Hospital

*(Contact sheet — Hospital)*

### Full Warehouse

*(Contact sheet — Full Warehouse)*

### Warehouse (Multiple Shelves)

*(Contact sheet — Warehouse Multi-Shelf)*

| Scene | Episodes | Seen Categories | Held-out Categories |
|---|---|---|---|
| Office | 281 | chair, sofa, table, monitor, plant, trash_can | fire_extinguisher, whiteboard |
| Hospital | 22 | chair, trash_can | fire_extinguisher, whiteboard |
| Full Warehouse | 54 | shelf, rack | barrel |
| Warehouse (Multi-Shelf) | 68 | shelf, rack | barrel |

## Object Categories

12 categories total — 9 seen during training, 3 held out for OOD evaluation.

Seen categories:

*(Category thumbnails — chair, monitor, table, trash can, rack, crate, shelf, barrel (OOD).)*

Held-out (OOD): fire_extinguisher, whiteboard, barrel — appear only in test_ood_obj split.


## Object Category Demo

All object categories navigated to in the Office scene.


## Dataset Structure

```
v1/
├── dataset_meta.json            # Global metadata (scenes, camera, action space, splits)
├── assets/                      # README visual assets
├── splits/
│   ├── train_id.txt             # 261 episode IDs
│   ├── val_id.txt               #  41 episode IDs
│   ├── test_id.txt              #  50 episode IDs
│   ├── test_ood_obj.txt         #  37 episode IDs  (held-out object categories)
│   └── test_ood_lang.txt        #  36 episode IDs  (paraphrase OOD templates)
├── targets_office.yaml          # Per-scene object catalogs (3-D centroids)
├── targets_hospital.yaml
├── targets_full_warehouse.yaml
├── targets_warehouse_multiple_shelves.yaml
└── episodes/
    └── ep_{N:06d}/
        ├── meta.json                 # Full episode metadata
        ├── rgb_front/{t}.png         # 640×640 RGB frame at step t
        ├── depth_front/{t}.npy       # 640×640 float32 depth (m) at step t
        ├── seg_front/{t}.png         # 640×640 uint16 instance segmentation at step t
        ├── actions_continuous.npy    # (T, 2) float32 — (v_t, ω_t)
        ├── actions_tokens.npy        # (T, 2) int16  — discretized 7×7 tokens
        └── poses.npy                 # (T, 7) float32 — (x,y,z,qw,qx,qy,qz)
```

### Episode Metadata (meta.json)

Each episode's sidecar JSON records the full configuration:

```json
{
  "episode_id": "ep_000321",
  "scene_id": "full_warehouse.usd",
  "goal": {
    "target_category": "crate",
    "target_id": "crate_038",
    "goal_position_xyz_m": [-15.08, 10.77, 2.93]
  },
  "instruction": {
    "text": "Go to the crate.",
    "template_id": "train_01"
  },
  "spawn": { "tier": "mid", "spawn_to_target_dist_m": 3.574 },
  "rollout": {
    "num_steps": 219,
    "terminated_by": "success",
    "success": true,
    "collision_count": 0,
    "final_ne_m": 0.966,
    "trajectory_length_m": 2.61
  }
}
```
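
A small sketch (assuming the directory layout above) that scans every episode's `meta.json` to group episodes by scene and report the mean final navigation error:

```python
# Scan episode metadata and aggregate final navigation error per scene.
import json
from collections import defaultdict
from pathlib import Path

root = Path("v1")
ne_by_scene = defaultdict(list)

for meta_path in sorted(root.glob("episodes/ep_*/meta.json")):
    meta = json.loads(meta_path.read_text())
    ne_by_scene[meta["scene_id"]].append(meta["rollout"]["final_ne_m"])

for scene, errors in sorted(ne_by_scene.items()):
    print(f"{scene}: {len(errors)} episodes, mean NE = {sum(errors) / len(errors):.3f} m")
```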

## Splits

| Split | Episodes | Description |
|---|---|---|
| train_id | 261 | Seen objects, seen instruction templates |
| val_id | 41 | Seen objects, seen templates (validation) |
| test_id | 50 | Seen objects, seen templates (held-out test) |
| test_ood_obj | 37 | Held-out object categories (fire extinguisher, whiteboard, barrel) |
| test_ood_lang | 36 | Paraphrase OOD instruction templates |
| **Total** | **425** | Current snapshot; full budget: 2,000 |

## Language Instructions

Instructions are generated from slot-fill templates with {object} and {color} placeholders.

18 training templates (T1–T18), examples:

  • "Go to the {object}."
  • "Drive to the {object} and stop."
  • "Approach the {object}."
  • "Navigate to the {object}."
  • "Your destination is the {object}."

12 paraphrase-OOD templates (O1–O12), examples:

  • "Make your way to the {object}."
  • "Proceed to the {object}."
  • "Find the {object} and come to a stop."
  • "Close in on the {object}."

**Note:** Color-slot templates are suppressed in v1 — all targets carry `color=unknown` because USD assets do not expose material-color attributes through a standard prim API. Active pool: 13 train + 10 paraphrase-OOD templates.
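
For illustration, a minimal slot-fill sketch using a few of the templates quoted above; the template strings and object list here are only the examples given in this README, not the full pools shipped with the dataset.

```python
# Minimal slot-fill instantiation of instruction templates (examples only).
import random

train_templates = [
    "Go to the {object}.",
    "Drive to the {object} and stop.",
    "Approach the {object}.",
    "Navigate to the {object}.",
]
seen_objects = ["chair", "monitor", "table", "trash_can", "shelf", "rack"]

def sample_instruction(rng: random.Random) -> str:
    template = rng.choice(train_templates)
    obj = rng.choice(seen_objects).replace("_", " ")
    return template.format(object=obj)

rng = random.Random(0)
print(sample_instruction(rng))  # one instantiated training instruction
```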


## Task Definition

LCOA formulation: Given instruction $\ell$ and observations $o_t = (I_t^\text{RGB}, D_t)$, output actions $a_t = (v_t, \omega_t)$ such that the robot stops within $r_\text{success} = 1.0$ m of the target object centroid.

Action space:

- Continuous: $(v, \omega) \in [0, 1]$ m/s × $[-1.5, 1.5]$ rad/s
- Tokenized: each dimension quantized to 7 uniform bins → 49-token vocabulary (see the sketch below)
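
A hedged sketch of the 7-bin-per-dimension discretization; the exact bin edges and rounding convention behind `actions_tokens.npy` are assumptions here — only the ranges and bin count come from the description above.

```python
# Sketch: quantize continuous (v, omega) into 7 uniform bins per dimension.
import numpy as np

N_BINS = 7
V_RANGE = (0.0, 1.0)    # linear speed, m/s
W_RANGE = (-1.5, 1.5)   # angular speed, rad/s

def to_tokens(actions: np.ndarray) -> np.ndarray:
    """Map (T, 2) continuous actions to per-dimension bin indices in [0, 6]."""
    v_bin = np.clip((actions[:, 0] - V_RANGE[0]) / (V_RANGE[1] - V_RANGE[0]) * N_BINS, 0, N_BINS - 1)
    w_bin = np.clip((actions[:, 1] - W_RANGE[0]) / (W_RANGE[1] - W_RANGE[0]) * N_BINS, 0, N_BINS - 1)
    return np.stack([v_bin, w_bin], axis=1).astype(np.int16)

def to_continuous(tokens: np.ndarray) -> np.ndarray:
    """Map (T, 2) bin indices back to bin-center (v, omega) values."""
    v = V_RANGE[0] + (tokens[:, 0] + 0.5) / N_BINS * (V_RANGE[1] - V_RANGE[0])
    w = W_RANGE[0] + (tokens[:, 1] + 0.5) / N_BINS * (W_RANGE[1] - W_RANGE[0])
    return np.stack([v, w], axis=1).astype(np.float32)

actions = np.array([[0.6, -0.3], [1.0, 1.5]], dtype=np.float32)
tokens = to_tokens(actions)        # e.g. [[4, 2], [6, 6]]
recovered = to_continuous(tokens)  # bin centers, within half a bin of the input
```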

Episode termination:

- **Success** — within 1 m and stationary for ≥ 5 consecutive steps
- **Collision** — stall detected (no forward progress for ≥ 16 steps near obstacle)
- **Timeout** — 1,000 steps reached without success

Only successful episodes are retained in the dataset.
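
A hedged sketch of the success criterion described above; the speed threshold that defines "stationary" is an assumption.

```python
# Sketch: success = within 1 m of the goal while stationary for >= 5 consecutive steps.
import numpy as np

SUCCESS_RADIUS_M = 1.0
STOP_STEPS = 5
STOP_SPEED_MPS = 1e-2  # assumed threshold for "stationary"

def is_success(positions_xy: np.ndarray, speeds: np.ndarray, goal_xy: np.ndarray) -> bool:
    """positions_xy: (T, 2), speeds: (T,), goal_xy: (2,)."""
    dist = np.linalg.norm(positions_xy - goal_xy, axis=1)
    near_and_still = (dist <= SUCCESS_RADIUS_M) & (speeds <= STOP_SPEED_MPS)
    run = 0
    for flag in near_and_still:
        run = run + 1 if flag else 0
        if run >= STOP_STEPS:
            return True
    return False
```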


## Spawn Tiers

Trajectory diversity is ensured through three distance tiers:

| Tier | Weight | Radius |
|---|---|---|
| Near | 30% | 1.5–3.5 m from target |
| Mid | 40% | 3.5–7.0 m from target |
| Far | 30% | Global curated floor points |

Pearson correlation between spawn distance and trajectory length: r = 0.94.


## Expert Controller

The data-collection expert is a proportional controller using pixel-level target visibility from the instance segmentation mask (a structural sketch follows the list):

- Target visible (≥ 32 px): angular correction from mask centroid column + depth-based speed
- Target not visible: bearing-only proportional law from known goal position
- Obstacle avoidance: speed clamped when depth in central foreground crop < 0.25 m
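
The sketch below mirrors the structure of that controller; the gains, speed schedule, and crop size are assumptions — only the visibility threshold, obstacle distance, and overall logic follow the description above.

```python
# Hedged sketch of the expert's proportional law (illustrative gains).
import numpy as np

K_ANG = 2.0            # assumed angular gain
V_MAX = 1.0            # m/s, matches the continuous action range
MIN_VISIBLE_PX = 32
OBSTACLE_DEPTH_M = 0.25

def expert_action(seg, depth, target_instance_id, bearing_to_goal_rad):
    """Return (v, omega) for one step from segmentation, depth, and goal bearing."""
    mask = seg == target_instance_id
    h, w = seg.shape
    if mask.sum() >= MIN_VISIBLE_PX:
        # Target visible: steer toward the mask centroid column.
        cols = np.nonzero(mask)[1]
        offset = (cols.mean() - w / 2) / (w / 2)             # normalized to [-1, 1]
        omega = float(np.clip(-K_ANG * offset, -1.5, 1.5))
        target_depth = float(np.median(depth[mask]))
        v = float(np.clip(target_depth - 1.0, 0.0, V_MAX))   # slow down near 1 m
    else:
        # Target not visible: bearing-only proportional law to the known goal.
        omega = float(np.clip(K_ANG * bearing_to_goal_rad, -1.5, 1.5))
        v = 0.5 * V_MAX
    # Obstacle avoidance: clamp speed if the central foreground crop is too close.
    crop = depth[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    if np.nanmin(crop) < OBSTACLE_DEPTH_M:
        v = 0.0
    return v, omega
```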

## Rollout Statistics

| Split | N | Mean NE (m) | Mean TL (m) | Mean Steps |
|---|---|---|---|---|
| train_id | 261 | 0.967 | 2.75 | 197.6 |
| val_id | 41 | 0.967 | 2.83 | 205.6 |
| test_id | 50 | 0.966 | 2.74 | 190.6 |
| test_ood_obj | 37 | 0.967 | 2.38 | 174.7 |
| test_ood_lang | 36 | 0.967 | 3.07 | 229.7 |

NE = final navigation error (distance to goal at termination). TL = trajectory length.


## Collection Setup

| Property | Value |
|---|---|
| Simulator | NVIDIA Isaac Sim 5.1.0-rc.19 |
| Robot | NVIDIA Nova Carter (differential-drive) |
| Camera | front_hawk/right stereo camera |
| Physics rate | 60 Hz (Δt = 1/60 s) |
| Image resolution | 640 × 640 px |
| Random seed | 42 |
| Generation date | 2026-04-22 |

## Loading the Dataset

```python
import json
import numpy as np
from pathlib import Path
from PIL import Image

root = Path("v1")

# Load split
with open(root / "splits" / "train_id.txt") as f:
    train_ids = [line.strip() for line in f]

# Load an episode
ep_dir = root / "episodes" / train_ids[0]
meta = json.loads((ep_dir / "meta.json").read_text())

instruction = meta["instruction"]["text"]             # "Go to the monitor."
actions = np.load(ep_dir / "actions_continuous.npy")  # (T, 2) float32
tokens  = np.load(ep_dir / "actions_tokens.npy")      # (T, 2) int16
poses   = np.load(ep_dir / "poses.npy")               # (T, 7) float32

# Load frame t=0
rgb   = np.array(Image.open(ep_dir / "rgb_front" / "0.png"))   # (640, 640, 3)
depth = np.load(ep_dir / "depth_front" / "0.npy")              # (640, 640) metres
seg   = np.array(Image.open(ep_dir / "seg_front" / "0.png"))   # (640, 640) instance IDs
```
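
Continuing from the snippet above, a small follow-up that pairs every frame with its expert action; it assumes frame filenames `0.png` … `(T-1).png` align one-to-one with the rows of `actions_continuous.npy`.

```python
# Build per-step (observation, action) pairs for the loaded episode.
pairs = []
T = actions.shape[0]
for t in range(T):
    rgb_t = np.array(Image.open(ep_dir / "rgb_front" / f"{t}.png"))
    depth_t = np.load(ep_dir / "depth_front" / f"{t}.npy")
    pairs.append(((rgb_t, depth_t, instruction), actions[t]))

print(len(pairs))  # T steps with synchronized observation + (v, omega) label
```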

## Citation

If you use MiniVLA-Nav v1 in your research, please cite:

```bibtex
@article{albustami2026minivlanav,
  title  = {{MiniVLA-Nav v1}: A Multi-Scene Simulation Dataset for
            Language-Conditioned Robot Navigation},
  author = {Ali Al-Bustami and Jaerock Kwon},
  year   = {2026},
  url    = {https://huggingface.co/papers/2605.00397},
  note   = {Thesis project, Department of Robotics Engineering}
}
```

## License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.


## Contact

Ali Al-Bustami - abustami@umich.edu