GNN Constraint-Aware World Model Dataset (v3)
Real robot episodes with per-frame constraint graphs, SAM2 segmentation masks + 256-D feature embeddings, full 3D depth bundles, and synchronized robot states across two manipulation domains. Both domains share the v3 on-disk layout (same JSON/NPZ schemas, same delta-encoded frame_states, same fully-connected PyG expansion at load time) and now share a unified 270-D node feature format — the PyG loader reads a fixed 10-D type encoding from a YAML config so both domains produce identical node dimensionality.
- Project: CoRL 2026 — GNN world model for constraint-aware video generation
- Author: Chang Liu (Texas A&M University)
- Hardware: UR5e + Robotiq 2F-85 gripper, OAK-D Pro (static side view)
- Format version: v3.0 (updated 2026-04-16)
Domains at a glance
| Domain | Graph variants offered | Node vocab size | Node feature dim | Edge feature dim | Data root |
|---|---|---|---|---|---|
| Desktop disassembly | products-only, with-robot-node, with-robot-state, with-robot-action | 9 (8 products + robot) | 270 | 3 | session_<date>_<time>/episode_XX/ |
| Tower of Hanoi | products-only, with-robot-state, with-robot-action | 4 (ring_1..ring_4) | 270 | 3 | hanoi/session_hanoi_<date>_<time>/episode_XX/ |
Node feature dim = 256 (SAM2 emb) + 3 (3D pos) + 10 (fixed type encoding) + 1 (visibility) = 270. The 10-D type encoding is a fixed, deterministic per-type vector (NOT trained) read from config/type_encoding_random.yaml or config/type_encoding_clip.yaml at load time — so both domains, and any future component vocabulary up to 13 types, share the same node dimension.
Four loader variants (all return torch_geometric.data.Data):
- `load_pyg_frame_products_only` — V1 bare graph: products/rings only, no robot info.
- `load_pyg_frame_with_robot` — V2 ablation: robot attached as a graph NODE (Desktop only; Hanoi has no robot mask in v1, so this falls back to products-only).
- `load_pyg_frame_with_robot_state` — V3 recommended: products-only graph + `robot_state=[13]` side-tensor. Works for both domains because `robot_states.npy` is present everywhere.
- `load_pyg_frame_with_robot_action` — V3 action-conditioned: same as above + `robot_action=[13]` delta for the next frame.
The three paper options map cleanly: Option 1 (direct graph encoding) → products_only; Option 2 (encoder → latent → world model with robot context) → with_robot_state; Option 3 (action-conditioned GNN) → with_robot_action.
File layout (same for both domains)
```
episode_XX/
├── metadata.json               # episode metadata (domain-specific extras)
├── robot_states.npy            # (T, 13) float32 — joints + TCP + gripper
├── robot_actions.npy           # (T-1, 13) float32 — frame deltas
├── timestamps.npy              # (T, 3) float64
├── side/
│   ├── rgb/frame_XXXXXX.png    # 1280×720 RGB
│   └── depth/frame_XXXXXX.npy  # 1280×720 uint16 (mm)
├── wrist/                      # raw wrist camera (not used in v3)
└── annotations/
    ├── side_graph.json         # components, static edges, frame_states
    ├── side_masks/             # {component_id: (H,W) uint8} per frame
    ├── side_embeddings/        # {component_id: (256,) float32} per frame
    ├── side_depth_info/        # flat-keyed depth bundle per frame
    ├── side_robot/             # robot bundle per frame (visible flag)
    └── dataset_card.json       # format description
```
Alignment guarantee: every labeled frame index has files in all four of side_masks/, side_embeddings/, side_depth_info/, side_robot/. Files are keyed by the same integer frame index, so a loader can key off the mask directory and trust the rest to be present.
Pipeline
Collection. 30 Hz synchronous capture of side RGB + depth + robot state into episode_XX/ — no image processing or graph work happens here. Desktop is human teleop; Hanoi is autonomous, orchestrator samples one mission per episode (classical/single_ring/rearrange at 40/40/20) and writes metadata.json with goal_prompt, initial_state, target_state, solver_moves.
Auto-labeling. Separate offline step (Hanoi only in v3). python scripts/hanoi/auto_label.py <session_dir> produces the full v3 annotations/ tree. For each frame: HSV→bbox→SAM2 (Hanoi-FT checkpoint auto-loaded if present) → refined ring mask → 256-D pooled embedding → depth backprojection. Once per episode: detect grasp intervals in the gripper_pos trace, then symbolically unroll the constraint state from initial_state + solver_moves + held intervals — no per-frame ring detection in the image.
Verification / correction. bash scripts/run_annotator.sh --hanoi (or --desktop) opens the browser UI at localhost:8000 over labeled episodes. Per-frame edit with bbox / point / brush / eraser / polygon. Save writes back to the same annotations/side_masks/*.npz; the format is identical pre- and post-verification.
SAM2 FT retraining. scripts/sam2_finetune/collect_hanoi_samples.py pulls (RGB, mask, bbox) triples from any set of labeled episodes; scripts/sam2_finetune/train.py fine-tunes the SAM2 decoder and writes a domain-specific checkpoint (e.g. sam2_hanoi_ft.pt). auto_label.py auto-selects the checkpoint on its next run, closing the loop.
Desktop Disassembly Domain
Components (9 types)
Eight product types + one robot agent. Multiple instances (e.g. ram_1, ram_2) share the same 10-D type encoding and are disambiguated by SAM2 embedding + 3D position.
| Index | Type | Color | Notes |
|---|---|---|---|
| 0 | cpu_fan | #FF6B6B | Always visible at start |
| 1 | cpu_bracket | #4ECDC4 | Hidden at start (under fan) |
| 2 | cpu | #45B7D1 | Hidden at start |
| 3 | ram_clip | #96CEB4 | Multi-instance |
| 4 | ram | #FFEAA7 | Multi-instance |
| 5 | connector | #DDA0DD | Multi-instance |
| 6 | graphic_card | #FF8C42 | Always visible |
| 7 | motherboard | #8B5CF6 | Always visible (base) |
| 8 | robot | #F5F5F5 | Agent node (stored separately in side_robot/) |
Sparse constraint edges
Directed prerequisite relations — A -> B means "A must be removed before B can be removed":
```
cpu_fan      -> cpu_bracket   (fan covers bracket)
cpu_fan      -> motherboard
cpu_bracket  -> cpu
cpu_bracket  -> motherboard
cpu          -> motherboard
ram_N        -> motherboard
ram_clip_N   -> motherboard
ram_clip_N   -> ram_M         (user pairs manually)
connector_N  -> motherboard
graphic_card -> motherboard
```
Typical episode has 10-15 product nodes and 10-14 stored directed edges.
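A sketch of what these prerequisite edges mean operationally (the `removable` helper is illustrative, not part of the dataset tooling): a component is removable right now iff every stored edge pointing into it has its source already removed.

```python
# "A -> B" means A must be removed before B can be removed, so B is
# removable only when all of its in-edges' sources are already gone.
def removable(component, edges, removed):
    """edges: iterable of (src, dst) pairs; removed: set of removed ids."""
    if component in removed:
        return False
    return all(src in removed for src, dst in edges if dst == component)

# A subset of the Desktop prerequisite edges listed above.
EDGES = [
    ("cpu_fan", "cpu_bracket"), ("cpu_fan", "motherboard"),
    ("cpu_bracket", "cpu"), ("cpu_bracket", "motherboard"),
    ("cpu", "motherboard"),
]
```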
Node feature layout (270-D)
[0 : 256] SAM2 embedding (256) — masked avg pool over vision_features
[256 : 259] 3D position (3) — centroid in camera frame (meters)
[259 : 269] type encoding (10) — fixed 10-D vector from
config/type_encoding_<method>.yaml
(shared across domains)
[269] visibility (1) — 1 if visible this frame, else 0
Total: 270-D. The 10-D type slot is a deterministic encoding (NOT trained) — see "Fixed 10-D type encoding — how it's made" below.
Available Desktop episodes
| Session / Episode | Labeled frames | Goal |
|---|---|---|
| session_0408_162129/episode_00 | 346 | cpu_fan |
| session_0410_125013/episode_00 | 473 | cpu_fan |
| session_0410_125013/episode_01 | 525 | graphic_card |
Total: 1344 frames.
Tower of Hanoi Domain
Components (4 types) — rings only, no robot node in v1
Hanoi episodes use native ring IDs (ring_1 .. ring_4) in components and as npz keys — no desktop-proxy remapping, and no robot node in v1. type_vocab is ["ring_1", "ring_2", "ring_3", "ring_4"] (length 4). Robot segmentation is deferred; side_robot/*.npz is zero-filled per frame for format uniformity but never becomes a graph node.
Note on V2 vs V3 for Hanoi. V2 (with_robot — robot as graph node) requires a labeled robot mask/embedding and is therefore Desktop-only in v1. V3 (with_robot_state / with_robot_action) uses the 13-D robot_states.npy trace, which IS recorded for Hanoi too — so V3 loaders work for both domains.
| ID | Color | Disk size | Role |
|---|---|---|---|
| ring_1 | red (#E63946) | 32 mm | Smallest |
| ring_2 | yellow (#F1C40F) | 42 mm | — |
| ring_3 | green (#2ECC71) | 52 mm | — |
| ring_4 | blue (#2E86DE) | 62 mm | Largest |
Mask .npz files carry the literal keys ring_1, ring_2, ring_3, ring_4. No robot in type_vocab, no robot edges, no robot node appended at load time.
Mission kinds (40 / 40 / 20 sampling)
| Kind | Weight | Prompt template | Target |
|---|---|---|---|
| classical | 0.40 | "Solve the puzzle: stack all rings on peg X" | All 4 rings stacked in size order on one peg |
| single_ring | 0.40 | "Move the <color> ring to peg X" | One designated ring moved; others untouched |
| rearrange | 0.20 | "Rearrange: red on peg A, green on peg B, ..." | Uniformly sampled valid (larger-under-smaller) configuration |
Every Hanoi metadata.json records mission_kind, goal_prompt, initial_state, target_state, and solver_moves (the reference action sequence from the classical-Hanoi solver, one entry per pickup/release pair).
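The 40/40/20 sampling can be sketched in a few lines (the orchestrator script itself is not shipped with this card; this only illustrates the weights):

```python
# Illustrative sketch of the orchestrator's per-episode mission sampling
# at the 40/40/20 weights stated above.
import random

MISSION_KINDS = ["classical", "single_ring", "rearrange"]
WEIGHTS = [0.40, 0.40, 0.20]

def sample_mission(rng: random.Random) -> str:
    return rng.choices(MISSION_KINDS, weights=WEIGHTS, k=1)[0]
```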
Structural edges (static, always 6)
The 6 smaller → larger directed pairs are stored verbatim in side_graph.json:
```
ring_1 -> ring_2    ring_1 -> ring_3    ring_1 -> ring_4
ring_2 -> ring_3    ring_2 -> ring_4
ring_3 -> ring_4
```
At PyG load time the loader expands to 4 × 3 = 12 fully-connected directed edges. The reverse (larger → smaller) direction carries the same has_constraint / is_locked but flipped src_blocks_dst.
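The expansion can be sketched as follows (a simplified stand-in for the loader's edge builder, using the stored 6 pairs and the flip rule above):

```python
# Expand the 6 stored smaller->larger pairs into 4*3 = 12 directed edges.
# The reverse direction copies has_constraint / is_locked and flips
# src_blocks_dst, matching the edge_attr layout
# [has_constraint, is_locked, src_blocks_dst].
STORED = [("ring_1", "ring_2"), ("ring_1", "ring_3"), ("ring_1", "ring_4"),
          ("ring_2", "ring_3"), ("ring_2", "ring_4"), ("ring_3", "ring_4")]

def expand_edges(locked):
    """locked: dict "src->dst" -> bool for the 6 stored pairs."""
    edges = []
    for s, d in STORED:
        il = 1.0 if locked.get(f"{s}->{d}", False) else 0.0
        edges.append((s, d, [1.0, il, 1.0]))  # smaller -> larger: legal direction
        edges.append((d, s, [1.0, il, 0.0]))  # larger -> smaller: flipped bit
    return edges
```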
Per-frame is_locked semantics
is_locked = 1 on edge (A, B) iff A is currently the immediately-stacked ring on top of B on the same peg (adjacent in the peg-stack with A above B). Every other pair — non-adjacent on the same peg, on different pegs, or with either ring in transit — gets is_locked = 0. This is strictly "physical stacking right now," not "A must move before B."
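A minimal sketch of this rule, assuming pegs are modelled as bottom-to-top lists (a ring in transit is simply absent from every stack):

```python
# is_locked(A, B) = 1 iff A sits IMMEDIATELY on top of B in the same
# peg stack. Non-adjacent, cross-peg, and in-transit pairs get 0.
def locked_pairs(pegs):
    """pegs: dict peg -> list of ring ids, bottom first.
    Returns the set of (above, below) pairs that are locked this frame."""
    pairs = set()
    for stack in pegs.values():
        for below, above in zip(stack, stack[1:]):
            pairs.add((above, below))
    return pairs
```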
Held-ring rule (captures "constraint broken during transit")
When the robot holds a ring (gripper closed between grasp and release of that move), the ring is in transit and no longer touches any other ring. The auto-labeler flags held = 1 for that ring on every held frame, and every edge touching it gets is_locked = 0 — the constraint is physically broken mid-move. On release, the new adjacency emerges and that edge flips back to is_locked = 1.
Implementation: auto_label.py reads robot_states.npy[:, 12] (gripper position, Robotiq 2F-85, 0-255) and detects grasp intervals via baseline-mode thresholding (estimate "resting open" mode, threshold at baseline + margin, binary-close morphologically to bridge single-frame glitches). It then zips the resulting intervals with solver_moves in order — the k-th grasp interval is assigned to the k-th move. Validated on ep_00 (1 move, 1 interval), ep_01 (15 moves, 15 intervals), ep_02 (1 move, 1 interval). Per-frame held deltas are recorded as frame_states[f].held = {ring_id: True|False}.
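The baseline-mode thresholding step can be sketched as follows (the margin value and single-frame closing width are assumptions; the shipped auto_label.py may differ in detail):

```python
# Grasp detection on the gripper trace: estimate the "resting open" mode,
# threshold at baseline + margin, then bridge single-frame dropouts
# (a 1-frame binary closing) before extracting intervals.
import numpy as np

def grasp_intervals(gripper, margin=10):
    """gripper: (T,) Robotiq positions (0-255, higher = more closed).
    Returns [(start, end)] half-open frame intervals of closed gripper."""
    vals, counts = np.unique(gripper, return_counts=True)
    baseline = vals[np.argmax(counts)]          # "resting open" mode
    closed = gripper > baseline + margin
    for i in range(1, len(closed) - 1):         # bridge 1-frame glitches
        if closed[i - 1] and closed[i + 1]:
            closed[i] = True
    intervals, start = [], None
    for i, c in enumerate(closed):
        if c and start is None:
            start = i
        elif not c and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(closed)))
    return intervals
```

Zipping the k-th returned interval with the k-th entry of solver_moves then gives the per-move held spans described above.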
Rule 2 — "larger must never sit on smaller"
Encoded without a new feature via the edge's existing src_blocks_dst bit:
| Edge direction | src_blocks_dst | Meaning |
|---|---|---|
| smaller → larger (e.g. ring_1 -> ring_3) | 1 | Legal — smaller may rest on larger |
| larger → smaller (e.g. ring_3 -> ring_1) | 0 | Illegal — larger may not rest on smaller |
Three dimension-preserving ways the world model can respect Rule 2:
| Method | Where | One-liner | Guarantee |
|---|---|---|---|
| Training loss | objective | `λ * (pred_is_locked * (1 - src_blocks_dst)).sum()` | Soft (shapes distribution) |
| Rollout mask | inference | Reject any predicted is_locked = 1 where src_blocks_dst = 0 | Hard (eliminates illegal) |
| Dataset invariant | this spec | is_locked is never 1 on a larger→smaller edge in any training frame | Hard (on training distribution) |
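The training-loss row can be written out directly from its one-liner (a hedged numpy sketch; `lam` stands in for the loss weight λ, and in practice `pred_is_locked` would be a model output):

```python
# Rule-2 penalty, following the one-liner verbatim: penalise predicted
# locks on edges whose src_blocks_dst bit is 0.
import numpy as np

def rule2_penalty(pred_is_locked, edge_attr, lam=1.0):
    """pred_is_locked: (E,) values in [0, 1]; edge_attr: (E, 3) laid out
    as [has_constraint, is_locked, src_blocks_dst]."""
    return lam * (pred_is_locked * (1.0 - edge_attr[:, 2])).sum()
```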
Node feature layout (270-D)
[0 : 256] SAM2 embedding (256)
[256 : 259] 3D position (3)
[259 : 269] type encoding (10) — fixed 10-D vector from
config/type_encoding_<method>.yaml
(shared with Desktop)
[269] visibility (1)
Total: 270-D — identical to Desktop. The 10-D encoding is domain-independent; unknown/unlisted types encode to a zero vector.
Mission metadata saved per episode
Every Hanoi side_graph.json carries goal_prompt, mission_kind, and target_state in addition to the fields shared with Desktop. Per-frame transitions (grasps, releases, re-stacks) are recorded as deltas in frame_states[f] with constraints, visibility, and held sub-dicts.
Hanoi episodes available
| Session / Episode | Frames | mission_kind | goal_prompt | Moves |
|---|---|---|---|---|
| session_hanoi_0415_190808/episode_00 | 494 | single_ring | "Move the red ring to peg B" | 1 |
| session_hanoi_0415_190808/episode_01 | 6719 | classical | "Solve the puzzle: stack all rings on peg C" | 15 |
| session_hanoi_0415_190808/episode_02 | 266 | single_ring | "Move the red ring to peg B" | 1 |
Total: 7479 frames.
Shared: PyG edge feature semantics (3-D, both domains)
`edge_attr[k] = [has_constraint, is_locked, src_blocks_dst]`

| has_constraint | is_locked | src_blocks_dst | Meaning |
|---|---|---|---|
| 0 | 0 | 0 | No physical constraint — message passing only. Used for: robot ↔ anything; Hanoi larger → smaller (non-edge at the pair level) |
| 1 | 1 | 1 | Constraint active, src is the blocker (physical Desktop) / src rests on top (physical Hanoi) |
| 1 | 1 | 0 | Same pair, reverse direction — src is the blocked / src is underneath |
| 1 | 0 | 1 | Constraint released, src was the blocker / legal rest direction with no contact right now |
| 1 | 0 | 0 | Same released pair, reverse direction |
Symmetry invariants: has_constraint and is_locked are symmetric per unordered pair (same value for (i, j) and (j, i)). src_blocks_dst flips between the two directions. Robot ↔ anything edges are always [0, 0, 0].
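These invariants are cheap to validate on any loaded graph; a sketch (the `check_symmetry` helper is illustrative, plain numpy, no PyG dependency):

```python
# Validate the symmetry invariants: has_constraint and is_locked equal
# across (i, j)/(j, i); src_blocks_dst flipped on constrained pairs.
import numpy as np

def check_symmetry(edge_index, edge_attr):
    """edge_index: (2, E) int array; edge_attr: (E, 3) float array."""
    attr = {(int(s), int(d)): edge_attr[k]
            for k, (s, d) in enumerate(zip(edge_index[0], edge_index[1]))}
    for (i, j), a in attr.items():
        b = attr[(j, i)]
        assert a[0] == b[0] and a[1] == b[1], f"asymmetric pair {(i, j)}"
        if a[0] == 1.0:  # constrained pair: the directions disagree on the bit
            assert a[2] + b[2] == 1.0, f"src_blocks_dst not flipped {(i, j)}"
    return True
```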
Shared: Fixed 10-D type encoding — how it's made
Across both domains the component-type universe is 13 types (the two vocabularies unioned):
cpu_fan, cpu_bracket, cpu, ram_clip, ram, connector, graphic_card, motherboard,
ring_1, ring_2, ring_3, ring_4, robot
Each type is assigned a fixed 10-D vector. The encoding is NOT trained — it is a deterministic lookup read from a YAML at load time, so any consumer of the dataset gets the exact same node features bit-for-bit. Two methods are provided; both YAMLs live at the dataset repo root alongside the session directories:
| Method | YAML file | How vectors are built | Semantic structure |
|---|---|---|---|
| random | config/type_encoding_random.yaml | numpy.random.default_rng(42) unit-norm 10-vectors, one per type | None — vectors are orthogonal-ish noise |
| clip | config/type_encoding_clip.yaml | CLIP ViT-B/32 text embedding of a humanised prompt (e.g. "a CPU fan", "a small red ring") → PCA to 10 → unit-normalise | Related types cluster (the four rings are close; the fan/bracket/cpu cluster is tight) |
Unknown type → 10-D zero vector. If a component's type is not in the YAML, the loader returns np.zeros(10, dtype=np.float32) for that slot. This keeps node dim at 270 regardless of vocabulary drift.
To reproduce or extend: download whichever YAML you want from the dataset repo root, load it with yaml.safe_load, and look up each component's type. The loader code below shows the full pattern.
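For the `random` method, the construction can be sketched as below. Note this is illustrative: the exact draw order used to produce the shipped YAML is not documented here, so do not assume byte-identical vectors; always prefer loading the published YAML.

```python
# Seeded unit-norm 10-vectors, one per type, over the 13-type union.
import numpy as np

TYPES = ["cpu_fan", "cpu_bracket", "cpu", "ram_clip", "ram", "connector",
         "graphic_card", "motherboard",
         "ring_1", "ring_2", "ring_3", "ring_4", "robot"]

def make_random_encoding(dim=10, seed=42):
    rng = np.random.default_rng(seed)
    table = {}
    for t in TYPES:
        v = rng.standard_normal(dim)
        table[t] = (v / np.linalg.norm(v)).astype(np.float32)
    return table
```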
Shared: PyG loader — self-contained Python
Prerequisites
```
pip install torch numpy torch_geometric pillow pyyaml
```
Save as gnn_world_model_loader.py
The key design property: node_dim = 256 + 3 + 10 + 1 = 270 for both domains. The 10-D type slot comes from the fixed YAML encoding (loaded once), so there's no domain branching — Desktop, Hanoi, and any future vocabulary all produce 270-D nodes.
```python
import json
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
from typing import Dict, List, Optional

import numpy as np
import torch
import yaml
from torch_geometric.data import Data

# ---------- constants ----------
TYPE_ENCODING_DIM = 10   # fixed, domain-independent
SAM2_EMB_DIM = 256
POS_DIM = 3
VIS_DIM = 1
NODE_DIM = SAM2_EMB_DIM + POS_DIM + TYPE_ENCODING_DIM + VIS_DIM  # = 270
ROBOT_STATE_DIM = 13     # [j0..j5, tcp_x, tcp_y, tcp_z, tcp_rx, tcp_ry, tcp_rz, gripper_pos]

# ---------- fixed type encoding ----------
# Download once from the dataset repo root:
#   config/type_encoding_random.yaml  (seeded numpy unit vectors, seed=42)
#   config/type_encoding_clip.yaml    (CLIP ViT-B/32 text → PCA(10) → unit-norm)
# Point TYPE_ENCODING_ROOT at wherever you saved them.
TYPE_ENCODING_ROOT = Path("./config")


@lru_cache(maxsize=4)
def load_type_encoding(encoding_method: str = "random") -> Dict[str, np.ndarray]:
    """Load the fixed 10-D per-type encoding from YAML. Cached across calls."""
    path = TYPE_ENCODING_ROOT / f"type_encoding_{encoding_method}.yaml"
    with open(path) as f:
        raw = yaml.safe_load(f)
    return {k: np.asarray(v, dtype=np.float32) for k, v in raw.items()}


def type_encode(comp_type: str, encoding_method: str = "random") -> np.ndarray:
    """Return 10-D vector for `comp_type`; zeros for unknown types."""
    table = load_type_encoding(encoding_method)
    vec = table.get(comp_type)
    if vec is None:
        return np.zeros(TYPE_ENCODING_DIM, dtype=np.float32)
    return vec.astype(np.float32)


# ---------- file helpers ----------
def list_labeled_frames(episode_dir: Path) -> List[int]:
    mask_dir = episode_dir / "annotations" / "side_masks"
    if not mask_dir.exists():
        return []
    frames = []
    for p in mask_dir.glob("frame_*.npz"):
        try:
            frames.append(int(p.stem.split("_")[1]))
        except (ValueError, IndexError):
            continue
    return sorted(frames)


def resolve_frame_state(graph_json: dict, frame_idx: int):
    """Replay frame_states deltas up to frame_idx onto the static graph."""
    constraints, visibility = {}, {}
    for c in graph_json["components"]:
        visibility[c["id"]] = True
    for e in graph_json["edges"]:
        constraints[f"{e['src']}->{e['dst']}"] = True
    fs_dict = graph_json.get("frame_states", {})
    for f in sorted(int(k) for k in fs_dict):
        if f > frame_idx:
            break
        fs = fs_dict[str(f)]
        for k, v in fs.get("constraints", {}).items():
            constraints[k] = v
        for k, v in fs.get("visibility", {}).items():
            visibility[k] = v
    return constraints, visibility


@dataclass
class FrameData:
    graph: dict
    masks: dict
    embeddings: dict
    depth_info: dict
    robot: Optional[dict]
    constraints: dict
    visibility: dict


def load_frame_data(episode_dir, frame_idx):
    anno = Path(episode_dir) / "annotations"
    with open(anno / "side_graph.json") as f:
        graph = json.load(f)

    def _npz(p):
        if not p.exists():
            return {}
        d = np.load(p)
        return {k: d[k] for k in d.files}

    masks = _npz(anno / "side_masks" / f"frame_{frame_idx:06d}.npz")
    embeddings = _npz(anno / "side_embeddings" / f"frame_{frame_idx:06d}.npz")
    depth_info = _npz(anno / "side_depth_info" / f"frame_{frame_idx:06d}.npz")
    robot = None
    rp = anno / "side_robot" / f"frame_{frame_idx:06d}.npz"
    if rp.exists():
        r = np.load(rp)
        if r["visible"][0] == 1:
            robot = {k: r[k] for k in r.files}
    constraints, visibility = resolve_frame_state(graph, frame_idx)
    return FrameData(graph, masks, embeddings, depth_info, robot, constraints, visibility)


def _build_product_node_features(nodes, fd, encoding_method):
    feats = []
    for node in nodes:
        cid = node["id"]
        emb = fd.embeddings.get(cid, np.zeros(SAM2_EMB_DIM, dtype=np.float32))
        dvk = f"{cid}_depth_valid"
        ck = f"{cid}_centroid"
        if dvk in fd.depth_info and int(fd.depth_info[dvk][0]) == 1:
            pos = fd.depth_info[ck].astype(np.float32)
        else:
            pos = np.zeros(POS_DIM, dtype=np.float32)
        vis = 1.0 if fd.visibility.get(cid, True) else 0.0
        if vis == 0.0:
            emb = np.zeros(SAM2_EMB_DIM, dtype=np.float32)
            pos = np.zeros(POS_DIM, dtype=np.float32)
        feats.append(np.concatenate([
            emb.astype(np.float32),
            pos,
            type_encode(node["type"], encoding_method),
            np.array([vis], dtype=np.float32),
        ]))
    if not feats:
        return torch.empty((0, NODE_DIM), dtype=torch.float32)
    return torch.tensor(np.stack(feats), dtype=torch.float32)


def _build_product_edges(nodes, graph, fd):
    """Fully-connected directed expansion with 3-D edge_attr per edge."""
    N = len(nodes)
    constraint_set = {(e["src"], e["dst"]) for e in graph["edges"]}
    pair_forward = {frozenset([s, d]): (s, d) for s, d in constraint_set}
    src_idx, dst_idx, edge_attr = [], [], []
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            src_id, dst_id = nodes[i]["id"], nodes[j]["id"]
            src_idx.append(i)
            dst_idx.append(j)
            key = frozenset([src_id, dst_id])
            if key in pair_forward:
                fwd = pair_forward[key]
                is_locked = fd.constraints.get(f"{fwd[0]}->{fwd[1]}", True)
                sb = 1.0 if src_id == fwd[0] else 0.0
                edge_attr.append([1.0, 1.0 if is_locked else 0.0, sb])
            else:
                edge_attr.append([0.0, 0.0, 0.0])
    return src_idx, dst_idx, edge_attr


# ---------- 1) products-only (Option 1: direct graph encoding) ----------
def load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    nodes = fd.graph["components"]
    x = _build_product_node_features(nodes, fd, encoding_method)
    src, dst, ea = _build_product_edges(nodes, fd.graph, fd)
    return Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=len(nodes),
    )


# ---------- 2) V2 ablation: robot as graph NODE (Desktop only) ----------
def load_pyg_frame_with_robot(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    # Hanoi has no robot mask/embedding in v1 → fall back to products-only.
    if fd.robot is None:
        return load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)
    products = fd.graph["components"]
    N_prod = len(products)
    N = N_prod + 1
    x_prod = _build_product_node_features(products, fd, encoding_method)
    robot_emb = fd.robot["embedding"].astype(np.float32)
    robot_pos = (fd.robot["centroid"].astype(np.float32)
                 if int(fd.robot["depth_valid"][0]) == 1
                 else np.zeros(POS_DIM, dtype=np.float32))
    robot_feat = np.concatenate([
        robot_emb,
        robot_pos,
        type_encode("robot", encoding_method),
        np.array([1.0], dtype=np.float32),
    ])
    x = torch.cat([x_prod, torch.tensor(robot_feat, dtype=torch.float32).unsqueeze(0)], dim=0)
    src, dst, ea = _build_product_edges(products, fd.graph, fd)
    robot_idx = N_prod
    for i in range(N_prod):
        src.append(robot_idx); dst.append(i); ea.append([0.0, 0.0, 0.0])
        src.append(i); dst.append(robot_idx); ea.append([0.0, 0.0, 0.0])
    data = Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=N,
    )
    data.robot_point_cloud = torch.tensor(fd.robot["point_cloud"], dtype=torch.float32)
    data.robot_pixel_coords = torch.tensor(fd.robot["pixel_coords"], dtype=torch.int32)
    data.robot_mask = torch.tensor(fd.robot["mask"], dtype=torch.uint8)
    return data


# ---------- 3) V3 recommended: products graph + robot_state side-tensor ----------
def load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")  # (T, 13) float32
    rs = robot_states[frame_idx].astype(np.float32)  # 13-D
    data.robot_state = torch.tensor(rs, dtype=torch.float32)
    return data


# ---------- 4) V3 action-conditioned: + robot_action delta ----------
def load_pyg_frame_with_robot_action(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")  # (T, 13)
    T = robot_states.shape[0]
    if frame_idx + 1 < T:
        action = robot_states[frame_idx + 1] - robot_states[frame_idx]
    else:
        action = np.zeros(ROBOT_STATE_DIM, dtype=np.float32)
    data.robot_action = torch.tensor(action.astype(np.float32), dtype=torch.float32)
    return data
```
Usage examples
All four loaders share the signature (episode_dir, frame_idx, encoding_method="random"). Swap "random" for "clip" to use the CLIP-derived encoding instead.
Desktop V1 — 15 product nodes, 270-D features, fully-connected edges (15×14 = 210):

```python
from pathlib import Path
from gnn_world_model_loader import load_pyg_frame_products_only

episode = Path("session_0408_162129/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3])
```

Desktop V3 (recommended) — same graph + 13-D robot_state side-tensor:

```python
from gnn_world_model_loader import load_pyg_frame_with_robot_state

data = load_pyg_frame_with_robot_state(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3], robot_state=[13])
```

Desktop V3 action-conditioned — adds 13-D delta for the next frame:

```python
from gnn_world_model_loader import load_pyg_frame_with_robot_action

data = load_pyg_frame_with_robot_action(episode, frame_idx=42)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3],
#        robot_state=[13], robot_action=[13])
```

Hanoi V1 — 4 ring nodes, 270-D features, 12 fully-connected edges:

```python
episode = Path("hanoi/session_hanoi_0415_190808/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3])
```

Hanoi V3 (recommended) — V3 works for Hanoi too because robot_states.npy is recorded for every episode:

```python
data = load_pyg_frame_with_robot_state(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3], robot_state=[13])
```
V2 note. load_pyg_frame_with_robot falls back to load_pyg_frame_products_only on Hanoi (no robot mask), so for Hanoi, V1 and V2 return identical graphs. On Desktop, V2 attaches the robot as a 16th node (x shape becomes [16, 270]).
Shared: common v3 file schemas
side_graph.json
```jsonc
{
  "episode_id": "episode_00",
  "goal_component": "ring_1",        // Desktop: a product id; Hanoi: a ring id
  "view": "side",
  "components": [
    {"id": "ring_1", "type": "ring_1", "color": "#FF0000"}
  ],
  "edges": [
    {"src": "ring_1", "dst": "ring_3", "directed": true}
  ],
  "frame_states": {
    "0":   {"constraints": {"ring_1->ring_3": true}, "visibility": {"ring_1": true}, "held": {}},
    "120": {"constraints": {"ring_1->ring_3": false}, "held": {"ring_1": true}}
  },
  "node_positions": {"ring_1": [640, 360]},
  "type_vocab": ["ring_1", "ring_2", "ring_3", "ring_4"],  // Hanoi v1 — no robot
  "embedding_dim": 256,
  "feature_extractor": "sam2.1_hiera_base_plus",
  // Hanoi-only extras:
  "goal_prompt": "Move the red ring to peg B",
  "mission_kind": "single_ring",
  "target_state": {"peg_A": [], "peg_B": ["ring_1"], "peg_C": []}
}
```
side_depth_info/frame_XXXXXX.npz — 7 flat keys per component
| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| {cid}_point_cloud | (N, 3) | float32 | 3D points in camera frame (m). (0, 3) if no valid depth |
| {cid}_pixel_coords | (N, 2) | int32 | (u, v) of valid depth pixels |
| {cid}_raw_depths_mm | (N,) | uint16 | Filtered to [50, 2000] mm |
| {cid}_centroid | (3,) | float32 | Mean of point_cloud; [0, 0, 0] if invalid |
| {cid}_bbox_2d | (4,) | int32 | [x1, y1, x2, y2] from mask |
| {cid}_area | (1,) | int32 | Mask pixel count |
| {cid}_depth_valid | (1,) | uint8 | 1 if N > 0, else 0 |
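The depth keys above come from a standard pinhole backprojection; a hedged sketch using the OAK-D Pro intrinsics listed under "Recording hardware" (the exact filtering in the shipped pipeline may differ slightly):

```python
# Backproject masked depth pixels into camera-frame 3D points (metres),
# keeping only depths in the stated [50, 2000] mm range.
import numpy as np

FX, FY, CX, CY = 1033.8, 1033.7, 632.9, 359.9

def backproject(depth_mm, mask, d_min=50, d_max=2000):
    """depth_mm: (H, W) uint16; mask: (H, W) uint8.
    Returns (N, 3) points (m), (N, 2) pixel coords (u, v), raw depths (mm)."""
    v, u = np.nonzero(mask)
    d = depth_mm[v, u]
    ok = (d >= d_min) & (d <= d_max)
    u, v, d = u[ok], v[ok], d[ok]
    z = d.astype(np.float32) / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=1).astype(np.float32)
    return pts, np.stack([u, v], axis=1).astype(np.int32), d
```

The centroid key is then just `pts.mean(axis=0)` when N > 0.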
side_robot/frame_XXXXXX.npz — always 10 keys
| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| visible | (1,) | uint8 | 1 if robot labeled, 0 otherwise |
| mask | (H, W) | uint8 | Binary mask |
| embedding | (256,) | float32 | SAM2 256-D embedding |
| point_cloud | (N, 3) | float32 | 3D points (m) |
| pixel_coords | (N, 2) | int32 | (u, v) |
| raw_depths_mm | (N,) | uint16 | mm |
| centroid | (3,) | float32 | Mean of point cloud |
| bbox_2d | (4,) | int32 | From mask |
| area | (1,) | int32 | Pixel count |
| depth_valid | (1,) | uint8 | 1 if N > 0, else 0 |
Recording hardware
UR5e + Robotiq 2F-85 gripper; static-mounted Luxonis OAK-D Pro side view with intrinsics fx = 1033.8, fy = 1033.7, cx = 632.9, cy = 359.9; recording at 30 Hz, 1280 × 720 RGB and uint16 depth (mm) filtered to [50, 2000].
License
Released under CC BY 4.0. Use, share, and adapt freely with attribution.
Acknowledgements