# BEHAVIOR-1K GR00T Action Trajectories
Pre-recorded action trajectories from the `nvidia/GR00T-N1.6-BEHAVIOR1k` policy, evaluated on BEHAVIOR-1K tasks in the OmniGibson simulator.
These trajectories enable noise-free third-person video rendering on RTX GPU machines without needing the GR00T model for inference.
## Background
When running BEHAVIOR-1K simulations on data-center GPUs (H100/A100) in headless mode, the OptiX denoiser does not function correctly, resulting in noisy path-tracing renders. RTX GPUs with display output can properly utilize the OptiX denoiser for clean, production-quality renders.
This repository provides:
- Saved action trajectories from GR00T policy episodes (captured on an H100)
- A replay script that re-executes trajectories with high-quality rendering (for RTX GPUs)
- A capture script for recording new trajectories on headless servers
## Repository Structure

```
.
├── README.md
├── scripts/
│   ├── capture_trajectory.py   # Capture actions from GR00T server
│   ├── capture_trajectory.sh   # Shell launcher for capture
│   ├── replay_trajectory.py    # Replay actions with HQ rendering (self-contained)
│   └── replay_trajectory.sh    # Shell launcher for replay
└── trajectories/
    └── picking_up_trash/
        └── trajectory_*.npz    # Saved action trajectories
```
## Trajectory Data Format

Each `.npz` file contains:

| Key | Shape | Description |
|---|---|---|
| `actions` | (N, 32, 23) | Action array: N multi-steps, 32-step action horizon, 23 action dims |
| `task_name` | scalar | BEHAVIOR-1K task name (e.g., `picking_up_trash`) |
| `n_action_steps` | scalar | Multi-step chunk size used by the env wrapper (8) |
| `max_episode_steps` | scalar | Max episode length (720) |
| `n_steps` | scalar | Actual number of multi-steps taken |
| `success` | scalar | Whether the task was completed successfully |
| `task_progress` | scalar | Task progress percentage (0-100) |
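As a quick sketch of this format, the snippet below writes a dummy `.npz` with the documented keys and reads it back. All values are illustrative, not taken from a real episode:

```python
import numpy as np

# Build a dummy trajectory in the documented format (values are illustrative).
N = 90  # multi-steps actually taken
np.savez(
    "trajectory_demo.npz",
    actions=np.zeros((N, 32, 23), dtype=np.float32),
    task_name="picking_up_trash",
    n_action_steps=8,
    max_episode_steps=720,
    n_steps=N,
    success=False,
    task_progress=23.1,
)

# Read it back and inspect the documented keys.
data = np.load("trajectory_demo.npz", allow_pickle=True)
print(data["actions"].shape, int(data["n_steps"]), str(data["task_name"]))
```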
## Action Space (23-dim)

| Component | Indices | Dims | Description |
|---|---|---|---|
| `action.base` | 0:3 | 3 | Base movement (vx, vy, vz) |
| `action.torso` | 3:7 | 4 | Torso/trunk control |
| `action.left_arm` | 7:14 | 7 | Left arm joint angles |
| `action.left_gripper` | 14:15 | 1 | Left gripper |
| `action.right_arm` | 15:22 | 7 | Right arm joint angles |
| `action.right_gripper` | 22:23 | 1 | Right gripper |
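The index table above maps directly onto plain Python slices. The `SLICES` dict below is a hypothetical helper for splitting a 23-dim action vector, not part of the repo's scripts:

```python
import numpy as np

# Slices for the 23-dim action vector, per the table above (hypothetical helper).
SLICES = {
    "base": slice(0, 3),
    "torso": slice(3, 7),
    "left_arm": slice(7, 14),
    "left_gripper": slice(14, 15),
    "right_arm": slice(15, 22),
    "right_gripper": slice(22, 23),
}

action = np.arange(23, dtype=np.float32)  # dummy action vector
parts = {name: action[s] for name, s in SLICES.items()}
print({name: len(v) for name, v in parts.items()})
```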
## Quick Start: Replay on RTX GPU

### Prerequisites

1. **Isaac-GR00T** repository (for env wrapper imports):

   ```bash
   git clone https://github.com/NVIDIA-Omniverse/Isaac-GR00T.git
   cd Isaac-GR00T && pip install -e .
   ```

2. **BEHAVIOR-1K / OmniGibson** (for simulation):

   ```bash
   # Follow the OmniGibson installation guide:
   # https://behavior.stanford.edu/omnigibson/getting_started/installation.html
   ```

3. **Python dependencies** (in the BEHAVIOR-1K venv):

   ```bash
   pip install av numpy gymnasium
   ```
### Replay Saved Trajectories

1. Edit paths in `scripts/replay_trajectory.sh`:

   ```bash
   BEHAVIOR_PYTHON=/path/to/BEHAVIOR-1K/.venv/bin/python
   GROOT_DIR=/path/to/Isaac-GR00T
   ```

2. Run the replay:

   ```bash
   # For an RTX GPU with display (best quality: full OptiX denoiser support):
   bash scripts/replay_trajectory.sh trajectories/picking_up_trash output_videos

   # For headless mode:
   OMNIGIBSON_HEADLESS=1 bash scripts/replay_trajectory.sh trajectories/picking_up_trash output_videos
   ```

3. Output videos will be in:
   - `output_videos/thirdperson/*.mp4`: third-person camera view (1280x720)
   - `output_videos/robotview/*.mp4`: robot-mounted camera view
### Direct Python Usage

```bash
# Replay with custom settings
PYTHONPATH=/path/to/Isaac-GR00T python scripts/replay_trajectory.py \
    --trajectory_dir trajectories/picking_up_trash \
    --output_dir my_videos \
    --task picking_up_trash \
    --n_render_iterations 20 \
    --spp 1
```
## Capture New Trajectories (on H100/A100)
To record trajectories from the GR00T policy on a headless server:
### Prerequisites

- GR00T model weights: `nvidia/GR00T-N1.6-BEHAVIOR1k`
- Isaac-GR00T with the GR00T server: `gr00t/eval/run_gr00t_server.py`
- BEHAVIOR-1K / OmniGibson environment
### Run Capture

```bash
# Edit paths in capture_trajectory.sh, then:
bash scripts/capture_trajectory.sh
```
This will:
- Start the GR00T inference server on GPU 0
- Run the BEHAVIOR-1K environment on GPU 1
- Execute one episode and save action trajectories
- Output to `trajectories/<task_name>/trajectory_*.npz`
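Once episodes are captured, a few lines of NumPy are enough to summarize them. This is a hypothetical helper, not one of the repo's scripts; the dummy file at the top exists only to make the example self-contained:

```python
import glob
import os

import numpy as np

# Create one dummy episode so the scan below finds something (illustrative data).
os.makedirs("trajectories/picking_up_trash", exist_ok=True)
np.savez("trajectories/picking_up_trash/trajectory_000.npz",
         n_steps=90, success=False, task_progress=23.1)

# Summarize every captured episode under trajectories/<task_name>/.
rows = []
for path in sorted(glob.glob("trajectories/*/trajectory_*.npz")):
    d = np.load(path, allow_pickle=True)
    rows.append((os.path.basename(path), int(d["n_steps"]),
                 bool(d["success"]), float(d["task_progress"])))

for name, n_steps, success, progress in rows:
    print(f"{name}: {n_steps} steps, success={success}, progress={progress:.1f}%")
```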
### Capture Custom Tasks

```bash
PYTHONPATH=/path/to/Isaac-GR00T python scripts/capture_trajectory.py \
    --task turning_on_radio \
    --output_dir trajectories/turning_on_radio \
    --policy_client_host 127.0.0.1 \
    --policy_client_port 5555 \
    --max_episode_steps 720 \
    --n_action_steps 8
```
## Renderer Configuration
The replay script automatically configures the RTX Path Tracing renderer for maximum quality:
| Setting | Value | Purpose |
|---|---|---|
| Render mode | Path Tracing (2) | Physically-based rendering |
| OptiX denoiser | ON, blendFactor=0.0 | Full AI denoising |
| Temporal denoiser | ON | Reduces inter-frame flicker |
| SPP | 1 per render() call | Accumulates via iterations |
| clampSpp | 0 (disabled) | No SPP capping |
| totalSpp | 0 (infinite) | Allow unlimited accumulation |
| n_render_iterations | 20 | Extra render passes per frame |
| Firefly filter | ON | Clamps bright noise speckles |
| Adaptive sampling | ON (0.001 target) | Concentrates samples on noisy regions |
| Max bounces | 8 | Good global illumination |
| ENABLE_HQ_RENDERING | True | OmniGibson high-quality pipeline |
## Known Limitations
- H100/A100 headless: OptiX denoiser does not function correctly (OmniGibson Issue #1875). Use RTX GPU with display for best results.
- DLSS-RR: Not supported on H100 ("known bugs with this GPU").
- Physics determinism: Action replay may produce slightly different results on different hardware due to floating-point precision differences in PhysX. Visual quality is unaffected.
- OmniGibson segfaults: segmentation faults during environment cleanup are expected and can be safely ignored.
## Available Trajectories
| Task | Episodes | Steps | Success | Task Progress |
|---|---|---|---|---|
| picking_up_trash | 1 | 90 | 0% | 23.1% |
## Environment Details
- Robot: R1Pro (28 DOF)
- Simulator: OmniGibson (Isaac Sim based)
- Model: nvidia/GR00T-N1.6-BEHAVIOR1k
- Embodiment: BEHAVIOR_R1_PRO
- Episode length: Max 720 multi-steps (each = 8 physics sub-steps)
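The multi-step semantics can be sketched in a few lines. The loop below assumes the env wrapper executes only the first `n_action_steps` actions of each predicted 32-step chunk (an assumption based on the chunk size in the trajectory metadata); `replay` and the dummy `env_step` are illustrative, not the repo's actual replay code:

```python
import numpy as np

def replay(actions, env_step, n_action_steps=8):
    """Replay saved action chunks; `actions` has shape (N, horizon, 23).

    Assumption: only the first n_action_steps of each predicted horizon
    are executed, matching the chunk size in the trajectory metadata.
    """
    executed = 0
    for chunk in actions:                 # one multi-step
        for a in chunk[:n_action_steps]:
            env_step(a)                   # one physics sub-step
            executed += 1
    return executed

# Dummy env that ignores actions: 5 multi-steps x 8 sub-steps = 40 calls.
count = replay(np.zeros((5, 32, 23), dtype=np.float32), env_step=lambda a: None)
print(count)
```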
## Citation

```bibtex
@misc{groot_behavior1k_trajectories,
  title={BEHAVIOR-1K GR00T Action Trajectories for Offline Rendering},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/xpuenabler/BEHAVIOR-1K-GROOT-trajectories}
}
```
## License
This repository contains action trajectory data and rendering scripts. The GR00T model and BEHAVIOR-1K assets are subject to their respective licenses.