# openpi-pi05-franka-insert-marker-v2-dagger-r3-ft-delta-h200x4
A Pi0.5 model fine-tuned using the OpenPI framework.
## Model Details
| Property | Value |
|---|---|
| OpenPI config | pi05_sir_droid_finetune |
| Checkpoint step | 1999 |
| Training data | N/A |
| Precision | bfloat16 |
| Parameter size | ~6.7 GB |
| Source checkpoint | /work/hdd/bgdg/lankile/sir-workspace/real-world-worktree/deps/openpi/checkpoints/pi05_sir_droid_finetune/sir_v3_full_2k_delta_h200x4/1999 |
| Hugging Face repo | ankile/openpi-pi05-franka-insert-marker-v2-dagger-r3-ft-delta-h200x4 |
| W&B run | link |
| SLURM job ID | 16157349 |
## Usage

### Download and run inference
```bash
# Download checkpoint from HF Hub
huggingface-cli download ankile/openpi-pi05-franka-insert-marker-v2-dagger-r3-ft-delta-h200x4 --local-dir <local_path>

# Run inference server
cd deps/openpi
uv run python scripts/serve_policy.py pi05_sir_droid_finetune \
    --checkpoint-dir <local_path>
```
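Once the server is running, a client process can query it over websockets. The sketch below assumes the `openpi_client` package that ships with the OpenPI repo and its `WebsocketClientPolicy` interface; the host, port, and observation keys are placeholders rather than values from this model card.

```python
# Minimal client sketch (assumes the openpi_client package from the OpenPI repo;
# host, port, and observation keys are placeholders, not part of this model card).
import numpy as np
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

# Hypothetical observation keys/shapes; replace with the inputs expected by the
# pi05_sir_droid_finetune config (see the normalization stats under assets/).
obs = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),
    "observation/state": np.zeros(8, dtype=np.float32),
    "prompt": "insert the marker",
}
action_chunk = client.infer(obs)["actions"]
```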
### In-process inference (Python)
```python
from openpi.training import config as openpi_config
from openpi.policies import policy_config as openpi_policy_config

# Rebuild the training config used for fine-tuning and load the trained policy.
train_config = openpi_config.get_config("pi05_sir_droid_finetune")
policy = openpi_policy_config.create_trained_policy(
    train_config, "<local_path>"
)

# obs_dict must contain the observations this config expects
# (see assets/ for the normalization stats describing the inputs).
result = policy.infer(obs_dict)
actions = result["actions"]
```
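If you prefer to fetch the checkpoint from Python rather than the CLI, the `huggingface_hub` library's `snapshot_download` can be used; this is a sketch assuming default cache behavior.

```python
# Sketch: download the checkpoint with the huggingface_hub Python API instead of the CLI.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    "ankile/openpi-pi05-franka-insert-marker-v2-dagger-r3-ft-delta-h200x4"
)
# Pass local_path as the checkpoint directory to create_trained_policy(...) above.
```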
## Checkpoint Format

Orbax format, with all parameters stored in bfloat16.
```
├── _CHECKPOINT_METADATA
├── checkpoint_provenance.json
├── openpi_config.json
├── assets/
│   └── (normalization stats)
└── params/
    └── (orbax checkpoint files)
```
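The two JSON files at the top level can be inspected directly to check which config and run produced the checkpoint. A minimal sketch, assuming only the file names shown in the tree above (their JSON structure is not documented here):

```python
# Sketch: inspect the checkpoint's config and provenance metadata.
# File names come from the tree above; the JSON structure is an assumption.
import json
from pathlib import Path

ckpt = Path("<local_path>")
config = json.loads((ckpt / "openpi_config.json").read_text())
provenance = json.loads((ckpt / "checkpoint_provenance.json").read_text())
print(sorted(config.keys()))
print(provenance)
```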
## License
Apache 2.0