---
language:
- en
license: mit
tags:
- MultiTaskDiT
- LeRobot
- robotics
- imitation-learning
- diffusion
- so101
pipeline_tag: reinforcement-learning
library_name: lerobot
---

# LeRobot SO101 MultiTaskDiT task3-all_bs128_s30000

## Summary

This repository contains the final checkpoint for a MultiTask DiT policy trained on `aswinkumar99/task3-all` for SO101 sponge pick-and-place experiments.

The dataset covers Task 3: single sponge with distractors, across all layouts.

This model was trained with the LeRobot `multi_task_dit` policy and a diffusion objective. It is not fine-tuned from a published base checkpoint.

## Training Setup

- Dataset repo: `aswinkumar99/task3-all`
- Local dataset root during training: `/home/riftuser/datasets_combined/aswinkumar99/task3-all`
- Output directory during training: `/home/riftuser/outputs_matrix/multi_task_dit/task3-all_bs128_s30000`
- Batch size: `128`
- Training steps: `30000`
- Checkpoint save frequency: `5000`
- Data loader workers: `8`
- WandB project: `so101-layout-generalization`
- GPU: `NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition`
- Python: `CPython 3.12.13`
- CUDA: `12.9`
- Training start: `2026-04-24T09:50:13.274743+00:00`
- Training end: `2026-04-24T18:15:43`
- Approximate training duration: `8h 25m 29s`
- Objective: `diffusion`
- Noise scheduler: `DDPM`
- Horizon: `32`
- Action steps predicted: `24`
- Observation steps: `2`
- Vision encoder: `openai/clip-vit-base-patch16`
- Text encoder: `openai/clip-vit-base-patch16`
- Hidden dim: `512`
- Number of transformer layers: `4`
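The horizon settings above imply receding-horizon execution: each inference pass conditions on the last 2 observations, denoises a 32-step action chunk, and executes the first 24 actions before re-planning. A minimal sketch of that schedule (illustrative only, not LeRobot's implementation):

```python
# Illustrative receding-horizon schedule for the settings above:
# n_obs_steps=2, horizon=32, n_action_steps=24. Not LeRobot code.
HORIZON = 32         # actions predicted per inference call
N_ACTION_STEPS = 24  # actions actually executed before re-planning
N_OBS_STEPS = 2      # past observations the policy conditions on

def inference_timesteps(total_env_steps: int) -> list[int]:
    """Env timesteps at which a new denoising pass must run."""
    return list(range(0, total_env_steps, N_ACTION_STEPS))

# Over a 100-step episode the policy is queried 5 times.
print(inference_timesteps(100))  # [0, 24, 48, 72, 96]
```

Because `n_action_steps` (24) is smaller than `horizon` (32), the tail of each predicted chunk is discarded and re-predicted on the next call, which smooths transitions between chunks.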

## Exact Training Command

```bash
lerobot-train \
  --dataset.repo_id=aswinkumar99/task3-all \
  --dataset.root=/home/riftuser/datasets_combined/aswinkumar99/task3-all \
  --dataset.video_backend=torchcodec \
  --output_dir=/home/riftuser/outputs_matrix/multi_task_dit/task3-all_bs128_s30000 \
  --job_name=multi_task_dit_task3-all_bs128 \
  --batch_size=128 \
  --steps=30000 \
  --log_freq=200 \
  --save_freq=5000 \
  --save_checkpoint=true \
  --num_workers=8 \
  --wandb.enable=true \
  --wandb.project=so101-layout-generalization \
  --wandb.mode=online \
  --wandb.disable_artifact=true \
  --policy.type=multi_task_dit \
  --policy.device=cuda \
  --policy.push_to_hub=false \
  --policy.use_amp=true \
  --policy.horizon=32 \
  --policy.n_action_steps=24 \
  --policy.n_obs_steps=2 \
  --policy.num_layers=4 \
  --policy.hidden_dim=512 \
  --policy.num_heads=8 \
  --policy.dropout=0.1 \
  --policy.timestep_embed_dim=256 \
  --policy.use_rope=true \
  --policy.use_positional_encoding=false \
  --policy.objective=diffusion \
  --policy.noise_scheduler_type=DDPM \
  --policy.num_train_timesteps=100 \
  --policy.optimizer_lr=2e-5 \
  --policy.vision_encoder_lr_multiplier=0.1 \
  --policy.vision_encoder_name=openai/clip-vit-base-patch16 \
  --policy.text_encoder_name=openai/clip-vit-base-patch16 \
  --policy.image_crop_shape=[224,224] \
  --policy.image_crop_is_random=true
```
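With `--policy.objective=diffusion` and a `DDPM` scheduler over 100 timesteps, training perturbs ground-truth action chunks with Gaussian noise at a sampled timestep and the transformer learns to undo that corruption. A self-contained numpy sketch of the forward (noising) process; the linear beta schedule is the standard DDPM default and is an assumption here, not a value read from this repo's config:

```python
import numpy as np

# Illustrative DDPM forward (noising) process with num_train_timesteps=100.
# The linear beta schedule is the common DDPM default, assumed for this sketch.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)  # signal retention per timestep

def add_noise(x0: np.ndarray, noise: np.ndarray, t: int) -> np.ndarray:
    """q(x_t | x_0): scale the clean chunk down and mix in Gaussian noise."""
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((32, 6))      # a horizon-32 chunk of 6-DoF actions
noise = rng.standard_normal(x0.shape)
xt = add_noise(x0, noise, t=T - 1)     # heavily noised sample near t = T
print(f"signal coefficient at t={T - 1}: {np.sqrt(alphas_cumprod[-1]):.3f}")
```

At inference the policy runs the reverse of this process, iteratively denoising a pure-noise chunk into an executable action trajectory.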

## Repository Contents

- `pretrained_model/`: final model artifacts for inference and loading
- `training_state/`: optimizer state, RNG state, scheduler state, and step counters, for resuming training or auditing the run

## Creator

Aswinkumar

- Website: [aswinkumar.me](https://aswinkumar.me)
- Hugging Face repo: <https://huggingface.co/aswinkumar99/LeRobot-SO101-MultiTaskDiT-task3-all_bs128_s30000>