# DreamZero LIBERO LoRA

A LoRA fine-tuned checkpoint of DreamZero-AgiBot on the LIBERO benchmark.
## Base Model
- Backbone: Wan2.1-I2V-14B-480P
- Pretrain: DreamZero-AgiBot (GEAR-Dreams/DreamZero-AgiBot)
## Training Details
- Dataset: LIBERO-90 (no-noops), 3921 episodes, 73 tasks
- Method: LoRA (rank=4, alpha=4)
- Trainable params: ~108M of ~16.5B total (≈0.65%)
- Resolution: 320×176 (frame_seqlen=880)
- Action horizon: 24 steps
- Optimizer: AdamW, lr=5e-6, weight_decay=1e-5
- DeepSpeed: ZeRO-2
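The rank/alpha settings above can be illustrated with a minimal numpy sketch of a LoRA-augmented linear layer. This is purely illustrative of the `rank=4, alpha=4` configuration (scaling factor alpha/r = 1.0); the actual checkpoint is trained with the DreamZero codebase, and the dimensions below are toy values, not those of the Wan2.1 backbone.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 4          # toy dims; r and alpha as in this card
W = rng.standard_normal((d_in, d_out))        # frozen base weight
A = rng.standard_normal((d_in, r)) * 0.01     # trainable LoRA down-projection
B = np.zeros((r, d_out))                      # trainable LoRA up-projection (zero-init)
scale = alpha / r                             # = 1.0 for rank=4, alpha=4

def lora_forward(x):
    # frozen base path plus low-rank update; only A and B receive gradients
    return x @ W + scale * (x @ A) @ B

x = rng.standard_normal((2, d_in))
# with B zero-initialized, the adapter contributes nothing before training
assert np.allclose(lora_forward(x), x @ W)
```

Only A and B (2·d·r parameters per adapted layer) are trained, which is why the trainable fraction stays below 1% of the full model.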
## Usage
Load with the DreamZero codebase: https://github.com/GEAR-Dreams/DreamZero
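For intuition, a LoRA adapter like this one can be folded into the base weights at inference time, so the merged model runs with no extra latency. The numpy sketch below shows that identity; it is a hypothetical illustration with toy shapes, and actual loading of this checkpoint goes through the DreamZero codebase linked above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 64, 4, 4                        # toy dim; r and alpha as in this card
W_base = rng.standard_normal((d, d))          # frozen base weight
A = rng.standard_normal((d, r))               # trained LoRA down-projection
B = rng.standard_normal((r, d))               # trained LoRA up-projection

# merging: fold the low-rank update into the base weight once, offline
W_merged = W_base + (alpha / r) * (A @ B)

# the merged weight is exactly equivalent to running base + adapter paths
x = rng.standard_normal((2, d))
assert np.allclose(x @ W_merged, x @ W_base + (alpha / r) * (x @ A) @ B)
```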