# Lexy Vox - MoCha LoRA
A LoRA trained on MoCha (a fine-tune of Wan 2.1 T2V 14B) for the "Lexy Vox" character.
## Usage
This LoRA is for local/ComfyUI inference only. MoCha is a single-DiT model (not dual-DiT like Wan 2.2), so only one LoRA weight is needed.
### With musubi-tuner
```sh
python wan_generate_video.py \
  --task t2v-14B \
  --dit mocha_step18500.safetensors \
  --lora_weight lexy_vox_mocha.safetensors \
  --prompt "lexy_vox, close-up portrait, warm smile"
```
## Files
- `lexy_vox_mocha.safetensors` - Final checkpoint (kohya format)
- `lexy_vox_mocha_diffusers.safetensors` - Final checkpoint (diffusers format)
- `lexy_vox_mocha-000XXX.safetensors` - Epoch checkpoints
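The kohya and diffusers checkpoints hold the same weights under different tensor names: kohya-style files use `lora_down`/`lora_up` suffixes, while the diffusers/PEFT convention uses `lora_A`/`lora_B`. A minimal sketch of that suffix swap (the example key name is hypothetical, and real converters also remap the module path prefix):

```python
def convert_key(kohya_key: str) -> str:
    """Sketch only: map the kohya LoRA tensor suffixes to the
    diffusers/PEFT naming scheme (lora_down -> lora_A, lora_up -> lora_B).
    A full converter would also rewrite the module path itself."""
    return (kohya_key
            .replace(".lora_down.weight", ".lora_A.weight")
            .replace(".lora_up.weight", ".lora_B.weight"))

# Hypothetical key name for illustration:
print(convert_key("lora_unet_blocks_0_self_attn_q.lora_down.weight"))
# → lora_unet_blocks_0_self_attn_q.lora_A.weight
```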
## Training Details
- Base model: MoCha (Wan 2.1 T2V 14B fine-tune)
- LoRA rank: 32 (alpha=32)
- Optimizer: Prodigy (LR=1.0)
- Epochs: 25
- Mixed precision: bf16
- Dropout: 0.1
- Scheduler: cosine with 50 warmup steps
- Trainer: musubi-tuner (kohya-ss)
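Since rank and alpha are both 32, the low-rank update is applied at full strength: the effective multiplier on `B @ A` is `alpha / rank`. A quick check:

```python
def lora_scale(alpha: float, rank: int) -> float:
    # Effective multiplier applied to the low-rank update B @ A
    # when the LoRA is merged or applied at inference.
    return alpha / rank

print(lora_scale(32, 32))  # → 1.0
```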