MACAN LoRA v6 for ACE-Step v1.5
A fine-tuned LoRA adapter for the ACE-Step v1.5 base model, trained on a curated MACAN dataset.
Training Details
- Base model: ACE-Step/acestep-v15-base (NOT SFT)
- Dataset: 14 tracks (v6 - autotuned choruses removed, intros/outros trimmed)
- LoRA rank: 64, alpha: 128
- Learning rate: 5e-5, cosine schedule, 100 warmup steps
- Steps: 2000
- Final loss: 0.0018
- Training time: ~37 minutes on NVIDIA A40
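The learning-rate settings above describe a cosine schedule with linear warmup. A minimal sketch of that schedule under these hyperparameters (the function name `lr_at` and the decay-to-zero endpoint are assumptions for illustration, not taken from the actual training script):

```python
import math

def lr_at(step, base_lr=5e-5, warmup=100, total=2000):
    """Cosine learning-rate schedule with linear warmup.

    Linearly ramps from 0 to base_lr over the first `warmup` steps,
    then follows a half-cosine decay down to 0 at step `total`.
    """
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, the rate peaks at 5e-5 at step 100 and has decayed to roughly half of that by the midpoint of training.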
v6 Changes
- Removed tracks with autotuned choruses (Заново, Останься образом, Пополам, MP3)
- Trimmed intros/outros from remaining tracks
- Switched from the SFT checkpoint to the base model, which produced better output quality
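With rank 64 and alpha 128, the effective LoRA scaling factor is alpha/rank = 2.0. A minimal sketch of how such an adapter folds into a base weight matrix (`merge_lora` is a hypothetical helper shown with toy matrices, not part of this repository):

```python
def merge_lora(W, A, B, rank, alpha):
    """Fold a LoRA update into a base weight: W' = W + (alpha/rank) * B @ A.

    W is d x k, B is d x rank, A is rank x k; matrices are nested lists.
    With rank=64 and alpha=128 (as in this adapter), the scale is 2.0.
    """
    scale = alpha / rank
    d, k = len(W), len(W[0])
    return [
        [W[i][j] + scale * sum(B[i][t] * A[t][j] for t in range(rank))
         for j in range(k)]
        for i in range(d)
    ]
```

The low-rank factors B and A are what the adapter file stores; merging them into W recovers a single dense weight for inference.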