Cicikus-v4-0.3B-PITIRCIK (Prettybird Cutiee) Edition

by prometechinc (PROMETECH BİLGİSAYAR BİLİMLERİ YAZILIM İTHALAT İHRACAT TİCARET ANONİM ŞİRKETİ)

We fine-tuned the Gemma 0.3B base model using a LoRA-based training approach, measuring an average improvement of approximately 50% over the base model across our evaluation benchmarks (standard deviation ±5%). This result demonstrates that parameter-efficient fine-tuning can substantially boost model capability while keeping computational overhead low.
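For readers who want to see what this kind of setup looks like in practice, below is a minimal LoRA fine-tuning sketch using the Hugging Face `peft` and `transformers` libraries. The base checkpoint name, LoRA hyperparameters (rank, alpha, target modules), and training corpus are illustrative assumptions, not our exact training configuration.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face peft + transformers.
# Checkpoint, hyperparameters, and dataset below are placeholders, not the
# actual Cicikus training setup.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "google/gemma-3-270m"  # assumed ~0.3B Gemma base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA: freeze the base weights and train small low-rank adapter matrices
# injected into selected attention projections.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # assumed adapter placement
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Placeholder corpus; swap in the real fine-tuning data.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cicikus-lora",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("cicikus-lora")  # saves only the adapter weights
```

Because only the low-rank adapter matrices are trained while the base weights stay frozen, the trainable parameter count is a small fraction of the full model, which is what keeps the computational overhead of this approach low.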
