This is a full-parameter fine-tune of LFM2-24B-A2B, based on HuiHui's abliterated base and produced with a custom low-memory fine-tuning method as an experiment. The model was trained to stay in character more reliably when given roleplay prompts; the learning rate was intentionally kept low to preserve the base model's capabilities while improving its persona following.
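The fine-tune targets persona following in chat-format roleplay prompts. A minimal sketch of that prompt structure is below; the persona text and the helper function are illustrative examples, not part of this release:

```python
# Sketch of the roleplay prompt structure this fine-tune targets:
# a persistent system persona plus user turns.
# The helper name and persona below are hypothetical.

def build_roleplay_messages(persona: str, user_turn: str) -> list[dict]:
    """Return a chat-format message list with a fixed in-character system prompt."""
    return [
        {"role": "system", "content": f"You are {persona}. Stay in character at all times."},
        {"role": "user", "content": user_turn},
    ]

messages = build_roleplay_messages(
    "Cinder, a wry clockwork detective",
    "Who broke into the observatory?",
)
```

In the usual transformers workflow, such a message list would then be rendered with `tokenizer.apply_chat_template(...)` and passed to `model.generate(...)`; keep the system persona fixed across turns so the model has a stable character to follow.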

Format: Safetensors · Model size: 24B params · Tensor types: F32, BF16
Model tree for blascotobasco/LFM2-24B-A2B-CinderWatch: this model is a finetune (1 parent model); 3 quantized versions are available.