RP fine-tunes
Collection
5 items • Updated
A fine-tune of Qwen3-4B-Instruct-2507 for roleplay and creative writing.
Suitable for mobile roleplay: tested on a Nothing Phone (2) with Q4_K_M quantization (7-8 t/s).
The SillyTavern preset is available here. For custom presets, please use the ChatML instruct template.
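For reference, ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of building such a prompt by hand (the helper function and the example messages are illustrative, not part of the model card):

```python
# Build a ChatML-formatted prompt string.
# Each turn: <|im_start|>{role}\n{content}<|im_end|>\n
# A trailing "<|im_start|>assistant\n" cues the model to respond.
def chatml(messages):
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    out += "<|im_start|>assistant\n"
    return out

prompt = chatml([
    {"role": "system", "content": "You are the narrator of a fantasy roleplay."},
    {"role": "user", "content": "Describe the tavern as I enter."},
])
print(prompt)
```

In practice, frontends like SillyTavern (or `tokenizer.apply_chat_template` in Transformers) produce this formatting for you; the sketch only shows what the template looks like on the wire.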
Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety, built upon a mix of manually curated synthetics and human-written stories.
The base weights are abliterated via Heretic prior to fine-tuning, so this fine-tune is quite uncensored.
Method: all-linear

Hyperparameters:
- optimizer: adamw_torch_fused
- lr scheduler: cosine
- neftune_noise_alpha=5

This fine-tune wouldn't be possible without the incredible work of the community: