Ministral-3-14B-Nymphaea-RP
A fine-tune of Ministral 3 14B Instruct 2512 for roleplay and creative writing.
Fits comfortably into 16GB of VRAM at Q6_K quantization with a 16K context window (using a Q8 KV cache).
The SillyTavern preset is available here. For custom presets, please use the Mistral V7-Tekken instruct template.
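For reference, here is a minimal sketch of loading the model under those constraints with llama-cpp-python; the backend choice and the GGUF filename are assumptions for illustration, not part of this card:

```python
# Sketch only: the filename and llama-cpp-python backend are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Ministral-3-14B-Nymphaea-RP.Q6_K.gguf",  # hypothetical filename
    n_ctx=16384,        # 16K context window
    n_gpu_layers=-1,    # offload all layers to the GPU
    type_k=8,           # GGML_TYPE_Q8_0 -> Q8 K cache
    type_v=8,           # GGML_TYPE_Q8_0 -> Q8 V cache
    flash_attn=True,    # a quantized V cache requires flash attention in llama.cpp
)
```

In SillyTavern the same quantization and cache settings are configured in the backend (e.g. a llama.cpp server), with the Mistral V7-Tekken instruct template applied on the frontend side.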
Chat Example
Tested at Q6_K quantization with the Web Search extension (via SearXNG) in SillyTavern.
Training Notes
Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety and is built on a mix of manually curated synthetic data and human-written stories.
The base weights were abliterated with Heretic prior to fine-tuning, so this fine-tune is largely uncensored.
Training Specs
Method:
- Training Method: DoRA (Weight-Decomposed Low-Rank Adaptation)
- Target Modules: all-linear
- LoRA Rank: 64
- LoRA Alpha: 64
- LoRA Dropout: 0.05
Hyperparameters:
- Batch Size: 2 (Per-device)
- Gradient Accumulation: 2
- Epochs: 2
- Learning Rate: 1e-4
- Optimizer: adamw_torch_fused
- LR Scheduler: cosine
- Noise Level: neftune_noise_alpha=5
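These specs map roughly onto a PEFT + TRL configuration like the sketch below. The actual training script and dataset pipeline are not published here, so treat the output path and trainer choice as assumptions:

```python
# Illustrative sketch, assuming PEFT + TRL's SFTTrainer were used.
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(
    use_dora=True,                 # DoRA (Weight-Decomposed Low-Rank Adaptation)
    r=64,                          # LoRA rank
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    learning_rate=1e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    neftune_noise_alpha=5,
    output_dir="ministral-3-14b-nymphaea-rp",  # hypothetical output path
)
```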
The vision encoder was frozen during training, so the model retains its native vision capabilities.
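A minimal sketch of what freezing the vision stack can look like; the name-based match below is an assumption about how the Ministral 3 checkpoint labels its vision-encoder parameters, not the exact script used here:

```python
from torch import nn

def freeze_vision_encoder(model: nn.Module) -> int:
    """Set requires_grad=False on every parameter that looks like part of the vision encoder."""
    frozen = 0
    for name, param in model.named_parameters():
        if "vision" in name:  # assumed naming convention for the vision tower
            param.requires_grad = False
            frozen += 1
    return frozen
```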
Special Thanks
This fine-tune wouldn't be possible without the incredible work of the community:
- p-e-w for developing Heretic - an essential tool for censorship removal.
- SicariusSicariiStuff for developing the SLOP_Detector script.
- Mistral AI for their Ministral 3 weights.
- AMD for their Instinct™ MI300X GPU.