Ministral-3-14B-Nymphaea-RP

A fine-tune of Ministral 3 14B Instruct 2512 for roleplay and creative writing.

Fits comfortably into 16GB of VRAM at Q6_K quantization with a 16K context window (using a Q8 KV cache).
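For anyone loading it outside SillyTavern, here is a minimal llama-cpp-python sketch of that setup. The GGUF filename is hypothetical; a `type_k`/`type_v` value of 8 selects the Q8_0 cache type in ggml's type enum.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Ministral-3-14B-Nymphaea-RP-Q6_K.gguf",  # hypothetical filename
    n_ctx=16384,      # 16K context window
    n_gpu_layers=-1,  # offload all layers to the GPU
    type_k=8,         # GGML_TYPE_Q8_0 -> Q8 K cache
    type_v=8,         # GGML_TYPE_Q8_0 -> Q8 V cache
    flash_attn=True,  # a quantized V cache requires flash attention
)
```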

The SillyTavern preset is available here. For custom presets, please use the Mistral V7-Tekken instruct template.
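If you are building prompts outside SillyTavern, the tokenizer's chat template renders the V7-Tekken format for you. A small sketch with transformers (the message contents are illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("0xA50C1A1/Ministral-3-14B-Nymphaea-RP")

messages = [
    {"role": "system", "content": "You are the narrator of an interactive story."},
    {"role": "user", "content": "Set the opening scene."},
]

# Renders the conversation with the model's V7-Tekken instruct template.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```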

Chat Example

Tested at Q6_K quantization with the Web Search extension (via SearXNG) in SillyTavern.

[Screenshot of an example chat in SillyTavern]

Training Notes

Trained on the latest iteration of my Darkmere dataset. This version features expanded genre variety and is built on a mix of manually curated synthetic data and human-written stories.

The base weights were abliterated via Heretic prior to fine-tuning, so this fine-tune is quite uncensored.

Training Specs

Method:

  • Training Method: DoRA (Weight-Decomposed LoRA)
  • Target Modules: all-linear (see the configuration sketch after this list)
  • LoRA Rank: 64
  • LoRA Alpha: 64
  • LoRA Dropout: 0.05
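
The card doesn't name the training framework; assuming PEFT, the adapter settings above map onto a LoraConfig roughly like this:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=64,                         # LoRA rank
    lora_alpha=64,                # LoRA alpha
    lora_dropout=0.05,            # LoRA dropout
    target_modules="all-linear",  # adapt every linear layer
    use_dora=True,                # Weight-Decomposed LoRA (DoRA)
    task_type="CAUSAL_LM",
)
```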

Hyperparameters:

  • Batch Size: 2 (Per-device)
  • Gradient Accumulation: 2
  • Epochs: 2
  • Learning Rate: 1e-4
  • Optimizer: adamw_torch_fused
  • LR Scheduler: cosine
  • NEFTune Noise: neftune_noise_alpha=5 (see the sketch after this list)
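
Similarly, assuming TRL's SFTTrainer, the hyperparameters above translate to an SFTConfig roughly like this (neftune_noise_alpha is a supported field there; the output directory is hypothetical):

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="nymphaea-dora",     # hypothetical output path
    per_device_train_batch_size=2,  # batch size of 2 per device
    gradient_accumulation_steps=2,  # effective batch of 4 per optimizer step
    num_train_epochs=2,
    learning_rate=1e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    neftune_noise_alpha=5,          # NEFTune embedding noise
)
```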

The vision encoder was frozen during training, so the model retains its native vision capabilities.
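Freezing amounts to disabling gradients on the vision tower before training. A sketch, assuming a transformers multimodal model that exposes its vision submodule as `vision_tower` (the attribute name varies by architecture):

```python
from transformers import AutoModelForImageTextToText

model = AutoModelForImageTextToText.from_pretrained(
    "0xA50C1A1/Ministral-3-14B-Nymphaea-RP"
)

# Disable gradients for every vision-encoder parameter so only the
# language stack receives adapter updates.
for param in model.vision_tower.parameters():
    param.requires_grad = False
```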

Special Thanks

This fine-tune wouldn't be possible without the incredible work of the community.
