Gemma-3-12B-Character-Creator-V2 - GGUF Quants

GGUF quantizations of SufficientPrune3897/Gemma-3-12B-Character-Creator-V2.

This is a model made to create characters for SillyTavern, c.ai, JanitorAI, and other such roleplay frontends. The resulting character cards are about 2k tokens and follow a fixed, prebaked structure.

Versions:

  • 8B, Llama 3.3 based, with GGUFs
  • 12B, Gemma 3 based (this one), with GGUFs
  • 24B, Mistral Small 3.2 based, with GGUFs
  • (maybe) 27B, Gemma 3 based, with GGUFs

How to use it:

  • Simply tell the model what you want your character to be.
  • It should know many popular franchises; the bigger the model, the more it knows.
  • Fully uncensored.
  • Asking for a different structure than the one the model uses might significantly reduce result quality.
  • While follow-up questions are supported, you will often get better results by adjusting your original prompt.
  • Supports asking for image-generation prompts for the character, requesting changes, and writing an intro message.
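Most GGUF runners apply the chat template from the model's metadata automatically, so "simply tell the model what you want" is usually all there is to it. If you are driving a raw completion endpoint instead, the request needs to be wrapped in Gemma's turn format. A minimal sketch (the turn tokens below are Gemma's standard `<start_of_turn>`/`<end_of_turn>` markers; the example request is made up):

```python
def gemma_prompt(request: str) -> str:
    """Wrap a plain-language character request in Gemma's chat template.

    Only needed for raw completion endpoints; chat-aware runners
    (llama.cpp's chat mode, SillyTavern, etc.) do this for you.
    """
    return (
        "<start_of_turn>user\n"
        f"{request}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# One plain request, as described above. Hypothetical example character.
prompt = gemma_prompt("Create a stoic elven ranger who secretly loves baking.")
print(prompt)
```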

Changes from V1

  • No longer supports groups and scenarios
  • Characters should be much better
  • It actually follows a structure and doesn't start making shit up after ~1k tokens

Available Quants

| Filename | Quant | Size | Description |
| --- | --- | --- | --- |
| Gemma-3-12B-Character-Creator-V2-Q8_0.gguf | Q8_0 | 12GB | Maximum quality, near-lossless |
| Gemma-3-12B-Character-Creator-V2-Q5_K_M.gguf | Q5_K_M | 7.9GB | High quality, recommended |
| Gemma-3-12B-Character-Creator-V2-Q4_K_M.gguf | Q4_K_M | 6.9GB | Good quality, good balance |
| Gemma-3-12B-Character-Creator-V2-IQ4_NL.gguf | IQ4_NL | 6.5GB | Good quality, slightly smaller than Q4_K_M |
| Gemma-3-12B-Character-Creator-V2-IQ3_M.gguf | IQ3_M | 5.3GB | Smaller, some quality loss |

V3 and beyond:

The next version will either reintroduce scenarios and groups, or add reasoning. Probably both. Perhaps even lorebooks, although I'm still unsure how to execute on that... After that I will probably make my own real roleplay finetune or something.

If anybody wants support for their native language, just ask me and tell me which model performs best for it.

I am very much open to feedback. A single comment can easily change how I will do my next version.


  • Developed by: SufficientPrune3897
  • License: apache-2.0
  • Finetuned from model: p-e-w/gemma-3-12b-it-heretic-v2
  • Quantized with: llama.cpp

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
