ConicCat/Gemma4-Garnet-31B

A finetune primarily focused on improving the prose and writing capabilities of Gemma 4. These improvements generalize strongly to roleplay and most other creative domains as well.

Features:

  • Improved longform writing capabilities; output context extension allows prompting for up to 4000 words of text in one go (see the sketch after this list).
  • Markedly less AI slop and fewer identifiable Gemini-isms in writing.
  • Improved swipe or output diversity.
  • Fewer 'soft' refusals in writing.
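
For longform prompting, the main knob is simply the output token budget. Below is a minimal sketch, assuming the model loads with the standard transformers causal-LM classes; the chat template usage, sampling settings, and the ~6k-token budget for ~4000 words are illustrative assumptions, not documented values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ConicCat/Gemma4-Garnet-31B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a ~4000-word short story about a lighthouse keeper."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# ~4000 words is roughly 5500-6000 tokens; leave some headroom in the budget.
output = model.generate(input_ids, max_new_tokens=6144, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```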

Datasets:

  • internlm/Condor-SFT-20K for instruct; although instruct capability is not the primary focus, adding some instruct data helps mitigate forgetting and maintains general intelligence and instruction-following capabilities.
  • ConicCat/Gutenberg-SFT, a reformatted version of the original Gutenberg DPO dataset by jondurbin, adapted for SFT with slight augmentation to address many of the samples being overly long (a sketch of this kind of conversion appears after this list).
  • A dataset of backtranslated books. Unfortunately, I am unable to release this set as all of the data is under copyright.
  • A dash of a certain third-party-owned archive.
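
The DPO-to-SFT conversion mentioned above typically keeps each prompt with its "chosen" completion and drops the "rejected" one. The snippet below is a minimal sketch of that style of conversion; the field names and output format are assumptions for illustration, not the exact steps used to build ConicCat/Gutenberg-SFT.

```python
from datasets import load_dataset

# jondurbin's original Gutenberg DPO set; fields assumed: prompt/chosen/rejected.
dpo = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

def to_sft(row):
    # Keep only the preferred completion for supervised finetuning.
    return {
        "messages": [
            {"role": "user", "content": row["prompt"]},
            {"role": "assistant", "content": row["chosen"]},
        ]
    }

sft = dpo.map(to_sft, remove_columns=dpo.column_names)
```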