Gemma-4-31B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking

Quality: quantized (8-bit, group size 32, 8.643 bpw)

Gemma 4 31B, first "Heretic'ed" (de-censored), then fine-tuned by DavidAU (via Unsloth) on "The Deckard" dataset.


Source

This model was converted to MLX format from DavidAU/gemma-4-31B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking using mlx-vlm version 0.4.4.
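Since the card names mlx-vlm as the conversion tool, a minimal usage sketch follows; it assumes the standard mlx-vlm CLI, and the prompt and token count are placeholders. First generation downloads the full 8-bit weights from the Hub, so expect a sizable transfer.

```shell
# Install mlx-vlm (the same package used for the conversion)
pip install -U mlx-vlm

# Generate from the 8-bit MLX model (weights are fetched on first run)
python -m mlx_vlm.generate \
  --model TheCluster/Gemma-4-31B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking-MLX-8bit \
  --max-tokens 256 \
  --prompt "Hello"
```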

Safetensors
Model size: 33B params
Tensor type: BF16 · U32
Format: MLX
