# Gemma-4-31b-abliterated GGUF

GGUF quantizations of Kooten/gemma-4-31b-abliterated.

## Files

| File | Quant |
|---|---|
| gemma-4-31b-abliterated-Q4_K_M.gguf | Q4_K_M |
| gemma-4-31b-abliterated-Q5_K_M.gguf | Q5_K_M |
| gemma-4-31b-abliterated-Q6_K.gguf | Q6_K |
| gemma-4-31b-abliterated-Q8_0.gguf | Q8_0 |
| gemma-4-31b-abliterated-mmproj-BF16.gguf | Multimodal projector (BF16) |
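As a rough guide for choosing a quant, a GGUF file's size scales with parameter count times average bits per weight. A minimal sketch for this 31B model, assuming typical llama.cpp-style bits-per-weight averages (illustrative approximations, not measured from these exact files):

```python
# Rough file-size estimate: params * bits-per-weight / 8 bits per byte.
# The bits-per-weight values below are approximate averages for these
# quant types (assumptions for illustration, not exact for this model).
PARAMS = 31e9  # 31B parameters

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K": 6.59,
    "Q8_0": 8.50,
}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated file size in decimal gigabytes for a given quant type."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{est_size_gb(quant):.0f} GB")
```

Actual sizes vary a little because some tensors (embeddings, norms) are stored at higher precision, but the estimate is close enough to check whether a quant fits in RAM or VRAM.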

License: Apache 2.0 (inherited from the original model).

## Details

- Format: GGUF
- Model size: 31B params
- Architecture: gemma4

