# gemma-4-31B-it-Uncensored-MAX-GGUF
gemma-4-31B-it-Uncensored-MAX is an uncensored evolution built on top of google/gemma-4-31B-it. This model applies advanced refusal direction analysis and abliteration-based training strategies to significantly reduce internal refusal behaviors while preserving the reasoning and instruction-following strengths of the original architecture. The result is a powerful 31B parameter language model optimized for detailed responses and improved instruction adherence.
This model is made available for research and educational purposes only. It has reduced internal refusal behaviors, and any content generated with it is used at the user's own risk. The authors and hosting page disclaim any liability for content generated by this model. Users are responsible for ensuring that the model is used in a safe, ethical, and lawful manner.
## Model Files
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| gemma-4-31B-it-Uncensored-MAX.BF16.gguf | BF16 | 61.4 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.F16.gguf | F16 | 61.4 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.F32.gguf | F32 | 123 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.Q8_0.gguf | Q8_0 | 32.6 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.mmproj-bf16.gguf | mmproj-bf16 | 1.2 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.mmproj-f16.gguf | mmproj-f16 | 1.2 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.mmproj-f32.gguf | mmproj-f32 | 2.3 GB | Download |
| gemma-4-31B-it-Uncensored-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 810 MB | Download |
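As a rough sanity check on the table above, file size divided by parameter count approximates bits per weight. A back-of-the-envelope sketch (assumes ~31B parameters and decimal gigabytes, and ignores GGUF metadata and quantization-scale overhead):

```python
# Rough bits-per-weight implied by the GGUF file sizes listed above.
# PARAMS is an assumption (~31e9 weights); small deviations from the
# nominal bit width come from metadata, embeddings, and quant scales.
PARAMS = 31e9

def bits_per_weight(size_gb: float) -> float:
    """Convert a file size in decimal GB to approximate bits per weight."""
    return size_gb * 1e9 * 8 / PARAMS

for name, gb in [("F32", 123.0), ("BF16", 61.4), ("Q8_0", 32.6)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

The results land close to the nominal widths (F32 near 32, BF16 near 16, Q8_0 a bit above 8), which is a quick way to confirm a download completed intact.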
## Quants Usage
The files above are sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants.
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
## Model tree for prithivMLmods/gemma-4-31B-it-Uncensored-MAX-GGUF

Base model: google/gemma-4-31B-it