LiconStudio/Gemma-4-31B-it-abliterated-GGUF
Tags: Text Generation · GGUF · Chinese · English · heretic · uncensored · ablation · gemma4 · conversational
License: apache-2.0
Files and versions (branch: main)
327 GB · 1 contributor · History: 20 commits
Latest commit: 170e371 (verified, 16 days ago) · LiconStudio · Upload gemma-4-31B-it-balanced-Q2_K.gguf with huggingface_hub
File                                      Size      Last commit                                                          Updated
.gitattributes                            2.49 kB   Upload gemma-4-31B-it-balanced-Q2_K.gguf with huggingface_hub        16 days ago
README.md                                 3.24 kB   Update README.md                                                     16 days ago
gemma-4-31B-it-abliterated-F16.gguf       61.4 GB   Upload gemma-4-31B-it-abliterated-F16.gguf with huggingface_hub      16 days ago
gemma-4-31B-it-abliterated-Q2_K.gguf      11.9 GB   Upload gemma-4-31B-it-abliterated-Q2_K.gguf with huggingface_hub     16 days ago
gemma-4-31B-it-abliterated-Q3_K_M.gguf    15.3 GB   Upload gemma-4-31B-it-abliterated-Q3_K_M.gguf with huggingface_hub   16 days ago
gemma-4-31B-it-abliterated-Q4_K_M.gguf    18.7 GB   Upload gemma-4-31B-it-abliterated-Q4_K_M.gguf with huggingface_hub   17 days ago
gemma-4-31B-it-abliterated-Q5_K_M.gguf    21.8 GB   Upload gemma-4-31B-it-abliterated-Q5_K_M.gguf with huggingface_hub   16 days ago
gemma-4-31B-it-abliterated-Q8_0.gguf      32.6 GB   Upload gemma-4-31B-it-abliterated-Q8_0.gguf with huggingface_hub     17 days ago
gemma-4-31B-it-balanced-F16.gguf          61.4 GB   Upload gemma-4-31B-it-balanced-F16.gguf with huggingface_hub         16 days ago
gemma-4-31B-it-balanced-Q2_K.gguf         11.9 GB   Upload gemma-4-31B-it-balanced-Q2_K.gguf with huggingface_hub        16 days ago
gemma-4-31B-it-balanced-Q3_K_M.gguf       15.3 GB   Upload gemma-4-31B-it-balanced-Q3_K_M.gguf with huggingface_hub      16 days ago
gemma-4-31B-it-balanced-Q4_K_M.gguf       18.7 GB   Upload gemma-4-31B-it-balanced-Q4_K_M.gguf with huggingface_hub      16 days ago
gemma-4-31B-it-balanced-Q5_K_M.gguf       21.8 GB   Upload gemma-4-31B-it-balanced-Q5_K_M.gguf with huggingface_hub      16 days ago
gemma-4-31B-it-balanced-Q8_0.gguf         32.6 GB   Upload gemma-4-31B-it-balanced-Q8_0.gguf with huggingface_hub        16 days ago
mmproj-F16.gguf                           1.2 GB    Upload 2 files                                                       17 days ago
mmproj-F32.gguf                           2.3 GB    Upload 2 files                                                       17 days ago
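Each file listed above can be fetched directly from the Hub. A minimal sketch of building the download URL, assuming Hugging Face's standard `/resolve/<revision>/<filename>` URL layout (the pattern is not stated on this page, only the repo id and filenames are):

```python
# Build direct-download URLs for files in this repo, assuming the Hub's
# standard resolve-URL layout: https://huggingface.co/<repo>/resolve/<rev>/<file>

REPO_ID = "LiconStudio/Gemma-4-31B-it-abliterated-GGUF"  # from this page
REVISION = "main"                                        # branch from this page


def resolve_url(filename: str, repo_id: str = REPO_ID, revision: str = REVISION) -> str:
    """Return the direct download URL for one file in a Hub repository."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


# Example: the smallest quantization listed above (Q2_K, 11.9 GB).
print(resolve_url("gemma-4-31B-it-abliterated-Q2_K.gguf"))
```

Equivalently, `hf_hub_download(repo_id=..., filename=...)` from the `huggingface_hub` library (the tool named in the commit messages above) downloads the same files with caching and authentication handled for you.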