A few GGUF quantizations of the base model, since I could not find any. Made with llama.cpp at commit d0a6dfe (binaries from release b8683). Re-quantized from bf16 and re-uploaded. Original model: https://huggingface.co/google/gemma-4-31B
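The usual llama.cpp workflow for producing quants like these is a two-step conversion: first export the HF checkpoint to a bf16 GGUF, then quantize that file to the target bit width. The paths, output names, and the Q4_K_M example type below are illustrative assumptions, not taken from this card:

```shell
# Sketch only: filenames and the chosen quant type are assumptions.
# Step 1: convert the original HF checkpoint to a full-precision bf16 GGUF
# (convert_hf_to_gguf.py ships in the llama.cpp repository).
python convert_hf_to_gguf.py ./gemma-4-31B \
    --outtype bf16 \
    --outfile gemma-4-31B-bf16.gguf

# Step 2: quantize the bf16 GGUF to a smaller type, e.g. 4-bit Q4_K_M
# (llama-quantize is built as part of the llama.cpp release binaries).
./llama-quantize gemma-4-31B-bf16.gguf gemma-4-31B-Q4_K_M.gguf Q4_K_M
```

Repeating step 2 with different type arguments (e.g. Q3_K_M, Q5_K_M, Q6_K, Q8_0) yields the 3- through 8-bit files listed below.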

GGUF
Model size: 31B params
Architecture: gemma4

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
