A few GGUF quants of the base model, since I could not find any existing ones. Made with llama.cpp at commit d0a6dfe, using the binaries from release b8683. Re-quantized from bf16 and re-uploaded.

Original model: https://huggingface.co/google/gemma-4-E2B
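These quants can be run directly with the prebuilt llama.cpp binaries from the release noted above. A minimal sketch of an invocation; the GGUF filename below is hypothetical, so substitute the actual quant file you download from this repo:

```shell
# Run a 4-bit quant with llama.cpp's CLI (binaries from release b8683 or later).
# "gemma-4-E2B-Q4_K_M.gguf" is a placeholder name, not guaranteed to match
# the files in this repo; pick whichever quant level suits your hardware.
./llama-cli -m gemma-4-E2B-Q4_K_M.gguf \
  -p "Write a haiku about quantization." \
  -n 128
```

Lower-bit quants trade quality for memory; the 8-bit file will be closest to the bf16 original, while the 3-bit file fits in the least RAM.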
Downloads last month: 1,097
Quantization levels available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit