• This is a backup quant and is inferior to mradermacher/Gemma-4-31B-FT-it-i1-GGUF. Recommended file: Gemma-4-31B-FT-it.i1-IQ4_XS.gguf

  • This is a GGUF quant of the following model: aifeifei798/Gemma-4-31B-FT-it. Refer to that repository for more details on the model.
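As a sketch of how a quant like this can be fetched and run locally (assuming the `huggingface-cli` tool and a llama.cpp build are installed; the repository and file names come from the links above, while the context size and prompt are illustrative):

```shell
# Download the recommended imatrix quant from the mradermacher repo
# (this is a large file; --local-dir controls where it lands):
huggingface-cli download mradermacher/Gemma-4-31B-FT-it-i1-GGUF \
  Gemma-4-31B-FT-it.i1-IQ4_XS.gguf --local-dir .

# Run the downloaded GGUF with llama.cpp's CLI:
llama-cli -m Gemma-4-31B-FT-it.i1-IQ4_XS.gguf -c 4096 -p "Hello"
```

The same commands work for this repository's Q4_K_S file by swapping in the repo and file name.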

Downloads last month: 454
Format: GGUF
Model size: 31B params
Architecture: gemma4

Model tree for s1arsky/Gemma-4-31B-FT-it-Q4_KS-GGUF: this model is one of 3 quantizations of aifeifei798/Gemma-4-31B-FT-it.