- Text Generation · 124k downloads · 577 likes
- Text Generation · 0.3B · 95.9k downloads · 1.01k likes
- Image-Text-to-Text · 1.75M downloads · 1.3k likes
- Image-Text-to-Text · 4B · 137k downloads · 154 likes
- Text Generation · 69.7k downloads · 190 likes
- Text Generation · 1.0B · 786k downloads · 919 likes
- Image-Text-to-Text · 12B · 20.5k downloads · 89 likes
- Image-Text-to-Text · 2.51M downloads · 704 likes
- Image-Text-to-Text · 15.6k downloads · 123 likes
- Image-Text-to-Text · 27B · 707k downloads · 1.95k likes
Note: the entries above are the transformers-based pre-trained and instruct models.
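These pre-trained and instruct checkpoints load through the standard `transformers` API. A minimal sketch follows; the model id `google/gemma-3-1b-it` is an assumption (the names in this group were not captured above), and Gemma models require accepting the license on the Hugging Face Hub before download:

```python
# Minimal sketch: text generation with a Gemma 3 instruct checkpoint.
# The model id google/gemma-3-1b-it is an assumption; access requires
# accepting the Gemma license on the Hugging Face Hub.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what QAT is in one sentence."}]
out = pipe(messages, max_new_tokens=64)
# In chat mode, generated_text is the message list; the reply is last.
print(out[0]["generated_text"][-1]["content"])
```

The larger (4B and up) models are Image-Text-to-Text and accept interleaved image inputs through the same chat-message format.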
- google/shieldgemma-2-4b-it: Image-Text-to-Text · 3.85k downloads · 158 likes
Note: the entry above is ShieldGemma 2.
- google/gemma-3-4b-it-qat-q4_0-gguf: Image-Text-to-Text · 4B · 12.2k downloads · 254 likes
- google/gemma-3-4b-pt-qat-q4_0-gguf: Image-Text-to-Text · 4B · 478 downloads · 25 likes
- google/gemma-3-1b-it-qat-q4_0-gguf: Text Generation · 1.0B · 1.22k downloads · 124 likes
- google/gemma-3-1b-pt-qat-q4_0-gguf: Text Generation · 1.0B · 82 downloads · 14 likes
- google/gemma-3-12b-it-qat-q4_0-gguf: Image-Text-to-Text · 12B · 5.56k downloads · 266 likes
- google/gemma-3-12b-pt-qat-q4_0-gguf: Image-Text-to-Text · 12B · 45 downloads · 21 likes
- google/gemma-3-27b-it-qat-q4_0-gguf: Image-Text-to-Text · 27B · 2.69k downloads · 399 likes
- google/gemma-3-27b-pt-qat-q4_0-gguf: Image-Text-to-Text · 27B · 63 downloads · 31 likes
Note: the GGUF checkpoints above are for use with llama.cpp and Ollama. We strongly recommend using the IT (instruct) models.
- google/gemma-3-270m-qat-q4_0-unquantized: Text Generation · 0.3B · 94 downloads · 9 likes
- google/gemma-3-270m-it-qat-q4_0-unquantized: Text Generation · 0.3B · 196 downloads · 13 likes
- google/gemma-3-4b-it-qat-q4_0-unquantized: Image-Text-to-Text · 4B · 603 downloads · 11 likes
- google/gemma-3-27b-it-qat-q4_0-unquantized: Image-Text-to-Text · 14k downloads · 41 likes
- google/gemma-3-12b-it-qat-q4_0-unquantized: Image-Text-to-Text · 42.6k downloads · 85 likes
- google/gemma-3-1b-it-qat-q4_0-unquantized: Text Generation · 1.0B · 351 downloads · 11 likes
- google/gemma-3-4b-it-qat-int4-unquantized: Image-Text-to-Text · 151 downloads · 10 likes
- google/gemma-3-12b-it-qat-int4-unquantized: Image-Text-to-Text · 12B · 736 downloads · 12 likes
- google/gemma-3-1b-it-qat-int4-unquantized: Text Generation · 1.0B · 227 downloads · 14 likes
Note: the checkpoints above are unquantized QAT-based checkpoints; they can be quantized downstream while retaining quality similar to the half-precision models.
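Because these checkpoints ship unquantized QAT weights, they are intended to be quantized at load time (or by downstream conversion tooling) with little quality loss. A sketch using `transformers` with a bitsandbytes 4-bit config; the model id comes from the list above, but the specific quantization settings are assumptions:

```python
# Sketch: load an unquantized QAT checkpoint and quantize to 4-bit on load.
# The bnb_4bit settings are assumptions; QAT targeted Q4_0-style 4-bit
# quantization, so 4-bit loading should stay close to bf16 quality.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-3-1b-it-qat-q4_0-unquantized"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Shipping the weights unquantized leaves the choice of quantization scheme (Q4_0 GGUF, bitsandbytes int4, AWQ, and so on) to the user rather than baking one format in.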