Gemma 4
This repository contains GGUF format model files for Ayodele01's gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill.
These models were converted and quantized with llama.cpp to enable efficient local inference on consumer hardware.
| File Name | Description |
|---|---|
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf | 8-bit quantization. Near-unquantized quality; largest file size. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q6_K.gguf | 6-bit quantization. Very high quality; minimal degradation from the original. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q5_K_M.gguf | 5-bit quantization. Higher quality; slightly larger size and slower inference. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf | 4-bit quantization. Recommended. Excellent balance of speed, memory usage, and quality. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q3_K_M.gguf | 3-bit quantization. Very high compression and fast inference; lower quality. |
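Once a quantized file has been downloaded, it can be run directly with llama.cpp. A minimal sketch, assuming llama.cpp is built or installed locally and the recommended Q4_K_M file sits in the current directory; the prompt and token limit are illustrative:

```shell
# Run the recommended 4-bit quant with llama.cpp's inference CLI.
# -m : path to the GGUF model file (from the table above)
# -p : prompt text (example only)
# -n : maximum number of tokens to generate
./llama-cli \
  -m gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf \
  -p "Explain quantization in one paragraph." \
  -n 256
```

Smaller quants (Q3_K_M) trade quality for lower memory use; larger ones (Q6_K, Q8_0) do the reverse, so pick the largest file that fits your available RAM/VRAM.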
Base model: google/gemma-4-E4B-it