gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-GGUF

This repository contains GGUF format model files for Ayodele01's gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill.

These models were converted to GGUF and quantized with llama.cpp to enable efficient local inference on consumer hardware.
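For reference, GGUF files like these are typically produced with llama.cpp's conversion and quantization tools. The sketch below is illustrative, assuming a local copy of the original model weights and a built llama.cpp checkout; the directory and file names are placeholders, not the exact commands used for this repository.

```shell
# Convert the original Hugging Face model to a full-precision GGUF
# (convert_hf_to_gguf.py ships with llama.cpp).
python convert_hf_to_gguf.py ./gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill \
  --outfile model-f16.gguf --outtype f16

# Quantize the f16 GGUF down to one of the listed formats, e.g. Q4_K_M.
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```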

Available Quantizations

| File Name | Description |
| --- | --- |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q8_0.gguf | 8-bit quantization. Near-unquantized quality; largest file size. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q6_K.gguf | 6-bit quantization. Very high quality; minimal degradation from the original. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q5_K_M.gguf | 5-bit quantization. Higher quality than Q4_K_M; slightly larger and slower. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf | 4-bit quantization. Recommended. Excellent balance of speed, memory usage, and quality. |
| gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q3_K_M.gguf | 3-bit quantization. Very high compression and fast inference; lower quality. |
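To run one of these files locally, a minimal sketch using the Hugging Face CLI and llama.cpp's `llama-cli` binary (the repo id below is taken from this page; swap in whichever quantization you downloaded):

```shell
# Download a single quantization from the Hub.
huggingface-cli download Abiray/gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-GGUF \
  gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf --local-dir .

# Run a one-off prompt with llama.cpp (llama-cli is built from the llama.cpp repo).
./llama-cli -m gemma-4-E4B-Gemini-3.1-Pro-Reasoning-Distill-Q4_K_M.gguf \
  -p "Explain quantization in one paragraph." -n 256
```

Q4_K_M is the recommended starting point from the table above; move up to Q6_K or Q8_0 if you have the memory and want quality closer to the original weights.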
Model size: 8B params
Architecture: gemma4

