# Gemma 4 E2B it – Q4_K_S GGUF
A 4-bit "small" (Q4_K_S) GGUF quantization of google/gemma-4-e2b-it. Slightly smaller and faster than Q4_K_M, with near-identical output quality.
Other quantizations in this series:
Q2_K · Q3_K_S · Q3_K_M · Q4_K_M · Q5_K_S · Q5_K_M · Q6_K · Q8
## File Info
| Property | Value |
|---|---|
| Format | GGUF Q4_K_S |
| File size | 3.37 GB |
| Bits per weight | ~4 |
| Size vs F16 | ~2.6× smaller (8.67 GB → 3.37 GB) |
## Benchmark Results
Tested across 4 categories (Math, Logic, Code, Science) with 3 prompts each, using greedy decoding and a 200-token generation limit. Quality metrics compare the quantized model's logit distributions against the F16 baseline.
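The exact evaluation harness isn't published with this card; the sketch below shows one way such per-token metrics can be computed from paired logit captures. Function and array names are illustrative, not from the original benchmark.

```python
import numpy as np

def logit_metrics(f16_logits: np.ndarray, quant_logits: np.ndarray):
    """Compare per-token logits (shape: [tokens, vocab]) of a quantized model
    against the F16 baseline. Illustrative sketch only."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    p, q = softmax(f16_logits), softmax(quant_logits)

    # SQNR (dB): power of the F16 logits over the power of the quantization error.
    err = quant_logits - f16_logits
    sqnr_db = 10 * np.log10((f16_logits**2).mean() / (err**2).mean())

    # Top-1 agreement: how often both models rank the same token first.
    top1 = (f16_logits.argmax(-1) == quant_logits.argmax(-1)).mean()

    # Mean KL divergence D(F16 || quant) over tokens, in nats.
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1).mean()

    return sqnr_db, top1, kl
```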
### Results by Category
| Category | Speed (tok/s) | SQNR | Top-1 Agreement | KL Divergence |
|---|---|---|---|---|
| 🔢 Math | 25.0 | 19.0 dB | 84.1% | 0.2770 |
| 🧠 Logic | 25.0 | 19.2 dB | 78.7% | 0.4684 |
| 💻 Code | 25.2 | 19.4 dB | 81.2% | 0.3213 |
| 🔬 Science | 24.9 | 18.8 dB | 79.4% | 0.3157 |
| Overall | 25.0 | 19.10 dB | 80.9% | 0.3456 |
### Quantization Comparison
| Model | Size | Speed (tok/s) | vs F16 speed | SQNR | Top-1 Agree | KL Div |
|---|---|---|---|---|---|---|
| F16 (baseline) | 8.67 GB | 5.7 | 1.0× | baseline | baseline | baseline |
| Q2_K | 2.99 GB | 31.6 | 5.6× | 5.85 dB | 32.0% | 4.1149 |
| Q3_K_S | 3.11 GB | 28.9 | 5.1× | 10.12 dB | 63.2% | 1.2605 |
| Q3_K_M | 2.98 GB | 27.4 | 4.8× | 13.93 dB | 63.2% | 1.6747 |
| **Q4_K_S (this)** | 3.37 GB | 25.0 | 4.4× | 19.10 dB | 80.9% | 0.3456 |
| Q4_K_M | 3.43 GB | 24.0 | 4.2× | 20.33 dB | 82.4% | 0.3356 |
| Q5_K_S | 3.60 GB | 21.9 | 3.9× | 23.32 dB | 87.7% | 0.1547 |
| Q5_K_M | 3.63 GB | 22.0 | 3.9× | 23.25 dB | 86.9% | 0.1248 |
| Q8 | 4.97 GB | 16.2 | 2.9× | 37.11 dB | 96.0% | 0.0171 |
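One hedged way to read the KL column: since the cross-entropy identity gives H(P, Q) = H(P) + KL(P‖Q), exp(KL) approximates the factor by which perplexity would grow when the quantized distribution is scored against tokens drawn from the F16 model. This is an interpretation of the table's numbers, not a measurement from the card.

```python
import math

# Mean token-level KL divergence (nats) vs F16, taken from the table above.
kl_by_quant = {"Q2_K": 4.1149, "Q3_K_S": 1.2605, "Q4_K_S": 0.3456,
               "Q4_K_M": 0.3356, "Q5_K_M": 0.1248, "Q8": 0.0171}

for name, kl in kl_by_quant.items():
    # exp(KL) ~ multiplicative perplexity penalty vs F16, under the
    # assumption described above.
    print(f"{name}: ~{math.exp(kl):.2f}x perplexity vs F16")
```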
## Key Findings
- Quality: 80.9% Top-1 agreement – strong, coherent outputs across all task types
- Speed: 25.0 tok/s – slightly faster than Q4_K_M (24.0 tok/s)
- Size: 3.37 GB – 60 MB smaller than Q4_K_M
- vs Q4_K_M: marginally lower quality metrics (−1.2 dB SQNR, −1.5 pp Top-1 agreement), but faster and smaller; in practice the difference is barely perceptible
- Best for: the same use cases as Q4_K_M; prefer Q4_K_S if you want a touch more speed or are tight against a 4 GB memory budget
## Usage
```bash
# llama.cpp CLI
./llama-cli -m gemma-4-e2b-q4ks.gguf -p "Write a Python function for binary search." -n 200
```
```python
# llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="gemma-4-e2b-q4ks.gguf", n_ctx=2048)
output = llm("Write a Python function for binary search.", max_tokens=200)
print(output["choices"][0]["text"])
```
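Since this is an instruction-tuned model, the chat API is usually the better entry point; in recent llama-cpp-python versions it applies the chat template stored in the GGUF metadata automatically. A minimal sketch:

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-4-e2b-q4ks.gguf", n_ctx=2048)

# create_chat_completion formats the messages with the model's chat template.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function for binary search."}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```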
## Hardware
Tested on: CPU inference (llama.cpp)
Context: 2048 tokens | Greedy decoding