gemma-4-19b-a4b-it-REAP — GGUF Quantizations

GGUF quantizations of 0xSero/gemma-4-19b-a4b-it-REAP, a 30% expert-pruned variant of google/gemma-4-26b-a4b-it using the REAP (Router-weighted Expert Activation Pruning) method.

Available Files

| File | Quant | Size | BPW | Description |
|------|-------|------|-----|-------------|
| gemma-4-19b-a4b-it-REAP-BF16.gguf | BF16 | ~36 GB | 16.0 | Full precision, for re-quantization |
| gemma-4-19b-a4b-it-REAP-Q8_0.gguf | Q8_0 | ~19 GB | 8.0 | Near-lossless, large file |
| gemma-4-19b-a4b-it-REAP-Q6_K.gguf | Q6_K | ~15 GB | 6.56 | Near-lossless, recommended for high quality |
| gemma-4-19b-a4b-it-REAP-Q5_K_M.gguf | Q5_K_M | ~13 GB | 5.68 | High quality, larger size |
| gemma-4-19b-a4b-it-REAP-Q5_K_S.gguf | Q5_K_S | ~12 GB | 5.52 | High quality, slightly smaller |
| gemma-4-19b-a4b-it-REAP-Q4_K_M.gguf | Q4_K_M | ~12 GB | 4.89 | Recommended; best quality/size balance |
| gemma-4-19b-a4b-it-REAP-Q4_K_S.gguf | Q4_K_S | ~11 GB | 4.63 | 4-bit small |
| gemma-4-19b-a4b-it-REAP-Q3_K_L.gguf | Q3_K_L | ~9.5 GB | 4.27 | 3-bit large |
| gemma-4-19b-a4b-it-REAP-Q3_K_M.gguf | Q3_K_M | ~9 GB | 3.91 | 3-bit medium |
| gemma-4-19b-a4b-it-REAP-Q3_K_S.gguf | Q3_K_S | ~8.5 GB | 3.66 | 3-bit small |
| gemma-4-19b-a4b-it-REAP-Q2_K.gguf | Q2_K | ~7.5 GB | 2.96 | Smallest size, lowest quality |
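The Size column follows roughly from BPW times parameter count; a quick sanity check in plain Python, using the 19.02B total parameters listed under Model Details:

```python
def est_size_gb(n_params: float, bpw: float) -> float:
    """Estimated file size in GB: bits-per-weight x parameter count / 8 bits per byte."""
    return n_params * bpw / 8 / 1e9

# Q4_K_M at 4.89 BPW over 19.02B parameters:
print(round(est_size_gb(19.02e9, 4.89), 2))  # 11.63, i.e. the ~12 GB listed above
```

Listed sizes drift from this estimate by a few percent because GGUF files carry metadata and K-quants mix per-tensor types rather than using one uniform bit width.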

Model Details

| Property | Value |
|----------|-------|
| Architecture | Gemma 4 (hybrid sliding/full attention MoE) |
| Parameters | 19.02B total / ~4B active per token |
| Experts | 90 total / 8 active per token |
| Context Length | 262,144 tokens |
| Original dtype | BF16 |
| Quantization tool | llama.cpp |
| License | Gemma |

Quantization Process

```bash
# 1. Convert BF16 SafeTensors → GGUF
python convert_hf_to_gguf.py 0xSero/gemma-4-19b-a4b-it-REAP \
  --outfile gemma-4-19b-a4b-it-REAP-BF16.gguf \
  --outtype bf16

# 2. Quantize (example: Q4_K_M)
llama-quantize gemma-4-19b-a4b-it-REAP-BF16.gguf \
  gemma-4-19b-a4b-it-REAP-Q4_K_M.gguf Q4_K_M
```
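To produce every quantized variant in the table in one pass, the same `llama-quantize` step can be driven from a short script (a sketch; it assumes the BF16 file is in the working directory and `llama-quantize` is on PATH):

```python
import shutil
import subprocess

# Target types from the table above (BF16 is the source, not a target).
QUANTS = ["Q8_0", "Q6_K", "Q5_K_M", "Q5_K_S", "Q4_K_M",
          "Q4_K_S", "Q3_K_L", "Q3_K_M", "Q3_K_S", "Q2_K"]
SRC = "gemma-4-19b-a4b-it-REAP-BF16.gguf"

def quantize_cmd(quant: str) -> list[str]:
    """Build the llama-quantize invocation for one target type."""
    out = SRC.replace("BF16", quant)
    return ["llama-quantize", SRC, out, quant]

if __name__ == "__main__":
    if shutil.which("llama-quantize") is None:
        raise SystemExit("llama-quantize not found on PATH")
    for q in QUANTS:
        subprocess.run(quantize_cmd(q), check=True)
```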

Usage

llama.cpp

```bash
llama-cli \
  -m gemma-4-19b-a4b-it-REAP-Q4_K_M.gguf \
  -ngl 99 -c 4096 \
  -p "Your prompt here"
```

llama-server (OpenAI-compatible API)

```bash
llama-server \
  -m gemma-4-19b-a4b-it-REAP-Q4_K_M.gguf \
  -ngl 99 -c 4096 \
  --port 8080
```
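Once the server is up, any OpenAI-compatible client can talk to it. A minimal stdlib-only sketch (the localhost URL matches the command above; the `model` field is an assumption, as llama-server typically accepts any value there):

```python
import json
import urllib.request

def chat_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": "gemma-4-19b-a4b-it-REAP-Q4_K_M",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST to the server's /v1/chat/completions endpoint and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Explain expert pruning in one sentence."))
```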

LM Studio / Jan / Ollama

Download the .gguf file and load it directly in your preferred local inference UI.

Hardware Requirements

| Config | VRAM / RAM |
|--------|------------|
| Full GPU (Q4_K_M, recommended) | 14+ GB VRAM |
| Hybrid CPU+GPU (Q4_K_M) | 8 GB VRAM + 8 GB RAM |
| CPU only (Q4_K_M) | 16+ GB RAM |
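These figures decompose roughly as model file size plus KV cache plus runtime overhead. A back-of-envelope estimator (the layer and head counts below are illustrative assumptions, not confirmed Gemma 4 config values):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache size in GB: K and V tensors per layer, f16 elements by default."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Assumed shape (NOT from the model config): 48 layers, 4 KV heads, head_dim 128
model_gb = 12.0  # Q4_K_M file size from the table above
kv_gb = kv_cache_gb(48, 4, 128, 4096)
print(round(model_gb + kv_gb, 1))  # 12.4
```

That lands under the 14+ GB recommendation; the gap covers compute buffers and longer contexts.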

About the Original Model

0xSero/gemma-4-19b-a4b-it-REAP applies REAP expert pruning (arXiv:2510.13999) to Gemma 4 26B-A4B-it, removing 30% of MoE experts (38 of 128 per layer) while preserving routing behavior. Active parameters per token remain unchanged at ~4B. The result is a ~27% smaller model (19.02B vs 26B total parameters) with near-identical generation quality across coding, math, and reasoning benchmarks.
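In spirit, REAP scores each expert by how much weight the router gives it times how large its output is, then drops the lowest-scoring experts. A toy sketch of that idea (simplified for illustration, not the paper's exact saliency formula):

```python
import numpy as np

def reap_keep_indices(gate_probs, out_norms, prune_frac=0.30):
    """Router-weighted expert activation pruning, simplified.

    gate_probs: (n_tokens, n_experts) router weights (zero where a token
                is not routed to that expert)
    out_norms:  (n_tokens, n_experts) L2 norm of each expert's output
    Returns the sorted indices of experts to keep.
    """
    saliency = (gate_probs * out_norms).mean(axis=0)        # per-expert score
    n_keep = int(round(gate_probs.shape[1] * (1 - prune_frac)))
    keep = np.argsort(saliency)[-n_keep:]                   # top scorers survive
    return np.sort(keep)

# 128 experts per layer, prune 30% -> 90 survive, as in this model
rng = np.random.default_rng(0)
keep = reap_keep_indices(rng.random((512, 128)), rng.random((512, 128)))
print(len(keep))  # 90
```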

License

Gemma — see Google's Gemma Terms of Use.
