# Gemma 4 31B - RotorQuant KV Cache

RotorQuant KV-cache quantization applied to google/gemma-4-31B, delivering 5.3x faster prefill and 28% faster decode compared to TurboQuant while maintaining equivalent memory savings.

This repository provides the RotorQuant KV-cache configuration for Gemma 4 31B. The model weights remain at their original precision; only the key-value cache is quantized at runtime.

## Model Specifications

| Property | Value |
|----------|-------|
| Base model | google/gemma-4-31B |
| Parameters | 31 billion |
| Architecture | Dense transformer (not MoE) |
| Modality | Multimodal: image + text input, text output |
| License | Apache 2.0 |
| Quantization | RotorQuant KV-cache only (weights unchanged) |

## Quickstart

```python
from PIL import Image
from rotorquant import RotorQuantCache
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "google/gemma-4-31B"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Apply RotorQuant KV-cache quantization (model weights are left untouched)
cache = RotorQuantCache(model)

image = Image.open("example.jpg")  # any input image
inputs = processor(text="Describe this image.", images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, past_key_values=cache)
print(processor.decode(outputs[0], skip_special_tokens=True))
```

## What is RotorQuant?

RotorQuant is a high-performance KV-cache quantization method that builds on the foundations of cache compression while achieving significantly better throughput. It compresses the key-value cache used during autoregressive generation without modifying model weights.

Key benefits:

- 5.3x faster prefill compared to TurboQuant
- 28% faster decode compared to TurboQuant
- No weight modification -- model weights stay at original precision
- Reduced inference memory -- the KV cache is compressed significantly
- Longer context windows -- fit more tokens in the same GPU memory
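RotorQuant's internals are not documented in this card, but the general idea behind KV-cache quantization can be sketched in a few lines. The example below is an illustrative per-vector symmetric int8 scheme (all names are hypothetical; this is not the RotorQuant algorithm): each cached key or value vector gets one scale factor, values are rounded into the int8 range, and are dequantized back on read.

```python
# Illustrative sketch of per-vector symmetric int8 KV-cache quantization.
# NOT the RotorQuant algorithm -- just the general idea behind
# compressing cached key/value vectors without touching model weights.

def quantize(vec):
    # One scale per cached vector; values map into the int8 range [-127, 127].
    scale = (max(abs(x) for x in vec) / 127.0) or 1.0
    return [round(x / scale) for x in vec], scale

def dequantize(qvec, scale):
    # Reconstruct an approximation of the original floating-point values.
    return [q * scale for q in qvec]

key = [0.8, -1.3, 0.02, 0.6]            # one cached key vector (toy example)
qkey, scale = quantize(key)
approx = dequantize(qkey, scale)
max_err = max(abs(a - b) for a, b in zip(key, approx))
print(qkey, f"max abs error: {max_err:.4f}")
```

Storing one int8 per value instead of an FP16 entry halves the cache; the rounding error is bounded by half the per-vector scale.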

## KV-Cache Quantization Comparison

| Method | Prefill speed | Decode speed | Memory savings | Reference |
|--------|---------------|--------------|----------------|-----------|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv:2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |

## Memory Estimates (Gemma 4 31B)

| Precision | Approximate size |
|-----------|------------------|
| FP16 (original) | ~62 GB |
| 8-bit quantized | ~31 GB |
| 4-bit quantized | ~17 GB |
| 2-bit quantized | ~9 GB |

Note: These estimates describe *weight* quantization and are shown for context only. This repository applies KV-cache quantization, so weight memory stays at whatever precision you load the model in; the savings are realized in KV-cache memory during generation.
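To see where the KV-cache savings come from, here is back-of-envelope sizing arithmetic. The layer, head, and dimension values below are placeholder assumptions for illustration only, since Gemma 4 31B's exact configuration is not stated in this card:

```python
# Back-of-envelope KV-cache sizing. The config values below are
# HYPOTHETICAL placeholders, not Gemma 4 31B's actual architecture.

def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bits):
    # Keys + values (factor of 2); one entry per layer/head/dim/token.
    return 2 * layers * kv_heads * head_dim * tokens * bits // 8

layers, kv_heads, head_dim = 48, 8, 128   # placeholder config
tokens = 32_000

fp16_gib = kv_cache_bytes(tokens, layers, kv_heads, head_dim, bits=16) / 2**30
int4_gib = kv_cache_bytes(tokens, layers, kv_heads, head_dim, bits=4) / 2**30
print(f"32k-token KV cache: {fp16_gib:.2f} GiB at FP16 vs {int4_gib:.2f} GiB at 4-bit")
```

Whatever the real config, the ratio is what matters: quantizing the cache from 16-bit to 4-bit cuts its memory by 4x, which translates directly into longer usable context on the same GPU.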
