# Gemma-4-31B-it — 30GB (MLX)

A mixed-precision quantized version of google/gemma-4-31B-it, optimised by baa.ai using its proprietary Black Sheep AI method.

Bit-widths are allocated per tensor via sensitivity analysis, with a separately adjusted allocation for the vision encoder.
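The exact allocation method is proprietary; what follows is only a minimal sketch of the general technique the description points at, sensitivity-driven mixed-precision quantization. The `quantize` helper, the candidate bit-widths, and the error budget are illustrative assumptions, not baa.ai's implementation:

```python
# Minimal sketch of sensitivity-driven bit allocation (illustrative only):
# fake-quantize each tensor at increasing bit-widths and keep the cheapest
# width whose reconstruction error stays under a budget.
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric fake-quantization at the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def allocate_bits(tensors: dict, budget: float = 5e-2) -> dict:
    """Assign each tensor the smallest bit-width whose relative error is
    under `budget`: a stand-in for a real sensitivity metric such as
    per-layer loss or KL divergence against the BF16 model."""
    allocation = {}
    for name, w in tensors.items():
        for bits in (2, 3, 4, 6, 8):
            err = np.linalg.norm(w - quantize(w, bits)) / np.linalg.norm(w)
            if err < budget:
                break
        allocation[name] = bits  # falls back to 8 bits if no width fits
    return allocation

rng = np.random.default_rng(0)
tensors = {
    "model.layers.0.self_attn.q_proj": rng.normal(size=(256, 256)),
    # heavy-tailed weights quantize poorly, so they receive more bits
    "vision_tower.patch_embed": rng.normal(size=(256, 256)) ** 5,
}
print(allocate_bits(tensors))  # e.g. {'...q_proj': 6, '...patch_embed': 8}
```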

## Metrics

| Metric          | Value        |
|-----------------|--------------|
| Size            | 30 GB        |
| Average bits    | 7.8          |
| MMLU (vs. BF16) | 100% of BF16 |
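As a sanity check, the reported size and average bit-width are mutually consistent, assuming the 31B in the model name counts weight parameters and the size uses decimal gigabytes:

```python
# Back-of-the-envelope check: ~31B params at 7.8 bits/weight ≈ 30 GB.
params = 31e9    # parameter count taken from the model name
avg_bits = 7.8   # reported average bit-width per weight
print(f"{params * avg_bits / 8 / 1e9:.1f} GB")  # -> 30.2 GB
```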

## Usage

```python
# Requires the mlx-lm package: pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/Gemma-4-31B-it-RAM-30GB-MLX")
response = generate(model, tokenizer, prompt="Hello!", max_tokens=256)
print(response)
```
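Since this is an instruction-tuned model, wrapping the prompt in the tokenizer's chat template usually improves responses. A sketch, assuming the bundled tokenizer ships Gemma's chat template:

```python
# Build a chat-formatted prompt before generating (assumes the tokenizer
# bundles a chat template, as Gemma instruction-tuned releases do).
messages = [{"role": "user", "content": "Summarize MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```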

Quantized by baa.ai
