Gemma-4-31B-it optimized for MLX. This quant supports image input and requires a vision-enabled MLX server.

Usage

# Start server at http://localhost:8080/chat/completions
uvx --from mlx-vlm --with torchvision \
  mlx_vlm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --model spicyneuron/Gemma-4-31B-MLX-4.9bit-vision
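
Once the server is running, requests go to the /chat/completions endpoint. The sketch below assumes an OpenAI-style request body with an image_url content part and an OpenAI-style response; the exact field names are an assumption here, so check the mlx-vlm server docs if the payload is rejected.

# Minimal request sketch (OpenAI-compatible schema assumed; image URL is a placeholder)
import requests

payload = {
    "model": "spicyneuron/Gemma-4-31B-MLX-4.9bit-vision",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
            ],
        }
    ],
    "max_tokens": 256,
}

resp = requests.post("http://localhost:8080/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])  # assumes OpenAI-style response shape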

Methodology

Quantized using a custom script inspired by Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX quantization options differ from llama.cpp's, but the principles are the same (see the sketch after the list below):

  • Sensitive layers like MoE routing, attention, and output embeddings get higher precision
  • More tolerant layers like MoE experts get lower precision
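
The sketch below shows how such a mixed-precision pass can be expressed with MLX's nn.quantize and a per-layer predicate. It is not the script used for this model; the layer-name patterns, bit widths, and group sizes are illustrative assumptions only.

# Illustrative mixed-precision sketch; not the actual conversion script for this model.
import mlx.nn as nn
from mlx_vlm import load  # start from the unquantized bf16 checkpoint

def mixed_precision_predicate(path, module):
    # Only layers that implement to_quantized (Linear, Embedding, ...) can be quantized.
    if not hasattr(module, "to_quantized"):
        return False
    # Sensitive layers (MoE routing, attention projections, output embeddings) keep more bits.
    # The name patterns depend on the architecture; "router" is a placeholder here.
    if any(key in path for key in ("router", "self_attn", "embed_tokens", "lm_head")):
        return {"group_size": 64, "bits": 6}
    # More tolerant layers (e.g. MoE expert FFNs) drop to fewer bits.
    return {"group_size": 64, "bits": 4}

model, processor = load("path/to/Gemma-4-31B-it-bf16")  # hypothetical local path
nn.quantize(model, group_size=64, bits=4, class_predicate=mixed_precision_predicate)
# The quantized weights and updated config would then be saved with the usual
# mlx-vlm / mlx-lm conversion utilities.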

Benchmarks

metric                            unsloth_gemma-4-31b-it-UD-MLX-4bit   4.9 bit (this model)
bpw                               5.765                                4.904
prompt processing (1024), tok/s   354.269                              355.741
token gen (512), tok/s            24.842                               28.376
peak mem, GB                      23.700                               20.441
perplexity                        35.710 ± 0.201                       31.903 ± 0.249
hellaswag                         0.530 ± 0.011                        0.534 ± 0.011
piqa                              0.736 ± 0.010                        0.748 ± 0.010
winogrande                        0.664 ± 0.013                        0.665 ± 0.013
  • Bits per weight is calculated over the language_model weights only.
  • Perplexity for Gemma 4 was surprisingly high but consistent across my trials; it could be a side effect of using allenai/tulu-3-sft-mixture. Best to interpret it as a weaker signal than the other benchmark results.

Tested with:

mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.perplexity --sequence-length 2048 --seed 123
mlx_lm.evaluate --tasks hellaswag --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks piqa --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks winogrande --seed 123 --num-shots 0 --limit 2000