gemma-4-31b-it-oQ
A collection of 9 oQ3 (3-bit class) mixed-precision MLX quantizations produced via oMLX.
Use with mlx-vlm and mlx-lm:

pip install mlx-vlm
python3 -m mlx_vlm.generate --model bearzi/gemma-4-31b-it-oQ3 --prompt "Your prompt here" --max-tokens 512
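For text-only prompts, mlx-lm exposes an equivalent CLI; this assumes the quantized weights also load through mlx-lm:

pip install mlx-lm
python3 -m mlx_lm.generate --model bearzi/gemma-4-31b-it-oQ3 --prompt "Your prompt here" --max-tokens 512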
oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most: critical layers stay at higher precision, while tolerant layers compress aggressively. See the oMLX docs.
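As a rough illustration of the idea (a minimal sketch, not oQ's actual code: fake_quantize, sensitivity, allocate_bits, the 2/3/4-bit choices, and the random calibration data below are all hypothetical), a sensitivity-guided allocator can rank layers by the output error that quantizing each one alone causes on calibration inputs, then trade bits from tolerant layers to critical ones:

# Hypothetical sketch of sensitivity-guided mixed-precision allocation.
# oQ/oMLX internals are not public here; every name below is illustrative.
import numpy as np

def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    # Uniform symmetric fake-quantization of a weight matrix.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def sensitivity(w: np.ndarray, x: np.ndarray, bits: int = 3) -> float:
    # Output MSE caused by quantizing this layer alone on calibration inputs.
    return float(np.mean((x @ w - x @ fake_quantize(w, bits)) ** 2))

def allocate_bits(layers: dict, calib: np.ndarray, choices=(2, 3, 4)) -> dict:
    # Rank layers from most to least sensitive on the calibration batch.
    ranked = sorted(layers, key=lambda n: sensitivity(layers[n], calib), reverse=True)
    k = len(ranked) // 3  # promote/demote the extreme thirds, keep the middle at base
    plan = {n: choices[1] for n in ranked}
    for n in ranked[:k]:
        plan[n] = choices[-1]          # critical layers keep higher precision
    for n in ranked[len(ranked) - k:]:
        plan[n] = choices[0]           # tolerant layers compress aggressively
    return plan

rng = np.random.default_rng(0)
layers = {f"blocks.{i}.mlp": rng.normal(size=(64, 64)) * (1 + i) for i in range(6)}
calib = rng.normal(size=(32, 64))  # stand-in for calibration activations
print(allocate_bits(layers, calib))

Promoting and demoting equal-sized groups by one bit keeps the average at the 3-bit base, which is how a mixed-precision plan can match the overall footprint of a uniform 3-bit quantization like oQ3 while spending precision where it matters.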