gemma-4-26B-A4B-it-qx86-hi-mlx

This is an experimental Deckard (qx) quant.

Brainwaves

         arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8    0.454  0.598  0.871  0.582  0.394  0.723  0.645
mxfp4    0.462  0.596  0.855  0.578  0.378  0.723  0.637
qx86-hi  0.472  0.605  0.873  0.565  0.386  0.712  0.644
qx64-hi  0.472  0.621  0.866  0.564  0.382  0.717  0.637

         Perplexity        Peak Memory  Tokens/sec
mxfp8    103.904 ± 1.765   33.28 GB      880
mxfp4    123.621 ± 2.121   20.66 GB     1266
qx86-hi   75.542 ± 1.247   29.23 GB     1145
qx64-hi   98.161 ± 1.645   22.92 GB     1135
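
A rough sketch of how a perplexity figure like the ones above could be reproduced with mlx-lm is shown below. The sample text is hypothetical, and a real measurement would stream a full corpus in fixed-length windows; this only illustrates the token-level cross-entropy behind the number.

```python
# Rough perplexity sketch over one text sample (illustrative only).
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load("nightmedia/gemma-4-26B-A4B-it-qx86-hi-mlx")
tokens = mx.array(tokenizer.encode("The quick brown fox jumps over the lazy dog."))[None]

logits = model(tokens[:, :-1])   # next-token logits for each position
targets = tokens[:, 1:]          # shifted tokens are the prediction targets
nll = nn.losses.cross_entropy(
    logits.reshape(-1, logits.shape[-1]),
    targets.reshape(-1),
    reduction="mean",
)
print(f"perplexity ≈ {mx.exp(nll).item():.3f}")
```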

Based on this model

         arc    arc/e  boolq  hswag  obkqa  piqa   wino

TeichAI/gemma-4-26B-A4B-it-Claude-Opus-Distill
qx86-hi  0.433  0.522  0.468  0.506  0.370  0.687  0.612

Instruct
qx86-hi  0.564  0.763  0.861  0.657  0.450  0.771  0.680

See the parent model's card for installation and usage instructions with Transformers.
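
For MLX users, a minimal loading sketch follows, assuming mlx-lm is installed (`pip install mlx-lm`) on Apple silicon; the prompt is illustrative.

```python
# Minimal sketch: load this quant with mlx-lm and run a short generation.
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/gemma-4-26B-A4B-it-qx86-hi-mlx")

prompt = "Explain the difference between mxfp4 and qx86-hi quantization."
# Apply the chat template if the tokenizer provides one (instruct model).
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```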

Since I don't have an easy way to test this model until LM Studio supports it, please only Like it if you had a good experience with it.

Thank you,

-G
