gemma-4-26B-A4B-cosine-slerp-mxfp8-mlx

Brainwaves

          arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8     0.488  0.652  0.868  0.588  0.398  0.733  0.642
qx86-hi   0.491  0.649  0.871  0.578  0.394  0.723  0.629
qx64-hi   0.496  0.662  0.866  0.578  0.404  0.726  0.642
mxfp4     0.507  0.675  0.855  0.586  0.376  0.718  0.639

Baseline model

gemma-4-26B-A4B-it
          arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8     0.454  0.598  0.871  0.582  0.394  0.723  0.645
qx86-hi   0.472  0.605  0.873  0.565  0.386  0.712  0.644
qx64-hi   0.472  0.621  0.866  0.564  0.382  0.717  0.637
mxfp4     0.462  0.596  0.855  0.578  0.378  0.723  0.637
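As a quick sanity check, the per-quant averages across the seven benchmarks can be computed directly from the tables above (scores transcribed from this card; the averaging itself is my own summary, not part of the original evaluation):

```python
# Benchmark scores from the tables above, in column order:
# arc, arc/e, boolq, hswag, obkqa, piqa, wino
merged = {
    "mxfp8":   [0.488, 0.652, 0.868, 0.588, 0.398, 0.733, 0.642],
    "qx86-hi": [0.491, 0.649, 0.871, 0.578, 0.394, 0.723, 0.629],
    "qx64-hi": [0.496, 0.662, 0.866, 0.578, 0.404, 0.726, 0.642],
    "mxfp4":   [0.507, 0.675, 0.855, 0.586, 0.376, 0.718, 0.639],
}
baseline = {  # gemma-4-26B-A4B-it
    "mxfp8":   [0.454, 0.598, 0.871, 0.582, 0.394, 0.723, 0.645],
    "qx86-hi": [0.472, 0.605, 0.873, 0.565, 0.386, 0.712, 0.644],
    "qx64-hi": [0.472, 0.621, 0.866, 0.564, 0.382, 0.717, 0.637],
    "mxfp4":   [0.462, 0.596, 0.855, 0.578, 0.378, 0.723, 0.637],
}

def avg(scores):
    # Unweighted mean over the seven benchmarks, rounded to 3 decimals
    return round(sum(scores) / len(scores), 3)

for quant in merged:
    m, b = avg(merged[quant]), avg(baseline[quant])
    print(f"{quant:8s} merged={m:.3f} baseline={b:.3f} delta={m - b:+.3f}")
```

On these numbers the merged model averages ahead of the baseline at every quantization level, with the mxfp4 build showing the largest gain.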

See the parent model for installation and usage instructions with Transformers.
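For an MLX build like this one, a typical way to run it locally is with the `mlx-lm` package (an illustrative sketch, not from this card; the flags below assume the `mlx_lm.generate` CLI, so defer to the parent model's instructions if they differ):

```shell
pip install mlx-lm
python -m mlx_lm.generate \
  --model nightmedia/gemma-4-26B-A4B-cosine-slerp-mxfp8-mlx \
  --prompt "Hello" \
  --max-tokens 100
```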

-G
