gemma-4-E4B-it-The-DECKARD-V3-Expresso-HERETIC-UNCENSORED-Thinking-mxfp8-mlx

Brainwaves

Quant    arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8    0.514  0.706  0.774  0.660  0.436  0.762  0.635
q8       0.514  0.720  0.773  0.660  0.428  0.764  0.646
mxfp4    0.485  0.667  0.809  0.645  0.422  0.758  0.640

Quant    Perplexity
mxfp8    8.726 ± 0.087

Baseline model

gemma-4-E4B-it

Quant    arc    arc/e  boolq  hswag  obkqa  piqa   wino
bf16     0.490  0.674  0.793  0.612  0.416  0.756  0.669
mxfp8    0.480  0.656  0.797  0.608  0.400  0.755  0.665
mxfp4    0.455  0.607  0.851  0.585  0.402  0.744  0.651

Quant    Perplexity      Peak Memory   Tokens/sec
mxfp8    35.937 ± 0.525  14.80 GB      1153
mxfp4    36.746 ± 0.534  11.06 GB      1030
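As a quick sanity check, the per-task averages can be compared in a few lines of Python (scores copied from the tables above; task order arc, arc/e, boolq, hswag, obkqa, piqa, wino):

```python
# Benchmark scores copied from the tables above
# (order: arc, arc/e, boolq, hswag, obkqa, piqa, wino).
deckard_mxfp8 = [0.514, 0.706, 0.774, 0.660, 0.436, 0.762, 0.635]
baseline_bf16 = [0.490, 0.674, 0.793, 0.612, 0.416, 0.756, 0.669]

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

print(f"DECKARD mxfp8  avg: {mean(deckard_mxfp8):.3f}")  # 0.641
print(f"baseline bf16  avg: {mean(baseline_bf16):.3f}")  # 0.630
```

The mxfp8 fine-tune averages slightly higher than the bf16 baseline across these seven tasks, though the baseline keeps an edge on boolq and wino.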

See the parent model card for installation and usage instructions with Transformers.
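Since this is an MLX conversion, it can also typically be run by repo id with the mlx-lm package (a minimal sketch; the prompt is illustrative):

```shell
pip install mlx-lm
mlx_lm.generate \
  --model nightmedia/gemma-4-E4B-it-The-DECKARD-V3-Expresso-HERETIC-UNCENSORED-Thinking-mxfp8-mlx \
  --prompt "Hello"
```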

-G
