Gemma-4-26B-A4B Heretic

Quality: quantized (mxfp4, 4.604 bpw)

This is an abliterated (uncensored) version of google/gemma-4-26B-A4B-it, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method with row-norm preservation.
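Heretic's actual ARA code is not reproduced here. As a rough illustration only (assuming the common rank-1 "refusal direction" formulation of directional ablation; the function name and signature below are invented for this sketch), abliteration projects a learned direction out of a layer's weight matrix, and row-norm preservation then rescales each row back to its original length:

```python
import numpy as np

def ablate_direction(W, d, preserve_row_norms=True):
    """Remove the component of W's output rows along direction d.

    W: (out, in) weight matrix; d: (out,) refusal direction.
    Illustrative rank-1 ablation -- not Heretic's actual ARA
    implementation, which supports arbitrary-rank subspaces.
    """
    d = d / np.linalg.norm(d)
    orig_norms = np.linalg.norm(W, axis=1)
    # Rank-1 projection: W' = (I - d d^T) W
    W_abl = W - np.outer(d, d) @ W
    if preserve_row_norms:
        # Rescale each row to its pre-ablation norm (guarding
        # against rows that the projection nearly zeroed out).
        new_norms = np.linalg.norm(W_abl, axis=1)
        W_abl = W_abl * (orig_norms / np.maximum(new_norms, 1e-12))[:, None]
    return W_abl
```

Note the trade-off the row rescaling implies: without it the projected matrix is exactly orthogonal to the refusal direction, while with it each row keeps its magnitude at the cost of a small residual component along that direction.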

Performance

| Metric | This model | Original model (google/gemma-4-26B-A4B-it) |
|---|---|---|
| KL divergence | 0.0499 | 0 (by definition) |
| Refusals | 11/100 | 100/100 |
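The KL-divergence figure measures how far the abliterated model's next-token distribution drifts from the original's on harmless prompts (lower means less capability damage; the original scores 0 against itself by definition). A minimal sketch of that metric from raw logits, in plain NumPy (illustrative, not Heretic's evaluation code):

```python
import numpy as np

def kl_from_logits(logits_p, logits_q):
    """KL(P || Q) in nats, where P and Q are softmax distributions
    over the same vocabulary, given as raw logit vectors."""
    p = np.exp(logits_p - logits_p.max())
    p /= p.sum()
    q = np.exp(logits_q - logits_q.max())
    q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

Identical logits give a divergence of exactly 0; any mismatch gives a strictly positive value.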

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 10 |
| end_layer_index | 30 |
| preserve_good_behavior_weight | 0.5480 |
| steer_bad_behavior_weight | 0.0009 |
| overcorrect_relative_weight | 0.5868 |
| neighbor_count | 14 |

Source

This model was converted to MLX format from coder3101/gemma-4-26B-A4B-it-heretic using mlx-vlm version 0.4.4.
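Since the model ships in MLX format, it can be run locally with mlx-vlm. A typical invocation looks like the following (a sketch only; flag names can vary between mlx-vlm versions, and the prompt is a placeholder):

```shell
pip install mlx-vlm
python -m mlx_vlm.generate \
  --model TheCluster/Gemma-4-26B-A4B-Heretic-MLX-mxfp4 \
  --max-tokens 256 \
  --prompt "Describe this image." \
  --image path/to/image.png
```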

