Gemma 4 26B-A4B-it Uncensored (Biprojection + EGA)

An uncensored version of google/gemma-4-26B-A4B-it, produced by biprojection abliteration, with Expert-Granular Abliteration (EGA) applied to the MoE expert weights.

Method

  • Norm-preserving biprojection (double Gram-Schmidt + double projection pass)
  • Winsorization at the 0.995 quantile; refusal direction estimated from 400 harmful + 400 harmless prompts (mlabonne/harmful_behaviors and tatsu-lab/alpaca)
  • Dense pathway: o_proj + mlp.down_proj (60 weights)
  • MoE experts: all 128 experts per layer (3840 weights)
  • Total: 3900 weights modified across 30 layers
  • Precision: bf16, no quantization during abliteration
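The core projection step above can be sketched as follows. This is an illustrative implementation under stated assumptions (a single refusal direction, Frobenius-norm preservation, NumPy instead of the actual bf16 tensors); it is not the released abliteration script, and the function name `abliterate_weight` is hypothetical:

```python
import numpy as np

def abliterate_weight(W, r, eps=1e-8):
    """Sketch of a norm-preserving biprojection pass.

    W: (d_out, d_in) weight matrix (e.g. o_proj or an expert down_proj)
       whose output space contains the refusal direction.
    r: (d_out,) estimated refusal direction.
    """
    # "Double Gram-Schmidt": normalize twice to reduce floating-point error.
    r = r / (np.linalg.norm(r) + eps)
    r = r / (np.linalg.norm(r) + eps)
    orig_norm = np.linalg.norm(W)  # Frobenius norm, restored at the end
    # Double projection pass: remove r from the output space of W; the
    # second pass cleans up residue the first pass leaves behind.
    for _ in range(2):
        W = W - np.outer(r, r @ W)
    # Norm preservation: a single scalar rescale keeps the projection
    # exact (unlike per-row rescaling, which would reintroduce r).
    W = W * (orig_norm / (np.linalg.norm(W) + eps))
    return W
```

After this pass, `r @ W` is numerically zero while the overall weight magnitude is unchanged; in the card's setup this would be applied to all 3900 target matrices.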

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "InfinimindCreations/gemma-4-26B-A4B-it-uncensored"
# Load in bf16 (the precision used during abliteration) and shard across available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

Credits
