gemma-3-12b-it-orthogonal-rotation-bounded-ablation-v1-12B
ORBA (Orthogonal Rotational Bounded Ablation) was applied to several layers of this model, targeting both the mlp.down_proj.weight and self_attn.o_proj.weight streams, along with a few supporting techniques. Norms were preserved at the neuron level, which also guaranteed numerical conservation of the Frobenius norm for each stream subjected to intervention (the squared Frobenius norm is simply the sum of the squared per-neuron norms).
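The exact ORBA procedure has not been published, but the norm-preservation property described above can be illustrated with a minimal sketch. The function below is a hypothetical stand-in (the name `bounded_ablation_with_norm_preservation`, the direction parameter `v`, and the strength `alpha` are all assumptions, not the released method): it removes a component along a direction from each row of a weight matrix, then rescales every row back to its original norm, which by construction conserves the Frobenius norm.

```python
import numpy as np

def bounded_ablation_with_norm_preservation(W, v, alpha=1.0):
    """Hypothetical sketch of a norm-preserving ablation step.

    W     : (n_neurons, d) weight matrix (e.g. rows of down_proj or o_proj)
    v     : (d,) direction to ablate
    alpha : ablation strength in [0, 1]; 1.0 fully removes the component
    """
    v = v / np.linalg.norm(v)
    # Per-neuron (row) norms before intervention.
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Subtract the component of each row along v, scaled by alpha.
    W_abl = W - alpha * (W @ v)[:, None] * v[None, :]
    # Rescale each row to its original norm; guarding against zero rows.
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))
```

Because every row keeps its original L2 norm, the sum of squared row norms is unchanged, so the Frobenius norm of the full matrix is conserved exactly, matching the property claimed above.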
Some refusal behaviors have been geometrically ablated, refusal being a classic high-contrast case that has been well studied. Safety knowledge and awareness appear to be intact; we posit that a refusal persona was ablated. The vision stack remains part of the model and was not subjected to intervention. There are rare token-level glitches in the output; quantization errors arising from measurement against a 4-bit bitsandbytes model may have contributed to this, though it is also possible that GeLU is a less forgiving activation function with respect to such errors.
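"Geometrically ablated" is consistent with the widely studied difference-of-means approach to finding a refusal direction, though the model card does not confirm that this exact method was used here. A hedged sketch of that common technique (the function name and inputs are illustrative assumptions): activations are collected on harmful and harmless prompts, and the normalized difference of their means serves as the candidate refusal direction.

```python
import numpy as np

def difference_of_means_direction(harmful_acts, harmless_acts):
    """Illustrative sketch of a standard refusal-direction estimate.

    harmful_acts  : (n_harmful, d) hidden-state activations on refused prompts
    harmless_acts : (n_harmless, d) hidden-state activations on benign prompts
    Returns a unit vector pointing from the harmless mean to the harmful mean.
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)
```

Measuring these activations against a 4-bit bitsandbytes-quantized model, as mentioned above, would introduce quantization noise into both means and therefore into the estimated direction.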
More exact details of the intervention will be forthcoming.