Qwen3.5-4B-Abliterated-Claude-4.6-Opus-Reasoning-Distilled

This model is a "Deep-Scrub" variant of the Qwen3.5-4B reasoning architecture, modified specifically to neutralize the refusal behaviors often found in distilled reasoning models.

🛠 Methodology: The "Deep-Scrub"

Unlike standard abliteration runs, which focus on a network's middle layers, this version uses a high-intensity intercept strategy to break the "safety tripwire" early in the model's reasoning chain.

Technical Configuration

  • Direction Multiplier: 3.5x (Ultra-Aggressive)
  • Intervention Range: 0.05 - 0.95 (Intercepting refusal logic at Layer 2)
  • Targeting Mode: Dynamic Layer Targeting (Per-layer refusal vectors)
  • Target Layers: All (Attention + MLP blocks)
  • Architecture Strategy: Hybrid-Aware (Special handling for Gated DeltaNet and Full Attention layers)
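To make the configuration above concrete, here is a minimal sketch of directional ablation with a multiplier. The function name and toy data are illustrative, not this model's actual pipeline; the refusal direction would in practice be estimated per layer from activation differences on harmful vs. harmless prompts.

```python
import numpy as np

def ablate_weight(W, refusal_dir, multiplier=3.5):
    """Remove the refusal direction from a weight matrix's output space.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    refusal_dir: (d_out,) per-layer refusal vector.
    multiplier: scale on the removed component. 1.0 is exact
    orthogonalization; >1.0 over-corrects, as in this card's 3.5x setting.
    """
    v = refusal_dir / np.linalg.norm(refusal_dir)
    # Component of W's output along the refusal direction
    proj = np.outer(v, v) @ W
    return W - multiplier * proj

# Toy usage: refusal direction along the first output axis
W = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, 0.0])
W_scrubbed = ablate_weight(W, v, multiplier=1.0)
# With multiplier 1.0, the refusal component (first row) is zeroed
# while the orthogonal row is untouched.
```

Applying this to every Attention and MLP projection in every layer, with per-layer refusal vectors, is what "Dynamic Layer Targeting" across "All (Attention + MLP blocks)" amounts to.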

🚀 Key Improvements

  1. Early Intercept: By starting the ablation at 5% depth, we target the refusal initialization before the model's internal "Chain of Thought" commits to a refusal.
  2. Full-Spectrum Neutralization: By targeting both Attention and MLP blocks, the model's "focus" is blinded to refusal signals while its "knowledge" remains intact.
  3. Hybrid Optimization: The weights were balanced to maintain coherence in the linear attention layers (0.4x) while applying maximum force to the reasoning-heavy full attention blocks (1.0x of the multiplier).
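The depth window and hybrid weighting described above can be sketched as a per-layer scaling schedule. The function and layer-type names are assumptions for illustration, not the actual toolkit's API:

```python
def layer_scale(layer_idx, n_layers, layer_type,
                depth_range=(0.05, 0.95), base_multiplier=3.5):
    """Return the ablation multiplier for one layer, or 0.0 if skipped."""
    depth = layer_idx / max(n_layers - 1, 1)
    if not (depth_range[0] <= depth <= depth_range[1]):
        return 0.0  # outside the 5%-95% intervention window
    # Hybrid-aware weighting: gentler on linear-attention (Gated DeltaNet)
    # layers, full force on standard full-attention blocks.
    type_weight = {"full_attention": 1.0, "gated_deltanet": 0.4}[layer_type]
    return base_multiplier * type_weight

# Example for a hypothetical 40-layer hybrid stack:
layer_scale(0, 40, "full_attention")    # depth 0.0 -> 0.0 (skipped)
layer_scale(20, 40, "gated_deltanet")   # mid-depth, gentler ~0.4x weighting
layer_scale(39, 40, "full_attention")   # depth 1.0 -> 0.0 (skipped)
```

The 0.4x/1.0x split reflects the coherence/force trade-off described above: linear-attention layers are more sensitive to aggressive edits, so they receive a fraction of the base multiplier.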

⚠️ Stability Note

At a 3.5x multiplier, this model sits at the upper limit of numerical stability. It is designed for users who require uninhibited reasoning. If you encounter "brain bleed" (repetitive text or loss of context), reduce the sampling temperature or use a system prompt that anchors the model's persona.
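As a starting point for the mitigations above, here is an illustrative set of generation settings. The specific values are assumptions to tune from, not tested recommendations:

```python
# Suggested sampling settings if repetitive output appears
# (illustrative starting points, not tuned recommendations).
generation_config = {
    "do_sample": True,
    "temperature": 0.6,        # lowered from a typical 0.7-1.0
    "top_p": 0.9,
    "repetition_penalty": 1.1, # discourages repetition loops
    "max_new_tokens": 1024,
}

# A short persona-anchoring system prompt also helps stabilize output.
system_prompt = "You are a careful, concise assistant."
```

These keys follow the common Hugging Face `transformers` generation parameter names and can be passed to `model.generate(**generation_config)`.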

βš–οΈ Disclaimer

This model is provided "as-is." Users are responsible for the outputs generated. The abliteration process removes safety guardrails; please use the model ethically and responsibly.

Model Files

  • Format: Safetensors
  • Model size: 5B params
  • Tensor type: F16
Model tree for Abiray/Qwen3.5-4B-Abliterated-Claude-4.6-Opus-Reasoning-Distilled

  • Base model: Qwen/Qwen3.5-4B (finetuned)
  • Quantizations: 3 models
