Qwen3.5-27B Heretic

Quality: quantized (8-bit, group size 32)

This is an abliterated (uncensored) version of Qwen/Qwen3.5-27B, created with Heretic v1.2.0 using Magnitude-Preserving Orthogonal Ablation (MPOA).
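As a rough intuition for MPOA: directional ablation removes the component of certain weight rows along a learned "refusal direction", and the magnitude-preserving variant then rescales each row back to its original norm. The sketch below illustrates that idea only; the function name, the NumPy formulation, and the exact rescaling rule are assumptions for illustration, not Heretic's actual implementation.

```python
import numpy as np

def mpoa_ablate(W, v):
    """Illustrative sketch: project out direction v from each row of W,
    then rescale rows to their original norms (magnitude preservation).
    Heretic's real MPOA implementation may differ in detail."""
    v_hat = v / np.linalg.norm(v)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Remove each row's component along the refusal direction.
    W_abl = W - np.outer(W @ v_hat, v_hat)
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    # Rescale so every row keeps its original magnitude.
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))
```

After ablation, every row is orthogonal to the refusal direction while keeping its original norm, which is what distinguishes MPOA from plain orthogonal projection.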

Alternative version: Qwen3.5-27B-Heretic-MLX-mixed-v1

Performance

Metric          This model   Original model (Qwen/Qwen3.5-27B)
KL divergence   0.0653       0 (by definition)
Refusals        14/100       94/100
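The KL divergence row measures how far the abliterated model's next-token distribution drifts from the original's; a small value like 0.0653 indicates the ablation changed the model's behavior only slightly outside of refusals. A minimal sketch of the per-position computation (the prompt set and averaging Heretic uses are not specified here, and the toy logits are invented):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Original vs. abliterated logits at the same position (toy values).
p = softmax([2.0, 1.0, 0.1])
q = softmax([1.9, 1.1, 0.1])
drift = kl_divergence(p, q)  # small positive number; 0 iff p == q
```

This is why the original model scores exactly 0 against itself "by definition": KL(p || p) = 0 for any distribution p.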

Abliteration parameters

Parameter                            Value
direction_index                      37.97
attn.o_proj.max_weight               1.45
attn.o_proj.max_weight_position      59.09
attn.o_proj.min_weight               1.44
attn.o_proj.min_weight_distance      34.80
mlp.down_proj.max_weight             1.43
mlp.down_proj.max_weight_position    41.91
mlp.down_proj.min_weight             0.72
mlp.down_proj.min_weight_distance    28.18

Sampling Parameters:

  • Thinking mode for general tasks:
    temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Thinking mode for precise coding tasks (e.g., WebDev):
    temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for general tasks:
    temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for reasoning tasks:
    temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
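The four presets above can be captured in a small lookup helper so an application selects the right parameters per mode. The mode names below are hypothetical labels chosen for this sketch; the parameter values are taken directly from the list above, and how they map onto your inference stack (e.g. an mlx-lm sampler) depends on that stack's API.

```python
# Recommended sampling presets from the list above.
# The mode keys are hypothetical names used only in this sketch.
SAMPLING_PRESETS = {
    "thinking_general": dict(temperature=1.0, top_p=0.95, top_k=20, min_p=0.0,
                             presence_penalty=1.5, repetition_penalty=1.0),
    "thinking_coding": dict(temperature=0.6, top_p=0.95, top_k=20, min_p=0.0,
                            presence_penalty=0.0, repetition_penalty=1.0),
    "instruct_general": dict(temperature=0.7, top_p=0.8, top_k=20, min_p=0.0,
                             presence_penalty=1.5, repetition_penalty=1.0),
    "instruct_reasoning": dict(temperature=1.0, top_p=1.0, top_k=40, min_p=0.0,
                               presence_penalty=2.0, repetition_penalty=1.0),
}

def sampling_params(mode):
    """Return a copy of the recommended sampling parameters for a mode."""
    return dict(SAMPLING_PRESETS[mode])
```

Returning a copy keeps callers from mutating the shared preset table.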

Source

This model was converted to MLX format from coder3101/Qwen3.5-27B-heretic using mlx-vlm version 0.3.12.

Model details

  • Model size: 27B params
  • Tensor types: BF16, U32
  • Format: MLX, 8-bit quantization


Model tree for TheCluster/Qwen3.5-27B-Heretic-MLX-8bit

  • Base model: Qwen/Qwen3.5-27B
  • This model is one of 17 quantized derivatives of the base model.