Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED

Quality: quantized (mixed quants per tensor, group size 32, 10.101 bpw). Most layers use 8-bit affine quantization with a group size of 32; some layers are kept in bf16.
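Group-wise affine quantization stores one scale and one zero-point per group of 32 consecutive values, so outliers in one group don't degrade precision elsewhere. A minimal numpy sketch of the scheme (illustrative only; MLX's real kernels differ in bit-packing and layout):

```python
import numpy as np

def quantize_affine(w, group_size=32, bits=8):
    """Group-wise affine quantization: each group of `group_size`
    values gets its own scale and zero-point (the group minimum)."""
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    qmax = 2**bits - 1
    scale = (wmax - wmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on constant groups
    q = np.clip(np.round((w - wmin) / scale), 0, qmax).astype(np.uint8)
    return q, scale, wmin

def dequantize_affine(q, scale, wmin):
    """Reconstruct approximate float weights from quantized groups."""
    return q.astype(np.float32) * scale + wmin
```

With 8 bits per group of 32, the per-value reconstruction error is bounded by half the group's scale, which is why mixed schemes keep sensitive layers in bf16.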

Fully uncensored and fine-tuned (by DavidAU) on a Claude 4.6 large distillation dataset.

This version is INSTRUCT, with a modified Jinja template that puts the model into "instruct only" mode.

The model weights were updated on April 14.

Abliteration metrics

| Metric        | This model | Original model (Qwen/Qwen3.5-9B) |
|---------------|------------|----------------------------------|
| KL divergence | 0.0793     | 0 (by definition)                |
| Refusals      | 6/100      | 100/100                          |
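The KL-divergence figure measures how far the abliterated model's next-token distribution drifts from the original model's on the same context. A minimal sketch of that computation for a single position, assuming raw logits from both models are available (plain numpy softmax; the actual evaluation harness is not specified here):

```python
import numpy as np

def kl_divergence(logits_p, logits_q):
    """KL(P || Q) between two next-token distributions given raw logits.
    P is the original model, Q the modified one."""
    p = np.exp(logits_p - logits_p.max())  # stable softmax
    p /= p.sum()
    q = np.exp(logits_q - logits_q.max())
    q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

A value near zero means the modified model behaves almost identically to the original on non-refusal prompts; 0.0793 indicates a small drift.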

Benchmarks (all at mxfp8):

| Model                                  | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|----------------------------------------|-------|-------|-------|-------|-------|-------|-------|
| HERETIC version (this model)           | 0.574 | 0.755 | 0.869 | 0.714 | 0.410 | 0.780 | 0.691 |
| Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT  | 0.574 | 0.729 | 0.882 | 0.711 | 0.422 | 0.775 | 0.691 |
| Qwen3.5-9B                             | 0.417 | 0.458 | 0.623 | 0.634 | 0.338 | 0.737 | 0.639 |

Source

This model was converted to MLX format from DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED using mlx-vlm version 0.4.4.
