Uncensored Qwen3.5 MLX

Quality: quantized (mixed quants per tensor, 10.101 bpw average). Most layers use 8-bit affine quantization with a group size of 32; some layers are kept in bf16.
Fully uncensored and fine-tuned by DavidAU on a large Claude 4.6 distillation dataset.
This version is INSTRUCT, with a modified Jinja template that puts the model into "instruct only" mode.
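To illustrate what per-group 8-bit affine quantization means, here is a minimal sketch in plain Python. This is a hypothetical illustration of the scheme, not MLX's actual implementation: each group of 32 weights gets its own scale and minimum, and values are rounded to 8-bit integers.

```python
# Sketch of 8-bit affine (asymmetric) quantization with group size 32.
# Hypothetical minimal implementation for illustration only.

GROUP_SIZE = 32
LEVELS = 255  # 8-bit integers span 0..255

def quantize_group(xs):
    """Quantize one group of floats to uint8 values plus (scale, min)."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / LEVELS or 1.0  # avoid div-by-zero for constant groups
    q = [round((x - lo) / scale) for x in xs]
    return q, scale, lo

def dequantize_group(q, scale, lo):
    """Reconstruct approximate floats from the quantized group."""
    return [v * scale + lo for v in q]

weights = [i / 10 for i in range(GROUP_SIZE)]  # toy weight group
q, scale, lo = quantize_group(weights)
recon = dequantize_group(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, recon))
```

Because each 32-value group stores its own scale and offset alongside the 8-bit codes, the effective bits per weight lands above 8, consistent with the 10.101 bpw figure above.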
The model weights were updated on April 14.
| Metric | This model | Original model (Qwen/Qwen3.5-9B) |
|---|---|---|
| KL divergence | 0.0793 | 0 (by definition) |
| Refusals | 6/100 | 100/100 |
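A KL divergence figure like the 0.0793 above is typically computed by averaging, over evaluation tokens, the divergence between the original model's and the quantized model's next-token distributions. A minimal sketch with toy distributions (not the actual evaluation harness):

```python
# KL(p || q) over a discrete vocabulary, in nats.
# Toy probability vectors stand in for next-token distributions.
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]    # hypothetical original-model next-token probs
q = [0.6, 0.25, 0.15]  # hypothetical quantized-model next-token probs

d = kl_divergence(p, q)
```

For identical distributions the divergence is exactly zero, which is why the original model scores 0 "by definition".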
Benchmark results (all runs at mxfp8):

| Model | arc | arc/e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| HERETIC version (this model) | 0.574 | 0.755 | 0.869 | 0.714 | 0.410 | 0.780 | 0.691 |
| Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT | 0.574 | 0.729 | 0.882 | 0.711 | 0.422 | 0.775 | 0.691 |
| Qwen3.5-9B | 0.417 | 0.458 | 0.623 | 0.634 | 0.338 | 0.737 | 0.639 |
This model was converted to MLX format from DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED using mlx-vlm version 0.4.4.
Base model: Qwen/Qwen3.5-9B-Base