Fine-tuned Qwen3.5 MLX

Quality: quantized (mixed quants per tensor, group size 32, 9.450 bpw on average)
Most layers use 8-bit affine quantization with a group size of 32; some layers are kept in bf16.
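As a rough illustration of what group-wise 8-bit affine quantization does (a minimal NumPy sketch, not the actual MLX implementation): each group of 32 weights gets its own scale and offset, and values are stored as unsigned 8-bit integers.

```python
import numpy as np

def quantize_affine(w, bits=8, group_size=32):
    # Split weights into groups of `group_size` and compute a per-group
    # scale and offset (affine quantization).
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    levels = 2 ** bits - 1
    scale = (wmax - wmin) / levels
    scale[scale == 0] = 1.0  # avoid division by zero for constant groups
    q = np.round((w - wmin) / scale).astype(np.uint8)
    return q, scale, wmin

def dequantize_affine(q, scale, wmin):
    # Reconstruct approximate float weights from ints, scale, and offset.
    return q.astype(np.float32) * scale + wmin

w = np.random.randn(4, 64).astype(np.float32)
q, scale, wmin = quantize_affine(w.copy())
w_hat = dequantize_affine(q, scale, wmin).reshape(w.shape)
err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

With 8 bits per group of 32, the reconstruction error per weight is at most half a quantization step; the extra per-group scale/offset metadata is what pushes the effective cost above 8 bits per weight (here, 9.450 bpw).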
A writing & roleplay finetune of Qwen3.5 27B. The primary emphasis is on writing quality, which generalizes strongly across both domains.
Uncensored version: TheCluster/Qwen3.5-27B-Writer-V2-Uncensored-Heretic-MLX-mixed-9.4bit
Prefill with `<think>\n\n</think>` or `{{char}}:`. Only non-thinking mode was trained, but thinking probably still works.

Recommended samplers: temperature 0.7, top-p 0.95, and a repetition penalty of 1.05 (or a moderate DRY setting) should suffice.

This model was converted to MLX format from ConicCat/Qwen3.5-27B-Writer-V2 using mlx-vlm version 0.4.4.
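The empty-think prefill means the assistant turn starts with a closed reasoning block, so the model generates a reply without reasoning first. A minimal sketch of building such a prompt, assuming the ChatML-style template used by Qwen chat models (the template details here are illustrative, not taken from this model's tokenizer config):

```python
def build_prompt(user_message: str) -> str:
    # ChatML-style turns; the assistant turn is prefilled with an empty
    # <think> block so generation continues in non-thinking mode.
    return (
        "<|im_start|>user\n" + user_message + "<|im_end|>\n"
        "<|im_start|>assistant\n<think>\n\n</think>\n"
    )

prompt = build_prompt("Write a short scene set in a lighthouse.")
```

In practice you would let the tokenizer's chat template render the turns and only append the `<think>\n\n</think>` prefill to the assistant side.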