Note: a newer version of this model is available: TheCluster/Qwen3.5-9B-Ultra-Heretic-MLX-bf16.

Qwen3.5-9B Heretic MLX bf16

This is a decensored version of Qwen/Qwen3.5-9B, created with Heretic v1.2.0 using Magnitude-Preserving Orthogonal Ablation (MPOA).

Sampling Parameters:

  • I suggest the following sampling parameters, depending on the mode and task type (a usage sketch follows this list):
    • Thinking mode, general tasks:
      temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (non-thinking) mode, general tasks:
      temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (non-thinking) mode, reasoning tasks:
      temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
  • In supported frameworks, you can adjust presence_penalty between 0 and 2 to reduce endless repetition. Higher values may occasionally cause language mixing and a slight drop in model quality.
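
As a quick start, here is a minimal text-generation sketch using the "thinking mode, general tasks" settings above. It assumes a recent mlx-lm release in which make_sampler accepts top_k and min_p, and that this repo loads as a plain text model via mlx-lm (it was converted with mlx-vlm, so a multimodal setup may need mlx-vlm's API instead). Note that make_sampler covers temperature/top_p/top_k/min_p only; presence_penalty is applied by the serving framework, not the sampler.

    # Minimal sketch, assuming a recent mlx-lm. presence_penalty is not part of
    # make_sampler and is left to the serving framework (e.g. an
    # OpenAI-compatible server).
    from mlx_lm import load, generate
    from mlx_lm.sample_utils import make_sampler

    model, tokenizer = load("TheCluster/Qwen3.5-9B-Heretic-MLX-bf16")

    # temperature/top_p/top_k/min_p from the list above; repetition_penalty=1.0
    # is the no-op default, so it needs no explicit handling here.
    sampler = make_sampler(temp=1.0, top_p=0.95, top_k=20, min_p=0.0)

    messages = [{"role": "user", "content": "Summarize what MPOA does in two sentences."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    print(generate(model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=512))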

Source

This model was converted to MLX format from darkc0de/Qwen3.5-9B-heretic using mlx-vlm version 0.3.12.
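
For reference, a hypothetical sketch of that conversion step, assuming mlx-vlm's Python convert() mirrors mlx-lm's; the import path and argument names below are assumptions and may differ between mlx-vlm releases, so check your installed version.

    # Hypothetical conversion sketch; argument names are assumptions and may
    # differ across mlx-vlm releases.
    from mlx_vlm.convert import convert

    convert(
        "darkc0de/Qwen3.5-9B-heretic",           # source Hugging Face repo
        mlx_path="Qwen3.5-9B-Heretic-MLX-bf16",  # local output directory
        dtype="bfloat16",                        # keep weights in bf16, no quantization
    )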
