Qwen 3.5 9B Abliterated GGUF (4-bit)

Model Description

This repository contains a 4-bit GGUF quantization of the Qwen 3.5 9B model after "abliteration", a procedure that removes the directions in activation space associated with safety refusals. This version uses norm-preserving biprojection so that, while refusals are neutralized, the model's core intelligence, reasoning, and coding capabilities remain intact.

Abliteration Results

  • Initial Refusal Rate: 40/100 test prompts refused
  • Final Refusal Rate: 35/100 after a single ablation pass
  • KL Divergence vs. the base model: 0.0187 (very low, indicating near-complete retention of base-model output quality)
  • Method: Arbitrary-Rank Ablation (ARA) via heretic-llm
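For intuition, a minimal sketch of single-direction ablation is below. This is not the heretic-llm / ARA implementation (which operates at arbitrary rank with norm-preserving biprojection across many layers); it only illustrates the core idea of estimating a "refusal direction" as a difference of activation means and projecting it out of a weight matrix. All arrays here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual-stream activations (hidden size 64):
# mean activation over refusal-triggering prompts vs. harmless prompts.
harmful_mean = rng.normal(size=64)
harmless_mean = rng.normal(size=64)

# The refusal direction is estimated as the difference of means.
refusal_dir = harmful_mean - harmless_mean
refusal_dir /= np.linalg.norm(refusal_dir)

# Ablate: remove the refusal component from a weight matrix by
# projecting its rows off the direction: W <- W - d d^T W.
W = rng.normal(size=(64, 64))  # stand-in for an output projection
W_ablated = W - np.outer(refusal_dir, refusal_dir) @ W

# The ablated weights can no longer write to the refusal direction.
print(np.abs(refusal_dir @ W_ablated).max())  # ~0 (float rounding only)
```

Applying this projection to every matrix that writes into the residual stream is what suppresses refusals while leaving the rest of the model's behavior (hence the low KL divergence) largely unchanged.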

Quantization Details

  • Quantization Format: GGUF (Q4_K_M)
  • Quantization Method: llama.cpp / Unsloth
  • Precision: 4-bit
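To see why 4-bit quantization shrinks the model roughly 4x relative to 16-bit weights, here is a simplified sketch of symmetric per-block 4-bit quantization. Note this is not the actual Q4_K_M algorithm (the real k-quant format uses super-blocks with quantized scales and minimums); it only demonstrates the scale-plus-integer-codes principle.

```python
import numpy as np

def quantize_4bit(block):
    """Simplified symmetric 4-bit quantization of one block:
    one float scale per block plus integer codes in [-8, 7]."""
    scale = float(np.abs(block).max()) / 7.0 or 1.0  # avoid zero scale
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_4bit(scale, q):
    return scale * q.astype(np.float32)

rng = np.random.default_rng(0)
weights = rng.normal(size=32).astype(np.float32)  # one 32-value block
scale, q = quantize_4bit(weights)
restored = dequantize_4bit(scale, q)

# 4 bits per weight plus one shared scale per 32-weight block,
# versus 32 bits per float32 weight; error is bounded by scale / 2.
print(np.abs(weights - restored).max() <= scale / 2 + 1e-6)  # True
```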

Use with Ollama

ollama run hf.co/DuoNeural/Qwen-3.5-9B-Abliterated-GGUF

Use with LM Studio

  1. Open LM Studio.
  2. Search for DuoNeural/Qwen-3.5-9B-Abliterated-GGUF.
  3. Load the Q4_K_M GGUF.

Architecture

Qwen 3.5 features a dense transformer architecture with optimized attention mechanisms, providing state-of-the-art performance for its parameter count.
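At the heart of any dense transformer is scaled dot-product attention. The sketch below shows the textbook operation, softmax(QK^T / sqrt(d)) V, in NumPy; it is illustrative only and omits the optimizations real Qwen implementations apply on top (multi-head layout, grouped-query attention, rotary embeddings, KV caching, and causal masking).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
seq_len, head_dim = 5, 16
q = rng.normal(size=(seq_len, head_dim))
k = rng.normal(size=(seq_len, head_dim))
v = rng.normal(size=(seq_len, head_dim))

out = attention(q, k, v)
print(out.shape)  # (5, 16): one output vector per query position
```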

Disclaimer

This model has had its safety refusals modified. Users are responsible for ensuring the model is used ethically and in accordance with applicable laws.

Model tree for DuoNeural/Qwen-3.5-9B-Abliterated-GGUF

  • Base model: Qwen/Qwen3.5-9B
  • This model: abliterated, then quantized from the base model