WARNING: this model was converted before I implemented a fix for thinking models, which led to a falsely low KL divergence value of 0.0000. That caused the Optuna optimizer to select poor trials that destroyed model coherence. I will be re-abliterating this one.

Qwen3-32B-heretic

Abliterated (uncensored) version of Qwen/Qwen3-32B, created using Heretic and converted to GGUF.

Abliteration Quality

| Metric | Value |
|---|---|
| Refusals | 0/100 |
| KL Divergence | 0.0000 |
| Rounds | 1 |

A lower refusal count means fewer test prompts were refused; a lower KL divergence means behavior on harmless prompts stays closer to the original model.

Note: a KL divergence of 0.0000 is an artifact of how this model responds. Qwen3-32B begins every response with a `<think>...</think>` prefix, so the first-token probability distribution on harmless prompts is identical before and after abliteration, and the first-token KL metric cannot detect behavioral changes for thinking models (see the warning above). The abliteration does remove refusal behavior (0/100 refusals), but the 0.0000 figure should not be read as proof that harmless behavior is unchanged.
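To illustrate why a `<think>` prefix masks this metric, here is a minimal sketch (the function and the toy probability vectors are hypothetical, not part of the actual measurement pipeline): when the first-token distribution is the same before and after abliteration, the KL divergence over that distribution is exactly zero, regardless of what happens later in the response.

```python
# Toy demonstration: KL divergence of identical first-token distributions is 0.
# In the real measurement these vectors would be the model's softmax outputs
# for the first generated token on a harmless prompt.
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

before = [0.90, 0.05, 0.05]  # e.g. P("<think>") dominates the first token
after  = [0.90, 0.05, 0.05]  # unchanged if the model always emits <think> first

print(kl_divergence(before, after))  # 0.0 when the distributions match
```

Any divergence later in the generated text is invisible to this first-token check, which is the blind spot described in the warning above.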

Available Quantizations

| Quantization | File | Size |
|---|---|---|
| Q8_0 | Qwen3-32B-heretic-Q8_0.gguf | 32.43 GB |
| Q6_K | Qwen3-32B-heretic-Q6_K.gguf | 25.04 GB |
| Q4_K_M | Qwen3-32B-heretic-Q4_K_M.gguf | 18.40 GB |

Usage with Ollama

```shell
ollama run hf.co/ThalisAI/Qwen3-32B-heretic:Q8_0
ollama run hf.co/ThalisAI/Qwen3-32B-heretic:Q6_K
ollama run hf.co/ThalisAI/Qwen3-32B-heretic:Q4_K_M
```

bf16 Weights

The full bf16 abliterated weights are available in the bf16/ subdirectory of this repository.

Usage with Transformers

The bf16 weights in the bf16/ subdirectory can be loaded directly with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThalisAI/Qwen3-32B-heretic"
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder="bf16")
model = AutoModelForCausalLM.from_pretrained(
    model_id, subfolder="bf16", torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

About

This model was processed by the Apostate automated abliteration pipeline:

  1. The source model was loaded in bf16
  2. Heretic's optimization-based abliteration was applied to remove refusal behavior
  3. The merged model was converted to GGUF format using llama.cpp
  4. Multiple quantization levels were generated

The abliteration process uses directional ablation to remove the model's refusal directions while minimizing KL divergence from the original model's behavior on harmless prompts.
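The projection at the heart of directional ablation can be sketched in a few lines of plain Python (the vectors and function name here are hypothetical toys; Heretic operates on transformer residual-stream activations, not lists):

```python
# Illustrative sketch of directional ablation (not Heretic's actual implementation):
# remove the component of a hidden-state vector along an estimated "refusal direction".
import math

def ablate_direction(hidden, direction):
    """Project the refusal `direction` out of the `hidden` activation vector."""
    norm = math.sqrt(sum(x * x for x in direction))
    unit = [x / norm for x in direction]
    proj = sum(h * u for h, u in zip(hidden, unit))  # component along the direction
    return [h - proj * u for h, u in zip(hidden, unit)]

# Toy 4-dimensional activation and refusal direction (hypothetical values).
hidden = [1.0, 2.0, -0.5, 3.0]
refusal = [0.0, 1.0, 0.0, 0.0]

print(ablate_direction(hidden, refusal))  # [1.0, 0.0, -0.5, 3.0]
```

After ablation, the activation has zero component along the refusal direction while every orthogonal component is untouched, which is why the technique can suppress refusals with limited impact on unrelated behavior.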
