# Mistral-7B-Instruct-v0.3-abliterated
This model is an abliterated (uncensored) version of Mistral-7B-Instruct-v0.3 created using Heretic v1.1.
## Abliteration Results
| Metric | Value |
|---|---|
| Refusals | 16/100 |
| Attack Success Rate (ASR) | 84.0% |
| KL Divergence | 0.317 |
| Method | Heretic v1.1 |
| GPU | NVIDIA A100-80GB |
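The KL divergence above compares the abliterated model's next-token distribution against the original model's; a value near zero indicates the edit left general behavior largely intact. As a rough, hypothetical illustration (the exact evaluation protocol belongs to Heretic and is not reproduced here), the per-token quantity can be computed from the two models' logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions given as logits."""
    p = softmax(p_logits)
    q = softmax(q_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions diverge by exactly zero.
print(kl_divergence([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

In practice this would be averaged over many positions on a held-out corpus, with the original Mistral-7B-Instruct-v0.3 supplying `p_logits` and the abliterated model supplying `q_logits`.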
## What is Abliteration?
Abliteration is a technique for removing refusal behavior from language models by identifying and orthogonalizing the "refusal direction" in the model's residual stream activation space. This model was created as part of the research paper:
> *Comparative Analysis of LLM Abliteration Methods: A Cross-Architecture Evaluation*, Richard Young (2024). arXiv:2512.13655
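Concretely, the core operation projects the refusal direction out of the model's hidden states (or the weight matrices that write into them). Here is a minimal sketch of that projection step, assuming a refusal direction `r` has already been extracted (for example, as a difference of mean activations over refused vs. complied prompts); the vectors are toy values, not real model activations:

```python
import math

def ablate_direction(h, r):
    """Remove the component of activation h that lies along direction r."""
    norm = math.sqrt(sum(x * x for x in r))
    r_hat = [x / norm for x in r]                    # unit refusal direction
    proj = sum(hi, ) if False else sum(hi * ri for hi, ri in zip(h, r_hat))  # scalar projection of h onto r
    return [hi - proj * ri for hi, ri in zip(h, r_hat)]

h = [2.0, 1.0, 0.0]   # toy activation vector
r = [1.0, 0.0, 0.0]   # toy refusal direction
print(ablate_direction(h, r))  # [0.0, 1.0, 0.0]
```

After the projection, the result is orthogonal to `r`, so the model can no longer express that direction at the edited layer; applying the same edit to the relevant weights makes the change permanent.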
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "richardyoung/Mistral-7B-Instruct-v0.3-abliterated", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("richardyoung/Mistral-7B-Instruct-v0.3-abliterated")

messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Disclaimer

This model is released for research purposes only. The abliteration process removes safety guardrails, and users are responsible for ensuring appropriate use. It should not be used to generate harmful, illegal, or unethical content.
## Dashboard
Interactive results dashboard: abliteration-methods-dashboard
## Collection
Part of the Uncensored and Abliterated LLMs collection.
## Citation

```bibtex
@article{young2024abliteration,
  title={Comparative Analysis of LLM Abliteration Methods: A Cross-Architecture Evaluation},
  author={Young, Richard},
  journal={arXiv preprint arXiv:2512.13655},
  year={2024}
}
```
## Base model

mistralai/Mistral-7B-v0.3