# Qwen3 Abliterated FP16 Collection (4B + 8B)

This repository is a unified model card for FP16 single-file safetensors conversions of Qwen3 abliterated variants. It is maintained as a combined release line for both the 4B and 8B models.
## Model Scope

| Variant | Upstream Base | Format | Status |
|---|---|---|---|
| 4B | huihui-ai/Qwen3-4B-abliterated | Single `.safetensors` in FP16 | Available |
| 8B | huihui-ai/Qwen3-8B-abliterated | Single `.safetensors` in FP16 | To be published in this repository |
## Repository Files

- `qwen3_4b_abliterated_fp16_converted.safetensors` (currently available)
- The 8B FP16 single-file safetensors will be added to the same repository.
## Conversion Workflow

### 4B conversion

- Download the sharded safetensors from the 4B source repository.
- Merge the shards into one tensor file.
- Convert all weights to FP16.
- Save the result as a single safetensors artifact.

### 8B conversion

- Download the sharded safetensors from huihui-ai/Qwen3-8B-abliterated.
- Merge all shard files into one tensor file.
- Convert all weights to FP16.
- Publish the single-file FP16 artifact in this repository.
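The merge-and-convert steps above can be sketched with in-memory arrays. This is an illustrative sketch only: the helper name is made up, and NumPy arrays stand in for real tensors (an actual conversion would load each shard with safetensors' `load_file` and write the merged dict back with `save_file`):

```python
import numpy as np

def merge_and_convert(shards):
    """Merge shard dicts into one state dict and cast every tensor to FP16.

    `shards` is a list of {tensor_name: np.ndarray} dicts, standing in for
    the tensors loaded from each sharded safetensors file.
    """
    merged = {}
    for shard in shards:
        for name, tensor in shard.items():
            if name in merged:
                # Shards of one checkpoint must not repeat tensor names.
                raise ValueError(f"duplicate tensor name across shards: {name}")
            merged[name] = tensor.astype(np.float16)  # FP16 conversion
    return merged

# Two toy "shards" holding FP32 weights.
shards = [
    {"model.embed_tokens.weight": np.zeros((4, 8), dtype=np.float32)},
    {"lm_head.weight": np.ones((8, 4), dtype=np.float32)},
]
merged = merge_and_convert(shards)
print(sorted(merged))  # both tensor names now live in one dict
print(all(t.dtype == np.float16 for t in merged.values()))  # True
```

Because FP32-to-FP16 casting is lossy for very large or very small magnitudes, the real workflow should be validated by spot-checking a few tensors after conversion.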
## Usage (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ussoewwin/qwen3_4b_abliterated_fp16"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
## Important Usage Notes
- This model family has reduced safety filtering compared to standard aligned chat checkpoints.
- Validate generated text before production or public deployment.
- Use in compliance with local laws, platform policies, and your own risk controls.
## License

This repository is distributed under Apache 2.0. Please also follow the upstream licenses and terms for each source model.
## Acknowledgments

- 4B source: huihui-ai/Qwen3-4B-abliterated
- 8B source: huihui-ai/Qwen3-8B-abliterated
- Original base family: Qwen/Qwen3
## Citation

If you use these models, please cite the original Qwen3 work:

```bibtex
@misc{qwen3,
  title={Qwen3: A Large-Scale Multilingual Language Model},
  author={Qwen Team},
  year={2024},
  howpublished={\url{https://github.com/QwenLM/Qwen3}}
}
```