Text Generation
Transformers
Safetensors
MLX
English
Chinese
qwen3
coding
research
deep thinking
humour
sarcasm
irony
256k context
Qwen3
All use cases
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
finetune
mergekit
Merge
conversational
text-generation-inference
8-bit precision
Qwen3-32B-Element5-Heretic-qx86-hi-mlx
Brainwave: 0.483,0.596,0.738,0.754,0.394,0.802,0.710
This is a nuslerp merge of the following models:
- Skywork/MindLink-32B-0801 (Engineer4)
- Akicou/DeepKAT-32B (Engineer4)
- microsoft/FrogBoss-32B-2510 (Element3)
- ValiantLabs/Qwen3-32B-Guardpoint (Element4)
- ReadyArt/Dark-Nexus-32B-v2.0 (Element5)
ReadyArt/Dark-Nexus-32B-v2.0 was included for its rich vocabulary; more on that on its model card :)

The model was abliterated with Heretic by DavidAU:
- Refusals before: 94/100
- Refusals after: 27/100 (0.1708 KLD)

Metrics are pending.
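A merge of this shape can be sketched as a mergekit config. This is a hypothetical reconstruction only: the base model, weights, and parameters below are placeholders I am assuming, not the published recipe.

```yaml
# Hypothetical sketch; the actual base model and weights were not published
merge_method: nuslerp
base_model: Qwen/Qwen3-32B    # assumed base
dtype: bfloat16
models:
  - model: Skywork/MindLink-32B-0801
    parameters:
      weight: 1.0
  - model: Akicou/DeepKAT-32B
    parameters:
      weight: 1.0
  - model: microsoft/FrogBoss-32B-2510
    parameters:
      weight: 1.0
  - model: ValiantLabs/Qwen3-32B-Guardpoint
    parameters:
      weight: 1.0
  - model: ReadyArt/Dark-Nexus-32B-v2.0
    parameters:
      weight: 1.0
```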
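The KLD figure reported by Heretic measures how far the abliterated model's next-token distribution has drifted from the original model's. A minimal sketch of that measure, on toy distributions (not the actual eval data):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) over two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 3-token vocabulary (placeholder values)
original = [0.70, 0.20, 0.10]
abliterated = [0.65, 0.22, 0.13]

drift = kl_divergence(original, abliterated)
print(f"KLD: {drift:.4f}")
```

A KLD near zero means the abliterated model still predicts almost the same token probabilities as the original; larger values indicate more behavioral drift.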
-G
Use with mlx:

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3-32B-Element5-Heretic-qx86-hi-mlx")

prompt = "hello"

# Apply the chat template if the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Model size: 33B params
Tensor type: BF16 · U32