Heretic-Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder

An abliterated version of DavidAU/Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder, decensored with Heretic.

Check Quants

Refusals (this model): 17/100
Original (DavidAU/Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder): 44/100
KL divergence: 0.0032
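The KL divergence above measures how far the abliterated model's next-token distribution drifts from the original's on harmless prompts (lower means non-refusal behavior is better preserved). A minimal sketch of the computation on toy distributions; the probabilities here are illustrative, not taken from either model:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: original model vs. abliterated model.
p = [0.70, 0.20, 0.10]   # original
q = [0.68, 0.22, 0.10]   # abliterated, slightly shifted
print(round(kl_divergence(p, q), 4))  # -> 0.0012
```

In practice the divergence is averaged over many positions and prompts; a value like 0.0032 indicates the two models produce nearly identical distributions outside refusal contexts.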

Parameters
direction_index = 18.72
attn.o_proj.max_weight = 1.24
attn.o_proj.max_weight_position = 17.48
attn.o_proj.min_weight = 0.24
attn.o_proj.min_weight_distance = 15.62
mlp.down_proj.max_weight = 0.93
mlp.down_proj.max_weight_position = 18.24
mlp.down_proj.min_weight = 0.07
mlp.down_proj.min_weight_distance = 10.20
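These parameters configure Heretic's directional ablation: a "refusal direction" is estimated in the residual stream (selected by `direction_index`), and the matrices writing into the residual stream (`attn.o_proj`, `mlp.down_proj`) are orthogonalized against it, with per-layer ablation weights interpolated between the `max_weight`/`min_weight` values around the stated positions. A minimal sketch of the core operation, assuming full ablation of a single weight matrix (the per-layer weighting schedule is not reproduced here):

```python
import numpy as np

def ablate(W, direction, alpha=1.0):
    """Remove the component of W's output along `direction`.

    W: (d_model, d_in) weight matrix writing into the residual stream.
    direction: vector of length d_model, the estimated refusal direction.
    alpha: ablation weight; Heretic tunes this per layer between
           min_weight and max_weight, and 1.0 fully orthogonalizes.
    """
    r = direction / np.linalg.norm(direction)
    return W - alpha * np.outer(r, r @ W)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
r = rng.normal(size=8)
W_abl = ablate(W, r)

# After full ablation, W can no longer write along the refusal direction:
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0.0))  # -> True
```

With `alpha=1.0` the projection onto the direction is removed exactly; intermediate values attenuate it, which is how Heretic trades refusal suppression against KL divergence from the original model.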


Model size: 1B params (Safetensors, BF16)

Model tree for hereticness/Heretic-Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder
Base model: DavidAU/Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder (this model is finetuned from it)
Quantizations: 2 models