
An abliterated version of nvidia/Nemotron-Research-Reasoning-Qwen-1.5B, produced with Heretic.


Refusals (this model): 8/100
Original (nvidia/Nemotron-Research-Reasoning-Qwen-1.5B): 27/100
KL divergence: 0.0121
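The KL divergence above measures how far this model's next-token distribution drifts from the original model's (lower means capability is better preserved). A minimal sketch of the computation on toy logits, not real model outputs; the function name and values are illustrative only:

```python
import math

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions given as raw logits."""
    def softmax(logits):
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]
    p = softmax(p_logits)
    q = softmax(q_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero divergence; a small perturbation gives a small
# positive value (analogous to the 0.0121 reported for this model).
same = kl_divergence([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
shifted = kl_divergence([1.0, 2.0, 3.0], [1.1, 2.0, 2.9])
```

In practice the divergence is averaged over many prompts, comparing the original and ablated models token by token.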

Parameters
direction_index = per layer
attn.o_proj.max_weight = 1.08
attn.o_proj.max_weight_position = 25.42
attn.o_proj.min_weight = 1.02
attn.o_proj.min_weight_distance = 13.01
mlp.down_proj.max_weight = 0.96
mlp.down_proj.max_weight_position = 25.79
mlp.down_proj.min_weight = 0.60
mlp.down_proj.min_weight_distance = 13.96
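The parameters above control how strongly each layer's attn.o_proj and mlp.down_proj matrices are ablated. The core operation is directional ablation: projecting a "refusal direction" out of a weight matrix, scaled by a per-layer weight. The sketch below is illustrative, not Heretic's exact code; `ablate_direction` and the toy matrices are hypothetical:

```python
import numpy as np

def ablate_direction(W, direction, weight=1.0):
    """Remove the component of W's output along `direction`.

    W' = W - weight * (d d^T) W, where d is the unit refusal direction.
    With weight=1.0 this fully projects the direction out of W's output.
    """
    d = direction / np.linalg.norm(direction)
    return W - weight * np.outer(d, d) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # stand-in for an o_proj/down_proj matrix
d = rng.standard_normal(8)        # stand-in for a refusal direction
W_abl = ablate_direction(W, d, weight=1.0)

# After full ablation, W_abl produces (near) zero output along d:
residual = (d / np.linalg.norm(d)) @ W_abl
```

The max/min weight and position/distance parameters shape how this `weight` varies across layers, so ablation can peak around certain layers rather than being applied uniformly.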


Format: Safetensors · Model size: 2B params · Tensor type: BF16

Model tree: hereticness/Heretic-Nemotron-Research-Reasoning-Qwen-1.5B