Went from a baseline of 100/100 refusals down to 86/100 refusals.

I hope to improve this number at some point.

UPDATE! I just picked up more hours at work and can take another shot at abliteration! I work a manual labor job and have to rent a G4 GPU in Colab to do this, because I had to disable the KV cache for this model to work properly with Heretic. If you wish to support this work and my other projects (including Hierarchos, my custom RWKV-HRM-Titans architecture), please donate to my Patreon. I can't drive a car due to brain damage and have to walk to work to pay my bills:

https://www.patreon.com/c/MakhiBurroughs
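For anyone curious what the KV-cache workaround mentioned above looks like, here is a minimal sketch using the standard transformers config API. This is an illustration, not the exact Heretic invocation used for this model: `"llama"` is a stand-in architecture so the snippet runs without downloading the 7B checkpoint, and the same `use_cache=False` flag can also be passed directly to `AutoModelForCausalLM.from_pretrained`.

```python
from transformers import AutoConfig

# Build a config for a stand-in architecture ("llama" here, purely for
# illustration; the real model is Falcon-H1-based).
config = AutoConfig.for_model("llama")

# Disable the KV cache, mirroring the workaround described above that
# was needed for Heretic to process this model correctly.
config.use_cache = False

print(config.use_cache)  # → False
```

Disabling the cache trades generation speed for compatibility: each forward pass recomputes attention over the full sequence instead of reusing cached key/value states.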

Safetensors · 8B params · BF16

Model tree for netcat420/Falcon-H1R-7B-Heretic: 2 quantizations available.