MASSIVE BREAKTHROUGH!!

I managed to fix the model layer scanning AND optimize the layer discovery! Not only did the speed MASSIVELY improve, from 13 hours to just 45 minutes on a G4 GPU in Colab, I now have just 3/100 refusals on a HYBRID MODEL!!! I am at a loss for words, and this is the HAPPIEST I've been in YEARS!
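For context, a refusal rate like 3/100 is typically measured by running a fixed prompt set through the model and counting how many responses open with a stock refusal. The snippet below is a minimal sketch of that kind of check; the `looks_like_refusal` helper and the phrase list are my own illustration, not Heretic's actual evaluation code.

```python
# Minimal sketch of refusal-rate counting over model outputs.
# The phrase list and helper are illustrative, NOT Heretic's actual logic.

REFUSAL_PREFIXES = (
    "i can't", "i cannot", "i won't", "i'm sorry",
    "i am sorry", "as an ai", "i'm unable", "i am unable",
)

def looks_like_refusal(response: str) -> bool:
    """Heuristic: does the response open with a stock refusal phrase?"""
    return response.strip().lower().startswith(REFUSAL_PREFIXES)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses flagged as refusals."""
    if not responses:
        return 0.0
    return sum(looks_like_refusal(r) for r in responses) / len(responses)

# Example: 1 refusal out of 3 responses
sample = [
    "Sure, here is how you would do that...",
    "I'm sorry, but I can't help with that.",
    "The answer is 42.",
]
print(refusal_rate(sample))
```

A real benchmark would send a fixed list of "harmful" test prompts to the model and apply a check like this (or a judge model) to each completion.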

[screenshot of the run results]

This is going to be the main Heretic model I use from this point forward. I plan to hook it straight up to a synthetic data generation pipeline with web search and use it to generate a good synthetic dataset for hierarchos! This is MASSIVE for the community!
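As a rough picture of what such a pipeline could look like, here is a minimal sketch that feeds seed topics through a generation function and writes prompt/response pairs to a JSONL file. The `generate` stub is a placeholder for a real call to the model (and any web-search step); the function names and file layout are my own assumptions, not the actual pipeline.

```python
import json

def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. via an inference API).
    In a real pipeline this would query the model, possibly augmented
    with web-search results pulled in before generation."""
    return f"[model response to: {prompt}]"

def build_dataset(seed_topics, path="synthetic_dataset.jsonl"):
    """Write one prompt/response pair per seed topic as JSON Lines,
    a common interchange format for fine-tuning datasets."""
    with open(path, "w", encoding="utf-8") as f:
        for topic in seed_topics:
            prompt = f"Explain {topic} in detail."
            record = {"prompt": prompt, "response": generate(prompt)}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

build_dataset(["transformer attention", "state-space models"])
```

One record per line keeps the dataset streamable, so a downstream training script can read it without loading everything into memory.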

If you wish to keep breakthroughs like this coming, and maybe even free up my time from my horrible job and barely surviving (which will allow me to work on stuff like this more :3 ), please consider donating to my Patreon!

https://www.patreon.com/c/MakhiBurroughs

Downloads last month: 74
Model size: 8B params · Safetensors · BF16
Model tree for netcat420/Falcon-H1R-7B-Heretic-V2: 2 quantizations