not abliterated

#1
by makgad - opened

not abliterated
llama.cpp Q8 GGUF

Hi, this model was tested and validated at full precision (BF16 safetensors) with a 96.5% refusal reduction rate (7/200).

This repository does not provide GGUF files β€” the Q8 GGUF you used was converted by a third party. Quantization can degrade or undo the abliteration effect, as the weight modifications are subtle and may be lost during conversion.

Please test with the original BF16 safetensors weights using transformers or vLLM to verify.

wangzhang changed discussion status to closed

Sign up or log in to comment