Faithful decensor
#1 by redaihf - opened
Good to hear the EOS script is functional. I suspect that a heretic (or even an MPOA) version with normalize=true should be 'smarter' than an unablated normalize=false.
Also, thanks @MuXodious for ablating this.
All of @MuXodious's Hereticisations use MPOA as standard. MPOA causes contextual ethical realignment, whereas standard abliteration merely suppresses overt refusals. Failing to target the harmfulness direction originally theorised by @GrimJim means that models often exhibit other forms of noncompliance.
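For context, standard abliteration boils down to projecting a single 'refusal direction' out of the model's hidden states, which is why it can suppress overt refusals without realigning behaviour more broadly. A minimal numpy sketch of that projection step (the vectors below are toy values, not taken from any real model):

```python
import numpy as np

def ablate_direction(h, r):
    """Project a (hypothetical) refusal direction r out of activation h.

    This is the core operation of standard abliteration: the component
    of the hidden state along r is removed, leaving everything
    orthogonal to r untouched.
    """
    r = r / np.linalg.norm(r)       # ensure r is a unit vector
    return h - np.dot(h, r) * r     # subtract the projection onto r

# Toy 3-d activation and direction for illustration only:
h = np.array([3.0, 4.0, 0.0])
r = np.array([1.0, 0.0, 0.0])
print(ablate_direction(h, r))      # component along r is gone
```

In a real pipeline this projection is applied at every targeted layer, which is exactly why anything entangled with r (compliance, helpfulness) gets dragged along with the refusals.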
There is some entanglement between the refusal direction and the compliance direction in all the models I've worked with. I would not be surprised if that entanglement were weaker in models with greater layer depth.