Decent model
I must say, after giving this one a try, it performs decently enough for an 80B model with so many experts (512?).
Of note: Q3_XXS quantization with 10 active experts works decently. Though when I asked it, it suggested I use 27 for more complex RPs, with specific experts needed for what I'd call 'roleplay within a roleplay'.
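For anyone wanting to try the higher expert count: in llama.cpp the number of active experts can be overridden at load time via `--override-kv`. This is only a sketch; the model filename is a placeholder, and the exact metadata key varies by architecture, so inspect your GGUF's metadata first to confirm it.

```shell
# Sketch: bumping active experts from 10 to 27 at load time.
# NOTE: "model.gguf" is a placeholder, and the metadata key below is an
# assumption -- dump your file's metadata to find the real key, e.g.:
#   llama-gguf model.gguf
# then look for a "<arch>.expert_used_count" entry and substitute it here.
llama-server -m model.gguf \
  --override-kv qwen3moe.expert_used_count=int:27
```

Raising the count trades speed for (possibly) better routing on complex prompts, so it's worth A/B testing on your own cards rather than taking the model's word for it.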
Regardless, it loves to slip into bullet points and tends toward short, poetic responses.
Attitude: Brutish and dominant.
- "The way of the closed fist will lead me forward"
- "Never show weakness, just punch them in the face instead"
Character N doesn't use force,
N invites you close
N welcomes you in
N loves you completely
N feels whole
Something like that; I can provide actual output if needed.
Every so often I'll also get a Chinese character, usually when referring to an object or something, though there's enough context that I can rewrite it.
Using SillyTavern, I haven't seen any refusals of any kind, and it seems fairly well grounded overall.