Results

#1
by Trilogix1 - opened

Confirmed that opus is not improving this model, and neither is the re-re ablation.
[screenshot: iq4_nl quant refusing the task]
and here is the q6, which never failed in the other models:

[screenshot: q6 quant output]

Trilogix1 changed discussion status to closed
Trilogix1 changed discussion status to open
Owner

Interesting, let me have a look at this. It seems to me that it is having trouble thinking properly, which makes sense, as I finetuned without the thinking tokens in place. Let me take a closer look at it and get back to you.
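
A minimal sketch of how one could verify whether the thinking tokens survived the finetune, assuming a Qwen-style model that marks its reasoning with `<think>`/`</think>` tags (the repo id below is a placeholder, not the actual model from this thread):

```python
# Sketch: check whether the tokenizer still carries the "thinking" special
# tokens after finetuning. The repo id is a placeholder, not the actual
# model discussed in this thread.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-username/your-finetuned-model")

# Qwen-style reasoning models mark their chain of thought with <think> tags;
# if these were absent during finetuning, generation can break downstream.
for tag in ("<think>", "</think>"):
    ids = tok.encode(tag, add_special_tokens=False)
    status = "single special token" if len(ids) == 1 else "split into pieces"
    print(f"{tag} -> {ids} ({status})")

# The chat template should also emit the tags; if it does not, the model
# was likely finetuned without them in place.
template = tok.chat_template or ""
print("<think> in chat template:", "<think>" in template)
```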

Owner

It seems to be responding quite okay to me via local.bat. Are you essentially saying its performance is inconsistent across different quants?
If so, I can test at your specific quant. Is it q6_k (based on the screenshot)? It seems you are comparing q4 with q6?

[screenshot: model responding via local.bat]
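
For reference, a minimal sketch of how one could reproduce the quant comparison outside any GUI, assuming llama-cpp-python is installed; the GGUF paths, prompt, and run count below are placeholders, not the actual test case from this thread:

```python
# Sketch: run the same prompt several times against each GGUF quant and
# compare the outputs by hand. Paths and prompt are placeholders.
from llama_cpp import Llama

QUANTS = {
    "iq4_nl": "models/model-IQ4_NL.gguf",
    "q6_k": "models/model-Q6_K.gguf",
}
PROMPT = "Write a short Python function that reverses a string."
RUNS = 3  # repeat, since a quant can pass once and fail the next time

for name, path in QUANTS.items():
    llm = Llama(model_path=path, n_ctx=4096, verbose=False)
    for i in range(RUNS):
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": PROMPT}],
            max_tokens=512,
            temperature=0.7,
        )
        text = out["choices"][0]["message"]["content"]
        print(f"--- {name} run {i + 1} ---\n{text}\n")
```

Repeating each prompt a few times matters here, since a quant can complete the task once and fail on the next attempt.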

The first example shows how the iq4_nl quant refuses to complete the task. In the second example the model completes the task unsuccessfully, meaning the abliteration (or whatever method was used) degrades the output.
You can see in my profile the tests performed with the (original and all-layers abliterated) Qwen 3.5 0.8B, which is able to complete the task several times at q6 quantization.
The same pipeline (HugstonOne) was used to run every test.
Did you check whether the output of the bat was correct? I think my claim holds here: the all-layers method works.
