Prompts with /nothink in the calibration data

#2
by sokann - opened

With GLM-4.5/4.6, when a user prompt ends with /nothink, the model is supposed to respond with a dummy <think></think> block.

I made a ~133 GiB quant of REAP-218B and noticed that it would sometimes struggle a bit to come up with the </think>, e.g.

    {
      "id": 151351,
      "token": "</think>",
      "bytes": [60, 47, 116, 104, 105, 110, 107, 62],
      "logprob": -0.5386887192726135
    },
    {
      "id": 198,
      "token": "\n",
      "bytes": [10],
      "logprob": -0.8810062408447266
    },
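For context on those numbers: a logprob of about −0.54 means the quant assigns only ~58% probability to </think> as the next token, with the stray "\n" taking most of the rest. A quick sketch of the conversion, using the logprobs from the excerpt:

```python
import math

# Logprobs reported for the first token after a /nothink prompt (from the excerpt above)
logprob_close_think = -0.5386887192726135  # "</think>"
logprob_newline = -0.8810062408447266      # "\n"

# Convert to probabilities: p = e^logprob
p_close_think = math.exp(logprob_close_think)
p_newline = math.exp(logprob_newline)

print(f"</think>: {p_close_think:.3f}")  # ~0.58, i.e. far from certain
print(f"newline:  {p_newline:.3f}")      # ~0.41, a close competitor
```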

Meanwhile, a ~133 GiB quant of REAP-252B doesn't have this issue, so I suspect the ability to respond with a dummy <think></think> was partially pruned away in REAP-218B. Would it be possible to mitigate that by including some prompts ending in /nothink in the calibration data? Thanks
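To be concrete about what I mean, here's a rough sketch of mixing /nothink samples into a calibration set. The chat-template markers, prompts, and JSONL format below are just illustrative assumptions on my part, not the exact GLM template or calibration pipeline:

```python
import json

# Illustrative prompts; a real calibration set would be much larger and more varied.
base_prompts = [
    "Explain how goroutine leaks happen in Go.",
    "Summarize what speculative decoding does.",
    "What does perplexity measure for a language model?",
]

samples = []
for i, prompt in enumerate(base_prompts):
    if i % 2 == 0:
        # /nothink sample: exercise the dummy <think></think> pattern
        # so the relevant experts/weights are seen during calibration.
        text = (
            f"<|user|>\n{prompt} /nothink\n"
            f"<|assistant|>\n<think></think>\nHere is a direct answer."
        )
    else:
        # Normal sample that opens a real reasoning block.
        text = f"<|user|>\n{prompt}\n<|assistant|>\n<think>"
    samples.append({"text": text})

# Write one calibration sample per line (hypothetical file name).
with open("calibration.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```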

Btw, I am particularly interested in REAP-218B, as the 133 GiB quant somehow managed to figure out a rather tricky Go concurrency issue. Prior to this, the only models that could figure it out were

  • O3 Pro - tested and passed with medium
  • O3 - tested and passed with medium
  • GPT-5 - tested and passed with high/medium/low, failed with minimal

After the heavy pruning, REAP-218B somehow stumbled onto the correct answer after a long reasoning trace, and its final answer was 99% coherent. The answers from the full 355B (via Fireworks) and the REAP-252B quant were completely wrong.

Also, is it expected that these REAP models are unable to recite wikitext? The PPL on https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/resolve/main/wiki.test.raw.gz is atrocious 😂

  • Baseline: about 3.5 (see the graph at https://huggingface.co/ubergarm/GLM-4.6-GGUF)
  • REAP-252B 133 GiB quant: 12.7295 +/- 0.10395
  • REAP-218B 133 GiB quant: 18.7934 +/- 0.16674
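Since PPL is just exp of the mean negative log-likelihood per token, those numbers correspond to a large jump in average per-token loss. A quick sketch of the relationship:

```python
import math

def perplexity(logprobs):
    """PPL = exp(mean negative log-likelihood over the token logprobs)."""
    return math.exp(-sum(logprobs) / len(logprobs))

# Mean NLL (nats/token) implied by each reported PPL above
for name, ppl in [("baseline", 3.5), ("REAP-252B", 12.7295), ("REAP-218B", 18.7934)]:
    print(f"{name}: mean NLL = {math.log(ppl):.3f} nats/token")
```

So going from 3.5 to ~18.8 is roughly +1.7 nats of extra loss on every wikitext token, which is consistent with heavy pruning discarding memorized text.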
