Model produces `<|channel><unused49><unused49><unused49>`
Seems to be an issue with the Unsloth quants; others are working fine!
I tried BF16 and also Q8_K_XL, but in llama.cpp this model only produces
<unused49> tokens. Gemma-4-31B works, though.
I don't know, started fine for me in llama.cpp server
/somewhere/llama-b8638$ ./llama-server -hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-IQ4_NL --port 8080 -ngl 12 -c 96000 --temp 0.5 --jinja -ctk q4_0 -ctv q4_0
I also tried BF16 and 4-bit and all produce coherent good responses.
Maybe, yes, I have ROCm as well, but it's working fine with other gguf quants like the one from ggml-org, which is strange.
So, I tried the RADV driver and it works, so it seems to be ROCm-related somehow.
This is the full verbose llama-server log in case it helps:
https://gist.github.com/kyuz0/b64e7c8f7d81fd97a342e0e168f5a3e4
I'm seeing the same <|channel><unused49><unused49><unused49> using UD-Q4_K_XL. I find the model works fine on initial load and for short queries ("Who won the 1984 World Series?"), but after an extended period of thinking (1000 tokens? I haven't tested much) it begins <unused49> looping. Any subsequent prompt only produces this output until the model is reloaded.
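For anyone scripting around this while waiting for a fix, here is a minimal client-side sketch (a hypothetical helper, not part of llama.cpp or any API) that flags when a response has degenerated into the <unusedNN> flood described above, so a client can stop and reload:

```python
import re

# Hypothetical helper: detect when a response tail has degenerated into
# a run of <unusedNN> tokens (the failure mode reported in this thread).
_UNUSED_AT_END = re.compile(r"<unused\d+>$")

def trailing_unused_run(text: str) -> int:
    """Count consecutive <unusedNN> tokens at the very end of `text`."""
    run = 0
    while (m := _UNUSED_AT_END.search(text)):
        run += 1
        text = text[: m.start()]
    return run

def is_unused_flood(text: str, threshold: int = 5) -> bool:
    """True if the tail is `threshold` or more consecutive <unusedNN> tokens."""
    return trailing_unused_run(text) >= threshold
```

The threshold is arbitrary; a few stray tokens mid-text (also reported below) would need a different check.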
*Forgot to mention that I rebuilt llama.cpp an hour ago.
Aha, on repeated big prompts I can reproduce your issue, on Vulkan + Linux. So I don't think it's ROCm-related at all.
But it seems it's more prevalent when I use ROCm.
I also encountered this with Q4_K_XL, but it is intermittent. It works for a while, then starts to fail like this. Restarting llama-server and resuming the conversation with the same old context seems to make it work again, so this could well be a llama.cpp bug.
I don't know if this is the same issue, but my llama.cpp server outputs 'unused49', though only occasionally, in the middle of text. Tried on the b8664 CUDA version (self-compiled).
And it seems to go away if you remove --mmproj ... from the launch parameters, i.e. turn off vision. (Wrong assumption)
UPD1: After some testing I was able to get the same 'unused49' infinite loop, which is not related to --mmproj ...
UPD2: Adding "--cache-type-k q8_0 --cache-type-v q8_0 --flash-attn on", i.e. quantizing the KV cache, reduces the 'unused49' token rate to the same level I was getting in the beginning. I will investigate further.
FINAL UPDATE: This is 100% an Unsloth quantization problem. I tried Unsloth UD-Q4_K_XL and MXFP4_MOE; both have the 'unused49' problem: either an infinite loop of the 'unused49' token (most often with KV cache quantization disabled) or scattered 'unused49' tokens in the middle of generated text (most often with KV cache quantization set at q8_0).
I also tried lmstudio-community Q4_K_M; no such problem observed (with cache quants at q8_0/q4_0 and mmproj on/off), everything working as normal.
P.S. Unsloth UD-Q5_K_S doesn't have that problem either...
@GideonWyeth No it's not our issue - we're already engaging with llama.cpp and trying ourselves to fix it - see https://github.com/ggml-org/llama.cpp/issues/21321
OP on https://github.com/ggml-org/llama.cpp/issues/21321 had ggml-org/gemma-4-26B-A4B-it-GGUF/gemma-4-26B-A4B-it-f16.gguf (F16) - and the same unused-token issue was observed
https://github.com/ggml-org/llama.cpp/pull/21566/changes should hopefully fix things
Although I understand you are operating based on user reports, my own testing shows that the Unsloth UD-Q4_K_XL and MXFP4_MOE quants give this error 100% of the time, while other Q4 quants (I tried bartowski, ggml, lmstudio) never show it. Tested on the same prompts, 50+ tests on each variant; maybe my test samples are not that big, but if the error didn't show in 50 tries, I don't think it will show (quite unlikely).
I do not see this error with the latest version of llama.cpp.
llama-cli -m gemma-4-26B-A4B-it-UD-Q8_K_XL.gguf --jinja -c 10000 -ngl 999 -fa 1 --no-mmap
ggml_cuda_init: found 1 ROCm devices (Total VRAM: 126976 MiB):
Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, VRAM: 126976 MiB
Loading model...
▄▄ ▄▄
██ ██
██ ██ ▀▀█▄ ███▄███▄ ▀▀█▄ ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██ ██ ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
██ ██
▀▀ ▀▀
build : b8696-69c28f154
model : gemma-4-26B-A4B-it-UD-Q8_K_XL.gguf
modalities : text
available commands:
/exit or Ctrl+C stop or exit
/regen regenerate the last response
/clear clear the chat history
/read <file> add a text file
/glob <pattern> add text files using globbing pattern
> hey
[Start thinking]
The user said "hey".
This is a standard greeting.
Respond in a friendly, helpful, and professional manner to initiate the conversation.
* "Hello! How can I help you today?"
* "Hey there! What's on your mind?"
* "Hi! Is there anything I can assist you with?"
"Hello! How can I help you today?" (Simple, direct, and inviting).
[End thinking]
Hello! How can I help you today?
We updated the quants due to multiple bug fixes being pushed to llama.cpp!
It should solve this issue - see https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF/discussions/20
Yes, just retested the latest quants. Everything looks good now, but I also updated llama.cpp (b8691) before the tests, so I can't be sure if it was your quants, the llama.cpp code, or both.
I do not see this error with the latest version of llama.cpp.
build : b8696-69c28f154 model : gemma-4-26B-A4B-it-UD-Q8_K_XL.gguf
This error wasn't present in Q5 quants and above. And it's solved now, anyway.
Well, the only thing that changed in the quants was the imatrix, which shouldn't affect the quants that much. We didn't change anything else.
Indeed, once the eval bug was fixed in llama.cpp, even the old quants worked, I had confirmed that.
Thank you, appreciate you confirming!!!! :)
I just encountered this <unused49> issue after re-downloading the models. I tested two: gemma-4-26B-A4B-it-UD-Q6_K_XL.gguf worked perfectly fine, while gemma-4-31B-it-UD-Q5_K_XL.gguf only returns <unused49> with or without thinking enabled. Before the model update, the 31B model was working well.
I'm running llama server with the latest docker image. All models run with mmproj-BF16.gguf, which was another thing I swapped from F16 version. Hope it could be reproduced and fixed.
Update: re-downloaded again and it's back to normal. Weird how all this happened.
Update2: The latest 4/11 version of gemma-4-26B-A4B-it-UD-Q6_K_XL.gguf started doing this again.
Using q4_k_xl UD quants here on CUDA 13.2 without any issues so far.
I couldn't run the latest llama.cpp without either having CUDA 13.2 or compiling it myself.
Using:
Qwen 3 - 2507 a3b-30b instruct UD q4_k_xl
Gemma 4 - a4b-26b-it UD q4_k_xl
For other AMD users on Linux with dual GPU setup (in my case dual RX 7800 XT), check the Vulkan version of Llama.cpp.
I first tried multiple versions of llama.cpp for Linux ROCm; all of them returned only the same error garbage.
When I switched to Vulkan and ran this: ./llama-server -hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q8_K_XL -c 32768 --n-gpu-layers 999
I got a different result:
Conclusion: try Vulkan llama.cpp
Despite the merged fix in llama.cpp, I'm still seeing this issue when using SYCL.
I'm having this issue with LM Studio.
Versions:
CUDA 12 Llama 2.13.0 (Windows)
LM Studio 0.4.11 (Build 1)
google/gemma-4-26b-a4b
Using q8_0 version
I can also confirm that gemma4:31b q4_k_xl with f16 and f32 vision projectors (I tried both - but no images are present in the testing context) still randomly results in <unused49> floods with today's build of llama.cpp. On Linux/CUDA13 and freshly downloaded ggufs from hf (timestamps on them suggest they were updated a few days ago).
Stopping llama-server, restarting and recomputing the context / kv cache makes it go away again for a while, but it eventually still comes up again.
It is currently unclear if this is the unsloth quant problem or llama.cpp bug.
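The restart-and-recompute workaround above can be scripted while the root cause is being tracked down. A minimal sketch, assuming your client drives generation through some callable and has a way to bounce the server (both `generate` and `restart` are hypothetical placeholders, not real llama.cpp APIs):

```python
from typing import Callable

def generate_with_restart(prompt: str,
                          generate: Callable[[str], str],
                          restart: Callable[[], None],
                          max_attempts: int = 3) -> str:
    """Retry generation, restarting the backend whenever the output
    degenerates into the <unusedNN> flood reported in this thread."""
    for _ in range(max_attempts):
        out = generate(prompt)
        if "<unused" not in out:
            return out
        restart()  # e.g. bounce llama-server and recompute the KV cache
    raise RuntimeError("output still flooded with <unusedNN> tokens")
```

Since the flood reportedly persists until the model is reloaded, restarting between attempts (rather than plain retrying) is the important part.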
Had the same problem on my end.
I can confirm a fix for now: if you're on 2.12/2.13, downgrading the llama.cpp runtimes to 2.11 fixes it.
I don't know what they were thinking releasing this; everything was working one day, and the next it all went to trash. No matter what config I tried, it started spitting the unused49 across all my LLMs.
I'm on an NVIDIA 5080 with 64 GB DDR5 RAM.
I recommend reporting this in this thread. It would help the llama.cpp team to come up with a fix. Right now it seems that they have not pinpointed where the issue originated.
I rebuilt llama.cpp b8795 (Apr 14 2026). I've never seen that problem since that day; I only see infinite loops sometimes, but those are handled by a retry queue. I tried a lot of quants; all of them produced unusedX tokens or infinite loops of some other words, but when I updated llama.cpp the unusedX problem was gone. Now it looks like this problem lies somewhere outside of the Unsloth quants; it just happened to affect their quants, but it was not created by their team.
P.S. Maybe it happened because the Unsloth team made their quants on the exact day some bad commit landed in llama.cpp, but the only way to fix such problems is for the llama.cpp team to stop rolling infinite updates every day and start doing proper version management, with update accumulation. I guess before they cross build b9999 (which at their speed will happen very soon), it's a good time to consider NOT going for b10000.
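For what it's worth, the "retry queue" mentioned above only needs a crude repetition check to decide when to retry. A hypothetical sketch (the window and repeat counts are arbitrary tuning knobs):

```python
def looks_looped(text: str, max_period: int = 40, repeats: int = 4) -> bool:
    """Crude loop detector: True if the tail of `text` is some chunk of
    up to `max_period` characters repeated `repeats` or more times
    back-to-back (catches both <unusedNN> floods and word loops)."""
    for period in range(1, max_period + 1):
        span = period * repeats
        if len(text) >= span and text[-span:] == text[-period:] * repeats:
            return True
    return False
```

This only catches exact back-to-back repetition; near-duplicates or long-period loops would need an n-gram or entropy check instead.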