GLM-5.1 Tool Calling Bug Fix — Chat Template Update Required

#26
by ZHANGYUXUAN-zR - opened

If you are deploying GLM-5.1 with vLLM or SGLang and using tool calling, please update your chat template.

What was the bug?

When tool calling is used, vLLM and some other inference frameworks automatically convert tool message content from a plain string into an array of content parts ([{"type": "text", "text": "..."}]) before passing it to the chat template. This normalization follows the OpenAI Chat Completions spec, which allows content to be either a string or an array of content parts. The previous template only handled the string format, so array-format tool results were rendered as empty. As a result, the model never saw the tool output and would repeatedly re-invoke the same tool call in a loop.
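The behavior above can be sketched in a few lines of Python. This is an illustrative sketch, not vLLM's actual code: `normalize_content` mimics the framework-side conversion of a string into the array-of-parts form, and `content_to_text` is a hypothetical tolerant reader that handles both shapes, which is what the fixed template effectively does. The old template only covered the string branch, so array-form results came out empty.

```python
def normalize_content(content):
    """Mimic framework normalization: wrap a plain string
    into the OpenAI array-of-parts form."""
    if isinstance(content, str):
        return [{"type": "text", "text": content}]
    return content

def content_to_text(content):
    """Extract text whether content is a plain string or an
    array of content parts. The buggy template handled only
    the string case, so array-form tool results rendered empty."""
    if isinstance(content, str):
        return content
    return "".join(
        part.get("text", "")
        for part in content
        if part.get("type") == "text"
    )

tool_msg = {"role": "tool", "content": "42 degrees"}

# Both the original string and the normalized array form
# should yield the same text for the template to render.
print(content_to_text(tool_msg["content"]))                    # prints: 42 degrees
print(content_to_text(normalize_content(tool_msg["content"]))) # prints: 42 degrees
```

With the old template, the second call is the one that effectively returned an empty string, which is why the model looped on the same tool call.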

Which models are affected?

All GLM-5.1 variants deployed via vLLM or SGLang are confirmed to be affected. If you are running GLM-5.1 on another open-source framework, we recommend verifying the behavior on your own setup, as the same chat template issue may apply.

How to fix?

Replace your chat_template.jinja with the updated version in our Hugging Face repository. No changes to your launch command are required.

If you still see repeated tool calls after updating, please open a new issue.

Finally fixed it, man. People were complaining about this nonstop.

ZHANGYUXUAN-zR pinned discussion

@ZHANGYUXUAN-zR could you also help review https://github.com/sgl-project/sglang/pull/22595 and check whether it looks reasonable? We have added compatibility on the framework side as well.

checked

It works on our side now, no more issues.

👍🏻👍🏻
