Incorrect built-in chat template

#3
by kurnevsky - opened

The model ships with the default Mistral chat template, see `tokenizer.chat_template`. The EOS token also seems wrong: generation doesn't stop after `<|im_end|>` even with a manually overridden prompt. Tested with the llama-cpp server.
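For anyone hitting the same issue, here's a minimal workaround sketch: format the prompt as ChatML by hand and pass `<|im_end|>` as an explicit stop string, instead of relying on the baked-in template. This assumes the model was actually trained on standard ChatML; the `chatml_prompt` helper name is mine, not part of any library.

```python
def chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        # Each turn: <|im_start|>role\ncontent<|im_end|>\n
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn to cue the model to respond
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

With the llama-cpp server you can send this to the plain completion endpoint with `"stop": ["<|im_end|>"]`, or alternatively start the server with `--chat-template chatml` to override the embedded template.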

XeyonAI org

Hey, thanks for flagging this. You're right on both counts: the EOS token and chat template in the tokenizer config were inherited from Mistral's base and weren't patched correctly for ChatML in that release. That model is retired now, so no update is coming for it, but the issue is fixed going forward and all new releases will ship with the correct EOS token and a clean ChatML template. Appreciate you taking the time to report it.
