Can I obtain reasoning traces when using Ministral-3-14B-Reasoning-2512 via HuggingFace Transformers?

#7
by erjiaxiao - opened

Hello! Thank you for your wonderful work on the Ministral-3 Reasoning models. I have a question about using Hugging Face Transformers with mistralai/Ministral-3-14B-Reasoning-2512. With my current setup (calling model.generate(...) and decoding with the tokenizer), I only get the final answer; there are no explicit reasoning / chain-of-thought traces in the output. Is it possible to obtain the reasoning process when using this model directly via Transformers?

Thank you for your guidance!
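(In case it helps others hitting this: one thing worth checking is whether the trace is there but stripped during decoding. A minimal sketch, assuming the model wraps its trace in literal [THINK]…[/THINK] tags and that those are special tokens dropped by default decoding — decode with skip_special_tokens=False and split the trace out yourself. The extract_reasoning helper and the exact tag strings are my assumptions, not a documented API; check the tokenizer's special tokens for the real markers.)

```python
import re

# Assumed tag strings; inspect tokenizer.special_tokens_map / added tokens
# to confirm what this model actually emits.
THINK_OPEN, THINK_CLOSE = "[THINK]", "[/THINK]"

def extract_reasoning(decoded: str):
    """Split a decoded generation into (reasoning, final_answer).

    `decoded` should come from tokenizer.decode(ids, skip_special_tokens=False)
    so the think tags are not stripped. Returns (None, decoded) if no tags.
    """
    pattern = re.escape(THINK_OPEN) + r"(.*?)" + re.escape(THINK_CLOSE)
    match = re.search(pattern, decoded, re.DOTALL)
    if match is None:
        return None, decoded.strip()
    reasoning = match.group(1).strip()
    answer = decoded[match.end():].strip()
    return reasoning, answer

# Example on a hard-coded string standing in for a real generation:
sample = "[THINK]2 + 2 is 4.[/THINK]The answer is 4."
reasoning, answer = extract_reasoning(sample)
print(reasoning)  # → 2 + 2 is 4.
print(answer)     # → The answer is 4.
```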

Similar issue here, even with the OpenAI /chat/completions API without stream=True (the only option I changed from the demo code for vllm serve). It looks like the [THINK][/THINK] tags are somehow trimmed from the output, while reasoning_content is empty. I haven't seen other models behave like this. Any suggestions? @juliendenize @pandora-s
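(A client-side stopgap I've been using, sketched under two assumptions: the server is a plain vllm serve endpoint, and when the trace does survive it appears as a literal [THINK]…[/THINK] span in content. The get_reasoning helper below is hypothetical, not part of the OpenAI client or vLLM: it prefers reasoning_content when the server's reasoning parser fills it, and otherwise falls back to parsing content.)

```python
def get_reasoning(message: dict):
    """Return the reasoning trace from a /chat/completions message dict.

    Prefers the reasoning_content field (populated when the server's
    reasoning parser is active); otherwise falls back to extracting an
    inline [THINK]...[/THINK] span from content. Returns None if neither
    is present -- which matches the symptom described above.
    """
    reasoning = message.get("reasoning_content")
    if reasoning:
        return reasoning
    content = message.get("content") or ""
    start = content.find("[THINK]")
    end = content.find("[/THINK]")
    if start != -1 and end != -1:
        return content[start + len("[THINK]"):end].strip()
    return None

# Example with a mocked response message:
msg = {"reasoning_content": None, "content": "[THINK]check units[/THINK]42 m/s"}
print(get_reasoning(msg))  # → check units
```

If get_reasoning returns None even on the mocked path, the tags really were removed server-side before the response was built, which would point at the serving stack rather than the client.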
