Never mind the benchmarks, MiniMax M2.1 outshines GLM 4.7
Hopefully this will help someone who's looking for a competent coder to run in a local setup. I've been using BOTH heavily since they were released, and there's no question: MiniMax M2.1 beats GLM 4.7 in everyday coding tasks. Once you get beyond 4.7's impressive one-shot capability, there's not much meat on the bone. I've tried GLM 4.7 in a handful of projects, and each and every time I find more occasions where the model makes dumb decisions, doesn't follow instructions, or literally gets lost in the code. Too many times it takes an issue where, for example, the backend API needs a change, then drops the ball and updates the frontend incorrectly, forgets some arguments, or gets the argument types wrong. As another example, it repeatedly confuses dict and tuple in backend code, causing failures and lots of tokens burned remediating issues it created, not to mention the time it takes to fix all the stuff.
Another example: it completely messed up a container deployment, and after dozens of iterations it still can't figure out how to create the proper file structure or avoid circular dependencies and the other general messes it creates.
This is not just because I'm running the Q8 quant on my rig, either. I signed up for the Z.ai pro plan and have been using 4.7 there too, and I see the same issues: the model frequently messes things up and is generally dumber than MiniMax M2.1.
We have multiple open-weights models of this caliber claiming to beat state-of-the-art models from Anthropic, Google and OpenAI. The only thing this has taught me is that benchmarks don't mean jackshit anymore. It seems a LOT of model factories are more interested in benchmark-maxing than in making actually useful coding models.
Having said all that, I am appreciative of all the work Unsloth does to make these quants and get them to the community FAST! A serious thumbs up to you guys. I also appreciate the work you do to make the models BETTER than the official releases: chat template updates, improved tool calling, etc. Looking forward to a model that beats MiniMax M2.1 at this size. GLM 4.7 is NOT it.
This is hugely useful information. I was debating downloading GLM-4.7 but I'll probably skip it for now. I have MiniMax M2.1 running comfortably at Q4 on my PC. I've been very satisfied with it overall, so I'll keep it going.
I'd say using it with Claude Code plus a custom impact-analysis agent (to see what other changes need to be made to propagate the proposed change) truly made it much more usable. GLM 4.7 is my go-to model rn.
Is that impact analysis agent open source e.g. can you share a link to GitHub for the script?
Propagating changes through the whole codebase has always been such a headache.
@TimothyRoo I created the custom agent inside Claude Code. CC allows you to create new agents that the orchestrator/main LLM can call, and you can define specific roles for these agents, from bug bounty to impact analysis. Claude Code helps you create the agent too, so you don't need to write the full prompt by yourself.
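For anyone who wants to try this: Claude Code subagents are Markdown files with YAML frontmatter placed under `.claude/agents/` in your project. A minimal sketch of what an impact-analysis agent might look like — the file name, description, tool list, and prompt below are my own guesses, not the commenter's actual agent:

```markdown
---
name: impact-analysis
description: Use proactively before applying a proposed code change to find
  every other place in the codebase that must be updated to stay consistent.
tools: Read, Grep, Glob
---

You are an impact-analysis agent. Given a proposed change (e.g. a backend
API signature change), do the following:

1. Locate the definition being changed, plus all call sites, imports, and
   serialized uses of it (frontend clients, tests, fixtures, docs).
2. For each affected site, state exactly what must change: argument names,
   types, return shapes, error handling.
3. Report a checklist of files and required edits. Do NOT edit files
   yourself; only report.
```

With a file like this in place, the main model can delegate to the agent before making edits, which is roughly the workflow described above. Running `/agents` inside Claude Code will also walk you through generating one of these interactively.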
I agree that MiniMax M2.1 is probably better for chat-level coding, but in Claude Code, I just find GLM to be too good.