Minimax 2.7

#1
by dustinogle1 - opened

Does this work with MiniMax 2.7? And is it possible to use it with something like LM Studio on MLX?

Or is it vLLM-only? If I can run vLLM on a Mac, would that work? I know there is an xmlx project built on vLLM.

It does work with M2.7, with an acceptance rate of around 25% and roughly a 16-17% speedup. Don't know about the rest.

This is actually much better than I thought...
I'm getting upwards of a 50% uptick in generation throughput; still investigating the right balance.
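For intuition on how acceptance rate relates to the observed speedup, here is a back-of-the-envelope sketch. It assumes the textbook speculative-decoding model (per-token acceptance probability `alpha` treated as independent, draft cost ignored), which is a simplification, not a claim about how MiniMax or vLLM measure it:

```python
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens emitted per target-model forward pass when the
    draft proposes k tokens and each is accepted with probability alpha.
    Standard geometric-series result: (1 - alpha^(k+1)) / (1 - alpha)."""
    if alpha == 1.0:
        return float(k + 1)
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# At a 25% acceptance rate, the theoretical ceiling is modest:
for k in (1, 2, 3, 5):
    print(f"k={k}: ~{expected_tokens_per_step(0.25, k):.2f} tokens/step")
# k=1 gives 1.25x, and the curve flattens near 1.33x as k grows,
# so a measured 16-17% end-to-end speedup (after draft overhead)
# is in the plausible range.
```

This also suggests why "the right balance" matters: past a few speculated tokens, extra draft length buys almost nothing at 25% acceptance while still costing draft compute.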

It's possible to fine-tune the M2.7 EAGLE3 head from the M2.5 one, which would take significantly less time than training it from scratch.

> It does work with M2.7, with an acceptance rate of around 25% and roughly a 16-17% speedup. Don't know about the rest.

Did you get this result using the standard https://huggingface.co/MiniMaxAI/MiniMax-M2.7, or are you able to run it with a quantized model, such as NVFP4?

The base model. I tried, but at the moment with voipmonitor:cu130, NVFP4 requires Spec V2, which limits top_k to 1, and that hurts the draft model's acceptance rate.
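For anyone trying to reproduce the unquantized setup: vLLM exposes EAGLE3 drafting through its speculative-decoding config. A launch sketch follows; the draft-head path and `num_speculative_tokens` value are illustrative placeholders, not confirmed settings from this thread:

```shell
# Sketch: serve the base MiniMax model with an EAGLE3 draft head in vLLM.
# "path/to/eagle3-draft-head" and the token count are assumptions;
# substitute the actual draft-head repo and tune k for acceptance rate.
vllm serve MiniMaxAI/MiniMax-M2.7 \
  --speculative-config '{
    "method": "eagle3",
    "model": "path/to/eagle3-draft-head",
    "num_speculative_tokens": 3
  }'
```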
