Request: Step-3.5-Flash REAP variant with ~40% pruning

#3
by rodrigomt

Excellent work on the MiniMax REAP versions. I'd like to know if you could create a REAP variant with a pruning rate of around 40% of the total parameters, reducing the model from its original ~196B total parameters down to roughly 118B. This could be a great middle ground between efficiency and maximum performance, especially for users who need a lighter deployment footprint while still retaining strong reasoning capabilities.
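For reference, the parameter arithmetic behind the target size (a rough sketch only; since REAP prunes MoE experts rather than the whole network, the exact achievable total depends on how many parameters sit in the experts vs. attention/shared layers):

```python
# Back-of-the-envelope sizing for the requested variant (illustrative only;
# REAP removes MoE experts, so attention/shared parameters are untouched and
# the real final count depends on the expert/non-expert split).
total_params_b = 196      # original total parameters, in billions
prune_fraction = 0.40     # requested pruning rate

remaining_b = total_params_b * (1 - prune_fraction)
print(f"~{remaining_b:.0f}B total parameters after pruning")  # ~118B
```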

Cerebras org

Hey @lazarevich

Thank you for the quick update. I truly appreciate the time and effort you've dedicated to this.

Awesome!!! Any way to add this to the queue? Qwen/Qwen3.5-397B-A17B? NVFP4 format preferred, although GGUFs are awesome!

Hey @lazarevich

Can you clear something up for me? Do REAP-pruned models work better in PyTorch, and are they more sensitive to llama.cpp quantization in GGUF format? I've been running into several bugs with quantized GGUFs at every size I've tried, for example the model getting stuck in its reasoning phase and failing to generate code properly.
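For context, here's roughly how I've been trying to isolate whether quantization itself is the culprit: run the same prompt greedily through both backends and compare where the outputs diverge. A minimal sketch, assuming transformers and llama-cpp-python are installed; the repo ID and GGUF path below are hypothetical placeholders:

```python
# Minimal A/B check: same prompt, greedy decoding, full-precision PyTorch
# backend vs. a quantized GGUF via llama.cpp bindings. If the GGUF output
# degenerates (e.g. loops in the reasoning phase) where the PyTorch output
# does not, quantization is the likely culprit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from llama_cpp import Llama

MODEL_ID = "org/reap-pruned-model"             # hypothetical HF repo ID
GGUF_PATH = "./reap-pruned-model-Q4_K_M.gguf"  # hypothetical local GGUF file
PROMPT = "Write a Python function that reverses a linked list."

# --- PyTorch / transformers (unquantized reference) ---
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tok(PROMPT, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
ref_text = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# --- llama.cpp / quantized GGUF (temperature 0.0 for greedy decoding) ---
llm = Llama(model_path=GGUF_PATH, n_ctx=4096)
gguf_text = llm(PROMPT, max_tokens=256, temperature=0.0)["choices"][0]["text"]

print("=== transformers (bf16) ===\n", ref_text)
print("=== llama.cpp (GGUF) ===\n", gguf_text)
```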
