Kind request for Qwen3.5-397B-A17B MXFP4 BF16
Hi Noctrex,
First of all, thanks for the hard work you put into releasing so many super quants! 🙏
As the title says, dare I kindly ask for an MXFP4 / BF16 GGUF of Qwen3.5-397B-A17B.
I am on Ampere (8x RTX 3090), and based on your previous MXFP4 releases I would really like to test this quantization on this model. The model seems highly capable, and MXFP4 has so far been quite speedy on my configuration!
Many thanks in advance if the model gets released! ✌️
Yeah, that's quite a large model. Let me see if I can find some space to quantize it.
Thanks for the MXFP4 version, it runs very well. An abliterated version would be nice 🙏
Here you go: https://huggingface.co/noctrex/Qwen3.5-397B-A17B-MXFP4_MOE-GGUF
Thank you for your help! Many thanks! 🙏
No problem