This is an MXFP4_MOE quantization of the model Qwen3-Coder-30B-A3B-Instruct.

Format: GGUF
Model size: 31B params
Architecture: qwen3moe
Quantization: 4-bit (MXFP4_MOE)
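
To try the quantization locally, one option is llama-cpp-python together with huggingface_hub. The sketch below is a minimal example, not a verified recipe: the exact .gguf filename inside the repo is an assumption, so check the repository's file listing for the real name.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repo. The filename below is hypothetical;
# look up the actual file name in the repository's file list.
model_path = hf_hub_download(
    repo_id="noctrex/Qwen3-Coder-30B-A3B-Instruct-MXFP4_MOE-GGUF",
    filename="Qwen3-Coder-30B-A3B-Instruct-MXFP4_MOE.gguf",  # hypothetical
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(response["choices"][0]["message"]["content"])
```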
