This is a direct conversion of the model HyperNova-60B to GGUF.

As the original model is already in MXFP4, this is essentially an unaltered pass-through.
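
As a quick usage sketch (not part of the conversion itself), the GGUF file can be loaded with any llama.cpp-compatible tooling. The snippet below assumes the `huggingface_hub` and `llama-cpp-python` packages; the filename is a placeholder, so check the repository for the actual GGUF file name before running it.

```python
# Minimal sketch: download the GGUF file and run a prompt with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename below is a placeholder; use the actual file listed in the repo.
gguf_path = hf_hub_download(
    repo_id="noctrex/HyperNova-60B-MXFP4_MOE-GGUF",
    filename="HyperNova-60B-MXFP4_MOE.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```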

- Format: GGUF
- Model size: 59B params
- Architecture: gpt-oss
- Quantization: 4-bit (MXFP4)

Model tree: noctrex/HyperNova-60B-MXFP4_MOE-GGUF is one of 12 quantizations of the base HyperNova-60B model.