This is an MXFP4_MOE quantization of the model Grok 2.

Quantized from the F16 GGUFs at: https://huggingface.co/unsloth/grok-2-GGUF

Original model: https://huggingface.co/xai-org/grok-2

This model's GGUFs have been removed to conserve storage space in my repositories.
If you want them, just message me and I will make them available on demand.
