This is an MXFP4_MOE quantization of the model Huihui-LFM2-8B-A1B-abliterated.

- Format: GGUF
- Model size: 8B params
- Architecture: lfm2moe
- Quantization: 4-bit (MXFP4)

Repository: noctrex/Huihui-LFM2-8B-A1B-abliterated-MXFP4_MOE-GGUF (quantized from Huihui-LFM2-8B-A1B-abliterated)
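As a rough usage sketch, the GGUF file can be loaded with llama-cpp-python, assuming a build of llama.cpp recent enough to support the lfm2moe architecture. The exact GGUF filename is not listed here, so the glob pattern below is an assumption; check the repository's file listing for the actual name.

```python
from llama_cpp import Llama

# Download the quantized GGUF from the Hugging Face Hub and load it.
# The filename glob is an assumption -- verify the real file name in the repo.
llm = Llama.from_pretrained(
    repo_id="noctrex/Huihui-LFM2-8B-A1B-abliterated-MXFP4_MOE-GGUF",
    filename="*MXFP4*.gguf",  # assumed pattern matching the 4-bit MXFP4 file
    n_ctx=4096,               # context length; adjust to taste
    n_gpu_layers=-1,          # offload all layers to GPU if one is available
)

# Simple chat-style generation using the template embedded in the GGUF.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what MXFP4 quantization does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same file also works directly with the llama.cpp CLI or server; the Python wrapper is shown only because it keeps the download and inference steps in one short script.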