MXFP4 format?
Hi there,
Is it possible to create an MXFP4 version of this model? The current gpt-oss-120b is 63.39 GB, while the "Q4 K M GGUF" is 81.82 GB.
Also, is it possible to improve the name of this model? When I load it in LM Studio, I just see "Q4 K M GGUF" instead of "Huihui gpt-oss 120B Abliterated" or similar.
I have no idea how LM Studio handles this, but with the original "gpt-oss-120b" I see an option to set reasoning to "low", "medium", or "high", which I don't see for this model. Not sure whether that's hardcoded, but it would be nice to have.
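The size gap above largely comes down to bits-per-weight arithmetic. A rough sketch (the block layouts below come from the OCP Microscaling spec and llama.cpp's quantization source; treat the exact constants as assumptions, and note that Q4_K_M mixes block types, so the average is approximate):

```python
# MXFP4 (OCP Microscaling spec): 32 FP4 elements share one 8-bit scale.
mxfp4_bpw = (32 * 4 + 8) / 32      # = 4.25 bits per weight

# llama.cpp Q4_K: a 256-element super-block stored in 144 bytes
# (two fp16 scales + 12 bytes of sub-block scales + 128 bytes of nibbles).
q4_k_bpw = 144 * 8 / 256           # = 4.5 bits per weight

# Q4_K_M also stores some tensors at Q6_K (210 bytes per 256 weights),
# which pushes the file-wide average higher -- hence ~82 GB vs ~63 GB.
q6_k_bpw = 210 * 8 / 256           # = 6.5625 bits per weight

print(mxfp4_bpw, q4_k_bpw, q6_k_bpw)
```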
What name format would you like to use? I haven't used LM Studio, so I'm not familiar with its naming conventions.
Also, to my knowledge, the current version of Transformers doesn't support saving in MXFP4 format. (It can dequantize MXFP4 to bf16, but it cannot quantize bf16 back to MXFP4.)
GPUs with Compute Capability >= 9.0 support MXFP4.
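For background on what the format actually is: per the OCP Microscaling spec, an MXFP4 block is 32 FP4 (E2M1) elements sharing one power-of-two (E8M0) scale. A minimal pure-Python round-trip sketch, purely illustrative (this is not Transformers' or llama.cpp's actual kernel):

```python
import math

# Representable non-negative E2M1 magnitudes (the sign bit is separate).
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def mxfp4_round_trip(block):
    """Quantize a block (up to 32 floats) to FP4 + shared scale, then decode."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0.0] * len(block)
    # E8M0 shared scale: the power of two that aligns the largest magnitude
    # with E2M1's maximum representable value (6.0 = 1.5 * 2**2).
    scale = 2.0 ** (math.floor(math.log2(amax)) - 2)
    out = []
    for x in block:
        # Round each scaled element to the nearest representable FP4 magnitude.
        q = min(E2M1, key=lambda v: abs(abs(x) / scale - v))
        out.append(math.copysign(q * scale, x))
    return out

print(mxfp4_round_trip([6.0, -3.0, 0.5, 0.0]))  # exactly representable block
print(mxfp4_round_trip([11.0, 2.2]))            # values get rounded
```

Since the shared scale is 8 bits per 32 elements, this works out to 4.25 bits per weight, which is why the MXFP4 file comes in well under the Q4_K_M one.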
As far as I know, it comes from the names of the GGUF files and the folders containing them (though I guess it's not a good idea to change those for now).
For example, here is how this project looks in LM Studio:
This is how the files are named:
On the other hand, the unsloth builds look like this:
And this is their file name structure:
Now, taking an unsloth project that uses your same folder convention, this is how their GGUF files look:
I hope this doesn't come across as an annoying demand, but rather as a way to improve how the model appears in a popular tool like LM Studio (which is promoted by AMD).
The GGUF files generated later will follow this convention.