https://huggingface.co/cloudyu/GPT-OSS-120B-MLX-q4-Claude-4.6-Opus-Reasoning-Distilled
#2197
by Rubertigno - opened
It's queued!
Sorry for the late reply. Richard, who usually queues the models, is currently busy.
You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#GPT-OSS-120B-MLX-q4-Claude-4.6-Opus-Reasoning-Distilled-GGUF for quants to appear.
In addition to our usual quants, I added the somewhat controversial MXFP4_MOE quants made specially for GPT-OSS.
I just noticed that the model is in the MLX format. No idea whether that can be converted to a GGUF. I guess we will see soon, but I suspect not.
MLX models usually don't work with GGUF conversion; it only rarely succeeds, so I normally don't even queue them.
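One way to spot this before queueing is a quick heuristic over the repo's file listing: MLX exports typically ship `model.safetensors` shards plus a `config.json`, while a GGUF repo ships `*.gguf` files. The sketch below is only a rough filename-based guess (the marker names are assumptions drawn from typical community uploads, not a definitive format check):

```python
# Heuristic sketch: guess whether a model repo looks like an MLX export
# rather than a GGUF one, based purely on filenames. The marker names
# here are assumptions, not an authoritative format detector.

def looks_like_mlx_repo(filenames):
    """Return True if the file list suggests an MLX-format model."""
    names = {f.lower() for f in filenames}
    # A GGUF repo is identified by its .gguf weight files.
    if any(n.endswith(".gguf") for n in names):
        return False
    # MLX uploads usually carry safetensors weights plus a config.json.
    mlx_markers = ("model.safetensors", "config.json")
    return all(any(marker in n for n in names) for marker in mlx_markers)

# Typical MLX-style upload vs. a GGUF quant repo:
print(looks_like_mlx_repo(["config.json", "model.safetensors", "tokenizer.json"]))  # True
print(looks_like_mlx_repo(["model-q4_0.gguf"]))  # False
```

In practice one would feed this the output of `huggingface_hub.list_repo_files(repo_id)`, but the heuristic itself needs no network access.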