Original model - https://huggingface.co/huihui-ai/Huihui-gpt-oss-120b-BF16-abliterated

This model was built specifically for use with the NVIDIA DGX Spark.

Abliterated version of gpt-oss-120b, converted back from BF16 to MXFP4 (the original format) so that you can use it natively with vLLM.

It gives you 40-50 tokens/s with a 100k context on the NVIDIA DGX Spark, which makes it a solid choice for a daily driver.
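To serve the model natively with vLLM as mentioned above, an invocation along these lines should work. This is a sketch, not a tested command: the flags shown are standard vLLM server options, but the exact context length and port are assumptions you should adjust for your setup.

```shell
# Serve the model with vLLM's OpenAI-compatible server.
# --max-model-len caps the context window (100k per the note above);
# adjust port and other settings for your environment.
vllm serve batsclamp/Huihui-gpt-oss-120b-mxfp4-abliterated \
  --max-model-len 100000 \
  --port 8000
```

Once the server is up, any OpenAI-compatible client can talk to it at `http://localhost:8000/v1`.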

Please be aware: this is a "jailbroken" / "abliterated" / "unlocked" model. It will do whatever you tell it to do, so don't use it unless you know what you are doing!

There are alternative models too:

  • justinjja/gpt-oss-120b-Derestricted-MXFP4
  • kldzj/gpt-oss-120b-heretic-v2

Neither is as "derestricted" as this one, but they may be more "accurate".

Model size: 117B params (Safetensors) · Tensor types: BF16, U8
