license_name: hugston-licenced


This is an abliterated version of MiniMax-M2, based on huihui-ai/Huihui-MiniMax-M2-abliterated-GGUF, converted and quantized with Quanta, and run and tested with HugstonOne.
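For reference, the quantized GGUF files can be loaded with any llama.cpp-based runner. The snippet below is a minimal sketch using the llama-cpp-python bindings (not part of this release; HugstonOne or llama.cpp's llama-cli work similarly), and the model filename is only an assumed example.

```python
# Minimal sketch: load a 4-bit GGUF quant of this model with llama-cpp-python.
# The filename below is an assumption; use the actual .gguf file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="MiniMax-M2-abliterated-Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm("Explain what model abliteration means in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```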


Credit to MiniMaxAI for creating the model

Credit to huihui-ai/Huihui-MiniMax-M2-abliterated for the abliteration

Credit to the Hugston Team for testing, benchmarking, and more

Credit to the llama.cpp team for their great contribution

Credit to Hugging Face for the amazing hosting platform


Keep away from children

Here we show the behaviour (compared with another LLM) when running the model in HugstonOne:

Here we show Quanta, our converter and quantizer tool:


GGUF
Model size: 229B params
Architecture: minimax-m2
Quantization: 4-bit
