MiniMax-M2.7-Abliterated-Heretic-MLX-4bit

This is the 4-bit Apple MLX release of an abliterated version of MiniMaxAI's MiniMax-M2.7.

Heretic's Ablated Refusal Adaptation (ARA) was applied to remove the base model's refusal behavior at the weight level. The result keeps MiniMax-M2.7's sparse MoE reasoning, long-context instruction following, and general capability profile, but no longer defaults to the original refusal pattern.

Quantization

This build uses layer-aware mixed 4/5-bit MLX quantization. The bulk of the model is quantized to 4-bit, while sensitive projection and output modules are kept at 5-bit for better stability.

  • Format: MLX safetensors
  • Effective quantization: 4.662 bits per weight
  • Runtime: mlx-lm
  • Source checkpoint: Youssofal/MiniMax-M2.7-abliterated-BF16
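As a rough illustration of where the 4.662 bits-per-weight figure comes from, the effective rate is a weighted average of the 4-bit and 5-bit payloads plus per-group scale/bias metadata. The function name and the ~16% 5-bit fraction below are hypothetical, chosen only to show the arithmetic; the real mix is determined by the layer-aware recipe:

```python
def effective_bpw(frac_5bit, group_size=64, scale_bits=16, bias_bits=16):
    """Approximate bits per weight for a mixed 4/5-bit MLX quantization.

    MLX affine quantization stores a scale and a bias per group of
    `group_size` weights; that metadata adds a fixed per-weight overhead.
    """
    overhead = (scale_bits + bias_bits) / group_size   # metadata cost per weight
    payload = 4 * (1 - frac_5bit) + 5 * frac_5bit      # weighted payload bits
    return payload + overhead

# A hypothetical split with ~16% of weights at 5-bit lands near the
# published figure of ~4.66 bpw
print(round(effective_bpw(0.16), 3))
```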

Methodology & Model Notes

MiniMax-M2.7 is a 229B sparse MoE model with 10B active parameters per token, 62 layers, hybrid attention, 256 local experts with 8 active per token, and a 200K context window.
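The "256 local experts, 8 active per token" pattern above can be sketched as generic top-k softmax routing. The actual router in MiniMax-M2.7 may differ in details (normalization, load balancing); this is only an illustration of how 8 of 256 experts get selected and weighted:

```python
import math

def route_token(logits, k=8):
    """Pick the top-k experts for one token from router logits (one score
    per local expert) and return (expert_index, weight) pairs with the
    softmax weights renormalized over just the selected experts."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]   # numerically stable softmax
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

scores = [0.0] * 256
scores[3], scores[42], scores[7] = 2.0, 1.5, 1.0   # a few experts dominate
picks = route_token(scores, k=8)
print(picks[0][0])   # the highest-scoring expert comes first
```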

This release was produced with a direct Heretic ARA run using the fixed parameter set below:

  • start_layer_index = 30
  • end_layer_index = 51
  • preserve_good_behavior_weight = 0.4512
  • steer_bad_behavior_weight = 0.0037
  • overcorrect_relative_weight = 0.8804
  • neighbor_count = 14

The direct ARA run completed with Refusals: 0/25.
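At its core, weight-level refusal removal projects a learned "refusal direction" out of the affected weight matrices. The sketch below shows that generic directional-ablation step in pure Python; it is not the Heretic implementation, which additionally applies the preserve/steer/overcorrect weights and layer range listed above:

```python
def ablate_direction(weight, direction):
    """Remove the component of a weight matrix's output along a unit-norm
    refusal `direction`: W' = W - d (d^T W).

    Generic directional-ablation sketch on nested lists; real tooling
    operates on model tensors across the chosen layer range.
    """
    rows, cols = len(weight), len(weight[0])
    # d^T W : projection of each output column onto the direction
    proj = [sum(direction[r] * weight[r][c] for r in range(rows))
            for c in range(cols)]
    return [[weight[r][c] - direction[r] * proj[c] for c in range(cols)]
            for r in range(rows)]

W = [[1.0, 2.0], [3.0, 4.0]]
d = [1.0, 0.0]                  # toy unit-norm direction
W2 = ablate_direction(W, d)
print(W2)                       # the component along d is zeroed out
```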

Validation

This 4-bit MLX variant was built from the same validated abliterated BF16 checkpoint as the GGUF and 3-bit MLX releases. It is published as the higher-quality Apple Silicon MLX option for users who want more precision than the 3-bit variant.

Running

from mlx_lm import load, generate

# Load the quantized model and tokenizer from the Hugging Face Hub
model, tokenizer = load("Youssofal/MiniMax-M2.7-Abliterated-Heretic-MLX-4bit")

# Build a chat-formatted prompt from a single user message
messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate up to 256 new tokens and print the completion
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)

Model Architecture

  • Total Parameters: 229B sparse MoE
  • Active Parameters: 10B per token
  • Experts: 256 local, 8 active per token
  • Layers: 62
  • Attention: Hybrid, 7 Lightning + 1 softmax per 8-layer block
  • Context: 200K tokens
  • Base Model: MiniMaxAI/MiniMax-M2.7
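The hybrid attention layout can be sketched as a simple per-layer rule. Only the 7:1 Lightning-to-softmax ratio comes from the spec above; placing the softmax layer last within each 8-layer block is an assumption for illustration:

```python
def attention_kind(layer_index, block_size=8):
    """Assumed hybrid layout: within each 8-layer block, the first 7 layers
    use Lightning (linear) attention and the last uses full softmax
    attention. The exact placement in MiniMax-M2.7 may differ."""
    return "softmax" if layer_index % block_size == block_size - 1 else "lightning"

kinds = [attention_kind(i) for i in range(62)]   # 62 layers total
print(kinds.count("lightning"), kinds.count("softmax"))
```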

Disclaimer

This model has had refusal behavior removed at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.

License

This release inherits the base MiniMax-M2.7 license.

NON-COMMERCIAL. Commercial use requires written authorization from MiniMax.
