MiniMax-M2.7-GGUF (229B MoE)

High-precision GGUF quants of MiniMax-M2.7 (229B parameters) Mixture of Experts model. Optimized for local inference on high-RAM setups, particularly Apple Silicon (M3 Max/Ultra).

Perplexity Validation (WikiText-2)

Quant     PPL (c=512, seed=1337)    Speed (M3 Max 128GB)
Q3_K_L    8.4400 ± 0.065            28.52 t/s

Baseline (MiniMax-M2.5 Q3_K_L): 8.7948 PPL, 28.7 t/s
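The baseline comparison works out to roughly a 4% perplexity reduction. A sketch of reproducing the measurement with llama.cpp's perplexity tool (the WikiText-2 file path is an assumption; adjust to wherever you keep the raw test set):

```shell
# Reproduce the measurement (requires the GGUF plus the WikiText-2 raw test set):
# ./build/bin/llama-perplexity -m minimax-m2.7-Q3_K_L.gguf \
#     -f wikitext-2-raw/wiki.test.raw -c 512 --seed 1337

# Relative improvement over the M2.5 baseline from the table above:
awk 'BEGIN { m27 = 8.4400; m25 = 8.7948;
             printf "%.1f%% lower PPL than M2.5\n", (m25 - m27) / m25 * 100 }'
```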

Available Quants

File                        Method    Size      Use Case
minimax-m2.7-Q3_K_L.gguf    Q3_K_L    ~110 GB   Sweet spot for 128GB Macs. Runs natively in RAM.
minimax-m2.7-Q8_0.gguf      Q8_0      ~243 GB   Maximum precision. Requires 256GB+ unified memory.
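The file sizes above imply the effective precision per weight. A quick back-of-the-envelope check (sizes are approximate, so these figures are too):

```shell
# Rough bits-per-weight implied by each file size: size_bytes * 8 / n_params
params=229e9
for entry in "Q3_K_L 110" "Q8_0 243"; do
  set -- $entry
  awk -v name="$1" -v gb="$2" -v p="$params" \
      'BEGIN { printf "%s: ~%.2f bits/weight\n", name, gb * 1e9 * 8 / p }'
done
```

Note that Q3_K_L lands near 3.8 bits/weight rather than exactly 3, since K-quants keep some tensors at higher precision.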

Model Highlights

  • Self-evolution: M2.7 participated in its own training, autonomously optimizing a programming scaffold over 100+ rounds for a 30% performance improvement
  • MLE Bench Lite: 66.6% medal rate (22 ML competitions), second only to Opus-4.6 and GPT-5.4
  • SWE-Pro: 56.22%, matching GPT-5.3-Codex
  • SWE Multilingual: 76.5 | Multi SWE Bench: 52.7
  • VIBE-Pro: 55.6%, nearly on par with Opus 4.6
  • Terminal Bench 2: 57.0% | NL2Repo: 39.8%
  • GDPval-AA ELO: 1495, the highest among open-source models
  • Native Agent Teams support for multi-agent collaboration

Model Details

  • Architecture: MiniMax-M2 (Mixture of Experts) with 256 experts (8 active per token)
  • Parameters: ~229B total
  • Quantization Process: FP8 safetensors → Q8_0 → Q3_K_L via llama.cpp
  • Context Window: Up to 196k tokens
  • Chat Template: Includes official Jinja template for <think> tag handling
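With 8 of 256 experts active per token, only a small slice of the expert weights is touched on each forward pass. The fraction below covers expert weights only; attention and any shared parameters are always active, so the true active-parameter count is higher:

```shell
# Share of experts active per token in the MoE layers: 8 of 256
awk 'BEGIN { printf "%.3f%% of experts active per token\n", 8 / 256 * 100 }'
```

This sparsity is why a 229B-parameter model can decode at ~28 t/s on a laptop-class chip: per-token compute scales with active parameters, not total.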

Recommended Inference Parameters

temperature=1.0, top_p=0.95, top_k=40

Default system prompt:

You are a helpful assistant. Your name is MiniMax-M2.7 and you are built by MiniMax.
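A sketch of putting the sampling settings and system prompt together in a request to llama-server's OpenAI-compatible endpoint (the localhost:8080 URL assumes the server command shown in Usage below; llama-server accepts top_k as an extra sampling field):

```shell
# Build a chat request with the recommended sampling settings and default system prompt
payload=$(cat <<'EOF'
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant. Your name is MiniMax-M2.7 and you are built by MiniMax."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 1.0,
  "top_p": 0.95,
  "top_k": 40
}
EOF
)
echo "$payload"
# Send it once the server from the Usage section is running:
# curl -s http://localhost:8080/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$payload"
```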

Usage

1. Install llama.cpp

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release -j

2. Download the model

# Q3_K_L (128GB Mac)
huggingface-cli download ox-ox/MiniMax-M2.7-GGUF \
  minimax-m2.7-Q3_K_L.gguf --local-dir .

# Q8_0 (256GB+)
huggingface-cli download ox-ox/MiniMax-M2.7-GGUF \
  minimax-m2.7-Q8_0.gguf --local-dir .

3. Unlock Metal memory limit (128GB Mac only)

The model weights use ~118 GB. Run this before launching to allow full GPU offload:

sudo sysctl iogpu.wired_limit_mb=122000
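The 122000 MB figure is chosen to leave headroom for macOS itself; a quick check of how much, assuming 128 GiB total unified memory (note the limit resets on reboot):

```shell
# Headroom left for the OS after raising the GPU wired limit to 122000 MB
# on a 128 GiB machine
awk 'BEGIN { total_mb = 128 * 1024; limit_mb = 122000;
             printf "%d MB left for the OS\n", total_mb - limit_mb }'
```

With ~118 GB of weights wired for the GPU, that remaining ~9 GB is tight; close memory-hungry apps before launching.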

4. Run

./build/bin/llama-server -m minimax-m2.7-Q3_K_L.gguf \
  -ngl 99 \
  --ctx-size 512 \
  -b 512 -ub 512 \
  --port 8080 \
  --jinja

⚠️ License: Non-commercial use only. Commercial use requires written authorization from MiniMax. See LICENSE.
