**WIP**

- add PPL/KLD benchmark data and graph
- noodle on a good 4ish BPW quant size... any requests?

# ik_llama.cpp imatrix Quantizations of MiniMaxAI/MiniMax-M2.7
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, with Windows builds for CUDA 12.9. Also check for Windows builds by Thireus, which have been CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and the YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
## Quant Collection
Perplexity computed against wiki.test.raw (lower is "better").

These two are just test quants for baseline perplexity comparison and are not available for download here:

- BF16 426.060 GiB (16.003 BPW) - PPL over 552 chunks for n_ctx=512 = 7.8743 +/- 0.05993
- Q8_0 226.431 GiB (8.505 BPW) - PPL over 552 chunks for n_ctx=512 = 7.8764 +/- 0.05997
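As a sanity check, BPW is just file size in bits over total parameter count; a quick sketch, assuming roughly 228.7B total parameters for MiniMax-M2 (an approximation back-solved from the BF16 row above, not an official figure):

```shell
# bits-per-weight = (size in GiB * 2^30 bytes * 8 bits) / total params
# 228.7e9 params is an approximation back-solved from the BF16 row above
bpw=$(awk 'BEGIN { printf "%.2f", 226.431 * 2^30 * 8 / 228.7e9 }')
echo "Q8_0 is about ${bpw} BPW"
```

The same arithmetic reproduces the BPW column for the other quants from their file sizes.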
NOTE: The first split file is much smaller on purpose as it only contains metadata; it's fine!
### IQ5_K 157.771 GiB (5.926 BPW)

PPL over 552 chunks for n_ctx=512 = 7.8860 +/- 0.05997
👈 Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/imatrix-MiniMax-M2.7-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-IQ5_K.gguf \
    IQ5_K \
    128
```
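The grep/sed pipeline in these recipes just drops the comment lines and joins the remaining tensor-override patterns into the single comma-separated string that `--custom-q` expects; a minimal standalone sketch with toy patterns (GNU grep/sed assumed for the `-z` null-data mode):

```shell
# toy two-pattern recipe; the real recipes use the regexes shown above
custom="
# a comment line that gets dropped
token_embd\.weight=q8_0
output\.weight=q8_0
"
# strip comment lines, then collapse runs of newlines into commas
# and trim any leading/trailing comma
custom=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')
echo "$custom"
```

This leaves `token_embd\.weight=q8_0,output\.weight=q8_0`, i.e. one comma-separated pattern list ready to pass to `llama-quantize --custom-q`.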
### smol-IQ3_KS 87.237 GiB (3.277 BPW)

PPL over 552 chunks for n_ctx=512 = 8.1491 +/- 0.06240
👈 Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/imatrix-MiniMax-M2.7-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-smol-IQ3_KS.gguf \
    IQ3_KS \
    128
```
### IQ2_KS 69.800 GiB (2.622 BPW)

PPL over 552 chunks for n_ctx=512 = 9.0713 +/- 0.07085
👈 Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/imatrix-MiniMax-M2.7-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.7-GGUF/MiniMax-M2.7-IQ2_KS.gguf \
    IQ2_KS \
    128
```
## Quick Start
```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)

# Download Desired Quant
$ pip install huggingface_hub
$ hf download --local-dir ./MiniMax-M2.7-GGUF/ --include=IQ2_KS/*.gguf ubergarm/MiniMax-M2.7-GGUF
```
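Each split file ends in `-0000X-of-0000N.gguf`, so a quick way to confirm the download finished is to count the files against that `N` suffix. A sketch using fake split files in a temp dir (with real quants, point `dir` at the folder the `hf download` command above created):

```shell
# demo with fake split files; a real IQ2_KS download has 3 splits
dir=$(mktemp -d)
touch "$dir"/model-0000{1,2,3}-of-00003.gguf

# count the .gguf files and compare to the -of-0000N suffix
n=$(ls "$dir" | wc -l)
echo "found $n of 3 expected splits"
```

Point `llama-server --model` at the first split only; the loader picks up the rest automatically.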
```bash
# Multi GPU Full Offload 128k+ context 96GB VRAM!!!
# Note: `-muge` and combination of `-vhad -sm graph` causes gibberish, see ik_llama.cpp issue in references
model=MiniMax-M2.7-IQ2_KS-00001-of-00003.gguf
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiniMax-M2.7 \
    -c 163840 \
    -khad -ctk q8_0 -ctv q6_0 \
    -sm graph \
    -ngl 99 \
    -ub 1024 -b 2048 \
    --threads 1 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --no-mmap
```
```bash
# CPU-Only
# NOTE: -muge causes gibberish, see ik_llama.cpp issue in references
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiniMax-M2.7 \
    --ctx-size 65536 \
    --merge-qkv \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```
For tool use you can always bring your own template with `--chat-template-file myTemplate.jinja`.

Advanced options like self-speculative decoding and using RAM for caching prompts (e.g. 8192 would use 8 GiB of RAM):
```bash
--spec-type ngram-map-k4v --spec-ngram-size-n 8 --draft-min 1 --draft-max 16 --draft-p-min 0.4 \
--cache-ram 8192 \
--prompt-cache-all
```
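Once llama-server is up, it exposes an OpenAI-compatible endpoint; a minimal chat request sketch (the host/port assume the flags in the examples above, and the payload is the standard chat-completions shape):

```shell
# build a standard chat-completions payload; the alias matches the
# --alias value used in the llama-server examples above
body='{"model":"ubergarm/MiniMax-M2.7","messages":[{"role":"user","content":"Hello"}]}'

# send it (commented out here since it needs a running server):
# curl -s http://127.0.0.1:8080/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$body"
echo "$body"
```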
## References