# MiniMax-M2.7 — 155 GB (MLX)

An earlier build in the MiniMax-M2.7 mixed-precision MLX family, by baa.ai.
## Current builds
For updated builds with HumanEval results and recommended inference settings, see:
| Variant | On-disk size | Repository |
|---|---|---|
| 100 GB | 100.1 GB | `baa-ai/MiniMax-M2.7-RAM-100GB-MLX` |
| 111 GB | 110.9 GB | `baa-ai/MiniMax-M2.7-RAM-111GB-MLX` |
| 116 GB | 116.0 GB | `baa-ai/MiniMax-M2.7-RAM-116GB-MLX` |
| 120 GB | 120.1 GB | `baa-ai/MiniMax-M2.7-RAM-120GB-MLX` |
## Usage
```python
from mlx_lm import load, generate

# Download (if needed) and load the weights and tokenizer from the Hub.
model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-155GB-MLX")

response = generate(model, tokenizer, prompt="Hello!", max_tokens=512)
print(response)
```
## License
Inherited from the upstream MiniMax-M2.7 license: non-commercial use permitted; commercial use requires written authorization from MiniMax.
Quantized by baa.ai
## Model details

- Model size: 229B params
- Tensor types: BF16, U32, F32
- Quantization: 4-bit
## Model tree

Base model: `MiniMaxAI/MiniMax-M2.7`