RAM-quantized versions of MiniMaxAI/MiniMax-M2.7 for Apple Silicon, at multiple size points from 90 GB to 203 GB.
AI & ML interests: Model Quantization
Organization Card
Smaller. Smarter. Sovereign.
Making frontier models run anywhere
We publish high-quality quantized models for Apple Silicon (MLX) and in GGUF format. Our models use a proprietary optimisation method that delivers superior quality at your target memory budget.
Browse our models, or connect with us below.
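As a rough illustration of how bit width maps to a memory budget (a sketch added here, not the proprietary method itself: it counts weight storage only and ignores the KV cache, activations, and any mixed-precision layers):

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in decimal GB for a quantized model."""
    return n_params * bits_per_weight / 8 / 1e9

# A 229B-parameter model at 4 bits per weight needs roughly 114.5 GB
# for weights alone; hitting a 90 GB budget implies ~3.1 effective bits.
print(round(weight_memory_gb(229e9, 4), 1))   # → 114.5
print(round(90e9 * 8 / 229e9, 1))             # → 3.1
```

This is why the MiniMax-M2.7 repos below span 90 GB to 203 GB for the same 229B parameters: each size point corresponds to a different effective bit width.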
Models (37)
baa-ai/Gemma-4-31B-it-RAM-31GB-MLX
Text Generation • 31B • Updated • 572 • 1
baa-ai/MiniMax-M2.7-RAM-90GB-MLX
Text Generation • 229B • Updated • 21 • 4
baa-ai/MiniMax-M2.7-RAM-100GB-MLX
Text Generation • 229B • Updated • 53 • 4
baa-ai/Gemma-4-31B-it-RAM-8bit-MLX
Image-Text-to-Text • 11B • Updated • 1
baa-ai/Gemma-4-31B-it-RAM-3bit-MLX
Image-Text-to-Text • 5B • Updated • 1
baa-ai/Gemma-4-31B-it-RAM-4bit-MLX
Image-Text-to-Text • 6B • Updated
baa-ai/Gemma-4-26B-A4B-it-RAM-4bit-MLX
Image-Text-to-Text • 4B • Updated
baa-ai/Gemma-4-26B-A4B-it-RAM-8bit-MLX
Image-Text-to-Text • 8B • Updated • 1
baa-ai/MiniMax-M2.7-RAM-203GB-MLX
Text Generation • 229B • Updated • 1
baa-ai/MiniMax-M2.7-RAM-155GB-MLX
Text Generation • 229B • Updated • 1
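MLX repos like the ones above can typically be run with the mlx-lm package on Apple Silicon. A minimal sketch, assuming a current mlx-lm release (install with `pip install mlx-lm`; the repo name is taken from the list above, and the machine must have enough unified memory for the chosen size point):

```python
# Requires Apple Silicon and the mlx-lm package (pip install mlx-lm).
from mlx_lm import load, generate

# Downloads the repo from the Hugging Face Hub on first use.
model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-90GB-MLX")
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```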
Datasets (0)
None public yet