baa-ai's Collections
MiniMax M2.5 • updated 13 days ago
MINT & SWAN quantized versions of MiniMax-M2.5 (MLX & GGUF)
baa-ai/MiniMax-M2.5-RAM-120GB-MLX • 229B • Updated 4 days ago • 600