---
base_model: MiniMaxAI/MiniMax-M2.7
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
- mlx
- jang
- jang-quantized
- JANG_2S
- mixed-precision
- apple-silicon
---
# MiniMax-M2.7-JANG_2S
Adaptive mixed-precision MLX quantization of [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7), produced with the JANG scheme via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
- **Quantization:** 2.06 bits per weight on average, mse-all method, activation-based calibration
- **Profile:** JANG_2S
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vmlx, MLX Studio, oMLX (with JANG patch)
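
For intuition on the fractional average: it falls out of the per-layer bit mix. The split below is a hypothetical illustration only (the actual JANG_2S assignment is produced by the mse-all grading pass), but it shows the arithmetic:

```python
# Hypothetical split, for illustration only -- the real JANG_2S per-layer
# assignment comes from calibration-driven MSE grading, not a fixed ratio.
attn_frac, attn_bits = 0.03, 4   # small share of weights kept at 4-bit
mlp_frac, mlp_bits = 0.97, 2     # bulk of MLP/expert weights at 2-bit
avg_bits = attn_frac * attn_bits + mlp_frac * mlp_bits
print(avg_bits)  # 2.06
```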
## Usage
### vmlx (recommended)
```bash
pip install 'vmlx[jang]'
vmlx serve bearzi/MiniMax-M2.7-JANG_2S
```
### Python
```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Load the JANG-quantized weights together with the matching tokenizer
model, tokenizer = load_jang_model("bearzi/MiniMax-M2.7-JANG_2S")

# Build the prompt from the model's chat template
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```
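Streaming should also work with the standard `mlx_lm` API, assuming the JANG loader returns an ordinary `mlx_lm`-compatible model (not confirmed here, but implied by the loader's design):

```python
# Stream tokens as they are produced (recent mlx_lm versions yield
# GenerationResponse objects with a .text field).
from mlx_lm import stream_generate

for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```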
## About JANG
JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types — attention layers get more bits, MLP/expert layers compress harder. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
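To make the grading idea concrete, here is a minimal sketch using MLX's stock quantizer rather than jang-tools itself. The module names and bit choices are illustrative assumptions, not the actual JANG_2S recipe; it relies on `mlx.nn.quantize` accepting a per-layer predicate that returns quantization parameters:

```python
# NOT the jang-tools implementation -- a sketch of the same idea using
# mlx.nn.quantize: attention projections get more bits, everything else
# (MLP/expert layers) compresses harder.
import mlx.nn as nn
from mlx_lm import load

def adaptive_bits(path: str, module: nn.Module):
    if not hasattr(module, "to_quantized"):
        return False  # leave non-quantizable layers in full precision
    if any(name in path for name in ("q_proj", "k_proj", "v_proj", "o_proj")):
        return {"bits": 4, "group_size": 64}  # attention: higher fidelity
    return {"bits": 2, "group_size": 64}      # MLP/experts: aggressive

# Assumes a full-precision base that mlx_lm can load
model, tokenizer = load("MiniMaxAI/MiniMax-M2.7")
nn.quantize(model, class_predicate=adaptive_bits)
```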
See [JANG documentation](https://github.com/jjang-ai/jangq) and scores at [jangq.ai](https://jangq.ai).
Comparative benchmarks and feedback welcome — please open a discussion.