---
base_model: MiniMaxAI/MiniMax-M2.7
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
- mlx
- jang
- jang-quantized
- JANG_2L
- mixed-precision
- apple-silicon
---

# MiniMax-M2.7-JANG_2L

An adaptive mixed-precision MLX quantization of MiniMax-M2.7 in the JANG format, produced with [vmlx / jang-tools](https://github.com/jjang-ai/jangq).

- **Quantization:** 2.1 bits per weight on average, method mse-all, calibrated on activations
- **Profile:** JANG_2L
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vmlx, MLX Studio, oMLX (with JANG patch)

## Usage

### vmlx (recommended)

```bash
pip install 'vmlx[jang]'
vmlx serve bearzi/MiniMax-M2.7-JANG_2L
```

### Python

```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Fetch (or load from cache) the JANG-quantized weights and tokenizer
model, tokenizer = load_jang_model("bearzi/MiniMax-M2.7-JANG_2L")

# Build a chat-formatted prompt and generate a response (verbose=True streams tokens)
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```

## About JANG

JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types — attention layers get more bits, MLP/expert layers compress harder. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
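
As a rough sketch of that idea (this is not the jang-tools implementation; the layer-name patterns and the 4-bit / 2-bit split below are illustrative assumptions), a per-layer bit schedule might look like the following. In an MoE model the expert layers hold most of the weights, so compressing them hardest is what pulls the parameter-weighted average down toward the reported 2.1 bits while the attention path keeps more precision.

```python
# Illustrative only: not the jang-tools code. Layer-name patterns and the
# 4-bit / 2-bit split are assumptions for the sake of the example.

def assign_bits(layer_name: str) -> int:
    """Give attention projections more precision; compress MLP/expert weights harder."""
    attention = ("q_proj", "k_proj", "v_proj", "o_proj")
    mlp_or_expert = ("gate_proj", "up_proj", "down_proj", "experts")
    if any(k in layer_name for k in attention):
        return 4   # attention weights keep more bits to preserve coherence
    if any(k in layer_name for k in mlp_or_expert):
        return 2   # MLP / MoE expert weights dominate the parameter count, compress hardest
    return 4       # embeddings, norms, output head: keep higher precision

for name in (
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.mlp.experts.7.up_proj",
):
    print(name, "->", assign_bits(name), "bits")
```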

See [JANG documentation](https://github.com/jjang-ai/jangq) and scores at [jangq.ai](https://jangq.ai).

Comparative benchmarks and feedback welcome — please open a discussion.