spicyneuron committed (verified) · Commit 41c7ed1 · Parent(s): 9428e3f

Update README.md

Files changed (1): README.md +44 -0
tags:
  - mlx
base_model: MiniMaxAI/MiniMax-M2.7
---

[MiniMax-M2.7](MiniMaxAI/MiniMax-M2.7) optimized for MLX. A mixed-precision quant that balances speed, memory, and accuracy.

# Usage

```sh
# Start server at http://localhost:8080/v1/chat/completions
uvx --from mlx-lm mlx_lm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --model spicyneuron/MiniMax-M2.7-MLX-4.6bit
```
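
Once the server is up, you can query it with any OpenAI-compatible client. A minimal stdlib sketch (assumes the server is running at `127.0.0.1:8080` as started above; helper names are illustrative):

```python
import json
import urllib.request

# Local mlx_lm.server endpoint (OpenAI-compatible chat completions API).
URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 128) -> urllib.request.Request:
    """Build a chat-completion request for the quantized model."""
    payload = {
        "model": "spicyneuron/MiniMax-M2.7-MLX-4.6bit",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

For example, `chat("Hello!")` returns the model's reply as a string.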

# Methodology

Quantized with an [mlx-lm fork](https://github.com/ml-explore/mlx-lm/pull/922), drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX's quantization options differ from llama.cpp's, but the principles are the same:

- Sensitive layers like MoE routing, attention, and output embeddings get higher precision
- More tolerant layers like MoE experts get lower precision
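
The layer policy above can be sketched as a per-layer bit assignment. This is a hypothetical illustration only — the name patterns and bit widths are made up for clarity, and the fork's actual interface may differ:

```python
# Hypothetical sketch of a mixed-precision layer policy.
# Name patterns and bit widths are illustrative, not the fork's actual config.

# Substrings marking quality-sensitive layers: MoE routing, attention, embeddings.
SENSITIVE = ("router", "attn", "embed", "lm_head")

def bits_for_layer(name: str, high: int = 6, low: int = 4) -> int:
    """Give routing/attention/embedding layers more bits,
    and the (more tolerant) MoE expert weights fewer."""
    if any(pattern in name for pattern in SENSITIVE):
        return high
    return low  # e.g. MoE expert FFN weights

# The average bits-per-weight then lands between `low` and `high`,
# which is how a fractional figure like ~4.6 bpw arises.
```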

# Benchmarks

metric | mlx-community_MiniMax-M2.7-4bit | baa-ai_MiniMax-M2.7-RAM-155GB-MLX | 4.6 bit (this model)
--- | --- | --- | ---
bpw | 4.501 | 5.4278 | 4.5987
peak memory, GB (1024 prompt / 512 gen) | 129.632 | 156.051 | 132.442
prompt tok/s (1024 tokens) | 739.996 ± 1.565 | 708.147 ± 0.818 | 740.409 ± 0.268
gen tok/s (512 tokens) | 48.703 ± 0.116 | 40.253 ± 0.077 | 48.038 ± 0.099
perplexity | 9.120 ± 0.047 | 8.835 ± 0.045 | 4.462 ± 0.019
hellaswag | 0.504 ± 0.011 | 0.509 ± 0.011 | 0.505 ± 0.011
piqa | 0.786 ± 0.010 | 0.787 ± 0.010 | 0.793 ± 0.009
winogrande | 0.636 ± 0.014 | 0.661 ± 0.013 | 0.645 ± 0.013

Tested on a Mac Studio M3 Ultra with:

```sh
mlx_lm.perplexity --sequence-length 2048 --seed 123
mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.evaluate --tasks hellaswag --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks piqa --seed 123 --num-shots 0 --limit 2000
mlx_lm.evaluate --tasks winogrande --seed 123 --num-shots 0 --limit 2000
```
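
As a rough sanity check on the numbers: weight memory is approximately parameter count × bits-per-weight / 8, ignoring KV cache and activation overhead. A quick sketch (the ~230B parameter figure is a rough estimate inferred from the table, not a quoted spec):

```python
def weight_gb(params_b: float, bpw: float) -> float:
    """Approximate weight memory in GB: params (billions) * bits-per-weight / 8."""
    return params_b * bpw / 8

# At roughly 230B parameters, 4.5987 bpw implies ~132 GB of weights,
# consistent with the measured peak memory above (which sits slightly
# higher because it also includes KV cache and activations).
print(round(weight_gb(230, 4.5987), 1))
```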