NOTICE
See GLM-5.1 MLX in action in the demonstration video.
Tested on an M3 Ultra with 512 GB of RAM using the Inferencer app; a minimal mlx-lm loading sketch follows the results below.
- Single inference: ~16.9 tokens/s @ 1,000 tokens (debug build)
- Batched inference: ~22.8 total tokens/s across two inferences
- Memory usage: ~420.61 GiB
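The throughput figures above were measured with the Inferencer app. For programmatic use, the snippet below is a minimal sketch of loading this repository with the standard mlx-lm Python API. Since the weights were quantized with a modified version of MLX, compatibility with stock mlx-lm is an assumption, and the prompt and generation settings are purely illustrative.

```python
# Minimal sketch (assumption): loading this quant with the stock mlx-lm API.
# The card notes a modified MLX was used for quantization, so stock mlx-lm
# compatibility is not guaranteed.
from mlx_lm import load, generate

model, tokenizer = load("Nishant2414/GLM-5.1-MLX-4.8bit")

prompt = "Write a Python function that checks whether a string is a palindrome."

# Use the bundled chat template if the tokenizer ships one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# verbose=True prints generation speed, which you can compare with the numbers above.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```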
The 4.8 bpw quant typically achieves ~93% token accuracy on our coding test:
| Quantization (bpw) | Perplexity | Token Accuracy | Missed Divergence |
|---|---|---|---|
| q4.5 | 1.35937 | 89.75% | 28.98% |
| q4.8 | 1.26562 | 93.50% | 19.57% |
| q5.5 | 1.24218 | 94.60% | 17.55% |
| q6.5 | 1.21875 | 96.85% | 16.03% |
| q8.5 | 1.21875 | 97.65% | 9.92% |
| q9 | 1.21093 | 97.95% | 9.61% |
| Base | 1.20312 | 100.00% | 0.00% |
Quantized with a modified version of MLX; a reference sketch of the standard mlx-lm quantization flow is shown below.
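For reference, this is roughly what the quantization step looks like with stock mlx-lm. The 4.8 bpw mix presumably came from the modified MLX mentioned above (stock mlx-lm applies a uniform bit-width plus per-group scales), so the bit-width and group size below are assumptions rather than the recipe actually used for this repository.

```python
# Illustrative sketch only: uniform quantization of the base model with stock
# mlx-lm. The 4.8 bpw result on this card likely required the author's modified
# MLX (e.g. a mixed-precision recipe); these parameters are assumptions.
from mlx_lm import convert  # exported at the top level in recent mlx-lm releases

convert(
    hf_path="zai-org/GLM-5.1",         # base model listed on this card
    mlx_path="GLM-5.1-MLX-quantized",  # hypothetical local output directory
    quantize=True,
    q_bits=4,         # uniform 4-bit; with group scales this lands near the q4.5 row
    q_group_size=64,  # mlx-lm default; per-group scales add ~0.5 bpw of overhead
)
```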
For more details, see the demonstration video or visit zai-org/GLM-5.1.
Disclaimer
We are not the creator, originator, or owner of any model listed here. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate; you are responsible for verifying the information before making important decisions. We are not liable for any damages, losses, or issues arising from the use of these models, including data loss or inaccuracies in AI-generated content.
Model tree for Nishant2414/GLM-5.1-MLX-4.8bit
- Base model: zai-org/GLM-5.1 (this repository is a quantized derivative)