See Kimi-K2-Thinking 3.825bit MLX in action: demonstration video

The q3.825 quantization typically achieves a perplexity of 1.256 in our testing (full results below).

| Quantization | Perplexity |
|--------------|------------|
| q2.5         | 41.293     |
| q3.5         | 1.900      |
| q3.825       | 1.256      |
| q4.5         | 1.168      |
| q6.5         | 1.128      |
| q8.5         | 1.128      |
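
The card does not state the evaluation corpus or tooling behind these numbers. As a rough sketch only, per-token perplexity for an MLX checkpoint can be computed from the model's next-token cross-entropy; the text below is a placeholder, not the actual evaluation set:

```python
# Illustrative perplexity measurement for an MLX model; not the exact
# procedure behind the table above (corpus and tooling are unstated).
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load("inferencerlabs/Kimi-K2-Thinking-MLX-3.8bit")

text = "MLX is an array framework for machine learning on Apple silicon."
tokens = mx.array(tokenizer.encode(text))

# Logits at positions 0..n-2 predict tokens 1..n-1.
logits = model(tokens[None, :-1]).squeeze(0)
loss = nn.losses.cross_entropy(logits, tokens[1:]).mean()
print(f"perplexity: {mx.exp(loss).item():.3f}")
```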

Usage Notes

- Tested remotely over the network on an M3 Ultra with 512 GB of RAM using the Inferencer app; a hedged `mlx-lm` loading sketch follows this list
- Expect ~25 tokens/s at 1000 tokens of context
- Memory usage: ~460 GB
  - For context windows larger than 3000 tokens, raise the GPU wired-memory limit: `sudo sysctl iogpu.wired_limit_mb=507000`
- Quantized with a modified version of MLX 0.28
- For more details, see the demonstration video or visit Kimi-K2-Thinking.
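
The card itself only documents testing through the Inferencer app. As an assumption, the checkpoint should also load with the standard `mlx-lm` Python API like any other MLX-format model; a minimal sketch (assumes `pip install mlx-lm` and a Mac with enough unified memory, ~460 GB for this quant):

```python
# Standard mlx-lm loading and generation (an assumed usage path;
# the card only documents testing via the Inferencer app).
from mlx_lm import load, generate

model, tokenizer = load("inferencerlabs/Kimi-K2-Thinking-MLX-3.8bit")

# Format the prompt with the model's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of MoE models."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# The card reports ~25 tokens/s at 1000 tokens on an M3 Ultra.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```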

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate; you are responsible for verifying information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.
