See DeepSeek-V4-Pro MLX in action: demonstration videos
Tested on an M3 Ultra (512 GiB RAM) and an M4 Max (128 GiB RAM) using Inferencer app v1.11.1 with distributed compute
- Distributed inference: ~13 tokens/s at 1,000 tokens generated, using ~450.69 GiB (M3 Ultra) / ~78.84 GiB (M4 Max) of memory (debug build)
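The throughput above was measured in the Inferencer app's distributed mode. For reference, a minimal single-machine sketch using the standard `mlx-lm` Python package is shown below; it assumes one machine with enough unified memory to hold the full model, and the prompt and parameters are illustrative only:

```python
# Minimal single-machine sketch with the stock mlx-lm package.
# The distributed numbers above come from the Inferencer app, not this code.
from mlx_lm import load, generate

# Downloads/loads weights and tokenizer from the Hugging Face repo.
# Requires enough unified memory to hold the whole model on one machine.
model, tokenizer = load("inferencerlabs/DeepSeek-V4-Pro-MLX-2.8bit-EXP")

# Illustrative prompt; max_tokens matches the 1,000-token figure quoted above.
text = generate(
    model,
    tokenizer,
    prompt="Explain mixture-of-experts inference in two sentences.",
    max_tokens=1000,
    verbose=True,  # prints generation speed so tokens/s can be compared
)
print(text)
```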
Q2.8-EXP is an experimental build of DeepSeek-V4-Pro.
This build compresses the model to fit the memory constraints of a distributed setup combining 512 GiB and 128 GiB of RAM while maintaining response coherence; however, the aggressive compression degrades overall accuracy. Stay tuned for updates.
Quantized with a modified version of MLX.
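Stock MLX quantization exposes only uniform integer bit-widths, which is presumably why a modified build was needed to reach an average of ~2.8 bits. As a hedged sketch, a stock `mlx_lm` conversion of the base model would look like the following; the actual Q2.8-EXP recipe is not public, and `q_bits=3` is simply the nearest standard setting:

```python
# Sketch of a stock mlx-lm quantized conversion. The real Q2.8-EXP build
# used a modified MLX; stock convert() supports only uniform integer
# bit-widths, so q_bits=3 below is merely the closest standard option.
from mlx_lm import convert

convert(
    "deepseek-ai/DeepSeek-V4-Pro",         # base model from the model tree
    mlx_path="DeepSeek-V4-Pro-MLX-3bit",   # hypothetical output directory
    quantize=True,
    q_bits=3,         # uniform stock bit-width; the EXP build averages ~2.8
    q_group_size=64,  # MLX's default quantization group size
)
```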
For more details, see our demonstration videos or visit DeepSeek-V4-Pro.
Disclaimer
We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying information before making important decisions. We are not liable for any damages, losses, or issues arising from the use of these models, including data loss or inaccuracies in AI-generated content.
Model tree for inferencerlabs/DeepSeek-V4-Pro-MLX-2.8bit-EXP
Base model: deepseek-ai/DeepSeek-V4-Pro (quantized)