See DeepSeek-V4-Flash MLX in action - demonstration videos

Tested on an M3 Ultra with 512 GiB of RAM using Inferencer app v1.11.1

  • Text inference: ~25.98 tokens/s at 1,000 tokens, using ~145.11 GiB of memory (debug build)

Q9-EXP is an experimental build of DeepSeek-V4-Flash.

In this build, the base model's 4-bit pre-quantized weights were repacked rather than dequantized and re-quantized to 9-bit, since repacking performed slightly better in our initial coding tests. All remaining weights were quantized to 9-bit. The build also includes a temporary chat template. Stay tuned for updates.
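
For illustration, here is a minimal sketch of the two options using the standard MLX quantization API. Note that 9-bit quantization is not part of stock MLX (this card mentions a modified version of MLX was used), so that step appears only as a comment; the tensor shape and group size are placeholders.

```python
import mlx.core as mx

# Toy weight matrix standing in for one of the base model's tensors.
w = mx.random.normal((1024, 1024))

# The base model ships 4-bit pre-quantized weights (group-wise affine).
w_q4, scales, biases = mx.quantize(w, group_size=64, bits=4)

# Option A: dequantize, then re-quantize to 9-bit. Stock MLX does not
# accept bits=9; the modified MLX mentioned above would be needed here.
w_deq = mx.dequantize(w_q4, scales, biases, group_size=64, bits=4)
# w_q9, s9, b9 = mx.quantize(w_deq, group_size=64, bits=9)

# Option B (the approach described above): keep the original 4-bit
# payload and repack it as-is, avoiding a second round of quantization
# error on these tensors.
repacked = (w_q4, scales, biases)
```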


Quantized with a modified version of MLX.
For more details, see our demonstration videos or visit DeepSeek-V4-Flash.
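
Below is a minimal usage sketch with the standard mlx-lm loader. This is an assumption on our part: since the quantization used a modified MLX and the chat template is temporary, stock mlx-lm may not load or run this build correctly, and the prompt is purely illustrative.

```python
from mlx_lm import load, generate

# Repo ID taken from this model card; loading with stock mlx-lm is an
# assumption, since the 9-bit weights come from a modified MLX build.
model, tokenizer = load("inferencerlabs/DeepSeek-V4-Flash-MLX-9bit-EXP")

prompt = "Write a Python function that checks whether a number is prime."
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```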

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.

Model details

  • Format: Safetensors (MLX)
  • Model size: 284B params
  • Tensor types: BF16, U32, F32, U8, I64
