## Description
This repo contains specialized MoE quants for Mistral-Small-4-119B-2603. The idea is that, because the FFN tensors are so large compared to the rest of the model, it should be possible to get better quality at a smaller overall size than a comparable naive quantization. To that end, the default quantization type is kept at high quality (Q8_0 in all of the mixes below) while the FFN up and FFN gate tensors are quantized down, along with the FFN down tensors.
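As a rough illustration of how such a per-tensor mix can be produced, here is a minimal sketch using llama.cpp's `llama-quantize` with per-tensor overrides. This is not the exact recipe used for these quants; the file names and tensor-name patterns are assumptions, and it presumes a llama.cpp build whose `llama-quantize` supports the `--tensor-type` override flag.

```python
# Minimal sketch (not the exact recipe used for this repo): build a llama-quantize
# command that keeps the default type at Q8_0 while overriding the FFN tensors.
# File names and tensor-name patterns below are illustrative guesses.
import subprocess

cmd = [
    "./llama-quantize",
    # Override just the FFN projections; everything else falls back to the default type.
    "--tensor-type", "ffn_up=q5_k",
    "--tensor-type", "ffn_gate=q5_k",
    "--tensor-type", "ffn_down=q6_k",
    "Mistral-Small-4-119B-2603-F16.gguf",  # full-precision input (assumed path)
    "Mistral-Small-4-119B-2603-mix.gguf",  # quantized output (assumed path)
    "Q8_0",                                # default type for all non-overridden tensors
]
subprocess.run(cmd, check=True)
```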
## Notes
I had made a Q4_K_M mix, but it kept returning NaNs during the KLD/PPL testing, so I'm looking into that further.
| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | Mean PPL(Q)/PPL(base) − 1 | KLD |
|---|---|---|---|---|---|
| Q5_K_M | 82.06 GiB (5.92 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 5.773442 ± 0.037218 | +0.3924% | 0.054105 ± 0.000366 |
| IQ4_XS | 53.09 GiB (3.83 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 5.980665 ± 0.038900 | +3.9957% | 0.132002 ± 0.000692 |
| IQ3_S | 40.89 GiB (2.95 BPW) | Q8_0 / IQ2_S / IQ2_S / IQ3_S | 6.506233 ± 0.043705 | +13.1347% | 0.261355 ± 0.001217 |
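For clarity, the relative-PPL column is the percentage increase of the quant's mean perplexity over the full-precision base (my reading of the sign convention, given that the reported values are positive):

```latex
\[
\Delta_{\text{PPL}}
  = \left( \frac{\overline{\mathrm{PPL}}(Q)}{\overline{\mathrm{PPL}}(\text{base})} - 1 \right) \times 100\%
\]
```

So, for example, +0.3924% for the Q5_K_M mix means its mean perplexity is about 0.39% higher than the base model's.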
Base model: mistralai/Mistral-Small-4-119B-2603
