iMatrix GGUF quants of a newer finetune of Mixtral-8x22B.
EdgeQuants are still underway; the IQ4XS version is recommended. Make sure to combine/merge the parts back together before using:
cat tessIQ4XS.gguf.part* > tessIQ4XS.gguf
Then use a llama.cpp build from April 12 or older. The April 13 release had massive changes and broke inference for MoE models.
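Before loading the merged file into llama.cpp, it can be worth a quick sanity check that the concatenation produced a valid file. A minimal sketch, assuming the part naming from the `cat` command above (the helper names `merge_parts` and `is_gguf` are illustrative, not part of any tool here):

```python
# Minimal sketch: merge split GGUF parts and verify the result.
# File names mirror the cat command above; helper names are illustrative.
import glob
import shutil

def merge_parts(pattern: str, out_path: str) -> None:
    """Concatenate part files in name order (same effect as `cat part* > out`)."""
    with open(out_path, "wb") as out:
        for part in sorted(glob.glob(pattern)):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

def is_gguf(path: str) -> bool:
    """Every valid GGUF file starts with the 4-byte ASCII magic b'GGUF'."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example (assumes the parts are in the current directory):
# merge_parts("tessIQ4XS.gguf.part*", "tessIQ4XS.gguf")
# assert is_gguf("tessIQ4XS.gguf")
```

If `is_gguf` returns False, the parts were likely incomplete or concatenated out of order.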
Model tree for nisten/Tess-Mixtral-8x22B-imatrix-gguf
Base model: migtissera/Tess-2.0-Mixtral-8x22B