Quantized version of Locutusque/Rhino-Mistral-7B-GGUF. Only 5-bit and 16-bit quantizations are available so far.