This is a GGML version of OpenOrca-Platypus2-13B, quantized to 4-bit (q4_0).
(Link to the original model: https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
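For context on what q4_0 quantization means, here is a minimal pure-Python sketch of the idea: weights are split into blocks of 32, and each block stores one shared scale plus 32 signed 4-bit integers. This is a simplified illustration, not the actual GGML kernel (the real format packs two 4-bit values per byte and stores the scale in fp16).

```python
def quantize_q4_0(block):
    """Quantize a block of 32 floats to a shared scale + signed 4-bit ints.

    Simplified sketch of the q4_0 scheme; the real GGML implementation
    differs in packing and scale encoding.
    """
    amax = max(abs(x) for x in block)
    d = amax / 7.0 if amax > 0 else 1.0      # scale mapping weights into [-7, 7]
    q = [max(-8, min(7, round(x / d))) for x in block]  # clamp to 4-bit range
    return d, q

def dequantize_q4_0(d, q):
    """Recover approximate float weights from scale + 4-bit ints."""
    return [qi * d for qi in q]

# Toy block of 32 weights in [-1, 1]
weights = [((i * 37) % 65) / 32.0 - 1.0 for i in range(32)]
d, q = quantize_q4_0(weights)
recovered = dequantize_q4_0(d, q)
```

Each 4-bit value introduces at most half a quantization step of error (d / 2), which is why q4_0 models are much smaller than fp16 originals at a modest quality cost.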