Llama-3.3-70B-Instruct-abliterated-GGUF
This is a GGUF-quantized version of thisnick/Llama-3.3-70B-Instruct-abliterated.
Available quantizations:
- 2-bit
- 3-bit
- 4-bit
- 5-bit