This is Caldera AI's Naberius-7B, converted to GGUF without quantization. The only change was converting the weights from FP32 to FP16.
The model was converted using convert.py from Georgi Gerganov's llama.cpp repository (the last change to that file at the time of conversion was commit ff5a3f0).
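For reference, a minimal sketch of that conversion step, assuming a checkout of llama.cpp at that commit and the original FP32 weights downloaded to ./Naberius-7B/. The paths and output filename are illustrative, and the --outtype/--outfile flags are as convert.py exposed them around that time, so treat this as a sketch rather than the exact command used:

```bash
# Run from the root of the llama.cpp checkout.
# Reads the original FP32 checkpoint and writes a single FP16 GGUF file.
python convert.py ./Naberius-7B \
    --outtype f16 \
    --outfile ./Naberius-7B.f16.gguf
```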
All credit belongs to Caldera AI for merging and releasing this model. Thank you!