How did you convert it to GGUF?

#3
by deepsweet - opened

Hi.

May I ask how you converted it to GGUF?

llama.cpp's convert_hf_to_gguf.py does not support it yet.

Argus org

I have been using a custom build of llama.cpp for my own purposes. The GGUF is bit-perfectly compatible with the original model, but you will have to wait for the perplexity pull requests to be merged into llama.cpp before the conversion becomes publicly available.
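In the meantime, if you get a GGUF file from a custom build like this, one quick sanity check is to read its header: per the GGUF specification, every file starts with the 4-byte magic b"GGUF" followed by a little-endian uint32 format version. A minimal sketch (the file path is a placeholder):

```python
import struct

def read_gguf_version(path: str) -> int:
    """Return the GGUF format version, or raise if the file is not GGUF."""
    with open(path, "rb") as f:
        magic = f.read(4)  # every GGUF file begins with b"GGUF"
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: {magic!r}")
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return version
```

This only validates the container header, not the tensor data, but it will quickly catch a truncated or mislabeled file.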

https://github.com/hellc/llama.cpp/commits/master/
