How did you convert it to GGUF?
#3
by deepsweet
Hi.
May I ask how you converted it to GGUF?
llama.cpp's convert_hf_to_gguf.py does not support it yet.
I have been using a custom build of llama.cpp for my own purposes. The GGUF file is bit-for-bit compatible with the original model, but you will have to wait for the perplexity pull requests to be merged into llama.cpp before support becomes publicly available.
