Llava on llama.cpp?
#30
by danielelongo - opened
Hello, is there a quantized version of llava that can run locally on a CPU with llama.cpp?
@danielelongo Hey! We don't have GGUF-converted weights in this repo currently, but I saw some for LLaVA-1.6 at https://huggingface.co/cjpais. Let me know if that helps! In any case, the conversion script is available in the llama.cpp repo if you want to convert the weights yourself :)
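For anyone finding this later, here is a rough sketch of what running a quantized LLaVA on CPU with llama.cpp looks like. Binary and flag names can change between llama.cpp versions, and the `.gguf` filenames below are placeholders — use the actual files from whichever GGUF repo you download (e.g. the quantized model plus its `mmproj` CLIP projector):

```shell
# Build llama.cpp (plain CPU build)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run the LLaVA example with a quantized model and its multimodal projector.
# Filenames are placeholders -- substitute the files you actually downloaded.
./llava-cli \
  -m llava-v1.6-q4_k.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

The key detail is that LLaVA needs two files: the quantized language model and a separate `mmproj` file for the vision encoder/projector; GGUF repos for LLaVA (like the one linked above) typically ship both.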
Thanks!
danielelongo changed discussion status to closed