How can I use this with llama.cpp?

#24
by KeilahElla - opened


You can run:
llama-server -hf tecaprovn/deepseek-v4-flash-gguf:Q4_K_M

Here is the repo link:
https://huggingface.co/tecaprovn/deepseek-v4-flash-gguf
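If it helps, a fuller invocation might look like this (the context size and port are assumptions; adjust them for your hardware). `llama-server` downloads the GGUF from the Hub on first run and then exposes an OpenAI-compatible HTTP API:

```shell
# Start the server with a 4096-token context on port 8080
# (first run downloads the quantized model from Hugging Face)
llama-server -hf tecaprovn/deepseek-v4-flash-gguf:Q4_K_M -c 4096 --port 8080

# From another terminal, query the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

You can also use `llama-cli` with the same `-hf` argument if you just want an interactive terminal chat instead of a server.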
