How can I use this with llama.cpp?
#24
by KeilahElla - opened
You can run `llama-server -hf tecaprovn/deepseek-v4-flash-gguf:Q4_K_M`
Here is the repo link:
https://huggingface.co/tecaprovn/deepseek-v4-flash-gguf
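For context, the full flow looks roughly like this: a minimal sketch assuming a local llama.cpp build with `llama-server` on your PATH. The `-hf` flag downloads the GGUF from Hugging Face on first run and caches it locally; the port and the OpenAI-compatible endpoint shown are the defaults in recent llama.cpp builds.

```shell
# Start the server; -hf fetches the quantized GGUF from Hugging Face
# on first run (requires network access) and caches it.
llama-server -hf tecaprovn/deepseek-v4-flash-gguf:Q4_K_M --port 8080

# In another terminal, query the OpenAI-compatible chat endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

This requires a llama.cpp build recent enough to include `llama-server` with Hugging Face support; older builds used `./server` and did not have `-hf`.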
Is the repo gone? The link no longer seems to work.