llama.cpp thinks it's not a vision model?

by popeyed - opened

I downloaded a Qwen 3.5 GGUF from another repo and it was recognized as a vision model, but the one from here wasn't. I simply loaded the model in llama.cpp with the command:
llama-server -hf mradermacher/Qwen3.5-4B-PaperWitch-heresy-i1-GGUF:Q6_K --jinja -c 0 --host 127.0.0.1 --port 8033
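For context, llama.cpp only treats a model as a vision model when a multimodal projector (mmproj) file is available alongside the main GGUF; with `-hf` it looks for one in the same repo. A hedged sketch of how to point llama-server at an mmproj file explicitly (the local file name below is an assumption, not something this repo is known to ship):

```shell
# Assumption: you have downloaded an mmproj GGUF for this model separately.
# llama-server accepts it via --mmproj; without an mmproj, the model is
# served as text-only and won't be detected as a vision model.
llama-server -hf mradermacher/Qwen3.5-4B-PaperWitch-heresy-i1-GGUF:Q6_K \
  --mmproj ./mmproj-model-f16.gguf \
  --jinja -c 0 --host 127.0.0.1 --port 8033
```

So a likely explanation for the difference between repos is simply whether the quant repo includes an mmproj file for `-hf` to pick up.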

Edit: another model of yours, mradermacher/Ministral-3-8B-Reasoning-2512-absolute-heresy-i1-GGUF:Q4_K_M, also wasn't recognized as a vision model.

So I'm guessing I'm doing something wrong then? I'm new to llama.cpp, so I apologize if it's obvious.

I had trouble getting it recognized with llama.cpp, but it worked fine with Jan, which uses llama.cpp in the background anyway. Good enough for me. Thank you.

popeyed changed discussion status to closed
