Ollama Support - Error on run
Hi, I've pulled this via Ollama, but when I try to run it I get:
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
Qwen3.5:9b regular works fine.
Am I doing something wrong?
These are the two I've pulled:
hf.co/Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF:latest 02132958ed88 6.6 GB
qwen3.5:9b 6488c96fa5fa 6.6 GB
Thanks
John
Hi, this is likely not an issue with your setup. Ollama uses llama.cpp as its backend, and support for some Qwen3.5 variants (like qwen35) may not be fully implemented there yet.
You might want to try running this model in LM Studio instead — it already supports these models and should work without this error.
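If you want to double-check that it's the backend rather than your setup, a quick sketch (assuming Ollama's standard CLI; the `command -v` guard is just so it runs harmlessly on machines without Ollama):

```shell
# Hedged diagnostic sketch, assuming Ollama's documented CLI commands.
if command -v ollama >/dev/null 2>&1; then
  # Newer Ollama releases bundle a newer llama.cpp, so the version matters.
  ollama --version
  # Inspect the model metadata Ollama sees; the architecture field is
  # what llama.cpp must recognize at load time.
  ollama show hf.co/Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF:latest
else
  echo "ollama not installed"
fi
```

If `ollama show` reports the same `qwen35` architecture the error names, the model file is fine and it's simply waiting on llama.cpp support; updating Ollama once a release picks that support up should resolve it.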