Ollama not working

#4
by chenyongshao - opened

Hi, when I run this model with Ollama, it shows: 500 Internal Server Error: unable to load model: D:\ollama models\blobs\sha256-f7edd45febfafc43a21d82de49874d7665b2f372ef36707815a650089c29579b

The issue isn't Ollama's support; it's that the HuggingFace GGUF from Jackrong has the wrong architecture string baked in (gemma4 instead of what llama.cpp expects internally), the same problem as the Qwopus model.
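You can verify the architecture string yourself without loading the model: GGUF stores it in the `general.architecture` metadata key near the start of the file. Below is a minimal sketch of a reader for that key, assuming the GGUF v2/v3 header layout (magic, uint32 version, uint64 tensor count, uint64 metadata count) documented in the llama.cpp repository; it is a diagnostic aid, not part of Ollama or llama.cpp.

```python
import struct

# Byte sizes of GGUF scalar value types (type codes from the GGUF spec);
# 8 = string and 9 = array are handled separately below.
SCALAR_SIZE = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1,
               10: 8, 11: 8, 12: 8}

def _read_string(f):
    # GGUF string: uint64 length followed by UTF-8 bytes
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def _skip_value(f, vtype):
    if vtype == 8:                                  # string
        _read_string(f)
    elif vtype == 9:                                # array: elem type, count, elems
        (etype,) = struct.unpack("<I", f.read(4))
        (count,) = struct.unpack("<Q", f.read(8))
        for _ in range(count):
            _skip_value(f, etype)
    else:                                           # fixed-size scalar
        f.read(SCALAR_SIZE[vtype])

def gguf_architecture(path):
    """Return the `general.architecture` string from a GGUF file, or None."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        for _ in range(n_kv):
            key = _read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            if key == "general.architecture" and vtype == 8:
                return _read_string(f)
            _skip_value(f, vtype)
    return None
```

If this prints an architecture name your llama.cpp or Ollama build doesn't know, the 500 error is expected regardless of which runtime loads the blob.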

Oh, then how should I use this model if not with Ollama?

Hello, qwen35 is the expected GGUF architecture name for Qwen 3.5 models.

An older llama.cpp build that does not yet recognize the newer qwen35 architecture will not be able to load this model.

You can try LM Studio instead; it should run there perfectly fine now.

Thanks.
