Model not working on Ollama Docker (tested on the latest version)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
llama_model_load_from_file_impl: failed to load model
time=2026-03-25T12:39:14.374Z level=INFO source=sched.go:462 msg="failed to create server" model=hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:latest error="unable to load model: /root/.ollama/models/blobs/sha256-a15622267316636d22575f8ab25a61347eab59def966f257f572914812348c61"
[GIN] 2026/03/25 - 12:39:14 | 500 | 1.521141658s | 127.0.0.1 | POST "/api/generate"
[GIN] 2026/03/25 - 12:39:53 | 200 | 83.041µs | 127.0.0.1 | HEAD "/"
[GIN] 2026/03/25 - 12:39:53 | 200 | 2.898162ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/03/25 - 12:39:56 | 200 | 73.023µs | 127.0.0.1 | HEAD "/"
[GIN] 2026/03/25 - 12:39:56 | 200 | 767.616707ms | 127.0.0.1 | POST "/api/show"
[GIN] 2026/03/25 - 12:39:57 | 200 | 507.864694ms | 127.0.0.1 | POST "/api/show"
time=2026-03-25T12:39:58.212Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34989"
time=2026-03-25T12:39:58.811Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing "max": invalid syntax"
llama_model_loader: loaded meta data with 35 key-value pairs and 851 tensors from /root/.ollama/models/blobs/sha256-050c89b0ff28bd17f60a204bb4b4035fda04717dcdb9271ebb71dad0e822fc82 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen35
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Unsloth_Gguf__5Jzzk89
llama_model_loader: - kv 3: general.quantized_by str = Unsloth
llama_model_loader: - kv 4: general.size_label str = 27B
llama_model_loader: - kv 5: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 6: general.tags arr[str,2] = ["unsloth", "llama.cpp"]
llama_model_loader: - kv 7: qwen35.block_count u32 = 64
llama_model_loader: - kv 8: qwen35.context_length u32 = 262144
llama_model_loader: - kv 9: qwen35.embedding_length u32 = 5120
llama_model_loader: - kv 10: qwen35.feed_forward_length u32 = 17408
llama_model_loader: - kv 11: qwen35.attention.head_count u32 = 24
llama_model_loader: - kv 12: qwen35.attention.head_count_kv u32 = 4
llama_model_loader: - kv 13: qwen35.rope.dimension_sections arr[i32,4] = [11, 11, 10, 0]
llama_model_loader: - kv 14: qwen35.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 15: qwen35.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 16: qwen35.attention.key_length u32 = 256
llama_model_loader: - kv 17: qwen35.attention.value_length u32 = 256
llama_model_loader: - kv 18: qwen35.ssm.conv_kernel u32 = 4
llama_model_loader: - kv 19: qwen35.ssm.state_size u32 = 128
llama_model_loader: - kv 20: qwen35.ssm.group_count u32 = 16
llama_model_loader: - kv 21: qwen35.ssm.time_step_rank u32 = 48
llama_model_loader: - kv 22: qwen35.ssm.inner_size u32 = 6144
llama_model_loader: - kv 23: qwen35.full_attention_interval u32 = 4
llama_model_loader: - kv 24: qwen35.rope.dimension_count u32 = 64
llama_model_loader: - kv 25: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 26: tokenizer.ggml.pre str = qwen35
llama_model_loader: - kv 27: tokenizer.ggml.tokens arr[str,248320] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 28: tokenizer.ggml.token_type arr[i32,248320] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 29: tokenizer.ggml.merges arr[str,247587] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 30: tokenizer.ggml.eos_token_id u32 = 248046
llama_model_loader: - kv 31: tokenizer.ggml.padding_token_id u32 = 248044
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - kv 34: general.file_type u32 = 15
llama_model_loader: - type f32: 353 tensors
llama_model_loader: - type q4_K: 407 tensors
llama_model_loader: - type q5_K: 48 tensors
llama_model_loader: - type q6_K: 43 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 15.39 GiB (4.92 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
llama_model_load_from_file_impl: failed to load model
time=2026-03-25T12:39:59.197Z level=INFO source=sched.go:462 msg="failed to create server" model=hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q4_K_M error="unable to load model: /root/.ollama/models/blobs/sha256-050c89b0ff28bd17f60a204bb4b4035fda04717dcdb9271ebb71dad0e822fc82"
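The error means the llama.cpp build bundled with this Ollama version doesn't recognize the `qwen35` architecture string stored in the GGUF metadata (`general.architecture`, kv 0 in the dump above), so the real fix is a runtime that supports that architecture. As a hedged sketch (not an official tool), you can confirm what architecture a blob declares, assuming the GGUF v3 header layout and that `general.architecture` is the first metadata key, as it is in the dump above:

```python
import struct

def gguf_declared_architecture(data: bytes) -> str:
    """Parse a GGUF v3 header and return the value of the first metadata
    entry, assuming it is `general.architecture` stored as a string (type 8),
    matching the llama_model_loader dump above."""
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file (bad magic)")
    version, = struct.unpack_from("<I", data, 4)
    # tensor_count (u64) and metadata_kv_count (u64) follow the version
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    off = 24
    # each kv entry: u64 key length, key bytes, u32 value type, then the value
    klen, = struct.unpack_from("<Q", data, off); off += 8
    key = data[off:off + klen].decode(); off += klen
    vtype, = struct.unpack_from("<I", data, off); off += 4
    if key != "general.architecture" or vtype != 8:  # 8 = string
        raise ValueError(f"unexpected first metadata entry: {key} (type {vtype})")
    vlen, = struct.unpack_from("<Q", data, off); off += 8
    return data[off:off + vlen].decode()
```

Running this against the blob path from the log should return `qwen35`; if your runtime's supported-architecture list doesn't include that string, no Modelfile trick will make it load.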
I have the same issue.
Same here
same with ollama
ollama run hf.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF:Q3_K_M
Error: 500 Internal Server Error: unable to load model: D:\ollama-models.ollama\models\blobs\sha256-a1970 ...
won't start
DO NOT USE ollama with Hugging Face models for now; LM Studio works pretty well. If it's necessary to stick with ollama, here is a temporary workaround:
ollama show --modelfile <model name> | cat > /root/Modelfile
nano /root/Modelfile
# The file will contain 2 lines starting with "FROM /root/.ollama/models/blobs/...";
# comment out the 2nd one by adding "# " in front of it.
ollama create <new name> -f /root/Modelfile
TIP: In my case, the model was "hf.co/Jackrong/Qwen3.5-4B-Neo-GGUF:Q4_K_M".
For more information:
https://github.com/issues/recent?issue=ollama%7Collama%7C14503
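The manual nano edit in the steps above can be scripted. This is a minimal sketch, assuming the Modelfile produced by `ollama show --modelfile` contains the duplicate `FROM /root/.ollama/models/blobs/...` lines described above:

```python
def comment_duplicate_from(modelfile_text: str) -> str:
    """Keep the first FROM line and comment out any later FROM lines,
    mirroring the manual edit described in the workaround above."""
    out, seen_from = [], False
    for line in modelfile_text.splitlines():
        if line.startswith("FROM "):
            if seen_from:
                line = "# " + line  # disable the duplicate FROM
            seen_from = True
        out.append(line)
    return "\n".join(out) + "\n"
```

Pipe `ollama show --modelfile <model name>` through this, write the result to a Modelfile, then run `ollama create` against it.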
That workaround isn't working for me on the latest version with this particular model:
Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF
root@3e4569e76bbf:~/.ollama# ollama create Qwen3.5CoderTest -f .\Modelfile.Qwen3.5-27B-Claude-4.6
gathering model components
copying file sha256:6d7bd75eee0ba2ff54cee50ae8699d5c7d605b71f54eec0071f8d4c4e206578c 100%
copying file sha256:ff2e7a1f7949e80ec011dd60b32a0659272fbe7571090754e87f6ca042293c67 100%
copying file sha256:30c5a0e23ac2015aa3fb9a17391e1ab72fa5668ff9bd37cd3caccc864c8e21ec 100%
parsing GGUF
Error: supplied file was not in GGUF format
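The "supplied file was not in GGUF format" error suggests the `FROM` line in the Modelfile points at a blob that isn't the weights file (Ollama stores manifests, templates, and licenses as sha256 blobs too). A hedged sketch to find which blobs actually start with the GGUF magic bytes, so `FROM` can be pointed at the right one:

```python
import os

GGUF_MAGIC = b"GGUF"

def find_gguf_blobs(blob_dir: str) -> list[str]:
    """Return the blob filenames whose first four bytes are the GGUF magic."""
    hits = []
    for name in sorted(os.listdir(blob_dir)):
        path = os.path.join(blob_dir, name)
        with open(path, "rb") as f:
            if f.read(4) == GGUF_MAGIC:
                hits.append(name)
    return hits
```

For example, `find_gguf_blobs("/root/.ollama/models/blobs")` should list only the weights blob(s); any other sha256 file in a `FROM` line will trigger exactly this error.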