Ollama run bug

#1
by danielyf2023 - opened

I don't know why, but when I use ollama run with this model, it always fails right after the download finishes:

unable to load model: ***/.ollama/models/blobs/sha256-79e28ecacf84e75b6056cf4059636d435aa9eb67795780f7b7dbc7d32a962741

Owner

I highly advise people not to use Ollama. Go with LM Studio if you don't want to use llama.cpp directly.

It also sounds like your version is outdated.


Thanks, I will try it.

I also got the exact same error with Ollama. Did anyone find a solution to make it work with Ollama?


Use the Hugging Face CLI to download the GGUF, then point Ollama at the local model directory (ChatGPT can walk you through the Ollama side).
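As a sketch of that route (it assumes huggingface_hub is installed, and the FILE name below is a guess; check the actual filename on the model page's file list):

```shell
# Sketch: download the GGUF with the Hugging Face CLI.
# Requires: pip install -U huggingface_hub
# The FILE name is an assumption -- check the real filename on the model page.
REPO="HauhauCS/Qwen3.5-4B-Uncensored-HauhauCS-Aggressive"
FILE="Qwen3.5-4B-Uncensored-HauhauCS-Aggressive.Q4_K_M.gguf"
if command -v huggingface-cli >/dev/null 2>&1; then
  huggingface-cli download "$REPO" "$FILE" --local-dir . \
    || echo "download failed: check the exact filename on the model page"
else
  echo "huggingface-cli not found: pip install -U huggingface_hub"
fi
```

Once the file is on disk, you can register it with Ollama via a Modelfile, as described further down this thread.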

This is my error:

Error: 500 Internal Server Error: unable to load model: /var/lib/ollama/.ollama/models/blobs/sha256-79e28ecacf84e75b6056cf4059636d435aa9eb67795780f7b7dbc7d32a962741 

I tried another tool, lms, which is LM Studio's headless CLI, and it worked. Use these commands:

  1. Install lms: curl -fsSL https://lmstudio.ai/install.sh | bash, then reload your shell profile (source it or open a new terminal)
  2. Download the model: lms get HauhauCS, then select Qwen3.5-4B-Uncensored-HauhauCS-Aggressive and Q4_K_M
  3. Run chat mode: lms chat

me too


Found the solution for this problem. You have to download the GGUF file directly from the model page. Once it's downloaded, create a text file named Modelfile in the same folder, and inside it write FROM ./modelname.gguf, where modelname is the name of the downloaded file. Then open a terminal in that same folder and run ollama create title -f Modelfile, where title is the name to give the model, in this case qwen3.5uncen. The model will then be ready and can be used by opening a new terminal window and running it with Ollama.
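For reference, the Modelfile described above is just a one-line text file; a minimal sketch (the .gguf filename here is an example, use whatever your downloaded file is actually called):

```
# Modelfile -- save next to the downloaded .gguf
FROM ./Qwen3.5-4B-Uncensored-HauhauCS-Aggressive.Q4_K_M.gguf
```

Then, in the same folder: ollama create qwen3.5uncen -f Modelfile, and run it with ollama run qwen3.5uncen.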


gathering model components
copying file sha256:3aa21dfc185595bb9f3f8f98f08325fcecfdf446c510ce2b03fe19104e700c16 100%
copying file sha256:61f780dea3524c19b43fa66cf52aca5fb524b13fb53e4d7a59513d68a228bfb5 100%
parsing GGUF
Error: supplied file was not in GGUF format
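That "not in GGUF format" error usually means the file on disk isn't actually a GGUF, for example a truncated download or an HTML error page saved with a .gguf name. A quick sanity check (model.gguf is a placeholder for your file): every valid GGUF file begins with the 4-byte magic "GGUF".

```shell
# Sanity-check a download: a valid GGUF file starts with the 4-byte magic "GGUF".
# "model.gguf" is a placeholder for whatever file you downloaded.
file=model.gguf
if [ "$(head -c 4 "$file" 2>/dev/null)" = "GGUF" ]; then
  echo "looks like a GGUF file"
else
  echo "not a GGUF file (or missing): re-download and compare sizes with the model page"
fi
```

If the magic is wrong, re-download the file and compare its size against the one listed on the model page before running ollama create again.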

Ollama's auto-updates are usually far behind; the latest version is able to run this model. People might have to download the installer from the Ollama GitHub.


Even after upgrading to the latest version, the error persists.

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
llama_model_load_from_file_impl: failed to load model
time=2026-04-08T18:11:06.991+03:00 level=INFO source=sched.go:462 msg="failed to create server" model=hf.co/HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive:Q8_0
