Ollama 500 error: "unable to load model" when running a model from hf.co/... (Windows workaround)
C:\Users\user>ollama run hf.co/HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive:Q4_K_M
pulling manifest
pulling 2ca636d9e81d: 100% ▕██████████████████████████████████████████████████████████▏ 5.6 GB
pulling 05f662501f8b: 100% ▕██████████████████████████████████████████████████████████▏ 921 MB
pulling e8b41bd7e9bc: 100% ▕██████████████████████████████████████████████████████████▏ 481 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: unable to load model: D:\Ollama agent\blobs\sha256-2ca636d9e81d3d23ca9b60c234fe185d30ec082eeba69ce770fdb0c76559a4f5
Short version
If ollama run hf.co/... fails on Windows 11 with:
Error: 500 Internal Server Error: unable to load model
this worked for me:
- Download the main .gguf model file manually from Hugging Face in the browser.
- Create a Modelfile that points directly to that local .gguf.
- Import it with ollama create.
- Run the local model instead of hf.co/....
Commands:
cd D:\AI\ollama
notepad Modelfile
ollama create hauhau-qwen35 -f .\Modelfile
ollama list
ollama show hauhau-qwen35
ollama run hauhau-qwen35
Example Modelfile:
FROM D:\AI\ollama\Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-BF16.gguf
PARAMETER temperature 1
PARAMETER top_k 20
PARAMETER top_p 0.95
PARAMETER presence_penalty 1.5
That solved the issue for me.
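If it helps to script the manual download step, direct-download links for files on Hugging Face generally follow the `resolve/main` URL pattern. A minimal sketch, assuming the repo id and filename from the log above (they will differ for other models):

```python
# Sketch: build the browser-equivalent direct-download URL for a .gguf file
# on Hugging Face. REPO_ID and FILENAME are taken from this post and are
# assumptions for any other model.
REPO_ID = "HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive"
FILENAME = "Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-BF16.gguf"

def gguf_url(repo_id: str, filename: str) -> str:
    """Return the direct-download URL for a file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(gguf_url(REPO_ID, FILENAME))
```

You can paste the printed URL into the browser, or fetch it with any downloader, then save the file to the folder your Modelfile points at.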
Full version
I had this problem on Windows 11 when trying to run the model directly from Hugging Face with:
ollama run hf.co/...
Ollama downloaded the files successfully, but then failed with:
Error: 500 Internal Server Error: unable to load model
What worked for me was not using hf.co/... directly.
Instead, I downloaded the actual .gguf model file manually from the Hugging Face page, saved it locally, and created a local Ollama model from that file.
Steps
- Download the main .gguf file manually from Hugging Face using your browser. Example path:
D:\AI\ollama\Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-BF16.gguf
- Create a Modelfile like this:
FROM D:\AI\ollama\Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-BF16.gguf
PARAMETER temperature 1
PARAMETER top_k 20
PARAMETER top_p 0.95
PARAMETER presence_penalty 1.5
- Run:
cd D:\AI\ollama
ollama create hauhau-qwen35 -f .\Modelfile
ollama list
ollama show hauhau-qwen35
ollama run hauhau-qwen35
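If you prefer not to edit the Modelfile in notepad, the same file can be generated by a short script. A minimal sketch using the exact path and parameters from the steps above (adjust GGUF_PATH to wherever you saved the file):

```python
from pathlib import Path

# Sketch: write the Modelfile from the steps above instead of editing it
# by hand. GGUF_PATH is the locally downloaded model file.
GGUF_PATH = r"D:\AI\ollama\Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-BF16.gguf"

modelfile = "\n".join([
    f"FROM {GGUF_PATH}",
    "PARAMETER temperature 1",
    "PARAMETER top_k 20",
    "PARAMETER top_p 0.95",
    "PARAMETER presence_penalty 1.5",
])

# Write it next to the model so `ollama create ... -f .\Modelfile` finds it.
Path("Modelfile").write_text(modelfile + "\n", encoding="utf-8")
print(modelfile)
```

After running it from D:\AI\ollama, the ollama create / ollama run commands above work unchanged.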
Result
After that, the model loaded correctly and also worked fine in the Ollama UI.
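To double-check that the imported model really loads, you can also query Ollama's local REST API (it listens on port 11434 by default). A hedged sketch; hauhau-qwen35 is the name created above, and the request is only sent if you uncomment the last lines while the server is running:

```python
import json
import urllib.request

# Sketch: ask the local Ollama server for one non-streamed completion from
# the imported model. If the model still cannot load, this request returns
# the same 500 error as `ollama run`.
def build_request(model: str, prompt: str) -> urllib.request.Request:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("hauhau-qwen35", "Say hello in one sentence.")
# Uncomment when the Ollama server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```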
Note
For me, the direct hf.co/... route failed with the 500 error, but importing the local .gguf through Modelfile worked immediately.
If someone else hits the same issue on Windows, this workaround may help.