OLLAMA version

#5
by d4rksou1 - opened

does this GGUF work with ollama? i'm getting error 500 on the run command

me too...

same here, checked different quantizations.

yeah... me too. I'm limited to 160 GB of RAM, though, and I noticed the documentation recommends 256 GB; maybe it's the embedded context? Has anyone tried this on a machine with more RAM?

PS C:\Users\Korisnik\Projekti\AI\qwen35> ollama --version
ollama version is 0.17.1
PS C:\Users\Korisnik\Projekti\AI\qwen35> ollama run hf.co/unsloth/Qwen3.5-27B-GGUF:Q4_K_M
Error: 500 Internal Server Error: unable to load model: C:\Users\Korisnik.ollama\models\blobs\sha256-728960e4dda52d4f2af5bee09b2cbe86addfa93220fe9324bfac9dc727605c17
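In case it helps others debug: the 500 response only echoes the blob path, while the actual loader failure (often something like an unknown or unsupported model architecture in the GGUF) ends up in the Ollama server log. A sketch of where to look, assuming the documented default install locations; adjust paths if you installed Ollama elsewhere:

```shell
# Default Ollama server log locations (assumption: standard installs):
#   Linux (systemd service):  journalctl -u ollama
#   Windows (PowerShell):     Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50
#   macOS:                    ~/.ollama/logs/server.log
LOG_MAC="$HOME/.ollama/logs/server.log"

# Show the last lines of the log, which usually contain the real
# "unable to load model" reason behind the generic 500 error.
tail -n 50 "$LOG_MAC" 2>/dev/null || echo "log not found at $LOG_MAC"
```

If the log shows an unsupported-architecture error, the GGUF itself may be fine and the fix is usually waiting for (or upgrading to) an Ollama release whose bundled runner supports the model family.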

I am having the same issue as sindab on ollama 0.17.2

Error: 500 Internal Server Error: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-7b0cc96478922848f1fc1b70918b45dd1bad79d671cd13f0a3e446ab64ee3319

same here still

me too

ollama 0.17.7 has the same issue.

Ollama 0.18.1 the same issue
