No mmproj?
#1
by yamatazen - opened
Where is the mmproj file?
Thanks a lot for letting us know. This happened to all gemma 4 models queued before we updated llama.cpp to a version that supports vision for gemma 4. I have now requeued it, so the mmproj is on the way and will likely be generated once nico1 wakes up at European sunrise:
```
nico1 ~# llmc add force -4000 si https://huggingface.co/coder3101/gemma-4-E4B-it-heretic worker nico1 llama nico
submit tokens: ["force","-4000","static","imatrix","worker","nico1","llama","nico","https://huggingface.co/coder3101/gemma-4-E4B-it-heretic","provenance","api-10.28.1.6-2026-04-13T02:24:10"]
https://huggingface.co/coder3101/gemma-4-E4B-it-heretic
["https://huggingface.co/coder3101/gemma-4-E4B-it-heretic",["-2000","static","imatrix","llama","nico","worker","rich1","provenance","api-10.28.1.7-2026-04-04T18:53:06"],1775321588],
https://huggingface.co/coder3101/gemma-4-E4B-it-heretic already in llmjob.submit.txt
forcing.
coder3101/gemma-4-E4B-it-heretic: vision model, forcing worker nico1
```
No idea. Given the error, it most likely either ran out of disk space, ran out of RAM, or the server crashed, all of which seem unlikely for nico1 based on the past few days. In any case, it seems someone already retried it, after which it completed successfully:
- Static quants: https://huggingface.co/mradermacher/gemma-4-E4B-it-heretic-GGUF
- Weighted/imatrix quants: https://huggingface.co/mradermacher/gemma-4-E4B-it-heretic-i1-GGUF
- Convenient download page: https://hf.tst.eu/model#gemma-4-E4B-it-heretic-GGUF
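For context, the mmproj file is the multimodal projector GGUF that llama.cpp loads alongside the main model to enable vision input. Once it is uploaded to the -GGUF repo, usage would look roughly like the sketch below. The quant and mmproj filenames here are assumptions, not confirmed repo contents; check the repo's file list before downloading.

```shell
# Fetch a quant and the mmproj from the static quant repo.
# NOTE: both filenames are assumptions until the mmproj is actually uploaded.
huggingface-cli download mradermacher/gemma-4-E4B-it-heretic-GGUF \
  gemma-4-E4B-it-heretic.Q4_K_M.gguf --local-dir .
huggingface-cli download mradermacher/gemma-4-E4B-it-heretic-GGUF \
  gemma-4-E4B-it-heretic.mmproj-f16.gguf --local-dir .

# Run llama.cpp's multimodal CLI, passing the projector via --mmproj.
llama-mtmd-cli -m gemma-4-E4B-it-heretic.Q4_K_M.gguf \
  --mmproj gemma-4-E4B-it-heretic.mmproj-f16.gguf \
  --image example.png -p "Describe this image."
```

Without the `--mmproj` file, the same model still works for text-only inference; only image input is unavailable.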
Still no mmproj, though.
yamatazen changed discussion status to closed
