Phenomenal model

#1
by spanspek - opened

This model is phenomenal.

Q8_0 with the F32 mmproj, full context in under 4.5GB of VRAM, and it runs incredibly quickly. I'm running this on a laptop with an RTX 4060 (8GB) and it flies.

spanspek changed discussion title from Phenomenal to Phenomenal model

This is somehow more reliable than Qwen3-VL 8B, next-ocr, or pretty much any other model I've tried, and it's tiny at 1B.
Truly amazing model. It managed to OCR an entire movie's SDH PGS subtitles in less than 2 minutes with zero errors.

Owner

Yeah, the first one was good, but this second one is even better.

Used Q6_K. Just 2.13GB, and it did unexpectedly well in LM Studio. Man, where is the loss? How does this perform so well while other models need a minimum of 5GB for quality? Ran on a Ryzen 8600G iGPU.
