DeepSeek-OCR GGUF (for llama.cpp PR #17400)

This repository provides the GGUF model files required to run the DeepSeek-OCR / MTMD support introduced in the following llama.cpp pull request:

👉 [llama.cpp PR #17400](https://github.com/ggml-org/llama.cpp/pull/17400)

These models are only compatible with the PR branch and will not run on upstream llama.cpp main.
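Because the support lives only on the PR branch, you need a llama.cpp build from that branch. A minimal sketch of fetching and building it, assuming `git` and `cmake` are installed (the local branch name `deepseek-ocr-pr` is an arbitrary choice, not from the PR):

```shell
# Sketch: fetch the PR head from GitHub and build the llama-mtmd-cli target.
# Requires network access when actually run.
fetch_and_build_pr() {
  git clone https://github.com/ggml-org/llama.cpp &&
    cd llama.cpp &&
    git fetch origin pull/17400/head:deepseek-ocr-pr &&
    git checkout deepseek-ocr-pr &&
    cmake -B build &&
    cmake --build build --target llama-mtmd-cli -j
}
```

GitHub exposes every pull request's head as the ref `pull/<id>/head`, which is why no fork remote is needed here.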


📥 Download

You can download the model files directly with `huggingface-cli`:

```shell
huggingface-cli download <this-repo> --include "deepseek-ocr-bf16.gguf" --local-dir gguf_models/deepseek-ai
huggingface-cli download <this-repo> --include "mmproj-deepseek-ocr-bf16.gguf" --local-dir gguf_models/deepseek-ai
```
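Since the files are large, a quick integrity check can save a failed load: every valid GGUF file begins with the 4-byte magic `GGUF`. A minimal sketch (the helper name `check_gguf_magic` is hypothetical, and the demo runs against a synthetic header rather than the real multi-gigabyte download):

```shell
# check_gguf_magic FILE — succeed only if the file starts with the GGUF magic.
check_gguf_magic() {
  [ "$(head -c 4 "$1")" = "GGUF" ]
}

# Demo on a synthetic header; substitute the downloaded .gguf path in practice.
printf 'GGUFxxxx' > /tmp/demo.gguf
check_gguf_magic /tmp/demo.gguf && echo "magic OK"
```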

🚀 Run Example

Use the `llama-mtmd-cli` executable built from the PR branch. Document-to-markdown conversion with grounding:

```shell
build/bin/llama-mtmd-cli \
  -m gguf_models/deepseek-ai/deepseek-ocr-bf16.gguf \
  --mmproj gguf_models/deepseek-ai/mmproj-deepseek-ocr-bf16.gguf \
  --image tmp/mtmd_test_data/Deepseek-OCR-2510.18234v1_page1.png \
  -p "<|grounding|>Convert the document to markdown." \
  --chat-template deepseek-ocr --temp 0
```

Plain OCR:

```shell
build/bin/llama-mtmd-cli \
  -m gguf_models/deepseek-ai/deepseek-ocr-bf16_new.gguf \
  --mmproj gguf_models/deepseek-ai/mmproj-deepseek-ocr-bf16_new.gguf \
  --image tools/mtmd/test-1.jpeg \
  -p "Free OCR. " \
  --chat-template deepseek-ocr --temp 0
```
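For repeated runs, the invocation above can be wrapped in a small helper. A sketch under the same paths as the first example (the function name `run_ocr` and the default prompt are illustrative, not part of the PR):

```shell
# run_ocr IMAGE [PROMPT] — invoke llama-mtmd-cli with the model paths used
# in the examples above; the second argument overrides the default prompt.
run_ocr() {
  local image="$1"
  local prompt="${2:-<|grounding|>Convert the document to markdown.}"
  build/bin/llama-mtmd-cli \
    -m gguf_models/deepseek-ai/deepseek-ocr-bf16.gguf \
    --mmproj gguf_models/deepseek-ai/mmproj-deepseek-ocr-bf16.gguf \
    --image "$image" \
    -p "$prompt" \
    --chat-template deepseek-ocr --temp 0
}
```

Usage: `run_ocr page1.png` for markdown conversion, or `run_ocr scan.jpeg "Free OCR. "` for plain OCR.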
📦 Model Details

- Repository: sabafallah/DeepSeek-OCR-GGUF
- Format: GGUF
- Model size: 3B params
- Architecture: deepseek2-ocr
- Quantizations: 4-bit, 8-bit, 16-bit