GlotOCR Bench: OCR Models Still Struggle Beyond a Handful of Unicode Scripts
Abstract
Vision-language models show limited OCR generalization across diverse scripts: performance closely tracks pretraining coverage, and models struggle with unfamiliar writing systems.
Optical character recognition (OCR) has advanced rapidly with the rise of vision-language models, yet evaluation has remained concentrated on a small cluster of high- and mid-resource scripts. We introduce GlotOCR Bench, a comprehensive benchmark evaluating OCR generalization across 100+ Unicode scripts. Our benchmark comprises clean and degraded image variants rendered from real multilingual texts. Images are rendered using fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both LTR and RTL scripts. Samples of rendered images were manually reviewed to verify correct rendering across all scripts. We evaluate a broad suite of open-weight and proprietary vision-language models and find that most perform well on fewer than ten scripts, and even the strongest frontier models fail to generalize beyond thirty scripts. Performance broadly tracks script-level pretraining coverage, suggesting that current OCR systems rely on language model pretraining as much as on visual recognition. Models confronted with unfamiliar scripts either produce random noise or hallucinate characters from similar scripts they already know. We release the benchmark and pipeline for reproducibility. Pipeline Code: https://github.com/cisnlp/glotocr-bench, Benchmark: https://hf.co/datasets/cis-lmu/glotocr-bench.
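The abstract does not spell out the scoring metric, but a standard choice for script-level OCR evaluation is character error rate (CER): edit distance between model output and reference, normalized by reference length. A minimal stdlib-only sketch (function names are illustrative, not taken from the benchmark; NFC normalization matters for scripts with combining marks):

```python
import unicodedata

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over Unicode code points.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: edit distance / reference length.
    # Normalize to NFC so precomposed vs. decomposed forms compare equal.
    ref = unicodedata.normalize("NFC", reference)
    hyp = unicodedata.normalize("NFC", hypothesis)
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

On an unfamiliar script, a model that emits noise or hallucinates characters from a visually similar known script would score a CER near (or above) 1.0 under this metric.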
Community
GlotOCR Bench is a benchmark for evaluating OCR for different Unicode scripts.
Benchmark: https://huggingface.co/datasets/cis-lmu/GlotOCR-bench
Results: https://huggingface.co/datasets/cis-lmu/GlotOCR-bench-v1.0-results
For Tifinagh, all results are zeros except one 1% SA by Gemini 😂
Great work 👏