TextAnchor-Bench (TABench)
Paper Link: Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models
TABench evaluates whether a vision-language model can (i) accurately read the text within a specified region (Region-to-Text, R2T) and (ii) localize the region(s) corresponding to a given text query (Text-to-Region, T2R). It contains 5,450 queries in total with an exact 1:1 balance between the two tasks, defined over the same set of 973 core images. The benchmark is curated from four public datasets (HierText, SVRD, CDLA, ICDAR2015) and covers 12 representative scenarios spanning both scene text and document-centric settings.
- Region-to-Text (R2T): read the text inside a given region
- Text-to-Region (T2R): localize the region(s) corresponding to a given text query

This repository releases the official benchmark data and evaluation pipeline for TABench.
Dataset Statistics
- Total queries: 5,450
- Task balance:
- R2T: 2,725
- T2R: 2,725
- Core images: 973 unique images
- Scenarios: 12 representative categories covering both scene text and document-centric settings
- Source datasets:
- HierText
- SVRD
- CDLA
- ICDAR2015
Official Subsets
TABench provides two official coordinate configurations, both using the bounding box order:
[x_min, y_min, x_max, y_max]
- `abs`: absolute pixel coordinates in `[x_min, y_min, x_max, y_max]` format
- `rel1000`: coordinates normalized into the range `[0, 1000]`, still in `[x_min, y_min, x_max, y_max]` format
This design makes evaluation easier across models with different coordinate output conventions.
You can load them as two Hugging Face dataset configs:
- `loongwayX/TABench`, config = `abs`
- `loongwayX/TABench`, config = `rel1000`
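If a model emits boxes in one convention and you want to compare against the other, the mapping is a simple rescale. Below is a minimal sketch (not part of the official scripts) assuming `[x_min, y_min, x_max, y_max]` order; the rounding behavior is an assumption and may differ from how the official `rel1000` annotations were produced.

```python
def abs_to_rel1000(box, width, height):
    """Rescale an absolute-pixel [x_min, y_min, x_max, y_max] box into the
    [0, 1000] normalized convention. Rounding here is an assumption."""
    x_min, y_min, x_max, y_max = box
    return [
        round(x_min / width * 1000),
        round(y_min / height * 1000),
        round(x_max / width * 1000),
        round(y_max / height * 1000),
    ]

# e.g. for a 1280x960 image:
print(abs_to_rel1000([455, 638, 517, 650], 1280, 960))  # [355, 665, 404, 677]
```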
Tasks
TABench consists of two evaluation tasks. Each item in the JSONL files already includes the inputs a model needs and the ground truth used by the official evaluator.
1) Region-to-Text (R2T)
- Input: an image and a target bounding box
- Goal: read the text within the given box and return the string
- Expected model output (for `eval.py`): a text string, or a JSON object / array that contains a `text` field
The evaluator is tolerant to formats such as:
"hello"{"text": "hello"}[{"text": "hello"}]
Example JSONL record (fields simplified):
{
"image_path": "SceneText/SceneText_e81f0a9df28c75fa.jpg",
"task_type": "R2T",
"question": "What is the text at location [455, 638, 517, 650]?",
"bbox": [455, 638, 517, 650],
"GT": "USINGER'S",
"text": "USINGER'S"
}
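To sanity-check a record like this, you can crop the target region with Pillow. This is a quick sketch assuming the `abs` configuration (pixel coordinates) and that the image path is resolved relative to the repository root.

```python
from PIL import Image

record = {
    "image_path": "SceneText/SceneText_e81f0a9df28c75fa.jpg",
    "bbox": [455, 638, 517, 650],  # [x_min, y_min, x_max, y_max]
}

img = Image.open(record["image_path"])
x_min, y_min, x_max, y_max = record["bbox"]
crop = img.crop((x_min, y_min, x_max, y_max))  # PIL expects (left, upper, right, lower)
crop.save("r2t_region.png")  # the crop should contain the text "USINGER'S"
```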
2) Text-to-Region (T2R)
- Input: an image and a text query
- Goal: localize one or more regions that correspond to the query
- Expected model output (for `eval.py`): a list of boxes, or JSON objects containing `bbox` / `bbox_2d`
The evaluator accepts formats such as:
- `[[x1, y1, x2, y2], ...]`
- `{"bbox": [x1, y1, x2, y2]}`
- `{"bbox_2d": [x1, y1, x2, y2]}`
- `[{"bbox": [...]}, {"bbox": [...]}]`
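A hypothetical helper showing the kind of parsing this implies (again, `eval.py` is authoritative):

```python
import json

def extract_t2r_boxes(raw: str):
    """Collect [x1, y1, x2, y2] boxes from a raw T2R response.
    Handles bare box lists, {"bbox": ...} / {"bbox_2d": ...} objects,
    and lists of such objects."""
    try:
        obj = json.loads(raw.strip())
    except json.JSONDecodeError:
        return []
    items = obj if isinstance(obj, list) else [obj]
    boxes = []
    for item in items:
        if isinstance(item, dict):
            box = item.get("bbox") or item.get("bbox_2d")
        else:
            box = item  # e.g. a bare [x1, y1, x2, y2] list
        if isinstance(box, list) and len(box) == 4:
            boxes.append([float(v) for v in box])
    return boxes

print(extract_t2r_boxes('[{"bbox_2d": [171, 18, 340, 42]}]'))  # [[171.0, 18.0, 340.0, 42.0]]
```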
Example JSONL record (fields simplified):
{
"image_path": "Receipt/Receipt_1135.jpg",
"task_type": "T2R",
"question": "Where is \"ε°ιΎεοΌε京倫εεΊεΊοΌ\" located in the image?",
"bbox": [[171, 18, 340, 42]],
"GT": {"bbox": [171, 18, 340, 42], "text": "ε°ιΎεοΌε京倫εεΊεΊοΌ"},
"text": "ε°ιΎεοΌε京倫εεΊεΊοΌ"
}
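For T2R records you can visualize the ground-truth box(es) the same way. A sketch assuming the `abs` configuration:

```python
from PIL import Image, ImageDraw

record = {
    "image_path": "Receipt/Receipt_1135.jpg",
    "bbox": [[171, 18, 340, 42]],  # T2R: an array of boxes
}

img = Image.open(record["image_path"]).convert("RGB")
draw = ImageDraw.Draw(img)
for x_min, y_min, x_max, y_max in record["bbox"]:
    draw.rectangle([x_min, y_min, x_max, y_max], outline="red", width=3)
img.save("t2r_gt_boxes.png")
```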
Evaluation Metrics
We report the following metrics:
- R2T Accuracy: exact-match accuracy between the normalized prediction and the ground-truth text.
- T2R F1-score: detection-style F1 computed with greedy bipartite matching under an IoU threshold of 0.5 (see the sketch below).
Overall Score:
Overall = (Acc_R2T + F1_T2R) / 2
If a model only supports one task direction, the missing metric is counted as 0.
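A minimal sketch of what the T2R matching and scoring look like, assuming per-ground-truth greedy matching at IoU ≥ 0.5; the official `eval.py` may differ in tie-breaking and edge-case handling.

```python
def iou(a, b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def t2r_f1(pred_boxes, gt_boxes, thr=0.5):
    """Greedy one-to-one matching: each ground-truth box consumes at most one prediction."""
    used, tp = set(), 0
    for gt in gt_boxes:
        best_j, best_score = None, thr
        for j, pred in enumerate(pred_boxes):
            if j in used:
                continue
            score = iou(pred, gt)
            if score >= best_score:
                best_j, best_score = j, score
        if best_j is not None:
            used.add(best_j)
            tp += 1
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(t2r_f1([[170, 20, 338, 40]], [[171, 18, 340, 42]]))  # 1.0 (IoU well above 0.5)
```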
Data Format
All annotations are provided in JSONL, with two coordinate configurations:
- `TABench-abs.jsonl`: absolute pixel coordinates `(x_min, y_min, x_max, y_max)`
- `TABench-rel1000.jsonl`: the same coordinates normalized into the range `[0, 1000]`
Each line is one example with commonly used fields:
- `image_path`: path of the image relative to the repo root
- `category`: scenario label (e.g., `SceneText`, `Receipt`, `Book`, ...)
- `task_type`: `"R2T"` or `"T2R"`
- `question`: natural-language prompt for the task instance
- `bbox`: target box(es)
  - R2T: a single box `[x_min, y_min, x_max, y_max]`
  - T2R: an array of boxes `[[x_min, y_min, x_max, y_max], ...]`
- `GT`: ground truth
  - R2T: a string (the expected text)
  - T2R: an object `{ "bbox": [...], "text": "..." }` or an array of such objects
- `text`: the textual content associated with the target region(s), when available
- `coord_type` (optional): `"abs"` or `"rel1000"`
- `annotation_level` (optional): one of `word` / `line` / `paragraph`
- `source_dataset` (optional): the original dataset name
Notes:
- For R2T, `eval.py` normalizes strings (NFKC, whitespace folding, punctuation mapping) before computing exact-match accuracy and character error rate (CER).
- For T2R, `eval.py` parses boxes from model outputs using robust rules and evaluates with IoU-based matching (default threshold = 0.5) to compute Precision / Recall / F1 and QSR@K.
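Because of the normalization step, superficially different R2T strings can still count as an exact match. A rough sketch of the same idea (NFKC plus whitespace folding; the punctuation mapping and any further steps in `eval.py` are not reproduced here):

```python
import re
import unicodedata

def normalize_text(s: str) -> str:
    """Sketch of R2T preprocessing: NFKC normalization plus whitespace folding."""
    s = unicodedata.normalize("NFKC", s)
    return re.sub(r"\s+", " ", s).strip()

# Fullwidth letters fold to ASCII under NFKC, and stray whitespace is collapsed:
print(normalize_text("ＵＳＩＮＧＥＲ'S \n") == "USINGER'S")  # True
```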
Repository Structure
.
├── TABench-abs.jsonl
├── TABench-rel1000.jsonl
├── eval.py
├── simple_vllm_infer.py
├── SceneText/
├── Receipt/
├── Book/
├── Document/
├── ...
└── README.md
Quick Start
To run TABench locally, first clone the full repository with Git LFS so that the images, annotations, and evaluation scripts are all available under the expected relative paths.
git lfs install
git clone https://huggingface.co/datasets/loongwayX/TABench
cd TABench
1) Run inference
# For abs
python simple_vllm_infer.py \
--model_path /path/to/your/model \
--coords abs \
--input-dir . \
--output-dir ./infer_output
# For rel1000
python simple_vllm_infer.py \
--model_path /path/to/your/model \
--coords rel1000 \
--input-dir . \
--output-dir ./infer_output
The script will produce predictions at:
./infer_output/<model_name>/TABench-abs_with_predictions.jsonl
# or
./infer_output/<model_name>/TABench-rel1000_with_predictions.jsonl
2) Run evaluation
python eval.py \
--pred ./infer_output/<model_name>/TABench-abs_with_predictions.jsonl \
--output report.json \
--export-jsonl case_analysis.jsonl
Baseline Results
We report representative model performance on TABench.
“—” indicates that the metric is not applicable because the model does not support the corresponding output format.
For models that do not support one task direction, the missing metric is counted as 0 when computing Overall.
| Model | Size | R2T Acc (%) β | T2R F1 (%) β | Overall (%) β |
|---|---|---|---|---|
| Gemini 3.0 Pro | Closed | 25.85 | 62.58 | 44.22 |
| GPT 5.2 | Closed | 10.64 | 0.64 | 5.64 |
| Kimi K2.5 | 1T | 49.54 | 57.73 | 53.64 |
| Qwen3.5 | 397B | 61.10 | 72.80 | 66.95 |
| Qwen3-VL-Instruct | 235B | 60.90 | 60.40 | 60.65 |
| DeepSeek OCR2 | 3B | — | 11.66 | 5.83 |
| Qwen3-VL-Instruct | 2B | 38.35 | 37.19 | 37.77 |
| Q-Mask (Ours) | 3B | 50.64 | 40.36 | 45.50 |
Optional: Programmatic Access
If you only want to inspect the benchmark records programmatically, you can also load the JSONL annotations with the `datasets` library:
from datasets import load_dataset
ds = load_dataset("loongwayX/TABench", name="abs", split="test")
print(ds[0])
Note that the main recommended workflow for running the benchmark is to clone the full repository locally, since the official scripts expect the images to be available under the same relative paths as image_path.
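For example, you can quickly split the loaded records by task direction (a small sketch using the field names documented above):

```python
from datasets import load_dataset

ds = load_dataset("loongwayX/TABench", name="abs", split="test")
r2t = ds.filter(lambda ex: ex["task_type"] == "R2T")
t2r = ds.filter(lambda ex: ex["task_type"] == "T2R")
print(len(r2t), len(t2r))  # expected: 2725 2725
```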
License and Data Usage
- Code and evaluation scripts in this repository are released under Apache-2.0.
- TABench benchmark packaging and annotations are released for research use.
- Original images and source annotations remain subject to the licenses and terms of their respective upstream datasets.
- Users are responsible for complying with the original licenses and usage restrictions of the source datasets when using or redistributing this benchmark.
If you use TABench commercially or redistribute any part of the original data, please carefully review the licenses of all upstream datasets first.
Citation
If you use TABench in your research, please cite:
@article{xu2026q,
title={Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models},
author={Xu, Longwei and Feng, Feng and Zhang, Shaojie and Chen, Xin and Li, Hang and Du, Anan and Yu, Hailong and Fu, Pei and Luo, Zhenbo and Luan, Jian},
journal={arXiv preprint arXiv:2604.00161},
year={2026}
}
Acknowledgements
TABench is built upon several publicly available datasets and benchmarks. We sincerely thank the original dataset creators and maintainers for making these resources available to the community.
- HierText: https://github.com/google-research-datasets/hiertext
- CDLA: https://github.com/buptlihang/CDLA
- ICDAR 2015 Incidental Scene Text: https://rrc.cvc.uab.es/?ch=4
- SVRD (ICDAR 2023 Structured Text Extraction from Visually-Rich Document Images): https://rrc.cvc.uab.es/?ch=21
Please refer to the corresponding official pages for dataset descriptions, licenses, and usage terms.