
We introduce TexTAR, a multi-task, context-aware Transformer for Textual Attribute Recognition (TAR), capable of handling both positional cues (bold, italic) and visual cues (underline, strikeout) in noisy, multilingual document images.

MMTAD Dataset

MMTAD (Multilingual Multi-domain Textual Attribute Dataset) comprises 1,623 real-world document images—from legislative records and notices to textbooks and notary documents—captured under diverse lighting, layout, and noise conditions. It delivers 1,117,716 word-level annotations for two attribute groups:

  • T1: Bold, Italic, Bold & Italic

  • T2: Underline, Strikeout, Underline & Strikeout

Language & Domain Coverage

  • English, Spanish, and six South Asian languages
  • Distribution: 67.4 % Hindi, 8.2 % Telugu, 8.0 % Marathi, 5.9 % Punjabi, 5.4 % Bengali, 5.2 % Gujarati/Tamil/others
  • 300–500 annotated words per image on average

To address class imbalance (e.g., fewer italic or strikeout samples), we apply context-aware augmentations:

  • Shear transforms to generate additional italics
  • Realistic, noisy underline and strikeout overlays

These augmentations preserve document context and mimic real-world distortions, ensuring a rich, balanced benchmark for textual attribute recognition.
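The shear-based italic augmentation can be sketched as follows. This is a minimal NumPy illustration, not the authors' exact pipeline; the shear factor, the row-wise integer shift, and the zero-padding policy are all assumptions:

```python
import numpy as np

def shear_word(patch: np.ndarray, shear: float = 0.25) -> np.ndarray:
    """Horizontally shear a word patch (H x W) to synthesize an italic look.

    Each row is shifted right in proportion to its distance from the
    baseline (bottom row), so upright glyphs acquire a forward slant.
    """
    h, w = patch.shape
    max_shift = int(round(shear * h))
    out = np.zeros((h, w + max_shift), dtype=patch.dtype)
    for y in range(h):
        shift = int(round(shear * (h - 1 - y)))
        out[y, shift:shift + w] = patch[y]
    return out

patch = np.ones((8, 4), dtype=np.uint8)   # toy "word" patch
sheared = shear_word(patch, shear=0.25)
print(sheared.shape)  # → (8, 6)
```

The underline/strikeout overlays would similarly draw a noisy horizontal stroke onto the patch; because the transform operates on the word crop in place, the surrounding document context is preserved.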

More Information
For detailed documentation and resources, visit our website: TexTAR

Downloading the Dataset

from datasets import load_dataset

ds = load_dataset("Tex-TAR/MMTAD")
print(ds)

Dataset Contents

  • textar-testset: document images
  • testset_labels.json: a JSON array or dict where each key/entry is an image filename and the value is its annotated attribute labels (bold, italic, underline, strikeout, etc. for each word)
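Each annotation string can be decoded with the standard `json` module. A minimal sketch, using a hypothetical single-box record that mirrors the dataset's schema (`bb_dim` bounding boxes, `bb_ids` word entries with `ocrv` OCR text and `attb` attribute flags):

```python
import json

# Hypothetical single-box record following the annotation_json schema.
sample = ('[{"bb_dim":[73,176,157,213],"bb_ids":[{"id":71120,"ocrv":"huge",'
          '"attb":{"bold":false,"italic":false,"b_i":false,"no_bi":true,'
          '"no_us":true,"underlined":false,"strikeout":false,"u_s":false}}]}]')

words = []
for box in json.loads(sample):
    x0, y0, x1, y1 = box["bb_dim"]                      # word bounding box
    for entry in box["bb_ids"]:
        # Keep only positive attribute flags (skip the "no_*" negations).
        attrs = [k for k, v in entry["attb"].items()
                 if v and not k.startswith("no_")]
        words.append({"text": entry["ocrv"],
                      "bbox": (x0, y0, x1, y1),
                      "attrs": attrs})

print(words)  # → [{'text': 'huge', 'bbox': (73, 176, 157, 213), 'attrs': []}]
```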

Viewer Format

To power the Hugging Face Data Studio viewer, we convert the original testset_labels.json into a line-delimited JSONL file (hierarchical.jsonl) of the form:

{"image": "textar-testset/ncert-page_25.png",
 "annotation_json": "[{\"bb_dim\":[73,176,157,213],\"bb_ids\":[{\"id\":71120,\"ocrv\":\"huge\",\"attb\":{\"bold\":false,\"italic\":false,\"b_i\":false,\"no_bi\":true,…}}]},…]"}

Citation

Please use the following BibTeX entry for citation.

@inproceedings{Kumar2025TexTAR,
  title     = {TexTAR: Textual Attribute Recognition in Multi-domain and Multi-lingual Document Images},
  author    = {Rohan Kumar and Jyothi Swaroopa Jinka and Ravi Kiran Sarvadevabhatla},
  booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
  year      = {2025}
}