---
license: cc-by-nc-sa-4.0
language:
- en
- ja
- sw
- ur
multilinguality:
- multilingual
pretty_name: VLURes
tags:
- vision-language
- multimodal
- benchmarking
- low-resource-languages
- cross-lingual-evaluation
- long-text-grounding
- image-text
- acl-2026
size_categories:
- 1K<n<10K
---
# VLURes: Benchmarking Long-Text Grounding and Cross-Lingual Robustness in Vision Language Models

## Dataset Description

**VLURes** is a multilingual benchmark for evaluating the fine-grained visual and linguistic understanding of Vision-Language Models (VLMs) in long-text settings. It was created to move beyond short-caption, English-centric evaluation and instead test image understanding, long-context grounding, and cross-lingual robustness in culturally diverse settings.

This dataset is associated with our **ACL 2026 Findings** paper, "VLURes: Benchmarking Long-Text Grounding and Cross-Lingual Robustness in Vision Language Models."

The current Hugging Face release contains the image-text pairs in a single multilingual split; each example consists of a renamed image file, its paired long-form text, and a language identifier.
### Key Features

* **Multilingual and culturally grounded:** The dataset covers **English, Japanese, Swahili, and Urdu**.
* **Long-text grounding:** Each example pairs an image with substantially richer text than standard short-caption benchmarks.
* **Single multilingual release:** The uploaded Hugging Face version is organized as one `train` split, with language identified in the `language` field.
* **Benchmark-oriented design:** The data supports fine-grained evaluation of VLMs across visual, linguistic, and cross-modal tasks.
* **Low-resource language coverage:** The benchmark includes dedicated resources for **Swahili** and **Urdu**, which remain underrepresented in existing vision-language evaluation datasets.

### Supported Tasks

VLURes was designed to support evaluation across eight tasks:

1. **Object Recognition (OR)**
2. **Scene Understanding (SU)**
3. **Relationship Understanding (RU)**
4. **Semantic Segmentation (SS)**
5. **Image Captioning (IC)**
6. **Image-Text Matching (ITM)**
7. **Unrelatedness (U)**
8. **Visual Question Answering (VQA)**

The Hugging Face release provides the core multilingual image-text pairs. Task prompts, evaluation protocols, and benchmark-specific task formulations are described in the paper and accompanying project materials.
---

## Dataset Structure

### Repository Layout

The uploaded dataset follows the structure below:

```text
VLURes_hf_ready/
├── README.md
└── train/
    ├── metadata.parquet
    └── images/
        ├── en/...
        ├── sw/...
        ├── ur/...
        └── jp/...
```
### Data Format

The dataset is packaged in an `ImageFolder`-style format for easy loading with the Hugging Face `datasets` library (see the loading sketch below).

* `train/metadata.parquet` stores the metadata table.
* `train/images/...` contains the actual image files.
* All images in this release are stored as `.jpg` files after preprocessing.
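If you have cloned the repository locally, a layout like the one above should also load with the generic `imagefolder` builder in recent versions of `datasets`, which pick up `train/metadata.parquet` automatically and join rows to images via the `file_name` column. This is a sketch, not a guaranteed recipe; the local directory name is an assumption:

```python
from datasets import load_dataset

# Hypothetical path to a local clone of this repository. The `train/` folder
# is inferred as the split; metadata.parquet supplies id/text/language.
local = load_dataset("imagefolder", data_dir="VLURes_hf_ready")
print(local["train"][0]["language"])
```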
### Data Instances

In the uploaded `metadata.parquet`, each row contains the following fields:

* `id`: a unique example identifier such as `en_000001`
* `file_name`: the relative path to the image file, for example `images/en/en_000001.jpg`
* `text`: the paired text associated with the image
* `language`: the language code for the example

When loaded with the Hugging Face `datasets` library, the dataset exposes the following features:

* `id`
* `image`
* `text`
* `language`
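A quick way to confirm this schema after loading (a minimal check, using the repository ID from the Usage section below):

```python
from datasets import load_dataset

dataset = load_dataset("atamiles/VLURes")
# Expect: id (string), image (Image), text (string), language (string)
print(dataset["train"].features)
```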
### Language Codes

The current uploaded release uses the following values in the `language` field:

* `en` for English
* `jp` for Japanese
* `sw` for Swahili
* `ur` for Urdu

Note that although the YAML metadata above lists Japanese under its ISO 639-1 code `ja`, the uploaded files and the `language` field use `jp`, matching the current release structure.
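If your downstream tooling expects the standard `ja` code, one way to remap the field in memory is with `map`. This is a sketch; the released files themselves keep `jp`:

```python
# Remap the non-standard `jp` code to ISO 639-1 `ja`.
# This changes only the in-memory copy, not the released files.
normalized = dataset["train"].map(
    lambda ex: {"language": "ja" if ex["language"] == "jp" else ex["language"]}
)
```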
### Splits

This Hugging Face release currently provides a **single split**:

* `train`

This split contains the multilingual image-text pairs used in the benchmark release.
### Data Size

The current uploaded release contains **3,415** examples in total.

| Language | Number of image-text pairs |
|---|---:|
| English (`en`) | 996 |
| Swahili (`sw`) | 1,030 |
| Urdu (`ur`) | 949 |
| Japanese (`jp`) | 440 |
| **Total** | **3,415** |
**Note:** Many Japanese (`jp`) image-text pairs were withheld from this release because of license restrictions imposed by the respective web sources; some English, Swahili, and Urdu pairs were removed as well.
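To reproduce the per-language counts, or to carve out a single-language subset for evaluation, `filter` works directly on the `language` field. A sketch using Swahili as an example:

```python
# Select the Swahili subset; its length should match the table above (1,030).
swahili = dataset["train"].filter(lambda ex: ex["language"] == "sw")
print(len(swahili))
```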
---

## Dataset Creation

### Curation Rationale

VLURes was created to evaluate VLMs in settings that require more than shallow image-caption matching. The benchmark emphasizes:

1. multilingual understanding,
2. culturally grounded content,
3. long-text visual grounding,
4. robustness beyond English-only evaluation, and
5. fine-grained multimodal reasoning.

### Source Data

The image-text pairs were curated from publicly accessible web sources, including encyclopedia-style pages, news content, and other article-like web documents containing naturally co-occurring images and text.

The benchmark spans a broad range of topics, including:

* animals
* products
* buildings
* locations
* events
* food
* drinks
* hobbies
* works of art
* organizations

### Image-Text Alignment

For each document, candidate images were matched to the article content and filtered to retain representative image-text pairs suitable for benchmark construction. The final release stores the prepared image files together with their corresponding text in a format that is easy to load and use for research.
### Preprocessing for the Hugging Face Release

For this uploaded release, the following steps were applied (the image-handling steps are sketched after this list):

* image files were converted into a unified `.jpg` format,
* files were renamed into stable identifiers such as `en_000001.jpg`,
* text content was extracted and cleaned from source text files,
* the dataset was organized into a single multilingual `train` split,
* metadata was consolidated into `metadata.parquet`.
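The conversion and renaming steps could be approximated as follows. This is an illustrative sketch, not the exact pipeline used for the release; the directory arguments and the sequential counter scheme are assumptions:

```python
from pathlib import Path
from PIL import Image

def prepare_images(src_dir: str, out_dir: str, lang: str) -> None:
    """Convert source images to RGB JPEGs with stable IDs like en_000001.jpg."""
    out = Path(out_dir) / lang
    out.mkdir(parents=True, exist_ok=True)
    files = sorted(p for p in Path(src_dir).iterdir() if p.is_file())
    for i, path in enumerate(files, start=1):
        img = Image.open(path).convert("RGB")  # JPEG cannot store alpha/palette modes
        img.save(out / f"{lang}_{i:06d}.jpg", format="JPEG")
```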
---

## Usage

You can load the dataset directly from Hugging Face using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("atamiles/VLURes")
print(dataset)
print(dataset["train"][0])
```
A typical example will contain:

```python
{
    "id": "en_000001",
    "image": <PIL image>,
    "text": "...",
    "language": "en"
}
```
To inspect the language distribution:

```python
from collections import Counter

langs = Counter(dataset["train"]["language"])
print(langs)
```
---

## Intended Uses

VLURes is intended for research use in:

* multilingual vision-language evaluation,
* long-text visual grounding,
* cross-lingual robustness analysis,
* multimodal benchmarking,
* low-resource language research.

It may also be useful for studying failure modes of VLMs under long-context and multilingual conditions.

## Out-of-Scope Uses

This dataset was not designed for:

* face recognition or identity inference,
* surveillance applications,
* safety-critical deployment without additional validation,
* legal or medical decision-making,
* commercial reuse without checking the rights associated with the underlying source materials.
---

## Important Note on Copyright and Licensing

The benchmark release is shared for research use under the license specified for this repository.

Because the data originates from public web sources, **users are responsible for ensuring that their use of the released materials complies with any applicable third-party rights, copyright restrictions, and terms of use associated with the original source content.**

If you plan to redistribute, adapt, or deploy the contents beyond research use, please verify the status of the original source materials independently.
---

## Citation

If you use VLURes in your work, please cite the associated paper:

```bibtex
@misc{atuhurra2025vluresbenchmarkingvlmvisual,
      title={VLURes: Benchmarking VLM Visual and Linguistic Understanding in Low-Resource Languages},
      author={Jesse Atuhurra and Iqra Ali and Tomoya Iwakura and Hidetaka Kamigaito and Tatsuya Hiraoka},
      year={2025},
      eprint={2510.12845},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.12845},
}
```
We will update the citation block with the final proceedings metadata when it becomes available.

---

## Contact

For questions about the dataset, benchmark, or associated paper, please use the project repository or contact Jesse Atuhurra.