Datasets:
Formats: imagefolder
Size: 1K - 10K
ArXiv:
Tags: vision-language, multimodal, benchmarking, low-resource-languages, cross-lingual-evaluation, long-text-grounding
License:
Update README.md

README.md (changed)
**VLURes** is a multilingual benchmark for evaluating the fine-grained visual and linguistic understanding of Vision-Language Models (VLMs) in long-text settings. It was created to move beyond short-caption, English-centric evaluation and instead test image understanding, long-context grounding, and cross-lingual robustness in culturally diverse settings.

This dataset is associated with our <span style="color: blue;">ACL2026 Findings paper titled "VLURes: Benchmarking Long-Text Grounding and Cross-Lingual Robustness in Vision Language Models."</span>
The current Hugging Face release contains the uploaded image-text pairs in a single multilingual split, with each example consisting of a renamed image file, its paired long-form text, and a language identifier.
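Because all languages share this single split, per-language work downstream reduces to grouping on the language identifier. A minimal sketch with hypothetical in-memory examples (the field names `file_name`, `text`, and `language` are illustrative assumptions, not confirmed field names from this card):

```python
from collections import defaultdict

# Hypothetical examples mirroring the schema described above: a renamed
# image file, its paired long-form text, and a language identifier.
# Field names are assumptions for illustration only.
examples = [
    {"file_name": "img_0001.jpg", "text": "A long description ...", "language": "en"},
    {"file_name": "img_0002.jpg", "text": "Maelezo marefu ...", "language": "sw"},
    {"file_name": "img_0003.jpg", "text": "Another long description ...", "language": "en"},
]

# Group examples by their language identifier.
by_language = defaultdict(list)
for example in examples:
    by_language[example["language"]].append(example)

print(sorted(by_language))     # ['en', 'sw']
print(len(by_language["en"]))  # 2
```

The same grouping applies unchanged to the real split once it is loaded, since the language identifier travels with each example.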
### Data Size

The current uploaded release contains **3,415** examples in total.

| Language | Number of image-text pairs |
|---|---:|
| English (`en`) | 996 |
| Swahili (`sw`) | 1,030 |
| Urdu (`ur`) | 949 |
| Japanese (`jp`) | 440 |
| **Total** | **3,415** |
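As a quick sanity check, the per-language counts in the table sum to the stated release total:

```python
# Per-language pair counts copied from the table above.
pairs_per_language = {"en": 996, "sw": 1030, "ur": 949, "jp": 440}

total = sum(pairs_per_language.values())
print(total)  # 3415
```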

<span style="color: red;">We have excluded many Japanese (ja) image-text pairs from this release due to license restrictions imposed by the respective web sources; some English (en), Swahili (sw), and Urdu (ur) pairs have been removed as well.</span>

## Intended Uses

<span style="color: red;">VLURes is intended for research use in:</span>

* multilingual vision-language evaluation,
* long-text visual grounding,