---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- multi-image
- hallucination
- benchmark
- vision-language-model
- multimodal
size_categories:
- 1K<n<10K
---

# MIHBench

> **Note**: A 5th task (ID Consistency) will be added in a future update.

## Dataset Schema

### Common columns (all tasks)

| Column | Type | Description |
|--------|------|-------------|
| `images` | `list[image]` | 2-4 images (PIL Image objects) |
| `question` | `str` | Natural-language question about the images |
| `label` | `str` | Ground-truth answer (`"yes"` or `"no"`) |
| `task` | `str` | Task identifier |
| `num_images` | `int` | Number of images in the sample |
| `image_names` | `list[str]` | Source image filenames |

### Additional columns (Count task only)

| Column | Type | Description |
|--------|------|-------------|
| `injected` | `bool` | Whether distracting objects were injected into the question |
| `object_counts` | `str` | JSON string mapping image identifiers to object counts (e.g., `'{"A": 1, "B": 1}'`) |

## Data Splits

Each task is a separate configuration/split with 800 samples (400 `"yes"`, 400 `"no"`).

## Image Sources

- **Tasks 1-4**: COCO (Common Objects in Context) dataset
- **Task 5** (ID Consistency, coming soon): CO3D dataset

## Citation

If you use MIHBench in your research, please cite:

```bibtex
@inproceedings{mihbench2025,
  title={MIHBench: Can Multi-modal Large Language Models Understand Multi-Image Inputs?},
  author={},
  booktitle={Proceedings of ACM Multimedia 2025},
  year={2025}
}
```

## License

This dataset is released under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. The underlying images are from COCO and CO3D, which are distributed under their own licenses.
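
## Working with the Count-task columns

The `object_counts` column is stored as a JSON string rather than a parsed mapping, so it needs decoding before use. A minimal sketch, assuming only the schema documented above; the sample row below is illustrative and not drawn from the dataset:

```python
import json


def decode_object_counts(row: dict) -> dict:
    """Parse the JSON-encoded per-image object counts of a Count-task row."""
    return {img_id: int(n) for img_id, n in json.loads(row["object_counts"]).items()}


# Illustrative row matching the documented schema (not a real sample).
row = {
    "question": "Do image A and image B contain the same number of dogs?",
    "label": "yes",
    "task": "count",
    "injected": False,
    "object_counts": '{"A": 1, "B": 1}',
}

counts = decode_object_counts(row)
print(counts)  # {'A': 1, 'B': 1}
```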
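
## Scoring

Because every split is balanced (400 `"yes"` / 400 `"no"`), plain accuracy over the binary answers is the natural metric, with 50% as the chance baseline. A small scoring sketch; the prediction and label lists are made up for illustration:

```python
def yes_no_accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of case-insensitive exact matches between 'yes'/'no' strings."""
    assert len(predictions) == len(labels)
    correct = sum(
        p.strip().lower() == l.strip().lower() for p, l in zip(predictions, labels)
    )
    return correct / len(labels)


# Hypothetical model outputs vs. ground-truth labels.
acc = yes_no_accuracy(["Yes", "no", "yes", "No"], ["yes", "no", "no", "no"])
print(acc)  # 0.75
```

Normalizing case and whitespace before comparison keeps the metric robust to models that answer "Yes." versus "yes" inconsistently.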