---
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - multi-image
  - hallucination
  - benchmark
  - vision-language-model
  - multimodal
size_categories:
  - 1K<n<10K
---

# MIHBench: Multi-Image Hallucination Benchmark

**Paper:** *MIHBench: Can Multi-modal Large Language Models Understand Multi-Image Inputs?* (ACM Multimedia 2025)

## Overview

MIHBench is a comprehensive benchmark for evaluating multi-image understanding and hallucination in Multi-modal Large Language Models (MLLMs). It contains 3,200 samples across 4 tasks (800 samples each), with each sample containing 2-4 images.

## Tasks

| Task | # Images | Description |
|---|---|---|
| Count | 2 | Determine whether the same number of a target object appears in both images. 400 samples include injected distracting objects. |
| Existence (Adversarial) | 3 | Determine whether a target object exists in all images, with adversarially selected objects (rare or easily confused). |
| Existence (Popular) | 3 | Determine whether a target object exists in all images, using commonly known objects. |
| Existence (Random) | 3 | Determine whether a target object exists in all images, using randomly selected objects. |
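All four tasks are binary yes/no questions. As an illustration only (the exact question templates used in MIHBench are not reproduced here, so the wording below is a hypothetical sketch), an Existence-style prompt and a simple answer normalizer might look like:

```python
def existence_question(target: str) -> str:
    """Hypothetical template: ask whether `target` appears in every image.
    The actual MIHBench phrasing may differ."""
    return f"Does a {target} appear in all of the images?"


def normalize_answer(raw: str) -> str:
    """Map a free-form model response to a 'yes'/'no' label (assumption:
    responses lead with the decision, e.g. 'Yes, it does.')."""
    text = raw.strip().lower()
    return "yes" if text.startswith("yes") else "no"
```

Normalizing free-form responses to the dataset's `"yes"`/`"no"` labels keeps scoring consistent across models with different answer styles.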

Note: A fifth task (ID Consistency) will be added in a future update.

## Dataset Schema

### Common columns (all tasks)

| Column | Type | Description |
|---|---|---|
| `images` | list[image] | 2-4 images (PIL Image objects) |
| `question` | str | Natural language question about the images |
| `label` | str | Ground-truth answer (`"yes"` or `"no"`) |
| `task` | str | Task identifier |
| `num_images` | int | Number of images in the sample |
| `image_names` | list[str] | Source image filenames |

### Additional columns (Count task only)

| Column | Type | Description |
|---|---|---|
| `injected` | bool | Whether distracting objects were injected into the question |
| `object_counts` | str | JSON string mapping image identifiers to object counts (e.g., `'{"A": 1, "B": 1}'`) |
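Since `object_counts` is stored as a JSON string rather than a nested dict, it must be decoded before use. A minimal sketch (the `label` column remains the authoritative ground truth; this only illustrates the relationship between the counts and the Count task's yes/no question):

```python
import json


def counts_match(object_counts: str) -> bool:
    """Decode a per-image count mapping such as '{"A": 1, "B": 1}' and
    check whether the target object appears the same number of times in
    every image, i.e. the condition the Count task asks about."""
    counts = json.loads(object_counts)
    return len(set(counts.values())) == 1


counts_match('{"A": 1, "B": 1}')  # True: both images contain one instance
counts_match('{"A": 2, "B": 1}')  # False: the counts differ
```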

## Data Splits

Each task is a separate configuration/split with 800 samples (400 "yes", 400 "no").
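Because each split is balanced (400 "yes" / 400 "no"), plain accuracy is meaningful, but reporting per-class accuracy as well helps expose yes-bias, a common failure mode in hallucination evaluation. A minimal scoring sketch:

```python
from collections import Counter


def score(predictions, labels):
    """Overall and per-class accuracy for binary yes/no labels.
    On a balanced split, a yes-biased model scores near 50% overall
    but near 100% on 'yes' and near 0% on 'no'."""
    assert len(predictions) == len(labels)
    correct = Counter()  # correct predictions per gold class
    total = Counter()    # sample count per gold class
    for pred, gold in zip(predictions, labels):
        total[gold] += 1
        correct[gold] += int(pred == gold)
    per_class = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / len(labels)
    return overall, per_class
```

For example, `score(["yes", "yes", "yes", "no"], ["yes", "no", "yes", "no"])` returns `(0.75, {"yes": 1.0, "no": 0.5})`, flagging the weaker "no" performance that overall accuracy alone would hide.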

## Image Sources

- Tasks 1-4: COCO (Common Objects in Context) dataset
- Task 5 (ID Consistency, coming soon): CO3D dataset

## Citation

If you use MIHBench in your research, please cite:

```bibtex
@inproceedings{mihbench2025,
  title={MIHBench: Can Multi-modal Large Language Models Understand Multi-Image Inputs?},
  author={},
  booktitle={Proceedings of ACM Multimedia 2025},
  year={2025}
}
```

## License

This dataset is released under the CC-BY-4.0 license. The underlying images come from COCO and CO3D, which are distributed under their own licenses.