---
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - multi-image
  - hallucination
  - benchmark
  - vision-language-model
  - multimodal
size_categories:
  - 1K<n<10K
---

# MIHBench: Multi-Image Hallucination Benchmark

**Paper**: [MIHBench: Can Multi-modal Large Language Models Understand Multi-Image Inputs?](https://arxiv.org/abs/2505.xxxxx) | ACM Multimedia 2025

## Overview

MIHBench is a comprehensive benchmark for evaluating multi-image understanding and hallucination in Multi-modal Large Language Models (MLLMs). It contains **3,200 samples** across **4 tasks** (800 samples each), with each sample containing 2-4 images.
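The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repository id, configuration name, and split name below are placeholders, not confirmed values, so substitute the identifiers shown on this dataset's page.

```python
from datasets import load_dataset

# Placeholder repository id and configuration/split names; replace with the
# actual values for this dataset. Each task is exposed as its own
# configuration (see "Data Splits" below).
ds = load_dataset("<org>/MIHBench", name="count", split="test")

print(len(ds))                            # 800 samples per task
print(ds[0]["question"], ds[0]["label"])  # question text and "yes"/"no" label
```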

## Tasks

| Task | # Images | Description |
|------|----------|-------------|
| **Count** | 2 | Determine whether a target object appears the same number of times in both images. 400 samples include injected distracting objects. |
| **Existence (Adversarial)** | 3 | Determine whether a target object exists in all images, with adversarially selected objects (rare or easily confused categories). |
| **Existence (Popular)** | 3 | Determine whether a target object exists in all images, using commonly known objects. |
| **Existence (Random)** | 3 | Determine whether a target object exists in all images, using randomly selected objects. |

> **Note**: A 5th task (ID Consistency) will be added in a future update.

## Dataset Schema

### Common columns (all tasks)

| Column | Type | Description |
|--------|------|-------------|
| `images` | `list[image]` | 2-4 images (PIL Image objects) |
| `question` | `str` | Natural language question about the images |
| `label` | `str` | Ground truth answer (`"yes"` or `"no"`) |
| `task` | `str` | Task identifier |
| `num_images` | `int` | Number of images in the sample |
| `image_names` | `list[str]` | Source image filenames |
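As an illustration of the schema above, here is how one sample's common columns might be accessed, continuing from the loading sketch with `ds` already in scope:

```python
sample = ds[0]

images = sample["images"]                # list of 2-4 PIL Image objects
print("task:", sample["task"])
print("num_images:", sample["num_images"], "files:", sample["image_names"])
print("Q:", sample["question"])
print("A:", sample["label"])             # "yes" or "no"

for i, img in enumerate(images):
    print(f"image {i}: size={img.size}")  # (width, height) in pixels
```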

### Additional columns (Count task only)

| Column | Type | Description |
|--------|------|-------------|
| `injected` | `bool` | Whether distracting objects were injected into the question |
| `object_counts` | `str` | JSON string mapping image identifiers to object counts (e.g., `'{"A": 1, "B": 1}'`) |
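Because `object_counts` is stored as a JSON string, it should be decoded before use. A short sketch, assuming `sample` comes from the Count configuration loaded earlier:

```python
import json

counts = json.loads(sample["object_counts"])   # e.g. {"A": 1, "B": 1}

# The Count task asks whether the target object appears the same number of
# times in both images; the decoded mapping makes that check explicit.
same_count = len(set(counts.values())) == 1
print(sample["injected"], counts, same_count)
```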

## Data Splits

Each task is a separate configuration/split with 800 samples (400 `"yes"`, 400 `"no"`).
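Since every split is balanced, chance performance is 50% and plain accuracy is a meaningful metric. A minimal evaluation sketch, where `predict` is a hypothetical stand-in for your model's inference call:

```python
def accuracy(ds, predict):
    """Score yes/no predictions against ground-truth labels.

    `predict` is a hypothetical callable: (list of PIL images, question
    string) -> "yes" or "no". Swap in your own MLLM inference here.
    """
    correct = sum(
        predict(s["images"], s["question"]).strip().lower() == s["label"]
        for s in ds
    )
    return correct / len(ds)
```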

## Image Sources

- **Count and Existence tasks**: COCO (Common Objects in Context) dataset
- **ID Consistency** (coming soon): CO3D (Common Objects in 3D) dataset

## Citation

If you use MIHBench in your research, please cite:

```bibtex
@inproceedings{mihbench2025,
  title={MIHBench: Can Multi-modal Large Language Models Understand Multi-Image Inputs?},
  author={},
  booktitle={Proceedings of the ACM International Conference on Multimedia},
  year={2025}
}
```

## License

This dataset is released under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. The underlying images are from COCO and CO3D, which have their own licenses.