---
license: mit
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - benchmark
  - multimodal
  - reasoning
  - visual-grounding
  - mllm-evaluation
pretty_name: DailyClue
size_categories:
  - n<1K
---

# DailyClue Dataset

**Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios**

[![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/abs/2604.14041)
[![Code](https://img.shields.io/badge/Code-GitHub-black)](https://github.com/xiaominli1020/DailyClue)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)

## Dataset Summary

DailyClue is a benchmark for evaluating **visual clue-driven reasoning** in Multimodal Large Language Models (MLLMs). Unlike benchmarks that test pre-existing knowledge, DailyClue requires models to actively identify decisive visual clues from images before producing answers.

The dataset spans **4 major domains** and **16 distinct subtasks**, with **666 total questions**.

## Dataset Structure

```
DailyClue/
├── daily_life/          # Images for Daily Commonsense Reasoning
├── location/            # Images for Location Identification
├── science/             # Images for Scientific Commonsense
├── spatial/             # Images for Spatial Reasoning
├── daily_life.json
├── location.json
├── science.json
└── spatial.json
```

## Statistics

| Category | # Questions | Formats |
|---|---|---|
| Daily Commonsense Reasoning | 180 | Multiple Choice, Yes/No, Open-ended |
| Location Identification | 200 | Open-ended |
| Spatial Reasoning | 163 | Multiple Choice, Yes/No |
| Scientific Commonsense | 123 | Multiple Choice, Yes/No, Open-ended |
| **Total** | **666** | |

## Data Fields

Each JSON entry contains:

| Field | Type | Description |
|---|---|---|
| `image` | `list[str]` | Image filename(s) within the category subfolder |
| `question` | `str` | The question posed to the model |
| `clues` | `str` | Human-annotated ground-truth visual clues (see note below) |
| `ground_truth` | `str` | The correct answer |
| `format` | `str` | `"Multiple choice"`, `"Yes or no"`, or `"Open-ended"` |
| `category_1` | `str` | Primary domain (one of the four above) |
| `category_2` | `str` | Subtask within the primary domain |
| `language` | `str` | `"English"` |

> **Note on `clues`**: This field contains human-annotated ground-truth visual clues. It is used in ablation experiments (injecting GT clues to probe the impact on model accuracy) and in the Rigorous Evaluation Protocol (checking whether model-predicted clues semantically align with GT clues). It is **not** fed to the model during standard inference.
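For reference, here is a minimal loading sketch in plain Python. It assumes each category file is a flat JSON list of such records and that `./dataset` is the local download directory from the Usage section below:

```python
import json
from pathlib import Path

DATA_DIR = Path("./dataset")  # local copy of the dataset
CATEGORY = "daily_life"       # also: location, science, spatial

# Each <category>.json is assumed to be a flat list of entry objects
# with the fields documented in the table above.
with open(DATA_DIR / f"{CATEGORY}.json", encoding="utf-8") as f:
    entries = json.load(f)

for entry in entries[:3]:
    # `image` holds filenames relative to the category subfolder.
    images = [DATA_DIR / CATEGORY / name for name in entry["image"]]
    print(entry["format"], "|", entry["question"])
    print("  images:", [p.name for p in images])
    print("  answer:", entry["ground_truth"])
```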

## Usage

### Download

```bash
# via git
git clone https://huggingface.co/datasets/Crysun/DailyClue
```

```python
# via huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Crysun/DailyClue", repo_type="dataset", local_dir="./dataset")
```
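Since each category file is a flat JSON list (see Data Fields above), it should also load directly with the `datasets` library. This is a sketch, not an officially supported loader:

```python
from datasets import load_dataset

# The "json" builder accepts a top-level JSON array of objects.
ds = load_dataset("json", data_files="./dataset/daily_life.json", split="train")
print(ds[0]["question"], "->", ds[0]["ground_truth"])
```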

### Run Inference

After downloading, point the inference script to the local directory:

```bash
python infer/inference.py \
  --dataset ./dataset \
  --model_names "gpt-4o" \
  --prompt_mode "b"
```

See the [GitHub repository](https://github.com/xiaominli1020/DailyClue) for the full evaluation pipeline.
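For a quick sanity check before wiring up the full pipeline, a naive scorer for the closed-form formats could look like the following. This is purely illustrative; the paper's Rigorous Evaluation Protocol additionally verifies that predicted clues align with the ground-truth `clues`, which this does not do:

```python
def exact_match(prediction: str, ground_truth: str) -> bool:
    """Case- and whitespace-insensitive exact match; reasonable only for
    the 'Multiple choice' and 'Yes or no' formats, not open-ended answers."""
    return prediction.strip().lower() == ground_truth.strip().lower()

assert exact_match(" Yes ", "yes")
```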

## Citation

```bibtex
@article{dailyclue2026,
  title={Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios},
  author={Li, Xiaomin and Wang, Tala and Zhong, Zichen and Zhang, Ying and Zheng, Zirui and Isobe, Takashi and Li, Dezhuang and Lu, Huchuan and He, You and Jia, Xu},
  journal={arXiv preprint arXiv:2604.14041},
  year={2026}
}
```