---
license: mit
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - benchmark
  - multimodal
  - reasoning
  - visual-grounding
  - mllm-evaluation
pretty_name: DailyClue
size_categories:
  - n<1K
---

# DailyClue Dataset

**Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios**

License: MIT

## Dataset Summary

DailyClue is a benchmark for evaluating visual clue-driven reasoning in Multimodal Large Language Models (MLLMs). Unlike benchmarks that test pre-existing knowledge, DailyClue requires models to actively identify decisive visual clues from images before producing answers.

The dataset spans 4 major domains and 16 distinct subtasks, with 666 total questions.

## Dataset Structure

```
DailyClue/
├── daily_life/          # Images for Daily Commonsense Reasoning
├── location/            # Images for Location Identification
├── science/             # Images for Scientific Commonsense
├── spatial/             # Images for Spatial Reasoning
├── daily_life.json
├── location.json
├── science.json
└── spatial.json
```

## Statistics

| Category | # Questions | Formats |
|---|---|---|
| Daily Commonsense Reasoning | 180 | Multiple Choice, Yes/No, Open-ended |
| Location Identification | 200 | Open-ended |
| Spatial Reasoning | 163 | Multiple Choice, Yes/No |
| Scientific Commonsense | 123 | Multiple Choice, Yes/No, Open-ended |
| **Total** | **666** | |

## Data Fields

Each JSON entry contains:

| Field | Type | Description |
|---|---|---|
| `image` | `list[str]` | Image filename(s) within the category subfolder |
| `question` | `str` | The question posed to the model |
| `clues` | `str` | Human-annotated ground-truth visual clues (see note below) |
| `ground_truth` | `str` | The correct answer |
| `format` | `str` | `"Multiple choice"`, `"Yes or no"`, or `"Open-ended"` |
| `category_1` | `str` | Primary domain (one of the four above) |
| `category_2` | `str` | Subtask within the primary domain |
| `language` | `str` | `"English"` |

**Note on `clues`:** This field contains human-annotated ground-truth visual clues. It is used in the ablation experiments (injecting GT clues to probe their impact on model accuracy) and in the Rigorous Evaluation Protocol (checking whether model-predicted clues semantically align with the GT clues). It is not fed to the model during standard inference.
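
As a quick illustration, here is a minimal sketch of how the raw annotations can be read and joined with their image folders, assuming each `*.json` file parses as a flat list of records with the fields above and that filenames in `image` are relative to the matching category subfolder (consistent with the layout shown earlier). The names `DATASET_ROOT` and `load_category` are illustrative, not part of the dataset:

```python
import json
from pathlib import Path

DATASET_ROOT = Path("./dataset")  # local copy of the dataset (see Usage below)

# Category subfolders share their names with the JSON annotation files.
CATEGORIES = ["daily_life", "location", "science", "spatial"]

def load_category(name: str) -> list[dict]:
    """Load one category's entries and resolve image paths (assumed layout)."""
    with open(DATASET_ROOT / f"{name}.json", encoding="utf-8") as f:
        entries = json.load(f)  # assumed: a flat list of dicts
    for entry in entries:
        # `image` lists filenames relative to the category subfolder.
        entry["image_paths"] = [DATASET_ROOT / name / fn for fn in entry["image"]]
    return entries

if __name__ == "__main__":
    sample = load_category("location")[0]
    print(sample["question"])
    print(sample["format"], "->", sample["ground_truth"])
    print(sample["image_paths"])
```

Per the note above, a loader like this would only read `clues` for the ablation and evaluation settings, never to build a standard inference prompt.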

## Usage

### Download

```bash
# via git
git clone https://huggingface.co/datasets/Crysun/DailyClue
```

```python
# via huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Crysun/DailyClue", repo_type="dataset", local_dir="./dataset")
```
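
If you prefer the `datasets` library, the annotation files can also be opened with its generic `json` loader; this is a sketch under the same assumption that each file is a flat list of records (images remain plain filenames and are not decoded):

```python
from datasets import load_dataset

# One split per annotation file in the downloaded local copy.
ds = load_dataset(
    "json",
    data_files={name: f"./dataset/{name}.json"
                for name in ["daily_life", "location", "science", "spatial"]},
)
print(ds["spatial"][0]["question"])
```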

### Run Inference

After downloading, point the inference script to the local directory:

```bash
python infer/inference.py \
  --dataset ./dataset \
  --model_names "gpt-4o" \
  --prompt_mode "b"
```

See the GitHub repository for the full evaluation pipeline.

## Citation

```bibtex
@article{dailyclue2026,
  title={Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios},
  author={Li, Xiaomin and Wang, Tala and Zhong, Zichen and Zhang, Ying and Zheng, Zirui and Isobe, Takashi and Li, Dezhuang and Lu, Huchuan and He, You and Jia, Xu},
  journal={arXiv preprint arXiv:2604.14041},
  year={2026}
}
```