
# ViMU: Benchmarking Video Metaphorical Understanding

ViMU is a video understanding benchmark designed to evaluate whether multimodal models can go beyond literal perception and infer metaphorical, rhetorical, and socially grounded meanings in videos.

Unlike standard video understanding datasets that focus mainly on objects, actions, events, or temporal relations, ViMU targets implicit subtext: what a video means beyond what is directly shown. The benchmark includes open-ended interpretation, evidence grounding, rhetoric mechanism identification, and social value signal identification.

## Dataset Structure

After downloading, the dataset should be organized as:

```
ViMU/
├── videos/
│   ├── vimu_000001.mp4
│   ├── vimu_000002.mp4
│   └── ...
├── metadata/
│   ├── vimu_oe.jsonl
│   ├── vimu_eg.jsonl
│   ├── vimu_ss.jsonl
│   ├── video_evidence.jsonl
│   └── cache/
│       ├── frames/
│       └── transcripts/
└── output/
```

The currently uploaded archive may contain only a small subset of the full dataset, intended for review or testing purposes.
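As a quick sanity check after downloading, the layout above can be verified with a short script. This is only a sketch: `check_layout` is a hypothetical helper, not part of the released scripts.

```python
from pathlib import Path

# Paths that the README's directory tree says should exist under the root.
EXPECTED = [
    "videos",
    "metadata/vimu_oe.jsonl",
    "metadata/vimu_eg.jsonl",
    "metadata/vimu_ss.jsonl",
    "metadata/video_evidence.jsonl",
]

def check_layout(root):
    """Return the expected paths that are missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]
```

An empty return value means every expected file and directory was found.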

## Files

### videos/

This directory contains anonymized video files. Each video is named by its anonymous ID, for example:

```
vimu_000001.mp4
vimu_000002.mp4
```

### metadata/vimu_oe.jsonl

Open-ended interpretation task. Each line corresponds to one video and contains:

```json
{
  "video_id": "...",
  "video_path": "...",
  "taxonomy": {...},
  "qa": {
    "question": "...",
    "answer": "...",
    "short_reference_points": [...],
    "grading_rubric": {...}
  }
}
```

This task asks models to answer a hint-free question about the video’s intended meaning.
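Each metadata file is plain JSONL, so it can be read with the standard library alone. The following is a minimal sketch; `load_jsonl` is a hypothetical helper, not part of the released scripts.

```python
import json

def load_jsonl(path):
    """Read a JSONL file into a list of dicts, one record per non-empty line."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# e.g. oe_records = load_jsonl("metadata/vimu_oe.jsonl")
#      question = oe_records[0]["qa"]["question"]
```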

### metadata/vimu_eg.jsonl

Evidence grounding task. Each line contains a multi-label question asking which video elements support the intended meaning.

The candidate evidence sources include:

- visual scene / objects
- on-screen text
- spoken dialogue
- tone of voice
- editing transition

The fields include:

```json
{
  "video_id": "...",
  "question_type": "evidence_grounding",
  "question": "...",
  "intended_meaning": "...",
  "options": {...},
  "correct_options": [...],
  "correct_labels": [...]
}
```
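One simple way to score multi-label predictions against `correct_options` is per-item exact match plus an F1-style overlap. This is only a sketch of one plausible metric; `multilabel_scores` is a hypothetical helper, and the official evaluation may differ.

```python
def multilabel_scores(predicted, gold):
    """Exact match and F1 overlap for one multi-label item.

    `predicted` and `gold` are iterables of option keys, e.g. {"A", "C"}.
    """
    predicted, gold = set(predicted), set(gold)
    exact = predicted == gold
    overlap = len(predicted & gold)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return exact, f1
```

For example, predicting `{"A"}` when the gold set is `{"A", "B"}` gives exact match `False` with partial F1 credit.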

### metadata/vimu_ss.jsonl

Structured subtext task. Each line contains two multi-label multiple-choice tasks:

- `rhetoric_mechanisms`
- `social_value_signals`

Each task uses five macro-level categories labeled A–E.

For rhetoric mechanisms:

- A. Literal / Direct
- B. Opposition / Incongruity
- C. Attitude / Tone-based Rhetoric
- D. Amplification / Stylization
- E. Implicit / Coded Social Framing

For social value signals:

- A. Neutral / No Social Signal
- B. Emotional Attitude
- C. Social Evaluation / Devaluation
- D. Norm and Value Framing
- E. Identity / Ideological Signaling
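For scoring or display, the two category sets above can be kept as plain dictionaries keyed by option letter. The dictionary contents mirror the lists above; `decode_labels` is an illustrative helper, not part of the released scripts.

```python
# Option letter -> category name, copied from the two taxonomies above.
RHETORIC_CATEGORIES = {
    "A": "Literal / Direct",
    "B": "Opposition / Incongruity",
    "C": "Attitude / Tone-based Rhetoric",
    "D": "Amplification / Stylization",
    "E": "Implicit / Coded Social Framing",
}

SOCIAL_VALUE_CATEGORIES = {
    "A": "Neutral / No Social Signal",
    "B": "Emotional Attitude",
    "C": "Social Evaluation / Devaluation",
    "D": "Norm and Value Framing",
    "E": "Identity / Ideological Signaling",
}

def decode_labels(option_keys, mapping):
    """Map predicted option letters to human-readable category names."""
    return [mapping[k] for k in option_keys]
```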

### metadata/video_evidence.jsonl

Metadata used for model inference, including video paths, sampled frame paths, video duration, and ASR transcript.

```json
{
  "video_id": "...",
  "video_path": "...",
  "duration_sec": 0.0,
  "frames": [...],
  "transcript": "..."
}
```
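A record in this file can be turned into a simple inference context (frames plus transcript) before prompting a model. This is a sketch under assumptions: `build_context` and the `max_frames` cap are hypothetical, and only the field names follow the schema above.

```python
def build_context(record, max_frames=8):
    """Assemble a minimal inference context from one video_evidence record.

    `max_frames` is an illustrative cap on how many sampled frames to keep.
    """
    frames = record.get("frames", [])[:max_frames]
    transcript = record.get("transcript", "") or "(no transcript)"
    return {
        "video_id": record["video_id"],
        "frame_paths": frames,
        "transcript": transcript,
        "duration_sec": record.get("duration_sec", 0.0),
    }
```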

### metadata/cache/

Optional cached evidence used by the scripts:

- `metadata/cache/frames/`
- `metadata/cache/transcripts/`

## Recommended Placement

After downloading the dataset, place it as:

```
/path/to/ViMU/
```

Then update the scripts by setting:

```python
PROJECT_ROOT = "/path/to/ViMU"
```

For example:

```python
PROJECT_ROOT = "/Users/anonymous/ViMU"
```
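With `pathlib`, the root can then be resolved into the dataset paths described above. The constant names other than `PROJECT_ROOT` are illustrative, and the placeholder path should be replaced with the actual download location.

```python
from pathlib import Path

# Placeholder; replace with the actual dataset location.
PROJECT_ROOT = Path("/path/to/ViMU")

VIDEOS_DIR = PROJECT_ROOT / "videos"
METADATA_DIR = PROJECT_ROOT / "metadata"
OE_FILE = METADATA_DIR / "vimu_oe.jsonl"
EG_FILE = METADATA_DIR / "vimu_eg.jsonl"
SS_FILE = METADATA_DIR / "vimu_ss.jsonl"
EVIDENCE_FILE = METADATA_DIR / "video_evidence.jsonl"
```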

## Tasks

ViMU contains four evaluation tasks:

| Task | File | Format |
|------|------|--------|
| Open-ended interpretation | `metadata/vimu_oe.jsonl` | Free-form generation |
| Evidence grounding | `metadata/vimu_eg.jsonl` | Multi-label multiple-choice |
| Rhetoric mechanism identification | `metadata/vimu_ss.jsonl` | Multi-label multiple-choice |
| Social value signal identification | `metadata/vimu_ss.jsonl` | Multi-label multiple-choice |

## License

This dataset is released under a Research Use Only License.

The dataset is provided solely for non-commercial research purposes. Users may use, download, and analyze the dataset for academic and research activities, including model evaluation, benchmarking, and reproducibility studies.

Commercial use, redistribution for commercial purposes, or use in products, services, or systems intended for commercial deployment is not permitted without prior written permission from the dataset maintainers.

By using this dataset, users agree to comply with the terms above and to use the dataset responsibly, with appropriate consideration of privacy, fairness, and potential social impact.

## Notes

This dataset may contain offensive, harmful, or socially sensitive content because it studies videos and their implicit social meanings. The dataset is intended for research on video understanding, multimodal reasoning, and model evaluation.