---
license: mit
task_categories:
- visual-question-answering
- question-answering
- text-classification
language:
- en
tags:
- video-understanding
- multimodal
- video-metaphorical-understanding
- benchmark
- subtext-understanding
pretty_name: ViMU
configs:
- config_name: OE
data_files:
- split: eval
path: metadata/vimu_oe.jsonl
- config_name: EG
data_files:
- split: eval
path: metadata/vimu_eg.jsonl
- config_name: RMI-SVI
data_files:
- split: eval
path: metadata/vimu_ss.jsonl
---
<div align="center">
<img src="overall.png" width="100%"/>
<h1>ViMU: Benchmarking Video Metaphorical Understanding</h1>
[Project Page](https://liqiiiii.github.io/Video-Metaphorical-Understanding/)
[arXiv](https://arxiv.org/abs/2605.14607)
[Dataset](https://huggingface.co/datasets/LIQIIIII/ViMU)
[Code](https://github.com/LiQiiiii/Video-Metaphorical-Understanding)
[Qi Li](https://liqiiiii.github.io/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)<sup>*</sup>
<sup>*</sup>Corresponding author
[xML Lab](https://sites.google.com/view/xml-nus), National University of Singapore
</div>
Our GitHub repository contains the evaluation scripts for ViMU, a benchmark for video metaphorical understanding. The code evaluates multimodal models on four tasks:
1. Open-ended interpretation (OE)
2. Evidence grounding (EG)
3. Rhetoric mechanism identification (RM)
4. Social value signal identification (SV)
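The task metadata can also be loaded straight from the Hub via the configs declared in the card header. A minimal sketch using the `datasets` library; config names and the `eval` split follow the header above:
```python
from datasets import load_dataset

# Each config maps to one metadata JSONL file (see the card header).
oe = load_dataset("LIQIIIII/ViMU", "OE", split="eval")       # open-ended interpretation
eg = load_dataset("LIQIIIII/ViMU", "EG", split="eval")       # evidence grounding
ss = load_dataset("LIQIIIII/ViMU", "RMI-SVI", split="eval")  # rhetoric / social value

print(oe[0])  # inspect one record
```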
## Directory Structure
Expected project structure:
```text
ViMU/
├── videos/
│ ├── vimu_000001.mp4
│ └── ...
├── metadata/
│ ├── vimu_oe.jsonl
│ ├── vimu_eg.jsonl
│ ├── vimu_ss.jsonl
│ ├── video_evidence.jsonl
│ └── cache/
├── scripts/
│ ├── 00-vimu_oe.py
│ ├── 01-vimu_oe_judge.py
│ ├── 02-vimu_oe_score.py
│ ├── 10-vimu_eg.py
│ ├── 11-vimu_eg_score.py
│ ├── 20-vimu_ss.py
│ ├── 21-vimu_ss_score.py
│ └── utils.py
└── output/
```
## Setup
Install dependencies:
```bash
pip install openai requests numpy pandas tqdm
```
Depending on which models you evaluate, set the corresponding API keys:
```bash
export OPENAI_API_KEY="your_openai_key"
export OPENROUTER_API_KEY="your_openrouter_key"
export GOOGLE_API_KEY="your_google_key"
```
Not all keys are required if you only run a subset of models.
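Before launching a long run, it can help to check which providers are actually configured. A small convenience snippet (ours, not part of the repo; key names as exported above):
```python
import os

# Providers assumed from the export commands above; adjust as needed.
KEYS = ["OPENAI_API_KEY", "OPENROUTER_API_KEY", "GOOGLE_API_KEY"]

for key in KEYS:
    status = "set" if os.environ.get(key) else "MISSING"
    print(f"{key}: {status}")
```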
## Path Configuration
Before running, edit each script and set:
```python
PROJECT_ROOT = "/Your/Path/To/ViMU"
```
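The scripts then resolve data and output locations relative to this root. A hypothetical sketch of that layout (directory names taken from the structure above; the variable names are illustrative, not the repo's):
```python
from pathlib import Path

PROJECT_ROOT = Path("/Your/Path/To/ViMU")  # edit this in each script

# Illustrative path layout, matching the directory structure above.
VIDEO_DIR = PROJECT_ROOT / "videos"
METADATA_DIR = PROJECT_ROOT / "metadata"
OUTPUT_DIR = PROJECT_ROOT / "output"
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
```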
## Recommended Running Order
For a full evaluation, run:
```bash
# Open-ended interpretation
python scripts/00-vimu_oe.py
python scripts/01-vimu_oe_judge.py
python scripts/02-vimu_oe_score.py
# Evidence grounding
python scripts/10-vimu_eg.py
python scripts/11-vimu_eg_score.py
# Structured subtext tasks without guidance
python scripts/20-vimu_ss.py --prompt_mode without_guidance
python scripts/21-vimu_ss_score.py --prompt_mode without_guidance
# Structured subtext tasks with guidance
python scripts/20-vimu_ss.py --prompt_mode with_guidance
python scripts/21-vimu_ss_score.py --prompt_mode with_guidance
```
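Equivalently, a small driver script (ours, not part of the repo) can replay the same sequence and stop on the first failure:
```python
import subprocess
import sys

# Convenience wrapper replaying the recommended order above.
STEPS = [
    ["scripts/00-vimu_oe.py"],
    ["scripts/01-vimu_oe_judge.py"],
    ["scripts/02-vimu_oe_score.py"],
    ["scripts/10-vimu_eg.py"],
    ["scripts/11-vimu_eg_score.py"],
    ["scripts/20-vimu_ss.py", "--prompt_mode", "without_guidance"],
    ["scripts/21-vimu_ss_score.py", "--prompt_mode", "without_guidance"],
    ["scripts/20-vimu_ss.py", "--prompt_mode", "with_guidance"],
    ["scripts/21-vimu_ss_score.py", "--prompt_mode", "with_guidance"],
]

for step in STEPS:
    subprocess.run([sys.executable, *step], check=True)  # stop on first failure
```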
## Model Configuration
Models are configured in the `MODEL_SPECS` list inside the inference scripts.
To enable or disable a model, toggle its `"enabled"` flag:
```python
"enabled": True  # set to False to skip this model
```
For OpenRouter models, make sure the model ID and API key are valid.
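The exact schema of `MODEL_SPECS` lives in the inference scripts; as an illustration only (every field except `enabled` is an assumption of ours), an entry might look like:
```python
# Hypothetical MODEL_SPECS entry; only the "enabled" flag is documented above,
# the remaining fields are illustrative assumptions.
MODEL_SPECS = [
    {
        "name": "gpt-4o",                 # display name used in result files
        "provider": "openai",             # which API/key the script should use
        "model_id": "gpt-4o-2024-08-06",  # provider-side model identifier
        "enabled": True,                  # set False to skip this model
    },
]
```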
## Output Files
The main output files are:
```text
output/vimu_oe_summary.json
output/vimu_eg_summary.json
output/vimu_ss_without_guidance_summary.json
output/vimu_ss_with_guidance_summary.json
```
These files contain aggregated evaluation results.
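The summaries are plain JSON, so they can be inspected directly. A minimal sketch; the exact schema is defined by the scoring scripts:
```python
import json
from pathlib import Path

# Read one aggregated summary and pretty-print it.
summary = json.loads(Path("output/vimu_oe_summary.json").read_text())
print(json.dumps(summary, indent=2))
```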
## Scoring Rules
### Open-ended Interpretation
Open-ended answers are evaluated using an LLM-as-a-judge protocol. The judge scores semantic understanding based on:
```text
core intent
implicit signal
target or social meaning
hallucination penalty
literal-only penalty
```
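Concretely, the judging step reduces to a chat-completion call with these criteria embedded in the prompt. A minimal sketch assuming an OpenAI-style judge; the prompt wording, function name, and model choice here are ours, not the repo's:
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative judge prompt built from the criteria listed above.
JUDGE_PROMPT = (
    "Score the candidate interpretation of the video against the reference, "
    "considering: core intent, implicit signal, target or social meaning, "
    "a hallucination penalty, and a literal-only penalty. Reply with a score."
)

def judge(reference: str, candidate: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Reference: {reference}\nCandidate: {candidate}"},
        ],
    )
    return resp.choices[0].message.content
```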
### Evidence Grounding
Evidence grounding is scored as a multi-label prediction problem. If the prediction contains any incorrect option, the score is 0. Otherwise the prediction is a subset of the gold answer, and the score is `score = number of correctly selected options / number of gold options`.
### Rhetoric and Social Value
The rhetoric and social value tasks use the same multi-label rule: if no incorrect option is selected, the score is the fraction of gold options that were selected; otherwise `score = 0`.
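In code, this rule amounts to the following (a self-contained sketch, not the repo's implementation):
```python
def multilabel_score(predicted: set[str], gold: set[str]) -> float:
    """Score a multi-label prediction: 0 if any wrong option is chosen,
    otherwise the fraction of gold options that were selected."""
    if predicted - gold:  # any incorrect option zeroes the score
        return 0.0
    return len(predicted & gold) / len(gold)

# Example: gold = {A, C}; predicting only A earns 0.5, adding wrong B earns 0.
assert multilabel_score({"A"}, {"A", "C"}) == 0.5
assert multilabel_score({"A", "B"}, {"A", "C"}) == 0.0
```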
## Notes
The dataset contains socially sensitive video memes. The benchmark is intended for research use only.
## Citation
If you find our work interesting or helpful, please cite it as follows:
```bibtex
@article{li2026vimu,
  title={ViMU: Benchmarking Video Metaphorical Understanding},
  author={Li, Qi and Wang, Xinchao},
  journal={arXiv preprint arXiv:2605.14607},
  year={2026}
}
```