---
language:
- en
size_categories:
- 1K<n<10K
tags:
- video-understanding
- multimodal
- video-metaphorical-understanding
- benchmark
- subtext-understanding
---
# ViMU: Benchmarking Video Metaphorical Understanding
ViMU is a video understanding benchmark designed to evaluate whether multimodal models can go beyond literal perception and infer metaphorical, rhetorical, and socially grounded meanings in videos.
Unlike standard video understanding datasets that focus mainly on objects, actions, events, or temporal relations, ViMU targets implicit subtext: what a video means beyond what is directly shown. The benchmark includes open-ended interpretation, evidence grounding, rhetoric mechanism identification, and social value signal identification.
## Dataset Structure
After downloading, the dataset should be organized as:
```text
ViMU/
├── videos/
│   ├── vimu_000001.mp4
│   ├── vimu_000002.mp4
│   └── ...
├── metadata/
│   ├── vimu_oe.jsonl
│   ├── vimu_eg.jsonl
│   ├── vimu_ss.jsonl
│   ├── video_evidence.jsonl
│   └── cache/
│       ├── frames/
│       └── transcripts/
└── output/
```
The currently uploaded archive may contain only a small subset of the full dataset, for review and testing purposes.
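To sanity-check the layout after extraction, a minimal sketch like the following can confirm the expected directories and metadata files are present. The file names follow the tree above; the root path is a placeholder you should adjust to your download location.
```python
from pathlib import Path

# Assumed download location; change to wherever the archive was extracted.
root = Path("/path/to/ViMU")

expected = [
    "videos",
    "metadata/vimu_oe.jsonl",
    "metadata/vimu_eg.jsonl",
    "metadata/vimu_ss.jsonl",
    "metadata/video_evidence.jsonl",
]

for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:8s} {rel}")
```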
## Files
### `videos/`
This directory contains anonymized video files. Each video file is named after its anonymized ID, for example:
```text
vimu_000001.mp4
vimu_000002.mp4
```
### `metadata/vimu_oe.jsonl`
Open-ended interpretation task. Each line corresponds to one video and contains:
```json
{
  "video_id": "...",
  "video_path": "...",
  "taxonomy": {...},
  "qa": {
    "question": "...",
    "answer": "...",
    "short_reference_points": [...],
    "grading_rubric": {...}
  }
}
```
This task asks models to answer a hint-free question about the video’s intended meaning.
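As a rough illustration, the records can be read with the standard `json` module. The field names below follow the schema above; the file path assumes the placement shown earlier.
```python
import json
from pathlib import Path

oe_path = Path("/path/to/ViMU/metadata/vimu_oe.jsonl")  # assumed placement

with oe_path.open(encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        qa = record["qa"]
        print(record["video_id"])
        print("Q:", qa["question"])
        print("Reference answer:", qa["answer"])
        break  # inspect only the first record
```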
### `metadata/vimu_eg.jsonl`
Evidence grounding task. Each line contains a multi-label question asking which video elements support the intended meaning.
The candidate evidence sources include:
```text
visual scene / objects
on-screen text
spoken dialogue
tone of voice
editing transition
```
The fields include:
```json
{
  "video_id": "...",
  "question_type": "evidence_grounding",
  "question": "...",
  "intended_meaning": "...",
  "options": {...},
  "correct_options": [...],
  "correct_labels": [...]
}
```
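The card does not prescribe an official scoring script here, but one common way to evaluate multi-label answers is set-based F1 between the predicted option letters and `correct_options`. The sketch below is illustrative, not the benchmark's official metric.
```python
def multilabel_f1(predicted: set[str], gold: set[str]) -> float:
    """Set-based F1 between predicted and gold option letters (illustrative metric)."""
    if not predicted and not gold:
        return 1.0
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the model predicts options A and C, while the gold answer is A and B.
print(multilabel_f1({"A", "C"}, {"A", "B"}))  # 0.5
```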
### `metadata/vimu_ss.jsonl`
Structured subtext task. Each line contains two multi-label multiple-choice tasks:
```text
rhetoric_mechanisms
social_value_signals
```
Each task uses five macro-level categories labeled A–E.
For rhetoric mechanisms:
```text
A. Literal / Direct
B. Opposition / Incongruity
C. Attitude / Tone-based Rhetoric
D. Amplification / Stylization
E. Implicit / Coded Social Framing
```
For social value signals:
```text
A. Neutral / No Social Signal
B. Emotional Attitude
C. Social Evaluation / Devaluation
D. Norm and Value Framing
E. Identity / Ideological Signaling
```
### `metadata/video_evidence.jsonl`
Metadata used for model inference, including video paths, sampled frame paths, video duration, and ASR transcript.
```json
{
  "video_id": "...",
  "video_path": "...",
  "duration_sec": 0.0,
  "frames": [...],
  "transcript": "..."
}
```
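For illustration, these records can be assembled into a single model prompt that combines the sampled frames and the transcript. Only the field names come from the schema above; the prompt template is a hypothetical sketch, and the actual inference scripts may differ.
```python
import json
from pathlib import Path

evidence_path = Path("/path/to/ViMU/metadata/video_evidence.jsonl")  # assumed placement

with evidence_path.open(encoding="utf-8") as f:
    record = json.loads(f.readline())

frame_paths = record["frames"]      # sampled frame image paths
transcript = record["transcript"]   # ASR transcript of the audio track

# Hypothetical prompt assembly for a multimodal model.
prompt = (
    f"You are given {len(frame_paths)} frames from a {record['duration_sec']:.1f}s video "
    f"and its transcript:\n{transcript}\n\n"
    "What is the video's intended, non-literal meaning?"
)
print(prompt)
```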
### `metadata/cache/`
Optional cached evidence used by the scripts:
```text
metadata/cache/frames/
metadata/cache/transcripts/
```
## Recommended Placement
After downloading the dataset, place it at:
```text
/path/to/ViMU/
```
Then update the scripts by setting:
```python
PROJECT_ROOT = "/path/to/ViMU"
```
For example:
```python
PROJECT_ROOT = "/Users/anonymous/ViMU"
```
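One way the scripts can derive the task-file locations from `PROJECT_ROOT` is with `pathlib`; the variable names below are illustrative, and only `PROJECT_ROOT` comes from the snippet above.
```python
from pathlib import Path

PROJECT_ROOT = "/path/to/ViMU"  # as set above

root = Path(PROJECT_ROOT)
VIDEO_DIR = root / "videos"
OE_FILE = root / "metadata" / "vimu_oe.jsonl"
EG_FILE = root / "metadata" / "vimu_eg.jsonl"
SS_FILE = root / "metadata" / "vimu_ss.jsonl"
EVIDENCE_FILE = root / "metadata" / "video_evidence.jsonl"
OUTPUT_DIR = root / "output"
```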
## Tasks
ViMU contains four evaluation tasks:
| Task | File | Format |
| ---------------------------------- | ------------------------ | --------------------------- |
| Open-ended interpretation | `metadata/vimu_oe.jsonl` | Free-form generation |
| Evidence grounding | `metadata/vimu_eg.jsonl` | Multi-label multiple-choice |
| Rhetoric mechanism identification | `metadata/vimu_ss.jsonl` | Multi-label multiple-choice |
| Social value signal identification | `metadata/vimu_ss.jsonl` | Multi-label multiple-choice |
## License
This dataset is released under a **Research Use Only License**.
The dataset is provided solely for non-commercial research purposes. Users may use, download, and analyze the dataset for academic and research activities, including model evaluation, benchmarking, and reproducibility studies.
Commercial use, redistribution for commercial purposes, or use in products, services, or systems intended for commercial deployment is not permitted without prior written permission from the dataset maintainers.
By using this dataset, users agree to comply with the terms above and to use the dataset responsibly, with appropriate consideration of privacy, fairness, and potential social impact.
## Notes
This dataset may contain offensive, harmful, or socially sensitive content because it studies videos and their implicit social meanings. The dataset is intended for research on video understanding, multimodal reasoning, and model evaluation.