# H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic Video Understanding

<div align='center'>[[🍎 Project Page](/)] [[📖 arXiv Paper]()] [[📊 Dataset]()]</div>

H²VU-Benchmark is designed for evaluating **video MLLMs**. 🌟

---

## 🔥 News

* **`2025.04.30`** 🌟 We have open-sourced our dataset on Hugging Face.
* **`2025.04.01`** 🌟 We are very proud to launch H²VU-Benchmark, the first comprehensive evaluation benchmark for Multimodal Large Language Models (MLLMs) covering both offline general video and online streaming video understanding!

## 👀 H²VU-Benchmark Overview

H²VU-Benchmark is designed to comprehensively assess the capabilities of video understanding models, particularly in real-world scenarios. It addresses limitations of existing benchmarks by focusing on **extended video durations, advanced task complexity, and diversified real-world data.**

## Key Features

* **Three-Tier Hierarchical Competency Classification** (L-1 to L-3) with **10,183 evaluation tasks** covering a broad spectrum of diverse data.
* **Two Main Categories:**
  * **Offline General Video:** employs common perception and reasoning tasks, along with novel tasks focusing on **countercommonsense comprehension** and **trajectory tracking** (27 evaluation task types).
  * **Online Streaming Video:** utilizes standard perception and reasoning tasks (20 evaluation task types).

## Key Differentiators from Existing Benchmarks

Our work distinguishes itself through three key features:

* **Extended Video Duration:**
  * Encompasses a diverse range from a **few seconds to 1.5 hours**, significantly expanding the temporal scope.
  * Evaluates models' ability to capture **short-term dynamics** and model **long-term dependencies**.
* **Advanced Task Complexity:**
  * Builds on traditional perceptual and reasoning tasks with the introduction of two new modules:
    * **Counterfactual Reasoning:** assesses vision-oriented understanding through tasks that defy common sense (e.g., implausible causal relationships).
    * **Trajectory State Tracking:** evaluates the ability to track and predict the states and trajectories of targets in complex dynamic scenes.
* **Diversified Real-World Data:**
  * Incorporates **first-person streaming video data** to better simulate real-world streaming data processing needs.
  * Explores multimodal models' performance in understanding first-person streaming video, which is crucial for AI agents functioning as real-world assistants or autonomous agents.

<p align="center">
  <img src="./asset/sta.jpg" width="100%" height="100%">
</p>

## 📐 Dataset Examples

<p align="center">
  <img src="./asset/Highlights-2.png" width="100%" height="100%">
</p>

## 🔍 Dataset

**License**:

```
H²VU-Benchmark is only used for academic research. Commercial use in any form is prohibited.
The copyright of all videos belongs to the video owners.
If there is any infringement in H²VU-Benchmark, please email us and we will remove it immediately.
Without prior approval, you cannot distribute, publish, copy, disseminate, or modify H²VU-Benchmark in whole or in part.
You must strictly comply with the above restrictions.
```

## 🔮 Evaluation Pipeline

📍 **Prompt**:

The common prompt used in our evaluation follows this format:

```
Select the best answer to the following multiple-choice question based on the video. Respond with only the letter (A, B, C, or D) of the correct option.
[Question]
The best answer is:
```
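
For a concrete sense of how this template could be filled in, here is a minimal sketch of a prompt builder; the `question`/`options` names and the A–D lettering scheme are illustrative assumptions, not part of the official toolkit.

```python
# Minimal, illustrative prompt builder for the multiple-choice format above.
# Field names (question, options) are assumptions; adapt to your own data loader.

PROMPT_TEMPLATE = (
    "Select the best answer to the following multiple-choice question based on the video. "
    "Respond with only the letter (A, B, C, or D) of the correct option.\n"
    "{question}\n"
    "The best answer is:"
)

def build_prompt(question: str, options: list[str]) -> str:
    """Format a question and its candidate options into the common prompt."""
    letters = "ABCD"
    option_lines = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return PROMPT_TEMPLATE.format(question=f"{question}\n{option_lines}")

if __name__ == "__main__":
    print(build_prompt("What is the person doing?",
                       ["Cooking", "Running", "Reading", "Swimming"]))
```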

📍 **Evaluation**:

To extract the answers and calculate the scores, collect the model responses in a JSON file; we provide an example template, [output.json](./evaluation/output.json). Once the model responses are prepared in this format, run the evaluation script [eval_results.py](./evaluation/eval_results.py) to obtain accuracy scores across video durations, video domains, video subcategories, and task types.
The evaluation does not rely on any third-party models, such as ChatGPT.
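
Purely as an illustration of the collection step (the field names below are assumptions; the bundled [output.json](./evaluation/output.json) template defines the actual schema and takes precedence), responses could be gathered like this:

```python
# Illustrative only: collect model responses into a JSON file for scoring.
# Field names here are assumptions; ./evaluation/output.json defines the real schema.
import json

def run_model(video_path: str, prompt: str) -> str:
    """Placeholder for your MLLM inference call."""
    return "A"

evaluation_items = [
    {"video": "videos/example.mp4",
     "prompt": "What is the person doing? A. Cooking B. Running C. Reading D. Swimming",
     "duration": "long", "domain": "daily life",
     "task_type": "trajectory tracking", "answer": "A"},
]

responses = []
for item in evaluation_items:
    record = dict(item)                                             # keep the original metadata
    record["response"] = run_model(item["video"], item["prompt"])   # attach the raw model output
    responses.append(record)

with open("output.json", "w", encoding="utf-8") as f:
    json.dump(responses, f, ensure_ascii=False, indent=2)
```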

```bash
python eval_results.py
```
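
The bundled `eval_results.py` is the authoritative scorer; the snippet below is only a rough sketch of the underlying idea, matching the first standalone option letter in each response and averaging accuracy per task type, and may differ from the script's actual logic.

```python
# Rough sketch of letter extraction and per-category accuracy;
# the provided eval_results.py is authoritative and may differ in detail.
import json
import re
from collections import defaultdict

def extract_choice(response: str) -> str | None:
    """Return the first standalone option letter (A-D) found in the response."""
    match = re.search(r"\b([ABCD])\b", response.strip())
    return match.group(1) if match else None

with open("output.json", encoding="utf-8") as f:
    records = json.load(f)

correct, total = defaultdict(int), defaultdict(int)
for rec in records:
    key = rec.get("task_type", "all")
    total[key] += 1
    if extract_choice(rec["response"]) == rec["answer"]:
        correct[key] += 1

for key in sorted(total):
    print(f"{key}: {correct[key] / total[key]:.3f}")
```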

## :black_nib: Citation

If you find our work helpful for your research, please consider citing it.

```bibtex
@article{2025h2vu,
  title={H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic Video Understanding},
  author={Wu, Qi and Zheng, Quanlong and Zhang, Yanhao and Xie, Junlin and Luo, Jinguo and Wang, Kuo and Liu, Peng and Xie, Qingsong and Zhen, Ru and Lu, Haonan and others},
  journal={arXiv preprint arXiv:2503.24008},
  year={2025}
}
```

## 📜 Related Works

Explore our related research:

- **[MME]** [MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models](https://arxiv.org/pdf/2306.13394)
- **[MME-Survey]** [MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs](https://arxiv.org/pdf/2411.15296)