siriussa committed
Commit dd0541e · verified · 1 Parent(s): f6ba98a

Update README.md

Files changed (1)
  1. README.md +0 -88
README.md CHANGED
@@ -1,25 +1,5 @@
 # H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic Video Understanding
 
- ![VideoQA](https://img.shields.io/badge/Task-VideoQA-red)
- ![Multi-Modal](https://img.shields.io/badge/Task-Multi--Modal-red)
- ![H2VU](https://img.shields.io/badge/Dataset-H2VU-blue)
- ![Gemini](https://img.shields.io/badge/Model-Gemini-green)
- ![GPT-4V](https://img.shields.io/badge/Model-GPT--4V-green)
- ![GPT-4o](https://img.shields.io/badge/Model-GPT--4o-green)
-
-
- <font size=7><div align='center' > [[🍎 Project Page](/)] [[📖 arXiv Paper]()] [[📊 Dataset]()] </div></font>
-
- H2VU-Benchmark applies to **video MLLMs**. 🌟
-
- ---
-
- ## 🔥 Updates
-
-
- * **`2024.04.30`** 🌟 We have open-sourced our dataset on Hugging Face.
- * **`2025.04.01`** 🌟 We are proud to launch H2VU-Benchmark, the first comprehensive evaluation benchmark for Multimodal Large Language Models (MLLMs) on offline general video and online streaming video analysis!
-
 
 
 ## 👀 H²VU-Benchmark Overview
@@ -49,72 +29,4 @@ Our work distinguishes itself through three key features:
 * Incorporates **first-person streaming video data** to better simulate real-world streaming data processing needs.
 * Explores multimodal models' performance in understanding first-person streaming video, crucial for AI agents functioning as real-world assistants or autonomous agents.
 
- <p align="center">
- <img src="./asset/sta.jpg" width="100%" height="100%">
- </p>
-
- ## 📐 Dataset Examples
-
- <p align="center">
- <img src="./asset/Highlights-2.png" width="100%" height="100%">
- </p>
-
-
- ## 🔍 Dataset
-
- **License**:
- ```
- H²VU-Benchmark is to be used for academic research only. Commercial use in any form is prohibited.
- The copyright of all videos belongs to the video owners.
- If there is any infringement in H²VU-Benchmark, please email us and we will remove it immediately.
- Without prior approval, you may not distribute, publish, copy, disseminate, or modify H²VU-Benchmark in whole or in part.
- You must strictly comply with the above restrictions.
- ```
-
-
-
- ## 🔮 Evaluation Pipeline
-
-
- 📍 **Prompt**:
-
- The common prompt used in our evaluation follows this format:
-
- ```
- Select the best answer to the following multiple-choice question based on the video. Respond with only the letter (A, B, C, or D) of the correct option.
- [Question]
- The best answer is:
- ```
-
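For illustration, a minimal sketch of how the `[Question]` placeholder could be filled in; the option formatting (one option per line with `A.`-style letter prefixes) and the helper name `build_prompt` are assumptions, not details taken from the benchmark files:

```python
# Illustrative sketch only: assembling the evaluation prompt for one item.
# The per-line option formatting is an assumption, not the benchmark's own layout.
PROMPT_TEMPLATE = (
    "Select the best answer to the following multiple-choice question based on the video. "
    "Respond with only the letter (A, B, C, or D) of the correct option.\n"
    "{question}\n"
    "The best answer is:"
)

def build_prompt(question: str, options: list[str]) -> str:
    # Prefix each option with its letter, e.g. "A. The man opens the door".
    lettered = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", options))
    return PROMPT_TEMPLATE.format(question=f"{question}\n{lettered}")
```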
- 📍 **Evaluation**:
-
- To extract the answers and calculate the scores, add the model responses to a JSON file; an example template is provided in [output.json](./evaluation/output.json). Once the model responses are prepared in this format, run the evaluation script [eval_results.py](./evaluation/eval_results.py) to obtain accuracy scores across video durations, video domains, video subcategories, and task types.
- The evaluation does not rely on any third-party models, such as ChatGPT.
-
- ```bash
- python eval_results.py
- ```
-
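For illustration, a minimal sketch of how such a response file could be scored, assuming each record carries a `task_type`, the model's raw `response`, and the ground-truth `answer` letter; these field names are assumptions rather than the actual schema of output.json:

```python
# Illustrative sketch only: scoring multiple-choice responses grouped by task type.
# The field names (task_type, response, answer) are assumed, not the real schema.
import json
import re
from collections import defaultdict

def extract_choice(response: str):
    """Return the first standalone option letter (A-D) found in a model response."""
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else None

def accuracy_by_task(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)

    correct, total = defaultdict(int), defaultdict(int)
    for rec in records:
        task = rec["task_type"]
        total[task] += 1
        if extract_choice(rec["response"]) == rec["answer"]:
            correct[task] += 1

    return {task: correct[task] / total[task] for task in total}

if __name__ == "__main__":
    print(accuracy_by_task("./evaluation/output.json"))
```

eval_results.py additionally breaks results down by video duration, domain, and subcategory; the sketch only illustrates the per-category accuracy computation.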
-
- ## :black_nib: Citation
-
- If you find our work helpful for your research, please consider citing it.
-
- ```bibtex
- @article{2025h2vu,
-   title={H2VU-Benchmark: A Comprehensive Benchmark for Hierarchical Holistic Video Understanding},
-   author={Wu, Qi and Zheng, Quanlong and Zhang, Yanhao and Xie, Junlin and Luo, Jinguo and Wang, Kuo and Liu, Peng and Xie, Qingsong and Zhen, Ru and Lu, Haonan and others},
-   journal={arXiv preprint arXiv:2503.24008},
-   year={2025}
- }
- ```
-
- ## 📜 Related Works
-
- Explore our related research:
-
- - **[MME]** [MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models](https://arxiv.org/pdf/2306.13394)
-
- - **[MME-Survey]** [MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs](https://arxiv.org/pdf/2411.15296)
-
 