LIQIIIII committed · Commit d20ada1 · verified · 1 Parent(s): 32da574

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -39,7 +39,7 @@ configs:
 [![Project Page](https://img.shields.io/badge/Project-Page-blue?style=for-the-badge&logo=googlechrome&logoColor=white)](https://liqiiiii.github.io/Video-Metaphorical-Understanding/)
 [![arXiv](https://img.shields.io/badge/arXiv-Paper-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2605.14607)
 [![Hugging Face](https://img.shields.io/badge/HuggingFace-Dataset-yellow?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/datasets/LIQIIIII/ViMU)
-
+[![GitHub](https://img.shields.io/badge/GitHub-Code-black?style=for-the-badge&logo=github&logoColor=white)](https://github.com/LiQiiiii/Video-Metaphorical-Understanding)
 
 [Qi Li](https://liqiiiii.github.io/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)<sup>*</sup>
 
@@ -49,7 +49,7 @@ configs:
 
 </div>
 
-This repository contains the evaluation scripts for ViMU, a benchmark for video metaphorical understanding. The code evaluates multimodal models on four tasks:
+Our GitHub repository contains the evaluation scripts for ViMU, a benchmark for video metaphorical understanding. The code evaluates multimodal models on four tasks:
 
 1. Open-ended interpretation (OE)
 2. Evidence grounding (EG)