Add pipeline tag for video-to-music generation

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +3 -5
README.md CHANGED
```diff
@@ -1,11 +1,10 @@
 ---
 license: mit
+pipeline_tag: other
 ---
 
 # Diff-V2M: A Hierarchical Conditional Diffusion Model with Explicit Rhythmic Modeling for Video-to-Music Generation
 
-<!-- Provide a quick summary of what the model is/does. -->
-
 Here is the training checkpoints of **[Diff-V2M (AAAI'26)](https://arxiv.org/abs/2511.09090)**
 
 ## Overview
@@ -15,8 +14,6 @@ Diff-V2M is a hierarchical diffusion model with explicit rhythmic modeling and m
 
 ## Model Sources
 
-<!-- Provide the basic links for the model. -->
-
 - **Repository:** https://github.com/Tayjsl97/Diff-V2M
 - **Demo:** [demo page](https://tayjsl97.github.io/Diff-V2M-Demo)
 
@@ -28,4 +25,5 @@ If you use our models in your research, please cite it as follows:
 author={Ji, Shulei and Wang, Zihao and Yu, Jiaxing and Yang, Xiangyuan and Li, Shuyu and Wu, Songruoyao and Zhang, Kejun},
 booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
 year={2026}
-}
+}
+```
```