chunyu-li committed
Commit d3d184d · 1 Parent(s): 4b44ee2

Add sections for Introduction and Framework to the README.md

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +49 -0
  3. assets/framework.png +3 -0
.gitattributes CHANGED
@@ -32,4 +32,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.xz filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -8,3 +8,52 @@ base_model:
  tags:
  - joint-audio-video-generation
  ---
+
+ <h1 align="center">Hallo-Live</h1>
+ <!-- <h1 align="center">Hallo-Live: Real-Time Streaming Joint Audio-Video Avatar</h1> -->
+
+ <div align='center'>
+ <a href="https://github.com/chunyu-li" target="_blank">Chunyu Li</a><sup>1,2,*</sup> &emsp;
+ <a href="https://github.com/fudan-generative-vision/Hallo-Live" target="_blank">Jiaye Li</a><sup>2,*</sup> &emsp;
+ <a href="https://github.com/fudan-generative-vision/Hallo-Live" target="_blank">Ruiqiao Mei</a><sup>2</sup> &emsp;
+ <a href="https://github.com/fudan-generative-vision/Hallo-Live" target="_blank">Haoyuan Xia</a><sup>1,3</sup>
+ </div>
+ <div align='center'>
+ <a href="http://zhuhao.cc/home/" target="_blank">Hao Zhu</a><sup>4</sup> &emsp;
+ <a href="https://jingdongwang2017.github.io/" target="_blank">Jingdong Wang</a><sup>5</sup> &emsp;
+ <a href="https://sites.google.com/site/zhusiyucs/home" target="_blank">Siyu Zhu</a><sup>1,2,&dagger;</sup>
+ </div>
+
+ <br>
+
+ <div align='center'>
+ <sup>1</sup>Shanghai Innovation Institute &emsp;
+ <sup>2</sup>Fudan University
+ </div>
+ <div align='center'>
+ <sup>3</sup>University of Science and Technology of China &emsp;
+ <sup>4</sup>Nanjing University &emsp;
+ <sup>5</sup>Baidu
+ </div>
+
+ <br>
+
+ <div align="center">
+
+ [![Paper](https://img.shields.io/badge/arXiv-2604.23632-b31b1b.svg)](https://arxiv.org/abs/2604.23632)
+ [![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow)](https://huggingface.co/fudan-generative-ai/Hallo-Live)
+ [![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
+
+ </div>
+
+ ## 📖 Introduction
+
+ We present *Hallo-Live*, a real-time, text-driven framework for joint audio-video avatar generation. The method adopts a causal dual-stream DiT model to generate synchronized avatar video and speech in a streaming manner. *Hallo-Live* reaches **20.38 FPS** with **0.94 s latency** on two NVIDIA H200 GPUs, while preserving strong lip-sync accuracy, visual fidelity, and speech quality.
+
+ ## 🏗️ Framework
+
+ <p align="center">
+ <img src="assets/framework.png" width="100%">
+ </p>
+
+ The framework of *Hallo-Live*. **Top left**: Stage I training adapts a pretrained dual-stream DiT to the streaming setting using a cross-modal, future-expanding block-causal mask. **Bottom left**: Stage II training performs autoregressive self-rollout with the audio-video KV cache and optimizes the generated trajectory with reward-weighted dual-stream DMD. **Right**: Each causal fusion block in the dual-stream DiT applies cross-modal attention between the video and audio streams; the block-causal masks are used during Stage I ODE initialization, and a KV cache is maintained for Stage II self-rollout and streaming inference.
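
As a rough illustration of the cross-modal block-causal masking described in the caption, the sketch below groups video and audio tokens into temporal blocks and lets every token attend to both modalities of all blocks up to a small look-ahead window. This is a minimal sketch under assumptions, not code from the Hallo-Live repository: the function name, the per-block token counts, and the single look-ahead block standing in for the "future-expanding" behavior are all illustrative.

```python
import torch


def cross_modal_block_causal_mask(num_blocks: int,
                                  video_tokens_per_block: int,
                                  audio_tokens_per_block: int,
                                  future_blocks: int = 1) -> torch.Tensor:
    """Boolean attention mask (True = attend) over interleaved temporal blocks.

    Each block holds `video_tokens_per_block` video tokens followed by
    `audio_tokens_per_block` audio tokens, so a token attends to both
    modalities of every block up to `future_blocks` ahead of its own block.
    """
    tokens_per_block = video_tokens_per_block + audio_tokens_per_block
    total_tokens = num_blocks * tokens_per_block
    # Temporal block index of every token position in the flattened sequence.
    block_id = torch.arange(total_tokens) // tokens_per_block
    # Query block q may attend to key block k iff k <= q + future_blocks.
    return block_id[:, None] + future_blocks >= block_id[None, :]


if __name__ == "__main__":
    # 4 temporal blocks, 6 video + 2 audio tokens per block, one look-ahead block.
    mask = cross_modal_block_causal_mask(4, 6, 2, future_blocks=1)
    print(mask.shape)  # torch.Size([32, 32])
    # Tokens of block 0 see blocks 0 and 1, but not blocks 2 and 3.
    print(mask[0, :16].all().item(), mask[0, 16:].any().item())  # True False
```

A boolean mask of this shape can be passed as `attn_mask` to `torch.nn.functional.scaled_dot_product_attention`; for Stage II self-rollout and streaming inference, the same block layout would let a KV cache retain only the keys and values of blocks that have already been generated.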
assets/framework.png ADDED

Git LFS Details

  • SHA256: 4fe3d4207f30c41b6fe8dbec6fb556d1d5a58d2e457bc844906af710c1201c7e
  • Pointer size: 132 Bytes
  • Size of remote file: 1.3 MB