ballemann huaichang committed on
Commit ed15e42
0 Parent(s):

Duplicate from huaichang/PersonaLive


Co-authored-by: Zhiyuan Li <huaichang@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,46 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ demo/driving_video.mp4 filter=lfs diff=lfs merge=lfs -text
+ demo/ref_image.png filter=lfs diff=lfs merge=lfs -text
+ pretrained_weights/onnx/unet_opt/unet_opt.onnx.data filter=lfs diff=lfs merge=lfs -text
+ pretrained_weights/tensorrt/unet_work(H100).engine filter=lfs diff=lfs merge=lfs -text
+ results/20251209--personalive_offline/concat_vid/ref_image_driving_video.mp4 filter=lfs diff=lfs merge=lfs -text
+ results/20251209--personalive_offline/split_vid/ref_image_driving_video.mp4 filter=lfs diff=lfs merge=lfs -text
+ assets/demo_1.gif filter=lfs diff=lfs merge=lfs -text
+ assets/demo_2.gif filter=lfs diff=lfs merge=lfs -text
+ assets/demo_3.gif filter=lfs diff=lfs merge=lfs -text
+ assets/overview.png filter=lfs diff=lfs merge=lfs -text
+ assets/guide.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,249 @@
+ ---
+ license: apache-2.0
+ tags:
+ - portrait-animation
+ - real-time
+ - diffusion
+ pipeline_tag: image-to-video
+ library_name: diffusers
+ ---
+
+ <div align="center">
+
+ <h1 align="center" style="font-weight: 900; font-size: 80px; color: #FF6B6B; margin-bottom: 20px;">
+ PersonaLive!
+ </h1>
+
+ <h2>Expressive Portrait Image Animation for Live Streaming</h2>
+
+ <a href='https://arxiv.org/abs/2512.11253'><img src='https://img.shields.io/badge/ArXiv-2512.11253-red'></a> <a href='https://huggingface.co/huaichang/PersonaLive'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-ffc107'></a> <a href='https://modelscope.cn/models/huaichang/PersonaLive'><img src='https://img.shields.io/badge/ModelScope-Model-624AFF'></a> [![GitHub](https://img.shields.io/github/stars/GVCLab/PersonaLive?style=social)](https://github.com/GVCLab/PersonaLive)
+
+ [Zhiyuan Li<sup>1,2,3</sup>](https://huai-chang.github.io/) · [Chi-Man Pun<sup>1,📪</sup>](https://cmpun.github.io/) · [Chen Fang<sup>2</sup>](http://fangchen.org/) · [Jue Wang<sup>2</sup>](https://scholar.google.com/citations?user=Bt4uDWMAAAAJ&hl=en) · [Xiaodong Cun<sup>3,📪</sup>](https://vinthony.github.io/academic/)
+
+ <sup>1</sup> University of Macau &nbsp;&nbsp; <sup>2</sup> [Dzine.ai](https://www.dzine.ai/) &nbsp;&nbsp; <sup>3</sup> [GVC Lab, Great Bay University](https://gvclab.github.io/)
+
+ <h3 align="center" style="color: #ff4d4d; font-weight: 900; margin-top: 0;">
+ ⚡️ Real-time, Streamable, Infinite-Length ⚡️ <br>
+ ⚡️ Portrait Animation requires only ~12GB VRAM ⚡️
+ </h3>
+
+ <table width="100%" align="center" style="border: none;">
+ <tr>
+ <td width="46.5%" align="center" style="border: none;">
+ <img src="assets/demo_3.gif" style="width: 100%;">
+ </td>
+ <td width="41%" align="center" style="border: none;">
+ <img src="assets/demo_2.gif" style="width: 100%;">
+ </td>
+ </tr>
+ </table>
+
+ </div>
+
+ ## 📋 TODO
+ - [ ] If you find PersonaLive useful or interesting, please give us a Star 🌟 on our [GitHub repo](https://github.com/GVCLab/PersonaLive)! Your support drives us to keep improving. 🍻
+ - [ ] Fix bugs (if you encounter any issues, please feel free to open an issue or contact me! 🙏)
+ - [ ] Enhance WebUI (support reference image replacement)
+ - [x] **[2025.12.22]** 🔥 Supported the streaming strategy in offline inference to generate long videos on 12GB VRAM!
+ - [x] **[2025.12.17]** 🔥 [ComfyUI-PersonaLive](https://github.com/okdalto/ComfyUI-PersonaLive) is now supported! (Thanks to [@okdalto](https://github.com/okdalto))
+ - [x] **[2025.12.15]** 🔥 Released the `paper`!
+ - [x] **[2025.12.12]** 🔥 Released the `inference code`, `config`, and `pretrained weights`!
+
51
+
52
+ ## ⚙️ Framework
53
+ <img src="assets/overview.png" alt="Image 1" width="100%">
54
+
55
+
56
+ We present PersonaLive, a `real-time` and `streamable` diffusion framework capable of generating `infinite-length` portrait animations on a single `12GB GPU`.
57
+
58
+
59
+ ## 🚀 Getting Started
60
+ ### 🛠 Installation
61
+ ```
62
+ # clone this repo
63
+ git clone https://github.com/GVCLab/PersonaLive
64
+ cd PersonaLive
65
+
66
+ # Create conda environment
67
+ conda create -n personalive python=3.10
68
+ conda activate personalive
69
+
70
+ # Install packages with pip
71
+ pip install -r requirements_base.txt
72
+ ```
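Since the project targets a single ~12GB GPU, it can be worth confirming your device qualifies before installing the heavier dependencies. A minimal sketch (the helper name is ours, not part of the repo; the commented-out lines assume PyTorch with CUDA is installed):

```python
def meets_vram_requirement(total_memory_bytes: int, required_gb: float = 12.0) -> bool:
    """Return True if a GPU with this much total memory meets the ~12GB guideline."""
    return total_memory_bytes >= required_gb * 1024**3

# Optional usage with PyTorch (uncomment if torch + CUDA are available):
# import torch
# props = torch.cuda.get_device_properties(0)
# print(meets_vram_requirement(props.total_memory))
```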
+
+ ### ⏬ Download weights
+ Option 1: Download the pre-trained weights of the base models and other components ([sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers) and [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)) automatically:
+
+ ```bash
+ python tools/download_weights.py
+ ```
+
+ Option 2: Download the pre-trained weights into the `./pretrained_weights` folder from one of the URLs below:
+
+ <a href='https://drive.google.com/drive/folders/1GOhDBKIeowkMpBnKhGB8jgEhJt_--vbT?usp=drive_link'><img src='https://img.shields.io/badge/Google%20Drive-5B8DEF?style=for-the-badge&logo=googledrive&logoColor=white'></a> <a href='https://pan.baidu.com/s/1DCv4NvUy_z7Gj2xCGqRMkQ?pwd=gj64'><img src='https://img.shields.io/badge/Baidu%20Netdisk-3E4A89?style=for-the-badge&logo=baidu&logoColor=white'></a> <a href='https://modelscope.cn/models/huaichang/PersonaLive'><img src='https://img.shields.io/badge/ModelScope-624AFF?style=for-the-badge&logo=alibabacloud&logoColor=white'></a> <a href='https://huggingface.co/huaichang/PersonaLive'><img src='https://img.shields.io/badge/HuggingFace-E67E22?style=for-the-badge&logo=huggingface&logoColor=white'></a>
+
+ Finally, the weights should be organized as follows:
+ ```
+ pretrained_weights
+ ├── onnx
+ │   ├── unet_opt
+ │   │   ├── unet_opt.onnx
+ │   │   └── unet_opt.onnx.data
+ │   └── unet
+ ├── personalive
+ │   ├── denoising_unet.pth
+ │   ├── motion_encoder.pth
+ │   ├── motion_extractor.pth
+ │   ├── pose_guider.pth
+ │   ├── reference_unet.pth
+ │   └── temporal_module.pth
+ ├── sd-vae-ft-mse
+ │   ├── diffusion_pytorch_model.bin
+ │   └── config.json
+ ├── sd-image-variations-diffusers
+ │   ├── image_encoder
+ │   │   ├── pytorch_model.bin
+ │   │   └── config.json
+ │   ├── unet
+ │   │   ├── diffusion_pytorch_model.bin
+ │   │   └── config.json
+ │   └── model_index.json
+ └── tensorrt
+     └── unet_work.engine
+ ```
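As a sanity check after downloading, a short script like the following (a hypothetical convenience helper, not shipped with the repo) can confirm the core checkpoints from the layout above are present before you start inference:

```python
from pathlib import Path

# Core checkpoints from the layout above; extend with the onnx/tensorrt
# artifacts if you plan to use acceleration.
EXPECTED_WEIGHTS = [
    "personalive/denoising_unet.pth",
    "personalive/motion_encoder.pth",
    "personalive/motion_extractor.pth",
    "personalive/pose_guider.pth",
    "personalive/reference_unet.pth",
    "personalive/temporal_module.pth",
    "sd-vae-ft-mse/diffusion_pytorch_model.bin",
    "sd-image-variations-diffusers/image_encoder/pytorch_model.bin",
]

def missing_weights(root: str = "pretrained_weights") -> list:
    """Return the expected weight files that are absent under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED_WEIGHTS if not (base / rel).exists()]
```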
+
+ ### 🎞️ Offline Inference
+ ```bash
+ python inference_offline.py
+ ```
+ ⚠️ Note for RTX 50-Series (Blackwell) users: xformers is not yet fully compatible with the new architecture. To avoid crashes, disable it by running:
+ ```bash
+ python inference_offline.py --use_xformers False
+ ```
+
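If you script this step, the Blackwell check can be automated. A sketch (our own helper, under the assumption that Blackwell GPUs report a CUDA compute capability major version of 10 or above; the commented-out lines assume PyTorch with CUDA):

```python
def should_disable_xformers(compute_capability) -> bool:
    """Heuristic: Blackwell GPUs (compute capability >= 10.x) should run without xformers."""
    major, _minor = compute_capability
    return major >= 10

# Optional usage with PyTorch (uncomment if torch + CUDA are available):
# import torch
# if should_disable_xformers(torch.cuda.get_device_capability(0)):
#     print("Run: python inference_offline.py --use_xformers False")
```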
+ ### 📸 Online Inference
+ #### 📦 Setup Web UI
+ ```bash
+ # Install Node.js 18+
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+ nvm install 18
+
+ cd webcam
+ source start.sh
+ ```
+
+ #### 🏎️ Acceleration (Optional)
+ Converting the model to TensorRT can significantly speed up inference (~2x ⚡️). Building the engine may take about `20 minutes`, depending on your device. Note that TensorRT optimizations may introduce slight variations or a small drop in output quality.
+ ```bash
+ pip install -r requirements_trt.txt
+
+ python torch2trt.py
+ ```
+ *The provided TensorRT engine was built on an `H100`. We recommend that `ALL users` (including H100 users) re-run `python torch2trt.py` locally to ensure the best compatibility.*
+
+ #### ▶️ Start Streaming
+ ```bash
+ python inference_online.py --acceleration [none|xformers|tensorrt]
+ ```
+ Use `none` for RTX 50-Series GPUs, `xformers` on other GPUs, or `tensorrt` after building the engine above. Then open `http://0.0.0.0:7860` in your browser. (*If `http://0.0.0.0:7860` does not work, try `http://localhost:7860`.*)
+
+ **How to use**: Upload Image ➡️ Fuse Reference ➡️ Start Animation ➡️ Enjoy! 🎉
+ <div align="center">
+ <img src="assets/guide.png" alt="PersonaLive" width="60%">
+ </div>
+
+ **Regarding latency**: Latency varies with your device's computing power. You can try the following to optimize it:
+
+ 1. Lower the "Driving FPS" setting in the WebUI to reduce the computational workload.
+ 2. Increase the buffer multiplier in https://github.com/GVCLab/PersonaLive/blob/6953d1a8b409f360a3ee1d7325093622b29f1e22/webcam/util.py#L73 (e.g., set it to `num_frames_needed * 4` or higher) to better match your device's inference speed.
+
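To reason about tip 2, note that buffering adds a fixed delay: holding `num_frames_needed * multiplier` driving frames at a given driving FPS costs `frames / fps` seconds of extra latency. A toy calculation (the helper below is illustrative only, not repo code):

```python
def buffer_latency_ms(num_frames_needed: int, multiplier: int, driving_fps: float) -> float:
    """Extra end-to-end latency (ms) introduced by buffering this many driving frames."""
    buffered_frames = num_frames_needed * multiplier
    return buffered_frames / driving_fps * 1000.0

# e.g. buffering 4 * 4 = 16 frames at 16 driving FPS adds about one second of latency
```

A larger multiplier smooths playback when inference is slower than the driving stream, at the cost of this delay.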
+ ## 📚 Community Contribution
+
+ Special thanks to the community for providing helpful setups! 🥂
+
+ * **Windows + RTX 50-Series Guide**: Thanks to [@dknos](https://github.com/dknos) for providing a [detailed guide](https://github.com/GVCLab/PersonaLive/issues/10#issuecomment-3662785532) on running this project on Windows with Blackwell GPUs.
+
+ * **TensorRT on Windows**: If you are trying to convert TensorRT models on Windows, [this discussion](https://github.com/GVCLab/PersonaLive/issues/8) might be helpful. Special thanks to [@MaraScott](https://github.com/MaraScott) and [@Jeremy8776](https://github.com/Jeremy8776) for their insights.
+
+ * **ComfyUI**: Thanks to [@okdalto](https://github.com/okdalto) for helping implement the [ComfyUI-PersonaLive](https://github.com/okdalto/ComfyUI-PersonaLive) support.
+
+ * **Useful Scripts**: Thanks to [@suruoxi](https://github.com/suruoxi) for implementing `download_weights.py`, and to [@andchir](https://github.com/andchir) for adding audio merging functionality.
+
+ ## 🎬 More Results
+ #### 👀 Visualization results
+
+ <table width="100%">
+ <tr>
+ <td width="50%">
+ <video src="https://github.com/user-attachments/assets/cdc885ef-5e1c-4139-987a-2fa50fefd6a4" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="50%">
+ <video src="https://github.com/user-attachments/assets/014f7bae-74ce-4f56-8621-24bc76f3c123" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ </table>
+ <table width="100%">
+ <tr>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/1e6a0809-15d2-4cab-ae8f-8cf1728c6281" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/d9cf265d-9db0-4f83-81da-be967bbd5f26" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/86235139-b63e-4f26-b09c-d218466e8e24" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/238785de-3b4c-484e-9ad0-9d90e7962fee" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ <tr>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/c71c4717-d528-4a98-b132-2b0ec8cec22d" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/7e11fe71-fd16-4011-a6b2-2dbaf7e343fb" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/f62e2162-d239-4575-9514-34575c16301c" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ <td width="25%">
+ <video src="https://github.com/user-attachments/assets/813e7fbd-37e9-47d7-a270-59887fafeca5" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ </table>
+
+ #### 🤺 Comparisons
+
+ <table width="100%">
+ <tr>
+ <td width="100%">
+ <video src="https://github.com/user-attachments/assets/36407cf9-bf82-43ff-9508-a794d223d3f7" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ <tr>
+ <td width="100%">
+ <video src="https://github.com/user-attachments/assets/3be99b91-c6a1-4ca4-89e9-8fad42bb9583" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ <tr>
+ <td width="100%">
+ <video src="https://github.com/user-attachments/assets/5bd21fe4-96ae-4be6-bf06-a7c476b04ec9" controls="controls" style="max-width: 100%; display: block;"></video>
+ </td>
+ </tr>
+ </table>
+
+
+ ## ⭐ Citation
+ If you find PersonaLive useful for your research, please cite our work using the following BibTeX:
+ ```bibtex
+ @article{li2025personalive,
+   title={PersonaLive! Expressive Portrait Image Animation for Live Streaming},
+   author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong},
+   journal={arXiv preprint arXiv:2512.11253},
+   year={2025}
+ }
+ ```
+
+ ## ❤️ Acknowledgement
+ This code is mainly built upon [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone), [X-NeMo](https://byteaigc.github.io/X-Portrait2/), [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion), [RAIN](https://pscgylotti.github.io/pages/RAIN/) and [LivePortrait](https://github.com/KlingTeam/LivePortrait); thanks for their invaluable contributions.
assets/demo_1.gif ADDED

Git LFS Details

  • SHA256: 0494bf0c7e14df986d93b00b57c30221fafcdb8a13d7702f922ade8adc2b5ad0
  • Pointer size: 133 Bytes
  • Size of remote file: 13.8 MB
assets/demo_2.gif ADDED

Git LFS Details

  • SHA256: 18361ab35cabf494704b2ca56d8d5a5c217254f1896ede5c4ecd8d6d73f32aef
  • Pointer size: 133 Bytes
  • Size of remote file: 10.5 MB
assets/demo_3.gif ADDED

Git LFS Details

  • SHA256: a70806e32dde3b6979c69d7c5cc4db687f1f7673a16351be4d221983e3940249
  • Pointer size: 133 Bytes
  • Size of remote file: 14.8 MB
assets/guide.png ADDED

Git LFS Details

  • SHA256: d2e6c017287d62ac220ec85a69fa05d75dd118db042d4e85f9c306807132c254
  • Pointer size: 131 Bytes
  • Size of remote file: 182 kB
assets/header.svg ADDED
assets/highlight.svg ADDED
assets/overview.png ADDED

Git LFS Details

  • SHA256: 03439f3547913f335be5807fc6c341635f447ff9bf6c675fb6c5fa695a8ad820
  • Pointer size: 132 Bytes
  • Size of remote file: 1.07 MB
pretrained_weights/.DS_Store ADDED
Binary file (8.2 kB)
 
pretrained_weights/onnx/.DS_Store ADDED
Binary file (6.15 kB)
 
pretrained_weights/onnx/unet_opt/unet_opt.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:484aee7e8c45cddaac227b6ad331a88a77121dee0886f2152cc4bd0e9974b6fa
+ size 96224343
pretrained_weights/onnx/unet_opt/unet_opt.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa08ee8770f202be841e00f2bb94809c2ca6ca95ad8663c2917c4c6fa35d963e
+ size 3593537864
pretrained_weights/personalive/denoising_unet.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0446c4d2387f259d5f3c1ac54a5aefa93400f4672f942856bff2538df046162
+ size 4927015578
pretrained_weights/personalive/motion_encoder.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff7c6b0a84cd750046e7687f7a6f6bbc21317055bfcacef950ed347debae4d2c
+ size 246719031
pretrained_weights/personalive/motion_extractor.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:251e6a94ad667a1d0c69526d292677165110ef7f0cf0f6d199f0e414e8aa0ca5
+ size 112545506
pretrained_weights/personalive/pose_guider.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b997db63343a6a5d489778172d9544bcccaf27e6756505dc6353d84e877269d
+ size 4351790
pretrained_weights/personalive/reference_unet.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85eb03e6c34fab69f9246ff14b3016789232e56dc4892d0581fea21a3a8480f6
+ size 3438324340
pretrained_weights/personalive/temporal_module.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:295e8942a453adb48756432d99de103ecba9b840b5b8f6635a0687311cdff30e
+ size 1817903018
pretrained_weights/tensorrt/.DS_Store ADDED
Binary file (6.15 kB)
pretrained_weights/tensorrt/unet_work(H100).engine ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34bd6f7693300be8cf72a099f1160bfaedab7a677bcaf66f18ee33a5b871de50
+ size 3697605036