Improve model card and add metadata
#1
by nielsr (HF Staff) - opened
README.md CHANGED

---
license: apache-2.0
pipeline_tag: image-to-video
tags:
- video-generation
- video-relighting
- diffusion
---

# Relit-LiVE: Relight Video by Jointly Learning Environment Video

Relit-LiVE is a novel video relighting framework that produces physically consistent, temporally stable results without requiring prior knowledge of camera pose. It explicitly introduces raw reference images into the rendering process, enabling the model to recover critical scene cues. The framework simultaneously generates relit videos and per-frame environment maps aligned with each camera viewpoint in a single diffusion process.

## Links

- **Paper:** [Relit-LiVE: Relight Video by Jointly Learning Environment Video](https://arxiv.org/abs/2605.06658)
- **Code:** [GitHub Repository](https://github.com/zhuxing0/Relit-LiVE)
- **Project Page:** [Relit-LiVE Project](https://zhuxing0.github.io/projects/Relit-LiVE/)

## Installation

To set up the environment, follow these steps:
```bash
conda create -n diffsynth python=3.10
conda activate diffsynth

# Clone Relit-LiVE and install it in editable mode
git clone https://github.com/zhuxing0/Relit-LiVE.git
cd Relit-LiVE
pip install -e .

# Additional dependencies
pip install lightning pandas websockets pyexr natsort gradio
pip install -U deepspeed
pip install transformers==4.50.0
```
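
As a quick, optional sanity check (not part of the upstream instructions), you can confirm that the environment imports PyTorch and sees a GPU before running inference; this assumes `pip install -e .` pulled in a CUDA-enabled PyTorch build:

```bash
# Optional: verify PyTorch is importable and CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```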

## Usage

You can use the provided `relit_inference.py` script to perform video relighting. Below is an example of basic 25-frame relighting:

```bash
python relit_inference.py \
  --dataset_path datasets/demos \
  --ckpt_path checkpoints/model_frame25_480_832.ckpt \
  --output_dir inference_output \
  --cfg_scale 1.0 \
  --height 480 \
  --width 832 \
  --num_frames 25 \
  --padding_resolution \
  --use_ref_image \
  --env_map_path datasets/envs/Pink_Sunrise \
  --frame_interval 1 \
  --num_inference_steps 50 \
  --quality 10
```
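
A 57-frame checkpoint is also listed in the table below; a longer run should only need the corresponding checkpoint and frame count swapped in. The command below mirrors the 25-frame call and is an assumption rather than an invocation documented by the authors:

```bash
# Assumed 57-frame invocation (mirrors the 25-frame example; not from the upstream docs)
python relit_inference.py \
  --dataset_path datasets/demos \
  --ckpt_path checkpoints/model_frame57_480_832.ckpt \
  --output_dir inference_output \
  --cfg_scale 1.0 \
  --height 480 \
  --width 832 \
  --num_frames 57 \
  --padding_resolution \
  --use_ref_image \
  --env_map_path datasets/envs/Pink_Sunrise \
  --frame_interval 1 \
  --num_inference_steps 50 \
  --quality 10
```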

For high-resolution single-frame relighting:

```bash
python relit_inference.py \
  --dataset_path datasets/demos \
  --ckpt_path checkpoints/model_frame1_1024_1472.ckpt \
  --output_dir inference_output \
  --cfg_scale 1.0 \
  --height 1024 \
  --width 1472 \
  --num_frames 1 \
  --padding_resolution \
  --use_ref_image \
  --env_map_path datasets/envs/Pink_Sunrise \
  --frame_interval 1 \
  --num_inference_steps 50 \
  --quality 10
```
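
To relight the same inputs under several lighting conditions, a plain shell loop over the environment maps is enough. The sketch below is illustrative and assumes that every subdirectory of `datasets/envs` is a valid `--env_map_path` and that `--output_dir` may point to a per-environment subfolder:

```bash
# Illustrative sweep over all environment maps in datasets/envs (assumptions noted above)
for env in datasets/envs/*/; do
  name=$(basename "$env")
  python relit_inference.py \
    --dataset_path datasets/demos \
    --ckpt_path checkpoints/model_frame25_480_832.ckpt \
    --output_dir "inference_output/${name}" \
    --cfg_scale 1.0 \
    --height 480 --width 832 --num_frames 25 \
    --padding_resolution --use_ref_image \
    --env_map_path "${env%/}" \
    --frame_interval 1 --num_inference_steps 50 --quality 10
done
```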

## Checkpoints

The following checkpoints are available in this repository:

| Checkpoint | Resolution (H × W) | Frames |
| :--- | :---: | :---: |
| `model_frame25_480_832.ckpt` | 480 × 832 | 25 |
| `model_frame57_480_832.ckpt` | 480 × 832 | 57 |
| `model_frame1_1024_1472.ckpt` | 1024 × 1472 | 1 (image) |

Note: Inference also requires the Wan2.1 base model weights to be placed under `models/Wan-AI/Wan2.1-T2V-1.3B/`.
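
If the base weights are not already present locally, they can be fetched with the Hugging Face CLI. The repository id `Wan-AI/Wan2.1-T2V-1.3B` is inferred from the path above; adjust it if your copy of the weights lives elsewhere:

```bash
# Download the Wan2.1-T2V-1.3B base weights into the directory the scripts expect
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir models/Wan-AI/Wan2.1-T2V-1.3B
```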

## Citation

If you find this work helpful, please consider citing the paper:

```bibtex
@article{xiao2026relitlive,
  title={Relit-LiVE: Relight Video by Jointly Learning Environment Video},
  author={Xiao, Weiqing and Li, Hong and Yang, Xiuyu and Chen, Houyuan and Li, Wenyi and Liu, Tianqi and Xu, Shaocong and Ye, Chongjie and Zhao, Hao and Wang, Beibei},
  journal={arXiv preprint arXiv:2605.06658},
  year={2026}
}
```