# LuxDiT

This is the model checkpoint for [LuxDiT](https://github.com/nv-tlabs/LuxDiT): Lighting Estimation with Video Diffusion Transformer. It is finetuned on image data and includes a LoRA adapter for real scenes.

## Model description

LuxDiT is a generative lighting estimation model that predicts high-quality HDR environment maps.

**Use case:** LuxDiT supports research and prototyping in video lighting estimation. This release is an open-source implementation of our research paper, intended for AI research, development, and benchmarking of lighting estimation methods.

### Model architecture

- **Architecture type:** Transformer (based on [CogVideoX](https://github.com/THUDM/CogVideoX))

## Download

From the [LuxDiT repository](https://github.com/nv-tlabs/LuxDiT) root:

```bash
python download_weights.py --repo_id nvidia/LuxDiT
```

This saves the checkpoint to `checkpoints/LuxDiT` by default. Use `--local_dir` to override.
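Before loading the model, it can help to confirm the download actually produced the expected layout. A minimal stdlib-only sketch — the helper itself is hypothetical (not part of the LuxDiT repository); the subfolder names come from the checkpoint table in this card, and the default path matches `download_weights.py`:

```python
from pathlib import Path

# Hypothetical helper (not part of the LuxDiT repository): check that the
# expected checkpoint subfolders exist under the download directory.
# Default root matches the download_weights.py default (checkpoints/LuxDiT).
EXPECTED_SUBFOLDERS = ("luxdit_image", "luxdit_video")

def missing_checkpoints(root: str = "checkpoints/LuxDiT") -> list:
    """Return the expected checkpoint subfolders missing under `root`."""
    base = Path(root)
    return [name for name in EXPECTED_SUBFOLDERS if not (base / name).is_dir()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("missing checkpoints:", missing, "- rerun download_weights.py")
    else:
        print("checkpoint directory looks complete")
```

If you passed `--local_dir` to the download script, pass the same path to the check.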

| Checkpoint | Description |
|-------------|-------------|
| [luxdit_image](https://huggingface.co/nvidia/LuxDiT/tree/main/luxdit_image) | Image-finetuned, with LoRA for real scenes |
| [luxdit_video](https://huggingface.co/nvidia/LuxDiT/tree/main/luxdit_video) | Video-finetuned, with LoRA for real scenes |

For video inputs and object scans, use **luxdit_video** instead.
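The checkpoint choice above is a simple dispatch on input type; a sketch that encodes it (the helper name and input-kind labels are assumptions, not repository API):

```python
# Hypothetical helper encoding the checkpoint choice described above:
# image inputs use luxdit_image; video inputs and object scans use luxdit_video.
def select_checkpoint(input_kind: str) -> str:
    mapping = {
        "image": "luxdit_image",
        "video": "luxdit_video",
        "object_scan": "luxdit_video",
    }
    try:
        return mapping[input_kind]
    except KeyError:
        raise ValueError(f"unsupported input kind: {input_kind!r}") from None

print(select_checkpoint("image"))  # luxdit_image
```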