---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- diffusion
- computer-vision
- image-editing
- faces
- dataset
pretty_name: Image relighting diffusion (research data)
size_categories:
- 10K<n<100K
task_categories:
- image-to-image
---
# Learning Illumination Control in Diffusion Models — Dataset (HF)
Public **data and evaluation assets** for [*Learning Illumination Control in Diffusion Models*](https://arxiv.org/abs/2604.24877) (ReALM-GEN @ ICLR 2026).
| | |
|--|--|
| **Code** | [github.com/nishitanand/image-relighting-diffusion](https://github.com/nishitanand/image-relighting-diffusion) |
| **Model weights** | [huggingface.co/nishitanand/sd-image-relighting-model](https://huggingface.co/nishitanand/sd-image-relighting-model) |
| **Paper** | [arxiv.org/abs/2604.24877](https://arxiv.org/abs/2604.24877) |
| **Project site** | [nishitanand.github.io/relighting-diffusion-website](https://nishitanand.github.io/relighting-diffusion-website) |
---
## Download (CLI)
Install the [Hugging Face CLI](https://huggingface.co/docs/huggingface_hub/guides/cli) (`pip install -U "huggingface_hub[cli]"`), then:
```bash
huggingface-cli download nishitanand/image-relighting-diffusion-data \
--repo-type dataset \
--local-dir ./image-relighting-diffusion-data
```
You can also browse files on the [dataset page](https://huggingface.co/datasets/nishitanand/image-relighting-diffusion-data) and download subsets manually.
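The same download can be scripted from Python. A minimal sketch using `huggingface_hub.snapshot_download` (the `fetch` wrapper and the local directory name are just illustrative choices):

```python
# Python equivalent of the CLI download above, via huggingface_hub.
# pip install -U huggingface_hub
DOWNLOAD_KWARGS = dict(
    repo_id="nishitanand/image-relighting-diffusion-data",
    repo_type="dataset",
    local_dir="./image-relighting-diffusion-data",
)


def fetch():
    """Download the full dataset snapshot to local_dir."""
    from huggingface_hub import snapshot_download

    return snapshot_download(**DOWNLOAD_KWARGS)
```

Call `fetch()` to mirror the whole repository; `snapshot_download` also accepts `allow_patterns` if you only want a subset of folders.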
---
## Contents
### Training & test tensors / metadata
| Folder | Description |
|--------|-------------|
| `data-train/` | Synthetic degraded inputs + paired metadata for **SD1.5** fine-tuning |
| `data-test/` | Held-out **test** split for quantitative evaluation |
| `data_hf_train/` | (Optional) Pre-sharded `datasets` format for faster dataloader startup |
| `data-val/` | (Optional) Extra split folder if you mirror the paper’s three-way split on disk |
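If you use the pre-sharded `data_hf_train/` folder, it can presumably be opened directly with the `datasets` library. A sketch, assuming the folder was written with `Dataset.save_to_disk` (check the on-disk layout if this fails):

```python
from pathlib import Path

# Wherever you downloaded this dataset repo locally (adjust as needed).
DATA_DIR = Path("./image-relighting-diffusion-data")


def load_train_split():
    # Assumption: data_hf_train/ is in `save_to_disk` format, so
    # `load_from_disk` can read it directly. pip install datasets
    from datasets import load_from_disk

    return load_from_disk(str(DATA_DIR / "data_hf_train"))
```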
### OOD qualitative pack
| Path | Description |
|------|-------------|
| `qualitative_comparison/selected-64/` | 64 face crops for the paper’s out-of-distribution qualitative evaluation |
| `qualitative_comparison/ood_test_64.csv` | **64 rows** — one `editing_instruction` per image (paper Figure 6 qualitative protocol) |
| `qualitative_comparison/ood-64-results/` | (Optional) Archived run outputs + `ood_results.json` |
Paths in `ood_test_64.csv` are relative to the **`qualitative_comparison/`** directory (e.g. `selected-64/img000-lat349.png`).
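Resolving those relative paths is a one-liner once the CSV is read. A small sketch with an in-memory sample row; the header names (`image`, `editing_instruction`) and the instruction text are hypothetical placeholders, so check the real CSV header before use (the image path itself is the example from above):

```python
import csv
import io

# Hypothetical CSV layout standing in for ood_test_64.csv.
sample = io.StringIO(
    "image,editing_instruction\n"
    "selected-64/img000-lat349.png,example instruction\n"
)

rows = list(csv.DictReader(sample))
# Prepend the qualitative_comparison/ base directory to each relative path.
full_paths = [f"qualitative_comparison/{r['image']}" for r in rows]
print(full_paths[0])  # qualitative_comparison/selected-64/img000-lat349.png
```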
### Optional evaluation bundles
| Path | Description |
|------|-------------|
| `evaluation/evaluation_results_comparison/` | Saved comparisons (our model vs SD1.5 baseline) + JSON |
| `evaluation/baseline_sdxl_long_descriptions/` | SDXL baseline outputs |
| `evaluation/baseline_flux_long_descriptions/` | FLUX baseline outputs |
---
## Using with the GitHub code
1. Clone **[image-relighting-diffusion](https://github.com/nishitanand/image-relighting-diffusion)**.
2. Download this dataset to e.g. `./hf_dataset` (command above).
3. **Training:** point `--data_dir` at `hf_dataset/data_hf_train` (or create a symlink), or rebuild triplets from the CSVs in the code repo (see the GitHub **README** sections “Download prebuilt data” and “Full pipeline”).
4. **OOD:** copy `hf_dataset/qualitative_comparison/selected-64/` and `ood_test_64.csv` into the clone’s `qualitative_comparison/` next to `process_ood_test.py`.
5. **Quantitative eval:** CSVs in the code repo use paths relative to the **repository root**; keep the same relative layout or rewrite prefixes.
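For step 5, rewriting path prefixes can be done with a few lines of Python. A sketch on a synthetic row; the column name `path`, the prefixes, and the sample instruction are hypothetical and should be matched to the actual evaluation CSVs:

```python
import csv
import io


def rewrite_prefix(rows, column, old, new):
    """Swap a leading path prefix in one CSV column (illustrative helper)."""
    for r in rows:
        if r[column].startswith(old):
            r[column] = new + r[column][len(old):]
    return rows


# Hypothetical sample row standing in for an evaluation CSV.
sample = io.StringIO("path,editing_instruction\ndata-test/img0.png,brighten\n")
rows = rewrite_prefix(
    list(csv.DictReader(sample)), "path", "data-test/", "hf_dataset/data-test/"
)
print(rows[0]["path"])  # hf_dataset/data-test/img0.png
```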
**FFHQ** originals are **not** included in this dataset; obtain FFHQ under its own license from [NVlabs/ffhq-dataset](https://github.com/NVlabs/ffhq-dataset), cite the StyleGAN/FFHQ paper, and comply with NVIDIA's terms as required.
---
## Citation (BibTeX)
```bibtex
@article{anand2026learning,
title={Learning Illumination Control in Diffusion Models},
author={Anand, Nishit and Suri, Manan and Metzler, Christopher and Manocha, Dinesh and Duraiswami, Ramani},
journal={arXiv preprint arXiv:2604.24877},
year={2026},
note={ReALM-GEN @ ICLR 2026}
}
```
---
## License
This dataset bundle on Hugging Face is released under [**CC BY-NC-SA 4.0**](https://creativecommons.org/licenses/by-nc-sa/4.0/). See [`LICENSE`](LICENSE) in this repository.
**FFHQ.** Curated training splits in the paper trace to **Flickr-Faces-HQ (FFHQ)**. Individual FFHQ images were published on Flickr under licenses such as CC BY 2.0, CC BY-NC 2.0, and public-domain marks; the **FFHQ dataset distribution** (metadata, scripts, documentation) is provided by NVIDIA under **CC BY-NC-SA 4.0**. See [NVlabs/ffhq-dataset](https://github.com/NVlabs/ffhq-dataset).
Cite [**arXiv:2604.24877**](https://arxiv.org/abs/2604.24877) when publishing results built on this bundle.