arxiv:2309.15842

Exploiting the Signal-Leak Bias in Diffusion Models

Published on Sep 27, 2023

Abstract

There is a bias in the inference pipeline of most diffusion models. This bias arises from a signal leak whose distribution deviates from the noise distribution, creating a discrepancy between training and inference processes. We demonstrate that this signal-leak bias is particularly significant when models are tuned to a specific style, causing sub-optimal style matching. Recent research tries to avoid signal leakage during training. We instead show how to exploit this signal-leak bias in existing diffusion models to gain more control over the generated images. This enables us to generate images with more varied brightness, and images that better match a desired style or color. By modeling the distribution of the signal leak in the spatial frequency and pixel domains, and including a signal leak in the initial latent, we generate images that better match expected results without any additional training.
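To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of injecting a signal leak into the initial latent: instead of starting inference from pure Gaussian noise, a low-frequency leak term is added so the sample is biased toward a desired mean brightness or color. The function name, `target_mean`, and `leak_std` are illustrative assumptions, and the "leak" here is modeled as just a per-channel DC offset, the simplest case of the spatial-frequency modeling the paper describes.

```python
import numpy as np

def initial_latent_with_signal_leak(shape, target_mean, leak_std=0.05, seed=0):
    """Sketch: build an initial latent that contains a signal leak.

    Standard inference starts from N(0, I) noise. Here we add a
    low-frequency leak term (a per-channel DC offset drawn around
    `target_mean`) so generation is biased toward the desired
    brightness/color without any retraining.
    `target_mean` and `leak_std` are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    # the usual pure-noise latent, shape (channels, height, width)
    noise = rng.standard_normal(shape)
    # zero-frequency (DC) component per channel: the simplest signal leak,
    # broadcast over the spatial dimensions
    leak = rng.normal(loc=target_mean, scale=leak_std, size=(shape[0], 1, 1))
    return noise + leak

# Example: a latent nudged toward brighter outputs
latent = initial_latent_with_signal_leak((4, 64, 64), target_mean=0.5)
```

In a real pipeline this biased latent would replace the random initial latent passed to the denoiser; the added offset survives the early denoising steps and shifts the mean of the generated image.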

