arxiv:2605.04453

StableI2I: Spotting Unintended Changes in Image-to-Image Transition

Published on May 6
· Submitted by lijiayang on May 6
Abstract

StableI2I is a unified evaluation framework that assesses content fidelity and consistency in image-to-image tasks without requiring reference images, providing accurate and interpretable measurements correlated with human judgments.

AI-generated summary

In most real-world image-to-image (I2I) scenarios, existing evaluations focus primarily on instruction following and on the perceptual quality or aesthetics of the generated images. However, they largely fail to assess whether the output image preserves the semantic correspondence and spatial structure of the input image. To address this limitation, we propose StableI2I, a unified and dynamic evaluation framework that explicitly measures content fidelity and pre/post consistency across a wide range of I2I tasks, including image editing and image restoration, without requiring reference images. In addition, we construct StableI2I-Bench, a benchmark designed to systematically evaluate the accuracy of MLLMs on such fidelity and consistency assessment tasks. Extensive experimental results demonstrate that StableI2I provides accurate, fine-grained, and interpretable evaluations of content fidelity and consistency, with strong correlations to human subjective judgments. Our framework serves as a practical and reliable evaluation tool for diagnosing content consistency and benchmarking model performance in real-world I2I systems.
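To make the pre/post consistency idea concrete: given an input image, an edited output, and a mask marking regions that should remain untouched, one can score how well those regions were preserved. The sketch below is purely illustrative and is not StableI2I's method (which, per the abstract, is a reference-free, MLLM-based evaluation); the function name, the pixel-difference metric, and the toy mask are all assumptions for demonstration.

```python
import numpy as np

def region_consistency(src: np.ndarray, out: np.ndarray,
                       preserve_mask: np.ndarray) -> float:
    """Crude pre/post consistency score in [0, 1]: 1.0 means the output
    is pixel-identical to the input inside the preserve mask.
    Illustrative stand-in only, not the StableI2I metric."""
    src = src.astype(np.float64) / 255.0
    out = out.astype(np.float64) / 255.0
    m = preserve_mask.astype(bool)
    if not m.any():
        return 1.0  # nothing is required to stay fixed
    return 1.0 - float(np.abs(src[m] - out[m]).mean())

# Toy example: an 8x8 grayscale "image" with an intended edit in the
# top half, while the bottom half must be preserved.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8)).astype(np.uint8)
edited = img.copy()
edited[:4] = 255 - edited[:4]   # intended edit: invert the top half
mask = np.zeros((8, 8), dtype=bool)
mask[4:] = True                 # bottom half should stay unchanged
print(region_consistency(img, edited, mask))  # → 1.0
```

A texture repaint or unintended change in the masked region would pull the score below 1.0; a learned or MLLM-based judge, as StableI2I proposes, additionally explains *what* changed rather than just how much.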

Community

Paper submitter

[ICML 2026] The first model for evaluating fidelity in image-to-image tasks. It assesses whether the generated image suffers from content errors, texture repainting, or other unintended changes, helping ensure consistency in regions that should be preserved.


Get this paper in your agent:

hf papers read 2605.04453
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 2

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 1