---
license: apache-2.0
datasets:
- lijiayangCS/StableI2I_Bench
language:
- en
base_model:
- Qwen/Qwen3-VL-8B-Instruct
new_version: lijiayangCS/StableI2I_PLUS
pipeline_tag: image-text-to-text
---

# StableI2I

Official implementation of **StableI2I: Spotting Unintended Changes in Image-to-Image Transition**  
**ICML 2026**

> This HuggingFace repository provides the checkpoint used in the paper.  
> For the latest code, demo, inference scripts, and score-supported version, please refer to the official GitHub repository:  
> https://github.com/Henry-Lee-real/StableI2I  
> This model is associated with our paper: https://arxiv.org/abs/2605.04453

For any questions, feel free to reach out via email: **lijiayang.cs@gmail.com**

Looking forward to your ⭐!

---

## 📌 TODOs

- [x] Release code
- [x] Release checkpoint
- [ ] Release pip package
- [x] Release arXiv version
- [ ] Release ICML camera-ready paper
- [x] Release HuggingFace project page

---

## 🔥 News

- **StableI2I** has been accepted to **ICML 2026**.
- This HuggingFace repository hosts the checkpoint used in the paper.
- The latest codebase is maintained in the official GitHub repository.
- If you need the version with explicit **score output**, please use the latest GitHub code.

---

## Core Concept

In most real-world image-to-image (I2I) scenarios, existing evaluations primarily focus on instruction following and perceptual quality or aesthetics of the generated images. However, they often fail to assess whether the output image faithfully preserves the semantic correspondence, spatial structure, and low-level appearance of the input image.

To address this limitation, we propose **StableI2I**, a unified and dynamic evaluation framework for measuring content fidelity and pre-/post-transition consistency in image-to-image transitions. StableI2I does not require reference images and can be applied to a wide range of I2I tasks, including image editing and image restoration.

StableI2I evaluates unintended changes from three complementary perspectives:

1. **Semantic Level**  
   Checks whether the output introduces unintended object-level or meaning-level changes, such as object addition, removal, replacement, or identity drift.

2. **Structure Level**  
   Checks whether the output preserves spatial layout and geometric consistency, including misalignment, deformation, repainting, and structural distortion.

3. **Low-level Appearance**  
   Checks whether the output introduces unintended visual degradation, such as blur, noise, color cast, exposure degradation, or artifacts.
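
As an illustration only (the exact output schema is defined by the official GitHub codebase; the class and field names below are hypothetical), the three perspectives can be modeled as a structured report in which each level collects its own unintended-change findings:

```python
from dataclasses import dataclass, field

@dataclass
class StableI2IReport:
    """Hypothetical container mirroring StableI2I's three evaluation levels.

    Each field holds the unintended-change findings detected at that level;
    an empty list means no unintended change was spotted there.
    """
    semantic: list = field(default_factory=list)    # e.g. "object removed: cat"
    structure: list = field(default_factory=list)   # e.g. "layout misalignment"
    appearance: list = field(default_factory=list)  # e.g. "color cast"

    def is_faithful(self) -> bool:
        # The transition is faithful iff no unintended change was found
        # at any of the three levels.
        return not (self.semantic or self.structure or self.appearance)

# Example: an edit that accidentally blurred the background fails
# at the low-level appearance check only.
report = StableI2IReport(appearance=["background blur introduced"])
print(report.is_faithful())  # False
```

Keeping the findings separated by level is what makes the evaluation interpretable: a failure can be traced to semantics, structure, or appearance rather than collapsed into a single opaque score.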

In addition, we construct **StableI2I-Bench**, a benchmark designed to systematically evaluate the ability of multimodal large language models (MLLMs) to judge content fidelity and consistency in image-to-image tasks.

Extensive experiments show that StableI2I provides accurate, fine-grained, and interpretable evaluations, with strong correlations to human subjective judgments. It serves as a practical evaluation tool for diagnosing content consistency and benchmarking real-world I2I systems.

---

## Model Checkpoint

This HuggingFace repository provides the checkpoint used in the StableI2I paper.

Please note:

- The checkpoint corresponds to the paper version.
- For the latest inference pipeline, API interface, and score-supported output format, please refer to the official GitHub repository.
- The model is built on Qwen3-VL and follows the standard Qwen3-VL inference interface.
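
Since the model follows the Qwen-VL inference style, a request is expressed as a chat message list pairing the two images with a text instruction. The sketch below is illustrative only: the helper name and the prompt wording are hypothetical, and the official GitHub inference script defines the exact prompt used in the paper. The resulting messages would then go through the usual `transformers` pipeline (`AutoProcessor.apply_chat_template` followed by `generate`).

```python
def build_i2i_eval_messages(src_image: str, out_image: str, instruction: str) -> list:
    """Build a Qwen-VL-style chat message list asking the model to spot
    unintended changes between a source image and a generated image.

    NOTE: hypothetical helper; the prompt text is a placeholder, not the
    prompt used in the StableI2I paper.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": src_image},   # input image
                {"type": "image", "image": out_image},   # I2I output image
                {
                    "type": "text",
                    "text": (
                        "The first image is the input and the second is the "
                        f"I2I output for the instruction: '{instruction}'. "
                        "List any unintended semantic, structural, or "
                        "low-level appearance changes."
                    ),
                },
            ],
        }
    ]

messages = build_i2i_eval_messages("input.png", "edited.png",
                                   "make the sky sunset-colored")
# `messages` is then fed to the processor/model following the standard
# Qwen-VL chat-template workflow.
```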

---