---
license: apache-2.0
task_categories:
- image-retrieval
- vision-language-navigation
tags:
- composed-image-retrieval
- robust-learning
- blip-2
- pytorch
- icassp-2026
---
<a id="top"></a>
<div align="center">
  <h1>(ICASSP 2026) HINT: Composed Image Retrieval with Dual-Path Compositional Contextualized Network (Model Weights)</h1>
  <div>
  <a target="_blank" href="https://zh-mingyu.github.io/">Mingyu&#160;Zhang</a><sup>1</sup>,
  <a target="_blank" href="https://lee-zixu.github.io/">Zixu&#160;Li</a><sup>1</sup>,
  <a target="_blank" href="https://zivchen-ty.github.io/">Zhiwei&#160;Chen</a><sup>1</sup>,
  <a target="_blank" href="https://zhihfu.github.io/">Zhiheng&#160;Fu</a><sup>1</sup>,
  Xiaowei&#160;Zhu<sup>1</sup>,
  Jiajia&#160;Nie<sup>1</sup>,
  <a target="_blank" href="https://faculty.sdu.edu.cn/weiyinwei1/zh_CN/index.htm">Yinwei&#160;Wei</a><sup>1</sup>
  <a target="_blank" href="https://faculty.sdu.edu.cn/huyupeng1/zh_CN/index.htm">Yupeng&#160;Hu</a><sup>1&#9993</sup>,
  </div>
  <sup>1</sup>School of Software, Shandong University &#160&#160&#160</span>
  <br />
  <sup>&#9993&#160;</sup>Corresponding author&#160;&#160;</span>
  <br/>
  <p>
      <a href="https://2026.ieeeicassp.org/"><img src="https://img.shields.io/badge/ICASSP-2026-blue.svg?style=flat-square" alt="ICASSP 2026"></a>
      <a href="https://arxiv.org/pdf/2603.26341v1"><img alt='Paper' src="https://img.shields.io/badge/Paper-ICASSP-green.svg"></a>
      <a href="https://zh-mingyu.github.io/HINT.github.io"><img alt='page' src="https://img.shields.io/badge/Website-orange"></a>
      <a href="https://github.com/iLearn-Lab/ICASSP26-HINT"><img alt='GitHub' src="https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github"></a>
  </p>
</div>
This repository hosts the official pre-trained checkpoints for **HINT**, a framework that tackles two weaknesses of existing Composed Image Retrieval (CIR) methods: the neglect of contextual information and the absence of a discrepancy-amplification mechanism.

---

## 📌 Model Information

### 1. Model Name
**HINT** (dual-patH composItional coNtextualized neTwork) Checkpoints.

### 2. Task Type & Applicable Tasks
- **Task Type:** Composed Image Retrieval (CIR) / Vision-Language Retrieval.
- **Applicable Tasks:** Retrieving target images from a candidate gallery given a reference image plus a modification text (see the schematic sketch below).
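
To make the input/output contract concrete, here is a minimal, generic CIR ranking sketch. This is illustrative only: the random tensors stand in for real model outputs, and the function is not part of HINT's actual API.

```python
# Schematic CIR ranking -- illustrative only; not HINT's actual API.
import torch
import torch.nn.functional as F

def rank_gallery(query_emb: torch.Tensor, gallery_embs: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Rank candidate images for one composed query.

    query_emb:    (d,)   embedding of (reference image, modification text)
    gallery_embs: (N, d) precomputed embeddings of the candidate gallery
    """
    sims = F.cosine_similarity(query_emb.unsqueeze(0), gallery_embs)  # (N,)
    return sims.topk(k).indices

query = torch.randn(256)          # placeholder composed-query embedding
gallery = torch.randn(1000, 256)  # placeholder gallery embeddings
print(rank_gallery(query, gallery, k=5))
```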

### 3. Project Introduction
Existing Composed Image Retrieval (CIR) methods often neglect contextual information when discriminating matching samples, and struggle to understand complex modifications and implicit dependencies in real-world scenarios. HINT addresses this through:

- 🧩 Dual Context Extraction (DCE): Extracts both intra-modal context and cross-modal context, enhancing joint semantic representation by integrating multimodal contextual information.

- πŸ“ Quantification of Contextual Relevance (QCR): Measures the relevance between cross-modal contextual information and the target image semantics, enabling the quantification of the implicit dependencies.

- βš–οΈ Dual-Path Consistency Constraints (DPCC): Optimizes the training process by constraining representation consistency, ensuring the stable enhancement of similarity for matching instances while lowering it for non-matching ones.

Built on the BLIP-2 architecture, HINT achieves state-of-the-art (SOTA) retrieval performance on both open-domain and fashion-domain benchmarks.
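
For intuition, the DPCC idea can be mimicked with a standard batch-contrastive objective plus a consistency term tying the two compositional paths together. The sketch below is our own schematic analogue under simplifying assumptions, not the loss actually used in the paper:

```python
# Schematic analogue of a dual-path consistency constraint -- a
# simplification for intuition, NOT the exact objective from the paper.
import torch
import torch.nn.functional as F

def dual_path_loss(q_a, q_b, targets, tau: float = 0.07):
    """q_a, q_b: (B, d) query embeddings from two compositional paths.
    targets:    (B, d) target-image embeddings; row i matches query i.
    """
    q_a, q_b, t = (F.normalize(x, dim=-1) for x in (q_a, q_b, targets))
    labels = torch.arange(q_a.size(0), device=q_a.device)
    # Contrastive term per path: raise similarity for matching pairs,
    # lower it for non-matching ones.
    loss_a = F.cross_entropy(q_a @ t.T / tau, labels)
    loss_b = F.cross_entropy(q_b @ t.T / tau, labels)
    # Consistency term: the two paths should agree on their similarities.
    consistency = F.mse_loss(q_a @ t.T, q_b @ t.T)
    return loss_a + loss_b + consistency
```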

### 4. Training Data Source & Hosted Weights
The models were trained on the **FashionIQ** and **CIRR** datasets. This Hugging Face repository provides the corresponding `.pt` checkpoint files, organized by dataset:

* `fashioniq.pt` (trained on FashionIQ)
* `cirr.pt` (trained on CIRR)

---

## 🚀 Usage & Basic Inference

These weights are intended to be loaded and evaluated with the official [HINT GitHub repository](https://github.com/iLearn-Lab/ICASSP26-HINT).

### Step 1: Prepare the Environment
Clone the GitHub repository and install dependencies:
```bash
git clone https://github.com/iLearn-Lab/ICASSP26-HINT
cd ICASSP26-HINT
conda create -n hint python=3.8 -y
conda activate hint
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install open-clip-torch==2.24.0 scikit-learn==1.3.2 transformers==4.25.0 salesforce-lavis==1.0.2 timm==0.9.16
```
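
A quick sanity check (our suggestion, not part of the official repo) confirms that the key packages import and a GPU is visible:

```python
# Verify the environment before running anything heavier.
import torch
import transformers
import lavis  # provided by salesforce-lavis

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```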

### Step 2: Download Model Weights
Download the specific `.pt` files you wish to evaluate from this Hugging Face repository. Place them into a `checkpoints/` directory within your cloned GitHub repo. For example, to evaluate the CIRR model:

```text
ICASSP26-HINT/
└── checkpoints/
        └── cirr.pt  <-- (Rename to best_model.pt if required by your specific test script)
```
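
Alternatively, the files can be fetched programmatically with `huggingface_hub`; the `repo_id` below is a placeholder, so substitute this repository's actual id:

```python
# Scripted checkpoint download; repo_id is a placeholder -- replace it
# with this Hugging Face repository's actual id.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="<org>/<this-repo>",  # placeholder id
    filename="cirr.pt",
    local_dir="checkpoints",
)
print("Checkpoint saved to:", ckpt_path)
```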

### Step 3: Run Testing / Evaluation
To generate prediction files on the CIRR dataset for the [CIRR Evaluation Server](https://cirr.cecs.anu.edu.au/), point the test script to the directory containing your downloaded checkpoint:

```bash
python src/cirr_test_submission.py checkpoints/
```
*(The script automatically writes `.json` prediction files for each checkpoint it finds, ready for upload to the online evaluation server.)*
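
Before uploading, you can verify that the generated files parse as valid JSON; the exact filenames depend on the script, so the glob pattern below is an assumption:

```python
# Sanity-check generated submission files (the glob pattern is an
# assumption; adjust it to wherever the script writes its output).
import json
from pathlib import Path

for path in sorted(Path(".").glob("*.json")):
    with open(path) as fh:
        data = json.load(fh)  # raises an error if the file is malformed
    print(path.name, "-", len(data), "top-level entries")
```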

---

## ⚠️ Limitations & Notes

- **Hardware Requirements:** Because HINT builds on the BLIP-2 architecture, inference and further fine-tuning require GPUs with sufficient memory (e.g., an NVIDIA A40 48 GB or V100 32 GB is recommended); a quick check is sketched below.
- **Intended Use:** These weights are provided for academic research and to facilitate reproducibility of the ICASSP 2026 paper.
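
As a convenience (ours, not the authors'), the snippet below checks whether the visible GPU meets the memory recommendation above:

```python
# Check GPU memory against the ~32 GB recommendation above.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"GPU 0: {props.name}, {total_gb:.1f} GB")
    if total_gb < 32:
        print("Warning: below the recommended 32 GB of GPU memory.")
else:
    print("No CUDA device detected.")
```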

---

## πŸ“β­οΈ Citation

If you find our work, code, or these model weights useful in your research, please consider leaving a **Star** ⭐️ on our GitHub repository and citing our paper:

```bibtex
@inproceedings{HINT2026,
  title={HINT: Composed Image Retrieval with Dual-Path Compositional Contextualized Network},
  author={Zhang, Mingyu and Li, Zixu and Chen, Zhiwei and Fu, Zhiheng and Zhu, Xiaowei and Nie, Jiajia and Wei, Yinwei and Hu, Yupeng},
  booktitle={Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2026}
}
```