<div align="center">

<h1><a href="https://arxiv.org/pdf/2511.12034">Calibrated Multimodal Representation Learning with Missing Modalities</a></h1>

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![Accepted](https://img.shields.io/badge/Accepted-ICML'2026-red)
[![GitHub](https://img.shields.io/badge/Github-CalMRL-black.svg)](https://github.com/Xiaohao-Liu/CalMRL)


*Multimodal representation learning under partial-modality settings*

</div>

## ✨ Overview

<p align="center">
  <img src="img/anchor_shift.jpg" alt="Anchor shift" width="420" />
</p>

**CalMRL** is a multimodal representation learning framework that calibrates cross-modal alignment when some modalities are missing.
It combines two complementary goals, illustrated in the sketch after this list:
- **Cross-modal alignment** for robust shared representations
- **Missing-modality calibration** through posterior inference and learned generative parameters
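
A minimal PyTorch sketch of how these two goals could interact in a single training step. The `calibrator` object and its `posterior_mean`/`nll` methods are illustrative assumptions here, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def training_step(z_a, z_b, obs_b, calibrator, tau=0.07, lam=1.0):
    """Illustrative joint step: align modality A with modality B,
    completing missing B embeddings from a learned posterior.

    z_a, z_b : (N, D) L2-normalized embeddings of the two modalities
    obs_b    : (N,) bool mask, True where modality B is observed
    """
    # Complete missing B embeddings with the calibrator's posterior mean.
    z_b = torch.where(obs_b.unsqueeze(-1), z_b, calibrator.posterior_mean(z_a))

    # Symmetric InfoNCE alignment loss over the completed batch.
    logits = z_a @ z_b.t() / tau
    target = torch.arange(z_a.size(0), device=z_a.device)
    align = 0.5 * (F.cross_entropy(logits, target)
                   + F.cross_entropy(logits.t(), target))

    # Calibration term: fit the generative parameters on observed pairs only.
    calib = calibrator.nll(z_a[obs_b], z_b[obs_b])
    return align + lam * calib
```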

---

## 🎯 Key Features

🔄 **Partial-Modality Learning**
- Handles missing video, audio, text, or subtitle signals
- Supports posterior-based feature completion with learned modality-specific parameters

🎯 **Multimodal Retrieval**
- Joint training over text-video, text-audio, text-video-audio, and subtitle-aware setups
- Config-driven recipes for pretraining, finetuning, and evaluation

🧠 **Feature Calibration**
- Uses latent posterior inference for modality completion
- Includes a warmup pipeline to estimate the generative parameters `W`, `mu`, and `log_sigma` (see the sketch below)
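
For intuition, the sketch below shows one way such a warmup could look under a linear-Gaussian assumption, modeling a missing embedding as `N(W @ z_obs + mu, diag(exp(log_sigma))**2)`. The parameter names match those above; everything else (shapes, optimizer, loop) is an illustrative assumption rather than the repo's code:

```python
import torch

class GaussianWarmup(torch.nn.Module):
    """Illustrative head: p(z_miss | z_obs) = N(W @ z_obs + mu, diag(exp(log_sigma))**2)."""

    def __init__(self, dim):
        super().__init__()
        self.W = torch.nn.Parameter(torch.eye(dim))            # cross-modal linear map
        self.mu = torch.nn.Parameter(torch.zeros(dim))         # mean shift
        self.log_sigma = torch.nn.Parameter(torch.zeros(dim))  # per-dim log std

    def posterior_mean(self, z_obs):
        return z_obs @ self.W.t() + self.mu

    def nll(self, z_obs, z_miss):
        # Gaussian negative log-likelihood (constant log(2*pi) term dropped).
        var = torch.exp(2 * self.log_sigma)
        diff = z_miss - self.posterior_mean(z_obs)
        return 0.5 * (diff.pow(2) / var + 2 * self.log_sigma).sum(-1).mean()

def warmup(head, pairs, steps=1000, lr=1e-3):
    """Fit (W, mu, log_sigma) on fully observed (z_obs, z_miss) pairs."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for step, (z_obs, z_miss) in zip(range(steps), pairs):
        opt.zero_grad()
        head.nll(z_obs, z_miss).backward()
        opt.step()
```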

---

## 🏗️ Architecture

![CalMRL framework](img/framework.png)

The current codebase is organized around three main stages, composed as in the sketch after this list:

1. **🔧 Multimodal Encoding**: Video, audio, text, and subtitle features are extracted with VAST-style encoders.
2. **🧮 Representation Calibration**: Shared embeddings are aligned while latent posterior inference estimates missing information.
3. **🔄 Downstream Evaluation**: Retrieval and other tasks are executed through a unified config-driven pipeline.
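
Conceptually, the three stages compose as follows. All names here (`encoders`, `calibrator`, `retriever`, batch keys) are illustrative placeholders; the actual entry points are defined by the repo's configs:

```python
# Illustrative wiring of the three stages; not the repo's actual API.
def run_pipeline(batch, encoders, calibrator, retriever):
    # 1. Multimodal encoding: one VAST-style encoder per available modality.
    feats = {m: encoders[m](x) for m, x in batch["inputs"].items()}

    # 2. Representation calibration: complete missing modalities from the
    #    posterior, conditioning on an observed anchor modality.
    for m in batch["missing"]:
        feats[m] = calibrator.posterior_mean(feats[batch["anchor"]])

    # 3. Downstream evaluation: e.g., text-video retrieval in the shared space.
    return retriever.score(feats["text"], feats["video"])
```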

---

## Citation

If you find this project useful for your research, please cite:

```bibtex
@article{liu2025calibrated,
  title={Calibrated Multimodal Representation Learning with Missing Modalities},
  author={Liu, Xiaohao and Xia, Xiaobo and Wei, Jiaheng and Yang, Shuo and Su, Xiu and Ng, See-Kiong and Chua, Tat-Seng},
  journal={arXiv preprint arXiv:2511.12034},
  year={2025}
}
```

<div align="center">

**[🔝 Back to Top](#-overview)**

</div>