<div align="center">
<h1><a href="https://arxiv.org/pdf/2511.12034">Calibrated Multimodal Representation Learning with Missing Modalities</a></h1>
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![Accepted: ICML 2026](https://img.shields.io/badge/Accepted-ICML'2026-red)
[![GitHub: CalMRL](https://img.shields.io/badge/Github-CalMRL-black.svg)](https://github.com/Xiaohao-Liu/CalMRL)
*Multimodal representation learning under partial-modality settings*
</div>
## ✨ Overview
<p align="center">
<img src="img/anchor_shift.jpg" alt="Anchor shift" width="420" />
</p>
**CalMRL** is a multimodal representation learning framework that calibrates cross-modal alignment when some modalities are missing.
CalMRL combines two complementary goals:
- **Cross-modal alignment** for robust shared representations (a toy loss is sketched after this list)
- **Missing-modality calibration** through posterior inference and learned generative parameters
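
For intuition on the first goal, here is a minimal sketch of a symmetric InfoNCE alignment loss between paired embeddings of two modalities. This is a generic illustration, not CalMRL's exact objective (see the paper); the function name and temperature value are placeholders.

```python
import torch
import torch.nn.functional as F

def infonce_alignment(z_a: torch.Tensor, z_b: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired modality embeddings.

    Illustrative only; CalMRL's actual alignment loss may differ.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                    # (B, B) pairwise similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)          # modality A -> B
                  + F.cross_entropy(logits.t(), targets))   # modality B -> A
```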
---
## 🎯 Key Features
🔄 **Partial-Modality Learning**
- Handles missing video, audio, text, or subtitle signals
- Supports posterior-based feature completion with learned modality-specific parameters
🎯 **Multimodal Retrieval**
- Joint training over text-video, text-audio, text-video-audio, and subtitle-aware setups
- Config-driven recipes for pretraining, finetuning, and evaluation
🧠 **Feature Calibration**
- Uses latent posterior inference for modality completion
- Includes a warmup pipeline to estimate `W`, `mu`, and `log_sigma` (see the sketch below)
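
As a rough illustration of such a warmup, the sketch below assumes a linear-Gaussian model over a missing modality's embedding, `z_miss ~ N(W z_obs + mu, diag(exp(log_sigma))^2)`. Only the parameter names `W`, `mu`, and `log_sigma` come from this README; the class, its methods, and the parameterization are hypothetical.

```python
import torch

class LinearGaussianHead(torch.nn.Module):
    """Hypothetical warmup head: z_miss ~ N(W @ z_obs + mu, diag(exp(log_sigma))^2).

    Parameter names mirror the README; CalMRL's actual parameterization may differ.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.W = torch.nn.Parameter(torch.eye(dim))
        self.mu = torch.nn.Parameter(torch.zeros(dim))
        self.log_sigma = torch.nn.Parameter(torch.zeros(dim))

    def nll(self, z_obs: torch.Tensor, z_miss: torch.Tensor) -> torch.Tensor:
        """Gaussian negative log-likelihood (constant term dropped) as the warmup objective."""
        mean = z_obs @ self.W.t() + self.mu
        var = torch.exp(2 * self.log_sigma)
        return 0.5 * ((z_miss - mean) ** 2 / var + 2 * self.log_sigma).sum(-1).mean()

    def impute(self, z_obs: torch.Tensor) -> torch.Tensor:
        """Posterior-mean completion for samples where the modality is missing."""
        return z_obs @ self.W.t() + self.mu
```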
---
## 🏗️ Architecture
![CalMRL framework](img/framework.png)
The current codebase is organized around three main stages, illustrated by the toy walk-through after this list:
1. **🔧 Multimodal Encoding**: Video, audio, text, and subtitle features are extracted with VAST-style encoders.
2. **🧮 Representation Calibration**: Shared embeddings are aligned while latent posterior inference estimates missing information.
3. **🔄 Downstream Evaluation**: Retrieval and other tasks are executed through a unified config-driven pipeline.
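
The snippet below strings the three stages together, reusing the hypothetical `LinearGaussianHead` from the warmup sketch above. Random tensors stand in for real VAST-style encoder features; none of these names are actual repo APIs.

```python
import torch
import torch.nn.functional as F

dim, batch = 256, 8

# 1. Multimodal encoding: random tensors stand in for per-modality features.
feats = {"text": torch.randn(batch, dim), "video": torch.randn(batch, dim)}
# "audio" is missing for this batch.

# 2. Representation calibration: posterior-mean completion of the missing modality.
audio_head = LinearGaussianHead(dim)            # from the warmup sketch above
feats["audio"] = audio_head.impute(feats["text"])

# 3. Downstream evaluation: text -> video retrieval by cosine similarity.
sim = F.normalize(feats["text"], dim=-1) @ F.normalize(feats["video"], dim=-1).t()
ranks = sim.argsort(dim=-1, descending=True)    # top-ranked videos per text query
```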
---
## Citation
If you find this project useful for your research, please cite:
```bibtex
@article{liu2025calibrated,
  title={Calibrated Multimodal Representation Learning with Missing Modalities},
  author={Liu, Xiaohao and Xia, Xiaobo and Wei, Jiaheng and Yang, Shuo and Su, Xiu and Ng, See-Kiong and Chua, Tat-Seng},
  journal={arXiv preprint arXiv:2511.12034},
  year={2025}
}
```
<div align="center">
**[🔝 Back to Top](#-overview)**
</div>