xhLiu committed on
Commit 96911e1 · verified · 1 Parent(s): 6bf91e7

Add files using upload-large-folder tool

Files changed (3)
  1. README.md +62 -15
  2. img/anchor_shift.jpg +3 -0
  3. img/framework.png +3 -0
README.md CHANGED
@@ -1,23 +1,70 @@
- # MM_eval Packaged Dataset

- This directory contains tar shards prepared for upload to the Hugging Face dataset repo `xhLiu/MM_eval`.

- Packaging choices:
- - Each top-level dataset under `datasets/` is archived independently.
- - Archives are split into 5 GiB shards named `DATASET.tar.part-XXX`.
- - Local cache directories such as `audiocaps_train/.cache` are excluded.

- To restore a dataset after download:

- ```bash
- cat DATASET.tar.part-* | tar -xf -
- ```

- Examples:

- ```bash
- cat activitynet.tar.part-* | tar -xf -
- cat vatex.tar.part-* | tar -xf -
  ```

- Checksums for all shards are stored in `sha256sums.txt`.
+ <div align="center">

+ <h1><a color="red" href="https://arxiv.org/pdf/2511.12034">Calibrated Multimodal Representation Learning with Missing Modalities</a></h1>

+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ ![Accepted](https://img.shields.io/badge/Accepted-ICML'2026-red)

+ *Multimodal representation learning under partial-modality settings*

+ </div>
+
+ ## ✨ Overview
+
+ <p align="center">
+ <img src="img/anchor_shift.jpg" alt="Anchor shift" width="420" />
+ </p>
+
+ **CalMRL** is a multimodal representation learning framework designed for alignment calibration when some modalities are missing.
+ CalMRL combines two complementary goals:
+ - **Cross-modal alignment** for robust shared representations
+ - **Missing-modality calibration** through posterior inference and learned generative parameters
+
+ ---
+
+ ## 🎯 Key Features
+
+ 🔄 **Partial-Modality Learning**
+ - Handles missing video, audio, text, or subtitle signals
+ - Supports posterior-based feature completion with learned modality-specific parameters
+
+ 🎯 **Multimodal Retrieval**
+ - Joint training over text-video, text-audio, text-video-audio, and subtitle-aware setups
+ - Config-driven recipes for pretraining, finetuning, and evaluation
+
+ 🧠 **Feature Calibration**
+ - Uses latent posterior inference for modality completion
+ - Includes a warmup pipeline to estimate `W`, `mu`, and `log_sigma`

+ ---

+ ## 🏗️ Architecture
+
+ ![](img/framework.png)
+
+ The current codebase is organized around three main stages:
+
+ 1. **🔧 Multimodal Encoding**: Video, audio, text, and subtitle features are extracted with VAST-style encoders.
+ 2. **🧮 Representation Calibration**: Shared embeddings are aligned while latent posterior inference estimates missing information.
+ 3. **🔄 Downstream Evaluation**: Retrieval and other tasks are executed through a unified config-driven pipeline.
+
+ ---
+
+ ## Citation
+
+ If this project is useful for your research, please cite it as:
+
+ ```bibtex
+ @article{liu2025calibrated,
+ title={Calibrated Multimodal Representation Learning with Missing Modalities},
+ author={Liu, Xiaohao and Xia, Xiaobo and Wei, Jiaheng and Yang, Shuo and Su, Xiu and Ng, See-Kiong and Chua, Tat-Seng},
+ journal={arXiv preprint arXiv:2511.12034},
+ year={2025}
+ }
  ```

+ <div align="center">
+
+ **[🔝 Back to Top](#-overview)**
+
+ </div>
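
The new README's Feature Calibration bullets mention posterior-based completion of missing modalities with learned parameters `W`, `mu`, and `log_sigma`, but this commit ships no code. The sketch below is only a minimal illustration of that idea under an assumed linear-Gaussian latent model; the function name, shapes, and the `tau` noise scale are hypothetical and are not taken from the CalMRL codebase.

```python
# Hypothetical sketch, NOT the CalMRL implementation: posterior-mean completion of a
# missing modality under an assumed linear-Gaussian model
#   z ~ N(mu, diag(exp(log_sigma))^2),  x_m | z ~ N(W_m z, tau^2 I)  for each modality m.
import torch

def complete_missing_modality(obs, W, mu, log_sigma, W_missing, tau=0.1):
    """obs: {modality: embedding of shape (d_m,)} for the observed modalities;
    W: {modality: projection of shape (d_m, k)}; mu, log_sigma: prior parameters of shape (k,);
    W_missing: projection (d_miss, k) of the modality to impute."""
    prior_prec = torch.diag(torch.exp(-2.0 * log_sigma))   # Sigma^{-1}
    precision = prior_prec.clone()                          # posterior precision accumulator
    mean_term = prior_prec @ mu
    for name, x in obs.items():
        Wm = W[name]
        precision = precision + Wm.T @ Wm / tau**2          # add W_m^T W_m / tau^2
        mean_term = mean_term + Wm.T @ x / tau**2           # add W_m^T x_m / tau^2
    post_mean = torch.linalg.inv(precision) @ mean_term     # E[z | observed modalities]
    return W_missing @ post_mean                            # imputed embedding for the missing modality

# Toy usage with random parameters (shapes are illustrative only).
torch.manual_seed(0)
k, d_video, d_audio = 8, 16, 12
W = {"video": torch.randn(d_video, k), "audio": torch.randn(d_audio, k)}
mu, log_sigma = torch.zeros(k), torch.zeros(k)
obs = {"video": torch.randn(d_video)}                       # audio is missing
audio_hat = complete_missing_modality(obs, W, mu, log_sigma, W["audio"])
print(audio_hat.shape)                                       # torch.Size([12])
```

Under these assumptions the completion reduces to a closed-form Gaussian posterior mean; the warmup and inference procedure in the released code may differ.
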
img/anchor_shift.jpg ADDED

Git LFS Details

  • SHA256: c4d2cefd42dacef53dd9b5fdcdb3eff1fd793b54ed27b680a9db918015a2dfa6
  • Pointer size: 131 Bytes
  • Size of remote file: 108 kB
img/framework.png ADDED

Git LFS Details

  • SHA256: 3a71ed10bcdf73137c117e52d2612e119f6a786bb663f2f90c05d57eb67d3cf9
  • Pointer size: 131 Bytes
  • Size of remote file: 462 kB