itshardtogetaname committed on
Commit 86a1bd4 · verified · 1 parent: f55d84b

Update README.md

Files changed (1)
  1. README.md +22 -7
README.md CHANGED
@@ -1,11 +1,26 @@
  ---
- pipeline_tag: biosignal-model
+ pipeline_tag: biosignal
  tags:
- - model_hub_mixin
- - pytorch_model_hub_mixin
+ - biosignal
+ - wearable
+ - ppg
+ - ecg
+ - masked-autoencoder
  ---
 
- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: https://github.com/hzhou3/xMAE
- - Paper: [More Information Needed]
- - Docs: [More Information Needed]
+ # xMAE: Physiology-Aware Masked Cross-Modal Reconstruction
+
+ **Abstract:**
+
+ Biosignals acquired from different locations on the body often provide temporally ordered views of the same underlying physiological process. However, most existing self-supervised learning methods treat these signals as interchangeable views, overlooking the directional temporal dynamics that link them. A canonical example is the relationship between electrocardiography (ECG), which captures the electrical activation initiating each heartbeat, and photoplethysmography (PPG), which records the resulting peripheral pulse delayed by vascular dynamics. To capture this structured relationship, we introduce xMAE, a biosignal pretraining framework that leverages masked cross-modal reconstruction across temporally ordered biosignals as a training-time constraint to encourage physiologically meaningful timing structure in the learned representations. We show that pretraining with xMAE yields representations that outperform both unimodal and multimodal baselines on 15 of 19 downstream tasks, including cardiovascular outcome prediction, abnormal laboratory test detection, sleep staging, and demographic inference, while generalizing across devices, body locations, and acquisition settings. Further analysis suggests that the ECG–PPG timing structure is reflected in the learned PPG representations. More broadly, xMAE demonstrates the effectiveness of incorporating temporal structure into multimodal pretraining when signals observe different stages of a shared underlying process.
+
+ [![arXiv](https://img.shields.io/badge/arXiv-2605.00973-b31b1b.svg)](https://arxiv.org/abs/2605.00973)
+ [![PDF](https://img.shields.io/badge/PDF-View-blue.svg)](https://arxiv.org/pdf/2605.00973)
+ [![GitHub](https://img.shields.io/github/stars/hzhou3/xMAE?style=social)](https://github.com/hzhou3/xMAE)
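The masked-reconstruction objective the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the GitHub repo for that): the helper names, patch length, and mask ratio below are hypothetical choices that only show the MAE-style mechanics — split a signal into patches, hide a random subset, and score reconstruction error on the hidden patches only.

```python
import numpy as np

def mask_patches(signal, patch_len=25, mask_ratio=0.75, seed=0):
    """Split a 1-D biosignal into non-overlapping patches and pick a
    random subset to mask (MAE-style). Returns (patches, mask) where
    mask[i] is True for patches the decoder must reconstruct.
    Hypothetical helper for illustration only."""
    rng = np.random.default_rng(seed)
    n = len(signal) // patch_len
    patches = signal[: n * patch_len].reshape(n, patch_len)
    n_masked = max(1, int(mask_ratio * n))
    mask = np.zeros(n, dtype=bool)
    mask[rng.permutation(n)[:n_masked]] = True
    return patches, mask

def masked_recon_loss(pred, target, mask):
    """MSE computed only over masked patches, as in masked-autoencoder
    objectives. In the cross-modal setting, `pred` would come from a
    decoder conditioned on the other modality (e.g. ECG context used
    to reconstruct masked PPG patches)."""
    return float(((pred - target) ** 2)[mask].mean())

# Toy example: a synthetic "PPG" segment of 250 samples -> 10 patches,
# of which 7 are masked at the default 0.75 ratio.
ppg = np.sin(np.linspace(0, 8 * np.pi, 250))
patches, mask = mask_patches(ppg)
loss = masked_recon_loss(np.zeros_like(patches), patches, mask)
```

In the actual framework the zero-filled prediction would be replaced by the decoder's output, and the encoder/decoder weights would be trained to drive this loss down on the masked patches alone.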