# [Static for Dynamic: Towards a Deeper Understanding of Dynamic Facial Expressions Using Static Expression Data](https://arxiv.org/pdf/2409.06154)
<img width="1024" height="506" alt="image" src="https://github.com/user-attachments/assets/db750330-84e2-4128-96c3-77c4a8fdc76c" />

## 📰 News

**[2025.9.17]** Our previous work [S2D](https://github.com/MSA-LMC/S2D/tree/main) has been recognized as a Highly Cited Paper by Clarivate.

**[2025.9.17]** The code and pre-trained models are now available.

**[2025.9.15]** The paper has been accepted by IEEE Transactions on Affective Computing.

~~**[2024.9.5]** Code and pre-trained models will be released here.~~

## 🚀 Main Results
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/31b131e1-6530-4486-9bb4-a006fe464d32" />

<img width="1024" height="464" alt="image" src="https://github.com/user-attachments/assets/41904e7a-31cb-4025-badc-4fdc979b1763" />

<img width="1024" height="377" alt="image" src="https://github.com/user-attachments/assets/237962f6-4aa8-4855-b7d0-306df5d0ee73" />

## Pre-training and Fine-tuning
1. Download the pre-trained weights from [Huggingface](https://huggingface.co/cyinen/S4D) and move them to the `finetune/checkpoints/pretrain/voxceleb2+AffectNet` directory.

2. Run the following commands to pre-train or fine-tune the model on the target dataset.
```bash
# create the environment
conda create -n s4d python=3.9
conda activate s4d
pip install -r requirements.txt

# pre-train
cd pretrain/omnivision && OMP_NUM_THREADS=1 HYDRA_FULL_ERROR=1 python train_app_submitit.py +experiments=videomae/videomae_base_vox2_affectnet

# fine-tune
cd finetune && bash run.sh
```
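Step 1 (fetching the pre-trained weights) can also be scripted instead of done through the browser. A minimal sketch, assuming the `huggingface_hub` package is available in the environment and that the `cyinen/S4D` repo linked above hosts the checkpoint files at its root:

```shell
# assumption: huggingface_hub is not already pulled in by requirements.txt
pip install -U huggingface_hub

# download the weights into the directory the fine-tuning scripts expect
huggingface-cli download cyinen/S4D \
    --local-dir finetune/checkpoints/pretrain/voxceleb2+AffectNet
```

This keeps the download reproducible on a headless server; adjust `--local-dir` if your checkout uses a different checkpoint layout.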
## ✏️ Citation

If you find this work helpful, please consider citing:

```bibtex
@ARTICLE{10663980,
  author={Chen, Yin and Li, Jia and Shan, Shiguang and Wang, Meng and Hong, Richang},
  journal={IEEE Transactions on Affective Computing},
  title={From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos},
  year={2024},
  volume={},
  number={},
  pages={1-15},
  keywords={Adaptation models;Videos;Computational modeling;Feature extraction;Transformers;Task analysis;Face recognition;Dynamic facial expression recognition;emotion ambiguity;model adaptation;transfer learning},
  doi={10.1109/TAFFC.2024.3453443}}

@ARTICLE{11207542,
  author={Chen, Yin and Li, Jia and Zhang, Yu and Hu, Zhenzhen and Shan, Shiguang and Wang, Meng and Hong, Richang},
  journal={IEEE Transactions on Affective Computing},
  title={Static for Dynamic: Towards a Deeper Understanding of Dynamic Facial Expressions Using Static Expression Data},