---
license: cc-by-nc-4.0
viewer: false
extra_gated_prompt: >-
  This dataset is provided for academic research and MER2026 challenge participation only.
  By requesting access, your team confirms that all submitted information is accurate and complete.
  The dataset, annotations, and any derived files must not be redistributed, mirrored, modified, or used for commercial purposes without prior written permission.
  Access will be granted only after manual review by the MER2026 organizers.
extra_gated_fields:
  Team Name: text
  Team Leader Name: text
  Team Leader Email: text
  Team Members (comma-separated): text
  Organization / University / Company: text
  Country / Region: country
  I confirm that my team will use this dataset for academic research and challenge participation only: checkbox
  I agree not to redistribute the dataset, annotations, or derived files without written permission: checkbox
language:
- en
---
## Dataset Access Form
Please **follow this format before submitting the gated form**. Many requests are rejected because the team information does not match the expected format.
### Example Application
| Field | Example |
| :--- | :--- |
| Team Name | Tongji-Affect-Lab |
| Team Leader Name | Alice Chen |
| Team Leader Email | alice.chen@university.edu |
| Team Members (comma-separated) | Alice Chen, Bob Li, Carol Wang |
| Organization / University / Company | Tongji University |
| Country / Region | China |
- Submit **one request per team**, not one request per member.
- The request should be submitted by the **team leader or the main contact person**.
- Make sure the team name and member list are consistent with your challenge registration.
---
# MER2026
Official gated dataset page for the **[MER2026 Challenge](https://zeroqiaoba.github.io/MER-Challenge/)** at **ACM Multimedia 2026**.
From **Discriminative Emotion Recognition** to **Generative Emotion Understanding**.
- Baseline code: [MERTools for MER2026](https://github.com/zeroQiaoba/MERTools)
- Baseline paper: [MER 2026: From Discriminative Emotion Recognition to Generative Emotion Understanding](https://arxiv.org/abs/2604.19417)
---
## What Is MER2026?
MER2026 is the fourth edition of the MER challenge series. The series provides data resources to the research community and offers tasks aligned with current research trends, making it one of the largest challenges in this field.
Throughout its history, the focus of MER has shifted from discriminative emotion recognition to generative emotion understanding. MER2023 concentrated on discriminative emotion recognition with fixed basic labels. In MER2024 and MER2025, we transitioned to generative emotion understanding, leveraging the extensive vocabulary and multimodal understanding capabilities of MLLMs to facilitate fine-grained and explainable emotion recognition.
Building on this trajectory, MER2026 contains four tracks: **MER-Cross**, **MER-FG**, **MER-Prefer**, and **MER-PS**.
---
## Four Challenge Tracks
### MER-Cross: Interlocutor Emotion
A newly introduced track that shifts the focus from individual scenarios to dyadic interactions. While speaker `s1` is talking, MER-Cross targets the emotion of the listener `s2` rather than of the speaker, so models must capture both sides of a conversation.
### MER-FG: Fine-grained Emotion
Human emotion extends far beyond a small set of basic labels. In this track, participants can predict any number of emotion labels across diverse categories, expanding recognition scope from basic to more nuanced emotions.
### MER-Prefer: Emotion Preference
A newly introduced track in which models predict which of two emotion descriptions human annotators prefer for a given video. Preference prediction is an important building block for reward modeling in emotion understanding; a generic sketch follows below.
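To make the reward-modeling connection concrete, here is a generic sketch of a Bradley-Terry style pairwise preference loss in plain PyTorch. This is **not** the official MER-Prefer baseline; the scoring model, feature dimensions, and tensor shapes are placeholders.

```python
# Generic Bradley-Terry style pairwise preference loss; NOT the official MER-Prefer baseline.
# `reward_model` can be any module mapping video/description features to a scalar score.
import torch
import torch.nn.functional as F


def preference_loss(reward_model, feats_preferred, feats_rejected):
    """Encourage the preferred description to score higher than the rejected one."""
    score_pref = reward_model(feats_preferred)  # shape: (batch, 1)
    score_rej = reward_model(feats_rejected)
    # -log sigmoid(score_pref - score_rej) is minimized when score_pref >> score_rej.
    return -F.logsigmoid(score_pref - score_rej).mean()


# Toy usage with a placeholder linear scorer over 128-dimensional features.
reward_model = torch.nn.Linear(128, 1)
feats_preferred = torch.randn(4, 128)  # features of the preferred descriptions
feats_rejected = torch.randn(4, 128)   # features of the rejected descriptions
loss = preference_loss(reward_model, feats_preferred, feats_rejected)
loss.backward()
```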
### MER-PS: Physiological Signal Emotion
This track shifts emotion recognition from observable behaviors to physiological evidence. It uses synchronized EEG and fNIRS signals, together with real-time dynamic valence-arousal annotation, to estimate emotion trajectories from brain signals.
---
## Important Dates
| Event | Date |
| :--- | :--- |
| Data, baseline paper, and code available | April 30, 2026 |
| Results submission opens | June 26, 2026 |
| Results submission deadline | July 13, 2026 |
| Paper submission deadline | July 22, 2026 |
| Paper acceptance notification | July 30, 2026 |
| Camera-ready paper deadline | August 6, 2026 |
| ACM Multimedia 2026 | November 10-14, 2026 |
> All deadlines are **23:59 Anywhere on Earth (AoE)**.
---
## Resources
- Official website: [MER2026 Challenge & MRAC2026 Workshop](https://zeroqiaoba.github.io/MER-Challenge/)
- Baseline code: [https://github.com/zeroQiaoba/MERTools](https://github.com/zeroQiaoba/MERTools)
- Baseline paper: [MER 2026: From Discriminative Emotion Recognition to Generative Emotion Understanding](https://arxiv.org/abs/2604.19417)
---
## Access Policy
- Academic research and official challenge participation only.
- Manual approval is required.
- No redistribution, re-hosting, or unauthorized modification.
After approval, please check **`README_AFTER_APPROVAL.md`** for download instructions.
Large raw data files are currently distributed as compressed archives and should be extracted locally when needed.
**Important:** Approved participants should read the post-approval guide below before downloading archive packages.
![Post-approval guide](README_AFTER_APPROVAL.png)
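For orientation only, the sketch below shows one way to download the gated repository with `huggingface_hub` and extract the compressed archives locally. The repository id `MERChallenge/MER2026`, the output directory, and the `.tar.gz` archive format are assumptions; follow `README_AFTER_APPROVAL.md` for the authoritative steps.

```python
# Minimal sketch only; the authoritative download steps are in README_AFTER_APPROVAL.md.
# Assumptions: the gated repo id is "MERChallenge/MER2026" and archives are .tar.gz files.
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Requires a prior `huggingface-cli login` (or token=...) with an approved account.
local_dir = snapshot_download(
    repo_id="MERChallenge/MER2026",  # assumed repo id; check the dataset page URL
    repo_type="dataset",
    local_dir="MER2026_data",
)

# Extract every tar.gz archive into a sibling directory of the same name.
for archive in Path(local_dir).rglob("*.tar.gz"):
    out_dir = archive.with_suffix("").with_suffix("")  # strip ".tar.gz"
    out_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tf:
        tf.extractall(out_dir)
    print(f"extracted {archive.name} -> {out_dir}")
```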
---
## Contact
- `merchallenge.contact@gmail.com`
- `lianzheng@tongji.edu.cn`
---
## License
Non-commercial academic use only under gated access and organizer restrictions.
By requesting access, users agree to provide accurate team information and not to redistribute the dataset, annotations, or derived files.