AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization

ICLR 2026 · Rio de Janeiro, Brazil



Official checkpoint for the ICLR 2026 paper - AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization

📄 Abstract

Emotion understanding is essential for building socially intelligent agents. Although recent multimodal large language models (MLLMs) have shown strong performance on this task, two key challenges remain: (i) spurious associations between emotions and irrelevant audiovisual cues (reasoning errors) and (ii) hallucination of audiovisual cues (perception errors) driven by text priors in the language model backbone.

To quantify and understand these issues, we introduce EmoReAlM, a benchmark designed to evaluate MLLMs for cue–emotion associations, hallucinations, and modality agreement. We then propose AVEm-DPO, a preference optimization technique that aligns model responses with both audiovisual inputs and emotion-centric queries. Specifically, we construct preferences over (i) responses exhibiting spurious associations or hallucinations and (ii) audiovisual input pairs guided by textual prompts. We also include a regularization term that penalizes reliance on text priors, thereby mitigating modality-specific cue hallucinations.

Experimental results on DFEW, RAVDESS, and EMER demonstrate that our method significantly improves the performance of reference baseline models (6–19% relative improvement) in zero-shot settings.
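The training code is not yet released, so as a rough illustration of the idea described above, here is a minimal sketch of a DPO-style preference objective with an added text-prior penalty. All function and argument names, the use of PyTorch, and the exact form of the regularizer are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def avem_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                  ref_chosen_logps, ref_rejected_logps,
                  text_only_logps, beta=0.1, lam=0.01):
    """DPO-style preference loss plus a text-prior penalty (hypothetical sketch).

    Each argument is a tensor of per-response log-probabilities.
    `text_only_logps` is the policy's log-probability of the chosen response
    when the audiovisual input is masked out; penalizing it discourages
    answers that are recoverable from text priors alone.
    """
    # Standard DPO: the reward for each response is the scaled log-ratio
    # between the policy and the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    preference_loss = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Regularizer: penalize responses the model would still assign high
    # probability to without the audiovisual input (assumed form; the
    # paper's actual regularization term may differ).
    text_prior_penalty = lam * text_only_logps.exp()

    return (preference_loss + text_prior_penalty).mean()
```

Under this sketch, preference pairs over spurious/hallucinated responses enter through the chosen/rejected terms, while the penalty term targets modality-specific cue hallucinations driven by the language-model backbone.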


📣 News

| Date | Update |
|------|--------|
| Mar. 2026 | Official codebase is now public. Initial release includes evaluation scripts for EmoReAlM and other emotion benchmarks. Training code coming soon. |
| Mar. 2026 | Model weights for AVERE-7B are live on 🤗 HuggingFace → chaubeyG/AVERE-7B. |
| Feb. 2026 | Pleased to announce our CVPR 2026 paper on using DPO to mitigate cross-modal hallucinations in omni-LLMs → MoD-DPO. |
| Jan. 2026 | EmoReAlM benchmark released on 🤗 HuggingFace → chaubeyG/EmoReAlM. |
| Jan. 2026 | AVERE accepted to ICLR 2026. See you in Rio de Janeiro! |

πŸ† Results

Detailed results and the EmoReAlM leaderboard are available on the project website: avere-iclr.github.io

To submit your model to the EmoReAlM leaderboard, please contact the first author at achaubey@usc.edu.


🔧 Installation and Instructions

Please visit our GitHub repository, ihp-lab/AVERE, for detailed instructions on using the model checkpoints for inference.


βš–οΈ License

This codebase is distributed under the USC Research License. See LICENSE.rst for details.

Portions of this codebase are derived from Vista-DPO and VideoLLaVA; those portions inherit their respective licenses.


🙌 Credits

AVERE builds upon excellent open-source works, including Vista-DPO and VideoLLaVA.

We gratefully acknowledge their contributions to the open-source community.


🪶 Citation

If you find AVERE or EmoReAlM useful in your research, please cite:

@inproceedings{chaubey2026avere,
  title     = {{AVERE}: Improving Audiovisual Emotion Reasoning with Preference Optimization},
  author    = {Ashutosh Chaubey and Jiacheng Pang and Maksim Siniukov and Mohammad Soleymani},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026},
  url       = {https://openreview.net/forum?id=td682AAuPr}
}