Access Request & User Agreement

This repository is publicly accessible, but you must accept the conditions below to access its files and content.

To access this dataset, you must first review and agree to our user agreement by filling out this form: Link to Google Form.
Important: The email address you provide in the Google Form MUST match your Hugging Face account email. Once you have submitted the form, click the acknowledge button below. Your request will be reviewed manually and approved within 1-2 business days.
For more information on the challenge, see the Challenge Website and the GitHub page.


DIEM-A Challenge Dataset

Dataset for the MMAC@ACII 2026 Challenge: Multilingual and Multimodal Affective Computing Workshop. The task is to classify performer emotional intent from full-body motion capture data across 12 emotion categories: anger, contempt, disgust, fear, joy, sadness, surprise, gratitude, guilt, jealousy, shame, and pride.

Dataset Overview

  • Source: DIEM-A (Diverse Intercultural E-Motion Database of Asian Performers) dataset
  • Performers: 92 professional artists (49 Japanese, 43 Taiwanese)
  • Modalities: 3 motion capture formats (.bvh, .fbx, .c3d)
  • Skeleton: 24 joints
  • Intensity levels: Low (L), Medium (M), High (H)
Split   Performers          Sequences   Labels
Train   74 (40 JP, 34 TW)   7,992       Provided (emotion + scenario text)
Test    18 (9 JP, 9 TW)     1,944       Hidden
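Each .bvh file opens with a plain-text HIERARCHY section that declares the skeleton's joints (24 in this dataset). As a minimal sketch, the joint names can be pulled out with a regular expression; the sample string below is a toy two-joint hierarchy for illustration, not an actual DIEM-A file:

```python
import re

def bvh_joint_names(bvh_text: str) -> list[str]:
    """Extract joint names from the HIERARCHY section of a BVH file."""
    # ROOT introduces the skeleton root; JOINT introduces each child joint.
    # "End Site" blocks carry no name and are deliberately not matched.
    return re.findall(r"^\s*(?:ROOT|JOINT)\s+(\S+)", bvh_text, flags=re.M)

sample = """HIERARCHY
ROOT Hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0 10 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 5 0
    }
  }
}
MOTION
Frames: 1
Frame Time: 0.0333
0 0 0 0 0 0 0 0 0
"""
print(bvh_joint_names(sample))  # -> ['Hips', 'Spine']
```

On a real DIEM-A file, the returned list should have 24 entries, which is a quick sanity check after download.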

Structure

.
├── bvh/
│   ├── train/          # 7,992 BVH files
│   └── test/           # 1,944 BVH files
├── fbx/
│   ├── train/
│   └── test/
├── c3d/
│   ├── train/
│   └── test/
β”œβ”€β”€ train_data.csv      # Labels, scenarios, and metadata
β”œβ”€β”€ test_data.csv       # Metadata only (no labels or scenarios)
└── README.md
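A minimal sketch of reading the label file with Python's csv module. The column names used here (file, emotion, intensity, scenario) are assumptions for illustration only, since the actual schema of train_data.csv is not documented above; check the real header row before relying on them:

```python
import csv
import io

# Hypothetical rows mimicking train_data.csv; real column names may differ.
csv_text = (
    "file,emotion,intensity,scenario\n"
    "bvh/train/0001.bvh,joy,H,Reunited with a friend\n"
    "bvh/train/0002.bvh,shame,L,Forgot an appointment\n"
)

# In practice: open("train_data.csv", newline="") instead of io.StringIO.
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Group file paths by emotion label, e.g. to build per-class file lists.
by_emotion: dict[str, list[str]] = {}
for row in rows:
    by_emotion.setdefault(row["emotion"], []).append(row["file"])

print(by_emotion)
```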

Baseline

STGCN++ with leave-performer-out 10-fold cross-validation:

Metric     Score
Accuracy   27.1% (±3.7%)
Macro-F1   25.2% (±4.5%)

Random baseline: 8.33% (1/12). See the benchmark repo for code and details.

Evaluation

Submissions are ranked by Macro-F1 and Accuracy on the hidden test set. A bonus explainability award is also given.
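Macro-F1 is the unweighted mean of the per-class F1 scores over the 12 emotion categories, so rare classes weigh as much as common ones. A minimal pure-Python sketch of the metric (scikit-learn's `f1_score(average="macro")` computes the same quantity):

```python
EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness",
            "surprise", "gratitude", "guilt", "jealousy", "shame", "pride"]

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores; a class absent from both
    y_true and y_pred contributes an F1 of 0."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(labels)

# Toy example: "joy" and "fear" each get F1 = 2/3, the other 10 classes 0,
# so the macro average is (2/3 + 2/3) / 12 = 1/9.
score = macro_f1(["joy", "joy", "fear"], ["joy", "fear", "fear"], EMOTIONS)
print(round(score, 4))
```

Averaging over all 12 classes, including those never predicted, is what makes degenerate majority-class predictors score poorly on this metric.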

Citation

If you use this dataset, please cite the DIEM-A dataset paper (see challenge website for details).
