# FaceEmo-Set Features (ViT Representations)
## Overview
This dataset provides feature representations extracted from multiple facial emotion recognition datasets using a Vision Transformer (ViT-Base/16) model.
Instead of releasing raw images, which cannot be redistributed because some source materials are subject to copyright restrictions, we provide:
- High-level feature embeddings
- Model outputs (logits and probabilities)
- Structured metadata for reproducibility
This enables reproducible research and downstream analysis without access to the original visual data.
## Motivation
Traditional facial emotion recognition benchmarks often suffer from:
- Severe class imbalance (e.g., FER2013)
- Limited diversity of sources
- Poor cross-dataset generalization
This dataset is designed to support data-centric analysis and representation-level research, particularly in the context of:
- Cross-dataset evaluation
- Domain shift robustness
- Representation-level benchmarking
- Feature-space analysis, including multimodal systems
## Source Datasets
Feature representations are extracted from a combined dataset including:
- FaceEmo-Set
- FER2013
- RAF-DB
- RAVDESS
- AffectNet (used for evaluation only)
Each sample is associated with its original dataset source via metadata.
## Feature Extraction
- Backbone: ViT-Base/16
- Pretraining: ImageNet-21k
- Fine-tuning: Combined dataset (FaceEmo-Set + FER2013 + RAF-DB + RAVDESS)
- Feature type: CLS token from the final hidden layer
- Feature dimension: 768
Features are extracted using the same model weights available at:
https://huggingface.co/jihedjabnoun/faceemo-set
## Dataset Structure

### Splits
| Split | Samples |
|---|---|
| Train | 62,902 |
| Validation | 15,946 |
| Test (FER2013) | 7,178 |
| Test (AffectNet) | 3,006 |
### Feature Files
Each `.npz` file contains:
- `features`: shape `(N, 768)`
- `logits`: shape `(N, 7)`
- `probs`: shape `(N, 7)`
Files:
- `combined_vit_features_train.npz`
- `combined_vit_features_val.npz`
- `combined_vit_features_test_fer2013.npz`
- `combined_vit_features_test_affectnet.npz`
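As a quick sanity check, the layout of the released `.npz` files can be mimicked with dummy arrays. This is only a sketch: the shapes come from this card, while `N = 8` and the array contents are arbitrary placeholders.

```python
import io
import numpy as np

# Mimic one released .npz file with random dummy arrays.
# N is arbitrary here; the real splits are far larger.
N, D, C = 8, 768, 7
buf = io.BytesIO()
np.savez(
    buf,
    features=np.random.randn(N, D).astype(np.float32),
    logits=np.random.randn(N, C).astype(np.float32),
    probs=np.full((N, C), 1.0 / C, dtype=np.float32),
)
buf.seek(0)

data = np.load(buf)
assert data["features"].shape == (N, D)
assert data["logits"].shape == (N, C)
assert np.allclose(data["probs"].sum(axis=1), 1.0)  # rows are valid distributions
print(sorted(data.files))  # ['features', 'logits', 'probs']
```

The same three keys (`features`, `logits`, `probs`) are present in each of the four released files, so the snippet above applies to any of them after `np.load`.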
### Metadata

`combined_vit_metadata_all.csv`

Columns:
- `sample_id`
- `Dataset`
- `Emotion`
- `Split`
- `valid_image`
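A minimal sketch of working with these columns, using a made-up stand-in frame: the column names match this card, but the rows and the exact `Split` string values are assumptions for illustration only.

```python
import pandas as pd

# Toy stand-in for combined_vit_metadata_all.csv; rows are invented.
df = pd.DataFrame({
    "sample_id": [0, 1, 2, 3],
    "Dataset": ["FER2013", "RAF-DB", "RAVDESS", "FER2013"],
    "Emotion": ["happiness", "anger", "neutral", "fear"],
    "Split": ["Train", "Train", "Validation", "Test (FER2013)"],
    "valid_image": [True, True, True, False],
})

# Select valid training rows per source dataset,
# e.g. as a starting point for cross-dataset experiments.
train = df[(df["Split"] == "Train") & df["valid_image"]]
print(train["Dataset"].value_counts().to_dict())
```

Because every sample carries its `Dataset` source, the same filtering pattern supports leave-one-dataset-out evaluation on the real metadata file.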
### Summary

`combined_vit_feature_summary.csv`
## Labels
Seven emotion classes:
- anger
- disgust
- fear
- happiness
- neutral
- sadness
- surprise
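Assuming the classes are indexed in the alphabetical order listed above (an assumption; check the released model's config for the actual index-to-label mapping), predicted labels can be recovered from the `probs` array:

```python
import numpy as np

# Assumed index-to-label mapping: alphabetical order of the seven classes.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

# Two dummy probability rows standing in for rows of the released probs array.
probs = np.array([
    [0.02, 0.01, 0.72, 0.05, 0.10, 0.05, 0.05],  # fear-dominant row
    [0.05, 0.02, 0.03, 0.80, 0.05, 0.03, 0.02],  # happiness-dominant row
])
pred_idx = probs.argmax(axis=1)
pred_labels = [EMOTIONS[i] for i in pred_idx]
print(pred_labels)  # ['fear', 'happiness']
```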
## How to Load the Dataset
### Option 1: Google Colab
```python
from huggingface_hub import hf_hub_download
import numpy as np

train_path = hf_hub_download(
    repo_id="jihedjabnoun/faceemo-set-features",
    filename="combined_vit_features_train.npz",
    repo_type="dataset",
)

data = np.load(train_path)
X = data["features"]
logits = data["logits"]
probs = data["probs"]
print(X.shape)
```
### Option 2: Load Metadata
```python
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="jihedjabnoun/faceemo-set-features",
    filename="combined_vit_metadata_all.csv",
    repo_type="dataset",
)

df = pd.read_csv(csv_path)
print(df.head())
```
### Option 3: Download Full Dataset
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="jihedjabnoun/faceemo-set-features",
    repo_type="dataset",
)
print(local_dir)
```
### Option 4: CLI

```bash
huggingface-cli download jihedjabnoun/faceemo-set-features --repo-type dataset --local-dir ./faceemo_features
```
## Feature Inversion & Reconstruction Analysis
To evaluate whether the released feature representations retain recoverable visual information, we conducted a feature inversion experiment using gradient-based optimization.
### Method
- Input: CLS embeddings (768-dimensional)
- Model: the same ViT used for feature extraction
- Objective:
  - minimize feature distance in embedding space
  - optionally match logits
- Regularization:
  - total variation (TV)
  - L2 pixel prior
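The objective above can be sketched as follows. This is a minimal illustration assuming an anisotropic TV penalty and placeholder regularization weights; the exact formulation and weights used in the actual experiment may differ.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute neighbor differences."""
    dh = np.abs(np.diff(img, axis=0)).sum()
    dw = np.abs(np.diff(img, axis=1)).sum()
    return dh + dw

def inversion_loss(feat, target_feat, img, tv_weight=1e-4, l2_weight=1e-5):
    # Feature-matching term plus the two priors named above.
    # The weights are placeholders, not the values used in the experiment.
    feat_term = np.mean((feat - target_feat) ** 2)
    return feat_term + tv_weight * total_variation(img) + l2_weight * np.sum(img ** 2)

# Dummy check: a constant image has zero total variation,
# so only the L2 pixel prior contributes here.
img = np.ones((8, 8))
assert total_variation(img) == 0.0
loss = inversion_loss(np.zeros(768), np.zeros(768), img)
print(round(loss, 6))  # 0.00064
```

In the actual experiment this scalar loss is minimized over the image pixels with gradient-based optimization, with the ViT forward pass producing `feat` from the current image.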
### Example Results
*(Figures: original vs. reconstructed image, and optimization behavior over iterations)*
### Quantitative Example
- Original predicted class: fear (0.72)
- Reconstructed predicted class: fear (0.71)
- Final cosine similarity: ~0.96
- Final CLS MSE: ~0.005
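The two reported metrics can be computed as follows; the vectors here are dummy stand-ins for the original and reconstructed CLS features, not the actual experimental values.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cls_mse(a, b):
    return float(np.mean((a - b) ** 2))

# Dummy 768-d vectors: a small perturbation of the "original"
# yields high cosine similarity and low MSE, as in the experiment.
rng = np.random.default_rng(0)
orig = rng.normal(size=768)
recon = orig + 0.05 * rng.normal(size=768)

print(cosine_similarity(orig, recon) > 0.95)  # True
print(cls_mse(orig, recon) < 0.01)            # True
```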
### Key Observations
- High feature alignment was achieved in embedding space (cosine similarity > 0.95)
- Reconstructed images preserve:
- coarse facial structure
- emotion-related cues
- semantic characteristics sufficient to recover the predicted class in this example
- Reconstructed images do not preserve:
- identity
- fine visual details
- exact pixel-level appearance
### Core Insight

**High similarity in feature space does not imply faithful visual reconstruction.**
Despite strong alignment in embedding space, the reconstructed image differs substantially from the original at the pixel level. This suggests that the released CLS representations retain semantic information while discarding most fine-grained spatial detail.
### Conclusion
- CLS embeddings retain semantic information
- Exact image reconstruction is not observed
- Approximate inversion is possible but remains visually ambiguous
## Reproducible Notebook

The full inversion pipeline is provided in `upload_predict_cls_reconstruct_compare_colab.ipynb`.
This notebook allows users to:
- upload an image
- run prediction with the released model
- extract its CLS feature
- reconstruct an image from that feature
- compare original and reconstructed outputs
## Applications
- Cross-dataset generalization studies
- Representation learning analysis
- Domain shift evaluation
- Multimodal fusion research
- Low-resource training
## Limitations
- No raw images
- No temporal modeling
- Limited to 7 emotions
- Potential dataset bias
- No landmarks
- Feature inversion may reveal approximate semantic information
## Ethical Considerations
Emotion recognition systems may:
- Encode biases
- Be misused in surveillance
Additionally:
- Feature representations may allow approximate inversion
- Reconstructed images in our experiments do not preserve identity or exact appearance
Users should:
- Evaluate fairness
- Avoid high-risk deployment
- Consider privacy implications of feature sharing
## Reproducibility
All features were extracted using a single trained ViT model to ensure consistency.
The inversion example included in this repository is intended as a transparency and analysis tool to help users understand what information may remain in released embeddings.
## Citation

```bibtex
@inproceedings{jabnoun2026faceemoset,
  title={Improving Cross-Dataset Generalization in Facial Emotion Recognition Through FaceEmo-Set: A Balanced and Diverse Dataset},
  author={Jabnoun, Jihed and Maraoui, Mohsen and Zrigui, Mounir},
  booktitle={18th Asian Conference on Intelligent Information and Database Systems (ACIIDS)},
  year={2026},
  address={Kaohsiung, Taiwan},
  month={April}
}
```
## Acknowledgments
University of Monastir, Tunisia
## License
MIT License