If you wish to use this model for commercial purposes, please obtain a license by contacting: info@tabularis.ai
🎭 Multilingual Emotion Classification Model (23 Languages, 11 Emotions)
Model Details
- Model Name: tabularisai/multilingual-emotion-classification
- Base Model: FacebookAI/xlm-roberta-base
- Task: Multi-label Text Classification (Emotion Recognition)
- Languages: 23 — English, Mandarin Chinese (中文), Spanish (Español), Hindi (हिन्दी), Arabic (العربية), Bengali (বাংলা), Portuguese (Português), Russian (Русский), Japanese (日本語), German (Deutsch), Indonesian (Bahasa Indonesia), Tamil (தமிழ்), Vietnamese (Tiếng Việt), Korean (한국어), French (Français), Turkish (Türkçe), Italian (Italiano), Polish (Polski), Ukrainian (Українська), Urdu (اردو), Dutch (Nederlands), Punjabi (ਪੰਜਾਬੀ), and Swahili
- Number of Classes: 11 — anger, contempt, disgust, fear, frustration, gratitude, joy, love, neutral, sadness, surprise
- Label Mode: Multi-label — each text can be assigned zero, one, or multiple emotions (independent sigmoid heads, τ = 0.5)
- Usage:
  - Social media emotion analysis
  - Customer feedback analysis
  - Product review emotion tagging
  - Brand monitoring
  - Conversational AI / chatbot affect tracking
  - Market research
  - Customer service optimization
Model Description
This model is a fine-tuned version of FacebookAI/xlm-roberta-base for multilingual multi-label emotion classification. It was trained on synthetic multilingual data covering 23 languages and 11 emotion categories, enabling robust emotion detection across languages, registers, and cultural contexts.
Unlike single-label sentiment classifiers, this model predicts a set of emotions per input — reflecting the reality that utterances often carry mixed affect (e.g. gratitude + love, frustration + sadness).
Training Data
Trained on synthetic multilingual data generated by advanced LLMs, providing broad coverage of emotion expressions across all 23 supported languages. All labels are multi-hot vectors over the 11 emotion classes.
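For illustration, a text carrying, say, gratitude and joy would be encoded roughly as follows. This is a minimal sketch assuming the label order used in this card; the actual preprocessing pipeline is not published here.

```python
# Illustrative only: how an emotion set maps to a multi-hot target vector.
# Label order matches the 11 classes listed above.
LABELS = ["anger", "contempt", "disgust", "fear", "frustration",
          "gratitude", "joy", "love", "neutral", "sadness", "surprise"]

def to_multi_hot(emotions):
    """Convert a set of emotion names into an 11-dim 0/1 vector."""
    return [1.0 if label in emotions else 0.0 for label in LABELS]

# "Thank you so much, I owe you one!" -> gratitude + joy
print(to_multi_hot({"gratitude", "joy"}))
# [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```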
Training Procedure
- Fine-tuned for 3 epochs with BCEWithLogitsLoss (an independent binary objective per label).
- Cosine LR schedule with 6% warmup, lr = 2e-5, effective batch size 64.
- Mixed precision (bf16) on a single A100; max sequence length 192.
- Per-epoch checkpointing; epoch 3 was selected by validation F1-micro.
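A rough reproduction of this recipe with the Hugging Face Trainer is sketched below. The dataset object, column names, and checkpoint-selection plumbing are assumptions for illustration, not the original training script.

```python
# Approximate fine-tuning setup (illustrative; dataset and column names are assumed).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

LABELS = ["anger", "contempt", "disgust", "fear", "frustration",
          "gratitude", "joy", "love", "neutral", "sadness", "surprise"]

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # Trainer then uses BCEWithLogitsLoss
)

def tokenize(batch):
    # Labels must be float multi-hot vectors for BCEWithLogitsLoss.
    enc = tokenizer(batch["text"], truncation=True, max_length=192)
    enc["labels"] = [[float(v) for v in row] for row in batch["labels"]]
    return enc

args = TrainingArguments(
    output_dir="xlmr-emotion-multilabel",
    num_train_epochs=3,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # effective batch size 64
    bf16=True,
    save_strategy="epoch",           # per-epoch checkpoints; best epoch chosen by F1-micro
)

# train_dataset is a placeholder for the synthetic multilingual data.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset.map(tokenize, batched=True))
# trainer.train()
```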
Evaluation (held-out multilingual test set, 11,500 rows)
| Metric | Value |
|---|---|
| F1 (micro) | 0.840 |
| F1 (macro) | 0.839 |
| Jaccard (samples) | 0.794 |
| Subset accuracy | 0.640 |
| Hamming accuracy | 0.953 |
| AUROC (micro) | 0.980 |
| Average Precision (micro) | 0.923 |
| LRAP | 0.936 |
Decision threshold: τ = 0.5 applied independently per label.
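For reference, the metrics in the table can be reproduced from per-label probabilities with scikit-learn; `y_true` and `y_prob` below are placeholders for the held-out multi-hot labels and model scores.

```python
# Sketch of the evaluation metrics above (shapes: [n_samples, 11]).
import numpy as np
from sklearn.metrics import (f1_score, jaccard_score, accuracy_score, hamming_loss,
                             roc_auc_score, average_precision_score,
                             label_ranking_average_precision_score)

def multilabel_report(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "f1_micro": f1_score(y_true, y_pred, average="micro"),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "jaccard_samples": jaccard_score(y_true, y_pred, average="samples"),
        "subset_accuracy": accuracy_score(y_true, y_pred),
        "hamming_accuracy": 1.0 - hamming_loss(y_true, y_pred),
        "auroc_micro": roc_auc_score(y_true, y_prob, average="micro"),
        "ap_micro": average_precision_score(y_true, y_prob, average="micro"),
        "lrap": label_ranking_average_precision_score(y_true, y_prob),
    }

# Example call: print(multilabel_report(y_true, y_prob))
```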
Intended Use
Ideal for:
- Multilingual social media emotion monitoring
- International customer feedback affect analysis
- Global product review emotion tagging
- Worldwide brand sentiment & emotion tracking
- Affect-aware conversational systems
How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "tabularisai/multilingual-emotion-classification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

LABELS = ["anger", "contempt", "disgust", "fear", "frustration",
          "gratitude", "joy", "love", "neutral", "sadness", "surprise"]

@torch.no_grad()
def predict_emotions(texts, threshold: float = 0.5):
    inputs = tokenizer(texts, return_tensors="pt", truncation=True,
                       padding=True, max_length=192)
    # Independent sigmoid per label: each emotion gets its own probability.
    probs = torch.sigmoid(model(**inputs).logits).cpu().numpy()
    results = []
    for row in probs:
        picked = [(LABELS[i], float(row[i])) for i in range(len(LABELS)) if row[i] >= threshold]
        picked.sort(key=lambda x: -x[1])
        # Fall back to reporting the neutral score if nothing clears the threshold.
        results.append(picked or [("neutral", float(row[LABELS.index("neutral")]))])
    return results

texts = [
    # English
    "Thank you so much for helping me, I really appreciate it!",
    "I can't believe they cancelled the flight again, this is ridiculous.",
    # Spanish
    "¡Qué alegría verte después de tanto tiempo!",
    "Estoy muy decepcionado con el servicio.",
    # Chinese
    "收到你的礼物我真的很感动,谢谢你!",
    "这部电影太吓人了,我都不敢一个人看。",
    # Arabic
    "أنا ممتن جدًا لكل ما فعلته من أجلي.",
    "لا أستطيع تحمّل هذا الوضع أكثر من ذلك.",
    # Hindi
    "आपका यह तोहफ़ा देखकर मेरी आँखों में आँसू आ गए।",
    "यह सेवा बिल्कुल घटिया थी, मैं बहुत निराश हूँ।",
    # Japanese
    "久しぶりに会えて本当に嬉しいです!",
    "また電車が遅れた...本当にうんざりする。",
    # French
    "Je suis tellement reconnaissant pour tout ce que tu as fait.",
    "C'est inadmissible, j'en ai assez de cette situation.",
    # Swahili
    "Asante sana kwa msaada wako, nakupenda sana!",
    "Nimechoka kabisa na huduma hii mbaya.",
]

for t, r in zip(texts, predict_emotions(texts)):
    tags = ", ".join(f"{lbl}({p:.2f})" for lbl, p in r)
    print(f"Text: {t}\nEmotions: {tags}\n")
```
Using the `pipeline` API (returns a probability for each of the 11 classes):
```python
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="tabularisai/multilingual-emotion-classification",
    function_to_apply="sigmoid",
    top_k=None,
)

print(pipe("I love this product! It's amazing and works perfectly."))
```
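Because the pipeline returns scores for all 11 classes, a per-label threshold still has to be applied separately. The helper below is an illustrative convenience, not part of the transformers API.

```python
# Illustrative helper: keep only emotions whose sigmoid score clears the threshold.
def pipeline_to_labels(pipe_output, threshold: float = 0.5):
    # pipe_output is a list (one entry per input text) of [{"label": ..., "score": ...}, ...]
    return [
        sorted(
            [(d["label"], d["score"]) for d in per_text if d["score"] >= threshold],
            key=lambda x: -x[1],
        )
        for per_text in pipe_output
    ]

print(pipeline_to_labels(pipe(["I love this product! It's amazing and works perfectly."])))
```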
Ethical Considerations
Synthetic training data reduces annotator bias and broadens language coverage, but real-world validation is strongly advised before deploying in high-stakes settings. Emotion labels are culturally situated — predictions should be treated as probabilistic signals, not ground truth about a person's internal state.
Citation
Coming soon.
Contact
For inquiries, custom datasets, private APIs, or improved models, contact info@tabularis.ai
tabularis.ai