---
license: cc-by-4.0
language:
- ur
- en
tags:
- roman-urdu
- emotion-classification
- text-classification
- ekman
- nlp
- low-resource-nlp
- affective-computing
- social-media
- urdu-nlp
- mental-health
- whatsapp
- inter-annotator-agreement
pretty_name: 'RUEmoCorp: Roman Urdu Emotion Corpus'
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- Khubaib01/RomanUrdu-NLP-Sentiment-Corpus
configs:
- config_name: ruemocorp-annotated
  data_files:
    - split: train
      path: RUEmoCorpus.csv
- config_name: ruemocorp-silver
  data_files:
    - split: train
      path: RUEmoCorp_134k_silver/RUEmoCorp_134_labeled.csv
---

# RUEmoCorp

> **The largest publicly available, human-annotated, inter-annotator-agreement-validated emotion dataset for Roman Urdu.**

[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
[![HuggingFace](https://img.shields.io/badge/🤗%20HuggingFace-Khubaib01-orange)](https://huggingface.co/Khubaib01)
[![Harvard Dataverse](https://img.shields.io/badge/Harvard%20Dataverse-Archived-red)](https://dataverse.harvard.edu)
[![Model v2](https://img.shields.io/badge/Model-roman--urdu--emotion--xlmr--v2-green)](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2)
[![IAA: Fleiss κ = 0.66](https://img.shields.io/badge/IAA%20Fleiss%20κ-0.659-brightgreen)](https://huggingface.co/datasets/Khubaib01/roman-urdu-emotion-corpus)

---

## Table of Contents

1. [Dataset Overview](#1-dataset-overview)
2. [Background and Motivation](#2-background-and-motivation)
3. [Dataset Statistics](#3-dataset-statistics)
4. [Inter-Annotator Agreement (IAA)](#4-inter-annotator-agreement-iaa)
5. [Annotation Methodology](#5-annotation-methodology)
6. [Annotation Team](#6-annotation-team)
7. [Data Fields](#7-data-fields)
8. [Data Splits](#8-data-splits)
9. [Source and Collection](#9-source-and-collection)
10. [Associated Model](#10-associated-model)
11. [Related Resources](#11-related-resources)
12. [Datasheet (Gebru et al., 2018)](#12-datasheet-gebru-et-al-2018)
13. [Citation](#13-citation)
14. [Team and Contributions](#14-team-and-contributions)
15. [License and Ethics](#15-license-and-ethics)
16. [Contact](#16-contact)

---

## 1. Dataset Overview

**RUEmoCorp** (Roman Urdu Emotion Corpus) is a large-scale, manually curated, expert-annotated dataset of Roman Urdu social media and conversational texts labeled across **7 emotion categories**: *joy, anger, sadness, fear, disgust, surprise,* and *none*. It is the training corpus behind [`roman-urdu-emotion-xlmr-v2`](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2) — the highest-accuracy open-source emotion classifier for Roman Urdu, achieving **Macro F1 = 0.9896**.

Data was collected from Pakistani social media platforms and WhatsApp conversations and underwent a rigorous multi-phase annotation process by four expert annotators recruited from three independent Pakistani universities. An inter-annotator agreement (IAA) study on a 700-sample benchmark yields **Fleiss' κ = 0.6588** and **Mean Pairwise Cohen's κ = 0.6597**, indicating *substantial agreement* (Landis & Koch, 1977) — a strong result for a 7-class affective labeling task in a low-resource, orthographically irregular language.

The emotion taxonomy adopts Ekman's six universal basic emotions augmented with a *none* class for emotionally neutral utterances — a deliberate design choice absent from prior Roman Urdu emotion work, which has used only four or six categories. Omitting a neutral class forces classifiers to assign emotional labels to neutral text, inflating false positive rates in deployed systems.

**This dataset fills a documented gap**: prior to this release, no large-scale, openly accessible, IAA-validated emotion corpus existed for Roman Urdu, despite Roman Urdu being the dominant digital writing mode for over 230 million Urdu speakers worldwide. RUEmoCorp is permanently archived on Harvard Dataverse (doi:[10.7910/DVN/BPWHOZ](https://doi.org/10.7910/DVN/BPWHOZ)) and released under CC BY 4.0.

---

## 2. Background and Motivation

### 2.1 The Roman Urdu Digital Language Problem

Urdu is the national language of Pakistan and a major language of India, with over 230 million speakers. However, in digital communication — social media, messaging apps, online forums — native speakers overwhelmingly write in **Roman Urdu**: Urdu lexicon and grammar rendered in the Latin script, without standardised orthography.

This creates a profound NLP challenge:

- The same word can be spelled in dozens of valid ways (*khushi*, *khushee*, *kushi*, *khuushi*)
- No standard keyboard layout, no spell-checker, no official romanisation standard
- Extensive code-switching with English at both the word and phrase level
- Existing Urdu NLP resources built for Nastaliq script do not transfer to Roman Urdu

### 2.2 Why Emotion Classification

Emotion classification is foundational for downstream applications in mental health monitoring, social media analysis, customer feedback systems, and conflict detection in multilingual communities. For Roman Urdu specifically, no validated emotion resource existed before this work.

### 2.3 Research Lineage

This dataset is part of a growing research programme on Roman Urdu affective computing:

| Resource | Size | Task | Status |
|----------|------|------|--------|
| [RomanUrdu-NLP-Sentiment-Corpus](https://huggingface.co/datasets/Khubaib01/RomanUrdu-NLP-Sentiment-Corpus) | 134K | 3-class sentiment | Public |
| [roman-urdu-sentiment-xlm-r](https://huggingface.co/Khubaib01/roman-urdu-sentiment-xlm-r) | — | Sentiment model | Public |
| **RUEmoCorp / RUEC-28K (this dataset)** | **28K** | **7-class emotion** | **Public** |
| [roman-urdu-emotion-xlmr-v2](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2) | — | Emotion model | Public |
| RomanUrdu-NLP-Emotion-Corpus-134K | 134K | Emotion (model-labeled) | Forthcoming |

---

## 3. Dataset Statistics

Statistics are reported separately for the 28K human-annotated (gold) training corpus and for the 134K machine-labeled (silver) corpus.

### 3.1 Size and Format

| Property | Value |
|----------|-------|
| Total samples | 28,000 |
| Emotion classes | 7 |
| Annotation format | Single label per sample |
| Language | Roman Urdu (code-switched with English) |
| Script | Latin (Roman) |
| Domain | Social media text |
| Format | CSV / Parquet |

### 3.2 Class Distribution

| Emotion Label | Sample Count | % of Dataset |
|---------------|-------------|-------------|
| Happy | ~4,000 | ~14.3% |
| Sad | ~4,000 | ~14.3% |
| Anger | ~4,000 | ~14.3% |
| Disgust | ~4,000 | ~14.3% |
| Fear | ~4,000 | ~14.3% |
| Surprise | ~4,000 | ~14.3% |
| Neutral | ~4,000 | ~14.3% |

> The dataset was constructed with approximate class balance to ensure unbiased classifier training.

### Dataset Statistics: `RUEmoCorp-silver`

> RUEmoCorp-silver was labeled automatically by [`Khubaib01/roman-urdu-emotion-xlmr-v2`](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2); no human annotation was applied.

#### Overview

| Property | Value |
|----------|-------|
| Total utterances | 134,053 |
| Annotation method | Automated — `roman-urdu-emotion-xlmr-v2` |
| Confidence threshold | ≥ 0.75 (softmax probability) |
| Mean confidence | 0.8039 |
| Median confidence | 0.8733 |
| Mean prediction entropy | 0.7623 |
| Low-confidence rows (< 0.75) | 10,109 (7.54%) |
| Fallback / unresolved rows | 0 (0.00%) |

The high median confidence (0.8733) indicates that the majority of retained predictions are well above the retention threshold, with low-confidence rows constituting only 7.54% of the corpus. Zero fallback rows confirm complete model coverage across all retained utterances.
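The retention rule above (keep a prediction only when its top softmax probability is at or above 0.75) and the entropy statistic can be sketched as follows. This is an illustrative reconstruction, not the actual labeling pipeline:

```python
import math

THRESHOLD = 0.75  # retention threshold stated in the table above

def retention_stats(probs):
    """Return (retained, confidence, entropy) for one softmax distribution."""
    confidence = max(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return confidence >= THRESHOLD, confidence, entropy

# A confident prediction is retained; a flat one is flagged low-confidence.
keep, conf, ent = retention_stats([0.88, 0.04, 0.02, 0.02, 0.02, 0.01, 0.01])
drop, conf2, _ = retention_stats([0.40, 0.20, 0.15, 0.10, 0.08, 0.04, 0.03])
```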

---

#### Class Distribution (with 95% Wilson Confidence Intervals)

| Emotion | Count | % | CI Lower | CI Upper |
|---------|------:|--:|:--------:|:--------:|
| joy | 28,389 | 21.18% | 0.2096 | 0.2140 |
| none | 28,167 | 21.01% | 0.2079 | 0.2123 |
| disgust | 25,959 | 19.36% | 0.1915 | 0.1958 |
| sadness | 22,570 | 16.84% | 0.1664 | 0.1704 |
| anger | 18,275 | 13.63% | 0.1345 | 0.1382 |
| fear | 6,613 | 4.93% | 0.0482 | 0.0505 |
| surprise | 4,080 | 3.04% | 0.0295 | 0.0314 |

> ⚠️ The distribution is **naturally imbalanced**, reflecting the organic frequency of emotional expression in scraped social media and WhatsApp data. `joy` and `none` together account for ~42% of the corpus. `fear` and `surprise` are the least frequent classes (combined ~8%). Users should apply class reweighting or stratified sampling before using this corpus as a primary training source.
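For the reweighting suggested above, inverse-frequency class weights can be derived directly from the counts in the distribution table. A minimal sketch; the counts are copied from the table, while the balanced weighting scheme is one common choice, not prescribed by the release:

```python
# Counts from the silver-corpus class distribution table.
counts = {
    "joy": 28389, "none": 28167, "disgust": 25959, "sadness": 22570,
    "anger": 18275, "fear": 6613, "surprise": 4080,
}

total = sum(counts.values())  # 134,053 utterances, matching the overview table
k = len(counts)               # 7 emotion classes

# Balanced inverse-frequency weights: w_c = total / (k * count_c),
# so rare classes (surprise, fear) receive the largest weights.
weights = {c: total / (k * n) for c, n in counts.items()}
```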

---

#### Per-Class Confidence Statistics

| Emotion | Mean Conf. | Std | Median Conf. | Min | Max |
|---------|:----------:|:---:|:------------:|:---:|:---:|
| anger | 0.8175 | 0.1292 | 0.8778 | 0.2291 | 0.9105 |
| disgust | 0.7819 | 0.1499 | 0.8630 | 0.1875 | 0.9142 |
| fear | 0.7545 | 0.1842 | 0.8484 | 0.1900 | 0.9320 |
| joy | 0.8476 | 0.1244 | 0.9037 | 0.2336 | 0.9290 |
| none | 0.8155 | 0.1361 | 0.8804 | 0.2207 | 0.9193 |
| sadness | 0.7696 | 0.1510 | 0.8488 | 0.2147 | 0.9068 |
| surprise | 0.7699 | 0.1755 | 0.8637 | 0.2195 | 0.9294 |

`joy` and `anger` record the highest mean confidence (0.8476 and 0.8175 respectively), consistent with their strong per-class F1 scores on the human-annotated gold set. `fear` and `surprise` record the lowest mean confidence and highest standard deviation, reflecting their lower corpus frequency and greater lexical ambiguity in informal Roman Urdu — also the classes with the widest Wilson CI bounds in the distribution table above. All per-class median confidence values exceed 0.84, indicating that the central tendency of predictions is substantially above the 0.75 retention threshold across all seven categories.
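The Wilson bounds quoted in the distribution table can be recomputed from each class count and the corpus total. A sketch of the standard 95% Wilson score interval:

```python
import math

def wilson_ci(x, n, z=1.96):
    """95% Wilson score interval for a proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Reproduces the `joy` row: 28,389 of 134,053 -> approx (0.2096, 0.2140)
lo, hi = wilson_ci(28389, 134053)
```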

### 3.3 IAA Validation Subset

A stratified random sample of **700 instances** (100 per class) was independently re-annotated by all four annotators for the IAA study. Results are reported in Section 4.

---

## 4. Inter-Annotator Agreement (IAA)

IAA was computed on a **700-sample stratified subset** independently re-annotated by all four members of the annotation team. All annotators worked blindly — they had no access to the original labels or each other's responses during annotation.

### 4.1 Aggregate Agreement

| Metric | Value | Interpretation |
|--------|-------|----------------|
| **Fleiss' Kappa (κ)** | **0.6588** | Substantial agreement |
| **Mean Pairwise Cohen's κ** | **0.6597** | Substantial agreement |
| Total IAA Samples | 700 | Stratified (100 per class) |
| Full Agreement (4/4 annotators) | 348 (49.7%) | — |
| Majority Agreement (3/4 annotators) | 241 (34.4%) | — |
| Ambiguous (no majority) | 111 (15.9%) | — |

**Benchmark context:** A Fleiss' κ of 0.659 is considered *substantial agreement* on the Landis & Koch (1977) scale (0.61–0.80 = substantial). For a 7-class affective labeling task in a low-resource, orthographically irregular language, this result compares favourably with published corpora for similar tasks: SemEval-2018 Task 1 reported average κ values in the 0.60–0.72 range for multi-label emotion classification in English tweets.
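For readers reproducing the agreement numbers, Fleiss' κ is computed from an items × categories count matrix, where each row sums to the number of raters (here 4). A self-contained sketch on toy data, not the actual annotation matrix:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i][j] = raters assigning category j to item i."""
    n_raters = sum(counts[0])
    n_items = len(counts)
    # Observed agreement, averaged over items.
    p_obs = sum((sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
                for row in counts) / n_items
    # Chance agreement from the marginal category proportions.
    n_cats = len(counts[0])
    margins = [sum(row[j] for row in counts) / (n_items * n_raters)
               for j in range(n_cats)]
    p_exp = sum(m * m for m in margins)
    return (p_obs - p_exp) / (1 - p_exp)

kappa_perfect = fleiss_kappa([[4, 0], [0, 4], [4, 0]])  # unanimous raters
kappa_split = fleiss_kappa([[2, 2], [2, 2]])            # consistent 2-2 splits
```

Unanimous agreement yields κ = 1.0, while systematic 2-2 splits drive κ below zero, which is why the ambiguous samples in the table above are reported separately.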

### 4.2 Agreement Breakdown by Sample Category

| Agreement Category | Count | Percentage |
|-------------------|-------|------------|
| Full agreement (all 4 annotators agree) | 348 | 49.7% |
| Majority agreement (3 of 4 agree) | 241 | 34.4% |
| Ambiguous (2-2 split or no clear majority) | 111 | 15.9% |
| **Total** | **700** | **100%** |

### 4.3 IAA Visualisation


![IAA Agreement Distribution](kappa_analysis.png)
> *Figure 1 — IAA Agreement Distribution Chart*

### 4.4 Ambiguous Sample Handling

The 111 ambiguous samples (15.9%) — those where no majority label emerged — were adjudicated by the corresponding author (Khubaib Ahmad) using the following protocol:

1. Review the sample in its original posting context
2. Apply the primary criterion: *which emotion would a native Roman Urdu speaker most likely intend?*
3. If genuinely unresolvable, the sample was marked with the label most consistent with the broader textual context

Edge cases between **Anger** and **Disgust** were the dominant source of ambiguity: the two classes share lexical overlap in informal Roman Urdu expression and represent a known hard boundary in affective computing for South Asian languages.

---

## 5. Annotation Methodology

### 5.1 Annotation Design

The annotation followed a **three-phase blind re-annotation protocol** designed to maximise label reliability for difficult low-resource boundary cases:

**Phase 1 — Independent Annotation**
Each annotator labeled all assigned samples independently, with no communication between annotators. Annotation guidelines were provided in written form and discussed in a single calibration session before annotation began.

**Phase 2 — Calibration on Boundary Cases**
After Phase 1, all annotators jointly reviewed a set of 35 pre-selected boundary-case samples (primarily anger/disgust and sad/neutral pairs). The purpose was alignment on class definitions, not correction of existing labels. Phase 2 outputs were not used in the final dataset.

**Phase 3 — Re-annotation of High-Disagreement Samples**
Samples flagged as high-disagreement from Phase 1 were re-annotated independently by all four annotators. Final labels were determined by majority vote.
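The majority-vote rule used in Phase 3 can be expressed compactly. A sketch, with `None` standing in for the ambiguous cases that were sent to adjudication:

```python
from collections import Counter

def majority_label(votes):
    """Return the majority label among annotator votes, or None when tied."""
    ranked = Counter(votes).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # e.g. a 2-2 split: no majority, goes to adjudication
    return ranked[0][0]

full = majority_label(["anger", "anger", "anger", "anger"])     # 4/4 agreement
major = majority_label(["anger", "anger", "disgust", "anger"])  # 3/4 agreement
tied = majority_label(["anger", "anger", "disgust", "disgust"]) # ambiguous
```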

### 5.2 Emotion Label Definitions

Annotators were provided with the following operational definitions, grounded in Ekman's (1992) six basic emotions framework, adapted for the Roman Urdu social media context:

| Label | Definition | Roman Urdu Signal Examples |
|-------|-----------|---------------------------|
| **Happy** | Joy, contentment, excitement, celebration | *maza aa gaya, khushi ho rahi, zabardast* |
| **Sad** | Grief, disappointment, longing, loss | *dil dukha, rona aa raha, yaad aa rahi* |
| **Anger** | Frustration, rage, strong displeasure directed outward | *gussa aa raha, bura lag raha, tang aa gaya* |
| **Disgust** | Revulsion, moral rejection, strong aversion | *nafrat hai, ganda lagta, sharm karo* |
| **Fear** | Anxiety, dread, nervousness about outcome | *dar lag raha, fikr ho rahi, kuch bura hoga* |
| **Surprise** | Unexpected reaction, shock (positive or negative) | *hairan reh gaya, pata nahi tha, achanak* |
| **Neutral** | No dominant emotion detectable; informational or factual | *news sharing, plain description, announcements* |

### 5.3 Annotation Challenges Specific to Roman Urdu

Several properties of Roman Urdu text created annotation challenges not present in standard NLP annotation tasks:

- **Orthographic variability**: The same word in different spellings was sometimes perceived differently by annotators. Guidelines included canonical forms.
- **Code-switching**: English emotion words embedded in Roman Urdu phrases (e.g., *"itna sad feel ho raha"*) required consistent treatment. Guidelines specified to treat code-switched expressions at their semantic value.
- **Implicit emotion**: Roman Urdu social media text frequently expresses emotion indirectly through cultural references, humour, or rhetorical questions. These samples constituted the majority of ambiguous cases.
- **Anger-Disgust boundary**: The most frequent source of disagreement. Both emotions share vocabulary in informal Pakistani social media usage. The calibration session (Phase 2) focused specifically on this boundary.

---

## 6. Annotation Team

RUEC-28K was annotated by a dedicated four-person expert team, all native or fluent Roman Urdu speakers with academic backgrounds in relevant fields.

| Annotator | Affiliation | Location | Role |
|-----------|-------------|----------|------|
| **Muzammil Shadab** | Bahauddin Zakariya University (BZU) | Multan, Punjab | Annotator |
| **Sara** | COMSATS University Islamabad (CUI) | Islamabad | Annotator |
| **Faiez Ahmad** | Emerson University Multan (EUM) | Multan, Punjab | Annotator |
| **Khadija Faisal** | Emerson University Multan (EUM) | Multan, Punjab | Data Manager & Annotator |

**Corresponding author / Project lead:** Muhammad Khubaib Ahmad, Emerson University Multan (EUM), Multan, Punjab.

All annotators participated in the calibration session and the IAA study. The annotation team has no financial conflict of interest in the publication of this dataset.

---

## 7. Data Fields

```python
{
  "message":       str,   # Raw Roman Urdu text (social media post or message)
  "emotion_label": str    # One of: anger | disgust | fear | happy | neutral | sad | surprise
}
```

### Field Details

**`message`**
- Raw Roman Urdu text, preserved as collected with minimal preprocessing
- May contain code-switched English words or phrases
- May contain common social media abbreviations and informal orthography
- No personally identifiable information (PII) — all samples were anonymised prior to release
- Length: typically 5–80 tokens

**`emotion_label`**
- String label, lowercase
- Assigned by majority vote across 4 expert annotators for the IAA-validated subset
- For the full 28K corpus: assigned by primary annotator and reviewed by data manager (Khadija Faisal)
- Valid values: `anger`, `disgust`, `fear`, `happy`, `neutral`, `sad`, `surprise`
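A quick sanity check of rows against the schema above. The column names follow the field listing; the sample row is illustrative:

```python
import csv
import io

# Label strings listed for the 28K gold corpus above.
VALID_LABELS = {"anger", "disgust", "fear", "happy", "neutral", "sad", "surprise"}

def validated_rows(csv_text):
    """Yield (message, emotion_label) pairs, rejecting any unknown label."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        label = row["emotion_label"].strip().lower()
        if label not in VALID_LABELS:
            raise ValueError(f"unexpected label: {label!r}")
        yield row["message"], label

sample = "message,emotion_label\nyaar dil bht dukha aaj,sad\n"
rows = list(validated_rows(sample))
```

Note that the silver statistics in Section 3 report the alternative label strings `joy`, `sadness`, and `none`, so widen `VALID_LABELS` accordingly when checking that config.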

---

## 8. Data Splits

| Split | Size | Notes |
|-------|------|-------|
| Train | ~24,000 | Used to train roman-urdu-emotion-xlmr-v1 and v2 |
| Validation | ~2,000 | Held out during training |
| Test | ~2,000 | Used for final evaluation; annotated by same team |

> **Note on test set construction:** The test set was sampled from the same 28K corpus and labeled by the same annotation team as the training set. This is a known limitation of the current release — see Section 12.4 (Limitations). An independently annotated external validation set is in preparation.
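A stratified split matching the approximate 24K/2K/2K proportions above can be sketched as follows. Illustrative only; the release may define its own split files:

```python
import random
from collections import defaultdict

def stratified_split(rows, val_frac=2/28, test_frac=2/28, seed=13):
    """Split (message, label) rows per class to preserve label proportions."""
    by_label = defaultdict(list)
    for row in rows:
        by_label[row[1]].append(row)
    rng = random.Random(seed)
    train, val, test = [], [], []
    for items in by_label.values():
        rng.shuffle(items)
        n_val = round(len(items) * val_frac)
        n_test = round(len(items) * test_frac)
        val.extend(items[:n_val])
        test.extend(items[n_val:n_val + n_test])
        train.extend(items[n_val + n_test:])
    return train, val, test

# 700 toy rows, 100 per class, split per class in the same ratios.
toy = [(f"msg{i}", lbl) for lbl in
       ("happy", "sad", "anger", "disgust", "fear", "surprise", "neutral")
       for i in range(100)]
train, val, test = stratified_split(toy)
```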

---

## 9. Source and Collection

### 9.1 Parent Corpus

RUEC-28K is a subset of the [**RomanUrdu-NLP-Sentiment-Corpus**](https://huggingface.co/datasets/Khubaib01/RomanUrdu-NLP-Sentiment-Corpus) (134K samples), which was collected from publicly accessible Pakistani social media platforms. Texts were selected to represent diverse emotional expression in everyday Roman Urdu communication.

### 9.2 Selection Criteria for RUEC-28K

From the 134K sentiment corpus, 28K samples were selected for emotion annotation based on:

- Sufficient emotional signal (low-emotion purely informational text excluded)
- Reasonable length (very short texts < 4 tokens and very long texts > 200 tokens excluded)
- Linguistic diversity (orthographic variants represented across classes)
- Approximate class balance across 7 emotion categories

### 9.3 Preprocessing

- User mentions and names replaced with `PERSON`
- URLs removed
- Duplicate samples removed
- No stemming, lemmatisation, or normalisation applied — raw orthographic variety preserved intentionally
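The cleaning steps above can be approximated in a short pass. The exact patterns used for the release are not published, so these regexes are illustrative:

```python
import re

MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+|www\.\S+")

def preprocess(texts):
    """Replace mentions with PERSON, strip URLs, collapse whitespace, drop duplicates."""
    seen, out = set(), []
    for text in texts:
        text = MENTION.sub("PERSON", text)
        text = URL.sub("", text)
        text = re.sub(r"\s+", " ", text).strip()
        if text and text not in seen:
            seen.add(text)
            out.append(text)
    return out

cleaned = preprocess([
    "@ali yaar dekho https://example.com/x",
    "@ali yaar dekho",  # duplicate once cleaned
])
```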

---

## 10. Associated Model

RUEC-28K is the training corpus for the **roman-urdu-emotion-xlmr-v2** model.

- 🤗 Model: [Khubaib01/roman-urdu-emotion-xlmr-v2](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2)
- Architecture: XLM-RoBERTa base + two-layer MLP classification head
- Training lineage: Sentiment fine-tuned → Emotion v1 → Emotion v2
- Test set Macro F1: **0.9896**
- Paper: in-progress

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Khubaib01/roman-urdu-emotion-xlmr-v2"
)

result = classifier("yaar dil bht dukha aaj")
print(result)
# [{'label': 'sad', 'score': 0.987}]
```

---

## 11. Related Resources

| Resource | Description | Link |
|----------|-------------|------|
| RomanUrdu-NLP-Sentiment-Corpus | 134K sentiment-labeled Roman Urdu corpus | [HuggingFace](https://huggingface.co/datasets/Khubaib01/RomanUrdu-NLP-Sentiment-Corpus) |
| roman-urdu-sentiment-xlm-r | Sentiment classifier (3-class) | [HuggingFace](https://huggingface.co/Khubaib01/roman-urdu-sentiment-xlm-r) |
| roman-urdu-emotion-xlmr-v1 | Emotion classifier v1 | [HuggingFace](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr) |
| roman-urdu-emotion-xlmr-v2 | Emotion classifier v2 (current best) | [HuggingFace](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2) |
| Paper | In progress | Pending |
| Harvard Dataverse | Archival deposit | [doi:10.7910/DVN/BPWHOZ](https://doi.org/10.7910/DVN/BPWHOZ) |
| RomanUrdu-NLP-Emotion-Corpus-134K | 134K model-labeled emotion corpus | This repository (`ruemocorp-silver` config) |

---

## 12. Datasheet (Gebru et al., 2018)

This datasheet follows the framework proposed by Gebru et al. (2018) — *Datasheets for Datasets*.

### 12.1 Motivation

**For what purpose was the dataset created?**
To address the complete absence of large-scale, human-annotated, inter-annotator-agreement-validated emotion data for Roman Urdu — the dominant digital writing mode for Urdu speakers across Pakistan and India.

**Who created the dataset and on whose behalf?**
Muhammad Khubaib Ahmad (AI Research Engineer, Emerson University Multan), as an independent research project, with annotation support from a four-person expert team (see Section 6). No external funding or institutional commission.

**Any other comments?**
This dataset is part of a broader research programme on Roman Urdu affective computing. The associated 134K sentiment corpus and emotion classification models are co-released.

---

### 12.2 Composition

**What do the instances represent?**
Each instance is a single Roman Urdu social media text — a post, comment, or message — labeled with a single dominant emotion.

**How many instances?**
28,000 total. Approximately 4,000 per class across 7 emotion categories.

**Does the dataset contain all possible instances or a sample?**
A sample. The parent corpus (134K) was itself a sample of available Roman Urdu social media text.

**Is there a label or target associated with each instance?**
Yes. Each instance has one `emotion_label` from: anger, disgust, fear, happy, neutral, sad, surprise.

**Is any information missing from individual instances?**
Metadata such as posting timestamp, platform, and user demographics is not included, to protect privacy.

**Are relationships between instances made explicit?**
No explicit relationships. Instances are treated as independent.

**Are there recommended data splits?**
Yes — see Section 8.

**Are there any errors, sources of noise, or redundancies?**
The 111 ambiguous samples in the IAA study (15.9% of the 700-sample subset) represent genuine annotation uncertainty, primarily at the anger-disgust boundary. These samples are included in the corpus with adjudicated labels. Near-duplicate samples were removed during preprocessing.

**Is the dataset self-contained?**
Yes. All required data is in this repository. The associated models are separately hosted on HuggingFace.

**Does the dataset contain data that might be considered confidential?**
No. All samples were collected from publicly accessible platforms. PII was removed.

---

### 12.3 Collection Process

**How was data associated with each instance acquired?**
Text collected from publicly accessible Pakistani social media platforms. Emotion labels assigned by expert human annotators following a structured multi-phase protocol (see Section 5).

**What mechanisms were used to collect data?**
Manual collection and curation from public sources. No automated scraping APIs are disclosed in this release.

**Over what timeframe was data collected?**
Collected and annotated over approximately 12 months.

**Were any ethical review processes conducted?**
The project was conducted as independent academic research. All data was sourced from publicly accessible platforms. No PII was retained.

---

### 12.4 Preprocessing, Cleaning, Labeling

**Was any preprocessing/cleaning done?**
Yes — see Section 9.3. User mentions replaced, URLs removed, duplicates removed. Raw orthographic variety was preserved intentionally.

**Was the raw data saved in addition to the preprocessed data?**
The preprocessed form is the release form. Original raw collection is retained by the corresponding author.

**Is the labeling/annotation described in detail?**
Yes — see Sections 4 and 5.

**Was any human labeling conducted?**
Yes. Four expert annotators (see Section 6). IAA computed on 700-sample stratified subset (Fleiss' κ = 0.659).

---

### 12.5 Uses

**Has the dataset been used for any tasks already?**
Yes. It is the training corpus for [roman-urdu-emotion-xlmr-v2](https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2), achieving Macro F1 = 0.9896 on the in-distribution test set.

**What are the recommended uses?**
- Training and benchmarking Roman Urdu emotion classifiers
- Low-resource multilingual NLP research
- Transfer learning experiments for South Asian languages
- Affective computing and sentiment analysis research

**What are the uses that should be avoided?**
- Clinical or medical applications without additional validation
- Real-time surveillance or monitoring of individuals
- Applications targeting vulnerable populations without appropriate ethical review
- Any use that relies on the model's output as a ground truth for individual emotional state

---

### 12.6 Distribution

**How is the dataset distributed?**
Via HuggingFace Datasets (primary) and Harvard Dataverse (archival).

**Is the dataset distributed under a copyright or license?**
CC BY 4.0, matching the dataset card metadata. Free for academic and commercial use with attribution.

**Have any third parties imposed IP-based restrictions?**
No.

---

### 12.7 Maintenance

**Who maintains the dataset?**
Muhammad Khubaib Ahmad. Contact via HuggingFace or the email in Section 15.

**Will the dataset be updated?**
An extended 134K model-labeled version is planned for release. The 28K manually annotated corpus is considered stable.

**Will older versions be maintained?**
Yes. Versioned releases on both HuggingFace and Harvard Dataverse.

---

### 12.8 Limitations

1. **Test set annotator overlap**: The test split was annotated by the same team as the training split. In-distribution performance (Macro F1 = 0.9896) should be interpreted accordingly. An externally annotated validation set is in preparation.

2. **Domain specificity**: Samples are drawn from social media text. Performance on formal text, news, or other domains may differ.

3. **Orthographic coverage**: While orthographic variety is preserved, the corpus cannot cover all possible romanisation patterns for all Urdu words.

4. **Geographic bias**: Pakistani Roman Urdu predominates. Indian Roman Urdu may show stylistic and lexical differences.

5. **Sarcasm and irony**: Implicitly expressed emotions, particularly sarcastic positivity, are a known weak point. These cases appear disproportionately in the ambiguous sample pool.

6. **Static snapshot**: Social media language evolves. Newer slang or expression patterns post-collection may not be represented.

---

## 13. Citation

If you use this dataset in your research, please cite:

```bibtex
@data{DVN/BPWHOZ_2026,
  author    = {Ahmad, Muhammad Khubaib and Faisal, Khadija},
  publisher = {Harvard Dataverse},
  title     = {{RUEmoCorp}},
  UNF       = {UNF:6:h03jo4SJGEAKuZCik1R/Bw==},
  year      = {2026},
  version   = {V1},
  doi       = {10.7910/DVN/BPWHOZ},
  url       = {https://doi.org/10.7910/DVN/BPWHOZ}
}
```

If you use the associated model, please also cite:

```bibtex
@misc{muhammad_khubaib_ahmad_2026,
  author    = {Ahmad, Muhammad Khubaib and Faisal, Khadija},
  title     = {roman-urdu-emotion-xlmr-v2 (Revision 7cd7dd2)},
  year      = {2026},
  url       = {https://huggingface.co/Khubaib01/roman-urdu-emotion-xlmr-v2},
  doi       = {10.57967/hf/8347},
  publisher = {Hugging Face}
}
```

---

## 14. Team and Contributions

| Name | Role | Affiliation |
|------|------|-------------|
| **Muhammad Khubaib Ahmad** | Core Researcher · Lead Engineer · Project Administration · Model Development | Independent Researcher |
| **Khadija Faisal** | Data Manager · Annotation Coordination · Annotator | Emerson University Multan |
| **Muzammil Shadab** | Annotator | Bahauddin Zakariya University, Multan |
| **Sara** | Annotator | COMSATS University Islamabad |
| **Faiez Ahmad** | Annotator | Emerson University Multan |

---

## 15. License and Ethics

**License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

This dataset is freely available for academic research, commercial use, and derivative works with appropriate attribution.

**Ethical considerations:**

- All source texts were collected from publicly accessible platforms
- No personally identifiable information (PII) is present in the released dataset
- The emotion labels reflect the *expressed* emotion in text as interpreted by expert annotators — they do not constitute claims about the psychological state of any individual
- Emotion classification systems carry inherent risks of misuse, particularly in surveillance, profiling, or targeting applications. Users of this dataset are responsible for ensuring their applications comply with applicable data protection laws and ethical guidelines
- The annotation team was compensated appropriately for their work

---

## 16. Contact

**Muhammad Khubaib Ahmad**
AI Research Engineer
Multan, Punjab, Pakistan

- 🤗 HuggingFace: [Khubaib01](https://huggingface.co/Khubaib01)
- 📄 Paper: In-progress

For questions about the dataset, annotation methodology, or collaboration requests, please open a [Discussion](https://huggingface.co/datasets/Khubaib01/roman-urdu-emotion-corpus/discussions) on this repository.

---

*RUEmoCorp — The largest publicly available, IAA-validated Roman Urdu emotion dataset. Released to support low-resource NLP research for South Asian languages.*