---
language:
  - en
license: cc-by-4.0
size_categories:
  - n<1K
task_categories:
  - text-classification
task_ids:
  - emotion-classification
tags:
  - pragmatic-reasoning
  - theory-of-mind
  - emotion-inference
  - indirect-speech
  - benchmark
  - multi-annotator
  - plutchik-emotions
  - vad-dimensions
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: subtype
      dtype: string
    - name: context
      dtype: string
    - name: speaker
      dtype: string
    - name: listener
      dtype: string
    - name: utterance
      dtype: string
    - name: power_relation
      dtype: string
    - name: gold_standard
      dtype: string
    - name: ann1_emotion
      dtype: string
    - name: ann2_emotion
      dtype: string
    - name: ann3_emotion
      dtype: string
    - name: valence_mean
      dtype: float64
    - name: arousal_mean
      dtype: float64
    - name: dominance_mean
      dtype: float64
  splits:
    - name: train
      num_examples: 211
    - name: validation
      num_examples: 48
    - name: test
      num_examples: 41
---

# CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models

## Dataset Description

CEI (Contextual Emotional Inference) is a benchmark of 300 expert-authored scenarios for evaluating how well language models interpret pragmatically complex utterances in social contexts. Each scenario presents a communicative exchange involving indirect speech (sarcasm, mixed signals, strategic politeness, passive aggression, or deflection) where the speaker's literal words diverge from their actual emotional state.

## Dataset Structure

### Scenarios

- 300 scenarios across 5 pragmatic subtypes (60 each)
- 3 independent annotations per scenario (900 total)
- Predefined splits: train (211), validation (48), test (41), stratified by subtype and power relation
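The dataset ships with predefined splits, so you should never need to re-split it; still, the per-stratum allocation behind "stratified by subtype and power relation" is easy to illustrate. A minimal sketch with synthetic scenarios, stratifying by subtype only and using assumed 70/16/14 fractions (the exact protocol and counts here are illustrative, not the official procedure):

```python
import random
from collections import defaultdict

SUBTYPES = ["sarcasm-irony", "mixed-signals", "strategic-politeness",
            "passive-aggression", "deflection-misdirection"]

def stratified_split(items, key, fractions=(0.70, 0.16, 0.14), seed=0):
    """Shuffle each stratum and allocate it proportionally to
    train / validation / test, so every stratum appears in every split."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[key(item)].append(item)
    train, val, test = [], [], []
    for group in strata.values():
        rng.shuffle(group)
        n_train = round(fractions[0] * len(group))
        n_val = round(fractions[1] * len(group))
        train.extend(group[:n_train])
        val.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, val, test

# Toy corpus mirroring CEI's shape: 60 scenarios per subtype.
corpus = [{"id": i, "subtype": s} for s in SUBTYPES for i in range(60)]
train, val, test = stratified_split(corpus, key=lambda x: x["subtype"])
```

With these fractions the toy split lands near, not exactly on, the released 211/48/41 counts; the point is only that each subtype is represented in every split.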

### Fields

| Field | Type | Description |
|---|---|---|
| `id` | int | Scenario ID (unique within subtype) |
| `subtype` | string | Pragmatic subtype (`sarcasm-irony`, `mixed-signals`, `strategic-politeness`, `passive-aggression`, `deflection-misdirection`) |
| `context` | string | Situational context (2-4 sentences) |
| `speaker` | string | Speaker's role in the scenario |
| `listener` | string | Listener's role in the scenario |
| `utterance` | string | The speaker's pragmatically ambiguous utterance |
| `power_relation` | string | Power dynamic: `peer`, `high-to-low`, or `low-to-high` |
| `gold_standard` | string | Gold-standard emotion (majority vote + expert adjudication) |
| `ann1_emotion` | string | Annotator 1's emotion label (Plutchik) |
| `ann2_emotion` | string | Annotator 2's emotion label (Plutchik) |
| `ann3_emotion` | string | Annotator 3's emotion label (Plutchik) |
| `valence_mean` | float | Mean valence rating across annotators (-1.0 to +1.0) |
| `arousal_mean` | float | Mean arousal rating across annotators (-1.0 to +1.0) |
| `dominance_mean` | float | Mean dominance rating across annotators (-1.0 to +1.0) |

### Pragmatic Subtypes

| Subtype | Description | Fleiss' kappa |
|---|---|---|
| Sarcasm/Irony | Speaker says the opposite of what they mean | 0.25 |
| Passive Aggression | Hostility expressed through superficial compliance | 0.22 |
| Strategic Politeness | Polite language masking negative intent | 0.20 |
| Mixed Signals | Contradictory verbal and contextual cues | 0.16 |
| Deflection/Misdirection | Speaker redirects to avoid revealing feelings | 0.06 |
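The per-subtype agreement figures above are Fleiss' kappa over the three annotator labels per scenario. A minimal sketch of that statistic, taking a counts matrix with one row per scenario and one column per emotion class (this is the standard formula, not code released with the benchmark):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of per-item category counts.

    counts: N rows; row[j] is how many annotators chose category j
    for that item. Every row must sum to the same rater count n.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_total = n_items * n_raters

    # Mean per-item observed agreement P_bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items

    # Chance agreement P_e from the marginal category proportions.
    n_cats = len(counts[0])
    p_j = [sum(row[j] for row in counts) / n_total for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

With 3 raters and 8 classes, values in the 0.06-0.25 range above sit in the "slight to fair" band, which is expected for deliberately ambiguous pragmatic stimuli.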

### Labels

- Primary emotion: one of Plutchik's 8 basic emotions (joy, trust, fear, surprise, sadness, disgust, anger, anticipation)
- VAD ratings: mean Valence, Arousal, Dominance across 3 annotators, mapped to [-1.0, +1.0]
- Gold standard: majority vote with expert adjudication for three-way splits
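The aggregation rule above can be sketched as: take the majority label when at least two annotators agree, and fall back to adjudication for three-way splits; VAD means are plain averages of the per-annotator triples. The `adjudicate` callback below is a hypothetical stand-in — in the actual dataset that step is expert human judgment, not code:

```python
from collections import Counter
from statistics import mean

def gold_label(labels, adjudicate):
    """Majority vote over annotator labels; defer three-way splits."""
    winner, votes = Counter(labels).most_common(1)[0]
    if votes >= 2:
        return winner
    return adjudicate(labels)  # all three disagree -> expert adjudication

def vad_means(ratings):
    """Mean (valence, arousal, dominance) over per-annotator triples."""
    return tuple(mean(dim) for dim in zip(*ratings))

# Example: a 2-vs-1 scenario resolves by vote, no adjudication needed.
gold = gold_label(["anger", "anger", "disgust"], adjudicate=lambda ls: ls[0])
vad = vad_means([(-0.6, 0.4, 0.1), (-0.8, 0.6, -0.1), (-0.7, 0.5, 0.0)])
```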

### Power Relations

- Peer (72%), High-to-Low authority (20%), Low-to-High authority (7%)

## Key Statistics

- Inter-annotator agreement: overall kappa = 0.21 (fair), ranging from 0.06 (deflection) to 0.25 (sarcasm)
- Human accuracy (vs. gold): 61% mean, 14.3% unanimous, 31.3% three-way split
- Best LLM baseline: 25.0% accuracy (Llama-3.1-70B, zero-shot) vs. 54% human majority agreement
- Random baseline: 12.5% (8-class)
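Scoring against the gold label is plain accuracy, and the 12.5% floor is just uniform guessing over the 8 Plutchik classes (1/8). A minimal scoring sketch with simulated random guesses (the class list matches the dataset; everything else is illustrative):

```python
import random

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def accuracy(predictions, gold):
    """Fraction of scenarios where the predicted emotion matches gold."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Uniform random guessing converges to 1/8 = 12.5% accuracy.
rng = random.Random(0)
gold = [rng.choice(PLUTCHIK) for _ in range(20_000)]
guesses = [rng.choice(PLUTCHIK) for _ in range(20_000)]
```

Against that floor, the 25.0% best zero-shot LLM result is only twice chance, versus 54% human majority agreement — which is the gap the benchmark is designed to expose.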

## Intended Uses

- Benchmarking LLM pragmatic reasoning capabilities
- Diagnosing model failure modes on indirect speech subtypes
- Research on emotion inference, social AI, Theory of Mind
- Soft-label training using per-annotator distributions
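For the soft-label use case, the three `annN_emotion` columns can be turned into an empirical distribution over the 8 classes instead of a single hard label, preserving annotator disagreement as a training signal. A minimal sketch:

```python
PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def soft_label(ann_labels):
    """Empirical distribution over Plutchik classes from annotator votes."""
    return [ann_labels.count(c) / len(ann_labels) for c in PLUTCHIK]

# A 2-vs-1 scenario yields a 2/3 - 1/3 target instead of a hard one-hot.
dist = soft_label(["anger", "anger", "disgust"])
```

With only three annotators the distributions are coarse (0, 1/3, 2/3, or 1 per class), but they still separate unanimous scenarios from contested ones.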

## Limitations

- All scenarios are expert-authored (not naturalistic)
- English only
- 15 undergraduate annotators from a single institution
- Small scale (300 scenarios) optimized for annotation quality over quantity

## Citation

```bibtex
@article{chun2026cei,
  title={CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models},
  author={Chun, Jon and Sussman, Hannah and Pechon-Elkins, Mateo and Mangine, Adrian and Kocaman, Murathan and Sidorko, Kirill and Koirala, Abhigya and McCloud, Andre and Akanwe, Wisdom and Gassama, Moustapha and Enright, Anne-Duncan and Dunson, Peter and Ng, Tiffanie and von Rosenstiel, Anna and Idowu, Godwin},
  journal={Journal of Data-centric Machine Learning Research (DMLR)},
  year={2026}
}
```