---
annotations_creators:
- expert-generated
- machine-generated
language:
- en
license: mit
multilinguality: monolingual
pretty_name: SemanticQA
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- semantics
- idioms
- noun-compounds
- collocations
- multiword-expressions
- benchmark
- nlp
task_categories:
- text-classification
- text-generation
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
dataset_info:
- config_name: collocate_retrieval
splits:
- name: test
num_examples: 306
- config_name: collocation_categorization
splits:
- name: test
num_examples: 305
- config_name: collocation_extraction
splits:
- name: test
num_examples: 305
- config_name: collocation_paraphrase
splits:
- name: test
num_examples: 305
- config_name: idiom_detection
splits:
- name: test
num_examples: 273
- config_name: idiom_extraction
splits:
- name: test
num_examples: 447
- config_name: idiom_paraphrase
splits:
- name: test
num_examples: 818
- config_name: noun_compound_compositionality
splits:
- name: test
num_examples: 242
- config_name: noun_compound_compositionality_ft
splits:
- name: train
num_examples: 121
- name: test
num_examples: 97
- name: validation
num_examples: 24
- config_name: noun_compound_extraction
splits:
- name: test
num_examples: 720
- config_name: noun_compound_interpretation
splits:
- name: test
num_examples: 110
- config_name: verbal_mwe_extraction
splits:
- name: test
num_examples: 475
configs:
- config_name: collocate_retrieval
data_files:
- split: test
path: data/collocate_retrieval/collocate_retrieval.json
- config_name: collocation_categorization
data_files:
- split: test
path: data/collocation_categorization/collocation_categorization.json
- config_name: collocation_extraction
data_files:
- split: test
path: data/collocation_extraction/collocation_extraction.json
- config_name: collocation_paraphrase
data_files:
- split: test
path: data/collocation_paraphrase/collocation_paraphrase.json
- config_name: idiom_detection
data_files:
- split: test
path: data/idiom_detection/idiom_detection.json
default: true
- config_name: idiom_extraction
data_files:
- split: test
path: data/idiom_extraction/idiom_extraction.json
- config_name: idiom_paraphrase
data_files:
- split: test
path: data/idiom_paraphrase/idiom_paraphrase.json
- config_name: noun_compound_compositionality
data_files:
- split: test
path: >-
data/noun_compound_compositionality/noun_compound_compositionality.json
- config_name: noun_compound_compositionality_ft
data_files:
- split: train
path: >-
data/noun_compound_compositionality/noun_compound_compositionality_ft_train.json
- split: test
path: >-
data/noun_compound_compositionality/noun_compound_compositionality_ft_test.json
- split: validation
path: >-
data/noun_compound_compositionality/noun_compound_compositionality_ft_valid.json
- config_name: noun_compound_extraction
data_files:
- split: test
path: data/noun_compound_extraction/noun_compound_extraction.json
- config_name: noun_compound_interpretation
data_files:
- split: test
path: data/noun_compound_interpretation/noun_compound_interpretation.json
- config_name: verbal_mwe_extraction
data_files:
- split: test
path: data/verbal_mwe_extraction/verbal_mwe_extraction.json
---

# SemanticQA

A comprehensive benchmark for evaluating language models on semantic phrase processing, from the paper *Revisiting a Pain in the Neck: Semantic Phrase Processing Benchmark for Language Models*.
## Usage

```python
from datasets import load_dataset

# Load a specific subset
dataset = load_dataset("jacklanda/SemanticQA", "idiom_detection")

# Available configs:
# collocate_retrieval, collocation_categorization, collocation_extraction,
# collocation_paraphrase, idiom_detection, idiom_extraction, idiom_paraphrase,
# noun_compound_compositionality, noun_compound_compositionality_ft,
# noun_compound_extraction, noun_compound_interpretation, verbal_mwe_extraction
```
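To evaluate across every subset, it can be convenient to iterate over all config names. The sketch below is a convenience helper, not part of the dataset's API: `CONFIGS` is copied from the list above, and `load_all` is a hypothetical function name (it requires the `datasets` library and network access when called).

```python
# All twelve subset names, as listed in this dataset card.
CONFIGS = [
    "collocate_retrieval", "collocation_categorization", "collocation_extraction",
    "collocation_paraphrase", "idiom_detection", "idiom_extraction",
    "idiom_paraphrase", "noun_compound_compositionality",
    "noun_compound_compositionality_ft", "noun_compound_extraction",
    "noun_compound_interpretation", "verbal_mwe_extraction",
]

def load_all(repo: str = "jacklanda/SemanticQA") -> dict:
    """Download every subset (hypothetical helper; needs `datasets` + network)."""
    from datasets import load_dataset
    return {name: load_dataset(repo, name) for name in CONFIGS}
```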
## Subsets

| Config | Task | Phrase Type | Size | Eval Metrics |
|---|---|---|---|---|
| `collocate_retrieval` | Collocate Retrieval (CR) | Collocation | 306 | Exact Match |
| `collocation_categorization` | Collocation Categorization (LCC) | Collocation | 305 | Accuracy, F1 |
| `collocation_extraction` | Collocation Extraction (LCE) | Collocation | 305 | Exact Match |
| `collocation_paraphrase` | Collocation Interpretation (LCI) | Collocation | 305 | ROUGE-L, BERTScore, METEOR, BLEU |
| `idiom_detection` | Idiom Detection (IED) | Idiom | 273 | MCQ Accuracy |
| `idiom_extraction` | Idiom Extraction (IEE) | Idiom | 447 | Exact Match |
| `idiom_paraphrase` | Idiom Interpretation (IEI) | Idiom | 818 | ROUGE-L, BERTScore, METEOR, BLEU |
| `noun_compound_compositionality` | NC Compositionality (NCC) | Noun Compound | 242 | MCQ Accuracy |
| `noun_compound_compositionality_ft` | NCC Fine-tuning splits | Noun Compound | 242 | — |
| `noun_compound_extraction` | NC Extraction (NCE) | Noun Compound | 720 | Exact Match |
| `noun_compound_interpretation` | NC Interpretation (NCI) | Noun Compound | 110 | ROUGE-L, BERTScore, METEOR, BLEU |
| `verbal_mwe_extraction` | VMWE Extraction | Verbal MWE | 475 | Exact Match |
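Several extraction subsets are scored with Exact Match. The paper defines the official scoring; the sketch below is only an illustration of one common normalization (lowercasing and whitespace collapsing) and is not the benchmark's reference implementation.

```python
def exact_match(prediction: str, reference: str) -> bool:
    """Exact match after lowercasing and collapsing whitespace.

    Illustrative normalization only; the official metric may differ.
    """
    norm = lambda s: " ".join(s.lower().split())
    return norm(prediction) == norm(reference)

exact_match("Kick the  bucket", "kick the bucket")  # True
exact_match("kick the pail", "kick the bucket")     # False
```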
## Citation

```bibtex
@article{liu2026revisiting,
  title={Revisiting a Pain in the Neck: A Semantic Reasoning Benchmark for Language Models},
  author={Liu, Yang and Li, Hongming and Qin, Melissa Xiaohui and Liu, Qiankun and Huang, Chao},
  journal={arXiv preprint arXiv:2604.16593},
  year={2026}
}
```

```bibtex
@article{liu2024revisiting,
  title={Revisiting a Pain in the Neck: Semantic Phrase Processing Benchmark for Language Models},
  author={Liu, Yang and Qin, Melissa Xiaohui and Li, Hongming and Huang, Chao},
  journal={arXiv preprint arXiv:2405.02861},
  year={2024}
}
```
## License

MIT