SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research
Abstract
Despite its relevance, the maturity of NLP for social media lags behind that of general-purpose models, metrics and benchmarks. This fragmented landscape makes it hard for the community to know, for a given task, which model performs best and how it compares with others. To alleviate this issue, we introduce a unified benchmark for NLP evaluation in social media, SuperTweetEval, which includes a heterogeneous set of tasks and datasets combined, adapted and constructed from scratch. We benchmarked the performance of a wide range of models on SuperTweetEval, and our results suggest that, despite recent advances in language modelling, social media remains a challenging domain.
arXiv: 2310.14757