category: string
split: string
Name: string
Subsets: string
HF Link: null
Link: string
License: string
Year: int64
Language: string
Dialect: string
Domain: string
Form: string
Collection Style: null
Description: string
Volume: float64
Unit: string
Ethical Risks: null
Provider: string
Derived From: null
Paper Title: null
Paper Link: null
Script: string
Tokenized: bool
Host: string
Access: string
Cost: string
Test Split: null
Tasks: string
Venue Title: null
Venue Type: null
Venue Name: null
Authors: string
Affiliations: string
Abstract: string
Name_exist: int64
Subsets_exist: int64
HF Link_exist: null
Link_exist: int64
License_exist: int64
Year_exist: int64
Language_exist: int64
Dialect_exist: int64
Domain_exist: int64
Form_exist: int64
Collection Style_exist: null
Description_exist: int64
Volume_exist: int64
Unit_exist: int64
Ethical Risks_exist: null
Provider_exist: int64
Derived From_exist: null
Paper Title_exist: null
Paper Link_exist: null
Script_exist: int64
Tokenized_exist: int64
Host_exist: int64
Access_exist: int64
Cost_exist: int64
Test Split_exist: null
Tasks_exist: int64
Venue Title_exist: null
Venue Type_exist: null
Venue Name_exist: null
Authors_exist: int64
Affiliations_exist: int64
Abstract_exist: int64
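The schema above maps each metadata field to a dtype, while the dump itself prints raw values one per line ("null" markers, booleans as true/false, and comma-grouped numerals such as a Volume of 15,000). A minimal sketch of coercing such raw values back to their schema dtypes; the names `SCHEMA` and `coerce` are illustrative, not part of the dump:

```python
# A few fields from the schema above, as a name -> dtype mapping
# (dtypes copied from the dump; the mapping itself is illustrative).
SCHEMA = {
    "category": "string",
    "Name": "string",
    "Year": "int64",
    "Volume": "float64",
    "Tokenized": "bool",
}

def coerce(field: str, raw: str):
    """Coerce one raw dump value according to its schema dtype.

    'null' maps to None; int64/float64 values may carry thousands
    separators in the dump (e.g. '15,000'), which are stripped first.
    """
    if raw == "null":
        return None
    dtype = SCHEMA[field]
    if dtype == "int64":
        return int(raw.replace(",", ""))
    if dtype == "float64":
        return float(raw.replace(",", ""))
    if dtype == "bool":
        return raw == "true"
    return raw  # plain strings pass through unchanged

print(coerce("Year", "2025"))        # -> 2025
print(coerce("Volume", "15,000"))    # -> 15000.0
print(coerce("Tokenized", "false"))  # -> False
```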
fr
test
20min-XD
null
null
https://github.com/ZurichNLP/20min-XD
custom
2025
multilingual
null
['news articles']
text
null
A French-German, document-level comparable corpus of news articles from the Swiss online news outlet 20 Minuten/20 minutes. It contains 15,000 article pairs from 2015-2024, automatically aligned based on semantic similarity, exhibiting a broad spectrum of cross-lingual similarity.
15,000
documents
null
[' University of Zurich', '20 Minuten (TX Group)']
null
null
null
null
false
GitHub
Free
null
['machine translation', 'other']
null
null
null
['Michelle Wastl', 'Jannis Vamvas', 'Selena Calleri', 'Rico Sennrich']
['Department of Computational Linguistics, University of Zurich', '20 Minuten (TX Group)']
We present 20min-XD (20 Minuten cross-lingual document-level), a French-German, document-level comparable corpus of news articles, sourced from the Swiss online news outlet 20 Minuten/20 minutes. Our dataset comprises around 15,000 article pairs spanning 2015 to 2024, automatically aligned based on semantic similarity....
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
Alloprof
null
null
https://huggingface.co/datasets/antoinelb7/alloprof
MIT License
2023
multilingual
null
['web pages']
text
null
A French question-answering dataset from the Alloprof educational help website. It contains 29,349 questions from K-12 students and their explanations, often including images and links to 2,596 reference pages, covering various school subjects like math, French, and science.
29,349
sentences
null
['Alloprof', 'Mila']
null
null
null
null
false
HuggingFace
Free
null
['question answering', 'information retrieval']
null
null
null
['Antoine Lefebvre-Brossard', 'Stephane Gazaille', 'Michel C. Desmarais']
['Mila-Quebec AI Institute', 'Polytechnique Montréal']
Teachers and students are increasingly relying on online learning resources to supplement the ones provided in school. This increase in the breadth and depth of available resources is a great thing for students, but only provided they are able to find answers to their queries. Question-answering and information retriev...
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FREDSum
null
null
https://github.com/linto-ai/FREDSum
CC BY-SA 4.0
2023
fr
null
['TV Channels', 'web pages']
text
null
A dataset of manually transcribed and annotated French political debates from 1974-2023. It is designed for multi-party dialogue summarization and includes abstractive/extractive summaries, topic segmentation, and abstractive communities annotations to support research in this area.
142
documents
null
['Linagora Labs']
null
null
null
null
false
GitHub
Free
null
['summarization', 'speech recognition']
null
null
null
['Virgile Rennard', 'Guokan Shang', 'Damien Grari', 'Julie Hunter', 'Michalis Vazirgiannis']
['Linagora, France', 'École Polytechnique', 'Grenoble Ecole de Management']
Recent advances in deep learning, and especially the invention of encoder-decoder architectures, has significantly improved the performance of abstractive summarization systems. The majority of research has focused on written documents, however, neglecting the problem of multi-party dialogue summarization. In this pape...
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
Vibravox
null
null
https://huggingface.co/datasets/Cnam-LMSSC/vibravox
CC BY 4.0
2024
fr
null
['wikipedia']
audio
null
Vibravox is a GDPR-compliant dataset containing audio recordings of French speech using five different body-conduction audio sensors and a reference airborne microphone. It includes 45 hours of speech per sensor from 188 participants under various acoustic conditions, with linguistic and phonetic transcriptions.
273.72
hours
null
['LMSSC']
null
null
null
null
false
HuggingFace
Free
null
['speaker identification', 'speech recognition']
null
null
null
['Julien Hauret', 'Malo Olivier', 'Thomas Joubaud', 'Christophe Langrenne', 'Sarah Poirée', 'Véronique Zimpfer', 'Éric Bavu']
['Laboratoire de Mécanique des Structures et des Systèmes Couplés, Conservatoire national des arts et métiers, HESAM Université, 75003 Paris, France', 'Department of Acoustics and Soldier Protection, French-German Research Institute of Saint-Louis (ISL)']
Vibravox is a dataset compliant with the General Data Protection Regulation (GDPR) containing audio recordings using five different body-conduction audio sensors: two in-ear microphones, two bone conduction vibration pickups, and a laryngophone. The dataset also includes audio data from an airborne microphone used as a...
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
MTNT
null
null
https://github.com/pmichel31415/mtnt
MIT License
2018
multilingual
null
['social media', 'commentary']
text
null
A benchmark dataset for Machine Translation of Noisy Text (MTNT), consisting of noisy comments on Reddit and professionally sourced translations. It includes English comments translated into French and Japanese, as well as French and Japanese comments translated into English, on the order of 7k-37k sentences per langua...
37,930
sentences
null
['Carnegie Mellon University']
null
null
null
null
false
GitHub
Free
null
['machine translation']
null
null
null
['Paul Michel', 'Graham Neubig']
['Language Technologies Institute', 'Carnegie Mellon University']
Noisy or non-standard input text can cause disastrous mistranslations in most modern Machine Translation (MT) systems, and there has been growing research interest in creating noise-robust MT systems. However, as of yet there are no publicly available parallel corpora of with naturally occurring noisy inputs and transl...
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
fr
test
PIAF
null
null
https://github.com/etalab/piaf
MIT License
2020
fr
null
['wikipedia']
text
null
PIAF is a French Question Answering dataset that was collected through a participatory approach. The dataset consists of question-answer pairs extracted from Wikipedia articles.
3,835
sentences
null
['Etalab']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Rachel Keraron', 'Guillaume Lancrenon', 'Mathilde Bras', 'Frédéric Allary', 'Gilles Moyse', 'Thomas Scialom', 'Edmundo-Pavel Soriano-Morales', 'Jacopo Staiano']
['reciTAL', 'Etalab', 'Sorbonne Université']
Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collectio...
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FrenchToxicityPrompts
null
null
https://download.europe.naverlabs.com/FrenchToxicityPrompts/
CC BY-SA 4.0
2024
fr
null
['social media', 'public datasets']
text
null
A dataset of 50,000 naturally occurring French prompts and their continuations, annotated with toxicity scores from a widely used toxicity classifier. It is designed to evaluate and mitigate toxicity in French language models.
50,000
sentences
null
['NAVER LABS Europe']
null
null
null
null
false
other
Free
null
['offensive language detection']
null
null
null
['Caroline Brun', 'Vassilina Nikoulina']
['NAVER LABS Europe']
Large language models (LLMs) are increasingly popular but are also prone to generating bias, toxic or harmful language, which can have detrimental effects on individuals and communities. Although most efforts is put to assess and mitigate toxicity in generated content, it is primarily concentrated on English, while it'...
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
OBSINFOX
null
null
https://github.com/obs-info/obsinfox
CC BY-NC 4.0
2024
fr
null
['news articles']
text
null
A corpus of 100 French press documents from 17 unreliable sources. The documents were annotated by 8 human annotators using 11 labels (e.g., FakeNews, Subjective, Exaggeration) to analyze the characteristics of fake news.
100
documents
null
['Observatoire']
null
null
null
null
false
GitHub
Free
null
['fake news detection', 'topic classification']
null
null
null
['Benjamin Icard', 'François Maine', 'Morgane Casanova', 'Géraud Faye', 'Julien Chanson', 'Guillaume Gadek', 'Ghislain Atemezing', 'François Bancilhon', 'Paul Égré']
['Sorbonne Université', 'Institut Jean-Nicod', 'Freedom Partners', 'Université de Rennes', 'Airbus Defence and Space', 'Université Paris-Saclay', 'Mondeca', 'European Union Agency for Railways', 'Observatoire des Médias']
We present a corpus of 100 documents, OBSINFOX, selected from 17 sources of French press considered unreliable by expert agencies, annotated using 11 labels by 8 annotators. By collecting more labels than usual, by more annotators than is typically done, we can identify features that humans consider as characteristic o...
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
CFDD
null
null
https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1
CC BY-NC-SA 4.0
2023
fr
null
['captions', 'public datasets', 'web pages']
text
null
The Claire French Dialogue Dataset (CFDD) is a corpus containing roughly 160 million words from transcripts and stage plays in French.
160,000,000
tokens
null
['LINAGORA Labs']
null
null
null
null
false
HuggingFace
Free
null
['language modeling', 'text generation']
null
null
null
['Julie Hunter', 'Jérôme Louradour', 'Virgile Rennard', 'Ismaïl Harrando', 'Guokan Shang', 'Jean-Pierre Lorré']
['LINAGORA']
We present the Claire French Dialogue Dataset (CFDD), a resource created by members of LINAGORA Labs in the context of the OpenLLM France initiative. CFDD is a corpus containing roughly 160 million words from transcripts and stage plays in French that we have assembled and publicly released in an effort to further the ...
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FREEMmax
null
null
https://github.com/FreEM-corpora/FreEMmax_OA
custom
2022
fr
null
['web pages', 'public datasets']
text
null
FREEMmax is a large corpus of Early Modern French (16th-18th centuries), with some texts extending to the 1920s. It aggregates texts from various sources, including institutional databases, research projects, and web scraping, covering diverse genres like literature, correspondence, and plays.
185,643,482
tokens
null
['Inria', 'Sorbonne Université', 'Université de Genève', 'LIGM', 'Université Gustave Eiffel', 'CNRS']
null
null
null
null
false
zenodo
Free
null
['language modeling']
null
null
null
['Simon Gabay', 'Pedro Ortiz Suarez', 'Alexandre Bartz', 'Alix Chague', 'Rachel Bawden', 'Philippe Gambette', 'Benoît Sagot']
['Inria', 'Sorbonne Université', 'Université de Genève', 'LIGM', 'Université Gustave Eiffel', 'CNRS']
Language models for historical states of language are becoming increasingly important to allow the optimal digitisation and analysis of old textual sources. Because these historical states are at the same time more complex to process and more scarce in the corpora available, specific efforts are necessary to train natu...
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FQuAD2.0
null
null
https://huggingface.co/datasets/illuin/fquad
CC BY-NC-SA 3.0
2021
fr
null
['wikipedia']
text
null
A French Question Answering dataset that extends FQuAD1.1 with over 17,000 adversarially created unanswerable questions. The questions are extracted from Wikipedia articles, and the total dataset comprises almost 80,000 questions. It is designed to train models to distinguish answerable from unanswerable questions.
79,768
sentences
null
['Illuin Technology']
null
null
null
null
false
HuggingFace
Free
null
['question answering']
null
null
null
['Quentin Heinrich', 'Gautier Viaud', 'Wacim Belblidia']
['Illuin Technology']
Question Answering, including Reading Comprehension, is one of the NLP research areas that has seen significant scientific breakthroughs over the past few years, thanks to the concomitant advances in Language Modeling. Most of these breakthroughs, however, are centered on the English language. In 2020, as a first stron...
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
XNLI
[{'Name': 'en', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'fr', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'es', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'de', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': ...
null
https://github.com/facebookresearch/XNLI
CC BY-NC 4.0
2018
['English', 'French', 'Spanish', 'German', 'Greek', 'Bulgarian', 'Russian', 'Turkish', 'Arabic', 'Vietnamese', 'Thai', 'Chinese', 'Hindi', 'Swahili', 'Urdu']
null
['public datasets']
text
null
An evaluation set for natural language inference (NLI), created by extending the development and test sets of the Multi-Genre Natural Language Inference corpus (MultiNLI) to 15 languages.
112,500
sentences
null
['Facebook']
null
null
null
null
false
GitHub
Free
null
['natural language inference']
null
null
null
['Alexis Conneau', 'Guillaume Lample', 'Ruty Rinott', 'Adina Williams', 'Samuel R. Bowman', 'Holger Schwenk', 'Veselin Stoyanov']
['Facebook AI Research', 'New York University']
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic,...
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
0
0
null
1
null
null
null
1
1
1
multi
test
X-stance
[{'Name': 'DE', 'Volume': 40200.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'FR', 'Volume': 14129.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'IT', 'Volume': 1172.7, 'Unit': 'sentences', 'Language': 'Italian'}]
null
http://doi.org/10.5281/zenodo.3831317
CC BY-NC 4.0
2020
['German', 'French', 'Italian']
null
['commentary']
text
null
A large-scale, multilingual (German, French, Italian) dataset for stance detection. It contains over 67,000 comments from Swiss political candidates on more than 150 political issues, formatted as question-comment pairs. The dataset is designed for cross-lingual and cross-target evaluation.
55,502
sentences
null
['University of Zurich']
null
null
null
null
false
zenodo
Free
null
['stance detection']
null
null
null
['Jannis Vamvas', 'Rico Sennrich']
['University of Zurich', 'University of Edinburgh']
We extract a large-scale stance detection dataset from comments written by candidates of elections in Switzerland. The dataset consists of German, French and Italian text, allowing for a cross-lingual evaluation of stance detection. It contains 67 000 comments on more than 150 political issues (targets). Unlike stance ...
1
1
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
DiS-ReX
[{'Name': 'English', 'Language': 'English', 'Volume': 532499.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Unit': 'sentences', 'Volume': 409087.0}, {'Name': 'Spanish', 'Language': 'Spanish', 'Unit': 'sentences', 'Volume': 456418.0}, {'Volume': 438315.0, 'Language': 'German', 'Name': 'German', 'Unit...
null
https://github.com/dair-iitd/DiS-ReX
unknown
2021
['English', 'German', 'Spanish', 'French']
null
['wikipedia']
text
null
DiS-ReX is a multilingual dataset for distantly supervised relation extraction (DS-RE) spanning English, German, Spanish, and French. It contains over 1.5 million sentences aligned with DBpedia, featuring 36 relation classes and a 'no relation' class, designed to be a challenging benchmark.
1,836,319
sentences
null
['Indian Institute of Technology']
null
null
null
null
false
GitHub
Free
null
['relation extraction']
null
null
null
['Abhyuday Bhartiya', 'Kartikeya Badola', 'Mausam']
['Indian Institute of Technology']
Distant supervision (DS) is a well established technique for creating large-scale datasets for relation extraction (RE) without using human annotations. However, research in DS-RE has been mostly limited to the English language. Constraining RE to a single language inhibits utilization of large amounts of data in other...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
RELX
[{'Name': 'English', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'French', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'German', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'Spanish', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'Spanis...
null
https://github.com/boun-tabi/RELX
MIT License
2020
['English', 'French', 'German', 'Spanish', 'Turkish']
null
['public datasets']
text
null
A public benchmark dataset for cross-lingual relation classification in English, French, German, Spanish, and Turkish. It contains 502 parallel sentences created by selecting a subset from the KBP-37 test set and having them professionally translated and annotated.
2,510
sentences
null
['Boğaziçi University']
null
null
null
null
false
GitHub
Free
null
['cross-lingual information retrieval']
null
null
null
['Abdullatif Köksal', 'Arzucan Özgür']
['Department of Computer Engineering, Boğaziçi University']
Relation classification is one of the key topics in information extraction, which can be used to construct knowledge bases or to provide useful information for question answering. Current approaches for relation classification are mainly focused on the English language and require lots of training data with human annot...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MultiSubs
[{'Language': 'English', 'Name': 'English', 'Volume': 2159635.0, 'Unit': 'sentences'}, {'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 2159635.0, 'Unit': 'sentences'}, {'Name': 'Portuguese', 'Language': 'Portuguese', 'Volume': 1796095.0, 'Unit': 'sentences'}, {'Name': 'French', 'Volume': 1063071.0, 'Language': 'Fr...
null
https://doi.org/10.5281/zenodo.5034604
CC BY 4.0
2022
['English', 'Spanish', 'Portuguese', 'French', 'German']
null
['TV Channels', 'public datasets']
text
null
A large-scale multimodal and multilingual dataset of images aligned to text fragments from movie subtitles. It aims to facilitate research on grounding words to images in their contextual usage in language. The images are aligned to text fragments rather than whole sentences, and the parallel texts are multilingual.
5,403,281
sentences
null
['Imperial College London', 'Federal University of Mato Grosso']
null
null
null
null
false
zenodo
Free
null
['machine translation', 'fill-in-the-blank']
null
null
null
['Josiah Wang', 'Pranava Madhyastha', 'Josiel Figueiredo', 'Chiraag Lala', 'Lucia Specia']
['Imperial College London', 'Federal University of Mato Grosso']
This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable...
1
1
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MEE
[{'Name': 'English', 'Language': 'English', 'Volume': 13000.0, 'Unit': 'documents'}, {'Name': 'Portuguese', 'Language': 'Portuguese', 'Volume': 1500.0, 'Unit': 'documents'}, {'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 3268.0, 'Unit': 'documents'}, {'Volume': 4479.0, 'Language': 'Polish', 'Name': 'Polish', 'Uni...
null
unknown
2022
['English', 'Spanish', 'Portuguese', 'Polish', 'Turkish', 'Hindi', 'Korean', 'Japanese']
null
['wikipedia']
text
null
A large-scale Multilingual Event Extraction (MEE) dataset covering 8 typologically different languages. Sourced from Wikipedia, it provides comprehensive annotations for entity mentions, event triggers, and event arguments across diverse topics like politics, technology, and military.
31,226
documents
null
['University of Oregon', 'Adobe Research']
null
null
null
null
false
other
Free
null
['named entity recognition']
null
null
null
['Amir Pouran Ben Veyseh', 'Javid Ebrahimi', 'Franck Dernoncourt', 'Thien Huu Nguyen']
['Department of Computer Science, University of Oregon', 'Adobe Research']
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE) that aims to recognize event mentions and their arguments (i.e., participants) from text. Due to its importance, extensive methods and resources have been developed for Event Extraction. However, one limitation of current research for ...
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
XCOPA
[{'Name': 'Estonian', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Estonian'}, {'Name': 'Haitian Creole', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Haitian Creole'}, {'Name': 'Indonesian', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Indonesian'}, {'Name': 'Italian', 'Volume': 600.0, 'Unit': 'sente...
null
https://github.com/cambridgeltl/xcopa
CC BY 4.0
2020
['Indonesian', 'Italian', 'Swahili', 'Thai', 'Turkish', 'Vietnamese', 'Chinese', 'Estonian', 'Haitian Creole', 'Eastern Apurímac Quechua', 'Tamil']
null
['public datasets']
text
null
XCOPA is a typologically diverse multilingual dataset for causal commonsense reasoning. It was created by translating and re-annotating the English COPA dataset's validation and test sets into 11 languages. The task is to choose the more plausible cause or effect for a given premise.
6,600
sentences
null
['Cambridge']
null
null
null
null
false
GitHub
Free
null
['commonsense reasoning']
null
null
null
['Edoardo M. Ponti', 'Goran Glavaš', 'Olga Majewska', 'Qianchu Liu', 'Ivan Vulić', 'Anna Korhonen']
['Language Technology Lab, TAL, University of Cambridge, UK', 'Data and Web Science Group, University of Mannheim, Germany']
In order to simulate human language capacity, natural language processing systems must be able to reason about the dynamics of everyday situations, including their possible causes and effects. Moreover, they should be able to generalise the acquired world knowledge to new languages, modulo cultural differences. Advance...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MLQA
[{'Name': 'en', 'Volume': 12738.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'ar', 'Volume': 5852.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'de', 'Volume': 5029.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'vi', 'Volume': 6006.0, 'Unit': 'sentences', 'Language': 'Vietnamese'}, {'Nam...
null
https://github.com/facebookresearch/mlqa
CC BY-SA 3.0
2,020
['English', 'Arabic', 'German', 'Vietnamese', 'Spanish', 'Simplified Chinese', 'Hindi']
null
['wikipedia']
text
null
MLQA is a multilingual question answering benchmark with over 12K instances in English and 5K in each of the other languages; each instance is parallel between 4 languages on average.
46,461
documents
null
['Facebook']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Patrick Lewis', 'Barlas Oğuz', 'Ruty Rinott', 'S. Riedel', 'Holger Schwenk']
['Facebook AI Research; University College London']
Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making training QA systems in other languages challenging. An alternative to buil...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
M2DS
[{'Name': 'English', 'Volume': 67000.0, 'Unit': 'documents', 'Language': 'English'}, {'Name': 'Tamil', 'Volume': 32000.0, 'Unit': 'documents', 'Language': 'Tamil'}, {'Name': 'Japanese', 'Volume': 29000.0, 'Unit': 'documents', 'Language': 'Japanese'}, {'Name': 'Korean', 'Volume': 27000.0, 'Unit': 'documents', 'Language'...
null
https://huggingface.co/datasets/KushanH/m2ds
unknown
2024
['English', 'Tamil', 'Japanese', 'Korean', 'Sinhala']
null
['news articles', 'public datasets']
text
null
M2DS is a multilingual multi-document summarization (MDS) dataset. It contains 180,000 news articles from the BBC, organized into 51,500 clusters across five languages: English, Japanese, Korean, Tamil, and Sinhala. The data covers the period from 2010 to 2023.
180,000
documents
null
['University of Moratuwa', 'ConscientAI']
null
null
null
null
false
HuggingFace
Free
null
['summarization']
null
null
null
['Kushan Hewapathirana', 'Nisansa de Silva', 'C.D. Athuraliya']
['Dept. of Computer Science & Engineering, University of Moratuwa, Sri Lanka', 'ConscientAI, Sri Lanka']
In the rapidly evolving digital era, there is an increasing demand for concise information as individuals seek to distil key insights from various sources. Recent attention from researchers on Multi-document Summarisation (MDS) has resulted in diverse datasets covering customer reviews, academic papers, medical and leg...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
XOR-TyDi
[{'Name': 'Ar', 'Volume': 17218.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'Bn', 'Volume': 2682.0, 'Unit': 'sentences', 'Language': 'Bengali'}, {'Name': 'Fi', 'Volume': 9132.0, 'Unit': 'sentences', 'Language': 'Finnish'}, {'Name': 'Ja', 'Volume': 6531.0, 'Unit': 'sentences', 'Language': 'Japanese'}, {'Name...
null
https://nlp.cs.washington.edu/xorqa/
CC BY-SA 4.0
2021
['Arabic', 'Bengali', 'Finnish', 'Japanese', 'Korean', 'Russian', 'Telugu']
null
['public datasets']
text
null
XOR-TyDi QA brings together information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annota...
53,059
sentences
null
[]
null
null
null
null
false
other
Free
null
['cross-lingual information retrieval', 'question answering']
null
null
null
['Akari Asai', 'Jungo Kasai', 'Jonathan H. Clark', 'Kenton Lee', 'Eunsol Choi', 'Hannaneh Hajishirzi']
['University of Washington', 'University of Washington', 'Google Research', 'The University of Texas at Austin; Allen Institute for AI']
Multilingual question answering tasks typically assume answers exist in the same language as the question. Yet in practice, many languages face both information scarcity -- where languages have few reference articles -- and information asymmetry -- where questions reference concepts from other cultures. This work exten...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
0
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Multilingual Hate Speech Detection Dataset
[{'Name': 'Arabic', 'Volume': 5790.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'English', 'Volume': 96323.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'German', 'Volume': 6155.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'Indonesian', 'Volume': 13882.0, 'Unit': 'sentences', 'Language'...
null
https://github.com/hate-alert/DE-LIMIT
MIT License
2020
['Arabic', 'English', 'German', 'Indonesian', 'Italian', 'Polish', 'Portuguese', 'Spanish', 'French']
null
['public datasets', 'social media']
text
null
A multilingual hate speech detection dataset combining existing corpora, including MLMA and L-HSAB, covering 9 languages drawn from 16 different sources.
159,753
sentences
null
['Indian Institute of Technology Kharagpur']
null
null
null
null
false
GitHub
Free
null
['offensive language detection']
null
null
null
['Sai Saket Aluru', 'Binny Mathew', 'Punyajoy Saha', 'Animesh Mukherjee']
['Indian Institute of Technology Kharagpur']
Hate speech detection is a challenging problem with most of the datasets available in only one language: English. In this paper, we conduct a large scale analysis of multilingual hate speech in 9 languages from 16 different sources. We observe that in low resource setting, simple models such as LASER embedding with log...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MINION
[{'Name': 'English', 'Volume': 13000.0, 'Unit': 'documents', 'Language': 'English'}, {'Name': 'Spanish', 'Volume': 1500.0, 'Unit': 'documents', 'Language': 'Spanish'}, {'Name': 'Portuguese', 'Volume': 3268.0, 'Unit': 'documents', 'Language': 'Portuguese'}, {'Name': 'Polish', 'Volume': 4479.0, 'Unit': 'documents', 'Lang...
null
unknown
2022
['English', 'Spanish', 'Portuguese', 'Polish', 'Turkish', 'Hindi', 'Japanese', 'Korean']
null
['wikipedia']
text
null
MINION is a large-scale, multilingual dataset for Event Detection (ED). It contains over 50,000 manually annotated event triggers in 8 languages (English, Spanish, Portuguese, Polish, Turkish, Hindi, Japanese, Korean) sourced from Wikipedia articles. The annotation schema is a pruned version of the ACE 2005 ontology.
31,226
documents
null
['University of Oregon']
null
null
null
null
false
other
Free
null
['other']
null
null
null
['Amir Pouran Ben Veyseh', 'Minh Van Nguyen', 'Franck Dernoncourt', 'Thien Huu Nguyen']
['Dept. of Computer and Information Science, University of Oregon, Eugene, OR, USA', 'Adobe Research, Seattle, WA, USA']
Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research efforts in recent years for English text, the task of ED in other languages has been significantly less explored. Switching to non-English languages, important research questions for ED...
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
SEAHORSE
[{'Name': 'de', 'Language': 'German', 'Volume': 14591.0, 'Unit': 'sentences'}, {'Name': 'en', 'Language': 'English', 'Volume': 22339.0, 'Unit': 'sentences'}, {'Name': 'es', 'Language': 'Spanish', 'Volume': 14749.0, 'Unit': 'sentences'}, {'Name': 'ru', 'Language': 'Russian', 'Volume': 14542.0, 'Unit': 'sentences'}, {'Na...
null
https://goo.gle/seahorse
CC BY 4.0
2023
['German', 'English', 'Spanish', 'Russian', 'Turkish', 'Vietnamese']
null
['public datasets']
text
null
SEAHORSE is a large-scale dataset for multilingual, multifaceted summarization evaluation. It consists of 96,645 summaries with human ratings along 6 quality dimensions: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness. It covers 6 languages, 9 systems, and 4 summarization datasets.
96,645
sentences
null
['Google']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Elizabeth Clark', 'Shruti Rijhwani', 'Sebastian Gehrmann', 'Joshua Maynez', 'Roee Aharoni', 'Vitaly Nikolaev', 'Thibault Sellam', 'Aditya Siddhant', 'Dipanjan Das', 'Ankur P. Parikh']
['Google DeepMind', 'Google Research']
Reliable automatic evaluation of summarization systems is challenging due to the multifaceted and subjective nature of the task. This is especially the case for languages other than English, where human evaluations are scarce. In this work, we introduce SEAHORSE, a dataset for multilingual, multifaceted summarization e...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Mintaka
[{'Name': 'English', 'Language': 'English', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'Arabic', 'Language': 'Arabic', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'German', 'Volume': 20000.0, 'Language': 'German', 'Unit': 'se...
null
https://github.com/amazon-research/mintaka
CC BY 4.0
2,022
['English', 'Arabic', 'French', 'German', 'Hindi', 'Italian', 'Japanese', 'Portuguese', 'Spanish']
null
['wikipedia']
text
null
Mintaka is a large, complex, naturally-elicited, and multilingual question answering dataset. It contains 20,000 English question-answer pairs, which have been translated into 8 other languages, totaling 180,000 samples. The dataset is annotated with Wikidata entities and includes 8 types of complex questions.
180,000
sentences
null
['Amazon']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Priyanka Sen', 'Alham Fikri Aji', 'Amir Saffari']
['Amazon Alexa AI']
We introduce Mintaka, a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian, Japanese, Portu...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Multi2WOZ
[{'Name': 'Arabic', 'Language': 'Arabic', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'Chinese', 'Language': 'Chinese', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'German', 'Language': 'German', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'Russian', 'Language': 'Russian', 'Volume': 29500.0, 'Unit': '...
null
https://github.com/umanlp/Multi2WOZ
MIT License
2,022
['Arabic', 'Chinese', 'German', 'Russian']
null
['public datasets']
text
null
A multilingual, multi-domain task-oriented dialog (TOD) dataset in Arabic, Chinese, German, and Russian. It was created by translating and manually post-editing the 2,000 development and test dialogs from the English MultiWOZ 2.1 dataset, enabling reliable cross-lingual transfer evaluation.
118,000
sentences
null
['University of Mannheim']
null
null
null
null
false
GitHub
Free
null
['instruction tuning']
null
null
null
['Chia-Chien Hung', 'Anne Lauscher', 'Ivan Vulić', 'Simone Paolo Ponzetto', 'Goran Glavaš']
['Data and Web Science Group, University of Mannheim, Germany', 'MilaNLP, Bocconi University, Italy', 'LTL, University of Cambridge, UK', 'CAIDAS, University of Würzburg, Germany']
Research on (multi-domain) task-oriented dialog (TOD) has predominantly focused on the English language, primarily due to the shortage of robust TOD datasets in other languages, preventing the systematic investigation of cross-lingual transfer for this crucial NLP application area. In this work, we introduce Multi2WOZ,...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MTOP
[{'Name': 'English', 'Volume': 22288.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'German', 'Volume': 18788.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'French', 'Volume': 16584.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'Spanish', 'Volume': 15459.0, 'Unit': 'sentences', 'Language':...
null
https://fb.me/mtop_dataset
unknown
2,021
['English', 'German', 'French', 'Spanish', 'Hindi', 'Thai']
null
['other']
text
null
MTOP is a multilingual, almost-parallel dataset for task-oriented semantic parsing. It comprises 100k annotated utterances in 6 languages (English, German, French, Spanish, Hindi, Thai) across 11 domains. The dataset is designed to handle complex, nested queries through a compositional representation scheme.
104,445
sentences
null
['Facebook']
null
null
null
null
true
other
Free
null
['named entity recognition', 'intent classification']
null
null
null
['Haoran Li', 'Abhinav Arora', 'Shuohui Chen', 'Anchit Gupta', 'Sonal Gupta', 'Yashar Mehdad']
['Facebook']
Scaling semantic parsing models for task-oriented dialog systems to new languages is often expensive and time-consuming due to the lack of available datasets. Available datasets suffer from several shortcomings: a) they contain few languages b) they contain small amounts of labeled examples per language c) they are bas...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
X-RiSAWOZ
[{'Name': 'Chinese', 'Language': 'Chinese', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'English', 'Language': 'English', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'Hindi', 'Language': 'Hindi', 'Volume': 18000.0, 'Unit': 'se...
null
https://github.com/stanford-oval/dialogues
custom
2,023
['Chinese', 'English', 'French', 'Hindi', 'Korean']
null
['public datasets']
text
null
A multi-domain, large-scale, high-quality task-oriented dialogue benchmark, produced by translating the Chinese RiSAWOZ data into four diverse languages (English, French, Hindi, and Korean) plus one code-mixed English-Hindi variety. It is an end-to-end dataset for building fully functioning agents.
90,000
sentences
null
['Stanford University']
null
null
null
null
false
GitHub
Free
null
['instruction tuning']
null
null
null
['Mehrad Moradshahi', 'Tianhao Shen', 'Kalika Bali', 'Monojit Choudhury', 'Gaël de Chalendar', 'Anmol Goel', 'Sungkyun Kim', 'Prashant Kodali', 'Ponnurangam Kumaraguru', 'Nasredine Semmar', 'Sina J. Semnani', 'Jiwon Seo', 'Vivek Seshadri', 'Manish Shrivastava', 'Michael Sun', 'Aditya Yadavalli', 'Chaobin You', 'Deyi Xi...
['Stanford University', 'Tianjin University', 'Microsoft', 'Université Paris-Saclay', 'International Institute of Information Technology, Hyderabad', 'Hanyang University', 'Karya Inc.']
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese Ri...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
PRESTO
[{'Language': 'German', 'Name': 'German', 'Volume': 83584.0, 'Unit': 'sentences'}, {'Name': 'English', 'Unit': 'sentences', 'Language': 'English', 'Volume': 95671.0}, {'Unit': 'sentences', 'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 96164.0}, {'Volume': 95870.0, 'Unit': 'sentences', 'Language': 'French', 'Name'...
null
https://github.com/google-research-datasets/presto
CC BY 4.0
2,023
['German', 'English', 'Spanish', 'French', 'Hindi', 'Japanese']
null
['other']
text
null
PRESTO is a public, multilingual dataset of over 550K contextual conversations between humans and virtual assistants for parsing realistic task-oriented dialogs. It contains challenges like disfluencies, code-switching, and user revisions, and provides structured context (contacts, lists) for each example across six la...
552,924
sentences
null
['Google Inc.']
null
null
null
null
false
GitHub
Free
null
['intent classification', 'instruction tuning']
null
null
null
['Rahul Goel', 'Waleed Ammar', 'Aditya Gupta', 'Siddharth Vashishtha', 'Motoki Sano', 'Faiz Surani', 'Max Chang', 'HyunJeong Choe', 'David Greene', 'Kyle He', 'Rattima Nitisaroj', 'Anna Trukhina', 'Shachi Paul', 'Pararth Shah', 'Rushin Shah', 'Zhou Yu']
['Google Inc.', 'University of Rochester', 'University of California, Santa Barbara', 'Columbia University']
Research interest in task-oriented dialogs has increased as systems such as Google Assistant, Alexa and Siri have become ubiquitous in everyday life. However, the impact of academic research in this area has been limited by the lack of datasets that realistically capture the wide array of user pain points. To enable re...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
LAHM
[{'Name': 'English', 'Volume': 105120.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'Hindi', 'Volume': 32734.0, 'Unit': 'sentences', 'Language': 'Hindi'}, {'Name': 'Arabic', 'Volume': 5394.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'French', 'Volume': 20809.0, 'Unit': 'sentences', 'Language': 'F...
null
unknown
2,023
['English', 'Hindi', 'Arabic', 'French', 'German', 'Spanish']
null
['social media', 'news articles', 'public datasets']
text
null
A large-scale, semi-supervised dataset for multilingual and multi-domain hate speech identification. It contains nearly 300k tweets across 6 languages (English, Hindi, Arabic, French, German, Spanish) and 5 domains (Abuse, Racism, Sexism, Religious Hate, Extremism), created using a 3-layer annotation pipeline.
227,836
sentences
null
['Logically.ai']
null
null
null
null
false
other
Free
null
['offensive language detection']
null
null
null
['Ankit Yadav', 'Shubham Chandel', 'Sushant Chatufale', 'Anil Bandhakavi']
['Logically.ai']
Current research on hate speech analysis is typically oriented towards monolingual and single classification tasks. In this paper, we present a new multilingual hate speech analysis dataset for English, Hindi, Arabic, French, German and Spanish languages for multiple domains across hate speech - Abuse, Racism, Sexism, ...
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
0
0
null
1
null
null
null
1
1
1
multi
test
MARC
[{'Name': 'English', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'Japanese', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'Japanese'}, {'Name': 'German', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'French', 'Volume': 2100000.0, 'Unit': 'sentences', ...
null
https://registry.opendata.aws/amazon-reviews-ml
custom
2,020
['Japanese', 'English', 'German', 'French', 'Spanish', 'Chinese']
null
['reviews']
text
null
Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese, and Spanish, collected between November 1, 2015 and November 1, 2019.
12,600,000
sentences
null
['Amazon']
null
null
null
null
false
other
Free
null
['sentiment analysis', 'review classification']
null
null
null
['Phillip Keung', 'Yichao Lu', 'Gyorgy Szarvas', 'Noah A. Smith']
['Amazon', 'University of Washington']
We present the Multilingual Amazon Reviews Corpus (MARC), a large-scale collection of Amazon reviews for multilingual text classification. The corpus contains reviews in English, Japanese, German, French, Spanish, and Chinese, which were collected between 2015 and 2019. Each record in the dataset contains the review te...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MLSUM
[{'Name': 'FR', 'Volume': 424763.0, 'Unit': 'documents', 'Language': 'French'}, {'Name': 'DE', 'Volume': 242982.0, 'Unit': 'documents', 'Language': 'German'}, {'Name': 'ES', 'Volume': 290645.0, 'Unit': 'documents', 'Language': 'Spanish'}, {'Name': 'RU', 'Volume': 27063.0, 'Unit': 'documents', 'Language': 'Russian'}, {'...
null
https://github.com/recitalAI/MLSUM
custom
2,020
['French', 'German', 'Spanish', 'Russian', 'Turkish']
null
['news articles', 'web pages']
text
null
MLSUM is a large-scale multilingual summarization dataset with over 1.5 million article/summary pairs in French, German, Spanish, Russian, and Turkish. Collected from online newspapers, it is designed to complement the English CNN/Daily Mail dataset, enabling new research in cross-lingual summarization.
1,259,070
documents
null
['reciTAL', 'Sorbonne Université', 'CNRS']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Thomas Scialom', 'Paul-Alexis Dray', 'Sylvain Lamprier', 'Benjamin Piwowarski', 'Jacopo Staiano']
['reciTAL, Paris, France', 'Sorbonne Université, CNRS, LIP6, F-75005 Paris, France', 'CNRS, France']
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected d...
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1