Qasper (Question Answering on Scientific Research Papers)
This dataset card serves as a base template for the Qasper dataset released by the Allen Institute for AI. It was generated using the raw dataset card template. Qasper is a dataset of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence for their answers.
Dataset Details - Abstract of the Paper
Readers of academic research papers often read with the goal of answering specific questions. Question Answering systems that can answer those questions can make consumption of the content much more efficient. However, building such tools requires data that reflect the difficulty of the task arising from complex reasoning about claims made in multiple parts of a paper. In contrast, existing information-seeking question answering datasets usually contain questions about generic factoid-type information. We therefore present QASPER, a dataset of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence to answers. We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers, motivating further research in document-grounded, information-seeking QA, which our dataset is designed to facilitate.
Dataset Description
- Curated by: Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner
- Shared by: Allen Institute for AI, Paul G. Allen School of CSE, University of Washington
Dataset Sources
- Repository: GitHub Repo
- Paper: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers
Dataset Structure
{
  "context": "...",
  "questions": ["question1", "question2", ...],
  "answers": [
    [answer1_to_question1, answer2_to_question1, ...],
    [answer1_to_question2, answer2_to_question2, ...],
    ...
  ]
}
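As a quick illustration of this schema, a record can be traversed like so. This is a minimal sketch: the record below is a small hypothetical example constructed to match the structure above, not an actual Qasper entry.

```python
# Hypothetical record following the schema above (not real Qasper data).
record = {
    "context": "Full text of an NLP paper goes here ...",
    "questions": [
        "What is the evaluation metric?",
        "How big is the dataset?",
    ],
    "answers": [
        ["F1 score"],                         # annotator answers to question 1
        ["5,049 questions", "1,585 papers"],  # annotator answers to question 2
    ],
}

# answers[i] holds all annotator answers for questions[i],
# so the two lists are always index-aligned.
for question, answer_list in zip(record["questions"], record["answers"]):
    print(f"{question} -> {answer_list}")
```

Note that each question maps to a *list* of answers, since questions are answered independently by multiple NLP practitioners.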
Citation
BibTeX:
@misc{dasigi2021datasetinformationseekingquestionsanswers,
  title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
  author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
  year={2021},
  eprint={2105.03011},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2105.03011},
}
Dataset Card Author
Hulki Çıray, researcher at GGLab