Dataset: lisi1
Language: English

lisi1 parquet-converter committed commit b21f1af · 0 parent(s)

Duplicate from allenai/qasper

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +235 -0
  3. dataset_infos.json +212 -0
  4. dummy/qasper/0.1.0/dummy_data.zip +3 -0
  5. qasper.py +152 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
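Every pattern above routes a binary file type through Git LFS rather than plain Git. As a quick illustration of what these lines encode, here is a small throwaway parser (the `parse_gitattributes` helper is mine, not part of this repo) that splits each rule into a glob pattern and its attributes:

```python
# Illustrative sketch: parse .gitattributes-style lines into (pattern, attributes)
# pairs. Attributes of the form "key=value" become dict entries; bare flags like
# "-text" are stored with the value True.
def parse_gitattributes(text):
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *attrs = line.split()
        rules.append((pattern, dict(a.split("=") if "=" in a else (a, True) for a in attrs)))
    return rules

rules = parse_gitattributes("*.parquet filter=lfs diff=lfs merge=lfs -text\n")
```

Running it on the `*.parquet` rule above yields the pattern plus the `filter`, `diff`, and `merge` attributes all pointing at `lfs`.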
README.md ADDED
@@ -0,0 +1,235 @@
---
pretty_name: QASPER
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
language_bcp47:
- en-US
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|s2orc
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: qasper
---

# Dataset Card for Qasper

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Annotations](#annotations)
    - [Annotation process](#annotation-process)
    - [Who are the annotators?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://allenai.org/data/qasper](https://allenai.org/data/qasper)
- **Demo:** [https://qasper-demo.apps.allenai.org/](https://qasper-demo.apps.allenai.org/)
- **Paper:** [https://arxiv.org/abs/2105.03011](https://arxiv.org/abs/2105.03011)
- **Blogpost:** [https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c](https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c)
- **Leaderboards:** [https://paperswithcode.com/dataset/qasper](https://paperswithcode.com/dataset/qasper)

### Dataset Summary

QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence for their answers.

### Supported Tasks and Leaderboards

- `question-answering`: The dataset can be used to train a model for question answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves a token F1 score of 33.63 and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper).

- `evidence-selection`: The dataset can be used to train a model for evidence selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves an F1 score of 39.37 and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper).

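Both tasks above are scored with token-overlap F1. For intuition, here is a rough sketch of how a SQuAD-style token F1 between a predicted and a gold answer string is commonly computed; this is an illustration with naive whitespace tokenization, not the official Qasper evaluator:

```python
from collections import Counter

def token_f1(prediction, gold):
    """Token-level F1 between two answer strings (illustrative sketch)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Note that the official evaluator additionally handles unanswerable and yes/no answers, so this sketch only covers the string-overlap case.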
### Languages

English, as it is used in research papers.

## Dataset Structure

### Data Instances

A typical instance in the dataset:

```
{
  'id': "Paper ID (string)",
  'title': "Paper Title",
  'abstract': "paper abstract ...",
  'full_text': {
    'paragraphs': [["section1_paragraph1_text", "section1_paragraph2_text", ...],
                   ["section2_paragraph1_text", "section2_paragraph2_text", ...]],
    'section_name': ["section1_title", "section2_title"], ...},
  'qas': {
    'answers': [{
      'annotation_id': ["q1_answer1_annotation_id", "q1_answer2_annotation_id"],
      'answer': [{
        'unanswerable': False,
        'extractive_spans': ["q1_answer1_extractive_span1", "q1_answer1_extractive_span2"],
        'yes_no': False,
        'free_form_answer': "q1_answer1",
        'evidence': ["q1_answer1_evidence1", "q1_answer1_evidence2", ...],
        'highlighted_evidence': ["q1_answer1_highlighted_evidence1", "q1_answer1_highlighted_evidence2", ...]
      },
      {
        'unanswerable': False,
        'extractive_spans': ["q1_answer2_extractive_span1", "q1_answer2_extractive_span2"],
        'yes_no': False,
        'free_form_answer': "q1_answer2",
        'evidence': ["q1_answer2_evidence1", "q1_answer2_evidence2", ...],
        'highlighted_evidence': ["q1_answer2_highlighted_evidence1", "q1_answer2_highlighted_evidence2", ...]
      }],
      'worker_id': ["q1_answer1_worker_id", "q1_answer2_worker_id"]
    }, {...["question2's answers"]...}, {...["question3's answers"]...}],
    'question': ["question1", "question2", "question3", ...],
    'question_id': ["question1_id", "question2_id", "question3_id", ...],
    'question_writer': ["question1_writer_id", "question2_writer_id", "question3_writer_id", ...],
    'nlp_background': ["question1_writer_nlp_background", "question2_writer_nlp_background", ...],
    'topic_background': ["question1_writer_topic_background", "question2_writer_topic_background", ...],
    'paper_read': ["question1_writer_paper_read_status", "question2_writer_paper_read_status", ...],
    'search_query': ["question1_search_query", "question2_search_query", "question3_search_query", ...]
  }
}
```
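Note that `qas` stores its fields column-wise: `question`, `question_id`, and `answers` are parallel lists indexed by question. A minimal sketch of pairing each question with its answers, using a tiny hand-made instance (all values below are placeholders, not real data):

```python
# Illustrative sketch: in the columnar "qas" layout, question i's answers live at
# the same index i of the parallel "answers" list, so pairing them is a zip.
instance = {
    "qas": {
        "question": ["q1", "q2"],
        "question_id": ["id1", "id2"],
        "answers": [
            {"answer": [{"free_form_answer": "a1"}], "worker_id": ["w1"]},
            {"answer": [{"free_form_answer": "a2"}], "worker_id": ["w2"]},
        ],
    }
}

pairs = list(zip(instance["qas"]["question"], instance["qas"]["answers"]))
```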
### Data Fields

The following is an excerpt from the dataset README:

Within "qas", some fields should be obvious. Here is some explanation about the others:

#### Fields specific to questions:

- "nlp_background" shows the experience the question writer had. The values can be "zero" (no experience), "two" (0–2 years of experience), "five" (2–5 years of experience), and "infinity" (> 5 years of experience). The field may also be empty, indicating the writer chose not to share this information.

- "topic_background" shows how familiar the question writer was with the topic of the paper. The values are "unfamiliar", "familiar", "research" (meaning that the topic is the research area of the writer), or null.

- "paper_read", when specified, shows whether the question writer had read the paper.

- "search_query", if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.

#### Fields specific to answers

Unanswerable answers have "unanswerable" set to true. Each of the remaining answers has exactly one of the following fields non-empty:

- "extractive_spans" are spans in the paper which serve as the answer.
- "free_form_answer" is a written-out answer.
- "yes_no" is true iff the answer is Yes, and false iff the answer is No.

"evidence" is the set of paragraphs, figures, or tables used to arrive at the answer. Tables and figures start with the string "FLOAT SELECTED".

"highlighted_evidence" is the set of sentences the answer providers selected as evidence, if they chose textual evidence. The "evidence" field maps these sentences to the paragraph level: textual evidence in the "evidence" field is guaranteed to consist of entire paragraphs, while that is not the case for "highlighted_evidence".

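Putting the answer rules above together, one way to collapse a single answer dict into a display string might look like the sketch below. `resolve_answer` is a hypothetical helper, not part of the dataset or its tooling, and its priority order is only an illustration of the "exactly one non-empty field" convention:

```python
def resolve_answer(answer):
    """Collapse one answer dict into a display string (illustrative sketch only)."""
    if answer.get("unanswerable"):
        return "Unanswerable"
    if answer.get("extractive_spans"):          # non-empty list of spans
        return "; ".join(answer["extractive_spans"])
    if answer.get("free_form_answer"):          # non-empty string
        return answer["free_form_answer"]
    # Otherwise treat it as a yes/no question.
    return "Yes" if answer.get("yes_no") else "No"
```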
### Data Splits

|                     | Train | Valid |
| ------------------- | ----- | ----- |
| Number of papers    | 888   | 281   |
| Number of questions | 2593  | 1005  |
| Number of answers   | 2675  | 1764  |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

NLP papers: The full text of the papers is extracted from [S2ORC](https://huggingface.co/datasets/s2orc) (Lo et al., 2020).

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

"The annotators are NLP practitioners, not expert researchers, and it is likely that an expert would score higher."

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Crowdsourced NLP practitioners

### Licensing Information

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0)

### Citation Information

```
@inproceedings{Dasigi2021ADO,
    title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
    author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
    year={2021}
}
```

### Contributions

Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1,212 @@
```json
{
    "qasper": {
        "description": "A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.\n",
        "citation": "@inproceedings{Dasigi2021ADO,\n title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},\n author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},\n year={2021}\n}\n",
        "homepage": "https://allenai.org/data/qasper",
        "license": "CC BY 4.0",
        "features": {
            "id": {"dtype": "string", "id": null, "_type": "Value"},
            "title": {"dtype": "string", "id": null, "_type": "Value"},
            "abstract": {"dtype": "string", "id": null, "_type": "Value"},
            "full_text": {
                "feature": {
                    "section_name": {"dtype": "string", "id": null, "_type": "Value"},
                    "paragraphs": [{"dtype": "string", "id": null, "_type": "Value"}]
                },
                "length": -1,
                "id": null,
                "_type": "Sequence"
            },
            "qas": {
                "feature": {
                    "question": {"dtype": "string", "id": null, "_type": "Value"},
                    "question_id": {"dtype": "string", "id": null, "_type": "Value"},
                    "nlp_background": {"dtype": "string", "id": null, "_type": "Value"},
                    "topic_background": {"dtype": "string", "id": null, "_type": "Value"},
                    "paper_read": {"dtype": "string", "id": null, "_type": "Value"},
                    "search_query": {"dtype": "string", "id": null, "_type": "Value"},
                    "question_writer": {"dtype": "string", "id": null, "_type": "Value"},
                    "answers": {
                        "feature": {
                            "answer": {
                                "unanswerable": {"dtype": "bool", "id": null, "_type": "Value"},
                                "extractive_spans": {
                                    "feature": {"dtype": "string", "id": null, "_type": "Value"},
                                    "length": -1,
                                    "id": null,
                                    "_type": "Sequence"
                                },
                                "yes_no": {"dtype": "bool", "id": null, "_type": "Value"},
                                "free_form_answer": {"dtype": "string", "id": null, "_type": "Value"},
                                "evidence": {
                                    "feature": {"dtype": "string", "id": null, "_type": "Value"},
                                    "length": -1,
                                    "id": null,
                                    "_type": "Sequence"
                                },
                                "highlighted_evidence": {
                                    "feature": {"dtype": "string", "id": null, "_type": "Value"},
                                    "length": -1,
                                    "id": null,
                                    "_type": "Sequence"
                                }
                            },
                            "annotation_id": {"dtype": "string", "id": null, "_type": "Value"},
                            "worker_id": {"dtype": "string", "id": null, "_type": "Value"}
                        },
                        "length": -1,
                        "id": null,
                        "_type": "Sequence"
                    }
                },
                "length": -1,
                "id": null,
                "_type": "Sequence"
            },
            "figures_and_tables": {
                "feature": {
                    "caption": {"dtype": "string", "id": null, "_type": "Value"},
                    "file": {"dtype": "string", "id": null, "_type": "Value"}
                },
                "length": -1,
                "id": null,
                "_type": "Sequence"
            }
        },
        "post_processed": null,
        "supervised_keys": null,
        "builder_name": "qasper",
        "config_name": "qasper",
        "version": {
            "version_str": "0.3.0",
            "description": null,
            "major": 0,
            "minor": 3,
            "patch": 0
        },
        "splits": {
            "train": {"name": "train", "num_bytes": 27277970, "num_examples": 888, "dataset_name": "qasper"},
            "validation": {"name": "validation", "num_bytes": 9535330, "num_examples": 281, "dataset_name": "qasper"},
            "test": {"name": "test", "num_bytes": 9535330, "num_examples": 416, "dataset_name": "qasper"}
        },
        "download_checksums": {
            "https://qasper-dataset.s3.us-west-2.amazonaws.com/qasper-train-dev-v0.3.tgz": {
                "num_bytes": 10835856,
                "checksum": "a28fdf966db827bcee3d873107d6b6669864fb7ca8fbf73a192f5e39191bdb5a"
            },
            "https://qasper-dataset.s3.us-west-2.amazonaws.com/qasper-test-and-evaluator-v0.3.tgz": {
                "num_bytes": 3865061,
                "checksum": "72a52a41193e2838b8074f80ac074b94f956b84886c36a61c58a7df4171bdd72"
            }
        },
        "download_size": 14700917,
        "post_processing_size": null,
        "dataset_size": 36813300,
        "size_in_bytes": 68556447
    }
}
```
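As a sanity check on the numbers above, the split metadata can be read back out of a `dataset_infos.json`-style dict. The fragment below is trimmed to just the fields used here:

```python
import json

# Illustrative sketch: tabulate per-split example counts from a
# dataset_infos.json-style document (trimmed fragment, values from above).
infos = json.loads("""
{"qasper": {"splits": {
  "train": {"num_examples": 888},
  "validation": {"num_examples": 281},
  "test": {"num_examples": 416}
}}}
""")

sizes = {name: s["num_examples"] for name, s in infos["qasper"]["splits"].items()}
```

The per-split paper counts sum to the 1,585 papers mentioned in the description.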
dummy/qasper/0.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:2b220315af309990e51221f07dfac6fad8b43b00b7d8f267b1f555c797bb5c2e
size 15066
```
qasper.py ADDED
@@ -0,0 +1,152 @@
```python
# coding=utf-8
# Copyright 2022 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""Qasper: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers."""


import json

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@inproceedings{Dasigi2021ADO,
    title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
    author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
    year={2021}
}
"""
_LICENSE = "CC BY 4.0"
_DESCRIPTION = """\
A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
"""

_HOMEPAGE = "https://allenai.org/data/qasper"
_URL_TRAIN_DEV = "https://qasper-dataset.s3.us-west-2.amazonaws.com/qasper-train-dev-v0.3.tgz"
_URL_TEST = "https://qasper-dataset.s3.us-west-2.amazonaws.com/qasper-test-and-evaluator-v0.3.tgz"
_DATA_FILES = {
    "train": "qasper-train-v0.3.json",
    "dev": "qasper-dev-v0.3.json",
    "test": "qasper-test-v0.3.json",
}

_VERSION = "0.3.0"


class Qasper(datasets.GeneratorBasedBuilder):
    """Qasper: A Dataset of Information-Seeking Q&A Anchored in Research Papers."""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="qasper",
            version=datasets.Version(_VERSION),
            description=_DESCRIPTION,
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "title": datasets.Value("string"),
                "abstract": datasets.Value("string"),
                "full_text": datasets.features.Sequence(
                    {
                        "section_name": datasets.Value("string"),
                        "paragraphs": [datasets.Value("string")],
                    }
                ),
                "qas": datasets.features.Sequence(
                    {
                        "question": datasets.Value("string"),
                        "question_id": datasets.Value("string"),
                        "nlp_background": datasets.Value("string"),
                        "topic_background": datasets.Value("string"),
                        "paper_read": datasets.Value("string"),
                        "search_query": datasets.Value("string"),
                        "question_writer": datasets.Value("string"),
                        "answers": datasets.features.Sequence(
                            {
                                "answer": {
                                    "unanswerable": datasets.Value("bool"),
                                    "extractive_spans": datasets.features.Sequence(datasets.Value("string")),
                                    "yes_no": datasets.Value("bool"),
                                    "free_form_answer": datasets.Value("string"),
                                    "evidence": datasets.features.Sequence(datasets.Value("string")),
                                    "highlighted_evidence": datasets.features.Sequence(datasets.Value("string")),
                                },
                                "annotation_id": datasets.Value("string"),
                                "worker_id": datasets.Value("string"),
                            }
                        ),
                    }
                ),
                "figures_and_tables": datasets.features.Sequence(
                    {
                        "caption": datasets.Value("string"),
                        "file": datasets.Value("string"),
                    }
                ),
            }
        )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        archive_train_dev, archive_test = dl_manager.download((_URL_TRAIN_DEV, _URL_TEST))

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": _DATA_FILES["train"], "files": dl_manager.iter_archive(archive_train_dev)},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"filepath": _DATA_FILES["dev"], "files": dl_manager.iter_archive(archive_train_dev)},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"filepath": _DATA_FILES["test"], "files": dl_manager.iter_archive(archive_test)},
            ),
        ]

    def _generate_examples(self, filepath, files):
        """This function returns the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        for path, f in files:
            if path == filepath:
                qasper = json.loads(f.read().decode("utf-8"))
                for id_ in qasper:
                    qasper[id_]["id"] = id_
                    yield id_, qasper[id_]
                break
```
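The generator at the end of the script scans the archive for the one JSON file belonging to the requested split, copies each paper's key into the record as `id`, and yields it. That pattern can be exercised without downloading anything by faking the `(path, file object)` pairs that `dl_manager.iter_archive` yields; the file names and contents below are made up for illustration:

```python
import io
import json

# Illustrative sketch: mimic the _generate_examples logic on an in-memory
# "archive" of (path, file-object) pairs.
def generate_examples(filepath, files):
    for path, f in files:
        if path == filepath:
            data = json.loads(f.read().decode("utf-8"))
            for id_ in data:
                data[id_]["id"] = id_  # copy the paper key into the record
                yield id_, data[id_]
            break

payload = json.dumps({"1909.00694": {"title": "Some paper"}}).encode("utf-8")
files = [("other.json", io.BytesIO(b"{}")), ("qasper-train-v0.3.json", io.BytesIO(payload))]
examples = list(generate_examples("qasper-train-v0.3.json", files))
```

Non-matching archive members are skipped, and iteration stops after the split's file has been consumed.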