levibismarkkofi parquet-converter committed on
Commit 7c6e153 · 0 Parent(s):

Duplicate from ehovy/race


Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,352 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - multiple-choice
+ task_ids:
+ - multiple-choice-qa
+ paperswithcode_id: race
+ pretty_name: RACE
+ dataset_info:
+ - config_name: all
+   features:
+   - name: example_id
+     dtype: string
+   - name: article
+     dtype: string
+   - name: answer
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   splits:
+   - name: test
+     num_bytes: 8775370
+     num_examples: 4934
+   - name: train
+     num_bytes: 157308478
+     num_examples: 87866
+   - name: validation
+     num_bytes: 8647176
+     num_examples: 4887
+   download_size: 41500647
+   dataset_size: 174731024
+ - config_name: high
+   features:
+   - name: example_id
+     dtype: string
+   - name: article
+     dtype: string
+   - name: answer
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   splits:
+   - name: test
+     num_bytes: 6989097
+     num_examples: 3498
+   - name: train
+     num_bytes: 126243228
+     num_examples: 62445
+   - name: validation
+     num_bytes: 6885263
+     num_examples: 3451
+   download_size: 33750880
+   dataset_size: 140117588
+ - config_name: middle
+   features:
+   - name: example_id
+     dtype: string
+   - name: article
+     dtype: string
+   - name: answer
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   splits:
+   - name: test
+     num_bytes: 1786273
+     num_examples: 1436
+   - name: train
+     num_bytes: 31065250
+     num_examples: 25421
+   - name: validation
+     num_bytes: 1761913
+     num_examples: 1436
+   download_size: 7781596
+   dataset_size: 34613436
+ configs:
+ - config_name: all
+   data_files:
+   - split: test
+     path: all/test-*
+   - split: train
+     path: all/train-*
+   - split: validation
+     path: all/validation-*
+ - config_name: high
+   data_files:
+   - split: test
+     path: high/test-*
+   - split: train
+     path: high/train-*
+   - split: validation
+     path: high/validation-*
+ - config_name: middle
+   data_files:
+   - split: test
+     path: middle/test-*
+   - split: train
+     path: middle/train-*
+   - split: validation
+     path: middle/validation-*
+ ---
+
+ # Dataset Card for "race"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
+ - **Repository:** https://github.com/qizhex/RACE_AR_baselines
+ - **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
+ - **Point of Contact:** [Guokun Lai](mailto:guokun@cs.cmu.edu), [Qizhe Xie](mailto:qzxie@cs.cmu.edu)
+ - **Size of downloaded dataset files:** 76.33 MB
+ - **Size of the generated dataset:** 349.46 MB
+ - **Total amount of disk used:** 425.80 MB
+
+ ### Dataset Summary
+
+ RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
+ dataset is collected from English examinations in China, which are designed for middle school and high school students.
+ The dataset can serve as training and test sets for machine comprehension.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### all
+
+ - **Size of downloaded dataset files:** 25.44 MB
+ - **Size of the generated dataset:** 174.73 MB
+ - **Total amount of disk used:** 200.17 MB
+
+ An example of 'train' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "answer": "A",
+     "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
+     "example_id": "high132.txt",
+     "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
+     "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
+ }
+ ```
+
+ #### high
+
+ - **Size of downloaded dataset files:** 25.44 MB
+ - **Size of the generated dataset:** 140.12 MB
+ - **Total amount of disk used:** 165.56 MB
+
+ An example of 'train' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "answer": "A",
+     "article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
+     "example_id": "high132.txt",
+     "options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
+     "question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
+ }
+ ```
+
+ #### middle
+
+ - **Size of downloaded dataset files:** 25.44 MB
+ - **Size of the generated dataset:** 34.61 MB
+ - **Total amount of disk used:** 60.05 MB
+
+ An example of 'train' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "answer": "B",
+     "article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
+     "example_id": "middle3.txt",
+     "options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
+     "question": "According to the passage, which of the following statements is TRUE?"
+ }
+ ```
+
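In every config, `answer` is a letter ("A"–"D") that indexes into `options`. A minimal sketch of resolving it, using the cropped `middle` example above (the `answer_text` helper is hypothetical, not part of the dataset tooling):

```python
# Hypothetical helper: resolve the answer letter ("A"-"D")
# to the option text it points at.
def answer_text(example):
    return example["options"][ord(example["answer"]) - ord("A")]

# The cropped "middle" example from this card, fields abridged.
sample = {
    "answer": "B",
    "options": [
        "There is more petroleum than we can use now.",
        "Trees are needed for some other things besides making gas.",
        "We got electricity from ocean tides in the old days.",
        "Gas wasn't used to run cars in the Second World War.",
    ],
}
print(answer_text(sample))  # prints the second option
```

The same helper works for all three configs, since they share one schema.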
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### all
+ - `example_id`: a `string` feature.
+ - `article`: a `string` feature.
+ - `answer`: a `string` feature.
+ - `question`: a `string` feature.
+ - `options`: a `list` of `string` features.
+
+ #### high
+ - `example_id`: a `string` feature.
+ - `article`: a `string` feature.
+ - `answer`: a `string` feature.
+ - `question`: a `string` feature.
+ - `options`: a `list` of `string` features.
+
+ #### middle
+ - `example_id`: a `string` feature.
+ - `article`: a `string` feature.
+ - `answer`: a `string` feature.
+ - `question`: a `string` feature.
+ - `options`: a `list` of `string` features.
+
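Because the field list is identical for `all`, `high`, and `middle`, one record check covers every config. This `is_valid_example` helper is an illustrative sketch, not part of the dataset or any library:

```python
# Expected field names and Python-side types for one RACE example,
# transcribed from the field lists above.
EXPECTED_FIELDS = {
    "example_id": str,
    "article": str,
    "answer": str,
    "question": str,
    "options": list,
}

def is_valid_example(example):
    # Every expected field must be present with the right type,
    # and `answer` must be one of the four option letters.
    return (
        all(isinstance(example.get(k), t) for k, t in EXPECTED_FIELDS.items())
        and example["answer"] in ("A", "B", "C", "D")
    )
```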
+ ### Data Splits
+
+ | name |train|validation|test|
+ |------|----:|---------:|---:|
+ |all   |87866|      4887|4934|
+ |high  |62445|      3451|3498|
+ |middle|25421|      1436|1436|
+
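The table implies that the `all` config is exactly the union of `high` and `middle`. A quick arithmetic check of the counts above (the `SPLITS` dict simply transcribes the table):

```python
# Example counts per config and split, transcribed from the table above.
SPLITS = {
    "all":    {"train": 87866, "validation": 4887, "test": 4934},
    "high":   {"train": 62445, "validation": 3451, "test": 3498},
    "middle": {"train": 25421, "validation": 1436, "test": 1436},
}

# For every split, `all` should equal `high` + `middle`.
for split in ("train", "validation", "test"):
    assert SPLITS["high"][split] + SPLITS["middle"][split] == SPLITS["all"][split]

# Total question count across all splits of the `all` config.
total = sum(SPLITS["all"].values())  # 97687, i.e. "nearly 100,000"
```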
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ http://www.cs.cmu.edu/~glai1/data/race/
+
+ 1. The RACE dataset is available for non-commercial research purposes only.
+
+ 2. All passages are obtained from the Internet and are not the property of Carnegie Mellon University. We are not responsible for the content or the meaning of these passages.
+
+ 3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose any portion of the contexts or any portion of the derived data.
+
+ 4. We reserve the right to terminate your access to the RACE dataset at any time.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{lai-etal-2017-race,
+     title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
+     author = "Lai, Guokun and
+       Xie, Qizhe and
+       Liu, Hanxiao and
+       Yang, Yiming and
+       Hovy, Eduard",
+     booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
+     month = sep,
+     year = "2017",
+     address = "Copenhagen, Denmark",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/D17-1082",
+     doi = "10.18653/v1/D17-1082",
+     pages = "785--794",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
all/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:442e6b5f22f525811544016552b2ee7d8d1b8ef31e989132edc86e7925f94136
+ size 2075881
all/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9986432dc7b5ced3d3e9372573eafdcfa599920bb5e65bbbdc245df3abfd09e
+ size 37378263
all/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b3668ef8556831d174fab9cf9a7557e565a312034bce715a28e116903fa6a53
+ size 2046503
high/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:97ef697f4c81e804eb43e7a667a6ac7bb06c11a99cfa537997ae5e2f6c5db2b7
+ size 1683666
high/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea4766ae684939d67e115c0abe7cbe803c198eef77ecfa8557b0d0f32cdbe58d
+ size 30411742
high/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fc6a837dfa7839614bbfb78b54d98463d891d97509bdcd5ff9062bd9d446542
+ size 1655472
middle/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ea3fb86ab6bbf8746dd0eb0ac1961a1fdbd1214d6d5a901c37b0cad5b5c5971
+ size 404725
middle/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33a9d28467c326b13684bddd651a3bdc057aea61b35f93258c95efd76a435f63
+ size 6969931
middle/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8ede76683827fc6f5219643626d171c2556ee6b47193b192f399c2f7c4e60a0
+ size 406940
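The parquet entries committed above are Git LFS pointer files rather than the data itself: three `key value` lines giving the pointer spec version, the SHA-256 of the real file, and its size in bytes. A small illustrative parser (`parse_lfs_pointer` is a sketch for this card, not part of any Git tooling):

```python
def parse_lfs_pointer(text):
    """Parse the `key value` lines of a Git LFS pointer file into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# The pointer committed for all/test-00000-of-00001.parquet.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:442e6b5f22f525811544016552b2ee7d8d1b8ef31e989132edc86e7925f94136
size 2075881
"""
info = parse_lfs_pointer(pointer)
```

On checkout, git-lfs replaces each pointer with the real file, here a roughly 2 MB parquet shard.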