aaron-schlesinger and Satya10 committed
Commit ba7bca3
0 parents

Duplicate from openlifescienceai/medmcqa

Co-authored-by: Satya <Satya10@users.noreply.huggingface.co>

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: medmcqa
pretty_name: MedMCQA
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: opa
    dtype: string
  - name: opb
    dtype: string
  - name: opc
    dtype: string
  - name: opd
    dtype: string
  - name: cop
    dtype:
      class_label:
        names:
          '0': a
          '1': b
          '2': c
          '3': d
  - name: choice_type
    dtype: string
  - name: exp
    dtype: string
  - name: subject_name
    dtype: string
  - name: topic_name
    dtype: string
  splits:
  - name: train
    num_bytes: 131903297
    num_examples: 182822
  - name: test
    num_bytes: 1399350
    num_examples: 6150
  - name: validation
    num_bytes: 2221428
    num_examples: 4183
  download_size: 88311487
  dataset_size: 135524075
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---

# Dataset Card for MedMCQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://medmcqa.github.io
- **Repository:** https://github.com/medmcqa/medmcqa
- **Paper:** [MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering](https://proceedings.mlr.press/v174/pal22a)
- **Leaderboard:** https://paperswithcode.com/dataset/medmcqa
- **Point of Contact:** [Aaditya Ura](mailto:aadityaura@gmail.com)

### Dataset Summary

MedMCQA is a large-scale multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.

It contains more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, with an average token length of 12.77 and high topical diversity.

Each sample contains a question, the correct answer(s), and distractor options, and requires deeper language understanding: the questions test 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution is also provided for each sample.

MedMCQA provides an open-source dataset for the Natural Language Processing community. It is expected that this dataset will facilitate future research toward better QA systems. The dataset contains questions about the following subjects:

- Anesthesia
- Anatomy
- Biochemistry
- Dental
- ENT
- Forensic Medicine (FM)
- Obstetrics and Gynecology (O&G)
- Medicine
- Microbiology
- Ophthalmology
- Orthopedics
- Pathology
- Pediatrics
- Pharmacology
- Physiology
- Psychiatry
- Radiology
- Skin
- Preventive & Social Medicine (PSM)
- Surgery

### Supported Tasks and Leaderboards

multiple-choice-QA, open-domain-QA: The dataset can be used to train models for multiple-choice question answering and open-domain question answering. Questions in these exams are challenging and generally require deeper domain and language understanding, as they test 10+ reasoning abilities across a wide range of medical subjects & topics.
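
To make the task concrete, here is a minimal sketch of turning one record into a multiple-choice prompt for evaluation. The field names (`question`, `opa`–`opd`) follow this card; the prompt template, helper name, and sample record are illustrative assumptions, not part of the dataset.

```python
# Format one MedMCQA-style record as a multiple-choice prompt.
# The template below is a hypothetical choice, not prescribed by the dataset.
def format_prompt(example: dict) -> str:
    options = [example["opa"], example["opb"], example["opc"], example["opd"]]
    lines = [example["question"]]
    for letter, text in zip("ABCD", options):
        lines.append(f"{letter}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

# Hand-written sample, not taken from the dataset.
sample = {
    "question": "Which vitamin deficiency causes scurvy?",
    "opa": "Vitamin A", "opb": "Vitamin B12",
    "opc": "Vitamin C", "opd": "Vitamin D",
}
print(format_prompt(sample))
```

A model's answer can then be scored by comparing its chosen letter against the record's `cop` label.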

### Languages

The questions and answers are available in English.

## Dataset Structure

### Data Instances

```json
{
  "question": "A 40-year-old man presents with 5 days of productive cough and fever. Pseudomonas aeruginosa is isolated from a pulmonary abscess. CBC shows an acute effect characterized by marked leukocytosis (50,000 mL) and the differential count reveals a shift to left in granulocytes. Which of the following terms best describes these hematologic findings?",
  "exp": "Circulating levels of leukocytes and their precursors may occasionally reach very high levels (>50,000 WBC mL). These extreme elevations are sometimes called leukemoid reactions because they are similar to the white cell counts observed in leukemia, from which they must be distinguished. The leukocytosis occurs initially because of the accelerated release of granulocytes from the bone marrow (caused by cytokines, including TNF and IL-1) There is a rise in the number of both mature and immature neutrophils in the blood, referred to as a shift to the left. In contrast to bacterial infections, viral infections (including infectious mononucleosis) are characterized by lymphocytosis Parasitic infestations and certain allergic reactions cause eosinophilia, an increase in the number of circulating eosinophils. Leukopenia is defined as an absolute decrease in the circulating WBC count.",
  "cop": 1,
  "opa": "Leukemoid reaction",
  "opb": "Leukopenia",
  "opc": "Myeloid metaplasia",
  "opd": "Neutrophilia",
  "subject_name": "Pathology",
  "topic_name": "Basic Concepts and Vascular changes of Acute Inflammation",
  "id": "4e1715fe-0bc3-494e-b6eb-2d4617245aef",
  "choice_type": "single"
}
```

### Data Fields

- `id`: a unique string identifier for each example
- `question`: the question text (a string)
- `opa`: Option A
- `opb`: Option B
- `opc`: Option C
- `opd`: Option D
- `cop`: the correct option (the raw JSON uses 1–4; the loaded dataset encodes it as a 0-indexed class label with names `a`–`d`, as declared in the features above)
- `choice_type` ({"single", "multi"}): the question's choice type.
  - "single": a single-choice question, where each choice contains a single option.
  - "multi": a multi-choice question, where each choice contains a combination of multiple suboptions.
- `exp`: an expert's explanation of the answer
- `subject_name`: the medical subject of the question
- `topic_name`: the medical topic within that subject
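
Given the fields above, decoding `cop` into the answer text is straightforward. The sketch below assumes the 0-indexed class-label encoding declared in this card's features; the record and helper name are hand-written illustrations.

```python
# Decode the `cop` class label (0 = a, 1 = b, 2 = c, 3 = d)
# into the corresponding option text.
OPTION_KEYS = ["opa", "opb", "opc", "opd"]

def answer_text(example: dict) -> str:
    return example[OPTION_KEYS[example["cop"]]]

# Illustrative record using the 0-indexed class-label encoding.
record = {
    "opa": "Leukemoid reaction",
    "opb": "Leukopenia",
    "opc": "Myeloid metaplasia",
    "opd": "Neutrophilia",
    "cop": 0,  # class label 0 -> option a
}
print(answer_text(record))  # Leukemoid reaction
```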

### Data Splits

The goal of MedMCQA is to emulate the rigor of real-world medical exams. To enable that, a predefined split of the dataset is provided. The split is by exam rather than by individual questions, which also ensures the reusability and generalization ability of the models.

The training set consists of all the collected mock & online test series, the test set consists of all AIIMS PG exam MCQs (1991–present), and the development set consists of NEET PG exam MCQs (2001–present) to approximate real exam evaluation.

Similar questions across the train, test, and dev sets were removed based on similarity. The final split sizes are as follows:

|                     | Train   | Test   | Valid  |
| ------------------- | ------- | ------ | ------ |
| Questions           | 182,822 | 6,150  | 4,183  |
| Vocabulary          | 94,231  | 11,218 | 10,800 |
| Max question tokens | 220     | 135    | 88     |
| Max answer tokens   | 38      | 21     | 25     |
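
After loading the dataset (e.g. with `datasets.load_dataset` on this repository, which downloads roughly 88 MB), the table above can be checked against the actual split sizes. The loading call is kept out of the runnable sketch below, and the helper name is an assumption.

```python
# Expected split sizes, taken from the table above.
EXPECTED_ROWS = {"train": 182_822, "test": 6_150, "validation": 4_183}

def mismatched_splits(num_rows: dict) -> list:
    """Return the names of splits whose row counts differ from the card."""
    return [name for name, n in EXPECTED_ROWS.items() if num_rows.get(name) != n]

# In practice one would build `num_rows` from a loaded DatasetDict, e.g.:
#   ds = datasets.load_dataset("openlifescienceai/medmcqa")
#   num_rows = {split: ds[split].num_rows for split in ds}
print(mismatched_splits({"train": 182_822, "test": 6_150, "validation": 4_183}))  # []
```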

## Dataset Creation

### Curation Rationale

Before this work, few attempts had been made to construct biomedical MCQA datasets (Vilares and Gómez-Rodríguez, 2019), and those that exist are (1) mostly small, containing up to a few thousand questions, and (2) cover a limited number of medical topics and subjects. MedMCQA addresses these limitations as a new large-scale multiple-choice question answering (MCQA) dataset designed around real-world medical entrance exam questions.

### Source Data

#### Initial Data Collection and Normalization

Historical exam questions from official websites: AIIMS & NEET PG (1991–present). The raw data was collected from open websites and books.

#### Who are the source language producers?

The dataset was created by Ankit Pal, Logesh Kumar Umapathi and Malaikannan Sankarasubbu.

### Annotations

#### Annotation process

The dataset does not contain any additional annotations.

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

If you find this dataset useful in your research, please consider citing the dataset paper:

```bibtex
@InProceedings{pmlr-v174-pal22a,
  title     = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author    = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages     = {248--260},
  year      = {2022},
  editor    = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume    = {174},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
  url       = {https://proceedings.mlr.press/v174/pal22a.html},
  abstract  = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS \& NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects \& topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
}
```

### Contributions

Thanks to [@monk1337](https://github.com/monk1337) for adding this dataset.
data/test-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1d308841f0c82df363d3d638b33b69b8abd266b1e615ae5d2607dbb24c70beb1
size 936358
data/train-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b119434ba551517a6ec0ba1f7e0b4c029165ed284a4704f262ce37c791c493c5
size 85899025
data/validation-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b768a1ea34afc9f80d3106d9b21f80fa8a00ec450a1f6cd641af72ca9e591021
size 1476104