Datasets: Piyush007k /
Modalities: Text · Formats: parquet · Languages: English

Piyush007k (parquet-converter) committed · Commit 4fe75c5 · 0 parent(s)

Duplicate from xlangai/spider

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
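The rules above route matching paths through Git LFS. In a local clone, `git check-attr` shows which filter a given path resolves to; a quick sketch, using the train shard added later in this commit:

```shell
# Ask git which filter applies to a path; git reads the .gitattributes
# rules above, so parquet files resolve to the "lfs" filter.
git check-attr filter -- spider/train-00000-of-00001.parquet
# prints: spider/train-00000-of-00001.parquet: filter: lfs
```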
README.md ADDED
@@ -0,0 +1,224 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ - machine-generated
+ language:
+ - en
+ license:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ paperswithcode_id: spider-1
+ pretty_name: Spider
+ tags:
+ - text-to-sql
+ dataset_info:
+   config_name: spider
+   features:
+   - name: db_id
+     dtype: string
+   - name: query
+     dtype: string
+   - name: question
+     dtype: string
+   - name: query_toks
+     sequence: string
+   - name: query_toks_no_value
+     sequence: string
+   - name: question_toks
+     sequence: string
+   splits:
+   - name: train
+     num_bytes: 4743786
+     num_examples: 7000
+   - name: validation
+     num_bytes: 682090
+     num_examples: 1034
+   download_size: 957246
+   dataset_size: 5425876
+ configs:
+ - config_name: spider
+   data_files:
+   - split: train
+     path: spider/train-*
+   - split: validation
+     path: spider/validation-*
+   default: true
+ ---
57
+
+ # Dataset Card for Spider
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://yale-lily.github.io/spider
+ - **Repository:** https://github.com/taoyds/spider
+ - **Paper (ACL Anthology):** https://www.aclweb.org/anthology/D18-1425/
+ - **Paper (arXiv):** https://arxiv.org/abs/1809.08887
+ - **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)
+
93
+ ### Dataset Summary
+
+ Spider is a large-scale, complex, and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
+ The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
+
+ ### Supported Tasks and Leaderboards
+
+ The leaderboard can be found at https://yale-lily.github.io/spider.
+
+ ### Languages
+
+ The text in the dataset is in English.
105
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ **What do the instances that comprise the dataset represent?**
+
+ Each instance is a natural language question paired with the equivalent SQL query.
+
+ **How many instances are there in total?**
+
+ 8,034 question/SQL pairs: 7,000 in the train split and 1,034 in the validation split.
+
+ **What data does each instance consist of?**
+
+ [More Information Needed]
+
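To make the schema concrete, here is a sketch of one instance in the declared format. The field values shown are illustrative of the shape of the data, not quoted from a verified row of this copy:

```python
# Illustrative instance in the Spider schema (the values are an example
# of the format, not guaranteed to match a specific row of this copy).
instance = {
    "db_id": "department_management",
    "question": "How many heads of the departments are older than 56 ?",
    "query": "SELECT count(*) FROM head WHERE age > 56",
    "query_toks": ["SELECT", "count", "(", "*", ")", "FROM",
                   "head", "WHERE", "age", ">", "56"],
    "query_toks_no_value": ["select", "count", "(", "*", ")", "from",
                            "head", "where", "age", ">", "value"],
    "question_toks": ["How", "many", "heads", "of", "the",
                      "departments", "are", "older", "than", "56", "?"],
}
# The six keys match the features declared in the YAML header.
print(sorted(instance))
```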
120
+ ### Data Fields
+
+ * **db_id**: Name of the database the question is asked against
+ * **question**: Natural language question to interpret into SQL
+ * **query**: Target SQL query
+ * **query_toks**: List of tokens for the query
+ * **query_toks_no_value**: List of tokens for the query, with literal values replaced by a placeholder
+ * **question_toks**: List of tokens for the question
+
+ ### Data Splits
+
+ **train**: 7,000 question and SQL query pairs
+ **validation (dev)**: 1,034 question and SQL query pairs
+
+ [More Information Needed]
+
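The split sizes can be cross-checked against the bookkeeping declared in the YAML header (`num_examples` and `num_bytes` per split):

```python
# Cross-check the per-split counts and byte sizes declared in the
# YAML header: (num_examples, num_bytes) for each split.
splits = {"train": (7000, 4743786), "validation": (1034, 682090)}
total_examples = sum(n for n, _ in splits.values())
total_bytes = sum(b for _, b in splits.values())
print(total_examples)  # 8034 question/SQL pairs in total
print(total_bytes)     # 5425876, the declared dataset_size
```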
136
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ The dataset was annotated by 11 college students at Yale University.
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ The authors listed on the homepage maintain and support the dataset.
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{yu-etal-2018-spider,
+     title = "{S}pider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-{SQL} Task",
+     author = "Yu, Tao  and
+       Zhang, Rui  and
+       Yang, Kai  and
+       Yasunaga, Michihiro  and
+       Wang, Dongxu  and
+       Li, Zifan  and
+       Ma, James  and
+       Li, Irene  and
+       Yao, Qingning  and
+       Roman, Shanelle  and
+       Zhang, Zilin  and
+       Radev, Dragomir",
+     editor = "Riloff, Ellen  and
+       Chiang, David  and
+       Hockenmaier, Julia  and
+       Tsujii, Jun{'}ichi",
+     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
+     month = oct # "-" # nov,
+     year = "2018",
+     address = "Brussels, Belgium",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/D18-1425",
+     doi = "10.18653/v1/D18-1425",
+     pages = "3911--3921",
+     archivePrefix = {arXiv},
+     eprint = {1809.08887},
+     primaryClass = {cs.CL},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.
spider/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb4b681558f6f8f428e516fb94c5a1cb19c5a0a0c153c0618c8cc4a28115d4cb
+ size 831359
spider/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3e2a46303899a2d4afe3f6a3a62e59f8d589f241b3cbfb52356479b1f054888
+ size 125887
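The two parquet entries above are Git LFS pointer files, not the data itself: three `key value` lines giving the spec version, the SHA-256 object id, and the payload size in bytes. A minimal sketch of parsing that format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The train-shard pointer from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:cb4b681558f6f8f428e516fb94c5a1cb19c5a0a0c153c0618c8cc4a28115d4cb\n"
    "size 831359\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # 831359, the byte size of the train shard
```

Fetching the actual parquet payloads requires `git lfs pull` (or downloading through the Hub), after which the shards can be read with any parquet reader.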