ktagowski committed
Commit 2e992f1 · verified · Parent: 904c3ec

Add dataset card

Files changed (1):
  1. README.md +91 -20
README.md CHANGED
@@ -1,4 +1,27 @@
  ---
  dataset_info:
  features:
  - name: question
@@ -8,13 +31,13 @@ dataset_info:
  - name: is_impossible
  dtype: bool
  - name: answers
- struct:
- - name: answer_end
- sequence: int64
  - name: answer_start
- sequence: int64
  - name: text
- sequence: string
  - name: context
  dtype: string
  - name: dataset
@@ -23,26 +46,74 @@ dataset_info:
  dtype: int64
  splits:
  - name: train
- num_bytes: 23838786
  num_examples: 6458
  - name: validation
- num_bytes: 5603089
  num_examples: 1639
- download_size: 0
- dataset_size: 29441875
- configs:
- - config_name: default
- data_files:
- - split: train
- path: data/train-*
- - split: validation
- path: data/validation-*
  ---
- Loading data

  ```python
  from datasets import load_dataset

- # If the dataset is gated/private, make sure you have run huggingface-cli login
- dataset = load_dataset("expansio/qa-wikipedia")
- ```
  ---
+ language:
+ - pl
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ pretty_name: QA-Wikipedia (Polish)
+ size_categories:
+ - 1K<n<10K
+ tags:
+ - polish
+ - squad
+ - extractive-question-answering
+ - paraphrase
+ - unanswerable
+ source_datasets:
+ - original
+ multilinguality:
+ - monolingual
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
  dataset_info:
  features:
  - name: question
  - name: is_impossible
  dtype: bool
  - name: answers
+ sequence:
  - name: answer_start
+ dtype: int64
+ - name: answer_end
+ dtype: int64
  - name: text
+ dtype: string
  - name: context
  dtype: string
  - name: dataset
  dtype: int64
  splits:
  - name: train
  num_examples: 6458
  - name: validation
  num_examples: 1639
  ---
+
+ # QA-Wikipedia
+
+ A Polish extractive question-answering dataset built on top of Polish Wikipedia passages. Each example pairs a question with a context paragraph, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.
+
+ ## Dataset summary
+
+ | Split      | Examples |
+ |------------|---------:|
+ | train      |    6,458 |
+ | validation |    1,639 |
+
+ - **Language**: Polish (`pl`)
+ - **Task**: Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
+ - **Domain**: Polish Wikipedia
+ - **Format**: One row per (question, context) pair
+
+ ## Features
+
+ | Field | Type | Notes |
+ |-----------------|---------------------------------------------------------|-----------------------------------------------------------------------------|
+ | `question` | `string` | Question text in Polish |
+ | `context` | `string` | Wikipedia passage that may contain the answer |
+ | `answers` | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); `null` when `is_impossible` is true |
+ | `is_impossible` | `bool` | `true` if the question cannot be answered from `context` |
+ | `is_paraphrase` | `bool` | `true` if the item is a paraphrase of another question for the same context |
+ | `dataset` | `string` | Source identifier (`wikipedia`) |
+ | `context_id` | `int64` | Identifier shared by all questions on the same context |
+
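Under the span convention in the table above, `answer_start`/`answer_end` are character offsets into `context`. A minimal sketch of recovering answer strings from those offsets (assuming half-open `[start, end)` spans; the example row is hypothetical, not taken from the dataset):

```python
def answer_texts(example):
    """Recover answer strings from character offsets (assumes half-open spans)."""
    if example["is_impossible"]:
        return []
    ans = example["answers"]
    return [example["context"][s:e]
            for s, e in zip(ans["answer_start"], ans["answer_end"])]

# Hypothetical row mirroring the schema above
row = {
    "question": "...",
    "context": "Warszawa jest stolicą Polski.",
    "answers": {"answer_start": [0], "answer_end": [8], "text": ["Warszawa"]},
    "is_impossible": False,
}
assert answer_texts(row) == row["answers"]["text"]
```

If the offsets turn out to be inclusive rather than half-open, the slice becomes `context[s:e + 1]`; checking a few rows against `answers["text"]` as above settles which convention the data uses.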
+ ## Loading

  ```python
  from datasets import load_dataset

+ ds = load_dataset("expansio/qa-wikipedia")
+ print(ds)
+ print(ds["train"][0])
+ ```
+
+ ## Evaluation
+
+ The dataset is evaluated with the SQuAD 2.0 metric family:
+
+ - `exact` / `f1` — overall scores
+ - `HasAns_exact` / `HasAns_f1` — restricted to answerable questions
+ - `NoAns_exact` / `NoAns_f1` — restricted to questions flagged as unanswerable
+
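The token-level F1 behind these scores can be sketched in plain Python (simplified: whitespace tokenization only, without the official script's text normalization):

```python
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 as used in SQuAD-style evaluation (whitespace tokens)."""
    pred_toks, gold_toks = prediction.split(), gold.split()
    if not pred_toks or not gold_toks:
        # SQuAD convention: both empty -> 1.0, otherwise 0.0
        return float(pred_toks == gold_toks)
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("stolicą Polski", "stolicą Polski jest Warszawa"))  # ≈ 0.667
```

`exact` is simply string equality after normalization; the `HasAns_*` / `NoAns_*` variants average the same per-example scores over the corresponding subset.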
+ ## License
+
+ TBD. Source text is derived from Polish Wikipedia (CC BY-SA 3.0); attribution must be preserved on redistribution. The final license for this redistribution will be specified before publication.
+
+ ## Citation
+
+ If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.
+
+ ```bibtex
+ @inproceedings{augustyniak2022lepiszcze,
+   title     = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
+   author    = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
+   booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
+   year      = {2022}
+ }
+ ```
+
+ ## Maintainer
+
+ [Expansio Software House](https://expans.io) in collaboration with [CLARIN-PL](https://clarin-pl.eu/).