---
language:
- pl
task_categories:
- question-answering
task_ids:
- extractive-qa
pretty_name: QA-KPWr (Polish)
size_categories:
- 1K<n<10K
tags:
- polish
- kpwr
- squad
- extractive-question-answering
- paraphrase
- unanswerable
source_datasets:
- extended|kpwr
multilinguality:
- monolingual
annotations_creators:
- expert-generated
language_creators:
- found
dataset_info:
  features:
  - name: question
    dtype: string
  - name: is_paraphrase
    dtype: bool
  - name: is_impossible
    dtype: bool
  - name: answers
    sequence:
    - name: answer_start
      dtype: int64
    - name: answer_end
      dtype: int64
    - name: text
      dtype: string
  - name: context
    dtype: string
  - name: dataset
    dtype: string
  - name: context_id
    dtype: int64
  splits:
  - name: train
    num_examples: 7563
  - name: validation
    num_examples: 1878
---

# QA-KPWr

Polish extractive question answering dataset built on passages from the Polish Corpus of Wrocław University of Technology (*Korpus Języka Polskiego Politechniki Wrocławskiej*, KPWr). Each example pairs a question with a context paragraph drawn from KPWr, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.

## Dataset summary

| Split      | Examples |
|------------|---------:|
| train      |    7,563 |
| validation |    1,878 |

- **Language**: Polish (`pl`)
- **Task**: Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
- **Domain**: KPWr - mixed-domain, mixed-genre Polish texts
- **Format**: One row per (question, context) pair

## Features

| Field           | Type                                                | Notes                                                                 |
|-----------------|-----------------------------------------------------|-----------------------------------------------------------------------|
| `question`      | `string`                                            | Question text in Polish                                               |
| `context`       | `string`                                            | KPWr passage that may contain the answer                              |
| `answers`       | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); `null` when `is_impossible` is true |
| `is_impossible` | `bool`                                              | `true` if the question cannot be answered from `context`              |
| `is_paraphrase` | `bool`                                              | `true` if the item is a paraphrase of another question for the same context |
| `dataset`       | `string`                                            | Source identifier (`KPWR`)                                            |
| `context_id`    | `int64`                                             | Identifier shared by all questions on the same context                |
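
The answer spans are character offsets into `context`. A minimal sketch of recovering the answer text from a row, assuming `answer_end` is the exclusive end offset (the row below is invented to mimic the schema, not taken from the dataset):

```python
# Toy row mimicking the QA-KPWr schema (values are illustrative only).
row = {
    "question": "Gdzie znajduje się politechnika?",
    "context": "Politechnika Wrocławska znajduje się we Wrocławiu.",
    "is_impossible": False,
    "answers": {"answer_start": [40], "answer_end": [49], "text": ["Wrocławiu"]},
}

def extract_spans(row):
    """Slice each character-level span out of the context and check it
    against the stored answer text. Assumes answer_end is exclusive."""
    if row["is_impossible"]:
        return []  # unanswerable questions carry no spans
    spans = []
    ans = row["answers"]
    for start, end, text in zip(ans["answer_start"], ans["answer_end"], ans["text"]):
        span = row["context"][start:end]
        assert span == text, f"span mismatch: {span!r} != {text!r}"
        spans.append(span)
    return spans

print(extract_spans(row))  # ['Wrocławiu']
```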

## Loading

```python
from datasets import load_dataset

ds = load_dataset("expansio/qa-kpwr")
print(ds)
print(ds["train"][0])
```

## Evaluation

The dataset is evaluated with the SQuAD 2.0 metric family:

- `exact` / `f1` — overall scores
- `HasAns_exact` / `HasAns_f1` — restricted to answerable questions
- `NoAns_exact` / `NoAns_f1` — restricted to questions flagged as unanswerable
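
For reference, a minimal sketch of the per-example exact-match and token-level F1 scores underlying these metrics. The English article-stripping step of the official SQuAD script is omitted here, since it does not apply to Polish; only lowercasing, punctuation removal, and whitespace normalization are kept:

```python
import string
from collections import Counter

def normalize(s: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace.
    (English article stripping from the official SQuAD script is omitted
    because it is irrelevant for Polish.)"""
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    return " ".join(s.split())

def exact(pred: str, gold: str) -> int:
    """1 if the normalized prediction matches the normalized gold answer."""
    return int(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    if not pred_toks or not gold_toks:
        # Both empty (e.g. both say "no answer") counts as a match.
        return float(pred_toks == gold_toks)
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact("we Wrocławiu", "Wrocławiu"))  # 0
print(f1("we Wrocławiu", "Wrocławiu"))     # ~0.667
```

The `HasAns_*` / `NoAns_*` variants simply average these per-example scores over the answerable and unanswerable subsets, respectively.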

## License

TBD. Source passages come from KPWr, originally distributed under [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/); attribution must be preserved on redistribution. The final license for this redistribution will be specified before publication.

## Citation

If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.

```bibtex
@inproceedings{augustyniak2022lepiszcze,
  title     = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author    = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
  booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
  year      = {2022}
}

@inproceedings{broda-etal-2012-kpwr,
  title     = {{KPW}r: Towards a Free Corpus of {P}olish},
  author    = {Broda, Bartosz and Marci{\'n}czuk, Micha{\l} and Maziarz, Marek and Radziszewski, Adam and Wardy{\'n}ski, Adam},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year      = {2012}
}
```

## Maintainer

[Expansio Software House](https://expans.io) in collaboration with [CLARIN-PL](https://clarin-pl.eu/).