---
language:
- pl
task_categories:
- question-answering
task_ids:
- extractive-qa
pretty_name: QA-NKJP (Polish)
size_categories:
- 1K<n<10K
tags:
- polish
- nkjp
- squad
- extractive-question-answering
- paraphrase
- unanswerable
source_datasets:
- extended|nkjp
multilinguality:
- monolingual
annotations_creators:
- expert-generated
language_creators:
- found
dataset_info:
  features:
  - name: question
    dtype: string
  - name: is_paraphrase
    dtype: bool
  - name: is_impossible
    dtype: bool
  - name: answers
    sequence:
    - name: answer_start
      dtype: int64
    - name: answer_end
      dtype: int64
    - name: text
      dtype: string
  - name: context
    dtype: string
  - name: dataset
    dtype: string
  - name: context_id
    dtype: int64
  splits:
  - name: train
    num_examples: 4417
  - name: validation
    num_examples: 908
---

# QA-NKJP

Polish extractive question answering dataset built on passages from the National Corpus of Polish (*Narodowy Korpus Języka Polskiego*, NKJP). Each example pairs a question with a context paragraph drawn from NKJP, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.

## Dataset summary

| Split      | Examples |
|------------|---------:|
| train      |    4,417 |
| validation |      908 |

- **Language**: Polish (`pl`)
- **Task**: Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
- **Domain**: NKJP, a mixed-genre Polish corpus (press, fiction, transcripts, web)
- **Format**: One row per (question, context) pair

## Features

| Field           | Type                                                | Notes                                                                 |
|-----------------|-----------------------------------------------------|-----------------------------------------------------------------------|
| `question`      | `string`                                            | Question text in Polish                                               |
| `context`       | `string`                                            | NKJP passage that may contain the answer                              |
| `answers`       | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); `null` when `is_impossible` is true |
| `is_impossible` | `bool`                                              | `true` if the question cannot be answered from `context`              |
| `is_paraphrase` | `bool`                                              | `true` if the item is a paraphrase of another question for the same context |
| `dataset`       | `string`                                            | Source identifier (`NKJP`)                                            |
| `context_id`    | `int64`                                             | Identifier shared by all questions on the same context                |
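
Answer spans are stored as character offsets into `context`, so a span can be recovered by slicing. The toy row below is illustrative (not an actual dataset entry) and sketches the invariant `context[answer_start:answer_end] == text`:

```python
# Toy row mimicking the dataset schema; the text is illustrative only.
row = {
    "question": "Gdzie leży Wrocław?",
    "context": "Wrocław leży nad Odrą, w południowo-zachodniej Polsce.",
    "is_impossible": False,
    "answers": {"answer_start": [17], "answer_end": [21], "text": ["Odrą"]},
}

# Each answer span should slice out exactly its `text` from the context.
for start, end, text in zip(
    row["answers"]["answer_start"],
    row["answers"]["answer_end"],
    row["answers"]["text"],
):
    assert row["context"][start:end] == text
```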

## Loading

```python
from datasets import load_dataset

ds = load_dataset("expansio/qa-nkjp")
print(ds)
print(ds["train"][0])
```
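
The `is_impossible` flag makes it easy to separate answerable and unanswerable questions. With a loaded `ds` this would be `ds["train"].filter(lambda ex: not ex["is_impossible"])`; the sketch below uses plain dicts with the same field so it is self-contained (the two rows are illustrative, not real entries):

```python
# Illustrative rows carrying only the `is_impossible` flag.
rows = [
    {"question": "Kto napisał tekst?", "is_impossible": False},
    {"question": "Ile lat ma autor?", "is_impossible": True},
]

# Split into answerable vs. unanswerable subsets.
answerable = [r for r in rows if not r["is_impossible"]]
unanswerable = [r for r in rows if r["is_impossible"]]
print(len(answerable), len(unanswerable))  # → 1 1
```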

## Evaluation

The dataset is evaluated with the SQuAD 2.0 metric family:

- `exact` / `f1` — overall scores
- `HasAns_exact` / `HasAns_f1` — restricted to answerable questions
- `NoAns_exact` / `NoAns_f1` — restricted to questions flagged as unanswerable
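
A minimal sketch of turning a dataset row into a reference for the Hugging Face `evaluate` `squad_v2` metric. The field names (`id`, `answers.text`, `answers.answer_start`) follow that metric's documented schema, and the row below is illustrative:

```python
def row_to_reference(row, idx):
    """Convert a QA-NKJP row into a squad_v2-style reference dict.

    Unanswerable rows get empty answer lists, which is how the
    squad_v2 metric recognizes no-answer references.
    """
    if row["is_impossible"]:
        answers = {"text": [], "answer_start": []}
    else:
        answers = {
            "text": row["answers"]["text"],
            "answer_start": row["answers"]["answer_start"],
        }
    return {"id": str(idx), "answers": answers}

# Illustrative row, not a real dataset entry.
row = {
    "is_impossible": False,
    "answers": {"answer_start": [17], "answer_end": [21], "text": ["Odrą"]},
}
ref = row_to_reference(row, 0)
print(ref)  # → {'id': '0', 'answers': {'text': ['Odrą'], 'answer_start': [17]}}
```

Predictions would then be paired with these references, e.g. `metric = evaluate.load("squad_v2")` and `metric.compute(predictions=..., references=[ref])`, where each prediction carries an `id`, a `prediction_text`, and a `no_answer_probability`.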

## License

TBD. Source passages come from NKJP - please respect the [NKJP licensing terms](http://nkjp.pl/) for the underlying texts. The final license for this redistribution will be specified before publication.

## Citation

If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.

```bibtex
@inproceedings{augustyniak2022lepiszcze,
  title     = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author    = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
  booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
  year      = {2022}
}

@inproceedings{przepiorkowski2010nkjp,
  title     = {The National Corpus of Polish},
  author    = {Przepi{\'o}rkowski, Adam and Ba{\'n}ko, Miros{\l}aw and G{\'o}rski, Rafa{\l} L. and Lewandowska-Tomaszczyk, Barbara},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year      = {2010}
}
```

## Maintainer

[Expansio Software House](https://expans.io) in collaboration with [CLARIN-PL](https://clarin-pl.eu/).