---
language:
- pl
task_categories:
- question-answering
task_ids:
- extractive-qa
pretty_name: QA-KPWr (Polish)
size_categories:
- 1K<n<10K
tags:
- polish
- kpwr
- squad
- extractive-question-answering
- paraphrase
- unanswerable
source_datasets:
- extended|kpwr
multilinguality:
- monolingual
annotations_creators:
- expert-generated
language_creators:
- found
dataset_info:
features:
- name: question
dtype: string
- name: is_paraphrase
dtype: bool
- name: is_impossible
dtype: bool
- name: answers
sequence:
- name: answer_start
dtype: int64
- name: answer_end
dtype: int64
- name: text
dtype: string
- name: context
dtype: string
- name: dataset
dtype: string
- name: context_id
dtype: int64
splits:
- name: train
num_examples: 7563
- name: validation
num_examples: 1878
---
# QA-KPWr
Polish extractive question answering dataset built on passages from the Polish Corpus of Wrocław University of Technology (Korpus Języka Polskiego Politechniki Wrocławskiej, KPWr). Each example pairs a question with a context paragraph drawn from KPWr, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.
## Dataset summary
| Split | Examples |
|---|---|
| train | 7,563 |
| validation | 1,878 |
- Language: Polish (`pl`)
- Task: Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
- Domain: KPWr (mixed-domain, mixed-genre Polish texts)
- Format: One row per (question, context) pair
## Features
| Field | Type | Notes |
|---|---|---|
| `question` | string | Question text in Polish |
| `context` | string | KPWr passage that may contain the answer |
| `answers` | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); null when `is_impossible` is true |
| `is_impossible` | bool | `true` if the question cannot be answered from the context |
| `is_paraphrase` | bool | `true` if the item is a paraphrase of another question for the same context |
| `dataset` | string | Source identifier (`KPWR`) |
| `context_id` | int64 | Identifier shared by all questions on the same context |
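The answer offsets can be checked directly against the context. A minimal sketch using a hypothetical row that mimics the schema above (the record below is invented for illustration, and `answer_end` is assumed to be an exclusive character offset):

```python
# Hypothetical example row mimicking the QA-KPWr schema; not a real record.
row = {
    "question": "Gdzie znajduje się Politechnika Wrocławska?",
    "context": "Politechnika Wrocławska znajduje się we Wrocławiu.",
    "is_impossible": False,
    "answers": {"answer_start": [40], "answer_end": [49], "text": ["Wrocławiu"]},
}

if not row["is_impossible"]:
    for start, end, text in zip(
        row["answers"]["answer_start"],
        row["answers"]["answer_end"],
        row["answers"]["text"],
    ):
        # Assumption: answer_end is an exclusive character offset,
        # so slicing the context recovers the answer text exactly.
        assert row["context"][start:end] == text
```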
## Loading
```python
from datasets import load_dataset

ds = load_dataset("expansio/qa-kpwr")
print(ds)
print(ds["train"][0])
```
## Evaluation
The dataset is evaluated with the SQuAD 2.0 metric family:
- `exact` / `f1`: overall scores
- `HasAns_exact` / `HasAns_f1`: restricted to answerable questions
- `NoAns_f1`: accuracy on questions flagged as unanswerable
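In practice these scores are computed with a standard SQuAD 2.0 evaluator (for example, the `squad_v2` metric in the `evaluate` library). The token-level F1 at the core of that family can be sketched in a few lines; this simplified version uses whitespace tokenization and lowercasing only, without the full SQuAD normalization (punctuation and article stripping):

```python
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Simplified SQuAD-style token F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        # For unanswerable questions both sides are empty strings;
        # F1 is 1.0 only when both are empty.
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("we Wrocławiu", "Wrocławiu"))  # partial overlap, F1 about 0.67
```

The exact-match score (`exact`) is simply string equality after the same normalization, averaged over examples.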
## License
TBD. Source passages come from KPWr, originally distributed under CC BY 3.0; attribution must be preserved on redistribution. The final license for this redistribution will be specified before publication.
## Citation
If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.
```bibtex
@inproceedings{augustyniak2022lepiszcze,
  title     = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author    = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
  booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
  year      = {2022}
}
```

```bibtex
@inproceedings{broda-etal-2012-kpwr,
  title     = {{KPW}r: Towards a Free Corpus of {P}olish},
  author    = {Broda, Bartosz and Marci{\'n}czuk, Micha{\l} and Maziarz, Marek and Radziszewski, Adam and Wardy{\'n}ski, Adam},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year      = {2012}
}
```
## Maintainer
Expansio Software House in collaboration with CLARIN-PL.