---
language:
- pl
task_categories:
- question-answering
task_ids:
- extractive-qa
pretty_name: QA-Wikipedia (Polish)
size_categories:
- 1K<n<10K
tags:
- polish
- squad
- extractive-question-answering
- paraphrase
- unanswerable
source_datasets:
- original
multilinguality:
- monolingual
annotations_creators:
- expert-generated
language_creators:
- found
dataset_info:
  features:
  - name: question
    dtype: string
  - name: is_paraphrase
    dtype: bool
  - name: is_impossible
    dtype: bool
  - name: answers
    sequence:
    - name: answer_start
      dtype: int64
    - name: answer_end
      dtype: int64
    - name: text
      dtype: string
  - name: context
    dtype: string
  - name: dataset
    dtype: string
  - name: context_id
    dtype: int64
  splits:
  - name: train
    num_examples: 6458
  - name: validation
    num_examples: 1639
---
# QA-Wikipedia
A Polish extractive question answering dataset built on top of Polish Wikipedia passages. Each example pairs a question with a context paragraph, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.
## Dataset summary
| Split | Examples |
|---|---|
| train | 6,458 |
| validation | 1,639 |
- Language: Polish (`pl`)
- Task: Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
- Domain: Polish Wikipedia
- Format: one row per (question, context) pair
## Features

| Field | Type | Notes |
|---|---|---|
| `question` | `string` | Question text in Polish |
| `context` | `string` | Wikipedia passage that may contain the answer |
| `answers` | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); null when `is_impossible` is true |
| `is_impossible` | `bool` | `true` if the question cannot be answered from the context |
| `is_paraphrase` | `bool` | `true` if the item is a paraphrase of another question for the same context |
| `dataset` | `string` | Source identifier (`wikipedia`) |
| `context_id` | `int64` | Identifier shared by all questions on the same context |
## Loading

```python
from datasets import load_dataset

ds = load_dataset("expansio/qa-wikipedia")
print(ds)
print(ds["train"][0])
```
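Because rows carry `is_impossible` and character-offset answers, they map directly onto the SQuAD 2.0 reference format used by common evaluation tooling (e.g. the `squad_v2` metric in the `evaluate` library). A sketch of that mapping; the `id` scheme below is an assumption for illustration, not part of the dataset:

```python
def to_squad_reference(example, idx):
    """Map one dataset row to a SQuAD 2.0-style reference dict.
    Unanswerable rows get empty answer lists, as SQuAD 2.0 expects."""
    if example["is_impossible"]:
        answers = {"text": [], "answer_start": []}
    else:
        answers = {
            "text": example["answers"]["text"],
            "answer_start": example["answers"]["answer_start"],
        }
    # An `id` built from context_id + row index is an assumption, chosen
    # here only because the dataset does not ship an explicit example id.
    return {"id": f'{example["context_id"]}-{idx}', "answers": answers}

# Invented unanswerable row following the schema.
row = {"context_id": 7, "is_impossible": True, "answers": None}
ref = to_squad_reference(row, 0)
assert ref == {"id": "7-0", "answers": {"text": [], "answer_start": []}}
```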
## Evaluation

The dataset is evaluated with the SQuAD 2.0 metric family:

- `exact` / `f1`: overall scores
- `HasAns_exact` / `HasAns_f1`: restricted to answerable questions
- `NoAns_f1`: accuracy on questions flagged as unanswerable
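For reference, the per-example exact-match and token-level F1 at the core of this metric family can be sketched as follows. This is a simplified version: the official SQuAD script additionally strips punctuation and English articles during normalization, which matters less for Polish text:

```python
from collections import Counter

def normalize(text: str) -> str:
    # Simplified normalization: lowercase and collapse whitespace.
    # (The official SQuAD script also removes punctuation and articles.)
    return " ".join(text.lower().split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    if not pred_tokens or not gold_tokens:
        # For unanswerable questions both sides are empty strings,
        # so F1 is 1.0 only when the prediction is also empty.
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

assert exact_match("1810", " 1810 ") == 1.0
assert f1_score("w 1810 roku", "1810") == 0.5  # precision 1/3, recall 1
```

The corpus-level scores are the means of these per-example values, with `HasAns_*` and `NoAns_*` computed over the answerable and unanswerable subsets respectively.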
## License
TBD. Source text is derived from Polish Wikipedia (CC BY-SA 3.0); attribution must be preserved on redistribution. The final license for this redistribution will be specified before publication.
## Citation

If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.

```bibtex
@inproceedings{augustyniak2022lepiszcze,
  title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
  booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
  year = {2022}
}
```
## Maintainer
Expansio Software House in collaboration with CLARIN-PL.