---
language:
  - pl
task_categories:
  - question-answering
task_ids:
  - extractive-qa
pretty_name: QA-NKJP (Polish)
size_categories:
  - 1K<n<10K
tags:
  - polish
  - nkjp
  - squad
  - extractive-question-answering
  - paraphrase
  - unanswerable
source_datasets:
  - extended|nkjp
multilinguality:
  - monolingual
annotations_creators:
  - expert-generated
language_creators:
  - found
dataset_info:
  features:
    - name: question
      dtype: string
    - name: is_paraphrase
      dtype: bool
    - name: is_impossible
      dtype: bool
    - name: answers
      sequence:
        - name: answer_start
          dtype: int64
        - name: answer_end
          dtype: int64
        - name: text
          dtype: string
    - name: context
      dtype: string
    - name: dataset
      dtype: string
    - name: context_id
      dtype: int64
  splits:
    - name: train
      num_examples: 4417
    - name: validation
      num_examples: 908
---

# QA-NKJP

Polish extractive question answering dataset built on passages from the National Corpus of Polish (Narodowy Korpus Języka Polskiego, NKJP). Each example pairs a question with a context paragraph drawn from NKJP, optional answer spans, and flags indicating whether the question is a paraphrase of another item and whether it is unanswerable from the provided context.

## Dataset summary

| Split      | Examples |
|------------|----------|
| train      | 4,417    |
| validation | 908      |

- **Language:** Polish (`pl`)
- **Task:** Extractive question answering (SQuAD 2.0-style, with unanswerable questions)
- **Domain:** NKJP, a mixed-genre Polish corpus (press, fiction, transcripts, web)
- **Format:** one row per (question, context) pair

## Features

| Field | Type | Notes |
|-------|------|-------|
| `question` | string | Question text in Polish |
| `context` | string | NKJP passage that may contain the answer |
| `answers` | `{answer_start: int[], answer_end: int[], text: str[]}` | Character-level answer span(s); null when `is_impossible` is true |
| `is_impossible` | bool | `true` if the question cannot be answered from the context |
| `is_paraphrase` | bool | `true` if the item is a paraphrase of another question on the same context |
| `dataset` | string | Source identifier (NKJP) |
| `context_id` | int64 | Identifier shared by all questions on the same context |
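The `answers` offsets index characters of `context`. A minimal sketch of recovering answer text from those offsets, on toy data (not real NKJP rows; it assumes `answer_end` is an exclusive end offset, which the card does not state explicitly):

```python
# Toy example illustrating the answer-span schema:
# answer_start/answer_end are character offsets into `context`.
context = "Narodowy Korpus Języka Polskiego powstał w 2008 roku."
example = {
    "question": "Kiedy powstał NKJP?",
    "context": context,
    "is_impossible": False,
    "answers": {"answer_start": [43], "answer_end": [47], "text": ["2008"]},
}

def extract_spans(ex):
    """Recover answer texts from character offsets; empty for unanswerable items."""
    if ex["is_impossible"]:
        return []
    return [
        ex["context"][s:e]
        for s, e in zip(ex["answers"]["answer_start"], ex["answers"]["answer_end"])
    ]

print(extract_spans(example))  # ['2008']
```

Checking that the sliced spans match `answers["text"]` is a quick sanity test for the offset convention on any row you inspect.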

## Loading

```python
from datasets import load_dataset

ds = load_dataset("expansio/qa-nkjp")
print(ds)
print(ds["train"][0])
```
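The boolean flags make it easy to slice the data into answerable, unanswerable, and paraphrase subsets. A sketch on plain dicts with hypothetical values, so it runs without downloading anything; with the real dataset the same predicates can be passed to `ds["train"].filter(...)`:

```python
# Example rows mimicking the flag fields (values are made up for illustration).
rows = [
    {"question": "Q1", "is_impossible": False, "is_paraphrase": False},
    {"question": "Q2", "is_impossible": True,  "is_paraphrase": False},
    {"question": "Q3", "is_impossible": False, "is_paraphrase": True},
]

# Partition by the two boolean flags.
answerable = [r for r in rows if not r["is_impossible"]]
unanswerable = [r for r in rows if r["is_impossible"]]
paraphrases = [r for r in rows if r["is_paraphrase"]]

print(len(answerable), len(unanswerable), len(paraphrases))  # 2 1 1
```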

## Evaluation

The dataset is evaluated with the SQuAD 2.0 metric family:

- `exact` / `f1`: overall scores
- `HasAns_exact` / `HasAns_f1`: restricted to answerable questions
- `NoAns_exact` / `NoAns_f1`: restricted to questions flagged as unanswerable
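For orientation, a simplified sketch of the exact-match and token-level F1 computations behind these metrics (the official SQuAD script additionally strips punctuation and Polish answers would skip its English article removal; this version only lowercases and splits on whitespace):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> int:
    # Simplified normalization: strip and lowercase only.
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    if not pred_toks or not gold_toks:
        # For unanswerable questions both sides are empty strings:
        # F1 is 1.0 iff the prediction is also empty.
        return float(pred_toks == gold_toks)
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("2008", "2008"))                 # 1
print(round(token_f1("w 2008 roku", "2008"), 2))   # precision 1/3, recall 1 -> 0.5
```

In practice the `HasAns_*` / `NoAns_*` splits are just these scores averaged separately over rows with `is_impossible` false and true.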

## License

TBD. Source passages come from NKJP; please respect the NKJP licensing terms for the underlying texts. The final license for this redistribution will be specified before publication.

## Citation

If you use this dataset, please cite the LEPISZCZE benchmark and the source corpus.

```bibtex
@inproceedings{augustyniak2022lepiszcze,
  title     = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  author    = {Augustyniak, {\L}ukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and W{\k{a}}troba, Patryk and Mr{\'o}z, Krzysztof and Walczak, Bart{\l}omiej and Smywi{\'n}ski-Pohl, Aleksander and Mizgajski, Jan and Augustyniak, Piotr and Kajdanowicz, Tomasz},
  booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track},
  year      = {2022}
}

@inproceedings{przepiorkowski2010nkjp,
  title     = {The National Corpus of Polish},
  author    = {Przepi{\'o}rkowski, Adam and Ba{\'n}ko, Miros{\l}aw and G{\'o}rski, Rafa{\l} L. and Lewandowska-Tomaszczyk, Barbara},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year      = {2010}
}
```

## Maintainer

Expansio Software House in collaboration with CLARIN-PL.