license: cc-by-sa-3.0
language:
- en
task_categories:
- text-retrieval
- question-answering
size_categories:
- 100K<n<1M
source_datasets:
- extended|natural_questions
tags:
- generative-retrieval
- dsi
- nci
- ripor
- semantic-id
configs:
- config_name: corpus
data_files:
- split: corpus
path: data/corpus.jsonl
- split: corpus_summary
path: data/corpus_summary.jsonl
- config_name: pairs
data_files:
- split: train
path: data/train.jsonl
- split: validation
path: data/valid.jsonl
dataset_info:
- config_name: corpus
features:
- name: docid
dtype: int64
- name: document
dtype: string
splits:
- name: corpus
num_examples: 109650
- name: corpus_summary
num_examples: 21119
- config_name: pairs
features:
- name: query
dtype: string
- name: docid
dtype: int64
- name: nq_id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: long_answer
dtype: string
- name: short_answer
dtype: string
splits:
- name: train
num_examples: 307373
- name: validation
num_examples: 7830
# NQ320K (NCI-style preprocessing)
A reproduction of the NQ320K corpus used by generative-retrieval papers
(DSI, NCI, GenRet, RIPOR, Ultron, LTRGR, …) built directly from the
Hugging Face `google-research-datasets/natural_questions` snapshot.
## At a glance
| Split | Rows |
|---|---|
| `corpus` | 109,650 |
| `corpus_summary` | 21,119 |
| train pairs | 307,373 |
| validation pairs | 7,830 |
- Train pairs with non-empty `long_answer`: 152,148 / 307,373 (49.5%)
- Train pairs with non-empty `short_answer`: 106,926 / 307,373 (34.8%)
- Date built: 2026-05-06
## Schema
### `corpus.jsonl`

```json
{"docid": 0, "document": "<NCI doc_tac string, ~5K-50K chars>"}
```
### `train.jsonl` / `valid.jsonl`

```json
{
  "query": "when is the last episode of season 8 of the walking dead",
  "docid": 0,
  "nq_id": "5225754983651766092",
  "url": "https://en.wikipedia.org//w/index.php?title=The_Walking_Dead_(season_8)&oldid=...",
  "title": "The Walking Dead (season 8)",
  "long_answer": "List of The Walking Dead episodes ...",
  "short_answer": ""
}
```
`docid` is a stable integer that joins to `corpus.jsonl`. To materialise
a (query, document) pair:
```python
from datasets import load_dataset

corpus = load_dataset("<your-username>/NQ320K-NCI", "corpus", split="corpus")
pairs = load_dataset("<your-username>/NQ320K-NCI", "pairs", split="train")

doc_lookup = {r["docid"]: r["document"] for r in corpus}
for p in pairs:
    document = doc_lookup[p["docid"]]
    # ... feed (p["query"], document) to your model
```
## Preprocessing
Faithful port of the official NCI notebook (Wang et al., NeurIPS 2022; `Data_process/NQ_dataset/NQ_dataset_Process.ipynb` in the released code). Each NQ row produces one record:

- Reconstruct `document_text = " ".join(document.tokens.token)` (HTML tags appear as their own tokens).
- `title = document.title`
- `abs = document_text[<P>+3 : </P>]`; HTML tags inside `<P>` are kept, matching NCI.
- `content = document_text[</P>+4 : second-to-last </Ul>]`, then HTML stripped, `\n` deleted, multiple spaces collapsed.
- `doc_tac = title + abs + content`, with no separators.
- `long_answer` / `short_answer`: token-span slices from the first annotator (`annotations[0]`), HTML stripped.
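As a rough illustration, the slicing recipe above can be sketched on a toy token stream. `build_doc_tac` is a hypothetical helper, not the released notebook code; real documents need the notebook's exact edge-case handling.

```python
import re

def build_doc_tac(tokens, title):
    """Sketch of NCI's doc_tac construction (hypothetical helper).
    Index arithmetic mirrors the recipe: <P> is 3 chars, </P> is 4."""
    text = " ".join(tokens)
    # abs: between the first "<P>" and the first "</P>", inner tags kept
    p_open = text.find("<P>")
    p_close = text.find("</P>")
    abs_ = text[p_open + 3 : p_close]
    # content: after the first "</P>", up to the second-to-last "</Ul>"
    ul_closes = [m.start() for m in re.finditer(re.escape("</Ul>"), text)]
    end = ul_closes[-2] if len(ul_closes) >= 2 else len(text)
    content = text[p_close + 4 : end]
    content = re.sub(r"<[^>]+>", " ", content)      # strip HTML tags
    content = re.sub(r"\s+", " ", content).strip()  # collapse whitespace
    return title + abs_ + content                   # no separators, as in NCI
```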
Documents are de-duplicated by their BERT-uncased-tokenizer-normalised
title (`tokenizer.tokenize(title)` → `convert_tokens_to_ids` → `decode`), exactly as
in NCI's released notebook. Concatenating train + validation and dropping
duplicates yields 109,650 unique documents (NCI reports 109,739; the
~80-doc delta comes from a slightly newer Hugging Face snapshot of NQ).
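For intuition, the de-duplication can be sketched with a pure-Python stand-in for the tokenizer round-trip. The real notebook uses `transformers`' BERT-uncased tokenizer; this approximation (lowercase, strip accents, collapse whitespace) captures the same collisions for most English titles but is not byte-for-byte identical.

```python
import unicodedata

def normalize_title(title: str) -> str:
    """Approximate BERT-uncased normalisation: strip accents,
    lowercase, collapse whitespace. A stand-in, not the real tokenizer."""
    t = unicodedata.normalize("NFD", title)
    t = "".join(c for c in t if unicodedata.category(c) != "Mn")
    return " ".join(t.lower().split())

def dedup_by_title(records):
    """Keep the first document per normalised title, as in NCI."""
    seen, out = set(), []
    for r in records:
        key = normalize_title(r["title"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```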
## Known formatting characteristics
These are inherited from NCI's preprocessing and intentional:

- Token-joined whitespace: `"AMC ,"` instead of `"AMC,"`. NCI's `doc_tac` is built by `" ".join(tokens)`, leaving a space before every punctuation mark. NCI's downstream BERT/T5 tokenizer absorbs these correctly; you may want to detokenize when feeding into other encoders.
- HTML tags inside `abs`: e.g. `"<Table><Tr>…<P>The eighth season…</P>"`. Only `content` has its tags stripped. This is the canonical NCI format.
- Non-detokenized hyphenation: `"post - apocalyptic"`, `"Spider - Man"`.
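If you do want to detokenize for a non-BERT encoder, a minimal heuristic along these lines may help. It is not part of the dataset or of NCI's pipeline, and the regexes are assumptions that will occasionally over-join (e.g. a genuine spaced hyphen); apply at your own discretion.

```python
import re

def detokenize(text: str) -> str:
    """Heuristic cleanup of token-joined text (hypothetical helper)."""
    text = re.sub(r"\s+([,.;:!?%)\]])", r"\1", text)  # drop space before closing punctuation
    text = re.sub(r"([(\[])\s+", r"\1", text)         # drop space after opening brackets
    text = re.sub(r"\s+-\s+", "-", text)              # "post - apocalyptic" -> "post-apocalyptic"
    return text
```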
## Caveat: `nq_id` is a string
NQ's original `example_id` is a uint64, and roughly half of the IDs
exceed 2^63 ≈ 9.22 × 10^18: they fit unsigned but overflow signed int64.
`nq_id` is therefore stored as a string, exactly as Google publishes it.
Do not auto-cast it to int64; about 50% of the values would silently
wrap to negative numbers. If you load with pandas:
```python
import pandas as pd

df = pd.read_json("train.jsonl", lines=True, dtype={"nq_id": str})
```
If you load with `datasets`, the typed `dataset_info` in this card already
enforces `string`, so you don't need to do anything extra:
```python
from datasets import load_dataset

ds = load_dataset("<your-username>/NQ320K-NCI", "pairs")
print(ds["train"].features["nq_id"])  # Value(dtype='string', id=None)
```
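To see the wraparound concretely, here is a minimal demonstration of what a naive signed-int64 cast does to a uint64-range ID (the `nq_id` value below is hypothetical, chosen to exceed 2^63):

```python
import struct

nq_id = "17327947293863986632"  # hypothetical uint64-range ID
v = int(nq_id)

# Reinterpret the unsigned 64-bit pattern as signed int64,
# which is what a naive int64 cast effectively does.
wrapped, = struct.unpack("<q", struct.pack("<Q", v))

print(v >= 2**63)   # out of signed-int64 range
print(wrapped < 0)  # silently wrapped to a negative number
```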
## Corpus Summary

The `corpus_summary` split additionally provides summaries for a 21,119-document subset of the corpus, generated with the `sshleifer/distilbart-cnn-12-6` summarization model.
## License & attribution
This dataset is a derivative of the Natural Questions dataset by Google (Kwiatkowski et al., TACL 2019), released under CC BY-SA 3.0. This derivative dataset is therefore also released under CC BY-SA 3.0 (ShareAlike).
The preprocessing recipe is from Neural Corpus Indexer (Wang et al., NeurIPS 2022); see their released notebook.
## Citation
If you use this dataset, please cite:
```bibtex
@article{kwiatkowski2019natural,
  author  = {Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia
             and Collins, Michael and Parikh, Ankur and Alberti, Chris and
             Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and
             Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey,
             Matthew and Chang, Ming-Wei and Dai, Andrew and Uszkoreit, Jakob
             and Le, Quoc and Petrov, Slav},
  title   = {Natural Questions: a Benchmark for Question Answering Research},
  journal = {Transactions of the Association for Computational Linguistics},
  year    = {2019}
}

@inproceedings{wang2022neural,
  author    = {Wang, Yujing and Hou, Yingyan and Wang, Haonan and Miao, Ziming
               and Wu, Shibin and Sun, Hao and Chen, Qi and Xia, Yuqing and
               Chi, Chengmin and Zhao, Guoshuai and Liu, Zheng and Xie, Xing
               and Sun, Hao Allen and Deng, Weiwei and Zhang, Qi and Yang,
               Mao},
  title     = {A Neural Corpus Indexer for Document Retrieval},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2022}
}
```