---
license: cc-by-sa-3.0
language:
- en
task_categories:
- text-retrieval
- question-answering
size_categories:
- 100K<n<1M
---

# NQ320K-NCI

## Files

### `corpus.jsonl`

One JSON object per document:

```json
{"docid": 0, "document": "…"}
```

### `train.jsonl` / `valid.jsonl`

```json
{
  "query": "when is the last episode of season 8 of the walking dead",
  "docid": 0,
  "nq_id": "5225754983651766092",
  "url": "https://en.wikipedia.org//w/index.php?title=The_Walking_Dead_(season_8)&oldid=...",
  "title": "The Walking Dead (season 8)",
  "long_answer": "List of The Walking Dead episodes ...",
  "short_answer": ""
}
```

`docid` is a stable integer that **joins** to `corpus.jsonl`. To materialise a (query, document) pair:

```python
from datasets import load_dataset

corpus = load_dataset("/NQ320K-NCI", "corpus", split="corpus")
pairs = load_dataset("/NQ320K-NCI", "pairs", split="train")

doc_lookup = {r["docid"]: r["document"] for r in corpus}
for p in pairs:
    document = doc_lookup[p["docid"]]
    # ... feed (p["query"], document) to your model
```

## Preprocessing

A faithful port of the official NCI notebook ([Wang et al., NeurIPS 2022](https://arxiv.org/abs/2206.02743); [Data_process/NQ_dataset/NQ_dataset_Process.ipynb](https://github.com/solidsea98/Neural-Corpus-Indexer-NCI/blob/main/Data_process/NQ_dataset/NQ_dataset_Process.ipynb) in the released code). Each NQ row produces one record:

1. Reconstruct `document_text = " ".join(document.tokens.token)` (HTML tags appear as their own tokens).
2. `title = document.title`.
3. `abs = document_text[index("<P>")+3 : index("</P>")]`, i.e. the span between the first `<P>` and the first `</P>` — **HTML tags inside `<P>…</P>` are kept**, matching NCI.
4. `content = document_text[index("</P>")+4 : second-to-last token]`, then HTML stripped, `\n` deleted, multiple spaces collapsed.
5. `doc_tac = title + abs + content` — **no separators**.
6. `long_answer` / `short_answer`: token-span slices from the first annotator (`annotations[0]`), HTML stripped.

Documents are de-duplicated by their **BERT-uncased-tokenizer-normalised title** (`tokenizer.tokenize(title)` → `convert_tokens_to_ids` → `decode`), exactly as in NCI's released notebook. Concatenating `train + validation` and dropping duplicates yields **109,650** unique documents (NCI reports 109,739; the 89-document delta comes from a slightly newer Hugging Face snapshot of NQ).

## Known formatting characteristics

These are **inherited from NCI's preprocessing** and intentional:

* **Token-joined whitespace**: `"AMC ,"` instead of `"AMC,"`. NCI's `doc_tac` is built by `" ".join(tokens)`, which leaves a space before every punctuation mark. NCI's downstream BERT/T5 tokenizers absorb these correctly; you may want to detokenize when feeding into other encoders.
* **HTML tags inside `abs`**: e.g. `"… <P> The eighth season… </P>"`. Only `content` has its tags stripped. This is the canonical NCI format.
* **Non-detokenized hyphenation**: `"post - apocalyptic"`, `"Spider - Man"`.

## Caveat: `nq_id` is a string

NQ's original `example_id` is a **uint64**, and roughly half of the IDs exceed `2^63 ≈ 9.22 × 10^18`. They fit in an unsigned 64-bit integer but overflow a signed int64. `nq_id` is therefore stored as a **string**, exactly as Google publishes it. **Do not auto-cast it to int64** — about 50% of the values would silently wrap to negative numbers.

If you load with pandas:

```python
import pandas as pd

df = pd.read_json("train.jsonl", lines=True, dtype={"nq_id": str})
```

If you load with `datasets`, the typed `dataset_info` in this card already enforces `string`, so you don't need to do anything extra:

```python
from datasets import load_dataset

ds = load_dataset("/NQ320K-NCI", "pairs")
print(ds["train"].features["nq_id"])  # Value(dtype='string', id=None)
```

## Corpus Summary

We additionally provide a subset of the corpus as summarized text, generated with the `sshleifer/distilbart-cnn-12-6` model.

## License & attribution

This dataset is a derivative of the Natural Questions dataset by Google ([Kwiatkowski et al., TACL 2019](https://aclanthology.org/Q19-1026/)), released under **CC BY-SA 3.0**. This derivative dataset is therefore also released under **CC BY-SA 3.0** (ShareAlike). The preprocessing recipe is from [Neural Corpus Indexer (Wang et al., NeurIPS 2022)](https://arxiv.org/abs/2206.02743); see their released [notebook](https://github.com/solidsea98/Neural-Corpus-Indexer-NCI/blob/main/Data_process/NQ_dataset/NQ_dataset_Process.ipynb).
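A quick way to see why the string typing of `nq_id` matters: the int64 wraparound described in the caveat section above can be demonstrated with NumPy. The oversized ID below is a **hypothetical** value (it is not taken from the dataset; any uint64 above `2^63` behaves identically):

```python
import numpy as np

# "5225754983651766092" is the example ID from train.jsonl and fits in int64;
# "17310426272287489829" is a HYPOTHETICAL ID above 2**63.
small_id = "5225754983651766092"
big_id = "17310426272287489829"

assert int(small_id) < 2**63   # safe in signed int64
assert int(big_id) > 2**63     # only fits in *unsigned* 64-bit

# A forced signed-64-bit cast silently wraps to a negative number:
wrapped = np.uint64(int(big_id)).astype(np.int64)
print(int(wrapped) < 0)  # True — the ID is corrupted, hence nq_id stays a string
```

Keeping the field as a string (as the pandas `dtype={"nq_id": str}` snippet above does) avoids this silent corruption entirely.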
## Citation

If you use this dataset, please cite:

```bibtex
@article{kwiatkowski2019natural,
  author  = {Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav},
  title   = {Natural Questions: a Benchmark for Question Answering Research},
  journal = {Transactions of the Association for Computational Linguistics},
  year    = {2019}
}

@inproceedings{wang2022neural,
  author    = {Wang, Yujing and Hou, Yingyan and Wang, Haonan and Miao, Ziming and Wu, Shibin and Sun, Hao and Chen, Qi and Xia, Yuqing and Chi, Chengmin and Zhao, Guoshuai and Liu, Zheng and Xie, Xing and Sun, Hao Allen and Deng, Weiwei and Zhang, Qi and Yang, Mao},
  title     = {A Neural Corpus Indexer for Document Retrieval},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2022}
}
```