---
license: cc-by-sa-4.0
task_categories:
- text-generation
- token-classification
- feature-extraction
language:
- en
size_categories:
- 1M<n<10M
---

# Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2

---

## Processing Pipeline

### Phase 2: Contamination Audit

WikiText-2 is distributed in two formats: a `.tokens` format with `<unk>` substitution tokens, and a `.raw` format retaining original surface forms. This pipeline operates on the `.raw` files exclusively.

A precautionary contamination audit computed a penalized degradation score per text block, where `N` is the block's token count and `|unk|` the number of `<unk>` placeholders:

```
D*(P) = (|unk| / N) · log₂(1 + √N)
```

The audit confirmed zero `<unk>` tokens across all 23,767 text blocks, returning a clean result. No filtering was applied or required. This validates the source file selection: by operating on `.raw` rather than `.tokens`, the pipeline inherits no vocabulary substitution artifacts, and downstream analyses reflect genuine surface token distributions.

### Phase 3: GPU-Accelerated Normalization

Text normalization was performed using NVIDIA RAPIDS cuDF on an L4 GPU. Four operations were applied in sequence:

1. **Whitespace normalization:** leading/trailing whitespace stripped
2. **Hyphen modernization:** legacy `@-@` artifacts collapsed to standard hyphens (e.g. `Apollo @-@ Soyuz` → `Apollo-Soyuz`)
3. **Punctuation normalization:** floating punctuation corrected via a CPU bypass using Python `re` with backreferences (e.g. `word ,` → `word,`)
4. **Header normalization:** `= Title =` through `====== Title ======` converted to Markdown H1–H6 in strict descending order to preserve document hierarchy

### Phase 4: Stanza NLP Enrichment

Stanza 1.11.1 was initialized with the `tokenize, pos, lemma, depparse, ner` processors on GPU. Output was serialized to Parquet with ZSTD compression (level 3).

Following enrichment, all Parquet files were subjected to a microscopic integrity audit guaranteeing:

1. **Dimensional symmetry:** all parallel arrays within a row are equal length
2. **Root singularity:** every sentence has exactly one dependency root (`head == 0`)
3. **Graph bounds:** no head index points outside the sentence boundary

Eight structurally invalid sentences were identified in the train split and removed via automated ledger repair. The Stanza-Wikitext-2 dataset is **100% structurally valid** across all splits.

### Phase 5: Structural Metadata Injection

`is_header` and `section_level` columns were injected via vectorized Markdown header detection. This enables structure-aware models to condition on document position without reprocessing raw text.

---

## Usage

```python
import pandas as pd
from collections import defaultdict

# Load a split
df = pd.read_parquet("hf://datasets/EXOROBOURII/Stanza-Wikitext-2/wiki.train.enriched.parquet")

# Aligned token access
sentence = df.iloc[0]
for token, upos, deprel, head in zip(
    sentence['tokens'], sentence['upos'], sentence['deprel'], sentence['head']
):
    print(f"{token:<20} {upos:<8} {deprel:<16} head={head}")

# Filter to content sentences only (exclude headers)
content = df[~df['is_header']].reset_index(drop=True)

# Filter to a specific section level
h2_headers = df[df['section_level'] == 2]

# Reconstruct the dependency tree for a sentence
def get_children(head_array):
    children = defaultdict(list)
    for i, h in enumerate(head_array):
        if h > 0:
            children[h - 1].append(i)  # convert 1-indexed heads to 0-indexed positions
    return children

row = df.iloc[10]
children = get_children(row['head'])
root_idx = list(row['head']).index(0)
print(f"Root token: {row['tokens'][root_idx]} ({row['upos'][root_idx]})")
print(f"Root dependents: {[row['tokens'][c] for c in children[root_idx]]}")
```

---

## Reports and Analysis Artifacts

The following analytical reports are available in the dataset repository:

| File | Description |
|------|-------------|
| `structural_grammar_matrix.csv` | 451 UPOS×DepRel combinations with frequencies |
| `geometric_motifs_wiki.train.enriched.csv` | 106,057 unique dependency motifs |
| `entity_distribution.csv` | Named entity frequencies and types |
| `entity_cooccurrence.csv` | Sentence-level entity co-occurrence pairs |
| `motif_analytics_summary.txt` | Motif coverage analysis and valency statistics |
| `structural_rigidity_full.csv` | Per-UPOS weighted valency statistics |
| `degree_distribution.csv` | Full token degree frequency table |
| `depth_distribution.csv` | Full token depth frequency table |
| `mi_summary.csv` | NMI values for degree/depth × UPOS/DepRel |
| `sentence_structural_stats.csv` | Per-sentence degree and depth statistics |

---

## Citation

```bibtex
@dataset{belanger2025stanza2,
  author    = {Belanger, Jonathan R.},
  title     = {Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/EXOROBOURII/Stanza-Wikitext-2},
  doi       = {10.57967/hf/8060}
}
```

---

## License

CC-BY-SA-4.0. Derivative of WikiText-2 (CC-BY-SA-4.0, Merity et al. 2016).
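
---

## Appendix: Degradation Score Sketch

The Phase 2 contamination score can be sketched as a small function. This is a minimal illustration, not the audit's actual implementation; treating `N` as the length of a pre-tokenized block, and scoring empty blocks as `0.0`, are assumptions:

```python
import math

def degradation_score(tokens):
    """Penalized degradation score D*(P) = (|unk| / N) * log2(1 + sqrt(N)).

    `tokens` is one text block as a list of surface tokens; |unk| counts
    literal '<unk>' placeholders. Empty blocks score 0.0 (an assumption).
    """
    n = len(tokens)
    if n == 0:
        return 0.0
    unk = sum(1 for t in tokens if t == "<unk>")
    return (unk / n) * math.log2(1 + math.sqrt(n))

# A clean block scores exactly 0.0; any '<unk>' contamination scores above it.
assert degradation_score("the cat sat on the mat".split()) == 0.0
assert degradation_score(["the", "<unk>", "mat"]) > 0.0
```

The `log₂(1 + √N)` factor penalizes contamination more heavily in longer blocks, so a single `<unk>` in a long article outweighs one in a short stub at equal rates.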
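
---

## Appendix: Integrity Audit Sketch

The three structural guarantees from Phase 4 can be re-checked on any row. The sketch below is an illustration under stated assumptions, not the pipeline's actual audit code: `validate_row` is a hypothetical helper, and it assumes the per-row column names (`tokens`, `upos`, `deprel`, `head`) and Stanza's 1-indexed head convention shown in the Usage section:

```python
def validate_row(tokens, upos, deprel, head):
    """Check one sentence row against the three Phase 4 guarantees.

    Assumes `head` holds 1-indexed parent ids, with 0 marking the root.
    """
    n = len(tokens)
    # 1. Dimensional symmetry: all parallel arrays are equal length.
    if not (len(upos) == len(deprel) == len(head) == n):
        return False
    # 2. Root singularity: exactly one token has head == 0.
    if sum(1 for h in head if h == 0) != 1:
        return False
    # 3. Graph bounds: every head id stays within the sentence (0..n).
    if any(h < 0 or h > n for h in head):
        return False
    return True

# A well-formed two-token sentence passes; a double-root row fails.
assert validate_row(["Hello", "world"], ["INTJ", "NOUN"], ["root", "vocative"], [0, 1])
assert not validate_row(["a", "b"], ["DET", "NOUN"], ["root", "root"], [0, 0])
```

Applied over a whole split with `df.apply`, a function like this should return `True` for every row, matching the dataset's 100% structural-validity claim.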