---
license: cc-by-sa-4.0
task_categories:
  - text-generation
  - token-classification
  - feature-extraction
language:
  - en
size_categories:
  - 1M<n<10M
pretty_name: 'Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2'
tags:
  - dependency-parsing
  - universal-dependencies
  - nlp-dataset
  - structural-linguistics
  - named-entity-recognition
  - wikipedia
---

# Dataset Card for Stanza-Wikitext-2

## Dataset Description

Stanza-Wikitext-2 is a structurally pristine, mathematically verified NLP dataset designed for multi-task language modeling, custom tokenizer training, structural NLP research, and mechanistic interpretability work.

It is a rigorously modernized and annotated derivative of the wikitext-2-raw-v1 corpus. Using the Stanford NLP Stanza neural pipeline, every token in the corpus has been explicitly mapped to its grammatical, syntactic, and semantic function across seven aligned annotation layers. Stanza-Wikitext-2 preserves document geometry, explicitly labeling Markdown headers to support structure-aware neural architectures.

- **Curated by:** Jonathan R. Belanger (Exorobourii LLC)
- **Language:** English (en)
- **License:** CC-BY-SA-4.0
- **DOI:** 10.57967/hf/8060
- **Total Sentences:** 101,455 (across all splits)
- **Total Tokens:** 2,469,912

## Corpus Statistics

| Split | Sentences | Tokens |
|---|---:|---:|
| Train | 82,760 | 2,021,438 |
| Validation | 8,622 | 210,732 |
| Test | 10,073 | 237,742 |
| **Total** | **101,455** | **2,469,912** |

Rows removed by Phase 4c integrity repair: 8 (train split only).


## Structural Characterization

Unlike standard text corpora, Stanza-Wikitext-2 ships with a full quantitative geometric characterization derived from its dependency structure. These figures are provided to assist researchers in assessing corpus suitability before use.

### Dependency Degree Distribution

Dependency degree (number of dependents per token) is strongly right-skewed with faster-than-power-law decay. A KS-based MLE scan (Clauset et al., 2009) found no well-supported power-law regime across the observable degree range. The corpus is heavily left-concentrated — the majority of tokens are leaves.

| Percentile | Degree |
|---|---:|
| 50th (median) | 0 |
| 90th | 3 |
| 99th | 6 |
| 99.9th | 9 |
| Maximum | 43 |

- Degree entropy: 1.839 bits
- Effective degree vocabulary: degree 0–11 (values of 12 and above are sparse artifacts of list coordination)
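The degree figures above can be recomputed directly from the dataset's 1-indexed `head` arrays. A minimal sketch (the four-token sentence below is an invented toy example, not a corpus row):

```python
from collections import Counter

def degree_distribution(head_arrays):
    """Count dependents per token across sentences.

    `head` holds 1-indexed head positions (0 = root), so a token's
    degree is the number of times its 1-based index appears as a head.
    """
    counts = Counter()
    for head in head_arrays:
        deg = [0] * len(head)
        for h in head:
            if h > 0:
                deg[h - 1] += 1
        counts.update(deg)
    return counts

# Toy sentence "The cat sleeps ." with heads [2, 3, 0, 3]:
# "The" and "." are leaves, "cat" has one dependent, "sleeps" has two.
dist = degree_distribution([[2, 3, 0, 3]])
```

Aggregating `degree_distribution` over the full train split should reproduce the percentile table above.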

### Token Depth Distribution

Token depth (distance from dependency root, measured upward) characterizes positional distribution within the tree.

| Metric | Value |
|---|---:|
| Range | 0 – 25 |
| Mean | 2.745 |
| Std | 1.674 |
| Entropy | 2.679 bits |

Mean subtree height (measured downward from each node): 5.45 nodes. Maximum subtree height: 26 nodes. Root center of mass: 0.24.
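Token depth as defined here can be computed by walking each token's head chain to the root; a sketch using the same toy heads as above:

```python
def token_depths(head):
    """Depth of each token = number of head links between it and the
    dependency root (which sits at depth 0)."""
    def depth(i):
        d = 0
        while head[i] != 0:
            i = head[i] - 1  # follow the 1-indexed head pointer
            d += 1
        return d
    return [depth(i) for i in range(len(head))]

# heads [2, 3, 0, 3]: root "sleeps" at depth 0, "cat" and "." at
# depth 1, "The" at depth 2
depths = token_depths([2, 3, 0, 3])
```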

### Structural Grammar Matrix

Of the full cross-product of UPOS tags and DepRel labels, 451 unique UPOS×DepRel combinations are observed across the corpus. This matrix constitutes a compact geometric fingerprint of the corpus's syntactic behavior and is available as `structural_grammar_matrix.csv` in the associated reports.

### Geometric Motif Analysis

A dependency motif is defined as a parent node (UPOS×DepRel) paired with a sorted tuple of its children's (UPOS×DepRel) labels. The train split contains 106,057 unique motifs with a strongly right-skewed frequency distribution.

| Coverage | Motifs Required | % of Total Motifs |
|---|---:|---:|
| 50% | 343 | 0.32% |
| 80% | 7,743 | 7.30% |
| 90% | 33,080 | 31.19% |
| 95% | 69,571 | 65.60% |
| 100% | 106,057 | 100% |

The top 343 motifs account for half of all motif occurrences. The distribution is heavily long-tailed: 95% coverage requires 65.6% of the full motif vocabulary, indicating a compact high-frequency structural core alongside a large population of rare configurations.
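The motif definition above can be sketched as follows; the toy sentence is invented, and the extraction here illustrates the definition rather than reproducing the report's exact code:

```python
def sentence_motifs(upos, deprel, head):
    """Motifs per the definition above: a parent's UPOS×DepRel label
    paired with the sorted tuple of its children's labels."""
    labels = [f"{u}×{d}" for u, d in zip(upos, deprel)]
    children = [[] for _ in head]
    for i, h in enumerate(head):
        if h > 0:
            children[h - 1].append(labels[i])  # head is 1-indexed
    return {(labels[p], tuple(sorted(kids)))
            for p, kids in enumerate(children) if kids}

# Toy sentence "The cat sleeps ." (not a corpus row)
motifs = sentence_motifs(
    ["DET", "NOUN", "VERB", "PUNCT"],
    ["det", "nsubj", "root", "punct"],
    [2, 3, 0, 3],
)
```

Only tokens that actually head at least one dependent produce a motif, so leaves contribute nothing.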

### Structural Rigidity by UPOS

Dependency degree varies substantially by part-of-speech, reflecting syntactic valency differences. VERB is the highest-degree head class; functional categories cluster near zero.

| UPOS | Mean Degree | Max Degree | Entropy (bits) |
|---|---:|---:|---:|
| VERB | 3.54 | 15 | 2.88 |
| NOUN | 2.22 | 36 | 2.61 |
| PROPN | 1.32 | 43 | 2.27 |
| ADJ | 0.56 | 17 | 1.32 |
| ADV | 0.25 | 9 | 0.89 |
| AUX | 0.02 | 8 | 0.11 |
| DET | 0.02 | 10 | 0.11 |
| PUNCT | 0.006 | 11 | 0.03 |
| PART | 0.005 | 6 | 0.03 |

### Structural Information Content

Normalized mutual information between structural measurements and linguistic labels:

| Pair | NMI |
|---|---:|
| Degree × UPOS | 0.223 |
| Degree × DepRel | 0.294 |
| Depth × UPOS | 0.054 |
| Depth × DepRel | 0.118 |

Degree carries substantially more linguistic signal than depth. Neither measurement is redundant with linguistic category — they capture geometrically distinct aspects of syntactic structure.
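The card does not state which NMI normalization convention the figures use; the stdlib-only sketch below assumes the common arithmetic-mean convention, so values recomputed this way may differ slightly from the table:

```python
from collections import Counter
from math import log2

def nmi(xs, ys):
    """Normalized mutual information I(X;Y) / mean(H(X), H(Y))."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))

    def entropy(counts):
        return -sum((v / n) * log2(v / n) for v in counts.values())

    # I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) )
    mi = sum((v / n) * log2(v * n / (px[x] * py[y]))
             for (x, y), v in pxy.items())
    hx, hy = entropy(px), entropy(py)
    return mi / ((hx + hy) / 2) if hx and hy else 0.0
```

Applied to the corpus, `xs` would be per-token degrees (or depths) and `ys` the co-indexed UPOS or DepRel labels.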

### Per-Sentence Structural Complexity

Per-sentence degree entropy has mean 1.555 bits (std 0.275, max 1.954 bits). Structural complexity means are stable across all three splits, confirming that the canonical WikiText-2 split boundaries do not introduce distributional artifacts.
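Per-sentence degree entropy as used here can be computed from a single `head` array; a sketch with an invented toy sentence:

```python
from collections import Counter
from math import log2

def degree_entropy(head):
    """Shannon entropy (bits) of the within-sentence token degree
    distribution."""
    deg = [0] * len(head)
    for h in head:
        if h > 0:
            deg[h - 1] += 1  # head is 1-indexed; 0 marks the root
    n = len(deg)
    return -sum((c / n) * log2(c / n) for c in Counter(deg).values())

# Toy heads [2, 3, 0, 3] -> degrees [0, 1, 2, 0]
# -> probabilities {0: 0.5, 1: 0.25, 2: 0.25} -> 1.5 bits
h = degree_entropy([2, 3, 0, 3])
```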


## Dataset Structure

Stanza-Wikitext-2 stores linguistic features as parallel arrays. Each row represents a single sentence; all features are kept in co-indexed, equal-length arrays, guaranteeing 1:1 token-to-annotation alignment.

### Schema

| Column | Type | Description |
|---|---|---|
| `chunk_id` | int64 | Positional ID of the text block within the document stream |
| `sentence_id` | int64 | Positional ID of the sentence within its chunk |
| `raw_text` | string | Cleaned, normalized sentence text |
| `is_header` | bool | `True` if the sentence is a structural document header |
| `section_level` | int64 | Markdown header depth (1–6); 0 if not a header |
| `tokens` | list[str] | Surface word forms |
| `lemmas` | list[str] | Morphological base forms |
| `upos` | list[str] | Universal POS tags (17-class UD tagset) |
| `xpos` | list[str] | Penn Treebank POS tags |
| `head` | list[int64] | 1-indexed syntactic head positions (0 = root anchor) |
| `deprel` | list[str] | Universal Dependencies relation labels |
| `ner` | list[str] | Named entity tags in BIOES format |

All array columns are co-indexed: `column[i]` refers to the same token across all columns for a given row.


## Methodology & Provenance

### Phase 1: Cryptographic Ingestion

To prevent silent upstream updates from compromising downstream reproducibility, this dataset was built from a cryptographically verified snapshot of the ggml-org/ci raw mirror.

- **Source Archive:** `wikitext-2-raw-v1.zip`
- **SHA-256 Checksum:** `ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11`
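Given a local copy of the archive, the checksum can be re-verified with Python's `hashlib`; a sketch (the filename is taken from the bullet above):

```python
import hashlib

EXPECTED = "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so large archives need not fit
    in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("wikitext-2-raw-v1.zip") == EXPECTED  # verify before building
```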

### Phase 2: Degradation Audit

WikiText-2 is distributed in two variants: a pre-tokenized `.tokens` format in which low-frequency terms are replaced with `<unk>` substitution tokens, and a `.raw` format retaining original surface forms. This pipeline operates on the `.raw` files exclusively. A precautionary contamination audit computed a penalized degradation score per text block:

D*(P) = (|unk| / N) · log₂(1 + √N)

The audit confirmed zero `<unk>` tokens across all 23,767 text blocks, returning a clean result. No filtering was applied or required. This validates the source file selection: by operating on `.raw` rather than `.tokens`, the pipeline inherits no vocabulary substitution artifacts, and downstream analyses reflect genuine surface token distributions.
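The score formula translates directly into code; a sketch, where N is the block's token count and |unk| the number of `<unk>` substitutions:

```python
from math import log2, sqrt

def degradation_score(tokens):
    """D*(P) = (|unk| / N) · log2(1 + sqrt(N)) for one text block.

    The sqrt-log penalty weights contamination more heavily in longer
    blocks, where <unk> substitutions are harder to spot by eye.
    """
    n = len(tokens)
    if n == 0:
        return 0.0
    unk = sum(t == "<unk>" for t in tokens)
    return (unk / n) * log2(1 + sqrt(n))
```

A clean block scores exactly 0.0, matching the audit result reported above.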

### Phase 3: GPU-Accelerated Normalization

Text normalization was performed using NVIDIA RAPIDS cuDF on an L4 GPU. Four operations were applied in sequence:

1. Whitespace normalization: leading/trailing whitespace stripped
2. Hyphen modernization: legacy `@-@` artifacts collapsed to standard hyphens (e.g. `Apollo @-@ Soyuz` → `Apollo-Soyuz`)
3. Punctuation normalization: floating punctuation corrected via CPU bypass using Python `re` with backreferences (e.g. `word ,` → `word,`)
4. Header normalization: `= Title =` through `====== Title ======` converted to Markdown H1–H6 in strict descending order to preserve document hierarchy
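The four passes can be sketched as a single CPU function. The regexes below are illustrative, not the pipeline's actual patterns (the real pipeline ran the bulk operations on cuDF):

```python
import re

def normalize(line):
    """CPU sketch of the four normalization passes described above."""
    line = line.strip()                                   # 1. whitespace
    line = re.sub(r"\s*@-@\s*", "-", line)                # 2. @-@ -> hyphen
    line = re.sub(r"\s+([,.;:!?])", r"\1", line)          # 3. floating punctuation
    m = re.fullmatch(r"(={1,6})\s*(.*?)\s*\1", line)      # 4. = Title = -> # Title
    if m:
        line = f"{'#' * len(m.group(1))} {m.group(2)}"
    return line
```

For example, `normalize(" = Valkyria Chronicles = ")` yields `"# Valkyria Chronicles"`, and `normalize("Apollo @-@ Soyuz mission .")` yields `"Apollo-Soyuz mission."`.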

### Phase 4: Stanza NLP Enrichment

Stanza 1.11.1 was initialized with the `tokenize`, `pos`, `lemma`, `depparse`, and `ner` processors on GPU. Output was serialized to Parquet with ZSTD compression (level 3).

Following enrichment, all Parquet files were subjected to a row-level integrity audit guaranteeing:

  1. Dimensional symmetry: all parallel arrays within a row are equal length
  2. Root singularity: every sentence has exactly one dependency root (head == 0)
  3. Graph bounds: no head index points outside the sentence boundary

Eight structurally invalid sentences were identified in the train split and removed via automated ledger repair. The resulting dataset is 100% structurally valid across all splits.
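The three invariants can be checked per row in a few lines; a sketch (the row dict below is an invented toy example):

```python
def audit_row(row):
    """Check the three Phase 4 invariants for one sentence row."""
    n = len(row["tokens"])
    parallel = ["lemmas", "upos", "xpos", "head", "deprel", "ner"]
    ok_symmetry = all(len(row[c]) == n for c in parallel)  # 1. dimensional symmetry
    ok_root = sum(h == 0 for h in row["head"]) == 1        # 2. root singularity
    ok_bounds = all(0 <= h <= n for h in row["head"])      # 3. graph bounds
    return ok_symmetry and ok_root and ok_bounds

good = {"tokens": ["Hi", "."], "lemmas": ["hi", "."],
        "upos": ["INTJ", "PUNCT"], "xpos": ["UH", "."],
        "head": [0, 1], "deprel": ["root", "punct"], "ner": ["O", "O"]}
```

`audit_row(good)` returns `True`; a row with two zero heads or a head index beyond the sentence length fails.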

### Phase 5: Structural Metadata Injection

The `is_header` and `section_level` columns were injected via vectorized Markdown header detection, enabling structure-aware models to condition on document position without reprocessing raw text.
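A hypothetical scalar version of that detection (the actual pipeline applied it in vectorized form; `header_metadata` is an illustrative helper, not the pipeline's API):

```python
import re

# Matches the Markdown H1-H6 headers produced by Phase 3.
HEADER_RE = re.compile(r"^(#{1,6})\s+\S")

def header_metadata(raw_text):
    """Return (is_header, section_level) for one sentence string."""
    m = HEADER_RE.match(raw_text)
    return (True, len(m.group(1))) if m else (False, 0)
```

For example, `header_metadata("## Gameplay")` gives `(True, 2)` and any non-header sentence gives `(False, 0)`.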


## Usage

```python
import pandas as pd

# Load a split
df = pd.read_parquet("hf://datasets/EXOROBOURII/Stanza-Wikitext-2/wiki.train.enriched.parquet")

# Aligned token access
sentence = df.iloc[0]
for token, upos, deprel, head in zip(
    sentence['tokens'],
    sentence['upos'],
    sentence['deprel'],
    sentence['head']
):
    print(f"{token:<20} {upos:<8} {deprel:<16} head={head}")

# Filter to content sentences only (exclude headers)
content = df[~df['is_header']].reset_index(drop=True)

# Filter to a specific section level
h2_headers = df[df['section_level'] == 2]

# Reconstruct the dependency tree for a sentence
from collections import defaultdict

def get_children(head_array):
    children = defaultdict(list)
    for i, h in enumerate(head_array):
        if h > 0:
            children[h - 1].append(i)  # convert to 0-indexed
    return children

row = df.iloc[10]
children = get_children(row['head'])
root_idx = list(row['head']).index(0)
print(f"Root token: {row['tokens'][root_idx]} ({row['upos'][root_idx]})")
print(f"Root dependents: {[row['tokens'][c] for c in children[root_idx]]}")
```

## Reports and Analysis Artifacts

The following analytical reports are available in the dataset repository:

| File | Description |
|---|---|
| `structural_grammar_matrix.csv` | 451 UPOS×DepRel combinations with frequencies |
| `geometric_motifs_wiki.train.enriched.csv` | 106,057 unique dependency motifs |
| `entity_distribution.csv` | Named entity frequencies and types |
| `entity_cooccurrence.csv` | Sentence-level entity co-occurrence pairs |
| `motif_analytics_summary.txt` | Motif coverage analysis and valency statistics |
| `structural_rigidity_full.csv` | Per-UPOS weighted valency statistics |
| `degree_distribution.csv` | Full token degree frequency table |
| `depth_distribution.csv` | Full token depth frequency table |
| `mi_summary.csv` | NMI values for degree/depth × UPOS/DepRel |
| `sentence_structural_stats.csv` | Per-sentence degree and depth statistics |

## Citation

```bibtex
@dataset{belanger2025stanza2,
  author    = {Belanger, Jonathan R.},
  title     = {Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/EXOROBOURII/Stanza-Wikitext-2},
  doi       = {10.57967/hf/8060}
}
```

## License

CC-BY-SA-4.0. Derivative of WikiText-2 (CC-BY-SA-4.0, Merity et al. 2016).