
sozkz-corpus-dedup-kk-web-v1

A deduplicated Kazakh web-text corpus collected from six public Hugging Face datasets. It contains only texts not present in kz-transformers/multidomain-kazakh-dataset (12.4M texts used as the dedup reference).

Stats

| Field | Value |
|---|---|
| Total unique texts | 9,475,089 |
| Format | Parquet (142 shards) |
| Columns | `text`, `source` |
| Dedup method | MD5 hash (exact match) |
| Dedup reference | kz-transformers/multidomain-kazakh-dataset (12.4M hashes) |
| Date | 2026-02-13 |
| Version | v1 |
| License | Apache 2.0 |
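The exact-match dedup described above can be sketched in a few lines: each text is hashed with MD5 and dropped if its digest is already in the reference set (or was seen earlier in the input). This is a hypothetical illustration, not the actual build script; `md5_key` and `dedup_against_reference` are names invented here.

```python
import hashlib

def md5_key(text: str) -> str:
    """MD5 hex digest of the UTF-8 encoded text (exact-match dedup key)."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def dedup_against_reference(texts, reference_hashes):
    """Yield only texts whose hash is not in the reference set,
    also skipping duplicates within the input itself."""
    seen = set(reference_hashes)
    for t in texts:
        h = md5_key(t)
        if h not in seen:
            seen.add(h)
            yield t

# Toy stand-in for the 12.4M reference hashes:
ref = {md5_key("қазақша мәтін")}
batch = ["қазақша мәтін", "жаңа мәтін", "жаңа мәтін"]
print(list(dedup_against_reference(batch, ref)))  # ['жаңа мәтін']
```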

Sources

| Source | HF Dataset | Unique texts | Dupes removed |
|---|---|---|---|
| culturax | uonlp/CulturaX (kk) | 2,705,991 | 25,943 |
| hplt | HPLT/HPLT2.0_cleaned (kaz_Cyrl) | 2,246,264 | 391,066 |
| mc4 | allenai/c4 (kk) | 2,230,795 | 140,733 |
| madlad400 | allenai/MADLAD-400 (kk) | 1,807,827 | 169 |
| moscar | oscar-corpus/mOSCAR (kaz_Cyrl) | 245,869 | 0 |
| wikipedia | wikimedia/wikipedia (20231101.kk) | 238,343 | 13 |

Cleaning

  • Unicode NFC normalization
  • Control character removal
  • Whitespace/newline collapsing
  • Min text length: 50 chars
  • Max URL density filter
  • HTML tag filter
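The steps above can be sketched as a single cleaning function. This is a hypothetical reconstruction: the card does not state the URL-density threshold or the exact regexes, so `MAX_URL_DENSITY` and the patterns below are assumptions.

```python
import re
import unicodedata

MIN_LEN = 50           # min text length, per the card
MAX_URL_DENSITY = 0.1  # assumed threshold; the card does not give a value

URL_RE = re.compile(r"https?://\S+")
TAG_RE = re.compile(r"<[^>]+>")
CTRL_RE = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def clean(text: str):
    """Apply the listed cleaning steps; return None if the text is filtered out."""
    text = unicodedata.normalize("NFC", text)   # Unicode NFC normalization
    text = CTRL_RE.sub("", text)                # control character removal
    text = re.sub(r"\s+", " ", text).strip()    # whitespace/newline collapsing
    if len(text) < MIN_LEN:
        return None                             # min length filter
    words = text.split()
    url_density = sum(1 for w in words if URL_RE.match(w)) / max(len(words), 1)
    if url_density > MAX_URL_DENSITY:
        return None                             # max URL density filter
    if TAG_RE.search(text):
        return None                             # HTML tag filter
    return text
```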

Usage

```python
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-dedup-kk-web-v1", split="train")
print(len(ds))  # 9,475,089
print(ds[0]["text"][:200])
print(ds[0]["source"])  # e.g. "culturax"
```
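Since every row carries a `source` column, the per-source composition can be checked with a quick tally. A minimal stdlib sketch over stand-in rows (on the real dataset, iterate `ds` the same way):

```python
from collections import Counter

# Hypothetical rows standing in for dataset examples ({"text", "source"} dicts).
rows = [
    {"text": "…", "source": "culturax"},
    {"text": "…", "source": "culturax"},
    {"text": "…", "source": "wikipedia"},
]
counts = Counter(row["source"] for row in rows)
print(counts.most_common())  # [('culturax', 2), ('wikipedia', 1)]
```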

Complementary dataset

This corpus is designed to complement kz-transformers/multidomain-kazakh-dataset. Together they provide ~21.9M unique Kazakh texts.
