# SozKZ Corpora: Kazakh Training Datasets

Part of the SozKZ collection of training corpora for Kazakh LLMs: raw, cleaned, deduplicated, tokenized, synthetic, and parallel datasets.
Deduplicated Kazakh web-text corpus collected from six public HuggingFace datasets. It contains only texts not present in kz-transformers/multidomain-kazakh-dataset, whose 12.4M texts served as the dedup reference.
| Field | Value |
|---|---|
| Total unique texts | 9,475,089 |
| Format | Parquet (142 shards) |
| Columns | text, source |
| Dedup method | MD5 hash (exact match) |
| Dedup reference | kz-transformers/multidomain-kazakh-dataset (12.4M hashes) |
| Date | 2026-02-13 |
| Version | v1 |
| License | Apache 2.0 |
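The exact-match dedup listed above can be sketched as follows. This is a minimal illustration using only the stdlib, with hypothetical helper names; the actual pipeline is not published:

```python
import hashlib

def md5_of(text: str) -> str:
    """Exact-match fingerprint: MD5 of the UTF-8 encoded text."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def dedup_against_reference(texts, reference_hashes):
    """Keep texts whose MD5 is absent from the reference set,
    also dropping in-batch duplicates (first occurrence wins)."""
    seen = set(reference_hashes)
    kept = []
    for t in texts:
        h = md5_of(t)
        if h not in seen:
            seen.add(h)
            kept.append(t)
    return kept

# Texts already in the reference corpus are dropped, as are repeats.
reference = {md5_of(t) for t in ["Алма", "Кітап"]}
batch = ["Алма", "Жаңа мәтін", "Жаңа мәтін"]
print(dedup_against_reference(batch, reference))  # ['Жаңа мәтін']
```

Exact MD5 matching only removes byte-identical texts; near-duplicates (e.g. differing whitespace) survive.
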

Per-source breakdown:

| Source | HF Dataset | Unique texts | Dupes removed |
|---|---|---|---|
| culturax | uonlp/CulturaX (kk) | 2,705,991 | 25,943 |
| hplt | HPLT/HPLT2.0_cleaned (kaz_Cyrl) | 2,246,264 | 391,066 |
| mc4 | allenai/c4 (kk) | 2,230,795 | 140,733 |
| madlad400 | allenai/MADLAD-400 (kk) | 1,807,827 | 169 |
| moscar | oscar-corpus/mOSCAR (kaz_Cyrl) | 245,869 | 0 |
| wikipedia | wikimedia/wikipedia (20231101.kk) | 238,343 | 13 |
Usage:

```python
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-dedup-kk-web-v1", split="train")
print(len(ds))              # 9,475,089
print(ds[0]["text"][:200])  # first 200 characters of the first text
print(ds[0]["source"])      # e.g. "culturax"
```
This corpus is designed to complement kz-transformers/multidomain-kazakh-dataset. Together they provide ~21.9M unique Kazakh texts.