# SozKZ Corpora: Kazakh Training Datasets

Part of a collection of training corpora for Kazakh LLMs: raw, cleaned, deduplicated, tokenized, synthetic, and parallel datasets (22 items).
Machine-translated parallel corpus of educational web texts (English → Kazakh), derived from FineWeb-Edu.
| Property | Value |
|---|---|
| Source | FineWeb-Edu Score-2 (first 20M rows) |
| Languages | English (en) → Kazakh (kk) |
| Translation | CTranslate2 with NLLB-200 (en→kk), greedy decoding |
| Filtering | Length (50–10,000 chars), exact dedup (xxhash), language detection (fasttext) |
| Rows | ~18M (≈90% pass rate after filtering) |
| Format | Expandable Parquet shards (train-XXXXX.parquet) |
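The filtering steps in the table can be sketched as follows. This is a minimal illustration, not the actual pipeline code: SHA-256 from the standard library stands in for the card's xxhash (a third-party package), and the fasttext language-detection step is omitted because it requires a downloaded model.

```python
import hashlib

def keep(text: str, seen: set) -> bool:
    """Apply the length and exact-dedup filters to one document.

    Length window: 50-10,000 characters. Exact dedup hashes the full
    text and rejects any digest already seen.
    """
    if not 50 <= len(text) <= 10_000:
        return False
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

seen = set()
docs = ["short", "A" * 100, "A" * 100, "B" * 20_000]
kept = [d for d in docs if keep(d, seen)]
print(len(kept))  # 1: too-short, duplicate, and too-long docs are dropped
```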
Each row contains:
| Field | Type | Description |
|---|---|---|
| text_en | string | Original English text from FineWeb-Edu |
| text_kk | string | Machine-translated Kazakh text |
| id | string | Original document ID from FineWeb-Edu |
| num_sentences | int | Number of sentences in the document |
```python
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-clean-enkk-fineweb-edu-v1", split="train")
print(ds[0])
# {'text_en': '...', 'text_kk': '...', 'id': '...', 'num_sentences': 5}
```
The source dataset is HuggingFaceFW/fineweb-edu-score-2. Translation runs on 2× NVIDIA A10 GPUs in parallel.
Shards are numbered sequentially (train-00000.parquet, train-00001.parquet, ...) and can be extended without renaming existing files.
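The sequential numbering scheme makes extension straightforward: to append a shard, find the highest existing index and add one. The helper below is a hypothetical sketch of that logic, not part of the dataset tooling.

```python
import re
import tempfile
from pathlib import Path

def next_shard_name(shard_dir: Path) -> str:
    """Return the next train-XXXXX.parquet filename in sequence,
    so new shards extend the set without renaming existing files."""
    pattern = re.compile(r"train-(\d{5})\.parquet$")
    indices = [
        int(m.group(1))
        for p in shard_dir.glob("train-*.parquet")
        if (m := pattern.match(p.name))
    ]
    next_idx = max(indices, default=-1) + 1
    return f"train-{next_idx:05d}.parquet"

# Example: a directory holding two existing shards.
d = Path(tempfile.mkdtemp())
(d / "train-00000.parquet").touch()
(d / "train-00001.parquet").touch()
print(next_shard_name(d))  # train-00002.parquet
```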
| Shards | Source |
|---|---|
| train-00000 | Existing 902K pre-translated rows |
| train-00001 – train-00009 | FineWeb-Edu rows 1M–10M (filtered) |
| train-00010 – train-00019 | FineWeb-Edu rows 10M–20M (filtered) |
License: CC-BY-4.0 (following FineWeb-Edu licensing)
```bibtex
@dataset{sozkz_corpus_clean_enkk_fineweb_edu_v1,
  title={SozKZ Corpus Clean EN-KK (FineWeb-Edu) v1},
  author={Saken Tukenov},
  year={2026},
  url={https://huggingface.co/datasets/stukenov/sozkz-corpus-clean-enkk-fineweb-edu-v1},
  note={Machine-translated from FineWeb-Edu using NLLB-200}
}
```