# SozKZ Corpora: Kazakh Training Datasets

A collection of 22 training corpora for Kazakh LLMs: raw, cleaned, deduplicated, tokenized, synthetic, and parallel datasets.
Pre-tokenized Kazakh corpus ready for training a 50M-parameter LLaMA model. Each example is a packed block of 1,024 tokens.
| Property | Value |
|---|---|
| Token block size | 1,024 |
| Tokenizer | `kazakh-bpe-32k` (32K vocab) |
| Fields | `input_ids`, `labels`, `attention_mask` |
| Source | `kz-transformers/multidomain-kazakh-dataset` |
| License | Apache 2.0 |
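The "packed block" layout above can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual preprocessing script: documents are tokenized, concatenated, and split into fixed-size blocks, with `labels` mirroring `input_ids` for causal language modeling and a remainder shorter than the block size dropped.

```python
BLOCK_SIZE = 1024  # matches the "Token block size" above


def pack_blocks(token_stream, block_size=BLOCK_SIZE):
    """Concatenate per-document token ids and split into full blocks.

    Illustrative sketch: the trailing remainder shorter than
    `block_size` is dropped, and every emitted block is fully packed,
    so the attention mask is all ones.
    """
    blocks = []
    buffer = []
    for ids in token_stream:  # each item: token ids of one document
        buffer.extend(ids)
        while len(buffer) >= block_size:
            chunk = buffer[:block_size]
            buffer = buffer[block_size:]
            blocks.append({
                "input_ids": chunk,
                "labels": list(chunk),  # causal LM: labels mirror inputs
                "attention_mask": [1] * block_size,  # no padding needed
            })
    return blocks
```

Dropping the remainder keeps every training example exactly 1,024 tokens long, which avoids padding entirely.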
```python
from datasets import load_dataset

ds = load_dataset("stukenov/kazakh-llama-50m-tokenized")
print(ds["train"][0].keys())  # dict_keys(['input_ids', 'labels', 'attention_mask'])
```
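Because every example is the same length, batching is trivial: no padding or truncation is required. A minimal, framework-agnostic collator (a sketch, not part of this dataset's tooling) could look like:

```python
def collate(examples):
    """Stack same-length packed blocks into a batch of parallel lists.

    Sketch only: since all blocks are exactly the same size, examples
    can be stacked directly without any padding logic.
    """
    keys = ("input_ids", "labels", "attention_mask")
    return {key: [ex[key] for ex in examples] for key in keys}
```

In practice the same effect is achieved by passing the dataset to a framework data loader with a default collator, since fixed-length examples stack cleanly into tensors.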
Part of the Soz — Kazakh Language Models project.