# SozKZ Corpus Tokenized v3 (50K)

A pre-tokenized Kazakh corpus. Each sample is a single packed block of 1,024 token IDs.

Source: stukenov/sozkz-corpus-clean-v3
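Blocks like these are typically produced by tokenizing each document, appending an end-of-text separator, concatenating everything into one stream, and slicing it into fixed 1,024-token chunks. The sketch below illustrates that idea only; it is an assumption about the preprocessing, not the script used to build this dataset (the EOS separator and the dropping of the trailing remainder are guesses):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stukenov/sozkz-core-gpt2-50k-kk-base-v1")
BLOCK_SIZE = 1024

def pack(texts, block_size=BLOCK_SIZE):
    """Tokenize documents, join them with EOS, slice into fixed blocks."""
    stream = []
    for text in texts:
        stream.extend(tok(text, add_special_tokens=False)["input_ids"])
        stream.append(tok.eos_token_id)  # document separator (assumed)
    # Keep only whole blocks; the trailing partial block is dropped (assumed).
    n_full = (len(stream) // block_size) * block_size
    return [stream[i:i + block_size] for i in range(0, n_full, block_size)]
```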

| Property | Value |
|---|---|
| Train blocks | 8,787,709 |
| Val blocks | 88,300 |
| Block size | 1,024 tokens |
| Total tokens | ~9.09B |
| Tokenizer | stukenov/sozkz-core-gpt2-50k-kk-base-v1 |
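The total follows directly from the block counts: (8,787,709 + 88,300) blocks × 1,024 tokens/block = 9,089,033,216 ≈ 9.09B tokens.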

## Source distribution

| Source | Documents | Share |
|---|---:|---:|
| culturax | 2,707,214 | 19.8% |
| hplt_new | 2,204,165 | 16.1% |
| mc4 | 1,906,763 | 13.9% |
| madlad400 | 1,794,308 | 13.1% |
| kazparc_sync | 1,370,243 | 10.0% |
| cc100 | 1,365,739 | 10.0% |
| md_leipzig | 1,128,122 | 8.2% |
| md_kazakhNews | 288,247 | 2.1% |
| md_oscar | 239,807 | 1.8% |
| wikipedia | 235,915 | 1.7% |
| moscar | 231,693 | 1.7% |
| kazparc | 165,754 | 1.2% |
| kazsandra | 40,236 | 0.3% |
| md_kazakhBooks | 20,482 | 0.1% |
| sib200 | 648 | 0.0% |
| belebele | 488 | 0.0% |
| wikiann | 194 | 0.0% |

## Usage

```python
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-tokenized-kk-llama50k-v3")

# Each sample is one packed block of 1,024 token IDs.
sample = ds["train"][0]
print(len(sample["input_ids"]))  # 1024
```
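Because every block is exactly 1,024 tokens, batches stack cleanly without padding or a custom collator. A minimal PyTorch sketch (the batch size and shuffling here are illustrative, not part of the dataset):

```python
from torch.utils.data import DataLoader
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-tokenized-kk-llama50k-v3", split="train")
ds = ds.with_format("torch")  # yield input_ids as torch tensors

# Fixed-size blocks mean the default collate function is sufficient.
loader = DataLoader(ds, batch_size=8, shuffle=True)
batch = next(iter(loader))
print(batch["input_ids"].shape)  # torch.Size([8, 1024])
```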