stukenov/sozkz-core-llama-600m-kk-base-v1
Text generation • 0.6B parameters

A pre-tokenized Kazakh corpus: each sample is a single packed block of 1,024 tokens.
Source corpus: stukenov/sozkz-corpus-clean-v3

| Property | Value |
|---|---|
| Train blocks | 8,787,709 |
| Val blocks | 88,300 |
| Block size | 1024 |
| Total tokens | ~9.09B |
| Tokenizer | stukenov/sozkz-core-gpt2-50k-kk-base-v1 |
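
The block counts are consistent with the stated total: (8,787,709 + 88,300) blocks × 1,024 tokens ≈ 9.09B tokens. Below is a minimal sketch, not part of the card itself, for decoding one packed block back to Kazakh text; it assumes the tokenizer repo listed above loads with `AutoTokenizer` and that each row exposes an `input_ids` column (as in the loading example further down).

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Repo IDs taken from this card; streaming avoids downloading the full ~9B-token corpus.
ds = load_dataset("stukenov/sozkz-corpus-tokenized-kk-llama50k-v3",
                  split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("stukenov/sozkz-core-gpt2-50k-kk-base-v1")

block = next(iter(ds))            # one packed block of 1,024 token ids
text = tokenizer.decode(block["input_ids"])
print(text[:300])                 # first few hundred characters of reconstructed text
```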

Source breakdown by document count:

| Source | Documents | Share |
|---|---|---|
| culturax | 2,707,214 | 19.8% |
| hplt_new | 2,204,165 | 16.1% |
| mc4 | 1,906,763 | 13.9% |
| madlad400 | 1,794,308 | 13.1% |
| kazparc_sync | 1,370,243 | 10.0% |
| cc100 | 1,365,739 | 10.0% |
| md_leipzig | 1,128,122 | 8.2% |
| md_kazakhNews | 288,247 | 2.1% |
| md_oscar | 239,807 | 1.8% |
| wikipedia | 235,915 | 1.7% |
| moscar | 231,693 | 1.7% |
| kazparc | 165,754 | 1.2% |
| kazsandra | 40,236 | 0.3% |
| md_kazakhBooks | 20,482 | 0.1% |
| sib200 | 648 | 0.0% |
| belebele | 488 | 0.0% |
| wikiann | 194 | 0.0% |
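
The Share column is each source's document count divided by the overall total (the Documents column sums to 13,700,018). A quick sketch reproducing a few of the percentages above, with counts copied from the table:

```python
# Document counts copied from the table above (four largest sources shown).
counts = {
    "culturax": 2_707_214,
    "hplt_new": 2_204_165,
    "mc4": 1_906_763,
    "madlad400": 1_794_308,
}
total_documents = 13_700_018  # sum of the Documents column over all 17 sources

for name, n in counts.items():
    print(f"{name:10s} {n:>9,} docs  {100 * n / total_documents:4.1f}%")
# culturax    2,707,214 docs  19.8%
# hplt_new    2,204,165 docs  16.1%
```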

Loading the dataset and checking the block length:

```python
from datasets import load_dataset

ds = load_dataset("stukenov/sozkz-corpus-tokenized-kk-llama50k-v3")
sample = ds["train"][0]
print(len(sample["input_ids"]))  # 1024
```
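
Because every row is already a fixed-length block, the data can feed a causal-LM training loop without further tokenization, padding, or packing. A minimal sketch of that setup with a PyTorch DataLoader, assuming the usual convention of labels equal to input_ids; the batch size and shuffling are illustrative, not taken from the card:

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset

# The full split is a large download; streaming=True or a split slice works for quick tests.
ds = load_dataset("stukenov/sozkz-corpus-tokenized-kk-llama50k-v3", split="train")
ds = ds.with_format("torch", columns=["input_ids"])

def collate(rows):
    input_ids = torch.stack([row["input_ids"] for row in rows])  # (batch, 1024)
    return {"input_ids": input_ids, "labels": input_ids.clone()}  # next-token prediction targets

loader = DataLoader(ds, batch_size=8, shuffle=True, collate_fn=collate)
batch = next(iter(loader))
print(batch["input_ids"].shape)  # torch.Size([8, 1024])
```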