
Kazakh LLaMA 50M — Pre-tokenized

A pre-tokenized Kazakh corpus ready for training a 50M-parameter LLaMA model. Each example is a packed block of 1,024 tokens.

Overview

| Property | Value |
|---|---|
| Token block size | 1,024 |
| Tokenizer | kazakh-bpe-32k (32K vocab) |
| Fields | `input_ids`, `labels`, `attention_mask` |
| Source | kz-transformers/multidomain-kazakh-dataset |
| License | Apache 2.0 |

Usage

```python
from datasets import load_dataset

ds = load_dataset("stukenov/kazakh-llama-50m-tokenized")
print(ds["train"][0].keys())  # dict_keys(['input_ids', 'labels', 'attention_mask'])
```
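Each row follows the packed causal-LM layout listed in the table above. The sketch below uses a mock block (hypothetical token values, no download) to illustrate the assumed structure: full 1,024-token blocks with no padding, and `labels` mirroring `input_ids`, which is the standard convention when the model shifts labels internally for next-token loss.

```python
# Minimal sketch of the packed block structure (assumption: labels == input_ids,
# attention_mask all ones because packed blocks contain no padding).
BLOCK_SIZE = 1024  # "Token block size" from the table above

# Mock example standing in for ds["train"][0]; values are placeholders.
example = {
    "input_ids": list(range(BLOCK_SIZE)),
    "attention_mask": [1] * BLOCK_SIZE,
}
# Standard causal-LM convention: labels mirror input_ids; the model
# shifts them by one position when computing the loss.
example["labels"] = example["input_ids"].copy()

assert len(example["input_ids"]) == BLOCK_SIZE
assert example["labels"] == example["input_ids"]
```

Because blocks are fixed-length and fully packed, batches can be collated by simple stacking, with no dynamic padding needed.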

Project

Part of the Soz — Kazakh Language Models project.

License

Apache 2.0
