Discrete codec tokens extracted from 232,350 Kazakh speech utterances (~439 hours) using the Mimi neural audio codec (Kyutai).
This dataset is the B2 output of the KZ-CALM TTS pipeline — it bridges raw audio and the latent-space generative model.
All audio comes from stukenov/kzcalm-tts-kk-v1, which merges:
| Source | Samples | Hours | Speakers |
|---|---|---|---|
| KazakhTTS (ISSAI) | 91,424 | 177.7 | 5 professional |
| KazEmoTTS (ISSAI) | 140,926 | 261.1 | 3, 6 emotions |
| Total | 232,350 | 438.8 | 8 unique |
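As a quick sanity check, the totals row can be reproduced from the two source rows (values taken directly from the table above):

```python
# (samples, hours) per source corpus, as listed in the merge table
kazakhtts = (91_424, 177.7)
kazemotts = (140_926, 261.1)

total_samples = kazakhtts[0] + kazemotts[0]
total_hours = kazakhtts[1] + kazemotts[1]

print(total_samples)            # 232350
print(round(total_hours, 1))    # 438.8
```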
| Parameter | Value |
|---|---|
| Codec | Mimi (Kyutai) via transformers.MimiModel |
| Weights | kyutai/mimi |
| Sample rate | 24,000 Hz |
| Frame rate | 12.5 Hz (1 frame = 80 ms) |
| Codebooks | 8 (RVQ) |
| Codebook size | 2,048 entries each |
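A back-of-envelope reading of these parameters: each second of audio yields 12.5 frames with 8 tokens per frame, i.e. 100 tokens/s, and since each 2,048-entry codebook index carries 11 bits, the token stream corresponds to roughly 1.1 kbps. A minimal sketch of that arithmetic:

```python
import math

# Values taken from the codec parameters table above.
frame_rate = 12.5      # frames per second
n_codebooks = 8        # RVQ levels
codebook_size = 2048   # entries per codebook

tokens_per_second = frame_rate * n_codebooks             # 100 tokens/s
bits_per_frame = n_codebooks * math.log2(codebook_size)  # 8 * 11 = 88 bits
bitrate_kbps = frame_rate * bits_per_frame / 1000        # 1.1 kbps

print(tokens_per_second, bits_per_frame, bitrate_kbps)
```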
| Column | Type | Description |
|---|---|---|
| text | string | Kazakh utterance text |
| speaker_id | string | Speaker identifier (e.g. ISSAI_KazakhTTS_M01) |
| source | string | KazakhTTS or KazEmoTTS |
| emotion | string | Emotion label (neutral, angry, happy, sad, surprised, scared, or empty) |
| duration | float | Audio duration in seconds |
| codes | list[list[int]] | Codec tokens of shape (8, T), where T = num_frames |
| num_frames | int | Number of codec frames (ceil(duration * 12.5)) |
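The `num_frames` formula from the schema can be written as a small helper (the 12.5 Hz frame rate comes from the codec table above):

```python
import math

FRAME_RATE = 12.5  # Hz, Mimi frame rate from the codec parameters table

def num_frames(duration_s: float) -> int:
    """Number of codec frames for an utterance of the given duration."""
    return math.ceil(duration_s * FRAME_RATE)

print(num_frames(5.0))  # 63 frames (62.5 rounded up)
print(num_frames(2.0))  # 25 frames (exact)
```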
```python
from datasets import load_dataset

ds = load_dataset("stukenov/kzcalm-mimi-codes-kk-v1", split="train")
sample = ds[0]

codes = sample["codes"]  # list of 8 lists (one per codebook)
print(f"Text: {sample['text']}")
print(f"Codebooks: {len(codes)}, Frames: {sample['num_frames']}")
print(f"Duration: {sample['duration']:.1f}s")
```
```python
import torch
from transformers import MimiModel

model = MimiModel.from_pretrained("kyutai/mimi").cuda()

codes_tensor = torch.tensor([sample["codes"]], device="cuda")  # (1, 8, T)
waveform = model.decode(codes_tensor).audio_values             # (1, 1, T_samples)
```
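Since the codec runs at 12.5 Hz over 24 kHz audio, each frame decodes to 24,000 / 12.5 = 1,920 samples (80 ms), so the decoded waveform should contain about num_frames × 1,920 samples (the codec may pad slightly at the edges). A quick check of that arithmetic:

```python
# Relating codec frames to output samples, using the rates from the tables above.
sample_rate = 24_000  # Hz
frame_rate = 12.5     # Hz

samples_per_frame = int(sample_rate / frame_rate)  # 1920 samples = 80 ms
n_frames = 63                                      # hypothetical utterance length
expected_samples = n_frames * samples_per_frame

print(samples_per_frame, expected_samples)
```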
| Processing | Detail |
|---|---|
| Audio loading | `datasets` streaming (24 kHz mono) |
| Token extraction | `MimiModel.encode()` on GPU (RTX 3060) |
| License | CC-BY-4.0 (following the source datasets from ISSAI) |
If you use this dataset, please cite the original corpora:
```bibtex
@inproceedings{mussakhojayeva2022kazakhtts,
  title={KazakhTTS: An Open-Source Kazakh Text-to-Speech Synthesis Dataset},
  author={Mussakhojayeva, Saida and Khassanov, Yerbolat and Varol, Huseyin Atakan},
  booktitle={Proc. LREC},
  year={2022}
}

@inproceedings{razakhan2024kazemotts,
  title={KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis},
  author={Razakhan, Adal and Mussakhojayeva, Saida and Khassanov, Yerbolat},
  booktitle={Proc. LREC-COLING},
  year={2024}
}
```