# Synthesis SER Dataset

## Overview

This dataset contains synthetic speech audio files for Speech Emotion Recognition (SER) training, specifically designed for training the emotion2vec model. The data is generated with two state-of-the-art TTS systems: CosyVoice2 and Kimi-Audio.

## Paper

If you use this dataset, please cite our paper:

https://arxiv.org/pdf/2603.16483
## Dataset Structure

Each subdirectory corresponds to a source emotional speech dataset and contains synthetic data generated with both TTS engines:

| Subdirectory | Source | TTS Engines | # WAV Files |
|---|---|---|---|
| CREMA_D_SYN.tar.gz | CREMA-D | CosyVoice2 + Kimi-Audio | 2,236 |
| IEMOCAP_SYN.tar.gz | IEMOCAP | CosyVoice2 + Kimi-Audio | 3,570 |
| LLM_DATA_SYN.tar.gz | LLM_DATA | CosyVoice2 + Kimi-Audio | 3,155 |
| RAVDESS_SYN.tar.gz | RAVDESS | CosyVoice2 + Kimi-Audio | 12 |
| SAVEE_SYN.tar.gz | SAVEE | CosyVoice2 + Kimi-Audio | 124 |
| TESS_SYN.tar.gz | TESS | CosyVoice2 + Kimi-Audio | 12,397 |
### Internal Structure

Each `.tar.gz` contains:

```
├── cosyvoice2/
│   ├── train/
│   │   ├── *.wav
│   │   └── train.json
│   └── test/
│       ├── *.wav
│       └── test.json
└── kimi-audio/
    ├── train/
    │   ├── *.wav
    │   └── train.json
    └── test/
        ├── *.wav
        └── test.json
```
Each `train.json` / `test.json` contains metadata entries of the form:

```json
{
  "emotion": "angry",
  "wav_path": "..."
}
```
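As a minimal sketch of consuming this metadata, the helper below tallies the emotion distribution of one split. It assumes each JSON file holds a list of entries shaped like the example above; the function name is our own, and you may need to adjust the loader if your copy stores a dict or JSON-lines instead.

```python
import json
from collections import Counter

def emotion_counts(metadata_path):
    """Tally emotion labels in a train.json / test.json file.

    Assumes the file is a JSON list of entries like
    {"emotion": "angry", "wav_path": "..."}.
    """
    with open(metadata_path, encoding="utf-8") as f:
        entries = json.load(f)
    return Counter(entry["emotion"] for entry in entries)
```

A quick class-balance check like this is worth running before training, since the per-archive file counts in the table above vary by three orders of magnitude.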
## Supported Emotions
- angry
- disgust
- fearful
- happy
- neutral
- sad
- surprised
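For classifier training, these string labels typically need stable integer ids. A minimal sketch follows; the alphabetical ordering is our own convention, not something prescribed by the dataset.

```python
# The seven labels listed above, in alphabetical order for a stable mapping.
EMOTIONS = ["angry", "disgust", "fearful", "happy", "neutral", "sad", "surprised"]

# label -> integer id, as expected by most classification losses
EMOTION2ID = {label: i for i, label in enumerate(EMOTIONS)}

# integer id -> label, for decoding model predictions
ID2EMOTION = {i: label for label, i in EMOTION2ID.items()}
```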
## Real Data Sources

Since the real emotional speech data is publicly available, we host only the synthetic data on Hugging Face. If you need the real data for research, please download it from the original sources.
## Usage

```python
from huggingface_hub import hf_hub_download

# Download a dataset archive
repo_id = "ak0255/Synthesis_SER"
filename = "SAVEE_SYN.tar.gz"
path = hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset")
```
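After downloading, the archive still needs to be unpacked and walked. The sketch below extracts a downloaded `.tar.gz` and collects the WAV paths per TTS engine and split, assuming the `cosyvoice2` / `kimi-audio` layout described under Internal Structure; the function names and the output directory are illustrative, not part of the dataset's API.

```python
import os
import tarfile

def extract_archive(archive_path, out_dir):
    """Extract a downloaded .tar.gz archive into out_dir."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(out_dir)
    return out_dir

def list_wavs(root):
    """Map (engine, split) -> sorted wav paths, following the
    cosyvoice2/kimi-audio x train/test layout described above."""
    wavs = {}
    for engine in ("cosyvoice2", "kimi-audio"):
        for split in ("train", "test"):
            d = os.path.join(root, engine, split)
            if os.path.isdir(d):
                wavs[(engine, split)] = sorted(
                    os.path.join(d, f)
                    for f in os.listdir(d)
                    if f.endswith(".wav")
                )
    return wavs
```

Pairing `list_wavs` with the `train.json` / `test.json` metadata then gives you (audio path, emotion label) pairs ready for an SER training loop.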
## License

Apache 2.0