# Emolia 3K Speaker Clusters
A curated set of 3,000 diverse speaker clusters derived from the TTS-AGI/emolia-hq dataset, with up to 20 representative audio samples per cluster.
## Overview
The original emolia-hq dataset contains hundreds of thousands of speech samples with 128-dimensional WavLM speaker-timbre embeddings. These were first clustered into 10,000 centroids, then pruned to 3,000 with density-aware farthest-point sampling, chosen to ensure:
- Outlier preservation: Unique/rare voice types are kept (1.4x outlier over-representation)
- Redundancy reduction: Dense clusters of similar voices (e.g., many similar bright female voices) are collapsed into representatives
- Even coverage: The 3,000 centroids are spread uniformly across the embedding space
## Key Statistics
| Metric | Value |
|---|---|
| Total clusters | 3,000 |
| Clusters fully filled (20 samples) | 2,994 |
| Total audio samples | 59,977 |
| Samples per cluster | up to 20 |
| Embedding dimension | 128 (WavLM timbre) |
| Distance metric | Cosine |
| Avg DNS-MOS (best samples) | 3.46 |
| Avg duration (best samples) | 9.3s |
| Source dataset | TTS-AGI/emolia-hq |
## Language Distribution (best-of samples)

| Language | Count |
|---|---|
| EN | 3,000 |
## Repository Structure

```
.
├── cluster_samples/               # Tar archives of all samples
│   ├── cluster_samples_0000-0499.tar
│   ├── cluster_samples_0500-0999.tar
│   ├── cluster_samples_1000-1499.tar
│   ├── cluster_samples_1500-1999.tar
│   ├── cluster_samples_2000-2499.tar
│   └── cluster_samples_2500-2999.tar
├── cluster_best.tar               # Best sample per cluster (highest DNS-MOS);
│                                  #   contains cluster_best/{0..2999}.mp3 + .json
├── centroids_pruned.npy           # 3000x128 float32 cluster centroids
├── centroids_pruned_indices.npy   # Indices mapping to original 10k centroids
├── pruning_report.html            # Detailed report on the pruning methodology
├── pruning_stats.json             # Raw metrics for all pruning methods tested
├── scripts/
│   ├── prune_centroids.py         # Centroid pruning pipeline
│   └── extract_cluster_samples.py # Sample extraction pipeline
└── README.md
```

Each `cluster_samples_XXXX-YYYY.tar` contains folders named by cluster index, each holding up to 20 `.mp3` + `.json` pairs.
## Pruning Methodology
Three methods were compared to reduce 10,000 centroids to ~3,000:
### 1. HDBSCAN + Medoid Selection
HDBSCAN identifies density-based clusters; noise points (outliers) are preserved, and each cluster is represented by its medoid. Result: Could not reach the 2k-4k target range (kept 6,500+ points due to high noise fraction).
### 2. Farthest-Point Sampling with Outlier Protection (Winner)
- Identify the top 10% most isolated points (by avg cosine distance to 10 nearest neighbors)
- Pre-seed these 1,000 outliers into the selection
- Iteratively pick the point farthest from the current selection
This method won because it provides the best balance of coverage, spread, and outlier preservation.
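The pre-seeded farthest-point procedure above can be sketched in a few lines of NumPy. This is an illustrative toy, not the repository's `scripts/prune_centroids.py`; the function name and the hard-coded 10-neighbor isolation score simply mirror the description above:

```python
import numpy as np

def farthest_point_sample(X, k, n_outlier_seeds):
    """Greedy farthest-point sampling on L2-normalized rows of X,
    pre-seeded with the most isolated points (outlier protection)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    dist = 1.0 - Xn @ Xn.T                    # pairwise cosine distance

    # Isolation score: mean distance to the 10 nearest neighbors (excluding self).
    nn_dist = np.sort(dist, axis=1)[:, 1:11]
    isolation = nn_dist.mean(axis=1)

    # Pre-seed the most isolated points into the selection.
    selected = list(np.argsort(isolation)[-n_outlier_seeds:])

    # Each point's distance to the current selection; grow greedily.
    min_dist = dist[:, selected].min(axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))        # farthest from the selection
        selected.append(nxt)
        min_dist = np.minimum(min_dist, dist[:, nxt])
    return np.asarray(selected)
```

On this dataset the call would be roughly `farthest_point_sample(centroids_10k, k=3000, n_outlier_seeds=1000)`, where `centroids_10k` stands in for the original 10,000 centroids (not shipped in this repo).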
### 3. Density-Based Greedy Pruning
Remove points from densest regions first, preserving isolated points. Good outlier preservation but worse coverage quality.
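For comparison, a toy version of this greedy scheme (again illustrative, not the repository's implementation): repeatedly drop the point whose nearest remaining neighbor is closest, so dense regions thin out first while isolated points survive.

```python
import numpy as np

def density_greedy_prune(X, k):
    """Drop the point in the densest neighborhood (smallest cosine
    distance to its nearest remaining neighbor) until k points remain."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    dist = 1.0 - Xn @ Xn.T
    np.fill_diagonal(dist, np.inf)            # ignore self-distance

    alive = np.ones(len(X), dtype=bool)
    while alive.sum() > k:
        idx = np.flatnonzero(alive)
        sub = dist[np.ix_(idx, idx)]
        nn = sub.min(axis=1)                  # nearest-neighbor distance per survivor
        alive[idx[int(np.argmin(nn))]] = False
    return np.flatnonzero(alive)
```

Recomputing densities after every removal is O(n) passes over an n x n matrix, which is part of why this method trades runtime and coverage quality for its strong outlier preservation.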
## Quality Metrics (Selected Method)
| Metric | Value | Meaning |
|---|---|---|
| Coverage mean | 0.1185 | Avg cosine dist from any original centroid to nearest selected |
| Coverage max | 0.2741 | Worst-case distance (no centroid is "orphaned") |
| Mean min pairwise | 0.3028 | Selected centroids are well spread apart |
| Outlier preservation | 1.40x | Isolated voices over-represented (desired) |
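The coverage and spread numbers are straightforward to recompute. A sketch of the metric definitions as read from the table (the 10,000 pre-pruning centroids are not distributed in this repo, so `original` below is a stand-in array):

```python
import numpy as np

def coverage_stats(original, selected):
    """Coverage: cosine distance from each original centroid to its
    nearest selected centroid. Spread: each selected centroid's
    distance to its nearest other selected centroid."""
    o = original / np.linalg.norm(original, axis=1, keepdims=True)
    s = selected / np.linalg.norm(selected, axis=1, keepdims=True)

    d = 1.0 - o @ s.T                  # (n_original, n_selected) cosine distances
    nearest = d.min(axis=1)

    ds = 1.0 - s @ s.T
    np.fill_diagonal(ds, np.inf)       # exclude self when taking the min
    return {
        "coverage_mean": float(nearest.mean()),
        "coverage_max": float(nearest.max()),
        "mean_min_pairwise": float(ds.min(axis=1).mean()),
    }
```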
## Sample Metadata Format

Each `.json` file contains:

```json
{
  "id": "EN_B00042_S00123_W000456",
  "text": "Transcription of the utterance",
  "duration": 8.5,
  "dnsmos": 3.82,
  "speaker": "EN_B00042_S00123",
  "language": "en",
  "emotion_caption": "Natural language description of emotional content",
  "emotion_annotation": { "Arousal_best": 1.5, "Valence_best": 0.8, "...": "..." },
  "wavelm_timbre_embedding": [0.044, -0.022, "...128 dims..."],
  "_cluster_idx": 42,
  "_cosine_similarity": 0.95
}
```
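As a sanity check, the stored `_cosine_similarity` can be re-derived from the metadata and the centroid file, assuming it is the similarity of the sample's embedding to its assigned centroid. A minimal sketch using the field names above (small deviations are possible if the published values were computed at a different precision):

```python
import json
import numpy as np

def verify_assignment(meta, centroids):
    """Recompute the cosine similarity between a sample's timbre
    embedding and its assigned cluster centroid."""
    emb = np.asarray(meta["wavelm_timbre_embedding"], dtype=np.float32)
    c = centroids[meta["_cluster_idx"]]
    return float(emb @ c / (np.linalg.norm(emb) * np.linalg.norm(c)))

# Hypothetical usage with the shipped files:
# centroids = np.load("centroids_pruned.npy")
# meta = json.load(open("cluster_best/42.json"))
# print(verify_assignment(meta, centroids), meta["_cosine_similarity"])
```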
## Usage
### Load centroids

```python
import numpy as np

centroids = np.load("centroids_pruned.npy")  # shape (3000, 128)
```
### Find nearest cluster for a new embedding

```python
def find_cluster(embedding, centroids):
    emb = np.array(embedding) / np.linalg.norm(embedding)
    norms = np.linalg.norm(centroids, axis=1, keepdims=True)
    centroids_normed = centroids / np.maximum(norms, 1e-8)
    similarities = centroids_normed @ emb
    return int(np.argmax(similarities)), float(similarities.max())

cluster_idx, similarity = find_cluster(new_embedding, centroids)
```
### Extract samples from tar

```python
import tarfile

with tarfile.open("cluster_samples_0000-0499.tar") as tf:
    tf.extractall(".")
# Now cluster_samples/0/, cluster_samples/1/, ... are available
```
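To pull just one cluster instead of unpacking a whole archive, the tar members can be filtered first. A sketch (it assumes member paths follow the `cluster_samples/<idx>/` layout noted above; `extract_cluster` is a name invented here):

```python
import tarfile

def extract_cluster(tar_path, cluster_idx, dest="."):
    """Extract only the members belonging to one cluster folder."""
    prefix = f"cluster_samples/{cluster_idx}/"
    with tarfile.open(tar_path) as tf:
        members = [m for m in tf.getmembers() if m.name.startswith(prefix)]
        tf.extractall(dest, members=members)
        return [m.name for m in members]
```

The trailing slash in `prefix` matters: it keeps a request for cluster `1` from also matching clusters `10`-`19` and `100`-`199`.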
## License
CC-BY-4.0 (inherited from TTS-AGI/emolia-hq)
## Citation

```bibtex
@dataset{emolia_3k_speaker_clusters,
  title={Emolia 3K Speaker Clusters},
  author={LAION},
  year={2026},
  url={https://huggingface.co/datasets/laion/emolia-3k-speaker-clusters},
  note={Derived from TTS-AGI/emolia-hq}
}
```