---
license: cc-by-4.0
size_categories:
- 100K<n<1M
pretty_name: VocSim
tags:
- audio
- audio-similarity
- zero-shot-learning
- representation-learning
- embedding-evaluation
- unsupervised-learning
- speech
- environmental-sounds
- animal-vocalizations
- benchmark
paperswithcode_id: audiosim
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 5452179735
    num_examples: 114641
  download_size: 5500616162
  dataset_size: 5452179735
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# VocSim — Public Benchmark
[Code](https://github.com/vocsim/benchmark) · [Leaderboard](https://huggingface.co/spaces/vocsim/VocSim) · [License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
The public split of **VocSim**, a training-free benchmark for zero-shot content identity in single-source audio embeddings. VocSim probes the intrinsic geometric quality of frozen audio representations: do acoustically variable instances of the same content land near each other in embedding space, without any task-specific training?
> Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)
## What's here
- **114,641 clips** across **15 public subsets**, drawn from 19 source corpora.
- Domains: human speech (phones, words, utterances), animal vocalizations (birdsong, otter calls), environmental sounds.
- Conditions: clean to noisy, sub-100ms to multi-second, few to thousands of classes per subset.
- All audio standardized to **16 kHz mono**.
- Single-source only — no overlapping speakers or simultaneous sources — so evaluation isolates content representation from source separation.
Four additional **blind out-of-distribution subsets** (low-resource speech in Shipibo-Conibo and Chintang) are held out for server-side evaluation via the [leaderboard](https://huggingface.co/spaces/vocsim/VocSim).
## Schema
```python
{
"audio": {"array": np.ndarray, "sampling_rate": 16000},
"subset": "HW1", # source-corpus tag (see paper for the full list)
"speaker": "spk_042", # speaker / animal / source ID, or "N/A"
"label": "hello", # ground-truth class for similarity
}
```
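Because every clip carries a `subset` and a `label`, instances of the same content can be grouped without any training signal. A minimal sketch of that grouping, on toy rows that mimic the schema above (the row values are illustrative, not real dataset entries):

```python
from collections import defaultdict

# Toy rows mimicking the schema above (values are made up for illustration).
rows = [
    {"subset": "HW1", "speaker": "spk_001", "label": "hello"},
    {"subset": "HW1", "speaker": "spk_002", "label": "hello"},
    {"subset": "HW1", "speaker": "spk_001", "label": "world"},
]

# Group clip indices by (subset, label): clips in the same group are
# acoustically variable instances of the same content.
groups = defaultdict(list)
for i, row in enumerate(rows):
    groups[(row["subset"], row["label"])].append(i)

print(dict(groups))  # → {('HW1', 'hello'): [0, 1], ('HW1', 'world'): [2]}
```

Evaluation then asks whether clips within a group lie closer to each other in embedding space than to clips from other groups.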
## Quick start
```python
from datasets import load_dataset
ds = load_dataset("vocsim/public", split="train")
print(ds[0])
```
For end-to-end evaluation (feature extraction, distance computation, P@k / GSR), use the reference pipeline at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark).
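The reference pipeline implements the full metric suite; as an orientation only, here is a minimal leave-one-out precision@k sketch. It is not the official implementation — cosine similarity, the toy embeddings, and the function name are assumptions made for illustration:

```python
import numpy as np

def precision_at_k(embeddings, labels, k=5):
    """Mean fraction of each clip's k nearest neighbours (by cosine
    similarity) that share its ground-truth label, excluding the query
    itself (leave-one-out)."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T                        # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)       # never match a clip to itself
    labels = np.asarray(labels)
    nn = np.argsort(-sim, axis=1)[:, :k] # indices of k nearest neighbours
    hits = labels[nn] == labels[:, None] # neighbour shares query's label?
    return hits.mean()

# Toy check: two tightly clustered, well-separated "embeddings".
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(5, 0.1, (10, 8)), rng.normal(-5, 0.1, (10, 8))])
lab = ["a"] * 10 + ["b"] * 10
print(precision_at_k(emb, lab, k=3))  # → 1.0 for cleanly separated clusters
```

For the exact P@k and GSR definitions used on the leaderboard, rely on the reference pipeline rather than this sketch.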
## Companion datasets
| Dataset | Purpose |
|---|---|
| [`vocsim/avian-perception-benchmark`](https://huggingface.co/datasets/vocsim/avian-perception-benchmark) | Alignment of embeddings with zebra-finch perceptual judgments |
| [`vocsim/mouse-strain-classification-benchmark`](https://huggingface.co/datasets/vocsim/mouse-strain-classification-benchmark) | C57 vs DBA USV classification |
| [`vocsim/mouse-identity-classification-benchmark`](https://huggingface.co/datasets/vocsim/mouse-identity-classification-benchmark) | Individual-mouse identification from USVs |
## Licensing
Aggregation and metadata are released under CC BY 4.0. Each source corpus retains its original license; see Appendix A.1.1 of the paper for a per-source breakdown.
## Citation
```bibtex
@inproceedings{basha2026vocsim,
title = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
author = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
year = {2026},
doi = {10.48550/arXiv.2512.10120}
}
```