---
license: cc-by-4.0
size_categories:
- 100K<n<1M
pretty_name: VocSim
tags:
- audio
- audio-similarity
- zero-shot-learning
- representation-learning
- embedding-evaluation
- unsupervised-learning
- speech
- environmental-sounds
- animal-vocalizations
- benchmark
paperswithcode_id: audiosim
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 5452179735
    num_examples: 114641
  download_size: 5500616162
  dataset_size: 5452179735
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# VocSim: Public Benchmark

[Code](https://github.com/vocsim/benchmark) · [Leaderboard](https://huggingface.co/spaces/vocsim/VocSim) · [License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

The public split of **VocSim**, a training-free benchmark for zero-shot content identity in single-source audio. VocSim probes the intrinsic geometric quality of frozen audio representations: do acoustically variable instances of the same content land near each other in embedding space, without any task-specific training?

> Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)
## What's here

- **114,641 clips** across **15 public subsets**, drawn from 19 source corpora.
- Domains: human speech (phones, words, utterances), animal vocalizations (birdsong, otter calls), environmental sounds.
- Conditions: clean to noisy, sub-100 ms to multi-second, few to thousands of classes per subset.
- All audio standardized to **16 kHz mono**.
- Single-source only (no overlapping speakers or simultaneous sources), so evaluation isolates content representation from source separation.

Four additional **blind out-of-distribution subsets** (low-resource speech in Shipibo-Conibo and Chintang) are held out for server-side evaluation via the [leaderboard](https://huggingface.co/spaces/vocsim/VocSim).
## Schema

```python
{
    "audio": {"array": np.ndarray, "sampling_rate": 16000},
    "subset": "HW1",       # source-corpus tag (see paper for the full list)
    "speaker": "spk_042",  # speaker / animal / source ID, or "N/A"
    "label": "hello",      # ground-truth class for similarity
}
```
## Quick start

```python
from datasets import load_dataset

ds = load_dataset("vocsim/public", split="train")
print(ds[0])
```
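Each record follows the schema above. Continuing from the snippet, a minimal sketch of restricting to one subset (here `"HW1"`, the example tag from the schema):

```python
# Filter on the string column alone so audio is not decoded during the scan.
hw1 = ds.filter(lambda s: s == "HW1", input_columns="subset")

clip = hw1[0]
print(clip["label"], clip["speaker"], len(clip["audio"]["array"]))  # decoded at 16 kHz
```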
For end-to-end evaluation (feature extraction, distance computation, P@k / GSR), use the reference pipeline at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark).
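For intuition only, P@k asks: of each clip's k nearest neighbors in embedding space, how many share its label? A minimal sketch over precomputed embeddings, where `embeddings` is assumed to come from any frozen encoder; this is an illustration, not the reference implementation:

```python
import numpy as np

def precision_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Mean fraction of each clip's k nearest neighbors sharing its label."""
    # Cosine distance via row-normalized dot products.
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - x @ x.T
    np.fill_diagonal(dist, np.inf)  # a clip is never its own neighbor
    hits = 0
    for i, row in enumerate(dist):
        nearest = np.argsort(row)[:k]
        hits += np.sum(labels[nearest] == labels[i])
    return hits / (k * len(labels))
```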
## Companion datasets

| Dataset | Purpose |
|---|---|
| [`vocsim/avian-perception-benchmark`](https://huggingface.co/datasets/vocsim/avian-perception-benchmark) | Alignment of embeddings with zebra-finch perceptual judgments |
| [`vocsim/mouse-strain-classification-benchmark`](https://huggingface.co/datasets/vocsim/mouse-strain-classification-benchmark) | C57 vs. DBA USV classification |
| [`vocsim/mouse-identity-classification-benchmark`](https://huggingface.co/datasets/vocsim/mouse-identity-classification-benchmark) | Individual-mouse identification from USVs |
## Licensing

Aggregation and metadata are released under CC BY 4.0. Each source corpus retains its original license; see Appendix A.1.1 of the paper for a per-source breakdown.
## Citation

```bibtex
@inproceedings{basha2026vocsim,
  title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
  author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026},
  doi       = {10.48550/arXiv.2512.10120}
}
```