---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  - name: original_name
    dtype: string
  splits:
  - name: train
    num_bytes: 24343526.0
    num_examples: 887
  download_size: 22452898
  dataset_size: 24343526.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
tags:
- audio
- animal-vocalization
- birdsong
- zebra-finch
- perceptual-similarity
- benchmark
- zero-shot
- vocsim
- avian-perceptual-judgment
- audio-perceptual-judgment
size_categories:
- n<1K
pretty_name: VocSim — Avian Perception Alignment
---

# VocSim — Avian Perception Alignment
[Benchmark code](https://github.com/vocsim/benchmark) · [Dataset on Hugging Face](https://huggingface.co/datasets/vocsim/public) · [License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
A companion dataset for the **VocSim** benchmark that tests whether neural audio embeddings align with **biological perceptual judgments**. It packages zebra finch (*Taeniopygia guttata*) song-syllable recordings together with the behavioral probe and triplet results from Zandberg et al. (2024), so an embedding's pairwise distance matrix can be compared directly against the birds' perceptual decisions.

> Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)
## How to use it

1. Extract features from each syllable with the audio model you want to evaluate.
2. Compute pairwise distances between the embeddings.
3. Score the distances against the behavioral judgments in `probes.csv` (probe trials) and `triplets.csv` (triplet trials).

The reference implementation lives at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark); see `reproducibility/scripts/avian_perception.py` and `reproducibility/configs/avian_paper.yaml`.
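For orientation, step 3 on the triplet trials can be sketched as follows. This is a minimal illustration, not the reference implementation: the `Anchor`/`Positive`/`Negative` names follow the `triplets.csv` columns described below, while `embeddings` is a hypothetical dict from `original_name` to feature vector, and cosine distance is just one reasonable choice of metric.

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two 1-D feature vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_accuracy(embeddings, triplets):
    """Fraction of triplets where the anchor embedding is closer to the
    positive than to the negative, i.e. agrees with the birds' judgment.

    embeddings: dict mapping original_name -> 1-D numpy vector
    triplets:   list of (anchor, positive, negative) name tuples,
                e.g. zipped from the Anchor/Positive/Negative columns
    """
    correct = sum(
        cosine_distance(embeddings[a], embeddings[p])
        < cosine_distance(embeddings[a], embeddings[n])
        for a, p, n in triplets
    )
    return correct / len(triplets)
```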
**Bundled files:**

- Hugging Face `Dataset` with the audio and metadata.
- `probes.csv`: probe-trial results (`sound_id`, `left`, `right`, `decision`, …), filtered to rows whose audio is present.
- `triplets.csv`: triplet-trial results (`Anchor`, `Positive`, `Negative`, `diff`, …), filtered the same way.
- `missing_audio_files.txt` (when applicable): original IDs without matching audio.
## Schema

```python
{
    "audio": {"array": np.ndarray, "sampling_rate": 16000},
    "subset": "avian_perception",
    "index": 42,
    "speaker": "ZF_M_123",                      # bird ID
    "label": "ZF_M_123",                        # set to speaker for this dataset
    "original_name": "ZF_M_123_syllable_A.wav"  # identifier used in the CSVs
}
```
## Quick start

```python
from datasets import load_dataset

ds = load_dataset("vocsim/avian-perception-benchmark", split="train")
print(ds[0])
```
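From there, steps 1–2 of the workflow reduce to stacking per-syllable embeddings and forming a distance matrix. The sketch below uses a deliberately naive placeholder feature (a binned log-magnitude spectrum); substitute your own model's embeddings in practice.

```python
import numpy as np

def toy_embedding(waveform, n_bins=32):
    """Placeholder feature: binned log-magnitude spectrum.
    Stands in for a real audio model's embedding."""
    spec = np.abs(np.fft.rfft(np.asarray(waveform, dtype=np.float64)))
    bins = np.array_split(spec, n_bins)
    return np.log1p(np.array([b.mean() for b in bins]))

def pairwise_cosine(X):
    """Pairwise cosine-distance matrix for the row vectors of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

# On the dataset, usage would look like:
#   X = np.stack([toy_embedding(ex["audio"]["array"]) for ex in ds])
#   D = pairwise_cosine(X)
```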
## Source data

The recordings and behavioral results are from Zandberg et al. (2024). Please cite both that work and the VocSim paper if you use this dataset.
## Citation

```bibtex
@inproceedings{basha2026vocsim,
  title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
  author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026},
  doi       = {10.48550/arXiv.2512.10120}
}

@article{zandberg2024bird,
  author    = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
  title     = {Bird song comparison using deep learning trained from avian perceptual judgments},
  journal   = {PLoS Computational Biology},
  volume    = {20},
  number    = {8},
  pages     = {e1012329},
  year      = {2024},
  doi       = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
}
```