---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  - name: original_name
    dtype: string
  splits:
  - name: train
    num_bytes: 24343526.0
    num_examples: 887
  download_size: 22452898
  dataset_size: 24343526.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
tags:
- audio
- animal-vocalization
- birdsong
- zebra-finch
- perceptual-similarity
- benchmark
- zero-shot
- vocsim
- avian-perceptual-judgment
- audio-perceptual-judgment
size_categories:
- n<1K
pretty_name: VocSim Avian Perception Alignment
---

# VocSim — Avian Perception Alignment

[![GitHub](https://img.shields.io/badge/GitHub-vocsim%2Fbenchmark-black?logo=github)](https://github.com/vocsim/benchmark)
[![Core dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Core-vocsim%2Fpublic-blue)](https://huggingface.co/datasets/vocsim/public)
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)

A companion dataset for the **VocSim** benchmark that tests whether neural audio embeddings align with **biological perceptual judgments**. It packages zebra finch (*Taeniopygia guttata*) song-syllable recordings together with the behavioral probe and triplet results from Zandberg et al. (2024), so an embedding's pairwise distance matrix can be compared directly against the birds' perceptual decisions.

> Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)

## How to use it

1. Extract features from each syllable with the audio model you want to evaluate.
2. Compute pairwise distances between embeddings.
3. Score the distances against the behavioral judgments in `probes.csv` (probe trials) and `triplets.csv` (triplet trials).

The reference implementation lives at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark) — see `reproducibility/scripts/avian_perception.py` and `reproducibility/configs/avian_paper.yaml`.
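The three steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: `embed` is a placeholder for whatever model you are evaluating, and the toy sine waves stand in for the dataset's audio arrays.

```python
import numpy as np

def embed(waveform: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor; swap in the model under evaluation."""
    # Toy features: mean and standard deviation of the waveform.
    return np.array([waveform.mean(), waveform.std()])

def pairwise_distances(features: np.ndarray) -> np.ndarray:
    """Euclidean distance matrix between all embeddings (step 2)."""
    diff = features[:, None, :] - features[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy 16 kHz "syllables" standing in for the dataset's audio arrays.
waves = [np.sin(np.linspace(0, 2 * np.pi * f, 16000)) for f in (3, 5, 80)]
feats = np.stack([embed(w) for w in waves])
D = pairwise_distances(feats)
print(D.shape)  # (3, 3)
```

The resulting matrix `D` is what gets scored against the behavioral judgments in step 3.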

**Bundled files:**
- Hugging Face `Dataset` with the audio + metadata.
- `probes.csv` — probe-trial results (`sound_id`, `left`, `right`, `decision`, …), filtered to rows whose audio is present.
- `triplets.csv` — triplet-trial results (`Anchor`, `Positive`, `Negative`, `diff`, …), filtered the same way.
- `missing_audio_files.txt` (when applicable) — original IDs without matching audio.
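Scoring against the two CSVs can be sketched as below. The column names follow the lists above; the `"left"`/`"right"` strings for `decision` are an assumption made here for illustration, so check the actual encoding in `probes.csv` before relying on it.

```python
import numpy as np

def _dist(embeddings, a, b):
    """Euclidean distance between two sounds' embeddings."""
    return np.linalg.norm(embeddings[a] - embeddings[b])

def triplet_agreement(embeddings, triplets):
    """Fraction of triplets where d(Anchor, Positive) < d(Anchor, Negative)."""
    hits = sum(_dist(embeddings, a, p) < _dist(embeddings, a, n)
               for a, p, n in triplets)
    return hits / len(triplets)

def probe_accuracy(embeddings, probes):
    """Fraction of probe trials where the stimulus nearer to `sound_id` in
    embedding space matches the bird's recorded `decision`.
    NOTE: "left"/"right" decision coding is assumed, not documented here."""
    hits = 0
    for sound_id, left, right, decision in probes:
        nearer = ("left" if _dist(embeddings, sound_id, left)
                  < _dist(embeddings, sound_id, right) else "right")
        hits += nearer == decision
    return hits / len(probes)

# Toy embeddings standing in for real model features.
emb = {"a": np.zeros(2), "p": np.array([0.1, 0.0]), "n": np.array([3.0, 4.0])}
print(triplet_agreement(emb, [("a", "p", "n")]))       # 1.0
print(probe_accuracy(emb, [("a", "p", "n", "left")]))  # 1.0
```

For the exact metrics reported in the paper, defer to `reproducibility/scripts/avian_perception.py` in the benchmark repository.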

## Schema

```python
{
  "audio": {"array": np.ndarray, "sampling_rate": 16000},
  "subset": "avian_perception",
  "index": 42,
  "speaker": "ZF_M_123",                     # bird ID
  "label": "ZF_M_123",                       # set to speaker for this dataset
  "original_name": "ZF_M_123_syllable_A.wav" # identifier used in the CSVs
}
```

## Quick start

```python
from datasets import load_dataset

ds = load_dataset("vocsim/avian-perception-benchmark", split="train")
print(ds[0])
```
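Since the CSVs reference clips by their original file name, it is convenient to index the dataset by `original_name` before scoring. A small sketch, using toy rows that mimic the schema above in place of the loaded dataset:

```python
import numpy as np

def index_by_name(rows):
    """Map each clip's `original_name` (the identifier used in the CSVs)
    to its audio array."""
    return {r["original_name"]: np.asarray(r["audio"]["array"]) for r in rows}

# Toy rows mimicking the dataset schema; with the real dataset,
# pass the loaded `ds` instead.
rows = [
    {"original_name": "a.wav", "audio": {"array": [0.0, 0.1], "sampling_rate": 16000}},
    {"original_name": "b.wav", "audio": {"array": [0.2, 0.3], "sampling_rate": 16000}},
]
by_name = index_by_name(rows)
print(sorted(by_name))  # ['a.wav', 'b.wav']
```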

## Source data

The recordings and behavioral results are from Zandberg et al. (2024). Please cite both that work and the VocSim paper if you use this dataset.

## Citation

```bibtex
@inproceedings{basha2026vocsim,
  title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
  author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026},
  doi       = {10.48550/arXiv.2512.10120}
}

@article{zandberg2024bird,
  author    = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
  title     = {Bird song comparison using deep learning trained from avian perceptual judgments},
  journal   = {PLoS Computational Biology},
  volume    = {20},
  number    = {8},
  pages     = {e1012329},
  year      = {2024},
  doi       = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
}
```