Maris Basha committed
Commit ad69b17 · 1 parent: 96a5768
De-anonymize and polish README: add ICML 2026 paper info, arXiv DOI, cross-links to vocsim/* repos

README.md CHANGED
@@ -40,66 +40,79 @@ tags:
- audio-perceptual-judgment
size_categories:
- n<1K
pretty_name: VocSim — Avian Perception Alignment
---

# VocSim — Avian Perception Alignment

[GitHub](https://github.com/vocsim/benchmark) · [Hugging Face](https://huggingface.co/datasets/vocsim/public) · [License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

A companion dataset for the **VocSim** benchmark that tests whether neural audio embeddings align with **biological perceptual judgments**. It packages zebra finch (*Taeniopygia guttata*) song-syllable recordings together with the behavioral probe and triplet results from Zandberg et al. (2024), so an embedding's pairwise distance matrix can be compared directly against the birds' perceptual decisions.

> Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)

## How to use it

1. Extract features from each syllable with the audio model you want to evaluate.
2. Compute pairwise distances between embeddings.
3. Score the distances against the behavioral judgments in `probes.csv` (probe trials) and `triplets.csv` (triplet trials); a minimal scoring sketch follows below.
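
To make step 3 concrete, here is one way triplet scoring could look. It is a sketch, not the benchmark's own code: it assumes an `embeddings` dict mapping each `original_name` to a feature vector (step 1), and picks cosine distance as one reasonable metric; defer to the reference implementation linked below for the authoritative scoring.

```python
import numpy as np
import pandas as pd

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # 1 minus cosine similarity; smaller means more similar.
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_agreement(triplets: pd.DataFrame, embeddings: dict) -> float:
    """Fraction of triplets where the embedding puts the Positive closer
    to the Anchor than the Negative, i.e. agrees with the birds' choice."""
    hits = 0
    for row in triplets.itertuples():
        d_pos = cosine_distance(embeddings[row.Anchor], embeddings[row.Positive])
        d_neg = cosine_distance(embeddings[row.Anchor], embeddings[row.Negative])
        hits += int(d_pos < d_neg)
    return hits / len(triplets)

# triplets = pd.read_csv("triplets.csv")
# print(triplet_agreement(triplets, embeddings))
```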

The reference implementation lives at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark) — see `reproducibility/scripts/avian_perception.py` and `reproducibility/configs/avian_paper.yaml`.

**Bundled files:**

- Hugging Face `Dataset` with the audio + metadata.
- `probes.csv` — probe-trial results (`sound_id`, `left`, `right`, `decision`, …), filtered to rows whose audio is present; a probe-scoring sketch follows this list.
- `triplets.csv` — triplet-trial results (`Anchor`, `Positive`, `Negative`, `diff`, …), filtered the same way.
- `missing_audio_files.txt` (when applicable) — original IDs without matching audio.
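
For `probes.csv`, the same idea can be sketched under an assumed reading of its columns: that each trial asks whether the probe `sound_id` is closer to the `left` or the `right` reference, with the bird's choice recorded in `decision` as `"left"`/`"right"`. Those semantics are our assumption, not documented here, so treat this only as a template and check the benchmark scripts; `cosine_distance` comes from the sketch above.

```python
import pandas as pd

def probe_agreement(probes: pd.DataFrame, embeddings: dict) -> float:
    """Fraction of probe trials where the embedding's nearer reference
    matches the side the bird chose (column semantics are assumed)."""
    hits = 0
    for row in probes.itertuples():
        d_left = cosine_distance(embeddings[row.sound_id], embeddings[row.left])
        d_right = cosine_distance(embeddings[row.sound_id], embeddings[row.right])
        hits += int(("left" if d_left < d_right else "right") == row.decision)
    return hits / len(probes)
```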

## Schema

```python
{
    "audio": {"array": np.ndarray, "sampling_rate": 16000},
    "subset": "avian_perception",
    "index": 42,
    "speaker": "ZF_M_123",                      # bird ID
    "label": "ZF_M_123",                        # set to speaker for this dataset
    "original_name": "ZF_M_123_syllable_A.wav"  # identifier used in the CSVs
}
```
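
The `original_name` field is the join key between `Dataset` rows and the identifiers in the two CSVs. A minimal sketch of that join, assuming the CSVs sit in the working directory (the dataset id matches the Quick start below):

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("vocsim/avian-perception-benchmark", split="train")

# Map original_name -> row index without decoding any audio.
name_to_row = {name: i for i, name in enumerate(ds["original_name"])}

triplets = pd.read_csv("triplets.csv")
anchor = triplets.iloc[0]["Anchor"]
waveform = ds[name_to_row[anchor]]["audio"]["array"]  # decoded on access
```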

## Quick start

```python
from datasets import load_dataset

ds = load_dataset("vocsim/avian-perception-benchmark", split="train")
print(ds[0])
```
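
Continuing from that snippet, the full loop from step 1 to step 3 might look like the sketch below. Mean-pooled MFCCs stand in for whatever embedding model you actually want to benchmark, `librosa` is our assumption rather than a dependency of this dataset, and `triplet_agreement` is the helper defined further up.

```python
import numpy as np
import librosa
import pandas as pd

# Stand-in embedding: mean-pooled MFCCs per syllable (swap in your model).
embeddings = {}
for ex in ds:
    y = np.asarray(ex["audio"]["array"], dtype=np.float32)
    mfcc = librosa.feature.mfcc(y=y, sr=ex["audio"]["sampling_rate"], n_mfcc=20)
    embeddings[ex["original_name"]] = mfcc.mean(axis=1)

triplets = pd.read_csv("triplets.csv")
print(triplet_agreement(triplets, embeddings))
```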

## Source data

The recordings and behavioral results are from Zandberg et al. (2024). Please cite both that work and the VocSim paper if you use this dataset.

## Citation

```bibtex
@inproceedings{basha2026vocsim,
  title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
  author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026},
  doi       = {10.48550/arXiv.2512.10120}
}

@article{zandberg2024bird,
  author    = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
  title     = {Bird song comparison using deep learning trained from avian perceptual judgments},
  journal   = {PLoS Computational Biology},
  volume    = {20},
  number    = {8},
  pages     = {e1012329},
  year      = {2024},
  doi       = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
}
```