Maris Basha committed
Commit ad69b17 · 1 Parent(s): 96a5768

De-anonymize and polish README: add ICML 2026 paper info, arXiv DOI, cross-links to vocsim/* repos

Files changed (1)
  1. README.md +54 -41
README.md CHANGED
@@ -40,66 +40,79 @@ tags:
  - audio-perceptual-judgment
  size_categories:
  - n<1K
- pretty_name: VocSim - Avian Perception Alignment
  ---

- # Dataset Card for VocSim - Avian Perception Alignment

- ## Dataset Description

- This dataset is used in the **VocSim benchmark** paper, specifically designed to evaluate how well neural audio embeddings align with biological perceptual judgments of similarity. It utilizes data from **Zandberg et al. (2024)**, which includes recordings of zebra finch (*Taeniopygia guttata*) song syllables and results from behavioral experiments (probe and triplet tasks) measuring the birds' perception of syllable similarity.

- The dataset allows researchers to:
- 1. Extract features/embeddings from the song syllables using various computational models.
- 2. Compute pairwise distances between these embeddings.
- 3. Compare the resulting computational similarity matrices against the avian perceptual judgments recorded in the accompanying `probes.csv` and `triplets.csv` files.

- This facilitates the development and benchmarking of audio representations that better capture biologically relevant acoustic features.

- **Included Files:**
- * Hugging Face `Dataset` object containing audio file paths and metadata.
- * `probes.csv`: Contains results from perceptual probe trials (sound_id, left, right, decision, etc.). Filtered to include only rows where all mentioned audio files exist.
- * `triplets.csv`: Contains results from perceptual triplet trials (Anchor, Positive, Negative, diff, etc.). Filtered to include only rows where all mentioned audio files exist.
- * `missing_audio_files.txt` (optional): Lists identifiers from the original CSVs for which no corresponding audio file was found.

- ## Dataset Structure

- ### Data Instances

- A typical example in the dataset looks like this:

  ```python
  {
- 'audio': {'path': '/path/to/datasets/avian_perception/wavs/ZF_M_123_syllable_A.wav', 'array': array([-0.00024414, -0.00048828, ..., 0.00024414], dtype=float32), 'sampling_rate': 16000},
- 'subset': 'avian_perception',
- 'index': 42,
- 'speaker': 'ZF_M_123',
- 'label': 'ZF_M_123', # Label is set to speaker ID for this dataset
- 'original_name': 'ZF_M_123_syllable_A.wav' # Identifier as used in CSVs
  }
  ```
- ## Citation Information

- If you use this dataset in your work, please cite both the VocSim benchmark paper and the original source data paper:

- ```bib
- @unpublished{vocsim2025,
- title={VocSim: A Training-Free Benchmark for Content Identity in Single-Source Audio Embeddings},
- author={Anonymous},
- year={2025},
- note={Submitted manuscript}
  }

  @article{zandberg2024bird,
- author = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
- title = {Bird song comparison using deep learning trained from avian perceptual judgments},
- journal = {PLoS Computational Biology},
- volume = {20},
- number = {8},
- year = {2024},
- month = {aug},
- pages = {e1012329},
- doi = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
  }
- ```
 
  - audio-perceptual-judgment
  size_categories:
  - n<1K
+ pretty_name: VocSim Avian Perception Alignment
  ---

+ # VocSim Avian Perception Alignment

+ [![GitHub](https://img.shields.io/badge/GitHub-vocsim%2Fbenchmark-black?logo=github)](https://github.com/vocsim/benchmark)
+ [![Core dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Core-vocsim%2Fpublic-blue)](https://huggingface.co/datasets/vocsim/public)
+ [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)

+ A companion dataset for the **VocSim** benchmark that tests whether neural audio embeddings align with **biological perceptual judgments**. It packages zebra finch (*Taeniopygia guttata*) song-syllable recordings together with the behavioral probe and triplet results from Zandberg et al. (2024), so an embedding's pairwise distance matrix can be compared directly against the birds' perceptual decisions.

+ > Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)

+ ## How to use it

+ 1. Extract features from each syllable with the audio model you want to evaluate.
+ 2. Compute pairwise distances between embeddings.
+ 3. Score the distances against the behavioral judgments in `probes.csv` (probe trials) and `triplets.csv` (triplet trials).

+ The reference implementation lives at [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark) — see `reproducibility/scripts/avian_perception.py` and `reproducibility/configs/avian_paper.yaml`.
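As a minimal sketch of steps 2 and 3 for the triplet trials (this is an illustration, not the benchmark's official scorer; the `original_name`-keyed embedding dict and the choice of Euclidean distance are assumptions):

```python
import numpy as np

def triplet_agreement(embeddings, triplets):
    """Fraction of triplet trials where embedding distances agree with
    the birds' judgment, i.e. d(Anchor, Positive) < d(Anchor, Negative).

    embeddings: dict mapping original_name -> 1-D np.ndarray
    triplets:   iterable of (anchor, positive, negative) name tuples
    """
    agree = 0
    total = 0
    for anchor, positive, negative in triplets:
        ea, ep, en = embeddings[anchor], embeddings[positive], embeddings[negative]
        d_pos = np.linalg.norm(ea - ep)  # Euclidean; cosine distance is another common choice
        d_neg = np.linalg.norm(ea - en)
        agree += d_pos < d_neg
        total += 1
    return agree / total
```

Chance level for this score is 0.5, so any embedding worth reporting should land well above that.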

+ **Bundled files:**
+ - Hugging Face `Dataset` with the audio + metadata.
+ - `probes.csv` — probe-trial results (`sound_id`, `left`, `right`, `decision`, …), filtered to rows whose audio is present.
+ - `triplets.csv` — triplet-trial results (`Anchor`, `Positive`, `Negative`, `diff`, …), filtered the same way.
+ - `missing_audio_files.txt` (when applicable) — original IDs without matching audio.

+ ## Schema

  ```python
  {
+ "audio": {"array": np.ndarray, "sampling_rate": 16000},
+ "subset": "avian_perception",
+ "index": 42,
+ "speaker": "ZF_M_123", # bird ID
+ "label": "ZF_M_123", # set to speaker for this dataset
+ "original_name": "ZF_M_123_syllable_A.wav" # identifier used in the CSVs
  }
  ```
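Any encoder can turn the `audio` array above into an embedding. As a hypothetical stand-in (not part of the benchmark), a time-averaged log-magnitude spectrum already gives something to score:

```python
import numpy as np

def spectrogram_embedding(waveform, frame=512, hop=256):
    """Deliberately simple baseline 'embedding': the time-averaged
    log-magnitude spectrum of a syllable. A real evaluation would
    substitute a neural encoder here."""
    # Slice the waveform into overlapping Hann-windowed frames.
    frames = [waveform[i:i + frame] * np.hanning(frame)
              for i in range(0, len(waveform) - frame + 1, hop)]
    mags = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return np.log1p(mags).mean(axis=0)  # shape: (frame // 2 + 1,)
```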
 
+ ## Quick start
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("vocsim/avian-perception-benchmark", split="train")
+ print(ds[0])
+ ```
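The probe trials can be scored along the same lines as the triplets. A sketch, with the caveat that the exact column semantics are assumptions based on the column list above (in particular, that `decision` records the chosen option's identifier):

```python
import numpy as np
import pandas as pd

def probe_agreement(embeddings, probes):
    """Fraction of probe trials where the embedding-nearer of the two
    options matches the bird's recorded choice.

    embeddings: dict mapping original_name -> 1-D np.ndarray
    probes:     DataFrame with sound_id, left, right, decision columns,
                e.g. pd.read_csv("probes.csv")
    """
    hits = 0
    for row in probes.itertuples(index=False):
        d_left = np.linalg.norm(embeddings[row.sound_id] - embeddings[row.left])
        d_right = np.linalg.norm(embeddings[row.sound_id] - embeddings[row.right])
        predicted = row.left if d_left < d_right else row.right
        hits += predicted == row.decision
    return hits / len(probes)
```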
+
+ ## Source data
+
+ The recordings and behavioral results are from Zandberg et al. (2024). Please cite both that work and the VocSim paper if you use this dataset.
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{basha2026vocsim,
+ title = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
+ author = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
+ booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
+ year = {2026},
+ doi = {10.48550/arXiv.2512.10120}
  }

  @article{zandberg2024bird,
+ author = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
+ title = {Bird song comparison using deep learning trained from avian perceptual judgments},
+ journal = {PLoS Computational Biology},
+ volume = {20},
+ number = {8},
+ pages = {e1012329},
+ year = {2024},
+ doi = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
  }
+ ```