Maris Basha committed on
Commit
0d591ef
·
1 Parent(s): 79c48a9

De-anonymize and polish README: add ICML 2026 paper info, arXiv DOI, cross-links to vocsim/* repos

Files changed (1)
  1. README.md +60 -31
README.md CHANGED
@@ -26,53 +26,82 @@ configs:
  data_files:
  - split: train
    path: data/train-*
  ---
- # Dataset Card for VocSim - Mouse Identity Classification
-
- ## Dataset Description
-
- This dataset is used in the **VocSim benchmark** paper for evaluating the ability of neural audio embeddings to identify individual mice based on their ultrasonic vocalization (USV) syllables. It contains pre-segmented USV syllables from multiple individual mice, derived from recordings created by **Van Segbroeck et al. (2017)**.
-
- The primary task associated with this dataset is supervised classification: training a model to predict the correct mouse identity (`speaker` field) given an audio input (a single syllable) or its derived features.
-
- **Included Files:**
- * Hugging Face `Dataset` object containing audio file paths (individual syllables) and metadata (mouse identity).
-
- ## Dataset Structure
-
- ### Data Instances
-
- A typical example in the dataset looks like this:
-
  ```python
  {
- 'audio': {'path': '/path/to/datasets/mouse_identity/BM003/BM003_syllable_1.wav', 'array': array([...], dtype=float32), 'sampling_rate': 250000},
- 'subset': 'mouse_identity',
- 'index': 50,
- 'speaker': 'BM003', # The crucial individual mouse ID (target label)
- 'label': 'BM003_syllable_1', # Syllable-specific identifier
- 'original_name': 'BM003/BM003_syllable_1.wav' # Example original relative path
  }
  ```
- ### Citation Information
-
- If you use this dataset, please cite the VocSim benchmark paper, and the MUPET software if relying on the provided segmentation:
  ```
- @unpublished{vocsim2025,
- title={VocSim: A Training-Free Benchmark for Content Identity in Single-Source Audio Embeddings},
- author={Anonymous},
- year={2025},
- note={Submitted manuscript}
  }

  @article{VanSegbroeck2017,
- author = {Van Segbroeck, Maarten and Knoll, Aaron T. and Levitt, Patricia and Narayanan, Shrikanth},
- title = "{MUPET}-Mouse Ultrasonic Profile ExTraction: A Signal Processing Tool for Rapid and Unsupervised Analysis of Ultrasonic Vocalizations",
- journal = {Neuron},
- volume = {94},
- number = {3},
- pages = {465--485.e5},
- year = {2017},
- doi = {10.1016/j.neuron.2017.04.018}
  }
- ```
26
  data_files:
27
  - split: train
28
  path: data/train-*
29
+ license: cc-by-4.0
30
+ tags:
31
+ - audio
32
+ - animal-vocalization
33
+ - ultrasonic-vocalization
34
+ - mouse
35
+ - bioacoustics
36
+ - speaker-identification
37
+ - benchmark
38
+ - vocsim
39
+ size_categories:
40
+ - 10K<n<100K
41
+ pretty_name: VocSim — Mouse Identity Classification
42
  ---
 
43
 
44
+ # VocSim — Mouse Identity Classification
45
 
46
+ [![GitHub](https://img.shields.io/badge/GitHub-vocsim%2Fbenchmark-black?logo=github)](https://github.com/vocsim/benchmark)
47
+ [![Core dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Core-vocsim%2Fpublic-blue)](https://huggingface.co/datasets/vocsim/public)
48
+ [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
49
 
50
+ A companion dataset for the **VocSim** benchmark that tests whether audio embeddings preserve **individual identity** in mouse ultrasonic vocalizations (USVs). It contains pre-segmented USV syllables from multiple individual mice (the `speaker` field), sampled at the native 250 kHz, derived from recordings by Van Segbroeck et al. (2017).
51
 
+ > Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. *VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio.* ICML 2026. [arXiv:2512.10120](https://doi.org/10.48550/arXiv.2512.10120)

+ ## Task

+ Supervised multi-class classification: given an audio syllable (or features derived from it), predict which individual mouse produced it. The target field is `speaker`. In the paper, this dataset is used to validate that embeddings which rank highly on VocSim transfer to a fine-grained downstream bioacoustic task.
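To illustrate the shape of the task (this is not the paper's evaluation protocol), a minimal nearest-centroid classifier over toy embedding vectors, with made-up speaker IDs and synthetic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings: 3 hypothetical mice, 20 syllables each, 8-D features.
# Real features would come from an audio embedding model.
speakers = ["BM003", "BM007", "BM011"]
X = np.concatenate([rng.normal(loc=i, scale=0.3, size=(20, 8)) for i in range(3)])
y = np.repeat(speakers, 20)

# Nearest-centroid: assign each syllable to the closest per-speaker mean.
centroids = {s: X[y == s].mean(axis=0) for s in speakers}
pred = [min(centroids, key=lambda s: np.linalg.norm(x - centroids[s])) for x in X]
accuracy = np.mean(np.array(pred) == y)
print(f"train accuracy: {accuracy:.2f}")
```

Any classifier that maps per-syllable features to `speaker` fits the same mold; the benchmark repo contains the actual evaluation code.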
 
+ ## Schema

  ```python
  {
+ "audio": {"array": np.ndarray, "sampling_rate": 250000},
+ "subset": "mouse_identity",
+ "index": 50,
+ "speaker": "BM003",           # target: individual mouse ID
+ "label": "BM003_syllable_1",  # syllable-specific identifier
+ "original_name": "BM003/BM003_syllable_1.wav"
  }
  ```
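Two quantities follow directly from the schema fields: syllable duration from the audio array and sampling rate, and the speaker ID recoverable from `original_name`. A sketch with a stand-in row (the array is a hypothetical 50 ms of silence, not real data):

```python
import numpy as np

# Stand-in for one dataset row, mirroring the schema above.
row = {
    "audio": {"array": np.zeros(12_500, dtype=np.float32), "sampling_rate": 250_000},
    "speaker": "BM003",
    "original_name": "BM003/BM003_syllable_1.wav",
}

# Duration in milliseconds at the native 250 kHz rate.
duration_ms = 1000 * len(row["audio"]["array"]) / row["audio"]["sampling_rate"]

# The leading path component of original_name matches the speaker field.
speaker_from_path = row["original_name"].split("/")[0]

print(f"duration: {duration_ms:.1f} ms, speaker: {speaker_from_path}")
```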
 
+ ## Quick start
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("vocsim/mouse-identity-classification-benchmark", split="train")
+ print(ds[0])
  ```
+
+ For end-to-end evaluation, use [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark); see `reproducibility/scripts/mouse_identity.py`.
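For a quick local experiment, one common split for identity classification holds out a fraction of each mouse's syllables, so every identity appears in both train and test. The benchmark repo defines the official protocol; this is only a shape sketch with made-up labels:

```python
import random
from collections import defaultdict

random.seed(0)

# Made-up (speaker, label) pairs for three hypothetical mice, 10 syllables each.
items = [(f"BM{m:03d}", f"BM{m:03d}_syllable_{i}") for m in (3, 7, 11) for i in range(10)]

# Group syllables by speaker, then hold out 20% per mouse (stratified hold-out).
by_speaker = defaultdict(list)
for speaker, label in items:
    by_speaker[speaker].append(label)

train, test = [], []
for speaker, labels in by_speaker.items():
    random.shuffle(labels)
    cut = int(0.8 * len(labels))
    train += [(speaker, l) for l in labels[:cut]]
    test += [(speaker, l) for l in labels[cut:]]

print(len(train), len(test))  # 24 6
```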
+
+ ## Source data
+
+ The source USV recordings come from Van Segbroeck et al. (2017), and the syllable segmentation relies on their MUPET tool. Please cite both that work and the VocSim paper if you use this dataset.
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{basha2026vocsim,
+   title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
+   author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
+   booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
+   year      = {2026},
+   doi       = {10.48550/arXiv.2512.10120}
+ }

  @article{VanSegbroeck2017,
+   author  = {Van Segbroeck, Maarten and Knoll, Aaron T. and Levitt, Patricia and Narayanan, Shrikanth},
+   title   = {{MUPET}-Mouse Ultrasonic Profile ExTraction: A Signal Processing Tool for Rapid and Unsupervised Analysis of Ultrasonic Vocalizations},
+   journal = {Neuron},
+   volume  = {94},
+   number  = {3},
+   pages   = {465--485.e5},
+   year    = {2017},
+   doi     = {10.1016/j.neuron.2017.04.018}
+ }
+ ```