---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 250000
    - name: speaker
      dtype: string
    - name: subset
      dtype: string
    - name: index
      dtype: int64
    - name: label
      dtype: string
    - name: original_name
      dtype: string
  splits:
    - name: train
      num_bytes: 12703856316.896
      num_examples: 99024
  download_size: 8163587149
  dataset_size: 12703856316.896
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
tags:
  - audio
  - animal-vocalization
  - ultrasonic-vocalization
  - mouse
  - bioacoustics
  - speaker-identification
  - benchmark
  - vocsim
size_categories:
  - 10K<n<100K
pretty_name: VocSim — Mouse Identity Classification
---

# VocSim — Mouse Identity Classification

A companion dataset for the VocSim benchmark that tests whether audio embeddings preserve individual identity in mouse ultrasonic vocalizations (USVs). It contains pre-segmented USV syllables from multiple individual mice (the `speaker` field), sampled at their native 250 kHz, derived from recordings by Van Segbroeck et al. (2017).

Basha, M., Zai, A. T., Stoll, S., & Hahnloser, R. H. R. VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio. ICML 2026. arXiv:2512.10120

## Task

Supervised multi-class classification: given an audio syllable (or features derived from it), predict which individual mouse produced it. The target is the `speaker` field. In the paper we use this dataset to validate that VocSim-top embeddings transfer to a fine-grained downstream bioacoustic task.
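
To illustrate the task setup only (this is not the paper's method), a minimal baseline might map each syllable to a fixed-length spectral feature and fit a nearest-centroid classifier over speakers. The feature and classifier below are placeholders, and the synthetic two-speaker data merely stands in for the real audio:

```python
import numpy as np

SR = 250_000  # native sampling rate of the dataset

def spectral_features(audio: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Log power in n_bands equal-width frequency bands (placeholder feature)."""
    spec = np.abs(np.fft.rfft(audio)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

class NearestCentroid:
    """Predict the speaker whose mean feature vector is closest."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack(
            [X[[yi == c for yi in y]].mean(axis=0) for c in self.labels_]
        )
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=-1)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic stand-in for the dataset: two "mice" with different dominant bands
rng = np.random.default_rng(0)
t = np.arange(4096) / SR
def syllable(f0):
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

X = np.stack([spectral_features(syllable(f)) for f in [40e3] * 5 + [80e3] * 5])
y = ["BM001"] * 5 + ["BM002"] * 5
clf = NearestCentroid().fit(X, y)
print(clf.predict(X[:2]))  # → ['BM001', 'BM001']
```

With real data, `X` would come from `spectral_features(rec["audio"]["array"])` over dataset records (or from the embeddings under evaluation), with `y = ds["speaker"]`.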

## Schema

```python
{
  "audio":  {"array": np.ndarray, "sampling_rate": 250000},
  "subset": "mouse_identity",
  "index":  50,
  "speaker": "BM003",                  # target: individual mouse ID
  "label":   "BM003_syllable_1",       # syllable-specific identifier
  "original_name": "BM003/BM003_syllable_1.wav"
}
```
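
The `speaker`, `label`, and `original_name` fields are redundant by construction. Assuming the naming convention in the example record holds throughout (an assumption, not a documented guarantee), a record can be sanity-checked like this:

```python
def check_record(rec: dict) -> bool:
    """Verify that speaker, label, and original_name are mutually consistent."""
    speaker, label = rec["speaker"], rec["label"]
    return (
        label.startswith(speaker + "_")
        and rec["original_name"] == f"{speaker}/{label}.wav"
    )

rec = {
    "speaker": "BM003",
    "label": "BM003_syllable_1",
    "original_name": "BM003/BM003_syllable_1.wav",
}
print(check_record(rec))  # → True
```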

## Quick start

```python
from datasets import load_dataset

ds = load_dataset("vocsim/mouse-identity-classification-benchmark", split="train")
print(ds[0])
```

For end-to-end evaluation, use [github.com/vocsim/benchmark](https://github.com/vocsim/benchmark) — see `reproducibility/scripts/mouse_identity.py`.
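
Only a `train` split is shipped, so a held-out evaluation requires a local split. One minimal sketch is a per-speaker index split (so every mouse appears in both partitions); the indices can then be passed to `ds.select`. The toy `speakers` list below stands in for the real column:

```python
import numpy as np

def stratified_split(speakers, test_frac=0.2, seed=0):
    """Per-speaker random index split so every mouse appears in both partitions."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for s in sorted(set(speakers)):
        idx = [i for i, sp in enumerate(speakers) if sp == s]
        rng.shuffle(idx)
        k = max(1, int(len(idx) * test_frac))  # at least one test item per speaker
        test_idx += idx[:k]
        train_idx += idx[k:]
    return sorted(train_idx), sorted(test_idx)

# Toy speaker column; with the real dataset use speakers = ds["speaker"]
speakers = ["BM001"] * 5 + ["BM002"] * 5
train_idx, test_idx = stratified_split(speakers)
```

With the loaded dataset this would be `train_ds, test_ds = ds.select(train_idx), ds.select(test_idx)`.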

## Source data

USV recordings and segmentation rely on MUPET (Van Segbroeck et al., 2017). Please cite both that work and the VocSim paper if you use this dataset.

## Citation

```bibtex
@inproceedings{basha2026vocsim,
  title     = {VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio},
  author    = {Basha, Maris and Zai, Anja T. and Stoll, Sabine and Hahnloser, Richard H. R.},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026},
  doi       = {10.48550/arXiv.2512.10120}
}

@article{VanSegbroeck2017,
  author  = {Van Segbroeck, Maarten and Knoll, Aaron T. and Levitt, Patricia and Narayanan, Shrikanth},
  title   = {{MUPET}-Mouse Ultrasonic Profile ExTraction: A Signal Processing Tool for Rapid and Unsupervised Analysis of Ultrasonic Vocalizations},
  journal = {Neuron},
  volume  = {94},
  number  = {3},
  pages   = {465--485.e5},
  year    = {2017},
  doi     = {10.1016/j.neuron.2017.04.018}
}
```