anonymous-submission000 committed
Commit 2fb6fd2 · verified · 1 Parent(s): 26c6e42

Update README.md

Files changed (1):
  1. README.md +52 -28
README.md CHANGED
@@ -31,50 +31,74 @@ tags:
  - zero-shot
  - unsupervised-clustering-generalization
  ---
- # VocSim: Zero-Shot Audio Similarity Benchmark
-
- ## Dataset Description
-
- **Repository:** [LINK UPON DOI]
- **Paper:** [VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings (LINK UPON DOI)]()
- **Point of Contact:** Anonymous at the moment
-
- ### Overview
-
- **VocSim** is a large-scale benchmark dataset designed to evaluate the generalization capabilities of neural audio embeddings for **zero-shot audio similarity**.
-
- The dataset is intentionally structured into distinct subsets (19 in total: 15 available here and 4 held out as a blind test set) to represent variability in:
-
- * **Sound Source:** Human speech (phonemes, words, utterances), birdsong (calls, syllables from multiple species), otter calls, environmental events.
- * **Duration:** From <100 ms syllables to multi-second utterances.
- * **Class Structure:** Subsets range from few, well-populated classes to thousands of classes with few examples.
- * **Acoustic Conditions:** Clean recordings (e.g., TIMIT) contrasted with noisy, naturalistic recordings (e.g., AMI meetings, field recordings).
- * **Intra-class Variation:** Natural differences in loudness, rate, and speaker/animal identity within classes.
-
- ### Supported Tasks and Leaderboards
-
- The primary intended task is **zero-shot audio similarity evaluation**. Given an audio sample, the goal is to retrieve other samples with the same `label` (representing the same underlying sound class, e.g., word, syllable type) based solely on comparing their embeddings with a distance metric (e.g., cosine distance).
-
- Performance is measured using:
-
- * **RA@k (Retrieval Accuracy @k):** Measures the fraction of the k nearest neighbors that belong to the same class as the query sample. Higher is better.
- * **CSCF (Cluster Separation Confusion Fraction):** Measures the fraction of samples that are, on average, closer to samples from *other* classes than to samples from their *own* class. Lower is better.
-
- The dataset can also be used for evaluating embeddings in downstream tasks such as unsupervised clustering or as features for supervised classification (see Section 5 of the paper).
-
- [Leaderboard upon DOI]
-
  ### Data Instances

- A typical data point consists of an audio sample and its associated metadata:
-
  ```python
  {
-     'audio': {'path': '/path/to/audio.wav', 'array': array([-0.00024414, -0.00048828, ..., 0.00012207], dtype=float32), 'sampling_rate': 16000},
-     'subset': 'HW1',              # Example: Human Words from TIMIT
-     'speaker': 'speaker_id_abc',  # Or animal ID, or 'N/A'
-     'label': 'hello',             # Example label (could be a word, phone, class ID)
-     'index': 12345,
-     'sampling_rate': 16000
- }
  - zero-shot
  - unsupervised-clustering-generalization
  ---
+ # VocSim: A Benchmark for Zero-Shot Audio Similarity
+
+ [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
+ [![Dataset Size](https://img.shields.io/badge/Dataset%20Size-5.45%20GB-orange)](https://huggingface.co/datasets/anonymous-submission000/vocsim)
+ [![Number of Examples](https://img.shields.io/badge/Examples-114k-brightgreen)](https://huggingface.co/datasets/anonymous-submission000/vocsim)
+
+ **VocSim** is a large-scale benchmark dataset designed to evaluate the generalization capabilities of neural audio embeddings for **zero-shot audio similarity tasks**. It challenges models to recognize fine-grained acoustic similarity between sounds they haven't been explicitly trained to classify together.
+
+ **Repository:** [GitHub Link - Add Upon DOI]
+ **Paper:** [VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings - Link Upon DOI]()
+ **Point of Contact:** Anonymous Authors (initially)
+
+ ---
+
+ ## 🎯 Dataset Goal: Evaluating Generalization
+
+ The core purpose of VocSim is to provide a standardized testbed for assessing how well learned audio representations can capture the notion of "sameness" between sounds based purely on their acoustic properties, without relying on supervised training signals for the specific similarity task being evaluated.
+
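+ In code, the paradigm reduces to "embed, then compare". A minimal sketch (the toy `embed` below is a hypothetical stand-in; any frozen pretrained encoder takes its place):
+
+ ```python
+ import numpy as np
+
+ def embed(waveform: np.ndarray) -> np.ndarray:
+     # Toy stand-in encoder (summary statistics). In a real evaluation this
+     # is the output of a frozen pretrained model; the zero-shot setting
+     # forbids fine-tuning on the benchmark labels.
+     return np.array([waveform.mean(), waveform.std(), np.abs(waveform).max()])
+
+ def cosine_distances(embeddings: np.ndarray) -> np.ndarray:
+     # Pairwise cosine distances between row-vector embeddings.
+     normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
+     return 1.0 - normed @ normed.T
+
+ # Embed a batch of clips, then rank candidates purely by distance.
+ clips = [np.random.randn(16000).astype(np.float32) for _ in range(4)]
+ dists = cosine_distances(np.stack([embed(c) for c in clips]))
+ nearest = dists[0].argsort()[1:]  # neighbors of clip 0, closest first (self excluded)
+ ```
+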
+ ## 🧩 Dataset Structure: Diverse Acoustic Challenges
+
+ VocSim achieves its goal by aggregating **19 distinct subsets** (15 publicly available, 4 reserved as blind test sets) that introduce significant variability across key acoustic and structural dimensions:
+
+ * 🎤 **Sound Source:** Spans human speech (phonemes, words, utterances), intricate birdsong (calls, syllables from multiple species), otter calls, and common environmental events.
+ * ⏱️ **Duration:** Ranges from fleeting sub-100 ms syllables to multi-second recordings.
+ * 📊 **Class Structure:** Includes subsets with a few well-populated classes (e.g., environmental sounds) alongside subsets with thousands of classes having only a handful of examples each (e.g., rare words or syllables).
+ * 🔊 **Acoustic Conditions:** Features both clean, studio-quality recordings (like the TIMIT portions) and challenging, noisy, naturalistic recordings (e.g., AMI meetings, field recordings of animals).
+ * 🧬 **Intra-class Variation:** Naturally incorporates real-world differences in loudness, speaking/singing rate, speaker identities, animal individuals, and recording environments within the same semantic `label`.
+
+ This diversity rewards embeddings that are invariant to nuisance factors while remaining sensitive to the core acoustic characteristics that define a sound class.
+
+ ### Key Metrics
+
+ Performance is measured using metrics sensitive to retrieval quality and cluster coherence:
+
+ * **RA@k (Retrieval Accuracy @k):** The fraction of the *k* retrieved nearest neighbors that share the same `label` as the query sample. *Higher is better.* (For example, RA@1 checks whether the single closest neighbor is correct.)
+ * **CSCF (Cluster Separation Confusion Fraction):** Estimates the proportion of samples that are, on average, closer to embeddings from *other* classes than to embeddings from their *own* class, reflecting how well separated the class clusters are in the embedding space. *Lower is better.*
+
+ (See Section 5 of the paper for detailed definitions and analysis.)
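+
+ As a rough, non-authoritative sketch, both metrics can be read off a precomputed pairwise cosine-distance matrix. The NumPy code below assumes `labels` is an array of class ids and, for CSCF, pools all other classes together; the paper's Section 5 definitions are authoritative.
+
+ ```python
+ import numpy as np
+
+ def ra_at_k(dists: np.ndarray, labels: np.ndarray, k: int) -> float:
+     # RA@k: mean fraction of the k nearest neighbors (query excluded)
+     # that share the query's label. Higher is better.
+     d = dists.copy()
+     np.fill_diagonal(d, np.inf)        # never retrieve the query itself
+     nn = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors per query
+     return float(np.mean(labels[nn] == labels[:, None]))
+
+ def cscf(dists: np.ndarray, labels: np.ndarray) -> float:
+     # CSCF: fraction of samples whose mean distance to their own class
+     # (self excluded) exceeds their mean distance to all other classes.
+     # Lower is better.
+     confused = 0
+     for i in range(len(labels)):
+         same = labels == labels[i]
+         same[i] = False                # exclude the sample itself
+         other = labels != labels[i]
+         if same.any() and other.any():
+             confused += dists[i, other].mean() < dists[i, same].mean()
+     return confused / len(labels)
+ ```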
 
+ ### Other Potential Uses
+
+ Beyond zero-shot similarity, VocSim embeddings can be evaluated on:
+
+ * Unsupervised clustering (e.g., scored with metrics like NMI or ARI; see the sketch below).
+ * Use as input features for downstream supervised classification tasks.
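+
+ For the clustering route, a minimal scikit-learn sketch (with random stand-ins for the embeddings and ground-truth labels) could look like:
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+ from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
+
+ rng = np.random.default_rng(0)
+ embeddings = rng.normal(size=(200, 64))  # stand-in for real embeddings
+ labels = rng.integers(0, 10, size=200)   # stand-in ground-truth labels
+
+ # Cluster the embeddings, then compare cluster assignments to the labels.
+ pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
+ print("NMI:", normalized_mutual_info_score(labels, pred))
+ print("ARI:", adjusted_rand_score(labels, pred))
+ ```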
+
+ ## 🏆 Leaderboard
+
+ Track the performance of different embedding models on VocSim:
+
+ ➡️ **[VocSim Leaderboard on Papers With Code](https://paperswithcode.com/dataset/audiosim)** ⬅️
+
+ ---
+
+ ## 💾 Data Format
+
  ### Data Instances

+ Each data point in the dataset includes the audio waveform and associated metadata. All audio is standardized to **16 kHz mono**.
+
  ```python
  {
+     'audio': {
+         'path': None,             # Path may be None if loaded directly
+         'array': array([-0.00024, -0.00048, ..., 0.00012], dtype=float32),  # Waveform
+         'sampling_rate': 16000    # Always 16 kHz
+     },
+     'subset': 'HW1',              # Origin subset ID (e.g., Human Words 1)
+     'speaker': 'speaker_id_abc',  # Speaker, animal, or session ID; 'N/A' if not applicable
+     'label': 'hello'              # Ground-truth class label (word, phone, syllable type, etc.)
+ }
+ ```
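+
+ A minimal loading sketch with the 🤗 `datasets` library (the repository id mirrors the badges above, and the `train` split name is an assumption):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the public portion of VocSim (repository id and split are assumptions).
+ ds = load_dataset("anonymous-submission000/vocsim", split="train")
+
+ sample = ds[0]
+ wave = sample["audio"]["array"]  # float32 waveform, 16 kHz mono
+ print(sample["subset"], sample["speaker"], sample["label"], len(wave))
+
+ # Evaluate one subset at a time, e.g. the 'HW1' human-words subset.
+ hw1 = ds.filter(lambda ex: ex["subset"] == "HW1")
+ ```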