# VocSim: A Benchmark for Zero-Shot Audio Similarity

[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) · [Dataset on the Hugging Face Hub](https://huggingface.co/datasets/anonymous-submission000/vocsim)

**VocSim** is a large-scale benchmark dataset designed to evaluate how well neural audio embeddings generalize to **zero-shot audio similarity tasks**. It challenges models to recognize fine-grained acoustic similarity between sounds they have not been explicitly trained to classify together.

**Repository:** [GitHub Link - Add Upon DOI]
**Paper:** [VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings - Link Upon DOI]()
**Point of Contact:** Anonymous Authors (during review)

---
## 🎯 Dataset Goal: Evaluating Generalization

The core purpose of VocSim is to provide a standardized testbed for assessing how well learned audio representations can capture the notion of "sameness" between sounds based purely on their acoustic properties, without relying on supervised training signals for the specific similarity task being evaluated.
## 🧩 Dataset Structure: Diverse Acoustic Challenges

VocSim achieves its goal by aggregating **19 distinct subsets** (15 publicly available, 4 reserved as blind test sets) that introduce significant variability across key acoustic and structural dimensions:

* 🎤 **Sound Source:** Spans human speech (phonemes, words, utterances), intricate birdsong (calls and syllables from multiple species), otter calls, and common environmental events.
* ⏱️ **Duration:** Ranges from fleeting sub-100ms syllables to multi-second recordings.
* 📊 **Class Structure:** Includes subsets with a few well-populated classes (e.g., environmental sounds) alongside subsets with thousands of classes that have only a handful of examples each (e.g., rare words or syllables).
* 🔊 **Acoustic Conditions:** Features both clean, studio-quality recordings (like the TIMIT portions) and challenging, noisy, naturalistic recordings (e.g., AMI meetings, field recordings of animals).
* 🧬 **Intra-class Variation:** Naturally incorporates real-world differences in loudness, speaking/singing rate, speaker identities, animal individuals, and recording environments within the same semantic `label`.

This diversity forces embedding models to learn robust representations that are invariant to nuisance factors while remaining sensitive to the core acoustic characteristics defining a sound class.
### Key Metrics

Performance is measured using metrics sensitive to retrieval quality and cluster coherence; an illustrative sketch of both follows the list:

* **RA@k (Retrieval Accuracy @ k):** The fraction of the *k* retrieved nearest neighbors that share the same `label` as the query sample. *Higher is better.* (For example, RA@1 checks whether the single closest neighbor is correct.)
* **CSCF (Cluster Separation Confusion Fraction):** Estimates the proportion of samples that are, on average, closer to embeddings from *other* classes than to embeddings from their *own* class, reflecting how well separated the class clusters are in the embedding space. *Lower is better.*

(See Section 5 of the paper for detailed definitions and analysis.)
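
For concreteness, here is a minimal NumPy sketch of both metrics, assuming cosine distance and an in-memory embedding matrix. The function names are ours, and the CSCF averaging follows one plausible reading of the definition above; defer to Section 5 of the paper for the authoritative formulation.

```python
import numpy as np

def _cosine_distances(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine distances between all rows of an embedding matrix."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

def retrieval_accuracy_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """RA@k: mean fraction of the k nearest neighbors sharing the query's label."""
    dist = _cosine_distances(embeddings)
    np.fill_diagonal(dist, np.inf)               # a query must not retrieve itself
    neighbors = np.argsort(dist, axis=1)[:, :k]  # indices of the k closest samples
    return float((labels[neighbors] == labels[:, None]).mean())

def cscf(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """CSCF under one plausible reading: the fraction of samples whose mean
    distance to their own class exceeds their mean distance to the nearest
    other class."""
    dist = _cosine_distances(embeddings)
    classes = np.unique(labels)
    confused = 0
    for i in range(len(labels)):
        own = labels == labels[i]
        own[i] = False                           # exclude the sample itself
        if not own.any():                        # singleton class: nothing to compare
            continue
        mean_own = dist[i, own].mean()
        mean_nearest_other = min(dist[i, labels == c].mean() for c in classes if c != labels[i])
        confused += mean_nearest_other < mean_own
    return confused / len(labels)
```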
### Other Potential Uses

Beyond zero-shot similarity, VocSim embeddings can be evaluated on:

* Unsupervised clustering (e.g., using metrics like NMI or ARI); see the sketch after this list.
* Downstream supervised classification, with the embeddings used as input features.
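
As an illustration of the clustering route, the sketch below runs k-means on placeholder embeddings and scores the partition with NMI and ARI; in practice, swap in real embeddings and the dataset's `label` column.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Placeholder inputs: `embeddings` would come from the model under evaluation,
# `labels` from the dataset's `label` column.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 64)).astype(np.float32)
labels = rng.integers(0, 10, size=200)

# Cluster into as many groups as there are ground-truth classes, then score.
pred = KMeans(n_clusters=len(np.unique(labels)), n_init=10, random_state=0).fit_predict(embeddings)
print("NMI:", normalized_mutual_info_score(labels, pred))
print("ARI:", adjusted_rand_score(labels, pred))
```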
## 🏆 Leaderboard

Track the performance of different embedding models on VocSim:

➡️ **[VocSim Leaderboard on Papers With Code](https://paperswithcode.com/dataset/audiosim)** ⬅️

---

## 💾 Data Format
### Data Instances
Each data point in the dataset includes the audio waveform and associated metadata. All audio is standardized to **16kHz mono**.
```python
{
    'audio': {
        'path': None,                 # path may be None if loaded directly
        'array': array([-0.00024, -0.00048, ..., 0.00012], dtype=float32),  # waveform
        'sampling_rate': 16000        # always 16kHz
    },
    'subset': 'HW1',                  # origin subset ID (e.g., Human Words 1)
    'speaker': 'speaker_id_abc',      # identifier for speaker, animal, or session; 'N/A' if not applicable
    'label': 'hello'                  # ground-truth class label (word, phone, syllable type, etc.)
}
```
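
A minimal loading sketch with the 🤗 `datasets` library. The `train` split name is an assumption here; check the dataset viewer for the actual configurations and splits.

```python
from datasets import load_dataset

# Stream the dataset so the full audio archive is not downloaded up front.
# The split name "train" is assumed; adjust to the card's actual splits.
ds = load_dataset("anonymous-submission000/vocsim", split="train", streaming=True)

sample = next(iter(ds))
print(sample["subset"], sample["label"], sample["audio"]["sampling_rate"])
```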
|