anonymous-submission000 committed · Commit 56843d8 · verified · 1 parent: 1aca5a3

Update README.md

Files changed (1):
  1. README.md +54 -59
README.md CHANGED
@@ -3,6 +3,18 @@ license: cc-by-4.0
  size_categories:
  - 100K<n<1M
  pretty_name: VocSim
  dataset_info:
  features:
  - name: audio
@@ -17,88 +29,71 @@ dataset_info:
  dtype: string
  splits:
  - name: train
- num_bytes: 5452179735.8840885
  num_examples: 114641
- download_size: 5500616162
- dataset_size: 5452179735.8840885
  configs:
  - config_name: default
  data_files:
  - split: train
  path: data/train-*
- tags:
- - audio-retrieval
- - zero-shot
- - unsupervised-clustering-generalization
  ---
- # VocSim: A Benchmark for Zero-Shot Audio Similarity

  [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
- [![Dataset Size](https://img.shields.io/badge/Dataset%20Size-5.45%20GB-orange)](https://huggingface.co/datasets/anonymous-submission000/vocsim)
- [![Number of Examples](https://img.shields.io/badge/Examples-114k-brightgreen)](https://huggingface.co/datasets/anonymous-submission000/vocsim)
-
- **VocSim** is a large-scale benchmark dataset meticulously crafted to evaluate the generalization capabilities of neural audio embeddings for **zero-shot audio similarity tasks**. It challenges models to recognize fine-grained acoustic similarity between sounds they haven't been explicitly trained to classify together.

- **[VocSim Leaderboard 🏆](https://paperswithcode.com/dataset/audiosim)**

- **Repository:** [GitHub Link - Add Upon DOI]

- **Paper:** [VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings - Link Upon DOI]()

- **Point of Contact:** Anonymous Authors (initially)

  ---

- ## 🎯 Dataset Goal: Evaluating Generalization
-
- The core purpose of VocSim is to provide a standardized testbed for assessing how well learned audio representations can capture the notion of "sameness" between sounds based purely on their acoustic properties, without relying on supervised training signals for the specific similarity task being evaluated.
-
- ## 🧩 Dataset Structure: Diverse Acoustic Challenges
-
- VocSim achieves its goal by aggregating **19 distinct subsets** (15 publicly available, 4 reserved as blind test sets) that introduce significant variability across key acoustic and structural dimensions:
-
- * 🎤 **Sound Source:** Spans human speech (phonemes, words, utterances), intricate birdsong (calls, syllables from multiple species), otter calls, and common environmental events.
- * ⏱️ **Duration:** Ranges from fleeting sub-100ms syllables to multi-second recordings.
- * 📊 **Class Structure:** Includes subsets with a few well-populated classes (e.g., environmental sounds) alongside subsets with thousands of classes having only a handful of examples each (e.g., rare words or syllables).
- * 🔊 **Acoustic Conditions:** Features both clean, studio-quality recordings (like TIMIT portions) and challenging, noisy, naturalistic recordings (e.g., AMI meetings, field recordings of animals).
- * 🧬 **Intra-class Variation:** Naturally incorporates real-world differences in loudness, speaking/singing rate, speaker identities, animal individuals, and recording environments within the same semantic `label`.
-
- This diversity forces embedding models to learn robust representations that are invariant to nuisance factors while remaining sensitive to the core acoustic characteristics defining a sound class.
-
- ### Key Metrics
-
- Performance is measured using metrics sensitive to retrieval quality and cluster coherence:

- * **RA@k (Retrieval Accuracy @k):** The fraction of the *k* retrieved nearest neighbors that share the same `label` as the query sample. *Higher is better.* (e.g., RA@1 checks if the single closest neighbor is correct).
- * **CSCF (Cluster Separation Confusion Fraction):** Estimates the proportion of samples that are, on average, closer to embeddings from *other* classes than to embeddings from their *own* class. Reflects how well-separated the class clusters are in the embedding space. *Lower is better.*

- (See Section 5 of the paper for detailed definitions and analysis.)
-
- ### Other Potential Uses
-
- Beyond zero-shot similarity, VocSim embeddings can be evaluated on:
-
- * Unsupervised Clustering (e.g., using metrics like NMI or ARI).
- * As input features for downstream supervised classification tasks.

  ---

- ## 💾 Data Format
-
- ### Data Instances
-
- Each data point in the dataset includes the audio waveform and associated metadata. All audio is standardized to **16kHz mono**.

  ```python
  {
-   'audio': {
-     'path': None,  # Path may be None if loaded directly
-     'array': array([-0.00024, -0.00048, ..., 0.00012], dtype=float32),  # Waveform
-     'sampling_rate': 16000  # Always 16kHz
-   },
-   'subset': 'HW1',  # Origin subset ID (e.g., Human Words 1)
-   'speaker': 'speaker_id_abc',  # Identifier for speaker, animal, or session. 'N/A' if not applicable.
-   'label': 'hello'  # Ground truth class label (word, phone, syllable type, etc.)
  }
- ```
 
  size_categories:
  - 100K<n<1M
  pretty_name: VocSim
+ tags:
+ - audio
+ - audio-similarity
+ - zero-shot-learning
+ - representation-learning
+ - embedding-evaluation
+ - unsupervised-learning
+ - speech
+ - environmental-sounds
+ - animal-vocalizations
+ - benchmark
+ paperswithcode_id: audiosim
  dataset_info:
  features:
  - name: audio

  dtype: string
  splits:
  - name: train
+ num_bytes: 5452179735 # ~5.45 GB
  num_examples: 114641
+ download_size: 5500616162 # ~5.5 GB
+ dataset_size: 5452179735 # ~5.45 GB
  configs:
  - config_name: default
  data_files:
  - split: train
  path: data/train-*

  ---
 
+ # VocSim: Zero-Shot Audio Similarity Benchmark
  [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
+ [![Size](https://img.shields.io/badge/Size-5.45%20GB-orange)](https://huggingface.co/datasets/anonymous-submission000/vocsim)
+ [![Examples](https://img.shields.io/badge/Examples-114k-brightgreen)](https://huggingface.co/datasets/anonymous-submission000/vocsim)

+ **VocSim** evaluates how well neural audio embeddings generalize to **zero-shot audio similarity**: recognizing fine-grained acoustic similarity between sounds without training on the specific similarity task.

+ **[Leaderboard](https://paperswithcode.com/dataset/audiosim)**

+ **Paper:** [Link Upon DOI]()

+ **Repository:** [Link Upon DOI]()

  ---

+ ## Key Features
+ * **Diverse Sources:** Human speech (phones, words, utterances), birdsong, otter calls, environmental sounds.
+ * **Varied Conditions:** Spans clean to noisy recordings, short (<100ms) to long durations, few to many classes per subset.
+ * **Standardized:** All audio is 16kHz mono.

+ ## Task & Evaluation
+ * **Primary Task:** Zero-Shot Audio Similarity Retrieval.
+ * **Metrics:**
+   * **RA@k (Retrieval Accuracy @k):** the fraction of the *k* nearest neighbors that share the query's `label` (*higher is better*).
+   * **CSCF (Cluster Separation Confusion Fraction):** the estimated proportion of samples that are, on average, closer to embeddings from other classes than to their own (*lower is better*).
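As an illustration of the retrieval metric, RA@k can be sketched in a few lines of numpy. This is a minimal version assuming cosine similarity and self-exclusion; the exact evaluation protocol is defined in the paper.

```python
import numpy as np

def retrieval_accuracy_at_k(embeddings, labels, k=1):
    """RA@k: mean fraction of each query's k nearest neighbors
    (excluding the query itself) that share the query's label."""
    X = np.asarray(embeddings, dtype=np.float64)
    y = np.asarray(labels)
    # Cosine similarity via L2-normalised dot products.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-matches
    # Indices of the k most similar neighbours per query.
    nn = np.argsort(-sim, axis=1)[:, :k]
    matches = (y[nn] == y[:, None])
    return matches.mean()

# Toy example: two well-separated classes.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array(["a", "a", "b", "b"])
print(retrieval_accuracy_at_k(emb, lab, k=1))  # 1.0 for this toy data
```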
  ---

+ ## Data Format

  ```python
  {
+   'audio': {'array': array([...], dtype=float32), 'sampling_rate': 16000},
+   'subset': 'HW1',      # Origin identifier
+   'speaker': 'spk_id',  # Speaker/Animal/Source ID or 'N/A'
+   'label': 'hello'      # Ground truth class for similarity
  }
+ ```
+
+ Train split: 114,641 public examples from 15 subsets for evaluation.
+
+ Blind Test Sets: 4 additional subsets held out privately.
+
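To illustrate the schema above, here is a synthetic sample with the card's field names (the values are made up) and how basic properties follow from it:

```python
import numpy as np

# Synthetic sample matching the card's schema (values are illustrative).
sample = {
    "audio": {
        "array": np.zeros(16000, dtype=np.float32),  # 1 second of silence
        "sampling_rate": 16000,                      # always 16 kHz mono
    },
    "subset": "HW1",
    "speaker": "spk_id",
    "label": "hello",
}

# Duration in seconds follows directly from the waveform and sampling rate.
duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{sample['subset']}/{sample['label']}: {duration:.2f}s")  # HW1/hello: 1.00s
```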
+ ## Citation
+ ```bibtex
+ @inproceedings{vocsim_authors_2025,
+   title={VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings},
+   author={Anonymous Authors},
+   booktitle={Conference/Journal},
+   year={2025},
+   url={[Link to paper upon DOI]}
+ }
+ ```
+ ## License
+
+ CC BY 4.0 (Creative Commons Attribution 4.0 International).