pretty_name: VocSim
size_categories:
- 100K<n<1M
---
# VocSim: Zero-Shot Audio Similarity Benchmark

## Dataset Description

**Repository:** [LINK UPON DOI]
**Paper:** [VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings (LINK UPON DOI)]()
**Point of Contact:** Anonymous at the moment

### Overview

**VocSim** is a large-scale benchmark dataset designed to evaluate the generalization capabilities of neural audio embeddings for **zero-shot audio similarity**.

The dataset is intentionally structured into distinct subsets (19 in total: 15 available here, with 4 reserved as a blind test set) to represent variability in:

* **Sound Source:** Human speech (phonemes, words, utterances), birdsong (calls, syllables from multiple species), otter calls, and environmental events.
* **Duration:** From <100 ms syllables to multi-second utterances.
* **Class Structure:** Subsets range from a few well-populated classes to thousands of classes with few examples each.
* **Acoustic Conditions:** Clean recordings (e.g., TIMIT) contrasted with noisy, naturalistic recordings (e.g., AMI meetings, field recordings).
* **Intra-class Variation:** Natural differences in loudness, rate, and speaker/animal identity within classes.

### Supported Tasks and Leaderboards

The primary intended task is **zero-shot audio similarity evaluation**. Given an audio sample, the goal is to retrieve other samples with the same `label` (representing the same underlying sound class, e.g., word, syllable type) based solely on comparing their embeddings using a distance metric (e.g., cosine distance).
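
As a rough sketch of this retrieval setup (the function name and toy embeddings below are illustrative, not part of the dataset or an official implementation), cosine-distance nearest-neighbor retrieval can be written as:

```python
import numpy as np

def cosine_knn(embeddings, query_idx, k=5):
    """Return indices of the k nearest neighbors of one sample by cosine distance."""
    # Normalize rows so cosine distance reduces to 1 minus a dot product.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dists = 1.0 - normed @ normed[query_idx]
    dists[query_idx] = np.inf  # exclude the query itself from retrieval
    return np.argsort(dists)[:k]

# Toy example: four 2-D embeddings; samples 0/1 and 2/3 point in similar directions.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(cosine_knn(emb, query_idx=0, k=2))  # → [1 3]
```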

Performance is measured using:

* **RA@k (Retrieval Accuracy @k):** Measures the fraction of the k nearest neighbors that belong to the same class as the query sample. Higher is better.
* **CSCF (Cluster Separation Confusion Fraction):** Measures the fraction of samples that are, on average, closer to samples from *other* classes than to samples from their *own* class. Lower is better.
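
A minimal sketch of both metrics, following the one-sentence definitions above (assuming row-wise embeddings and integer labels; function names are illustrative and this is not the benchmark's reference implementation):

```python
import numpy as np

def retrieval_accuracy_at_k(embeddings, labels, k=5):
    """Mean fraction of each sample's k nearest cosine neighbors sharing its label."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dists = 1.0 - normed @ normed.T
    np.fill_diagonal(dists, np.inf)  # a query never retrieves itself
    neighbors = np.argsort(dists, axis=1)[:, :k]
    return float(np.mean(labels[neighbors] == labels[:, None]))

def cscf(embeddings, labels):
    """Fraction of samples closer, on average, to other classes than to their own."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dists = 1.0 - normed @ normed.T
    confused = 0
    for i in range(len(labels)):
        same = labels == labels[i]
        other = ~same          # other[i] is False, so the sample itself is excluded
        same[i] = False        # also exclude it from its own class
        if not same.any() or not other.any():
            continue           # metric undefined for singleton or single-class cases
        if dists[i, other].mean() < dists[i, same].mean():
            confused += 1
    return confused / len(labels)
```

For example, with `emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])` and `labels = np.array([0, 0, 1, 1])`, `retrieval_accuracy_at_k(emb, labels, k=1)` is 1.0 and `cscf(emb, labels)` is 0.0, since each sample's nearest neighbor shares its class.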

The dataset can also be used for evaluating embeddings in downstream tasks like unsupervised clustering, or as features for supervised classification (see Section 5 of the paper).

[Leaderboard upon DOI]

### Data Instances

A typical data point consists of an audio sample and its associated metadata:

```python
{
    'audio': {
        'path': '/path/to/audio.wav',
        'array': array([-0.00024414, -0.00048828, ..., 0.00012207], dtype=float32),
        'sampling_rate': 16000,
    },
    'subset': 'HW1',              # example: Human Words from TIMIT
    'speaker': 'speaker_id_abc',  # or animal ID, or 'N/A'
    'label': 'hello',             # example label (could be a word, phone, class ID)
    'index': 12345,
    'sampling_rate': 16000,
}
```