---
license: cc-by-4.0
size_categories:
- 100K<n<1M
pretty_name: VocSim
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 5452179735
    num_examples: 114641
  download_size: 5500616162
  dataset_size: 5452179735
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- audio
- audio-similarity
- zero-shot-learning
- representation-learning
- embedding-evaluation
- unsupervised-learning
---

# VocSim: A Benchmark for Zero-Shot Audio Similarity

[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) · [Dataset on Hugging Face](https://huggingface.co/datasets/anonymous-submission000/vocsim)

**VocSim** evaluates how well neural audio embeddings generalize to **zero-shot audio similarity**: recognizing fine-grained acoustic similarity between sounds without any similarity-specific training.

**[Leaderboard](https://paperswithcode.com/dataset/audiosim)**

**Paper:** [Link Upon DOI]()

**Repository:** [Link Upon DOI]()

---

## Key Features

* **Diverse Sources:** Human speech (phones, words, utterances), birdsong, otter calls, and environmental sounds.
* **Varied Conditions:** Spans clean to noisy recordings, short (<100 ms) to long durations, and few to many classes per subset.
* **Standardized:** All audio is 16 kHz mono.

## Task & Evaluation

* **Primary Task:** Zero-shot audio similarity retrieval.
* **Metrics:**
  * **RA@k:** Retrieval Accuracy at k (*higher is better*).
  * **CSCF:** Cluster Separation Confusion Fraction (*lower is better*).
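
As a rough illustration of the retrieval metric, here is a minimal RA@k sketch. It assumes RA@k counts a query as correct when any of its k nearest neighbours (by cosine similarity) shares its label; the benchmark's exact definition may differ, so treat this as an approximation rather than the official scorer:

```python
import numpy as np

def retrieval_accuracy_at_k(embeddings, labels, k=5):
    """Fraction of queries whose k nearest neighbours (cosine similarity,
    self-match excluded) contain at least one item with the same label."""
    X = np.asarray(embeddings, dtype=np.float64)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalise rows
    sims = X @ X.T                                    # pairwise cosine similarity
    np.fill_diagonal(sims, -np.inf)                   # a query never retrieves itself
    topk = np.argsort(-sims, axis=1)[:, :k]           # indices of k nearest neighbours
    labels = np.asarray(labels)
    hits = (labels[topk] == labels[:, None]).any(axis=1)
    return float(hits.mean())
```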

---

## Data Format

```python
{
  'audio': {'array': array([...], dtype=float32), 'sampling_rate': 16000},
  'subset': 'HW1',      # Origin identifier
  'speaker': 'spk_id',  # Speaker/animal/source ID, or 'N/A'
  'label': 'hello'      # Ground-truth class for similarity
}
```

**Train split:** 114,641 public examples from 15 subsets, used for evaluation.

**Blind test sets:** 4 additional subsets are held out privately.

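The dataset can be loaded with the Hugging Face `datasets` library. This is a usage sketch; the repository id is taken from the dataset link above, and streaming is used here to avoid downloading all ~5.5 GB of parquet shards up front:

```python
from datasets import load_dataset

# Stream the train split instead of downloading it in full.
ds = load_dataset("anonymous-submission000/vocsim", split="train", streaming=True)

for example in ds.take(2):
    audio = example["audio"]  # dict with 'array' and 'sampling_rate' (16 kHz)
    print(example["subset"], example["speaker"], example["label"], len(audio["array"]))
```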
## Citation

```bibtex
@inproceedings{vocsim_authors_2024,
  title={VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings},
  author={Anonymous Authors},
  booktitle={Conference/Journal},
  year={2025},
  url={[Link to paper upon DOI]}
}
```

## License

CC BY 4.0 (Creative Commons Attribution 4.0 International).