| --- |
| license: cc-by-4.0 |
| task_categories: |
| - audio-classification |
| language: |
| - en |
| pretty_name: A-COAT-2k |
| size_categories: |
| - 1K<n<10K |
| tags: |
| - audio |
| - compositionality |
| - benchmark |
| - dx7 |
| - icassp2026 |
| - zero-shot |
| dataset_info: |
| features: |
| - name: A |
| dtype: |
| audio: |
| sampling_rate: 32000 |
| - name: B |
| dtype: |
| audio: |
| sampling_rate: 32000 |
| - name: C |
| dtype: |
| audio: |
| sampling_rate: 32000 |
| - name: D |
| dtype: |
| audio: |
| sampling_rate: 32000 |
| - name: metadata |
| struct: |
| - name: A |
| list: |
| - name: timbre_label |
| dtype: string |
| - name: pitch_label |
| dtype: string |
| - name: rate_label |
| dtype: string |
| - name: amplitude_label |
| dtype: string |
| - name: C |
| list: |
| - name: timbre_label |
| dtype: string |
| - name: pitch_label |
| dtype: string |
| - name: rate_label |
| dtype: string |
| - name: amplitude_label |
| dtype: string |
| - name: T |
| list: |
| - name: timbre_label |
| dtype: string |
| - name: pitch_label |
| dtype: string |
| - name: rate_label |
| dtype: string |
| - name: amplitude_label |
| dtype: string |
| splits: |
| - name: test |
| num_bytes: 5120679688 |
| num_examples: 2000 |
| download_size: 5121174847 |
| dataset_size: 5120679688 |
| configs: |
| - config_name: default |
| data_files: |
| - split: test |
| path: data/test-* |
| --- |
| |
| # A-COAT-2k |
|
|
[arXiv](https://arxiv.org/abs/2603.13685)
[Code](https://github.com/chuyangchencd/audio-compositionality)
[Companion dataset: a-tre-10k](https://huggingface.co/datasets/chuyangchenn/a-tre-10k)
|
|
| **A**udio **C**ompositional **O**bject **A**lgebra **T**est — 2,000 zero-shot audio |
| quadruples for evaluating whether audio encoders represent multi-source scenes |
| compositionally. **No training required.** |
|
|
| Companion dataset to the ICASSP 2026 paper [*Evaluating Compositional Structure in Audio |
| Representations*](https://arxiv.org/abs/2603.13685). See also the |
| trained-head benchmark [`chuyangchenn/a-tre-10k`](https://huggingface.co/datasets/chuyangchenn/a-tre-10k). |
|
|
| ## Quick start |
|
|
| ```python |
| from datasets import load_dataset |
| |
| ds = load_dataset("chuyangchenn/a-coat-2k", split="test") |
| ex = ds[0] |
| A = ex["A"].get_all_samples().data # torch.Tensor, shape (1, 320000) |
| B = ex["B"].get_all_samples().data # B = A ∪ T |
| C = ex["C"].get_all_samples().data |
| D = ex["D"].get_all_samples().data # D = C ∪ T |
| metadata = ex["metadata"] # {"A": [...], "C": [...], "T": [...]} |
| ``` |
|
|
| ## What's a quadruple? |
|
|
| Each row is a 4-tuple `(A, B, C, D)` where `B = A ∪ T` and `D = C ∪ T` — i.e. the |
| **same** transformation set `T` is applied to two different base scenes. The |
| benchmark score for an encoder `f` is the average over quadruples of: |
|
|
| `A-COAT(A,B,C,D) = cos(f(B) − f(A), f(D) − f(C))` |
|
|
A score of 1 means adding `T` shifts the embedding by the same vector regardless of
the base scene (perfect compositionality). Random encoders score ≈ 0.
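
A minimal sketch of the evaluation loop follows, assuming `encode` is any callable
mapping a `(1, 320000)` waveform tensor to a fixed-size embedding. The function name
`acoat_score` and the `encode` hook are illustrative, not part of this dataset or
the paper's reference code:

```python
import torch
import torch.nn.functional as F

def acoat_score(encode, ds):
    """Mean cosine similarity between f(B) - f(A) and f(D) - f(C)
    over all quadruples in the split."""
    scores = []
    for ex in ds:
        # Decode each role's waveform and embed it with the candidate encoder.
        emb = {k: encode(ex[k].get_all_samples().data) for k in "ABCD"}
        scores.append(F.cosine_similarity(
            emb["B"] - emb["A"], emb["D"] - emb["C"], dim=-1))
    return torch.stack(scores).mean().item()
```

Iterating `ds` row by row decodes one example at a time; for a real encoder you
would typically batch the forward passes.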
|
|
| ## Dataset structure |
|
|
| | Field | Type | Description | |
| |------------|-------------------------------|----------------------------------------------| |
| `A`,`B`,`C`,`D` | `Audio(sampling_rate=32000)` | Mono 32 kHz waveforms, shape `(1, 320000)` (10 s each). |
| | `metadata` | `dict[str, list[dict]]` | Source attributes per role: `A`, `C`, `T`. | |
|
|
| Each source has four discrete attributes (K = 8 classes per attribute): |
|
|
| - **timbre** — `t1`–`t8`: eight DX7 FM synth patches |
| - **pitch** — `p1`–`p8`: MIDI 36–84, linearly binned |
| - **rate** — `r1`–`r8`: 0.2–3.0 Hz, log-binned repetition rate |
| - **amplitude** — `a1`–`a8`: −26 to 0 dB, linearly binned |
|
|
| `A` and `C` each contain 1 source. `T` contains 1–3 sources (varies per quadruple). |
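
To inspect the source attributes, index `metadata` by role; each role maps to a
list with one dict per source. A small example continuing the quick start:

```python
ex = ds[0]
for role in ("A", "C", "T"):
    for src in ex["metadata"][role]:
        # One line per source, e.g. "T t1 p8 r4 a2".
        print(role, src["timbre_label"], src["pitch_label"],
              src["rate_label"], src["amplitude_label"])
```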
|
|
| ## Splits |
|
|
| | Split | # quadruples | |
| |-------|-------------:| |
| | test | 2,000 | |
|
|
| (No train/val — A-COAT is a zero-shot benchmark.) |
|
|
| ## Citation |
|
|
| ```bibtex |
| @inproceedings{chen2026audiocomp, |
| title = {Evaluating Compositional Structure in Audio Representations}, |
| author = {Chen, Chuyang and Steers, Bea and McFee, Brian and Bello, Juan Pablo}, |
| booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, |
| year = {2026}, |
| eprint = {2603.13685}, |
| archivePrefix = {arXiv}, |
| primaryClass = {cs.SD} |
| } |
| ``` |
|
|
| ## License |
|
|
| [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) — free use with attribution. |
|
|