---
license: cc-by-4.0
task_categories:
  - audio-classification
language:
  - en
pretty_name: A-TRE-10k
size_categories:
  - 10K<n<100K
tags:
  - audio
  - compositionality
  - benchmark
  - dx7
  - icassp2026
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 32000
    - name: metadata
      list:
        - name: timbre_label
          dtype: string
        - name: pitch_label
          dtype: string
        - name: rate_label
          dtype: string
        - name: amplitude_label
          dtype: string
  splits:
    - name: test
      num_bytes: 640126866
      num_examples: 1000
    - name: train
      num_bytes: 5121009000
      num_examples: 8000
    - name: val
      num_bytes: 640126266
      num_examples: 1000
  download_size: 6401715241
  dataset_size: 6401262132
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
---

# A-TRE-10k


Audio Tree Reconstruction Error benchmark — 10,000 synthetic audio scenes for evaluating whether audio encoders represent multi-source scenes compositionally.

Companion dataset to the ICASSP 2026 paper Evaluating Compositional Structure in Audio Representations. See also the zero-shot benchmark chuyangchenn/a-coat-2k.

## Quick start

```python
from datasets import load_dataset

ds = load_dataset("chuyangchenn/a-tre-10k", split="train")  # or "val", "test"
ex = ds[0]
samples = ex["audio"].get_all_samples()
waveform = samples.data           # torch.Tensor, shape (1, 320000)
sr = samples.sample_rate          # 32000
metadata = ex["metadata"]         # list of {timbre_label, pitch_label, rate_label, amplitude_label}
```

Streaming (no local download):

```python
ds = load_dataset("chuyangchenn/a-tre-10k", split="test", streaming=True)
for ex in ds.take(5):
    print(ex["audio"].get_all_samples().data.shape)
```

## Dataset structure

Each row is one 10-second 32 kHz mono audio scene plus its source-attribute metadata.

| Field | Type | Description |
|---|---|---|
| `audio` | `Audio(sampling_rate=32000)` | Waveform, shape `(1, 320000)`; peak-normalised mono. |
| `metadata` | `list[dict]` | One entry per source: `{timbre_label, pitch_label, rate_label, amplitude_label}`. |
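Since the dataset audio is already peak-normalised, you may want to apply the same normalisation to any reference or resynthesised audio you compare against it. A minimal sketch (the input array here is made up for illustration):

```python
import numpy as np

def peak_normalise(x: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale a waveform so its maximum absolute sample is 1.0."""
    return x / max(float(np.abs(x).max()), eps)

# Illustrative waveform, not from the dataset:
wave = np.array([0.1, -0.5, 0.25])
normed = peak_normalise(wave)  # max |sample| becomes 1.0
```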

A scene contains N ∈ {1, 2, 3, 4} independent sources, each described by four discrete attributes (K = 8 classes per attribute):

- **timbre** `t1`–`t8`: eight DX7 FM synth patches
- **pitch** `p1`–`p8`: MIDI 36–84, linearly binned
- **rate** `r1`–`r8`: 0.2–3.0 Hz, log-binned repetition rate
- **amplitude** `a1`–`a8`: −26 to 0 dB, linearly binned
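Assuming the label strings follow the `t1`–`t8` / `p1`–`p8` / `r1`–`r8` / `a1`–`a8` pattern above, one way to turn a scene's metadata into 0-based integer class indices (e.g. for building classification targets) is sketched below; the example scene is made up, not taken from the dataset:

```python
ATTRIBUTES = ("timbre", "pitch", "rate", "amplitude")

def labels_to_indices(source: dict) -> dict:
    """Map e.g. {'timbre_label': 't3', ...} to {'timbre': 2, ...} (0-based, K = 8)."""
    return {attr: int(source[f"{attr}_label"][1:]) - 1 for attr in ATTRIBUTES}

# Illustrative two-source scene:
scene = [
    {"timbre_label": "t3", "pitch_label": "p1", "rate_label": "r8", "amplitude_label": "a5"},
    {"timbre_label": "t7", "pitch_label": "p4", "rate_label": "r2", "amplitude_label": "a1"},
]
indices = [labels_to_indices(src) for src in scene]
```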

## Splits

| Split | # scenes |
|---|---|
| train | 8,000 |
| val | 1,000 |
| test | 1,000 |

## Citation

```bibtex
@inproceedings{chen2026audiocomp,
  title         = {Evaluating Compositional Structure in Audio Representations},
  author        = {Chen, Chuyang and Steers, Bea and McFee, Brian and Bello, Juan Pablo},
  booktitle     = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year          = {2026},
  eprint        = {2603.13685},
  archivePrefix = {arXiv},
  primaryClass  = {cs.SD}
}
```

## License

CC BY 4.0 — free use with attribution.