# AudioMosaic: Contrastive Masked Audio Representation Learning
Code: https://github.com/HanxunH/AudioMosaic
Pretrained encoder: hanxunh/AudioMosaic-vit-b16-pretrained
This is the AudioMosaic ViT-B/16 encoder with a linear classifier head trained on AudioSet-20K. The encoder is kept frozen; only the linear probe's parameters are trained.
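The frozen-encoder linear-probing setup described above can be sketched in generic PyTorch. Everything here is a placeholder illustration (a small conv stand-in for the ViT-B/16 encoder, made-up data), not the actual AudioMosaic API:

```python
import torch
import torch.nn as nn

# Stand-in for the ViT-B/16 encoder: patchify-then-pool to a 768-d embedding.
encoder = nn.Sequential(
    nn.Conv2d(1, 768, kernel_size=16, stride=16),  # 16x16 patch embedding
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
probe = nn.Linear(768, 527)  # AudioSet has 527 classes

# Freeze the encoder: only the probe receives gradient updates.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()  # multi-label objective

x = torch.randn(2, 1, 1024, 128)       # fake log-mel batch
y = torch.zeros(2, 527); y[:, 0] = 1.0 # fake multi-hot labels
logits = probe(encoder(x))
criterion(logits, y).backward()
optimizer.step()
```

With the encoder frozen, the probe's mAP directly measures how linearly separable the pretrained representations are.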
| Metric | Value |
|---|---|
| mAP (AS-20K eval) | 29.40 |
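For reference, multi-label mAP is the mean over classes of average precision (AP) on the ranked predictions. A self-contained numpy sketch with random stand-in scores and labels (the reported 29.40 comes from the real AS-20K eval set, not from this toy data):

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: precision averaged at the rank of each positive."""
    order = np.argsort(-scores)          # rank clips by descending score
    hits = labels[order]
    cum_pos = np.cumsum(hits)
    precision = cum_pos / (np.arange(len(hits)) + 1)
    return (precision * hits).sum() / hits.sum()

rng = np.random.default_rng(0)
scores = rng.random((100, 527))                       # sigmoid outputs per clip
labels = (rng.random((100, 527)) < 0.05).astype(int)  # multi-hot targets

# Average AP over classes with at least one positive example
aps = [average_precision(scores[:, c], labels[:, c])
       for c in range(527) if labels[:, c].sum() > 0]
mAP = float(np.mean(aps))
```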
Usage:

```python
import sys, torch
from huggingface_hub import snapshot_download

# Download the release and import its vendored loader
local_dir = snapshot_download("hanxunh/AudioMosaic-vit-b16-linear-prob-as20k")
sys.path.insert(0, local_dir)
from load_model import load_classifier

model = load_classifier(device="cuda")

# Forward a log-mel spectrogram batch of shape [B, 1, 1024, 128]
fbank = torch.randn(2, 1, 1024, 128).cuda()
with torch.no_grad():
    logits = model(fbank)      # [B, 527]
probs = logits.sigmoid()       # multi-label probabilities
```
The release contains:

- `model.safetensors`: probe weights
- `config.json`: architecture hyperparameters
- `modeling.py`: vendored model architecture (no need to install AudioMosaic)
- `load_model.py`: convenience loader

Required dependencies: `torch`, `timm`, `torchlibrosa`, `safetensors`, `huggingface_hub`.
```bibtex
@inproceedings{huang2026audiomosaic,
  title={AudioMosaic: Contrastive Masked Audio Representation Learning},
  author={Hanxun Huang and Qizhou Wang and Xingjun Ma and Cihang Xie and Christopher Leckie and Sarah Erfani},
  booktitle={ICML},
  year={2026}
}
```