# MusicPrint Embeddings
MERT-v1-95M audio fingerprint embeddings for 6,839 Billboard Hot 100 songs (1920s--2020s). Contains k-means centroids and query windows for compression experiments.
## Dataset Description
This dataset provides precomputed embeddings from the MusicPrint project, which investigates whether frozen self-supervised audio models can serve as effective audio fingerprinting encoders without any fine-tuning.
Using MERT-v1-95M (a HuBERT-based music transformer), each song is split into overlapping 5-second windows, encoded into 768-dimensional vectors, and then clustered via k-means to produce 10 representative centroids per song. An additional 10 random query windows per song are saved for evaluation.
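The per-song clustering step can be sketched as follows. This is a minimal NumPy Lloyd's k-means on stand-in data, not the repository's implementation; the window count (~175) and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for real MERT embeddings of one song's ~175 five-second windows.
windows = rng.standard_normal((175, 768)).astype(np.float32)

def kmeans(x, k=10, iters=20, seed=0):
    """Minimal Lloyd's k-means: reduce a song's windows to k centroids."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each window to its nearest centroid (squared Euclidean distance).
        dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Recompute each centroid as the mean of its assigned windows.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(0)
    return centroids

centroids = kmeans(windows, k=10)
print(centroids.shape)  # (10, 768) -- one row per stored centroid
```

At k=10 and 768 float32 dims, this yields the ~30 KB per song quoted in the results table below.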
- **Paper:** PAPER.md
- **Code:** github.com/alainbrown/musicprint
## Key Results
| Config | Storage/song | Recall | Index size @ 10M songs |
|---|---|---|---|
| Frozen MERT, k=10 centroids, float32 | 30 KB | 96.6% | 286 GB |
| + PCA 256 + binary hashing | 320 B | 96.5% | 3.0 GB |
| + PCA 128 + binary hashing | 160 B | 92.0% | 1.5 GB |
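The compressed rows above combine PCA with sign-based binary hashing. A minimal sketch on stand-in data (the real projection would be fit on the full 68,390 × 768 centroid matrix; the SVD-based PCA here is an assumption about the method, not the repository's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the corpus centroid matrix (real shape: 68,390 x 768 float32).
X = rng.standard_normal((1000, 768)).astype(np.float32)

# PCA to 256 dims: center, then project onto the top right-singular vectors.
mean = X.mean(0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
proj = Vt[:256].T                         # (768, 256) projection matrix
reduced = (X - mean) @ proj               # (1000, 256)

# Binary hashing: keep only the sign of each component, packed into bits.
codes = np.packbits(reduced > 0, axis=1)  # (1000, 32) uint8 -> 32 bytes per vector

per_song = codes.shape[1] * 10            # 10 centroids per song
print(per_song)  # 320 bytes per song, matching the table above
```

256 bits per centroid × 10 centroids = 320 B per song; halving the PCA dimension to 128 gives the 160 B row.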
## Contents
The file `experiment_cache.pt` is a PyTorch checkpoint (`torch.save` format) containing:
| Key | Shape / Type | Description |
|---|---|---|
| `centroids` | Tensor (68,390 × 768) | 10 k-means centroids per song (float32, 768-dim MERT embeddings) |
| `centroid_ids` | Tensor (68,390,) | Song index for each centroid (maps to `song_names`) |
| `queries` | Tensor (68,390 × 768) | 10 random 5-second query windows per song |
| `query_ids` | Tensor (68,390,) | Song index for each query |
| `song_names` | list[str] | 6,839 file paths (relative to the source music directory) |
| `n_songs` | int | 6,839 |
| `k_centroids` | int | 10 |
| `n_queries_per_song` | int | 10 |
| `skipped` | int | 127 (songs excluded due to decode errors) |
## Loading the Data
```python
import torch

cache = torch.load("experiment_cache.pt", weights_only=False)

centroids = cache["centroids"]        # (68390, 768)
centroid_ids = cache["centroid_ids"]  # (68390,)
queries = cache["queries"]            # (68390, 768)
query_ids = cache["query_ids"]        # (68390,)
song_names = cache["song_names"]      # list of 6839 paths

print(f"Songs: {cache['n_songs']}")
print(f"Centroids per song: {cache['k_centroids']}")
print(f"Queries per song: {cache['n_queries_per_song']}")
print(f"Skipped (decode errors): {cache['skipped']}")
```
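Once loaded, the queries can be matched against the centroid index. The sketch below uses cosine similarity with each query voting for the song owning its nearest centroid; the repository's exact scoring may differ, and the tensors here are small stand-ins rather than the cached data.

```python
import torch

torch.manual_seed(0)
# Stand-in data; replace with cache["centroids"], cache["centroid_ids"], etc.
n_songs, k, d = 50, 10, 768
centroids = torch.randn(n_songs * k, d)
centroid_ids = torch.arange(n_songs).repeat_interleave(k)
queries = centroids[::k] + 0.01 * torch.randn(n_songs, d)  # one noisy query per song
query_ids = torch.arange(n_songs)

# L2-normalize so the dot product equals cosine similarity.
c = torch.nn.functional.normalize(centroids, dim=1)
q = torch.nn.functional.normalize(queries, dim=1)

nearest = (q @ c.T).argmax(dim=1)   # index of the most similar centroid
predicted = centroid_ids[nearest]   # song that centroid belongs to
recall = (predicted == query_ids).float().mean()
print(f"Top-1 recall: {recall:.3f}")
```

On this toy data the queries are near-copies of their own centroids, so recall is essentially perfect; the table above reports the real figures on the full corpus.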
## Reproducing from Source Audio
If you have the source audio files:
```shell
git clone https://github.com/alainbrown/musicprint
cd musicprint
# Place MP3/FLAC/WAV files in the music/ directory
docker compose build training
docker compose run --rm training python experiments.py
```
The first run encodes all songs through MERT (~6 hours on an RTX 2000 Ada 16GB) and caches results. Subsequent runs load the cache and run compression experiments in seconds.
## Encoder Details
- Model: m-a-p/MERT-v1-95M (frozen, no fine-tuning)
- Input: 5-second audio clips at 24 kHz
- Preprocessing: Zero mean, unit variance normalization
- Pooling: Mean pool last hidden state across sequence dimension
- Output: 768-dimensional embedding per window
- Clustering: k-means with k=10 per song (reduces ~175 windows to 10 centroids)
- Hardware: NVIDIA RTX 2000 Ada (16 GB VRAM), encoding took ~6 hours for full corpus
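The windowing, normalization, and pooling steps above can be sketched as follows. The hop length (2.5 s) and the stand-in model output are assumptions for illustration; the real pipeline feeds the normalized windows through frozen MERT-v1-95M.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 24_000                                               # MERT expects 24 kHz input
audio = rng.standard_normal(sr * 30).astype(np.float32)   # stand-in 30 s waveform

# Split into overlapping 5-second windows (2.5 s hop is an assumption;
# the exact overlap used by the repo is not stated here).
win, hop = 5 * sr, int(2.5 * sr)
windows = np.stack([audio[i:i + win]
                    for i in range(0, len(audio) - win + 1, hop)])

# Per-window zero-mean / unit-variance normalization, as listed above.
windows = (windows - windows.mean(1, keepdims=True)) / \
          (windows.std(1, keepdims=True) + 1e-8)

# hidden = mert(windows).last_hidden_state  # (n_windows, seq_len, 768) from frozen MERT
hidden = rng.standard_normal((len(windows), 374, 768))    # stand-in for the model output
embeddings = hidden.mean(axis=1)                          # mean pool over sequence dim
print(embeddings.shape)  # (11, 768) for this 30 s clip
```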
## Corpus
6,966 songs from the Billboard Hot 100 archives spanning 1920 to the 2020s. 127 songs were excluded due to decode errors, leaving 6,839 in the dataset. The corpus covers a wide range of genres, recording qualities, and production styles.
Note: This dataset contains only the computed embeddings, not the source audio.
## License
MIT -- see LICENSE.
## Citation
```bibtex
@misc{brown2025musicprint,
  title={Neural Audio Fingerprinting with Frozen Self-Supervised Models},
  author={Alain Brown},
  year={2025},
  url={https://github.com/alainbrown/musicprint}
}
```