---
library_name: symupe
license: cc-by-nc-sa-4.0
datasets:
- loubb/aria-midi
pipeline_tag: feature-extraction
tags:
- music
- piano
- midi
- pre-training
- transformer
- MLM
---
# SyMuPe: Aria-MIDI MLM Backbone
Aria-MIDI-MLM is a 12-layer Transformer encoder designed for symbolic piano music feature extraction. It was pre-trained with a Multi-Mask Language Modeling (mMLM) objective on 371,053 diverse piano MIDI files from the deduplicated subset of the Aria-MIDI dataset.
This model serves as the foundation for the MIDI Quality Classifier presented in the article PianoCoRe: Combined and Refined Piano MIDI Dataset.
- TISMIR: https://doi.org/10.5334/tismir.333
- arXiv: https://arxiv.org/abs/2605.06627
- SyMuPe: https://github.com/ilya16/SyMuPe
- PianoCoRe: https://github.com/ilya16/PianoCoRe
- Dataset: https://huggingface.co/datasets/loubb/aria-midi
## Architecture
- Type: Transformer Encoder
- Configuration: 12 layers, a hidden size of 768, and 12 attention heads.
- Objective: Multi-Mask Language Modeling (mMLM).
- Inputs (score-agnostic): Pitch, Velocity, TimeShift, Duration, and absolute TimePosition.
- Training: Pre-trained for 600,000 steps on 512-note sequences sampled from the deduplicated Aria-MIDI corpus.
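The mMLM objective can be thought of as masked language modeling applied to several parallel note-attribute streams at once, with each stream masked independently so the encoder must reconstruct missing attributes from context. Below is a minimal, illustrative sketch of such a masking step; the stream names, vocabulary ranges, and masking rate are assumptions for illustration, not the SyMuPe training code:

```python
import torch

# Illustrative multi-mask (mMLM) masking step. All names, value ranges, and
# the 15% masking rate are assumptions, not the SyMuPe implementation.
seq_len, mask_prob, mask_id, ignore_index = 512, 0.15, 0, -100

streams = {
    "pitch":      torch.randint(21, 109, (seq_len,)),  # toy MIDI pitch ids
    "velocity":   torch.randint(1, 128, (seq_len,)),
    "time_shift": torch.randint(0, 100, (seq_len,)),
    "duration":   torch.randint(1, 200, (seq_len,)),
}

masked_inputs, targets = {}, {}
for name, tokens in streams.items():
    mask = torch.rand(seq_len) < mask_prob                   # independent mask per stream
    masked_inputs[name] = tokens.masked_fill(mask, mask_id)  # hide masked tokens
    targets[name] = torch.where(mask, tokens, torch.full_like(tokens, ignore_index))

# A 12-layer, 768-dim Transformer encoder would consume the masked streams and
# be trained (e.g. with cross-entropy, ignoring unmasked positions) to recover
# the original attribute values.
```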
## Quick Start
Before using this model, ensure you have the symupe library installed:
```bash
pip install -U symupe
```
Use the following code to embed MIDI files:
```python
import torch

from symupe import AutoEmbedder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the embedder by loading the model and tokenizer directly from the Hub
embedder = AutoEmbedder.from_pretrained("SyMuPe/Aria-MIDI-MLM", device=device)
# model, tokenizer = embedder.model, embedder.tokenizer

# Extract embeddings from a MIDI file
result = embedder("performance.mid", max_seq_len=512, hop_size=256, layer=-1)

# result is a MusicEmbeddingResult(...) containing:
# - midi, seq, embeddings, memory_tokens, token_embeddings,
#   hidden_states, sequences, and window_indices
print(result.embeddings.shape)  # (windows, seq_len, emb_dim)
```
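If a single vector per file is needed (for example, as input features to a downstream classifier), the windowed embeddings can be pooled. The snippet below is a minimal sketch, not part of the symupe API, and assumes `result.embeddings` is a PyTorch tensor with the `(windows, seq_len, emb_dim)` shape printed above:

```python
# Hypothetical follow-up: mean-pool the windowed note embeddings into a single
# file-level feature vector (assumes result.embeddings is a torch.Tensor).
file_vector = result.embeddings.mean(dim=(0, 1))  # (emb_dim,)
print(file_vector.shape)
```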
## License
The model weights are distributed under the CC-BY-NC-SA 4.0 license.
## Citation
If you use this model in your research, please cite:
```bibtex
@inproceedings{borovik2025symupe,
  title     = {{SyMuPe: Affective and Controllable Symbolic Music Performance}},
  author    = {Borovik, Ilya and Gavrilev, Dmitrii and Viro, Vladimir},
  year      = {2025},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages     = {10699--10708},
  doi       = {10.1145/3746027.3755871}
}

@article{borovik2026pianocore,
  title   = {{PianoCoRe: Combined and Refined Piano MIDI Dataset}},
  author  = {Borovik, Ilya},
  year    = {2026},
  journal = {Transactions of the International Society for Music Information Retrieval},
  volume  = {9},
  number  = {1},
  pages   = {144--163},
  doi     = {10.5334/tismir.333}
}
```