---
library_name: symupe
license: cc-by-nc-sa-4.0
datasets:
- loubb/aria-midi
pipeline_tag: feature-extraction
tags:
- music
- piano
- midi
- pre-training
- transformer
- MLM
---

# SyMuPe: Aria-MIDI MLM Backbone

**Aria-MIDI-MLM** is a 12-layer Transformer encoder for symbolic piano music feature extraction.
It was pre-trained with a **Multi-Mask Language Modeling (mMLM)** objective on 371,053 diverse piano MIDI files from the deduplicated subset of the [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) dataset.

This model serves as the foundation for the [MIDI Quality Classifier](https://huggingface.co/SyMuPe/MIDI-Quality-Classifier), presented in the article: [**PianoCoRe: Combined and Refined Piano MIDI Dataset**](https://doi.org/10.5334/tismir.333).

- **TISMIR:** https://doi.org/10.5334/tismir.333
- **arXiv:** https://arxiv.org/abs/2605.06627
- **SyMuPe:** https://github.com/ilya16/SyMuPe
- **PianoCoRe:** https://github.com/ilya16/PianoCoRe
- **Dataset:** https://huggingface.co/datasets/loubb/aria-midi

## Architecture

- **Type:** Transformer Encoder
- **Configuration:** 12 layers, 768 hidden dimensions, 12 attention heads.
- **Objective:** Multi-Mask Language Modeling (mMLM).
- **Inputs (score-agnostic):** `Pitch`, `Velocity`, `TimeShift`, `Duration`, absolute `TimePosition`
- **Training:** Pre-trained for 600,000 steps on 512-note sequences sampled from the deduplicated [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) corpus.
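Since files are typically longer than 512 notes, inference slides overlapping windows over the note sequence (the Quick Start below uses `max_seq_len=512` with `hop_size=256`, i.e. 50% overlap). A minimal sketch of this windowing arithmetic, for illustration only (the actual `symupe` windowing logic may differ):

```python
def window_indices(n_notes: int, max_seq_len: int = 512, hop_size: int = 256):
    """Start/end note indices of overlapping windows covering n_notes notes.

    Illustrative sketch only; not the symupe implementation.
    """
    if n_notes <= max_seq_len:
        return [(0, n_notes)]
    starts = range(0, n_notes - max_seq_len + hop_size, hop_size)
    return [(s, min(s + max_seq_len, n_notes)) for s in starts]

print(window_indices(1000))  # [(0, 512), (256, 768), (512, 1000)]
```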

## Quick Start

Before using this model, ensure you have the `symupe` library installed:
```shell
pip install -U symupe
```

Use the following code to embed MIDI files:
```python
import torch
from symupe import AutoEmbedder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build Embedder by loading the model and tokenizer directly from the Hub
embedder = AutoEmbedder.from_pretrained("SyMuPe/Aria-MIDI-MLM", device=device)
# model, tokenizer = embedder.model, embedder.tokenizer

# Extract embeddings from a MIDI file
result = embedder("performance.mid", max_seq_len=512, hop_size=256, layer=-1)
# result is MusicEmbeddingResult(...) containing:
# - midi, seq, embeddings, memory_tokens, token_embeddings, hidden_states, sequences, and window_indices

print(result.embeddings.shape)  # (windows, seq_len, emb_dim)
```
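For downstream tasks such as retrieval or classification, a single fixed-size vector per file is often more convenient than per-window embeddings. A minimal pooling sketch, assuming `result.embeddings` is a `torch.Tensor` of shape `(windows, seq_len, emb_dim)` as printed above (`pool_file_embedding` is a hypothetical helper, not part of the `symupe` API):

```python
import torch

def pool_file_embedding(embeddings: torch.Tensor) -> torch.Tensor:
    """Mean-pool (windows, seq_len, emb_dim) into one (emb_dim,) file vector."""
    return embeddings.mean(dim=(0, 1))

# Dummy tensor shaped like the embedder output: 3 windows of 512 tokens, dim 768
dummy = torch.randn(3, 512, 768)
vec = pool_file_embedding(dummy)
print(vec.shape)  # torch.Size([768])
```

Mean pooling treats all windows equally; with overlapping windows (`hop_size < max_seq_len`), notes in the overlap regions are counted more than once, which is usually acceptable for a coarse file-level descriptor.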

## License

The model weights are distributed under the [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.

## Citation

If you use this model in your research, please cite:

```bibtex
@inproceedings{borovik2025symupe,
  title = {{SyMuPe: Affective and Controllable Symbolic Music Performance}},
  author = {Borovik, Ilya and Gavrilev, Dmitrii and Viro, Vladimir},
  year = {2025},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages = {10699--10708},
  doi = {10.1145/3746027.3755871}
}
```

```bibtex
@article{borovik2026pianocore,
  title = {{PianoCoRe: Combined and Refined Piano MIDI Dataset}},
  author = {Borovik, Ilya},
  year = {2026},
  journal = {Transactions of the International Society for Music Information Retrieval},
  volume = {9},
  number = {1},
  pages = {144--163},
  doi = {10.5334/tismir.333}
}
```