---
library_name: symupe
license: cc-by-nc-sa-4.0
datasets:
- loubb/aria-midi
pipeline_tag: feature-extraction
tags:
- music
- piano
- midi
- pre-training
- transformer
- MLM
---

# SyMuPe: Aria-MIDI MLM Backbone

**Aria-MIDI-MLM** is a 12-layer Transformer encoder for symbolic piano music feature extraction.
It was pre-trained with a **Multi-Mask Language Modeling (mMLM)** objective on 371,053 diverse piano MIDI files from the deduplicated subset of the [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) dataset.

This model serves as the foundation for the [MIDI Quality Classifier](https://huggingface.co/SyMuPe/MIDI-Quality-Classifier), presented in the article [**PianoCoRe: Combined and Refined Piano MIDI Dataset**](https://doi.org/10.5334/tismir.333).

- **SyMuPe Repo:** https://github.com/ilya16/SyMuPe
- **Project Repo:** https://github.com/ilya16/PianoCoRe
- **Dataset:** https://huggingface.co/datasets/loubb/aria-midi

## Architecture

- **Type:** Transformer encoder.
- **Configuration:** 12 layers, hidden size 768, 12 attention heads.
- **Objective:** Multi-Mask Language Modeling (mMLM).
- **Inputs (score-agnostic):** `Pitch`, `Velocity`, `TimeShift`, `Duration`, and absolute `TimePosition`.
- **Training:** pre-trained for 600,000 steps on 512-note sequences sampled from the deduplicated [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) corpus.

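The exact multi-mask sampling scheme used for pre-training is described in the SyMuPe paper; as a rough illustration of the single-mask MLM special case, masking a token sequence might look like the sketch below. The `mask_id`, the 15% masking probability, and the `-100` ignore index are illustrative assumptions, not the model's actual configuration:

```python
import random

def mask_sequence(token_ids, mask_id, mask_prob=0.15, seed=0):
    """BERT-style masking sketch: replace a random subset of tokens with mask_id.

    Returns the masked sequence and per-position targets, where unmasked
    positions are set to -100 so a cross-entropy loss can ignore them.
    """
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            masked.append(mask_id)   # hide the token; the model must predict it
            targets.append(tok)
        else:
            masked.append(tok)       # keep the token; excluded from the loss
            targets.append(-100)
    return masked, targets

tokens = list(range(20))
masked, targets = mask_sequence(tokens, mask_id=999)
print(sum(t == 999 for t in masked), "of", len(tokens), "tokens masked")
```

During pre-training, the model reconstructs the original tokens at the masked positions, which is what makes the learned representations useful for downstream feature extraction.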
## Quick Start

Before using this model, ensure the `symupe` library is installed (`pip install -U symupe`).

```python
import torch
from symupe import AutoEmbedder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the embedder by loading the model and tokenizer directly from the Hub
embedder = AutoEmbedder.from_pretrained("SyMuPe/Aria-MIDI-MLM", device=device)
# model, tokenizer = embedder.model, embedder.tokenizer

# Extract embeddings from a MIDI file
result = embedder("performance.mid", max_seq_len=512, hop_size=256, layer=-1)
# result is a MusicEmbeddingResult(...) containing:
# - midi, seq, embeddings, memory_tokens, token_embeddings,
#   hidden_states, sequences, and window_indices

print(result.embeddings.shape)  # (windows, seq_len, emb_dim)
```

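Since `result.embeddings` has shape `(windows, seq_len, emb_dim)`, a single performance-level vector can be obtained by mean pooling over the window and time axes. This is a minimal sketch using a random tensor in place of real model output; the shapes `(3, 512, 768)` are assumed for illustration:

```python
import torch

# Stand-in for result.embeddings with shape (windows, seq_len, emb_dim)
embeddings = torch.randn(3, 512, 768)

# Mean-pool over windows and time steps to get one vector per performance
clip_embedding = embeddings.mean(dim=(0, 1))
print(clip_embedding.shape)  # torch.Size([768])
```

Such pooled vectors can then be fed to a lightweight downstream head, which is how backbone embeddings are typically reused for classification tasks.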
## License

The model weights are distributed under the [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.

## Citation

If you use this model in your research, please cite:

```bibtex
@inproceedings{borovik2025symupe,
  title     = {{SyMuPe: Affective and Controllable Symbolic Music Performance}},
  author    = {Borovik, Ilya and Gavrilev, Dmitrii and Viro, Vladimir},
  year      = {2025},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages     = {10699--10708},
  doi       = {10.1145/3746027.3755871}
}
```

```bibtex
@article{borovik2026pianocore,
  title   = {{PianoCoRe: Combined and Refined Piano MIDI Dataset}},
  author  = {Borovik, Ilya},
  year    = {2026},
  journal = {Transactions of the International Society for Music Information Retrieval},
  volume  = {9},
  number  = {1},
  pages   = {144--163},
  doi     = {10.5334/tismir.333}
}
```