# Cinematic Music Descriptor – Module 3: Music Descriptor Heads
A multi-head MLP that takes Module 2 context vectors and predicts all music descriptors for a scene: tempo, tonality, orchestration, and more.
## Label Schema

### Regression
- tempo_bpm: 45–170 BPM
- musical_valence: -0.93 to 0.68
### Classification
- tonality: ['atonal', 'major', 'minor']
- harmonic_style: ['atonal', 'chromatic', 'cluster', 'diatonic', 'modal', 'pentatonic', 'whole_tone']
- dynamic_shape_m4: ['crescendo', 'diminuendo', 'flat', 'subito_forte', 'subito_piano', 'sustained', 'swell', 'terraced']
- rhythm_style: ['drive', 'off', 'ostinato', 'pulse', 'rubato', 'sparse']
- texture: ['ambient', 'chamber', 'full', 'hybrid', 'solo']
### Multi-label
- orchestration: ['ambient_pad', 'brass', 'choir', 'electronic', 'ethnic', 'guitar', 'harp', 'organ', 'percussion', 'piano', 'solo_voice', 'strings', 'synth', 'woodwinds']
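The schema above maps naturally onto one output head per descriptor. A minimal PyTorch sketch, assuming Module 2 emits 768-dim context vectors (roberta-base's hidden size); the trunk width and layer layout are illustrative assumptions, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class DescriptorHeads(nn.Module):
    """Hypothetical multi-head MLP matching the label schema above."""

    def __init__(self, d_context=768):
        super().__init__()
        # Shared trunk (width 256 is an assumption)
        self.trunk = nn.Sequential(nn.Linear(d_context, 256), nn.ReLU())
        # Regression heads: one scalar each
        self.tempo_bpm = nn.Linear(256, 1)
        self.musical_valence = nn.Linear(256, 1)
        # Single-label classification heads: one logit per class
        self.tonality = nn.Linear(256, 3)
        self.harmonic_style = nn.Linear(256, 7)
        self.dynamic_shape_m4 = nn.Linear(256, 8)
        self.rhythm_style = nn.Linear(256, 6)
        self.texture = nn.Linear(256, 5)
        # Multi-label head: one independent logit per instrument family
        self.orchestration = nn.Linear(256, 14)

    def forward(self, ctx):
        h = self.trunk(ctx)
        return {
            "tempo_bpm": self.tempo_bpm(h).squeeze(-1),
            "musical_valence": self.musical_valence(h).squeeze(-1),
            "tonality": self.tonality(h),
            "harmonic_style": self.harmonic_style(h),
            "dynamic_shape_m4": self.dynamic_shape_m4(h),
            "rhythm_style": self.rhythm_style(h),
            "texture": self.texture(h),
            "orchestration": self.orchestration(h),
        }

out = DescriptorHeads()(torch.randn(2, 768))
```

The head sizes (3, 7, 8, 6, 5, 14) follow directly from the class lists in the schema; the single-label heads produce softmax logits while the orchestration head produces per-label sigmoid logits.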
## Training Details
- Base model: roberta-base
- Dataset: ~11,000 scenes from 60–80 movies
- Framework: PyTorch + HuggingFace Transformers
- Logging: Weights & Biases
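Training a model with mixed regression, single-label, and multi-label targets typically combines per-head losses. A hedged sketch, assuming MSE for regression, cross-entropy for the single-label heads, and binary cross-entropy for orchestration, with equal weighting (the actual loss weights and recipe are not stated in this card):

```python
import torch
import torch.nn as nn

# Assumed per-task losses; equal weighting is an illustrative choice.
mse = nn.MSELoss()
ce = nn.CrossEntropyLoss()
bce = nn.BCEWithLogitsLoss()

def multitask_loss(outputs, targets):
    loss = mse(outputs["tempo_bpm"], targets["tempo_bpm"])
    loss = loss + mse(outputs["musical_valence"], targets["musical_valence"])
    for head in ["tonality", "harmonic_style", "dynamic_shape_m4",
                 "rhythm_style", "texture"]:
        loss = loss + ce(outputs[head], targets[head])
    loss = loss + bce(outputs["orchestration"], targets["orchestration"])
    return loss

# Toy batch of 2 with random values, just to exercise the shapes
outputs = {"tempo_bpm": torch.randn(2), "musical_valence": torch.randn(2),
           "tonality": torch.randn(2, 3), "harmonic_style": torch.randn(2, 7),
           "dynamic_shape_m4": torch.randn(2, 8),
           "rhythm_style": torch.randn(2, 6), "texture": torch.randn(2, 5),
           "orchestration": torch.randn(2, 14)}
targets = {"tempo_bpm": torch.randn(2), "musical_valence": torch.randn(2),
           "tonality": torch.randint(0, 3, (2,)),
           "harmonic_style": torch.randint(0, 7, (2,)),
           "dynamic_shape_m4": torch.randint(0, 8, (2,)),
           "rhythm_style": torch.randint(0, 6, (2,)),
           "texture": torch.randint(0, 5, (2,)),
           "orchestration": torch.randint(0, 2, (2, 14)).float()}
loss = multitask_loss(outputs, targets)
```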
## Usage
```python
import torch
from huggingface_hub import hf_hub_download

# Download weights
path = hf_hub_download(
    repo_id="suyashnpande/cinematic-music-descriptor-module3",
    filename="module3.pt",
)

# Load the checkpoint (assumed here to be a torch-serialized state dict)
state_dict = torch.load(path, map_location="cpu")
```
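Once the model produces raw outputs, each head is decoded differently: regression values are used as-is, single-label heads via argmax, and the multi-label orchestration head via a per-label sigmoid threshold. A sketch using the label lists from the schema above; the 0.5 threshold and the `decode` helper are illustrative assumptions, not part of the released code:

```python
import torch

TONALITY = ['atonal', 'major', 'minor']
ORCHESTRATION = ['ambient_pad', 'brass', 'choir', 'electronic', 'ethnic',
                 'guitar', 'harp', 'organ', 'percussion', 'piano',
                 'solo_voice', 'strings', 'synth', 'woodwinds']

def decode(outputs):
    """Hypothetical decoder for a subset of the heads."""
    tonality = TONALITY[outputs["tonality"].argmax(-1).item()]
    # Multi-label: any instrument whose sigmoid probability exceeds 0.5
    active = torch.sigmoid(outputs["orchestration"]) > 0.5
    instruments = [n for n, on in zip(ORCHESTRATION, active.tolist()) if on]
    return {"tempo_bpm": outputs["tempo_bpm"].item(),
            "tonality": tonality,
            "orchestration": instruments}

# Fake logits standing in for real model output
fake = {"tempo_bpm": torch.tensor(120.0),
        "tonality": torch.tensor([0.1, 2.0, 0.3]),
        "orchestration": torch.tensor([3.0] + [-3.0] * 12 + [3.0])}
result = decode(fake)
# result["tonality"] is "major"; orchestration keeps only the two
# instruments with positive logits: ambient_pad and woodwinds
```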
## Citation
If you use this model, please cite the project.