P4-12: Melody transcription via generative pre-training
Subjects (starting with primary): Evaluation, datasets, and reproducibility -> MIR tasks ; Domain knowledge -> representations of music ; Domain knowledge -> machine learning/artificial intelligence for music ; MIR tasks -> music transcription and annotation ; Evaluation, datasets, and reproducibility -> novel datasets and use cases
Presented Virtually: 4-minute short-format presentation
Despite the central role that melody plays in music perception, it remains an open challenge in MIR to reliably detect the notes of the melody in an arbitrary music recording. A key challenge in melody transcription is building methods that can handle broad audio spanning diverse instrument ensembles and musical styles; existing strategies work well for some melody instruments or styles but not all. To confront this challenge, we leverage representations from Jukebox (Dhariwal et al. 2020), a generative model of broad music audio, improving performance on melody transcription by 20% relative to conventional spectrogram features. Another obstacle in melody transcription is a lack of training data; we address it by deriving a new dataset containing 50 hours of melody transcriptions from crowdsourced annotations of broad music. The combination of generative pre-training and this new dataset yields performance on melody transcription that is 77% stronger than the strongest available baseline. By pairing our melody transcription approach with solutions for beat detection, key estimation, and chord recognition, we build a system capable of transcribing human-readable lead sheets directly from music audio.
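To make the core idea concrete, the sketch below illustrates (it is not the authors' implementation) how a framewise melody transcription head can be trained on top of precomputed feature sequences, where those features could be either conventional spectrogram frames or activations extracted from a pretrained generative model such as Jukebox. The feature width, the 88-pitch-plus-rest vocabulary, the transformer head, and the class name MelodyTranscriptionHead are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): train a per-frame melody
# classifier on top of precomputed audio features. `features` stands in for
# either spectrogram frames or pretrained generative-model activations;
# dimensions and architecture below are assumed for illustration.
import torch
import torch.nn as nn

N_PITCHES = 89      # assumed vocabulary: 88 pitches + "no melody note"
FEATURE_DIM = 4800  # assumed width of the pretrained-model activations
HIDDEN_DIM = 512


class MelodyTranscriptionHead(nn.Module):
    """Maps a sequence of audio-frame features to per-frame melody-note logits."""

    def __init__(self, feature_dim=FEATURE_DIM, hidden_dim=HIDDEN_DIM,
                 n_pitches=N_PITCHES):
        super().__init__()
        self.proj = nn.Linear(feature_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.classifier = nn.Linear(hidden_dim, n_pitches)

    def forward(self, features):
        # features: (batch, frames, feature_dim)
        x = self.proj(features)
        x = self.encoder(x)
        return self.classifier(x)  # (batch, frames, n_pitches)


if __name__ == "__main__":
    # Dummy batch standing in for feature sequences from 2 clips of 256
    # frames each; real features would come from an audio front end.
    features = torch.randn(2, 256, FEATURE_DIM)
    targets = torch.randint(0, N_PITCHES, (2, 256))  # per-frame melody labels

    model = MelodyTranscriptionHead()
    logits = model(features)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, N_PITCHES), targets.reshape(-1)
    )
    loss.backward()
    print(logits.shape, float(loss))
```

Under this framing, swapping spectrogram frames for pretrained-model representations only changes the input feature dimension; the transcription head and training loop stay the same, which is one way to compare the two feature types on equal footing.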