P5-11: Singing beat tracking with Self-supervised front-end and linear transformers
Heydari, Mojtaba*; Duan, Zhiyao
Subjects (starting with primary): Musical features and properties -> rhythm, beat, tempo ; Musical features and properties -> timbre, instrumentation, and singing voice
Presented Virtually: 4-minute short-format presentation
Tracking the beats of singing voices without musical accompaniment has many applications in music production, automatic song arrangement, and social media interaction.
Its main challenge is the lack of the strong rhythmic and harmonic patterns on which music rhythm analysis generally relies; even for human listeners this can be a difficult task. As a result, existing music beat tracking systems fail to deliver satisfactory performance on singing voices. In this paper, we propose singing beat tracking as a novel task and present the first approach to solving it. Our approach leverages the semantic information of singing voices by employing pre-trained self-supervised WavLM and DistilHuBERT speech representations as the front-end, and uses a self-attention encoder layer to predict beats. To train and test the system, we obtain separated singing voices and their beat annotations by applying source separation and beat tracking to complete songs, followed by manual correction.
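For concreteness, a minimal sketch of the described architecture might look as follows. This is not the authors' implementation: the checkpoint name, layer sizes, and the use of PyTorch's standard self-attention encoder layer (standing in for the linear-attention transformer named in the title) are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SingingBeatTracker(nn.Module):
    """Frozen self-supervised front-end + one self-attention encoder layer
    + a per-frame beat activation head (hypothetical sketch)."""

    def __init__(self, frontend="microsoft/wavlm-base-plus", d_model=768):
        super().__init__()
        # Pre-trained front-end (WavLM here; DistilHuBERT is analogous).
        self.frontend = AutoModel.from_pretrained(frontend)
        self.frontend.requires_grad_(False)   # keep the front-end frozen
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.head = nn.Linear(d_model, 1)     # per-frame beat activation

    def forward(self, waveform):              # waveform: (batch, samples) at 16 kHz
        with torch.no_grad():
            feats = self.frontend(waveform).last_hidden_state  # (batch, T, d_model)
        return self.head(self.encoder(feats)).squeeze(-1)      # (batch, T) logits
```

The pseudo-annotation step can be sketched with off-the-shelf tools in the same spirit; the separator and tracker below (Demucs, madmom's RNN+DBN beat tracker) are plausible stand-ins rather than the tools confirmed by the abstract:

```python
# Track beats on the complete mix, then pair them with the separated vocals.
# Vocal separation would be run beforehand, e.g. with a tool such as Demucs
# ("demucs --two-stems=vocals song.wav" on the command line).
from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

activations = RNNBeatProcessor()("song.wav")             # frame-wise beat activations
beats = DBNBeatTrackingProcessor(fps=100)(activations)   # beat times in seconds
# After manual correction, `beats` serves as the annotation for the vocal stem.
```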
Experiments on the 741 separated vocal tracks of the GTZAN dataset show that the proposed system outperforms several state-of-the-art music beat tracking methods by a large margin in beat tracking accuracy. Ablation studies also confirm the advantages of pre-trained self-supervised speech representations over generic spectral features.
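As context for the reported numbers, beat tracking accuracy is conventionally computed with the mir_eval package; a small illustration with made-up beat times:

```python
import numpy as np
import mir_eval

reference_beats = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # annotated beat times (s)
estimated_beats = np.array([0.52, 1.01, 1.48, 2.06, 2.51])  # system output (s)

# mir_eval discards beats in the first 5 s by default; keep them here
# because this toy excerpt is short.
ref = mir_eval.beat.trim_beats(reference_beats, min_beat_time=0.0)
est = mir_eval.beat.trim_beats(estimated_beats, min_beat_time=0.0)

print(mir_eval.beat.f_measure(ref, est))  # F-measure with a 70 ms tolerance window
```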