P1-05: Attention-based audio embeddings for query-by-example
Singh, Anup*, Demuynck, Kris, Arora, Vipul
Subjects (starting with primary): MIR tasks -> fingerprinting ; MIR tasks -> indexing and querying
Presented in person in Bengaluru: 4-minute short-format presentation
An ideal audio retrieval system efficiently and robustly recognizes a short query snippet against an extensive database. However, the performance of well-known audio fingerprinting systems falls short at high signal distortion levels. This paper presents an audio retrieval system that generates noise- and reverberation-robust audio fingerprints using a contrastive learning framework. Using these fingerprints, the method performs a comprehensive search to identify the query audio and precisely estimate its timestamp in the reference audio. Our framework trains a CNN to maximize the similarity between pairs of embeddings extracted from clean audio and its corresponding distorted, time-shifted version. We employ a channel-wise spectral-temporal attention mechanism to capture salient time indices and spectral bands in the CNN features. The attention mechanism enables the CNN to better discriminate audio by giving more weight to the salient spectral-temporal patches in the signal. Experimental results indicate that our system is efficient in computation and memory usage while being more accurate than competing state-of-the-art systems, particularly at higher distortion levels, and that it scales to larger databases.
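The two core ideas in the abstract, a contrastive objective over clean/distorted embedding pairs and channel-wise spectral-temporal attention over CNN feature maps, can be sketched as follows. This is a minimal NumPy illustration under common assumptions (an NT-Xent-style loss with in-batch negatives, and mean-pooled softmax attention weights per channel); the function names, the temperature value, and the exact pooling scheme are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x, axis=-1):
    # Unit-norm rows so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(clean, distorted, tau=0.05):
    """NT-Xent-style loss over (clean, distorted) embedding pairs.

    clean, distorted: (N, D) arrays; row i of `distorted` is assumed
    to be a noisy / time-shifted version of the audio behind row i of
    `clean`. Matching pairs (the diagonal) are pulled together; all
    other in-batch pairs act as negatives.
    """
    z1, z2 = l2_normalize(clean), l2_normalize(distorted)
    sim = z1 @ z2.T / tau                       # (N, N) scaled cosine sims
    # Row-wise log-softmax; target class is the matching pair i == j.
    log_prob = sim - sim.max(axis=1, keepdims=True)
    log_prob -= np.log(np.exp(log_prob).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def spectral_temporal_attention(feat):
    """Toy channel-wise spectral-temporal attention for a (C, F, T)
    CNN feature map: per channel, softmax weights over spectral bands
    (pooled across time) and over time frames (pooled across frequency)
    re-weight salient patches of the feature map."""
    spec_w = softmax(feat.mean(axis=2), axis=1)[:, :, None]  # (C, F, 1)
    temp_w = softmax(feat.mean(axis=1), axis=1)[:, None, :]  # (C, 1, T)
    return feat * spec_w * temp_w
```

In this sketch, embeddings of matched clean/distorted pairs yield a low loss, while mismatched pairs yield a high one, which is the property the fingerprints need for robust lookup at high distortion levels.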