P5-04: MuLan: A Joint Embedding of Music Audio and Natural Language
Huang, Qingqing*, Jansen, Aren, Lee, Joonseok, Ganti, Ravi, Li, Judith Yue, Ellis, Daniel P W
Subjects (starting with primary): MIR fundamentals and methodology -> metadata, tags, linked data, and semantic web; MIR fundamentals and methodology -> music signal processing; MIR fundamentals and methodology -> multimodality; MIR tasks -> indexing and querying; MIR tasks -> automatic classification; MIR fundamentals and methodology -> web mining and natural language processing
Presented in person in Bengaluru: 4-minute short-format presentation
Music tagging and content-based retrieval systems have traditionally been constructed using pre-defined ontologies covering a rigid set of music attributes or text queries. This paper presents MuLan: a first attempt at a new generation of acoustic models that link music audio directly to unconstrained natural language music descriptions. MuLan takes the form of a two-tower, joint audio-text embedding model trained using 44 million music recordings (370K hours) and weakly-associated, free-form text annotations. Through its compatibility with a wide range of music genres and text styles (including conventional music tags), the resulting audio-text representation subsumes existing ontologies while graduating to true zero-shot functionalities. We demonstrate the versatility of the MuLan embeddings with a range of experiments including transfer learning, zero-shot music tagging, language understanding in the music domain, and cross-modal retrieval applications.
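To make the two-tower setup concrete, the sketch below shows a generic joint audio-text embedding model trained with a CLIP-style symmetric contrastive loss over in-batch pairs. It is an illustrative assumption, not the paper's implementation: the placeholder towers, dimensions, vocabulary size, and temperature are invented for the example, and the paper's actual audio and text encoders and training objective are not reproduced here.

    # Illustrative sketch only: a generic two-tower audio-text embedding model
    # trained with a CLIP-style contrastive loss. Encoders, dimensions, and the
    # loss are assumptions for illustration, not taken from the MuLan paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoTowerModel(nn.Module):
        def __init__(self, text_vocab=30000, embed_dim=128):
            super().__init__()
            # Placeholder audio tower: maps a log-mel spectrogram to one vector.
            self.audio_tower = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
            # Placeholder text tower: mean-pooled token embeddings + projection.
            self.token_emb = nn.Embedding(text_vocab, embed_dim)
            self.text_proj = nn.Linear(embed_dim, embed_dim)
            self.temperature = nn.Parameter(torch.tensor(0.07))

        def embed_audio(self, spectrogram):  # (batch, 1, mel_bins, frames)
            return F.normalize(self.audio_tower(spectrogram), dim=-1)

        def embed_text(self, token_ids):     # (batch, seq_len)
            pooled = self.token_emb(token_ids).mean(dim=1)
            return F.normalize(self.text_proj(pooled), dim=-1)

    def contrastive_loss(audio_emb, text_emb, temperature):
        # Symmetric cross-entropy over the in-batch similarity matrix,
        # treating matched audio-text pairs as positives.
        logits = audio_emb @ text_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

Under this framing, zero-shot tagging reduces to ranking candidate tag texts by cosine similarity against an audio embedding, and cross-modal retrieval to nearest-neighbor search in the shared embedding space, with no task-specific training.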