Conference paper Open Access

Using weakly aligned score–audio pairs to train deep chroma models for cross-modal music retrieval

Frank Zalkow; Meinard Müller

Many music information retrieval tasks involve the comparison of a symbolic score representation with an audio recording. A typical strategy is to compare score–audio pairs based on a common mid-level representation, such as chroma features. Several recent studies have demonstrated the effectiveness of deep learning models that learn task-specific mid-level representations from temporally aligned training pairs. In practice, however, strongly aligned training data is often lacking, particularly in real-world scenarios. In our study, we use weakly aligned score–audio pairs for training, where only the beginning and end of a score excerpt are annotated in an audio recording, without aligned correspondences in between. To exploit such weakly aligned data, we employ the Connectionist Temporal Classification (CTC) loss to train a deep learning model for computing an enhanced chroma representation. We then apply this model to a cross-modal retrieval task, where we aim to find relevant audio recordings of Western classical music, given a short monophonic musical theme in symbolic notation as a query. We present systematic experiments that show the effectiveness of the CTC-based model for this theme-based retrieval task.
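The key ingredient named in the abstract is the CTC loss, which scores a frame-wise output sequence against a label sequence whose timing is unknown — exactly the weak-alignment setting described (only start and end of the score excerpt are known). As a hedged illustration (not the authors' implementation), the following minimal sketch computes the CTC negative log-likelihood with the standard forward (alpha) recursion in log space; in a real training setup one would instead use an autodiff framework's built-in CTC loss (e.g. `torch.nn.CTCLoss`) on the network's log-probabilities over pitch classes plus a blank symbol:

```python
import math

NEG_INF = -math.inf

def log_add(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == NEG_INF:
        return b
    if b == NEG_INF:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def ctc_neg_log_likelihood(log_probs, targets, blank=0):
    """CTC loss for one weakly aligned pair.

    log_probs: T frames, each a list of log-probabilities over
               C classes (e.g. 12 chroma classes + blank).
    targets:   label sequence WITHOUT timing information.
    """
    # Extend the target with blanks: b, l1, b, l2, ..., b
    ext = [blank]
    for label in targets:
        ext += [label, blank]
    S, T = len(ext), len(log_probs)

    # Initialize: paths may start with a blank or the first label.
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]

    # Forward recursion over frames.
    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s > 0:
                a = log_add(a, alpha[s - 1])  # advance by one symbol
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = log_add(a, alpha[s - 2])  # skip a blank
            new[s] = a + log_probs[t][ext[s]]
        alpha = new

    # Valid paths end on the last label or the trailing blank.
    total = log_add(alpha[S - 1], alpha[S - 2] if S > 1 else NEG_INF)
    return -total
```

The recursion marginalizes over all frame-level alignments consistent with the label order, which is why no aligned correspondences between the annotated start and end are needed at training time.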
Files: 23.pdf (831.6 kB, md5:2cbe5c2a931796e4c4ec43019e66e339)
