Published June 7, 2022 | Version v2
Conference paper · Open Access

Deep Embeddings for Robust User-Based Amateur Vocal Percussion Classification

  • 1. Queen Mary University of London

Description

Vocal Percussion Transcription (VPT) is concerned with the automatic detection and classification of vocal percussion sound events, allowing music creators and producers, among others, to sketch drum lines on the fly. VPT classifiers typically learn best from small user-specific datasets, which usually restricts modelling to small input feature sets to avoid overfitting. This study explores several deep supervised learning strategies to obtain informative feature sets for amateur VPT classification. We evaluated their performance on regular VPT classification tasks and compared them with several baseline approaches, including feature selection methods and a state-of-the-art speech recognition engine. The proposed learning models were supervised with label sets drawn from four different levels of abstraction: instrument-level, syllable-level, phoneme-level, and boxeme-level. Results suggest that convolutional neural networks supervised with syllable-level annotations produce the most informative embeddings for VPT systems, which can then be used as input representations for downstream classifiers. Finally, we used back-propagation-based saliency maps to investigate the importance of different spectrogram regions for feature learning.
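The pipeline described above — a supervised encoder producing embeddings that feed a downstream classifier, inspected with gradient-based saliency maps — can be sketched in a few lines. This is a minimal illustrative toy, not the paper's model: the encoder is reduced to a single linear projection of a flattened spectrogram (so the saliency gradient is exact and analytic), and all names and dimensions (`W_enc`, `W_clf`, `N_MELS`, etc.) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a trained syllable-level encoder: a single
# linear projection of a flattened mel spectrogram to a 32-d embedding.
# All shapes and weights are hypothetical, not from the paper.
N_MELS, N_FRAMES, EMB_DIM, N_CLASSES = 40, 32, 32, 4
W_enc = rng.normal(scale=0.1, size=(EMB_DIM, N_MELS * N_FRAMES))
W_clf = rng.normal(scale=0.1, size=(N_CLASSES, EMB_DIM))

def embed(spec):
    """Map a (N_MELS, N_FRAMES) spectrogram to an embedding vector."""
    return W_enc @ spec.ravel()

def logits(spec):
    """Class scores of a downstream linear classifier on the embedding."""
    return W_clf @ embed(spec)

def saliency(spec, cls):
    """Back-propagation-based saliency: |d logit_cls / d spectrogram bin|.
    For this linear sketch the gradient is exact:
    (W_clf[cls] @ W_enc) reshaped back onto the spectrogram grid."""
    grad = (W_clf[cls] @ W_enc).reshape(spec.shape)
    return np.abs(grad)

spec = rng.random((N_MELS, N_FRAMES))      # fake input spectrogram
pred = int(np.argmax(logits(spec)))        # predicted class
sal = saliency(spec, pred)                 # per-bin importance map
```

In a real system the encoder would be a CNN and the gradient would come from automatic differentiation (e.g. `torch.autograd.grad`), but the structure — embed, classify, then differentiate the winning logit with respect to the input — is the same.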

Files

16.pdf (549.3 kB)
md5:521e97b31764fa8e64323cfc177ee206