Published November 4, 2019 | Version v1
Presentation | Open Access

ISMIR 2019 tutorial: waveform-based music processing with deep learning

  • Jongpil Lee (KAIST)
  • Jordi Pons (Dolby Laboratories)
  • Sander Dieleman (DeepMind)

Description

A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step yields input signals that are less variable and more interpretable. However, in the process, one can limit the model's learning capabilities, since potentially useful information (such as the phase or high-frequency content) is discarded. To overcome the potential limitations associated with such pre-processing, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.
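For concreteness, a minimal sketch of this standard pre-processing pipeline (using librosa and NumPy; the signal and parameter values are arbitrary and chosen only for illustration) makes explicit which information is kept and which is discarded:

    import numpy as np
    import librosa

    # Illustrative input: a one-second 440 Hz tone at 22.05 kHz.
    sr = 22050
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    y = (0.5 * np.sin(2.0 * np.pi * 440.0 * t)).astype(np.float32)

    # Complex STFT: every time-frequency bin carries a magnitude AND a phase.
    stft = librosa.stft(y, n_fft=2048, hop_length=512)

    # Typical model input: the log-magnitude spectrogram...
    log_mag = np.log1p(np.abs(stft))

    # ...while the phase is thrown away at this point. Waveform-based models
    # instead consume `y` directly, so this information is never discarded.
    phase = np.angle(stft)
    print(log_mag.shape, phase.shape)  # both (1025, 44)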

In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:

1) Classification: waveform-based music classifiers have the potential to simplify production and research pipelines (a minimal sketch follows this list).

2) Source separation: waveform-based music source separation would overcome some of the historical challenges associated with discarding the phase.

3) Generation: waveform-level music generation would enable, for example, directly synthesizing expressive music.
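As a sketch of point 1, a minimal waveform-based classifier might look as follows. This PyTorch example is purely illustrative (the layer sizes are arbitrary, and it is not one of the architectures covered in the tutorial), but it shows how strided 1-D convolutions can learn a front-end directly from raw audio samples in place of a fixed time-frequency transform:

    import torch
    import torch.nn as nn

    class WaveformClassifier(nn.Module):
        """Toy classifier operating directly on raw audio samples."""

        def __init__(self, n_classes: int = 10):
            super().__init__()
            self.frontend = nn.Sequential(
                # A large strided first filter plays the role of the fixed STFT.
                nn.Conv1d(1, 64, kernel_size=512, stride=256), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv1d(128, 128, kernel_size=3, stride=2), nn.ReLU(),
            )
            self.head = nn.Linear(128, n_classes)

        def forward(self, wav: torch.Tensor) -> torch.Tensor:
            # wav: (batch, 1, n_samples), raw audio roughly in [-1, 1].
            h = self.frontend(wav)
            h = h.mean(dim=-1)  # global average pooling over time
            return self.head(h)

    model = WaveformClassifier(n_classes=10)
    logits = model(torch.randn(2, 1, 22050))  # two one-second clips
    print(logits.shape)  # torch.Size([2, 10])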

Link to the original Google Slides

Files

ISMIR 2019 tutorial_ waveform-based music processing with deep learning.pdf