Presentation Open Access

ISMIR 2019 tutorial: waveform-based music processing with deep learning

Jongpil Lee; Jordi Pons; Sander Dieleman

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Jongpil Lee</dc:creator>
  <dc:creator>Jordi Pons</dc:creator>
  <dc:creator>Sander Dieleman</dc:creator>
  <dc:description>A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step yields input signals that are less variable and more interpretable. However, it can also limit the model's learning capabilities, since potentially useful information (such as the phase or high frequencies) is discarded. To overcome these potential limitations, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.

In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:

1) Classification: waveform-based music classifiers have the potential to simplify production and research pipelines.

2) Source separation: waveform-based music source separation would overcome some historical challenges associated with discarding the phase.

3) Generation: waveform-level music generation would enable, e.g., directly synthesizing expressive music.

Link to the original Google Slides</dc:description>
  <dc:title>ISMIR 2019 tutorial: waveform-based music processing with deep learning</dc:title>
</oai_dc:dc>
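The abstract's central point can be made concrete with a minimal sketch (all parameters here are illustrative assumptions, not from the tutorial): converting a raw waveform into a time-frequency representation, and showing that keeping only the magnitude spectrogram discards the phase.

```python
import numpy as np

sr = 16000                              # assumed sample rate
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)      # a 440 Hz tone as a toy waveform

# Frame the waveform and apply a Hann window (assumed frame/hop sizes)
frame, hop = 512, 256
window = np.hanning(frame)
frames = np.stack([wave[i:i + frame] * window
                   for i in range(0, len(wave) - frame, hop)])

# Short-time Fourier transform: a complex time-frequency representation
stft = np.fft.rfft(frames, axis=1)
magnitude = np.abs(stft)                # what spectrogram-based models keep
phase = np.angle(stft)                  # what is typically discarded

# The magnitude alone cannot recover the exact frames: inverting it with
# zero phase yields different samples than the original windowed frames.
zero_phase_frames = np.fft.irfft(magnitude, n=frame, axis=1)
print(np.allclose(zero_phase_frames, frames))  # False: phase was lost
```

Waveform-based models sidestep this loss by operating on the samples directly, at the cost of a less interpretable, more variable input.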
                  All versions   This version
Views                    1,950          1,944
Downloads                1,979          1,978
Data volume            27.2 GB        27.2 GB
Unique views             1,716          1,710
Unique downloads         1,732          1,731
