Presentation Open Access

ISMIR 2019 tutorial: waveform-based music processing with deep learning

Jongpil Lee; Jordi Pons; Sander Dieleman

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.3529714</identifier>
  <creators>
    <creator>
      <creatorName>Jongpil Lee</creatorName>
    </creator>
    <creator>
      <creatorName>Jordi Pons</creatorName>
      <affiliation>Dolby Laboratories</affiliation>
    </creator>
    <creator>
      <creatorName>Sander Dieleman</creatorName>
    </creator>
  </creators>
  <titles>
    <title>ISMIR 2019 tutorial: waveform-based music processing with deep learning</title>
  </titles>
  <dates>
    <date dateType="Issued">2019-11-04</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Presentation</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3529713</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step allows having less variable and more interpretable input signals. However, along that process, one can limit the model&amp;#39;s learning capabilities since potentially useful information (like the phase or high frequencies) is discarded. In order to overcome the potential limitations associated with such pre-processing, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.&lt;/p&gt;

&lt;p&gt;In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:&lt;/p&gt;

&lt;p&gt;1) Classification: waveform-based music classifiers have the potential to simplify production and research pipelines.&lt;/p&gt;

&lt;p&gt;2) Source separation: making possible waveform-based music source separation would allow overcoming some historical challenges associated with discarding the phase.&lt;/p&gt;

&lt;p&gt;3) Generation: waveform-level music generation would enable, e.g., to directly synthesize expressive music.&lt;/p&gt;

&lt;p&gt;&lt;a href=""&gt;Link to the original Google Slides&lt;/a&gt;&lt;/p&gt;</description>
  </descriptions>
</resource>
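The abstract above describes the standard pre-processing step that waveform-based models avoid: framing the raw waveform, taking an STFT, and keeping only the magnitude, which discards the phase. The following minimal sketch (not from the tutorial; window and hop sizes are illustrative) shows where exactly that information loss happens, using only NumPy:

```python
import numpy as np

def magnitude_spectrogram(waveform, frame_len=1024, hop=512):
    """Frame the waveform, apply a Hann window, and keep only the STFT magnitude."""
    n_frames = 1 + (len(waveform) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([waveform[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=1)   # complex STFT: magnitude and phase
    return np.abs(spectrum)                  # the phase is discarded here

# One second of a 440 Hz tone at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
S = magnitude_spectrogram(x)
print(S.shape)  # (n_frames, frame_len // 2 + 1)
```

A waveform-based model would instead consume `x` directly, so nothing is thrown away before learning begins; the trade-off is a less interpretable, more variable input.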
                  All versions   This version
Views             1,950          1,944
Downloads         1,979          1,978
Data volume       27.2 GB        27.2 GB
Unique views      1,716          1,710
Unique downloads  1,732          1,731

