Presentation Open Access

ISMIR 2019 tutorial: waveform-based music processing with deep learning

Jongpil Lee; Jordi Pons; Sander Dieleman

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <controlfield tag="005">20200120173308.0</controlfield>
  <controlfield tag="001">3529714</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="g">ISMIR</subfield>
    <subfield code="a">20th annual conference of the International Society for Music Information Retrieval</subfield>
    <subfield code="c">Delft</subfield>
    <subfield code="n">Tutorial</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Dolby Laboratories</subfield>
    <subfield code="a">Jordi Pons</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">DeepMind</subfield>
    <subfield code="a">Sander Dieleman</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">13731274</subfield>
    <subfield code="z">md5:cbba74f5bc737ce56e641f635729cdd1</subfield>
    <subfield code="u"> 2019 tutorial_ waveform-based music processing with deep learning.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-11-04</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o"></subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">KAIST</subfield>
    <subfield code="a">Jongpil Lee</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">ISMIR 2019 tutorial: waveform-based music processing with deep learning</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u"></subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2"></subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step yields less variable and more interpretable input signals. However, it can also limit the model&amp;#39;s learning capabilities, since potentially useful information (such as the phase or the high frequencies) is discarded. To overcome these potential limitations, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.&lt;/p&gt;

&lt;p&gt;In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:&lt;/p&gt;

&lt;p&gt;1) Classification: waveform-based music classifiers have the potential to simplify production and research pipelines.&lt;/p&gt;

&lt;p&gt;2) Source separation: waveform-based music source separation would overcome some historical challenges associated with discarding the phase.&lt;/p&gt;

&lt;p&gt;3) Generation: waveform-level music generation would enable, for example, directly synthesizing expressive music.&lt;/p&gt;

&lt;p&gt;&lt;a href=""&gt;Link to the original Google Slides&lt;/a&gt;&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.3529713</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.3529714</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">presentation</subfield>
  </datafield>
</record>
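
The abstract above contrasts waveform-level processing with the usual pre-processing step of converting a raw waveform into a time-frequency representation, discarding the phase along the way. The sketch below illustrates that step with a plain NumPy STFT magnitude spectrogram; the window and hop sizes are illustrative assumptions, not values from the tutorial.

```python
import numpy as np

def stft_magnitude(waveform, n_fft=1024, hop=256):
    """Magnitude spectrogram of a 1-D waveform.

    The complex phase is dropped at the end, which is exactly the
    information loss the tutorial abstract highlights.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(waveform) - n_fft) // hop
    frames = np.stack([waveform[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=-1)  # complex STFT
    return np.abs(spectrum)                  # phase discarded here

# Usage: one second of a 440 Hz sine sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
mag = stft_magnitude(np.sin(2 * np.pi * 440 * t))
print(mag.shape)  # → (59, 513): 59 frames, n_fft // 2 + 1 frequency bins
```

The peak energy lands in bin 28, since 440 Hz / (16000 Hz / 1024 bins) ≈ 28.16; a waveform-based model would instead consume the 16,000 raw samples directly, keeping both phase and full bandwidth.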
                   All versions   This version
Views              1,950          1,944
Downloads          1,979          1,978
Data volume        27.2 GB        27.2 GB
Unique views       1,716          1,710
Unique downloads   1,732          1,731