Presentation · Open Access

ISMIR 2019 tutorial: waveform-based music processing with deep learning

Jongpil Lee; Jordi Pons; Sander Dieleman


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3529714</identifier>
  <creators>
    <creator>
      <creatorName>Jongpil Lee</creatorName>
      <affiliation>KAIST</affiliation>
    </creator>
    <creator>
      <creatorName>Jordi Pons</creatorName>
      <affiliation>Dolby Laboratories</affiliation>
    </creator>
    <creator>
      <creatorName>Sander Dieleman</creatorName>
      <affiliation>DeepMind</affiliation>
    </creator>
  </creators>
  <titles>
    <title>ISMIR 2019 tutorial: waveform-based music processing with deep learning</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <dates>
    <date dateType="Issued">2019-11-04</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Presentation</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3529714</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3529713</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step allows having less variable and more interpretable input signals. However, along that process, one can limit the model&amp;#39;s learning capabilities since potentially useful information (like the phase or high frequencies) is discarded. In order to overcome the potential limitations associated with such pre-processing, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.&lt;/p&gt;

&lt;p&gt;In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:&lt;/p&gt;

&lt;p&gt;1) Classification: waveform-based music classifiers have the potential to simplify production and research pipelines.&lt;/p&gt;

&lt;p&gt;2) Source separation: waveform-based music source separation would overcome some historical challenges associated with discarding the phase.&lt;/p&gt;

&lt;p&gt;3) Generation: waveform-level music generation would enable, for example, the direct synthesis of expressive music.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.google.com/presentation/d/1_ezZXDkyhp9USAYMc5oKJCkUrUhBfo-Di8H8IfypGBM/edit?usp=sharing"&gt;Link to the original Google Slides&lt;/a&gt;&lt;/p&gt;</description>
  </descriptions>
</resource>
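
The trade-off described in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not taken from the tutorial slides: it assumes librosa and PyTorch are installed, and the example clip, the Conv1d front-end, and all parameter values (n_fft, hop_length, n_mels, kernel_size, stride) are illustrative choices. It contrasts the classic mel-spectrogram pre-processing, which discards phase and compresses high frequencies, with a raw-waveform front-end whose filters are learned rather than fixed.

    import librosa
    import torch
    import torch.nn as nn

    # Load a short example waveform (clip ships with librosa; illustrative only).
    y, sr = librosa.load(librosa.ex("trumpet"), sr=22050)

    # --- Time-frequency route: the common pre-processing step. ---
    # The log-mel magnitude spectrogram is compact and interpretable,
    # but the phase (and detail above the mel bands) is discarded here.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=512, n_mels=96)
    log_mel = librosa.power_to_db(mel)

    # --- Waveform route: let the network learn its own front-end. ---
    # A strided 1-D convolution takes the role of the STFT's fixed framing;
    # nothing is thrown away before learning begins.
    frontend = nn.Conv1d(in_channels=1, out_channels=96,
                         kernel_size=1024, stride=512)
    x = torch.from_numpy(y).unsqueeze(0).unsqueeze(0)  # (batch, channel, time)
    features = frontend(x)  # learned, spectrogram-like feature map

    print(log_mel.shape)   # (96, frames)    -- fixed features
    print(features.shape)  # (1, 96, frames) -- learnable features

In the spectrogram route the representation is fixed before training begins; in the waveform route the convolutional filters are trained jointly with the rest of the model, which is what would let the three applications above (classification, separation, generation) exploit information a fixed front-end discards.
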
Statistics

                    All versions    This version
Views               1,008           1,002
Downloads           975             974
Data volume         13.4 GB         13.4 GB
Unique views        875             869
Unique downloads    821             820
