Thesis Open Access

Improving Generalization of Deep Learning Music Classifiers

Morgan Buisson


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Morgan Buisson</dc:creator>
  <dc:date>2021-02-25</dc:date>
  <dc:description>Deep learning models have recently led to significant improvements in a wide variety of tasks. Although known as powerful tools capable of generalizing better than traditional machine learning approaches, deep learning models still rely heavily on large quantities of annotated data. As the field of music information retrieval is still subject to data sparsity, automatic music classification remains a challenging problem and numerous models fail to generalize to out-of-distribution music collections. This project investigates possible directions for improving the generalization capacity of deep learning music classifiers. More specifically, we suggest a set of guidelines for addressing the generalization problem of music classifiers trained on very small datasets. We first propose ways to maximize the amount of information extracted from small datasets through outlier detection and efficient audio data augmentation. We then show that accounting for the amount of perceptual ambiguity of each classification task through label smoothing can help obtain more generalizable classification boundaries. We also highlight the impact label noise can have in a small dataset setting and explore ways to improve the model's robustness. Finally, we argue that leveraging common knowledge among related classification tasks can result in a more generalizable internal representation learned by the model. To illustrate this assumption, we employ a simple multi-task learning architecture to jointly learn pairs of tasks, and list other interesting axes to be further explored in that direction. All the suggested approaches are experimentally assessed on two state-of-the-art CNN architectures for automatic music classification. They all lead to consistent improvements over baseline models and unveil relevant new questions for rethinking the task of automatic music classification.</dc:description>
  <dc:identifier>https://zenodo.org/record/5554754</dc:identifier>
  <dc:identifier>10.5281/zenodo.5554754</dc:identifier>
  <dc:identifier>oai:zenodo.org:5554754</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>doi:10.5281/zenodo.5554753</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/mtgupf</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/smc-master</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>Generalization; Music Classification; Deep Learning</dc:subject>
  <dc:title>Improving Generalization of Deep Learning Music Classifiers</dc:title>
  <dc:type>info:eu-repo/semantics/doctoralThesis</dc:type>
  <dc:type>publication-thesis</dc:type>
</oai_dc:dc>
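
The abstract names label smoothing as one way to account for perceptual ambiguity in classification targets. As a rough illustration only, the sketch below shows a generic label-smoothed cross-entropy loss in PyTorch; the class count, smoothing factor, and tensor shapes are assumptions for the example and are not taken from the thesis.

import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, num_classes, smoothing=0.1):
    """Cross-entropy against soft targets: the true class keeps
    1 - smoothing of the probability mass, the rest is spread
    uniformly over the remaining classes."""
    with torch.no_grad():
        soft = torch.full_like(logits, smoothing / (num_classes - 1))
        soft.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft * log_probs).sum(dim=1).mean()

# Hypothetical usage: logits from a CNN music classifier over 10 classes.
logits = torch.randn(8, 10)            # batch of 8 audio excerpts
targets = torch.randint(0, 10, (8,))   # integer class labels
loss = smoothed_cross_entropy(logits, targets, num_classes=10)
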
Statistics          All versions   This version
Views               82             82
Downloads           57             57
Data volume         171.0 MB       171.0 MB
Unique views        75             75
Unique downloads    55             55
