Conference paper | Open Access

Multi-Task Learning of Graph-based Inductive Representations of Music Content

Antonia Saravanou; Federico Tomasi; Rishabh Mehrotra; Mounia Lalmas


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Antonia Saravanou</dc:creator>
  <dc:creator>Federico Tomasi</dc:creator>
  <dc:creator>Rishabh Mehrotra</dc:creator>
  <dc:creator>Mounia Lalmas</dc:creator>
  <dc:date>2021-11-07</dc:date>
  <dc:description>Music streaming platforms rely heavily on learning meaningful representations of tracks to surface apt recommendations to users in a number of different use cases. In this work, we consider the task of learning music track representations by leveraging three rich heterogeneous sources of information: (i) organizational information (e.g., playlist co-occurrence), (ii) content information (e.g., audio &amp; acoustics), and (iii) music stylistics (e.g., genre). We advocate for a multi-task formulation of graph representation learning, and propose MUSIG: Multi-task Sampling and Inductive learning on Graphs. MUSIG allows us to derive generalized track representations that combine the benefits offered by (i) the inductive graph-based framework, which generates embeddings by sampling and aggregating features from a node's local neighborhood, and (ii) multi-task training of aggregation functions, which ensures the learnt functions perform well on a number of important tasks. We present large-scale empirical results for track recommendation on the playlist completion task, and compare different classes of representation learning approaches, including collaborative filtering, word2vec and node embeddings, as well as graph embedding approaches. Our results demonstrate that considering content information (i.e., audio and acoustic features) is useful and that multi-task supervision helps learn better representations.</dc:description>
  <dc:identifier>https://zenodo.org/record/5624379</dc:identifier>
  <dc:identifier>10.5281/zenodo.5624379</dc:identifier>
  <dc:identifier>oai:zenodo.org:5624379</dc:identifier>
  <dc:publisher>ISMIR</dc:publisher>
  <dc:relation>doi:10.5281/zenodo.5624378</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/ismir</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:title>Multi-Task Learning of Graph-based Inductive Representations of Music Content</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
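
The abstract above describes an inductive, sample-and-aggregate approach (in the spirit of GraphSAGE) whose aggregation functions are trained with multi-task supervision. The snippet below is a minimal, hypothetical sketch of that idea only; the class names (SageLayer, MusigEncoder), the two task heads (genre classification and playlist co-occurrence scoring), and all dimensions are illustrative assumptions, not the authors' implementation.

    # Minimal sketch: sample-and-aggregate embedding layer + two task heads
    # trained jointly. Names, heads and sizes are assumptions for illustration.
    import random
    import torch
    import torch.nn as nn

    class SageLayer(nn.Module):
        """One sample-and-aggregate step: mean-pool sampled neighbour features,
        concatenate with the node's own features, then project."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.proj = nn.Linear(2 * in_dim, out_dim)

        def forward(self, node_feats, neigh_feats):
            # neigh_feats: (batch, num_samples, in_dim) -> mean aggregation
            agg = neigh_feats.mean(dim=1)
            h = torch.cat([node_feats, agg], dim=-1)
            return torch.relu(self.proj(h))

    class MusigEncoder(nn.Module):
        """Encoder plus an illustrative genre head; a second task (playlist
        co-occurrence) is scored via dot products of the embeddings."""
        def __init__(self, in_dim=64, hid_dim=128, num_genres=20):
            super().__init__()
            self.layer = SageLayer(in_dim, hid_dim)
            self.genre_head = nn.Linear(hid_dim, num_genres)

        def forward(self, node_feats, neigh_feats):
            return self.layer(node_feats, neigh_feats)

    def sample_neighbours(adj, nodes, k):
        """Uniformly sample k neighbours (with replacement) for each node id."""
        return [[random.choice(adj[n]) for _ in range(k)] for n in nodes]

    if __name__ == "__main__":
        feats = torch.randn(10, 64)                    # 10 tracks, 64-d content features
        adj = {i: [(i + 1) % 10, (i + 2) % 10] for i in range(10)}
        nodes = [0, 1, 2]
        neigh_ids = sample_neighbours(adj, nodes, k=3)
        neigh_feats = torch.stack([feats[ids] for ids in neigh_ids])  # (3, 3, 64)

        model = MusigEncoder()
        emb = model(feats[nodes], neigh_feats)         # (3, 128) track embeddings
        genre_logits = model.genre_head(emb)           # task 1: genre classification
        pair_score = (emb[0] * emb[1]).sum()           # task 2: co-occurrence score
        genre_labels = torch.randint(0, 20, (3,))
        loss = nn.functional.cross_entropy(genre_logits, genre_labels) \
               - torch.log(torch.sigmoid(pair_score))  # joint multi-task objective
        print(emb.shape, genre_logits.shape, float(loss))

Because the encoder only depends on node and sampled-neighbour features, it can embed tracks unseen during training, which is the inductive property the abstract highlights.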
Views and downloads:

                      All versions    This version
  Views               276             276
  Downloads           254             254
  Data volume         153.5 MB        153.5 MB
  Unique views        261             261
  Unique downloads    232             232
