Conference paper Open Access

The Many Faces of Users: Modeling Musical Preference

Eva Zangerle; Martin Pichl

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <controlfield tag="005">20200120164416.0</controlfield>
  <controlfield tag="001">1492515</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">September 23-27, 2018</subfield>
    <subfield code="g">ISMIR 2018</subfield>
    <subfield code="a">International Society for Music Information Retrieval Conference</subfield>
    <subfield code="c">Paris, France</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Martin Pichl</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">208470</subfield>
    <subfield code="z">md5:a1756165da3bb058762dfde975cf460f</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2018-09-23</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-ismir</subfield>
    <subfield code="o"></subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Eva Zangerle</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">The Many Faces of Users: Modeling Musical Preference</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-ismir</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u"></subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2"></subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">User models that capture the musical preferences of users are central to many tasks in music information retrieval and music recommendation, yet they have not been fully explored and exploited. To date, the musical preferences of users in the context of music recommender systems have mostly been captured by collaborative filtering-based approaches. Alternatively, users can be characterized by their average listening behavior and hence by the mean values of a set of content descriptors of the tracks they listened to. However, a user may listen to highly different tracks and genres, so computing the average over all tracks does not capture the user's listening behavior well. We argue that each user may have many different preferences that depend on contextual aspects (e.g., listening to classical music when working and hard rock when doing sports) and that user models should account for these different sets of preferences. In this paper, we provide a detailed analysis and evaluation of different user models that describe a user's musical preferences based on acoustic features of the tracks the user has listened to.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.1492514</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="g">709-716</subfield>
    <subfield code="b">ISMIR</subfield>
    <subfield code="a">Paris, France</subfield>
    <subfield code="t">Proceedings of the 19th International Society for Music Information Retrieval Conference</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.1492515</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
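The abstract above contrasts a single mean-vector user model with one that keeps several context-dependent preference profiles. As an illustrative sketch only (not the paper's actual method), the contrast can be shown on synthetic acoustic-feature vectors with a tiny k-means; all data values, feature names, and the helper `kmeans2` are hypothetical:

```python
import numpy as np

# Synthetic per-user listening history (hypothetical data): each row is a
# track, each column an acoustic descriptor (say, energy and tempo, scaled
# to [0, 1]). This user listens in two contexts with very different sounds.
rng = np.random.default_rng(0)
classical = rng.normal(loc=[0.2, 0.3], scale=0.05, size=(20, 2))
hard_rock = rng.normal(loc=[0.9, 0.8], scale=0.05, size=(20, 2))
tracks = np.vstack([classical, hard_rock])

# Single-preference model: one mean vector over all tracks. It lands
# between the two listening contexts and describes neither of them well.
mean_profile = tracks.mean(axis=0)

def kmeans2(x, iters=20):
    """Tiny two-cluster k-means: initialize with the first point and the
    point farthest from it, then run standard Lloyd iterations."""
    c0 = x[0]
    c1 = x[np.argmax(((x - c0) ** 2).sum(axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each track to its nearest centroid, then recompute means.
        labels = np.argmin(((x[:, None] - centroids[None]) ** 2).sum(axis=2), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels

# Multi-preference model: one centroid per listening context.
centroids, labels = kmeans2(tracks)

print("single mean profile:", mean_profile)   # sits between both contexts
print("per-context centroids:", centroids)    # one profile per context
```

Here the single mean profile falls roughly halfway between the calm and the energetic tracks, while the two centroids each sit close to one listening context, which is the multimodality argument the abstract makes.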
                    All versions   This version
Views                        160            160
Downloads                     86             86
Data volume              17.9 MB        17.9 MB
Unique views                 155            155
Unique downloads              79             79

