Dataset Open Access


Eva Zangerle; Asmita Poddar; Yi-Hsuan Yang

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.3247476</identifier>
  <creators>
    <creator>
      <creatorName>Eva Zangerle</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="">0000-0003-3195-8273</nameIdentifier>
      <affiliation>University of Innsbruck, Austria</affiliation>
    </creator>
    <creator>
      <creatorName>Asmita Poddar</creatorName>
      <affiliation>National University of Singapore</affiliation>
    </creator>
    <creator>
      <creatorName>Yi-Hsuan Yang</creatorName>
      <affiliation>Academia Sinica, Taiwan</affiliation>
    </creator>
  </creators>
  <subjects>
    <subject>recommender system</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2019-03-15</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.2594537</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf"></relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
    <description descriptionType="Abstract">&lt;p&gt;The nowplaying-rs dataset features context and content features of listening events. It contains 11.6 million music listening events of 139K users and 346K tracks collected from Twitter. The dataset comes with a rich set of item content features and user context features, as well as timestamps of the listening events. Moreover, some of the user context features imply the cultural origin of the users, and others, such as hashtags, give clues to the emotional state of a user underlying a listening event.&lt;/p&gt;

&lt;p&gt;The dataset contains three files:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;user_track_hashtag_timestamp.csv: contains basic information about each listening event. For each listening event, we provide its id, user_id, track_id, hashtag, and created_at.&lt;/li&gt;
	&lt;li&gt;context_content_features.csv: contains all context and content features. For each listening event, we provide the id of the event, user_id, track_id, artist_id, content features of the track mentioned in the event (instrumentalness, liveness, speechiness, danceability, valence, loudness, tempo, acousticness, energy, mode, key) and context features of the listening event (coordinates (as geoJSON), place (as geoJSON), geo (as geoJSON), tweet_language, created_at, user_lang, time_zone, and entities contained in the tweet).&lt;/li&gt;
	&lt;li&gt;sentiment_values.csv: contains sentiment information for hashtags. It holds the hashtag itself and the sentiment values gathered via four different sentiment dictionaries: AFINN, Opinion Lexicon, SentiStrength Lexicon, and VADER. For each of these dictionaries, we list the minimum, maximum, sum, and average of all sentiments of the tokens of the hashtag (if available; otherwise we list empty values). However, as most hashtags consist of only a single token, these values are equal in most cases. Please note that the lexica are rather diverse and therefore resolve very different terms to a score; hence, the resulting csv is rather sparse. The file contains the following comma-separated values: &amp;lt;hashtag, vader_min, vader_max, vader_sum, vader_avg, afinn_min, afinn_max, afinn_sum, afinn_avg, ol_min, ol_max, ol_sum, ol_avg, ss_min, ss_max, ss_sum, ss_avg&amp;gt;, where we abbreviate all scores gathered via the Opinion Lexicon with the prefix &amp;#39;ol&amp;#39;. Similarly, &amp;#39;ss&amp;#39; stands for SentiStrength.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please note that user_track_hashtag_timestamp.csv and context_content_features.csv partly provide the same features. We deliberately chose to do so in order to provide usable files that do not have to be matched and joined with each other to perform, e.g., simple recommendation tasks.&lt;/p&gt;

&lt;p&gt;Please also find the training and test splits for the dataset in this repository. Also, Asmita provides prototypical implementations of a context-aware recommender system based on the dataset.&lt;/p&gt;

&lt;p&gt;If you make use of this dataset, please cite the following paper, in which we describe and experiment with the dataset:&lt;/p&gt;

title = {#nowplaying-RS: A New Benchmark Dataset for Building Context-Aware Music Recommender Systems},&lt;br&gt;
author = {Asmita Poddar and Eva Zangerle and Yi-Hsuan Yang},&lt;br&gt;
url = {},&lt;br&gt;
year = {2018},&lt;br&gt;
date = {2018-07-04},&lt;br&gt;
booktitle = {Proceedings of the 15th Sound &amp;amp; Music Computing Conference},&lt;br&gt;
address = {Limassol, Cyprus},&lt;br&gt;
note = {code at},&lt;br&gt;
tppubtype = {inproceedings}&lt;br&gt;</description>
    <description descriptionType="Other">{"references": ["Poddar, Asmita; Zangerle, Eva; Yang, Yi-Hsuan  #nowplaying-RS: A New Benchmark Dataset for Building Context-Aware Music Recommender Systems Inproceedings  Proceedings of the 15th Sound &amp; Music Computing Conference, Limassol, Cyprus, 2018."]}</description>
</resource>
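Since sentiment_values.csv is sparse, attaching hashtag sentiment to listening events calls for a left join. A minimal sketch with pandas: the rows below are made up for illustration, only the column layout follows the file descriptions above.

```python
import io
import pandas as pd

# Synthetic stand-ins for user_track_hashtag_timestamp.csv and
# sentiment_values.csv; the values are invented, the columns match
# the schemas described in the abstract.
events_csv = io.StringIO(
    "id,user_id,track_id,hashtag,created_at\n"
    "1,u1,t1,nowplaying,2014-01-01 12:00:00\n"
    "2,u2,t2,happy,2014-01-01 12:05:00\n"
)
sentiment_csv = io.StringIO(
    "hashtag,vader_min,vader_max,vader_sum,vader_avg\n"
    "happy,0.57,0.57,0.57,0.57\n"
)

events = pd.read_csv(events_csv)
sentiment = pd.read_csv(sentiment_csv)

# Left join: events whose hashtag has no lexicon entry keep NaN
# sentiment columns instead of being dropped.
enriched = events.merge(sentiment, on="hashtag", how="left")
print(enriched[["id", "hashtag", "vader_avg"]])
```

The same pattern applies to joining context_content_features.csv with the event file via the event id, although, as noted above, the two files deliberately duplicate features so that many tasks need no join at all.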
                   All versions   This version
Views                     1,184            605
Downloads                   550            497
Data volume            938.5 GB       860.1 GB
Unique views              1,047            523
Unique downloads            269            230

