
Dataset Open Access

EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation

Hung, Hsiao-Tzu; Ching, Joann; Doh, Seungheon; Kim, Nabin; Nam, Juhan; Yang, Yi-Hsuan


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Hung, Hsiao-Tzu</dc:creator>
  <dc:creator>Ching, Joann</dc:creator>
  <dc:creator>Doh, Seungheon</dc:creator>
  <dc:creator>Kim, Nabin</dc:creator>
  <dc:creator>Nam, Juhan</dc:creator>
  <dc:creator>Yang, Yi-Hsuan</dc:creator>
  <dc:date>2021-07-18</dc:date>
  <dc:description>The EMOPIA dataset (pronounced ‘yee-mò-pi-uh’) is a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, built to facilitate research on tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs, with clip-level emotion labels annotated by four dedicated annotators.

For more detailed information about the dataset, please refer to our paper: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. 

File Description

- `midis/`: MIDI clips transcribed using GiantMIDI.
  - Filename `Q1_xxxxxxx_2.mid`: `Q1` means the clip belongs to quadrant Q1 of the valence-arousal (V-A) plane, `xxxxxxx` is the song's YouTube ID, and the trailing `2` means this is the 2nd clip taken from the full song (see the parsing sketch after this list).
- `metadata/`: metadata retrieved from YouTube during crawling.
- `songs_lists/`: YouTube URLs of the songs.
- `tagging_lists/`: raw tagging results for each sample.
- `label.csv`: metadata recording the filename, clip timestamps, and annotator of each clip.
- `metadata_by_song.csv`: lists all clips grouped by song; useful for creating train/val/test splits in which no song appears in both the train and test sets (see the split sketch after this list).
- `scripts/prepare_split.ipynb`: the script that creates the train/val/test splits and saves them to CSV files.
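
As a worked example of the naming convention above, here is a minimal Python sketch (the helper name is illustrative, not part of the dataset) that splits a clip filename into its quadrant, YouTube song ID, and clip index, and maps the quadrant to the corresponding valence-arousal combination under the usual Russell-quadrant convention:

```python
import re

# Quadrants of the valence-arousal (V-A) plane:
# Q1 = high valence / high arousal, Q2 = low V / high A,
# Q3 = low V / low A,  Q4 = high V / low A.
QUADRANT_TO_VA = {
    "Q1": ("high valence", "high arousal"),
    "Q2": ("low valence", "high arousal"),
    "Q3": ("low valence", "low arousal"),
    "Q4": ("high valence", "low arousal"),
}

def parse_clip_name(filename):
    """Split e.g. 'Q1_xxxxxxx_2.mid' into (quadrant, song_id, clip_index)."""
    stem = filename.rsplit(".", 1)[0]                # drop the extension
    match = re.match(r"(Q[1-4])_(.+)_(\d+)$", stem)  # song IDs may contain '_'
    if match is None:
        raise ValueError(f"unexpected clip filename: {filename}")
    quadrant, song_id, clip_index = match.groups()
    return quadrant, song_id, int(clip_index)

quadrant, song_id, clip_index = parse_clip_name("Q1_xxxxxxx_2.mid")
print(quadrant, QUADRANT_TO_VA[quadrant], song_id, clip_index)
```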
	

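Below is a minimal sketch, in the spirit of `scripts/prepare_split.ipynb`, of a song-level split built from `metadata_by_song.csv`: clips are grouped by the YouTube song ID embedded in their filenames, and songs (not clips) are shuffled and partitioned, so no song straddles two partitions. The column name `clip_name` and the 8:1:1 ratio are assumptions; see the notebook for the actual procedure.

```python
import csv
import random
from collections import defaultdict

random.seed(42)  # make the split reproducible

# Group clip names by song ID. The column name 'clip_name' is an
# assumption; check the real header of metadata_by_song.csv.
clips_by_song = defaultdict(list)
with open("metadata_by_song.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        clip = row["clip_name"]                            # e.g. 'Q1_xxxxxxx_2'
        song_id = clip.split("_", 1)[1].rsplit("_", 1)[0]  # strip quadrant and clip index
        clips_by_song[song_id].append(clip)

# Shuffle *songs*, then cut 8:1:1, so a song never straddles partitions.
songs = list(clips_by_song)
random.shuffle(songs)
n = len(songs)
train = songs[: int(0.8 * n)]
val = songs[int(0.8 * n): int(0.9 * n)]
test = songs[int(0.9 * n):]

splits = {
    "train": [c for s in train for c in clips_by_song[s]],
    "val":   [c for s in val   for c in clips_by_song[s]],
    "test":  [c for s in test  for c in clips_by_song[s]],
}
for name, clips in splits.items():
    print(name, len(clips), "clips")
```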

 

Cite this dataset

@inproceedings{EMOPIA,
         author = {Hung, Hsiao-Tzu and Ching, Joann and Doh, Seungheon and Kim, Nabin and Nam, Juhan and Yang, Yi-Hsuan},
         title = {{EMOPIA}: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation},
         booktitle = {Proc. Int. Society for Music Information Retrieval Conf.},
         year = {2021}
}</dc:description>
  <dc:identifier>https://zenodo.org/record/5090631</dc:identifier>
  <dc:identifier>10.5281/zenodo.5090631</dc:identifier>
  <dc:identifier>oai:zenodo.org:5090631</dc:identifier>
  <dc:relation>doi:10.5281/zenodo.5090630</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>piano</dc:subject>
  <dc:subject>emotion</dc:subject>
  <dc:subject>music</dc:subject>
  <dc:subject>midi</dc:subject>
  <dc:title>EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation</dc:title>
  <dc:type>info:eu-repo/semantics/other</dc:type>
  <dc:type>dataset</dc:type>
</oai_dc:dc>
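
If the Dublin Core export above needs to be consumed programmatically, a few lines of standard-library Python suffice; the namespace URI is taken verbatim from the XML, and the local filename `emopia_dc.xml` is an assumption:

```python
import xml.etree.ElementTree as ET

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

# Parse the record's Dublin Core export (local filename assumed).
root = ET.parse("emopia_dc.xml").getroot()

creators = [e.text for e in root.findall("dc:creator", NS)]
identifiers = [e.text for e in root.findall("dc:identifier", NS)]
subjects = [e.text for e in root.findall("dc:subject", NS)]

print("Creators:   ", "; ".join(creators))
print("Identifiers:", identifiers)   # record URL, DOI, OAI identifier
print("Subjects:   ", ", ".join(subjects))
```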
                  All versions   This version
Views                      953            708
Downloads                  493            171
Data volume            10.5 GB       942.2 MB
Unique views               701            595
Unique downloads           276            161
