Published May 30, 2024 | Version v1
Dataset Restricted

MOSA: Music mOtion and Semantic Annotation dataset

Description

The MOSA dataset is a large-scale music dataset containing 742 professional piano and violin solo performances by 23 musicians (> 30 hours, > 570 K notes). The dataset features the following types of data (a minimal loading sketch follows the list):

  • High-quality 3-D motion capture data
  • Audio recordings
  • Manual semantic annotations
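
A minimal Python sketch of loading one performance's data is shown below. The folder layout, file names, and file formats used here (motion capture and annotations as CSV, audio as WAV) are assumptions for illustration only; consult the dataset description on GitHub (linked below) for the actual structure.

```python
# Hypothetical loading sketch; file names and formats are assumptions,
# not the dataset's documented layout.
from pathlib import Path

import pandas as pd
import soundfile as sf  # pip install soundfile

piece_dir = Path("MOSA-dataset/example_piece")  # hypothetical path

# 3-D motion capture: assumed CSV with one row per frame and
# x/y/z columns per body marker.
mocap = pd.read_csv(piece_dir / "motion_capture.csv")

# Audio recording: assumed WAV file.
audio, sample_rate = sf.read(piece_dir / "audio.wav")

# Manual semantic annotations: assumed CSV with onset/offset times
# and annotation labels.
annotations = pd.read_csv(piece_dir / "annotations.csv")

print(mocap.shape, audio.shape, sample_rate, len(annotations))
```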

This is the dataset accompanying the paper: Huang et al. (2024) MOSA: Music Motion with Semantic Annotation Dataset for Multimedia Analysis and Generation. IEEE/ACM Transactions on Audio, Speech and Language Processing. DOI: 10.1109/TASLP.2024.3407529
https://arxiv.org/abs/2406.06375


The description of the dataset is available on GitHub: https://github.com/yufenhuang/MOSA-Music-mOtion-and-Semantic-Annotation-dataset/blob/main/MOSA-dataset/dataset.md


To request access to the full dataset, please sign in to Zenodo and submit the request form.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Additional details

Related works

Is published in
Journal article: 10.1109/TASLP.2024.3407529 (DOI)

Funding

Institute of Information Science, Academia Sinica
National Science and Technology Council

References

  • Huang et al. (2024) MOSA: Music Motion with Semantic Annotation Dataset for Multimedia Analysis and Generation. IEEE/ACM Transactions on Audio, Speech and Language Processing. DOI: 10.1109/TASLP.2024.3407529