Published December 31, 2017 | Version 1.1
Project deliverable | Open Access

WhoLoDancE: Deliverable 3.3 - Report on music-dance representation models

  • Politecnico di Milano
  • Coventry University
  • Stocos

Description

This deliverable is based on the outcomes of task T3.1.3, Joint music-dance representation models. Dance and music are strongly interdependent in many dance genres: in some genres, a dance performance cannot even be executed without a reference piece of music. For this reason, analysing the correlation between music and movement is essential in the WhoLoDancE project.

Music and movement are different in nature, so to study their correlation we define two representation models: a music representation model and a movement representation model. Both models are based on the extraction of a set of representative features that capture specific aspects of the respective signals. The two models are then used to study the dependence and interaction between music and movement in two use cases: the Piano&Dancer performance and joint movement-music analysis in Flamenco.
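As an illustrative sketch only (not the deliverable's actual pipeline), the following Python snippet shows the kind of correlation analysis such feature-based models enable: two hypothetical per-frame feature streams, a music onset-strength curve and a movement energy curve sampled at the same frame rate, are cross-correlated to estimate their relative lag. The signals, names, and parameters are assumptions made for this example.

```python
# Minimal sketch: estimate the lag between a music feature stream and a
# movement feature stream via normalized cross-correlation.
# The toy signals below stand in for real onset-strength / motion-energy
# features; they are assumptions, not WhoLoDancE data.
import numpy as np

def normalized_xcorr(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Full normalized cross-correlation of two equal-length 1-D streams."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return np.correlate(x, y, mode="full") / len(x)

rng = np.random.default_rng(0)
fps, delay = 30, 12                                  # frame rate, true lag in frames
music = np.clip(rng.normal(size=600), 0, None)       # stand-in onset-strength curve
movement = np.roll(music, delay) + 0.1 * rng.normal(size=600)  # delayed, noisy copy

xc = normalized_xcorr(movement, music)
lags = np.arange(-len(music) + 1, len(movement))     # lag axis for mode="full"
best_lag = lags[np.argmax(xc)]
print(f"movement lags music by {best_lag / fps:.2f} s")   # expected: ~0.40 s
```

In practice the same idea extends to multi-dimensional feature sets, as in the multi-dimensional correlation approach cited in the references below, but the one-dimensional case already conveys the principle.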

Files (1.4 MB)

D3.3 Report on music-dance representation models_.pdf

Additional details

Funding

WhoLoDancE – Whole-Body Interaction Learning for Dance Education (Grant agreement No. 688865)
European Commission

References

  • Buccoli M., Di Giorgi B., Zanoni M., Antonacci F., Sarti A. (2017) Using multi-dimensional correlation for matching and alignment of MoCap and video signals. IEEE 19th International Workshop on Multimedia Signal Processing (MMSP)
  • Di Giorgi B., Zanoni M., Böck S., Sarti A. (2016) Multipath Beat Tracking. Journal of the Audio Engineering Society, Special Issue on Intelligent Audio Processing, Semantics, and Interaction, vol. 64, no. 7/8, pp. 493-502
  • Piana S., Staglianò A., Odone F., Camurri A. (2016). Adaptive body gesture representation for automatic emotion recognition. ACM Transactions on Interactive Intelligent Systems (TiiS)
  • Camurri, A., Volpe, G., Piana, S., Mancini, M., Niewiadomski, R., Ferrari, N., Canepa, C. (2016) The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement. Proceedings of the 3rd International Symposium on Movement and Computing (MOCO '16)
  • Palacio P., Bisig D. (2017) Piano&Dancer: Interaction Between a Dancer and an Acoustic Instrument. Proceedings of the 4th International Symposium on Movement and Computing (MOCO '17)