Project deliverable · Open Access

WhoLoDancE: Deliverable 3.3 - Report on music-dance representation models

Zanoni, Massimiliano; Buccoli, Michele; Sarti, Augusto; Antonacci, Fabio; Whatley, Sarah; Cisneros, Rosemary; Palacio, Pablo


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.1135078</identifier>
  <creators>
    <creator>
      <creatorName>Zanoni, Massimiliano</creatorName>
      <givenName>Massimiliano</givenName>
      <familyName>Zanoni</familyName>
      <affiliation>Politecnico di Milano</affiliation>
    </creator>
    <creator>
      <creatorName>Buccoli, Michele</creatorName>
      <givenName>Michele</givenName>
      <familyName>Buccoli</familyName>
      <affiliation>Politecnico di Milano</affiliation>
    </creator>
    <creator>
      <creatorName>Sarti, Augusto</creatorName>
      <givenName>Augusto</givenName>
      <familyName>Sarti</familyName>
      <affiliation>Politecnico di Milano</affiliation>
    </creator>
    <creator>
      <creatorName>Antonacci, Fabio</creatorName>
      <givenName>Fabio</givenName>
      <familyName>Antonacci</familyName>
      <affiliation>Politecnico di Milano</affiliation>
    </creator>
    <creator>
      <creatorName>Whatley, Sarah</creatorName>
      <givenName>Sarah</givenName>
      <familyName>Whatley</familyName>
      <affiliation>Coventry University</affiliation>
    </creator>
    <creator>
      <creatorName>Cisneros, Rosemary</creatorName>
      <givenName>Rosemary</givenName>
      <familyName>Cisneros</familyName>
      <affiliation>Coventry University</affiliation>
    </creator>
    <creator>
      <creatorName>Palacio, Pablo</creatorName>
      <givenName>Pablo</givenName>
      <familyName>Palacio</familyName>
      <affiliation>Stocos</affiliation>
    </creator>
  </creators>
  <titles>
    <title>WhoLoDancE: Deliverable 3.3 - Report on music-dance representation models</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2017</publicationYear>
  <subjects>
    <subject>music</subject>
    <subject>dance</subject>
    <subject>representation</subject>
    <subject>models</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2017-12-31</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Text">Project deliverable</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/1135078</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.1135077</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/wholodance_eu</relatedIdentifier>
  </relatedIdentifiers>
  <version>1.1</version>
  <rightsList>
    <rights rightsURI="http://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This Deliverable will be based on the outcomes of the task T3.1.3 Joint music-dance representation models. Dance and music are highly dependent in many dance genres: in some genres, dance performances cannot be even executed without a reference music. For this reason, the analysis of the correlation between music and movement is essential in the WhoLoDancE project.&lt;/p&gt;

&lt;p&gt;Music and movement are different in nature, so, to study their correlation, we will define two representation models: a music representation model and a movement representation model. The two models are both based on the extraction of a set of representative features able to capture specific aspects of the relative signals. The two models are then used to study the dependence and the interaction between music and movement in two use-cases: Piano&amp;amp;Dancer performance and joint movement-music analysis in Flamenco.&lt;/p&gt;</description>
    <description descriptionType="Other">{"references": ["Buccoli M., Di Giorgi B., Zanoni M., Antonacci F., Sarti A. (2017) Using multi-dimensional correlation for matching and alignment of MoCap and video signals. IEEE 19th International Workshop on Multimedia Signal Processing (MMSP)", "Bruno Di Giorgi, Massimiliano Zanoni, Sebastian B\u00f6ck, Augusto Sarti, Multipath Beat Tracking, in Special Issue on Intelligent Audio Processing, Semantics, and Interaction, Journal of the Audio Engineering Society, vol.64, no.7/8, pp.493-502, 2016", "Piana S., Staglian\u00f2 A., Odone F., Camurri A. (2016). Adaptive body gesture representation for automatic emotion recognition. ACM Transactions on Interactive Intelligent Systems (TiiS)", "Camurri, A., Volpe, G., Piana, S., Mancini, M., Niewiadomski, R., Ferrari, N., Canepa, C. (2016) The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement. Proceedings of the 3rd International Symposium on Movement and Computing (MOCO '16)", "Palacio P., Bisig D. (2017) Piano&amp;Dancer: Interaction Between a Dancer and an Acoustic Instrument. Proceedings of the 4rd International Symposium on Movement and Computing (MOCO '17)"]}</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/688865/">688865</awardNumber>
      <awardTitle>Whole-Body Interaction Learning for Dance Education</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
Usage statistics (All versions / This version):
Views: 67 / 67
Downloads: 28 / 28
Data volume: 39.9 MB / 39.9 MB
Unique views: 62 / 62
Unique downloads: 27 / 27
