Conference paper Open Access

Evaluating a Collection of Sound-Tracing Data of Melodic Phrases

Tejaswinee Kelkar; Udit Roy; Alexander Refsum Jensenius


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.1492347</identifier>
  <creators>
    <creator>
      <creatorName>Tejaswinee Kelkar</creatorName>
    </creator>
    <creator>
      <creatorName>Udit Roy</creatorName>
    </creator>
    <creator>
      <creatorName>Alexander Refsum Jensenius</creatorName>
    </creator>
  </creators>
  <titles>
    <title>Evaluating a Collection of Sound-Tracing Data of Melodic Phrases</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2018</publicationYear>
  <dates>
    <date dateType="Issued">2018-09-23</date>
  </dates>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/1492347</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.1492346</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/ismir</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">Melodic contour, the 'shape' of a melody, is a common way to visualize and remember a musical piece. The purpose of this paper is to explore the building blocks of a future 'gesture-based' melody retrieval system. We present a dataset containing 16 melodic phrases from four musical styles and with a large range of contour variability. This is accompanied by full-body motion capture data of 26 participants performing sound-tracing to the melodies. The dataset is analyzed using canonical correlation analysis (CCA), and its neural network variant (Deep CCA), to understand how melodic contours and sound tracings relate to each other. The analyses reveal non-linear relationships between sound and motion. The link between pitch and verticality does not appear strong enough for complex melodies. We also find that descending melodic contours have the least correlation with tracings.</description>
  </descriptions>
</resource>