Conference paper Open Access

Symbolic Music Similarity Through a Graph-Based Representation

Simonetta, Federico; Carnovalini, Filippo; Orio, Nicola; Rodà, Antonio


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">music retrieval</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">music similarity</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">music symbolic representation</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">music computing</subfield>
  </datafield>
  <controlfield tag="005">20200120171611.0</controlfield>
  <controlfield tag="001">2537059</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">12-14 September 2018</subfield>
    <subfield code="g">AM'18</subfield>
    <subfield code="a">Audio Mostly 2018 Sound in Immersion and Emotion</subfield>
    <subfield code="c">Wrexham, United Kingdom</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Department of Mathematics, University of Padua</subfield>
    <subfield code="a">Carnovalini, Filippo</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Department of Information Engineering, University of Padua</subfield>
    <subfield code="a">Orio, Nicola</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Department of Information Engineering, University of Padua</subfield>
    <subfield code="a">Rodà, Antonio</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1069858</subfield>
    <subfield code="z">md5:8d7a18f39f7f2bfe792cba511aca0a74</subfield>
    <subfield code="u">https://zenodo.org/record/2537059/files/camera_ready_49.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u">http://audiomostly.com/</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2018-09-12</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-federicosimonetta</subfield>
    <subfield code="p">user-mir</subfield>
    <subfield code="o">oai:zenodo.org:2537059</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Department of Information Engineering, University of Padua</subfield>
    <subfield code="0">(orcid)0000-0002-5928-9836</subfield>
    <subfield code="a">Simonetta, Federico</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Symbolic Music Similarity Through a Graph-Based Representation</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-federicosimonetta</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-mir</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="a">Other (Not Open)</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;In this work, a novel representation system for symbolic music is described. The proposed representation system is graph-based and could theoretically represent music both from a horizontal (contrapuntal) and from a vertical (harmonic) point of view, by keeping into account contextual and harmonic information. It could also include relationships between internal variations of motifs and themes. This is achieved by gradually simplifying the melodies and generating layers of reductions that include only the most important notes from a structural and harmonic viewpoint. This representation system has been tested in a music information retrieval task, namely melodic similarity, and compared to another system that performs the same task but does not consider any contextual or harmonic information, showing how the structural information is needed in order to find certain relations between musical pieces. Moreover, a new dataset consisting of more than 5000 leadsheets is presented, with additional meta-musical information taken from different web databases, including author, year of first performance, lyrics, genre and stylistic tags.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">isbn</subfield>
    <subfield code="i">isPartOf</subfield>
    <subfield code="a">978-1-4503-6609-0</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="g">26:1--26:7</subfield>
    <subfield code="b">ACM</subfield>
    <subfield code="a">New York, NY, USA</subfield>
    <subfield code="z">978-1-4503-6609-0</subfield>
    <subfield code="t">Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1145/3243274.3243301</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
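
The record above is standard MARC21 "slim" XML, so any XML parser can extract its fields. As a minimal sketch using only Python's standard library (the `marc_fields` helper below is hypothetical, not part of any MARC tool), the title (tag 245) and keywords (tag 653) can be pulled out like this:

```python
import xml.etree.ElementTree as ET

# MARC21 slim namespace, as declared on the <record> element above.
MARC_NS = "{http://www.loc.gov/MARC21/slim}"

def marc_fields(xml_text, tag, code):
    """Yield the text of every subfield with the given code inside
    datafields carrying the given MARC tag."""
    root = ET.fromstring(xml_text)
    for field in root.iter(MARC_NS + "datafield"):
        if field.get("tag") == tag:
            for sub in field.iter(MARC_NS + "subfield"):
                if sub.get("code") == code:
                    yield sub.text

# Abridged copy of the record above, for illustration.
record = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Symbolic Music Similarity Through a Graph-Based Representation</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">music retrieval</subfield>
  </datafield>
</record>"""

print(list(marc_fields(record, "245", "a")))
print(list(marc_fields(record, "653", "a")))
```

The same helper works on the full export: tag 700 subfield `a` would yield the co-author names, and tag 024 subfield `a` the DOI.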