
Conference paper Open Access

Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces

Roma, Gerard; Green, Owen; Tremblay, Pierre Alexandre

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <controlfield tag="005">20200218192058.0</controlfield>
  <controlfield tag="001">3672976</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="a">International Conference on New Interfaces for Musical Expression</subfield>
    <subfield code="c">Porto Alegre, Brazil</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Green, Owen</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Tremblay, Pierre Alexandre</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1525534</subfield>
    <subfield code="z">md5:2472bc126e4591679e4b3f0479b3ab1c</subfield>
    <subfield code="u">https://zenodo.org/record/3672976/files/nime2019_paper060.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-06-01</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-nime_conference</subfield>
    <subfield code="o">oai:zenodo.org:3672976</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Roma, Gerard</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-nime_conference</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.3672975</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.3672976</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
                   All versions   This version
Views                       116            116
Downloads                    58             58
Data volume             88.5 MB        88.5 MB
Unique views                103            103
Unique downloads             53             53
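The abstract describes a pipeline for building interactive sound spaces: extract features from each sound in a collection, then reduce the feature set to a low-dimensional space for interaction. The paper's own implementation is a SuperCollider library; the following is only a minimal Python sketch of that general idea, assuming averaged FFT magnitude spectra as a stand-in for learned features and PCA (via SVD) as the dimensionality-reduction step. The function name `sound_space` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def sound_space(sounds, n_fft=1024, n_dims=2):
    """Map a collection of 1-D audio signals to n_dims coordinates.

    Each sound is described by its time-averaged FFT magnitude
    spectrum; the collection is then reduced with PCA computed
    from the SVD of the centred feature matrix.
    """
    feats = []
    for x in sounds:
        # frame the signal into non-overlapping windows of n_fft samples
        n = (len(x) // n_fft) * n_fft
        frames = x[:n].reshape(-1, n_fft)
        # average magnitude spectrum over all frames
        spec = np.abs(np.fft.rfft(frames, axis=1))
        feats.append(spec.mean(axis=0))
    X = np.array(feats)
    # PCA: centre the features, project onto the top singular vectors
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_dims].T

# toy collection: three sine tones an octave apart at 44.1 kHz
sr = 44100
sounds = [np.sin(2 * np.pi * f * np.arange(sr) / sr) for f in (220, 440, 880)]
coords = sound_space(sounds)
print(coords.shape)  # (3, 2): one 2-D coordinate per sound
```

In an interactive setting, the resulting coordinates would be drawn on screen and used to trigger playback of the nearest sound, which is the kind of interface the paper compares across algorithms.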

