Video/Audio Open Access

Neural Touch-Screen Ensemble Performance 2017-07-03

Martin, Charles Patrick


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000ngm##2200000uu#4500</leader>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Martin, C., &amp; Swift, B. (2016). MetatoneClassifier: Research Prototype. Zenodo. http://doi.org/10.5281/zenodo.51712</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Martin, C. (2016). PhaseRings v1.2.0. Zenodo. http://doi.org/10.5281/zenodo.50860</subfield>
  </datafield>
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Martin, C., Gardner, H., &amp; Swift, B. (2015). Tracking ensemble performance on touch-screens with gesture classification and transition matrices. In Proceedings of the International Conference on New Interfaces for Musical Expression, NIME '15, pages 359–364. http://www. nime.org/proceedings/2015/nime2015_242.pdf</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">performance, music, touch-screen, improvisation, recurrent neural network</subfield>
  </datafield>
  <controlfield tag="005">20170908075552.0</controlfield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">This work is supported by The Research Council of Norway as a part of the Engineering Predictability with Embodied Cognition (EPEC) project, under grant agreement 240862.</subfield>
  </datafield>
  <controlfield tag="001">831910</controlfield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">248535508</subfield>
    <subfield code="z">md5:ed4904bb1d053d17cc9c953a3593962e</subfield>
    <subfield code="u">https://zenodo.org/record/831910/files/neural-touch-screen-session-2017-07-03.mp4</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2017-07-19</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="o">oai:zenodo.org:831910</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">University of Oslo</subfield>
    <subfield code="a">Martin, Charles Patrick</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Neural Touch-Screen Ensemble Performance 2017-07-03</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;A studio performance of an RNN-controlled Touch Screen Ensemble from 2017-07-03 at the University of Oslo.&lt;/p&gt;

&lt;p&gt;In this performance, a touch-screen musician improvises with a computer-controlled ensemble of three artificial performers. A recurrent neural network tracks the touch gestures of the human performer and predicts musically appropriate gestural responses for the three artificial musicians. The performances on the three 'AI' iPads are then constructed from matching snippets of previous human recordings. A plot of the whole ensemble's touch gestures is shown on the projected screen.&lt;/p&gt;

&lt;p&gt;This performance uses Metatone Classifier (https://doi.org/10.5281/zenodo.51712) to track touch gestures and Gesture-RNN (https://github.com/cpmpercussion/gesture-rnn) to predict gestural states for the ensemble. The touch-screen app used in this performance was PhaseRings (https://doi.org/10.5281/zenodo.50860).&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.831909</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.831910</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">video</subfield>
  </datafield>
</record>
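
For readers who want to work with this export programmatically, below is a minimal sketch that reads a few fields out of the record above using Python's standard library. The filename record.xml and the subfield() helper are assumptions for illustration; the tag and subfield codes (245$a for the title, 024$a for the DOI, 856$u for the file URL) are the ones visible in this record.

import xml.etree.ElementTree as ET

# MARC21 slim namespace, as declared on the <record> element above.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfield(record, tag, code):
    # Return the first subfield value for a given datafield tag, or None.
    for field in record.findall(f"marc:datafield[@tag='{tag}']", NS):
        sf = field.find(f"marc:subfield[@code='{code}']", NS)
        if sf is not None:
            return sf.text
    return None

record = ET.parse("record.xml").getroot()   # the <record> element above
print(subfield(record, "245", "a"))         # title
print(subfield(record, "024", "a"))         # DOI: 10.5281/zenodo.831910
print(subfield(record, "856", "u"))         # URL of the video file
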
                  All versions   This version
Views             52             51
Downloads         10             10
Data volume       2.5 GB         2.5 GB
Unique views      50             49
Unique downloads  10             10
