Conference paper Open Access

Exploring Sketch-based Sound Associations for Sonification

Engeln, Lars; Keck, Mandy

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">sonification, mental model, sketching, auditory encoding, sound, visual encoding</subfield>
  </datafield>
  <controlfield tag="005">20220524135047.0</controlfield>
  <controlfield tag="001">6576697</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">7 June 2022</subfield>
    <subfield code="g">WAVA22</subfield>
    <subfield code="a">AVI 2022 Workshop on Audio-Visual Analytics</subfield>
    <subfield code="c">Frascati, Rome, Italy</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Applied Sciences Upper Austria, Hagenberg, Austria</subfield>
    <subfield code="0">(orcid)0000-0002-5821-8016</subfield>
    <subfield code="a">Keck, Mandy</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1482664</subfield>
    <subfield code="z">md5:edbd396f558e0eb5f5ee3ea3e06d3499</subfield>
    <subfield code="u">Sketch-based Sound Associations for Sonification.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2022-06-07</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-audio-visual-analytics-community</subfield>
    <subfield code="o"></subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Technische Universität Dresden, Chair of Media Design, Dresden, Germany</subfield>
    <subfield code="0">(orcid)0000-0002-9268-4854</subfield>
    <subfield code="a">Engeln, Lars</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Exploring Sketch-based Sound Associations for Sonification</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-audio-visual-analytics-community</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u"></subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2"></subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;The interpretation of sounds can lead to different associations and mental models depending on a person&amp;#39;s prior knowledge and experience. Thus, cross-modal mappings such as the verbalization or visualization of sounds can vary from person to person. Sonification can complement visualization techniques, or serve as an alternative to them, to support the interpretation of abstract data more effectively. However, since sonifications usually map data attributes directly to auditory parameters, they may conflict with users&amp;#39; mental models.&lt;/p&gt;

&lt;p&gt;In this paper, we analyze various sketch-based associations of sounds to better understand users&amp;#39; mental models and to derive understandable cross-modal correlations for sonification. Based on the analysis of sketches from a previously conducted user study, we propose three semantic-auditory channels that can be used to encode abstract data.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.6576696</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.6576697</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
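A record exported in this form can be read with any XML parser. A minimal Python sketch, using only the standard library and assuming the standard MARC21 slim namespace (shown here on an abbreviated copy of the record above):

```python
# Minimal sketch: extracting fields from a Zenodo-style MARC21 XML export.
# Assumes the standard MARC21 slim namespace; adjust NS if the record
# declares a different one.
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

# Abbreviated copy of the record above, for illustration.
RECORD = """<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Exploring Sketch-based Sound Associations for Sonification</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.6576697</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
</record>"""

def subfield(root, tag, code):
    """Return the first matching subfield value, or None."""
    path = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    el = root.find(path, NS)
    return el.text if el is not None else None

root = ET.fromstring(RECORD)
title = subfield(root, "245", "a")  # MARC 245$a: title statement
doi = subfield(root, "024", "a")    # MARC 024$a: other standard identifier
print(title)
print(doi)
```

The same `subfield` helper works for any `tag`/`code` pair in the full record, e.g. `("711", "a")` for the meeting name or `("100", "a")` for the first author.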