Conference paper · Open Access
<?xml version="1.0" encoding="utf-8"?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns="http://datacite.org/schema/kernel-4"
          xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.6576697</identifier>
  <creators>
    <creator>
      <creatorName>Engeln, Lars</creatorName>
      <givenName>Lars</givenName>
      <familyName>Engeln</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-9268-4854</nameIdentifier>
      <affiliation>Technische Universität Dresden, Chair of Media Design, Dresden, Germany</affiliation>
    </creator>
    <creator>
      <creatorName>Keck, Mandy</creatorName>
      <givenName>Mandy</givenName>
      <familyName>Keck</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-5821-8016</nameIdentifier>
      <affiliation>University of Applied Sciences Upper Austria, Hagenberg, Austria</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Exploring Sketch-based Sound Associations for Sonification</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>sonification</subject>
    <subject>mental model</subject>
    <subject>sketching</subject>
    <subject>auditory encoding</subject>
    <subject>sound</subject>
    <subject>visual encoding</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-06-07</date>
  </dates>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/6576697</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.6576696</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/audio-visual-analytics-community</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">The interpretation of sounds can lead to different associations and mental models depending on a person's prior knowledge and experiences. Thus, cross-modal mappings such as the verbalization or visualization of sounds can vary from person to person. Sonification complements visualization techniques, or offers possible alternatives to them, to support the interpretation of abstract data more effectively. Since sonifications usually map data attributes directly to auditory parameters, this may conflict with users' mental models.

In this paper, we analyze various sketch-based associations of sounds to better understand users' mental models and to derive understandable cross-modal correlations for sonification. Based on the analysis of sketches from a previously conducted user study, we propose three semantic-auditory channels that can be used to encode abstract data.</description>
  </descriptions>
</resource>
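For readers who want to use this record programmatically, the following is a minimal sketch that extracts the DOI, title, and creators from the DataCite XML above using only the Python standard library. The filename `datacite.xml` is a hypothetical local copy of the export; the namespace URI is the kernel-4 namespace declared in the record itself.

```python
import xml.etree.ElementTree as ET

# DataCite kernel-4 default namespace, as declared on the <resource> element above
NS = {"dc": "http://datacite.org/schema/kernel-4"}

# Hypothetical local copy of the XML export shown above
tree = ET.parse("datacite.xml")
root = tree.getroot()

doi = root.findtext("dc:identifier", namespaces=NS)
title = root.findtext("dc:titles/dc:title", namespaces=NS)
creators = [
    c.findtext("dc:creatorName", namespaces=NS)
    for c in root.findall("dc:creators/dc:creator", NS)
]

print(doi)       # 10.5281/zenodo.6576697
print(title)     # Exploring Sketch-based Sound Associations for Sonification
print(creators)  # ['Engeln, Lars', 'Keck, Mandy']
```

Because every child element lives in the DataCite default namespace, the namespace mapping (here bound to the `dc:` prefix purely for the XPath queries) is required; unprefixed paths such as `titles/title` would match nothing.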
| | All versions | This version |
|---|---|---|
| Views | 177 | 177 |
| Downloads | 48 | 48 |
| Data volume | 71.2 MB | 71.2 MB |
| Unique views | 167 | 167 |
| Unique downloads | 42 | 42 |