Conference paper | Open Access
<?xml version='1.0' encoding='utf-8'?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:adms="http://www.w3.org/ns/adms#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dct="http://purl.org/dc/terms/"
         xmlns:dctype="http://purl.org/dc/dcmitype/"
         xmlns:dcat="http://www.w3.org/ns/dcat#"
         xmlns:duv="http://www.w3.org/ns/duv#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:frapo="http://purl.org/cerif/frapo/"
         xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
         xmlns:gsp="http://www.opengis.net/ont/geosparql#"
         xmlns:locn="http://www.w3.org/ns/locn#"
         xmlns:org="http://www.w3.org/ns/org#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xmlns:prov="http://www.w3.org/ns/prov#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:schema="http://schema.org/"
         xmlns:skos="http://www.w3.org/2004/02/skos/core#"
         xmlns:vcard="http://www.w3.org/2006/vcard/ns#"
         xmlns:wdrs="http://www.w3.org/2007/05/powder-s#">
  <rdf:Description rdf:about="https://doi.org/10.5281/zenodo.6576697">
    <dct:identifier rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://doi.org/10.5281/zenodo.6576697</dct:identifier>
    <foaf:page rdf:resource="https://doi.org/10.5281/zenodo.6576697"/>
    <dct:creator>
      <rdf:Description rdf:about="http://orcid.org/0000-0002-9268-4854">
        <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
        <dct:identifier rdf:datatype="http://www.w3.org/2001/XMLSchema#string">0000-0002-9268-4854</dct:identifier>
        <foaf:name>Engeln, Lars</foaf:name>
        <foaf:givenName>Lars</foaf:givenName>
        <foaf:familyName>Engeln</foaf:familyName>
        <org:memberOf>
          <foaf:Organization>
            <foaf:name>Technische Universität Dresden, Chair of Media Design, Dresden, Germany</foaf:name>
          </foaf:Organization>
        </org:memberOf>
      </rdf:Description>
    </dct:creator>
    <dct:creator>
      <rdf:Description rdf:about="http://orcid.org/0000-0002-5821-8016">
        <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
        <dct:identifier rdf:datatype="http://www.w3.org/2001/XMLSchema#string">0000-0002-5821-8016</dct:identifier>
        <foaf:name>Keck, Mandy</foaf:name>
        <foaf:givenName>Mandy</foaf:givenName>
        <foaf:familyName>Keck</foaf:familyName>
        <org:memberOf>
          <foaf:Organization>
            <foaf:name>University of Applied Sciences Upper Austria, Hagenberg, Austria</foaf:name>
          </foaf:Organization>
        </org:memberOf>
      </rdf:Description>
    </dct:creator>
    <dct:title>Exploring Sketch-based Sound Associations for Sonification</dct:title>
    <dct:publisher>
      <foaf:Agent>
        <foaf:name>Zenodo</foaf:name>
      </foaf:Agent>
    </dct:publisher>
    <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#gYear">2022</dct:issued>
    <dcat:keyword>sonification, mental model, sketching, auditory encoding, sound, visual encoding</dcat:keyword>
    <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2022-06-07</dct:issued>
    <owl:sameAs rdf:resource="https://zenodo.org/record/6576697"/>
    <adms:identifier>
      <adms:Identifier>
        <skos:notation rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://zenodo.org/record/6576697</skos:notation>
        <adms:schemeAgency>url</adms:schemeAgency>
      </adms:Identifier>
    </adms:identifier>
    <dct:isVersionOf rdf:resource="https://doi.org/10.5281/zenodo.6576696"/>
    <dct:isPartOf rdf:resource="https://zenodo.org/communities/audio-visual-analytics-community"/>
    <dct:description><p>The interpretation of sounds can lead to different associations and mental models based on a person&#39;s prior knowledge and experiences. Thus, cross-modal mappings such as verbalization or visualization of sounds can vary from person to person. Sonification complements visualization techniques, or offers possible alternatives to them, to support the interpretation of abstract data more effectively. Since sonifications usually map data attributes directly to auditory parameters, this may conflict with users&#39; mental models.</p> <p>In this paper, we analyze various sketch-based associations of sounds to better understand users&#39; mental models in order to derive understandable cross-modal correlations for sonification. Based on the analysis of different sketches from a previously conducted user study, we propose three semantic-auditory channels that can be used to encode abstract data.</p></dct:description>
    <dct:accessRights rdf:resource="http://publications.europa.eu/resource/authority/access-right/PUBLIC"/>
    <dct:accessRights>
      <dct:RightsStatement rdf:about="info:eu-repo/semantics/openAccess">
        <rdfs:label>Open Access</rdfs:label>
      </dct:RightsStatement>
    </dct:accessRights>
    <dct:license rdf:resource="https://creativecommons.org/licenses/by/4.0/legalcode"/>
    <dcat:distribution>
      <dcat:Distribution>
        <dcat:accessURL rdf:resource="https://doi.org/10.5281/zenodo.6576697"/>
        <dcat:byteSize>1482664</dcat:byteSize>
        <dcat:downloadURL rdf:resource="https://zenodo.org/record/6576697/files/AVI22_WAVA__Exploring Sketch-based Sound Associations for Sonification.pdf"/>
        <dcat:mediaType>application/pdf</dcat:mediaType>
      </dcat:Distribution>
    </dcat:distribution>
  </rdf:Description>
</rdf:RDF>
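The record above is a DCAT/RDF (RDF/XML) export. As a hedged illustration of how such an export can be consumed, the following standard-library Python sketch parses a trimmed-down copy of the record and extracts the title and creator names. The element paths used here are assumptions about the export layout, and the embedded snippet is a hypothetical minimal subset of the full record, not the complete export:

```python
# Sketch: reading a Zenodo DCAT/RDF export with only the standard library.
import xml.etree.ElementTree as ET

# Namespace prefixes used by the export (subset of the full declaration list).
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dct": "http://purl.org/dc/terms/",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

# Trimmed, hypothetical subset of the record shown above.
rdf_xml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dct="http://purl.org/dc/terms/"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <rdf:Description rdf:about="https://doi.org/10.5281/zenodo.6576697">
    <dct:title>Exploring Sketch-based Sound Associations for Sonification</dct:title>
    <dct:creator>
      <rdf:Description><foaf:name>Engeln, Lars</foaf:name></rdf:Description>
    </dct:creator>
    <dct:creator>
      <rdf:Description><foaf:name>Keck, Mandy</foaf:name></rdf:Description>
    </dct:creator>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(rdf_xml)

# Pull the title, then the foaf:name nested inside each dct:creator node.
title = root.find(".//dct:title", NS).text
creators = [c.find(".//foaf:name", NS).text
            for c in root.findall(".//dct:creator", NS)]

print(title)
print(creators)
```

Note that `ElementTree.find` accepts a prefix-to-URI map, so the namespace-qualified paths stay readable; with the full export, organization names would also match `foaf:name`, so scoping the lookup under each `dct:creator` node (as above) is what keeps creators separate from affiliations.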
| | All versions | This version |
|---|---|---|
| Views | 177 | 177 |
| Downloads | 48 | 48 |
| Data volume | 71.2 MB | 71.2 MB |
| Unique views | 167 | 167 |
| Unique downloads | 42 | 42 |