Conference paper Open Access
Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0

Bernardes, Gilberto

Editors: Michon, Romain; Schroeder, Franziska
Publisher: Zenodo
Publication date: 2020-06-01
DOI: 10.5281/zenodo.4813176 (this version); 10.5281/zenodo.4813175 (all versions)
Part of: NIME conference proceedings (ISSN 2220-4806), https://zenodo.org/communities/nime_conference
Record URL: https://zenodo.org/record/4813176
License: Creative Commons Attribution 4.0 International (Open Access)

Abstract

Audio content-based processing has become a pervasive methodology for techno-fluent musicians. System architectures typically create thumbnail audio descriptions, based on signal-processing methods, to visualize, retrieve and transform musical audio efficiently. Towards enhanced usability of these descriptor-based frameworks for the music community, the paper advances a minimal content-based audio description scheme, rooted in primary musical notation attributes at three structural hierarchies: the sound object, meso and macro levels. Multiple perceptually guided viewpoints drawn from rhythmic, harmonic, timbral and dynamic attributes define a discrete, finite alphabet with minimal formal and subjective assumptions, using unsupervised and user-guided methods. The Factor Oracle automaton is then adopted to model and visualize temporal morphology. The generative musical applications enabled by the descriptor-based framework at multiple structural hierarchies are discussed.
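The abstract names the Factor Oracle automaton as the model of temporal morphology built over the finite descriptor alphabet. The paper itself is not reproduced in this record, so the sketch below only illustrates the standard online Factor Oracle construction (Allauzen, Crochemore and Raffinot) over a hypothetical sequence of sound-object cluster labels, plus a toy generative walk; the `labels` sequence, the `continuity` parameter and the walk itself are illustrative assumptions, not earGram 2.0's implementation.

```python
import random


def build_factor_oracle(symbols):
    """Online Factor Oracle construction (Allauzen, Crochemore & Raffinot, 1999).

    States 0..len(symbols); state i is reached after reading symbols[:i].
    Returns forward transitions (one dict per state) and suffix links.
    """
    n = len(symbols)
    trans = [dict() for _ in range(n + 1)]  # trans[state][symbol] -> next state
    sfx = [-1] * (n + 1)                    # suffix link of the initial state is -1

    for i, sym in enumerate(symbols, start=1):
        trans[i - 1][sym] = i               # factor transition along the original sequence
        k = sfx[i - 1]
        # Walk back along suffix links, adding forward jumps for every earlier
        # state that cannot yet read this symbol.
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sym]
    return trans, sfx


def generate(symbols, trans, sfx, length=16, continuity=0.7, seed=None):
    """Toy generative walk: mostly continue forward along the sequence,
    occasionally recombine by jumping through a suffix link."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(length):
        if state < len(symbols) and (rng.random() < continuity or sfx[state] <= 0):
            out.append(symbols[state])      # continuation: emit the next original symbol
            state += 1
        else:
            state = sfx[state]              # recombination jump to a repeated context
    return out


if __name__ == "__main__":
    # Hypothetical alphabet of sound-object cluster labels (e.g. obtained by
    # clustering rhythmic/harmonic/timbral/dynamic descriptor vectors).
    labels = list("ABBACABCA")
    trans, sfx = build_factor_oracle(labels)
    print("suffix links:", sfx)
    print("variation:", "".join(generate(labels, trans, sfx, length=12, seed=3)))
```

In a walk like this, following factor transitions replays the original label sequence, while suffix-link jumps branch to states sharing a repeated context, which is one common way such an automaton supports generative recombination of analysed material.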
| | All versions | This version |
|---|---|---|
| Views | 67 | 67 |
| Downloads | 28 | 28 |
| Data volume | 40.7 MB | 40.7 MB |
| Unique views | 48 | 48 |
| Unique downloads | 25 | 25 |