Published May 31, 2023 | Version v1
Conference paper | Open Access

Exploring the potential of interactive Machine Learning for Sound Generation: A preliminary study with sound artists

Description

Interactive Machine Learning (IML) is an approach that has previously been explored in the music domain. However, its application to sound synthesis as an algorithmic method of creation has not been examined. This article presents ASCIML, a prototype Assistant for Sound Creation with Interactive Machine Learning that allows musicians to use IML to create personalized datasets and generate new sounds. A preliminary study is also presented, which aims to evaluate the potential of ASCIML as a tool for sound synthesis and to gather feedback and suggestions for future improvements. The prototype runs in Google Colaboratory and is divided into four main stages: Data Design, Training, Evaluation, and Audio Creation. Results from the study, which involved 27 musicians with no prior knowledge of Machine Learning (ML), showed that most participants preferred microphone recording and synthesis to design their datasets, and that the Envelopegram visualization was found particularly meaningful for understanding sound datasets. Most participants also preferred to apply a pre-trained model to their data and relied on listening to the audio reconstruction provided by the interface to evaluate model performance. Overall, the study demonstrates the potential of ASCIML as a tool for hands-on neural audio synthesis and provides valuable insights for future developments in the field.

Files

nime2023_88.pdf (1.1 MB)
md5:5b11d167a38a9bcbf1698eeb9fcdc433