Published September 17, 2022 | Version v1
Conference paper | Open Access

Using raw audio neural network systems to define musical creativity

Creators

  • Department of Music, University of Leeds

Description

This paper uses the hacker duo Dadabots (who generate raw audio using SampleRNN) and OpenAI's Jukebox project (which generates raw audio using a hierarchical VQ-VAE transformer) as case studies to assess whether machines are capable of musical creativity, how that creativity operates, and whether this helps to define what musical creativity is. It also discusses how these systems can be useful to human creative processes. The findings from evaluating Dadabots' and OpenAI's work firstly demonstrate that our assumptions about musical creativity, in both humans and machines, revolve too strongly around symbolic models. Secondly, they suggest that what Boden describes as 'transformational creativity' can take place through unexpected machine consequences.

Files

Windswor_2022__Using_raw_audio_neural_network_systems_to_define_musical_creativity.pdf