Technology, Creativity and The Social in Algorithmic Music

It is probably an understatement to say that algorithmic music does not normally conjure an image of music as a social practice. Although endowed with a vast body of literature relative to its scale as a genre, the social ecologies that sustain it— the audiences or publics that listen to and discuss algorithmic music; the industries that provide for its production and dissemination; the social practices that are central to how algorithmic music is learned, practised, and circulated; the cultures and politics of algorithms and computers as technologies— are rarely discussed. Iannis Xenakis's Formalized Music (1992), possibly the canonical book on the subject, is characteristic in its disciplinary sweep: music theory, mathematics, computer science, and philosophy form the core framework, while the social and historical determinants of algorithmic music are evaded. This constellation of uneven forces reflects a certain self-understanding of algorithmic music that goes beyond discourse, participating in the aesthetics of the genre itself. When, in the late 1990s, algorithmic composition made its way into popular electronic music, it was a machinic aestheticism that prevailed: in sound, accompanying artwork, interviews, and promotional literature, nonrepresentational imagery was favoured over pictures of the musicians, with artists favouring cryptic cyborgian monikers over their real names. Ben Watson's admonishment of 'laptop cool' for sublating 'the contribution of human labour' into a Romantic aesthetics of the sublime therefore captured some of the underlying rationality of laptop and algorithmic music;1 however, the critique is not new. Algorithmic music inherits an ethics and aesthetics that finds its fullest expression in 'absolute music' and the idea that art and aesthetics might exist
OUP UNCORRECTED PROOF – FIRSTPROOFS, Tue Oct 17 2017, NEWGEN

And this shift can be seen as a subset of wider changes in the way the social has been conceived and studied. To put it in Bruno Latour's terms, it consists in a move from a 'sociology of the social' to a 'sociology of associations'; from various shades of social determinism to a theory of social mediation, one that takes seriously the contribution of 'nonhuman actors'. Many writers besides Born have taken this direction in cultural musicology, among them Tia DeNora (2000), who has analysed music as a 'technology of the self' in everyday life and the management of subjectivity; Antoine Hennion (2010), who has explored popular music and the mediation of taste; Benjamin Piekut (2011, 2014), whose work on 'actually existing' experimentalism has traced the path of intermediaries, cultural operators, and even books in 1960s London and New York; and Nick Prior (2008), who (not far from the concerns of this chapter) has studied electronic music genres such as 'Glitch' and the way in which breakdown, error, and misuse end up affording new and unforeseen uses for musicians. These writers' positions have, to varying extents, developed alongside and in critical dialogue with the body of literature associated with Actor-Network Theory (ANT), and so it is from here that I will depart in this chapter. An important theory of mediation,2 ANT famously and controversially grants agency to 'things', a maxim that effectively translates into a refusal to privilege human intention when analysing how social assemblages cohere and endure. Being predicated on the 'alien' agencies of algorithms, algorithmic music therefore represents fertile ground for an ANT-style analysis. Indeed, it affords more than just analysis of creative practices, offering insights into the dynamic relationship between industries, technologies, and musical action, as well as into the ways in which the sometimes fragile assemblages of genre hold together.
Yet while providing a useful framework by which to analyse certain facets of algorithmic music, ANT also suffers from some important shortcomings. It fails to offer a means of understanding how some actors become more powerful than others, and it is equally blind to the role played by time and history in these processes (cf. Piekut 2014). For these reasons, ANT provides only a partial account of the social world it purports to analyse, and it is why I turn to other theories of mediation later in the chapter.
What follows is an introduction to ANT followed by two case studies: first, the algorithmic music pioneers The Hub and, second, the live-coding community. The case studies serve to illuminate the affordances and constraints of ANT: both extending the analysis of these algorithmic music scenes by recourse to ANT and, through the analyses, illuminating some of ANT's limits.

Actor-Network Theory: A Sociology of Mediation
The first thing to say about ANT is that it does not represent a coherent body of theory or philosophical system; it is instead a negative methodology for studying the social world. Although it does come furnished with a bare set of conceptual tools for understanding social mediation (what Latour 1988 has called its 'infralanguage'), Latour deems it crucial, for 'scientific, political, and even moral reasons', that 'enquirers do not in advance, and in place of the actors, define what sorts of building blocks the social world is made of' (2005, 41). As such, ANT advances a potentially maddening scepticism towards any and all pre-made abstractions. 'Knowledge', 'science', 'languages', 'theories', 'the social', and 'capitalism' all come in for the charge of 'not existing' in the vast and dispersed literature on the subject (Law 2000), a move that is intended to puncture the false stability and stasis that they confer on the social world. In the ANT view, the world that actors inhabit does not lie there passively, continuing into the future unchanged. Rather, it is actively 'performed', 'negotiated', and 'made' by those actors that support its continuing functioning. Now, there is a sense in which this shift from the passive to the active and performative follows the pragmatic approach to art worlds advanced by the earlier-cited Howard Becker, who described 'systems of collective action' in which the individuals necessary to the world's functioning work collectively to produce things they call art (1976, 704).
In a passage that could very nearly have come from the pen of an ANT theorist, Becker gives a list of actors necessary for a symphony orchestra concert to take place:

For a symphony orchestra to give a concert, for instance, instruments must have been invented, manufactured, and maintained, a notation must have been devised and music composed using that notation, people must have learned to play the notated notes on the instruments, times and places for rehearsal must have been provided, ads for the concert must have been placed, publicity must have been arranged and tickets sold, and an audience capable of listening to and in some way understanding and responding to the performance must have been recruited. (Becker 1984, 3)

In Becker's account, 'music' is the accomplishment of a collective endeavour in which the people and things that are necessary to the functioning of that world come together to produce it. The art thing is therefore not circumscribed in advance, with some worlds worthy of consideration and others not; rather, it is the entity that results from this dynamic system of interactions. Indeed, Becker even includes nonhuman actors in his account: the invention, manufacture, and upkeep of instruments, and the necessity of a system of notation, all enter into the mix, interacting with human actors to assemble and perform the social.
Differentiating this from an ANT-style analysis may at first seem like nitpicking, but it is important. Simply put, ANT recognizes the two-way exchange that occurs when humans and nonhumans interact. Humans may delegate tasks and roles to objects, a feature recognized by Becker, but this is never the uncomplicated transfer of action it seems to be. An action is always mediated (translated, in the ANT terminology) by the elements it comes into contact with, which means that the resulting effect cannot be wholly reduced to the person, or group of persons, that set it in motion. ANT therefore raises the status of objects from their common role, evident in Becker, as uncomplicated 'carriers' or 'transporters' of a determinate action. Instead they become collaborators and mediators: entities endowed with the capacity to change the course of events.

Skill, Agency, Creativity, Technology
By way of demonstration, let us consider Thor Magnusson's 'Confessions of a Live Coder' (2011), in which he gives an autoethnographic account of his own learning processes when taking on a new programming language.3 Magnusson is a computer musician whose favoured programming language is the object-oriented environment SuperCollider. In 2007 he began the process of learning a new environment, Andrew Sorensen's Impromptu, in order to answer two questions, one practical and one methodological: first, how would the shift from an 'object-oriented' to a 'functional' programming language influence his own computer music practice; and second, how is it possible to reflect upon and analyse the process of 'technological conditioning' incurred along the way in such a way as to provide shared insights for the community? Now, the view implicitly propagated by most computer music texts is that this process of skill acquisition would make no difference at all. The software implementation of the terse mathematical equations and signal flow diagrams that we find in periodicals like Computer Music Journal is usually left up to the individual user, suggesting strongly that, whether implemented in SuperCollider, Csound, or something else, an FM synthesizer is an FM synthesizer: it is the signal-processing mathematics, computer processor, audio card, loudspeaker equipment, and room acoustics that count, because the software itself exerts no audible agency.4 Yet as Magnusson notes, even 'secondary' aesthetic differences from environment to environment make a difference to the reader or writer of the code. Text inlining, syntax colourization, capitalization, font size, and special symbols all 'begin to condition how the artist thinks' (2011, 3). Getting even further from function, Magnusson finds that the very discursive and social culture one enters into when learning a new programming language does its own work, enhancing or suppressing one's engagement with it.
The extent to which a community readily shares projects and code or remains secretive, whether or not it helps new users, and the clarity and availability of documentation and help files all participate in the success or failure of the individual user learning to operate it.
Magnusson's account challenges the 'neutrality' thesis of technology by showing how both the functional and the aesthetic aspects of a programming language, as well as the discursive community in which it is enmeshed, influenced his music making. He finds that thinking in terms of the 'flow' and 'process' of the Impromptu language, as opposed to the 'objects', 'prototypes', 'properties', and 'methods' of the SuperCollider one, changed the way he worked with melody in his music. 'I would write dynamic functions to populate lists with note values and recursively through other functions, empty those lists during playing, until they needed populating again. There was never a static entity one could denote as the piece's "melody"' (Magnusson 2011, 5). Later, Magnusson discovers that the process affords him larger-scale insights into distinct communities of practice that centre on music software: 'SuperCollider users focus largely on synthesis, signal processing, and generative audio, Impromptu users operate more on the more traditional compositional level (sic)' (5).
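Magnusson's description of melody-as-process can be sketched in outline. The following is a minimal illustration in Python rather than Impromptu's Scheme dialect, and the function names and scale are hypothetical, not Magnusson's own code; it shows only the general idea of note lists that are recursively populated and emptied during playing:

```python
import random

SCALE = [60, 62, 64, 67, 69]  # hypothetical pentatonic scale, as MIDI note numbers

def populate(notes, length=8):
    """Recursively fill the note list with values drawn from the scale."""
    if length == 0:
        return notes
    notes.append(random.choice(SCALE))
    return populate(notes, length - 1)

def next_note(notes):
    """Pop the next note, repopulating the list when it runs dry.
    At no point is there a static object one could call 'the melody':
    it exists only as the trace of these interacting functions."""
    if not notes:
        populate(notes)
    return notes.pop(0)

notes = []
phrase = [next_note(notes) for _ in range(20)]  # twenty notes, no fixed melody
```

Each call to `next_note` consumes the list and triggers a fresh `populate` when it is empty, so the 'piece' is a standing pattern of function calls rather than a stored sequence.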
The effects described here are clearly not reducible to the operator's own purposeful intention. Indeed, Magnusson actually quotes Latour in the article, imagining his acquisition of new software-related skills as giving birth to a 'new kind of hybrid, making fresh creative decisions' (Magnusson 2011, 3). In other words, SuperCollider and Impromptu became 'actors' in the ANT sense. The universe of functions and sound generators that they offered, the visual representations that they provided, the modes of interaction they afforded, the support communities that built up around them, and the musician-programmer who navigated them all colluded and interacted to produce something unique; it would not have been the same were it Max or Csound.

Nonhuman Agency
Now, to return to ANT, this 'something', a distributed action set in motion across a network of people and things and irreducible to any single one, is what ANT understands as 'agency'. In what perhaps amounts to ANT's strongest ontological claim, agencies that do not correspond to actual effects are rejected entirely. If an actor is not producing socially available traces and information then, according to Latour, 'it is invisible and nothing can be said about it' (2005, 31). If, on the other hand, 'it is visible, then it is being performed and will then generate new and interesting data' (31). It should be clear that this bare methodological axiom, an insistence on performance as the minimal condition for agency, immediately reduces in priority and import the 'human-ness' or 'nonhuman-ness' of the respective agent. So long as it makes a difference, and the course of events would be significantly different were it removed, then it does not matter who or what an agent is presumed to be. To this end, ANT deliberately fosters uncertainty about the 'full' nature of an actor.5 It is the means through which given assumptions about who or what can 'count' as an agent (a computer musician rather than the software she uses) can be left behind, and 'true' empirical inquiry can proceed.
But does this minimal notion of agency not leave ANT theorists in danger of representing a world in which people and things become interchangeable, with no distinguishing characteristics assigned to either? This remains the most common criticism of ANT: its seemingly amoral stance. But whilst the rhetoric can be overweening, this charge tends to be born of a misunderstanding. As Latour writes, granting agency to objects does not mean that these participants determine action, only that:

there might exist many metaphysical shades between full causality and sheer inexistence. In addition to 'determining' and serving as a 'backdrop for human action', things might authorize, allow, afford, encourage, permit, suggest, influence, block, render possible, forbid, and so on. (Latour 2005, 71)

So whilst a nonhuman actor can actually be more powerful or significant than a human one, its agency is not to be understood as isomorphic with human agency. Indeed, as Sayes points out, ANT offers no general theory of agency at all; to reiterate, it aims to provide a negative methodology rather than a substantive theory (2014, 142).
Ultimately, the goal of an ANT analysis is to produce richer empirical investigation into the precise nature of the human-technical ensembles that manifest actions, without foreclosing their nature in advance. 6 Brought to bear on algorithmic music, this allows us to engage in a serious and thorough way with the mutable instruments, changing technological infrastructures, self-sustaining music systems, and laptop crashes that participate in and shape its social environment.

A Historical Case Study: The Hub
In what follows, I bring an ANT analysis to bear on the practices of The Hub, pioneers of algorithmic music and the first computer network music group.
Emerging out of the avant-garde music scene of the San Francisco Bay Area, The Hub is in many ways an archetypal product of the region's distinctive mix of high-tech research and bohemianism. The group embodies principles of antihierarchical organization and collectivism whilst at the same time occupying a precarious space right at the vanguard of new technology adoption. Their name was conceived as a generic placeholder for a dynamic constellation of people, things, and processes. It names at least three components: (1) the composer-performers associated with the project, including Scot Gresham-Lancaster, Mark Trayle, John Bischoff, Chris Brown, and Phil Stone; (2) the hardware and software that they used; and (3) the practice of generating shared information which underlay their work. Clearly this managed uncertainty between people, things, and processes, all drawn together by a concept of 'network', bears more than a passing resemblance to ANT; yet, commensurate with the ideas of the time, it was the conceptual armature of cybernetics and information theory that informed The Hub's practice.7 Gresham-Lancaster associates the very origins of the group with a technological and economic development: the advent of MIDI, which had 'a major impact, enabling often-impoverished performers/composers to utilize these new, affordable instruments' (1998, 41). In the early Hub performances, the group utilized a blackboard system for sharing data between the distributed computers. A central memory space housed the active components of the piece, which each computer was able to access remotely. This determined the style of communication between computers and, hence, the form of their interactions. One-to-one communication was not possible; instead, all contributed to, and drew from, a shared data resource.
A paradigmatic example of this period in The Hub's history is the piece Perry Mason in East Germany. In it, each of the six members of The Hub runs a program that constitutes a self-sustaining musical process, but which is able to send variables to, and receive them from, the shared memory in order to control one another's programs. As Gresham-Lancaster notes (1998, 42), these were completely asynchronous interactions. The lack of a shared clock led the group in the direction of a more procedural approach, in the tradition of Cage and Tudor.
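The blackboard architecture described above can be reduced to a very simple schema. The following Python sketch is a toy model, not The Hub's actual software (which ran on early microcomputers over MIDI); the class and variable names are invented for illustration. What it captures is the structural constraint: players never address one another directly, but only post to and read from a central store:

```python
class Blackboard:
    """A toy model of the central memory space: players post and read
    named variables; one-to-one messaging is impossible by design."""
    def __init__(self):
        self._store = {}

    def post(self, key, value):
        self._store[key] = value

    def read(self, key, default=None):
        return self._store.get(key, default)


class Player:
    """A self-sustaining process that draws its control data from the
    blackboard rather than from any particular peer."""
    def __init__(self, name, board):
        self.name = name
        self.board = board

    def step(self):
        # Read a shared variable, then publish a derived value of our own.
        tempo = self.board.read('tempo', 120)
        self.board.post(self.name + '_pulse', tempo * 2)
        return tempo


board = Blackboard()
players = [Player('p%d' % i, board) for i in range(6)]  # six members, as in the piece
board.post('tempo', 90)
tempi = [p.step() for p in players]  # every player sees the same shared state
```

Because every exchange passes through the shared store, each player's influence on the others is indirect and asynchronous, which is precisely the interactional form the piece exploits.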
When Opcode Systems released their Studio 5 MIDI interface, the group opted to redesign The Hub around this new system. Each participant in the network could now directly 'play' the set-up of any other participant, which had not been possible previously. The new Hub was a decentralized peer-to-peer network, which granted more autonomy to each player and allowed more direct interaction among them. Waxlips, a piece composed by Tim Perkis, is considered the canonical work of this period. Here, the prewritten algorithms of Perry Mason in East Germany are gone: the network interaction is reduced to its most fundamental and basic form so as to allow the emergent structure to be revealed more clearly.
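The contrast with the blackboard model can be made concrete with another toy sketch, again a hypothetical illustration in Python rather than anything resembling the group's Studio 5 configuration. Here each participant holds direct references to the others and can trigger a peer's set-up without any central intermediary:

```python
class Peer:
    """A toy peer in a decentralized network: each participant holds
    direct references to the others and can 'play' their set-up."""
    def __init__(self, name):
        self.name = name
        self.peers = {}
        self.received = []

    def connect(self, other):
        # Symmetric link: both sides learn of each other.
        self.peers[other.name] = other
        other.peers[self.name] = self

    def play(self, target, message):
        # Direct one-to-one triggering -- impossible under the blackboard
        # model, where all data had to pass through shared memory.
        self.peers[target].receive(self.name, message)

    def receive(self, sender, message):
        self.received.append((sender, message))


a, b = Peer('a'), Peer('b')
a.connect(b)
a.play('b', ('note_on', 60))  # a directly triggers b's set-up
```

The shift from the first sketch to this one mirrors the shift in The Hub's practice: from a hierarchical client/server topology to a 'flat' network in which interaction itself, rather than shared data, becomes the compositional material.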
Gresham-Lancaster notes the precariousness of this dynamic media ecology. Utilizing 'the new possibilities the changing technological context brought to the work' whilst also maintaining a repertoire of works is depicted as a fragile balancing act, with 'the shifting context of hardware and software constantly [updating] the sound of the ensemble' (Gresham-Lancaster 1998, 43). In this sense, The Hub dramatizes the essential 'problem' that ANT tries to solve: how to understand innovation and organization without resorting to accounts that portray either technological development or society as the primary driver of change, cancelling out the respective other. Each new innovation is typically preceded by a change in hardware or software that, in most instances, radically transforms the way the members conceptualize their compositions and organize their interactions. A hierarchical client/server architecture, where all interaction is mediated by a central data resource, is replaced by a 'flat' peer-to-peer network that allows direct intercommunication, the latter having direct and irreversible effects on the sound. However, not all of the system updates The Hub implemented took hold. Both Matthew Wright (2005) and Gresham-Lancaster have independently written of a failed attempt to create a Hub based on Open Sound Control (OSC) to perform over the Internet. Here, the problem was twofold: the new OSC-based system was so complex that the group was 'unable to reach a satisfactory point of expressivity', and the wider network of the Internet required different strategies and aesthetics than The Hub's creative methods afforded. Rejecting the update led to a reinforced sense of who The Hub 'is': a computer network music group with the 'form and function of a conventional musical ensemble' (Gresham-Lancaster 1998, 44).

The Problem of Agency in Technologically Mediated Music Making
We start to see now why Latour and the ANT theorists object to reified categories. What Gresham-Lancaster's account displays very concretely is the sheer dynamism, hybridity, and, at times, instability of the ensemble of players, software and hardware systems, telecommunication protocols, and other entities that the moniker 'The Hub' forecloses. When the group moved from the blackboard system to the MIDI hub, the previous repertoire was left more or less obsolete and an entirely new set of material had to be produced, based on a different model of interaction. At the risk of overemphasizing the notion of nonhuman agency, we might compare The Hub's technological revisions to the cycles of change and renewal in line-up that rock bands can undergo whilst maintaining the same moniker; to paraphrase Latour, 'change an element in the network and you change the actor' (Latour et al. 2012, 593). However, without an analysis of the specific agencies that assemble and supervise the new network, we end up with the rather banal observation that every element in the chain produces effects: a Kinect sensor is different from a computer keyboard, which is in turn different from a MIDI keyboard, and so on. These immediate mediations are important in providing a materialist account of creativity, but the risk inherent in ANT and related approaches is that they are taken to comprise the entire nexus of possible mediations. Absent from the analysis are the larger-scale commercial and political dynamics that sustain the ecology of electronic music making, but whose logics of change and development are dictated by markets, technical standards, and other nonmusical agencies. Gresham-Lancaster's account portrays a constant negotiation between two poles of mediation; as Agostino Di Scipio (1995, 37) puts it, formulating the problem in question form: '[H]ow can I use the available existing task-environment to realize my own idea of composition?'
or 'How can I design the tools that are necessary to realize my own idea of composition?' No doubt all musical practice falls somewhere between these two poles, rather than at one or the other, but it is clear that, in the pre-Hub days of the League of Automatic Music Composers, the group slid towards a largely self-maintained paradigm, whereby bespoke, self-authored tools were produced to realize their own idea of composition: '[E]ach new piece conform[ed] to a uniquely designed software/hardware configuration'. However, with the adoption of Opcode Systems' Studio 5 interface they moved towards the use of an 'existing task environment', enjoying the 'simplicity and clarity' that the changing technological context brought to the work (Gresham-Lancaster 1998, 40-43), but sacrificing agency should this interface be changed or discontinued. The negotiation was therefore between an infinitely reconfigurable set of techniques devised and maintained by the artists themselves, and the standardization of techniques in technical systems whose preservation and development is 'autonomous' (that is, not commensurate with the immediate creative goals of The Hub).
To probe this 'problem of agency' further, and the challenges it poses to analysis, I want to turn to a controversy that briefly surrounded the music notation software Sibelius. In 2012, the software's community of users rose up against the Avid technology company in the wake of the closure of the company's UK offices. Fearing the discontinuation of the software they knew and loved, they petitioned the company to sell it back to the two developers who originally wrote the program. 'Sibelius is far more than just code, it lives and breathes in the hearts and minds of its inventors and developers. Remove them, and Sibelius eventually becomes roadkill', they wrote (Williams 2015). From the standpoint of the original authors of Sibelius, or the community of users that speak in their name, the software's transformation beyond its original intention and its eventual decline was an obvious failure. However, looked at disinterestedly, from the largely managerial perspective advanced by Latour and ANT, what we have is simply a case of the network 'growing' in directions that exceed the will of the developers and user base.8 As new, stronger actors (the Avid company) pursuing independent interests are enrolled within it, the network drifts. What the signatories of the 'Sell Sibelius' cause were petitioning for, then, was a form of technological democracy, in which the communities that changes will affect have a say in the systems they rely upon.
This mediation of creative agency by autonomous technical systems raises the question of who governs, and whose interests govern, technological change. Those musicians and artists with the time and knowledge to resist profit-motivated disruptions in the technological ecology of digital music making, as in the Sibelius case, may wish to maintain older software and operating systems or build their own systems using Open Source software, but it is more often the case that electronic musicians absorb these disruptions into their practices, or do a mixture of both (as with The Hub). An ANT analysis can disassemble these larger economic and political dynamics to uncover the complex chains of agencies that collaborate to make a concrete difference in music making, but one has to ask whether it is really desirable to perform this operation on every grouping or abstraction we encounter: must we account for the countless mediators that contribute to, for example, class, race, Korg, the ECM label, and so on? Georgina Born's theory of social mediation takes off from the opposite starting point. Rather than see the social world as flat, as in ANT, she posits different 'orders' or 'scales' of mediation, scales that are nonexclusive, and that interpenetrate, but that nevertheless have a positivity denied by ANT. She writes:

The first order equates to the practice turn: here music produces its own socialities in performance, in musical ensembles, in the musical division of labour, in listening. Second, music animates imagined communities, aggregating its listeners into virtual collectivities or publics based on musical and other identifications. Third, music mediates wider social relations, from the most abstract to the most intimate: music's embodiment of stratified and hierarchical social relations, of the structures of class, race, nation, gender and sexuality, and of the competitive accumulation of legitimacy, authority and social prestige.
Fourth, music is bound up in the large-scale social, cultural, economic and political forces that provide for its production, reproduction or transformation, whether elite or religious patronage, mercantile or industrial capitalism, public and subsidized cultural institutions, or late capitalism's multipolar cultural economy: forces the analysis of which demands the resources of social theory, from Marx and Weber, through Foucault and Bourdieu, to contemporary analysts of the political economy, institutional structures and globalized circulation of music. (Born 2010, 232)

To start from the assumption that there are scales of mediation necessarily means sacrificing some of the rich analytical detail that ANT can afford; yet, at the same time, it also acts as an antidote to the kind of indiscriminate empiricism that can result from keeping track of every human and nonhuman mediator. Either way, it is clearly Born's fourth order of mediation that lends explanatory power to the case under discussion: the large-scale social, cultural, economic, and political forces that provide for music's production, reproduction, and transformation. In the next study, I develop the analysis of algorithmic music by reference to Born's theory.

A Contemporary Actor Network: Live Coding
The second section of this chapter makes a substantive and methodological leap forward in time, considering algorithmic music in the context of the present day. Responding to the earlier-cited criticisms of the sociology of art, and of the constructivist project of demystification and exposure relative to the social, I consider in this section how algorithmic music's own methods and aesthetics might be employed to analyse it. Using algorithmic digital methods designed for online ethnography, the analysis aims to occupy the same meshwork of human and nonhuman actors as the subject itself.
Live coding is an interesting and complex social form. It is usually defined by reference to practice, sociality, and technique rather than any coherent musical style: writers generally agree that it constitutes the activity of writing, listening to, and modifying a computer program in real time before an audience. Lifting the curtain on the 'hidden' instrumentality of advanced computer music (the embodied activity of writing algorithms, auditioning materials, and moving around code), live coding purports to disclose computer music practice in its elemental state. Indeed, rawness, primitivism, and the associated qualities of 'danger' and 'closeness to the source' are often cultivated as an aesthetic, via the projection of the screen to the audience and the deliberate imposition of performative constraints. Echoing the 'truth to materials' principle of modernist architecture (form follows function, and ornament is crime), Collins, McLean, Rohrhuber, and Ward write:

With commercial tools for laptop performance like Ableton Live and Radial now readily available, those suspicious of the fixed interfaces and design decisions of such software turn to the customisable computer language. . . . [We] do not wish to be restricted by existing instrumental practice, but to make a true computer music that exalts the position of the programming language, that exults in the act of programming as an expressive force for music closer to the potential of the machine—live coding experiments with written communication and the programming mind-set to find new musical transformations in the sweep of code. (Collins et al. 2003, 322)

This 'true' computer music is far from being a technological determination, however.
If it were, then, as Collins and colleagues wryly acknowledge, live coding concerts would entail the performer building a driver or DSP engine from scratch in the back of a venue over a number of nights, 'before finally emerging with a perfect heartfelt bleep on Sunday evening' (Collins et al. 2003, 321). Instead, authentic live computer music is a mutable concept, one that enrols technical devices (the use of text-based programming languages over readymade graphical interfaces), social expectations (the insistence on openness and transparency over secrecy and opacity), politics (the use of Open Source tools over black-boxed commercial software), and ontology (the insistence on 'liveness', sometimes enforced by starting from a blank screen) in an open-ended negotiation.
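The core gesture of live coding, modifying a running program while it continues to sound, can be sketched in miniature. The following Python fragment is a toy analogue only, not the API of any real live-coding system such as SuperCollider or Impromptu; the names are invented for illustration. A 'pattern' function is looked up by name each time it is called, so redefining it mid-performance changes what the still-running scheduler produces:

```python
# The 'instrument' is a function the performer redefines while a
# scheduler keeps calling it by name.

def pattern(beat):
    return 60 + (beat % 4)  # an ascending four-note figure (MIDI numbers)

def run(nbeats):
    # Looks up 'pattern' at call time, so a redefinition takes
    # effect on the next pass through the loop.
    return [pattern(beat) for beat in range(nbeats)]

before = run(4)

def pattern(beat):  # 'live' redefinition mid-performance
    return 72 - (beat % 4)  # a descending figure from a higher octave

after = run(4)
```

Real systems add scheduling, sound synthesis, and hot code-swapping at quantized boundaries, but the late binding shown here is the mechanism that makes the music revisable while it plays.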

Charting the Development of the TOPLAP Manifesto
We see the 'authentic computer music' rhetoric most clearly in the infamous 'ManifestoDraft' that the TOPLAP organization has featured on its website since its initiation.9 Alongside the manifesto, the site hosts concerts, events, pedagogical resources, videos, academic papers, and other related items. When the site debuted, 'ManifestoDraft' was the first item a visitor would encounter. It outlined the conceptual, performative, technological, and philosophical conditions live coders should meet or engage with, performing the dual function of materializing and speculatively positing an idea of authentic live computer music in the form of ten short commandments (see Figure 31.1).
Within the space of a year, the manifesto draft had stabilized into the form it assumes presently (see Figure 31.2). Looking at the development of the manifesto, what we see is a shift from the explicit designation of materials ('no predefined sequences'), programming languages ('languages approved by TOPLAP'), and software ethics ('sole use of Open Source software tools') to a more strongly worded and ironic, yet less prescriptive, specification of what live coding is. A product of this latter development is the shift in emphasis from 'code' to 'algorithms': a pluralizing move, perhaps, in the sense that it does not explicitly prohibit graphical programming environments and 'live patching' as performance methods, but also an important conceptual shift from the materiality of code to the quasi-immateriality of algorithms. 'Algorithms are thoughts', they write, not 'tools': a Cartesian assertion that posits the writing of algorithms as being 'closer' to the abstract musical idea than the use of tools. Transparency is the enduring demand of the manifesto, though, appearing three times ('code should be seen as well as heard', 'obscurantism is dangerous', 'give us access to the performer's mind'). Alongside the taboo on the use of 'backup' material, the emphasis on programming algorithms, and the mention of manual dexterity and the glorification of the typing interface, these elements coalesce to create an image of an idiomatic computer music, one that is 'live' in the performative sense and 'real time' in the computing one.
In this, it conveys an ontological politics of live computer music (Born 2013; Mol 1999), one positioned against two dominant tendencies in electroacoustic and computer music: first, electroacoustic art music, where fixed-media works are played back in concert halls over loudspeakers; and second, the club-based laptop performance of the early 2000s, where audiences watched performers ensconced behind their laptop screens, and the performativity of the spectacle was largely taken on faith.
Looking at the TOPLAP site today (Figure 31.3), it is clear that the identity of live coding no longer hinges on the political manifesto. Slipping from the homepage to a subpage, its relegation indicates that the field has stabilized.

Art-Pop Uncertainties
Viewed from the perspective of ethnographic practice, one of the most interesting things about the live coding community is its propensity for self-documentation. Alongside the manifesto, the scene is fastidiously documented, with films about live coding, screen captures of performances, and pedagogical resources all very easy to access. Most of all, live coding is enshrined in dozens of exegetical texts elaborating upon its own practice and theory. These developments were consolidated in 2015, when the first-ever International Conference on Live Coding was held at the University of Leeds, an initiative that is set to continue on an annual basis.10 Often written by the practitioners themselves, this literature is strikingly interdisciplinary, offering perspectives from computer science, software studies, performance studies, philosophy, pedagogical research, and computational creativity. More marginally, writers have looked at live coding from the perspectives of embodiment and autoethnography.11 How does one study a community when the community studies itself? As Born and others have noted, theoreticism can provide an important index of a scene's experimentalism and avant-gardism. It comes to play an increasingly significant role in modernism, with books and articles taking 'on the ambiguous role of exegesis and criticism, of proselytizing and publicity, of both expounding and legitimating practice' (Born 1995, 42). In a recent survey article, the live coder Thor Magnusson seems to follow this thread when he roots the art form's beginnings in postwar avant-gardism. It is 'inevitable', he writes, that live coding draws from modernist practices, because formal experiments, linked here to modernism and avant-gardism, are a 'necessary aspect of the exploration of a new medium' (2014, 9).
Magnusson quotes approvingly the art critic Clement Greenberg, whose version of modernism had content 'dissolved so completely into form that the work of art . . . cannot be reduced in whole or in part to anything not itself' (9). But he conveys a narrative of hybridization and diversification beyond the self-referentialism of formalist modernism as the form develops. Once naturalized, the new medium evolves into a much more diverse set of practices, and the historical circumstances of its birth (such as the manifesto) are internalized or forgotten. Indeed, this diversity is alluded to in the article's title, 'Herding Cats': 'Live coding does not have a particular unified aesthetic in terms of musical or visual style', Magnusson asserts (8).
Magnusson's account can be considered an instance of what the musicologist Eric Drott has dubbed the 'decline of genre' thesis: the narrative that, during modernism, the categories that had once shaped the production, circulation, and reception of Western art declined in relevance, as the vanguard heroically rejected tradition and convention in a wave of aesthetic renewal (Drott 2013). By emphasizing these qualities of theoreticism, formalism, and the lack of any kind of aesthetic coherence over others, Magnusson aligns live coding with art music and the avant-garde, despite the fact that, in most of the artists he surveys, a clearly audible dialogue with popular forms of electronic music is being conducted: namely, electronic dance music, glitch, and noise. Now, at first blush this can be seen as a simple outcome of the precedence afforded to technological and theoretical issues over musical ones. Musicality is not really discussed at all in the article, a tendency not unknown to highly technologized musics (cf. Waters 2007). But it is also an outcome of the modernist propensity, if not to directly oppose popular musics, then to suppress their influence and instead to root the genre's origins in the aesthetic and technological developments of the neo-avant-garde. Nick Prior came to a similar conclusion in respect of glitch music, noting that:

In most cases, glitch's support writers are themselves directly involved in the unfolding of the style, and their interventions are either internalist in content – fulfilling aesthetic, formalist or stylistic criteria – or posit glitch as somehow outside the field through the maintenance of a cool distance from pop. (Prior 2008, 307)

But an important subset of live coding, documented in Nick Collins's and Alex McLean's work, is the format of the 'algorave'.
Referencing the famous 'Anti EP' by Autechre, on which the duo engaged with the then-pending Criminal Justice Bill designed to criminalize raves,12 the article defines Algorave music as being 'made from "sounds wholly or predominantly characterized by the emission of a succession of repetitive conditionals"' (McLean 2015). Audible in the live coding of Norah Lorway, Sick Lincoln, Canute, Alex McLean, and Benoît and the Mandelbrots is the undeniable influence of electro, ambient, trance, techno, IDM, electronica, and other electronic dance music subgenres. Indeed, live coders often practise an ironic refusal of the hegemony and prestige of art music, as in Nick Collins's work (under the pseudonym Click Nilson) Acousmatic Anonymous, a text-score piece that features an 'acousmatic' who must 'only omit high art for the remainder of the performance' (Nilson 2013). All the same, the ironic attacks on modernist art that take place within the confines of a club or performance venue tend to dissolve into due deference in the more sober context of the peer-reviewed academic article. Writing on electronic dance music, the same author wrote that 'musicians on the fringes of dance music soon enough looked backward to discover the great history of experimental electronic music and automatically merged to become part of that progression (even had they not looked, they could not have helped the latter)' (Collins 2009, 339). Here and elsewhere (Emmerson 2001), popular electronic music's indebtedness to the European avant-garde is emphasized over other equally salient influences, such as its relationship to African American music and the gay subcultures of the 1980s (Taylor 2014, 67).

Actor-Network Theory 2.0
As already noted, live coding is a furiously active scene online. Its web practices extend beyond the usual techniques of publicity, network building, documentation, and promotion to the social, technical, and performative aspects of the scene itself. For example, Charlie Roberts's audiovisual live coding environment, Gibber, runs in a regular Internet browser, facilitating advanced creative coding online; whilst the network music axis of live coding, inheritor of the practices of The Hub, discussed earlier, involves whole performances being carried out online. Code, sounds, and images are passed back and forth over the network, as listeners tune in via their own home connections. This is far from an instrumental use of the web; rather, the web enters into live coding's distributed instrumentarium, becoming a medium in its own right.
A useful social sciences tool for studying aspects of these myriad online socialities is the Issuecrawler (Rogers 2002). Developed in the Department of Science and Technology Dynamics at the University of Amsterdam, the Issuecrawler is a web crawler for visualizing networks using a technique called 'co-link analysis'. The project has links to Latour and ANT,13 and was specifically designed to help with the problem of 'controversy mapping' online. Given a science and technology controversy, for instance a government mandate on childhood vaccinations, the issue network would display who (or whose website) in government, business, and civil society is linking to whom, therefore affording insights into how the debate is being framed by key actors. Obviously there are important flaws with such a method. Not all the powerful actors in a given issue are represented by a website, and the Internet in general is an unstable and incomplete resource when viewed as an archive of social associations. Moreover, historical actors become dead links, meaning that the method is heavily biased towards present-day issue networks. Nevertheless, applied to music, the Issuecrawler represents a useful tool for conceiving of genres as social assemblages, thereby releasing the inquirer from the somewhat maddening task, identified by Magnusson, of trying to distinguish and classify genres by reference to a stable set of stylistic features.
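The logic of co-link analysis can be sketched in a few lines of Python. This is an illustrative reconstruction of the filtering step only, not the Issuecrawler's actual implementation; the data and URLs are hypothetical, and the two-mention threshold follows the description of the method given in note 14 below.

```python
# Sketch of co-link analysis: from the pages crawled out of a set of
# seed URLs, retain only those outlinks that two or more different
# seeds point to. Hypothetical data; not the Issuecrawler's own code.
from collections import Counter

def co_link_filter(outlinks_by_seed):
    """outlinks_by_seed maps each seed URL to the set of URLs it links to.
    Returns the URLs linked to by two or more seeds (the 'issue network')."""
    counts = Counter()
    for links in outlinks_by_seed.values():
        for url in set(links):  # count each seed at most once per target
            counts[url] += 1
    return {url for url, n in counts.items() if n >= 2}

# Toy example: three seed sites and the outlinks harvested from them
harvest = {
    "https://seed-a.example": {"https://toplap.org", "https://festival.example"},
    "https://seed-b.example": {"https://toplap.org", "https://label.example"},
    "https://seed-c.example": {"https://festival.example", "https://toplap.org"},
}
network = co_link_filter(harvest)
# toplap.org and festival.example each receive links from two or more
# seeds, so only they survive into the issue network
```

A site linked to by only one seed (here, the hypothetical label.example) drops out, which is precisely why commercial outlets that receive little reciprocal linkage can vanish from the resulting map.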
Similarly to ANT, the Issuecrawler method is fluid and incomplete by definition. A given actor ('Alberto de Campo', say, or the SuperCollider programming language) may appear in any number of other 'networks' (electroacoustic music, 'extreme' computer music, live coding), an ontological premise that, when applied to genre, means that the longer the list of actors, the more the genre emerges in its distinctiveness. (In other words, more complexity produces greater differentiation.) Furthermore, the method is reversible. True to ANT, any actor is also conceived as a network, so just as de Campo participates in the live coding network, live coding participates in his network, where it would appear alongside the university he works at, the school he went to, the friends and associates he works with, and so on.14

Turning to the results (Figure 31.4 and Table 31.1), the clearest finding is live coding's heterogeneous array of human and nonhuman actors. In a sense, this is a simple outcome of the method. A website can represent a person, an event, an animal, a building, and so on; co-link analysis simply follows the ANT method in making no a priori distinctions between them. However, it is the mix of artists and programming languages (SuperCollider, Max, ChucK, and so on) that is distinctive. It illustrates the fact that, in the live coding scene, the instrumentarium represents an extension of the human, the two inseparable. Within the many technological actors that appear we find an interesting mix of free and open source (F/OSS) and proprietary software. Alongside SuperCollider and ChucK, and the alternative copyright licensing organization Creative Commons, Cycling '74 and Arduino both feature, the latter two suggesting that a diversified politics of software has emerged since the early emphasis on 'code'.
And although software packages like Ableton Live and Reason do not feature, the prominence of Arduino is evidence of hybridization beyond the 'glorification of the typing interface' identified in the manifesto.
Importantly, there are very few record labels, distributors, or record stores represented on the Issuecrawler map, an intriguing omission given the centrality of the independent label within popular electronic music genres. There are at least two reasons for this. First, and most obviously, live coding is centred on performance. As already discussed, it practises a virulent ontological politics of live computer music. So although many of the artists produce physical commodities (sometimes in unconventional formats, where the code is shared with the listener and made available for further hacking and recombination), it is clear from the prominence of festivals and events that the scene is oriented towards, and based around, the live event.
The second reason for the lack of commercial outlets on the map relates to how live coders subsidize their activities. As is clear from the dominance of academic institutions, research groups, conferences, and funding bodies, live coding largely takes place within and in relation to institutionalized sites of music production, with most if not all of the practitioners that feature holding some kind of university affiliation. Many are early career researchers on fixed-term practice-based, practice-led, and 'research-creation' projects; some are doctoral students; whilst others hold positions in music and computing departments. Although live coding draws heavily on popular styles, the strong presence of institutions of higher education makes for a distinct and complicated geography. At the centre of the map we see dynamic patterns of enthusiastic interlinkage amongst the artists, nonprofit organizations, commercial festivals, and F/OSS software communities, each actor participating, via the medium of the hyperlink, in the mutual exchange and accumulation of validation and recognition. But at the edges of the map are the institutions. Affording performance spaces, technological resources, and financial support, they are essential actors in this ecology, yet they do not reciprocally link. This outlier status is illustrative of the somewhat sober public faces that institutions of higher education present online.

This backdrop of academic and nonacademic institutions gives an institutional context to live coding's delicate negotiation of art and popular electronic music histories. Being subsidized by arts and engineering grants that support such initiatives as interdisciplinarity, science in the arts, code literacy and pedagogy, and innovation with digital technologies, the earlier-cited emphasis on novelty and formal experimentation (to the detriment of questions of musical style and genre) emerges as an institutional and economic mediation as much as a performative genealogy.
Here, again, we see Born's fourth order of social mediation at work: the large-scale social, cultural, economic, and political forces that provide for music's production, reproduction, or transformation are reproduced in discourse. We could argue that live coding's ontological politics oscillates between two levels, then: first, an explicit politics of technology and performance, in which the black-boxed, obscurantist laptop music of the early part of the twenty-first century is directly challenged; and second, a more subtle politics of art and the popular, in which the hegemony of the former can be satirized and lampooned, but not too loudly. For live coding's effective institutionalization is ultimately contingent upon its being bracketed, historically, theoretically, and aesthetically, within those very same genealogies.

Conclusion
This chapter has offered a social perspective on algorithmic music. Drawing on theories of mediation, I have argued for an approach to algorithmic music's socialities that does not attempt a project of demystification and exposure relative to the social, but that instead installs itself within the very ecology of these fields. To that end, I used Actor-Network Theory to study the contribution of 'nonhuman actors' to the social world, via a case study of the network music pioneers The Hub. The example of The Hub drew forward the question of technological change, and the necessity of theorizing these external forces as part of technologized music's social ecology. Through this project, we discovered weaknesses in the ANT approach, to do with power and hierarchy, which led us to turn to Georgina Born's theory of musical mediation and the hierarchical notion of distinct 'orders' of social mediation. The second study centred on live coding. Using digital methods, I charted the development of the TOPLAP manifesto in order to illustrate how, far from being a technological determination, 'true' computer music was an ongoing social negotiation that continues to the present. The final section used the Issuecrawler software to analyse networks of association within live coding online. I argued that Born's fourth order of social mediation, the large-scale social, cultural, economic, and political forces that provide for music's production, bears strongly on the aesthetic and conceptual terrain of live coding, particularly in regard to the scene's careful negotiation of art and popular electronic musics.

Notes
1. 'In accordance with this anti-labour aesthetic, the typical laptopper releases recordings of ice floes, radio interference or earthquakes. Laptop cool is about avoiding "the turgid, complex, actual, dirty 'thing'" – i.e. earning a living under capitalism – and instead losing oneself in the contemplation of unsullied nature. This is actually no more advanced in ideological terms than hanging a framed reproduction of a painting of a glade of silver birches on the wall of an urban living room' (Watson 2006, 8).
2. Even though it is there in the name, writers associated with ANT tend to eschew the term 'theory'. Properly speaking, ANT is a theory about how to study the social; as such, it is closer to ethnomethodology. It is the idea that ANT can be 'applied' so as to understand a given social phenomenon that the protagonists reject.
3. Autoethnography reverses ethnography's typical focus on another group's culture by focusing instead on the ethnographer's own subjective experience of her interaction with that culture.
4. This is also the ideology behind software companies' promises that their products are transparent conduits of the individual ideas of their users, a characteristic example of which can be found in Richard Boulanger's introduction to The Csound Book, where he writes, 'in the software synthesis world of Csound, there are no such limitations. In fact, the only limitations are the size of your hard disk, the amount of RAM in your PC, the speed of your CPU – and of course, the limits of your imagination' (Boulanger 2000, xxxvii).

5. '[T]he human-nonhuman pair does not refer us to a distribution of the beings of the pluriverse, but to an uncertainty, to a profound doubt about the nature of action, to a whole gamut of positions regarding the trials that make it possible to define an actor' (Latour 2004, 73).
6. Piekut 2014, 193.
7. Scot Gresham-Lancaster's statement that 'music is, at its core, a means of communication.
Computers offer ways of enhancing interconnection' (1998, 39) shows the influence of writers like Gregory Bateson and Norbert Wiener on the musical thinking of The Hub.
8. Summarizing ANT, Feenberg draws on H. G. Wells's version of the myth of the 'sorcerer's apprentice', where two early bioengineers invent a miracle food that causes animals and plants to grow to eight times their normal size. 'Sloppy experiments conducted on a farm near London result in the birth of giant wasps, rats, and even people. . . . In Latour's terms, the delegation of the original program to sacks, walls, and guardians broke down as rats got at the food, and the network was unexpectedly prolonged (in its syntagmatic dimension) through its nonhuman rather than its human members. Of course from the standpoint of the preexisting experimental program the network was supposed to serve, this amounts to chaos, but if one views the matter objectively, i.e. not from the standpoint of the two scientists and their failed strategy, the network can be seen to grow. And this makes it possible for new actors to pursue new programs' (Feenberg 1999, 116).
9. In 2004, the TOPLAP acronym, a play on 'laptop', was published on the web as standing for '(Temporary|Transnational|Terrestrial) Organisation for the (Promotion|Proliferation|Permanence) of Live (Audio|Art|Artistic) Programming'.
10. http://www.livecodenetwork.org/iclc2015/.
11. Here I am paraphrasing Adorno's definition of art from Aesthetic Theory: 'The definition of art is at every point indicated by what art was, but it is legitimated only by what art became with respect to what it wants to, and perhaps can, become' (Adorno 2004, 3).
12. In 1994, John Major's Conservative government introduced the Criminal Justice and Public Order Act 1994, a sweeping bill that included within its many clauses a direct attack on the free party movement.
Section 63 effectively gave police the powers to remove 'a gathering on land in the open air of 20 or more persons . . . at which amplified music is played'. It included a clarificatory subclause referencing 'sounds wholly or predominantly characterized by the emission of a succession of repetitive beats'. Autechre's Anti EP satirized the pending bill, bearing a black sticker on the front that read: 'Warning. Lost and Djarum contain repetitive beats. We advise you not to play these tracks if the Criminal Justice Bill becomes law. Flutter has been programmed in such a way that no bars contain identical beats and can therefore be played under the proposed new law. However, we advise DJs to have a lawyer and a musicologist present at all times to confirm the nonrepetitive nature of the music in the event of police harassment' (Pattison 2014).
13. https://web.archive.org/web/20150310090045/http://www.mappingcontroversies.net/.
14. Locating an issue network requires a list of starting URLs: key actors that together provide an overview of the issue at hand. Given this list ('seeds'), the Issuecrawler will crawl through the associated webpages and store in a database ('harvest') any hyperlinks that direct the user to another destination on the web ('outlinks'). The software then analyses the outlinks and stores only those that appear two or more times in the results ('co-link analysis'). The latter two stages of the analysis can be repeated for 'deeper' crawls; in this case, outlinks from the first set of results would also be harvested and a second co-link analysis would be performed on them, a process that dramatically increases the size of the harvest. The process can be completed up to three times. The results are then plotted in a 2D network displaying inlink and outlink patterns amongst the key nodes (webpages), with the x-y position of the nodes on the map indicating their relatedness, i.e. how frequently links are exchanged between them.
Node size corresponds either to the number of inlinks the associated site receives or to a mixture of inlinks received and outlinks made. In network analysis jargon, these two features are seen to represent the amount of 'authority' and 'knowledge', respectively, that a node contains. In other words, a node that receives a great number of inlinks is deemed an authoritative source of information, whereas one that makes a lot of outlinks is deemed to know where the 'debate' is happening (provided it appears in the network in the first place, receiving inlinks itself). Further analysis is afforded by the domain name suffix associated with a website, with different colours being assigned to different namespaces (.org, .net, .com, and so on).
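In their simplest form, the 'authority' and 'knowledge' measures described here reduce to inlink and outlink counts over the network's hyperlink edges. The following minimal sketch illustrates that reduction; the edge data and site names are hypothetical, and the Issuecrawler's own scoring may weight these counts differently.

```python
# Count inlinks ('authority') and outlinks ('knowledge') for each node
# in a directed hyperlink network. Hypothetical edges for illustration.
from collections import defaultdict

def degree_scores(edges):
    """edges is a list of (source, target) hyperlink pairs.
    Returns two dicts: {node: inlink count}, {node: outlink count}."""
    inlinks, outlinks = defaultdict(int), defaultdict(int)
    for src, dst in edges:
        outlinks[src] += 1
        inlinks[dst] += 1
    return dict(inlinks), dict(outlinks)

edges = [
    ("blog.example", "toplap.org"),
    ("uni.example", "toplap.org"),
    ("blog.example", "festival.example"),
]
inl, outl = degree_scores(edges)
# toplap.org receives 2 inlinks (high 'authority'); blog.example makes
# 2 outlinks (high 'knowledge' of where the debate is happening)
```

On this reading, the non-reciprocating universities discussed above would make few outlinks and so score low on 'knowledge', sitting at the map's edges even while receiving inlinks from the scene's centre.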