While I'm suiting up here, I thought I should ask if anyone has any requests.
Reggie Watts?
All right.
Any Miami bass, old school jungle, Olatunji for the older folks, maybe?
Maybe at the after party.
All right.
Thank you.
I'm going to talk a little bit about why I'm doing what I'm doing.
That's mostly what I'm going to talk about.
But I also am going to talk about some future directions.
But first, I want to answer the question, oh, there we go.
I'm going to answer the question that I always get first, which is how does it work?
Now, that actually takes more time to explain than the amount of time I have left.
But I'm going to try to do it in one slide.
So here we go.
Everything starts off with the two controllers.
I have in my left hand a Wiimote.
In my right hand, what I call a Springbok controller.
Because it's made from Springbok horns, not because it controls Springbok.
So it's Springbok horns, arcade buttons, zebra wood, and a lot of electronics.
But essentially, it duplicates the functionality of the Wiimote.
It sends information about button presses and gestural motion sensing to the computer.
The Wiimote sends the same information via Bluetooth.
And the Springbok communicates with the computer via MIDI,
which is a serial communications protocol used often in music.
Once that information gets into the computer, it flows into a program written in Max MSP,
which is kind of a graphical programming environment,
which in turn passes that information into some Java classes that I wrote.
And that's where the real computational work takes place.
Once that's done, the computer sends out a MIDI signal to Arduino microcontrollers
in each of the machines, which in turn act as switches that turn on current
that then flows through solenoids or motors in the case of these guys,
which creates motion and then sound.
So that's how it works. Got it?
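To make the chain above concrete, here is a minimal sketch of the routing in Python. The function names, note-to-machine mapping, and message shapes are illustrative assumptions, not the performer's actual Max MSP or Java code; only the flow (controller event → MIDI message → Arduino-driven machine) follows the description.

```python
# Hypothetical sketch of the signal chain: a controller button press becomes
# a MIDI note-on, which is routed to the machine whose Arduino fires a solenoid.

def controller_event_to_midi(button: int, velocity: int = 100) -> tuple:
    """Encode a button press as a MIDI note-on message (status, note, velocity)."""
    NOTE_ON = 0x90  # MIDI note-on, channel 1
    return (NOTE_ON, button, velocity)

def dispatch_to_machine(midi_msg: tuple, machine_map: dict) -> str:
    """Route a MIDI note to the Arduino-controlled machine that plays it."""
    status, note, velocity = midi_msg
    machine = machine_map.get(note, "unknown")
    # On the real system, the Arduino acts as a switch here, sending current
    # through a solenoid or motor; we just report the routing decision.
    return f"{machine}: fire solenoid (velocity {velocity})"

# Assumed note assignments, loosely following General MIDI drum numbering.
machine_map = {36: "kick drum", 38: "snare", 42: "hi-hat"}
msg = controller_event_to_midi(38)
print(dispatch_to_machine(msg, machine_map))  # snare: fire solenoid (velocity 100)
```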
All right.
The fundamental conceit of the controller system
is that I don't have to hit a button every time that I want a note
or want to hit the drum.
You probably noticed that there were flurries of notes
and my fingers weren't moving that fast.
That's because I can just hold down a button
and the machine will hit the drum repeatedly.
How fast the machine hits the drum depends upon the rotation of the controller.
So here I've got quarter notes.
Eighth notes, 16th notes, and then 32nd notes.
I also have a triplet button to switch into triplet mode.
Musicians think that's hilarious.
So that's the basic premise of the controller system.
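The hold-to-repeat idea can be sketched in a few lines. The rotation thresholds and the triplet rule below are assumptions for illustration; the talk only specifies that rotation selects among quarter, eighth, 16th, and 32nd notes, with a button for triplet mode.

```python
# Sketch: controller rotation picks a subdivision, and the computer schedules
# repeated drum hits at that rate while the button is held.

def hits_per_beat(rotation_deg: float, triplet: bool = False) -> float:
    """Map controller rotation (assumed 0-90 degree range) to hits per beat."""
    if rotation_deg < 22.5:
        per_beat = 1      # quarter notes
    elif rotation_deg < 45:
        per_beat = 2      # eighth notes
    elif rotation_deg < 67.5:
        per_beat = 4      # sixteenth notes
    else:
        per_beat = 8      # thirty-second notes
    # Triplet mode fits three hits where two would go.
    return per_beat * 1.5 if triplet else per_beat

def seconds_between_hits(rotation_deg: float, bpm: float, triplet: bool = False) -> float:
    """Interval between solenoid firings while the button is held."""
    return 60.0 / bpm / hits_per_beat(rotation_deg, triplet)

# At 120 BPM, fully rotated, in triplet mode: 32nd-note triplets.
print(round(seconds_between_hits(80, 120, triplet=True), 4))  # 0.0417
```

Because the interval is derived from the tempo, the machine stays on the grid no matter when the button goes down, which is exactly the "computer keeps the beat" delegation described next.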
Of course, this means that the computer is keeping track of the tempo internally,
so I don't have to worry about staying in time or staying on the beat.
And this delegation of certain low-level performance tasks,
or what I think of as low-level,
I don't mean that in a judgmental way,
but keeping track of the beat is not really a high-level creative decision.
By delegating those tasks to the computer,
I sort of free myself to think about other things,
what I consider more interesting questions like,
do I want a series of repeated notes or not?
And if I answer yes, the computer takes care of the rest.
So that's one instance of what I call a sort of steering paradigm.
There we go.
This is a very abstract contrast between the two approaches,
traditional playing versus steering.
In traditional playing, one performer action results in one musical event.
Under the steering approach, one controller action can trigger multiple events.
Of course, electronic musicians and DJs have been triggering sequences
from laptops and so on or from turntables for a long time,
but the concept of steering encompasses a broader range of behaviors.
This is far from an exhaustive list,
but here are several that I've thought about,
either because I'm using them already or I'm working on them,
or I'm sort of stroking my chin and thinking about maybe doing them in the future if I live long enough.
So simple rules for event triggering, that's what I already described,
this sort of system.
Looping material, that's what I did in the performance.
I would improvise something, loop it, and then it would keep going,
and then I could improvise over that.
I didn't do this in the performance you just heard,
but I can also manipulate looped material by, for example, having a loop play backwards,
rearranging the notes, doing phase shifting, the style of Steve Reich,
a number of different techniques I could use there.
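The loop manipulations just listed are simple list transformations if you model a loop as a sequence of MIDI note numbers. This is an illustrative sketch, not the performer's implementation; the note values are made up.

```python
# Sketch of loop manipulations: reversing a loop, and Reich-style phase
# shifting modeled as rotating one copy of the loop against another.

def reverse_loop(loop: list) -> list:
    """Play the looped material backwards."""
    return loop[::-1]

def phase_shift(loop: list, steps: int) -> list:
    """Rotate the loop by `steps`, like one part drifting ahead of the other."""
    steps %= len(loop)
    return loop[steps:] + loop[:steps]

loop = [36, 38, 42, 38]            # kick, snare, hi-hat, snare (assumed notes)
print(reverse_loop(loop))          # [38, 42, 38, 36]
print(phase_shift(loop, 2))        # [42, 38, 36, 38]
```

Playing the original loop on one machine and `phase_shift(loop, k)` on another, with `k` increasing over time, gives the gradual phasing effect associated with Steve Reich.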
The problem with this approach is that I'm running out of limbs.
As I'm building more machines,
I would either need to start using foot controllers,
which I don't want to do, or bring on people, which I actually will do,
but there's another alternative, which is to create generative models
that can sort of improvise on their own.
That preserves the improvisatory character of what I'm doing
while still having all the advantages of algorithmic improvisation.
And I won't go into the details of how those work.
And then the third speculative group would essentially build off the second
and allow for control of generative models with semantic tags.
So you could, for example, have a series of knobs and sliders that would control
how much the output resembles a certain genre of music.
So you could dial up the French brokenness and dial down the bebop feel
or turn up the Charlie Parker and so on.
That sounds far-fetched, but once you've actually gotten the second group under control,
it's not too much of a leap to move to the third.
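One way to picture those semantic knobs: each genre contributes a simple note distribution, and the slider values blend them into one generative distribution. This is a speculative sketch of the idea only; the genres, notes, and probabilities are invented for illustration.

```python
# Sketch of "semantic knobs": slider values weight per-genre note
# distributions into a single blended distribution to sample from.

def blend(genre_dists: dict, knobs: dict) -> dict:
    """Knob-weighted average of genre note distributions, normalized."""
    total = sum(knobs.values()) or 1.0
    notes = set().union(*genre_dists.values())
    return {n: sum(knobs[g] * genre_dists[g].get(n, 0.0) for g in knobs) / total
            for n in notes}

genres = {
    "bebop": {60: 0.2, 61: 0.4, 63: 0.4},   # made-up note probabilities
    "funk":  {60: 0.7, 62: 0.3},
}
knobs = {"bebop": 0.25, "funk": 0.75}       # dial down bebop, turn up funk
mix = blend(genres, knobs)
```

A sampler drawing notes from `mix` would then lean toward the funk distribution while retaining a bebop tinge, which is the "dial up / dial down" behavior described above.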
And just briefly, I want to address the use of data mining for musical expressivity.
One of the problems in computer music is that computers want to play events
right exactly on time, but that's not how humans play
and it's not how humans usually want to hear music.
So one possible solution is to create a regression model
based on actual data of a human performance.
And that's what I did for this.
All the sort of slight shuffle feel that you heard in this performance
came from a regression model based on recordings of the aptly named percussionist Armando Borg.
It's a coincidence, but I happen to have MIDI recordings of Armando Borg.
So his style kind of infused my whole performance.
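The timing-regression idea can be illustrated with ordinary least squares: measure how far a human's hits fall from the grid at each beat position, fit a line, and apply the predicted offset to quantized notes. The data below is invented; only the technique (regression on human timing deviations) follows the talk.

```python
# Sketch: fit timing offsets from a human performance against beat position,
# then shift quantized events by the model's predicted offset.

def fit_line(xs: list, ys: list) -> tuple:
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented offsets (seconds) measured at beat positions 0..3 over two bars.
positions = [0, 1, 2, 3, 0, 1, 2, 3]
offsets   = [0.00, 0.02, 0.01, 0.03, 0.00, 0.02, 0.01, 0.03]
a, b = fit_line(positions, offsets)

def humanize(quantized_time: float, beat_pos: int) -> float:
    """Shift a quantized event by the offset the model predicts."""
    return quantized_time + a * beat_pos + b

print(round(humanize(2.0, 3), 3))  # 2.027
```

Driving every machine through a model like this, fit to one player's recordings, is how a single performer's shuffle feel can infuse the whole system's output.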
So if you'd like to learn more about how data mining can give your music funk,
come up to me later this evening.
Thank you.
