I want to describe the system for live music notation using GEM.
Now, this is work in progress, and there are certain limitations with the way that I've
gone about creating this patch, or rather this set of patches, this toolkit for creating live
music notation, that really require another effort, perhaps doing things differently from
the GEM way, but we'll come to that later on.
I'm going to start to explain exactly how I went about this process of creating a system of live music notation.
So, what you can see there is a very small patch, and it's deceptively small.
In fact, I began by trying to create music notation based on fonts.
One of the goals of Gemnotes is to create a system of music notation with a visual style
that is close to professional typesetting standards.
And the other main goal of Gemnotes is to create a system entirely in PD,
so that a patch can be sent to a performer and they do not need to know about complex computer issues,
such as compiling, to play the score.
So, this is really designed to be easy for the performer rather than for the composer.
It turns out it's absolutely not easy for the composer at all.
So, the first stage is to create a stave and a note.
It seems like a simple proposition, and I'm using text3d objects in GEM.
Now, here's the note object here.
If we click on it, we see... let me just drag this across.
There's a degree of complexity in there; it's a fairly complicated object.
Now, the problem with notation is that there are many rules but for every rule there's an exception.
And a notation object, a note itself has many different elements.
It has the note head, it has the stem, it has the tail that can change.
It possibly has dots, and it has an accidental.
So, there are many different aspects that have to be built into the structure of the note object
in order to completely encompass all of these aspects of notation
so that the note object itself can be a homogenous object between one note and the next
so that you don't need a different note object for a minim or a crotchet
or a quaver or a 16th note; you can use the same note object for any note.
So, the idea is to make an object that is universally going to give me all the notes I need.
Now, this is using fonts.
What I've done is I've taken a font that is already available, called MusiQwik,
which is published under the SIL Open Font Licence, and I've loaded that into FontForge,
and I've taken the pieces I need and enhanced and elaborated on them.
So, things like the ledger lines above and below the stave,
that's just one text character.
And another aspect is that I had to move some of the text characters away from the curly brackets
and the backslash, which you can't use in PD.
So, several things needed to be moved, several things needed to be redesigned.
So, I create this object.
There's also a stave, and that is also a set of text3d objects.
So, what we have here is an object that creates a stave from a string of the same character.
So, it's actually the full stop which has the five lines of the stave
and then I can adjust the length of the stave.
Now, it's very important that this stave has a continual bang going to it,
a continual bang coming from the gemhead.
So, just here we have the gemhead, and then we have a bang,
and the bang goes through to this,
which packs the position of the stave and a number of other things, such as the colour
and the length of the stave and the clef.
This is very important because when we're calculating beam groups, as we'll see,
we have to continually recalculate the length of the stems.
So, while the gemhead is banging the Gem objects to create them, rendering the score,
you're also banging the mathematical process that ties a note to the stave.
So, notes are explicitly tied to a stave.
Let's have a look at the actual...
I'll have to drag this across.
And we'll turn this on.
So, there we have a note.
A note on a stave.
Probably could do with being a little bigger, maybe.
Let's just...
As you can see, the stave itself, if I move the stave, the note moves with it.
So, the idea is that the position of the note is relative to the stave
which is how music notation works.
I'll just do a little audio example here.
Well, you don't need to hear the audio, but...
There we go.
So, I just moved the note up and down.
And I'm actually doing this via [fiddle~].
I don't know why.
I just thought it would be interesting to do some kind of pitch to score in this presentation.
And as you can see, it's currently showing a flat.
So, we have a choice between sharps and flats.
And we have a choice between rhythmic values.
There we go. Beautiful.
I have a version of the font where this looks fine.
Actually, I think there are two versions of the font:
one is for small monitors and one is for large monitors.
So, we're dealing with the small-monitor font here.
The demi-semi-quaver just looks like a big quaver, so...
Okay.
More work is needed, but I told you that anyway.
So, that's nice, but the trouble is with this.
What you're dealing with is a single note object tied to a single stave object.
And then, that's it.
How do you create more notes?
The thing about live notation is that it's something that has to evolve over time.
So, what we have to do is we have to create more of these note objects.
And so, for this, I turn to dynamic patching,
which is, as you probably know, you can build PD patches from within PD patches.
So, you can create one PD patch that builds another PD patch.
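Pd's dynamic patching works by sending text messages such as "obj" and "connect" to a canvas's receive name ("pd-" followed by the subpatch name). As a rough sketch, assuming nothing about the real patch beyond that mechanism, these messages can be generated like this:

```python
def obj_msg(x, y, name, *args):
    """Build a Pd 'obj' message that creates an object on a canvas.
    The message is sent to the canvas's receive name, e.g. pd-mysub."""
    parts = ["obj", str(x), str(y), name] + [str(a) for a in args]
    return " ".join(parts) + ";"

def connect_msg(src, outlet, dst, inlet):
    """Build a Pd 'connect' message; objects, outlets and inlets
    are all numbered from 0 in creation order."""
    return f"connect {src} {outlet} {dst} {inlet};"
```

For example, `obj_msg(10, 50, "osc~", 440)` yields `"obj 10 50 osc~ 440;"`, and `connect_msg(0, 0, 1, 0)` wires the first object's first outlet to the second object's first inlet.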
Before I go on to the dynamic patching, I'm going to show another example,
which is the beamed group.
And this is really the reason why we have to have a clocked system.
So, here my note objects are connected together.
The first object's right outlet is connected to the next object's third inlet,
and each object's first outlet is connected to the previous object's fourth inlet,
and there are a number of initialisation parameters in the note object;
for example, the note objects themselves have to know how many notes are going to be in the group.
Now, I've got this again where I can turn it on, but I'm going to set it to one frame per second.
So, you can see the beamed group calculating itself,
calculating where the stem lengths need to be adjusted,
and the trigonometry of the actual beaming.
So, let's create a window, and we'll just turn this on now,
and you will see it change.
You will see the beamed group evolve into a stable system.
OK.
So, the reason for connecting all the objects to all the other objects in this beamed group
is so that we have the correct trigonometry between the first and the last note,
and that adjusts all of the stem lengths of all of the other note objects
and adjusts the trigonometry of the beam.
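The geometry being described can be sketched minimally as follows, assuming upward stems and a straight beam fixed by the first and last stem tips (the default stem length of 3.5 staff steps is an illustrative convention, not necessarily what the patch uses):

```python
def beam_stems(heads, stem=3.5):
    """Given notehead heights (in staff steps) for a beamed group,
    fit a straight beam through the first and last stem tips and
    return the stem length each note needs to reach that beam.
    Assumes all stems point upward."""
    first_tip = heads[0] + stem
    last_tip = heads[-1] + stem
    n = len(heads) - 1
    # linearly interpolate the beam line across the group
    tips = [first_tip + (last_tip - first_tip) * i / n
            for i in range(len(heads))]
    # each inner stem is lengthened or shortened to meet the beam
    return [tip - head for tip, head in zip(tips, heads)]
```

For a group whose middle note sits higher than the line between the outer notes, the middle stem comes out shorter, which is exactly the adjustment the patch recalculates on every frame.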
Of course, this happens very quickly in performance,
but I'm showing it at one frame per second so you can see the process of the patch recalculating the parameters.
Again, this is wonderful.
We have a lovely piece of notation, but this is no different from a page,
because I've just created the objects on the screen and put them there
and connected them all together manually.
OK, so we have a score, but the score is going to be the same every time.
We can change the notes themselves, but I'm stuck with four notes.
That's all I've got.
Marvelous.
It looks nice, but it's not very flexible.
So, this system started off like this,
but then you get stuck because you realise you want to generate these notes in real time,
and you want them to be triggered by something like MIDI, maybe.
I know MIDI is almost a dirty word in the PD community,
but it does have a very rigorous definition of what a note is.
It doesn't have a very rigorous definition of what a rhythm is, though.
It's got a very poor definition of a rhythm.
MIDI is a note on, and then we wait for a while, and then we get a note off.
There's no definition that it's a 16th note.
That's usually contained within another piece of software,
as an abstraction on top of MIDI, such as a MIDI sequencer.
So, although you're dragging the MIDI notes around in a MIDI sequencer,
what you're actually doing is you're changing two messages,
the note on and the note off.
It's just because we have these wonderful sequencers
that we can fit the notes into a grid that approximates music notation.
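That grid-fitting step can be sketched as follows: given only a note-on/note-off duration in milliseconds, snap it to the nearest number of grid units at a known tempo (the default tempo and 16th-note grid here are assumptions for illustration):

```python
def quantise(duration_ms, tempo_bpm=120, grid=16):
    """Snap a note-on/note-off duration to the nearest number of
    grid units (e.g. 16th notes), as a sequencer would.
    At 120 bpm one 16th note lasts 125 ms."""
    beat_ms = 60000.0 / tempo_bpm      # one quarter note
    unit_ms = beat_ms * 4.0 / grid     # one grid unit
    # never emit a zero-length note
    return max(1, round(duration_ms / unit_ms))
```

So at 120 bpm a 250 ms note becomes two 16ths; anything shorter than half a unit still counts as one. This is exactly the information MIDI itself never carries.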
Now, the difficult thing...
I'm going to show you a patch that is not pretty,
but this is a real patch that's been used in a performance.
In fact, the performance was the first time this was tried.
So, here's my beautiful Islands patch.
A number of things about this.
Okay, so we have four staves.
I'll just turn this on and we'll have a look.
You'll see there's four staves there.
The clef can be varied.
You can have a bass or a treble clef,
so that the note positions automatically change according to the clef as well.
If you have a pitch 72, which is the C above middle C,
and you change it to a bass clef,
then you end up with lots of ledger lines.
And it stays the same pitch relative to the stave.
That's great.
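The clef-dependent positioning can be sketched by counting diatonic steps from the bottom line of the stave (E4 for treble, G2 for bass). This is a minimal sketch, not the patch's actual method, and it folds sharps and flats onto the natural below as a simplification:

```python
# diatonic step within the octave for each pitch class (C = 0),
# with accidentals folded onto the natural below - a simplification
STEP = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 3,
        6: 3, 7: 4, 8: 4, 9: 5, 10: 5, 11: 6}

def staff_steps(midi, clef="treble"):
    """Position of a MIDI pitch in staff steps above the bottom
    line of the stave (E4 = 64 for treble clef, G2 = 43 for bass)."""
    ref = 64 if clef == "treble" else 43
    octave, pc = divmod(midi, 12)
    ref_octave, ref_pc = divmod(ref, 12)
    return (octave - ref_octave) * 7 + STEP[pc] - STEP[ref_pc]
```

Here `staff_steps(72)` gives 5, the third space of the treble stave, while `staff_steps(72, "bass")` gives 17, far above the bass stave's top line at step 8, hence all the ledger lines when the clef changes.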
So, we want to create notes dynamically.
The reason why we're creating abstractions
like the note object and the stave is so that the note object
and the time signature and the tempo marking
can be dynamically generated within a sub-patch.
I have this abstraction called Make Voice,
and if we look inside Make Voice,
we find in here there's pd dollar-nought-voice-canvas,
and within pd dollar-nought-voice-canvas
there is another sub-patch called pd dollar-nought-voice.
So, in order to do this, again,
we have to do it within Pd-extended 0.42,
because that's the one that you can download from the website,
so that it's easy for musicians to load the patch.
They don't have to compile anything.
So, within the sub-patch,
I can reset it by deleting all of the inlets and outlets
in pd dollar-nought-voice,
and then reconnecting them from the enclosing sub-patch.
So, you've got two levels of sub-patches for the reason
that we need to be able to access the sub-patch dynamically.
So, within Make Voice as well,
we have these lovely sets of lists,
and this is how the dynamic patching is done.
So, this will create a note,
but in order for the beam groups to work,
it also has to perhaps connect the note to a previous note
and connect the previous note to itself.
Now, the objects in the PD patch are numbered.
The first object you put there is 0,
the second object you put there is 1,
the third object is 2,
the first outlet is 0,
the second outlet is 1,
the first inlet is 0,
the second inlet is 1, etc.
So, I'm actually sending an object,
which is a note object,
which is the abstraction I showed you,
and then we have all of the creation arguments,
which are dollar values,
which are delivered by this list here,
and then we have a bunch of connect values,
which are also delivered by the list.
So, how do we keep track of all the objects?
Because we have to maybe put a barline in,
so that's going to be object 46 or something,
and then we have to create a note,
which is object 47.
We have to connect object 47 to 48, 49, 50.
We have to connect object 50 to 49, 49, 48, 48, 47.
Keeping track of this is actually really difficult,
to create a patch in PD
that just keeps track of the numbers.
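The bookkeeping problem can be sketched as a toy counter, assuming nothing about the real external: it tracks the 0-based index of each dynamically created object and emits the "connect" messages wiring a new note to the previous one (the outlet/inlet numbers below are illustrative, not the patch's actual wiring):

```python
class NoteCounter:
    """Toy version of the object-index bookkeeping for dynamic
    patching: every created object gets the next 0-based index,
    and new notes are wired back to the previous note."""

    def __init__(self):
        self.n_objects = 0
        self.note_indices = []   # indices of note objects, in order

    def add_object(self, is_note=False):
        index = self.n_objects
        self.n_objects += 1
        if is_note:
            self.note_indices.append(index)
        return index

    def connect_last_notes(self):
        """Messages wiring the newest note to the one before it.
        The outlet/inlet numbers are illustrative only."""
        prev, new = self.note_indices[-2], self.note_indices[-1]
        return [f"connect {prev} 1 {new} 2;",
                f"connect {new} 0 {prev} 3;"]
```

Even this toy shows why it gets hard: insert a barline between two notes and every subsequent index shifts, so all the connect messages have to be recomputed.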
So, now, this is a bit that
I'm very pleased works,
but I can't say I'm proud of it
in terms of elegance and neatness.
I've created an object,
an external object, in here,
called the Gemnotes counter.
And this is an external object...
I almost gave up on PD when I was writing the Gemnotes counter.
It is trying to embed the rules of music notation
in a simple counter
that keeps track of what notes are connected to other notes,
and whether we need a barline
because of the time signature,
and keeps track so that you can have
beam groups across a barline and things like this.
It's a very complicated piece of coding,
which is counter-intuitive,
because it seems like a really simple, intuitive thing.
We have structures,
and the structures are connected together,
and then we have an overall hierarchy of structure
between the bar, the groups, the beams,
the rests, the notes, et cetera.
I tried to do this with counters connected together,
but it's very, very difficult to create in PD,
so I had to make it an external.
What I want you to see, first of all,
is... we'll get dollar-nought-voice over here.
This is the sub-patch in which the objects will be created.
I'm just going to start,
I'm just going to trigger off the piece now,
and I'll just bring this over here
so you can see it again when I'm done.
We're leaping violently between 200% CPU
and 20% CPU, so there are issues
to do with the actual amount of CPU you use.
You can see there's a number of objects there.
There's a tempo object which generates the tempo mark.
There's a time signature object, a rest object.
There we go, so it's deleted all of those objects,
reconnected them up,
and then started generating a whole new set of objects
to link to the stave to create the music notation.
OK, so it functions; it is working, sort of.
There are problems, but it's working.
The main problem is that dynamic patching in PD
is incredibly inefficient,
so it takes a long time for PD to generate a new patch.
Normally you don't see this because you load a patch
and it generates the patch from the list of text commands
that is sent to PD,
and then once the patch is loaded, it's loaded,
and then you use the patch
where you create a new object here and a new atom here.
Here we're creating abstractions in real-time,
in a sub-window,
and the abstractions themselves are incredibly complicated.
So it actually just eats up the entire CPU.
I can't run audio with this.
I have to run another instance of PD to run audio with this patch
because it just uses the whole CPU on this 2.2GHz machine.
It's not very efficient.
What I'm discovering is what is needed
is an object that actually embeds all the musical rules
and the graphics rendering in a two-dimensional way,
because we're using three dimensions to display two here,
and we're using OpenGL text 3D objects
to display every single little element of the notation.
As you can see, this is not a big score.
We're not dealing with any more than four lines.
It's fairly large, but it has a...
Yes, there's another.
Nope, it just appeared out of nowhere, but...
It's not...
In comparison to an orchestral score, this is nothing.
And yet we're using the entire CPU: 208% there.
It goes down to 20%, and then it goes back up to hundreds of percent.
So the computer can't cope with this complex recreation of a patch
inside another patch very easily.
Now, that's great, but what we really want is pitch-to-score.
I want to talk a little bit about the language
we're using to create the score,
because in order for this to work,
there has to be a middle-level language above just a MIDI note,
because, as I mentioned before, there's no rhythmic information in MIDI.
MIDI deals with timing in an incredibly cavalier way.
It's like: yeah, go, we're going to start a note here,
and then we just wait.
So MIDI is designed around the player,
rather than around the musical structure.
And so there's another object,
called the Gemnotes bar count, that attempts to translate
between a MIDI file and the meta-language.
I'll show you what the meta language looks like,
and then I'll show you the external.
That's going to be messy.
Okay, so we have a fairly simple text-based language
with a few quirks that are borrowed from Csound.
So you can see there's a note, there's a bar.
The unfortunate thing is that the musical structure
has to be very explicitly stated in this system.
So you have to put all of the groupings of the beams in the bar command.
And then we have notes.
A dot means it's at the same time, so it's a chord,
and a plus means that we just follow on from the previous note.
The second value is the pitch.
The third value is the rhythmic value,
in multiples of the base value given here.
So we have a bar of 7/4,
and then the first beam group is four 16th notes.
But then the first note is actually two 16th notes long,
and the note object recalculates the rhythmic value
according to the duration and the base value of the note.
But then we have to also be very explicit
that we have four 16ths, then four 16ths,
then four 16ths, then sixteen 16ths.
So the bar command actually sets up
the whole rhythmic structure of the entire thing.
Now this is almost impossible to do in real time
because you don't know what somebody is going to play.
So we can default to a bar of 4/4 if we want.
We can just set the time signature,
set a group of rhythmic structures,
and then it will continue to make notes in that rhythmic structure.
And then sharp or flat is the fourth value,
and then I have velocity in there,
but I haven't got round to dynamics yet,
so the velocity is kind of redundant;
it's not being used at the moment.
But this is not too complicated,
so this is a meta language which can be fairly easily programmed.
It's much harder to actually generate something in real time
because you have, again, the problem of MIDI:
no rhythmic representation.
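A minimal sketch of such a parser, assuming the field order described above (leading "." or "+", pitch, duration in base units, accidental, velocity) rather than the real file format:

```python
def parse_note(line):
    """Parse one note line of the meta-language as described:
    '.' means simultaneous with the previous note (a chord),
    '+' means follow on; then pitch, duration in base units,
    accidental flag, velocity. The exact field layout here is
    assumed from the description, not taken from the real format."""
    mode, pitch, duration, accidental, velocity = line.split()
    return {"chord": mode == ".",
            "follow": mode == "+",
            "pitch": int(pitch),
            "units": int(duration),     # e.g. 2 = two 16ths
            "accidental": accidental,   # sharp/flat flag
            "velocity": int(velocity)}  # currently unused
```

A hypothetical line like `"+ 60 2 f 90"` then reads as: a follow-on note, middle C, two base units long, flat, velocity 90.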
Now, I wouldn't recommend dynamic patching
as an approach to making anything if I didn't have to.
It's very messy, it's very difficult,
and it taxes PD so much, the CPU is so overloaded,
that we have to have the notes appearing over time.
This is very confusing.
Here we go, so this is moving too.
It's very confusing because...
When I gave this to the percussionist who was playing it
(this is a vibraphone piece),
he said, I can't play that fast,
and he was trying to play it as the notes appeared on the screen.
So, of course, that's not the point.
The point is to do it rhythmically according to the tempo mark,
but when you see something appearing in a rhythmic way,
then you try to follow that.
So intuitively there's problems with this.
What we really want to do is be able to create the entire score
at once and flash it up in front of somebody.
But if you do that with dynamic patching,
then you have to wait 20 seconds for the patch to build itself,
and it builds the whole patch and then it flashes it up,
but there's this delay going on.
So dynamic patching is not necessarily the best way of doing this,
and what I'm looking for now is a way of having the notes appear on screen,
calculate their beams, and then rasterise in two dimensions
so that what the performer actually sees
is just a frame of video rather than a really complicated openGL structure.
So, while I say this is a working system,
and I will make the code available on my site for people to download,
and you're welcome to make pieces for it and create stuff with it,
I wouldn't say there's much documentation.
In fact, I'd say there's no documentation at the moment.
But I do think that I've come to a dead end with this.
I've come to the point where I've stretched it as far as I can.
The complexity of the system itself
means that the overheads are very high,
and it also means that it's never going to be easy to use.
You can hope for a score to appear the way you want it to appear,
but you can't ever be quite sure,
and then there's always going to be a few errors.
So, it's a proof of concept.
It's proving that you can do this with dynamic patching.
It also shows very clearly the flaws of dynamic patching.
Oh, yes, I have a metronome as well.
There we go.
The metronome doesn't work
because PD is having to work so hard to create all of the other elements.
So, my lovely visual metronome only works when it stops building.
There we go.
I don't know what tempo that's at, but clearly it's very fast.
Oh, come on.
So, the visual metronome,
you need these kind of things to synchronise rhythmically
if you're going to be doing improvisation with the system;
the original idea was to make something that's a middle level
between improvised music and composed music.
In fact, every time the score is played,
it generates a different random sequence of fragments of the score.
So, although this is a through-composed piece
that I've then cut up into fragments,
there's a problem with that as well.
It's madness.
You need to be composing for live notation
if you're using live notation
because the way that live notation works
is by throwing ideas at the performer,
rather than having a long line.
I might as well just have a page if it's through-composed.
The second version of the piece will be
a little fragment of melody and rhythm,
not huge long lines,
because after creating this piece,
I realise that this is a nonsensical way to use live notation.
Live notation is all about unpredictability
and elements rather than complete scores.
So, there will be a second version of the piece,
whether or not it continues to be created in GEM
with dynamic patching.
I think I'll be looking for a more efficient way
of dealing with live notation
and rendering frames
rather than complex structural elements individually.
That's it.
I will make this available as soon as I have some time
and a UK plug and a Wi-Fi connection,
then I'll be posting this on my website.
There are actually a number of things in the external
which are crucial to the musical structure.
One of them is this massive array at the top,
which is just a way of splitting up the bar into...
So, for example, if you have 22 notes in the bar,
then we divide them into groups of 8, 8 and 6.
Future versions, I want to be able to change this array
so that you can restructure the bar.
So, if you're dealing in 7/4, you can have 3 then 4,
or 4 then 3, or 2 then 2 then 3,
and different rhythmic structures.
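The bar-splitting array can be sketched as a repeating grouping pattern applied to the units in a bar. This is a guess at the behaviour described (22 notes dividing into 8, 8 and 6, or 7/4 restructured as 3+4), not the external's actual code:

```python
def split_bar(n_units, pattern=(8,)):
    """Split the units in a bar into beam groups following a
    repeating pattern: 22 units with the default pattern of 8s
    gives [8, 8, 6]; in 7/4, a pattern of (3, 4) gives groups
    of 3 then 4."""
    groups, i = [], 0
    while n_units > 0:
        g = min(n_units, pattern[i % len(pattern)])
        groups.append(g)
        n_units -= g
        i += 1
    return groups
```

Making the pattern a parameter is exactly the future feature described: restructuring the bar by swapping in (3, 4), (4, 3) or (2, 2, 3).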
But this is why I think I've probably hit the buffers
in terms of this development:
just getting the musical structure into a form
that can be tracked in terms of the number of objects
takes, as you can see, a lot of code.
Just to count objects.
This is a counter.
This isn't doing some sophisticated DSP.
This is a counter.
A counter that took six months to write
and still doesn't work properly.
So, I don't think dynamic patching is the way to go.
At least I proved it can be done.
Any questions?
Yes?
You cited two kinds of difficulties.
One was that it's a very complicated thing
and it makes some errors in its calculation.
That's like the semantic layer.
But then the second is a performance problem.
Don't you think that the second problem could be solved
by what the guys showed this morning,
speeding certain things up with
pure computational power on the graphics card?
Maybe you could profit from that?
Absolutely.
The issue is,
one of the clear design goals for this
is it has to be something that a performer can take their laptop
and it will be a laptop
and download the patch,
which also has the external object as a binary,
so I'm calling the object in a folder called X,
and it's X slash gemnotes counter.
So, the idea is I send it to Simon,
who's my percussionist.
He downloads Pd-extended,
he opens the patch, and it works.
And it has to work on stage as well,
which is almost certainly going to be laptop.
Yes, it would be very nice to have a very juicy graphics card
with lots of memory and that would probably solve a lot of the CPU issues.
But I made too many rods for my own back
when I was designing this.
I tried to make it something that would be light and flexible
and easy for performers to use
and would work on a laptop on stage.
When you're designing software,
if you put too many parameters in the specifications,
then you eventually end up with neither a pig nor a dog.
It doesn't quite work in every way,
but I think it was a very big challenge and still is.
Yes.
Why, in fact, did you decide to use PD
to make this music notation visualization?
Good question.
Well, apart from the fact that PD is a...
It's hard to say. This evolved.
I sat there and I started making a note with a stave,
because I was trying to generate a system of live notation.
PD is something I know, I write externals for PD,
so obviously I was going to try.
I saw the potential of the text3d object
to display fonts for music notation.
Having said that, perhaps, like I say,
this is not the way to do it.
But the other reason is, again,
that's this idea that you've got to be able to download something
from the web and use it if you're not a computer programmer.
So PD extended already has the graphics tools built in.
It already has a way of displaying fonts.
And so it seemed like a rational thing to do.
Having said that, if you did it in another language
but you gave sufficient documentation,
then there's no reason why it has to be in PD.
I mean, I started this project four years ago
and it's only this year that it's become a reality.
Perhaps PD is not the way to do it.
Perhaps PD is not the best thing, I don't know.
But these things evolved.
You come up with an idea and you start doing the idea
and you go, oh, it's working.
So you start building and building and building.
The gig was in January,
and by December last year I was going, oh my God,
I can't believe I've done this.
It was tearing-my-hair-out time, because it wasn't working.
So yeah, were I to live my life again,
maybe I'd be doing it differently.
Well, you've clearly overcome many technical challenges.
So congratulations.
Well, thanks.
You had a question?
I actually wanted to comment on the performance issue as well.
I wonder whether it's really the dynamic patching
that's at the root of your problem,
or is it the communication logic?
I mean, the communication logic between the objects
that are triggered whenever
you create a new note.
Well, one of the problems is that you have to clock.
One of the things that really takes up all the CPU is the stave.
The stave object is constantly firing its list
on every gem frame.
It's firing its list down to all of the note objects
so that you can move the stave around
and also so that it can recalculate the beams.
What I haven't managed to do is find a way
of getting it to render, sort out the beaming,
and then turn that clocking process off.
So you've got two clocks running at 30 frames a second.
You've got the gem head,
but then you've also got a clock for all of the math.
So more notes just increase the CPU load linearly,
but the note object itself is quite a complicated object.
So it could well be the topology of the objects
and the way that they're constructed
that makes the dynamic patching so slow.
I'm not sure. I need to do some more tests before I work that out.
Speaking of clocks,
why are you actually using the gem clock
rather than a separate clock for the maths manipulation?
Because it is the...
Well, because every frame...
The idea is that the fastest clock that I can use
is the gem clock.
I could use a faster clock,
but it would totally overload the system.
It's a compromise between having to recalculate things
and doing it as fast as possible without going too far.
I tried it. I did do it originally with a metro object
to bang the list.
But of course, if you make the clock slower than the gem clock,
then you end up with really slow recalculation of stems
and things like that, and it's distracting.
So it just seemed like the most rational thing to do
is to be recalculating the maths on every frame
so that the recalculations would happen quickly,
but not too quickly,
so I don't have to do five million calculations every frame.
So I just do one set of recalculations every frame,
and eventually, after about three frames,
the beams will line up and it's nice.
Yes?
I don't know if it's
something you've already considered,
but you were talking about an aesthetic approach,
and you had the ambition to have the whole score written out
and popped up at once.
Would it change things if you took
a concept of live interpretation instead,
with transitions between fragments?
Wouldn't that give a better chance of a more musical performance?
Oh, absolutely.
Like I said, version two is going to be built from fragments of six or seven notes
and small rhythmic elements.
This is what...
I wrote the score, then I made the software,
and then I put the two together,
and I realised that what I actually have to come up with
is a new concept of music,
which is what I was aiming for anyway.
Something that isn't quite so rigidly defined.
And the irony was that the score I created for it
didn't exploit the system very well,
because it was so rigid,
and it needed to be something much more flexible and open,
like an aleatoric set of staves with different notes
in different parts of the screen,
and a lot more freedom of choice for the performer.
So, yes, I think there's much more...
there's much better use of the system,
but I haven't managed to encode all of the scores for that yet,
so that's going to be very interesting.
If you print the whole score at once,
it's like you could just do it on paper.
Exactly, yes.
But if you have a networked system that different musicians
communicate in, where they share the tempo
and can send material to each other...
Ah, well, this is the potential of the system,
and this is more exciting than the piece I composed.
But, I mean, for example, you have...
it's a bit of a kludge,
but if we have a look at the score,
there are quite a lot of aleatoric elements,
so, for example, movement 3 has...
right, so here's the thing that plays all the scores,
and then you've got movement 3,
movement 3 sections 1 to 9,
and then, within there, you've got Markov chains.
So, when you play movement 3,
you have a kind of probabilistic map.
These are Markov elements,
so we've got a sort of standard Markov chain.
Straight out of PD documentation,
in 03 Markov model, or whatever it's called,
in the control examples,
and just adapted.
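The idea of the adapted chain can be sketched as a weighted walk over score sections. The section names and weights below are hypothetical, purely for illustration:

```python
import random

def markov_walk(chain, start, n_steps, rng=random):
    """Walk a Markov chain of score sections: 'chain' maps each
    section name to a dict of weighted successors, in the spirit
    of the Pd control example adapted in the patch."""
    state, path = start, [start]
    for _ in range(n_steps):
        successors = list(chain[state])
        weights = [chain[state][s] for s in successors]
        state = rng.choices(successors, weights=weights)[0]
        path.append(state)
    return path
```

Seeding the generator from the current time and date, as the patch does, gives a different fragment sequence every time it is loaded.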
So, I got this rigid score.
I cut it up into pieces.
Every time you play it, it's slightly different.
There's a little object that seeds the random objects
with the time, the current time and date,
so every time you load the patch,
it's got a different seed for the random object,
so you don't end up with the same series of pseudo-random numbers.
But, yeah, you're right.
It's not the way I envisaged using it,
but I had a deadline.
I had a concert, and there were people buying tickets.
I had to do it.
Thank you.
