I'm going to give you a talk about Clojure through the twin media of music and lies. When I tell you a lie, I'm going to follow it up with an admission and an explanation, whether it's about Clojure or music. And you have to understand that when you're dealing with a domain like music, which consumes people's entire lives with study, there's a certain amount of lossy compression needed to fit it into a 40-minute talk. In fact, I've already told you quite a significant lie, which is that this talk is about Clojure. It's not about Clojure at all. It's about music, through the twin media of Clojure and lies. And I think that's okay, because I was looking back over the program of the conference over the last couple of days, and it's surprising how few talks are actually about Clojure. I think that in this community, we see Clojure as a starting point, not a finishing point. So I saw talks about logic programming, board games, mountains of chicken. But a lot of the ideas are things that are taken from the language and extended outwards. And the way I like to put this is that Clojure isn't so much a programming language as a transmission vector for the Clojure programming philosophy. So what kind of philosophy is that? Well, it's ambitious, it's evangelical, and it's pragmatic. Clojure is a programming language for people who live in today's world of programming but who want to see it made a better place. And that's not a small feat, to achieve that kind of philosophical jump. For example, Haskell, which is one of my favorite programming languages, my first programming language, is such a beautiful creature that I think it's easy sometimes to think that Haskell is itself the pinnacle of all things Haskell, that it's the artwork itself. But to us, I think, Clojure isn't an artwork; Clojure is the paints, it's the easel, it's the musical instrument. And you could say that mainstream success is just a side effect of a well-designed programming language.
And if you did say that, it would explain why Haskell hasn't quite had mainstream success yet, because it's pure and it doesn't have side effects. But Clojure does have side effects, both literally and metaphorically. And one of my favorite side effects to play with is music. So I'm going to show you today some stuff using Overtone, which is written by Jeff Rose and Sam Aaron. This is not a demonstration of Overtone; there's far more that Overtone does than I'll cover. But it's an excellent vehicle, as you might imagine, for a talk on music, not on Clojure. So it's a talk about music, so this is music. Well, no, that's a lie. This isn't music; this is dots on a page. But it's dots on a page that are used by people to represent music, in particular from the Western music tradition. And if you're not someone who's had musical training, the way you read this is like a graph. Each dot is a note, and its position encodes when it happens and at what pitch. So the horizontal axis is time and the vertical axis is pitch. And I'm going to claim that this is a regular language, in the same sense as a regular expression: there's no way to mint new abstractions within the language itself. There's a good reason for that, because Western music notation is a strange kind of DSL that's designed to be executed in real time on a peculiar kind of finite state machine called a musician. And a musician doesn't have the mental bandwidth to decode nested abstractions when they're in the middle of a performance, when they're trying to concentrate on their expression and on their technique. So it's a strength of the language that what you see is what you get. There are no extra notations that the execution environment has to work out while they're trying to remember how to place the bow. But that has a consequence, which is that we can't create new abstractions that suit the piece of music we're working with.
We can't create things that represent movements in the piece, or choruses, et cetera. We also can't drop down any lower and create a piece of notation that describes how a violin sounds. But if we use a general purpose programming language like Clojure, we do have that ability. So I'm going to start with the most basic building block of sound and gradually accumulate abstractions till we get back to that piece of Western music we just saw. The most basic building block of sound is the sine wave. It's a pressure wave of high and low pressure which is propagated through the air between the thing that's making the sound and the ears of the listener. That's why they say in space no one can hear you scream, right? Because there's no medium to transmit the pressure wave. So if we have an instrument called tone, all it's going to do is emit a sine wave of the specified frequency. How high or low the note is perceived is based on how many times per second it oscillates. If we have two sine waves, so doubletone, that accumulate, we get another sound. And that's louder, because we're talking about a pressure wave: if there are two waves in sync, the troughs reinforce and the peaks reinforce. Now what happens if we have two sine waves that are slightly out of sync? They're alternately reinforcing and interfering. That's the principle on which noise-cancelling headphones work: if you emit a sound that is half a wavelength out of phase with another sound, you get silence. But we don't want to deal with infinite sine waves. When we looked at the Western music notation, we had dots on a page, events, bounded by time. So we're going to create something called a beep. The beep has frequency, but it also has duration. And the way that we control the duration is through an envelope. An envelope is a wrapper that determines the maximum amplitude of the sound.
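A sketch of what those first instruments might look like in Overtone; this is my reconstruction of the idea, not the code on the speaker's slides, and the parameter names are assumptions:

```clojure
;; Hypothetical reconstruction of the tone and doubletone instruments,
;; using Overtone (which boots a SuperCollider server on load).
(use 'overtone.live)

;; tone: a single sine wave at the given frequency
(definst tone [freq 440]
  (sin-osc freq))

;; doubletone: two sine waves summed; when they're in sync,
;; peaks and troughs reinforce and the result is louder
(definst doubletone [freq-a 440 freq-b 440]
  (+ (sin-osc freq-a)
     (sin-osc freq-b)))

;; (tone 300)           ; start a 300 Hz sine wave
;; (doubletone 300 300) ; the same note, but louder
;; (stop)               ; silence everything
```

These run forever until stopped, which is exactly the problem the envelope below solves.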
So when the envelope starts at the maximum, the sound is at full volume, and when the envelope snaps shut, the sound is finished. So this is beep. I don't have to manually cut it off anymore; I just have an instrument called beep. And by the way, we're using not regular functions but the definst macro, because with Overtone, what's actually happening is there's a synthesis server in the background. So when we're creating an instrument, we're not just defining a Clojure function; we're actually registering a synth definition with SuperCollider. Now, I've already told you a fairly interesting lie about frequency. And to explain exactly what I've lied to you about, I'm going to need the help of this slinky spring. Right, so I said that how high or low you perceive a sound is determined by the frequency of that sound. So imagine this is a guitar string: you pluck it and it vibrates at a certain rate. That rate is determined by how tense the string is and how long the string is. Now, the greatest extent of the movement of the slinky spring is kind of like a sine wave that's wrapped around on itself, crossing the X-axis where my hands are holding the spring. That's called a standing wave. So there's a particular wavelength that fits in this string at a certain tension, and that's the one that vibrates. But you might also think, well, why doesn't a wave that's half that size also oscillate within the spring? And the answer is that it can. If you have a wavelength of half the size, or double the frequency, it also oscillates within the same string, the same length and tension. So when you have a guitar string, it's not oscillating just at what we call the fundamental frequency; it's also oscillating at twice that frequency, at three times that frequency, four times, et cetera, in a theoretically infinite harmonic progression.
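Here is one way beep could be written; the envelope shape and parameters are my own guesses at the idea described, not the original code:

```clojure
;; Hypothetical sketch of beep: a sine wave bounded in time by an envelope.
(use 'overtone.live)

(definst beep [freq 440 dur 1.0]
  (* (sin-osc freq)
     ;; perc describes a percussive envelope: a fast attack, then a
     ;; decay lasting dur seconds. When it snaps shut, :action FREE
     ;; tells the server to free the synth, so we never have to stop
     ;; it by hand.
     (env-gen (perc 0.01 dur) :action FREE)))

;; (beep 440 0.5) ; a half-second beep at concert pitch
```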
And as we go up, the subsequent harmonics are slightly quieter, and eventually they're too high even for a human to hear. So I've got here a synthesis of a bell, which not only has a fundamental frequency, which we'll give a proportion of one, but it has a first harmonic with a smaller proportion, a second harmonic, et cetera. We can use that to create a more realistic sound. So this is what beep sounds like. And this is what bell sounds like. So it's a much fuller and more realistic sound. And you hear them as the same pitch because your brain is listening out for the loudest and lowest of the sounds. But we can do a little bit better than that, because when we're dealing with physical properties, things aren't ever quite as ideal as they are in theory. With bells, it turns out that as you get up into the higher harmonics, the smaller wavelengths, they're actually a little bit higher still than the model would suggest. So if we reevaluate the bell form and then play again, we'll hear another sound. That sounds much more like a real bell, because we're taking into account the physical characteristics of bells. And of course, you have a whole cornucopia of instruments that are different variations on the theme. You can still hear the same underlying principle, but they have different color. Now, so far I've spoken about sound as though it's an abstract signal. But of course, in order to perceive music, you need a receiver for this signal, and the characteristics of the receiver are going to affect how the whole thing works. So I've got three invocations of the bell, and I'm going to go from high, medium, low: 600 Hertz, 500, 400. So high, medium, low. But if you look carefully at what I'm doing here, the last argument and the last two arguments are actually the proportions of those harmonics, the first couple of harmonics. And what I've done is override the fundamental frequency with a proportion of zero here.
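A hedged sketch of the bell idea: a stack of harmonics, each a multiple of the fundamental, each quieter than the last. The exact proportions here are made up for illustration, and real bells (as the talk notes) stretch the upper partials slightly sharper than this idealized version:

```clojure
;; Hypothetical bell: sum the fundamental and its first few harmonics.
;; The per-harmonic proportions are arguments so that, as in the demo,
;; individual partials can be overridden with a proportion of zero.
(use 'overtone.live)

(definst bell [freq 440 dur 3.0
               h0 1.0 h1 0.5 h2 0.4 h3 0.25]
  (let [harmonic-series [1 2 3 4]
        proportions     [h0 h1 h2 h3]
        partials (map (fn [harmonic proportion]
                        (* proportion (sin-osc (* harmonic freq))))
                      harmonic-series proportions)]
    (* (mix partials)
       (env-gen (perc 0.01 dur) :action FREE))))

;; (bell 600) (bell 500) (bell 400)  ; high, medium, low
;; (bell 500 3.0 0)                  ; fundamental silenced: lowest
;;                                   ; partial actually heard is 1000 Hz
```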
And here I've overridden the fundamental frequency and the first harmonic with a proportion of zero. So actually the lowest sound you're hearing when I play the 500 is 1,000 Hertz, and the lowest sound you're hearing when I play the 400 Hertz bell is actually 1,200 Hertz. So the order of the lowest and loudest frequencies is actually the reverse of how you're perceiving it. So: high, medium, low. But the physical part of the sound is telling us the complete opposite. And the reason for that is that a harmonic progression is something that is built into real physical sources of sound. It's something that your brain is aware of; we're quite wired for sound. So your brain can recreate the parts of the sound that should have been there. And this is quite important. For example, if you're speaking to someone with a low voice on a telephone: about the lowest that mobiles emit is about 300 Hertz. The speaker isn't loud enough to go much lower than that. But the lowest frequency of a human male voice is about half that, 150 Hertz. So you're relying on the fact that your brain is error-correcting back what should have been there in the signal to hear things coherently. And that's obviously a bit of an aural illusion. But there are a lot of implications of how our brains and how our hardware are built that you have to take into account if you really want to understand how people are going to perceive sound. So frequency is a spectrum. How do we know which notes in the spectrum to play? Well, I've got another toy, which I picked up in Istanbul. And you can see that this is obviously a proper musical instrument. But it doesn't have a dial; it has buttons. It's quantized the frequency space. So what happened to all the other bits of the infinite frequency spectrum? Well, I'll play you a couple of notes. And it turns out that there are very strict rules governing the relationships between these sounds.
So when you double the frequency of a sound, you're generating what you could call an equivalent frequency. It's a little bit like midday one day and midday the next: you can distinguish between them, but they share some kind of underlying identity. And if you had a male and a female voice singing the same song, but separated by a frequency ratio of two, you would hear it as the same song, although at the same time you could also tell that the woman's voice was higher. And the distance between the octaves, as we call it, is divided exactly into 12. So the ratio between each adjacent button is a frequency change of the twelfth root of two. Each time I go up, I'm multiplying the frequency by the twelfth root of two. So quite a small ratio. And we can encode that in Clojure very easily; it's a very functional concept. So I've recreated the core midi->hz function from Overtone. All I'm doing is taking the base frequency of a system called MIDI, and each time we go up a button, or go up by one in the MIDI scale, I'm multiplying by the twelfth root of two. So midi->hz is just a pure function. midi->hz of 69 is 440 Hertz, close enough, which is known as Concert A. It's a common reference point. And midi->hz of 70 is 466, because we're increasing the frequency by a ratio of the twelfth root of two. There's a slight lie in the model I've presented here, which is that I've said that we've divided the space between the octaves exactly into 12. That's certainly what's done in the dominant paradigm of Western music, called equal temperament, but it's not the only possible way of doing it. And there's a lot of fun to be had exploring other tunings and other subtleties, both within old Western music traditions and other traditions as well. Okay, so we can start to build up more abstractions. So I've created here a function ding, which instead of taking hertz just takes a MIDI code. So it takes the numeric encoding of each of the buttons here and plays a note.
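The midi->hz idea can be sketched as a pure function; the base frequency below (MIDI note 0 at roughly 8.1758 Hz) follows from anchoring note 69 at 440 Hz, and this is my reconstruction rather than Overtone's actual source:

```clojure
;; Each MIDI step multiplies the frequency by the twelfth root of two,
;; so twelve steps - an octave - exactly doubles it.
(defn midi->hz [midi-note]
  (* 8.1757989156 ; frequency in Hz of MIDI note 0
     (Math/pow 2.0 (/ midi-note 12.0))))

;; (midi->hz 69) ;=> ~440.0, Concert A
;; (midi->hz 70) ;=> ~466.2, one twelfth root of two higher
```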
So just like when I was playing it on the melodica. And now we can start to flesh out our model of music. What I'm going to do here is model each note as a map with just a pitch and a time. So this is just a simple constructing function. It's a very close analog to what we had with the Western music notation: each note had a pitch and a time. It was just represented positionally, whereas here we're using numbers. And so it follows that a melody is a sequence of notes. And we're going to be able to transform a sequence of notes using a function called where, which is just a mapping of update-in. So let's now define our core function of playing music: play. What play does is take a sequence of notes and transform each note by offsetting the time. In other words, when we play a piece of music, we don't want to play it back at the beginning of the Unix epoch; we want to offset the time of each note by the time at which we invoke the function. And then we just schedule each one: at the millisecond the note specifies, we ding the bell. Easy. And we just return the scheduled notes so we can see what we're doing. So we can start to build more powerful functions, like even-melody, which just takes a bunch of pitches, no times, and uses a reduction to space them out by a third of a second each. So this is what even-melody sounds like. That's playing the MIDI notes from 70 through 80. But listen again, and I think you'll hear that there's something a little bit non-musical about this sequence of notes. It's not actually something you would hear in a real piece of music, and that's because we're missing one very important abstraction in music, which is scale. We've got a whole bunch of notes, but it turns out that in a piece of music, we don't play all the notes. We pick a subset of the notes for the effect of a particular piece. And so the interface to the melodica actually has a default scale.
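The note model described might look like this; it's a hedged sketch that assumes a ding instrument like the one above and Overtone's now/at scheduler, with times in milliseconds:

```clojure
;; Notes are plain maps; melodies are sequences of notes.
(defn note [timing pitch]
  {:time timing :pitch pitch})

;; where: transform one key of every note in a melody -
;; just a mapping of update-in.
(defn where [k f notes]
  (map #(update-in % [k] f) notes))

;; even-melody: pitches spaced a third of a second (333 ms) apart,
;; using a reduction to accumulate the times.
(defn even-melody [pitches]
  (let [times (reductions + 0 (repeat 333))]
    (map note times pitches)))

;; play: offset every time by "now", schedule a ding at each
;; millisecond, and return the scheduled notes so we can see them.
;; Assumes a (ding midi-note) instrument is defined.
(defn play [notes]
  (let [scheduled (where :time #(+ % (now)) notes)]
    (doseq [{:keys [time pitch]} scheduled]
      (at time (ding pitch)))
    scheduled))

;; (play (even-melody (range 70 81))) ; MIDI notes 70 through 80
```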
So the white notes represent a major scale, and the orange notes represent the notes that are, you could say, illegal in that major scale. If I pick the white notes as the scale to play, I'm not going to play any of the orange notes for that piece. It's something that's relative to a particular piece. And we can encode that too in Clojure. A major scale is defined by the relationships between the notes you're allowed to play and the ones you aren't allowed to play. In a major scale, the distance between the first and the second note is a double jump; we've skipped one. And then we've got another double jump, and then a single jump, and then double, double, double, single. And this pattern of skipping and playing continues on in both directions. But that's kind of a relative measure, because what we're going to see is that the zeroth note of the major scale is at zero, the next one is at two, then four, and then five, where we've got the single jump. But what we want is a reference point that makes sense for our piece of music, a starting point for where we're going to do these big and small jumps. And so we can enhance the major function with a C function. I feel nervous talking about C functions at a Clojure conference. So if we compose together the notion of C and major, what do we get? Well, we get a function, but a function that we can give a degree of the scale, a position within the notes that we're allowed to play, and it will tell us what MIDI note that means. So the third of a major scale, or degree two (off-by-one errors are not something invented by programmers), is 64. And then if we go up, we get 65, et cetera. So because we're working with nice pure functions, we can plug things together and, to use a Clojure community expression, decomplect the idea of what the relationship is between the sounds we're playing and the basis we're working from.
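The composition of scale and key might be sketched like this; for simplicity this version only handles non-negative degrees, whereas the talk's pattern continues in both directions, and the names are my reconstruction:

```clojure
;; A scale is the running total of its pattern of big and small jumps.
(defn scale [jumps]
  (fn [degree]
    (apply + (take degree (cycle jumps)))))

;; Major: double, double, single, double, double, double, single.
(def major (scale [2 2 1 2 2 2 1]))

;; C: anchor degree 0 at middle C, MIDI note 60.
(defn C [midi-offset]
  (+ midi-offset 60))

(def c-major (comp C major))

;; (c-major 0) ;=> 60
;; (c-major 2) ;=> 64, the third of C major
;; (c-major 3) ;=> 65
```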
And so of course in music, you have a whole bunch of different starting points. They're just names given to places in the MIDI scale. So I've just defined here D through B using a kind of destructuring def macro. They're just names we give to the notes of the C major scale. And things start falling out quite nicely when we use this model, because we're using functional composition. So we can have C flat major. And all flat is, is an alias for dec. And we could have C sharp major, and we're just altering the reference point. We could even have (I don't know if this is a bug or not) C sharp sharp major if we want. And the ways we can plug these together are fairly limitless. But as you might be able to tell from the fact that I called it the major scale, it's not the only scale. So here are a few other kinds of scales. A minor scale is similar, but defined by a slightly different pattern of big and small jumps between the allowed notes. We have the blues scale and pentatonic, and the degenerate case is chromatic. So I might play the chromatic scale to start with. As I said, that's the degenerate case: that's when we're playing all the available notes, because there's a single jump between everything. Let's play the blues scale. So you should have a certain association with the notes you hear there. Partly that's the mathematics, because it has a pattern of big and small jumps. But remember, we have a receiver in play here, and you've also got a kind of cultural and personal background of the kind of music you've heard using those relationships. And it's the combination of those that produces the effect. So to make that point, let's try playing the pentatonic scale. The only difference between those two scales is that the pentatonic scale is missing one of the notes that's in the blues scale.
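The other scales and modifiers mentioned could be sketched as follows, assuming the scale function from before; the jump patterns are the standard ones, but the code itself is my reconstruction:

```clojure
;; More scales: same idea, different patterns of big and small jumps.
(def minor      (scale [2 1 2 2 1 2 2]))
(def blues      (scale [3 2 1 1 3 2]))
(def pentatonic (scale [3 2 2 3 2])) ; blues minus one note
(def chromatic  (scale [1]))         ; the degenerate case: every note

;; sharp and flat just nudge the reference point by one MIDI note.
(def sharp inc)
(def flat  dec)

;; (comp C flat major)        ; C flat major
;; (comp C sharp sharp major) ; C sharp sharp major - bug or feature?
```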
But the effect is very different, and you'll probably hear the effect as sounding Chinese or Japanese, because the traditional musics of those countries use the pentatonic scale. So we can have a big difference depending on what scale we choose. It seems like a finicky, small mathematical detail, but it has a real emotional impact. So I'll play you a little melody using the major scale, and then I'm going to play it again using the minor scale. That's Frère Jacques. Now this is in D minor. So if you're anything like me, the second time sounded a lot more haunting, a lot sadder. But something to note: the only one of these pitches that is interpreted differently by the major and the minor scale is two, or the third of the scale in conventional musical terms. So you heard that really obvious subjective difference based on the fact that four of the 14 notes have a frequency that's different by the twelfth root of two. So we really are wired for sound, right? We are built in such a way that music and sound are really meaningful to us. We're not really general purpose computers in this case; we are specialist hardware for interpreting, among other things, music. And now we can start to build some melodies. So let's get a bit more sophisticated. This is Row, Row, Row Your Boat, written in Clojure. The pitches are specified as degrees of the scale. And the durations are specified in beats, using ratios. And because we're in a functional programming language, we can mash these together to make the sequence of notes pretty easily, just using a reduction and a map. So this is what it looks like, or part of it. That's Row, Row, Row Your Boat. And then in order to play it, we need to do a little bit of transforming as well. The astute among you might have noticed that when I defined play, I was talking about milliseconds, but when I defined Row, Row, Row Your Boat, I'm talking about beat lengths. So we need something to bridge that gap as well.
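Expressed as data, the tune might look something like this; note that the degrees and durations below are my own transcription of the melody, not the speaker's slide, and the note constructor from earlier is assumed:

```clojure
;; Row, Row, Row Your Boat as data: pitches as scale degrees, durations
;; in beats as ratios, zipped into notes with a reduction and a map.
(def row-row-row-your-boat
  (let [pitches   [0 0 0 1 2
                   2 1 2 3 4
                   7 7 7 4 4 4 2 2 2 0 0 0
                   4 3 2 1 0]
        durations [1 1 2/3 1/3 1
                   2/3 1/3 2/3 1/3 1
                   1/3 1/3 1/3 1/3 1/3 1/3 1/3 1/3 1/3 1/3 1/3 1/3
                   2/3 1/3 2/3 1/3 2]
        ;; running total of the beat lengths gives each note's onset
        times (reductions + 0 durations)]
    (map note times pitches)))
```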
So beats per minute is a pretty simple function. In fact, it's a function that returns a function. So bpm of 120, two beats per second, returns a function that can tell me that three beats into the piece is 1,500 milliseconds into the piece. The fourth beat is at 2,000, and if we want to use a different beats per minute, it's going to give us a different answer. So we can use that to transform each time key in the map, and we can use the C major composite function to transform each pitch key in the map. And then we just need to pipe it into play. But I promised you at the start that we'd deal with abstractions that are custom, that allow us to express things that a general purpose DSL can't (if a general purpose DSL is a legitimate term in programming). So I'm defining here run, which is like a glorified range function. The reason I'm doing that is because if we look at Bach's music (this is a piece of Bach's music), there are a lot of sections of notes that look like a line. There's a kind of zigzagging here, going up and down. So we should be able to define these just by the peaks and the troughs. Because I don't think, if Bach were speaking about that music, he would say, you know that great bit where the melody goes C, D, E, F, G, A, B, C. What he would actually say is, oh, you know that bit where it goes from C and there's a run up to the next C. So I'll just interpret this single line. A run can fill in the gaps, and allows me to get a little bit closer to the actual domain. Because the domain isn't something that I can define a general purpose notation for, because each different musician's expression is a slightly different sub-domain. So you'll hear in Bach's music a lot of things that sound a bit like this: peaks and troughs. And so with a couple of other abstractions, I'm able to define the melody of that piece of classical music we just saw, Canone alla Quarta.
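Both helpers can be sketched briefly; again these are my reconstructions of the functions described rather than the original code:

```clojure
;; bpm: a function that returns a function from beats to milliseconds.
(defn bpm [beats-per-minute]
  (fn [beat]
    (-> beat
        (/ beats-per-minute) ; minutes into the piece
        (* 60 1000))))       ; converted to milliseconds

;; ((bpm 120) 3) ;=> 1500
;; ((bpm 120) 4) ;=> 2000

;; run: a glorified range that fills in the notes between the peaks
;; and troughs of a zigzagging melodic line.
(defn run [[first-note & peaks-and-troughs]]
  (if peaks-and-troughs
    (let [next-note (first peaks-and-troughs)
          step (if (< first-note next-note) 1 -1)]
      (concat (range first-note next-note step)
              (run peaks-and-troughs)))
    [first-note]))

;; (run [0 4 1]) ;=> (0 1 2 3 4 3 2 1)
```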
And I'm taking advantage of things I know about the structure. So it's made up of runs, from zero down to minus one, three, zero. There are whole sections where notes of the same duration are repeated; there are 14 quarter notes in a row in this section. Now, this structure may or may not make sense to you, but if you have a different idea about how Bach's music works, well, then you write a different abstraction. So there's not necessarily a singular way to represent the same piece of music, because we've gone up the abstraction tree a little bit. We're not notating every individual note; what we're doing is expressing a theory about how it works. So it's something quite similar to how compression works. And in Canone alla Quarta, there's also a bass part, another part. And just to make the point about the flexibility of using a general purpose programming language and data, Clojure data structures: for the bass part, I've added another key to the map. So it not only has a time and a pitch, but I'm giving it a bit of metadata that tells me that the part is bass. So if I want to distinguish (and often in an orchestra or a piece you do want to distinguish) between what is the bass and what's not, then I can do so. So I'm getting now to the highest level of abstraction in my talk. If you imagine, music is like the OSI networking model. We started with a sine wave, which is like the transport. And then we had error correction with psychoacoustics, and various things like scale that allowed us to interpret the signal. Well, we're well into the application layer now. And there's something called a canon, which is a technique that Bach employed to write this piece of music. A canon, in English, is a melody that is accompanied by a functional transformation of itself. Well, maybe not normal English; in programmer English, it's a melody that's accompanied by a functional transformation of itself.
And you can see that the Clojure to express that is about as terse as the natural language. It's something that lends itself quite well to Clojure: we just concatenate the original notes with f of notes. And that can be employed in a variety of ways. A simple canon is a canon where you take the original melody and translate it across in time, or delay the accompanying melody. So I'm expressing it here just with a function that takes every time key and offsets it by a certain delay. An interval canon is a close analog, but instead of translating the graph across in time, it translates the graph up or down in pitch. So in the original melody we start at a certain point, and in the accompanying melody all the notes are either higher or lower in a fixed way. That's an interval canon. A mirror canon is maybe slightly more interesting. That's where we take the original melody and negate the pitch of all the notes in it. What that means is that everywhere the original melody goes up, the accompanying melody goes down. A crab canon, which is so called because in Bach's time they had a theory that crabs walked backwards, for whatever reason, is one where you negate the time. In other words, the original melody goes forward in time; the accompanying melody goes backwards in time. And as you might expect, because we're dealing with such pure and functional concepts, you can use functional composition literally. Baroque composers had a kind of canon they called a table canon, which is literally a functional composition of a mirror canon and a crab canon. So the accompanying melody is flipped in two directions: around the X axis, or the time axis, and around the Y axis, or the pitch axis. And the reason it's called a table canon is actually pretty cool. If you imagine a piece of Western music notation on a table in front of you, you can play what you see.
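The canon vocabulary just described might be sketched like this, assuming the where helper from earlier; the names are mine, chosen to match the talk's terminology:

```clojure
;; A canon: a melody concatenated with a functional transformation
;; of itself.
(defn canon [f notes]
  (concat notes (f notes)))

(defn simple [delay]            ; translate the follower in time
  (fn [notes] (where :time #(+ % delay) notes)))

(defn interval [gap]            ; translate the follower in pitch
  (fn [notes] (where :pitch #(+ % gap) notes)))

(def mirror                     ; negate pitch: up becomes down
  (fn [notes] (where :pitch - notes)))

(def crab                       ; negate time: the melody runs backwards
  (fn [notes] (where :time - notes)))

(def table (comp mirror crab))  ; flipped in both directions

;; (canon (simple 4) melody) ; e.g. a round, four beats apart
```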
If you've got a friend on the other side of the table, they can also play what they see. And what they see is the original melody flipped in pitch and time, hence a table canon. There's a kind of canon that I didn't have time to explore called a puzzle canon, which is where a composer would distribute an original melody and then just tell you that there is a transformation you can make to produce a nice sound. There may be multiple transformations. So I guess you could probably imagine implementing something like that in core.logic, where you have a solution space and you have some kind of predicate that determines what harmonic relationships you're willing to subject yourself to. Now, Row, Row, Row Your Boat is a simple canon, actually. So what we're going to do is play the original melody and then delay it by four beats. You should be able to hear that; I'll raise my hand where the second melody comes in. And you might have done that in school: you might have had half the kids in the classroom sing, and then the rest. And yes, as someone from the audience just pointed out, it's actually more than a simple canon, it's a round. So you don't just play the accompaniment once; you can play it multiple times, delayed by four beats and four beats again, and all the parts work well together. But it seems a bit of a pity that we've abandoned the power of the positional representation. There was a talk a while back at the Conj about how easily we can perceive information when position is used. And it's nice that we have a linguistic way of conveying that Row, Row, Row Your Boat is a canon, a simple canon. But it's kind of a pity that we had to give up the positional representation. But the answer is that music isn't a side effect, as I lied at the start of my talk. Music, like everything else at this Conj, is data.
So we happen to represent it by pumping it to our speakers, which causes vibrations in the air. But why couldn't we equally just graph it? So that's using Quil, and that's Row, Row, Row Your Boat. And you can see the symmetry between the two pieces. In fact, let's enhance our definition of canon: we'll mark all the accompanying notes by giving them a part of follower, and then we'll be able to see what's going on a little more clearly. There we go. So you can see visually, using position, what the original melody looks like in the pink and what the accompaniment looks like in the blue. But that's quite a simple transformation, and as you might expect, a master like Bach used rather more complicated concepts. So Canone alla Quarta, which is the piece of classical music I showed you right at the start of the talk, is a canon that is produced by functionally composing together an interval canon, a mirror canon and a simple canon. In graphical terms, it's dropped by three, it's flipped, and then it's delayed by three. Which is quite a difficult thing to pull off. So before I play you Canone alla Quarta, I might just give you the original melody without the transformation, so you can hear how complicated it is on its own. All right, so that's the first little bit of Canone alla Quarta, just the original melody. You can probably hear the runs going up and down that I talked about earlier. But of course it has a transformation, which I'll put in here. It has a bass part, and just for the sake of seeing exactly what's going on, let's play it again: let's play the full Canone alla Quarta with all three parts, and let's graph it. So you can see there's a symmetry between the original part and the transformed part, and you can also see the triplets that are part of the bass.
And because we're working in a general purpose programming language, we were able to express things in a quite domain-specific, even custom domain-specific way, but we didn't have to give up translating it back into a graphical representation when we wanted to. Many people like live music, and my own personal theory as to why people enjoy live music is that just as every note is being played, just as the string is being plucked, before it drops dead on the ground, there's a possibility that it could have been something else. The musician could have changed their mind; they could have made a mistake. So there's the idea that it's alive: live music. And so if you use code to represent music, what does that mean? Well, you could say that code is immortal music. This is on GitHub, right? You can fork it, you can submit a pull request. You can, as I have done, use git bisect to work out where you messed it up. This music can always be something else, so long as you have a Clojure environment and a willingness to transform it. And I do have a willingness to transform it, so I'm going to play something slightly different. I'm going to take the scale and play it a little bit higher: I'm going to use A as my reference point. I'm going to use minor rather than major, so the patterns of the big and small jumps are going to be different. And I might make it slightly faster. And because I've got these nice abstractions, this is a nice DRY place in which to make these changes. If I'd notated every note individually, I would literally have needed to go through every note in the piece to make this change. So this is Canone alla Quarta as Bach would not have intended it. And I'd bet that the effect is a little bit more like the theme tune of a gritty cop drama, and a little bit less like the theme tune of, maybe, a costume drama. You could play around with that for hours, right?
And the shape is still the same, and if you hadn't guessed it, I'm just doing a simple filter on which notes are in the past to work out which ones to play and which ones not to play. But just because I've kind of trivialized one of the greatest pieces of the Western music tradition, please don't imagine that creating a canon is a trivial exercise. It's really difficult to create a melody that makes sense when you play it on its own, but also makes sense when you mess around with it in a functional way. Bach was definitely a genius, even amongst composers. He took a formal, austere system and managed, using his human creativity, to give expression to a really beautiful idea. And I find that really inspiring as a programmer, because I think there's a little bit of an analog with what we do. We take the beautiful, crisp but formal abstractions that our programming language gives us, and we use them to express ourselves. Perhaps not in quite as artistic a way as that. But I know I take satisfaction in having an idea and being able to represent it. And I've heard a lot about how Clojure is a great language for the enterprise: we can build simple services, et cetera, with it. That's absolutely true, and I definitely believe that Clojure is like bringing a scimitar to a spoon fight, or whatever Neal Ford's analogy was. But I don't think that's the reason most of you are here. It's certainly not the reason I'm here. The reason I'm here is that expressing yourself is fun. And in terms of programming languages, Clojure is the best tool I've yet come across to express myself. Thanks very much. Thank you.