So let's talk about making music with Ruby — and we have a little bit of sound in this space. I'm Noah, and I make music, and I also make software. I'm also the organizer of the Bay Area Computer Music Technology Group — BArCMuT, a strange acronym. We've had 30 events since our creation in 2007, gathering at various places: Stanford, Mills, Berkeley, Dolby. Lots of gatherings, great presenters: John Chowning, the inventor of FM synthesis, presented; Ge Wang, the creator of the ChucK programming language; Tim Thompson, the creator of KeyKit; Jaron Lanier; and a number of other great people. And there's an event tonight at Pier 38, at Embarcadero and Townsend, and if you want to come by, you're certainly welcome.

I'm also building an online music instruction platform called Ashbury Music Hall, and I've worked with a dance troupe called Capacitor. We recently did an urban canopy project, a collaboration with the California Academy of Sciences, and we ended up at TED 2009 with that project. And I have a band, which is called Rabbids Rum. I wanted to play a little bit of music before we start diving into the code, because it somehow puts your head in the right space. So, this is a track of mine. [music plays]

So, that can be the experience of writing music with software, and the point of this talk is to get past that stage and actually be able to do some cool stuff with Ruby. So, back to the topic: making music with Ruby. This is going to be an overview of available libraries and strategies for working with them. And really the point is to maximize your fun — because you have a limited amount of time for working on fun projects, so you should maximize that time.
So, there's a small fun interruption before we get started, which is that Ruby is not the fastest language — and we're talking about doing audio processing and things like that. So it's good to think about four different rates of time in music systems, and they all have different demands.

The first one is non-real time: I'm rendering a file and I don't care when it gets done, those sorts of things. Then soft real time, where exact timing isn't really important — you're triggering something and it doesn't matter whether it happens at a precise moment. Then there's control rate, for things like MIDI or Open Sound Control, which is another music control protocol; those events have to occur within about a 10 millisecond window for you to be comfortable with their timing. And then there's audio rate: 44,100 samples per second, with multiple things happening at the same time.

Ruby is not so great at that last category, audio rate, but it's decent at control rate, soft real time, and non-real-time tasks. You can also rely on other schedulers — say Ableton Live, ChucK, the SuperCollider programming language, Java, or C libraries — and you can interface with faster audio-rate libraries for doing digital signal processing.

So why use Ruby? Basically, because it's a great, highly usable language. It's good for writing domain-specific languages and declarative work, and it's really good for gluing systems together. If you're doing multimedia stuff, sometimes you have a whole array of things that need to be glued together — I need to get some Twitter information and plug it into this sound installation, that kind of thing. And there are rich libraries in Ruby that are lacking in audio programming environments. So, who might want to use Ruby for audio?
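Before meeting those users — to make the four rates above concrete, here is the simple arithmetic on the time budgets involved (the ~10 ms control-rate figure is the rough comfort threshold mentioned in the talk):

```ruby
# Rough per-event time budgets for the rates discussed above.
SAMPLE_RATE = 44_100.0

audio_budget_us   = 1_000_000.0 / SAMPLE_RATE  # microseconds per sample at audio rate
control_budget_ms = 10.0                       # rough "comfortable" MIDI/OSC jitter window

puts "audio rate:   one sample every %.1f microseconds" % audio_budget_us
puts "control rate: events should land within ~#{control_budget_ms} ms"
```

At roughly 23 µs per sample, pure Ruby has no real headroom for per-sample work, while a 10 ms control-rate window is easy to hit — which is exactly the division of labor the talk recommends.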
I thought about it, and there are three main types of users that I want to address in this talk. The first one is Carl the composer. Carl's needs are that he wants to compose music; he doesn't really care exactly when it gets written — more of a soft real time concern — maybe outputting MIDI, maybe outputting an audio file, a wave file. And his favorite drink is oolong, perhaps, or Earl Grey, something like that. Our second potential user is Henry Hacker: someone who wants to make command-line music, or create some sort of communication between some obscure piece of data and music, or do live coding, those sorts of things. I say soft real time here, but when I think about it, hard real time would be good as well. Favorite drink: probably cold-pressed coffee. And the third user is DJ Diana, who is all about live performance, so everything has to be very precise timing — real time, control rate. There's a fourth user profile that's not in here, which is Danny DSP, and I'm not going to talk much about that one in this talk.

So: Snorkel. Snorkel is a library that I put together for exploring music programming in Ruby. It's pushed up to GitHub at github.com/aquabu/snorkel. Basically I pulled together a whole bunch of libraries — some of them you may have heard of before, some of them are a little more obscure — and I have examples for most of them, which are also the examples for this talk. So it's a good starting point, and definitely send me your feedback, particularly about installation: it needs an installation script, because there are a lot of strange dependencies.

So, the first user that we're going to talk about is Carl the composer. Carl wants to make music, particularly algorithmic music. And what should we generate music from? How about pi?
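Here is the minimal version of that idea — digits of an irrational number mapped to MIDI note numbers. The offset of 60 (middle C) mirrors the pitch offset used in the demo later; the digit string here is just a hard-coded stand-in:

```ruby
# Minimal digits-of-pi-to-MIDI-notes mapping: the classic first
# generative-music project. Each digit 0-9 is added to a base note.
PI_DIGITS = "31415926535897932384"  # hard-coded stand-in; extend as needed

def pi_notes(digits, offset = 60)
  digits.chars.map { |d| offset + d.to_i }
end

p pi_notes(PI_DIGITS.slice(0, 6))  # => [63, 61, 64, 61, 65, 69]
```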
Everyone, when they start making generative music, tends to write some kind of program that takes an irrational number and converts it into music, so I think that's a good starting point — and I can save you a little time in doing this, so your first project can be something else. There are also some interesting things you can do with quasi-random streams of numbers, quantizing them, et cetera. So, for this particular project we're going to use Ruby, ChucK, and pi.

So, what is ChucK? ChucK is a real-time audio language. It was created by Ge Wang and Perry Cook at Princeton. Its focus is programmer productivity and live coding, and the fourth bullet point that's not really here is that it's not exceptionally mature — but it's very usable, so it's very quick to get things started. At a certain point you're like, "oh, I really wish this was in the language," and when you get to that point you either write it, find a way around it, or you switch to SuperCollider. But ChucK is a really great programming language for getting projects started, and it has some cool properties.

ChucK installation: you can grab it from its download page at Princeton. And I should note that of the two operators you see here, the first is the chuck operator (=>), which chucks things, and the second is the upchuck operator (=^).

So, ChucK in four slides. Say you want to make a sound. Line one creates a sine wave oscillator, assigns it to the variable s, and chucks it to the DAC — the digital-to-analog converter. Nothing would happen if you wrote just this line, because time is standing still. You have to explicitly advance time in ChucK, which is one of the interesting properties of the language: time is actually a data type. So the second line chucks one second to now, which basically means I'm going to play the sine wave oscillator until one second has elapsed.
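The two ChucK lines just described can be held in a Ruby string — which is essentially the approach the template-based wrapper shown later takes before handing a patch to the chuck binary. A sketch (the constant name is mine):

```ruby
# The two ChucK lines described above, kept in a Ruby heredoc the way a
# template-based wrapper might store a patch before writing it to a .ck file.
SINE_PATCH = <<~CHUCK
  SinOsc s => dac;   // sine oscillator chucked to the digital-to-analog converter
  1::second => now;  // advance time: let the graph sound for one second
CHUCK

puts SINE_PATCH
```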
It's basically saying: I'm not going to do anything with the scheduler until this amount of time advances.

The second slide builds on the first one. So you've got the sine wave — how do you change the pitch? Here, line four has the number 880, and you chuck it to your sine wave's frequency, and then you advance time again. So this would play the default sine wave frequency, which is 440 hertz, and then play something at 880 hertz.

[Audience question about lines two and five.] Yeah, so basically line one is where you set up a graph — a unit generator graph — so there could be a whole series of unit generators all routed, or chucked, I should say, to each other: a sine wave oscillator gets routed to, say, a reverb, which gets routed to the DAC. And once those unit generators are all assigned, they generate sound as time advances. [Audience follow-up.] It acts on the entire graph. So, did we cover that? Yeah, basically we did. I'm going to show an example in a moment, but this is background for it.

So, command-line arguments. You can invoke chuck from the command line, and one strategy for using ChucK is to pass in command-line arguments. Another strategy is to create templates, run those templates through ERB or something like that, and then pass the result to chuck. In this example we're passing in a command-line argument: we have our sine wave oscillator chucked to the DAC, and then we grab the first argument passed in on the command line, convert it from ASCII to an integer, and chuck it into a variable — that's line four — which really should be called "note." Then on line six we convert that note to an actual frequency, pass it to the sine wave, and advance time. At the bottom is the actual invocation we make on the command line: chuck argpitch.ck with our argument, 60. That would play a C note — middle C.

And then this is an example of recording. All of these examples are inside the Snorkel project, and I've chosen them specifically because they're the kind of examples you'll probably want when you're trying to invoke ChucK from within Ruby. So if this kind of breezes by, just look at the examples and you'll get the hang of it very quickly. In this example, on line four, after the DAC we're piping basically everything in our signal graph to a WvOut — a wave-file writer — stored in w, and that is then chucked to blackhole. It's pulling the samples from the DAC through the WvOut unit generator to blackhole, and this one — just trust me, you have to do that; the explanation is a little down the rabbit hole. On line seven we're just naming the file; then you advance time normally and it records.

The two example invocations at the bottom: with the first one you'll actually hear the sound, and with the second you pass in the -s flag, which is for silent — it renders in non-real time, you won't hear it, and it renders faster. So if, for instance, you were using ChucK on a server and wanted to optimize for speed, you would pass the -s flag to chuck and it would be quiet.

All right, that's enough talk about ChucK. I'm going to show you an example which takes n digits of pi, converts them to notes, and plays them — a little sine wave, a little bit of reverb — and we'll record it silently and then play it back. Okay, so I wrote a little wrapper. You know what — first I'm going to show you what I was just talking about, which is playing a pitch.
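The shape of those invocations can be sketched from Ruby — chuck takes arguments appended to the .ck filename with colons, and -s for silent rendering, as just described. The helper below is a hypothetical sketch, not the wrapper from the talk:

```ruby
# Hypothetical helper that builds the chuck invocations shown above:
# arguments are appended to the .ck filename with colons, and the -s
# (silent) flag renders in non-real time with no audio output.
def chuck_command(patch, args = [], silent: false)
  cmd = ["chuck"]
  cmd << "-s" if silent
  cmd << ([patch] + args.map(&:to_s)).join(":")
  cmd.join(" ")
end

chuck_command("argpitch.ck", [60])                  # => "chuck argpitch.ck:60"
chuck_command("simplerecord.ck", [], silent: true)  # => "chuck -s simplerecord.ck"
# system(chuck_command("argpitch.ck", [60]))        # would actually run it
```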
So here's that first example: we just grab the command-line argument and play it as a pitch. Passing 60. The second one is a simple record, but I'm going to skip that because we'll be doing it in a moment anyway.

So inside this other folder is a wrapper called atlatl — an atlatl being a thing you chuck a spear with. It's a little wrapper for ChucK that I wrote, and we have this simplest-pi demo. Can everybody read that, or does it need to be bigger? Okay, I'll read you some of the comments. So here we're creating this wrapper, and when it's called it'll print out what it's actually passing to chuck. We're setting the pitch offset, which is 60, so we're going to add 60 to every successive numeral in our pi expansion, and then we're going to use 100 digits of pi with this implementation here. Then we add these events, it creates a big ChucK string and renders it — it would do it silently, but actually maybe I'll just play it and generate the wave file here. And here's the result. So right here we have chuck, piping all these values to this simple render, with all these command-line arguments. Not the most beautiful thing, but it proves the point and gets us through that "I really want to use pi and generate some music from it" moment. There it is. It's mildly disappointing, actually. We'll have a more exciting example in just a moment.

Okay, so: Bloopsaphone. Bloopsaphone is a project written by why the lucky stiff and company. It's a Ruby API over a C chiptune engine, basically written for doing game music in Shoes. There's not a lot of documentation, but if you look at the examples, it's pretty straightforward to use.
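Since Bloopsaphone is driven from plain Ruby, it pairs naturally with pattern generators written in Ruby. The Euclidean rhythm generator the next demo leans on can be sketched minimally like this — my own Bresenham-style version, not the implementation in the snorkel repo, and it may produce rotations of the canonical patterns:

```ruby
# Minimal Euclidean rhythm sketch: spread `hits` onsets as evenly as
# possible across `steps` beats. Step i is an onset when the running
# product wraps past a multiple of `steps` (a Bresenham-style test).
def euclidean_rhythm(hits, steps)
  (0...steps).map { |i| (i * hits) % steps < hits ? 1 : 0 }
end

euclidean_rhythm(3, 8)  # => [1, 0, 0, 1, 0, 0, 1, 0]  (the Cuban tresillo)
```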
So I'm going to do the enhanced pi demonstration, where we use pi, quantize it a little, and then use this Euclidean rhythm generator — which, as a side note, is something you'll want to check out. The Euclidean rhythm generation algorithm evenly distributes a set of drum hits across a given number of beats, and it ends up creating patterns that are clave-like; they sound like lots of polyrhythmic patterns. When the first paper came out on this, there was a claim that it figures out why we like certain rhythms — I feel that's a specious argument, but it does create a lot of cool rhythms. So I'm going to use the Euclidean rhythm generator in this and demo it with Bloopsaphone, the chiptune library.

So let's take a look: cd into the bloopsaphone directory, and let's start with the Bloopsaphone theme song, which is just the default demo. You do the definitions of the instruments in the first part — this one has a square wave, and you're setting up your punch, sustain, decay, all the usual things you'd expect when defining an instrument. If you want to know what those parameters are, I'm afraid you'll have to read the C code; the easiest way to discover them is just to read the examples. Then it uses this Nokia-ringtone-style notation for pitches. And so this is the aptly named Bloopsaphone theme song; it sounds like this. [music plays]

Okay, so now we're going to do something else, a continuation of our example exploring this generative music idea. I've created four instruments here at the top, and then down here I've created an array of notes — there are ten values here, and these are essentially what we're going to quantize the digits of pi onto. Then I've created this simple little function called greek_melody — since this is a Euclidean rhythm plus pi, it has to be a Greek melody. So I create this Euclidean rhythm here on line 55, and I grab these various numbers from pi, and what's kind of neat about this is that it'll wrap the rhythm across however many significant digits of pi we're using. It prints out the melody, and then down here I do it a few times so we have a three-section song, ABA form. And you can change the number of total beats here and it scales. So this is our first real generative-music-sounding thing, and here it is. [music plays] This is section B. I really didn't premeditate a lot of the aesthetics of this — it fell out of the Euclidean rhythms, which, as you could tell, produce interesting rhythmic output. So that's worth exploring.

Okay, so that was kind of neat: we generated some music and it sounded okay. Now I want to show you where the bar is in generative music these days. This fellow David Cope teaches at UC Santa Cruz, and he has a project called Experiments in Musical Intelligence. It's a Lisp application that goes out and analyzes scores from other composers, and it creates a sort of Markov-chain-like analysis that stores which pitches are played in combination, what they transition to, and in which portion of the piece they're played. He then generates scores that are performed by musicians — you always get some benefit if it's actually performed by live musicians; there's a lot of interpretation there. This piece that I'm going to play a sample from came from analyzing a number of Beethoven pieces, and this is the output. [music plays] It starts out sounding like the Moonlight Sonata and then diverges, and it's fairly sophisticated generated music. And he has some other scores that are full orchestral pieces, Bach chorales, other things. I was just really struck by this whole thing.

As a side note, he originally had writer's block and wrote this program to solve the problem for him. He then used the program to generate compositions in his own style, premiered the piece, and it met with better critical acclaim than anything he had done previously — and he didn't say anything about how he'd created the music for a while. So he's a pretty interesting character, and it's worth checking out more of his music, and also the algorithms that are used to make it.

So, on to another generative music topic: data sonification. I worked on an EEG data sonification project with an artist named Sarah Philly, and we asked this question: how do you get from a whole bunch of raw EEG data to a score? We took a fairly raw approach, and I want to show you this just because it's the kind of thing that you'll probably try as you get into generative music. So the first approach was just literal: we're going to take some numbers and map them directly to notes — what does this sound like?
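That literal first mapping — raw values scaled linearly into a note range — can be sketched like this. The input numbers are made-up stand-ins for EEG samples, and the note range is an assumption:

```ruby
# The "literal" first approach described above: scale raw data values
# linearly into a MIDI note range. Input values are made-up stand-ins.
def direct_map(values, low_note: 48, high_note: 84)
  min, max = values.minmax
  span = (max - min).to_f
  values.map { |v| low_note + ((v - min) / span * (high_note - low_note)).round }
end

p direct_map([10, 50, 30, 90, 70])  # => [48, 66, 57, 84, 75]
```

Every wiggle in the data becomes a pitch, which is exactly why the result is so hard to listen to — the refinements that follow (dropping duplicates, quantizing to a scale) are attempts to fix that.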
So this is what it sounds like. [music plays] I think that's probably about as much of that as I can take. The second approach was: all right, we're going to eliminate duplication of notes. [music plays] Okay, so a little bit of rhythmicality starts to emerge. And then next: okay, now we're going to quantize it onto some sort of major, or major-ish, scale. [music plays] Slightly interesting — you're willing to listen to it for longer than the first two examples, but still not all that exciting.

So the next iteration of this idea that I tried was working with the Capacitor dance troupe. We were doing a geophysics-themed multimedia performance, and we had access to some raw geophysics data. And I had been studying harmonic theory with this fellow named Allaudin Mathieu, and I thought: why don't I take his harmonic theory — which is basically about lattices built on the overtone series — and run the data that I have through such a lattice? There were some other rules that I used, but that's the general idea: sonify this geological data. And this is the output. [music plays] So in this piece there's some harmonic progression, though it's still a little bit floaty. And this is a picture of the data that we used — or was it this data? I don't know; you can't really tell.

So: how does data map meaningfully to sound? That's the question I ended up with after this, and it's still a hanging question that I think about. On meaningful data mappings, I'm just going to mention a couple of things. One is: what can actually translate across domains? There's this idea of meaning spaces — something is meaningful in one domain and something else is meaningful in another. And the other thing is: how can it be meaningful to the ear? We have this raw series of data that could be played — this pi-generation thing where the notes don't really hang together all that well — but you take the same pi, create a sine wave from it, and you get a perfect pure tone. So what mapping are you going to use that actually has some meaning to the physiology of the ear? So there are these two questions: is it aesthetically meaningful to the ear, and is it meaningful and accurate to the data? I'll leave that conversation there and go on to our second profile, which is Henry Hacker.

So, Henry wants to live code. And what's live coding, you may ask? Live coding is using your keyboard as an instrument, probably with a command line or a text buffer, and it can be soft or hard real time. There's a website called toplap.org, and there's a manifesto draft up there — a manifesto in progress — with phrases such as "code should be seen as well as heard" and that live coding may be accompanied by "an impressive display of manual dexterity and the glorification of the typing interface." So this is highly compatible with Vim.

Anyway, here's an application called Quoth, written by Craig Latta; it's up at netjam.org/quoth. It's an interactive fiction system in the spirit of Zork and games such as that — executable natural language, written in Squeak Smalltalk — and here's an actual demo of Craig using it. [video plays] Craig says he's going to open-source that at some point, and we should encourage him to.

So the next question is live coding Ruby. irb is your friend — that console had a lot of irb-like behavior; you saw him step into objects and do things like that, which is kind of interesting. So I've written a little starter project called irbivore. It's up on GitHub, and it's a good live coding starting point: it allows you to send MIDI and spork some ChucK shreds, which is ChucK language for creating processes. I think the command line is a nice tool for playing with audio, basically because there are two states of mind when you're doing
music stuff. One is that you're in the rhythm — you're actually in the sound of it. The other is that you're thinking about being in the sound of it, or thinking about the compositional structure of things. Those are two really different mindsets, and if you're in the command line you can get a little bit closer to that being-inside-the-music state of mind.

So I have this little text demo. We drop into irb — I have Boson running in here, so you can list your commands — and I've got these midiator key mappings. What's kind of interesting about this is: all right, this is making a bunch of kind of nasty sounds — the mapping is important — but you can use your keyboard skills to actually play music, and you can use phrases to remember melodies. I would have a hard time remembering that melody, but you can have melodies associated with strings of text, and use your word-grouping brain functions for actually remembering musical phrases, which is kind of a weird cross-mapping. So that's that demo, and you can play around with it.

So let's move on. MIDI — you probably know about that. midiator is a cross-platform MIDI library that irbivore is using; it's written by Ben Bleything, and it borrows code from Topher Cyll's Practical Ruby Projects. It gets used a lot. Also, Gamelan is a scheduler by Jeremy Voorhis, and I'd recommend checking that out.

So, on to DJ Diana, who wants to control loops live. Back to our irbivore thing: I was going to hook up Ableton Live and show you that, but it's pretty easy to do, so I thought I'd do something that's a little bit different. So here we are, back in the console, and I shred the keys. [demo] You're noticing a little skip there — that's actually my fault and not JACK's, because I'm reading the files off the disk; it should be sorted out fairly quickly, but it's kind of a demo. Okay, so that's an example of — oh, cool, thanks. Let's go back to our slides, because I'm about to mention it.

So, Chucker is a wrapper around ChucK for creating ChucK processes — sporking ChucK shreds, as it were. It's worth checking out; it's also included up there in the snorkel library on GitHub, so you can go check that out. It's fun to poke around in there: different ways of adding in various shreds and pulling them out, adjusting parameters. It uses a template approach to create a class around a ChucK patch and then launch those as shreds.

So, as I said, Ableton Live: it would be pretty easy to drive with midiator and Gamelan, passing MIDI to Ableton and just triggering loops. I should also mention that there's a JRuby object that Adam Murray has brought into the Max/MSP visual programming language, and it works quite well — Adam is actually demoing JRuby inside of Max tonight at the BArCMuT event. And there's Max for Ableton Live, which means you can launch JRuby inside of Max inside of Ableton Live. That's coming out next week — Adam just confirmed to me that it's possible, and I think he's going to do a short demo of it tonight. And that's the easiest way to get Ruby into your live performance environment — literally your live performance environment. I would definitely look at JRuby inside of Max for Live, unless you want to go down this more hackerish direction.

So, controllerism. It's kind of a movement that's happening right now, and I just wanted to mention it as an inspirational point: people building custom controllers and developing virtuosity with them — hardware endeavors. I wanted to share with you a little bit of what's going on in that space. This right here is a fellow, Edison, who has built this monome — a 64-button controller — inside a yellow lunchbox that he purloined from his ex-girlfriend. And on the right is another controller
that's made out of arcade buttons, and this is him performing on it — he's developed some serious skills. It actually triggers chords. I saw him perform recently, and it was even more impressive than that demo; the stuff he was doing, I was like, that is so awesome. He's spent so much time with his instrument. So there's a lot of benefit in spending a lot of time developing an electronic music instrument and performing with it, and I would suggest going in that direction rather than perpetually spawning new little projects, which is my approach.

So, other libraries and quick demos. I've given a certain amount of love to ChucK, and I should give a lot of love to SuperCollider, because it's a really amazing language — it has things like list comprehensions and currying, a fantastic client-server model; it's really good. So I'm going to do a really quick demo of a library called Scruby, which is a SuperCollider library for Ruby. So this is Scruby, and this right here looks a lot like SuperCollider: you're booting the server, here's a synth definition, and then, just for fun, I'm scheduling this SuperCollider stuff with Gamelan, Jeremy Voorhis's scheduler. And I think in order for this to play I have to launch SuperCollider over here and turn on the server — you can do this all from the command line, but it's more difficult. So let's see what happens; hopefully it'll work. [demo] So that's kind of cool: we just defined this FM synthesizer from within Ruby — here's the envelope generator and the sine wave oscillator, we're multiplying them together, doing FM synthesis stuff — and then I scheduled it with the Gamelan scheduler, which is kind of insane, because the SuperCollider scheduler would be a lot more accurate, but it's really neat that you can go back and forth between Ruby and SuperCollider like this. So I just wanted to give a quick demo of that.

And one last thing is this LilyPond
score-writing thing. I know you guys can't really see this down here — there's a LilyPond... a LilyPond. I just wanted to quickly show you that it's possible to generate scores from within Ruby. This is kind of a template thing, and I just generated some notes there; that bit is actually the score language for LilyPond, and I've just created a wrapper around it. So it wouldn't be so hard to convert between our weird Bloopsaphone-Euclidean-pi thing and generate a score for it, so that I could have my orchestra play along with it, right? So anyway, that's there, and it's pretty useful: you can generate PNG files, you can generate PDFs with LilyPond. Pretty exciting stuff.

Other useful libs: midilib is a classic — you can generate MIDI with it. It's written by Jim Menard, it's sort of a standard, and it's been around for a while. I decided not to demo this, but opaz-plugdk is a declarative VST plugin generator written in Ruby, for generating jVSTwRapper-based VST plugins. It's worth checking out — there's a blog post; it's kind of mad, but I've used it and it actually works. Kind of exciting — interesting projects in that direction.

So here are some areas for development. Scheduler improvement and synchronization: what is the best and most accurate scheduler that we can build in Ruby? Part of that is dependent on language-level developments. Composition libraries and DSLs that we can share. Improved SuperCollider and ChucK libraries — those are kind of rudimentary right now, and there's no documentation; you have to dig around. It's exciting, but it's not super usable, so improving those would be great. Unit generator signal graphs: once again, Jeremy Voorhis has written a library for wrapping Core Audio unit generators, which I've played around with — it's pretty cool and it's something to watch; that's an area to explore. And if you want to play around with snorkel and irbivore, which I
posted — you know, please send me patches, or don't, and just play with them. So that's the end.

Right now — well, strangely timed, tonight in the city at Embarcadero and Townsend — there's a BArCMuT meetup. Tom Lieber is going to present; he's been doing some really low-level direct calling of ChucK within Ruby. Adam Murray is going to present on Max/MSP running JRuby inside of it, and JRuby inside of Max/MSP inside of Live. And then JD Northrop is going to present one of his side projects, which is controlling Processing and Reaktor with Ruby. There will be a few other presenters as well, so if you want to come out, tonight would be a good night — and if you live in the Bay Area, just check for events, because we do a lot of stuff. Finally, the examples that I showed are all up on GitHub, mostly in that snorkel project, and my blog is there too. Thank you very much.

[Q&A] A unit generator would be a sine wave, or a square wave, or something like a delay. The typical way audio programming languages work is that they define a graph of unit generators: you say, pipe my sine wave oscillator to this reverb, and then after that I want to apply this — whatever — Chebyshev polynomial process to it, or something like that. So it's basically a graph, essentially graphing the output of one unit generator into another. [Audience question.] You could consider one Audio Unit to be a single unit generator, and then the output of that going into another unit generator would be the signal graph — so in essence, yes. And Jeremy Voorhis's project definitely touches on that, and the VST wrapper thing is also in that direction. Any other questions? Thank you all very much.
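To make the unit-generator-graph idea from the Q&A concrete, here is a toy pull-based sketch in plain Ruby. This is just the shape of the idea — not how ChucK or SuperCollider actually implement their graphs, and the class names only echo theirs:

```ruby
# A toy pull-based unit-generator graph: each node's tick pulls one
# sample from its upstream node, transforms it, and passes it on.
SAMPLE_RATE = 44_100.0

class SinOsc
  def initialize(freq)
    @freq  = freq
    @phase = 0.0
  end

  # Produce one sample and advance the oscillator's phase.
  def tick
    sample = Math.sin(2 * Math::PI * @phase)
    @phase += @freq / SAMPLE_RATE
    sample
  end
end

class Gain
  def initialize(input, amount)
    @input  = input
    @amount = amount
  end

  # Pull a sample from the upstream unit generator and scale it.
  def tick
    @input.tick * @amount
  end
end

# "SinOsc => Gain => out": pull one block of samples through the graph.
graph   = Gain.new(SinOsc.new(440.0), 0.5)
samples = Array.new(64) { graph.tick }
p samples.first(3)
```

A real system would add block processing, a scheduler, and many more node types, but the essential move — composing nodes and pulling samples through the chain — is the same.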