More like afternoon, but afternoon, afternoon, you know. Hopefully you had a good meal, hopefully you're refreshed. This should be a fun session. I am moderating. We are live. We are live, it works. So I guess I'm gonna start it off. All right, welcome back, everybody. You know, we had the usual Zoom issues, but we are back here with a proper stream key. So in session one, we talked about identity. But we know that after identity, sometimes it goes through education, right? Education and the future. So today we have a great assortment of tools that are going to help you learn, and maybe unlearn, because unlearning is also part of education. So yeah, today we are going to start off with Anthony.

Hey, everyone. Thanks for having me here. Hope you're having a great day. Let me go ahead here and just make sure, sorry, real quick, I want to make sure my slides are actually in presentation mode. Just one second. Okay, cool. All right, I'll share my screen so you can all see it. Great. Hi, everyone. I'm Anthony T. Marasco. I'm an assistant professor of music technology and composition at the University of Texas Rio Grande Valley in Brownsville, Texas. That's at the very, very bottom of Texas, in that little tip on the U.S.-Mexico border. Today I wanted to talk a little bit about a concept I've been exploring with my students here in our music tech program: how to approach computer music performance, making music with the computer as an instrument, as opposed to the typical DAW approach where we sit down, we write, we change, we layer, we come up with a final product and then share it with people after the fact. This live performance aspect of the computer is pretty new to most of our students here in the music tech program that I head up.
And this is, when I talk with my students, an example, or a couple of examples, of what they typically see or know about making music with a computer before they come join us here in classes or in our new music ensemble, which is a new group we started last spring for performing music with computers live. Typically what they're seeing on social media is stuff like you see in the left picture here. This is an example of the monome norns, a little pocket computer that runs on a Raspberry Pi. You can interface with it with a ton of MIDI controllers, including monome's nice proprietary grid. Most of the developers of those instruments really go for abstract, very fun imagery, like this whale attacking a sailor here. The grid can be anything, of course. And so it's a lot of really nice patterns of lights that play back samples or synth engines: very abstract, but very fun. And we see this also with larger-scale software like Ableton Live with grid controllers like their Push, which help abstract away some of the complex things about computer music performance and make more interactive experiences without looking at the computer, let's say. However, the reality still exists that when you want to start learning how to perform something that you built yourself, or you want to build or perform something a little more flexible, like building a set of instruments in Ableton instead of just using a pre-made instrument, this is the reality our students get faced with: this really nice, but dense and complicated UI scheme. And this is the case for most musical instruments on a computer, even ones that are built as Max patches and then sent out from composers to ensemble members.
This gets daunting for students, particularly if they've never yet explored a DAW, if they've never yet understood this idea that a virtual instrument is here, here's how you interact with that instrument, and here's how you then pass that sound on to effects. Those concepts of signal flow, of chain of command, are very daunting. So what I've been working on with my students here is to start with live coding as their introductory approach to making music with a computer in real time. And that's where I came up with the concept in the title of this talk: learning how to talk to your computer about music, not just clicking buttons or moving sliders and dials. Those are all important and very expressive ways of performing; they still are, and they have been for a long time. But as you've all explored, live coding and writing text-based instructions for computer performance is also expressive. It may not have the exact same one-to-one physicality as turning a dial, but that doesn't mean expression doesn't exist or isn't possible with text. For me, what I've been exploring with my students is a concept that gets overlooked when working with a DAW like Ableton or Logic or MainStage, which is that a computer needs instructions, and that when you want to ask your computer to do a certain thing, and you have this one-to-one, bi-directional communication going with your instrument, live coding acts as a great way to do that. It has a low barrier to entry and is very friendly; you can actually start using language that students already know. Now, here at UTRGV we're a bilingual institution, so there's always the thing to address, of course, that some live coding platforms tend to be English-focused. But I know there's been some research on different languages for live coding platforms.
And Spanish would be really helpful for us here in our region of the country. But regardless of the language, the idea is that we're not going to use a lot of drawings, we're not going to use many symbols, and if we do use symbols, they're usually data-focused, like numbers or strings and words. When we actually construct an instrument or talk to our computer about making music, it is a word, a phrase, a set of dialogue that we create, just like we would if we were talking to each other and asking a violinist to do this and then to follow that action with another action. Many of you, I think I heard in the last session, were talking a bit about archiving and having a record of the music you make. Live coding allows for that, because you can archive this text and have a record of what you did. And for me and my students, this works out well because we can look back on it as a performance review. We don't have to do a screen recording and watch which clips we launched in Ableton, or whether we had the quantization settings just right to keep those clips organized; we can actually export that code, look at it, see where we had errors, see where we excelled, and then use that as a starting point at our next practice session. And for students getting used to or just starting with live computer music performance, since much live coding software is free or runs in the web browser, it's accessible to all. There's no need for an extensive financial investment; especially if, at an institution or a school, you have a computer lab, you can get students started right away without needing software licenses and the like.
So I have a couple of examples here of the types of things that my students and I look at when we make pieces with live coding, and a couple of things that I emphasize to them and we talk about as a group: why we use certain words and symbols, why we use certain syntax, and how we can apply that to a universal concept of computer music performance. We use Gibber here as our platform, so I've got some screen shares from sessions we've done in that tool. One thing that I talk about with our students is this broad concept of object-oriented thinking, how we see it in live coding, and how we can extrapolate it to other DAWs and tools that aren't so text-based. This is the idea of making an instance of an instrument. Whether it's Ableton, Gibber, or Sonic Pi, you have this broad idea of synthesizers, of drum machines, many different instruments, but you are going to talk to just one, maybe for one piece, or you'll have to actually differentiate to the computer which one you are talking about. Not all synths are making this melody line or this sequence, just this particular one. So when you assign a synth that you built to a variable, like you see in this example, I encourage my students to use that as a role-assignment activity. Is this the synth that's going to play the lead melody line? Then you would want to name it that way. Is this a drum machine that has multiple drums inside of it, like you see in this next example? If so, then you're going to need to think multiple layers deeper: the drum set as a whole, then the specific drum on the set. That's something we can see in other DAWs and tools like that; here we can lay it out explicitly and directly. The same thing goes for when we talk about not just the instrument, but the actions and abilities that the instrument can perform. So when we talk to the lead synth, we ask it to perform its ability of playing a note.
And then we want that action to happen multiple times. In most DAWs, we do this by making MIDI clips, which look like very abstract blocks or automation curves. Those are all very helpful visual cues, but there's something much more direct for newcomers about seeing that these things in square brackets are the notes. And you don't even have to know traditional music theory, right? You're talking about scale degrees as numbers, and then a symbol like that flat line for a rest, the absence of a note rhythmically. Those types of things, being very explicit about the actions, about what's happening, and about the instructions that inform the actions, have really helped my students out as we've been learning about these ideas as performers. Another concept that can be daunting in some DAWs is the idea of signal flow: this is your sound generator, and you then have to pass that out onto an effect or multiple effects. When we start with just DAWs, a lot of the time students will get confused and think that the delay effect is the instrument, as in, I need to make sound by instructing the delay to do this. But technically, you have to start at your synth or your drum machine first. Here in a platform like Gibber, or any live coding platform, you have this chaining of commands. You talk to the broadest part first, the instrument; then you ask for this activity of actually connecting it and sending its sound into the effect. And then you can differentiate which thing you are controlling: the delay effect won't have any sequences or sequenced actions applied to it. That can help a student's mind differentiate that it's not the instrument, but an effect that will apply change to the instrument's sound later down the line. And I do want to just share real quick here a little ten-second performance by my students, from a group jam we were doing when we started working with Gibber not long ago.
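The ideas above, one named instrument instance, a note list with rests, and a chain of commands from instrument into effect, can be sketched outside of any music platform. The following is a plain Python sketch, not actual Gibber code (Gibber is JavaScript-based); every class and method name here is hypothetical, chosen only to mirror the concepts of role assignment, explicit note lists, and signal flow.

```python
class Delay:
    """An effect: it never generates sound on its own, it only
    transforms the events it is fed (so it gets no sequences)."""
    def __init__(self, time=0.25):
        self.time = time

    def process(self, events):
        # Echo each (time, note) event once, delayed by self.time beats.
        echoes = [(t + self.time, note) for t, note in events]
        return sorted(events + echoes)


class Synth:
    """An instrument instance: the one thing you actually 'talk to'."""
    def __init__(self, name):
        self.name = name
        self.effects = []

    def connect(self, effect):
        self.effects.append(effect)  # signal flow: synth first, then effect
        return self                  # returning self allows chained commands

    def play(self, degrees, dur=0.5):
        # degrees are scale-degree numbers; None stands in for a rest
        events = [(i * dur, d) for i, d in enumerate(degrees) if d is not None]
        for fx in self.effects:
            events = fx.process(events)
        return events


lead = Synth("lead")                     # role assignment: this synth is the melody
lead.connect(Delay(time=0.25))           # chain of command: instrument into effect
events = lead.play([0, 2, 4, None, 7])   # four scale degrees and one rest
```

The point of the sketch is the division of labor: the synth is the thing you sequence, and the delay only ever receives what the synth sends it.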
So this is an example of a typical practice session that we would do in our weekly lessons to build a piece with Gibber for our concert last spring. And this is a collaborative jam. Gibber, like many platforms, has the ability for everybody to join a shared room and work in a shared code editor. This allowed the students to learn together, to see the errors they made, and to see the things they made that resulted in really interesting sonic ideas. Instead of having everybody's Ableton screen or MainStage screen look very dense, without actually being able to know which dial was turned or which preset was selected, here all the work is shown. So it's like working on a lab assignment or a complex math problem as a group, and the work is always there to review later, to practice, and to change as you go. So not only are we working on this individually here at UTRGV and in our group, but also collaboratively, and within fourteen weeks of rehearsal, I should say, we were able to do a twenty-minute jam, which you can see online on our YouTube page; I'll share the link with you in just a second here. And yeah, it's really set off our students' interest in ways to perform with a computer that aren't just the typical DAW-based performances you might see. So I want to thank everybody for joining us today, and I'm excited to hear everybody else's presentations here on the panel. If you want to take a look at the performance that we did with Gibber last spring, and some of our other performances in the ensemble, I've got a link here for you that I'll get sent out to everybody later. And if you have a group of students, or you yourself as an artist want to collaborate with us, we're always open to collaborations, so please feel free to reach out to me at my email there. Thank you. Thank you very much, Anthony. Yeah, thank you.
We do have time, so we are going to go to Martin next. But do stick around at the end for the group Q&A, because there are some questions we are going to have about conversations. Awesome. Yeah, thanks. Thank you for your talk. And Martin, whenever you're ready, you can go. Um, OK, it's on the wrong screen. Give me a second. Oh, can you not see my screen? We see your screen. Oh, but I can't see it. It is just weird. OK, so you can see the Building to the Drop slide, you can see the text? Yeah. No, I don't think they can. Yes, yes, we can. Oh, you can't. Sorry. OK, my kids can't see it. But they wrote it, so it should be all right. Yeah, OK, you wrote it largely. Do you want to try to run it and see what happens? Right. OK, so: Building to the Drop, or how we learn to Algorave by hacking on FoxDot. I have two of my Algorave group here. I am sat in UTC Sheffield Olympic Legacy Park, which is a very, very long name for our computing specialist school in Sheffield. We... oh, OK. So it sounds like the sound has failed us, but never mind. So I'm Mr Eglinton, I play a stretch beat, and we have Jay here. We're using initials for the kids for privacy reasons. And Jay, do put the comment about silly numbers back in. Right, so we've been doing things with FoxDot and stuff here loosely for years, and I performed at the bunch at the end of last year. We actually did do Gibber in a session, and there isn't the usual chorus of boos here when Gibber gets mentioned. I don't know quite why it didn't work for them. But they asked for an electronic music club, and because that's what I know, and I don't know Ableton, that was what we went for. I could not do Ableton, so I didn't. So we started with FoxDot, and it was a tiny group to start with.
But over the summer, live music started again, and I wrote the first version of a thing called Solar Bars, which is basically a little tiny thing we added to FoxDot to replicate something that I saw at a rave. I went to see Class Compliant Audio Interfaces, which is Alex McLean, who is talking at some point over the two days. He collaborates with Sam, who does the drum stuff and has mixing-desk things, and does the classic DJ trick: Alex has got a tune going and Sam will just drop it for a bar or two and then put it back in. That is a classic trick, but you can't really do it in most of the live coding environments. So I added that, and then when we came back, we started doing FoxDot every week. We did try doing Strudel; Strudel is really good if you guys haven't played with it, it's another JavaScript-based one. It is really, really good, and we will be going back to it. But one of the students suggested that we take our Solar Bars thing, but have the drum go go, go, go, go, faster and faster, and then cut out at the end. So that came from a student, and we built that between us. And then he got clever and came up with the concept of the drop as the stop, where the next tune comes in. That took a lot of coding; in fact, we got it working this afternoon, because we've only been doing this as a group since September, and the younger members only joined two weeks ago. So we have been a bit against the clock. So that slide was what he specified it should look like, and that is actually what it does look like. So, like, you're adding an instrument and you're telling the instrument to set: we're going to add this space instrument S1 after we have added this drum going fast and building up to the drop, and when the drop finishes, the new instrument that comes in will be S1. You can give it a list and such like. So we did manage to get it to work. I do not know what the next slide is going to do.
No, fair enough. The algorithm we actually came up with involves being able to cope with how long a bar is, how many bars you want it to build over, and all sorts of other things, which ends up as about 40 lines of code, and logarithms; I haven't used a logarithm since I was their age. So between us we got there. S over here did a lot of the composition; what we came up with in the last twenty minutes was actually really quite good. Unfortunately, we can't get the sound to you here. So on our site, which I will send the links for, we will post this, we'll post the new code, and we will do a stream of it, probably tomorrow, possibly next week. So I'm going to ask them for a very brief bit of input. Folks, do you think that this way of doing it, where we're actually trying to change the tool to fit what we want, is a good idea? Yes. Yes. Is it a bit more stressful than just using the stuff as packaged? Yeah, it is. OK, I haven't looked at the chat, so, right. OK, so, oh, sorry, was that my time? But I think that is us done. The original schedule had us doing five minutes and then questions. Are we still OK for that? Yeah, we're good for some questions. If there are no questions in the chat to start off, I would like to start off with one, because you said you have a background in electronic music. And if we go back to what Anthony was saying about language, we know that whenever we talk about EDM, there's always the drop, right? So do you think that if there wasn't a drop, you would still be able to communicate what you're doing through to the students? So, well, there wasn't one, and we added it; we're trying to add what our concept of it is. So, I don't know. I think that we discussed what it was in terms of language and in terms of EDM, and came up with a way of coding it that, well, looks a little bit more tidy. Does that answer the question? Oh yeah, there was no right or wrong answer, I was just asking.
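Martin mentions that the real FoxDot version runs to about 40 lines and involves logarithms; that code isn't reproduced in the talk. As a hypothetical Python sketch of just the core idea, assuming a logarithmic curve for the speed-up (the group's actual formula may well differ), the build could look like:

```python
import math

def build_to_drop(bars, start_div=4, max_div=16):
    """Per-bar drum subdivision counts that accelerate toward the drop.

    A hypothetical sketch only, not the group's actual FoxDot code: a log
    curve maps bar 1..bars onto start_div..max_div, so the build feels
    gradual at first and frantic right before the cut."""
    divs = []
    for bar in range(1, bars + 1):
        frac = math.log(bar + 1) / math.log(bars + 1)   # 0 < frac <= 1
        divs.append(round(start_div + frac * (max_div - start_div)))
    return divs

def drop(next_instrument):
    # At the drop, the drums cut out and the queued instrument (e.g. S1)
    # takes over, like handing the tune back after the DJ pulls the beat.
    return {"drums": "silent", "now_playing": next_instrument}

curve = build_to_drop(bars=4)   # drum subdivisions over a 4-bar build
after = drop("s1")
```

The parameters mirror what Martin describes having to cope with: how long a bar is, and how many bars the build should run over.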
It's just interesting because, you know, so far in both of the presentations y'all did, there is that explicit and indirect notion of how language dictates how we flow, right? So it's just interesting that, no matter what, in both cases we're having a conversation where we're allowing our interpretation of language to flow. And then, as you were saying, you were able, along with the students, to define your own language, and then everyone found a way to utilize it in code. And as you were saying, you don't know Ableton. And then, you know, we have a question. Jack has a question, and I think this will be the last question: what about functions for anti-drops? Okay, somebody tell me what an anti-drop is; I don't actually know the answer to that one. So, a fake drop, Jack says. A fake drop, like, you know how some songs have a drop, but then they sort of tease you before the drop, and then they... So I think we could probably do it: we could actually set the solo length to zero, or we could set the target so it doesn't go quite as far. So there's a couple of ways we could do it with what we've got. Pardon? Oh yeah, we can also set the end to false, so it's not actually cutting out at the end. So there are a couple of things that get a little bit closer. I think what I would need to do is hear some, and this lot as well, hear some and decide what we thought was the best way to do it. But that's what we've so far been doing. Yeah. Thank you very much, Martin. And if you are available to stay to the end, we will have more Q&A. I did like that live coding session you just did with the kids for that fake drop; it was entertaining. And yeah, next we have Francesco. Are you ready? Yes. Okay. Okay. So hi, everyone. I'm Francesco Corvi. Maybe some of you from the live coding community know me as Nesso.
And today I wanted to talk to you about this ongoing project, live coding in middle school: how to improve music teaching with live coding. So to begin, let's talk a bit about the parties involved. The institution, INDIRE, selected a live coding trainer, which would be me, to support five music teachers. These teachers were already trying to use technology in their teaching activities, and these teachers, from five different schools, wanted to teach live coding to their students. The middle school range is from 11 to 14 years old, and they do two hours of music every week as part of the curriculum. These students in general didn't have any music or programming background. So the first part of the project was the training of the teachers. I made a sort of course; we met every week online for a few hours. The first part was an introduction to what live coding and Sonic Pi are. And then, some of the strategies I used: to introduce the programming concepts needed, I always tried to convert music concepts into programming concepts. For example, if I had to introduce what an array is, I would start by saying, okay, let's take, for example, a chord. A chord is a collection of notes. This is quite intuitive for somebody who already has a strong music background, and in this way the concepts were already contextualized in an application: how can we use an array? Then we did some guided exercises. The idea was that the teachers were feeling a bit the fear of the blank page, and wondering how to get better musical results. So the goal of these exercises was to put them on the right track without giving them too precise a goal: to basically guide them into implementing their own ideas from scratch, where the idea should not be just using a function or copying an exercise.
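Francesco's chord-as-array analogy can be made concrete. Sonic Pi itself is Ruby-based, so the snippet below is only a Python sketch of the teaching idea, with invented helper names:

```python
# A chord really is an array: the analogy described above, written out.
C_MAJOR = [60, 64, 67]        # MIDI notes C, E, G: the array IS the chord

def transpose(chord, semitones):
    # An operation on every element = an operation on the whole chord
    return [n + semitones for n in chord]

G_MAJOR = transpose(C_MAJOR, 7)   # shift the array up a fifth

# A loop is likewise both a programming and a musical idea:
four_bars = C_MAJOR * 4           # repeating the array = looping the pattern
```

The point is that each programming concept arrives already attached to a musical use, rather than as an abstraction to be motivated later.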
Also, in each of these sessions, I was telling the teachers: you don't have to bring working code, that's not important, even if it's full of errors, but try to bring an idea that connects your musical background with this more computational way of thinking about music. The last parts of the training were mainly brainstorming sessions, which proved to be quite useful. Sometimes we were able to anticipate some problems that they would find in class, and in general the teachers all had different backgrounds, some more from jazz, some more from classical composition, so there was a very nice discussion on the possibilities of live coding in general and, of course, how to use it in class activities. So, some of the problems and possible solutions we found in class. Well, the main problem, I would say, is the lack of pre-existing knowledge in the kids. The kids have neither a musical background nor a programming background, and we are trying to teach them music with programming. This can become overwhelming, because music is already complex, and if you try to teach it with programming, which is also complicated, it can be counterproductive. But the point, I think, is that there is no need to really teach programming; the idea is more to give them Sonic Pi, which is the software we use, as a way of trying things out, mainly. And this, I think, is also connected to the second problem, which is questions outside the teachers' knowledge. The teachers were getting a lot of questions that fell outside their normal area of expertise. These teachers were quite brave in implementing Sonic Pi in their teaching without having years of experience. I think in the end this is not really a problem, mainly because it shifts the lesson from a more frontal, lecture-style way of teaching to a more laboratory-style way.
And actually, these kids are super creative, so they always came up with things: how can I do that? How can I do that? I think it's very good when the teacher tells the kid, okay, I don't know this, but we will figure it out together today. This is very empowering for the kids, because it teaches them that knowledge doesn't always come from other people; you can find your way through things. And the last point was very practical: kids use smartphones, not laptops. Most of the kids don't have laptops to practice on at home, but they all have smartphones. We used Sonic Pi for this course, but one approach could be to use a browser-based platform, which they could use on the smartphone, maybe connecting a keyboard to it. So, documentation from classes: mainly we tried to use Sonic Pi both as a performance and a compositional tool. Most of the kids this year were doing live coding by modifying chunks of code, changing some parameters, which to me, honestly, seems already like a super great achievement for one year. Some teachers used live coding more as an instrument, so they introduced the laptop as an instrument. But we also tried: how can live coding support the explanation of concepts? How can I use live coding to introduce some concept in general? We found out that sometimes the code is clearer than musical notation for the kids. To wrap it up: conclusions and future work. The next step for next year, which especially INDIRE is suggesting, is to involve math teachers and to analyze the work we did with Sonic Pi so far, the work done by the students and the examples we made, from a more mathematical point of view. So we could get into the more complex parts of the algorithms and try to find a way to explain them from a mathematical perspective.
So the idea is to transfer the more empirical knowledge built with these students from a musical setting into mathematics, and in general this could work as a more multidisciplinary application of code. The teachers are already thinking about how to use this kind of application with the arts. But in general, what we can say for sure is that if the computational part were shared between multiple disciplines, then it would be much easier for the kids to familiarize themselves with code, and then we could use programming in each discipline as a way to enhance the teaching of that discipline, instead of it being a sort of obstacle, of saying, oh, code is complicated. I don't know if that makes sense. The last part is curricular activities and external labs. Some teachers did external labs in their schools, and these two things can work very well together, because the students that are more interested join the external lab and can then help the teacher in the curricular activity by helping the students that are less proficient in coding. So if we design curricular activities and external labs to support each other, this is a very good way of getting things done. Also, what I want to point out is that with this project we wanted to introduce this as a curricular activity, because I think it's very important to give all the kids this opportunity, not just the ones that can follow the external lab. Okay, so thank you very much. Here is a list of all of the teachers and all the people involved in the project. So thank you. That's it. And maybe if there are some questions, we can go with those. Thank you very much for that presentation. And if there aren't any, I will ask a question, because I always have questions to ask. How did you find re-teaching teachers, right? Because we know teaching does have preset rules, right? Like, you learn to teach. So what would you say some of those teachers really had to unlearn?
Well, I think that they didn't have to unlearn a lot. Maybe because these teachers wanted to be involved in this in the first place, so they were really looking forward to it, let's say. I think that actually the fact that they had very solid musical skills was very helpful, because of this concept of trying to translate those into code. So I could start from the knowledge they already had and find connections, like how you convert a musical idea into code. I think that worked very well. The explanations of the programming concepts were not comprehensive; it was more like, how can you take, for example, I don't know, some idea such as a chord or a loop? All these things have a meaning from a computational but also from a musical point of view. And I think, Anthony, you had a question, if I'm not mistaken, that you may ask. Yeah, actually you were addressing a lot of it with your question, Kofi, about training teachers, because I did that for a long time: training teachers how to do creative coding in graphics. We didn't do live coding, but we did some web audio and JavaScript. Did you find, Francesco, that the teachers were able to find analogies to explain the concepts to themselves and also to their students? To look back at what they teach, maybe a different subject that's not as art-driven, and say, okay, this is similar to something in the real world, or something we do in another class around the physical sciences? And if so, I'm curious what were some of the common ways that teachers tried to make sense of something that was new to them and to explain it to their students. So I think that, yeah, if we take live coding music, it's a multidisciplinary thing: there is a lot of math and logic involved.
And the point is that the music teacher can explain to the kids the math part, but of course it would make much more sense for a math teacher to do that. And also, the goal of these lessons is to teach music, not to teach programming; that is the idea. So I think the main thing is that if this were introduced across all the different subjects, then this would kind of solve itself. But I don't know if I answered exactly what your question was. If it was... okay. Thank you, Francesco. And if you stick around, we shall dive more into this multidisciplinary type of teaching for live coding. And next we have Sarah. Sarah, if you are ready, you can go. Hi. Hello, everyone. My name is Sarah Bouchard. I'm located in Richmond, Virginia. I'm going to switch gears a little bit, because while I am a teacher at VCU here, along with Kate, my colleague, and I am going to be teaching a collaborative sound performance class next semester, I am actually going to talk more about my own practice as an artist and show you some examples of live coding. I am an artist working in sound performance and installation. In my practice, I collaborate with the environment. I give voice to inanimate objects like rocks, sticks, and leaves, which, like bodies, act upon their surroundings and contain histories. I animate these bodies with my own body: by manipulating objects, walking through spaces, and sometimes by singing. So I originally started experimenting with Sonic Pi... I forgot to mention, the title of this is Embodying Landscape in Sonic Pi. I originally started experimenting with Sonic Pi as a way of incorporating environmental data in my work, which, if we have time, maybe I can share a little bit of. But as I got more adept at coding and improvising, I was impressed by how quickly I could build layered, spatialized compositions with my own field recordings and samples within Sonic Pi.
This way of working is totally new for my practice; it feels akin to sketching or painting, and helps me build an immersive virtual space that lives in parallel to the site the sounds come from. So I am going to open up Sonic Pi. I'll keep the explanation brief so that I can spend some time doing some live coding and give you all a live performance. Here's my method. Step one: record samples at a specific site. The example I'm going to perform is from a local park, Bryan Park. This was for a recent exhibition; it wasn't a live performance, it was a sound installation in quadraphonic format. But this whole Sonic Pi improvising method has been folded into all of my work. So, record samples at a specific site, and don't forget to clean up your samples in a standard DAW. Step two: we program Sonic Pi to grab random slices from these samples. And then we play, varying the rate, volume, panning, and time between samples. So here's our code. I'm pulling in a handful of recordings of some of our natural objects, leaves and sticks. I've got them right here, actually; I brought them to my studio from Bryan Park and manipulated them in recordings. Some of these are water I recorded with a hydrophone in the park, and some are metal creaking sounds that come from a birdhouse structure in the park with a protective metal guard around it. So, my live loops. I basically have a structure that I've started to reuse over and over. It's taking a random starting point for the slice. Select a random starting point of the slice: that's s. We're calculating a finish point. So what this is doing is collecting a random slice from my sample and varying the start and finish points. And actually sometimes I vary the... oh, no, that's a little bit later. And then I'm varying the amount of time between the samples. With all this randomization, we're going to get a more naturalistic sound.
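The random-slice structure Sarah describes can be sketched in plain Ruby. This is a guess at the shape of the logic, not her actual code; in Sonic Pi, the returned values would be passed to `sample` as its `start:` and `finish:` opts (both normalized to the 0.0..1.0 range) inside a `live_loop`:

```ruby
# Pick a random slice of a sample: a random start point, and a
# finish point a short, random distance after it. Both values are
# normalized positions within the sample (0.0 = start, 1.0 = end),
# matching Sonic Pi's start:/finish: opts for `sample`.
# (Hypothetical helper, not from the performance shown.)
def random_slice(max_len: 0.2, rng: Random.new)
  s = rng.rand(0.0..(1.0 - max_len))   # random start point
  f = s + rng.rand(0.01..max_len)      # finish shortly after start
  [s, f]
end

s, f = random_slice
# s and f both fall in 0.0..1.0, with f > s
```

In the performance, the same pattern is repeated with randomized rate, volume, panning, reverb, and sleep time between slices, which is where the naturalistic, rain-like texture comes from.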
And then within each iteration, we're varying the rate a little so that the pitch is a little different. We're varying the volume, and we're varying the panning, which is essential to getting this immersive space. I also have a little bit of reverb on it, to give it a sense of space. The reverb is being randomized too, so that some sounds sit a little closer to your ear and some a little farther away. So let's see. I think what I'll do is first just give a little sample, and then I'm going to launch into a very, very short performance. Okay, so... this is the sound of some sticks. And if I make it louder and a little bit faster, then it starts to sound a little more like rain. Before I start my little performance, I just want to mention that another strategy I use is to repeat each slice a certain number of times. I have it set to a random number between one and five. And that can help... it gives further depth if you have a sample that has very different parts to it. Some of them are shaking the leaves, and some are crunching the leaves. So if I want my overall sound to have a nice amount of time where we're hearing the leaves shaking, and then maybe transitioning to a little bit of crunch, that's where the repeats become useful. Okay, I'm going to just play for a couple of minutes, and then we'll probably be out of time, or maybe have a couple of questions. All right, great. And I think that's it. That was perfect timing, and I did like the flow of that. Questions will come after, but that was a great performance. And with that, we move next to Roxanne. Roxanne, if you're ready, let me know, and you can start whenever you want. Can you guys hear me? Yes, we can. Okay, cool. Hi, everyone. My name is Roxanne Harris, also known as Rox, and I've been live coding for about a year now.
So I saw this conference and wanted to sign up and talk a little more about my process. The title of this segment is What's Your Process? An Exploration of Approaching Methodology for Live Coding with Sonic Pi. And to lead into that, I thought the best thing for me to do was to show a little of what my journey has been so far, and then demonstrate where I'm at right now in a little mini demo performance. So let me get into sharing my screen. Cool. So, I come from this background: I'm a recent undergraduate from Yale University with a bachelor's in computer science and music, where I focused a lot on art tech, or technology-assisted art practice. I only found live coding in my last year of college. As I was thinking about what to do for a senior thesis, I kind of stumbled upon Sonic Pi. And for the culmination of my senior thesis, I made a series of web pages talking about my experiences with programming, my initial forays into it, as I recognized in the moment that live coding might change my life, and it was important to document this early process. So I wrote an abstract and had a lot of documents showing the different media that I produced over the course of that early time, including all the code that I produced, as well as audio and video produced over the semester, and a compilation of a lot of different archival material. One of my two favorite sections, which I'll show, is from when I first started trying to learn live coding. I didn't really know where to start, like, what exactly is it? I ended up looking up a lot of live coding videos and made these play-by-plays. It's kind of funny when I look back at it, but these are play-by-play notes about what I was perceiving in real time. What are the things I'm noticing that the performer is doing, and how am I absorbing this information?
So this is one of my favorite sections to look back on. I don't do this as much anymore, but I certainly would love to get back into it. My other favorite part was the personal entry writing I ended up doing: a bunch of little paragraphs about my different purposes and intentions. What do I want to be doing in my learning process? Where am I coming from? As a jazz instrumentalist, an alto saxophonist, I have a deep understanding of music, but how does that go into my practice? I never really found my comfort in DAWs, because I couldn't improvise and think on the fly the way I do with live coding, and the way that's re-enabled me to have agency in improvisational music practice. So I wrote a little bit about digital audio workstations, defining live coding, and all these kinds of things. All of these things are available at my... Part of the deliverable was making a repo of code as well, because it was for computer science and music. So I made a repo with a custom function library, pretty much to extend the built-in functionality of Sonic Pi, and I included a link to the thesis in that GitHub. The first iteration of this library ended up having a lot of different sections based on what I mostly did with each of these functions. For example, generate was a huge one, because I was thinking about all the different ways I can generate different kinds of data or material, and in what different kinds of ways. So: generating a list using stepwise motion; or, if I want to do microtonal stuff, generating a list by how you divide the octave. Just trying to encapsulate all these ideas into their own functionalities and abstractions, so that I can more easily access them when I'm live coding on the fly.
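Such "generate" helpers might look something like the following plain-Ruby sketch; the names and signatures are hypothetical, not taken from Roxanne's library:

```ruby
# Generate a list by stepwise motion: start from a note, then
# repeatedly apply the given step sizes (in semitones), cycling
# through them until the list reaches `count` notes.
def generate_stepwise(start, steps, count)
  (1...count).inject([start]) do |acc, i|
    acc << acc.last + steps[(i - 1) % steps.length]
  end
end

# Divide the octave into n equal parts, returned as cent offsets,
# for microtonal experiments (n = 12 gives the usual semitone grid).
def divide_octave(n)
  (0...n).map { |i| 1200.0 * i / n }
end

generate_stepwise(60, [2, 2, 1], 4)  # => [60, 62, 64, 65]
divide_octave(4)                     # => [0.0, 300.0, 600.0, 900.0]
```

The payoff of wrapping these small ideas as named functions is exactly what is described here: when improvising, a single call stands in for a whole generative idea.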
The current... so this was the initial, or actually one derivation from the initial live coding that I ended up doing. And oh yeah, one more thing to show. What I've been doing a lot is having the experiential practice of just live streaming. Most times that I'm live streaming, no one's been watching, and I haven't really had the intention for anyone to watch; I'm not marketing it or announcing, oh, I'm doing this thing. Honestly, live streaming onto YouTube was easier than recording to my computer, because I didn't have to keep hours of video on my computer; I can just go straight to YouTube and save. So it was all for convenience, but it's interesting that, even without any kind of metadata, they've started to rack up views. People are watching them and deriving what they can from these practice session videos, just because I'm being vulnerable about where I'm at in my process. And I've had the privilege of being able to trace back my process through these practice session videos: what was I thinking about at the time? Each one is a snapshot in time. And even in the Sonic Pi in_thread community, people have been taking some of the code, refracting it, making their own compositions, and learning from it. So that's been a privilege to experience. The current version of the library that I'm using right now hasn't been released yet; I hope to put it on GitHub soon, but I just got YARD for Ruby. Sonic Pi has also re-inspired me to get into Ruby and more advanced topics in Ruby programming, including building your own custom libraries and how to organize your code properly. Those conventions really help, and translate into live coding practice.
But one thing I'll just highlight (I'm not going to do it in performance right now) is that I wanted a way to have a chord progression as a string, the way that you have a chord sheet in jazz. I'll just write it down here. I wanted to be able to take a string, something like that, and then that string runs through the functions and gives me all the information I need: the chords, the chord functions based on the target chord, the tonic would be C, things like that. Just building on my own knowledge and building these personal tools. And these notes, and my attempts and thoughts towards documentation, are not only helpful for me in terms of organizing my thoughts; it's also really enriching to share these things, my practice and what's going on, and have people play around with them. So, the function that I'll use right now and demonstrate, if I can find it... oh, it's here. Arrange is one of the newer functions that I've made. It's been a journey with Sonic Pi in particular, because with Sonic Pi you have live loops, and even if you are within a live loop that runs for, say, 16 beats, any changes that you make to the code and run won't be reflected until that live loop re-triggers itself. So the distance between the conception of the thought in my head and the actual execution of it and reflection in the sound can be very long. That can really help in some cases, like if you're trying to DJ, or trying to build on a sample that has a context in which it really helps. But my personal performance process, what I really want in performability, is basically to have live coding be analogous to gaming.
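The chord-sheet string idea could be sketched like this in plain Ruby (a hypothetical parser, not Roxanne's actual function):

```ruby
# Parse a jazz-style chord sheet string, e.g. "Cmaj7 | Dm7 | G7",
# into root + quality pairs that a player function could then turn
# into notes. (Hypothetical sketch; note names are limited to the
# naturals here for brevity.)
NOTE_NUMS = { "C" => 60, "D" => 62, "E" => 64, "F" => 65,
              "G" => 67, "A" => 69, "B" => 71 }

def parse_chord_sheet(sheet)
  sheet.split("|").map(&:strip).map do |sym|
    root    = sym[0]        # first character names the root
    quality = sym[1..]      # the rest, e.g. "maj7", "m7", "7"
    { root: root, midi: NOTE_NUMS.fetch(root), quality: quality }
  end
end

parse_chord_sheet("Cmaj7 | Dm7 | G7")
# a list of three hashes, one per chord symbol
```

A real version would also handle accidentals and map each quality to intervals, but even this small shape shows the appeal: one familiar string drives the whole progression.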
I'm a big gamer as well; I grew up doing that, and having that kind of adrenaline, that fast, reactionary, sensitive system, is something that's really appealing to me. So I've been working with live loops that only cycle through a quarter of a beat each time, with a unified live loop time set. There are more things that get into that, but these are all ideas that I'd love to talk about. I'm glad there's so much Sonic Pi representation, because I'd love to keep talking about it. But now I see I only have a minute left, so I'll talk briefly and then play a little bit. This system in particular utilizes Euclidean rhythms, and then, based on where you are in that rhythm, you can do certain conditional things that would be a lot harder to do in a DAW. So I'm just going to demonstrate, and then I'll take questions. Thank you very much, Roxanne. And I'm just going to say what the chat has been saying (I'm not pushing this): the chat is saying they do want to see a Sarah and Rox collab. Just putting it out there. And now, since everyone has presented, we are open for questions. If you have a question, you can write it in the chat, and I'm going to start it off, I guess, as per usual. Even though everyone had a different sense of education, because that was the main theme, there was one consistent theme present in everyone's talk: this notion of documentation and archiving. All of y'all did it in different ways, but it also seems that everyone said it's something that you can do with live coding versus a DAW. So, do you think that documentation helps a learner achieve agency within live coding, or agency within music creation? That is open for everyone. I was going to say that, yeah, I think it does. I think it definitely helps performers explore agency in making music with a computer, because a lot of
times, I think, if you're doing the kind of layer-by-layer creation in a DAW, your piece lives on in the saved DAW project, and you may go back and say, okay, I see the edits I made, I see the automation curves I drew, I have changes over time. But you're still mostly seeing the result, the visuals of the instructions, versus the explicit conversation, or dialogue, you had with your computer in that performance sense. I think live coding allows you to look back on what you made and wrote and see it as a conversation: that's how I spoke with my computer; here are the things I needed to be direct about, or the things I could be a little more flexible on; here's where I asked for some input from the computer, with randomness and things like that. And I think it can be really helpful to see that. Also, maybe one thing... oh yeah, you can go. I'm thinking about how we can write documentation, so providing information, but at the same time avoiding biases in the documentation. So, I'm not pushing you to do things in a certain way; I'm just giving you the information for you to find your own way of using it. I think that's something which is actually very hard; I think it's the main hard thing when you have to write documentation. Yeah, I agree, and in my experience with documentation, facing that difficulty, sometimes the answer is just being honest: look, this is just my practice, this is where I'm coming from, so derive what you can from that. Just having that transparency that this is not dogma, this is not the way to do it; this is just supposed to be an inspirational text. And in terms of DAWs, with so many ways to communicate across programs, like MIDI, I've done a lot of performances where I've used Sonic Pi and Ableton as a
setup where Ableton is the sound generation software and Sonic Pi is essentially just a sequencer. So you can still enable things to be collaborative, or, if someone is a dead-set Ableton user, I've gotten them to use Sonic Pi because, look, you don't have to abandon your whole practice, where you're coming from; you can just add this to your palette. I'm thinking about how that question relates to my practice. Well, of course, with live coding, as opposed to a traditional DAW, it's improvisation, and when we're talking about documenting something, it's a documentation that's not frozen in time. So with me, when I'm having a conversation with these materials or places, it feels more like they're talking, because it's not always the same. I used to work primarily in Audition, and I'd put my samples in, and here's my composition from A to B. But who's to say that's what the leaves are saying? I like this way of working, where it's constantly changing, like a natural occurrence. We can hear you. Okay, you can hear me now? Yes, yes, a little audio issue. To piggyback on what you were saying, Sarah, and I'm going to ask you this first and then everyone else can chime in: one of the things that I'm also hearing is that with live coding you can listen and then react, if I'm not mistaken. You can listen to your sample, you can listen to your code. Going back even to what Anthony was saying about that conversational piece, right? Is that what you're saying? Because I saw your setup, I saw how you set up your code, and you name stuff like HLCI, and then did you listen to all of it, or even go back? I'm also going to... I saw Martin just came back on, so even you, Martin: it seemed like your documentation was a very collaborative thing, right? Because you don't document just for yourself, if I'm not mistaken. When I asked you that question, you actually
asked the kids in front of us, like, what do you think? And are these some of the things that you think should be highlighted more in terms of how we approach live coding? Okay, can you repeat the first part? So I was saying: you know how you were saying that when you're in a DAW it's not conversational; you place the piece, you have full control. But it seems that, in how you were setting up your Sonic Pi code, it's like, no, I'm going to let the recordings speak for themselves, and then I'm going to react. Okay, got you. Thank you for jogging my memory; I have mom brain. Yes, totally. I mean, one of the exciting things about using Sonic Pi is the surprise factor. I have these samples, and I can very quickly create something that sounds totally unlike the individual sample. So I often talk about it as feeling like painting with a big brush, a bigger brush than I would have had. And, Martin? A brush that has emergent characteristics; it changes shape as you're using it. So we actually do most of our stuff in Troop, in a collaborative space, so not only do you not quite know what the tool is going to do, you certainly never know what some of the kids are going to decide to do inside the same session. As for the fact that we're recording, I don't think I necessarily emphasized that. We have only just started, but we are doing the practice-in-public thing, so, like Roxanne, we are streaming our practices, and so far half of them have failed completely, but that's what we're trying to do: record what we're doing. But then, of course, you're talking about language and documentation of the changes we make to the code, and the code that we're playing. How do I put this? It's very collaborative (my light level is going up and down really weirdly). I'm trying to draw out what they want it to do, and work out how to make it so it can do that, even if it's not built in so far. So
for our documentation, we've got issues, and we do test-driven development as well. For that pattern thing, we actually have a whole bunch of unit tests about what that drop should sound like: if we say it should run for so many bars, or whatever, so that you can imagine what it's going to be. You're going to get surprised sometimes, especially if you decide that your bar is seven and a half beats long or something. So yeah, we're writing down quite a lot of stuff. We have only just started, but that's the way we're doing it, especially as we've got some kids, one of whom was ill today (all of them, apparently): one who is very, very good at the actual performance stuff; a bunch who are concentrating on the performance but are beginners; and one who is a really, really good coder but doesn't actually much enjoy the performance, and really enjoys being able to make the tool for somebody else. So we need to communicate a lot. We were generating tests and then letting him play the tests, to then see whether it did what he wanted, so we got there eventually. A lot of documentation, especially on that level. And then, we have time for one more question, and I wanted to ask the question from Hermannix. I see, Roxanne, you already answered it in the chat, but I will ask everyone else as well: in working with pre-coded functions and libraries, how do you decide what parameters you will modify or evolve during the performance? For me to answer that question: since I'm working with a group of students, or even if it's just one student, we'll kind of explore first as if it were any other instrument. If it's something that's somewhat similar, like a synthesizer (maybe we've used synthesizers in previous lessons, or they've used synthesizers in a DAW but haven't actually live coded one before), we'll say, okay, what's similar? What do we know about this instrument? If we look at the parameters, are these universal things, like filter cutoff?
Is it going to work similarly here as in other places? And if there's anything unique... we were just talking a lot about documentation, and yeah, if a library or an add-on component has great documentation, then it's easy for us to say, okay, let's dive in together, like Francesco was saying: let's work together, teacher and student, so we can both understand, and then we can clarify to each other. And exploring what you can do on other instruments that you can bring over to this one is, I think, a good technique to start with. I feel like that is the question in the creative process for live coding performance. For myself, there's a lot of intuition and just trial and error. The piece I showed today is a little different, because a lot of it is randomized, but yeah, I don't know, just sort of trusting your creative gut, I guess. And Francesco, do you have anything to add? And Roxanne? Well, I think these two questions we had are kind of correlated for me. One point is that every time I have to introduce a function to a student, well, I can tell them, oh, this is how the loop works, but at the same time I'm implicitly telling them this is the way you should use it. Sometimes, I think, especially with young kids, you can leave them without a very complete explanation, and then they will probably find a way which is not actually the way this was meant to be used. But if I think more generally about live coding: compared to a DAW, with live coding you can go more in depth and think more programmatically, but even in live coding, the language and all these things are influencing our way of thinking and doing music. So, especially for young kids, you can let them be and experiment, and then just afterwards tell them, okay, what you did is great; now let's take a look at other ways you can use it. But it's not that your way is wrong and the
way that is written in the documentation is right. So, to come back to the question of how you explore the parameters: I think it really depends. If you're already a very experienced person, then you might look into a specific function because you know you need it, so you approach it like, okay, let's read the documentation and find what I need. On the other hand, when you're beginning, maybe you just don't care about what it does; you will just find it and test it with a more heuristic approach, like, let's see how it sounds. In the end, it's a combination of the two. Yeah, I feel so grateful to be here talking about all this with you; I think everyone said what I could say. I guess a personal addition: I have a friend who is an Ableton head, loves breakbeat stuff, and he was really hesitant to get into Sonic Pi. But once I showed him the specific parameters, the specific way to accomplish what he wanted to do (because I already had the experience of knowing that, and I just handed it to him), it jump-started his process. Just trying to guide him, and other people I've brought to Sonic Pi, to the place where they feel they can play around and spend time in there for hours; really hearing that feedback of you doing an action and then getting a really good sound out of it is really cool. And I don't remember if I mentioned this, but for the first couple of months, like I said, I didn't know anything about live coding: how do you even go about trying to structure a performance, what does it mean to be doing that, and the aesthetics of it all. Watching a lot of videos helped, and so did just creating them, no matter what kind of content they are. As for me, I built up being virtuosic with Sonic Pi by limiting myself, by giving myself some constraints: only built-in functionality within Sonic Pi
for the first couple of months, as well as built-in samples, which are really nice in Sonic Pi. And then, if I didn't know how to get a sound, or it wasn't just built in, I'd do small sound design things: tweaking parameters, reversing the rate on samples that are built in, and just building things from there. How much can you get within those limitations? I ended up being able to explore so many possibilities within that, and then I finally allowed myself to start bringing in external samples and other things. So that's one method of going about practicing and determining what to do: you just have to spend time practicing. It's knowledge, I feel, that you can only get in live coding through emergent practice, or things that emerge through practice, exactly. And that reminds me, just quickly, because I also started using Sonic Pi almost a year ago: I would just keep asking questions. I would do the tutorial, and then my brain would come up with, okay, well, what if I wanted to start with this sound going really slowly and then gradually get faster? How would I do that? And then I'd look online, at forums and stuff. So yeah, exactly: just practicing, experimenting, asking questions, and trying to find the answers. Thank you very much, everybody, for talking. We do have to wrap this up, but it was a great conversation, and I hope everyone who listened now has a way to bring a new style into their live coding practice, or has seen how documenting your work has multiple positives. So thank you very much to everyone listening. We have day two tomorrow, where we have voice and embodiment, so we're really leaving the computer screen and using our own movements and whatnot. So thank you, everybody, for being here, and I hope you enjoyed it. Thank you.