Welcome to the last paper session of the conference. After this session, there will be two more performances and then a panel, where we can probably share all the new insights we got during the conference. This session also features two projects from São Paulo, so we're going across the Atlantic to the southern hemisphere. The first paper is presented by Antonio Goulart, who is at the Sonology research centre in São Paulo. He's interested in synthesis techniques, but he's also live coding, playing the computer as part of a free improvisation orchestra.

Thank you very much. Thank you. Yeah, it's very nice to be here, thanks a lot for the opportunity. And it's interesting: I never had another Antonio as a classmate, and right now I have another Antonio at the same laboratory as me, and we are presenting in the same session. See, it's funny. And usually the last session is the big one, so I'm not sure why I'm here, but I'll try to tell you a good story.

So we have this free improvisation orchestra in São Paulo, at the music department. Four years ago I started playing in this orchestra. I was playing the guitar at the beginning, but then I started studying sound synthesis, and I switched to the computer to try new sounds in this free improvisation orchestra. Miguel is the bass player in the orchestra. We've reflected a bit on this process of live coding, and that's what I'm talking about today. And yeah, of course, everyone here knows about the first line, and maybe the last line, I think. We've been talking a lot about free improvisation at this conference this week; it's interesting to see that. I'll just talk a bit more about it today, just to be sure we are on the same page regarding free improvisation.

So here I'm thinking about the computer as a musical instrument like any other, or, if you wish, live coding as a standalone instrument. I avoid using controllers or motion capture, things like we've heard about today; I prefer to use only the code, typed in the terminal. I'm also not processing the acoustic players' instruments; everybody asks me about that, so I want to be clear that it's standalone. Also, I'm not networked; of course, there are no other computers. So it's not the same as in a laptop orchestra, where the group exchanges some kind of information, at least a clock or some such. And I'm also not using machine listening to sync with the musicians. Although I could do it, I think it would be a bit too much for me; maybe Nick Collins can do it, but I can't.

So, about the orchestra itself, I'll just quickly show. These are the members: trombone, flute, myself, saxophone, Miguel the bass player, clarinet, piano, baritone. And this last guy here plays everyday objects and also uses his voice. And earlier we had someone talking about the Italian guy using the microphone on everyday objects; I'll send the reference to him, thanks. So all these acoustic musicians have had lots of instrumental training and ear training; they are very good musicians. But they had no previous knowledge of electronic music techniques until two or three years ago, and as for live coding, they really hadn't heard of it and had no clue what it was. So yeah, in an orchestra scenario, especially a free improvisation orchestra, it's very important to know how to interact effectively.
You need good interaction for it to work. So in this orchestra we are very interested in thinking about how to interact better. We noticed some problems in our interactions, and we started to think about why. So now I'll talk a bit about free improvisation itself.

The first thing is that it's a collective and collaborative practice, of course. Everyone is working on the same event, which is doing the improvisation together, and this is very important: it requires benevolent behavior. By benevolent behavior, what I mean is that you don't try to get all the attention, so you avoid polarizing actions. Especially in free improvisation, we try to be non-idiomatic, as the expression was put by Derek Bailey. If you do some polarizing action, you break up the layers of sound of the improvisation, the non-idiomatic layers, and bring all the attention to this polarizing action, so you avoid doing that. We also use extended techniques, to maybe take the instruments to their limits and really try to overcome all the idioms and musical languages.

Another very important thing in free improvisation is that previous information about the other performers is not a requirement, and that's because the interaction should be based on the sounds the musicians create in the actual performance; the sounds alone should be enough for effective interaction in free improvisation. For example, the bass player should interact with the drummer without needing to stare at him. This might be very obvious, but it's one of the premises of free improvisation: only the sound is enough. And as a corollary, knowledge of each other's instruments should also not be a prerequisite; it'll become clear why I'm talking about this. As another example, the bass player should be able to interact with the drummer without knowing anything about drumsticks or drum skins or anything like that. However, yeah, we all know something about this stuff, and we take that into account when we are jamming. We should only focus on sound, maybe with this Schaefferian reduced listening thing, or what Derek Bailey calls playing without memory. He says that the ideal free improviser is one who has no memory at all, so completely free in the sense of not relating to anything that came before.

Okay, now a bit about the instruments themselves. With the acoustic instruments, of course, you have immediacy: between any gesture, anything you do with the body on the instrument, and the sound, it's immediate. And of course, with live coding there's a lag, both from thinking about the algorithm and the code itself, and from writing it. But with great lag come great opportunities. We program sounds for the future, and we're not limited to the acoustic timbre of a specific instrument; we can make any sound. So this should be taken into account in this free improvisation scenario, of course.

Now, the projection itself. We all know about this show-us-your-screens thing, the manifesto and so on, but at first we didn't use projection in the orchestra. Oh, by the way, it's called Orquestra Errante, which comes from errar, the Portuguese word for to err, to wander. Our audience is usually really not used to anything that's not an acoustic instrument.
So projecting the screen in this orchestra would, in our opinion, pull attention away from the group performance. It wouldn't be a problem in itself, but it would pull attention away from the music; the big risk is paying too much attention to the code, or to the visuals, and forgetting about the music. That's happened before. So we didn't project at first, but then we thought that maybe that was one of the reasons our interaction was not good enough. In one of our weekly meetings I projected a live recording and talked a bit about it, but before getting to that, I'll just follow the script here.

We noticed that it was more natural for the bass player to jam with the drummer than with me; of course, by bass player I mean the whole orchestra. What happened was that the live coded sounds would end up as background layers in the improvised music we were doing. Among the acoustic instruments you could notice lots of conversation between the musicians, but my sounds would behave as a set of background layers, not interacting; I wasn't influencing anyone, and I wasn't fast enough to follow someone's musical intentions or leads, I don't know. One problem might have been musician's anxiety, worrying about immediacy, but maybe another was their not knowing what happens inside the computer when I'm there.

So yeah, in one of our meetings, rehearsals, I projected... and how much time do I have left? Ten minutes? Okay, thank you very much. So yeah, I projected and spent some hours talking to them about the whole process of live coding: showing them how it works, that it's actually writing code, not using an interface that gives immediacy, and that it takes some time to program a musical intention, but once you have some code, with a few variations you can get huge outcomes. And yeah, instantly the interaction, the jam, was better. A little comment: they are very smart people when you have their attention. We also tried it in some of our concerts, our presentations, and it didn't divert that much attention, so from there we kept the projection.

I mean, that was the whole history very fast, but of course it was a process of talking to them and showing them live coding. They are all doing master's degrees or PhDs in the music department, so they are studying contemporary music, and at least half of them are into electronic music studies and techniques. Three of them (I can show you a video later) are extending their instruments into hybrid instruments with electronic processing and live electronics. So by now I think they have much more knowledge of how to incorporate electronic tools into their instruments.

So, some questions about that. Should we break this presupposition of free improvisation, that interaction happens only through sound, and accept that everyone needs an idea of the nature of each other's instruments? Because only after I showed them what I was doing could we effectively interact; by the time they understood what was happening, the interaction really started to happen.
They understood that they had to give me some time to follow them, or to prepare a layer for me to put my sounds into, and then we talked better, something like that. So this is one of the things I was thinking about. Another: should acoustic musicians learn coding to become better orchestra musicians? By orchestra, of course, I don't mean a symphonic orchestra or anything like that. Another question is how feasible it is to perform on an acoustic instrument in this free improvisation context, which is very abstract, while also exploring extended techniques, which are usually difficult to achieve on the instruments, and simultaneously read code from the live coder in your orchestra. Another: is it too much to expect that the acoustic musicians will read the code and understand where I'm going sonically, and make room for the new sound? And would that be a bridge to the future, like this code solfège idea of being able to read code and imagine the sounds, the way you can imagine the sound that will come out of a guitar if you play the third fret or something? But the pitfall is that they would be paying too much attention to my code, to the detriment of the other musicians' sounds. Maybe they could do it; it's not impossible. I know it's difficult to ask musicians to learn coding, but if they did, of course, they would be paying attention both to my code and to the sounds. But some of us already do live coding and know how to program, and I'm sure most of us here can play some instrument; you could try it, playing acoustic instruments with one or two live coders in the same group, trying to sight-read their code, and maybe try out this future interaction. And this is the most important question: after learning coding, would the musicians still want to play acoustic instruments, since live coding is so cool?

So yeah, that was the short history. Thank you very much. We have some performances; I mean, we don't record a lot, or film much, but we have some recordings of sessions on SoundCloud. This is my email, if you want to interact with me. Maybe I can answer some questions, and if I've still got two minutes, show one or two videos. I didn't show them here because time is short, but in the article I referenced some of our performances with comments, both good and bad, more bad than good. So those of you who are interested could really get the article; the videos and some of the references are in there. Thank you very much.

Oh yeah. Yeah, sorry, sorry. Are the other instrumentalists more from a contemporary music background, or researchers, or something like that? Because I've played with some very different music students... oh, just forget about it.

Yeah, they all did music degrees on their instruments at the university, and six of them are also jazz players; two of them play in a jazz orchestra in São Paulo. But no, not contemporary musicians. So it took a while for them to get a grasp of this electronic music stuff.

I think you're giving priority on the screen to the live coder's representation, right?
But what about the other musicians? Have you thought that they should have some screen space as well, so that people can understand what they're doing as well as what the live coders are doing?

Let me see if I understand: to show their actions on the instrument, and also...

Well, you might need something like an interpreter who creates some sort of textual representation; or if they're playing jazz, there's usually a chord progression, and you could show that. I don't know, maybe you're not doing it; I'm just trying to suggest something that represents what the other musicians, the acoustic musicians, are doing. You'd have to get it from somewhere, but...

I mean, in our orchestra, in the free improvisation context, we usually go for free improvisations, but we sometimes try scripts, so everyone has a specific role at a specific time, and we do that to practice how to interact.

But you don't show that to the audience, the script?

When we do that, we share it in the program, maybe, so the others know what's happening, but it's usually very abstract; that's so as not to lose the improvisational view and drift into a composition, or a guided composition.

So in some countries you have to have closed captioning on any public broadcast and so on, and it's usually done by an interpreter. Is there a way you can interpret for an audience, maybe even an audience that can't hear, what's going on, through some kind of representation? I guess this is a question for the whole community: what's going on in the free improvisation sounds? Even those of us who can hear don't know what's going on.

Yeah, I do agree. We're out of time, maybe. Well, I think we can take one more question as well. Let me just try to tell you what I think about that. I agree that in free improvisation sessions the sounds can go very abstract. What I most try to pay attention to when I watch free improvisation sessions is whether the musicians are interacting; no matter how abstract, on whatever alien level they are, I try to find some conversations and interaction going on. Less than 50% of the time am I able to find some interaction there. And I think that in Orquestra Errante, when we do our sessions, less than half of the time do we achieve a nice conversation. But I mean, it's an extreme practice; if language-based improvisation, if I can call it that, is risky, free improvisation is even more so. You're more likely not to get good results than to get a good environment. I don't know, we could talk more about that. Maybe. Thank you.

Was there a last one? One more short question. Yeah: with free improvisation musicians, we also had this problem of, do we project or not project? In the end we decided not to, so as not to draw attention to the code. But most of the audience members I spoke to after the performance gave the feedback that they didn't know what I was doing, and there was this real disparity between having some visual feedback from the acoustic musicians and not having any visual feedback from the laptop.

Yeah. In our first practice where I used the projector, all the musicians were like, what's going on? I never saw this. So yeah, there's this shock factor; it's really huge among the acoustic musicians, of course. And our audience maybe had a boosted shock factor.
But yeah, I think that's a good question. Maybe the size of the projection also matters; I mean, you don't need to blow it up across the whole stage, but maybe just have a little bit of projection, so you can pay attention to it if you want. Yeah.

Well, just to really finish: in the article I also wrote about the bets, if I can call them that, that we made in tackling this problem of the lack of interaction, of paying attention to my sound and what I was doing. I don't have time to talk about it here, but you should read it; it's fun, at least. And thanks a lot for that. Thank you.

We'll move on to the last paper of this session and of this conference, presented by Sang Won Lee, who got an extensive introduction this morning, and his collaborator, Antonio Deusany de Carvalho Junior, who is in South Carolina and is joining us remotely today.

Okay. Can you hear me, Antonio? Can you wave your hands to the audience? Cool. So... oh, this should go away. So, I brought you another piece of free software. This is a collaborative work between Antonio de Carvalho Junior, me, and my supervisor. Basically all the implementation was done by Antonio. The background is that he's a visiting student from the University of São Paulo, a computer science PhD student who does computer music, especially network music. He's the typical type of guy who measures the latency between continents to build networked telematic performances, and his specialty is in cloud computing. Before this he had no idea what live coding was; he came to the University of Michigan, and we thought we should do a remote live coding performance, or do something around this and write a paper, and this is the result. So, every time you see an awful white slide, that's something that I added to his slides.

So, you know I'm obsessed with this chart, and I'm trying to address the bottom-right part, where people are remotely located and collaboration happens in a synchronous fashion. Remote live coding is not new; there has been a good number of remote live coding performances, including one in 2013 in a doctoral seminar by Andrew Sorensen and Ben Swift. And at this particular conference we have seen two presentations regarding remote live coding: in Gibber, with its code and state sharing, and Extramuros, which we saw this morning and will see this evening as a performance.

So, this is something different. We built an Atom.io package called SuperCopair. It's a mix of pair programming plus live coding in SuperCollider, and it uses a cloud service. If you're not familiar with Atom.io: Atom is just an editor based on WebKit. You can freely download the editor, it has a lot of packages you can install, and any user can create a package to extend the existing editor. It already has two great packages: atom-pair, which basically lets you use the Atom editor like a Google Doc, so you can edit collaboratively at the same time; and a SuperCollider package, so you can run SuperCollider code in the Atom editor. What we did is merge the two packages. Well, it's not as easy as just merging them together, but functionally it's the mix of the two packages, and we use the cloud service pusher.com, which has a free plan.
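For a sense of what that cloud plumbing might look like, here is a minimal sketch using the pusher-js client; the channel name, event name, payload shape, and applyToBuffer helper are hypothetical stand-ins, not the actual SuperCopair source.

```typescript
// Minimal sketch of editor-to-editor messaging over Pusher (pusher-js).
// Channel/event names and the payload shape are made up for illustration.
import Pusher from "pusher-js";

const pusher = new Pusher("YOUR_APP_KEY", { cluster: "mt1" });

// Client-to-client events need a private channel, client events enabled
// on the Pusher app, and an auth endpoint configured for the key.
const session = pusher.subscribe("private-pairing-session");

// Stand-in for updating the local Atom text buffer with a remote edit.
function applyToBuffer(change: { offset: number; text: string }): void {
  console.log(`apply "${change.text}" at offset ${change.offset}`);
}

// Receive collaborators' keystrokes as pushed events.
session.bind("client-edit", applyToBuffer);

// Once subscribed, broadcast a local keystroke to everyone else.
session.bind("pusher:subscription_succeeded", () => {
  session.trigger("client-edit", { offset: 0, text: "s.boot;" });
});
```

One relevant Pusher behavior: a client event is delivered to the other subscribers but not echoed back to its sender, which maps neatly onto the "run excluding myself" command that comes up later in the talk.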
So, on the local machine you run the SuperCollider engine; you edit code in the Atom.io editor, which talks to the SuperCopair package, and SuperCopair talks to the cloud service, Pusher. This is a capture of the Atom.io editor with the package, and you can see two people are connected right now. Person number one selects this part, and in the other person's view you can see that the other person is selecting this portion, marked in red; and you can see exactly the opposite with the blue-marked section.

The cloud service, Pusher, has a free plan, so you can use it without paying anything. It uses push notifications, if that means anything to you; so it's push versus poll. Poll means you keep asking the server, "is there anything new? is there anything new?", and if there is anything new, you pull the data from the server. That keeps the local client machine busy, so I guess it's better to use push. Push means the server will push the information whenever there is new information.

There are limits: you can only have 20 devices per day on the free plan, and 100K messages per day. It's going to be one message per letter, so if you have 100 letters per line, you'll be able to code up to 1,000 lines per day (there's a back-of-envelope check of this below). But you can upgrade your plan, pay more, and raise your limits. It's also 10 messages per second per device, meaning that if you're not going to type more than 10 letters per second, I guess it's fine.

Antonio did a measurement between two cities: Ann Arbor, at the University of Michigan, and the University of São Paulo. The average round trip was 230 milliseconds. I guess that's not close to real time, but it's okay for running a remote live coding session, to exchange ideas or do a rehearsal with Skype on, because we are just sending small pieces of code plus the signal of code execution. By default you use the Pusher demo key, which means free, but you can pay more money and increase the limits of the cloud service.

This is the diagram of the experiment. He was running one end in Ann Arbor, a friend was helping him run the session, and the data center of the cloud service is located in Northern Virginia. The cool thing is that Northern Virginia is the default data center for the free plan, but if you upgrade, you can choose the data center. So if you move someplace else, you can; I'm sure they have more than 15 data centers, so it could be in Asia, or Norway, or Africa, I guess.

So I guess using a cloud service is nice for running this kind of pair programming: it's easy to set up, and you can probably scale up to crowd-scale networking. The most important thing is that it's serverless: you don't have to run your own server. The amount of configuration everyone needs is about the amount of configuration you need to create a Google document, I guess, or slightly less, because you need a Gmail account to create a Google document.

Unfortunately, we don't do any synchronization of state or clock. So I guess it has some pros and cons; it will not support certain kinds of music. But again, because we are running two local machines at the two ends, if you send all the code to one machine, all the code that has been evaluated on that local machine will be synchronized, so it will play in sync there; it just will not be synchronized with the other machine.
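The quota arithmetic mentioned above, spelled out; the 100-letters-per-line figure is the talk's own rough assumption:

```typescript
// Back-of-envelope check of the Pusher free-plan arithmetic from the talk.
const messagesPerDay = 100_000; // free plan: 100K messages per day
const lettersPerLine = 100;     // assumed average line length
// With roughly one message sent per letter typed:
const linesPerDay = messagesPerDay / lettersPerLine;
console.log(linesPerDay); // 1000 lines of code per day

// Separate rate limit: 10 messages per second per device, i.e. you would
// have to type faster than 10 characters per second to hit it.
```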
So there are three ways to run code. You can select the code and run it; that's the default SuperCollider Shift+Enter thing, which means you run it locally, so you run that specific code on your own machine only. Or you can choose to run it globally. The local option is good in that you can preview things before you submit them to the other machines, and then, once you're sure, you can run it globally. But I realized that if you run something locally and then run it again globally, we'd have two redundant states on my local machine. So we created another command that runs the code excluding myself: you run the command on the other machines but not on your own local machine. (There's a rough sketch of this dispatch below, after the demo.)

So we're going to do the demo. By the way, ironically, neither of us is a SuperCollider programmer; I've been learning SuperCollider since yesterday. Okay, so what I learned yesterday was this, and this... no, is it on? It's the one in the middle. Let's see... okay. How about we use the local speakers? Can you hear something? Okay, that would help, but let's do this.

So, how you install it is very easy: you go to the settings view and just search for supercopair and install it. And then, because it's not installed yet, I mean, you will see this blue thing. That's weird, because I tested it during the break, giving up my coffee. Okay, cool. If you want to create a new session, you look up "start a new pairing session", and then you get a key that you have to give to your collaborators. So I'll just copy this, and we're talking on Skype. Okay, you'll see your pair buddy has joined the session.

Okay, so he's typing some meaningless words. Okay, we know you're from Brazil. Okay, I have no idea what this is, but I'll try it. Okay, so I ran it locally. Since he probably wants to change something, I'll just create my own line and then... I don't know. Okay, so maybe, Antonio, can you run something on my machine so I can stop? Actually, can you run it again? Antonio, can you give me some more sophisticated code than this? I know you don't know how to code in SuperCollider; just give me the set that you have. So, I don't know what this is, okay? I'll run it globally, meaning that I'm running it here as well as on the machine in Brazil right now. And Antonio, can you stop the sound? Okay, so that's good. Do you have anything else? Okay, so by the way, all the SuperCollider users should annoy him so that it will get better. Okay, I have no idea what this is, okay? Probably sounds. What happens if I change something? So I can run globally, stop locally, stop globally, and so forth.

Okay, so I guess that's about it. All the menus are hidden in this package menu, so you can evaluate a selection, you can broadcast your evaluation, you can broadcast your evaluation including yourself, and so forth, or stop the sound locally, globally, or excluding yourself. So, let's get out of my room... okay, don't do it. So, the demo is done.

Actually, he tested it with SuperCollider users in five different cities at the same time: one in Michigan, one in California, and three cities in Brazil; São Paulo and, I don't know exactly how to pronounce them, Breva and Sierra, I guess. And then we realized that you can do a lot of malicious things with this: you could run a malicious algorithm and try to hurt your colleagues. So we added permission control. So let me... is this me? Okay, so if you go to the package settings and check this one... okay. Antonio, can you run anything?
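Those three run modes might be dispatched roughly like this; a sketch only, with hypothetical names (Mode, evalInSuperCollider, broadcast) rather than the package's actual API.

```typescript
// Hypothetical dispatch for the three evaluation modes described above.
type Mode = "local" | "global" | "othersOnly";

declare function evalInSuperCollider(code: string): void; // local sclang eval
declare function broadcast(code: string): void; // push to peers via Pusher

function runSelection(code: string, mode: Mode): void {
  switch (mode) {
    case "local": // preview on this machine before sharing
      evalInSuperCollider(code);
      break;
    case "global": // run here and on every connected machine
      evalInSuperCollider(code);
      broadcast(code);
      break;
    case "othersOnly": // the code already ran here locally, so send it
      broadcast(code);  // to the others only, avoiding redundant state
      break;
  }
}
```

The stop commands in the package menu would presumably follow the same local, global, or excluding-yourself pattern.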
Okay, so whenever he asks to run anything, it will ask me whether I'm going to allow it or not. If I say okay, it will run it. So it's a very preliminary thing, but it somehow needs some kind of permission control. So I'm going to ignore him. And then he witnessed a lot of people joining the session; they talked to each other and taught each other and tried to collaboratively fix bugs, which is good. So we actually see the effects of pair programming, and we think this is good for the kind of rehearsal process where people are remotely located and you don't have to get together in the same place.

So there's no server; you don't need any separate computer for this thing. You just need to download the Atom editor and install the package, and by default it will use the demo key. This is supposed to be a question mark, I guess; please send any suggestions to this email. The package is available at this address, or you can just search for supercopair on Atom.io. And I think that's it. Thank you.

I'm wondering, if it's a co-located performance where you're both on stage, would you be able to share a tempo clock, be in precise sync?

Between the two machines, no. But if you hook up the main speaker to one of the machines: it's one state on one machine and another state on the other machine, so if you keep sending from the other machine, it will all be in one state, and everything will run in sync; it just will not be synchronized with the other machine. And then there's the fact that it will be talking to the data center in Northern Virginia; you have to keep that in mind.

What's the largest number of users that you have tested at the same time?

What's the largest number? Antonio, can you hear? Can you talk? Okay, you cannot talk. As far as I know, 20 people, because it allows 20 devices per day by default. Any other questions?
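As a closing illustration, the permission control shown in the demo might gate incoming evaluations roughly like this; again a sketch under assumed names (askUser, evalInSuperCollider, onRemoteEval), not the actual package code.

```typescript
// Hypothetical permission gate for incoming remote evaluation requests.
declare function evalInSuperCollider(code: string): void; // local sclang eval
declare function askUser(question: string): Promise<boolean>; // e.g. a dialog

let requirePermission = true; // the checkbox enabled in the package settings

async function onRemoteEval(code: string, from: string): Promise<void> {
  if (requirePermission) {
    const allowed = await askUser(`${from} wants to run code. Allow?`);
    if (!allowed) return; // simply ignore it, as in the demo
  }
  evalInSuperCollider(code);
}
```

This is only an allow-or-ignore prompt, matching the "very preliminary" caveat above; it does not restrict what the code can do once it is allowed to run.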