Welcome to Anne Veinberg and Felipe Ignacio Noriega, also known as Off<>zz, for their performance. Thank you. Hello, yeah. Well, welcome. Thanks for coming this morning. We are Off<>zz. We formed this duo when we met at the conservatory, maybe five or six years ago, and we've been playing together ever since. My instrument is the laptop and her instrument is the piano. What we want to do in this presentation is show you a bit of what we do when we play together, which is live coding. Maybe some of you have heard about it. The idea is that you can follow the process of coding while I'm doing it. Later we'll show you where we're taking this project, because ultimately we want Anne herself to be able to code with a piano. We'll get into that in a bit.

We're musicians originally, and as musicians we play a lot of chamber music. When you're playing with a laptop artist or a live coder, they're very much entranced in their own thoughts. As a chamber music partner to that, I found it very frustrating, because I never had any visual or physical contact; it was very distant. So we came up with this experiment: the idea that every set amount of time we have to make visual contact, to make it a more normal musical experience. If you log into this address, you can actually control the time interval at which we have to look at each other. It's limited to between 30 and 120 seconds. When the signal flashes, that is the only moment we are allowed to make any big musical changes, the way you would start something fresh together in conventional chamber music. We see the signal, we look at each other, and then we can change something or not; that's up to us. You're also very welcome to leave comments.
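The audience-controlled timer described above can be sketched in a few lines of JavaScript: a submitted interval is clamped to the 30-120 second range mentioned in the talk, and the "look at each other" flash fires on that schedule. The actual web app was not shown; all names here are hypothetical.

```javascript
// Hypothetical sketch of the eye-contact timer: audience-submitted
// intervals are clamped to the 30-120 second range from the talk.
function clampInterval(seconds) {
  return Math.min(120, Math.max(30, seconds));
}

// Schedule the "look at each other now" flash on the clamped interval.
// `flash` is a placeholder for whatever updates the page for the duo.
function scheduleFlash(seconds, flash) {
  return setInterval(flash, clampInterval(seconds) * 1000);
}

console.log(clampInterval(10));  // -> 30 (too short, clamped up)
console.log(clampInterval(200)); // -> 120 (too long, clamped down)
```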
You can of course try to hack the system, please don't, but you can, this is a hacker conference after all. Leave comments and tell us if you want something, and we may or may not respond to it depending on what we feel like, but normally we respond to some of the comments, not all, because there are a lot of you and only two of us. But we will try, and we'll try to make a nice performative musical experience out of this for you. A collective experience. So without any further ado, if you log in, say hi. Okay, and we'll have time for questions after the piece.

That was a spooky experience, it was cool to do, and kudos to whoever set off the alert there and flooded the messages. So now we're going to move on to the next thing we did, and I'll explain it. This is a big research project for me: how can we play together, between the live coding and the piano? It's not a normal way to make music. So we really sat down and asked: what can we do to bring the two practices closer together? And we came up with this idea of the CodeKlavier, and we have some stickers afterwards if you want one. The CodeKlavier is basically a system, and that's why we also have a digital piano here, through which I will live code by playing the piano. We started this project in April, so we're really excited to be able to share it with you. It's still very much in the development phase. We're going to share the first prototype, which is called Hello World. I think you can all relate to that. It's very nice, but it's still very much using this keyboard as a substitute for your standard laptop keyboard. So making music... well, I think it is some form of making music, but it's not comfortable piano playing.
But it's still a very important step for us: the first "hey, I can live code through playing the piano." And we're really curious what you will think of it too. Thanks. Okay. It's booting? It's booting. Since we started this journey, we've tried to work in phases. The first experiment was to simply let Anne play the piano and see if we could map her MIDI keys one-to-one into code commands in SuperCollider, which is the language we use for audio. Once this boots, we'll show you. After that, we wanted to go beyond one-to-one mapping, which was actually kind of boring, and see if we could get the system to recognize the musical gestures of the piano and map those to certain things in the language as well.

While we're booting, we also want to thank the Stimuleringsfonds in the Netherlands for supporting this project; that has been really great support for us. We're also really open, in the future, to collaborating with people who want to develop other forms of code output. Not necessarily music-related: visuals, virtual reality, you name it. We're thinking about it, so if you're interested, come and have a chat with us afterwards. SuperCollider takes a while to boot. Well, it's there now. For this we use, as I said, JavaScript for the MIDI part, which reads the piano and does the mapping, and then SuperCollider for the audio synthesis. So I have to boot the whole CodeKlavier, which is there now. It has also been quite a journey for me to learn how to code, so we'll be looking into making an education platform as well, because this is an interesting way to approach coding through music. That's one of the dreams, too. Let me give a little intro to this next piece: with this one, Anne can control a robot toy piano.
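The one-to-one mapping phase described above can be sketched in a few lines of JavaScript. The mapping table below is invented for illustration; the talk does not give the actual Hello World key assignments.

```javascript
// Hypothetical one-to-one map from MIDI note numbers to characters that
// get "typed" into the SuperCollider editor, in the spirit of the
// Hello World prototype. The assignments are made up.
const keyMap = {
  60: 'a', // middle C
  62: 'b',
  64: '(',
  65: ')',
};

// Turn a stream of incoming note-on numbers into the text to be typed.
// Notes with no mapping are simply ignored.
function notesToText(noteNumbers) {
  return noteNumbers.map(n => keyMap[n] ?? '').join('');
}

console.log(notesToText([60, 62, 64, 65])); // -> "ab()"
```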
That's what they called it: a mechanical toy piano with MIDI. So she plays and controls a mechanical instrument, but because we don't have that instrument here, she's going to be controlling a sampler instead, also done in SuperCollider. Take it away. Actually, Felipe made the whole system, and I'm very excited that he made it so user-friendly. But as you saw, I was more or less typing, just using a different interface, trying to intersperse it with musical things in between, and getting a bit nervous. So our next prototype is called Motippets. The key difference is that instead of individual key control, I play pianistic motives and gestures, which the computer recognizes and maps to code snippets, which I can then control. For now, my main mode of control is setting the variables through the intervals of the tremolos. So if I play an octave tremolo, that's 12 semitones, the number will be 12. It's very much a prototype, and I'm never quite sure whether it's going to work, but we thought it would be really exciting to share it with you here today. That's the last thing we're going to play, and then we really hope to get a few questions in before the time runs out.

After this, Anne will not need me at all, actually. That's the point. That was a nice stop with the rain, I thought. So hopefully what you noticed here is that the way Anne generates the code is a bit more musical than in the first piece, which is what we're trying to achieve in the end. We call it Motippets, I don't know if you said it, because we recognize motives and map them to snippets. And what's really exciting about eventually being able to code with a piano is that I have two hands, so I can code two things in parallel. This system is still a work in progress, but that is one of the big motivations behind it.
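Anne's tremolo-interval control can be sketched directly: the interval between the two alternating notes, in semitones, becomes the value, so an octave tremolo yields 12. The function name is hypothetical.

```javascript
// Sketch of deriving a variable from a tremolo, as described in the talk:
// a simple tremolo alternates between exactly two pitches, and the
// interval between them (in semitones) becomes the number.
function tremoloValue(noteNumbers) {
  const distinct = [...new Set(noteNumbers)];
  if (distinct.length !== 2) return null; // not a two-note tremolo
  return Math.abs(distinct[0] - distinct[1]);
}

console.log(tremoloValue([60, 72, 60, 72, 60])); // octave tremolo -> 12
console.log(tremoloValue([60, 64, 67]));         // not a tremolo -> null
```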
And I think that's something really unique we can bring to the coding paradigm. We have time for a few questions, so please ask, or come talk to us. Who has a question for Felipe and Anne and wants to come up to the microphone?

Hello. Hi. Thank you so much for the wonderful performance. Thank you. My question is: what would be the minimal requirements for a performer before he or she could start to play with this? Do you need to learn how to code first? Do you need to set up the entire snippet system, or can you just plug this in and play? That's a very big question. At this stage you cannot plug anything in, because it's not released, but eventually we also want to develop an education model where people of various levels of programming and musical skill can interact with the system. For those of you based in the Netherlands: on the 16th of September we will do an installation at the Leidse Nacht van Kunst en Kennis in Leiden, where non-musicians can have a go and control the system in a very basic way. I didn't know how to code until... I mean, I still don't really know how to code. I learned everything I know about coding through this system, and I think it's a very nice way, certainly for musicians, to approach coding, because you learn it as a kind of non-linear score. Not exactly, because there are too many variables, but it's much more approachable than if I had to learn it just with the computer. Let me add to that. What's happening now with the system is that many things are pre-built, many keywords and conditionals, but what we want is that you don't really need to know how to code; you know how to play the piano. With enough visual feedback, you start understanding certain things about programming. For example, the way she stopped it: she mapped a conditional which says, okay, from now on I will check how many notes you play in blocks of five seconds.
And if it's more than 100, I will stop it. By trying to do those things that relate to the playing in real time, we think we can give the performer, the player, an understanding of how to program, and then they can use those constructs. Eventually we want to go lower level, of course, where they can make their own snippets and even type or build their own conditionals. So we're trying to get all the way down to the low level in the end, I think. Yes, next question.

Hi, thank you for your performance. I'm just thinking: because there's a limited number of keys on the piano and a finite number of commands, what's the chance, say when playing something classical, that you'll create even one valid command? Or is this really just a case of monkeys and typewriters? I think in the final version there will certainly be a chance of that. But the goal is not to play a Chopin Nocturne and suddenly produce code, because we do want the pianist to engage in the coding process. Code can always be generated, but it should also be meaningful and controlled, ideally. Although one of our dreams is to see what happens if you code intuitively with this system: you're not thinking in the logical process of a programmer, you're just playing music, a prelude by Chopin, or improvising, and we see whether that generates code. It probably won't be useful code, but the experience itself might be useful. We don't know; we want to explore those things. Those are part of our research questions, actually.

Next question. So what software are you using? Anything related to Sonic Pi, or which libraries or tools? We're using JavaScript to get the MIDI into the computer, and then also for analyzing the MIDI, categorizing the input, and branching it into different analysis branches, to then generate the code I want to type.
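The stop conditional mentioned above, counting notes in blocks of five seconds and stopping when a block exceeds 100, could be sketched in that JavaScript layer like this. The block length and threshold come from the talk; the function names and structure are hypothetical.

```javascript
// Count note-on events per five-second block, keyed by block index.
// `events` is an array of note-on timestamps in milliseconds.
function countInBlocks(events, blockMs = 5000) {
  const blocks = new Map();
  for (const t of events) {
    const block = Math.floor(t / blockMs);
    blocks.set(block, (blocks.get(block) || 0) + 1);
  }
  return blocks;
}

// The conditional from the talk: stop if any block holds more than 100 notes.
function shouldStop(events, threshold = 100) {
  return [...countInBlocks(events).values()].some(n => n > threshold);
}

// 101 note-ons inside one five-second block exceeds the threshold:
const dense = Array.from({ length: 101 }, (_, i) => i * 40);
console.log(shouldStop(dense)); // -> true
```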
That's the JavaScript, and that's what's here in the terminal; it sends its output through another library that just types, so it effectively acts as a keyboard. Then I'm using SuperCollider, which is the other part there, an open-source language for audio synthesis and analysis. Actually, Sonic Pi is built on top of SuperCollider. So that's it: JavaScript and SuperCollider. Next question.

Yeah, hi. First of all, thank you for the awesome performance. It was absolutely suitable for a Sunday morning. Besides all the technology, you do want to repeat this process on stage, right? So how can I book you for my festival? You can come talk to us right now. Put your information up on the screen, because I'm sure more people will want to do that. I will contact you afterwards. We have a website; it's called keyboardsunit.com. I can't say we really keep it up to date, but we will now. We'll update it today. Let's see more of you in the Netherlands. Yeah, that would be great. Thank you. And please come talk to us. Next question.

I really enjoyed it as well. I have a little technical question: how do you recognize the motives? What aspects of the music that she is playing are you looking at? Is it pitch, or tempo, or both? Good question. We had a lot of discussion about that in the beginning, and in the end it's MIDI. We're just looking at MIDI, because the protocol is very well-defined and it's not a lot of information. So we're looking at which note it is, the MIDI note number, and how often it's coming in, and I'm storing those things as patterns, in arrays. Then I'm analyzing those arrays. For this one, because it's also a kind of prototype, I hard-coded the motives, so I'm comparing what's coming in against something hard-coded, and if it's a match, it prints a snippet.
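The hard-coded matching just described, arrays of incoming notes compared against stored motives, with a snippet printed on a match, can be sketched as follows. The motives and snippets here are invented for illustration; they are not the project's actual tables.

```javascript
// Hypothetical motive table: each pianistic motive (a sequence of MIDI
// note numbers) is paired with the code snippet it should emit.
const motives = [
  { pattern: [60, 64, 67], snippet: '~chord = [60, 64, 67].midicps;' },
  { pattern: [72, 71, 69, 67], snippet: '~run.play;' },
];

// Compare the tail of the incoming note array against each stored motive;
// on a match, return the snippet that would be "typed" into SuperCollider.
function matchMotive(recentNotes) {
  for (const { pattern, snippet } of motives) {
    const tail = recentNotes.slice(-pattern.length);
    if (tail.length === pattern.length &&
        tail.every((n, i) => n === pattern[i])) {
      return snippet;
    }
  }
  return null; // nothing recognized yet
}
```

A real implementation would also tolerate timing and slight pitch variation, which is part of what makes the recognition problem interesting.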
Eventually, in the next phase or the next prototype, instead of having hard-coded motives to compare against, we want the system to learn her style, or if you play Chopin, the style of Chopin: derive those patterns, compare what you're playing to them, and see whether there are matches or not. Does that answer the question? Yeah, absolutely. One more related question: do you calculate the tempo that she's playing live and then use that parameter to play the motive in sync? When we play together, you mean? When you trigger the motive, does it have a fixed tempo, or is it adaptive? It's a fixed tempo, and she can modify it. Yeah, I was controlling it. She uses her ears to synchronize it: she adjusts it, checks whether it's there yet, or plays in time with it herself. But it's a good idea, I think. Right now we have very basic listening methods, but analyzing the tempo and the frequency of notes would be a good way to get more information and do more with the system, for sure. Thank you. Thanks for the suggestion. So unfortunately, time's up. Please give a warm applause for Anne and Felipe, better known as Off<>zz.