So hello, thank you for being here. The first thing I want to say, even if Mark already said it, is that we are giving a concert at the social event tomorrow. And in this concert we are going to use Python heavily, even if it's not very visible in the concert. So I proposed to give this talk to explain why and how we are using Python for making music. This is the theoretical part, and if you don't believe that it works, just come tomorrow to the social event and listen. A very usual question when you meet someone at a convention like this one is: and what are you using Python for? And my answer is a little bit less usual. It's: well, mainly for real-time audio processing in a live music context. This is quite unexpected, and it sometimes triggers reactions like: what, are you crazy? Why Python for this kind of thing? And as it happens, the answers are very easy. Are you crazy? Yes, we are, definitely. And why Python? Because it's fun. So I could stop here. Thank you for your attention. Have a nice meal. But as I was lucky enough to be allotted a 45-minute slot for this talk, I think I can go into some more detail than that. So first, some elements of context. My name is Mathieu Amiguet. I'm a musician and a developer. I'm artistic director at Les Chemins de Traverse, jointly with Barbara Minder. Les Chemins de Traverse is a collective of musicians that plays in a variety of styles, from Renaissance repertoire to algorithmic composition. By the way, this music was generated by Python, but that's not at all what I'm going to talk about today. And one thing we've been researching quite a lot for the last decade or so is augmented instruments. What do I mean by that? It's taking an acoustic instrument, you know, a flute, a violin, a piano, something like that, and trying to extend the sonic possibilities of this instrument using new technologies, and especially computers. So why the strange name, augmented instruments?
Actually, it comes from augmented reality. In augmented reality, we mix real-time views of the world with synthetic information that we add to the image. And augmented instruments do the same: they mix the real-time acoustic sound of the instrument with processed audio. So in a sense, augmented instruments are augmented reality applied to music. As a side note, it's not that important in this talk, but it's very important in our research: we decided to use only free software. So we are making music with Linux and free software. That's not a very common choice in the music world. But I guess we would have ended up using Python even if we hadn't made this restriction. Anyway, this definition of augmented instruments is a little bit theoretical. Wouldn't you have an example? Actually, I will show you a set of examples. The first one is very, very simple. You have a musician. I picture a flute because that's the instrument I play, but it could be any instrument. And he plays through a speaker: so you have a set of microphones, wires, an amplifier and everything, and this goes to a speaker. And in a very simple setup, you could simply add a delay module that will, like its name suggests, delay the sound in time; a time shifting of the sound. And the musician can have a foot controller. He needs a foot controller because his hands are already busy playing the instrument. A foot controller to control the time of the delay, the length of the delay. And even with a very simple setup like this, you can already do some interesting things. Okay. So that's not bad for a very simple setup and one flute playing. This is really one flute playing with itself; there's no pre-recorded sound or anything. I'm not sure Telemann had envisioned this way of playing his music, but actually it works pretty well. And for this kind of setup, you don't really need a computer. You can do it with a hardware pedal. And it's cheaper. It's easier.
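As a rough illustration of what the delay module is doing, here is a minimal delay line sketched in plain Python. This is only the concept: in a real setup the delay runs at audio rate in optimized code (a hardware pedal or, later in this talk, pyo's C engine), and all names here are illustrative.

```python
class DelayLine:
    """A bare-bones delay line: each sample comes out `delay_samples` later.
    Conceptual sketch only; real delays process audio at sample rate in C."""

    def __init__(self, delay_samples):
        # Ring buffer pre-filled with silence.
        self.buffer = [0.0] * delay_samples
        self.pos = 0

    def process(self, sample):
        # Read the oldest sample, then overwrite it with the newest one.
        delayed = self.buffer[self.pos]
        self.buffer[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.buffer)
        return delayed

d = DelayLine(delay_samples=3)
out = [d.process(x) for x in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]]
# The input reappears three samples later: [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]
```

Mixing the delayed stream with the live one is what lets a single flute play a canon with itself.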
But if you get a little bit crazy with delays and begin to have several delays wired in strange manners, with the delay times linked to one another, it's not that clear that it's better to do it with hardware pedals. In this example, with a set of four delays wired up in the right manner, and if you play the right notes at the right time, you can get some interesting effects. By the way, this is an excerpt of a piece we are going to play tomorrow at the social event. So if you like it, just come to the concert. So we are quickly reaching the point where it might be more reasonable to use a computer instead of multiple hardware pedals. But it's still relatively easy to do with stock software, you know, just taking existing software, wiring it the right way, and you can play. The next example is a little bit more complicated. I'm going to show you a complex piece of music with a strong architecture, with a beginning, a middle and an end, really an evolution. And many things happen on the technical side: many volumes changing, loops being recorded, loops being triggered. And it becomes impractical for the musician to control all the details himself. So either we have a technician that does all the knob turning and button pressing while the musician plays. But that's not exactly what we want to do with augmented instruments, because we want them to be musical instruments that can be played by one person. Or the other possibility is to have choices that are made in advance and encoded in the computer one way or another. So in this example we have a state machine, and when the musician presses the buttons of the foot controller, he triggers state changes: I'm going from this state to that state. And each transition triggers a set of actions, like changing volumes or recording loops or something like that. And so many things happen, but the musician has only a few simple actions to make, and hopefully it frees up his head to make better music. Thank you.
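To make the state-machine idea concrete, here is a tiny sketch in plain Python. The states, the transition table and the action names are invented for illustration; the real piece encodes its own musical structure, and the actions would call into the audio engine instead of being logged.

```python
class SongStateMachine:
    """One foot-switch press = one state change, and each transition
    fires a bundle of side effects (volumes, loops...). All names here
    are illustrative, not the real framework."""

    def __init__(self, transitions, initial):
        # transitions: {state: (next_state, [actions to trigger])}
        self.transitions = transitions
        self.state = initial
        self.log = []  # record of triggered actions, for illustration

    def press(self):
        next_state, actions = self.transitions[self.state]
        self.log.extend(actions)  # in reality: change volumes, record loops...
        self.state = next_state

machine = SongStateMachine(
    {
        "intro": ("loop1", ["start_recording_loop_1"]),
        "loop1": ("loop2", ["play_loop_1", "start_recording_loop_2"]),
        "loop2": ("outro", ["play_loop_2", "fade_volumes"]),
        "outro": ("outro", []),
    },
    initial="intro",
)
machine.press()  # musician taps once: loop 1 starts recording
machine.press()  # taps again: loop 1 plays, loop 2 records
```

The musician only ever performs one simple gesture, the press; the machine fans it out into as many technical actions as the piece requires.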
So as I said before, everything is played live. There are no pre-recorded sounds. I once played this piece at a wedding party, and afterwards someone, a professional musician, came to me and said: oh, that was nice, your karaoke-like piece. And I said: well, no, it's not really karaoke. You really have to understand that the idea is that everything is played live in the concert. Also, we are slowly exiting the realm of existing software, of stock software, because here the state machine box doesn't really exist with the right connections and everything, so we had to develop this part ourselves to play this piece. Perhaps a last example. It's very similar, but there's an interesting thing. Until now I showed you only things with loopers and delays, so only time shifting if you want. It's of course also possible to add effects of all kinds, or to synthesize sounds, but synthesizing sounds is not something we do much. And in this one, something funny is happening. If you look at the bottom blue path, it goes through a looper and then something we call an envelope follower, and what comes out is a red path. So an audio path is transformed into a control path for another sound. That's a funny thing to do, and also something we had to develop ourselves. So if you think of a solo flute piece, you probably don't picture this kind of sound, and that's exactly what we are trying to do: to extend as much as possible the sonic possibilities of the instrument. And actually, for a few years we had been doing this kind of thing and everything was going very smoothly, using partly existing free software, as I told you, like SooperLooper or Guitarix, this kind of thing; partly custom fragments written in audio programming languages, specialized programming languages for audio. We mostly used ChucK, but we also had a few experiences with Pure Data, SuperCollider and Csound, and we could have done the same kind of thing with any of them. And we would connect everything with JACK.
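The envelope follower deserves a small sketch, because the trick of turning an audio path into a control path is easy to show in a few lines of plain Python. This is a crude block-by-block RMS tracker, not the actual implementation (pyo provides ready-made follower objects for this); the signal values are made up for illustration.

```python
import math

def envelope(samples, block_size):
    """Track the amplitude of an audio signal block by block (RMS).
    Audio in (fast oscillations), slowly varying control values out."""
    env = []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        env.append(math.sqrt(sum(x * x for x in block) / len(block)))
    return env

# A loud burst followed by near-silence...
signal = [0.8, -0.8] * 32 + [0.01, -0.01] * 32
ctrl = envelope(signal, block_size=64)
# ...yields a control curve that is high, then low: usable to drive
# the volume or any other parameter of a second sound.
```

That red path in the diagram is exactly this: the output of `envelope` is no longer something you listen to, it is something you use to steer another sound.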
I don't know if you're familiar with JACK. For once, it's one of the best recursive acronyms in the history of free software: JACK is the JACK Audio Connection Kit. It's an audio daemon that lets you connect different audio applications on the same computer, the same way you would connect different rackable audio units with jack cables, but you do it in software. It's very nice. And we would manage everything with bash scripts: simply launch the software we needed, connect everything, and everything was good. We thought. But then we hit a wall. We had a big problem, and we realized that we couldn't go on the same way; we had to change something very fundamental in how we worked. What was the problem? The problem was that we were able to play single songs, single tunes, very easily, but we couldn't go smoothly from one song to another. What we had to do was launch the right script, then play the song, then go to the computer, quit everything, stop every sound, launch a new script, and then we could continue. And that's not that nice in a concert. You know, sometimes you want to crossfade from one song to another. And it's also not so nice on stage to have someone walking to a computer, bending over it and typing; that's not very nice to look at. Of course, one possibility could have been some kind of mega patch with every song encoded, every song ready to go, and you just go from one to another. But we had two problems with this. The first one is performance: if you have every possible song running in parallel, you are likely to have performance problems on your computer.
And the other problem is that we really wanted a modular approach, because we compose songs, and then when we play gigs we say: well, I'm going to play that song and that song and that song, but maybe for another gig I will take another song plus the first one from the first gig, and so on. So we really had to have a modular way of implementing our songs and then reusing them in gigs, in set lists. So what we needed was some kind of gig framework, you know, like a web framework, but for gigs. The Flask of the gigging musician, if you want. And what we realized is that that's something really, really difficult to do in audio programming languages. Audio programming languages are very good at programming audio; they'd better be. But they lack, you know, the higher abstractions, the meta-programming features that make it easy to build something that looks even remotely like a framework. So we did quite a lot of research. And finally we found this: pyo, a dedicated Python module for digital signal processing. It's a very nice module developed mainly by Olivier Bélanger at the Université de Montréal in Canada. And actually I was already quite familiar with Python before. And when I saw this, I thought: well, it sounds nice. But if you know anything about real-time audio processing, you should be quite skeptical. Are you? You should be, because it's very likely that Python is too slow for real-time audio. And even if it's not too slow, things like memory management, you know, garbage collection and this kind of thing, are very likely to introduce too much latency, and then you get clicks in your audio, and that's not nice. However, pyo does work, because it works more or less like a marble run, like this one. The idea is that you have blocks and you can build paths with these blocks. And in this example, if you drop a marble on the finished path, it will just follow the path at its own speed, even if you were slow to build the path.
And you can build a second path while marbles are running down the first one, and then just switch to the other. You just have to be a little bit careful about the moment of the switch, because if there's a marble in transit at that time, it will fall out. But you can build things relatively slowly and then have the path run at full speed. And that's exactly what pyo is doing. pyo has an audio engine that's implemented in C. It's very efficient, very lightweight, very nice. And there are bindings to Python that give you building blocks and, you know, hooks to change things in all kinds of places. And so all the heavy work of dealing with audio samples and memory and everything low-level is completely invisible. You just have the nice colored blocks and you construct your path. Now, this is not a toy convention, this is a Python convention, so maybe I can get a little bit more precise about how it works. Remember the first example I showed you, the Telemann canon played by one musician. How could we implement this in pyo? Actually, it's very easy. First you need some boilerplate code, but really not that much. Just an import. Create what's called a server; that's the audio engine. And then later on you will start the server and find a way to keep the main thread alive, because the server runs on a different thread, and if you just start the server and let the script end there, the program exits immediately. One way is launching a GUI; there are other ways. We don't use a GUI on stage, so we don't launch one, but that's not that important. Then we try to implement the upper path in the drawing: just having the sound of the musician going to the speaker. And that's really easy. You just have to create an Input object, and the Input object will represent the audio stream coming from the input of the program, so from the sound card. And then, on any audio stream in pyo, if you call the out method, it will send this stream to the output of the program.
So this is a fully working program that will just pass the sound through. That's not bad for what, one, two, three, four, five lines of code. And for the second path, the one that goes through the delay, that's not much more difficult. We have several delay objects in pyo; here I use the simple Delay. The first argument to a pyo object, so to an audio stream, is its input. So here the delay will take its input from the Input object we created. And as we want the delay to go to the speaker too, we call the out method on it as well. And we have a third path to implement, the red one: I want to use the foot controller to tap the tempo, the length of the delay. For this, the code uses footcontroller, a small library I implemented to use my foot controller with pyo. So, some boilerplate code, but what's interesting is the line that says b1 equals press button one: mainly, I'm making an object that represents all the times I press a button on my foot controller. And then I make a Timer object that will compute the time between two successive presses. So if I press, then wait three seconds and press again, it will contain the number three. It's also a stream of data which continuously carries this information. And then I just tell my delay object that the length of the delay will be the value of the timer. And this is the full implementation of what you see above. And it's really usable in a concert. I mean, you have to do some work to set up your computer so that it can deal with low-latency audio; that can be some work. But the code can work like this. So we were very happy, but we still had the wall. Because if I want to go to another song, I have to quit this script and launch another one, and I have gained nothing. Or almost nothing, because now I have Python. So I really needed to build some kind of framework. And we thought: we have to model our gigs, our sets, in a simple way. So we said: our gigs will be modules.
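The tap-tempo logic described here, measuring the time between two successive presses, can be sketched in plain Python like this. pyo's Timer produces this value as a continuous stream at audio rate; this illustrative version uses an injected clock instead of real time, so the sketch does not depend on an audio server or a real foot controller.

```python
class TapTempo:
    """Measure the time between two successive button presses.
    Mimics the role pyo's Timer plays in the talk's example; the clock
    is injected so the sketch is deterministic."""

    def __init__(self, clock):
        self.clock = clock
        self.last_press = None
        self.interval = None  # seconds between the last two presses

    def press(self):
        now = self.clock()
        if self.last_press is not None:
            self.interval = now - self.last_press
        self.last_press = now

fake_time = iter([10.0, 13.0])  # presses at t=10s and t=13s
tap = TapTempo(clock=lambda: next(fake_time))
tap.press()
tap.press()
# tap.interval is now 3.0; fed to the delay's time parameter, the flute
# answers itself three seconds later.
```

In the real pyo setup, the timer stream is assigned directly to the delay's time parameter, so no polling code like this is needed.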
And we have some naming conventions. For instance, if I write scenes = followed by a list, those will be the scenes, the tunes, that I want to play in my gig. Scenes are also modules, which means I can take advantage of the dynamic importing capabilities of Python. Then some setup code. Here I'm saying: well, for this gig, I will have two microphones and I want to be able to crossfade from one to the other. And I have some kind of blackboard object that anyone can read or write, anyone meaning the gig and the scenes, the tunes. So in my gig, for instance, I set up my microphone and then I write context.mic = mic, and then I can access it from other parts of my code. That's taking advantage, of course, of the dynamic typing possibilities of Python. And the scenes become very, very easy. A scene, as I said, is a module. And I can say: well, I need to use the expression pedal, and I want to have loops. Of course, I can use all the features of Python. For instance, in this example, I had several buttons of my foot controller that had to behave in a similar fashion, so why not use a list comprehension to make all four of them in one go? You see that I use context.mic in the definition of my loops. And I also have some decorators that provide hooks at certain points in the life cycle of the scene: when the scene is created, activated, deactivated, and so on. And then it's very easy to have a master script. That's the core of our framework. It will find the gig; in this example, you call it on the command line with the name of the gig. So I launch gig europython2019. It will find the right module, find the scenes that are in it, and import every scene. And then I can register some events, for instance when I press certain buttons of my foot controller, to switch from one thing to another. And with this, I can really easily build the kind of gig framework I talked about. And it works pretty well.
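The dynamic-import machinery behind such a master script can be sketched as follows. The gig and scene modules are faked in sys.modules so the example is self-contained; the module names, the `scenes` naming convention and the `on_activate` hook are illustrative stand-ins for the real framework, which is much more elaborate.

```python
import importlib
import sys
import types

# Shared blackboard: the gig and the scenes can read or write anything on it.
context = types.SimpleNamespace()

def load_gig(gig_name):
    """Import the gig module by name, then import every scene it lists.
    Mirrors the convention from the talk: a gig module carries a
    `scenes` attribute naming its scene modules."""
    gig = importlib.import_module(gig_name)
    return [importlib.import_module(name) for name in gig.scenes]

# Fake a gig and two scenes in sys.modules so this runs standalone;
# in reality these would be ordinary .py files on disk.
gig_mod = types.ModuleType("europython2019")
gig_mod.scenes = ["telemann_canon", "flute_loops"]
for name in gig_mod.scenes:
    scene = types.ModuleType(name)
    # Illustrative life-cycle hook: note which scene is active on the blackboard.
    scene.on_activate = lambda ctx, name=name: setattr(ctx, "current", name)
    sys.modules[name] = scene
sys.modules["europython2019"] = gig_mod

scenes = load_gig("europython2019")
scenes[0].on_activate(context)  # foot switch pressed: first scene activates
```

Because scenes are plain modules found by name, the same tune can be dropped into any set list just by listing it in another gig module, which is exactly the modularity the talk asks for.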
Of course, this is only the principle. The real code is much longer; there's some error checking and things like that. But still, I think the whole framework must be way under a thousand lines, which is really, really reasonable for the kind of thing we are doing. And this was possible thanks to very nice features of Python, like dynamic typing, dynamic imports, decorators, code introspection, this kind of thing. To be completely honest, in the first version we also used some disreputable features like monkey patching, live inspection of execution frames, and all kinds of hacks. But we thought we needed them, and we had them, so we could have a prototype very quickly. And after some months, we thought: well, this is really, really ugly, we must do something about it. And we are getting rid of the ugliest hacks one by one. But still, all the features are there, and if you ever need to do something really unusual or strange, everything is available. That's something really nice about the Python language, I think. So, now we found a way around the wall. We can see that there is still a long journey in front of us, but now we can go forward and explore new territories. And we are now able to go seamlessly from one scene to another without sound interruption, and also, for those who know this kind of thing, with effect tails: you know, if you have a long, long reverb and you switch to another scene or tune, you don't want the reverb to be cut, you want it to die away slowly, this kind of thing. And everything works very well. So my conclusion would be that the combination of Python and pyo really supports our creative process, in that it makes experimentation easy. When we have an idea, a musical idea, it's very easy to implement it and test it. And this is very important, because we have many ideas. And to be honest, I would say nine out of ten never reach the stage: we try them and say, well, no, that wasn't a good idea.
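Going seamlessly from one scene to another means crossfading between them, and a standard way to keep loudness constant during the overlap is an equal-power gain law. This little sketch is an assumption about how such a crossfade can be computed, not the framework's actual code.

```python
import math

def equal_power_gains(x):
    """Crossfade gains for position x in [0, 1]: 0 = only the old scene,
    1 = only the new one. The sine/cosine law keeps the sum of squared
    gains at 1, so the ear hears no dip in the middle of the fade."""
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

old_gain, new_gain = equal_power_gains(0.5)
# At mid-fade both gains are about 0.707 and their squared sum is 1:
# constant perceived loudness while the two scenes overlap.
```

A naive linear crossfade (gains x and 1 - x) dips by about 3 dB at the midpoint, which is audible; the equal-power law avoids that.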
So if we needed, I don't know, three, four, five days to implement an idea before we could test it, we simply wouldn't have the time to do it. And with Python, everything goes very fast and we have a very direct path from the initial idea to its prototype. And, well, most of the time the prototype is also the production code. Another really, really great thing is that pyo is very actively developed. The main developer, Olivier Bélanger, is very, very dedicated to making pyo better and better. It happened many times that I was working on some code, suddenly got blocked, and wrote to the mailing list saying: well, I'm trying to do this and this with pyo and I can't find how to do it. Usually I would do that in the evening and then go to sleep. I live in Switzerland; Olivier Bélanger lives in Canada, which means he still had a long day in front of him at that time. And when I woke up the following morning, I would have an answer on the list: well, this was not possible, but it's now implemented, just check out the latest code. It really happened many times, and, well, that's simply great. I know he couldn't be here today, but thank you, Olivier, for this great work. So this combination of Python and pyo gives us C efficiency, and we really need efficiency and very low latency when we do real-time audio, together with all the flexibility of Python. It's also quite an unexpected use case for Python, and I think it really shows the versatility of the language and its ecosystem. And that's why I think it's a great way to make music. Now, maybe an interesting question would be: we are very happy with this Python plus pyo solution, but what could possibly make us consider another one? I can see two places where I'm not completely satisfied and where I would consider changing. The first one is catching errors.
To show you what I'm talking about, here is a little callback that would be called when I press a button on my foot controller. And as it happens, I made a typo in my callback code: I wanted to write loop.set something, and I wrote something else. As I'm a very, very serious developer, I even documented my typo. But I get absolutely no error when I launch my script; it's only when I press the foot controller that I will get one. Now, pyo is relatively resilient in this kind of case. It won't crash the whole thing, so even if it happens in a gig, it's not the end of the world. But one thing is sure: it won't do what I intended it to do, and that can be quite annoying. So I would appreciate having some tools that would catch most errors before the code is even executed. The other thing is that, like many frameworks in imperative languages, pyo relies heavily on callbacks. And callbacks are very nice, they work well, we are used to them, but they are not always the best way of expressing ideas. Maybe it would be interesting to explore other ways of organizing things in time than callbacks. So maybe I've read too much about Haskell: now I want errors caught at compile time and to get rid of callbacks. I don't know. Anyway, reimplementing all our setups and gigs in a new language would be quite an expensive thing to do, so I think we would really need very, very obvious advantages to move away from this solution. But that was just to say what could be even better. If you want to hear more music than the little excerpts you've heard, of course the best thing to do is to come to the social event tomorrow; we are playing live. If you are the kind of old-fashioned person that still buys CDs, like me, you can buy a CD; I have a few with me, just come and see me. This is our latest album, with many, many augmented-instrument things, all backed by pyo.
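One cheap way to catch this family of typos before the concert, rather than at foot-switch time, is to resolve the callback target once, at registration. This is a hypothetical guard sketched for illustration; it is not a pyo feature, and the Looper class and method names are invented.

```python
class Looper:
    """Stand-in for an audio object with a known set of methods."""
    def set_volume(self, value):
        self.volume = value

def register_callback(target, method_name):
    """Fail at registration time, before the gig, instead of at the
    moment the foot switch is pressed. A cheap guard, nothing more."""
    if not hasattr(target, method_name):
        raise AttributeError(
            f"{type(target).__name__} has no method {method_name!r}: "
            "probable typo in the callback wiring"
        )
    return getattr(target, method_name)

loop = Looper()
cb = register_callback(loop, "set_volume")   # fine, resolved up front
try:
    register_callback(loop, "set_volmue")    # typo caught immediately
except AttributeError:
    caught = True
```

It only covers misspelled attribute names, of course; static checkers or type annotations would catch a wider class of mistakes before the script even runs.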
You can also get this album in dematerialized form, that's the fancy word, in MP3 format on Bandcamp. And if you really want to support the platforms instead of supporting the musicians, you can also stream it from Spotify, Deezer, Google and virtually any streaming platform. So that's it. If you have questions, I think we can take one or two now, and of course I'm available after my talk to answer questions one on one. Thank you for your attention. Hello, so thank you for this insightful presentation. I'm just curious to know how you chose to annotate your music score in order to know which foot button to press at what time. Sorry, I didn't get it. I said I'm curious to know how you chose to annotate your music score, your partition, in order to know which foot button to press at what time. That's a big problem: how to write it down. We do quite a lot of composition for augmented instruments, and the writing part is a real problem. Sometimes we just have, you know, standard music scores and we annotate them with numbers or something like that. Sometimes we have a completely different notation, because we don't have any use for the traditional five-line notation. But actually we don't really know, and sometimes it's even the code that's slowly becoming the score. We also do a lot of improvisation on canvases, and sometimes we don't write anything at all, and if we have a question we go and look at the code and say: oh yes, we decided to have that and that and that. So that's a good question, but I don't really have an answer. Another question? Okay, so thank you very much.