Hello everybody. We're about to see a presentation on building a step sequencer in Python. As you can see, there's a lot of equipment up here, which means we'll have a live demo. Please be quiet — maybe you should stop working, I don't know. Thank you. Hi, so I'm from France, and I will show you a little project I've been working on in my free time. I'll start with some background on musical instruments, synthesizers and samplers, then define what a sequencer is, and specifically a step sequencer. Musical instruments can be played by humans, but some can also be played by computers — for instance, synthesizers and samplers. Synthesizers are essentially sound generators, which means that lots of parameters can be tweaked and automated. This is a Minimoog from the 70s, and this is the DX7 from the 80s. Now we also have what we call analog modeling, which is a way to reproduce original circuit designs — I have a MiniNova over here, which is exactly that. You can also use a VST plugin to have a virtual instrument on your computer, but I won't use that. Another category of instruments is samplers, which do not generate sound themselves but play samples, which are little chunks of sound. There are many ways to map samples to a keyboard. You can have one sample for the whole keyboard, or you can have one sample for each note, which is the case, for example, with the MPC and similar machines you see used for finger drumming — pads like this one are meant for that. And you can have a mix of the two, where a range of notes shares a sample: for example, from G2 to B2 it will be the same sample, but pitch-adjusted. Now, what about drum machines? These are sound generators plus a step sequencer, which I will define shortly. A famous example is the TR-909, used by Daft Punk, and drum machines show up in various other styles too, like with Björk. So this is a TR-808.
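The pitch-adjusted zone mapping described above can be sketched as follows. This is my own illustration, not code from the project: in equal temperament, shifting a sample by n semitones means multiplying its playback rate by 2^(n/12), and a "zone" maps a contiguous note range onto one sample recorded at a root note.

```python
# Sketch of "one sample per zone, pitch-adjusted" mapping (illustrative,
# not from the talk's code base).

def playback_rate(note, root_note):
    """Rate at which to play a sample recorded at root_note so that it
    sounds at the requested MIDI note (one semitone = factor 2**(1/12))."""
    return 2 ** ((note - root_note) / 12)

# A zone maps a contiguous note range to a single sample.
ZONES = [
    {"sample": "g2.wav", "root": 43, "low": 43, "high": 47},  # G2..B2
]

def resolve(note):
    """Find the zone covering a note and the rate to play its sample at."""
    for zone in ZONES:
        if zone["low"] <= note <= zone["high"]:
            return zone["sample"], playback_rate(note, zone["root"])
    return None
```

Notes above the zone's root are played faster (higher pitch), notes below slower, which is why the range per zone is kept small in practice.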
That brings me to the definition of a sequencer. A sequencer is meant to play a sequence of notes, and you can have several tracks at the same time — that's how you build the sound, actually. A step sequencer is a special kind of sequencer which uses subdivisions of time. For example, a 4/4 measure is divided into 4 quarter notes, and each quarter note can be divided into 4 steps, so a sequence like this is 16 steps long. This is the musical notation corresponding to that. For each step, we can define various attributes: the note, obviously — that's useful — other attributes like length or velocity, and of course whether the step is activated or not. This is an example where each step is filled with a note. Obviously there's no sound in the video, but here you would hear a sequence without breaks. There are actually four 16-step patterns which make up the whole theme. But you can also set some steps off, and that's where the rhythm really comes out. Here we hear a kind of acceleration at the end, and it's just the effect of having these four steps activated one right after another. So that's the basic working of a step sequencer. How can we use it in real life? Usually there are two modes. There's a step-by-step mode, which means that for each step we define which note we want to play. There's no timing there, no rush, because we can modify everything — go back, modify a note, modify an attribute. And of course there is the live mode, where you start the thing and you can activate steps, toggle them, in real time. That's the real power of a step sequencer. Okay, so now I will present the project. I had synthesizers and MIDI pads — these are just controllers, they don't contain any sound. And a snake, of course. I wanted to make the synthesizer here play notes using Python.
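As a rough illustration of the two modes — names are mine, not the project's — a 16-step track with per-step attributes and a live toggle might be modeled like this:

```python
# Minimal sketch of a 16-step track (hypothetical names, not the
# project's actual API). Each step carries a note plus attributes,
# and an on/off flag that live mode flips in real time.

class Step:
    def __init__(self, note=60, velocity=100, length=0.5, active=False):
        self.note = note          # MIDI note number
        self.velocity = velocity  # 0..127
        self.length = length      # fraction of a step
        self.active = active

class Track:
    def __init__(self, n_steps=16):
        self.steps = [Step() for _ in range(n_steps)]

    def toggle(self, index):
        """Live mode: flip a step on or off."""
        self.steps[index].active = not self.steps[index].active

    def pattern(self):
        """'x' for active steps, '.' for muted ones."""
        return "".join("x" if s.active else "." for s in self.steps)

track = Track()
for i in (0, 4, 8, 12):   # activate every fourth step
    track.toggle(i)
print(track.pattern())    # → x...x...x...x...
```

Step-by-step mode is then just editing `Step` attributes while the clock is stopped; live mode calls `toggle` while the sequence is running.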
I wanted to modify steps — activate them or not — to create a sequence, implement the two modes I've talked about, change the tempo in real time, which is useful, and create abstractions to make these interactions possible with any controller. An important aspect of the project is that there is no UI. It's just a black box, so I had to focus on usability from a hardware point of view, because when you play live, you don't want a computer and a mouse and clicking on things. So, what's MIDI? MIDI is an extremely old standard, still largely in use today. There is OSC now, but MIDI is still strongly used. MIDI is used to synchronize and communicate between musical instruments. There are message types for everything: notes, program change to change instruments, control change to act on parameters, and there are system-exclusive messages too. So we will need to speak MIDI with our devices. From the point of view of the program, what's an input? An input receives the things I do with my controllers — when I press a pad, when I press a key on the keyboard, when I turn a knob. And on output, I want to play notes, of course, but I can also turn the LEDs on my pads on or off using MIDI messages; most pads like this respond to special MIDI messages to light their LEDs. So how do we receive messages? It's quite simple using the mido library: we just open an input, and there is a blocking call from which we extract a message. I'm blocked during this method call, so if I want to do something else, I will have to deal with threads or coroutines — we'll see about that. To play notes, it's regular output: I create a Message object and then send it through a port. For example, here I have a note-on message. If I just send this message, there will be a never-ending note until the note-off message arrives.
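On the wire, the note-on and note-off messages just discussed are three bytes each. Here is a from-scratch sketch of that format, kept dependency-free on purpose — with mido you would instead write `mido.Message('note_on', note=60, velocity=100)` and `port.send(msg)`:

```python
# MIDI channel voice messages: a status byte (message type in the high
# nibble, channel in the low nibble) followed by two 7-bit data bytes.

NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(note, velocity=100, channel=0):
    """Start a note; it sounds until a matching note-off arrives."""
    return bytes([NOTE_ON | channel, note, velocity])

def note_off(note, channel=0):
    """Stop a note (velocity byte is commonly 0)."""
    return bytes([NOTE_OFF | channel, note, 0])

# Middle C (note 60) on channel 1: status 0x90, note 0x3c, velocity 0x64.
print(note_on(60).hex())   # → 903c64
```

This is also why a lone note-on produces a never-ending note: nothing in the message itself carries a duration, so the sequencer must schedule the note-off.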
So to play notes in a human sense, we have to have a timer between note on and note off corresponding to the note duration. We can do that with time.sleep, for instance. Another concern is to align notes with the tempo. How do I maintain a tempo? A naive implementation would be to sleep for the duration of a step, but there are problems with that. First, time.sleep also blocks, so that's another thread or another coroutine to handle. And sleeping for a fixed duration and then waking up doesn't take into account the time needed to wake up, so if we just do that, the tempo slowly drifts. The solution for that is simple: calculate absolute times. Like when you set your alarm clock for the morning, you just set a time; you don't calculate the duration of your sleep. For the concurrency part, there are many solutions. I could have used threads, with message queues to avoid shared state. A simpler solution is to use coroutines, with asyncio for example: everything runs in a single thread, so we have fewer concurrency issues, and it's quite okay because our app is I/O-bound — it doesn't use a lot of CPU, it mainly waits for inputs and sends outputs. But the problem is that we would have to modify the underlying library, mido, to insert yield from or await. So the simple solution for me — there are always lots of solutions — was to use greenlets with gevent, which monkey-patches time.sleep. That makes the mido library non-blocking, so I can use it as-is with greenlets. So I went with gevent greenlets. I have a step scheduler, and a note scheduler which schedules note on and note off according to the note duration. I also have input greenlets to wait for messages. So this main process on the left is really I/O-bound. But, strangely, I had to move the printout to a separate process, because that was CPU-bound.
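The absolute-deadline idea can be sketched like this — a simplified loop under my own names, not the project's actual scheduler:

```python
import time

def step_duration(bpm, steps_per_beat=4):
    """Length of one step in seconds, e.g. 0.125 s at 120 BPM in 16ths."""
    return 60.0 / bpm / steps_per_beat

def run(n_steps, bpm, play):
    """Fire play(i) on a drift-free grid: sleep until absolute deadlines
    instead of sleeping for the step duration each time."""
    deadline = time.monotonic()
    for i in range(n_steps):
        play(i)
        deadline += step_duration(bpm)                   # absolute next tick
        time.sleep(max(0.0, deadline - time.monotonic()))
```

Because `deadline` advances by exact increments, any lateness in one wake-up is absorbed by a shorter sleep before the next one, instead of accumulating into drift — the alarm-clock approach from the talk.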
This is an extract of the thing running on a Raspberry Pi. When you really turn the knobs a lot and tweak things in real time, the console printing of the messages takes all the CPU. That's a no-go with greenlets, because you will lose synchronization. So, just a quick drawing with the main classes. We have the base controller class, and my controllers inherit from it. And we have several utility classes, like the rules chain and the steps, of course. So how do I implement a controller? I have to do two things. I have to map messages from the controller — like "this pad is pressed" — to sequencer actions, like "toggle a step". And I also have to send messages to the controller, for live feedback. To interpret events from controllers, there are simple events like the note on we just saw, which are represented by a single message. But others are the result of a sequence of messages — this is in the MIDI specification. So I created a rules chain, with each rule matching an individual message, and a state automaton which keeps track of the rules matched so far. Using this, we have a really flexible rules evaluation engine, with which you can declare lots of things. Here, for example, I call the onCC method when a control change number 74, followed by one with number 27 and value 0, comes from my controllers. The other part is reacting to sequences of events to give feedback or to play notes; this is just a callback system, with an event on the left and callbacks on the right. Okay, so I have 10 minutes to show you things. When the program starts, you see that it recognized the controller, and it's already counting steps. Here I want to define my steps individually, so I will just stop and go to the first step. It's not really practical otherwise. Okay.
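A toy version of that rules chain might look like this. It's my own simplification of the idea — the class name, message shape, and reset-on-mismatch behavior are assumptions, not the project's actual engine:

```python
# Toy rules chain: a list of predicates, one per expected message.
# A cursor (the automaton state) advances as messages match in order;
# when the whole chain has matched, the callback fires.

class RuleChain:
    def __init__(self, rules, callback):
        self.rules = rules        # predicates, one per message
        self.callback = callback
        self.matched = 0          # state: rules matched so far

    def feed(self, msg):
        if self.rules[self.matched](msg):
            self.matched += 1
            if self.matched == len(self.rules):
                self.matched = 0
                self.callback(msg)
        else:
            self.matched = 0      # sequence broken, start over

# React to CC 74 followed by CC 27 with value 0, as in the talk's example.
hits = []
chain = RuleChain(
    rules=[
        lambda m: m["type"] == "cc" and m["control"] == 74,
        lambda m: m["type"] == "cc" and m["control"] == 27 and m["value"] == 0,
    ],
    callback=lambda m: hits.append(m),
)
for msg in ({"type": "cc", "control": 74, "value": 10},
            {"type": "cc", "control": 27, "value": 0}):
    chain.feed(msg)
print(len(hits))   # → 1
```

Declaring a new controller gesture then means writing one such chain per multi-message event, which is what makes the engine flexible.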
So, yeah, I can move from one step to another like this. I'm on my first step; then I play a note, and it is registered on this step. Okay. I will skip a step. Another note. A last note. I'm a bit nervous, because I've created a really great melody which will surely change the world. So I can just play it now. Can you hear it? And I can do variations on it in real time, for example. This note is wrong. Okay, this is not what I had in mind, but... This is really the simplest case. I can modify the pitch, for example. And the tempo too. Okay, that's it for now. Whoa — what a pretty stack trace. We can also do drums. I have a drum machine on the computer, and I've created a virtual MIDI bus, so I can trigger samples — because it really is a sampler. I've prepared something. So there, I will change the bus. Here we go. This is still only a step sequencer — a sequence of steps — but there are 16 steps here, and we just loop through the 8 first and the 8 last. On this controller, which has many pads, I have two rows, so I can just mute some steps and create variations around that. I have another example, in another style. You can create all kinds of simple rhythms, and using the note attributes, you can create a really musical thing, if we can call it that. I can improvise over this one. I didn't lock the thing. Okay, sorry for the live demo. What did I have in mind? We can do a thing with Mozart — sorry for him. This is simply a 32-step pattern. And the last example: you can see I have an output of the parameters. I developed a remote console to look at these parameters in real time. Demo effect. This is a WebSockets example. Okay.
I've got to speed up a little bit. I'm not sure I have the time to explain "why Python" — really quickly, then. I found Python really easy to read and easy to write, which is a benefit for not-too-technical users: when you have to write support for a new controller, that's good. The dynamic features of Python and the plugin system made writing these controllers really easy. And there's a large ecosystem, which allowed me to plug in WebSockets in a few hours. But there are challenges, because Python is obviously not the best choice for real-time computing, and we have real-time constraints here: we can't drift. There were also performance problems on tiny devices like the C.H.I.P. — it runs on a C.H.I.P., which is a $9 computer. Steppy was designed with simplicity in mind, hence the single-threaded execution model. It means we must be lean and use the least CPU possible: we can speed up the rules evaluation engine, and we can stop pretty-printing with large characters, for example by moving the console to another process using WebSockets, like I did — and then I get my CPU back, and that's great. Here are some future plans. I would like to implement chords, which are important for a drum machine, for example, because you usually have a kick and a snare at the same time — that's a chord. And other features: multi-track MIDI, external tempo synchronization, a better web interface, and a rules configuration interface — because for now I have to program the rules in Python, and some rules are really simple. Ableton Live does a great job there: you just put it into mapping mode, you turn something, and it creates a rule. I would like to do something like that, and maybe open it up to other protocols, like DMX for lighting, which can be sequenced the same way. So thank you. We can take one question. The question was: what does the JSON file contain? For now it's just each step with its note attribute, the tempo, things like that. It's just a really quick implementation.
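Based on that answer, the saved file might look roughly like this — a guess at the shape, since the exact schema wasn't shown in the talk:

```python
import json

# Hypothetical shape of the saved sequence: tempo plus one entry per step.
sequence = {
    "tempo": 120,
    "steps": [
        {"note": 60, "velocity": 100, "active": True},
        {"note": 62, "velocity": 100, "active": False},
    ],
}

# Serialize and reload to show it round-trips cleanly.
data = json.dumps(sequence)
restored = json.loads(data)
print(restored["tempo"])   # → 120
```

Loading actual MIDI files with tracks, as mentioned next, would replace this ad-hoc format with the standard one.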
Ideally we would like to load actual MIDI files with tracks. The question was: what about non-binary rhythms, like triplets? Yeah, there are lots of possibilities there, drawing on what existing hardware sequencers do, so I think I could do that. Thanks. Thank you a lot for the presentation, it was very interesting. And if the next speaker could come up, that would be good.