The inspiration for this project grew out of what we do there. One of the core things we do is collect streaming metrics every second: total output, instantaneous output, cadence, resistance from the bike as you're riding. In case you're not familiar with the product, it's a bike that tracks your workouts and your metrics as you work out. So we collect metrics as you ride, and the idea for this project is to use those metrics to create a unique sound experience based on certain input parameters. This isn't a live project; it's a weekend hacking project that grew out of personal interest. The other big inspiration is the use of software synths. I'll talk a little bit about the library pyo, but I'm personally just interested in software synthesizers mimicking what hardware and analog synths have done from the early 20th century on.

So, pyo. Pyo is an open-source project; anyone can look at it and download it. It's been developed since 2009 by a gentleman named Olivier at the University of Montreal. It's in active development; the most recent version, 0.8.0, was released May 15th, and that's the version I'll be using for my live demos. The basic idea is that you can create digital signal processing chains in Python. The actual logic, from what I know about the library, is written in C extensions using Python objects in C, and this diagram from the documentation details how the data gets transferred around and what the different pyo objects control. One thing I will note about this library: it has excellent documentation, tons of tutorials to go through, and a fantastic community.

Last year around November, the creator of the library, Olivier, emailed the community and asked, "I'm just curious, what are people using this library for?" It turns out a ton of people responded, and they're using it for a lot of different reasons. One of the first responders to the thread said they're using it for auditory psychology experiments; interestingly enough, there's a library called PsychoPy that generates sounds for participants to respond to. One of the convenient things pyo offers is the ability to send sound to different channels, so you can send things to the left and right channels to see how people process sound and how they respond to sound input. Another very useful application is teaching: it's very easy, and I'll actually do this in the presentation, to take a math formula and translate it into sound. Another interesting application is artistic projects and interactive sound: because everything in the library is computed in real time, it's very possible to create art that responds to user interaction in some way, particularly in an auditory sense. People are also plugging their instruments in and creating sound effects, programmers and hobbyists alike. The example I have here is Zyne, a software synth built entirely on pyo. And the final one I listed is sound experimentation: pyo is very useful for creating digital signal processing chains that output sound in interesting ways.

So I won't actually play this one for you, but this is the hello world of pyo.
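I won't play it, but here's roughly what that hello-world program looks like. This is a minimal sketch along the lines of the first examples in the pyo documentation, not the exact code on the slide:

```python
import time
from pyo import *

# Boot the server: this connects pyo to an audio backend (portaudio by default).
s = Server().boot()
# Start the server: it begins asking its objects for samples and sending them out.
s.start()

# A single sine wave routed to the speakers; mul keeps the volume low.
a = Sine(freq=440, mul=0.1).out()

# In a plain script you just sleep while the sound plays;
# in a long-running server process this isn't needed.
time.sleep(3)
s.stop()
```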
The basic idea is that you have a server instance from pyo, you boot that instance, which connects it to all of your audio inputs, and then you start the server, which means it starts reading output from stream objects. The basic architecture for this example is that a Sine object is responsible for processing a stream. The server object keeps a buffer alive and has a callback into the Sine object for generating samples and putting them into that buffer; you can specify the buffer size manually. It then connects to, in this case, portaudio. There are other audio backends you can use, such as JACK or CoreAudio on macOS, but portaudio works great. The way portaudio works is that it creates a callback that lets you send it sound samples whenever it requests them. It's basically a callback function written in C that runs close to real time, on a high-priority OS thread, and it calls your server object expecting a certain number of sound samples — another configurable parameter, but something on the order of a tenth of a second of audio.

So I'll start out with this example. The basic idea with these examples is that I'm going to write a little bit of code, play a sound sample, and then build up a more interesting example as we go. And I do apologize, these are live demos, so I hope they go according to plan. This one, the Sine example, is three even harmonics; you can specify the multiplication factor on each harmonic and then send it to the output. This diagram is actually generated by the pyo library with what's called a Scope, which mocks up what an oscilloscope would do in a real environment.

This is a more interesting example. The idea here is that we're going to play that sine sample through a fader, which does exactly what it says: it fades in for a tenth of a second, fades out for half a second, lasts for one second, and also scales the sample down so it's not too loud. The delay then feeds half of that sample back into the original signal and outputs it, which creates an output delayed by about half a second. So I'll play the delay example. You can see that in less than ten lines of code we can already start to create some interesting music, some interesting sounds.

Now, the part I didn't talk about here is that you have to play the fader: you have to start it so it fades in and out. A lot of the examples in pyo use sleep. That doesn't really matter in a server environment, because the server is running the entire time, but if you want to play sounds from a plain script, you have to sleep for a certain amount of time; that pattern is copied straight out of the library's examples. This pattern of playing a signal, letting it resonate, playing it again, and sleeping is so common that pyo provides a Pattern object that plays functions for you. The first argument to the Pattern object is the play method to call; here it triggers the fader, waits about two seconds, and triggers it again. So this is the same example we just saw, just with less code. What I'm trying to illustrate here is how these signal processing chains get more complex: the output of one object is kept in a Python object and then passed as input into another object.
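Here's roughly what that fader-plus-delay example looks like with the Pattern object doing the re-triggering. The exact constants on the slide may differ; treat this as a sketch of the shape of the chain rather than the real demo code:

```python
from pyo import *

s = Server().boot()
s.start()

# Envelope: fade in over 0.1 s, fade out over 0.5 s, one second total,
# scaled down so the result isn't too loud.
env = Fader(fadein=0.1, fadeout=0.5, dur=1, mul=0.3)

# A sine wave whose amplitude follows the fader envelope.
src = Sine(freq=440, mul=env)
src.out()

# Feed half of the signal back through a delay line, producing echoes
# roughly half a second apart.
echo = Delay(src, delay=0.5, feedback=0.5).out()

# Instead of sleeping in a loop, Pattern calls env.play() every two seconds,
# re-triggering the envelope.
pat = Pattern(env.play, time=2).play()

s.gui(locals())
```

The gui call at the end just keeps the script alive and gives you a stop button; in a long-running server process you'd skip it.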
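The LFO example coming up next swaps the fixed delay time for a low-frequency oscillator, and then does the same trick with the cutoff of a low-pass filter. The oscillator ranges here are my own guesses; the point is just that any pyo object can drive another object's parameter:

```python
from pyo import *

s = Server().boot()
s.start()

src = Sine(freq=440, mul=0.2).out()

# A 4 Hz sine used as a control signal: it sweeps between 0.005 s and 0.025 s.
lfo = Sine(freq=4, mul=0.01, add=0.015)

# Using the LFO as the delay time gives a slowly sweeping, flanger-like echo.
flange = Delay(src, delay=lfo, feedback=0.5).out()

# The same idea applied to a filter: a slow LFO sweeping a lowpass cutoff
# between roughly 300 Hz and 2000 Hz.
sweep = Sine(freq=0.5, mul=850, add=1150)
filt = ButLP(src, freq=sweep, mul=0.5).out()

s.gui(locals())
```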
Here's what this one sounds like; I'll play the LFO example for you. One thing you'll notice, similar to analog synths, is that you can pass any pyo object into any input of another pyo object. The delay time on the Delay object here is actually a low-frequency oscillator running at 4 Hz. You can see from the signal down below that the delay is creating this envelope, if you look closely, and by using the low-frequency oscillator as the delay time you can make more interesting sounds, much like on an analog synth. This next one is a more interesting example that illustrates the same point, but it passes a low-frequency sine oscillator into the frequency input of a low-pass filter. It sounds like this, and you can see from the scope that it's rounding out some of the curves in a sinusoidal fashion; some of the higher frequencies are clearly cut out by the low-pass filter.

So, my library, Sound Engage, is a layer on top of pyo, and the idea is that you use external input — in this case I'll be running a Flask app that runs a pyo server — to control parameters in your DSP chain. For this example, instead of a sine wave, I'll use MIDI input via the Python library mido. On macOS it's actually not thread safe to read and write MIDI from different threads of the same process, so I use sub-processes. But the basic idea is that once you get MIDI input into the pyo chain, you can run it through several DSP steps to create music.
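This isn't the actual Sound Engage code, but here's a sketch of the kind of setup I mean: a separate process reads MIDI with mido and hands values to the main process, which feeds them into a pyo chain. The queue plumbing and the single sine voice are assumptions made just for illustration:

```python
import multiprocessing
import mido
from pyo import *

def midi_reader(queue):
    # Runs in its own process: keeping MIDI reads out of the audio process
    # sidesteps the macOS issue with MIDI across threads in one process.
    with mido.open_input() as port:          # default MIDI input port
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                queue.put(msg.note)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    multiprocessing.Process(target=midi_reader, args=(q,), daemon=True).start()

    s = Server().boot()
    s.start()

    # SigTo smooths value changes so the pitch glides instead of clicking.
    freq = SigTo(value=220, time=0.05)
    voice = Sine(freq=freq, mul=0.2).out()

    while True:
        note = q.get()                # block until the reader sends a note
        freq.value = midiToHz(note)   # drive the pyo chain from MIDI input
```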
So I'll start; hopefully this example works, it's a little more complicated. This first run just illustrates the default behavior, the maximum output of the chain, so it's really just a point of reference; then I'll start to show the interactions a little bit.

The parameters you pass into the library start with an input domain. For this input it's an integer somewhere between zero and five, and the library clamps it: if it's over five it counts as five, if it's below zero it counts as zero. The thinking behind that, from a UX perspective, is that you don't want certain parameters driven by an input that could grow infinitely — say volume, which is a dummy example, because you usually wouldn't want to control volume at all; that's kind of a cheap way of controlling engagement. For this chain I put the variable name of the low-pass filter, bq, in the chain, and you specify that variable and which of its parameters you actually want to change.

For the frequency parameter, which is the cutoff frequency of the low-pass filter, the input controls it anywhere between 300 and 2000 Hz, fit linearly across that input domain. You also control the multiplication factor of the low-pass filter, because this isn't a voltage-controlled filter, so the volume stays a little more controlled. That's also one of the reasons we have the compressor stage at the end of the chain: the compressor controls the energy of the signal, so the volume isn't going to change too much.

What it's doing in the DSP chain is branching out in different places at each level. It keeps the original chain the same as long as none of the variables are changing, and then from the output of the chorus object it creates two new chains and combines them at the end with another pyo object, an input fader, so that you don't hear clipping between the two signals when you switch outputs. So yeah, I think that's all I wanted to say about this one. For the demo, the input goes second by second — of course you could go faster than that, because it's a real-time processing engine — and I'll show it staying the same, then rising slowly, falling off, rising really fast, falling off really fast, and then ending on the final output. This is about a fifteen-second sample of what Sound Engage does.

So that's the basic example of the library. Now say you want to add another stage: the difference between this slide and the previous one is that we're adding a delay to the chain. It's a similar idea, but with the delay added you can specify in the library that you want the input parameter to control feedback on the Delay object as well as the low-pass filter; it will feed anywhere between none and half of the signal back into the input. What this does is branch each of the original signals out into four new delayed signals, and the library automatically selects which one to use, moving each pyo object to the correct one and selecting the correct output for the input fader. So I'll pass in the same demo input, but this time with the delay, as it's rising. Well, that's about it for the demos.
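To make the parameter mapping concrete, here's a rough sketch of the clamping, linear scaling, and click-free switching ideas described above. It is not the real Sound Engage internals — the scale helper and the on_metric callback are hypothetical — but Biquad, SigTo, and InputFader are real pyo objects:

```python
import random
from pyo import *

s = Server().boot()
s.start()

def scale(value, in_min=0, in_max=5, out_min=300, out_max=2000):
    # Clamp the raw metric to the declared input domain, then map it
    # linearly onto the parameter range (here, a filter cutoff in Hz).
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

src = BrownNoise(mul=0.2)

# The cutoff glides to new values over half a second instead of jumping.
cutoff = SigTo(value=300, time=0.5)

# Two branches of the chain: a plain low-passed one and a delayed one.
dry = Biquad(src, freq=cutoff, q=1, type=0)   # type=0 is a lowpass
wet = Delay(dry, delay=0.25, feedback=0.5)

# InputFader crossfades between its inputs, so switching branches doesn't click.
out = InputFader(dry).out()

def on_metric(value):
    # Called once per incoming metric (e.g. from the Flask endpoint).
    cutoff.value = scale(value)
    out.setInput(wet if value > 2.5 else dry, fadetime=0.5)

# Simulate a metric arriving once per second.
pat = Pattern(lambda: on_metric(random.uniform(0, 5)), time=1).play()

s.gui(locals())
```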
Some further developments we could do with this library: one is making it configurable so you can control the chain as fast as possible. For example, if you want to send it metrics every tenth of a second instead of every second, you'd want the library to automatically figure out how complex your chain can be; I've experimented with some chains where the stream processing actually isn't fast enough to keep up. So one thing you could do is let the library validate that your stream is actually feasible. You probably also noticed with the low-pass filter that on the first step it jumps to a point where it's suddenly more audible, and you hear much less rounded frequencies between the zero and one steps. If you made that mapping quadratic, or even better exponential, because that's roughly how human hearing works for a low-pass filter, the adjustment would be more interesting; a sinusoidal adjustment is another option. You could also have conditions: for example, if an input stays at a certain level for a certain amount of time, the library automatically plays a sound.

What I want to do with the project next is use derivatives and moving averages to control the sound output, as opposed to just the second-by-second value. So you'd have a one-second value, a two-second value, and a five-second value, and then a one-second derivative, how much the signal is changing over two seconds, and how much it's changing over five seconds. You probably also noticed from the example that I was using the Final Fantasy prelude; it would be very exciting to generate melodies that also respond to the input parameters. And since this is a server-side technology, it would be even cooler if you were in, say, a chat room with four or five people generating input and you grouped their averages together — the basic idea being to use the same kind of input, but for multiple people instead of one.

Pyo objects actually let you access their samples, similar to the way portaudio works, so you could redirect the system audio back to a browser, for example, using the get method on a pyo object. That's one way of doing it; you could also just broadcast your system sound with one of the proprietary tools that do that.

And that's it. Some resources: there's a site a friend wrote called jazz.computer, there's a great wiki page for Python and music, and, as I said before, the pyo-discuss mailing list has a ton of activity if you're interested in pyo. Thanks for your time; I think we're out of time.