So I'm gonna click on a simple example. Let me know if you can hear it. So this is the platform. Great. Making this music happen. So we'll be doing several examples. This is a Greek artist's piece, but I'm showing it off because it's kind of a nice example — just a sanity check of what's going on. So this is WAGs. It's a fistful of comonads on the inside, but on the outside you just get music like that. Here's another example by an artist named Ben Burns, which is a little bit more lo-fi and a little crazier sounding. Sounds like Kalechi techno, which is super fun, in my opinion. So there you go. So that is the platform. It runs on comonads, but I wanted to give you a sense of what it is. It's an in-browser DAW, and it's what we're going to be using for a jam session later today in the conference. So where does it live on GitHub? On GitHub it lives in this library, which is called purescript-wags. You're all free to go clone it and play around with it. There are many examples of it online, and maybe the best place to find the examples is on GitHub.io, where there are all sorts of fun examples. Like here's a delay and flanger line that creates a spooky, 80s-or-Game-of-Thrones-style voice in the browser. So that's that. And now that you see what the project is — an in-browser DAW powered by PureScript — I'd like to go to the comonad side, really digging deep in, and talk about what problem I needed to solve. So before I even get into what a comonad is, I'd like to talk about what problem I needed to solve. The problem I needed to solve, at first, when I started building WAGs, was twofold. One is that you need music to come out. Like, the sound just has to escape. And two, you need to anticipate what's going to happen in the future, meaning that if anything could happen in the future, then it just gets too computationally expensive.
It just takes too much computation, meaning that you have to have a runtime that can anticipate anything. It would be like in baseball: if you ask your center fielder to play the entire field, it's just not going to work. They'll be able to cover it up to a certain point, but they're not going to run to the pitcher's mound or catch a ball where the catcher's supposed to be. So you can't demand that of a runtime, and certainly not of a browser. You need to be able to anticipate the possible moves. So: getting music out, and anticipating the possible moves. And that's not a unique problem in music at all — in fact, it's the most common problem. I have this example of Oscar Peterson, a great jazz musician. If you look at him playing, there are two things happening. There's music coming out. But what he's doing all the time is anticipating, subtly, within milliseconds, the next thing he's going to play, meaning it doesn't come out completely spontaneously. It's based on the key and the style of the music, which in this case is the blues. So there you go. It's a fundamentally musical problem of getting something out — which Oscar Peterson is doing there through the piano — and anticipating what can happen next within a certain realm of bounds. Oscar Peterson is going to do certain things, but he's not going to smash the piano or do something completely outside of bounds, which would be too complex for the runtime of that video. But within those bounds, it's ingenious, of course. And that is the art of not just my musical project, but any musical project. So that's the problem I have to solve: how do I get sound out, and how do I anticipate into the future what the sound needs to be? To solve that problem, I went to comonads, and now what I would like to do... So I have a PureConf 2022 repo, which is on my GitHub. You are more than welcome to clone it. It's under mikesol: PureConf 2022.
I'm going to be pushing to it as I live-code, but there you go — you can at the very least clone the repo at the end. So let's encapsulate those problems that I want to solve in code. The first problem is that we have some type of context. I'm going to call my context w. The context there is like Oscar Peterson's jazz trio, right? So: jazz trio. And inside the context is a sound, and I'm just going to call the sound a for the time being. So I need to get a sound out of my jazz trio — I need to get an a out of a w. Get a sound out of my trio; get an a out of my w. That's problem number one. Problem number two is Oscar Peterson thinking slightly in advance. What am I going to do next? What am I going to do next? So I have my jazz trio, I'm imagining my trio making a sound at some point in the future, and then I need to get to that point in the future. And you see GitHub Copilot is trying to fill it in for me. And it's doing a pretty good job, actually. That's crazy. So anyway, I digress. Now let's give both of these types. The first one is simple. We have w a: a jazz trio with a sound inside of it, and we're just going to get the sound. At the end, if you close your eyes, you don't see the jazz trio anymore, but you hear the sound. The group that's producing the a fades away, and you just have the trace, the artifact. So: w a to a. And what am I going to call this? Extract. I'm going to call it extract — extract the sound. And now I'm going to create something called expand. I'm going to use a similar signature. So I'm going to say w a... comonad — actually, I haven't called these comonads yet, sorry. So I'm going to start with my w a, my jazz trio, as before. And then I'm going to have the function where I'm imagining the future. So here's my jazz trio playing in the future.
And we produce the sound, which is exactly the same as this extract function in terms of its signature. So I'm imagining the future, what it's going to be like. And then when I get there, here's the future. I need to be able to fast-forward to it and actually use it, meaning that if I'm imagining what I'm going to do with my jazz trio and then they turn the lights out and send the audience home, there was no point in doing that imagination. So at the very least I need to get something out that is going to be the future, that I can then call extract on. So we have these two very musical operations, extract and expand, that Oscar Peterson is using. These two operations — to pull off the veil — are the bread-and-butter operations of a comonad: extract and expand. And if you're familiar with monads — there were two talks about monads already in this conference, and I'm sure they'll come up elsewhere; we're not going to use them in my presentation — this might look familiar to you, and that's because they're what people call, in fancy talk, the categorical dual of a monad. So let's look at the monadic versions. Extract is from a comonad and expand is from a comonad. Now, the monad version of extract — and when I say version, I mean the categorical dual, sort of flipping it around — is called pure, or in Haskell-speak, return, if you're more comfortable with Haskell and its Monad. Category theory is one of my favorite branches of mathematics because it feels very playful. You can flip stuff around and you get something for free. And it's the same thing here. So I'm going to flip around extract, and I get pure for free. Usually people write it a to m a, but call it sort of whatever you want. And expand —
I'm going to flip it around, and I get something called bind for free — both Jordan and James talked about bind. So I'm going to use m again: m a, then a to m a, then m a. So my Oscar Peterson comonad terms — their categorical duals are monadic, at a conceptual level. So why is it called "co"? Because in category theory, when you flip the arrows around, "co" is the popular prefix. If you've heard of a product, which is two things happening simultaneously, a sum type is also called a coproduct — two things in an either/or setup. And in general, in category theory, when you want to flip something around, you call it co-that. And perhaps if you say co-co-that, you get the original back, although I've never tried, so perhaps that doesn't communicate the idea. But there you go. So now what I would like to do is build a simple comonad, and then we're going to look at how comonads actually powered the musical examples that you just heard, in the same way that they power Oscar Peterson's jazz trio. The first thing I would like to create is a comonad that I'm going to call MyCofree. I'll get into what cofree means in a second, but because we're going to be working with something called cofree comonads, I'd like to start with that right away. So I'm going to create a newtype, MyCofree, and I'm going to say that there's going to be a functor in there — an arbitrary type constructor. Wow, Copilot is actually filling in something that's almost correct. But it's not quite that. And then the type: the comonad is going to be MyCofree f — that's going to be our w — and a will just be this a. So I'll say MyCofree, and now I'm going to use my musical terms. I'm going to say playingNow — what is Oscar playing now? — which is going to be a. And then I'm going to say inTheFuture, which is going to be f of MyCofree f a.
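Since the talk builds this live in PureScript, here is a minimal sketch of the same type in TypeScript, chosen for these notes because it runs in the same browser as WAGs. TypeScript lacks higher-kinded types, so `f` is fixed here to "exactly one future", and a getter defers construction, playing the role that laziness plays in a lazy language. All names are illustrative, not from the WAGs codebase.

```typescript
// MyCofree with f fixed to "exactly one future": a value playing now,
// plus the rest of the performance.
type MyCofree<A> = { playingNow: A; inTheFuture: MyCofree<A> };

// An infinite "performance" counting upward. The getter means the
// future is only constructed when someone looks at it, so this
// terminates despite being conceptually infinite.
const countFrom = (n: number): MyCofree<number> => ({
  playingNow: n,
  get inTheFuture() {
    return countFrom(n + 1);
  },
});

const w = countFrom(0);
console.log(w.playingNow); // 0
console.log(w.inTheFuture.inTheFuture.playingNow); // 2
```

The getter is one way to defer construction; the talk does it differently, with a dummy argument, which the next sketches mirror with an explicit thunk.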
So if you stare hard enough at this type and you're used to a non-lazy language with strict evaluation, like PureScript, it should immediately freak you out in all the right ways, because we have this recursively defined structure: in order to create this thing, we need to pass it the future, which is this same thing, wrapped in an f. But if the f that we're wrapping it in doesn't have any sort of delayed execution, then we're going to blow up our stack, because MyCofree can only be defined in terms of itself. We'd need some sort of infinite construction, and that's no fun. So to make this a little bit safer — there are a few ways to do it, but the way I'm going to do it for now is just to give it a dummy Unit argument. Let me see if I've imported the Prelude... I have. So here I'm going to say Unit to that, which means we'll be able to defer its construction so we don't blow up the stack. Now that we can do that, let's create a Functor instance for it, which we'll have to do before we create our Comonad. Actually, I could even just derive the functor: derive instance functor MyCofree — let's say, sorry, Functor (MyCofree f). It looks like it doesn't want to do it because, of course, f needs to be a Functor. There you go. OK, so that's just the PureScript compiler deriving what it's going to do. If we write it out by hand, it just calls map on this, and then it calls map on that, and map on that again. It calls map thrice, actually: map once on the function, map on this, map on that. In fact, let's just write it out. Whoa, this is crazy. So we have f, MyCofree, and we're going to say playingNow and inTheFuture. And now we'll just reconstruct it: MyCofree, and we'll say playingNow equals f of playingNow — which Copilot got right. Good.
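The hand-written Functor instance can be sketched the same way in TypeScript (a stand-in for the PureScript being live-coded; the thunk `() =>` here plays the role of the dummy Unit argument):

```typescript
// MyCofree with an explicit thunk deferring the future, mirroring the
// dummy Unit argument from the talk.
type MyCofree<A> = { playingNow: A; inTheFuture: () => MyCofree<A> };

// map applies f to the value playing now, and (lazily, recursively)
// to every value in the future — "map on this, map on that".
const map = <A, B>(f: (a: A) => B, w: MyCofree<A>): MyCofree<B> => ({
  playingNow: f(w.playingNow),
  inTheFuture: () => map(f, w.inTheFuture()),
});

const nats = (n: number): MyCofree<number> => ({
  playingNow: n,
  inTheFuture: () => nats(n + 1),
});

const doubled = map((x) => x * 2, nats(1));
console.log(doubled.playingNow); // 2
console.log(doubled.inTheFuture().playingNow); // 4
```

Because the recursive call sits behind the thunk, mapping never forces the whole infinite structure.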
And then here, we want to map over the function — map over the functor, which is this f — and then map over this. And from there, we say f inTheFuture. And it is flipping out for reasons that I'm not quite sure of... yeah, sure. Sure, sure. There you go. It's because I used the equals syntax; sorry, I was thinking in a different language. So there you go. We had to write it out by hand, and we'll keep it that way just so you can build an intuition for what the functor looks like. It's mapping over the function, over this f, and then this map is being called recursively — this map is this map. There you go. So now that we have this Functor, we're able to turn it into a comonad. I'm going to do that, and then I'm going to flip back to the high level and show you an example of the sound that makes — meaning, once we've turned this structure into a comonad, what that affords us. So again, I'm going to create an instance: instance Extend MyCofree — Extend (MyCofree f) — where extend equals... and remember, our extend here is w a and then this function. And if we click through to Extend — let's import it, go to definition... my VS Code plugin, for some reason, doesn't want to go to the definition, but that's absolutely fine. So we're going to take this here, we're going to put that there, and for this function I'm just going to put a typed hole for the time being. So extend — actually, if I understand correctly, it starts with the function, so we have to flip the order, which we will. There you go. And we need Functor f, which we have. And now we have our typed hole. We need to get another MyCofree out of it, so let's see if we can make it. We will again start with this MyCofree — we're going to make it look quite similar. This f here — I'm going to say mcf for the whole MyCofree.
So I'm going to say playingNow is going to be this f of mcf, because this function gets us an a, and we need the a in there — so we're going to apply it. And then inTheFuture: we have the Unit, which we can just throw away. And because we have this Functor instance in here, we can map over this functor. So we're going to map over inTheFuture. And what are we going to map over it with? Well, extend f. So we're going to say extend, extend — and we need to import that. There you go. We need to wrap it in the right type now, so probably a better way to do that is map: map (extend f). Let's see if that works. That works. And we don't even use this playingNow, so we can just remove it. Okay. So that's extend. What we've done is taken this function and applied it to the MyCofree, so we get something out of it — and in the future, we're going to get something out of it as well. For those who have worked with Cofree, this is essentially called redecoration. We're redecorating the future with this function, kicking the can all the way down. One thing that's worth saying is that if we redecorate too much, then we might create a performance issue, because we're going to have a function on top of a function on top of a function. But for one-offs, it's absolutely fine. So again, to go back to our Oscar Peterson example and my example: this takes a function that could modify the future somehow, modifies it, and then on a rainy day we check it out and actually use it. And then extract: instance Comonad MyCofree — Functor f => Comonad (MyCofree f) — where... we said that it's going to be called extract, and we have playingNow, and wow, GitHub Copilot just totally got that, right? So we just extract playingNow, and we're playing now.
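Putting the two comonad operations together in the same TypeScript sketch (again with `f` fixed to a single thunked future; the real PureScript version is generic in the functor):

```typescript
type MyCofree<A> = { playingNow: A; inTheFuture: () => MyCofree<A> };

// extract :: w a -> a — the sound escaping the context
const extract = <A>(w: MyCofree<A>): A => w.playingNow;

// extend :: (w a -> b) -> w a -> w b — redecoration: f sees the whole
// remaining context, at the present and at every future point.
const extend = <A, B>(
  f: (w: MyCofree<A>) => B,
  w: MyCofree<A>
): MyCofree<B> => ({
  playingNow: f(w),
  inTheFuture: () => extend(f, w.inTheFuture()),
});

const nats = (n: number): MyCofree<number> => ({
  playingNow: n,
  inTheFuture: () => nats(n + 1),
});

// Redecorate: at every point, the current value plus the next one —
// anticipating the future from the present.
const withNext = extend((w) => extract(w) + extract(w.inTheFuture()), nats(0));
console.log(extract(withNext)); // 1  (0 + 1)
console.log(extract(withNext.inTheFuture())); // 3  (1 + 2)
```

Note how `f` receives the whole `MyCofree`, not just the current value: that is what lets it look ahead, which is exactly the "anticipating the possible moves" the talk asks for.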
So that is the music that was coming out of Oscar's fingers, and that's coming out of wags.fm. So this is the setup. I've done this low-level implementation, and I've shown the high level — what the music is — but now I would like to link the two together, so you can see quite concretely, in a very granular way, how this is actually used to create sound. And then I'll conclude by talking about its performance characteristics before I get into the Q&A. So let's go back to an example, and the example I have here is this Bach fugue. I'll play it for y'all so you can hear it. And I have a synth — a custom synth. The synth sounds kind of weird because I have this high-pass filter going up and down on it. I can make the high-pass filter a little faster; it'll sound really wacky in a maybe-fun, maybe-not way. Yeah, let me slow it down a little bit so we can really hear it. Yeah. So there you go: it's a Bach fugue. I should say that the system is quite fast. So if I start it again and speed it up to something faster, it should just work. This will double the speed of what you just heard. It'll sound kind of crazy, but the cofree comonad shouldn't fail us — if it does, I'll be sad. That is twice as fast. There you go. And there are basically no clicks or anything like that. Bach sounds wrong in this range, so we'll bring it back to that. That's a lot of different things going on. Yeah. So I don't have to be sad. But where is the cofree comonad in there? Where is the comonad in there? How is that making it musical? What I'm going to do is dive into the definitions of one of these particular functions. WAGs entirely runs on cofree comonads like that. That is the underlying abstraction that makes the whole entire thing work. Everything works that way: every time sound comes out of the loudspeaker, extract is being called on something at some level.
And I've written some custom things that are not cofree comonads, but just other types of comonads, to make it a bit more performant — but in general, that's what's going on. So let's dive into this function, makePiecewise. Let's look at what it's doing here on this page, and then I'm going to go to the definition and trace it all the way back to cofree comonads. What makePiecewise is doing here — I'm going to make the piece a little bit slower so we can hear it — is creating a piecewise envelope. It's starting at a volume of zero; then at 0.11 seconds it goes up to a volume of 0.4; then it falls back down to a volume of 0.1 at 0.2 seconds; and at 0.3 it goes to zero. So it creates this boop that's going to sound like sort of a key press in our little synth. I've slowed it down so you can hear: dum, dum, dum, dum, dum, dum, dum. If we want to smear it out over time a little bit, we can change it and we'll hear it smear. And now we can bring it back to something a little bit crisper, and we'll hear that. So it's creating this piecewise function of time. How is it able to do that? How does it know what the next value is, and what value we want now, coming out of the browser? How is it able to extract that value? My choice of the term extract is of course on purpose: it's because it uses the function extract. So let's go to the definition of makePiecewise, which I have pulled up here. You see that makePiecewise takes this non-empty list, which is the envelope that I showed you, and has this thing that's an audio-parameter function of time. An audio-parameter function of time takes the current time and how much headroom — how much look-ahead — we have, and gives us an audio parameter under the hood. And an audio parameter is just that: the value that we want at a given time.
And then the offset, in case it's something that's precisely timed — for example, if the audio clock falls now but we need an attack to happen slightly after, we can set that as well in an audio parameter. Actually, let me find a slightly better definition, because I'm realizing now that this one might not show comonads the way I'd sort of like to — but this one definitely does. Sorry, I opened up the wrong definition file. So this piecewise takes time and headroom — the time — and it spits out a cofree comonad where the functor is this function. One really important thing to remember is that a function from a to b has a Functor instance as Function a. So Function a is a functor. So here, a function of time is a functor. So we have this Cofree in the exact same way I set it up in MyCofree here — MyCofree f a here; Cofree here, where our f is Function-of-time and a is the audio parameter. This is what it's spitting out. At time zero — if we go back to our Bach example — our cofree comonad spits out this value of zero; at the next time, it will interpolate between zero and 0.4 and spit out that value. And one question that you might ask is: why not just take a giant structure and constantly map over that structure and do interpolation? Meaning, why do you need to store the value in some intermediary state? The reason is that once things get very long, it's inefficient to do some sort of map or lookup-table method. And one way that we can see that as sort of self-evident is the actual score of this. So let me start the piece again, so you can listen to it. So this is the score. These are all the notes, and this is where they fall in time. The time is quantized by this very small factor, and that's what speeds it up and makes it quite fast. So what if we treated this list — this non-empty array — as a lookup table, instead of treating it like a cofree comonad that spits out the next value over time?
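As an illustration of what such a piecewise audio-parameter function computes, here is a from-scratch sketch — not the WAGs implementation — using a simplified three-point version of the envelope described above:

```typescript
// A breakpoint in the envelope: at time t, the parameter has value v.
type Point = { t: number; v: number };

// Linearly interpolate across sorted breakpoints: this is the "value
// we want at a given time" that the audio parameter carries.
const valueAt = (pts: Point[], t: number): number => {
  if (t <= pts[0].t) return pts[0].v;
  for (let i = 0; i < pts.length - 1; i++) {
    const a = pts[i];
    const b = pts[i + 1];
    if (t <= b.t) return a.v + ((t - a.t) / (b.t - a.t)) * (b.v - a.v);
  }
  return pts[pts.length - 1].v; // hold the last value after the end
};

// Simplified "boop": silence, up to 0.4 at 0.11s, back to zero at 0.3s.
const env: Point[] = [
  { t: 0, v: 0 },
  { t: 0.11, v: 0.4 },
  { t: 0.3, v: 0 },
];

console.log(valueAt(env, 0)); // 0 (at the start)
console.log(valueAt(env, 0.11)); // 0.4 (the peak)
console.log(valueAt(env, 0.5)); // 0 (after the envelope ends)
```

In WAGs this computation lives inside a cofree comonad over functions of time, so each extract yields the current value and each step hands you the function for the next moment.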
So here, under the hood, I transform it. Let me back up: in addition to my envelope being a cofree comonad, my score here is also transformed into a cofree comonad, because, as I said, the whole thing runs on cofree comonads. So here we're extracting the next value. We're extracting this note; and as soon as this note's done, we extract this note in time, and then we extract this note in time, and then this note, and so forth, all the way down the piece. And when it ends, we just recycle and extract again. So what if we didn't do it that way? What if, instead of extracting, we had some sort of lookup table? Well, the naive way to do that would be to cycle through this list with some sort of filter, and when we get to the next value, we use it — which is fine at the beginning of the piece, but now — scroll, scroll, scroll — this piece has tons of notes. Imagine that we're five minutes into the piece, and all of a sudden we're looking over a data structure that contains 7,000 entries to find the next note to play. The thing would crash — meaning the piece would start off okay, but 30 seconds into it, it just wouldn't work in the browser anymore, because you're scanning over this array. Now, there are ways to mitigate that, of course. We could transform our array into a map and use the times as keys, at which point lookup would be logarithmic instead of that awful linear scan that I talked about. But it still wouldn't be great, because we'd still have that map traversal every single time we wanted to get a note out. Whereas the comonad approach is basically O(1): as soon as we finish this note, we call extract on it, and then we call extend to get our next cofree comonad — and what's going to be in our next cofree comonad is this note. So we call extract on that.
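The contrast between the lookup-table approach and the streaming one can be sketched like this (the `Note` shape is hypothetical; the real WAGs score type differs):

```typescript
type Note = { at: number; pitch: number };

// Naive lookup table: scan the whole score every time we need the
// next note — O(n) per query, and the scan gets worse as the piece
// goes on.
const nextNoteScan = (score: Note[], t: number): Note | undefined =>
  score.find((n) => n.at >= t);

// Comonadic stream: the next note is always sitting on top, so
// getting it is just extract — O(1) per query.
type NoteStream = { now: Note; future: () => NoteStream | null };
const toStream = (score: Note[], i = 0): NoteStream | null =>
  i >= score.length
    ? null
    : { now: score[i], future: () => toStream(score, i + 1) };

const score: Note[] = [
  { at: 0, pitch: 60 },
  { at: 0.5, pitch: 62 },
];

console.log(nextNoteScan(score, 0.25)?.pitch); // 62 — found by scanning
const s = toStream(score)!;
console.log(s.now.pitch); // 60 — extract
console.log(s.future()!.now.pitch); // 62 — step, then extract
```

Both answer the same question; the difference is that the stream never re-reads the notes it has already played.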
That's blazingly fast compared to some type of traversal. And again, going back to Oscar Peterson in our YouTube video: I can't claim to know what was going on in his brain during that improvisation, but he was playing this note, and somewhere, at some place, he's lining up either the next note or, potentially, what the next notes could be. It could be one or several things, based on the performance of another musician, or his mood, or lots of stuff. So he's lining it up, which is that extend operation that I was talking about. And the same thing is going on here: we're extracting this value, and then we're using a comonad to extend this value. So we've now seen this happen on two levels of music. We've seen extract and extend work on a piecewise envelope generator — generating the individual envelope that's applied to a single note — and we've seen it work on our score — generating the next note that's going to be used in the score. And the beautiful thing about music, for me at least, is that — people say a monad is a burrito, all the way down; I feel like music is a comonad all the way up. You can start from envelopes, then get to the level of a score, and then get to the level of an entire piece, or an entire audio graph. And now I would like to bring in the name of this package and why it's named as such. It's called PureScript WAGs. Why WAGs? Web Audio Graphs as a Stream. Meaning: if you can take a comonad and stream it at this small level, can the comonad represent the entire audio graph? And the answer — I wouldn't ask the question, of course, if I didn't already know the answer — is yes. So I'm going to open up the Brave browser — actually I don't have it here, so let me really quickly install a Chrome extension. I'm going to search for Audion — a web audio graph visualizer — in the Chrome Web Store, just to show the point.
So, Audion — let me install this really quick. This is made by the Google Chrome team. I had uninstalled it because its performance is awful and I didn't think I would use it in this presentation, but there you go. It's asking me to do all sorts of stuff — that's nice. So let me go back to this, and hopefully it will have Audion installed, and I will be able to show you what I'd like to show you. My claim is that the whole web audio graph is a comonad-powered stream. Let's see if that's the case. So I'm going to open up Audion in here... it looks like it didn't install. That's unfortunate. Let me open up Audion here. There we go — I'll turn it on, manage extensions, and it looks like Audion is on. Audion is a tool in our browser inspector, and we see this Web Audio pane. I'm going to reload this, and now I'm going to press play. My claim is that the whole web audio graph — I mean the whole experience that you're listening to right now — is a stream powered by comonads. But don't take my word for it: look at the visualization and you'll see exactly what's happening under the hood. It's a little slow because it's drawing it, so I'm going to press stop. This graph that you see on the right side of my screen is the comonad ejecting a full web audio graph 60 times a second. So I'll press stop and — sorry, it actually won't stop; it doesn't hold the state, which is kind of unfortunate. So I'll narrate it. You see this audio buffer source, these gain nodes — it's not updating fast enough, but you see that biquad filter in there. This is updating 60 times a second within the browser: it's taking a full web audio graph, with filters, oscillators, the whole nine yards, extracting the one that's now, and then extending into the future what it will be in the future. So my claim was that an individual piecewise function can be powered by a comonad, and a score can be powered by a comonad.
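The render loop just described — extract a full graph each frame, then step into the future — can be sketched like so (a toy `AudioGraph` record and an array stand in for real Web Audio nodes; all names are illustrative):

```typescript
type Stream<A> = { now: A; future: () => Stream<A> };
const extract = <A>(s: Stream<A>): A => s.now;

// A toy audio graph: a gain and a filter frequency that wobbles.
type AudioGraph = { gain: number; filterFreq: number };

const graphs = (t: number): Stream<AudioGraph> => ({
  now: { gain: 0.5, filterFreq: 400 + 100 * Math.sin(t) },
  future: () => graphs(t + 1 / 60), // step one frame into the future
});

// "60 times a second": extract the graph for now, hand it to the
// renderer, then move on to the future.
let s2 = graphs(0);
const rendered: number[] = [];
for (let i = 0; i < 3; i++) {
  const g = extract(s2); // the full graph for this frame
  rendered.push(g.filterFreq); // stand-in for updating Web Audio nodes
  s2 = s2.future();
}
console.log(rendered.length); // 3
console.log(rendered[0]); // 400 (sin(0) = 0)
```

Each frame is an extract; moving to the next frame is just forcing one thunk, so per-frame cost does not grow with the length of the piece.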
And now we've gone all the way up to the entire musical experience, the web audio graph. The graphs are extracted in time at the sampling rate, just like Oscar Peterson is extracting notes from his fingers. I come back to this metaphor a lot, but I think it's so powerful because music functions that way, and any sort of UI functions that way as well. So why not use comonads? And the thing I'd like to close with before we get into the Q&A — and I'm happy to answer really any questions about it — is that the other folks who have looked at comonads use interesting terms for them. I'm reminded of one that Phil Freeman, the creator of PureScript, used: "comonads are the future." He meant it in the prophetic way that it sounds — that the future of programming is comonads — which I personally believe as well. But he also meant it as a pun: comonads are the future, meaning comonads line up what happens in the future, and that's the whole entire point of using them. But they also line up the present, as we saw with the extract operation. So I do believe that comonads are the future — at least they're the future of my future, because I'm all in on this tool that I'm making and diffusing around the world to musicians who jam with me on it and make stuff with it. So comonads are certainly my future, which is how I'm building my business, but they're also a great way to extract the future of an audio state in a really cheap, computationally efficient way. So aside from when I was drawing that graph, where you heard some hiccups, we don't hear any hiccups when you're using comonads, because it's an O(1) lookup — it's just the next thing that's going to happen. So if you structure your whole entire experience that way, then you get those performance characteristics compared to a more naive implementation.
So to summarize: if you're building any type of rendering engine — be it an audio rendering engine like I am, or a canvas-based rendering engine, or even some type of web application where you want to experiment with a different UI framework — comonads are a great, great, great way to power what you're doing. It's a fantastic abstraction that, from a theoretical and aesthetic point of view, is very much linked to some of the most beautiful performative arts, including music, including Oscar Peterson and many others, of course. So thank you very much for checking them out. I'll stop the share, and maybe one thing that we could do now is take questions. So I'm looking at the chat — there's a lot of stuff in there. Yeah, Copilot thinks we're in a Haskell file. Every reader is a co-pilot. That's great. There's a lot of stuff in here. Oh, yeah, thank you very much. Why is WAGs a graph, or why is it a web audio graph? Maybe I'll start with that and then go through one by one — so apologies if I don't get to your question right away, or re-ask it if I don't see it — but I'll start with James Collier, and if you had a question beforehand, please let me know. So: why is WAGs a graph, why web audio graph? At an intermediary level, what I would like to do is turn on my screen share again and look at a way of constructing it that's a bit more explicitly graph-like, so we can see what's going on. I'm going to go to a different example, called synth. A powerful abstraction in the setup that I use is something called mini-notation, which comes from TidalCycles; but underneath the mini-notation, it's setting up the graph. So this example I have here — and let me just make sure that, yeah, it's showing to you right now — shows what one of those graphs is. So here, let me play it, and you'll hear this really flangy sound, which is kind of fun. It's sort of sci-fi-y.
Let me amp up the volume a little bit because it's a little soft — turn it to 1.5. Yeah, now it's — wah. That's sort of fun. PureScript makes fun noises when you ask it to. So why is Web Audio a graph? There are many different types of graphs you can make, but my claim is that this structure is graph-like. It's just a claim that I'm making, but I'll dig into why I'm making it. We have a gain node here. Into the gain node is fed a band-pass filter. This band-pass filter has a couple of arguments going to the frequency of the filter and its Q value. And then what's going into it are these oscillators. These oscillators here — we have oscs, which is a reference to this part of the graph. In these oscillators, we have another gain node, and going into that are a triangle wave, a sine wave, and a sine wave. So now if we're imagining a graph in our head: we have a gain node, into which a band-pass filter is going, into which these oscillators are going. Interestingly, this graph is type-safe. I've done type-level programming to make sure it's type-safe. What does type-safe mean? It means that here, if instead of calling my oscillators oscs I add a lot of s's to it and then press play, the graph won't compile. It'll freak out because it can't find oscs in there. So it's a graph at the type level, meaning I'm using type-level graphs to make sure that if something claims to be in the graph, it actually is. Now, when I fix the type of this record, the graph traverser is able to find that oscs does exist in there, picks it up in the ref, and uses it. And similarly, if I call the ref the wrong thing — oscs with a lot of s's — it will also freak out because it can't find that. So let's make it un-freak-out: oscs. I say "freak out" — sorry, maybe the compiler is actually quite calm; I don't know why I describe PureScript like this. But anyway, it does give me an error.
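The spirit of that type-level check can be shown with TypeScript's `keyof` as an analogy — WAGs does this with PureScript type-level programming, so this is not its actual encoding, and the node names here just follow the example above:

```typescript
// A graph described as a record: node names are keys, so the set of
// valid references is known at compile time.
type Graph = {
  mainGain: { kind: "gain" };
  bandpass: { kind: "bandpass"; into: "mainGain" };
  oscs: { kind: "gain"; into: "bandpass" };
};

// A reference is only legal if the name is actually a key of the
// graph — "if it claims that it's in the graph, it actually is".
const ref = <G, K extends keyof G>(g: G, name: K): G[K] => g[name];

const graph: Graph = {
  mainGain: { kind: "gain" },
  bandpass: { kind: "bandpass", into: "mainGain" },
  oscs: { kind: "gain", into: "bandpass" },
};

console.log(ref(graph, "oscs").kind); // "gain"
// ref(graph, "oscsssss") — rejected by the compiler, not the runtime
```

The `into` fields are themselves constrained to key names, so a mistyped connection also fails to compile — which is the "invalid states rejected by the compiler" property the talk is after.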
All that is to say that in addition to being a graph, it's a type-level graph, meaning it's doing type-level programming in order to verify that the graph is correct. Why am I using type-level programming? Type-level programming is quite complex, but the answer is quite simple: I don't want the audio to fail. When I'm doing a jam session, it's 20 minutes long. I don't want the graph to just explode, to load into the runtime as an incoherent graph, and then for my audio to turn off, the audience to go home, and me not to get paid for the gig. I want the thing to work. And for it to work, we want to make sure that any invalid state (this is what James was saying before, and Jordan as well) is rejected by the compiler and not by the runtime, which is what we saw happen there. So that's my answer to your question, James, about why Web Audio is a graph: my claim is that the thing I just showed you is a graph. And where is it a graph? It's a graph on the term level, meaning it's connecting all the stuff in the Web Audio interface, but it's also a graph at the type level to make sure the music is coherent, I mean, that it has no bugs. So there you go. Next question: I'm curious what infix operators WAGs uses. Many is the answer. To create a scene there's an infix operator that is makeScene flipped, and there are lots of infix operators used all over WAGs, and all over the unit tests as well. Sorry about the baby crying in the background, if you can hear it. I'm not sorry about the baby, but I am sorry about the crying. So here's one, actually no, this is not a WAGs infix operator, but this is a WAGs infix operator that's being used.
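The idea of having the compiler, not the runtime, reject an incoherent graph can be illustrated with a small hedged Haskell sketch using a phantom type index. This is the general "make invalid states unrepresentable" technique, not WAGs' actual encoding; `Rate`, `Signal`, and `toSpeaker` are made-up names.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
-- Hypothetical sketch: a node's output rate is tracked in its type,
-- so only audio-rate signals can be sent to the speaker. Feeding it
-- control-rate data is a compile error, caught before the gig starts.
-- This is the general technique, not WAGs' real type-level graph.

data Rate = AudioRate | ControlRate

data Signal (r :: Rate) where
  Oscillator :: Double -> Signal 'AudioRate    -- produces audio
  Lfo        :: Double -> Signal 'ControlRate  -- produces control data

-- Only audio-rate signals are accepted; the match on Oscillator is
-- exhaustive because Lfo has the wrong type index.
toSpeaker :: Signal 'AudioRate -> String
toSpeaker (Oscillator f) = "playing " ++ show f ++ " Hz"

main :: IO ()
main = putStrLn (toSpeaker (Oscillator 440.0))
-- toSpeaker (Lfo 2.0) would be rejected by the compiler.
```

The point is the same as in the talk: the invalid wiring never reaches the audio runtime, so the music cannot cut out mid-performance because of it.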
So to find them, you could look in the package; they're used all over the place in WAGs, or sorry, in purescript-wags-lib, which is this library. Infix operators are used in the engine that powers WAGs. I call that engine the tidal engine; it's a front end for the middleware. Actually, I'm not sure they're even used in here, but in this engine they're used in functions like these constructors, which are used elsewhere. Anyway, all that is to say that at some level, working with infix operators is useful. Next: I've used comonads to iterate a Game of Life simulation; stepping is redecorating the grid tree. Yes, absolutely, Joseph, that's absolutely true. I think Bartosz talks about that in his blog post on Conway's Game of Life, and that is a way to use redecoration in that context. Linguistically it doesn't quite roll off the tongue, but you're rewriting the future. The future hasn't been written yet, and yet you're rewriting it. What you're really doing is rewriting the potentialities of the future, which is a beautiful metaphor when you think about it. Like if you send a kid to school, it's because you want to rewrite their potentialities; you want to create a better potential for them. So you can rewrite the future insofar as you can rewrite its potential, and that's what redecoration is doing in the case of a Comonad. Next: in extract :: w a -> a, the w a is a web audio graph. Yes, that is exactly what it is. The w a is a web audio graph and everything inside of it. The web audio graph has control data in it; that control data is also comonadic, and it in turn contains control data that is also comonadic. And when extract is called, it just goes all the way down the chain and extracts what you need.
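The signature `extract :: w a -> a` and the "all the way down the chain" idea can be made concrete with the classic stream comonad. This is a minimal self-contained sketch; WAGs' real control-data types are far richer, and the `Comonad` class is defined inline here rather than imported.

```haskell
-- Minimal Comonad class (the real one lives in Haskell's comonad
-- package and in PureScript's Control.Comonad).
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- An infinite stream: the classic comonad. extract is "give me the
-- value now"; duplicate is the stream of all futures.
data Stream a = a :> Stream a
infixr 5 :>

instance Functor Stream where
  fmap f (a :> as) = f a :> fmap f as

instance Comonad Stream where
  extract (a :> _)      = a
  duplicate s@(_ :> as) = s :> duplicate as

-- "Control data that itself contains control data": a stream of
-- streams. Extracting twice goes all the way down the chain.
nested :: Stream (Stream Int)
nested = go 0
  where
    go n    = inner n :> go (n + 1)
    inner n = n :> inner (n + 1)

main :: IO ()
main = print (extract (extract nested))  -- the innermost head: 0
```

Each `extract` peels one comonadic layer, which mirrors how a single top-level extract in WAGs can reach through nested control data.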
Why am I composing Comonads together? So that you can BYOC, bring your own Comonad, meaning that instead of locking folks into one particular abstraction, it takes an arbitrary set of Comonads and then just calls extract on all of them. The reason it's able to do that is the type class: it just expects a Comonad, and the one you bring is your Comonad of the day or of the week, but it uses them under the hood to power WAGs. Reminds me of Kraftwerk. Someone notes that it updates without having to stop the music. Absolutely; if it had to stop, it would be a non-starter for gigging musicians using it in a live performance context. That was very important to me, and it's important to the folks that use it as well. Can you generate a WAGs document from a physical synthesizer? Yes, absolutely. That's the way I generated the Bach example. I have a piano here, and I use it to generate MIDI and then use a Python package called Mido to parse the MIDI. Although you don't even need to parse the MIDI post facto; you could also parse it in real time. On my Twitter feed there are examples of me playing MIDI instruments that are powered by the browser. That's absolutely possible; it's fast, and the latency is sub 15 milliseconds, which is what you need for it to feel like it actually works in time. Next: are you representing the graphs with the PureScript graph library? Oh no, I'm not. I'm representing it with a custom graph maker that is in the WAGs source tree. If I go to this graph folder, all of these audio units, like highshelf filter, highpass filter, gain, these are units in the graph, and the record basically contains a bunch of keys that point to them. As a result, the type of the graph will change depending on what's inside of it, which is why I use type-level programming.
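The BYOC point can be sketched in a few lines: a renderer that demands only the `Comonad` interface works with whatever comonad the caller brings. A hedged sketch with invented names (`renderFrame`, `Env`); the class is again defined inline for self-containment.

```haskell
-- "Bring your own Comonad": the engine only asks for the interface.
class Functor w => Comonad w where
  extract :: w a -> a

-- Any comonad of "audio frames" can be rendered the same way,
-- because rendering only needs extract. (renderFrame is a made-up
-- stand-in for the engine's render step, not a WAGs function.)
renderFrame :: Comonad w => w String -> String
renderFrame = extract

-- Two different comonads a caller might bring:
newtype Identity a = Identity a
instance Functor Identity where fmap f (Identity a) = Identity (f a)
instance Comonad Identity where extract (Identity a) = a

data Env e a = Env e a  -- the "environment" comonad: a value plus context
instance Functor (Env e) where fmap f (Env e a) = Env e (f a)
instance Comonad (Env e) where extract (Env _ a) = a

main :: IO ()
main = do
  putStrLn (renderFrame (Identity "sine at 440 Hz"))
  putStrLn (renderFrame (Env "stage-left" "sawtooth at 110 Hz"))
```

Because `renderFrame` is polymorphic in `w`, swapping in your comonad of the week requires no change to the engine, which is the design choice described above.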
You could also do it on the term level, but by doing it on the term level you lose the benefits of the compiler being able to check that the graph is coherent. So these are the elements that make up the web audio graph. Ah yes, screen sharing has stopped, sorry about that. Let me go back to it really quick. Yeah, I was talking about this: these are the elements of the graph, which are in WAGs' graph audio units. Sorry about that, I lost track of where I was. Next question: does the graph update happen efficiently due to Comonads, or something else? Completely due to Comonads. That is the only way it happens efficiently, and I'll show the part of the code where that is, and I'll turn on my screen share for that, because it's super important to insist that the efficiency comes from the underlying abstraction. And it's not spread over a lot of places either; it's a one-stop shop for that efficiency, which is nice because it makes it really easy to reason about when you're hacking on the library. So it's here, in the control functions, and it is makeScene, which makes the next scene. What makeScene does is it gets a frame with the environment, getFrame; this is the thing that ejects the next value. And when we get the frame, we say what the instructions are (these are the instructions going to the web audio graph, like turn up the gain or start an oscillator), and then we pass the next thing. Next is a WAG, and then we can call makeScene on next and get that. So makeScene is the function that's called, and it just calls Comonads all the way down. All of the efficiency comes from the fact that we call it once, get the instructions, and then we have next, which is a closure around what happens next in the future.
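The makeScene shape described here, instructions for the graph now plus a closure producing the next scene from the next environment, can be sketched in Haskell. `Scene`, `ramp`, and `run` are hypothetical names; the real WAGs scene carries much more structure.

```haskell
-- Sketch of the makeScene shape: each step yields instructions for
-- the audio graph plus a closure over the future. NOT WAGs code.
data Scene env = Scene
  { instructions :: [String]          -- e.g. "set gain 0.5"
  , next         :: env -> Scene env  -- closure producing the next scene
  }

-- A toy scene that ramps the gain toward the environment value.
ramp :: Double -> Scene Double
ramp g = Scene
  { instructions = ["set gain " ++ show g]
  , next         = \t -> ramp ((g + t) / 2)
  }

-- The render loop: take this frame's instructions, feed in the
-- environment, recurse on `next`. Each step is one extract plus one
-- step into the future, "comonads all the way down".
run :: Int -> Scene Double -> [String]
run 0 _ = []
run n s = instructions s ++ run (n - 1) (next s 1.0)

main :: IO ()
main = mapM_ putStrLn (run 3 (ramp 0.0))
-- set gain 0.0
-- set gain 0.5
-- set gain 0.75
```

The efficiency claim in the talk corresponds to the fact that each tick does constant work: emit the current instructions, then hold on to the `next` closure until the following environment arrives.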
So yeah, 100% of the efficiency comes from the comonadic implementation and the genius of the people who invented the way to work with it in Haskell. I know that Edward Kmett worked a lot on it, and a lot of other folks did too. The idea of the pattern is brilliant, and I'm making ample use of it. Next: can I show the max potential of this line of work for musicians, something a normal DAW and sound engineer can't do? Yes, absolutely. I actually just created a Udemy course that is not live yet but imminently will be. So here, if I go to, I have way too many Google profiles, if I go to udemy.com, hopefully this will take me to my instructor page. Yeah, crap, no, it's not there. So my answer to you is yes; otherwise I would do a screen share, but it's not up yet. For the max potential, the full course shows that, but as a simple opening salvo, here's one example from it that takes a single file. This file here, let me click on it and change it. Sorry, it's taking a while to load, not sure why; maybe because the audience is messing around with it, or it could be this thing that I just installed that it's not happy about. Let me reload. Anyway, it's taking a while, but hopefully it'll work okay. Yeah, I have no clue why it's crapping out like that. Let me switch to Safari; maybe Safari will be kinder to me. So anyway, this is all for the max potential question. Here it remixes the file, and it does that just with this comonadic structure. So there you go. Let me turn that off. I do a whole course where I talk about it, but it can push music creation really far, and one of the reasons I created it was to be able to create music in a way that I hadn't done before.
And I found a lot of pleasure in doing that and collaborating with others on it. Then the last question I see here, touching on an earlier question: RX observables are an example of monads but not comonads, though they can be used to make a comonad. I actually don't know enough about RX observables to be able to say, but one thing is for sure: if an RX observable can potentially not contain a value, if that is part of the contract, then it can't be a comonad, because a comonad needs to be able to pay up on demand. You ask for the value and you get it. I think the analog to observables in the PureScript world would be events, and an Event is the same. An Event can't be a monad for the same reason you wouldn't make numbers a Monoid directly: there's no single natural operator, since multiplication and addition could both be it, and you don't know which one to choose. Events are the same way. Events are dealing with time, time is their context, and how you squish time together can be done in a myriad of ways, with no consensus on the way to do it. So as a result, Event is not a monad, but there are many monadic operations you can do with an Event, and I'm pretty sure observables work the same way; they're sort of like events. And because an event might never fire, and an observable theoretically might never fire either, in that way it can be monadic but not comonadic. Okay, and Robert is answering that earlier question on comonads; your answer is absolutely correct, thanks Robert. So yeah, one thing to say that maybe is useful is that it's possible to have monads that are also comonads, but they're a bit, well, I'll use the word trivial.
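The numbers-and-Monoid point above can be made concrete: Haskell's base library refuses to pick one canonical monoid for numbers and instead offers both the `Sum` and `Product` newtype wrappers, letting you choose which combining operation you mean.

```haskell
-- Numbers admit (at least) two lawful monoids, so base ships both
-- wrappers rather than blessing one; the same ambiguity is the
-- argument above for why Event has no canonical Monad instance.
import Data.Monoid (Sum (..), Product (..))

main :: IO ()
main = do
  print (getSum     (foldMap Sum     [1, 2, 3, 4 :: Int]))  -- 10
  print (getProduct (foldMap Product [1, 2, 3, 4 :: Int]))  -- 24
```

Both instances satisfy the monoid laws, so neither is "the" monoid for `Int`; the wrapper names carry the choice.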
I mean, theoretically they're not trivial at all, but they just don't get you a lot of power when you're using them. Identity is the classic example: you can always extract the value out of Identity, and Identity can be a trivial monad as well. But where stuff gets interesting with monads and comonads is when they specialize into their own spheres. Comonads are great for anything front-facing, where you need to be able to extract the value and project into a future. And monads are great for when stuff can hit the fan, like parsing, which could fail in all sorts of ways; there the abstraction is managing the uncertainty of failure all the time, and you need a monad to be able to do that.
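A minimal sketch of that last point, assuming nothing beyond the Prelude: Identity is both a lawful monad and a lawful comonad, and both directions are essentially the identity function, which is exactly why the overlap buys you so little power.

```haskell
-- Identity: the classic type that is both a monad and a comonad,
-- trivially. (The comonad side is given as plain functions here
-- since the Prelude has no Comonad class.)
newtype Identity a = Identity a deriving Show

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)

-- The monad side: bind just unwraps and applies.
instance Monad Identity where
  Identity a >>= f = f a

-- The comonad side: you can always pay up on demand.
extract :: Identity a -> a
extract (Identity a) = a

duplicate :: Identity a -> Identity (Identity a)
duplicate = Identity

main :: IO ()
main = print (extract (Identity 5 >>= \x -> Identity (x + 1)))  -- 6
```

Compare this with the stream or environment comonads, where extract and duplicate do real work; that specialization is where the power comes from.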