Hi, everyone. So it's 12 o'clock, and I'd like to start. I'll repeat what I said before: please keep your microphone muted and your camera off. If you have any questions, put them in the Q&A tab and we'll answer them during question time. And now that that's said and done, I'd like to hand it over to Anupam Jain, our curator. Hi, everyone. I hope everyone can hear me fine; just a quick check. So welcome. Welcome to PureConf, the first conference on the PureScript programming language, and I hope the first of many more such conferences to come. So welcome, everyone. This is a very exciting conference for me, because we've been kicking around this idea of having a conference for PureScript for a very long time. I think in 2020 we were discussing it, and then the pandemic happened, and for various other reasons we couldn't hold it then. But I'm very excited that this year we were able to do it. Lots of people came together to make this happen, so thanks, everyone. A bit more about me: I'm Anupam Jain, the curator of this conference. PureScript is a very special language for me. I'm a web developer, and I think it's one of the best-in-class languages for web development, so I'm super excited. The idea behind this conference was basically to act as a bridge to bring people from the PureScript community together, and especially to connect people who might be evaluating PureScript for production use with people who are actually using PureScript in their day-to-day lives and have a lot of experience with it, so people can learn from that experience and see whether it works for them. It's about bridging the real world with the theoretical, academic world, because pure, statically typed functional programming gets a bad rap for being impractical.
And I think PureScript is a very practical programming language. This is a mixed group; there are lots of people here who may not be from a functional programming or PureScript background, so let me say a few words about PureScript and why I like it, why it's special. PureScript is a language that compiles to JavaScript, and it has a wonderful FFI: it can access all of the JavaScript ecosystem. That brings it down into the real world, because you're able to introduce PureScript piecemeal into your existing JavaScript application. I've been using PureScript since about 2017. At that time, we were building an application that was primarily JavaScript, and we slowly replaced parts of it with PureScript, and it worked out very well for us. I've been a web developer for a very long time, and PureScript is really the only option you have right now for pure, statically typed functional programming for web development without artificial constraints on powerful functional constructs. Being able to replace JavaScript with it was very exciting for me; it's almost like magic. PureScript generates very readable, debuggable JavaScript. You can look at it in the browser, and if you have source maps turned on, you can actually see the PureScript code in the browser. It has a great community, with people who are very helpful. And it has a great IDE story: you can use Spacemacs or VS Code, and those things work wonderfully well, which was an anomaly, at least until a couple of years ago. The language itself is very powerful. It improves upon Haskell in a bunch of ways, and I think it removes some of the historical warts associated with Haskell. So yeah, I'm very excited to be hosting this PureScript conference. As for the conference itself, we have a great lineup of talks for you today.
If I had to pick a theme for the conference, it would be powerful constructs: practicality through power. The way we selected talks today is that they either help you understand a powerful FP construct, for example monads or monad transformers, things that would usually be a bit harder, or maybe even impossible, to do in non-FP languages, or they showcase some of the real-world, practical use cases that use those constructs. So first up, we'll have Jordan Martinez, who will be talking about monads and monad transformers and how to understand them in a very practical way. Then we have a couple of talks covering some very real-world use cases. James Brock will be talking about parsing using monads, and I feel things like parsing, compilers, and interpreters are cases where functional programming in general is best in class, so I'm very happy to have talks that demonstrate some of that, using things like higher-order functional combinators. Then we'll have a talk that I'm very excited about by Mike Solomon, who has a very real-world use case for these constructs, so real-world that he started a company around it: music production. He's going to show us how to use PureScript for music production. And we'll have a talk by Ben Hart, who will be talking about concurrency, like real-world concurrency issues. I feel PureScript's Aff monad and its async and concurrency capabilities are really amazing; when I first encountered them, I was really impressed. The Aff library has a very small API but packs a lot of power, so I'm very glad to have this talk. And we'll have Nate Faubion, who'll be discussing a topic that all of us encounter when we try to do functional programming in a non-functional-programming language: we try to use a recursive algorithm and then we run into a stack overflow.
So he's going to be talking about how to avoid that, how to use recursion in a safe way, and not just tail recursion but a more general class of recursion. I'm very excited about this talk. So yeah, we have a great lineup of talks. Do stick around till the end, because we have a music jam coming up right at the end of the conference, which promises to be a lot of fun. So enjoy the conference, everyone, and I'll hand it back over to Aditya. Thanks, Anupam. For those of you wondering who I am, my name is Aditya. My main languages are JavaScript and Python, although recently I've been learning Haskell, and I'm also going to be an MC for today. We're going to be having folks from all over, and we'll be having a music jam at the end of the conference, like Anupam said. I think I've said this a lot of times, but grab your popcorn, grab your drinks, and relax. Our first speaker for today is Jordan Martinez. Jordan is a core contributor to the PureScript language and he works at Arista Networks. He's here to discuss a topic all of us love to hate: monads. More specifically, how to use monad transformers without understanding them. Do notation is easy and monads are scary, so to use monad transformers without understanding them, we have Jordan to explain them to us. I'm pretty excited, because as a newcomer, using something without understanding it is kind of my jam. Due to time zone differences, Jordan won't be able to join us in person for this talk; however, we will be playing a recording. If you have any questions, type them in the chat window and I'll pick them up at the end. Anupam will also take up your Q&As. So without further ado, let's go ahead. Welcome to Using Monad Transformers Without Understanding Them. My name is Jordan Martinez. I've been involved in the PureScript community for the past couple of years, contributing to the compiler and also doing some documentation work.
I currently work at Arista Networks. Everything that will be talked about in today's talk is covered in this repo. I encourage you to check it out afterwards; there are some additional goodies you might find helpful. Now, in today's talk, my entire goal is to make it possible for you to use the ExceptT, ReaderT, and StateT monad transformers in your code immediately after this talk finishes. First, we're gonna cover why monad transformers exist and what problem they solve, walk through the thought process behind them, and then show you how to use them and avoid some of the common mistakes. Here are a few assumptions I'm making about my audience: one, you understand how to use monads, and two, you understand do notation. If you don't understand that, I encourage you to look through the monad type class hierarchy a bit more, because we're gonna be using this a lot. Okay, so what problem do monads actually solve? We'd write this as a statement in JavaScript, but in PureScript, we'd rewrite it like this. The difference is between one being a procedural statement and the other being an expression that we can evaluate. Now, when I talk about monads, I want to introduce this analogy of the foreground and the background. For example, we have this picture of a cat. The foreground is the actual cat. That's the thing we care about, and it draws our focus. However, the background still has some sort of context that's important. We're not really focusing on it, we don't really care about it, but it's still part of the picture. Similarly, with monads, the foreground is basically the do notation syntax. It's what we see, what we read and write, and what we care about. The background is the bind implementation, the stuff we don't really care about but which is still very important. In the foreground, we have the appearance of a sequential computation. In the background, we have a whole bunch of boxing and unboxing boilerplate going on that we don't have to do ourselves.
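As a quick, concrete taste of that foreground/background split, here is a minimal sketch in PureScript (the names `sumTwo`, `a`, and `b` are illustrative, not from the slides):

```purescript
module Example where

import Prelude
import Data.Identity (Identity(..))

-- Foreground: the do notation we read and write.
-- Background: bind unwraps each Identity box for us.
sumTwo :: Identity Int
sumTwo = do
  a <- Identity 1   -- bind unboxes the 1
  b <- Identity 2   -- bind unboxes the 2
  pure (a + b)      -- re-boxes the result as Identity 3
```

All the unwrapping happens in `Identity`'s bind instance; the do block only shows the values we care about.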
So if we look at a few common monadic types, we can see this in action. Let's look at Identity as an example first. We have a box that holds an A value. We take that value out of the box and pass it into this function f, where this arrow represents what we would see in our do notation, and that produces back another boxed value B. We can then unwrap the Identity box again, take that value out, pass it into this arrow, which again is what we see in our do notation, and that returns a final box that stores a value C. Here we can see the exact same thing, but now it's actual code. We have Identity right here. This left arrow is where the bind implementation handles that unboxing for us, and we get the value one. We do the same thing here to get the value two, and then we take them, produce a new value, and wrap it in a new box. With the next example, we have the Either monad, and we see something similar. In this example, Right functions the exact same way as Identity: if we have a Right a, we get to here; we have a Right b, we get to here and move on, and so on and so forth. However, it's different in that the Either monad allows you to short circuit. So what happens if we don't get a Right? Well, if we get a Left, we never continue on with the rest of our bind calls, because we stop immediately. It's basically a short-circuiting computation. Similarly, if we have a Right the first time and get to this value, and then this produces a Left, we stop here and never actually run this value here. Again, as an example in code, it looks something like this. In the first example, very similar to Identity, there's a box, there's a box, there's another box; nothing really new there. In the next example, we have something different.
We have an initial box of Right, which means this bind call does occur successfully, but then we have a Left here, and that means everything beyond this point doesn't actually get run at all. Everything here and down here, these computations, never actually occur, because we short circuit once we get that Left. Moving on to another example, we have the function monad, which basically takes an input argument, and the monad's output is whatever the output of that function is. For example, we see something like this. We have an input argument. Here's the initial function that produces back a value b. That b value is then passed into this function f, which is what we see in our do notation, our foreground, and that produces back a new function. That function again takes the argument, arg, and produces a new value c. We then take that c value and pass it into the function g, which is what we see in the foreground do notation syntax, and that produces a final function from arg all the way down to d. In code, it looks something like this. Here we can see the input argument, which I'm gonna say has a value of one, and we produce a value of two. Here's that same exact argument again, but now we're producing a value of four. And finally, we take these two previous values and produce some new result with them. So what problem do monads solve? They allow us to use do notation to focus on the stuff that actually matters and let bind handle everything else for us. What we look at seems to be sequential in its order, even though behind the scenes it might not be. As an example, the function monad doesn't really seem to be sequential at all, but that's what it looks like on our side. But what are some problems that monads don't solve? How would we write this using monads?
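For reference, the two monads just walked through can be sketched like this in PureScript (illustrative names, not the slide's exact code):

```purescript
module Example where

import Prelude
import Data.Either (Either(..))

-- Either: a Left short-circuits the remaining binds.
shortCircuits :: Either String Int
shortCircuits = do
  a <- Right 1
  b <- Left "boom"   -- we stop here; the next line never runs
  pure (a + b)       -- overall result is Left "boom"

-- The function monad: the same input argument is threaded
-- through every step by bind, behind the scenes.
addTwice :: Int -> Int
addTwice = do
  b <- (_ + 1)   -- b is the argument plus 1
  c <- (_ + 3)   -- c is the argument plus 3, same argument again
  pure (b + c)   -- addTwice 1 evaluates to 6
```

Both do blocks look sequential, but the sequencing lives entirely in each monad's bind instance.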
We can see here that in most cases, this would be printed, but in this one case, we have an error being thrown, which means we short circuit. And we have a second part here, which handles that error and does something else if that error occurs. Similarly, we have a function f that has a read-only configuration value inside of its body, but that value is never actually defined inside the function's list of arguments. It just suddenly shows up, and we can use it somewhere throughout this computation. And again, how would you do something like this, where you have a value x that refers to the value one at some point in time, later refers to two, and later still refers to five? How do you express these things using monads? The answer, of course, is that we use monad transformers. So why do we use monad transformers? First, adding these effects to pre-existing monadic computations allows us to decouple our code, and as a result, we start getting something like algebraic effects. An example of this is tagless final, where you basically have an interface that you write and you implement it using monad transformers. So we've covered the why and what of monad transformers. Now let's talk about the thought process behind them. How do we actually produce these things? Where do they come from? What's their design? First of all, on a macro level, monad transformers just simulate these effects. They're not an actual runtime system that provides exception handling and throwing, but we can simulate it through various means. On a micro level, they work by making do notation's bind hide even more boilerplate. And that's it. So if we were gonna simulate the effect of this try/catch block, what does that actually mean? Well, the try part can return a value if it succeeds, but in a situation where it throws an error, it returns a different value.
This means there's one of two possible values it could return, which we can simulate by using the Either type. Now what about the read-only value effect? Well, we don't really have a way of just making a value magically appear, but the next best thing is we can say: if you pass that in to me as a function argument, then I can always use it at some point in the rest of the computation. The state manipulation effect is a bit trickier, because we have the monad's pre-existing output that we need to keep, but on top of that, we now also need to return back the state at that point in the computation. So we return back the output and the state. However, that state can change over time, so we also need to take that old state and pass it into the next monadic computation, which then returns its output plus the new state. As an example, this state here will be passed in as the argument to the next bind call right here, which can then change that state, producing a new value here, which is then passed into the next one, and so on and so forth, creating a sort of cycle. This may not make sense immediately, but I will show a diagram later that visualizes this much better. So simulating effects in general sounds really good, in principle at least. There's only one issue, though: have you seen that boilerplate? It gets pretty bad. So if we look at this example with Either, what does it actually look like? It ends up looking something like this, where beforehand we would just have a value of one, but now we have to wrap every single value in a Right in order for this to work. Similarly, if we wanted to do this sort of short-circuiting, error-thrown thing, we need to wrap the value in a Left.
And that just means that everything from this point forward never actually occurs. If we try to simulate this read-only value with a function argument, it looks something like this. Here we see that there's a function argument that we are ignoring, and we just provide back the value two. Here it is again, here it is again, and here it is again. It's this additional stuff we have to write just to get the value two over here on the left side of this left-arrow thing. What about the state effect? How do we simulate this? What does it look like? As we can see, here's the initial state. We pass it into this slot right here, which doesn't show up on the left-hand side of our do notation; it's this slot right here that shows up instead. So this is just the same thing as basically saying pure 3, but with a lot more boilerplate involved. In this computation, we have a value that we're sticking into the output slot, so now it shows up in our do notation. Here, we are changing the value of the state so that on the next bind call it's no longer one; instead, it gets passed through here as four, since it's one plus three. The examples we've seen so far aren't really too bad, but what happens if we compose these effects with one another? Here's an example in JavaScript. We have this value x that equals five. At some point in the future, it's no longer five; it's five plus whatever this read-only value is. We also throw a new error right here and then handle it separately down here if that error is thrown. So what would be the type of this if we're simulating effects? It would be something like this: we have a read-only value, which is accessible to the rest of the computation; we have our state, which will change over time; and the output is either the output or an error. What does that actually look like in PureScript? Here's what it's gonna look like. We have our read-only configuration value.
We've got a state, which is represented by an integer, and an output and error, which are just strings. Here I have my example, where I'm passing in what the configuration value and the initial state would be. In my examples, I've been using this Identity a to represent things, but in this example, I'm actually gonna use Effect, because I want to be able to run this computation. And since Effect can't be used like Identity in this way, I'm gonna use a lowercase effect to show you the idea of boxing this value in an effect. So we have our try block, which is the computation we wanna run, and if it throws an error, the catch block will handle that error and do what it needs to do. I'm not gonna explain all this code, but essentially you can see that this boilerplate gets really boilerplate-y for something that's arguably very, very simple in JavaScript. In principle, this idea of simulating effects with boilerplate sounds really good, but there is no way I'm gonna wanna write this by hand every single time. This is just a pain. And so perhaps this idea is dead; maybe we just shouldn't go with it. But you might think at this point: wait a minute, the real issue here is that we just have all this boxing going on, right? If only there was a way to not deal with that boxing and just focus on the stuff that we actually care about. And then you might remember: wait a minute, don't monads kind of do that? They've got the foreground do notation, which allows us to focus on the things that actually matter, and the background syntax, the bind implementation, is handling that boxing and unboxing boilerplate for us. Well then, is it possible for us to take a monad and sort of transform it in such a way that it adds these effects and still allows us to use do notation? And the answer, of course, is yes.
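To make the boilerplate pain concrete, here is a hedged sketch of hand-simulating Either's short-circuiting without any bind instance doing the work (illustrative code, not the slide's exact example):

```purescript
module Example where

import Prelude
import Data.Either (Either(..))

-- Hand-simulating short-circuiting: every step must wrap its
-- value in Right, and we pattern match on each result ourselves.
manual :: Either String Int
manual =
  case Right 1 of
    Left err -> Left err         -- short circuit
    Right a ->
      case Left "boom" of
        Left err -> Left err     -- we stop here
        Right b -> Right (a + b) -- never reached
```

Every additional step adds another layer of nesting; this is exactly the unboxing that a bind instance can hide.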
So what happens if we stop simulating these effects with boilerplate and start using something like monad transformers to do the same thing? What does that look like? First, we had this idea of the try/catch block. We know that we have a monad that produces some output, but now there's a possibility that it can produce either an output or some sort of error. In order for this to work in do notation, it needs a bind instance. Well, to implement that, we need to wrap it in a newtype, and since we're wrapping it in a newtype, at some point in the future we also want to unwrap that newtype, so we also have this function called runExceptT. Now what does the bind instance look like? Here's what ExceptT looks like. We have a box, Identity, that stores a value that's a Right or a Left. If it's a Right, we can unwrap both the Identity and the Right box and get the value a. We pass it into our function, which we see in our do notation syntax, and that produces back another Identity of either an error or an output. From there, if it's a Left, we short circuit; it just stops immediately. But if it's a Right, we can keep going on to the next computation, and so on and so forth. Now, we may have this idea that allows us to do this computation where there's an error, or an Either, now involved, but we need some sort of way to say: you know what, I got an error, I need to use that short-circuiting capability immediately. We'll call this function throwError, and visually it looks like this. When we next call bind, it just says: take that value, stick it inside of a Left. I have now triggered that short-circuiting computation, so anything beyond this point just won't happen. But on top of that, we also want a way to actually handle that error.
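So far, ExceptT plus runExceptT plus throwError gives us short-circuiting with ordinary do notation. A minimal sketch, assuming the standard purescript-transformers module names (`program` is an illustrative name):

```purescript
module Example where

import Prelude
import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Data.Identity (Identity)

-- throwError sticks its value inside a Left, triggering the
-- short circuit that ExceptT's bind instance implements.
program :: ExceptT String Identity Int
program = do
  a <- pure 1
  when (a < 2) (throwError "boom")  -- short circuits here
  pure (a + 1)                      -- never runs

-- runExceptT program unwraps the newtype: Identity (Left "boom")
```

Compare this with the hand-written nested case analysis: the bind instance is doing all that pattern matching for us.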
So if we do get an error here, we can say: if it's a Left, we can unwrap that Left, get back the error, and pass it into a function that will hopefully give us back a Right, and we can keep on going from there. We call that catchError, and it looks like this. I have a computation here, ma. I run that computation, and I check whether the Either it returns is a Left; if it is, then I can handle it. If it's not, then it produced a successful output, I don't need to do anything, and we can just continue on and be happy. So what does this look like before and after? Here's what we would have to write with boilerplate to get the same effect, and here's what we get to write now that we have a bind implementation for ExceptT. This allows us to focus on the stuff that actually matters. Similarly, this is what we would have to write with ExceptT using all boilerplate, and this is what we now get to use. Let's move on to the next example, the read-only value. We have a monad that produces some sort of output. Now we need to be able to have access to this read-only value at any point in that computation. In order to use do notation, we need a bind instance, which means we need to wrap it in a newtype. And because we wanna get rid of that newtype at some point, we're gonna have a special function called runReaderT that unwraps the newtype and passes the argument to the resulting function. Its do notation will look something like this. We have some argument that gets passed in and produces an Identity of B. We unwrap that box, get back the value, and pass it into this function, which returns a new function from the argument to an Identity of something else, and so on and so forth. But we need a special function called ask that says: when you give me that input argument, just return it in the monad's output so that I can actually access it in do notation.
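The catchError recovery and the ReaderT/ask idea just described can be sketched like this, assuming the standard purescript-transformers module names (the `Config` record and `greet` are illustrative):

```purescript
module Example where

import Prelude
import Control.Monad.Except (ExceptT, runExceptT, throwError, catchError)
import Control.Monad.Reader (ReaderT, runReaderT, ask)
import Data.Identity (Identity)

-- catchError recovers from a Left and lets the computation continue:
recovered :: ExceptT String Identity Int
recovered = throwError "boom" `catchError` (\_ -> pure 0)
-- runExceptT recovered is Identity (Right 0)

type Config = { greeting :: String }

-- ask copies the hidden input argument into the monad's output,
-- exposing it on the left of the arrow in do notation.
greet :: String -> ReaderT Config Identity String
greet name = do
  config <- ask
  pure (config.greeting <> ", " <> name)

-- runReaderT (greet "world") { greeting: "Hello" } unwraps the
-- newtype and supplies the read-only value: Identity "Hello, world"
```

Note that `greet` never lists the configuration in its arguments; `ask` is what surfaces it.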
Remember, the argument is sort of in the background syntax of the bind implementation, and ask exposes it to the foreground syntax of the do notation. Visually, it looks something like this. We have the Identity that produces back a C. We ignore that C and say ask. And ask says: give me back a function that, when you give me the argument initially, just sticks it inside a box, so that now it's exposed in the next call to bind in the do notation, and that's it. So again, what's the before and after of this idea? Here's what ReaderT looks like with all of its boilerplate, and here's what it looks like when we use do notation. This is much, much clearer. Let's work on StateT next. So we have a monad that produces back some sort of output. Now we wrap it in that state idea we talked about beforehand, and in order to use this in do notation, we need to write a bind instance for it. That means we need to wrap it in a newtype and have a special function called runStateT that unwraps the newtype and passes the initial state into that function. If we look at the do notation and how we visualize it, it looks something like this. Here's our initial state, which produces back a box that stores the output of the computation and that state. We then take that box, pass it into the function f, which is what we see in our do notation, and that produces back a new function. If I take that previous state and pass it into that function, it will return back a box with another output and whatever that state now is, and so on and so forth, until we get to maybe like here. At some point, we're gonna wanna take the state that's hidden in the background syntax and expose it to the foreground syntax, because we actually wanna use what that state value is at that point in time. So we're gonna have a special function called get, and all it does is take the state that we have here and stick it in that output slot. But visually, we have state one here.
It still stays in its same slot here, but a copy of it now shows up right here as well, and now we can pass it into our do notation through this call right here. Similarly, we're gonna want a function called put for when we're done using the state, wanna change it, and don't need to worry about its old value anymore. All put says is: I go to my state slot and stick the new state in there, and since I don't have any output in this computation, it's just m unit; there's nothing being returned. So I say: ignore the output from the previous computation, take this new state, and this state now goes right here, and since I have nothing to actually return in my output, it's just unit. So what's the before and after? Here's what StateT looks like with all of its boilerplate, and here's what it looks like just using do notation. So we've talked through the thought process behind how these things work. Now let's actually show you how to use them and some common mistakes to avoid. So when do you use monad transformers? Well, first of all, you don't. If you can just use a regular monadic computation, use that instead. If you do need to use one, the question is how many. If you just have one transformer, it will typically look something like this. Here is an Effect computation where every single effect can return back either a String or some value, and if you want to say, at any point, if one of these fails I want to short circuit, this is where you'd want to use the ExceptT monad transformer with runExceptT. If you need to start using multiple transformers, that's when we start to encode things a bit differently. We use type classes to encode the business logic and then monad transformers to actually implement that logic. Let me give you an example of what that looks like.
Here's a computation that's using these type classes; this is basically the get and put, and this is the throwError and catchError, that we've already covered before. It says: I'm gonna get my initial state; if it's less than three, I throw an error; otherwise, I'm gonna set the state to zero and then return the actual output value of true. Now, this is one interface, because this type class constraint works for any monad that can implement these type classes. And we can transform a monad by wrapping it in the monad transformers StateT and ExceptT to implement this stuff. But notice that we can do either ExceptT and then StateT, or StateT and then ExceptT, so we actually have two possible implementations for this interface. What does it look like if we run both of them? What's the difference between the two? Here's what it looks like. You can see that in the case of ExceptT-then-StateT, it returns back a value like this, and the other way around, it returns a value back like this. There's nothing immediately obvious about why this difference matters. Then, when you get to the errors, you start to see something different. In the ExceptT-then-StateT case, you'll see that there's a Left and some error message, but there's no state. In the StateT-then-ExceptT case, there's a Left, but there is a state. And this is where the stack order matters. By stack, we mean how you wrap the monad transformers around some monadic computation, and the order of that stack matters. In the first one, we have a monad that produces either an error or a tuple of an output and a state. In the second one, we have a monad that produces a tuple of both a state and either an error or an output. On the happy path, everything's fine; we have all of our values because they're all there. On the error path, that's when we start to see a difference between the two stack orders.
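A hedged sketch of that program and the two stack orders, using the standard purescript-transformers module names (`prog`, `lostState`, and `keptState` are illustrative names; the program mirrors the one described):

```purescript
module Example where

import Prelude
import Control.Monad.Error.Class (class MonadThrow, throwError)
import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.State (class MonadState, StateT, runStateT, get, put)
import Data.Either (Either)
import Data.Identity (Identity)
import Data.Tuple (Tuple)

-- One interface: any monad with state and error capabilities can run this.
prog :: forall m. MonadState Int m => MonadThrow String m => m Boolean
prog = do
  n <- get
  if n < 3
    then throwError "state was too small"
    else do
      put 0
      pure true

-- Stack one: StateT over ExceptT. A thrown error discards the state,
-- so with initial state 1 we get Identity (Left "state was too small").
lostState :: Identity (Either String (Tuple Boolean Int))
lostState = runExceptT (runStateT prog 1)

-- Stack two: ExceptT over StateT. The state survives alongside the Left:
-- Identity (Tuple (Left "state was too small") 1).
keptState :: Identity (Tuple (Either String Boolean) Int)
keptState = runStateT (runExceptT prog) 1
```

The result types alone tell the story: one stack puts the tuple inside the Either, so the error path has no state slot at all; the other keeps the tuple on the outside.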
The top one represents the Left error case, which means we lose our state, whereas in the bottom one we still have our state, and that can make a huge difference. As far as I know, stack order really only matters with the ExceptT or MaybeT monad transformers, because they enable short circuiting. The unfortunate thing is that for many computations, ExceptT is one of the first ones you want to have wrapping your base monad, so this issue does show up quite a lot. Since monad transformers work on all monads, what does that actually look like in practice? Here's the exact same program: we have one interface, this program, and two different implementations, but keep in mind that this works for all monadic types. Well, how many monadic types do we have? If we scroll down, we can see that there's a whole bunch of them. We have Identity, we have Maybe, we have Either, we've got Array, we've got Effect. And if I wanted to run this, this is what it would look like. We can see here that we have Identity, which is what we've seen previously. We have Maybe, where everything is wrapped in a Just. We have Either, where everything is wrapped in a Right. We have Array, where it returns back an array of one value. And we have Effect, which actually runs effects. So we've covered how to use this, and we've shown you the mistake of the stack order affecting things. The question is: where do we go from here? Obviously, I didn't cover everything, because the point of this talk is not to fully explain monad transformers; it's just to enable you to use these three by the end of this talk. So I would encourage you to check out this repo and look through more of the content I have there, which covers more material than what I presented. Specifically, you might wanna look at the source directory to see what each transformer and all of its different versions look like, and run some of the examples yourself using the scripts I provided.
And when you get done with that, with a better understanding of monad transformers in general, it would be helpful to read over the capability design pattern if you're not familiar with it already. So in summary: we use expressions, not statements. We take those expressions and wrap them in monad transformers to add the effects. And finally, the stack order matters, and you can use these on all monads, no matter which one. If you have any questions you can contact me through Discord or open up an issue on that repo and I will respond when I can. Thank you. So now that the talk is over, for the next minute please put any questions you have in the chat; I'll pick them up and pass them on. And so yeah, thanks Jordan. If anyone has any questions, please do put them in the chat. I don't believe anyone has any questions, so I will be moving on. Our second speaker for today is James Brock. James works at Cross Compass in Tokyo and is a contributor to the PureScript language. He's here today to talk about monadic parsing. Writing parsers and compilers is a key use case of functional programming, and from personal experience, writing a parser is very difficult. I actually audited my compiler class. That's a testament to the difficulty of writing a parser. So to help us reduce the effort and time required to write one, we have James to talk about monadic parsing in PureScript. My name is James Brock and I work at the AI consultancy company Cross Compass in Tokyo. This is my talk, Monadic Parsers at the Input Boundary, for PureConf 2022. I'm a maintainer of the PureScript parsing library, and the examples in this talk will use syntax and type names from that library. This talk is for an audience with some familiarity with regular expressions and monads. A process running on a computer is isolated from other processes on that computer and from the rest of the world.
On that isolation boundary the process has input and output. The process views its inputs and outputs as either discrete events, like signals and mouse clicks, or as byte streams. The term byte stream includes any string-like thing: file names, web browser form-field values, entire files, or anything else represented in process memory as an array of bytes. We want to talk about byte streams. For output, a process serializes byte streams. Serializing a byte stream is easy: we write a byte stream and we send it. There are no surprises. For input, a process deserializes a byte stream. Deserializing a byte stream is hard. When a process reads a byte stream, it must somehow turn the information in the byte stream into a data structure in the process's native language. Reading a byte stream is difficult because there may be surprises. We expect the byte stream to have a certain structure, but it may not have that structure. A large portion of process bugs, crashes, and security vulnerabilities can be characterized as a process misbehaving when it encounters surprises in an input byte stream. We have many different terms for deserializing a byte stream, including decoding, validating, lexing, tokenizing, and pattern matching. These different terms describe activities which are all essentially similar. We will say that all of this means parsing. It is the act of reading and parsing input byte streams that we will focus on in this talk. We will be talking mostly about Unicode strings, but everything in this talk generalizes to any kind of byte stream. In the year 2022, here are some common methods for parsing an input byte stream. Write an ad hoc parser based on string splitting and regular expressions: this is how we end up writing a process which misbehaves on surprises. It's easy to make mistakes here and forget to handle certain cases. Use a parser generator like Google Protocol Buffers.
This works great as long as the input is in that format, but that is often not true. Use JSON: again, this obviously only works if the input is JSON. The reason why such a huge amount of network traffic these days is JSON is exactly because reading a byte stream is difficult, and good JSON parsers already exist for every language. People would rather mangle their data into JSON than write a custom byte stream parser. And lastly, monadic parser combinators. I'll try to convince you that monadic parsing is the best method for parsing any arbitrary input byte stream, and that this is always the first method you should try when you're reading a byte stream from over the process input boundary. I'll read to you from the essay "Parse, Don't Validate" by Alexis King. Consider, what is a parser? Really, a parser is just a function that consumes less-structured input and produces more-structured output. By its very nature, a parser is a partial function: some values in the input do not correspond to any value in the output, so all parsers must have some notion of failure. Under this flexible definition, parsers are an incredibly powerful tool. They allow discharging checks on input up front, right on the boundary between a program and the outside world, and once those checks have been performed, they never need to be checked again. Haskellers are well aware of this power. A parser sits on the boundary between your application and the external world. That world doesn't speak in product and sum types, but in streams of bytes, so there's no getting around the need to do some parsing. Doing that parsing up front, before acting on the data, can go a long way toward avoiding many classes of bugs, some of which might even be security vulnerabilities. So Alexis King is discussing the abstract idea of what it means to parse something. She does not talk about monadic parsers or any kind of monads at all.
Her emphasis is on taking an unstructured input like a byte stream and turning it into a data structure. That data structure should ideally make illegal states unrepresentable. This is a common proverb in the functional programming world. It's only possible to do this in programming languages which have a strong enough type system that the compiler can check your code for you and prove that certain things will always be true and certain invariants will hold. If we parse our input byte stream into a data structure which makes illegal states unrepresentable, then by producing an instance of the data structure we have provided a proof that our input byte stream is, in some sense, legal. The easiest way to do this is with monadic parsers. A parsing monad is a monad with three features: it knows its position in the input string; it can choose alternate parsing branches based on the contents of the input string; and it can fail in the case that the input string is illegal and cannot be parsed. There are many implementations of monadic parser combinators in many languages, and all of them have these features. These three features are necessary for a parsing monad, and they are also sufficient: any monad which has these three features is a parsing monad. Let's look at an example of matching a pattern with a monadic parser. Here's the same pattern expressed with a regular expression and with a monadic parser. The pattern which we want to match is a string that starts with a lowercase a character and then has either a lowercase b character or an uppercase B character. We also want to capture the b character, whether it was lowercase or uppercase. In the regular expression, we capture the b by surrounding it with parentheses. In the monadic parser, we capture the b character by returning it from the monadic computation. Let's look at the monadic parser computation. It's in the form of a do block.
On the first line of the do block, we match a literal a character and then throw it away by binding it to the underscore variable. We don't need to bind it to a named variable because once we've matched it, we don't need it anymore. Now remember that the parser monad has state inside of it which tracks the current position in the input string. The act of successfully matching the a character will step the current input position forward by one character. On the next line of the do block, we match a lowercase or uppercase b character and bind it to a variable named x. The way we say that x can be either a lowercase b or an uppercase B is by using the alternative operator. The alternative operator is kind of like the "or" operator, which is why we write it as a bracketed vertical pipe character. The alternative operator is a binary operator which takes two parsers as arguments. It first tries the left parser, and if that succeeds, it returns the result. If the left parser fails, it tries the right parser and returns the result of that. So after this alternative parser succeeds, the x variable will contain either a lowercase b or an uppercase B. Then we return the variable x from the monad computation. Now let's modify this parser slightly so that instead of returning the captured b character, it returns a data structure. Let's return a very simple data structure. What is the simplest data structure? It's a single bit, and a single bit contains all the information we need about whether the b character was uppercase or lowercase. So now we've changed the type of the parser to return a Boolean instead of a Char. The parser returns true if the parse succeeded and the b character was uppercase. This is an example of what we mean when we say we want to return typed data structures which make illegal states unrepresentable.
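The two versions of the parser just described can be sketched like this in purescript-parsing style (the names `ab` and `abBool` are assumptions for this sketch; `char` and `<|>` are the library's char primitive and the alternative operator):

```purescript
import Prelude
import Control.Alt ((<|>))
import Parsing (Parser)
import Parsing.String (char)

-- First version: capture the b character itself.
ab :: Parser String Char
ab = do
  _ <- char 'a'               -- match the literal a, then discard it
  x <- char 'b' <|> char 'B'  -- lowercase b or uppercase B
  pure x

-- Second version: capture a single bit, was the b uppercase?
abBool :: Parser String Boolean
abBool = do
  _ <- char 'a'
  x <- char 'b' <|> char 'B'
  pure (x == 'B')
```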
There are only two possible ways to successfully parse this string, and the data structure which we're returning has exactly two possible values, no more. So recall that I said a monadic parser needs three features: the state of the current position in the input string, alternative, and failure. Let's see what happens when the ab parser fails. We give it an illegal string, axxx, and instead of returning a Right and a Boolean data structure, it returned a Left with a description and a position for the error. The error says that it failed to parse because it was expecting a b character at position two. When we're parsing patterns out of a string, a common thing to want to do is pattern repetition. Let's try repeating the ab pattern. We added the asterisk quantifier to the regular expression to repeat the pattern many times. We also defined a new parser named abMany. The abMany parser uses the many parser combinator to match the ab pattern many times. Let's talk about what we mean by a parser combinator. A parser combinator is a normal PureScript function. It is a function which takes a parser as an argument and returns a new parser. The type of the data structure produced by the new parser may be different. In this example, you can see that the ab parser has type Parser String Boolean, because it is a parser from a string to a Boolean. We passed the ab parser as an argument to the many parser combinator, and the many parser combinator returned a new parser with type Parser String (Array Boolean). The data structure produced by the new parser will be an array of Booleans. So we run this new abMany parser, and it matches the ab pattern as many times as it can on the input string, and then returns an array which is true for each of the matched ab patterns which had an uppercase B character. Parser combinators may take more than one parser argument.
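A self-contained sketch of the repeated parser. I'm assuming the Array-returning `many` from Data.Array here, since the talk says the result is an array; the library's own `Parsing.Combinators.many` returns a List instead:

```purescript
import Prelude
import Control.Alt ((<|>))
import Data.Array as Array
import Parsing (Parser)
import Parsing.String (char)

-- Repeat the ab pattern as many times as possible, collecting
-- one Boolean per match (true when the b was uppercase).
abMany :: Parser String (Array Boolean)
abMany = Array.many do
  _ <- char 'a'
  x <- char 'b' <|> char 'B'
  pure (x == 'B')
```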
For example, the alternative operator which we are using in the ab parser to match either a lowercase b character or an uppercase B character is, in fact, also a parser combinator. That alternative binary operator is a function which takes two arguments, a left parser and a right parser, and returns a new parser which tries first the left parser and then the right parser. So parsers call parser combinators and pass parsers as arguments to the parser combinators, which return new parsers. This is how we build up parsers for complicated pattern matching. Let's try writing our own parser combinator. Here is a parser combinator which we have named twice. It is a lot like the many combinator, and in fact it has the same type signature. The difference is that the many combinator will try to match its argument parser as many times as possible, but the twice combinator will match its argument parser exactly two times, no more, no less. The twice combinator takes one parser as an argument, which we have named p. The p parser can be any type of parser, which is what we mean by forall a: the type parameter for the p parser is named a. The twice combinator will first try to match the p parser and bind the result to the name p1. Then it will try to match the p parser again and bind the result to the name p2. If both of those succeed, then it will return an array with p1 and p2. Down below we define and run a parser named abTwice with the same input as before. You can see that the abTwice parser matches the ab pattern two times and returns true when the matched pattern has an uppercase B character. So we can see that regular expressions and monadic parsers both solve essentially the same pattern-matching problem, but the monadic parser is much longer than the regular expression and has a lot more code. Is that bad?
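The twice combinator as described can be sketched directly; this follows the description in the talk, though the exact slide code isn't shown here:

```purescript
import Prelude
import Parsing (Parser)

-- Match the argument parser exactly two times, no more, no less.
twice :: forall a. Parser String a -> Parser String (Array a)
twice p = do
  p1 <- p         -- first match of the argument parser
  p2 <- p         -- second match
  pure [p1, p2]   -- both results in an array
```

Note that `twice` never inspects the input itself; it only sequences its argument parser, which is what makes it a combinator rather than a parser.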
The thing about regular expressions is that they seem like a reasonable and efficient solution for small toy examples like these, but when we take on larger and more complex problems, the advantage of monadic parsers becomes apparent. That's analogous to the situation of pure functional programming. It's difficult to convey the advantage of pure functional programming with small toy examples, because small toy programs are simple in any language. The advantages of pure functional programming become truly apparent when we are maintaining, refactoring, and improving large, complicated computer programs. In the same way, the advantage of monadic parsing over regular expressions becomes apparent when we are parsing more complicated patterns. Let's parse something a little bit harder. How about an email address? We all have a pretty good idea what the format for an email address is. Here is the complete Internet Engineering Task Force specification for the format of an email address. Okay, that's a little bit hard. It's not quite as simple as we might have expected, but it's not too bad. Let's think about what it says. The forward slashes in the syntax mean alternative, which means either the one on the left or the one on the right. The square bracket syntax means that something is optional, so it's okay if that thing is missing. First, the specification says that an address is either a mailbox or a group. Then it says that a mailbox is either a name-addr or an addr-spec. Then it says that a name-addr is an optional display-name followed by an angle-addr. Then it says that an angle-addr is an optional comment-or-folding-whitespace followed by a left angle bracket character, followed by an addr-spec, followed by a right angle bracket character, followed by an optional comment-or-folding-whitespace, or alternately it can be an obs-angle-addr. And it goes on like this. Can we write a monadic parser to parse an email address?
Here's a monadic parser for parsing IETF email addresses, written by Fraser Tweedale and published in the Haskell library purebred-email. This is in the Haskell language and uses the attoparsec parsing library. The syntax for PureScript using the PureScript parsing library would be almost exactly the same. A monadic parser looks a lot like the Internet Engineering Task Force specification. Let's compare the specification and the monadic parser line by line. First, the spec says that an address is either a mailbox or a group. The monadic parser says that an address is either a group or a single mailbox. So the order of the alternative was flipped, but that's fine; I'm sure Fraser Tweedale had good reasons for doing that. Next, the spec says that a mailbox is either a name-addr or an addr-spec. In the monadic parser, I see address spec on the right side of the alternative, and that expression on the left side must be equivalent to a name-addr, I guess. There is an optional display-name. The optional function is a parser combinator which does exactly what you'd expect: it will try to match the display-name pattern one time, but if the display-name pattern is not there, then it skips it. Next, the spec says that an angle-addr is this addr-spec expression surrounded by angle bracket characters and optional comment-or-folding-whitespace expressions, or alternately an obs-angle-addr. The monadic parser says basically the same thing. I don't see the obs-angle-addr alternative in the monadic parser for angle-addr. Maybe we should open an issue with Fraser Tweedale and ask him about this. Finally, the spec says that a mailbox-list is either a comma-separated list of mailboxes or alternately an obs-mbox-list. And the monadic parser also says that a mailbox-list is a comma-separated list of mailboxes. Again, the monadic parser omits the alternative; I don't know why. The point is that the formal spec for RFC 5322 and the monadic parser implemented in Haskell are very similar.
We can see what the monadic parser is doing, and we can ask ourselves reasonable questions about the implementation. Next, let's look at the same RFC 5322 spec implemented as a regular expression. The author of this regular expression claims that it, quote, 99.99% works for parsing RFC 5322 email addresses. And he may be right about that, but how can we tell? The regular expression for RFC 5322 is shorter than the monadic parser, but it's very difficult to read it or make improvements. This is why regular expressions are included in the pejorative Wikipedia article about write-only programming languages. Here's a quote from the article: write-only code is source code so arcane, complex, or ill-structured that it cannot be reliably modified or even comprehended by anyone, with the possible exception of the author. Now, email addresses didn't start out as an Internet Engineering Task Force specification. They were intended to be simple, and they started out simple. In the 1970s, an email address was a username, then an at sign, then the name of a computer. We could parse that with a regular expression. But over time, email addresses became more complicated, as everything does. Monadic parsers will scale and grow in a much more maintainable way than regular expressions. And even small patterns written as a monadic parser will be easier for others to read. We often use regular expressions to scan a string and capture all the patterns which we find. Here we want to find all of the integers in the string 10x2y-3 and split them up. We've done this with both a regular expression and a monadic parser. There are a couple of things to notice here. First, the monadic parser for an integer doesn't just match a string pattern and return a string. It actually converts the string to an integer and returns the integer. We're using the intDecimal parser, which is included in the PureScript parsing library.
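A sketch of that integer-scanning example using `intDecimal` together with the library's `splitCap` helper. The module paths and the exact result container are my assumptions from memory of purescript-parsing, so treat the types as approximate:

```purescript
import Data.Either (Either)
import Data.List.NonEmpty (NonEmptyList)
import Parsing (Parser)
import Parsing.String.Basic (intDecimal)
import Parsing.String.Replace (splitCap)

-- Scan "10x2y-3": every integer pattern is captured as a typed Int,
-- and the text between matches is kept as String.
ints :: NonEmptyList (Either String Int)
ints = splitCap "10x2y-3" intDecimal
-- the captures are the integers 10, 2, and -3, with the leftover
-- text "x" and "y" preserved between them
```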
The splitCap function runs the intDecimal parser on this string and produces for us a fully typed data structure which tells us everything there is to know about the structure of the input string with respect to the integer patterns in the string. The second thing to notice is that we made a mistake when we were writing the integer pattern for the regular expression: we forgot to think about whether we wanted to allow negative integers. Now, it's easy to change our regular expression so that it will allow negative integers, and I'm sure you all know how to do that. What's hard is to take that improved integer regular expression pattern and publish it in a library so that other people can avoid making the same mistake. Of course, regular expression libraries do exist, and they contain useful patterns like the email regular expression which we saw before. But they really don't compose very well. The only way to compose regular expressions together is to concatenate the regular expression strings. This is because, again, regular expressions are a whole domain-specific language which is embedded in some other host language. Regular expressions don't have any of the composition features, like functions and modules, that we expect from general-purpose programming languages. Monadic parsers, on the other hand, are just normal PureScript functions, and they compose very well. We've seen how to compose monadic parsers together by writing parser functions with parser combinators. Remember that the term parser combinator just means a function which takes some parsers as arguments and then returns a new parser. There are many other tricks we can do when we're using monadic parsers for pattern matching. There's one more trick that I wanted to mention in particular, and that is matching recursive patterns. Regular expressions famously cannot parse an input string which has a recursive or tree-like structure. That means that regular expressions cannot parse HTML. They cannot parse JSON.
They cannot match balanced parentheses. All of these things have a recursive, tree-like structure. Here's a monadic parser named balancedParens which matches a balanced group of parentheses. It will pair each open parenthesis with a closing parenthesis and capture the group when the last parenthesis closes. The way we write a monadic parser to parse a recursive structure like this is that we write a recursive monadic parser. You can see that the monadic parser calls itself on the second line, and that is how it tracks its depth in the parenthesis parse tree. I'll read to you from the introduction to the paper Parsec: Direct Style Monadic Parser Combinators for the Real World, by Daan Leijen and Erik Meijer. Parser combinators have always been a favorite topic amongst functional programmers. Burge already described a set of combinators in 1975, and they have been studied extensively over the years by many others. In contrast to parser generators that offer a fixed set of combinators to express grammars, these combinators are manipulated as first-class values and can be combined to define new combinators that fit the application domain. Another advantage is that the programmer uses only one language, avoiding the integration of different tools and languages (Hughes 1989). I want to repeat one of the key points about monadic parsers: they are written in normal PureScript. Parsing input is such a hard problem that when people want to do it, they traditionally use a whole domain-specific language for parsing. The most famous domain-specific language for parsing is regular expressions, but there are many others. There has been a long tradition of general-purpose languages which are so weak that it's impossible to write anything in them which is slightly hard, like parsing an input stream.
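Returning to the balanced-parentheses example from a moment ago, here is a sketch of such a recursive parser. Since PureScript is strict, the recursive self-reference is wrapped in `defer` from Control.Lazy; the exact slide code isn't shown, so details here are assumptions:

```purescript
import Prelude
import Control.Lazy (defer)
import Parsing (Parser)
import Parsing.Combinators (many)
import Parsing.String (char)

-- Match one balanced group of parentheses, recursing
-- into any nested groups between the open and close.
balancedParens :: Parser String Unit
balancedParens = do
  _ <- char '('
  _ <- many (defer \_ -> balancedParens)  -- zero or more nested groups
  _ <- char ')'
  pure unit
```

The call stack of the recursive parser is exactly the push-down behavior that regular expressions lack, which is why this pattern is out of reach for a regex.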
The whole point of the Perl programming language is that it's a weak, imperative-style language with regular expressions built in, so that we can use regular expressions for parsing input byte streams. That's the whole point of Perl, but it doesn't really solve the problem in a satisfactory way. When our computer program consists of two different languages, then we have the usual problem of language interoperation, and the usual solution to language interoperation is to pass strings. A regular expression can capture a pattern in a string, but then it returns the captured pattern back to the host language as a string. And then what do we do with the string? We still have to turn the captured string into a data structure. Remember that we looked at the problem of extracting integers out of some input string. We wrote a regular expression which found integer-looking substrings and we ran it on the input string. The regular expression found some matches and returned the captured substrings. Suppose we then ran a string-to-integer conversion function on a captured substring, and the conversion function failed. Well then, was the captured substring a legal integer string or wasn't it? In monadic parsers, we turn the string into a data structure in our native language during the parse, so there is no ambiguity about whether or not the input string was legal. In this case, the data structure is an integer. That's a very simple data structure, but it is a data structure. If we have a thing which is not an integer, then we can't represent it as an integer. So producing an instance of the integer data structure provides a proof that the input string was legal. Monadic parsers allow us to match patterns in normal PureScript instead of using a domain-specific parsing language like regular expressions. And after we have matched a pattern, we produce a data structure which preserves the proof that the input string was legal. Here is a type for a monadic parser.
This is an excerpt from the 1998 paper Functional Pearl: Monadic Parsing in Haskell, by Graham Hutton and Erik Meijer. On the bottom line, it says that the type of a parser for a data type a is a function from a string to a list of pairs of an a and a string. This simple type definition tells us pretty much everything we need to know about monadic parsers, after we spend some time thinking about it. Like all good math, it has a simple definition but complicated and far-reaching implications. Modern monadic parser libraries usually don't use this exact type definition for a parser, but they use definitions which are equivalent. The essay Revisiting 'Monadic Parsing in Haskell' by Vaibhav Sagar has some good discussion about that. And if that last definition was too prosaic for you, then here is the same definition expressed as a poem by Fritz Ruehr, Dr. Seuss on parser monads: a parser for things is a function from strings to lists of pairs of things and strings. Okay, that's enough theory. We'll stop at the Dr. Seuss level of parsing theory. We don't actually need to know any theory to use monadic parsers, but now you know that the theory exists and that the theory and techniques have been pretty well established since the 1990s. Now, in a JavaScript runtime environment, you can expect a PureScript monadic parser to run at least ten times slower than a JavaScript regular expression. That's just how it is, and that situation probably will not improve much in the future. If your input byte stream will be large and you need to process it quickly, then you might not be able to use PureScript monadic parsers. But all of the techniques we talked about here are also used in Haskell. There are Haskell monadic parsing libraries, such as attoparsec, which run very fast, about as fast as regular expressions, sometimes faster. So if you learn these monadic parsing techniques, then you can apply them in other execution environments very well.
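The Hutton and Meijer type quoted above, transcribed into PureScript for reference (the paper writes it in Haskell; this transcription is mine):

```purescript
import Data.List (List)
import Data.Tuple (Tuple)

-- "A parser for things is a function from strings
--  to lists of pairs of things and strings."
newtype Parser a = Parser (String -> List (Tuple a String))
```

The list encodes both failure (an empty list) and alternative (multiple results), and the leftover string in each pair is the parsing state, which is how this one type provides all three features of a parsing monad.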
Whenever your process receives a byte stream from the world beyond the process input boundary, the first thing you should do is parse that byte stream into a data structure, and monadic parsers are the easiest and most effective way to do that. Unless you already have a parsing library for the specific format of the byte stream, or you are under severe performance constraints, the first technique you should consider for parsing an input byte stream is monadic parsers. Thank you very much for listening to my talk. Hi, thanks for listening to my talk. How's my sound? Can you guys hear me? Great, okay. So, here's a question from Toastal: has the performance of the parsers in PureScript improved recently? I tried to use a lib for CSV, but it crashed my browser because of PureScript's parser being slow. No, they haven't really improved, and it's hard to improve them. I've been working on that; I've tried some different things. It's difficult. There are actually two monadic parsing libraries in PureScript. There's PureScript parsing, which I'm the maintainer of and which I've been talking mostly about today. There's also another one called PureScript string parsers. The main difference with PureScript string parsers is that it does not use monad transformers, which makes it faster. So if you have performance problems, you can try using that one; it will run a little bit faster. But really, regular expressions in a JavaScript execution environment are just going to be faster, because there's a whole just-in-time regular expression compiler that compiles regular expressions in JavaScript down to very fast code and runs them, and there's no equivalent JIT compiler for monadic parsers. So, in general, monadic parsers don't have to be slower than regular expressions. They are for historical reasons, because Google and the WebKit developers have spent years and years of programmer time and millions of dollars in programmer salaries optimizing regular expressions.
And regular expressions in other libraries have been optimized for decades and decades as well, and they're super fast now. Monadic parsers can be fast; in many cases, they are fast. The attoparsec library is fast. Like I said in the talk, it can be as fast as or faster than regular expressions. In a JavaScript runtime environment, that situation is not going to get better anytime soon. Okay, so let's go to another question. James Collier asks: can monadic parsers also parse non-regular languages, like context-sensitive grammars? Yeah, it's a really good question, and I want to talk about that. But let me answer James's other question first, which is: isn't recursive monadic parsing complicated by strict evaluation? It is not. It's not complicated by strict evaluation. That's a really good thing to ask, though. It's a very good thing to ask, especially when you're coming from Haskell into PureScript; sometimes you get bitten by strict evaluation. But in general, no, it's going to be fine. The place where you might expect it to bite you is with the alternative operator. The alternative operator is a function, and PureScript is strict in evaluating the arguments to functions, which means it will fully evaluate all the arguments before it calls the function. So if you have an alternative operator which fully evaluated the left parser and fully evaluated the right parser and then called the function, wouldn't that mean that both parsers get run? Well, no, it doesn't, because it fully evaluates the parser, but it doesn't run the parser. So all the semantics of monadic parsing in PureScript work exactly how you would expect them to work if you're coming from a lazy language like Haskell and Haskell monadic parser libraries. Another question. Oh, your other question was: can monadic parsers also parse non-regular languages, like context-sensitive grammars?
So let me get back to that in just a second. Let me see what other questions there are. In Raku, you can have recursive regex patterns. Yes, right. And that's an extension, but it's an extension that makes the regex parser not regular. So you're losing a lot of the advantages of regular expressions, namely that they can be equivalent to a finite state automaton, but you're keeping the disadvantages of regular expressions, which is the syntax. So yeah, you kind of get the worst of both worlds with recursive regular expressions. Grammars in Raku are interesting too; that's cool. Thanks for the link; I'm not familiar with Raku. Does that mean we can mix in regex for speed? Yes, you can mix in regex for speed. In particular, the PureScript string parsers library has a regex parser, and I have an issue on PureScript parsing because I want to add a regex parser into PureScript parsing, so that you can call regex in the middle of a monadic parser. It's a really good feature. You can do that in PureScript string parsers, and you'll be able to do it in PureScript parsing very soon. What about a monadic parser combinator library that compiles to regex, from Joseph Young? So there's a Haskell library which is called applicative regex. No, regex-applicative; it's called regex-applicative. It is super, super interesting. I can't remember the name of the author off the top of my head. But basically what it is, is a regular expression library written in terms of applicative combinators in Haskell. So it's not monadic; instead it's applicative. Applicative is a slightly weaker algebraic structure, which is kind of like a monad, but in applicative computations you know the shape of the computation in advance, as you do with regular expressions. So you actually don't need a full monad if you're just doing regular grammars.
You can use an applicative, and that's what regex-applicative is all about. That library is very, very interesting: you get all the nice syntax and composability of a monadic parsing library, but what you get out parses regular grammars. Why do all parsers have that byte-to-tuple type, asks t-rex? I'm not sure what you mean by that. A lot of the examples I used in the talk were the char parser, which parses one character. Could you rephrase that question, please? Severe performance constraints; yeah, okay, I talked about that. So I want to go back to James Collier's other question, about whether a monadic parser can parse a context-sensitive grammar, and what he means by that. Let me share my screen here. Can you see this, am I sharing? Yes, this is sharing. Okay, what he's really talking about is the Chomsky hierarchy. Everybody here familiar with Noam Chomsky? This guy, he's super famous, and this is what he's famous for. The Chomsky hierarchy provides a hierarchy of grammar complexity. Regular expressions can parse regular languages, which are type 3, the weakest level of the Chomsky hierarchy. Monadic parsers can parse the next level up, type 2, context-free grammars, because they're normal functions, so they can be recursive; they can call themselves like any normal function in PureScript. And because they're recursive, they are effectively the same as a non-deterministic pushdown automaton. A pushdown automaton is just an automaton with access to the data structure known as a stack, where you can push things and then pop things. Your call stack becomes the data structure which provides the pushdown-automaton capability, and so you can parse type 2, context-free grammars.
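The classic example of a context-free, non-regular language is balanced parentheses, which no single regular expression can match in general. A recursive-descent sketch in Python shows how the call stack does the pushdown automaton's job:

```python
# Balanced parentheses: context-free but not regular. The parser
# function calls itself for each nested group, so the call stack plays
# the role of the pushdown automaton's stack. Illustrative sketch.

def parse_balanced(s, i=0):
    """Parse a run of balanced '(' ')' groups starting at index i.
    Returns the index just past what was consumed."""
    while i < len(s) and s[i] == "(":
        j = parse_balanced(s, i + 1)       # recurse into the group
        if j >= len(s) or s[j] != ")":
            raise ValueError("unbalanced")
        i = j + 1
    return i

def is_balanced(s):
    try:
        return parse_balanced(s) == len(s)
    except ValueError:
        return False

print(is_balanced("(()())"))   # -> True
print(is_balanced("(()"))      # -> False
```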
Now, there's a way to get to type 1 with monadic parsers too, and that is through monad transformers, which is what Jordan's talk was about. I didn't mention this during the talk, but all the parser types in purescript-parsing are monad transformers, which means you can add new monadic capabilities to your parsing monad. You can bring any context into the computation that you want. Take any kind of monadic capability, like a state monad: you can transform your parser monad with a state monad, bring some state context into your computation, and then use that context to make decisions about your parsing. That makes your parsing context-sensitive, and then you can parse type 1 grammars using monadic parser combinators. Type 1 grammars include most computer programming languages, for instance. So if you want to parse a computer programming language, you will need a parser that can handle type 1 grammars, and monadic parser combinators with monad transformers can do that. Any other questions? Oh, t-rex's question: the type, Parser a equals a function from strings to lists of pairs of things and strings; you said this always holds true and is proven, so why is it true? It's not exactly a theorem, it's a type. If you have a parser with this type, then it gives you all the capabilities that you need for a parsing monad: it gives you alternatives, failure, and parsing state. That's why it's a parsing monad. And like I said during the talk, that's not the exact type used in purescript-parsing, but purescript-parsing uses a type which is equivalent. The best discussion of this is Vaibhav Sagar's essay, "Revisiting 'Monadic Parsing in Haskell'". It's a really good treatment of this question.
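A simple way to feel the "context" step is a length-prefixed field: the number you just parsed determines how many characters to consume next, something a fixed regular expression cannot express in general. A hedged Python sketch (a hypothetical netstring-style format, not a purescript-parsing example):

```python
# A parsed value (the length) decides what the rest of the parse does.
# This value-dependent behavior is exactly what monadic bind -- and,
# with a state transformer, richer context -- buys you.

def parse_digits(s):
    """Consume leading digits, return (int value, rest)."""
    i = 0
    while i < len(s) and s[i].isdigit():
        i += 1
    if i == 0:
        raise ValueError("expected digits")
    return int(s[:i]), s[i:]

def parse_netstring(s):
    """Parse '<len>:<payload>' where <len> characters follow ':'."""
    n, rest = parse_digits(s)
    if not rest.startswith(":"):
        raise ValueError("expected ':'")
    payload, rest = rest[1:1 + n], rest[1 + n:]
    if len(payload) != n:
        raise ValueError("payload too short")
    return payload, rest

print(parse_netstring("5:hello!"))   # -> ('hello', '!')
```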
Aditya says: for a moment, I was wondering when Noam Chomsky became a mathematician. He kind of is a mathematician. The reason he's so famous, the reason he's the most cited living scholar and the most famous public intellectual in America, is that he brought this mathematics of computational logic into the field of linguistics, applied it, and began answering questions which no one had ever been able to give good answers to before. Noam Chomsky actually thinks that the difference between humans and animals is that human brains are essentially Turing machines: that they have a grammar-generation organ which is subject to the fundamental theorems of computer science. That's really what Noam Chomsky is all about, and that's why he's super famous. You should go down a Wikipedia rabbit hole about that; it's super interesting. Let me show you some stuff here. Let me show you an example of what it looks like to pull in context. Actually, maybe my question-and-answer session is running over here. Aditya, tell me if you want to just go to the next talk, okay? So this is from the documentation for my Haskell Megaparsec version of the replace library. This replace library is a library that I've written three times, and I alluded to it in the talk. It's not part of the basic monadic parsing library; it's an add-on library which provides some extra features. I basically wrote this library by going through the Python re module, which is really good: it provides all the features you could want for regular expressions in Python, and it's really ergonomic and really useful.
And so I just went through that whole module and made sure that everything you can do with regular expressions in Python, you can also do with Haskell or PureScript. Every time I found something you couldn't do, I added it to this family of libraries that I wrote and published. There are three of them: there's purescript-parsing-replace, and then there are the Haskell libraries replace-megaparsec and replace-attoparsec, for those parsing libraries. This example from the replace-megaparsec documentation shows how you can pull a state context into a monadic parsing operation in order to parse context-sensitive grammars, in order to solve context-sensitive parsing problems. As you maybe remember from Jordan's talk, you apply a monad transformer and pull in state, you use that as a base monad for your parsing monad, and then you have the state available to you while you're doing the parsing computation. This lets you do the same thing you would do in Python with re.subn. In Python, re.subn performs the same operation as re.sub but returns a tuple which also tells you the number of substitutions that were made. And what is sub? Sub is search-and-replace: it searches through a string and replaces everything, but you can also tell it to stop after a certain number of substitutions and then report back how many substitutions were made. In Python, they had to provide a whole dedicated function in the library for this, the re.subn function. But with these monadic parsing libraries and monad transformers, we don't need to provide a library function; we don't need to add anything to the API.
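For reference, this is the behavior being recovered: Python's re.subn is re.sub plus a substitution count, and its count argument caps how many replacements are made. The replace libraries get the same effect with a StateT-style counter instead of a dedicated API function.

```python
# re.subn returns (new_string, number_of_substitutions); `count`
# limits how many replacements happen before it stops.
import re

result, n = re.subn(r"\d+", "#", "a1 b22 c333 d4", count=2)
print(result, n)   # -> a# b# c333 d4 2
```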
This is just an example which shows you how to do it, because parser monad transformers already give you all the capabilities you need for an operation like that. It helps to see an example, though; it's hard to figure out if you've never done it before. So there's an example of that provided in replace-megaparsec, and there's also an example in the purescript-parsing-replace library. So I think I'm going to stop there; that's all I have to say. Did I miss any questions while I was talking? Oh: how easy is it to parse binary data in PureScript? Great question. Super easy, super easy. On the purescript-parsing library page, go down here to related packages: purescript-parsing-dataview. If you want to parse binary array buffers, you can use this library, which goes with purescript-parsing. So that, I think, is a good place to stop. I'm going to stop sharing. Aditya, what do you say? Yeah, I mean, that was awesome, I'll be honest. Thank you. Okay, before we move on, there's actually a question that James... I think, Anupam, you're going to have to take this question. James Collier was asking on YouTube, regarding Jordan's talk: is the StateT monad transformer related to the ST monad? Anupam? Oh, sorry, that's a monad transformer question. Jordan, do you want to take that one? You can take that one too, James. Okay. No, it's different. That's confusing, because they sound the same, but they're different monads. The ST monad is like an effect monad for when you just want to have some local effects and do some mutation. The ST monad is basically a performance trick: if you have an algorithm where you want to do some mutation, you do it in an isolated, localized area of your program.
And then you get the result out of that; that's what the ST monad is for. The StateT monad, on the other hand, is a monad transformer for carrying state along in a monadic computation. James is on YouTube, so James, I hope that answered your question. So, is there anything else? Can we move on to the next talk? Okay. I'm going to turn off my video. Thanks a lot. Right. So our third speaker for today is Mike Solomon. Mike is passionate about music and functional programming. To combine his two passions, he created his own startup, and he also created an open-source music production platform that uses PureScript. As we've seen, monads are scary, and so obviously we'll be talking about comonads, which are even scarier. In his talk, Mike is going to demystify some of the concepts and make them a bit easier for us, especially me. Great. Hey, so I'm going to be doing this talk live, for a couple of reasons. One is because conceptually this stuff is quite hard; it took me a long time to get it, and I'd like for people to be able to ask questions as I go along. The second reason is that it's going to use live coding. Even though we're going to take certain comonads as a given at a certain point, we're going to start from a blank PureScript document and code them from the ground up, with the hope that that builds intuition a little bit more for the problems they can solve and also their performance trade-offs. But before doing that, I'd like to do a couple of examples. First, I'd like to do a sound check, because as Aditya mentioned, I'm building a music platform based on an open-source PureScript library called WAGS. It wouldn't be a musical presentation without music, and music, unless you're John Cage, requires sound. So I'm going to try to play something back via Zoom and see if you can hear it. The first thing I'm going to do is a screen share: I'll hit share sound, share. And immediately it's asking for system preferences.
So it's good that we're doing this now. Yeah, when you upgrade macOS, you find out about these things the old-fashioned way. Okay, actually, Aditya, it's asking me to quit and reopen Zoom. Is it okay if I see you in 30 seconds? You can kind of improvise during that. Yeah, sure, go ahead. Sorry about that; it's a new setting when I upgraded macOS, it says I have to quit and reopen Zoom to share my sound. So I will see you in one second. Yeah, sure. If anyone's curious, I've never actually used a Mac in my life; I don't have a friend who uses one. I use Pop!_OS, so if you use anything other than, like, Ubuntu or Mac or Windows, put it in the chat. I've been looking for a new Linux distro. Someone says Manjaro; and okay, Pop!_OS is a good choice. Yeah, actually one of my friends recommended it. NixOS, okay, I've heard of it; I've never actually seen it. I remember reading about Manjaro, that it's built on top of Arch and has rolling releases, which is what my friend was reading about once. I actually tried to torrent Manjaro once. I mean, I was using BitTorrent to download it, and it was downloading at like five kilobytes per second. A two-gig file at five kilobytes per second. I think it was going to take me like three years, so that's why I did not do it; I just gave up. I actually once tried to use an iPhone, and I think I almost smashed it across the room, because my phone has custom launchers. I'm also using Pop!_OS at home; sadly, I have to use Windows at work. Windows updates, man, Windows updates. That's the only reason I changed to Pop!_OS. I used KISS Launcher. Actually, I've been using EV for some time, but EV got removed from the Play Store, though.
So yeah, that's kind of the reason why I'm not using EV; EV was really simple. Mike is back. Yeah, sorry about that. And I apologize, I should not have updated macOS yesterday; I should have realized it was going to do the permissions dance with me. But it should be fine now. So I'm going to click on share, and I'm going to click on a simple example. Let me know if you can hear it. So this is the platform. Yeah, I can hear it. Great. So this is all PureScript making this music happen. We'll be doing several examples. This one is by a Greek artist; I'm showing it off because it's a nice example, and as a sanity check of what's going on. So this is WAGS: it's a fistful of comonads on the inside, but on the outside you just get music like that. Here's another example, by an artist named Ben Burns, which is a little more lo-fi and a little crazier sounding. It sounds like glitchy techno, which is super fun in my opinion. So there you go. That is the platform; it runs on comonads. I wanted to give you a sense of what it is: it's an in-browser DAW, and it's what we're going to be using for a jam session later today in the conference. So where does it live on GitHub? On GitHub, it lives in this library, which is called purescript-wags. You're all free to go clone it and play around with it. There are many examples of it online, and maybe the best place to find them is mikesol.github.io slash wags, where there are all sorts of fun examples. Here's a delay and flange line that creates an 80s, Game of Thrones-style spooky voice in the browser. So that's that. And now that you've seen what the project is, an in-browser DAW powered by PureScript, I'd like to go to the comonads side, really digging deep in and talking about what problem I needed to solve.
So before I even get into what a comonad is, I'd like to talk about what problem I needed to solve. The problem I needed to solve when I started building WAGS was twofold. One is that you need music to come out; the sound has to escape. And two, you need to anticipate what's going to happen in the future, because if anything at all could happen in the future, it just takes too much computation; you'd need a runtime that can anticipate anything. It would be like baseball: if you ask your center fielder to cover the entire field, it's just not going to work. They'll be able to cover it up to a certain point, but they're not going to be able to run to the pitcher's mound or catch a ball where the catcher is supposed to be. You can't demand that of a runtime, certainly not a browser. So you need to be able to anticipate the possible moves: getting music out, and anticipating the possible moves. And that's not a unique problem in music at all; in fact, it's the most common problem. I have this example of Oscar Peterson, a great jazz musician. If you look at him playing, there are two things happening. There's music coming out. But what he's doing all the time is anticipating, subtly and within milliseconds, the next thing that he's going to play. It doesn't come out completely spontaneously; it's based on the key and the style of the music, which in this case is the blues. So there you go: it's a fundamentally musical problem of getting something out, which Oscar Peterson is doing through the piano, and anticipating what can happen next within a certain realm of bounds. Oscar Peterson is going to do certain things, but he's not going to smash the piano or do something completely outside of bounds, which would be too complex for the runtime of that video.
But within those bounds, it's ingenious, of course, and that is the art of not just my musical project, but any musical project. So that's the problem I have to solve: how do I get sound out, and how do I anticipate into the future what the sound needs to be? To solve that problem, I went to comonads. And now what I would like to do: I have a PureConf 2022 repo, which is on my GitHub; you're more than welcome to clone it. It's mikesol slash PureConf 2022. I'll be pushing to it as I live-code, but there you go, you can see the repo, and at the very least clone it at the end. So let's encapsulate the problems I want to solve in code. The first problem is that we have some type of context; I'm going to call my context w. The context there is like Oscar Peterson's jazz trio. And inside the context is a sound, and I'm just going to call the sound a for the time being. So I need to get a sound out of my jazz trio: I need to get an a out of a w. That's problem number one. Problem number two is Oscar Peterson thinking slightly in advance: what am I going to do next? So I have my jazz trio, and I'm imagining my trio making a sound at some point in the future, and then I need to get to that point in the future. And you see GitHub Copilot is trying to fill it in for me, and it's doing a pretty good job, actually. That's crazy. Anyway, I digress. So now let's do a type. The first one is simple: we have w a, a jazz trio with a sound inside of it, and we're just going to get the sound. At the end, if you close your eyes, you don't see the jazz trio anymore, but you hear the sound. The group that's producing it fades away and you just have the trace of the artifact. So: w a to a.
And I'm going to call this... what am I going to call this? Extract. I'm going to call it extract: extract the sound. And now I'm going to create something called extend, with a similar signature. So I start with w a, my jazz trio. That's slightly better. And then I have the function where I'm imagining the future: here's my jazz trio playing in the future, and we produce the sound, which is exactly the same as this extract function in terms of its signature. So I'm imagining the future, what it's going to be like, and then when I get there, here's the future. I need to be able to fast-forward to it and actually use it. If I'm imagining what I'm going to do with my jazz trio and then they turn the lights out and kick the audience out, it doesn't make sense; there was no point in that imagination. So at the very least, I need to get something out that is going to be the future, that I can then call extract on. So we have these two very musical operations, extract and extend, that Oscar Peterson is using. These two operations, to pull off the veil, are the bread-and-butter operations of a comonad: extract and extend. If you're familiar with monads, and there were two talks about monads already in this conference, this might look familiar to you, and that's because they're what people call, in fancy talk, the categorical dual of monads. So let's look at the monadic functions: extract is from a comonad and extend is from a comonad, and now let's look at the monad version. When I say version, I mean the categorical dual, sort of flipping it around.
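The two signatures just described, extract (from w a to a) and extend, can be made concrete with a toy comonad. A minimal Python sketch, using a nonempty list as the context w, purely for illustration, not the PureScript development from the talk:

```python
# extract returns the focused (first) element -- the sound now.
# extend applies a whole-structure-to-value function at every suffix,
# "redecorating" the structure with anticipated futures.

def extract(w):
    """w a -> a: the sound playing now."""
    return w[0]

def extend(f, w):
    """(w a -> b) -> w a -> w b: apply f at every future of w."""
    return [f(w[i:]) for i in range(len(w))]

notes = [60, 62, 64, 65]
# Anticipate: at each moment, average the notes still to come.
anticipated = extend(lambda future: sum(future) / len(future), notes)
print(extract(anticipated))   # -> 62.75 (average of all four notes)
```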
It's called pure, or in Haskell-speak, return, if you're more comfortable with Haskell, and it belongs to monad. Category theory is one of my favorite branches of mathematics because it feels very playful: you can flip stuff around and you get something for free, and it's the same thing here. So I flip around extract and I get pure for free: a to m a. Usually people call the type constructor m, but you can call it whatever you want. And extend, I flip it around, and I get something called bind, from monad, for free. Both Jordan and James talked about bind; here I'll use m again: m a, then a to m a, to m a. So the categorical duals of my Oscar Peterson comonad terms are monadic at a conceptual level. Why is it called "co"? Because in category theory, when you flip the arrows around, "co" is the popular prefix to use. If you've heard of a product, which is two things happening simultaneously, a sum type is also called a coproduct: two things in an either-or setup. In general, in category theory, when you want to flip something around, you prefix it with "co". And perhaps if you say co-co, you get the original back, although I've never tried, so perhaps that doesn't communicate the idea. But there you go. So now what I would like to do is build a simple comonad, and then we're going to look at how comonads actually power the musical examples you just heard, in the same way that they power Oscar Peterson's jazz trio. The first thing I would like to create is a comonad that I'm going to call MyCofree. I'll get into what "cofree" means in a second, but because we're going to be working with something called cofree comonads, I'd like to start with that right away. So I'm going to create a newtype, MyCofree, and I'm going to say that there's going to be a functor in there, an arbitrary type constructor. Wow, Copilot is actually filling in something that's almost correct, but not quite.
And then a type: the comonad is going to be MyCofree f, which is our w, and then a, which will just be this a. So I'll say MyCofree, and now I'm going to use my musical terms. I'm going to say playingNow, like, what is Oscar playing now? That's going to be a. And then inTheFuture: I'm going to be playing f of MyCofree f a. If you stare hard enough at this type and you're used to a language with strict evaluation like PureScript, it should immediately freak you out in all the right ways, because we have this recursively defined type, meaning that in order to create this thing, we need to pass it the future, which is this same thing wrapped in an f. But if the f we're wrapping it in doesn't have any sort of delayed execution, then we're going to blow the stack, because MyCofree can only be defined in terms of itself; we'd need some sort of infinite construction, and that's no fun. To make this a little safer, there are a few ways to do it, but the way I'm going to do it for now is just to give it a dummy Unit argument. Let me see if I've imported the Prelude; I have. So here I'm going to say Unit to that, which means we'll be able to defer its construction so we don't blow the stack. Now that we can do that, let's create a Functor instance for it, which we'll have to do before we create our Comonad. Actually, I can even just derive the functor: derive instance functor for MyCofree f. It looks like it doesn't want to do it because, of course, f needs to be a functor; Functor f, there you go. Okay, that's just the PureScript compiler defining what it's going to do. If we wrote it out by hand, it would call map on this, then call map on that, and map on that again; it will call map thrice, actually: map once on the function, map on this, map on that.
In fact, let's just write it out. Whoa, this is crazy, isn't it? So we have f, MyCofree, and we're going to destructure playingNow and inTheFuture. Now we'll reconstruct MyCofree and say playingNow equals f of playingNow, which Copilot got right, good. And then here we want to map over the functor, which is this f, and then map again: from there, we map f over inTheFuture. And it's flipping out for reasons I'm not quite sure about... yeah, sure, there you go, it's because I used the equals syntax. Sorry, I was thinking in a different language. So there you go: if we had to write it out by hand, and we'll keep it that way just so you can build intuition for what the functor looks like, it's mapping the function over this f, and then this map is being called recursively: this map is this map. So now that we have this functor, we're able to turn it into a Comonad. I'm going to do that, and then I'm going to flip back to the high level and show you an example of the sound it makes, meaning: once we've turned this structure into a Comonad, what that affords us. So I'm going to create an instance: instance extend for MyCofree, Extend (MyCofree f), where extend equals... and remember, our extend here takes a w a and then this function. And if we click through to Extend, let's import it; go to definition. My VS Code plugin doesn't want to go to the definition for some reason, but that's absolutely fine. So we're going to take this here and put it there, and for this function I'm just going to put a typed hole for the time being. Extend, actually, if I understand correctly, starts with the function, so we have to flip the order, which we will. There you go, and we need Functor f, which we have. And now we have our typed hole, so we need to get another MyCofree out of it.
So let's see if we can make it. We will again start with this MyCofree, and we're going to make it look quite similar. This f here... I'm going to say mcf for the whole MyCofree. So playingNow is going to be this f of mcf, because this function gets us an a, and we need the a in there, so we apply it. And then for inTheFuture: we have a Unit argument, so we can just throw that away. We have this functor instance in here, so we can map over the functor: we're going to map over inTheFuture, and what are we going to map over it? Well, extend on MyCofree. So we say extend, extend, and we need to import that. There you go. We need to wrap it in the right type now, so probably a better way to do that is map: map (extend f) over MyCofree f. Let's see if that works; that works. And we don't even use this playingNow, so we can just get rid of it. Okay, so that's extend. Again, what we've done is: we've taken this function, applied it to MyCofree, so we get something out of it, and in the future we're going to get something out of it as well. For those that have worked with cofree comonads, this is called redecoration, essentially: we're redecorating the future with this function that we kick all the way down the line. One thing worth saying is that if we redecorate too much, we might create a performance issue, because we're applying a function on top of a function on top of a function. But for one-offs, it's absolutely fine. So again, to go back to our Oscar Peterson example and my example: this is taking a function that can modify the future, modifying it, and then on a rainy day we check it out, actually use it, and then extract. So: instance comonad for MyCofree, Functor f implies Comonad (MyCofree f), where... we said that it's going to be called extract, and Copilot just totally got that, right?
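The structure just built can be mirrored compactly outside PureScript. A self-contained Python sketch of a MyCofree-style comonad, with the tail thunked to survive strict evaluation and the functor f specialized to the identity for brevity; an illustration only, not the talk's code:

```python
# Cofree-style comonad: a focused value now, plus a thunked future.
# extract reads the current value; extend "redecorates" every point
# with a function of the whole remaining structure. The lambda plays
# the role of the `Unit -> f (MyCofree f a)` trick: without it, an
# infinite structure could never be constructed in a strict language.

class MyCofree:
    def __init__(self, playing_now, in_the_future):
        self.playing_now = playing_now      # a
        self.in_the_future = in_the_future  # thunk: () -> MyCofree

def extract(w):
    return w.playing_now

def extend(f, w):
    # Apply f to the whole structure now, and lazily to every future.
    return MyCofree(f(w), lambda: extend(f, w.in_the_future()))

def count_from(n):
    # An infinite structure, safely deferred by the thunk.
    return MyCofree(n, lambda: count_from(n + 1))

# Redecorate: each moment becomes "current value doubled".
w = extend(lambda ww: extract(ww) * 2, count_from(3))
print(extract(w))                  # -> 6
print(extract(w.in_the_future()))  # -> 8
```

Note how stacking many extend calls layers function upon function over the future, which is the redecoration performance caveat mentioned above.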
So we just extract playingNow, and we're playing now. That is the music that was coming out of Oscar's fingers, and that's what's coming out of wags.fm. So this is the setup. But concretely: I've done this low-level implementation, and I've shown at a high level what the music is, but now I would like to link the two together so you can see quite concretely how this is used, in a very granular way, to create sound, and then I'll conclude by talking about its performance characteristics before I get into the Q&A. So let's go back to an example, and the example I have here is this Bach fugue. I'll play it for y'all so you can hear it. I have a synth; the synth sounds kind of weird because I have this high-pass filter going up and down on it. I can make the high-pass filter a little faster and it'll sound really wacky, in kind of a maybe-fun, maybe-not way. Yeah, let me slow it down a little so we can really hear it. Yeah. So there you go, it's a Bach fugue. I should say that this system is quite fast, so if I start it again and speed it up to something faster, it should just work. This will double the speed of what you just heard. It'll sound kind of crazy, but the cofree comonad shouldn't fail us; if it does, I'll be sad. That is twice as fast, and there are basically no clicks or anything like that. Bach sounds wrong at this speed, so we'll bring it back down. So where is the cofree comonad in there, to be clear? Where is the comonad, and how is it making this musical? What I'm going to do is dive into the definition of one of these particular functions. WAGS entirely runs on cofree comonads; that is the underlying abstraction that makes the whole thing work. Everything works that way. Every time sound comes out of the loudspeaker, it's because extract is being called on something at some level.
And I've written some custom things that are not cofree comonads but other types of comonads, just to make it a bit more performant, but in general that's what's going on. So let's dive into this function, makePiecewise. Let's look at what it's doing here on this page, and then I'll go to the definition and trace it all the way back to cofree comonads. What makePiecewise is doing here (I'm going to make the piece a little slower so we can hear it) is creating a piecewise envelope. It's starting at a volume of zero, then at 0.11 seconds going up to a volume of 0.4, then falling back down, again at 0.2 seconds, then at 0.3 it goes to zero. So it creates this boop that's going to sound like a key press in our little synth. I've slowed it down so you can hear it: dum, dum, dum. If we want to smear it out over time a little bit, we can change it and we'll hear it smear. And now we can bring it back to something a little crisper, and we'll hear that. So it's creating this piecewise function of time. How is it able to do that? How does it know what the next value is, and what value we want now, coming into the browser? How is it able to extract that value? My choice of the term extract is, of course, on purpose: it's because it uses the function extract. So let's go to the definition of makePiecewise, which I have pulled up here. You see that makePiecewise takes this non-empty list, which is the envelope I showed you, and has this thing called an audio-parameter function of time. The audio-parameter function of time takes the current time and how much headroom, how much look-ahead, we have, and gives us an audio parameter under the hood. And all an audio parameter is, is the value that we want at a given time, plus an offset in case it's something that's precisely timed.
For example, if the audio clock falls now but we need an attack to happen slightly after, we can set that in the AudioParameter as well. And actually, let me find a slightly better definition, because I'm realizing that this one might not show Comonads in the way I'd like, but this one definitely does. Sorry, I opened the wrong definition file. So this piecewise takes time and headroom and spits out a Cofree Comonad where the functor is this function of time. One really important thing to remember is that a function from a to b is a Functor in b; that is, "function of a" is a Functor. So here, "function of time" is a Functor. So we have this Cofree in exactly the same way I set it up in my Cofree here. In my Cofree f of a, the f is our function of time, and the a is the AudioParameter. That's what it's spitting out. So at time zero, if we go back to our Bach example, our Cofree Comonad spits out this value of zero. At the next time, it will interpolate between zero and 0.4 and spit out a value. One question you might ask is: why not just take a giant structure and constantly map over that structure and do interpolation? In other words, why do you need to store the value in some intermediary state? The reason is that once things get very long, it's inefficient to use a map or lookup-table method. One way we can see that is with the actual score of this piece. So let me start the piece again so you can listen to it. This is the score: these are all the notes and this is where they fall in time. The time is quantized by this very small factor, and that's what speeds it up and makes it quite fast. So what if we treated this list — this non-empty array — as a lookup table, instead of treating it like a Cofree Comonad that spits out the next value over time? Under the hood, I transform it. So let me back up.
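The idea of Cofree over the "function of time" functor — a value now, plus a function lining up the rest — can be rendered minimally like this (the names ramp, head, and tail are invented for illustration; wags' real types differ):

```typescript
// A minimal sketch of `Cofree ((->) Time) a` in TypeScript: the value "now",
// plus a function from the next time step to the rest of the stream.
type Time = number;
interface Cofree<A> {
  head: A;                      // the value now
  tail: (dt: Time) => Cofree<A>; // line up what comes next
}

const extract = <A>(w: Cofree<A>): A => w.head;

// An envelope as a comonadic stream: each step moves toward a target volume.
function ramp(current: number, target: number, rate: number): Cofree<number> {
  return {
    head: current,
    tail: (dt: Time) => {
      const step = Math.sign(target - current) * rate * dt;
      const next =
        Math.abs(target - current) <= Math.abs(step) ? target : current + step;
      return ramp(next, target, rate);
    },
  };
}

let env = ramp(0, 0.4, 1.0); // rise from 0 toward 0.4 at 1 unit per second
```

Calling extract gives the value now; calling tail with a time delta produces the next Cofree Comonad, exactly the extract/extend rhythm described above.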
In addition to my envelope being a Cofree Comonad, my score here is also transformed into a Cofree Comonad, because, as I said, the whole thing runs on Cofree Comonads. So here, we're extracting the next value. We're extracting this note, and as soon as this note's done, we extract the next note in time, and then the next, and so forth, all the way down the piece. When it ends, we just recycle and extract them again. So what if we didn't do it that way? What if, instead of extracting, we had some sort of lookup table? The naive way to do that would be to cycle through this list with some sort of filter, and when we get to the next value, we use it. That's fine at the beginning of the piece, but — scroll, scroll, scroll — this piece has tons of notes. Imagine we're five minutes into the piece and all of a sudden we're searching a data structure that contains 7,000 entries to find the next note to play. The thing would choke: the piece would start off okay, but 30 seconds in it just wouldn't work in the browser anymore, because you're scanning over this array. Now, there are ways to mitigate that, of course. We could transform our array into a map and use the times as keys, at which point lookup would be logarithmic instead of that awful linear performance I talked about, but it still wouldn't be great, because we'd still have a map traversal every single time we wanted to get a note out. Whereas the Comonad approach is basically O(1): as soon as we finish this note, we call extract on it, and then we call extend to get our next Cofree Comonad. And what's going to be in our next Cofree Comonad is this note, so we call extract on that. That's blazingly fast compared to some kind of traversal.
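The performance argument can be made concrete with a hypothetical comparison: a linear scan over the whole score on every query, versus a stream-style cursor that only ever looks at the next note (the score data here is synthetic):

```typescript
// A synthetic 7,000-note score, one note every 0.1 seconds.
type Note = { start: number; pitch: number };

const score: Note[] = Array.from({ length: 7000 }, (_, i) => ({
  start: i * 0.1,
  pitch: 60 + (i % 12),
}));

// Naive lookup: filter the entire score on every query — O(n) per frame.
function noteAtLinear(notes: Note[], time: number): Note | undefined {
  return notes.filter((n) => n.start <= time).pop();
}

// Stream-style cursor: remember where we are; each advance is O(1) amortized.
// Queries must come with nondecreasing times, just like playback does.
function makeCursor(notes: Note[]) {
  let i = 0;
  return (time: number): Note | undefined => {
    while (i + 1 < notes.length && notes[i + 1].start <= time) i++;
    return notes[i].start <= time ? notes[i] : undefined;
  };
}
```

Both return the same notes, but the cursor never rescans the 7,000 entries, which is the difference between glitch-free playback and the browser choking mid-piece.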
Again, going back to Oscar Peterson in our YouTube video: I can't claim to know what was going on in his brain during that improvisation, but he was playing this note, and somewhere, some place, he was lining up either the next note or what the next notes could potentially be. It could be one or several things, based on the performance of another musician, or his mood, or lots of stuff. Lining that up is the extend operation I was talking about. The same thing is going on here: we're extracting this value and then using the Comonad to extend into the next one. So we've now seen this happen on two levels of music. We've seen extract and extend work on a piecewise envelope generator, generating the individual envelope that's applied to a single note. And we've seen them work on our score, generating the next note that's going to be used in the score. The beautiful thing about music, for me at least, is that — people say a Monad is a burrito all the way down; I feel like music is a Comonad all the way up. You start from envelopes, then get to the level of a score, and then to the level of an entire piece, or an entire audio graph. And now I'd like to bring in the name of this package and why it's named as such. It's called purescript-wags. Why wags? Web Audio Graphs as a Stream. Meaning: if you can take a Comonad and stream it at the small level, can a Comonad represent the entire audio graph? And the answer — I wouldn't ask the question, of course, if I didn't already know it — is yes. So I'm going to open up the Brave browser — actually I don't have it here, so let me quickly install a Chrome extension. It's called Audion, a web audio graph visualizer. Let me find it in the Chrome Web Store and install it really quick, just to make the point.
It's made by the Google Chrome team. I had uninstalled it because its performance is awful; I didn't think I'd use it in this presentation, but there you go. It's asking me to do all sorts of stuff — that's nice. So let me go back to this, and hopefully Audion is installed and I'll be able to show you what I'd like to show you. My claim is that the whole web audio graph is a comonadic stream. Let's see if that's the case. I'm going to open Audion in here. It looks like it didn't install — that's unfortunate. Let me open it again. There we go. I'll turn it on, manage extensions, and it looks like Audion is on. What Audion gives us is a tool in the browser inspector: we see this Web Audio pane. I'm going to reload this and press play, and my claim is that the whole web audio graph — the whole experience you're listening to right now — is a stream powered by Comonads. But don't take my word for it: look at the visualization and you'll see exactly what's happening under the hood. It's a little slow because it's drawing, so I'm going to press stop. This graph you see on the right side of my screen is the Comonad ejecting a full web audio graph 60 times a second. I'll press stop — and, sorry, it actually won't stop; the display doesn't keep up, which is unfortunate, so I'll narrate it. But you see this audio buffer source, these gain nodes — it's not updating fast enough, but you can see the biquad filter in there. This is updating 60 times a second in the browser, meaning it's taking a full web audio graph, with filters, oscillators, the whole nine yards, extracting the one that is now, and then extending into what it will be in the future.
So my claim was that an individual piecewise function can be powered by a Comonad, a score can be powered by a Comonad, and now we've gone all the way up to the entire musical experience, the web audio graph. The graphs are extracted in time at the sampling rate, just like Oscar Peterson is extracting notes from his fingers. I come back to this metaphor a lot because I think it's so powerful: music functions that way, and any sort of UI functions that way as well. So why not use Comonads? The thing I'd like to close with, before we get into the Q&A — and I'm happy to answer really any questions — is that other folks who have looked at Comonads use interesting terms for them. I'm reminded of one from Phil Freeman, the creator of PureScript, whose view is that Comonads are the future. He meant it in the prophetic way it sounds — that the future of programming is Comonads, which I personally believe as well — but he also meant it as a pun: Comonads are the future, meaning Comonads line up what happens in the future. That's the whole point of using them, and they also line up the present, as we saw with the extract operation. So I do believe Comonads are the future. At the very least they're my future, because I'm all in on this tool I'm making, spreading it around the world to musicians who jam with me on it and make stuff with it. Comonads are surely my future, because that's how I'm building my business, but they're also a great way to extract the future of an audio state in a really cheap, computationally efficient way. Aside from when I was drawing that graph and you heard some hiccups, you don't hear any hiccups when you're using Comonads, because it's an O(1) lookup: it's just the next thing that's going to happen. If you structure your whole experience that way, you get those performance characteristics compared to a more naive implementation.
To summarize: if you're building any type of rendering engine — an audio rendering engine like I am, a canvas-based rendering engine, or even some type of web application where you want to experiment with a different UI framework — Comonads are a great, great, great way to power what you're doing. It's a fantastic abstraction that, from a theoretical and aesthetic point of view, is very much linked to some of the most beautiful performing arts, including music, including Oscar Peterson and many others, of course. So thank you very much for checking them out. I'll stop the share, and maybe one thing we could do now is take questions. So I'm looking at the chat — there's a lot of stuff in there. Copilot thinks we're in Haskell. "Every reader is a co-pilot" — that's great. There's a lot of stuff in here. Synthwave, Beethoven, yes. Oh, the envelope — awesome, thank you very much. "Why is wags a graph?" or "why web audio graph?" Maybe I'll start with that and then go through one by one. Apologies if I don't get to your question right away; re-ask it if I don't see it. I'll start with James Collier, and if you had a question beforehand, please let me know. So, why is wags a graph? Let me turn on my screen share again and look at a way of constructing it that's a bit more explicitly graph-like, so we can see what's going on. I'm going to go to a different example called synth. A powerful abstraction in the setup I use is something called mini-notation, which comes from TidalCycles, but underneath, the mini-notation is setting up the graph. This example I have here — let me make sure it's showing to you, yeah — shows what one of those graphs is. Let me play it, and you'll hear this really flangy sound, which is kind of fun.
It's sort of sci-fi. Let me amp up the volume a little bit because it's a bit soft — turn it to 1.5. Yeah. Wow, that's sort of fun, isn't it? PureScript makes fun noises when you ask it to. So why is Web Audio a graph? There are many different types of graphs you can make, but my claim is that this structure is graph-like, so let me dig into why I'm making that claim. We have a gain node here. Into the gain node is passed a band-pass filter. This band-pass filter has a couple of arguments going into it: the frequency of the filter and its Q value. And going into it are these oscillators. These oscillators — we have "oscs" here, a reference to this part of the graph — are another gain node, into which go a triangle wave, a sine wave, and a sine wave. So if we imagine the graph in our head, we have a gain node, into which a band-pass filter is going, into which these oscillators are going. Interestingly, this graph is type-safe: I've done type-level programming to make sure of it. What does type-safe mean? It means that if, instead of calling my oscillators "oscs", I add a lot of extra S's and press play, the graph won't compile. It'll freak out because it can't find "oscs" in there. So it's a graph at the type level, meaning I'm using type-level graphs to make sure that if something claims to be in the graph, it actually is. When I fix the type of this record, the graph traverser is able to find that "oscs" does exist, picks it up in the ref, and uses it. Similarly, if I call the ref the wrong thing — "oscs" with a lot of S's — it will also freak out because it can't find that. So let's make it un-freak-out: "oscs". I say "freak out" — sorry, maybe it's actually quite calm.
I don't know what the compiler's inner life is like, but anyway, it does give me an error. All of that is to say: in addition to being a graph, it's a type-level graph, meaning it's doing type-level programming in order to verify that the graph is correct. Why am I using type-level programming? The answer is quite simple, even if type-level programming itself is quite complex: because I don't want the audio to fail. When I'm doing a jam session that's 20 minutes long, I don't want the graph to just explode. I don't want an incoherent graph to load into the runtime, my audio to turn off, the audience to go home, and me not to get paid for the gig. I want the thing to work. And for it to work, we want to make sure that any invalid state — this is what James was saying before, and Jordan as well — is rejected by the compiler, not by the runtime, which is what we just saw happen. So that's my answer to your question, James: why is Web Audio a graph? Because my claim is that this thing I just showed you is a graph. And where is it a graph? It's a graph on the term level, connecting all the stuff in the Web Audio interface, but it's also a graph on the type level, to make sure the music is coherent — I mean, that it has no bugs. So there you go. Next: "I'm curious what infix operators wags uses." Many is the answer. To create a scene, there's "@>" (at, greater-than), which is makeScene flipped. There are lots of infix operators, and they're used all over wags, but also all over the unit tests. Sorry about the baby crying in the background, if you can hear it — I'm not sorry about the baby, but sorry about the crying. So here's one — actually no, this is not a wags infix operator, but here is a wags infix operator being used. To find them, you can look in the package; they're used all over the place.
In wagsy — sorry, in purescript-wags-lib, which is this library — infix operators are used in the engine that powers wags. I call that engine the tile engine; it's like a front end for the middleware. Actually, I'm not sure they're even used in this engine itself; they're used in the functions that construct it, which are used elsewhere. Anyway, all that is to say that, at some level, working with infix operators is useful. Then: "I've used Comonads to iterate a Game of Life simulation. Stepping is redecorating the grid." Yes, absolutely, Joseph, that's absolutely true. I think Bartosz talks about that in his blog post on Conway's Game of Life, and that is a way to use redecoration in that context. Linguistically it doesn't roll off the tongue, but you're rewriting the future — and yet the future hasn't been written. What you're really doing is rewriting the potentialities of the future, which is a beautiful metaphor when you think about it. If you send a kid to school, it's because you want to rewrite their potentialities; you want to create a better potential for them. So you can rewrite the future insofar as you can rewrite its potential. That's what redecoration is doing in the case of a Comonad. Then: "So in extract :: w a -> a, the w a is a web audio graph?" Yes, that is exactly what it is. The w a is a web audio graph and everything inside of it. The web audio graph has control data in it; that control data is also a Comonad, which itself contains control data, which is also a Comonad. And when extract is called, it just goes all the way down the chain and extracts what you need.
Why am I composing Comonads together? So that it's BYOC — bring your own Comonad. Instead of locking folks into one particular abstraction, it takes an arbitrary set of Comonads and just calls extract on all of them. The reason it's able to do that is the type class: it just expects a Comonad, and the one you bring is your Comonad of the day or the week, and wags uses it under the hood. "Reminds me of Kraftwerk. It's cool that it updates without having to stop the music." Absolutely — if it had to stop, it would be a non-starter for musicians using it in a live performance context. That was very important to me, and it's important to the folks who use it as well. "Can you generate a wags document from a physical synthesizer?" Yes, absolutely. The way I generated the Bach example was with a physical synthesizer: I have a piano here, I use it to generate MIDI, and then I use a Python package called Mido to parse the MIDI. Although you don't even need to parse the MIDI post facto — you can also parse it in real time. On my Twitter feed there are examples of me playing MIDI instruments powered by the browser. That's absolutely possible; it's fast, and the reactivity is sub-15 milliseconds, which is what you need for it to feel like it actually works in time. Then: "Are you representing the graphs with the PureScript graphs library?" Oh, no, I'm not. I'm representing them with a custom graph maker that's in the wags source tree. If I go to this graph folder, all of these audio units — highshelf filter, high-pass filter, gain — are units in the graph. And then each record contains a bunch of keys that point to these. As a result, the type of the graph changes depending on what's inside of it, which is why I use type-level programming.
You could also do it on the term level, but on the term level you lose the benefit of the compiler being able to check that the graph is coherent. So these are the elements that make up the web audio graph. Ah — screen sharing has stopped, sorry about that. Let me go back to it really quick. Yeah, I was talking about this: these are the elements of the graph, which are in wags' graph audio units. Sorry about that; I lost track of where I was. "Can you show the max potential—" sorry, first: "Does the graph update happen efficiently due to Comonads, or something else?" Completely due to Comonads. That is the only way it happens efficiently. And I'll show the part of the code where that happens — I'll turn on my screen share for that — because it's super important to insist that the efficiency comes from the underlying abstraction. It's here, and it's not spread over a lot of places either; it's a one-stop shop for that efficiency, which is nice because it makes it really easy to reason about when you're hacking on the library. So it's here, in the control functions, and it is makeScene, which makes the next scene. What makeScene does is get a frame with the environment — getFrame is the thing that ejects the next value. When we get the frame, we read off the instructions — these are the instructions going to the web audio graph, like "turn up the gain" or "start the oscillator" — and then we get the next thing. Next is a wag, and then we can call makeScene on next and keep going. So makeScene is the function that's called when I say that it just calls Comonads all the way down. All of the efficiency comes from the fact that we call it once, get the instructions, and then we have next, which is a closure around what happens next in the future.
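A hypothetical, stripped-down version of the makeScene pattern described here — each frame carries its instructions plus a closure for the next frame, so the render loop is O(1) per tick (names are invented; this is not wags' code):

```typescript
// A scene is the instructions for "now" plus a closure for the future.
type Instruction = string;
interface Scene {
  instructions: Instruction[];   // e.g. "set gain to 0.40"
  next: (dt: number) => Scene;   // the future, lined up but not yet run
}

// An example scene: a gain that ramps up toward 1 over time.
function gainRamp(level: number): Scene {
  return {
    instructions: [`set gain to ${level.toFixed(2)}`],
    next: (dt) => gainRamp(Math.min(1, level + dt)),
  };
}

// The render loop only ever touches the current frame.
function runFrames(scene: Scene, ticks: number, dt: number): Instruction[] {
  const log: Instruction[] = [];
  for (let i = 0; i < ticks; i++) {
    log.push(...scene.instructions);
    scene = scene.next(dt);
  }
  return log;
}
```

Each tick extracts the current instructions and replaces the scene with its continuation, which is the "get the frame, emit the instructions, hold on to next" loop the talk describes.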
So yeah, 100% of the efficiency comes from the comonadic implementation — and from the genius of the people who invented the way to work with it in Haskell. I know Edward Kmett worked a lot on it, and a lot of other folks did too. The idea of the pattern is brilliant, and I'm making ample use of it. "Can you show the max potential of this line of work for musicians — things a normal DAW and sound engineer can't do?" Yes, absolutely. I actually just created a Udemy course that is not live yet but imminently will be. If I go to udemy.com — I have way too many Google profiles — hopefully this will take me to my instructor page... yeah, crap, no, it's not there. So my answer is yes, and otherwise I would do a screen share, but it's not up yet. On the max potential, I do a full course that shows that. As a simple opening salvo, here's one example from it that takes a single file — this file here, let me click on it, it's just a Collier groove — and I do this. Sorry, it's taking a while to load; not sure why, maybe because the audience is messing around with it. Let me reload. It could be this thing I just installed that it's not happy about. Anyway, hopefully it'll work okay... yeah, I have no clue why it's crapping out like that. Let me switch to Safari; maybe Safari will be kinder to me. Maybe it's that thing I just installed. Anyway — this is all the max-potential question — you hear it, it remixes it, and it does that just with this comonadic structure. So there you go. Sorry, let me turn that off. I do a whole course where I talk about it, but it can push music creation really far, and one of the reasons I created it was to be able to create music in a way that I hadn't done before.
And I found a lot of pleasure in doing that and collaborating with others on it. And then the last question I see here, touching on an earlier one: "Are Rx observables an example of monads but not comonads, which can nevertheless be used to make a comonad?" I actually don't know enough about Rx observables to say, but one thing is for sure: if an Rx observable can potentially not contain a value — if that's part of the contract — then it can't be a Comonad, because a Comonad needs to pay up on demand: you ask for the value and you get it. I think the analog of observables in the PureScript world would be events, and an Event is the same. An Event can't be a Monad for much the same reason you wouldn't give numbers a Monoid instance directly: there's no single natural operation. Multiplication and addition could both be it, so you don't know which one to choose. Events are the same way: events deal with time, time is their context, and how you squish time together can be done in a myriad of ways, with no consensus on the way to do it. So as a result, Event is not a Monad, though there are many monadic-looking operations you can do with an event. I'm pretty sure observables work the same way — they're sort of like events. And because an event might never fire, and an observable theoretically might never fire either, it can be monadic but not comonadic. Okay — you're answering an earlier question on comonads; yeah, your answer is absolutely correct, thanks Robert. One thing to add that may be useful: it's possible to have Monads that are also Comonads, but they're a bit — I'll use the word "trivial". Theoretically they're not trivial at all, but they just don't give you a lot of power when you're using them.
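The "no canonical operation" point about numbers can be made concrete: addition and multiplication both give perfectly lawful Monoid structures, and neither is *the* choice — which is the same ambiguity the speaker describes for merging events over time.

```typescript
// Two equally valid Monoid structures on numbers: there is no single
// canonical one, so numbers don't get a Monoid instance "for free".
interface Monoid<A> {
  empty: A;                     // identity element
  append: (x: A, y: A) => A;    // associative combine
}

const sum: Monoid<number> = { empty: 0, append: (x, y) => x + y };
const product: Monoid<number> = { empty: 1, append: (x, y) => x * y };

// Folding a list depends entirely on which Monoid you pick.
const fold = <A>(m: Monoid<A>, xs: A[]): A => xs.reduce(m.append, m.empty);
```

Both instances satisfy the Monoid laws; the library simply can't pick one on your behalf, just as there's no consensus way to "squish together" events in time.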
Identity is the classic example: you can always extract the value out of Identity, and Identity is a trivial Monad as well. But where stuff gets interesting with Monads and Comonads is when they specialize into their own spheres. Comonads are great for anything front-end, where you need to be able to extract the value and project into the future. And Monads are great for when stuff can hit the fan — parsing, say, can fail in all sorts of ways, and you're managing the uncertainty, the failure, all the time; you need a Monad in the abstraction to be able to do that. Okay, sorry, let's have a break. Sorry for going on answering these questions and not paying attention to the time. Thank you so much for checking out the presentation. Let's flip back to you, Aditya — you can take over. Hey everyone, thanks Mike. That was, I mean, that was awesome. It reminded me a lot of synthwave — I occasionally listen to a lot of chillwave compilations on YouTube, and this reminded me of that — and it was kind of cool to see programming with music. Okay, everyone, we're going to go on a break for five minutes. I'm thinking everyone's kind of mentally exhausted right now, so for the next five minutes, go have a glass of water, hydrate, have some fruit, get some snacks. See everyone back in five minutes. Do post anything in the chat — if you do, I'll be here and I'll reply to you. Yeah, can you guys hear me? Yes, I can hear you. So welcome back, everyone, from that five-minute break. I hope all of you are refreshed. The break's over; let's get back to the talks. Our fourth speaker for today is Benjamin Hart. Ben is the director of Cardano operations at MLabs.
Ben has a lot of experience in Haskell, JavaScript, and PureScript. Today he's going to be talking about asynchronous programming in JavaScript and PureScript. Asynchronous programming is hard, especially in JS — I have a lot of stories about that, and I'm pretty sure a lot of people do too. PureScript makes it easier by giving us some facilities for asynchronous programming, and to talk about those, here is Ben. Thanks for joining me. We're going to be talking about asynchronous JavaScript and PureScript for Haskellers today. My name is Ben Hart. I'm director of Cardano operations at MLabs. We're currently hiring for Haskell, PureScript, and Rust at all levels, from intern and entry level to senior and management — take a look at our website, mlabs.city. I personally come from four years of Haskell and PureScript via functional full-stack JavaScript, and I'm from Toronto. Let's dive in. So what are we even doing? We're going to examine some asynchronous primitives in JavaScript with callbacks. We're going to look at more modern approaches like promises and async/await. We're going to look at PureScript abstractions in the Effect and Aff monads. And we're going to learn just a little bit about fibers, which are almost threads. In examples one through three, we're going to look at simple event emitters. This is a way to reason about how JavaScript's asynchrony appears to work. It's synchronous programming, but it can build up into an asynchronous-looking API — essentially, it can be used to reason about the way the browser (or Node, for that matter) lets us do asynchronous programming. So here we have an emitter object. It's got this idea of some listeners, which is a queue of functions that grows as we add them with onEvent. We're going to add listeners, and then we're going to fire them off.
This is where we get asynchronous-appearing behavior: we can set up some events that then happen later on, and we can reason about these computations a little differently — we can store some computation elsewhere and then fire it off. But again, this is still synchronous, right? We could step through this code really easily without worrying too much about missing pieces, at least until we get into things stored in different modules, where we have to reason a little differently. So this is a really basic system. Let's go ahead and run this sort of asynchronous foundation. Here we can see the idea of starting, with the action listener attached. And even though we say "pow" up here, it doesn't happen until we kick off the event. Okay, so it's really simple scaffolding: we set out a computation that's going to happen later on, and then we can kick it off in a much easier way, without defining all of the computation right at the moment it has to happen. Next we're going to move ahead. We'll extend it — and we can actually add a lot of things to this. If we flip ahead, we can extend it to include event types, which is what we've done here, but we can also extend it to different kinds of observable data, and to all kinds of other things, like state management. It's a good intuition for how DOM event listeners operate — if you want to add a button to a page and have events occur — and it's also sort of how Ajax and networking requests are going to work. So here we have another example. We've got these two different event types and have intentionally switched their order, so now we can fire them off in different orders based on labeled events. What used to be just one queue is now a map from event types to queues.
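A minimal emitter with labeled event types, along the lines described (a sketch, not the talk's exact example code):

```typescript
// A map from event name to a queue of listeners. `on` grows a queue,
// `fire` runs every listener registered for that event name.
type Listener = (payload: unknown) => void;

function makeEmitter() {
  const listeners: Record<string, Listener[]> = {};
  return {
    on(event: string, fn: Listener) {
      (listeners[event] ??= []).push(fn);
    },
    fire(event: string, payload?: unknown) {
      for (const fn of listeners[event] ?? []) fn(payload);
    },
  };
}
```

Everything here is still synchronous — `fire` runs the listeners immediately — but it's the shape that browser event systems build on.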
Then we can have another version where, instead, we hold some data, and every time we change the data we make it available to some functions — for state management. This is observable data. You can see we also optionally give the ability to reference the previous value, using variadic functions here: because we don't have a fixed number of arguments to our function type in JavaScript, we're able to pull these kinds of tricks. These will get ironed out later on, but for now there are some conveniences we can pull from this. So let's run that example. You can see that as soon as we change this initial value of "hello" to "Bob", the first listener goes off and logs the original value, "hello", and then a second listener fires, which gives you the new value. These listeners could be passing the values along to other callbacks, or running network calls themselves, and you get this notion of a chain of functions that call each other, building up a call stack — but potentially with asynchronous steps, where you're not adding things to the call stack immediately. This brings us to a fundamental problem: JavaScript is single-threaded, usually. (Worker threads don't count; they're a newer addition.) What I'm talking about is the fact that JavaScript runs with an event loop: when an event is triggered, a call to an event listener is placed in a queue, waiting for the event to resolve, and once the call stack is cleared of synchronous work, the JavaScript runtime starts to execute listener calls from the queue. So it's very similar to this emitter, but the reality is that the emitter is obscured: it's implemented off in the browser, maybe in C++, in the Node core, in the web APIs that you think of as the JavaScript base library.
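The observable-data variant can be sketched the same way — subscribers receive both the new and the previous value (again, a sketch rather than the talk's exact code):

```typescript
// Observable data: hold a value, and notify subscribers on every change,
// passing both the new value and the previous one.
function observable<T>(initial: T) {
  let value = initial;
  const subs: Array<(next: T, prev: T) => void> = [];
  return {
    get: () => value,
    subscribe(fn: (next: T, prev: T) => void) {
      subs.push(fn);
    },
    set(next: T) {
      const prev = value;
      value = next;
      for (const fn of subs) fn(next, prev);
    },
  };
}
```

This mirrors the "hello" to "Bob" demo: one subscriber can log the old value while another reacts to the new one.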
There's a really great talk on this that I can never do justice to, called "What the heck is the event loop anyway?" by Philip Roberts, and it essentially goes through just this with a lot more demonstration. So if you're interested in knowing a lot more about how the event loop works, I really recommend that as a great starting place. Now, there are also some modern abstractions that have been added on to help us prevent what's called callback hell — this idea that maybe we have too many nested listeners — and, additionally, the old function syntax was very big and relatively clunky compared to these arrow functions that we're using. So we came up with promises. They're sort of a wrapper for unfinished computations; they can't be canceled once you start them off. Here we're using fetch, which is an API for doing network calls, and we're calling off to this AI age predictor based on your name. So there's a difference between this example and the previous examples. Before, we could trace through all the actions and never lose track of where the JavaScript interpreter might be operating and how it might evaluate, but here we get to a point where we call fetch, and we'll very quickly get confused, because the fetch computation doesn't wait for us to call fire on anything — it kicks off immediately through a browser API. So this returns a promise, which we then call .then on, but the computation is running in the background, in another thread that we don't directly control, and when that thread completes and receives its response, this callback is going to be kicked off, and then likewise this next one.
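The shape of the promise chain described here can be sketched without the network. The age-predictor URL and response shape below are made up, and a fake fetch that resolves immediately stands in for the real one, so only the structure of the `.then` chain is the point:

```javascript
// A stand-in for fetch: returns a promise that resolves right away
// with a response-like object (the { age: 42 } body is invented).
const fakeFetch = url =>
  Promise.resolve({ json: () => Promise.resolve({ age: 42 }) });

// The computation kicks off as soon as fakeFetch is called; each
// .then only registers what to run once the previous step completes.
const chain = fakeFetch("https://api.example.com/?name=ben")
  .then(response => response.json())
  .then(body => "AGE: " + body.age);
```

With a real `fetch`, the only difference is that the resolution happens later, off in a browser or Node API we don't directly control.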
So we have this chain of events, and if you squint, this is going to look a lot like a monad — and you don't have to squint that hard. This actually came up as this API was being discussed: there was a hot debate about adding monad-like features to JavaScript. However, there is some convenience here, because you don't really have to think about whether you're calling bind or map on this; you can return promises or non-promises, and that was determined to be the more JavaScript-friendly API. Now, we also get some pleasantness that looks a little bit like do notation: we get async/await, and if you squint "= await" into a left arrow, you might imagine this looks quite a bit like Haskell's do notation — and you wouldn't be wrong. The kicker, because it is just like do notation, is that async actually denotes that your return value gets wrapped in a promise. Then you can go ahead and catch errors at the end, just like you might catch in some monad that has MonadCatch or MonadThrow. So we have these functional-looking expressions where we can set up chains of computation and do-notation-style binding, but, like I said, it's not cancelable, and we haven't officially created monads or functors in JavaScript. This is where the next great example comes in, because there's this great library called Fluture, and Fluture allows you to wrap up promises with a different type which does implement Monad and Functor and Bifunctor and Alternative and all kinds of other abstractions that you might be familiar with. So if you are using JavaScript and you cannot use PureScript, this is a great system to use. It also has certain API helpers, like enforcing that you supply an error handler before you get to unwrap and get your actual value out. You can also set up cancellation, so there are a lot of great things to get out of the Fluture library. There's also a way to do async/await-like syntax using iterators.
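The async/await form of the same chain makes the do-notation comparison concrete. This is a sketch with the same invented fake fetch as before; `try`/`catch` plays the role that MonadThrow/MonadCatch-style handling plays in Haskell:

```javascript
// Same invented stand-in for fetch as in the earlier sketch.
const fakeFetch = url =>
  Promise.resolve({ json: () => Promise.resolve({ age: 42 }) });

// Squint "= await" into a left arrow and this reads like do notation.
// `async` means the return value gets wrapped in a promise.
async function predictAge(name) {
  try {
    const response = await fakeFetch("https://api.example.com/?name=" + name);
    const body = await response.json();
    return "AGE: " + body.age;
  } catch (err) {
    return "ERROR: " + err.message;
  }
}
```

Calling `predictAge("ben")` returns a promise, just as a do block in a monad returns a value in that monad.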
There are a lot of great options for you with Fluture, but the main problem you might have is that Fluture's errors — they're great type errors, but they happen at runtime. Which brings us to PureScript. With PureScript we're going to be talking about three different techniques: the Effect monad; promises and their relationship to Aff; as well as fibers. We're going to take a look at some examples with fetch and Affjax. So let's just move over to the examples. I've created a couple of simple wrappers here — two wrappers. One uses the fetch API, just so that we're comparing apples to apples, but this one is essentially just going to take a callback and call our callback. So there's not a whole lot going on here; it allows us to have a callback-like interface in PureScript. You'll also notice there's this nullary function, so we have currying here: we get the URL, we get a callback, and then this nullary function is an Effect. When we talk about an Effect in PureScript, we're just talking about a lazy value — a function that is deferred, okay. Now, we've also got this one for promises. This one is going to return an Effect with a promise inside. And we can change promises to Affs and Affs to promises, so we'll talk a little bit about that. PureScript has two different types of effects — it used to have many more, but we've mostly gotten rid of them. So we can return just a promise and say, oh, we've got the promise, but there aren't really great tools for unwrapping a promise; you'll mostly want to use Aff, unless you're returning it for a JavaScript user, maybe in a library. So what we've actually got here: first, just a callback-based API, where we're using our effectful call, so we just supply a callback there. Let's go ahead and run that; we get some compiler output, and then the effect itself runs, and when we get a result we'll get the callback fired — and it's stalling out for me.
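From the JavaScript side, the curried callback wrapper with a trailing nullary function looks roughly like this. This is a sketch of the general FFI shape being described, not the talk's actual wrapper; the names and the faked response are mine:

```javascript
// One argument per function (currying), ending in a zero-argument
// function — the "nullary function" that stands in for Effect:
// a lazy, deferred computation that only runs when invoked.
const fetchCallback = url => callback => () => {
  // A real wrapper would call fetch(url) here; we fake the response.
  // The callback itself returns an Effect (another thunk) to run.
  callback("response for " + url)();
};

const results = [];
const log = msg => () => { results.push(msg); };

// Building the effect does nothing yet...
const effect = fetchCallback("https://example.com")(log);
// ...running the thunk is what actually fires the callback.
effect();
// results is ["response for https://example.com"]
```

The key point is the laziness: until the final `()` call, nothing has happened, which is exactly what lets PureScript treat effects as values.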
So I'll just move ahead. The next thing that we can do is set up an Aff. Here we're getting the result as an Aff, and what we've done is implement this fetch function in PureScript, using a promise helper to unwrap the promise and supply it as an Aff. Aff gives us all of the powers of functors, monads, bifunctors — oh my — and it's cancelable. All of the great features of Fluture; however, we're in a PureScript context, so we can reason about our types at compile time rather than at runtime. Okay, let's go ahead and run that. Hey, and there's our output. So again, we're just outputting text as things land; there's not a whole lot else going on, we're just calling log. All right, so now we've got these basic examples. What I wanted to do next was show a PureScript-native example, so let me just come down here. This is Affjax — I'll uncomment the code that runs it. Affjax is the Ajax library for PureScript. So we're returning an Aff, and we can do a native request. This has a fully type-safe setup for your request: here we can do a request with a GET method and a given response format, so you might be anticipating a string response or a JSON response or some other response. You can actually set it up to do some of the parsing for you if you like. In this case, I'm just leaving it as a string, so we're not getting too much into the parsing world. But it's essentially the same request, so we can run this as well. Okay, so that's the native way to approach this. There's also a shortened helper just called get, which lets you write much less of this. But that is the basic idea of Affjax usage. One thing that you'll note about Affjax — and it's common across quite a few PureScript libraries — is that it's distributed across multiple modules for typical usage. I'd say this is a design principle across PureScript: it's very, very modularized.
So just be prepared for that: you'll be jumping around all of the different modules in that library. Even compared to Haskell, it's quite modular. All right, so we talked about fetch, we talked about Affjax. I do want to spend some time with fibers. Fibers are a really interesting feature: they essentially allow you to take Affs and control these forked computations. You can start to reason about an Aff as a unit of asynchronous computation, but a fiber is really a chain of asynchronous computations that you can then call various methods on. So you get some ways to reason about it. You can see forkAff — I think that's probably the most descriptive, intuitive function if you're coming from Haskell, because it works exactly like forkIO. You'll get IDs, you'll get cancellation capabilities, and you can join fibers and kill fibers; you can make a fiber invincible, or you can make an Aff invincible. So there are lots of different capabilities that you can set up for fibers and for working with complex asynchronous computations. All right, so just to summarize PureScript's approach to asynchronous computations. You get lots of different type-safe wrappers to define the outputs of unfinished computations, and you also get control over synchronicity and asynchronicity using Effect and Aff. Effect is for synchronous effects — which is why you have to supply callbacks everywhere — and Aff is for asynchronous effects, so you don't have to think about supplying callbacks. You can just operate on your data without too much concern about what is synchronous or asynchronous, although it is clear in your notation which one is which, because we're in do notation. You also get type-level knowledge of blocking operations, which is, I guess, the corollary to the above. You get abstractions for chains of computations that follow familiar type class interfaces and laws, you get the flexibility of Fluture but at compile time, and you get do notation.
So let's go take a look at a production example. I want to show you a library that I've been working on. We're pulling in a bunch of WebSocket things — we're using the ws library from JavaScript. We've pulled in a bunch of foreign imports, which are just wrappers around various library calls, and what we can set up, through a whole bunch of layers around the specific WebSocket interface that we're using, is this QueryM monad, which is just a ReaderT over some config and an Aff. This essentially allows us to set up some guarantees — by adding and removing listeners all over the place as queries go out and come back — to abstract over the fact that WebSockets don't give you good request/response guarantees. So this builds up a dynamic dispatch system. You can see we do quite a bit here to build up a type-safe dynamic dispatch system: we build up listener sets and methods for adding to and removing from a listener set. But through this system we actually get a very nice API for working with WebSockets, where you don't have to concern yourself with some mother listener that listens for everything this WebSocket could possibly spit out. We have one, but it's buried in some async helpers, and we can abstract over it. So this is a really, really strong use of the Aff monad, I think, in PureScript. I encourage you to take a close look at this. This is the Cardano browser TX library on the Plutonomicon; it's one that I've worked on with the team at MLabs. Well, that's it. Thank you for watching. There should be a Q&A session right after this, if I was able to be there in person. Just a reminder: MLabs is hiring Haskell, PureScript and Rust developers at all levels. We're a premier blockchain consultancy working on Cardano, Solana and Polkadot. Please take a look at mlabs.city for more details, and I will see you at the Q&A. Hey everyone.
So yeah, if you have any questions for Ben, put them in the chat, and then see you at the event. Hey there — I also have all my examples up. I know I had to go through that fairly quickly in the prerecord, so if you have questions, if I went too fast, feel free to let me know, and I'm sure I can share my screen here as well. I'll hold a minute for questions, because I think if I share my screen it might — oh, I think it might be okay. "Which PureScript async handling approach is mostly used in popular libraries in production, and are you using XMonad?" I am using XMonad. Which PureScript async handling approach is most popular? I would say Aff. Any library that's doing network requests, and quite a few others — you'll see Aff first and foremost, I would say. "How can code using Aff coexist with something that uses promises or some other API?" Yeah, let me share my screen quickly, because I just want to point you to a library. Once my screen settles down from Zoom being crazy with it — there is this great Control.Promise module, if you're on Pursuit, that lets you freely convert to and from promises and Aff. The conversions to and from Aff are all here. I'm just going to flip back over to the chat as well. So that way you can pass things back and forth from promises to Aff and Aff to promises. Especially if you're dealing with a JavaScript library returning promises — super, super common these days — you can flip back over to an Aff easily. "Does Aff support canceling a promise? If so, how?" I have not dug into the specific way it cancels a promise. You certainly get a cancel method for any Aff, but let's take a look over here. I don't actually see cancelers that get spit out in this library, so it may be doing something just a little bit different than what you'll see over in the Aff library.
So the Aff library has a number of constructors to build up an Aff, but you'll see they always return something like an effect with a canceler. I actually don't get that here. So it might be that you can't generically just get a canceler out; it may be that it modifies your promise callback in some way. But you could always take a look at the source, I suppose — it's going to take me a while to parse that. I'm going to go with: right now, they're not easily cancelable right out of the box, but I encourage you to take a look at the docs on that one. "Is there a significant performance difference between the different handling approaches?" I don't notice a significant performance difference, but certainly there are going to be some penalties; some approaches are going to be better than others. I don't think one is severely worse than another, but your mileage may vary. There you go — Robert just confirmed that promises can't be canceled because they come from JS land. Thank you, Robert. "Does it make sense to write custom FFI around HTTP calls like fetch?" I wouldn't really do that in production; I would just reach for Affjax. If you really don't like the way Affjax is doing something, you do have the option, like I just showed. I don't think there's anything super wrong with it, but in general I do prefer working with Affjax, at least working from PureScript. Working in JavaScript, I really prefer to work with Fluture — and again, there's a set of methods in the Fluture library to go from a promise to a future and back. "How does it compare with async/await in JavaScript?" I'm not quite sure what you mean by "it", Anupam. Okay, so how does Aff compare with async/await in JavaScript? I would say they're really comparable in terms of making asynchronous programming appear synchronous. I see there's some back and forth also about promises and canceling; we'll get to that. But I would say Aff is really comparable to async/await, but with more type safety.
Certainly, because we're in a Haskell-like language — we're in PureScript — it's going to force us to make sure that our binds and our lets are separated out, whereas with binds and map calls and lets on a promise, or with async/await, if you put an await call in a place where you don't need it, I don't think anything bad happens — at least not last I checked. So it looks like there are these AbortControllers, which I've actually not heard of, but not every promise supports AbortControllers. "Is Affjax using fetch or the old Ajax APIs?" Affjax does use the older Ajax APIs. In fact, I believe it uses the xhr2 library in Node and XMLHttpRequest in the browser. So it is an extremely old Ajax API, which gives you compatibility everywhere, and it means you're using an API that is very highly optimized, because it's been around forever. Thank you very much, everyone. I suppose that's all the questions. Take care, have a great day. Thanks for watching. So everyone, I do want to give a shout out to Ben. Ben has been awesome, and the reason is that it's actually like three or four in the morning in Toronto — he woke up early to give this talk. So thanks, Ben; I'm pretty sure everyone here loved that. And yeah, it was great to have you on. So for the final talk of the day, we have Nathan. Nathan too is a member of the PureScript core team, and he works for Arista Networks. On a different note, he'll not be talking about monads; he'll be talking about managing the stack when we call the inevitable recursive functions — because in functional programming, everything is recursive. Nathan was unable to make the conference, but fortunately he has made a recording and sent it to us. So on we go. Hey everyone, I hope you're having a great conference. My name is Nathan Faubion, and welcome to Taming the Stack in PureScript. Let's face it: PureScript kind of has a call stack problem. And this is actually pretty common in functional languages.
And what it comes down to is that PureScript lacks loops. Most languages have some sort of looping construct, but PureScript only has recursion. In most runtimes, recursive calls — really, any function calls — take up additional stack space, and recursive calls aren't special in that regard, which means that if we loop too many times with recursion, we'll get a stack overflow. A common way around this is something called tail call optimization. Tail call optimization is kind of a misnomer, I feel, because is it really an optimization if it's necessary to write correct programs in a functional language? But the gist of tail call optimization is that it turns first-order self-recursion in tail position into a JavaScript loop. I just say JavaScript because that's our most common backend. First-order just means that we only call the function recursively in a non-higher-order context: it's not passed to a function, it's not captured under a lambda that someone else might invoke dynamically, or anything like that. Self-recursion means that the only function we call recursively is ourselves. And tail position means the call happens as the last thing to evaluate in the function body: the last thing to do is call ourselves recursively. TCO is specifically a static transformation. It only looks at local information for a particular recursive binding; it doesn't involve any sort of global control flow analysis or anything complicated. It's really straightforward to implement. And just to bring it up, the general form of this is called tail call elimination, and other runtimes that are not JavaScript, like Erlang or Scheme, support dynamic tail call elimination, which means that any dynamic call in tail position will be optimized to not take up additional stack space. Safari actually supports this for JavaScript, but no other JavaScript runtimes do, unfortunately.
To get an idea of what we're doing, I just want to look at a very, very basic, quick example. This is a common data type, the list data type; this is exactly what's in the core libraries. And let's look at a function, sum. All this does is pull out integers and add them together. It has one recursive call to itself, but it's not in tail position: it calls itself, but then, after that call is done, it has to take the result and add x to it. So this would not be optimized by the compiler. However, we can do a simple transformation. This is what's called a worker-wrapper transformation, which has a worker, go, that is recursive, and a wrapper that invokes the worker. In this worker, we take an additional argument, which is an accumulator, and then we keep that state, and we only call ourselves in tail position. So whereas before we invoked the recursive function and then did something with the result, in this worker-wrapper transformation we take the result, add it to our running accumulator, and then recurse after that. This gives us tail position. And since this is also first-order, this will get compiled and optimized into a tight JavaScript loop; it doesn't have to eat additional stack space or allocate anything. One thing that's actually interesting is that any first-order, non-tail-position recursion can be mechanically transformed into tail recursion — into this kind of form that we're looking for. As long as it's first-order, any of that recursion can be rewritten into tail recursion. And you might say: okay, if it's mechanical, can the compiler do that? The answer is that lots of compilers do. PureScript does not, but the steps I'm going to take you through are transformations that many, many compilers do automatically for you. It's just that it often has a trade-off.
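What TCO buys us can be sketched in JavaScript directly. Once the accumulator-style `go` is self-recursion in tail position, the compiler can emit a loop instead of recursive calls; written by hand, the emitted code looks roughly like this (the cons-cell representation below is my own stand-in for the list type):

```javascript
// Lists represented as { head, tail } chains, with null as Nil.
function sum(list) {
  let acc = 0;
  let current = list;
  // Each "recursive call" just rebinds the arguments and loops:
  // no new stack frames, no allocation.
  while (current !== null) {
    acc = acc + current.head;
    current = current.tail;
  }
  return acc;
}

// A million-element list that would overflow the stack with
// non-tail recursion, but is fine as a loop.
let xs = null;
for (let i = 0; i < 1000000; i++) xs = { head: 1, tail: xs };
// sum(xs) === 1000000
```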
And so, depending on the transformation it has to do, it might require a lot of additional allocation on the heap, which may make your algorithm a little bit slower. So if you need performance, and you know you have bounded input that isn't particularly large, you can go for a recursive implementation that's going to be a lot faster. But if you have unbounded input, then it might be a good idea to just eat the heap cost, the allocation cost. It might use more space, and it might be a little bit slower for access because it has to do more dereferencing, but it will be stack safe. So our example here that we're going to try to transform is a little bit more complicated. We're going to look at a data type that's really similar to our list data type: a binary tree. This isn't really that much different — it's just that we have an additional recursive node. What makes this relevant is that our recursion actually forks: we have to recurse down two separate branches. If this were a balanced binary tree, this wouldn't be that big of a deal, because it would be logarithmic in depth, and so you wouldn't really have to worry about the stack in that case. But we're going to assume that this is non-balanced, so we have no idea — it may be totally left-associated or totally right-associated. So we have to be kind of careful. We're going to look at an implementation of map for this data type, and it's pretty straightforward; this is actually what the compiler will derive for you. Unfortunately, this is not stack safe, because we have two separate recursive calls and neither of them is in tail position. So what do we need to do to turn this into a tail-recursive implementation that the compiler will optimize into a loop? First, let's look at where our recursion happens.
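The naive tree map described here, transcribed into JavaScript with ad hoc `Tip`/`Node` stand-ins for the data type, looks like this. Neither recursive call is the last thing the function does, which is exactly why it can overflow on a deep, unbalanced tree:

```javascript
// Binary tree: Tip is the leaf, Node holds left, value, right.
const Tip = { tag: "Tip" };
const Node = (left, value, right) => ({ tag: "Node", left, value, right });

function mapTree(f, tree) {
  if (tree.tag === "Tip") return Tip;
  // Recurse down both branches, then rebuild the node afterwards —
  // neither call is in tail position, so each one grows the stack.
  return Node(mapTree(f, tree.left), f(tree.value), mapTree(f, tree.right));
}

// Fine on a small tree:
const small = Node(Node(Tip, 1, Tip), 2, Node(Tip, 3, Tip));
const doubled = mapTree(x => x * 2, small);
// doubled.value === 4
```

On a tree that is one long left spine hundreds of thousands of nodes deep, this version blows the stack; that is the problem the following transformations solve.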
Our next step will be to move all of our function arguments into bindings. This isn't strictly necessary, but it makes our evaluation order very clear, and really tells us that this is obviously not in tail position, because we have to call these functions, get the result, and do something with it afterwards. So this is definitely not in tail position, and it makes it a little clearer what's happening. Next — this is a big jump, but it's not too complicated; I'm going to walk through it. We're going to convert this into what's called continuation-passing style, which is just callbacks. It makes our evaluation order explicit. It's very similar to the worker-wrapper transformation that we did before with our sum implementation: we have a wrapper here, we have a worker go here, and we have this extra argument, c-o-n-t — cont, or continuation. Really all this is doing is taking those bindings that we pulled out — so if we look at this one, we have the binding on the left-hand side here and the call on the right-hand side — and flipping them around, so that now our call is on the left-hand side and our binding is on the right-hand side. Because it's a lambda, and the way we format it, this will end up looking like stair-stepping, but it's really just another way to look at the flow of evaluation. And anywhere in our old implementation where we were doing nothing but returning a value, we replace that with a call to our continuation, cont. One way to look at this is that it's the same sort of transformation as the accumulator: this continuation is just an accumulator, and we're literally accumulating a program — we're literally accumulating code to run. So we're traversing this.
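The CPS step can be sketched in JavaScript against the same ad hoc tree representation. Each step passes its result to a `cont` callback instead of returning it, so the "accumulated program" lives in the nested continuations. (This is still not stack safe in JavaScript — the point at this stage is only that every result now flows through a tail call.)

```javascript
const Tip = { tag: "Tip" };
const Node = (left, value, right) => ({ tag: "Node", left, value, right });

// Continuation-passing style: go never returns a tree directly,
// it hands every finished subtree to cont.
function mapTree(f, tree) {
  const go = (t, cont) => {
    if (t.tag === "Tip") return cont(Tip);
    // The "stair-stepping" lambdas: finish the left branch, then the
    // right branch, then rebuild the node and pass it on.
    return go(t.left, newLeft =>
      go(t.right, newRight =>
        cont(Node(newLeft, f(t.value), newRight))));
  };
  // The identity continuation kicks things off.
  return go(tree, x => x);
}

const small = Node(Node(Tip, 1, Tip), 2, Node(Tip, 3, Tip));
const doubled = mapTree(x => x * 2, small);
// doubled.value === 4
```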
We're doing this algorithm, and our accumulator is just a new program to run. Really, really interesting; really elegant. Our next step is to lift these callbacks into explicit named bindings. You'll see this a lot in this kind of work: put it in an explicit binding. This makes everything clear about what's actually happening — what are all the variables involved? You can see we make our closures explicit: we have to make sure we capture all of them and pass them to the continuation. This makes our dependencies very obvious. Our next step is to take these closures and turn them into a data type. Here we have our identity continuation — that's what we get started with; we have our left-hand-side continuation, which goes down the left-hand side of the tree, and our right-hand-side continuation, which goes down the right-hand side of the tree. So we're just going to turn that into a sum type: one constructor for each closure, and all the values they capture just get put into the data type. Again, we have our cont identity, which is kind of a nil value that tells us we're done. Then what we're going to do is take all of those closure bindings and turn them into a single function, eval, that cases on this data type and executes the code that was in those closure bodies. So if we go back here, we see cont lhs: we have this let binding, this go with cont rhs, and that finishes it. And it's doing the same thing here: we have cont lhs, the let, and the new go call, and then cont rhs. Now, instead of invoking our continuation directly where it was a function, we turn it into a call to eval. So we have eval — and this is a typo: it says "next" here, but it should be cont; you'll see up here it's eval cont. So instead of calling cont with Tip, we just call eval with the cont and our return value. And you'll notice here one interesting thing: eval is always in tail position. Eval is in tail position here.
Eval is in tail position here; go is in tail position here and here. So all of our calls are in tail position. We have a first-order algorithm where everything is in tail position. The only problem now is that it's not self-recursive: we have a mutually recursive set of bindings. So we're going to do a transformation similar to what we did before with our continuations: we're going to turn our mutually recursive go and eval calls into a data type as well, kind of like how we turned our continuations into a data type. We have our map-go constructor and its arguments — the function we're mapping, the binary tree, and our accumulator — and we have our eval constructor with our return value and the accumulator. And we're going to turn these into a case: instead of having separate bindings, again, we move this into data types and a case. If we look at this now, all of our calls to go are in tail position, and we only have a single self-recursive loop. So this will turn into a nice, stack-safe implementation of this algorithm. And this is all very mechanical: you can apply these transformations to essentially any recursive algorithm that satisfies the criterion of being first-order. So if that's all you want out of this talk, that's fine — you can leave it at that; that's all you need to know to write stack-safe, basic PureScript algorithms. But I'd like to keep going. There are a few things I want to explore with this, which I think are pretty interesting. One is that instead of using raw tail recursion — direct references to go, our worker — you can use the tailRec function. This is in the standard libraries, and it's the same idea as before, but tailRec makes our worker correct by construction: we can't use go accidentally wrong, and it'll always be stack safe, because we have to return an explicit data type at the end of each iteration.
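The end result of the two defunctionalization steps — continuations into a data type, then the mutually recursive go/eval pair into a single loop — can be sketched in JavaScript. All constructor names here are my own stand-ins, but the structure follows the transformation just described:

```javascript
const Tip = { tag: "Tip" };
const Node = (left, value, right) => ({ tag: "Node", left, value, right });

function mapTree(f, tree) {
  // Defunctionalized continuations: each CPS closure becomes a
  // constructor holding exactly what it captured.
  const ContId = { tag: "ContId" };
  const ContLeft = (val, right, cont) => ({ tag: "ContLeft", val, right, cont });
  const ContRight = (left, val, cont) => ({ tag: "ContRight", left, val, cont });

  // One self-recursive loop over a "which function are we in" state:
  // Go carries a tree still to visit, Eval carries a finished subtree.
  let state = { tag: "Go", tree, cont: ContId };
  while (true) {
    if (state.tag === "Go") {
      const { tree: t, cont } = state;
      state = t.tag === "Tip"
        ? { tag: "Eval", value: Tip, cont }
        : { tag: "Go", tree: t.left, cont: ContLeft(f(t.value), t.right, cont) };
    } else {
      const { value, cont } = state;
      if (cont.tag === "ContId") return value; // nothing left: done
      state = cont.tag === "ContLeft"
        ? { tag: "Go", tree: cont.right, cont: ContRight(value, cont.val, cont.cont) }
        : { tag: "Eval", value: Node(cont.left, cont.val, value), cont: cont.cont };
    }
  }
}

// Stack safe even on a 100,000-deep left spine:
let deep = Tip;
for (let i = 0; i < 100000; i++) deep = Node(deep, 1, Tip);
const bumped = mapTree(x => x + 1, deep);
// bumped.value === 2
```

The continuations that used to live on the call stack now live on the heap, in the `cont` chain — which is the space/speed trade-off mentioned earlier.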
So this makes it a little bit easier to keep things straight. You don't have to do this — I almost never use tailRec — but it is nice to get that guarantee; the only issue is that it might be a little bit slower. But now I'm going to look at something here called a greatest fixed point. If we take our worker and put it in its own function, we can treat it like a state machine: it takes a state and transitions it to a new state. So we've got a map call, and every time you invoke the transition function, it transitions to the next state of the map call. All we've essentially done is remove the Loop and Done wrappers — we've taken the worker and removed the extra wrappers; you'll see here, those are just missing. We just map each state to a new state. We can then take the loop that we had before — the wrapper — and define this eval function, which will evaluate a map call and turn it into a b. So we call our stepper, and then we case on our identity continuation: if we know that there's nothing left to evaluate in the continuation, then we can just return the result. (I think there was actually a mistake with these arguments, but it's the same idea.) And if it's not our identity continuation, then we just continue stepping. This corresponds to what's called small-step semantics in programming languages. And this is great, because we have a very clear evaluation state that we can start and stop whenever we want, which is great for actually writing things like debuggers. We've essentially defined our own little language just for this map call, and if we wanted to, we could use our stepper function here to step through every single evaluation step of this function. If you wrote a little language, you could use this approach to essentially make your own little debugger for it. So it's super useful.
Now I want to look at — you know, we had to match on our identity continuation specifically in that eval; that's the only thing here. So I want to look at what this Step data type is, this Done and Loop. Step here is Step a b: we've got Loop a or Done b. And this is essentially just Either with more specific names, to make it clear what's happening. Originally, tailRec actually used plain Either, but it was very easy to get the sides mixed up — which one is it, Left or Right? — and so it was changed to use explicit names, which makes it a lot easier to use. But in essence, it's really just Either. So we're going to transform it to Either: we're not going to use Loop or the Step data type, we're going to use Either, and we're actually going to flip the meanings around. Done means Left, which kind of makes sense: if you think of Either as sort of an error condition, or a way to halt or short-circuit the computation, then Left is kind of like "we're done now — there's nothing left to compute." And so Right becomes our Loop constructor; we're just flipping the meaning. Then we're going to use the PureScript fixed-points library — fixed points of functors. Specifically, the greatest fixed point is the Nu data type, and we're going to use our Either b functor — so it returns a b — and we can write this fixed function, which operates just like tailRec: it will evaluate any fixed point over Either b and return a b here. It's the same idea: on Left, return — it'll just return that; on Right, keep going. So it's essentially what we were doing in our eval function. We can then take our step function and compose it with a termination condition — that's kind of what Loop and Done are: just a way to communicate that we need to terminate.
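The Loop/Done machinery and the tailRec-style runner can be sketched generically in JavaScript. This is a reconstruction of the idea, not PureScript's actual implementation; `Loop`, `Done`, and `tailRec` here are my own stand-ins for the Step type and runner being described:

```javascript
// Step a b, morally Either: Loop carries the next state,
// Done carries the final result.
const Loop = value => ({ done: false, value });
const Done = value => ({ done: true, value });

// The generic runner: keep feeding Loop states back into the step
// function until it says Done. The step function itself never
// recurses, so this is stack safe by construction.
function tailRec(step, start) {
  let result = step(start);
  while (!result.done) result = step(result.value);
  return result.value;
}

// The sum example as a state machine: state is { acc, list }.
const sumStep = ({ acc, list }) =>
  list === null ? Done(acc) : Loop({ acc: acc + list.head, list: list.tail });

let xs = null;
for (let i = 0; i < 100000; i++) xs = { head: 1, tail: xs };
// tailRec(sumStep, { acc: 0, list: xs }) === 100000
```

Because the step function must return an explicit `Loop` or `Done`, there is no way to accidentally write a non-tail recursive call: that is the "correct by construction" guarantee.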
And so we can compose our stepper with that termination condition and get the same result down here: a map call from A to B gets evaluated to a B. So now that we've looked at what the greatest fixed point is, let's see what happens if we try to swap it out for a least fixed point. The greatest fixed point, Nu of Either B, corresponds to an existentially quantified A and a tuple that has our step function and the state. The existential quantification over A just means that we can't see it from the outside: if you look at all the types, this A never shows up in any signatures, so it's totally abstract. We have some abstract state, and we have some stepper function that takes that state and either gives us our result back or a new state to keep looping with. So in order to evaluate it, we have to keep looping until we find our Left constructor, which tells us to terminate. If we pull that Either apart, we're left with Nu F: we've got A to F of A paired with an A value, an abstract A value. One thing that's actually really interesting about existential quantification, and why a lot of functional languages don't directly support it, is that existentials can be eliminated through what's called the closure isomorphism, which just means that any existential has an encoding as a closure. This is where you get the saying that objects are a poor man's closures, or closures are a poor man's objects. In functional languages, you often don't need existentials; you can encode them through closures. So we're gonna look at what that isomorphism means, but with a simpler example, not our fixed-point type. We're gonna look at this CanShow type, which is a pretty common example.
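Here is a hand-rolled version of the existential encoding being described (I'm defining Nu inline rather than relying on the exact purescript-fixed-points API, so take the names as a sketch): some hidden state paired with a step function, where the A never escapes.

```purescript
module NuSketch where

import Prelude
import Data.Either (Either(..))
import Data.Exists (Exists, mkExists, runExists)

-- The greatest fixed point: a hidden state `a` together with a
-- step function `a -> f a`. The `a` shows up in no signatures.
data NuF f a = NuF (a -> f a) a
newtype Nu f = Nu (Exists (NuF f))

unfold :: forall f a. (a -> f a) -> a -> Nu f
unfold step s = Nu (mkExists (NuF step s))

-- Evaluating Nu (Either b): keep stepping until Left says stop.
run :: forall b. Nu (Either b) -> b
run (Nu e) = e # runExists \(NuF step s0) ->
  let
    go s = case step s of
      Left b -> b
      Right s' -> go s'
  in
    go s0
```

Data.Exists is itself the CPS trick the talk is about to explain: the consumer passed to runExists must be polymorphic in the hidden type, which is what keeps it abstract.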
I wanna somehow represent an abstract data type that supports a show function that can turn it into a string. So we'll look at this type CanShow. If it were existentially quantified, it would be exists A, with a function A to String and that A value. This existential can be encoded as a closure with a rank-2 universally quantified eliminator, which is just a really complicated way of saying we're gonna CPS it: we're gonna turn this data type into continuation-passing style. But in order to preserve the abstractness of A, we have to make sure the continuation that consumes these values is polymorphic in the A value. This rank-2 universal quantification just means that whatever implementation goes in here must treat that A abstractly. What that means in our example here is: if our consumer uses this callback and has some abstract A, it can't do anything with it. It can't, say, add it to something, because all it has is this function, paired with the value, that turns it into a string. So having that A and a function that turns it into a string is equivalent to just having a string, and really, because of laziness and all that, it's equivalent to having a deferred string, a kind of lazy string. That's why in Haskell you'll see existential quantification treated as kind of an anti-pattern. It's not so much that it's an anti-pattern; it's just that Haskell is really good at universal quantification, so you can just encode the existential with universal quantification and it'll probably be a lot easier to use. So we're gonna try to apply this transformation to our Nu type. We've got a tuple of A to F of A, and an A. Before, with our existential A, we could essentially just turn it into a unit type, right? We can't do anything with it except apply this step function to it.
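A sketch of the CanShow encoding just described (the helper names are mine): the existential exists a. Tuple (a -> String) a becomes a closure with a rank-2 eliminator.

```purescript
module CanShowSketch where

import Prelude

-- CPS encoding of `exists a. Tuple (a -> String) a`. The rank-2
-- continuation must be polymorphic in `a`, keeping it abstract.
newtype CanShow = CanShow (forall r. (forall a. (a -> String) -> a -> r) -> r)

mkCanShow :: forall a. (a -> String) -> a -> CanShow
mkCanShow f a = CanShow \k -> k f a

-- The only thing a consumer can ever do is apply the function,
-- so a CanShow is equivalent to a deferred String.
toString :: CanShow -> String
toString (CanShow k) = k (\f a -> f a)

example :: String
example = toString (mkCanShow show 42)
```

Since all a consumer can do with the abstract value is feed it to its paired function, the whole package really is just a delayed String, which is the talk's point.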
But the problem when we do that with this type is: whereas before, with the string, the A only appeared as an argument to the function, here it also appears in the return type. It occurs in both positive and negative positions of this type, which means you literally can't write this encoding directly. You have to use a type-level fixed point for the recursion, what's usually called Mu. There is a data type for this, and we'll get to it, but PureScript and Haskell actually apply this to any data declaration, data or newtype, not type aliases or type synonyms. Any data or newtype gets this implicitly applied, and that's what lets you write recursive data types. So in order to write this fixed point ourselves, we use this data type Mu F, and this is what creates the recursion: we've created a recursive data type here that we can apply our F to. We can then factor out that unit arrow, which is the Function Unit functor. That gets us the simpler Mu data type, which is what exists in purescript-fixed-points, and then we can build our sort of delayed Mu, which we'll need for evaluation-order purposes. We don't want to be too strict, so we'll just compose our F with the Function Unit functor: our delayed Mu is Mu of Compose of Function Unit with F. So now let's look at whether we can write a fix function for this, and it's actually pretty easy. We're gonna use the delayed Mu and the same Either B functor, and you'll see here we're importing Data.Functor.Mu and Data.Functor.Compose. Unroll just takes the Mu wrapper off, which leaves us with a Compose, so we have to take that wrapper off too. Then we have our thunk; we call the thunk, get our Left or Right value, and it's the same sort of thing: Left just returns the result, and on Right we recurse on the Mu.
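The delayed least fixed point and its fix function, as I understand them from this description (the module names are the real purescript-fixed-points ones; the code layout is my reconstruction, not the slide):

```purescript
module MuFix where

import Prelude
import Data.Either (Either(..))
import Data.Functor.Compose (Compose(..))
import Data.Functor.Mu (Mu, unroll)

-- Mu over our functor, with a thunk composed in so that
-- evaluation is delayed.
type DelayedMu f = Mu (Compose (Function Unit) f)

-- Unroll the Mu, unwrap the Compose, force the thunk, then
-- either return the result (Left) or recurse (Right).
fix :: forall b. DelayedMu (Either b) -> b
fix m = case unroll m of
  Compose thunk -> case thunk unit of
    Left b -> b
    Right m' -> fix m'
```

The recursive call to fix is in tail position, so this again compiles to a constant-stack loop; the Function Unit layer is what keeps the structure from being built eagerly.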
So we've written our fix, and it's actually really simple to swap this in. There are a few things here. One is that I've replaced Loop and Done with Right and Left; otherwise it's the exact same worker, the logic here is exactly the same as the worker we had written before. Our wrapper is just a little bit different: we've got our new fix function here, and the worker has this extra roll and Compose to make the types work out, and then we have our delayed computation here. This implementation is just as stack safe as the other one; it will run in constant stack, because we've bundled everything up into these thunks, this Function Unit. So, can we undo this transformation? We did all of these steps, defined all these data types, to turn it into a tail-recursive function, but this function now isn't even really tail recursive. I mean, it's recursive, but it's not tail recursive, because we're returning Right. It's stack safe purely because of the way we're computing: we're using fix here, and delayed computation. The delayedness is crucial. If you remove that Function Unit, this will not work; it will just continue to spin and explode the stack. So the delay is absolutely crucial. But we can start to undo these transformations. We're going back now to our mutually recursive go and eval, and the same thing holds: this is just as stack safe as the other one, it runs in constant stack. Can we go further? We can keep going: we can remove our continuation data types and just use named continuations here, and again, this is still stack safe.
You can go even further and turn it back into our original continuation-passing style, and again, this is still stack safe. Instead of just identity, though, what our identity ends up being is a roll of a Compose of a const of a Left, which is kind of extreme, but this is still stack safe, and in some ways this is a lot nicer to use. Can we go even further? We had that direct style where it was just bindings, and we can get back there, but first I want to look at what this data type is that we're working with. So if we take this type alias, we're gonna call it Rec: it's Mu of Compose of Function Unit with Either B. That's a very big, fancy-looking type, so let's inline it and get rid of all the functor machinery for now, and turn it into a pretty straightforward recursive definition: Rec B is equal to Rec of Unit to Either B or Rec B again. This is why we needed Mu: we have this recursive data type that recurses right here. Now I'm gonna do kind of an interesting transformation: I'm gonna take this unit arrow and distribute it over the branches of the Either. So instead of Unit to Either B, we have an Either where the unit arrow sits under each constructor. This isn't necessarily a safe transformation in general, but for this particular type it's equally expressive, so it's totally okay; we don't lose anything. So we've got this Unit to B and Unit to Rec B, and we'll rename those back to our nice Done and Loop: Done of Unit to B, and Loop of Unit to Rec B. And this Unit to B is kind of unnecessary, the unit here, so we can turn it into just Done B and Loop of Unit to Rec B, because if we want a delayed Done value, most of the time we can just wrap it in a Rec. So it's not super necessary for our purposes.
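Written out, the transformation described in this passage looks like this (the Rec name is the talk's; the intermediate rendering is mine):

```purescript
module RecSketch where

import Prelude
import Data.Either (Either)

-- Inlining Mu (Compose (Function Unit) (Either b)) gives a
-- plain recursive definition:
newtype Rec b = Rec (Unit -> Either b (Rec b))

-- Distributing the thunk over Either's branches, renaming the
-- sides back to Done and Loop, and dropping the unnecessary
-- thunk under Done:
data Rec' b = Done b | Loop (Unit -> Rec' b)
```

Distributing a function arrow over a sum isn't valid in general, but for a Unit domain the two forms carry exactly the same information, which is why the talk says nothing is lost.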
It might be necessary if you were doing additional composition with that Unit to B after you're actually done with it, since it preserves the laziness of the value. This is why in Haskell you never have to think about these units: everything implicitly is a lazy value. But we're keeping this, and this type is actually really interesting. If we turn the thunk back into its functor name, Function Unit, applied to Rec B, and abstract that away into an F, we've got Rec F B equals Done B or Loop of F of Rec F B. Wow. We can recover our Rec with Rec of Function Unit, and this Rec type is actually the naive definition of the free monad. It wouldn't be a Nate Faubion talk without something about free monads. So through this process, we've essentially derived it. It's naive because, while it's the true essence of it, the minimal definition of Free, there are some properties of it that aren't great; but for our purposes, for pedagogical purposes, it's totally fine. And then our Rec type alias is what we call Trampoline: Free of Function Unit. Instead of going through the process of writing the implementation for these, which could be an entire talk in itself, we're just gonna import the free library. We have to define this suspend function; it doesn't exist in the library exactly, it has a delay function, but we need this to delay evaluations. Again, delaying the evaluations is kind of subtle when you're dealing with this, so we're gonna use this definition. It's also equivalent to a bind through pure unit. With the free library and its particular implementation, you don't even necessarily need the Function Unit functor; you can kind of get around it, but for our purposes we'll keep it here. So we define this suspend function, and then we can write, just using
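Abstracting the thunk functor out of Rec gives the naive free monad; here is a sketch (the constructor names follow the talk's Done and Loop, and the suspend helper is my own definition over purescript-free, not the library's):

```purescript
module NaiveFree where

import Prelude
import Control.Monad.Free (Free, liftF)

-- The naive free monad: Rec with the functor abstracted out.
data FreeN f b = Done b | Loop (f (FreeN f b))

-- Rec is FreeN over the thunk functor; that's a trampoline.
type Trampoline = Free (Function Unit)

-- suspend delays evaluation by one layer of the thunk functor;
-- it's equivalent to binding through `pure unit` first.
suspend :: forall a. (Unit -> Trampoline a) -> Trampoline a
suspend = join <<< liftF
```

liftF wraps the thunk as a single Free layer, and join flattens the Trampoline it produces, so each suspend inserts exactly one delay.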
typical monad stuff with the Trampoline monad, our nice direct-style bindings again: go, with F bound to L, et cetera, just using pure at the end. So this is very nice, very minimal, very close to our original algorithm. It just has a little extra stuff; it still has the runTrampoline wrapper, but it's pretty close, just a little bit of syntactic noise. Clearly we're in a DSL here, our stack-safe DSL, and we're back to something that is very, very close to our original definition, and this is totally stack safe as well. So you may ask: why did we go through all that process of data types and everything, when you can just throw in these function calls? There are actually a lot of good reasons. I already brought up some: this is like defining your own little language with small-step semantics, and you could use this for debugging and stepping through code. But for more practical considerations, it's that when you have a tail-recursive loop that's specialized to your language subset, it's gonna do a whole lot better on the JavaScript backend. The Trampoline monad actually lets you deal with the higher-order recursive case, so any recursion anywhere that binds through Trampoline becomes stack safe, which is very nice. But we don't really need that here; we still have a first-order recursive algorithm. So if we go with the tail-recursive loop with explicit data types, the just-in-time compiler in JavaScript will hit that really hard, and it'll be a lot faster than the trampoline approach. The trampoline approach has a performance hit of anywhere from 25 to 50 times slower than a naive recursive algorithm; that's just in my experience of writing these functions. And in my
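Putting it together, a hedged sketch of the direct-style trampoline version (the sumTo example is mine, not the talk's; runFree comes from purescript-free):

```purescript
module TrampolineExample where

import Prelude
import Control.Monad.Free (Free, liftF, runFree)

type Trampoline = Free (Function Unit)

suspend :: forall a. (Unit -> Trampoline a) -> Trampoline a
suspend = join <<< liftF

-- Direct style again: ordinary do-notation, but each recursive
-- call is suspended, so the whole thing runs in constant stack.
sumTo :: Int -> Trampoline Int
sumTo n
  | n <= 0 = pure 0
  | otherwise = do
      rest <- suspend \_ -> sumTo (n - 1)
      pure (n + rest)

-- Force each thunk in a loop until the final value falls out.
runTrampoline :: forall a. Trampoline a -> a
runTrampoline = runFree (_ $ unit)
```

This is the trade described next in the talk: almost no work to make stack safe, at the cost of a substantial constant-factor slowdown versus a specialized tail-recursive loop.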
experience, the other one, using the tail-recursive loop with explicit data constructors, is about six to eight times slower than our naive algorithm. So there's a big performance difference, and that's really where it stands. Trampoline is obviously great: if you just want stack safety really quickly, without doing hardly any work, you can use Trampoline just like this. It'll give you something stack safe; it'll just be really slow. And I haven't benchmarked it, but I think it's promising to go middle-of-the-road for this first-order case, with our least fixed point approach, or the original one, where you're just using closures and a fix loop. So thank you, that is stack safety in PureScript. There's a lot more to talk about around stack safety; there are other stack-safe monads, like Aff and such, but they essentially just use a free monad sort of encoding, and that's what gives them stack safety. So that's kind of the end of that story. All right, thanks everyone. Everyone, since Nathan's not here, Anupam's gonna be taking the questions. Anupam, the first question is: for this first order, what does the question mean? Hello, your mic's off, your mic's off. Hey, can you hear me? Yeah, yeah, I can hear you. Okay, great. Yes, I'll try to take the questions. That was a pretty great talk, full of a lot of stuff, so I'll try to answer the questions as best as I can. So, the question is what first-order versus higher-order recursion means?
I think what Nate meant by that was this: first order would be a function calling another function recursively, directly, while higher-order recursion would involve passing a function that happens to be recursive into another function, and that function calling the passed-in value. So the compiler doesn't have static information on what function is being called, and that makes it a lot harder to optimize and make stack safe. That seems to be all the questions. If anyone has any questions, for the next 30 seconds just put them in the chat or on YouTube; I'll be having a look over there too. Okay, so there don't seem to be any more questions; if you do get any, put them in the chat later. So for now, thanks everyone for all the talks, and thanks to all the speakers. I wanna thank all the speakers for agreeing to do this, and now I'll hand it over to Anupam for the outro. Anupam, take it. Hey, thanks Aditya. So how's everyone doing? That was the end of all the talks, but the conference is not over yet, because we have the music jam coming up, which I'm really excited about. So please stick around for that. Yeah, what a great set of talks. The presenters, the speakers, put in a lot of work. They volunteered their time, and their blood, sweat, and tears, especially considering the time zone differences. So big thanks to all of them. The PureScript community is like the nicest community I've been a part of; everyone was very nice and very cooperative. So thanks everyone, and thanks to the people who made this possible: our host, Jaspe; this conference came out of a discussion that we were having with the Jaspe folks in 2020, so thanks for that. Hasgeek has been hosting this on their platform. And our sponsors who made this possible, MLabs and Melenso; they do functional programming work, and I think at least MLabs is hiring as well.
So do check them out; they do some great work. Also, thanks to our volunteers, especially Aditya for emceeing this, and everyone else who was involved, and thanks to the attendees. I hope you had a good time, and I hope you can take something back from the talks. We hope to have more talks, more conferences like this in the future, so please watch out for those. And I'll hand it back to Mike now for the Music Jam. So thanks everyone; enjoy the Music Jam. Thanks Anupam. Thanks for doing this; this was really great, you did a great job. Yeah, thank you very much, Anupam. It's really nice to have this, and hopefully next year we can have it physically in India. That would be really, really fun. Yeah, okay. So welcome to Music Jam. We'll let this go for as long as we feel like, but I'm gonna create a link now, pureconf2022. I have never been to India; it would be so, so nice to have a conference there. Okay. So I'm sitting at this link; whoever would like to be part of the jam can hop on. I'll stay on the Zoom call. Maybe we can do it for, I don't know, however long makes sense, but I'll see folks on there. So it's yapp.wags.fm slash p slash pureconf2022. I see six people on there, which is great. A way to do a sanity check in your browser (I recommend using Chrome or Safari; Firefox can be a little janky, but it works) is to just press play. So press play in the upper left corner, and if you hear somebody saying hello with some bloops and bleeps, then it's fine, and then you can press stop and it's all good. So I'm just gonna tweet this out, because you never know who will show up at these jam sessions. So I'm gonna invite folks to it: jamming at pureconf, and I'll hashtag it pureconf. Okay, great. So I just tweeted it; let's see if anybody else shows up. So yapp.wags.fm slash p slash pureconf2022. I see 12 people in there now, awesome.
So, somebody's already jumped the gun, because I was gonna give instructions on how to edit it, but somebody from the bunch has changed it from hello world to hello pureconf. So I'm going to start it right now, fire it up. We have a Proxy syntax, but we can also use a string syntax, which maybe makes it a little more ergonomic and a little bit easier to work with. So we can leave it at Proxy, and that way it'll continue running in the browser. So I'm just gonna create a single note; you should hear just this one note bleeping and blooping in there. And now what I'm gonna do is copy and paste a couple of links that can be really nice for you to get inspiration. One of them is this one, which has a lot of working examples that you can use sort of right off the bat. And another one is the sound library, the SuperDirt synth library. Let me send it to you, because otherwise you won't know what sounds exist that you could use just by copying and pasting them. Here you go. So I will copy and paste it into the document. Great. Okay, so if you click on that, you'll see the names of a bunch of sounds. For example, one of them is called glitch 2. So if I put in glitch 2, you'll hear glitch 2 now. So we have a speed, a note, and a glitch. So now this will be an excuse for me to drop in the documentation, although you'll learn by playing around. Specifically in that documentation, which is new, so there are a lot of stubs, there's a section on mini-notation, and that's all this is: this is a type of syntax called mini-notation. So what we can do is add a note to it. We could also make it a little bit slower, which I'll do now. There you go; so it's slower, for those of you that are listening. I don't know if there are 24 glitches. Actually, there aren't; if we look at that, glitch only goes up to seven. So if you change it to glitch three, or four, or five... sure, glitch five. Great, sounds nice. So that's it.
So if you add a sound, it'll add it to the mix, and it'll subdivide the cycle. A cycle is two seconds long. So now we're here: one, two. But if I add another note, then we'll hear triplets. So you can hear, it feels like it's sped up, because it has another note in there. There you go. Great, so we just added jazz 34 to the mix, and now it will speed up. Hey, Mike, sorry, a quick comment: we can't hear the sound from your presentation, so people who are not on that page are probably not able to hear it. Ah, okay. Yeah, I could diffuse the sound, for sure. My only concern, though, is that if folks are playing it live in their browser, then they'll kind of hear it doubled. Right. I hear people... I just heard somebody add a bass drum, so people are kind of going already. So maybe, if it's okay with you, I'll leave this sound off, but encourage folks to visit the website and press play so they can hear it, because already we have our creators creating and jamming. Sure, sure. Yep. So one thing I'd like to do now is go over subdivisions. So now we have five events that are equally spaced. Subdivisions are compile-safe, so I'm gonna make a compile error, and I'd like you all to see what happens. So you see that there's an error now. If you click on the exclamation point, you'll see the parser found an opening tag without a closing tag, and that's because we're using type-level programming to parse these symbols. So now I'll put in a closing tag and it'll update just fine. Great. I heard... I saw a gabba louder in there; I'll copy and paste it back, it disappeared for some reason. Yeah, there you go. And we'll add a closing bracket to it. Okay. And if the page ever stops working, just refresh it, which I'm gonna do now, because for some reason my page just crapped out on me. But that's fine. Great. So now we hear the gabba louder three in there; it's subdivided into three different units.
And you can change it to whatever you want. Like, I'll change it to notes just because I feel like it; it's anarchy. So now let's listen to the music that's happening in there. We hear four events: one, two, three, and then three events mushed together. And that's how you create subdivisions in here. So I'm gonna create another subdivision, except this time, within the subdivision, I'm gonna make a chord. And you make a chord with a comma. So I'll do notes 10, and you'll hear a chord. And again, refresh the page if for whatever reason the compilation is too slow. So now I have a chord, and within the chord we can also make some sequences. Yeah, I see folks doing that; sounds really nice. I'm gonna add a new line in there just to make it a little bit more legible; the parser will handle it just fine. So one thing I'm gonna do, just for fun, is start adding a couple of little effects chains in there, just to show you what's possible, and we'll see if anybody has any questions too. What does the grouping of notes do? The grouping of notes in square brackets creates a subdivision, and if you put a comma, it'll superimpose them, so you create a chord. And we have quite a nice one going right now; you can kind of follow along in the music and see what's being created. So now I'll add a rate change. We have a mega chord going on in there. It's taking a bit long to compile for me. Oh, okay. Yeah, I actually just hit that too. Maybe it's the Try PureScript instance that's barking; it just started having that issue for me as well. Wow, that's so nice; whoever added that, that sounds really cool. Yeah, I think there's a backlog on the Try PureScript server. Now I hear the tabla. I'm gonna send this out on the Discord, just because it's a really nice groove, in case anybody else wants to use it. I'll send it on the music channel. So I'll show you guys a little pro tip.
There's a function called onTag. I'll import onTag, and I've created a tag zero. For now I'm just gonna pass identity, but I'll fill it with something kind of cool. So now I'll put in changeRate with const 2.0. And my compilation just stopped as well; it's cutting out for me. Could be the Try PureScript instance that's displeased. But the music continues, which is fine. So you hear now that the thing with tag zero is higher, because I'm calling changeRate on it. We can make it even higher; so you hear it higher. I'll bring it lower again. And you can chain those onTags as much as you'd like: you can make as many tags as you'd like and add as many little changes as you'd like. There's a syntax error that was just... oh, actually no, it wasn't a syntax error; it was just the server 502ing. Yeah, I see what you're talking about; it intermittently 502s for me. Yeah, I hear the cymbal in there. I'll turn off my audio for now just so you can listen to the music. So yeah, let's let it run for about 10 or 15 minutes or so and just see where we go. I'll just chime in, because it looks like we've gotten into this persistent compiler error state, which is a bit embarrassing, but the server has never taken this many people editing a document at the same time; it's been a bit more tame in other situations, and now it's showing its limits. But it's really nice what you got before. It was looping for me for a while, and I didn't even realize it wasn't updating until I refreshed the page. Maybe for some folks... I can see it's still going, which is great. But the triangle synth example that I put below, I'll comment it in at a certain point, just so you can hear that that's a nice way to get a synthesizer going. But maybe, yeah, Anupam, if you want to do some sort of wrap-up... I see that it is crapping out a lot, and I'm sorry about that.
This is kind of a new frontier that it's hitting, in a good way, but too bad it has to be during this nice jam; what we had going sounded really cool. Hey, yeah, I think we put too much load on the server. So this goes through Try PureScript for the PureScript compilation, right? Yeah, exactly, exactly. And usually it's able to handle pretty heavy loads, I've seen. I'm looking at the logs right now, and Try PureScript is completely freaking out. So I think it's something about the way we're doing it: maybe the fact that it's all in one document, and it's going so fast and propagating to everybody as it changes, that's doing it, yeah. Right. No, it sounded pretty cool, and it was pretty cool to be able to change code in the browser and hear actual music come out. I actually had my nine-year-old son sitting with me, and he was telling me which sounds to pick. He was playing sounds, and I would pick a sound and put it in the code, and it got him really excited about making music with PureScript. So it was a great demo; thanks for doing this, Mike. I think we can have the jam on its own as an event sometime later. It'll be great to make some real music, something longer. Yeah, absolutely. And on my end, now that I have the server logs, I will look at them and see exactly what's going on. I talked a little bit about maybe deploying Try PureScript on AWS Lambda, so hopefully I'll do that and hopefully that'll fix the issue. Yeah, that'll be great. All right. If anyone else has something they would like to discuss or ask, just let me know. Otherwise, I think maybe we can close the conference. Aditya, do you have something more? Not really. I mean, like I said before, thanks everyone, thanks for attending. This is the first PureConf, and yeah, I thought it was great. It's nice to kind of see everything. I'm a bit into everything, so it's nice. I hear some stuff,
and I get it; I hear some other stuff and I'm like, oh man, it's just flying over my head. It was awesome; I liked all of it. And yeah, I hope to have another PureConf. It'll be fun to get everyone together again; there's always more to talk about, because things keep changing every year. That's it from my end; I don't have anything else. Great, yeah. So PureConf in India next time, that's the target. And so thanks everyone. Hope to see you in person next time, for the next conference. Take care. Thank you. Thanks everyone. Thanks James. Thanks. Thanks Mike. Thanks, that was great. Thanks.