Hi, everyone. How's everyone doing? Good. So today I'm going to talk about Lazy Functional State Threads. It's a paper written in 1994, so over 20 years ago, and it's about writing imperative programs in Haskell. I think it's a really interesting paper, and I'll get into why at the right time.

So: sometimes in Haskell we want to do strict, stateful computations. But Haskell is a lazy functional programming language. So how do you do strict computations in Haskell, and why would you want to? Because of reasons. But seriously, sometimes you want to do things like union-find, say in a graph algorithm, and that involves iteratively updating mutable state. Sometimes you want mutable hash tables, which are a pretty fast data structure. And sometimes you just want performance: the way Haskell thinks about things is different from the way the underlying hardware works, and the closer you are to the underlying hardware, the better performance you can get. So sometimes you really do want to write strict, stateful code in a lazy functional programming language. How do we do that?

In this paper, the authors introduce the notion of a state transformer. And they say: hey, it's a box. Just think of it as a box. Put some state in, you get some state out, and you get some result. Let's not think about what's inside the box right now. This generalizes really nicely to multiple inputs and outputs, because Haskell has tuples: you put some state in along with some inputs, and you get some state out along with some results.

So how do we start implementing this interface? How do we encode it in Haskell? We begin with a really simple function called returnST, which takes a value and just puts it inside the box, the ST box. Then we have thenST, which takes one box and another box and plugs them together, so that the state flows out of the first box into the second, and you get your result at the end. If you've done much Haskell, these type signatures might start to look familiar; they match a pattern we already know. And this is the pattern. At the time this paper was written, this type class and do notation hadn't made their way into the Haskell language yet, so the code examples in the paper are very much not in that style; they look kind of not great. I've taken the liberty, in this presentation, of translating them to use do notation, which is really handy.

The paper also points out that sometimes you don't care about the first result value at all, so they define a variant of thenST that discards it. If you've done much Haskell, you know it as, I don't even know what this is called, it's like a bind, but it throws away the first result. And later, when they start talking about arrays, which I'll get back to a bit later, they introduce a seqST, which takes a list of actions and runs them in sequence purely for their effects, throwing the results away.
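As a little sketch, here is what those combinators look like if we take the paper's own suggestion for the representation: a state transformer is just a function from the incoming state to a pair of result and outgoing state. The returnST, thenST and seqST names are the paper's; thenST_ is just my name here for the result-discarding variant.

```haskell
-- A toy model of the paper's combinators (a sketch; GHC's real ST is
-- different under the hood). A state transformer is a function from the
-- incoming state to a (result, outgoing state) pair.
newtype ST s a = ST { unST :: s -> (a, s) }

returnST :: a -> ST s a
returnST x = ST (\s -> (x, s))

thenST :: ST s a -> (a -> ST s b) -> ST s b
thenST (ST m) k = ST (\s -> let (a, s') = m s in unST (k a) s')

-- The "bind that throws the first result away", i.e. Haskell's (>>):
thenST_ :: ST s a -> ST s b -> ST s b
thenST_ m n = m `thenST` \_ -> n

-- Run a list of actions purely for their effects:
seqST :: [ST s ()] -> ST s ()
seqST = foldr thenST_ (returnST ())
```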
And in Haskell, we know this as sequence_. The underscore indicates that we're doing this for the side effects, not for the result values. So anyway, we now have a bunch of combinators: thenST, seqST, returnST.

Now that we've done that, we can start talking about references, because we actually want variables. Variables are really, really useful, because they give a name to a thing, and giving a name to a thing is really useful: it lets us work with it, pass it around, mutate it, keep track of it. You can't do those things unless the thing has a name. So what's an API for giving a thing a name? Let's start with this one: a way of creating a new variable, which takes a value and gives you a reference to that value; a way of reading the value inside a variable; and a way of writing a value to a variable. In the paper this is all written in the old thenST style, but if we translate it to do notation it looks a lot nicer. It looks like that. Yeah, pretty simple: read one variable, read the other variable, swap the two. Sweet. Great.

So now we get to what I think is the main contribution of this paper, the whole reason it was written. Okay: suppose we have this interface. How do we make it actually go? Let's say we have a function called runST that actually runs the box, that makes things happen, and suppose we defined it like this. This is an incorrect implementation. Why? Because it lets you do something like create a new variable in one state thread, and then read that variable from a different state thread. And that's wrong: you don't want to be able to share state between two different invocations, because things can change in the meantime. That's broken.

So how do we make this work? We use the type system. Again, if you've done much Haskell, this will sound familiar: we use the type system for a whole lot of things, and it turns out this is one of them. The incorrect implementation basically says: for all s and for all a, here is a function from ST s a to a. And the fix says: no, what we actually want is, for all a, a function whose argument works for any state s, which is rank-2 polymorphism. And again, my understanding of this is a little bit hazy, but the idea is that the argument you hand to runST has to work for any possible state that could be passed to it, inside that scope, and still give you back an a. Why does this work? Because if we go back to the broken program, it no longer type checks: we can't get a mutable variable whose type mentions s out at the end, because the s only has meaning inside.
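To make this concrete with code you can actually run: the paper's variable API survives in today's GHC as STRef, so here is the swap example, plus a use of runST, written against that real API.

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (STRef, newSTRef, readSTRef, writeSTRef)

-- The paper's newVar / readVar / writeVar, under GHC's modern names:
swap :: STRef s a -> STRef s a -> ST s ()
swap v w = do
  a <- readSTRef v
  b <- readSTRef w
  writeSTRef v b
  writeSTRef w a

-- runST's rank-2 type, (forall s. ST s a) -> a, keeps the references
-- from escaping: everything stays inside one state thread.
example :: (Int, Int)
example = runST (do
  v <- newSTRef 1
  w <- newSTRef 2
  swap v w
  a <- readSTRef v
  b <- readSTRef w
  return (a, b))   -- evaluates to (2, 1)
```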
One way that I like to think of this is that the s here is like a baton in a relay race. When you're running a relay, you carry the baton, and when it's the next runner's turn, you hand it over. The baton itself doesn't matter: we don't really care about the shape of the baton, the size of the baton, the properties of the baton. We just care that we have a baton. It's a token, basically, that is passed from one function, one box, to the next box. We don't really care what it is, but we can't take someone else's baton, or take a baton that we, you know, prepared earlier, and expect it to work. And that's essentially the way this works.

So, assuming you understand that, we move on to array references, and those are also relatively simple. We parametrize over the index, which is basically any type that has a notion of indexing. So we have a newArr that takes some bounds and the element type and gives us an array reference of that type. We have read and write. And then there's an interesting one: we take a mutable array, somehow freeze it, and it returns an immutable copy of the array. That's fine.

Here's an example of making it work, and this one I actually find fairly complicated: an accumArray function, which basically runs through the array doing something like a fold. It takes an accumulating function and some bounds and sequentially applies updates; it's kind of like a scan, I think, is how I'd describe it, if you're familiar with Haskell. Anyway, you can use this function to make a histogram of stuff, or to put the elements of an array into bins based on some property.

Okay, so we've covered references, we've covered arrays. Let's move on to input and output. The big kahuna. Input and output is essentially, one way of thinking about it, ST specialized to the type of the real world. And RealWorld here is just a placeholder; it stands for the world. You take the real world, you do something to it, and you return the real world. Okay, great. Let's do an example. Going back to the seqST I mentioned earlier: using this type, if you have a putChar and a getChar, you can define a putString. Makes sense. And the way putChar and getChar actually work is that the paper says, hey, let's do something cool: let's introduce a ccall primitive, a language feature that interfaces to C. Suppose we have that built in; then we can actually define putChar and getChar in terms of ccall. Okay, pretty straightforward. And then, like the main function in a C program, we have a mainIO that is the one place that acts as the equivalent of runST.
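Here are quick sketches of both of those pieces against today's GHC, since the modern names differ from the paper's: newArray, readArray and writeArray instead of newArr and friends, with runSTArray doing the freeze; and putChar is just the Prelude's, rather than being built on a ccall primitive.

```haskell
import Data.Array (Array)
import Data.Array.ST (STArray, newArray, readArray, writeArray, runSTArray)

-- The histogram use mentioned above: count occurrences into mutable bins,
-- then freeze. runSTArray does the final freeze for us.
histogram :: (Int, Int) -> [Int] -> Array Int Int
histogram bnds is = runSTArray (do
  arr <- newArray bnds 0
  mapM_ (\i -> do n <- readArray arr i
                  writeArray arr i (n + 1)) is
  return arr)

-- And the putString idea: sequence putChar over a string, seqST-style.
-- This is the same thing as mapM_ putChar.
putString :: String -> IO ()
putString = foldr (\c rest -> putChar c >> rest) (return ())
```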
Now, if you're following along with the paper, which I feel like no one is, which is fine, which is great: there's a part I've completely skipped, which is the formalization. And that's because I don't think it really adds to the understanding of the paper. It basically goes: hey, let's take a lambda calculus, let's extend it with runST and ccall, and let's prove that everything kind of works, that it fits together. [Audience] Actually, as far as I know, they didn't, in the original paper, prove that this kind of type trickery is enough. [Speaker] That's right, that's a good point: they don't prove the safety. [Audience] And a couple of months ago there was a paper which does prove it. [Speaker] Yeah, thanks for mentioning that. So they do formalize the language, a simply typed lambda calculus extended with ccall and runST, but they stop short of proving that runST itself is correct. They say: we've presented an informal argument; we think it's right; we just haven't done it. And recently, yeah, someone actually formalized it.

Anyway: implementation. Crucial to the implementation is the idea of updating things in place. Because if you copy things a whole bunch of times, that's not going to be efficient, at least memory-wise, and it's probably not going to be fast. Their informal argument basically boils down to: state is single-threaded, state is strict, and our runST function is, as we mentioned in the type, parametrized over any possible input state we could pass it. So in-place updates are fine; they're correct; it's all good. And again, as Gurgi mentioned, they didn't prove that at the time; it was only recently that it was finally proved correct.

So let's talk about efficiency. In Haskell we have the State type, and if you've tried to implement the State type, or looked at how it's implemented, this will look relatively familiar: this is essentially the State type, except for this current-state parameter. And as I mentioned, the current state is like a baton, or a token, depending on how you want to think about it; its actual value doesn't matter, which is what they say as well: you can just give it anything. But because of the way this is defined, the current state is passed from one computation to the next to the next, exactly like a baton. And that's what ensures that things happen in a sequential order. That's what they mean by threading; this is where the word "threads" in the title comes from. So it's lazy, it's functional, like Haskell is lazy and functional, but we have state, and it's threaded state: single-threaded.

The advantage of defining it this way, as opposed to some other way, is that it enables a transformation. You can take some code that looks relatively straightforward and inline it to get to that; and once you've inlined it, you can apply a strictness transformation to ensure strictness; and by the time it gets to the code generator, you can even throw away the state, because, as we've mentioned, the state's value doesn't really matter.

And now we get into the nitty-gritty of Haskell. Haskell, despite being a lazy functional programming language, has the idea of unboxed values, which are values that can't be thunks, and so are necessarily strict. They're denoted as primitives with a magic hash; MagicHash, I think, is the name of the language extension that allows you to use them. So if you go back to the state transformer, this is very similar to the State type definition, except that we use these unboxed values. And that's how we use them to define our API, for references for example, like newVar.
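For reference, this is close to how GHC really defines these types today, modulo module names: the baton is the unboxed, zero-width State# token, the unboxed pair costs no allocation, and IO's bind scrutinizes the result with case, which is exactly the strictness point coming up next.

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import Prelude hiding (IO)
import GHC.Exts (RealWorld, State#)

-- The state token is unboxed and zero-width; it exists only to thread
-- the data dependency, exactly like the baton.
newtype ST s a = ST (State# s -> (# State# s, a #))
newtype IO a  = IO { unIO :: State# RealWorld -> (# State# RealWorld, a #) }

-- IO's bind uses case, which forces the first action to run before the
-- continuation ever sees the new world token:
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO (\s -> case m s of (# s', a #) -> unIO (k a) s')
```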
They also make a special point of saying that freezeArray, naively implemented, might not be so efficient, because you're always copying: giving back the immutable array and throwing away the mutable one. But if you know that the mutable array is not going to be used in the future, you can just give back the mutable array itself, and it'll be fine.

So, the difference with IO: IO is implemented very similarly to that type, except for the fact that there's a case here instead of a let. And case in Haskell means it's going to be strict. And we know that it can be strict because it updates in place; and we know that we need to update in place, because the real world is something that always needs to be updated in place. The only reason you'd want to do something to the real world is because you want to update it in place.

And then they define some more stuff. They go: hey, it turns out it's useful to ask whether two variables are equal, whether two references are equal, whether two arrays are equal. And they also do something that I think is a step too far, personally, because it's one of those things that's useful in Haskell but also really dangerous. They say: everything so far uses the state exactly once. And you're like, yeah: there's no duplicating it, no dropping it, we always give it back; the only exception is runST itself, which throws the final state away. But then they ask: what happens if we actually relax that assumption? What happens if we duplicate the state? Then we are able to reintroduce laziness into our strict state threads, and that gives us things like a lazy readFile, which I think is very similar to the way things actually work in Haskell. But they say this is a really dangerous function, because we can't prove, we don't even try to prove, that what you're doing is safe; this is a burden we place on the programmer. The programmer using this function carries a burden of proof. And that's something that has gone largely unheeded by Haskell programmers for almost 25 years now. Which I think is pretty funny, but also a bit scary.

So, in conclusion: have we turned Haskell into C? In a follow-up paper to this one, the authors actually ask the same question: has the purpose of this whole exercise been to turn Haskell into C? And they say no, because we're still able to separate the pure parts of a program from the impure parts. And we're still able to iteratively go, hey, this thing might work better as a pure function, and transform things as necessary from impure computations to pure computations, and kind of peel away the layers of the program until we expose the imperative core, the imperative spine of the program, is how I like to think of it, with the rest of it purely functional.

And yeah, this is the reason I really like this paper and thought it was worth presenting. Haskell has a bit of a reputation for being too ivory-tower, as if people don't really care about the real world. I think this is a paper that demonstrates that that is not the case. Over 20 years ago, people were thinking: hey, let's make Haskell fast, let it compete with the big boys. And it turns out that in most cases you can get performance within an order of magnitude of C, which I think is really good. And I think that's all I have for you today. Thanks. Oh, and if you'd like, I can show you the code in Haskell itself that implements this paper; I forgot to mention that.
So another reason that I like this paper: if you write Haskell, you're using the ideas in this paper. Its impact on Haskell itself has been quite profound, because this is the way Haskell actually works. It has an ST type; it has an IO type. The details are a little bit different, the names might be a little bit different, but essentially it's the same paper. It's the paper as code, and that's really cool. So in fact, if you'd like, you can look at the Haskell source, or I can show you the Haskell source and, I guess, confirm that it does actually work the way I say it does.

[Audience] I just think it's worth mentioning that one big shortcoming of the ST approach is that you need higher-rank types to be able to give runST a type. And if you have higher-rank types, then type inference is no longer decidable. [Speaker] Interesting. I wasn't aware of that. [Audience] It basically means that you will have to add some manual type annotations here and there. And one particularly nasty consequence of this involves the dollar operator, which I think you've used? [Speaker] I do use it, yes. [Audience] It's very natural to try to write runST, dollar, something. And that would not type check. [Speaker] Interesting. Oh, except for a special case? [Audience] A special case in the type checker. Which is horrible, right? Because dollar should be treated the same as any other operator. [Speaker] Actually, in the paper, they say that we don't actually need higher-rank types in the language. What they propose is to special-case runST: to treat it as though it has this type, without actually giving it this type. That's the way they try to reconcile this with simple type inference. But in reality, today, we do have higher-rank types.
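To make the dollar issue concrete, a small example; per the discussion above, the second form was historically rejected, until the type checker grew its special case for dollar.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef)

-- runST has a rank-2 type, and vanilla Hindley-Milner inference will not
-- instantiate ($)'s type variables at a polymorphic type, so only the
-- parenthesized form is unproblematic:
works :: Char
works = runST (newSTRef 'x' >>= readSTRef)

-- Historically this failed to type check; GHC ended up special-casing ($)
-- in the type checker so that it is accepted:
alsoWorks :: Char
alsoWorks = runST $ newSTRef 'x' >>= readSTRef
```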
[Audience] Could you go back to the slide about the runST implementation being wrong, the one with the type signature? [Speaker] Oh, yeah, of course. There we go. [Audience] Could you run through that again? [Speaker] Yes. Okay, let me see if I can follow the paper this time. So: the wrong version is equivalent to saying, for all s, for all a, ST s a goes to a. Think of both type variables as being bound on the outside: their scope is essentially the whole type of the function. Whereas the correct implementation says the a is on the outside of the function, but the s is inside the function: the s is scoped in here, which means the s cannot leak out of this part of the function.

[Audience] So if you go back to the previous slide: in this case, your enemy gets to choose both s and a, right? [Speaker] Yes. [Audience] And if I'm a particularly nasty enemy, I will say: okay, for s, I'm going to choose whatever, s. And for a, I'm going to choose STRef s String. All right? I can do that, because I get free choice over both s and a. But if I choose that, then you now have to give me an STRef s String, which is exactly what you want to avoid, because now you're leaking state. [Speaker] Oh, I see: the a is itself an STRef, in pure code. Yeah. So, I mean, another way of looking at it is that the s never goes outside the parentheses; the s doesn't escape the scope. [Audience] And with the correct runST, I, your enemy, get to choose the a, but you get to choose the s. [Speaker] Yep. So going back to the example... yeah, let's look at an example here.

Yeah, so exactly as he said: if you get to choose both the s and the a, then newVar can hand you, on the outside, a variable together with its state. The state has leaked out of your function, because the v contains the state in its type: it's a MutVar s a. [Audience] As in, the state is not only in the first expression but also in the second one; is that why you say it's leaked out? [Speaker] Well, yeah. So suppose you go runST (newVar True). What you get out is... actually, you don't just get a mutable variable: you get something involving both the s and the a. You get a variable containing a state, and a value. Right? Because, sorry, if I can go back to my slide: newVar goes from an a to an ST s (MutVar s a). So if my a is itself a mutable variable, then, since what runST does is take this and remove the runST bit, the s isn't just here, it's there as well. You know what I mean? The state has been duplicated, basically. [Audience] But it's a different s. [Speaker] No, it's the same s. And because the type signature is wrong, I can feed that back into another runST, and now the state has been shared between two state threads. Whereas, correctly, if I were to do this and go back to that old example, I would get some s, but it's an s that... my mutable variable won't work for any s. It'll work for one particular s, but not any s. So if you fix the type, one of these lines will work and the other will not: it'll say, hey, there's already an s here; this doesn't type check. [Audience] So the example program won't type check, given the corrected signature? [Speaker] Right: it's the first line, the one that tries to return the variable, that gets rejected; the second part on its own is fine. So rank-2 polymorphism, or the appearance of rank-2 polymorphism, which is what it means when the forall is in here instead of on the outside, is what makes the runST function work.

[Audience] So the idea, right, is that this s is just a type-level tag. It's a label you attach to the various universes of references. And you want to avoid cross-references between universes, right? And when you say that you have something which can run in any s, it's a kind of translation-invariance: if you can start in any of these universes and do something which is locally consistent, that means you can't possibly have any cross-references into other universes. Because if you had, then you wouldn't be translation-invariant: you couldn't be moved from one universe to the other, because your references into those other universes would break. So this is how the higher-rank type ensures that whatever a you return is self-contained, in the sense of not having any references to anything which depends on the choice of s.
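Here is that escape concretely, against GHC's real API, with the rejected lines left as comments.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef)

-- With the correct rank-2 runST, the first definition below is rejected:
-- its type would have to be STRef s Bool with the s out of scope, i.e.
-- the state would leak out of the thread.
--
--   v = runST (newSTRef True)      -- type error: s escapes its scope
--   broken = runST (readSTRef v)   -- would share state across two runSTs
--
-- Kept inside a single state thread, the same code is fine:
fine :: Bool
fine = runST (newSTRef True >>= readSTRef)
```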
[Speaker] Right. But again, the s doesn't exist. It's not like the State monad, where you really have a piece of data that you pass around. You don't have that here. The s doesn't exist; s is just a phantom type, a type-level tag. Whoop. We're done. [Audience] Do you want to try this in Haskell, you know, in GHCi? [Speaker] I think it's time to digest what was just said. Okay. Thanks.

[Audience] Yes: the slide on interleaveST, right at the end. The most annoying part of the talk. What's this slide about? You were saying there's something interesting here. [Speaker] Yes. So interleaveST is the thing that duplicates the state: the s goes in, and the s also goes out, alongside the result. So it duplicates the state, which is unlike everything else we've done before. Everywhere else, one state goes in and one state goes out; the only thing that throws it away is runST, but everything else takes the state and returns the state: it threads it through. Whereas interleaveST says: hey, what happens if we duplicate the state? What if we, you know, throw all our assumptions out the window? And what that does is allow you to write, for example, this function: a lazy readFile. [Audience] What does readCts mean? [Speaker] Read contents. So it's a recursive definition: it reads the contents of the file descriptor. What this says is, basically: I'm going to read a character, and I'm going to read the rest of the file "at the same time". [Audience] How does it interleave, if it's reading the characters sequentially? [Speaker] I mean, it is sequential, but only because we won't demand the contents of the thing out of order. What happens in this example is that it gives you a string, and as you force the first character, it reads the first character; as you force the second character, the second character is read. So it returns immediately with a string, but the string is suspended. So in readFile, suppose you had some other code there: you read the contents of f, and then you do something else. That something else would run immediately; it wouldn't wait for the whole file to be read. It would only read the file as you force the string. Which also means it holds on to the file descriptor. [Audience] So it's like lazy IO. [Speaker] Yeah, yeah, exactly. So if you're working with something that assumes the file descriptor has been closed... [Audience] That's exactly where it bites you. [Speaker] Well, that's the thing: you have to use these strings in a strict, statefully-threaded way. [Audience] And we've learned not to do this, right? This seemed like a good idea 20 years ago. It seemed like a good idea 15 years ago, five years ago. But it's increasingly obvious, we have learned, that it's not.
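As a sketch in modern terms: unsafeInterleaveIO is the descendant of the paper's interleaveST, and readCts, read-contents, is the recursive definition on the slide.

```haskell
import System.IO (Handle, IOMode (ReadMode), hGetChar, hIsEOF, openFile)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Each recursive step is suspended, so a character is only read from the
-- handle when that position of the string is forced. Note the danger
-- discussed above: the handle stays open for as long as the unread tail
-- of the string is alive, and is never explicitly closed here.
lazyReadFile :: FilePath -> IO String
lazyReadFile path = openFile path ReadMode >>= readCts
  where
    readCts :: Handle -> IO String
    readCts h = unsafeInterleaveIO $ do
      eof <- hIsEOF h
      if eof
        then return []
        else do c  <- hGetChar h
                cs <- readCts h
                return (c : cs)
```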
[Audience] So there are approaches now that use, I think, a continuation-based style; you probably know more about this than me. [Speaker] Yeah, like pipes. [Audience] Pipes and conduit, yeah. [Speaker] So basically the idea is to say: hey, we, the library, will control the resource, but you can give us a function that tells us what you want to do with the contents of the resource. So we run your function, but the library itself keeps control of the file handle, the resource. [Audience] Right, and this is super relevant. The lazy approach is really nice, right, in a small program: you just read the file, really, because you only read it as needed, and everything fuses nicely. And the proper approaches are not that elegant; I mean, not that simple. You have to be a bit more explicit. But it's worth it, because the operational benefit, exactly in resource management and exception handling, et cetera, is worth it compared to this approach. [Audience] So does that mean that right now, in Haskell, when you read a file, it actually reads the whole... [Speaker] It reads lazily. This is essentially the way Haskell works right now, in the standard library. And there are other libraries that go: yeah, this was a bad idea, let's do it the other way. There's also a strict form; you can choose. I mean, you could write this in the strict form, right? This might have made more sense with another box diagram, but I was tired of turning those into ASCII, so I chose not to.
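The simplest form of that callback pattern already lives in base as withFile; here is a sketch (pipes and conduit elaborate the same idea for streaming).

```haskell
import System.IO (Handle, IOMode (ReadMode), hGetLine, hIsEOF, withFile)

-- withFile closes the handle when the callback returns, no matter what;
-- the caller only ever sees the handle inside the callback's scope.
countLines :: FilePath -> IO Int
countLines path = withFile path ReadMode (go 0)
  where
    go :: Int -> Handle -> IO Int
    go n h = do
      eof <- hIsEOF h
      if eof
        then return n
        else do _ <- hGetLine h
                (go $! n + 1) h   -- force the counter as we go
```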
Cool. Other questions? [Audience] Yeah, speaking of parallel: how does this gel with things like Concurrent Haskell, where you could really have parallel threads? Would you have multiple states? [Speaker] Again, there's only one state. The state token is what determines the data dependency, right? So you won't have the same state token across two different... no, sorry: runST is not meant to be used concurrently. Like, if you use it concurrently, bad things will happen. They have other ways of doing that. They have other types of variables: software transactional memory, which is another bunch of papers that would be fun to read and understand; and they also have, I guess, mutex-based variables. They have a whole bunch of concurrent and parallel abstractions, but runST is not one of them. And in fact, I don't think IO is one of them either, because IO is based on ST. Like, this is the way things actually do work in Haskell.

[Audience] Does that mean that before this paper, there was no IO? How did Haskell do IO? [Speaker] No, they had a different way of doing IO. I think it was based on, like, uniqueness types; they basically had a really hacky solution. That's one of the reasons they decided to try this in the first place. [Audience] The uniqueness thing, no, that's Clean. [Speaker] Oh, okay, that's the Clean thing. So what did they have instead? [Audience] Old, ancient Haskell did IO basically using only laziness. So a program that does IO would be a program which, given a list of reads, produces a list of writes. And both are lazy, which is what allows, you know, the second read to depend on the first write. Think of the simplest program that does IO, an interactive text program, right? You want to read something, you want to react to that, and you want to write something. But maybe what you write out determines what you will read next, because there's a human there, right? They read the first line and they decide what to type on the second line. But you can model it as just two lists, you know, an input list and an output list, and a function which, given the input, gives you the output. And it just has to be careful that it only forces enough of its input to determine enough of its output. (There's a little sketch of this model below.) [Speaker] So it's part of the runtime? [Audience] Yeah: the runtime, which is, you know, on top of you, basically does the opposite of this. It feeds you enough of the input and it forces enough of the output that it can actually, you know, communicate with the outside world. But, I mean, it wasn't principled and it wasn't elegant. It's easy to get loops. [Speaker] Yeah. So they were like: hey, can we do better? And this paper's answer is yes.

[Audience] So is there anything else on the horizon for Haskell, or is this pretty much it? [Speaker] Oh, there's a ton on the horizon; this is 23 years old. It works relatively well for IO, relatively well for mutable variables; you can write really fast code in Haskell that does stuff using this machinery. But, like you mentioned: what about machines that have more than one core? Unthinkable in '94, right? Personal computers with more than one core. For that you have other concurrency abstractions and other kinds of variables.

[Audience] Yes. Actually, one of the most important open questions of functional programming theory is related to this. There's a theorem which says that any imperative algorithm can be turned into a purely functional one with at most logarithmic slowdown. But it's not known whether that slowdown is unavoidable: it's an open question whether there's a way to turn any imperative program, a program which uses mutable references, et cetera, into a pure program which doesn't have the logarithmic slowdown. And that, you know, is the ultimate question behind this whole area, right? Because the reason you need ST is that you want to avoid the slowdown you get from immutable data structures. But maybe you don't need to. Maybe you can avoid it, you know.

[Speaker] Yeah. And I guess an interesting thing about this functionality is that it's, in a sense, a very limited form of linear typing. With linear types, essentially, you have to use the argument once and only once, and this is a very limited form of that. And again, it's useful; this is the way Haskell works. They're trying to bring more of that into Haskell: there's currently work going on to incorporate linear types into Haskell proper. And that's interesting as well.
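Before wrapping up, the promised sketch of that old stream-based IO model. Haskell 1.0's real type was [Response] -> [Request]; this line-based Dialogue is a simplification, and interact is the relic of that style still living in the Prelude.

```haskell
import Data.Char (toUpper)

-- A program is a lazy function from input lines to output lines.
type Dialogue = [String] -> [String]

-- Echo each input line uppercased. The n-th output line only demands the
-- n-th input line, so a lazy runtime can interleave reading and writing.
echo :: Dialogue
echo = map (map toUpper)

-- interact plays the runtime's role: it feeds the input and forces the
-- output lazily, communicating with the outside world as needed.
main :: IO ()
main = interact (unlines . echo . lines)
```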
I guess the other thing worth mentioning is monads in general. This syntax, the do notation itself, came into Haskell through Hugs, which was a different, alternative Haskell implementation. Before that, they had the concept of monads, and in a predecessor to this paper they added a bracket syntax that is similar to do notation but not the same. So for me, this paper is about two things: it's about the rank-2 polymorphism, and it's about the monadic interface.

And I think, again, you know, especially in hindsight, you can see that the fact that you have monads gives you a very general API for working with things, just things. Like, this sequence_ is useful in general; it's not useful just for IO. This double-arrow thing is useful in general; it's not just useful for IO. And even the names are very suggestive, like "then". There's this idea of the monad as a programmable semicolon, which is like, come on, what does that even mean? But this kind of gets at what it means. It's the "then" statement. What does "then" mean? It's a way of sequencing your things, in place of your semicolons. And the fact that you have do notation, and the fact that you can define what happens between each line, essentially means that it's a really useful interface. It's really cool.

So: monads, rank-2 polymorphism, linear typing, kind of laziness and strictness, C interfacing, imperative and functional. This paper just has a lot of cool stuff, and it's relatively well presented, and it's really short. There's a follow-up called "State in Haskell" which goes into greater detail; it gets into things like how to generate an infinite supply of variable names, an infinite supply of identifiers, which is an interesting problem, and there's more cool stuff in that paper. And there's the predecessor that talks about the bracket syntax from before do notation. So this is an interesting bunch of papers to have a look at. But especially this one, I think, because it's accessible: it's interesting, but there's not too much to take in. Cool. Thanks. That's it.