Hello everyone. Welcome to Functional Conf 2022. We have with us today Adam Rosien, who is going to talk about concurrent state machines with Cats Effect. He is joining us from Seattle, USA. Thank you very much, Adam, for joining, and we are looking forward to a wonderful session. Hello everybody. I'm just getting my windows open here correctly. Good. Thanks for having me. I'm very excited to be here, wherever here is and wherever you are; we get to be together. Let me share my screen. This is Zoom, so the windows all change whenever you share your screen. So here we go. Now I've got it all right. So let's see, is my screen being shared okay? I think it is. All right. Thank you for the feedback; yes, you can see my screen. So my name is Adam Rosien. I work for a company called Inner Product. We're a small consultancy. Myself and Noel Welsh, who you may know if you're familiar with the Scala community, formed this company, and we train, mentor, and help people build systems using functional programming, mainly in Scala. So I'm happy to be here. And so my talk today is concurrent state machines with Cats Effect. So lots of fancy words; functional programmers love fancy words. And we'll talk about some of these and what they mean and how to put them together. So this message is really for me: don't panic. So yes, it's 9:30 at night, but that's okay. Wherever you are, maybe you're in India time or all around the world, we'll get started soon. Okay. So the talk is about concurrent state machines with Cats Effect. So I'm going to basically start in the style of defining terms: let's talk about what these words mean. So let's talk about concurrency. My hope is that these ideas will be applicable in whatever language. I happen to be using Scala, but any functional programming language, really any programming language, has these concepts, so we can apply them. So let's talk about concurrency.
And say we have two processes, X and Y, and they're running concurrently, whatever that means; at the same time, we could say. And somehow they join together and we get the results from each of them. And so in Scala, in Cats Effect, we might write this code: X and Y are these values that have some type, and you can call some methods on them. X is going to do stuff and Y is going to do stuff, and we're going to run them together in parallel. This might be how we declare what's going on. So what can we know from the code that we wrote, from the diagram that I drew? What could we infer about these things? We know that we're sort of starting the computations, but what do we actually know about them? Well, for example, does X finish first? Does Y finish first? We don't really know. We're sort of defining concurrency as: we don't really know which finishes first. That's what it means. Things are running at the same time, concurrently, and we don't know which finishes first. So this is a nice little book, The Little Book of Semaphores; I have the URL and the reference at the end of the talk. I haven't found a better reference for just what concurrency is about, so I recommend it. So: two events are concurrent if we can't tell by looking at the program (that's the key) which will happen first. So if I back up a little bit, in my code here, we're doing something with X, we're doing something with Y, we're saying run them in parallel; we can't tell from the code which happens first. And that's a feature. That's the idea of concurrency. And maybe it's easier to think about it in the reverse case. If you did know what was happening first, that means you know the sequence of events. And those events, we can say, are sequential. They are not running concurrently; they're sequential, one after the other. And so concurrency is actually a feature.
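As a rough sketch of what that declaration might look like in Cats Effect 3 (the values and their results here are made up for illustration; the real slide code may differ):

```scala
import cats.effect.IO
import cats.syntax.all._

// Two effects, declared as values; nothing runs yet.
val x: IO[Int]    = IO { /* do stuff */ 1 }
val y: IO[String] = IO { /* do stuff */ "done" }

// Declare that x and y run in parallel. From this code alone we
// cannot tell which finishes first -- and that's the point.
val both: IO[(Int, String)] = (x, y).parTupled
```

`parTupled` comes from Cats' `Parallel` syntax; it joins the two concurrent results back into one effect producing a tuple.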
There are many cases where we don't care which one finishes first. We just want them to run, and there's no reason to delay one after the other. So it's an abstraction. We are giving up serialization, one thing before the other, and we get performance; sometimes there's just no reason to run one thing after the other, and we can run things at the same time. So that's concurrency. We're also going to talk about coordination. So I sort of summon the spirit of Rich Hickey: we define our words. Coordination means to order together. So there's more than one thing that we're ordering. If we have X and Y and they are running concurrently, we don't know when they're going to finish, or which is going to finish first, X or Y. We've forgotten which one finishes first, but we want to add back some of those constraints. Maybe we actually do want to know which one finishes first. Or, if all we know is that they're running concurrently, we may want to put some restrictions on that. So a familiar example to most people might be a queue. A queue is sort of a coordinating interface. There are two roles, two sorts of actors interacting: there's a producer, which is pushing items onto the queue, and there's a consumer, which is pulling them off of it. And note that the producers and the consumers are running concurrently. We don't know if each produce is happening before or after each consume; they're deliberately concurrent. So in the normal case, the producers offer items to the queue, the queue takes them, and the consumers take things out of the queue. And that's normal. But then we want to add some extra behaviors. For example, if the queue is full, we don't want to allow a producer to put more items in the queue.
A typical behavior we want is that the caller is blocked: that push, that enqueue, is paused in some sense. And on the other side, if there's nothing in the queue and a consumer asks for something, it's very often the case that you might block the consumer too and only provide them an item when one becomes available. This is as opposed to making these non-blocking operations: a producer would try to put something into the queue, the queue would say "I'm full," and it would be up to the producer to try again. Or, if the queue was empty, a consumer would ask for something, the queue would say "I don't have anything," and the consumer would have to retry. So a different strategy is just to block the computation. And so in the world of concurrency, and coordination within a concurrent environment, there are lots of components available. We have our queues. There may be a circuit breaker, which says: I'm talking to some service, and if the service is returning lots of errors, I'm just going to not even try to talk to the service until some timeout period passes, or something like that. Locks let you coordinate by giving you exclusive access to some resource. Latches are a bit more complicated: they're like a door that starts closed and then, once opened, stays open. Barriers, and all sorts of primitives. These are the things we can use within a concurrent environment when we need to know a little bit more. Maybe we don't need to know exactly when each event happens and in what order, but we want to add a few constraints: we want to ensure that the queue isn't full, we want to ensure that the queue is not empty, and so forth.
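Cats Effect 3's standard library happens to ship a queue with exactly this blocking behavior; a minimal sketch (the capacity and values here are illustrative):

```scala
import cats.effect.IO
import cats.effect.std.Queue

// Bounded queue of capacity 1: offer blocks (semantically, at the
// fiber level) when the queue is full, take blocks when it's empty.
// The non-blocking strategy described above corresponds to the
// tryOffer / tryTake variants instead.
val demo: IO[Int] =
  for {
    q <- Queue.bounded[IO, Int](1)
    _ <- q.offer(42) // room available, succeeds immediately
    n <- q.take      // removes the item; on an empty queue this would block
  } yield n
```

Note that "blocking" here never parks a JVM thread; the fiber is suspended until the queue's state allows it to proceed.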
So what this talk is about, now that we've talked about concurrency and coordination: I'm going to give a recipe for how you would build one of these coordinating components that handles concurrently operating interactions. It's going to be built with this technique called a concurrent state machine. You're probably familiar with a state machine; this is a little more complicated, a state machine working in a concurrent environment. And we're going to use a Scala library from the Typelevel ecosystem, which is a large ecosystem of libraries for functional programming. The library that we're going to use for this concurrent state machine is called Cats Effect. So that's the big idea. The talk basically has three parts. First, we're going to talk a little more abstractly about the concepts of concurrency. We talked a bit about what concurrency means, and coordination, and there are a few more general terms, applicable in any context, that are important for talking about these things. Then, once we know what synchronization is, we're going to talk about how synchronization is represented in Cats Effect. This is a bit more practical: how you would actually start to program the primitives that we need. And then, once we know how to use these synchronization primitives, I'll give the recipe for how to build a concurrent state machine and give an example. So that's the general plan. If anyone has any questions or little clarification questions, please put them in the Q&A or in the chat. I'm happy to quickly clarify, or we can collect a bunch of questions and talk about them at the end, either way. Okay. Oh, and I did just want to note that these ideas are not original.
I learned this form of them from Fabio in the Cats Effect and Typelevel ecosystem. He's popularized this and shown how powerful it really is. So thank you to Fabio. Okay. So, part one of three: let's talk about synchronization. I just have some funny state machines here. There's this cool Twitter account called happyautomata, "vaguely reassuring state machines," and it's a lot better than the boring kinds that we deal with, so I thought I'd make it a little more exciting. So what is synchronization? We're in the world of concurrency and coordination. If I consult my source, it says computer programmers are often concerned with synchronization constraints: requirements pertaining to the order of events. Now, we think about this all the time in our code: when does one thing happen after the other? We're coordinating in the dimension of time. When does something happen before something else? When does something happen after something else? Very normal kinds of considerations. If you just had a regular, non-concurrent program, it would be: do A, do B, do C. When you look at the code, it shows you the order of events: A, then B, then C. If things are running concurrently, we don't have the same relationship to the source code. We can't tell from the source code which is happening first; that's what we said concurrency was. So we might have: do X, then Y, then Z. Or, maybe more in a domain language: the user must be logged in before they can add an item to the cart. Any time you hear the words "before" or "after," these are talking about synchronization constraints. Or, perhaps more technically: when the buffer is full, reject new requests. This would be like our queue; if the queue is full, we want to do something. So there are two general kinds of synchronization going on. There are others, but these are definitely the most common. One kind is called mutual exclusion.
Two concurrently running effects are mutually exclusive when they can't happen at the same time. There are reasons you don't want things to happen at the same time. As a human, it's really confusing when things happen at the same time; in computers, in certain situations, it's bad. I'll give you an example in a second. The second major type of synchronization, of coordination in time, is called serialization. I sort of mentioned it earlier: it says something must happen before another thing. So that's different: not happening at the same time is distinct from one happening before the other. We may want one or both of these. So what's the problem that mutual exclusion is trying to solve? It's typically called the lost update problem. Imagine that we have some mutable variable x, and it's initialized to zero. And then on some thread, thread one, I increment x. If this was the only code in the world, we could look at the code and know what x is going to be: x is going to be one. But say there's another thread that runs this same increment code. Looking at the code, we would say: okay, there are two threads and each is incrementing x by one, so x should be two. But if they're sharing the same mutable location, an update can get lost, because when the second thread's code runs, it may have read x while it was still zero; it hadn't been updated yet. So that update gets lost. So whenever you're incrementing counters, you need to make sure that those counters get atomically updated. The idea is that if you can control things and say that updates are mutually exclusive, that they never overlap in time, then you know you're not losing updates: the effects are not allowed to happen at the same time. Now, in Java and in Scala there are different mechanisms you can use to achieve mutual exclusion.
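A sketch of the fix using the JVM's `AtomicInteger`, one of the mechanisms mentioned next (the thread and iteration counts are arbitrary):

```scala
import java.util.concurrent.atomic.AtomicInteger

// Two threads each increment 100,000 times. With a plain `var` the
// read-increment-write sequence races and updates get lost;
// AtomicInteger makes each increment atomic (mutually exclusive),
// so the final count is exact.
def atomicCount(): Int = {
  val counter = new AtomicInteger(0)
  val threads = List.fill(2)(new Thread(() => {
    var i = 0
    while (i < 100000) { counter.incrementAndGet(); i += 1 }
  }))
  threads.foreach(_.start())
  threads.foreach(_.join())
  counter.get // always 200,000: no update is lost
}
```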
There's the synchronized keyword, there are volatile variables, there are atomic references; there are lots of different ways to do this. It's a very old technique. There are lots of ways to do it, but this is what they're about: making sure, essentially, that mutations are not happening at the same time. So that was mutual exclusion. The other one I want to talk about is called serialization: one effect must happen before the other. They are serialized; I know that X happens before Y. So in this diagram here, if X is not in control of starting Y, if we just said X and Y are concurrent, then X has no idea when Y starts or finishes and we're stuck; we can't get this serialization. But if we had some other mechanism, let's just call it a magic black box, Y could somehow know that X had finished. We had this unknown wild west of concurrency where we don't know when things finish. Serialization allows us to know when something is finished; we can get that knowledge back. So these are the concepts that we're going to use and apply in Scala, in our functional programming language. We have the ideas of concurrency and coordination. We don't know the order of things when they're concurrent; we need something to help us coordinate them. How do those things work? Well, we're adding constraints. Concurrency says: I don't know anything about these things happening at the same time. We want to incrementally add more knowledge, so we can talk about ensuring that things are not happening at the same time, with mutual exclusion, and we can ensure that one thing happens after the other, with serialization. So those are the big fancy words, broken down, and these are the concepts that we're going to apply. Okay. So that was part one. Part two: how are these ideas implemented with the Cats Effect library? Here's another state machine, kind of spooky. So what is Cats Effect? It's definitely about cats.
Now, as one of my daughters says. So it's a high-performance, asynchronous, blah, blah, blah framework for creating applications in a purely functional style, in the Typelevel ecosystem. So lots of good words there. Composable: that's what we want. High performance: we want that too. Asynchronous, so we can build things that run concurrently. Purely functional style, so referential transparency is built in. It's a library for doing all this cool stuff. That's not a very good description, but I'll show you. So, about this word "effect." The library is called Cats Effect; the talk is called something, something, something with Cats Effect. What's an effect? Well, one way to think about an effect is that it's a value, but it's different from an integer value or a string value. It's a value that represents what happens when some computation executes. And yes, that is a very broad definition; it's a broad concept. So in this first example, the way we write it in Scala, this is a value that's named a, its type is A, and it has some definition. It's just a value; it could be a string or an integer or whatever. But the way we typically denote effects, there's this context: F of A, an effect of A. It's sort of about A, but it has its own name: the effect that produces some A. This one is just an A, but this one is an effect which is about A, or produces an A, something like that. So that's how we represent it in Scala. But notice: it's still a value. It's not running; it's just this piece of data. That's also important. So effects are usually represented, as I showed, as this F of A: it's a higher-kinded type, a type constructor F which is itself parameterized by some other type.
Some examples of effects that you might be familiar with, though you may not even consider them effects (but they really are): there's the Option type. Option of A is a type that may or may not produce an A. There's an effect there: it might be an A, but maybe not. Or a Future: an asynchronous computation which may produce a value of type A sometime in the future, asynchronously. And in Cats Effect there's this type called IO, which is this super general type: an IO of A says, I can do anything in the whole world to produce a value of type A. I can talk to a database, I can read memory, I can talk over the network, I can call your mom, whatever you want. It represents the most general kind of effect: I can do anything, but I'm going to give you a value of type A. So that's what effects are, and we'll see more and more how we can use them. So, we discussed synchronization: mutual exclusion and serialization. How do you do those within the Cats Effect library? For mutual exclusion, there's this data type called Ref. It's a reference, but a reference that is safe under concurrent access; it provides mutual exclusion for updates, so it doesn't have the lost update problem. It's parameterized by the effect type and the type of value it holds inside. If you try to update a Ref, you're guaranteed that those updates will not happen at the same time. Ah-ha! I like ah-has. The other style of synchronization in Cats Effect is serialization: we want to ensure one thing happens after another. And in Cats Effect we use this Deferred data type. Deferred says: eventually I'm going to provide an A, managed by some effect. Eventually I'll give you an A, but you can't have it until it exists.
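A minimal sketch of both data types in Cats Effect 3 (the values here are illustrative; in real code the `complete` would happen on another fiber):

```scala
import cats.effect.{Deferred, IO, Ref}

// Ref: mutual exclusion. update is an atomic read-modify-write,
// so concurrent updates are never lost.
val refDemo: IO[Int] =
  for {
    ref <- Ref[IO].of(0)
    _   <- ref.update(_ + 1)
    n   <- ref.get
  } yield n

// Deferred: serialization. get blocks (at the fiber level) until
// some fiber calls complete -- here we complete it ourselves.
val deferredDemo: IO[String] =
  for {
    d <- Deferred[IO, String]
    _ <- d.complete("ready") // normally done by a concurrently running fiber
    s <- d.get               // unblocks once a value exists
  } yield s
```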
So it's sort of like an asynchronous value, but a little more complicated, because if it's not there, you have to wait. So here's a series of examples that show how you use these things. We have our Ref, which gives you atomic updates; it provides mutual exclusion. Here's some code. Here we're initializing a reference. We're saying our effect type is IO, our standard, most general effect type, and we're initializing the value to zero. And then we are going to launch a bunch of other computations in parallel; that's what this parTupled thing means. We're going to run two of these workers that are going to do something to the counter; here are two of them. And while these workers are doing their work, we're going to print the value of the counter. So these things will happen concurrently. It's declaring what's going to happen. And these effects are values: I'm returning an IO. An IO is an effect, but it's a value; it's not running yet. It's the job of the main program to run this thing, and then they all run. So we're doing declarative concurrency and declarative synchronization. So here are the two kinds of computations that are going on. Our worker gets the counter, and all it's going to do is update the counter; we say increment it by one. Remember, these updates happen atomically, so even if there's more than one worker running at the same time, they won't stumble over each other. And then it just recurs; here's recursion, it goes back and continuously updates the counter. And there are going to be two of these workers, so it's going to increment twice as fast as normal. And then at the same time, we have printCounter. It gets the ref it holds.
And it's going to sleep for five seconds, get the value of the counter, store it in the name n, print it out to the console, recurse again, and just print out the value every five seconds. So these are the two parts: this one is the writer to the reference, and this one is a reader of the reference. They're running concurrently. You can read and write concurrently and we won't lose any updates. Easy peasy. Here are all the different operations that you can do: you can get, which is a read; you can set; you can provide an update function; all sorts of good stuff. Okay. So that was Ref: an atomically updatable reference. The other side is synchronization with Deferred. The way the Deferred interface works, again, there are two roles going on. There are readers, who are waiting for a value of type A, and there are writers, maybe just one writer, but there could be more than one, all racing to provide that A. They're running concurrently. The reader says: I don't need to do anything until I get that A. And somebody else in the background, the chef, is making my dinner, and when it's ready, my dinner will appear. So those are the two sides, and the interface basically has methods for each of those roles. So not much there either. To summarize, we have these two data types in Cats Effect, and they're about synchronization. Ref gives you mutual exclusion; it lets you act on these things concurrently. You share the Ref with concurrently operating things and they can make safe method calls. And we have Deferred, which says: I'll only give you this value once it's there. So if you want Y to always happen after X, Y could have a Deferred that is completed once X is done, and that would unblock whatever Y needed to do.
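The worker/printer program described above runs forever; here is a finite variant of the same idea (the counts are arbitrary) showing that parallel updates through a Ref lose nothing:

```scala
import cats.effect.{IO, Ref}
import cats.syntax.all._

// Each worker atomically increments the shared counter `times` times,
// then stops (the talk's version recurses forever instead).
def worker(counter: Ref[IO, Int], times: Int): IO[Unit] =
  if (times <= 0) IO.unit
  else counter.update(_ + 1) *> worker(counter, times - 1)

// Two workers run in parallel; the final count is exact because
// Ref.update provides mutual exclusion between updates.
val total: IO[Int] =
  for {
    counter <- Ref[IO].of(0)
    _       <- (worker(counter, 1000), worker(counter, 1000)).parTupled
    n       <- counter.get
  } yield n
```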
And the really great thing, the part that makes coding really pleasurable and fulfilling, is that these primitive data types provided by Cats Effect are safe. They're referentially transparent. They compose: their operations compose together. You can do one thing and then another thing and then another, and none of these guarantees go away. They're safe to combine in any kind of order. This is very nice; this is what we want. So: we talked about our fancy synchronization words, and we talked about how synchronization is implemented in Cats Effect. Now we're going to put this together and use Cats Effect to build a concurrent state machine. This will allow us to build more powerful coordinating interfaces. We have Ref, which gives us mutual exclusion, and Deferred, which lets us ensure one thing happens after another; we're going to build higher-level behaviors, like our queues and so forth. So the big idea is: together with Ref and Deferred and a little thinking (that's the brain), we can produce this concurrent state machine. So we have a recipe: it's always going to be built with these primitives. That's the "easy" part, easy in quotes; this is not easy. And we use that concurrent state machine to build our coordination component: our queues, our locks, our latches, whatever. That's what we're going to do. But before we do concurrent state machines, let's just talk about a state machine. What's a state machine? Well, you can think of it as this function. S is the type of our state. We have an initial state of type S, and we're given some sort of input, "do this" or whatever, and that produces a new state, plus maybe some extra output that says, oh, this is what happened. So this is the signature of a state machine. If it's running over and over again, the output state feeds back in as the next input state. You just turn the crank.
But just so you know, a state machine is just a traditional object, as in object-oriented programming. Let me show you. Here's a class, and it has some state of type S. And you can act on it: you can give it a value of type A; it reads the state, maybe updates the state, and produces some value B. That is the same as the function signature: you give me a machine, and I'll act on it. So objects and state machines: same thing. That's good; Scala is an object-oriented language, and we don't have to forget all that stuff. And just as a very important note: from the outside, we can't see what the state is. It's just a constructor parameter in this representation; the state is hidden. That's the whole idea. The state machine might have very complicated logic within, but it's hidden from the caller. The behavior, the value you get out from the value you put in, is public; that's shown to the user. So this is a powerful technique: we're hiding the complexity of the state transitions, but we're providing feedback to the caller. So that's a regular state machine. A concurrent state machine means that whoever is acting on this thing, their actions are going on concurrently. And if that's happening, we know that might be a problem: you might lose updates in the presence of concurrency. If we are modifying this state, we want to ensure that we don't lose those updates. So we use mutual exclusion, and we might use an atomic reference if we were just in Java or plain Scala; we're just changing the primitive notion of a variable to something with atomic updates. But we are in an even better world, you might say: the world of concurrent state machines with effects. So instead of using an atomic reference, we would use our Ref type from Cats Effect. So here's the managed state, hidden from the user.
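A sketch of both shapes: the plain object-oriented state machine described above, and its concurrent counterpart where the state lives in a Ref and acting on it returns an effect (the names and the counter behavior here are illustrative, not the talk's slide code):

```scala
import cats.effect.{IO, Ref}

// Plain OO state machine: hidden mutable state, public behavior.
final class Counter(private var state: Int) {
  // act reads the state, updates it, and returns an output
  def act(delta: Int): Int = { state = state + delta; state }
}

// Concurrent counterpart: the state is managed by a Ref, and acting
// on it returns an effect. modify atomically computes the new state
// and an output, so concurrent callers never lose updates.
final class ConcurrentCounter(state: Ref[IO, Int]) {
  def act(delta: Int): IO[Int] =
    state.modify { s => (s + delta, s + delta) }
}
```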
And when we act on that state, we're not producing a plain value of type B; we're producing an effect that is about that output B. So we've generalized the signature a bit, but it's roughly the same: there's a starting state, input, and then an output state and an output value. So that's the rough form of a concurrent state machine: it's going to look like a Ref, plus methods that produce effects. So here's the recipe; those are the parts. It's a little bit wordy, so I'm not going to say it word for word, but there are basically two parts. First, we talk about the outside: how are callers going to interact with the interface of this component, this thing we're implementing with a state machine? In our example of the queue, the coordination interface would be enqueuing data and dequeuing data; that's what you see from the outside. So we enumerate the roles, producers and consumers, and each role's methods: a producer would push, or enqueue, and a consumer would dequeue. This is just the outside; it doesn't involve any fancy Cats Effect stuff. And then we need to think about, just in words, what behavior those methods have when we invoke them. Like we said before, in the case of the queue: if the queue is full, you can't add new things, or you would block, say; or if the queue is empty, a dequeue would block. You want to say what the behavior is like, and these behaviors depend on the current state. Once you have that definition, we need to implement these methods, and we define them here. So we have this recipe. I'm going to show you an example, because it sounds very complicated when you look at it like this; you see this and you're like, oh my goodness, do I have to? It'll be easier once we see an example. A queue is a bit more complicated, so we're going to do something a little simpler.
This coordination interface is called a countdown latch. A latch keeps a door closed; and in this world, latches are closed, then they open, and they always stay open. And it's not just one event that opens the latch: you want to say that some exact number of events has to happen before the latch opens. Like: I need three high fives to feel good about myself, and I'm not leaving this talk until I get three high fives. There we go. So that's what a latch is. So this is how you would use our latch. We construct a latch, saying: I'm going to wait for three events. Person, friend, doesn't matter, I'll take high fives from anybody, you are my friend. I'm looking for three events, and I'm going to wait till they happen. So I get my latch after I construct it. And there are going to be people who are waiting for that latch to open, and once that latch opens, they're going to say, "finally, it opened." So this one is somebody waiting for the latch to open, and this is the key: this will only happen once the latch opens. Then there's the other side. The other side says: I'm going to notify. I'll say "go," and then I'm going to decrement that latch; I'm going to say, here's a high five. And I'm going to run three of those notifiers, which decrement the count, and two waiters, who are waiting for that count to go to zero, and I'm going to run them all at the same time. So let's see what happens. Okay. So I started all of these things; they're running concurrently. But I see the behavior that I want: I see the notifiers running first, decrementing. And the waiters are running concurrently with all of them, but they stop until all three of the notifiers have notified. And then the latch opens, because three went to zero.
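As it happens, Cats Effect 3's standard library ships a CountDownLatch whose shape matches this usage (it calls the notifier's operation `release` rather than `decrement`); a sketch with one waiter and three notifiers (the strings are illustrative):

```scala
import cats.effect.IO
import cats.effect.std.CountDownLatch
import cats.syntax.all._

val demo: IO[String] =
  for {
    latch   <- CountDownLatch[IO](3)                  // wait for three events
    waiter   = latch.await.as("finally, it opened")
    notifier = IO.println("go") *> latch.release      // one "high five"
    // Waiter and notifiers run concurrently; the waiter is blocked
    // until all three releases have happened.
    results <- (waiter, notifier, notifier, notifier).parTupled
  } yield results._1
```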
And then the two waiters print. So we've ensured some serialization here, but it's serialization with a more complex condition. The latch allows us to serialize three events before some other events: these three events have to happen before any number of other events happen. That's what's cool about the latch. So how do we build the latch? This is our concurrent state machine; this is why we're here. So let's talk about the roles. There are two roles. There are the waiters: they're waiting until the latch opens, and we want to block those waiters. We want them to not be able to do anything; we want to ensure serialization. They only get to run after the latch opens. The other role is whatever is going to contribute to opening the latch. I call these notifiers: they notify the latch that some piece of work is complete. So here's our interface. The waiters are going to await. It produces an effect with a unit value, which is like nothing, but you won't get that unit until everything is ready. The decrement is for the notifiers; it decrements the count inside the latch, and when that count goes to zero, that will unblock the waiters. So that's our interface. Now, it's a state machine, so we need state; we need to model the state. And we can think about it as essentially two states. We have an algebraic data type. Here's our supertype, State, and there are two things that extend State. Either we're Outstanding, which means our count is not zero, and we store n as the current count; or we're Done: we know we're at zero and we don't care anymore. When we're not done, when we have an outstanding count, we need to be able to tell the waiters that eventually they're going to get a unit.
So we're going to use this Deferred to provide that unit value once we have it, and block otherwise. So we have state: either we're Outstanding or we're Done. We keep track of how many more complete calls we need, and we have this Deferred in order to block the callers until that happens. Now, if we have state, something needs to modify the state or read the state. Where is that held? We manage the state as a Ref. When we create the latch, in the apply method, which is like a constructor or a factory method, we're producing a latch, so we need to create a reference that holds something of type State, and we initialize our state to Outstanding with n. So if we want a countdown latch waiting for three events, we initialize it with n as three. We also construct this Deferred that lets us signal things. Yes, there's a comment in the chat: it's rather language independent. That is my goal. My hope is that in whatever language you have, if you had an atomic reference data type, and another data type that provides serialization or blocking, you could build these things too. It doesn't have to be Scala. So we have our state, either Outstanding or Done, we manage it with some sort of atomic reference, and we coordinate the blocking behavior with this Deferred. Then we have to implement our methods, one for each of our two roles, starting with awaiting. The way you implement each method is: you are your state machine. You get the current state and you perform whatever behavior, dependent upon what the state is. If the current state is Outstanding, we have that Deferred value that lets us do blocking, and we ask the Deferred to get its value, which will block the caller. We want await to block when we're Outstanding.
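As a small illustration of those two building blocks on their own: in Cats Effect, `Ref` is the atomic reference and `Deferred` is the one-shot blocking cell. This is a standalone demo, not part of the latch; the values are just for demonstration:

```scala
import cats.effect.{Deferred, IO, Ref}

object RefDeferredDemo {
  // Ref: atomic, concurrency-safe mutable state.
  // Deferred: starts empty; `get` blocks until someone calls `complete`.
  val demo: IO[(Int, String)] =
    for {
      ref   <- Ref[IO].of(0)
      d     <- Deferred[IO, String]
      _     <- ref.update(_ + 1)    // atomic modification
      fiber <- d.get.start          // this fiber blocks on the empty Deferred
      _     <- d.complete("opened") // unblocks anyone waiting on d.get
      msg   <- fiber.joinWithNever  // the blocked fiber now yields "opened"
      n     <- ref.get
    } yield (n, msg) // yields (1, "opened")
}
```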
And when we're not Outstanding, we're not going to block; we just return right away and say, you don't need to wait, there's nothing to wait for. So if n is greater than zero, we block by getting the value of the Deferred, and if our state is Done, we don't block at all, because there's no reason to. So awaiting is easy. Then there's the decrement method. If I were to say it out loud: we look up the current state. If the count is greater than one, I write back n minus one. And if the current count is one, it's about to go to zero, which means I switch to Done and I unblock everybody. It's the same pattern. We modify the current state, which lets us read the current state. If I'm Outstanding and there's only one count left, we need to transition to the Done state, and then perform the action of completing that Deferred, which says: hey, waiters, unblock. That's the transition from Outstanding to Done. But if n is not one, if it's greater than one, then we go from Outstanding(n) to Outstanding(n - 1), and we do nothing as a side effect. And if we're Done, decrementing doesn't do anything: Done stays Done, and we do a no-op. Hopefully this is somewhat readable even if you're not familiar with Scala; you can get the general idea. It's a state machine: current state, new state, side effect; current state, new state, side effect. There we go. So we did it. That's how you build it. We have our external interface, and we implemented its methods with the concurrent state machine: looking at the current state, perhaps updating the current state, performing some action based on that state. And now we can use it to collaborate. Oh, this line on the slide is wrong.
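Putting together the interface, the state ADT, the constructor, and the two methods just described, the whole concurrent state machine might look like this sketch in Cats Effect 3 style; the names follow the talk's description, and `modify` is used here to atomically pair each state transition with the side effect to run afterward:

```scala
import cats.effect.{Deferred, IO, Ref}

trait Latch {
  def await: IO[Unit]
  def decrement: IO[Unit]
}

object Latch {
  sealed trait State
  case class Outstanding(n: Int, whenDone: Deferred[IO, Unit]) extends State
  case object Done extends State

  def apply(n: Int): IO[Latch] =
    for {
      whenDone <- Deferred[IO, Unit]
      state    <- Ref[IO].of[State](Outstanding(n, whenDone))
    } yield new Latch {
      def await: IO[Unit] =
        state.get.flatMap {
          case Outstanding(_, whenDone) => whenDone.get // block until it completes
          case Done                     => IO.unit      // already open: don't block
        }

      def decrement: IO[Unit] =
        state.modify {
          // last count: transition to Done and unblock all the waiters
          case Outstanding(1, whenDone) => (Done, whenDone.complete(()).void)
          // still counting down: decrement, with no side effect
          case Outstanding(m, whenDone) => (Outstanding(m - 1, whenDone), IO.unit)
          // already done: decrementing is a no-op
          case Done                     => (Done, IO.unit)
        }.flatten
    }
}
```

Each `modify` case reads as a row of the state machine table: current state, new state, side effect.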
On the slide, this should be await. Syntax error! Run your compilers. Okay, so let's quickly summarize. We talked about a lot of general concepts, and I hope they're generally useful. There's this idea of concurrent coordination: we have concurrency, where we don't know who's going to finish first or even who's running, but we want to coordinate them in time, to order the events. That idea is called synchronization, and there are different kinds of synchronization, different ways of producing it. Once we make these components, it's their job to coordinate between the concurrent computations. Producers and consumers are coordinated by a queue. Waiters and notifiers are coordinated by a latch. Updaters are coordinated by a lock. Something like that. And the technique we can use to build these coordinating interfaces is the concurrent state machine. What's really nice, especially in Scala and in functional programming libraries like Cats Effect, is that these are composable primitives. We have Ref and Deferred and things like IO, and they all interoperate; there's no sort of spooky action. That's really nice as a programmer. There are lots of examples out there. Many, many libraries are built on top of Cats Effect. You might look at Chris Davenport's libraries; he has a million of them, and they all use this technique of a concurrent state machine. It's super powerful, and I'm very glad that Fabio formalized it and taught it to people. So that's the end of my talk. You can learn about Cats Effect at Typelevel. There's the Little Book of Semaphores. Thanks to Fabio and the Twitterverse for cool pictures. I wrote a book about effects and Cats Effect called Essential Effects; there's a discount code for anyone here who wants to get a copy. Thank you very much. I'll try to answer some questions quickly and chat with folks in the hangout. Yep.
Thanks a lot, Adam. Wonderful session.