Alright everyone, welcome back. This is a stream people have been asking for for a while: we're going to tackle the whole idea of futures and asynchrony in Rust. We're not so much going to talk about how you use it, but rather how it works internally: in the language, in the libraries, how all the pieces fit together. So the goal isn't necessarily that you're going to write lots of futures yourself, although you might end up doing that, and it's not really about writing async/await code; it's about how these components are written in the first place. What we're going to do is look at futures first. We'll start with futures 0.1, the version that's released and that the ecosystem at large is using. Then we'll go into Tokio, which is one of the main executors for futures in Rust-land today, and which deals with I/O: network connections, file systems, those kinds of things. Then we'll look at the move to take futures into the standard library: what that looks like, how it differs from the existing futures stuff, and why it's different. Then, depending on how time works out, we'll look at pinning. You'll see that the new futures stuff has the notion of pinning, and we'll talk about what that is and why. That ties in very strongly to the idea of adding the async and await keywords to Rust, so we'll talk a little bit about what those mean, how they work under the hood, and how they tie back to Pin and the Future trait in the standard library. I've mentioned this a little before, but this stream is going to be more me talking about how stuff works.
We'll probably look at a little bit of code, we'll do some reading of RFCs, and there will also be some drawing and diagrams. There'll be relatively little coding on my part, although if it's necessary, we'll do it. The goal here is really to go through all this stuff and explain how it works, because I haven't found any good, cohesive explanation that covers the entire ecosystem and where it's going, and the hope is that this might be that. If you have any questions as we go, feel free to fire them off in chat; I'm monitoring it on the side. If you have a question, chances are someone else who's watching, either now or later, will have the same question, so questions are awesome. Just before we start: I'm Jon. I do a bunch of these live streams, usually about building things in the Rust ecosystem. I've done a stream on open source contributions in Rust; we've done several where we implemented various asynchronous primitives; we did one on asynchronous SSH; we've done a couple on implementing the ZooKeeper protocol on Tokio, and a bunch of other stuff. You can go back and look at my YouTube channel for recordings of all the streams. I also built a tool the other day (the video for building it is also online) that is basically a website where people can vote on what ideas they want to see next. This is my Twitter account if you want to follow me; feel free to do that if you want updates about the stream and about Rust in general. I tweet about other stuff as well, which may or may not annoy you, but I try to keep it relatively high signal. So what I've usually done in the past when trying to decide what streams to do, and what I did for this stream as well, is how
this came to be: basically, I tweet out polls asking people what they'd like to see. This particular stream came about because, over basically the past year, I've had a lot of people say "can't you just cover how async works?" In particular, I sat down with Brian Myers, whom I met through Twitter and through the stream, in New York the other day, and he said "how about you just do one on futures and Tokio, and explain how it all works?" That seemed like a great idea, but I wanted a better mechanism for learning what all of you want to watch as well. So what we built the other day was this tool where I put in ideas for upcoming streams and you can vote on them. There are four ideas in there currently, and it uses ranked-choice voting, which is cool; you should look it up if you don't know how it works. The basic idea is that you rank the candidates, and over time it runs elections to figure out which candidate stream is most preferred. What I'll do is take whatever is currently winning, do that stream, remove it, and run another election. It's pretty neat; you can go watch the stream where I built it if you care. It's not written to be secure in any way, it's broken in all sorts of security ways, so don't abuse it; it's just there for you to express what you'd like to see. All right, with all of that said, let's dig into futures. For those of you who don't know what futures are at all, I won't go through the entire story, but I'll give you some basic background on futures and asynchrony in general. The idea behind futures is that we want some way to express a value that is not yet ready.
If you come from other languages, this is similar to JavaScript promises; many languages have a similar concept of a value that's not quite ready, but if you wait a little while, the computation's value will be. You could think of this as something compute-heavy, like "I need to compute a bunch of hashes, and eventually the hash will be ready for you". Or you can think of it in terms of asynchronous I/O: you have a network channel and you want to read from it, but there's nothing currently on the channel, so what you get back is a future that says "eventually there's going to be some bytes for you". At a very high level, futures are a building block for doing lots of concurrent operations. The idea is that you have an asynchronous computation, something like "I'm reading from a network connection, but I'm not going to block the thread right now". The way this often comes up: imagine you have a web server with, say, thousands of connections. You could spin up a thousand threads and have each one block reading on its own connection, but that seems unnecessary. Why do we need all those threads?
Instead, with futures, you can have one future for every connection, and then a single thread (or some set of threads) that looks at all of them and handles whichever one is ready at the time. If you come from the JavaScript world, or to some extent from Go, but certainly in JavaScript, you have the notion of an event loop: you can think of your program as single-threaded, and there's just one thread that, whenever you tell it to read from a network socket, remembers that you asked, and then when the socket is ready it goes "oh, that's ready now" and runs your computation. The way this often looks is something like the following; this isn't going to be real code, but take it for what it is. Imagine I'm connecting to server one and server two, and then I'm going to write to x and to y, and read from both. I could have this code work sequentially: connect, and wait until the connection has succeeded; connect again, and wait until that connection has finished; write to x, and wait until those writes have gone out; write to y, and wait until those are finished; then wait until x responds to me, and then wait until y responds to me. This works, it's totally fine, but it's a little finicky because we're doing a lot of waiting. For example, why are we waiting for "foobar" to be written to x? Why wait for all of that to go out to the network before we send anything to y?
Why not send to x and to y, then wait for x and y in parallel, and deal with whichever comes back first? In a futures-based world, what you'd generally write is something along these lines: connect gives you back a future of a connection; when that resolves, we write "foobar" on the connection. Again, the types here are all made up; think of this more as an attempt to explain the idea behind futures-based computation. When the write finishes we get the connection back and issue the read, and whenever we get the bytes back (say this step resolves to the connection and the bytes we read), we check that the bytes are "barfoo", and at that point we don't even need x anymore. Then we do the same for y, basically the same thing, so the two futures are going to look the same. Each chain gives you back a future. If you tried to print this future to figure out what it was, it would just tell you "I don't know yet". Say the whole chain evaluates to a bool: if you were to print the future right after constructing it, it would not print true or false. What it would print is "this is a thing that will eventually become a bool". That's what we mean by an asynchronous computation, or a future. Similarly, the type of the y chain is also "something that will eventually become a bool". Where this gets interesting is that you now have these two things that are basically just descriptions of what to do. You can think of this code as not having done anything yet.
It's sort of lazy. That's not quite true, but you can think of it as not having done the computation yet; it's just describing the steps it will go through. It's really just saying: whenever this finishes, do this; and whenever that finishes, do that. But none of the steps have finished yet; they might not even have started. And this is where the notion of an executor comes in. An executor is something you can hand futures to, and it will make sure they get done. So say a is an executor of some kind; we don't really know what an executor is yet. Let's call the futures fut_x and fut_y. You can imagine something like a.run(fut_x): only when we run the future do we get back the bool. But this is still sort of weird: it says "run the entire fut_x future, and then run the entire fut_y future". That doesn't seem much better than what we had initially. And we might not even care about the return values; maybe we move the assert into each chain, so each future maps to a bool of whether the bytes were equal and asserts it itself. Maybe what we really want is to say "just run these and have them happen; I don't care about the return values". That's when you can use the idea of spawn: a.spawn(fut_x) tells the executor "run this thing, make sure it gets run at some point", and the same for fut_y, and then at the end we just block_on; let's say that's what it's called.
What this is saying to the executor is: have these things run in the background, I don't care what order they finish in, I just want them run at some point, and tell me when they're both done. So you might wonder how any of this helps: we've spawned them, but without knowing what the executor does, how do we know they run in parallel? This is where the Future trait comes in, because that's where all the magic comes from. Let's focus on the top three lines of the trait, and keep in mind both fut_x and fut_y are futures. A future is a thing that has an Item: the value it resolves to when it completes. In our case that would be bool for both chains. It also has an Error, which goes away in later versions, but let's talk about it for now: Error is the error type each of those intermediate steps could give you. You can think of TcpStream::connect as giving you back an io::Error if it fails; in the futures world, what that really means is that the future is either going to resolve into a bool, or it's going to resolve into an io::Error. Think of it like a sequence of Results where you use the question-mark operator: the error type just keeps getting propagated. If one step errors, you don't execute any of the steps below it; the future just resolves into whatever that error is. So that's what the Error type is. And then all the magic for futures happens in the method poll, so let's zoom into that a little. Notice it has pretty long docs, because poll is very important in futures-land. Let's look at what it does.
So, poll: when you poll a future, it takes a mutable reference to self, to the future itself, and gives you back a Poll of the Item and Error. What is a Poll? Poll<T, E> is really just a Result<Async<T>, E>. And what is an Async? An Async is either Ready or NotReady, and this is where we get to the heart of what an executor does when it has a bunch of futures. Imagine we have something like struct Executor, and think of it as holding a vector of futures; that's not actually what it stores, but let's just imagine. When you call block_on_all (again, none of this is how the actual code is written, but I think it helps for exposition), what the executor does is poll all of the futures: for each future f, call f.poll(). Recall there are three possible return values: it's an error, or it's Async::NotReady, or it's Async::Ready. If poll returns an error, the future resolves with that error, so we'd want some way to communicate that back. We can match on the result: it can return Async::Ready(t), where t is the item value the future resolves to; it can return Async::NotReady, in which case we have to do something; or it can return an error of some type. Now, here we don't really have anyone to tell about the result, so let's try to write this in a way where the return value is useful. Instead of block_on_all, let's imagine run_all(fut_x, fut_y).
Let's do that. So we're going to have a run_all method, and instead of storing the futures inside the executor, the executor is just given the futures to execute, and it returns a Vec of the futures' items. I know these aren't real types, and this assumes all the futures have the same Item and the same Error, but let's imagine that's fine for now. What this function does is keep a results vector, because our goal is to return the result of every future we poll. If a future returns Ready, we now have a result: Async::Ready means the future said "I have done all the work, all the and_thens, and I have the result at the end". The T is the future's Item, so we store it in results[i]. NotReady means the future is telling us "I have more work to do: there's a network packet I'm waiting on, or some more computation; come ask me later". In that case there's nothing for us to do with this future right now, so we just continue. If we get an error, we store that error in results instead. So given this loop, we're given a bunch of futures, we call poll on all of them (you can think of poll as really meaning "do more work"); if one says it's done, we store that it's done, whether that's an Err or an Ok; and if it returns NotReady, we have to call poll on it again later.
Because otherwise we would never call poll again, we need an outer loop. Now, this is where you can see this start to break down: a plain results vector isn't enough to hold all these results, and we'd need to know when we've finished. But the basic idea, I hope, makes sense: NotReady just means we have to call poll again to make more progress towards yielding the item in the end. So let's rewrite it so the results aren't stored in order: when a future returns Ready, we push the result along with the index of the future that resolved, and we make the loop a while loop that runs until every future has returned Async::Ready. Note that we need iter_mut, because poll takes &mut self; otherwise we wouldn't have the mutable reference. This is broken in all sorts of ways, but the wrapping around it is not super important; what matters is the idea that if we're given a bunch of futures, we have to call poll on all of them, and if any return NotReady, we have to make sure to call poll on those futures again. Now, there are many things you should start to wonder about when you see this code. For one, it means we'll poll a future even after it has finished; why? And this looks like it's just going to be a busy loop: we call poll,
it tells us it's not ready, and we immediately call poll again. That seems relatively inefficient, and you're totally right on both counts. In fact, you're not allowed to write this code: if you wrote this, your implementation of an executor would be broken. In particular, we need to dig into what poll actually tells us, and the poll documentation is actually pretty decent. Notice that it says poll queries whether the value has become available, and that the future will register interest (we'll get back to that). It checks the internal state of the future and assesses whether the value is ready to be produced, which basically means: has your computation finished or not? And notice that implementers of the function should ensure that it never blocks. You can see why from this code: if poll blocked, we wouldn't poll any of the other futures for a long time, so they wouldn't make progress either. So think of poll as just making progress on whatever you're doing, whether that's sending or receiving network packets, doing some more iterations of a computation, or waiting for a timer to pass. Poll does a little bit of work and then returns, and this should lead you to realize why calling run_all on both fut_x and fut_y means we make progress on both at the same time. Remember, this is all single-threaded, but we now have a single thread that pokes at the x future and asks "are you ready yet?", and the x future says "I did some work, but I'm not ready yet". Then we poke at the y future and ask "are you ready yet?",
and it says "not quite yet, but I did some work", and we keep alternating between them until one of them finishes; then we keep poking the other until that finishes too, and eventually they're both done. That single thread has done both pieces of work "in parallel" just by switching between them. And this extends, of course, to many computations: you can have a single thread that manages, say, a thousand connections, and all it's really doing is checking all thousand of them, "are you ready?", polling each to make some progress, and moving on.
So you can see how this setup lets us handle many computations, especially when they're not compute-heavy. If all your futures are mostly waiting, for a network packet to be sent or received, or for a timer, then they're all just sitting there, and the single thread has a lot of spare time. If you had a thousand threads, one per connection, most of your threads would just be sitting there; here we have a single thread that works on whatever is ready and keeps everything else in the background. This ties back to what we talked about with NotReady. When a future's final value isn't there yet, it returns NotReady, but notice where it gets interesting: the documentation says the future will also register interest of the current task in the value being produced, and when the future is ready to make progress, it should unpark the task. Okay, so this park and unpark business seems important. You can think of it as: when I call poll, what you should do, as the future, is mark yourself as not ready by returning NotReady, and then at some point in the future, you're going to make someone mark you as ready again. There's basically a contract here: if you return NotReady, you've arranged for something.
Okay, so there's this contract around NotReady: if you return NotReady, then you must have arranged for yourself to be woken up again when you can make progress. Poll returning NotReady means "I can't make progress right now, but I've told my friend over there to wake me up when I can". The basic idea is: think of a future that's just waiting on a timer. You ask it, "hey, are you ready?", and it goes, "no, I'm not ready; my timer hasn't expired yet". What that future is going to do is, imagine there's some other thread out there that's just running, checking the time; the future tells that other thread, "hey, you, other thread over there, can you wake me up when this time is reached?". And the way that works is through this notion of a task. So where is the documentation for task? In futures-world, you can get a handle to yourself: when I poll this future, there's some magic that makes sure a value points back to, roughly, this executor. It's not quite this executor, but you can think of it as this executor for now. Inside poll, the future can call this method, task::current(), to get a handle to basically the current executor.
So that's what this task is. Again, it's not really what a task is, but it's fine for now. That task handle, the future can give away to whoever it wants: it can send it to a different thread, it can do whatever it likes (it has to stay within the same process, of course), but generally it's going to give it to some other thread. That other thread is then responsible for, at some point, calling notify on the task, and what notify means is: "you previously couldn't make progress, but now you might be able to". That roughly makes sense, so back in our executor: you can think of it as, we poll all the futures, and then at the end we go to sleep; let's call it a sleep, it's not actually a sleep. Here the thread goes: "I polled all the futures, all of them said they were not ready, so I'm just going to wait until one of them is able to make progress". Meanwhile, on the side, some other thread t notices that, say, a network packet arrived, and in particular that it arrived for the future we polled that returned NotReady. Think of a future that's something like a TCP connection: if this is a TcpStream and a packet is received for that stream, the thread that's monitoring TCP streams notices "a thing came in", and because our future, when it returned NotReady, took its task handle and gave it to that thread, that thread knows that when something comes in on this TCP connection, it has to wake up that task. So it calls task.notify(), and that notify is going to unblock our sleeping executor thread.
(This is not actually thread parking; there's some other mechanism by which we wait for task.notify to be called.) And now you can imagine this being even more sophisticated. For example, instead of having one task that we give to all the polls, we keep one task per future. Let's write some pseudocode: as the executor, for each future I do tasks.push(Task::new()). I'm not going to say what's in a task; there's some stuff in there that lets you notify it, but imagine I can create new tasks on my own. You could easily imagine it has to be something like a mutex. A question from chat: "notify the executor when the result is ready; isn't that similar to a callback strategy?" Ah, so, notify does not necessarily mean the future, or the result, is ready. Notify just means "I think you can make progress now". Imagine you have a TCP stream: a TCP stream is a future that doesn't resolve until the entire stream goes away. So in that case, if the TCP stream future gets notified, what that really means is just "there's maybe something for you to do now", because something was received, or something was sent, or something happened.
And so you might want to check in on your stream again. So imagine that for every top-level future we have, we create a task. Then, just before polling future i, we do something like task::set_current(&tasks[i]), and in the loop we add: if the task hasn't been notified, continue; don't poll futures that can't make progress. The idea here is that f must have arranged for tasks[i], its task, to be notified later. This notion of a task buys us two things. First, we know there's no reason to poll a future that can't make progress: if we haven't heard a notification for it, we don't poll it. Second, just before we poll, we set this magical "current task" thing, and then we poll; so if the poll tries to get a handle to its own task, it gets tasks[i]. If it gives that away to some other thread (this might just be a clone; imagine it's a clone), that's really a handle to tasks[i]. And when that other thread detects something relevant for this future, it notifies tasks[i], which means that on the next iteration through our loop, tasks[i] is marked notified, so we poll it again. But notice what this means: if a poll fails to arrange for someone to notify it, it will never be called again. And this is why it's part of the contract for poll, if you implement a future, that you must have arranged for yourself to be woken up again. If you don't, poll won't be called again,
and you won't get to make any more progress. Down here you can now imagine that we actually do let this be a busy loop: the thread just spins, but it only polls things that are ready to make progress, and eventually all the futures complete and it goes on its merry way. Of course, you could imagine making this more efficient, doing something useful down here instead of spinning, but we don't really know what that useful thing is yet, so for now let's just busy-loop. It seems like the tricky part in all of this setup is finding some way to have these tasks be notified: how do I get something to notify me? Now, the futures crate doesn't actually talk about this. The futures crate is only about expressing things that are asynchronous; the mechanisms you use for waking things up live elsewhere. It does, however, have an executor module down here. An executor is basically the thing we just wrote, but the executor module really just provides the definitions, the types you need in order to write an executor. NotifyHandle you can think of as akin to our task: it's the thing we're going to set as the current task, that someone else will be able to call notify on. And internally it's, well, who knows what it is internally: an unsafe fn notify. Gee, that's helpful.
So with_notify: this is really going to be something like executor::with_notify. And this here is not going to be a Task::new, but a NotifyHandle::new. You'll notice that NotifyHandle also just has a notify; the id here is so that, instead of having a vector of these, you create your own single NotifyHandle (let's ignore what this unsafe inside of it is). And then down here, executor::with_notify. So what does with_notify do? Notice it takes a Notify, which is going to be our... I guess this should be "a notifier". So notice this is all still really just the task stuff, but now we're actually fitting into what futures does. It takes a notifier; an id, which is going to be an identifier for the current future, in our case just the index into the vector; and an f, where the closure f will be executed. So really, what we want here is: with_notify says, just give me a notifier, I will set all the task stuff up for you, and then call whatever you give me in this closure. In this case, what the closure is going to be is f.poll(), and we're going to match on this whole thing. And so notice how this is basically the same as what we wrote, just written without the additional vector and all that. The futures internals are written to have very low overhead: you don't want lots of vector allocations and whatnot in this API. So instead we have this single NotifyHandle, and then we have these unique ids that we use to figure out what's going on. And then notify_handle.notify(i), I think that's what I want. Right, so the real way to do this: now that we're decently close,
we might as well do it properly anyway. The notifier is going to be an Arc::new of something, and we're going to give that to this. So you can create a NotifyHandle by giving it, basically, a thing that implements Notify. It just happens to require that there's an Arc here so that it can easily make more of them, right? Imagine that you're spawning, creating, lots of futures; then all of them need a handle to the same notifier for your executor, and so an Arc seems like a reasonable thing for that to be. And then we need to implement the Notify trait for what we give it. So in this case, this is going to be, like, MyNotifier, and we're going to do this in the stupidest way possible. So MyNotifier... what is it even going to be? A Mutex of a... this is going to have an Arc of a Mutex. Again, this is a really stupid implementation that has all sorts of overhead, but my hope is to give you roughly an idea of what's going on. So here, we're just going to use a Mutex over a Vec, and that Vec is just going to store this value that we used, this "notified", right: whether each future is notified or not. And we're going to say that initially, we should think of all the futures as being notified: we want to poll all of them the first time, otherwise they never get a chance to set up their notifications. All right, so this is just MyNotifier, and we're going to pass that in, and then we're going to implement Notify for MyNotifier. The Notify trait requires these things, provides implementations for those, and the main thing it requires is this notify method. So think of this as:
remember how, in poll, you can do task::current, which gives you a Task, and then on that Task you can call notify? So if t is a Task, you call t.notify(). If you call t.notify() within this poll, what that's going to end up doing is calling notify on this notifier with this identifier, and that's what's happening here. And then let's just imagine that the way this actually works is: it's going to lock the mutex, then it's going to set that index to true. And down here, what we're going to do is self.0.lock(), so that is the same mutex; we're going to lock it and look at i, and if the i-th element is not true, then it hasn't been notified. And then, of course, the moment we choose to poll it, we want to say that it is not notified anymore; otherwise, we're just going to keep polling it. That roughly makes sense, right? So I guess this should be "was_notified": if the i-th future was not notified, then we don't need to poll it; and now we set was_notified to false because we're about to poll it, right? Okay, so now we have all the infrastructure that's necessary for notification, which is causing a thing to wake up later, and that's all that's really defined in the executor module. So the way we got into this was the question: how do we get notify to be called? All of this infrastructure we've set up really just ensures that, when you call poll, you have a thing where you get a handle and you can call notify on that handle. The question is: who calls notify on the handle? How do you know when you can make progress if you returned NotReady? Who's going to wake you up? Who are you going to give this task to?
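Before moving on, the notifier machinery just described can be sketched directly. The Notify trait below mirrors the shape of the one in futures 0.1's executor module, but MyNotifier and check_and_clear are my own deliberately naive stand-ins:

```rust
use std::sync::{Arc, Mutex};

// Shaped like futures 0.1's Notify trait: an id identifies which future
// (for us: an index into the executor's vector) became ready.
trait Notify: Send + Sync {
    fn notify(&self, id: usize);
}

// The stupidest possible implementation: a mutex around a Vec of bools.
struct MyNotifier(Mutex<Vec<bool>>);

impl MyNotifier {
    fn new(nfutures: usize) -> Arc<Self> {
        // Everything starts out notified, so each future is polled once
        // and gets a chance to set up its wake-ups.
        Arc::new(MyNotifier(Mutex::new(vec![true; nfutures])))
    }

    // The executor side: was future `id` notified? If so, clear the flag,
    // because we're about to poll it and don't want to poll it again
    // until someone notifies it anew.
    fn check_and_clear(&self, id: usize) -> bool {
        let mut was_notified = self.0.lock().unwrap();
        std::mem::replace(&mut was_notified[id], false)
    }
}

impl Notify for MyNotifier {
    fn notify(&self, id: usize) {
        self.0.lock().unwrap()[id] = true;
    }
}

fn main() {
    let n = MyNotifier::new(2);
    // Initially notified, so the first poll happens.
    assert!(n.check_and_clear(0));
    // Cleared: no reason to poll again until notify is called.
    assert!(!n.check_and_clear(0));
    n.notify(0);
    assert!(n.check_and_clear(0));
    println!("notifier behaves as described");
}
```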
And that is what brings us from futures land to Tokio. So futures defines, as we talked about, the general principles, the general interface, for how to deal with asynchronous computation: the interface for things like tasks and notifiers and wake-ups and executors. But it doesn't really define any implementation details. It doesn't give you an executor; it doesn't give you some other thread that's going to wake you up; it only defines the interfaces. Tokio, on the other hand, is an implementation of both an executor and the other infrastructure that you need in order to make things wake up. That is basically what Tokio provides. And in fact, as you can see from the documentation, Tokio is "an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language". So in some sense, what Tokio is, is this business, and this business was not too hard to write. Of course, it's a lot harder to write an efficient implementation, but it's basically this. Where Tokio shines is that it also gives you the thing that's going to wake you up, and in Tokio that thing is called a reactor. There are many things that can wake you up. The most common is that there's some kind of I/O that you were waiting for: a disk read or write that finished, a network read or write that finished, some socket that is ready. That kind of I/O is the most frequent thing to wake you up, but there can be others, and we'll get into those later; timers, for example. So... actually, maybe we should do timers first. No, let's do I/O first. Right, so again, think of the case where f is a TCP stream, I guess. While we're here, we might as well make this proper.
So this is going to be F, its Item is going to be F::Item, its Error is going to be F::Error, where F is this future. Remember, this requires all of your futures to be the same type; that's not true in the real implementations, but you get the idea. Of course, normally you also wouldn't return a vector with the index as one of the values. That's also really weird, but such is life. Okay, so the question is: what is a reactor? Well, a reactor is basically a thing that you can give a tuple of a task (a thing to wake you up) and a handle to something the operating system is going to tell you is ready. In order to understand this, we first need to talk a little bit about how asynchronous I/O works in the first place. "How much of a change is futures 0.3?" We'll get into that a little bit later, in part because futures 0.3 isn't fully stabilized yet; that's why this RFC is open down here. It is basically the same in terms of execution model, but there are some changes to the interface that we'll get into later. The reason I want to do it later is that Tokio uses futures 0.1, and so I want to talk about it in the context of Tokio first, and then we can talk about how this is changing going forward. Okay, so, how asynchronous I/O works: it works differently in different operating systems, but the basic idea is the same. So if you have some kind of I/O handle... oh, I can draw. Yeah, let's draw; it's time for some drawing. Who needs code? I'm a terrible drawer, you're gonna regret this. I'm gonna regret this, I guess. So, in most applications... down here we have the kernel. Wow, writing on this is hard. So down here we have the kernel, and then up here somewhere is your application. This is your application. It's a great application; you're super happy about it.
So you have your application, you have your kernel, and your application has a bunch of outside connections. But really, those connections aren't to your application, right? This is not really true. In reality, there are a bunch of connections that your kernel knows about, and it ties each of those connections back to your application. The way it does that is that it internally has some identifier for each of these. On Linux and Unix and macOS (not quite on Windows), the idea is that each of these has an identifier; let's just imagine that they're numbers. On Linux, they are in fact numbers. And when your application calls, for example, TcpStream::connect, what you really get back is a number: the kernel tells you, "I connected, and I gave it id one". And so when you do connect here, imagine that you have four things already: you connect to some server, and the kernel is eventually going to tell you, "okay, that's now five", and it ties that to your application's identifier.
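You can actually see this "it's just a number" business from Rust, via std's AsRawFd trait (Unix-only; fd_of is a hypothetical helper name):

```rust
// On Unix, an open resource in your process really is just a number,
// and std will show it to you through AsRawFd.
use std::fs::File;
use std::os::unix::io::AsRawFd;

fn fd_of(f: &File) -> i32 {
    f.as_raw_fd()
}

fn main() {
    let f = File::open("/dev/null").unwrap();
    // 0, 1, and 2 are stdin/stdout/stderr, so a fresh descriptor is >= 3.
    println!("fd = {}", fd_of(&f));
    assert!(fd_of(&f) >= 3);
}
```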
Yeah, so this identifier is the file descriptor on Linux. Other operating systems call it different things, but this is basically a number that identifies this particular I/O resource in the kernel. So connect is going to give you five, and then later, let's say in your application you wrote something like "x = connect(...)"; what x is really storing is just the number five, nothing more. When you call, say, x.write, and you give it some string "foo", what this turns into is really just a write system call. A system call is how you tell the kernel to do something; that's the primary interface between these two, it's called syscalls. Really, what you're doing (and this is usually a C API of some kind) is just calling the write syscall with the number five and "foo". Right, so any time you do an operation on x, you're really just doing an operation on a number, but the kernel knows about the mapping between these numbers and the actual underlying resources. Hopefully all of that roughly makes sense. So, where this now gets tricky: let me clear this, and then redraw just the basic things that we need. So we still have the kernel down here; it has a bunch of resources; it has numbered them; and your application is up here (I'm gonna draw it a little bit bigger this time). You can think of the application as knowing all of these numbers, but that's all it really knows. You can ask the kernel more things; like, if you wonder what IP address you're connected to, there's a system call you can do that says, "hey, tell me more about number three". So in our futures world, we have these numbers one, two, three, four, which we've mapped to, say, x, y, z, and a Greek letter, because why not; it seems like more fun. All right, so we have these four
variables that are all TCP streams, and internally they're really just storing, like, one, two, three, four, but our application doesn't really know that; it just knows the names. And now we have some future that's computing over all of them. So imagine that we're going to do x.read, the same for y, the same for z, and the same for the last one. So all four of these we're going to do a read on, but of course, in our asynchronous model, we don't really want to first do x's read, then y's read, then z's, then the last one's. Instead, what we want to do is do all of the reads at once and just wait for whichever comes back first. This turns out to be a little bit tricky to do, right? Because the system call we have is read: it's a read of some number, and that's going to give you back some data. Now, this isn't exactly what the syscall interface looks like (there has to be a buffer and whatnot), but generally, you can think of it as: you do a read, you give it the number of the resource you're operating on, and it gives you back the data. So given that interface, how do we map that to doing these in parallel? Imagine we only have one thread; we want to do these four in parallel on one thread. When you do a read like that, it's going to block you. And so this is where the idea of non-blocking I/O comes in. The kernel has this additional parameter in here, usually called flags.
It's different on different operating systems, but let's just talk about Linux; it generalizes to Windows somewhat, the interface is a little bit different, but the ideas map pretty well. So you can pass in flags, and one of the flags that you have is one that's called, I think, O_NONBLOCK now. This flag is not actually set on read, but rather it's something that you set on the socket. So you say that, from this point forward, whenever I do an operation on, say, socket four, if that operation would block (if there's no data to read), I want you to return saying there was no data to read, instead of just blocking. Normally, when you do a read and there's no data available, the kernel is just going to stop the current thread and not return until there's data. But let's imagine we set this non-block flag on four. Then, if I do a read of four, and four has no data, like there's no data at all, what I'm going to get back is basically an error from the kernel, and that error is going to be of the type WouldBlock. This is again different on different operating systems, but in general there's some error that says: you told me to read, there was nothing to read, and you have told me not to block on four; therefore, I'm just going to tell you this is an error, I would have blocked. And similarly, of course, we could set this non-block flag on all of these resources, so if we try to read from any of them and it doesn't succeed, we just get WouldBlock. Okay, that's all well and good. So what this interface would give us is: we could just try to read x, and if it returns data, that's great, we can continue. If it doesn't return data, we immediately go to y, and we try to read from y. And so now we're just polling them, right?
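This WouldBlock behavior is easy to see from Rust, since std exposes O_NONBLOCK through set_nonblocking. A small sketch (read_kind_when_empty is a hypothetical helper name):

```rust
// After set_nonblocking(true), a read with no data available returns
// ErrorKind::WouldBlock immediately instead of blocking the thread.
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};

fn read_kind_when_empty() -> ErrorKind {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let _client = TcpStream::connect(addr).unwrap();
    let (mut server, _) = listener.accept().unwrap();
    server.set_nonblocking(true).unwrap();
    let mut buf = [0u8; 32];
    // Nothing has been written yet, so this read cannot succeed...
    match server.read(&mut buf) {
        Ok(_) => ErrorKind::Other,
        // ...and because of O_NONBLOCK it fails fast instead of blocking.
        Err(e) => e.kind(),
    }
}

fn main() {
    assert_eq!(read_kind_when_empty(), ErrorKind::WouldBlock);
    println!("got WouldBlock as expected");
}
```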
This is very similar to the API we had in futures: we're just going to try each one, and if none of them had anything, we go back to the beginning, try each one, and keep doing this. But that is really inefficient, right? It's sufficient, though. So imagine instead that we have our executor down here, and we have these futures, and each future is trying to block on a different stream. And remember how the core issue is: who's going to wake you up? So imagine that the executor spins up another thread, a second thread in our program. What that thread is going to do is: it has a channel into it from here that carries a task and one of these identifiers, an ident, which would be a file descriptor. So every future, when you call poll and it decides that it needs to do some reads from the network and the network isn't ready yet, sends its own task and the file descriptor it wants to wait on to this thread. And this thread is really just looping, and in every iteration of the loop it's doing, like: for each (task, fd), do a read on fd, and if this returns anything that is not WouldBlock, then notify that task, right? And of course, you would have to adapt this to also deal with other things than reads, and you would also have to make sure that this read doesn't actually read anything. Because if this read actually reads, say, a hundred bytes from this file descriptor, then when we go back and poll the future and it tries to read, those hundred bytes would just be gone, and that's not okay.
So writing with this interface is actually not straightforward, but you could imagine implementing it this way. This is of course really inefficient, because, first of all, you need this other thread, and this other thread is going to spin doing system calls. Imagine you're waiting on, say, a thousand sockets. Then this loop is just going to be trying to read from every single socket, over and over and over again. Imagine you're spinning through, just reading from all of them until one of them returns anything but WouldBlock, and then, say, the five-hundredth socket becomes ready. But you just checked the five-hundredth socket right before it became ready, and you're on, like, socket five hundred and one. Then this future wouldn't be woken up until you've walked through all the other file descriptors and gotten back to it. That seems really inefficient, and so this is not really the API we want. It is the API we'll want to use inside the future, though. Right? If I am a future, and I'm a TCP stream, and I'm trying to read some data, I would do a read, because I need to read; and if I get WouldBlock, what that translates to is: I want to register my handle with this thread, and then I return NotReady. So that part is all fine for using read inside of poll. But for implementing the thing that wakes people up, we don't really want to use this non-blocking interface, because it's really inefficient, and also not entirely clear how to even do correctly. For example, imagine you have to do this for writes as well (you would have to do this for writes as well): what would you write here to test whether it was ready to write? You'd write, like, zero bytes? Is that even legal?
I don't know. I think it would never block, because it says "you asked me to write zero bytes, I did it", so you wouldn't even know whether it would block. Okay, so this is all clearly kind of silly; this interface is not really what we want. But there is a better interface. (Can I erase just some of this? This is a new drawing program. Oh, that's awful. Can I somehow change... there we go, brush. Make it a big one. Yeah, great. Maybe larger than I wanted. Oh, this goes away. All right, let me get small again.) So, we want a better interface that lets us do this wake-up business, and it turns out there are different interfaces to use for this on different operating systems. We're going to talk about one called epoll. epoll is what you end up using on Linux, and all the other operating systems have mechanisms that are not quite the same, but have a similar end result. So epoll is this interesting system call that you can think of as: you give it a bunch of file descriptors, and then you give it a bunch of operations. But not the actual read or write operations; just, sort of, "can I read?" or "can I write?". Let's imagine just those two, right?
In fact, because we're allowed to cheat, we can simplify this a lot. So it's going to be given a tuple of fd and op, where op is either "can read" or "can write"; it's given multiple of these; and then it returns a list of fds. So let's go through this in a little bit more detail. The idea here is that if I'm a thread, I can call epoll and say: hey kernel, I want you to block me, and then I want you to tell me, for each of these pairs, when that operation is ready for that fd. And what the kernel is going to do is, internally, add markers to each of these. So imagine that we call this with, like, (4, read) and (1, write). It's going to add a marker to one saying: hey, if you detect that I can now do a write, that a write would no longer block if I tried it (so, basically, there's space in the outgoing connection); and a marker to four: hey, if you notice that there's data available to read. The kernel is going to keep a little thing over here, sort of like a notifier; this is very similar to what we did for futures, it's just internal to the kernel. So it's going to mark on one and four and say: if you see a write, come poke me; if you see a read, come poke me. And then this epoll call is going to block until that thing gets poked. When it gets poked, it will return. And epoll is a little bit smarter than this, in fact.
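Out of curiosity, you can poke at this interface directly from Rust with hand-written syscall declarations. This is a Linux-only sketch with the x86_64 struct layout copied from the kernel headers; real code would use the libc or mio crates instead of declaring these by hand:

```rust
use std::net::{TcpListener, TcpStream};
use std::os::unix::io::AsRawFd;

// The kernel's epoll_event on x86_64 is packed: a 32-bit event mask plus
// 64 bits of user data (we stash the fd in there).
#[repr(C, packed)]
struct EpollEvent { events: u32, data: u64 }

const EPOLL_CTL_ADD: i32 = 1;
const EPOLLOUT: u32 = 0x004;

extern "C" {
    fn epoll_create1(flags: i32) -> i32;
    fn epoll_ctl(epfd: i32, op: i32, fd: i32, event: *mut EpollEvent) -> i32;
    fn epoll_wait(epfd: i32, events: *mut EpollEvent, maxevents: i32, timeout: i32) -> i32;
}

fn wait_for_writable(fd: i32) -> u64 {
    unsafe {
        let ep = epoll_create1(0);
        assert!(ep >= 0);
        // "Hey kernel: poke me when fd can be written without blocking."
        let mut ev = EpollEvent { events: EPOLLOUT, data: fd as u64 };
        assert_eq!(epoll_ctl(ep, EPOLL_CTL_ADD, fd, &mut ev), 0);
        // Block (up to 1s here) until a registered (fd, op) pair is ready.
        let mut out = EpollEvent { events: 0, data: 0 };
        let n = epoll_wait(ep, &mut out, 1, 1000);
        assert_eq!(n, 1);
        out.data // which fd was ready
    }
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let stream = TcpStream::connect(listener.local_addr().unwrap()).unwrap();
    // A freshly connected socket has outgoing buffer space, so EPOLLOUT is
    // ready immediately and epoll_wait returns right away.
    assert_eq!(wait_for_writable(stream.as_raw_fd()), stream.as_raw_fd() as u64);
    println!("socket is writable");
}
```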
"Is epoll a system call?" Yes; it's similar to read and write, except that this is its interface instead. And it's a little bit smarter: when you first call it, it's first going to check all of them, and if any of them are ready, it's going to return straight away. So it might tell you immediately that, yeah, four is ready for reading and one is ready for writing, go ahead; that might just be what it returns. But if it detects that none of them are ready yet, then it adds these watchers. And what it returns is the file descriptors that are ready; I think it maybe even gives you the ops as well. So what this returns is, like, "four is now ready for reading". Maybe one wasn't ready for writing yet, but four is now ready for reading. And now you might see how this fits into our whole executor idea here. Because all we really have to do is have this other thread; it's still going to be given a task and an ident, and maybe now it's also going to be given an op, right? And then inside of this other thread that's going to wake things up, we're going to do something like this... actually, here we might switch to code again. All right, so we're now looking at what's inside of that other thread. This other thread is going to be the reactor thread. And the reactor thread, when you start it, you're going to give it... what are you going to give it? You're going to give it, like, a channel (it's not really going to be a channel, but let's imagine it's a channel) of a task, a file descriptor fd, and an operation, where enum Operation is Read or Write (you can think of this as "can read" and "can write"). And internally, what the reactor thread is going to do is loop and operate forever. It's going to keep a waiting_for, which is going to be... hmm, sure, let's imagine that it's a hash.
Let's imagine a map from (fd, operation) to task. On every iteration of the loop, it's going to make, like, a select set (select here is the set that we're going to give to epoll), and that's going to be waiting_for.keys().collect(), and then it's just going to call epoll. Remember, epoll returns this list of file descriptors and operations, right? So this is going to return something like (fd, op) pairs for the select set. So we do an epoll over all the things that we're currently waiting for, and then, for everything we get back, we do: task = waiting_for.remove((fd, op)), and notify that task. And we might even do a while let over the channel, like while let Ok((task, fd, op)) = rx.try_recv(), so we also accept more of these, if there are any, and then we do this insert under (fd, op). So this is our reactor thread. It doesn't really do all that much: it just accepts new things to watch for, and then it watches for all of them, and anything that's ready, it notifies. In fact, when you look at this: why does it even need to be a thread? Why does this have to be a separate thread? It doesn't, right? Remember how, when we were writing our executor, there's some stretch of time during our executor where we might not have anything to do: we've polled all the futures, and all of them have said that they're not ready. So how about we just do some useful work there, like run the reactor? So instead of having this separate reactor thread and this channel, how about we just keep waiting_for up here, and then down here, we're just going to... whenever we know that there's nothing more to do... I guess technically we have to be a little bit careful here. You could also imagine... well, okay, I'll ignore that for now.
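The reactor loop just described can be written out like this. To keep it self-contained and OS-free, epoll is swapped for a pluggable poll_ready callback, and Fd, Op, and Task are made-up stand-ins for illustration:

```rust
use std::collections::HashMap;
use std::sync::mpsc::Receiver;

type Fd = i32;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Op { Read, Write }

#[derive(Clone)]
struct Task { id: usize }

impl Task {
    // Stand-in for a real wake-up: just report which task was woken.
    fn notify(&self) -> usize { self.id }
}

// Runs until there is nothing left to wait for; returns notified task ids.
fn reactor(
    rx: Receiver<(Task, Fd, Op)>,
    mut poll_ready: impl FnMut(&[(Fd, Op)]) -> Vec<(Fd, Op)>, // epoll stand-in
) -> Vec<usize> {
    let mut waiting_for: HashMap<(Fd, Op), Task> = HashMap::new();
    let mut notified = Vec::new();
    loop {
        // Accept new registrations, if there are any.
        while let Ok((task, fd, op)) = rx.try_recv() {
            waiting_for.insert((fd, op), task);
        }
        if waiting_for.is_empty() {
            return notified;
        }
        // "epoll": block until some watched (fd, op) pair is ready.
        let select: Vec<(Fd, Op)> = waiting_for.keys().cloned().collect();
        for (fd, op) in poll_ready(&select) {
            if let Some(task) = waiting_for.remove(&(fd, op)) {
                notified.push(task.notify());
            }
        }
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    tx.send((Task { id: 7 }, 4, Op::Read)).unwrap();
    tx.send((Task { id: 9 }, 1, Op::Write)).unwrap();
    drop(tx);
    // Pretend everything is immediately ready.
    let mut woken = reactor(rx, |select| select.to_vec());
    woken.sort();
    println!("{:?}", woken);
}
```

With a real epoll behind poll_ready, the structure is the same; only the blocking call changes.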
So what we're going to do is: whenever we know all the futures can't make any progress, then clearly we need to wait for something to happen so that they'll be ready again. So why don't we do the work that's necessary to make them ready again? In fact, it's totally fine for us to block until they're ready again, which is what we're doing here, right? epoll is going to block on all of these select things, on everything that everyone is waiting for. And then, whenever epoll returns, that means something is now ready, and so we're just going to notify those, and then go back to the top and go through all the futures again. So now this is all still single-threaded; there's none of this extra indirection through a channel. All of that is unnecessary. In fact, because all of this is now single-threaded, we don't really need this Arc and Mutex, but we're going to leave them in, because getting rid of them is a little bit annoying. Just believe me that it's possible to get rid of those extra interactions. We do have one issue, though. Now that we got rid of that channel (and even if we still had the channel), how does the future's poll add something to waiting_for? I'm given no arguments in poll; how do I, as a future, say to the wider world: hey, I want to wait on this file descriptor? I have no way of communicating that. And so, in Tokio, with reactors... so Tokio is split up into many, many sub-crates that deal with specialized things, and then the tokio crate puts all of it together. "Why are we dropping was_notified?" Oh, sorry: the reason we drop was_notified is that was_notified is holding the lock, and poll might try to notify itself immediately; and then, if we're still holding the lock, that notify would block, and the lock would never be released, because we're still holding it.
So it's just releasing the lock. We could easily do this with just a scope as well; it's just to prevent a deadlock. Right, so let me save this, just for posterity. Yeah, sorry. So Tokio has broken things into lots of small sub-crates. Basically, there's one for the Tokio executor, there's one for the Tokio reactor, there's one for Tokio timers, and the tokio crate itself just brings in all of these things; it's intended for anyone who uses futures. If you are implementing futures yourself, you should depend on the individual crates, like tokio-reactor, and that's because that has all the stuff that you need to be able to notify. And so here, we'll see that, in tokio-reactor... sorry, yeah: so remember how we had this executor::with_notify, which sets, basically, the task that's going to be returned by task::current? Well, Tokio defines this Reactor::with_default. It has Handle, which is a handle to a reactor, and Handle has a Handle::current. So this is very similar to task::current: inside of a future, there's also going to be the ability to do tokio_reactor::Handle::current(), and that's going to give you back a Handle. And a Handle is, basically... so if we look at Handle... yeah: you can create a Registration, and a Registration is basically this tuple that we have of task and file descriptor and operation. And you say register_with, and you give it a Handle, and the Handle is going to be the reactor. And so this is, sort of, telling the handle: hey, this is one of the things I now want you to notify on. And so, given that, we're implementing our own handle here (Tokio isn't, because Tokio just provides us with a reactor). This is an interface that we can write ourselves, because Tokio just provides you an implementation, right?
But if we were to, the idea would be that we create a Handle, and Handle has a method... if we could look at it internally, I guess; it's a lot of very optimized implementation. But the idea is that, inside of the future, you can call Handle::current to get a handle to the reactor. In our case, the reactor would really just be waiting_for, right? So this, I guess, would also be an Arc<Mutex>, probably. And then, given that handle, of course, now you can add something to waiting_for. And if we go back to here, you'll notice this with_default. So in addition to doing this, we would also do tokio_reactor::Reactor (or, if this were generic, it would just be Reactor), and we would say this with_default. We would make our own handle, like my_handle (ignore "enter" for now), and the closure is going to be called as this. Right, and so this sets what Handle::current is going to return, and my_handle here would really just be waiting_for, right? This sort of makes sense. So inside of poll, we now have access to the current task for wake-ups through task::current, and we also have access to the current reactor through Handle::current. The former we can use to notify; the latter we can use to put something in waiting_for, which is what causes us to be woken up later. Yeah, we'll get to futures 0.3 later. Okay, so that is basically all the infrastructure that we need for poll, for futures, to be able to have themselves woken up later when I/O happens, right? So inside of a future,
down here, I guess, let's say we have some struct Foo, which internally contains, like, an fd, a file descriptor. And then we're going to implement Future for Foo. Its Item is going to be nothing, its Error is going to be nothing, because it's not important for this particular discussion. We're going to make the return type explicit, just so I don't miss anything. Right, so we're doing this. Actually, let's imagine that this is a standard TcpStream, with the exception that non-blocking is set. So this means that if you tried to read from it and it failed, it would just say WouldBlock. What this poll is going to do... let's imagine that what we really want to do with this is just print every time there's stuff coming back from it. So poll is going to do, like, self.fd.read(...), and now let's match on that. It could be Ok, which is: it read some number of bytes, right? And if that's the case, it's just going to println! that it got this many (again, this is with imaginary types). If it got an error, and the error was, like, an io::Error of kind WouldBlock, then it's going to return Ok(Async::NotReady), like this. And if it got any other error, then it's going to return that error. Oh, I guess, what we could also do is an io::ErrorKind for "closed"; again, this is not actually how streams work, but it's not important. If we got closed, then we know that we're not going to get anything more, so now we're going to say that this future is ready: it's finished, there's nothing more for it to do that's useful work. So this might be the naive way to implement poll, but of course, now we have no way of waking ourselves up. Here we've returned NotReady, but we haven't arranged for ourselves to be woken up.
So this simplification is clearly broken, and here we're going to have to do something to make sure we're woken up. And in fact, it turns out that making ourselves be woken up is not that hard anymore, right? We're basically going to do `Handle::current()` — this gives us back the reactor — and then (let's just ignore Tokio specifics for a little bit) `reactor.register(self.fd)`. Imagine that we could do this. Sorry, that's still not sufficient: we're also going to have to give it `task::current()`, because it needs to know who to wake up. And now, if you sort of squint at it, the reactor could totally be the `waiting_for` from up here, so that when we call `reactor.register` it really just sticks the file descriptor and operation into that map, with the current task as the value. And now we know that if it's ever possible for us to read again, we'll be woken up appropriately. Even this code is a little bit broken though, because imagine that we do a read and we get some data — well, this wouldn't type check in the first place, but — what do we return in the `Ok` case? The read succeeded, and we got some data; now what do we do? Well, we could return `Ok(Async::NotReady)` to indicate that there's going to be more data, right? It hasn't been closed, so we might get more data. But then of course we have to do this registration again in order to maintain the contract. And that might be fine — this would still work just fine — but imagine that there's a bunch of data available.
We read a little bit of it, and we don't really want to have to go through the entire song and dance of going to sleep, putting ourselves on the reactor, the reactor doing epoll, getting woken up, and then doing a read again — if we could just immediately read again. So what we're actually going to do — and this is usually what you end up with in `poll` implementations — is we're going to loop. As long as it's possible for us to read more data, we're going to do it. Remember that `read` will never block now; it will always return immediately, and if it would have blocked it returns `WouldBlock`. So the read is going to come back, we read some bytes, and then we immediately try again. It's only when the read tells us there is no more data right now that we register on the reactor and then return `NotReady`. So I guess this is going to be something like `print_bytes`, because that's what it does. And this is pretty much the implementation of a future that you would write now. Well, that's only somewhat true, because the interface for the reactor is not actually this. If we look at the reactor: when you get a `Handle`, the only method on `Handle` is `current` — it implements some other things, but that's basically unhelpful. Instead, what you do is create something called a `Registration`. So a registration is for an I/O resource, right — a file descriptor and an operation, for example. Actually, that might be the more low-level building block; maybe `PollEvented` — yeah, this one — is what you'd use instead. So you would do something like `PollEvented::new_with_handle`, and then you would give it the fd, and then you would give it the reactor. What does that give you? `.poll_read_ready`, like so. That's how you would actually write that code. And if you look at what this does: `PollEvented` is basically a struct that contains things like the file descriptor, so it's basically the same.
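The drain-then-register loop can be sketched the same way. Again, `Async`, `MockFd`, and the `registered` flag are my own toy stand-ins; the flag plays the role of `reactor.register(fd, task::current())`:

```rust
use std::collections::VecDeque;
use std::io::{self, ErrorKind, Read};

enum Async<T> { Ready(T), NotReady }

// Mock non-blocking reader with a queue of scripted read results.
struct MockFd { script: VecDeque<io::Result<Vec<u8>>> }

impl Read for MockFd {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.script.pop_front() {
            Some(Ok(data)) => { buf[..data.len()].copy_from_slice(&data); Ok(data.len()) }
            Some(Err(e)) => Err(e),
            None => Ok(0),
        }
    }
}

// Keep reading until the "kernel" says WouldBlock; only then register for
// a wake-up and yield. `registered` stands in for reactor registration.
fn poll_draining(fd: &mut MockFd, registered: &mut bool) -> io::Result<Async<()>> {
    let mut buf = [0u8; 64];
    loop {
        match fd.read(&mut buf) {
            Ok(0) => return Ok(Async::Ready(())),   // closed: nothing left to do
            Ok(n) => println!("print {} bytes", n), // loop: there may be more data
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => {
                *registered = true; // we WILL be woken up: contract upheld
                return Ok(Async::NotReady);
            }
            Err(e) => return Err(e),
        }
    }
}

fn demo() -> bool {
    let script = VecDeque::from(vec![
        Ok(b"ab".to_vec()),
        Ok(b"cd".to_vec()),
        Err(io::Error::new(ErrorKind::WouldBlock, "drained")),
    ]);
    let mut fd = MockFd { script };
    let mut registered = false;
    // A single poll call drains both chunks, then parks itself properly.
    matches!(poll_draining(&mut fd, &mut registered), Ok(Async::NotReady)) && registered
}

fn main() { assert!(demo()); }
```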
It's just wrapped in types. And `new_with_handle` is saying: I want to listen for this file descriptor on this reactor. And then I want to wait for readiness. If we look at it, `poll_read_ready` returns a `Poll` of — I don't know what `Ready` is here — so it returns either `Ok(Async::Ready(..))` with a thing we're going to ignore for now, or `Ok(Async::NotReady)`, or an error. And if we look at — where is it — here: if the resource is not ready for a read, then `NotReady` is returned and the current task is notified once a new event is received. And this last part is the crucial bit that we needed. Remember, the contract for `NotReady` is that you have made sure that you will be woken up again. If we call `poll_read_ready` and it returns `NotReady`, then we know we're fine. So here, if we get `NotReady`, then we know that we can just return `Ok(Async::NotReady)`, and we know that the contract is maintained. If we get an error, we just return that error onwards. If we get `Ready` here — so, first of all, keep in mind that you will never have to write this code yourself, because `tokio::net` does all of this for you. You will generally never have to write at as low a level as interacting with the reactor. I don't think I've written any code where I had to interact with the reactor directly, because you just go through things like TCP streams. The goal here is more that you understand what's going on behind the scenes. Okay, so `Async::Ready` basically tells you that if you were to try to read now, you would actually get something. So this is really just saying: try again. The file descriptor — for a network thing this is usually referred to as a socket — so in this case what this is saying is that the socket became ready between when we read and when we called `poll_read_ready`. Now of course the actual implementation is smarter than this.
It doesn't call `read` and then do this extra read — an extra system call. It doesn't have to do that; it's smarter than that. But in our implementation, this is fine. And so notice that this `poll` now follows our contract. It does all this registration with the reactor, and it all just does the right thing. And then up here, if you imagine that this handle is really a handle to our `waiting_for`, then this all does the right thing too. And in fact, now we're getting pretty close to what Tokio does. So: Tokio has this idea of a runtime. A Tokio runtime is basically a thing that provides you with a reactor, an executor, and then also a timer. All of the things we've talked about for reactors so far also apply to timers, right? Imagine that I have a future and I want it to resolve 10 minutes from now — then who's going to wake me up in 10 minutes? You could spin up another thread and have it just check the time all the time, but in reality that's not really what you want to do. Instead, what you do is, at the bottom of each loop like this, you check whether any of your timers have expired, and if they have, then you notify the appropriate tasks. If they have not, then you go down to epoll, and instead of just blocking in epoll you use epoll with a timeout, and you give it the minimum remaining timeout here. What an epoll timeout does is it will block in the kernel, but it will block for at most this long. And the reason you want to do this is to ensure that if there are timers you're waiting for, you'll get back to them in a timely manner. Imagine that the things you're epolling over all just block for hours and hours. If you just blocked in here, you wouldn't check the timers again for hours and hours, which means that the timer that's supposed to go off in 10 minutes wouldn't go off in 10 minutes. And so instead you do epoll with a timeout, and here epoll would then return
in 10 minutes; then you would go through the loop, see that some of the timers have expired, notify them, etc. And the timer is very similar to the reactor in basically all the ways. If you look at it, there's a `with_default` and there's a `Handle::current`, right — so the setup is exactly the same as for I/O, the interface for interacting with it. And then of course you end up with `tokio::timer`'s `with_default`, whatever, and then this closure. So you can see that this starts adding up a lot, because you have to give it handles to all the different ways in which it can be woken up. And so this is why we have this notion of a runtime now: a runtime is really just a thing that gives you all three. "Is it possible you rarely reach that bottom part, because there are many futures in the queue?" Yeah — so imagine you have tons of futures and they're always ready. You basically need to find the right way to switch between dealing with the futures and driving the reactor or driving the timer. We'll get back a little bit to that question about what you do if there are lots of futures, but the thing to keep in mind is that the executor does have to deal with this, right? It needs to know that sometimes it should not execute futures, because it has been a while since it checked its other stuff. This is basically the idea of fairness. You want your executor to always make progress, and if it has lots of futures that it's calling `poll` on, then it is making progress — so in some sense it's doing the right thing. But what you're asking about is: that means that there are some things that are not going to be notified, right?
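Backing up to the timer mechanics for a moment: the two computations the loop needs — which timers have expired, and how long epoll may block before the nearest deadline — are small enough to sketch with std types only. `epoll_timeout` is a hypothetical helper name of mine; a real reactor would feed its result into the timeout argument of an `epoll_wait`-style call:

```rust
use std::time::{Duration, Instant};

// How long may the reactor block waiting for I/O before it must wake up
// to fire a timer? None means "no timers: block indefinitely".
fn epoll_timeout(now: Instant, timers: &[Instant]) -> Option<Duration> {
    timers.iter().map(|&d| d.saturating_duration_since(now)).min()
}

// After waking up, which timers have expired and need their tasks notified?
fn expired_count(now: Instant, timers: &[Instant]) -> usize {
    timers.iter().filter(|&&d| d <= now).count()
}

fn demo() -> (u64, usize) {
    let now = Instant::now();
    // One timer 10 minutes out, one 5 seconds out.
    let timers = [now + Duration::from_secs(600), now + Duration::from_secs(5)];
    let timeout = epoll_timeout(now, &timers).unwrap();
    // Pretend epoll blocked the full duration and we woke up just after.
    let later = now + timeout + Duration::from_millis(1);
    (timeout.as_secs(), expired_count(later, &timers))
}

fn main() {
    let (secs, fired) = demo();
    assert_eq!(secs, 5);  // block at most until the nearest deadline
    assert_eq!(fired, 1); // only the 5-second timer has expired
}
```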
So you're not providing fairness. You're not allowing all the futures to make progress; instead you're taking just a small set of the futures — the ones that are currently ready — and executing only those. And so again, you have to think of this as: if you only have one thread, then if it's doing work, it's being productive. So is it then better to check the timers and work on some other futures? It's unclear, right? You're already using all your cycles, so why is it better to check that other future that depends on a timer? Maybe it's not. But it is true that this means the timers might take a while to fire; they might not fire straight away. And so the executor usually has some kind of time granularity. In fact, if we look at the Tokio timer, I think it says `Delay` has a resolution of one millisecond. So if you want a timer that's finer-grained than that, it's not going to give it to you. I don't remember exactly the details around this; I think it's also related to the granularity of the timeouts you can give to something like epoll. But in reality, the executor also has to make sure that it checks the timer that often. Okay, so the notion of a runtime is really just all the stuff that we've written so far — all this `with_default` and `with_notify` business. Just like we had timers and reactors, we also have `tokio::executor`, and `tokio::executor` is basically the thing that we wrote, right? `with_default` on the executor basically just calls into the futures executor with a notifier, and — where is it — right, so this is just all the traits that you need around an executor. And then the runtime provides an implementation of a reactor, an implementation of an executor, and an implementation of a timer. So if you see what a runtime does — it spins up a... that's not true anymore.
This documentation should change; the documentation is wrong. If we look towards the bottom of `runtime`, you see that there's a `Runtime` here, and then there's a module called `current_thread`. We're going to look at `current_thread` first, because that's more similar to what we've built, and then we're going to look at the default Tokio runtime afterwards. So the current-thread runtime is a runtime implementation that runs everything on the current thread, which is the thing that we've been writing so far. And at the heart of it is a `Runtime`, and all that runtime does is basically the stuff that we wrote here, but in a much more efficient way. And so when you create a runtime — just create it — you get back a `Runtime`, and on that runtime you can get a handle to it if you want. So this is a runtime `Handle`, and a runtime handle lets you spawn futures. So remember how really early on we talked about how you can sort of do something like `rt.spawn(future_x)`? "Spawn does not let you learn the result of a computation." Yeah, exactly. So if you spawn something, it basically means: I don't care about the result of this computation, just make sure it gets run in the background. So this is usually what you would do for managing connections, for example. Imagine that you have a server, which is going to be a `TcpListener`, listening on some port, and then `.incoming()`. `incoming()` gives you a stream — we may or may not talk about streams; it's relatively unimportant.
So think of it like this: it gives you a `for_each` thing. It basically gives you an iterator, but it's sort of an asynchronous iterator — it returns futures, not just items. And then `for_each` is going to be called for each new stream that we get, and basically what we're going to do in there is `tokio::spawn`. So `tokio::spawn` is really just shorthand for, sort of, `tokio::executor::...` — `Handle::current().spawn(future)`. If I write the short form, it is really just that. Yeah, keyboards are hard. These are basically the same — except actually it's not `executor`, it's `runtime` — but you get the idea. And so in here, usually what I'll do is something like: I'll spawn something over `s`. So `s` here is a `TcpStream`, right? Usually I'll do something like my application over `s`, and I'll spawn that. So instead of spawning a new thread, I create a new thing that has all the state for a connection. This might be called `ClientConnection`, whatever you want to call it. So I create — I guess `new` — a new client connection over that stream, and then I spawn that client connection. The client connection itself is a future, right, that will resolve whenever the client disconnects. And that client connection we give to `tokio::spawn`, and now the executor that Tokio has is basically going to add it to the set of futures that we're operating on. So keep in mind here:
we have `run_all`, right? So what you could do is: instead of having `run_all`, you have a vector of futures, and this, instead of being `run_all`, is just going to be `run`. This is going to make it sad, so let's not — eh, sure, why not. So `spawn` takes an `F`, does not return anything, and really just does a push onto that vector. That's all `spawn` really does. Now, of course, this is not how it actually works, but you can sort of think of it this way. Right, so if you spawn on an executor, you're really just saying: hey, by the way, also execute this thing, and I don't care about its result. So here we're spawning that client connection and having it be run in the background, and Tokio will just automatically deal with that. We don't need these anymore, so let's make those go away. So if you have a current-thread runtime — which includes a timer, a reactor, and an executor, all running on a single thread — then you now have a `spawn` method which lets you spawn any future. And notice that it spawns only futures whose item and error types are `()`, because you have no way of getting at the return value: if you spawn something, you sort of lose the handle to the future, so you don't know anything about it. It also has `block_on`, where you give it a future and it will give you back the result of that future. So think of this as more akin to running that future directly while still running the other things — the executor is still running — but whenever this particular future finishes, you're going to return its result. Now of course, because this is a current-thread runtime, the thread can't do anything else while this is running. So if I call `block_on`, it really is blocking the current thread until that future finishes, and then it gives me the result. And of course, once this returns, this thread is no longer running that runtime, so all of the futures that were spawned on that runtime will also not make progress. The reactor is not going to make progress; the timer is not going
to make progress, because we have nothing to run them. The current thread has returned and is doing something else. And that's also why there is a `run` method. So `run` is sort of similar to our `run_all` down here, right, except that it doesn't return anything: `run` is just going to operate over all the futures that we have stored, and it just assumes that they were spawned, so it's not going to return their values at all. In fact, we could now write this sort of like: `run` is really our `run_all`, but it just drops all the results; and `block_on` is sort of like `run_all`, except that it returns the result for one particular future, namely the one you gave it. Okay, so that's all of the current-thread runtime, and it's basically what we've written so far. But then of course the observation that came from the chat here as well was: what if I have many futures? Like, I have lots of futures, and all of them are doing stuff — then it sounds like I need more than one thread. How do I do that? Right, and that's when we get back to the Tokio runtime itself. So remember how we looked at the current-thread runtime; there's also the general `Runtime`. This is also what's used if you call `tokio::run` and give it a future: `tokio::run` is going to use the default runtime. The way in which the default runtime is special is that — again, this documentation is not quite right — it uses a thread pool. Whoa. So instead of using the current-thread executor that we've talked about thus far, it uses the Tokio thread-pool executor. And the documentation here is pretty helpful. It says that this is a work-stealing based thread pool for executing futures.
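Before digging into the pool, the current-thread runtime semantics just described — `spawn` losing the result, `run` draining every spawned future, `block_on` driving one future to completion while still turning the others — can be sketched as a toy executor. Everything here (the `Future` trait, `Async`, `Tick`, `CountDown`) is my own simplified stand-in for the futures 0.1 machinery, and it busy-polls instead of parking:

```rust
use std::collections::VecDeque;

enum Async<T> { Ready(T), NotReady }

trait Future {
    type Item;
    fn poll(&mut self) -> Async<Self::Item>;
}

// Ready after `left` polls, yielding nothing: spawnable (Item = ()).
struct Tick { left: u32 }
impl Future for Tick {
    type Item = ();
    fn poll(&mut self) -> Async<()> {
        if self.left == 0 { Async::Ready(()) } else { self.left -= 1; Async::NotReady }
    }
}

// Ready after `left` polls, yielding a value: for block_on.
struct CountDown { left: u32, value: u32 }
impl Future for CountDown {
    type Item = u32;
    fn poll(&mut self) -> Async<u32> {
        if self.left == 0 { Async::Ready(self.value) } else { self.left -= 1; Async::NotReady }
    }
}

struct Runtime {
    // spawn() loses the output, so only Item = () futures may live here.
    spawned: VecDeque<Box<dyn Future<Item = ()>>>,
}

impl Runtime {
    fn new() -> Self { Runtime { spawned: VecDeque::new() } }

    // Fire and forget: push onto the run queue.
    fn spawn(&mut self, f: impl Future<Item = ()> + 'static) {
        self.spawned.push_back(Box::new(f));
    }

    // One pass: poll each spawned future once, dropping the finished ones.
    fn turn(&mut self) {
        for _ in 0..self.spawned.len() {
            let mut f = self.spawned.pop_front().unwrap();
            if let Async::NotReady = f.poll() {
                self.spawned.push_back(f);
            }
        }
    }

    // run(): drive the spawned set until empty; results are simply dropped.
    fn run(&mut self) {
        while !self.spawned.is_empty() { self.turn(); }
    }

    // block_on(): drive one future to completion, turning the spawned set
    // in between, and hand its result back.
    fn block_on<F: Future>(&mut self, mut f: F) -> F::Item {
        loop {
            if let Async::Ready(v) = f.poll() { return v; }
            self.turn(); // the rest of the runtime keeps making progress
        }
    }
}

fn demo() -> u32 {
    let mut rt = Runtime::new();
    rt.spawn(Tick { left: 3 }); // background work; we never see its result
    let v = rt.block_on(CountDown { left: 2, value: 42 });
    rt.run(); // finish whatever is still spawned
    v
}

fn main() { assert_eq!(demo(), 42); }
```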
It supports scheduling futures and processing them on multiple CPU cores, and users will not normally create one themselves, but instead use it via the `Runtime`, which we've talked about so far. We're not going to talk about blocking threads for now. And so here is sort of the basic description: each worker has two queues — a deque and an MPSC channel. The deque is the primary queue for tasks that are scheduled to run. Tasks can be pushed onto the deque by the worker itself, but other workers may steal from it. The channel is used to submit futures from outside the pool. Okay, so here we're going to need to do some drawing again — back to the whiteboard. So, in this setting, all of this is inside your application, and we're going to have your code (as you can tell this is clearly code — you can tell because there's a curly bracket). And then what the Tokio thread pool does is — this is going to go down here — start up multiple threads. Let's say that you have four cores in your machine, so this is a pool of four threads; each of these things is a thread (as you can tell, because there's a little thread on them). And each thread is going to have a queue. And your code, which has, somewhere in here, a `let rt` of type `Runtime` — with this runtime you have the ability to call `.spawn` and give it a future. And what this runtime really is, at least kind of sort of, is a sender — an MPSC sender. And so if you spawn a future, that future goes through the sender, and the sender just sort of points to the pool. Again, kind of sort of — I don't think this is exactly right; double-check this. And every worker thread has this queue, which is a queue of futures. So each thread has a bunch of futures that it's executing. So this is similar to the
So this is similar to the Uh The sort of vector of futures that we were storing in our own the own sort of single threaded implementation So each of these futures The so this each of these threads, uh has a reactor has a Timer Unclear May or may not have a timer. Uh, it might be that it spins up a separate thread for timers. I don't remember But let's imagine that it has a timer too So each thread has a reactor and a timer down here Which is remember, uh, just a part of the thread. This is this is not a separate thread Just like in our executor the reactor and the timer driven by the thread itself Just so with these worker pools And so each of these workers is going to have the same kind of loop as we wrote in So here each of these is really you can almost think of each of these as just a Current thread executor, right? So each of these is basically one of the things that we wrote ourselves They're basically indistinguishable their main loop is just Take the next future from my queue pull it And then try to make some progress on these if necessary Or I think it pulls everything that's in the queue Like it walks his queue pulls all of them and then puts them back on its own queue And then after it's pulled all of them then it goes so Color back to this one So the steps for one of these is Let's go right in here One, uh Pull Every Future In my queue Right, so that's the first thing it's going to do Uh And then there's like 1.1 If ready Then remove Right, so if the if poll returns ready, there's nothing more to do for that future. So we just get rid of it Uh If not ready Then Put back On my queue I don't know why I'm writing these. I don't know that the writing helps But it's sort of that's the protocol like for every future in my queue I'm gonna pull it once and then if it's ready, I get rid of it if an error I get rid of it. 
Otherwise, I put it back on my own queue. Then, after I've walked my own queue: two, check the reactor. Right, so this is just like trying to make progress on the reactor. And then three is basically: go to one. Now, where this gets tricky — sorry, and the connection to `spawn` is that whenever I spawn a new future here, it just gets placed on a random worker. This is a relatively recent change: it used to go to the same thread as you're spawning from, but now it basically goes to a random worker every time you spawn. So I spawn this future, and it gets placed, say, on worker two. I guess we can number these: this is one, two — that's not — three, four. So every time we spawn a future, it gets placed on a random worker. And what this might lead to is: imagine that we get really unlucky. (Oh, an eraser — great, time to erase some things.) This guy and this guy have just finished all their futures — all of them finished relatively quickly — and so they have nothing to do. Meanwhile, this guy has lots of futures and is super busy, and this one has a few, not terribly many. This seems problematic, right? Because it means this thread is going to be really busy, because it has all of this work to do. So it's really quite sad: whatever CPU is running this worker is just spinning at a hundred percent. Meanwhile, we have these two CPUs over here that are idle. That seems stupid, and this is why — if you recall from back up here — it talks about how this is a work-stealing based thread pool. So what work stealing is, is: this thread, when it goes in — pick another color; let's pick, I guess, that color, no, this color — so when this thread is on step one, it sees its queue is empty. Well, that seems silly. So instead of just sort of looping on its own queue and waiting until it randomly gets something — right, it could get one of these, but that might take a really long
time; there might not be anything more coming — what it's going to do instead, when it detects that its queue is empty, is a special step 1.3: try to steal. The way that works is that it's going to look at the others. So in this step, it's going to look over here at this worker and go: ha, you have a ton of things in your queue that you're not doing anything with — how about I help you out? And it's going to steal some of the futures and pull them back to itself — basically (sorry, I guess this should go here) it's going to stick those into its own queue, and then it's going to start working on them. And so this work stealing means that we're spreading the load: if one worker is more busy than the others, then that load is now sort of moved. Of course, now we're in a slightly weird position, where some of the futures that we stole might depend on wake-ups from over here. Right — some of the futures in here are waiting for a notify to be issued from this thread. So there's an additional step here: if you steal something and then it has to wait for I/O or for a timer, then it gets put back onto this original queue. But it does mean that this thread — thread three — gets to help thread one out. Right, so the futures that it steals, it does a bunch of compute on; it polls all of them, and once they become not ready, it returns them — but it returns them as not ready. So it has still done some work for thread one. And similarly, thread four is going to do the same kind of work stealing; it will probably also steal from one. And so now we're spreading the load out, while still making sure that things end up sort of back where they were when things settle down. And this is important, because you could imagine that if thread one had to keep waking up thread three all the time, or sort of notify thread three all the time, you'd get a lot of communication, which is sad.
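The uneven-load scenario and step 1.3 can be demonstrated with plain threads and mutex-protected deques. This is deliberately crude — tokio-threadpool's deques are lock-free, jobs here are just numbers standing in for futures to poll, and all the names are mine:

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

// Each worker owns a queue of jobs (standing in for futures to poll).
type Queue = Arc<Mutex<VecDeque<u64>>>;

fn worker(me: usize, queues: Vec<Queue>, done: Arc<AtomicUsize>, total: usize) -> u64 {
    let mut processed = 0;
    while done.load(Ordering::SeqCst) < total {
        // Step 1: try my own queue first.
        let job = queues[me].lock().unwrap().pop_front();
        let job = match job {
            Some(j) => Some(j),
            // Step 1.3: my queue is empty — try to steal from another
            // worker's queue (from the opposite end).
            None => queues
                .iter()
                .enumerate()
                .filter(|&(i, _)| i != me)
                .find_map(|(_, q)| q.lock().unwrap().pop_back()),
        };
        match job {
            Some(j) => {
                processed += j; // "poll the future"
                done.fetch_add(1, Ordering::SeqCst);
            }
            None => thread::yield_now(), // nothing anywhere: spin politely
        }
    }
    processed
}

fn demo() -> u64 {
    let queues: Vec<Queue> = (0..2).map(|_| Arc::new(Mutex::new(VecDeque::new()))).collect();
    // Pathological spawn placement: every job lands on worker 0's queue.
    for j in 1..=100u64 {
        queues[0].lock().unwrap().push_back(j);
    }
    let done = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..2)
        .map(|me| {
            let (q, d) = (queues.clone(), done.clone());
            thread::spawn(move || worker(me, q, d, 100))
        })
        .collect();
    // Sum of per-worker sums: every job processed exactly once.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    assert_eq!(demo(), 5050); // 1 + 2 + ... + 100
}
```

A real pool would also drive its reactor and timer between passes, and would return stolen-but-not-ready futures to their original worker, as described above.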
So this is why we return the future to thread one. But this is the basic idea of a work-stealing thread pool. Right — so in this setting, you basically have a bunch of threads that are each independently an executor, with all the stuff that they need, but they sort of cooperate to make sure that all of the workload gets spread out nicely. Does that setup make sense, or do you want me to talk more about these pools? "I know this is very random, but I started learning Python today" — I will not spend time on the stream talking about Python bugs, because it's already pretty packed with things, but you should ask on, like, a Python IRC channel or something. So please don't take that here. But does this setup of futures and thread pools roughly make sense, or do you want me to talk more about it before we move on? "Makes sense to me." Great, I have one data point. I know that there's a lot of complication to this. ("I like pools." Nice.) I know there are a lot of weird interacting components here, and it can be hard to follow the flow. I think going back to look at the recording afterwards might help — sort of go through it at your own pace and try to work it through in your head. "I assume it picks the queue with the most extra work?" Sort of. So in this work-stealing setup, if, whenever a thread tries to steal, it has to check with every other thread, that's also pretty slow. Imagine you have a machine with 40 cores — you don't really want to have to talk to all 40 cores to figure out which one to steal work from. So usually there's sort of an algorithm of: you walk until you find some queue that seems to have a bunch of work, and then you take from it. So you do it somewhat eagerly, so you don't end up interrupting everyone else. The only other thing that might be worth mentioning is how blocking plays into this.
I'll maybe get into blocking later, but I will point out the issue, which is — and this is not too specific to the pools; it applies to executors in general — imagine that you have some future that is just doing a lot of work. Like, you call `poll` and then it computes all the digits of pi, so it takes forever to return — in fact, literally forever. Then now you're not going to make progress on any other futures. You're not going to check your timers; you're not going to check your reactor. It's just going to get stuck. That is problematic even in the work-stealing world. Imagine that this thread here starts computing pi. Then now it's not even going to wake up the stolen futures — so it might have given some stuff away that's not going to make progress either. You've basically now lost a core, and anything that's stuck on it. So — let's do, like, yellow — imagine that there are some futures down here that no one ends up stealing, because everyone else has their own work to do. These futures down here, no one is going to steal, but this thread is stuck just computing pi. Now these futures, we're just never going to do anything with, even if all the other threads are sitting idle — they're not going to steal them, because they see that this queue is pretty short. Now, there might be ways around this, but you can see how, if a future takes a really long time, this is problematic. And what Brian is mentioning here is that there is a mechanism in Tokio — it's relatively new — where you can call `blocking` (technically it lives in tokio-threadpool), which sort of marks the current pool thread as: hey, I'm doing a bunch of work.
So: steal everything I have. It basically lets go of its own queue and says, everyone else, deal with it. So — I wanted to mention that there's another thing worth noting, which is: I mentioned how futures get returned to their original queue, because otherwise they would have to get wake-ups from this other thread. There's a change that we want to make in Tokio, but haven't yet, which is — hold on, wrong color; let's go with... green? Green is not ideal for this. No, not green — pink. What if, instead of doing that, we just sort of move the part of the reactor that's related to the futures we stole over here? That way we don't have to return the futures; we can just keep them here. It turns out that this pink stuff, we don't really know how to do yet, because it means that you have to move stuff between epoll instances: we have epoll running in thread one, and we have epoll in thread three, and we'd have to take the part of thread one's epoll set that is related to the thing we stole and move it to thread three's. And Tokio doesn't have a way to do this yet, but it's something we're looking into how to add. "Can you repeat why it's returning the future
it just stole?" Right, so that's a little bit what I got into here, but the reason is: otherwise we would have to do this — if thread three steals a future from thread one, then, because thread one still has the resource, the thing that notifies that future, thread three can't make progress without thread one telling it what to do, and you have to sort of bounce back and forth between the two with these notifies. Whereas instead, what we do is: thread three does the compute part — it does the poll — and then it returns the future to thread one, so that thread one can just do the wake-ups locally. "I'd also like to learn more about Tokio blocking." Yeah, so I think I got roughly into the idea of blocking. I don't want to go too much into detail, in part because the design there is still evolving — blocking doesn't work for current-thread runtimes at the moment, for example. "Does the reactor run in the same thread?" Yes — so, the reactor here: every thread in the pool also has a reactor. And remember, the reactor doesn't do reads or writes, right?
Those are done in `poll`. All the reactor does is epoll and notify. Up until — I mean, I can find out — up until this landed in October... it used to be, before this awesome PR landed, that Tokio actually started (sure, we'll keep the same color) an extra thread that just had a reactor. And in that version of the world — lime green; again, this is not what Tokio does anymore after that PR — it used to not have reactors in every thread. Instead, it had a single thread that was the reactor for all the futures, which would then sort of do all these wake-ups. And this works relatively well if you don't have a lot of things to do, but what you run into is: if you have lots and lots of connections, then the wake-ups themselves might become your bottleneck. There's so much work to do that this reactor thread is just really busy waking everyone up, but everyone else can't make progress until they're woken up, and so this becomes a major bottleneck. And that's also why, a while ago, I wrote — some of you may have seen it — this thing called tokio-io-pool. tokio-io-pool was essentially this design of having a reactor per executor, but it didn't have any work stealing: I just spun up a bunch of current-thread runtimes, and they all just ran individually, and the spawning was random. And then later this PR came along, which changes the design of Tokio to the one we talked about, where there's work stealing and there's a reactor per pool thread. And that brings Tokio pretty close to tokio-io-pool, because now you have many reactors. And then they also made this change of being able to spawn tasks onto random workers, as opposed to a single one. And now you see that — so tokio-io-pool takes about 34 — what is this, milliseconds? micros? — 34 millis to wake up, and with this PR, Tokio proper takes 46.
So there's been a lot of work in Tokio to try to make this fast. The only thing that's really missing now is the ability to do that move: when you steal a future, also steal the part of its reactor that's relevant.
Great. Okay, so I think that's most of what I wanted to say about Tokio. There's maybe one thing missing, which is: yeah, so there's a timer per pool worker, so each worker has both a reactor and a timer. And there's one more thing I wanted to say about Tokio, which is that I've sort of lied to you in saying that everything is epoll. In reality, epoll is just the thing you use on Linux; epoll is Linux-specific. What Tokio actually does is use a crate called mio. mio is a crate written by the same person who maintains Tokio, and you can basically think of it as an abstraction over the kind of things epoll does: it's a low-level library for non-blocking APIs and event notification. On any operating system, it gives you an interface that basically lets you say "I want to wait on all of these I/O resources."
Okay, so let's now close the book on Tokio. I think that gives you an idea of how Tokio works, and there's really no more magic to it than that. There are a lot of optimizations internally, but you now understand everything that Tokio does; there's nothing more to it. In fact, if you look at the things Tokio gives you: clock is a way to deal with time that's not tied to system time.
So it makes testing easier. codec is probably going to be removed, but it's basically a way to... well, if you have something that is Read and Write, or async... actually, let's talk about AsyncRead and AsyncWrite, because they're a little bit interesting. There are these traits called AsyncRead and AsyncWrite, and really what they say is that something is Read or Write (a TCP stream, a file, whatever, is Read or Write). But if you look at AsyncRead, it really is just a trait over Read, and all the methods are already implemented for you, so you don't have to add any methods. All it is is Read with the additional contract that it's non-blocking: if you try to do a read, it will never block; it will just return WouldBlock. AsyncWrite is the same. So AsyncRead and AsyncWrite are very straightforward. And then codec is just a way to map AsyncRead and AsyncWrite into Sink and Stream. I guess we can talk about Sink and Stream now. These are not Tokio-specific concepts; they're just related to futures in general. So, a Stream is... there's a lot of stuff on the screen here; let's get rid of all of it. Great. A stream... actually, we don't even need to draw; just open the docs for Stream. A Stream is very similar to a future: it has an Item and an Error, and it also has a poll method that uses the same Poll type. But notice that, as opposed to Future (Future, if you recall, just returns an item) this returns an Option of the item. And the contract of poll differs too: on a Future, once poll returns Ready, you shouldn't call poll again, because the thing is already finished. On a Stream, if you call poll, it's going to give you either Some or None. If it returns Ready with Some(item), then, think of this as an iterator,
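That AsyncRead contract, reads never block, they return WouldBlock instead, can be seen with plain standard-library sockets. This is a minimal sketch using `std::net` directly rather than Tokio's AsyncRead trait; a loopback connection is set non-blocking before anything has been written to it:

```rust
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};

fn main() {
    // connect a socket pair over loopback
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let mut client = TcpStream::connect(addr).unwrap();
    let _server = listener.accept().unwrap(); // keep the peer alive

    // this is the AsyncRead-style contract: never block on read
    client.set_nonblocking(true).unwrap();

    let mut buf = [0u8; 32];
    // nothing has been written yet, so a non-blocking read reports WouldBlock
    match client.read(&mut buf) {
        Err(e) => assert_eq!(e.kind(), ErrorKind::WouldBlock),
        Ok(n) => panic!("unexpected read of {} bytes", n),
    }
}
```

In the futures world, getting WouldBlock is the moment where poll would return NotReady and the reactor would be told to wake the task when the socket becomes readable.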
that means the stream gave you another thing. If it returns NotReady, it means "I'm not ready to give you another thing yet." And if it returns None, it means "I'm finished," just like an iterator; it's like the next method of an iterator. And the contract here is that if poll returns Ready(None), you shouldn't poll anymore, because there's nothing more; it's finished.
"Tangential question: how does someone profile Rust code to figure out that the single executor was the bottleneck with many connections?" The single reactor, you mean. So this was actually part of my research, where I'm writing a really high-performance database, and so we have lots of connections, and we noticed that as you increase the number of clients, the throughput was not increasing with the number of clients, even though we know the underlying database does scale. So clearly there was some bottleneck. Then what you do is just run top or htop or something, and what you see is a single thread that's at 100% CPU while the other threads are not busy. Then you try to figure out what that thread is, and that thread was the reactor thread; it was called tokio-io or something. And that basically led me to start talking to the Tokio team, like: "hey, what's going on here? Why is there one thread busy while the other threads are not doing work?"
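The Stream contract described above can be written out concretely. This is a self-contained sketch of the futures 0.1 shape, with the crate's Async and Poll types redefined locally so it compiles with just the standard library, and a toy stream that is always ready and behaves like an iterator:

```rust
// local stand-ins for futures 0.1's types, just for this sketch
enum Async<T> {
    Ready(T),
    NotReady,
}
type Poll<T, E> = Result<Async<T>, E>;

trait Stream {
    type Item;
    type Error;
    // Ready(Some(x)) = here's another item; Ready(None) = finished;
    // NotReady = no item yet, you've been scheduled for wake-up
    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error>;
}

// a stream that yields 0..n and then None, like an iterator
struct Counter {
    next: u32,
    n: u32,
}

impl Stream for Counter {
    type Item = u32;
    type Error = ();
    fn poll(&mut self) -> Poll<Option<u32>, ()> {
        if self.next < self.n {
            self.next += 1;
            Ok(Async::Ready(Some(self.next - 1)))
        } else {
            // finished: the contract says don't poll again after this
            Ok(Async::Ready(None))
        }
    }
}

fn main() {
    let mut s = Counter { next: 0, n: 3 };
    let mut got = Vec::new();
    loop {
        match s.poll() {
            Ok(Async::Ready(Some(x))) => got.push(x),
            Ok(Async::Ready(None)) => break,
            Ok(Async::NotReady) | Err(()) => unreachable!("Counter is always ready"),
        }
    }
    assert_eq!(got, vec![0, 1, 2]);
}
```

The only difference from Future is the Option in the Ready case, plus the rule that you may keep polling after Ready(Some) but not after Ready(None).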
And in fact, you can look at the implementation of tokio-io-pool, because it is really straightforward. The implementation of tokio-io-pool is, I think, a single file (because I'm a terrible programmer)... yes, a single file. It's mostly documentation, and builders and whatnot. The build method that gives you a runtime is really, really simple. It creates a channel, and for as many workers as you want, it spawns a thread. Each thread... let me rephrase: each thread spins up a current-thread runtime and sends a handle to that runtime back to the thing that's starting all the threads, so you end up with one handle to each current-thread runtime. All of those handles are returned, and the threads themselves just block on a signal to exit. And then a Handle is really just a set of handles, one for every current-thread runtime, and spawning just picks one of them to spawn onto. That is basically the entire implementation; very little magic.
Right. So a Stream is basically just a future that you can poll more than once after it's ready, and it basically gives you an iterator. And then Sink is sort of the inverse: a Sink is something you can stuff things into. It's like a channel sender, but asynchronous. So, remember how if you have a channel from a sender to a receiver... right, and there's usually... if you make an unbounded channel, this will never happen,
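That tokio-io-pool shape, one handle per independent single-threaded worker, with spawns distributed across them, can be sketched with plain threads and channels. This is a toy: real tokio-io-pool workers each run a current-thread Tokio runtime with its own reactor, not a bare job loop, and its spawn distribution is random rather than round-robin:

```rust
use std::sync::mpsc;
use std::thread;

type Job = Box<dyn FnOnce() + Send>;

// one sender per worker; spawn round-robins across them
struct Handle {
    senders: Vec<mpsc::Sender<Job>>,
    next: usize,
}

impl Handle {
    fn spawn(&mut self, job: impl FnOnce() + Send + 'static) {
        let i = self.next % self.senders.len();
        self.next += 1;
        self.senders[i].send(Box::new(job)).unwrap();
    }
}

fn pool(workers: usize) -> (Handle, Vec<thread::JoinHandle<()>>) {
    let mut senders = Vec::new();
    let mut joins = Vec::new();
    for _ in 0..workers {
        let (tx, rx) = mpsc::channel::<Job>();
        senders.push(tx);
        joins.push(thread::spawn(move || {
            // stand-in for a current-thread runtime: run everything we receive
            // until our sender is dropped, then exit
            for job in rx {
                job();
            }
        }));
    }
    (Handle { senders, next: 0 }, joins)
}

fn main() {
    let (mut handle, joins) = pool(2);
    let (done_tx, done_rx) = mpsc::channel();
    for i in 0..4 {
        let tx = done_tx.clone();
        handle.spawn(move || tx.send(i).unwrap());
    }
    drop(done_tx);
    let mut got: Vec<i32> = done_rx.iter().collect();
    got.sort();
    assert_eq!(got, vec![0, 1, 2, 3]);
    drop(handle); // closes the job channels so the workers exit
    for j in joins {
        j.join().unwrap();
    }
}
```

The point is how little coordination there is: no shared queue, no stealing, just independent workers behind a set of channel senders.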
so let's talk about a bounded one. You have a bounded channel, and imagine all of these slots are full. Now the sender tries to send another x, but the channel is full. What happens in the normal, non-futures world is that the sender blocks until the receiver takes something off and a slot is free again; now the x can go in and the sender is no longer blocked. In the blocking world this is all fine, but of course in futures we're not allowed to block: if you block in poll, it's basically the same as running forever. You really don't want to block. And so, for the sender and receiver in this setting, anything that implements Sink has a start_send... I'm going to not talk about start_send because it's annoying; instead pretend there's a send that takes a T, or in this case I guess an x, and returns something like Poll of ready... that's supposed to be an angle bracket; what am I doing? This type is not helpful. In terms of Poll: it's Async of nothing, and the error, I think, is nothing as well. And so the idea is that you try to send this x, and the call tells you whether it succeeded or not, but it will never block. So if the channel is full, it's just going to say NotReady, and then notify the task whenever the receiver has taken something off.
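A minimal sketch of that non-blocking send contract. Note the hedge: `start_send` here is a simplified stand-in for the futures 0.1 method of the same name (the real trait returns a Result and pairs this with poll_complete); the key idea shown is that a full sink hands the item back instead of blocking:

```rust
use std::collections::VecDeque;

// local stand-in for futures 0.1's AsyncSink type
enum AsyncSink<T> {
    Ready,       // the item was accepted
    NotReady(T), // full: here's your item back, try again after a wake-up
}

struct BoundedSink<T> {
    buf: VecDeque<T>,
    cap: usize,
}

impl<T> BoundedSink<T> {
    fn start_send(&mut self, item: T) -> AsyncSink<T> {
        if self.buf.len() == self.cap {
            // a real sink would also leave a marker so the receiver
            // notifies our task once it frees a slot
            AsyncSink::NotReady(item)
        } else {
            self.buf.push_back(item);
            AsyncSink::Ready
        }
    }
}

fn main() {
    let mut s = BoundedSink { buf: VecDeque::new(), cap: 1 };
    assert!(matches!(s.start_send(1), AsyncSink::Ready));
    // the buffer is now full: the item comes back so we can retry later
    match s.start_send(2) {
        AsyncSink::NotReady(item) => assert_eq!(item, 2),
        AsyncSink::Ready => panic!("sink should have been full"),
    }
}
```

Getting the T back on NotReady is exactly the AsyncSink&lt;T&gt; definition discussed next: a failed send must return ownership of the item so the caller can try again.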
So your send will later succeed. Remember, we always have to satisfy the contract that NotReady means you've arranged for yourself to be woken up. So if I try to send and the channel is currently full, I put sort of a marker here, so that when the receiver removes an item, it notices the marker and notifies my task, so that the next time, I'm woken up and can try the send again. And the definition here is really AsyncSink of T, which is either Ready, which contains nothing, or NotReady, which gives back the T. The idea is that if you try to send and it fails, it gives you back the thing you tried to send, so you can try again later. So if we look at Sink, you'll notice Sink also has an Item and an Error; they're prefixed, for uninteresting reasons. They've split send into start_send and poll_complete so that you can send things in batches: I can start_send three x's and then wait for all three to complete. But the basic interface is: start_send is where you might get the item back, and then you poll and it tells you when the send has completed. So that's Stream and Sink. Sink is a little weird, because Stream is basically the same as Iterator, but Sink doesn't have a good parallel; it's basically just a channel sender that is asynchronous.
Okay, so that's futures, Tokio, Sink, and Stream. So now we get into where Rust is going with all of this. In particular, there is this RFC. It's generated a lot of discussion; I'm not going to go too much into the discussion, because it's not relevant to what we're going to talk about. But it proposes moving everything related to futures from the futures crate into the standard library: that is, task and Future. And if we look at the rendered RFC (this is a worthwhile RFC to read) it basically proposes adding Poll, which we've looked at;
it's just Ready or NotReady. And Waker, which is like Notify; it's basically just a thing that lets you wake something up. I don't know why they called it wake instead of notify; it's not important. But if you have a Waker, you can wake something up and say: "hey, you might be able to make progress now." So the contract now is: if you return Pending, the function must also ensure the current task is scheduled to be awoken when progress can be made. This is the same contract; just substitute notify for wake. That's fine, that's an example; that's an example. And then the Future trait. So let's contrast this with the one in futures. Okay, so this is the trait we have in futures: it has Item and Error, and this method for poll. The proposed thing to standardize has an Output type, but notice there's no distinction between item and error; there's just... I guess I can zoom in here a little, sorry about that... there's a Future trait, it has an Output associated type, and Output does not necessarily need to include an error. The observation here, the reason for this, is that there are some futures that just cannot error. If you wait for a timeout, there's no meaningful error; or if you compute a hash, then eventually the hash is going to be ready; it's not a matter of erroring. And so the argument here is: you're going to have some value that you get eventually. And of course there's a trivial mapping between the futures crate's Future and the standard library Future, which is just to say Output is Result of your Ok type and your error type. Where the old trait forces you, in many cases, to declare sort of a dummy error type, this one does not; instead it just says: if your thing can fail, then your Output is going to be a Result.
So that's why there's only one associated type. And then of course the only other thing on Future is poll. poll in the standard library version of Future looks fairly different, and we're going to talk through what that means. So let's look at the return type first: the return type is Poll of the Output. Before, it was a Poll of item and error, but because we got rid of the error, it's now just Poll&lt;Output&gt;. So that seems pretty nice. Then, you're given a LocalWaker. A LocalWaker is basically the same as what you would get from task::current(). Right: in futures, remember how, if you ever want to be woken up again, you have to call task::current() to get a handle to yourself, a thing that lets you wake yourself up, which you can then give away, to the reactor for example. In the standard library version of futures, that thing-for-wake-up is passed in explicitly. It's not something that's passed to you magically somehow, because think about it: the old way is a little bit magical, right? What we're saying is that if you're in poll, then you can call this magical function and get a task. So if we go back to the executor: remember how we had to pull a little bit of a trick and call this with_notify function? with_notify really just hides away this notifier that we give it, somewhere that task::current() can get at it. In fact, the way this actually works is it uses thread-locals. Rust has a notion of (and this is a general programming concept) thread-local storage: some state that just the current thread has stashed away somewhere.
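That thread-local trick behind with_notify and task::current() can be sketched in a few lines. This is a simplification: the notifier here is just a String stand-in for a real wake-up handle, and the names mirror the talk's description rather than the futures crate's exact internals:

```rust
use std::cell::RefCell;

// the "magic" storage: per-thread, invisible to the function being called
thread_local! {
    static CURRENT: RefCell<Option<String>> = RefCell::new(None);
}

// the executor wraps every call to poll in with_notify, stashing the
// notifier where task::current() can find it
fn with_notify<R>(notifier: String, f: impl FnOnce() -> R) -> R {
    CURRENT.with(|c| *c.borrow_mut() = Some(notifier));
    let r = f();
    CURRENT.with(|c| *c.borrow_mut() = None);
    r
}

// what a future calls from inside poll to get a handle to "itself"
fn current() -> String {
    CURRENT.with(|c| c.borrow().clone().expect("current() called outside poll"))
}

fn main() {
    let got = with_notify("task-1".to_string(), || {
        // inside "poll", the seemingly magical current() just works
        current()
    });
    assert_eq!(got, "task-1");
}
```

The standard-library design removes exactly this: instead of stashing the handle in a thread-local around the call, the caller passes it straight into poll as an argument.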
And with_notify just sets a thread-local variable (sort of a global, but global to the thread) that contains the notifier, and task::current() reads it. Whereas in the standard library implementation, you instead say that whenever you call poll, you have to give it a handle to itself, a handle to its waker. And so there's no longer any need for task::current(), the thread-local stuff, and that magic. It does mean that poll is a little bit more verbose, but it also means there's less magic. It also means you don't need thread-locals, which can be a little bit of a pain to implement in certain execution environments. So this is generally a good change, and it maps very directly onto what we had in the old futures: slightly different mechanics, but the same underlying principles.
And then we get to self. So in the old futures (bring us back to that) poll just takes &mut self; all well and good. In the standard library futures, it takes self: Pin&lt;&mut Self&gt;. I'm trying to think what the right way to introduce this is. Actually, sorry, before we get to that: are there any questions about everything leading up to this? Because this is about to be another sort of break into something that is relatively different. So if you have any questions about all the stuff we've talked about, from the middle of Tokio to here, now's the time. You can ask them later too, but now's a natural break point. Or about LocalWaker, or...
"I don't remember that self can be specified with a type?" Oh yeah, so that's actually not a feature specific to this. That is Rust's arbitrary self types. This is a proposal that landed a while ago, but it hasn't been stabilized at all yet; I don't think it even has a full RFC.
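The pieces just described, a Poll with only an Output, and a wake-up handle passed explicitly into poll, are what eventually stabilized in std. Here's a sketch driving a trivial future by hand using today's stabilized form (a Context wrapping a Waker, rather than the LocalWaker this RFC proposed), with a do-nothing waker built from the raw vtable API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// a waker that ignores all wake-ups; fine here because our future is
// ready without ever returning Pending
fn noop_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// a future that is immediately ready with a value
struct Ready(Option<u32>);

impl Future for Ready {
    type Output = u32; // one associated type: no separate Item/Error
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // the wake-up handle arrives as an argument; no task::current() magic
        Poll::Ready(self.0.take().expect("polled after completion"))
    }
}

fn main() {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Ready(Some(42));
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(v) => assert_eq!(v, 42),
        Poll::Pending => unreachable!("Ready never returns Pending"),
    }
}
```

Note that the caller has to construct a Pin to even call poll, which is the part the next section explains.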
Yeah, so the basic idea is that you can have self be not just directly self, &self, or &mut self; you can also declare self to be some type that depends on the Self type. And this would be: you can only call this method if you have an Rc&lt;Self&gt;. So if you just have a &self, you could not call this method. It basically declares: I am callable on an item of this type, but only if it is in the following form. And so that's what this is using: it's saying that you can only call poll on self, on an item of this type, if what you have is a Pin&lt;&mut Self&gt;; otherwise you cannot call poll. Yeah, it's really neat. I think it's currently behind a feature flag; it's all implemented. There are some questions about trait objects that are still being resolved, but basically you add the feature and then you can do it. I don't think you should generally need it, but...
"Does it work with anything generic over Self?" Yeah, I think the proposal is... I'm not entirely sure.
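A few specific receiver types already work on stable Rust (Rc, Arc, Box, and Pin receivers), so you can get a feel for the "only callable in this form" idea without the feature flag. The Thing type here is made up for illustration:

```rust
use std::rc::Rc;

struct Thing(u32);

impl Thing {
    // callable only when the value is inside an Rc;
    // a plain Thing or &Thing cannot call this
    fn value_via_rc(self: Rc<Self>) -> u32 {
        self.0
    }
}

fn main() {
    let t = Rc::new(Thing(7));
    assert_eq!(t.value_via_rc(), 7);
    // Thing(7).value_via_rc(); // would not compile: receiver must be Rc<Thing>
}
```

Pin&lt;&mut Self&gt; in the new Future trait works the same way: the method exists, but you can only invoke it once you've put the value into the required wrapper.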
So this is one of the things they're still working out, and why it hasn't been stabilized, but I think the requirement is anything that derefs into Self. It might even say so here somewhere... right, so let's see, the original thing... yeah, basically there are a lot of questions around object safety and trait objects that we're not going to get into here, but I wanted to see if I could find it. The basic idea is that you want something that derefs to Self, whether that's Deref, DerefMut, or sort of an owning deref, but it has to be possible to turn it into Self.
Okay, so it doesn't look like there are any particular questions about Tokio or the futures stuff leading up to this. So let's look at Pin. Pin is there for async/await. I'm trying to think whether we should deal with async/await first or talk about Pin first... I think we need to talk about async/await first, so let's do that.
async/await is a feature that a lot of people have been anticipating in Rust for a long time, and it gives us two things that are kind of neat. So here's what it gives us. Imagine... actually, we can do it with our example from up here. Writing this future by hand is fine, but there is a bunch of noise here: we have to write all this and_then stuff. And furthermore, it's a little awkward, because imagine that we have something like a buffer that contains "foobar", or, in fact, that we read it from the user's input or something. This is currently a static string, which is annoying, but imagine that this is really a String, and then what we want to write is really what's in buf. You can't really do this, because the future you get back is going to be tied to this lifetime, but that future we're going to give to tokio::run.
No, I haven't really introduced it yet;
I'm about to. If we're going to take this future now and sort of give it away to a thread pool to operate on, but this... I guess, let's imagine that this part also uses buf, or that we want to keep using buf down here. Right, like I'm going to do something like tokio::spawn (or sure, spawn, it's not important): I send the future off to be executed somewhere, but then, after that... actually, sure, run, I'm going to do this. Right, so I'm really just giving a reference to buf to this future, and then when it finishes, I want to do something with buf. Or you can imagine that I have buf and I want to use it in multiple parts of this future's execution. Pretty annoying; in fact, you can't really do it very nicely. And even if you ignore those kinds of things, this is just a weird way to write this code: I have to do all this chaining of and_thens.
So async/await tries to make all of that better by introducing two new keywords to the language. Well, "keywords": it introduces one new keyword, which is async, and a macro called await!. And what that lets you do is write code the following way. The stuff above, we can write... in this case, this is an async block (you can also mark functions as async, and closures as async). So we declare this as an asynchronous block, and inside of it we can say: await the TcpStream::connect. And then we want to await c.write(buf)... actually, let's not do that for now; it's not terribly important, just to demonstrate that this is annoying. We're going to write "foobar", and then we're going to await c.read, and then we check whether b is "barfoo". So async/await lets us write the code that's above this way. So notice that now we've taken this chaining- and callback-based thing and written it as linear code.
Now, there's some discussion about await becoming an actual keyword,
so you could write something like this, which makes it a little bit easier to read maybe, but for now it's just a macro, so that the design can be iterated on. And this is pretty cool, in particular because this code is now just a lot easier to read: it reads like the corresponding synchronous code. It's also really handy for writing functions. So whereas before, I would have to write an fn check_foobar that returns either... I could say it returns an impl Future, and then I could do all of this and_then business that we did above. Or I could write a check_foobar that returns, like, a FooBarCheckFuture, and then I declare some enum FooBarCheckFuture, and it can be either Connecting, in which case it has a TcpStream connect future, or it can be Writing, which has a write future, or it can be Reading, which has a read future, and then I have to impl Future for this FooBarCheckFuture, and all of that just becomes such a pain, right? impl Future helped a lot, as we can see, but it still requires you to write all this and_then business, and it's still really annoying. In this new world, I can just write async fn check_foobar, which gives back an impl Future, and then I just write this code with await!, and it will just work. And notice how similar this is to the synchronous code, and that is basically the goal.
But you might wonder: well, how is this actually going to work? This code sort of has to turn into a future somehow, so what does it actually do? Well, what async does is basically construct a type for you that is an enum... sorry, let me rephrase that a little: async constructs a type for you that is a future, and it will run from the previous await point to the next one, every time it can make progress. So in this case, when you initially create this block, no code gets executed; it's just a future.
It just just a future Uh, then the first time this async block the the return future gets pulled It's going to start running from the start of the block Until it hits the first await Uh, that first await of course contains a future or is given a future as an argument and it's going to pull that future If that future is ready, then it keeps executed until the next await If that future is not ready, then it's going to sort of keep track of where it got to so, uh, The first time this gets executed Uh, like the first time it's going to be there the second time It's going to call tcp stream connect Uh And then it's gonna notice that when we pull them on the connect It's not ready yet because it just initially started connecting. So the second time we're sort of stuck here Then we're going to pull again pull on async future Resumes here and it sort of turns this into a loop So await if we look at it await basically sort of kind of de sugars into this Ignore the pin of future stuff for now It basically de sugars into a loop that pulls on the asynchronous future So down here this is sort of kind of going to de structure into this If it's Ready, then we sort of It's ready with x then we sort of yield x. So we return that out of the loop And if it's not ready All right, fine. Let's do it the other way around Then we break with x out of the loop and if it's not ready, we've yield So think of this as we sort of store how far we got and then we And then we return from the future saying not ready The next time the async block so imagine there's a imagine that oh, this is going to be easier Just for exposition Imagine we finished the connect and then this We're like now stuck on the stuck on the read Sure, let's say we're stuck on the read. So we finished the two things above and we're now on this part So this is going to de sugar into something. That's kind of a little bit sort of like this If it is okay async ready, then we're going to break with x I'm gonna break with okay x. 
Now there's no plain break x. If it's Async::NotReady, we're going to yield, and that's what we do. So that's basically what await kind of sort of desugars into. Now, there are a couple of things that are interesting about this. Notice that there's a bunch of code above this loop, right? So here, where we sort of think of it as returning NotReady (think of it sort of kind of like that: the future that we get back from async, if we get to this point, is going to return NotReady), it can't really be a return of NotReady. Right? Because if we returned NotReady... not really returned... if we returned NotReady, then the next time this async block got polled, it would execute from the top of the scope, which means it would re-execute these pieces of code. And that is clearly not okay, right? We have already executed them. So this is why it needs to be this special kind of yield operation. It can't really be a return, because a return would imply re-issuing all of these operations. Instead, we're going to yield, and yield we're going to declare to be sort of a special operation that means: remember where I returned from, and when I get called again, continue from here. Continue from here on poll, or on re-entry. Right: so when I re-enter this block, when poll is called again, then continue from here, as opposed to from the top. And what this means is, if we continue from here (so we get notified somehow, poll gets called again, we continue from here; this is what's known as continuations, basically) then we're going to follow the control flow at the bottom:
we come to the loop, we do the loop again, and now we do c.read. And now it may or may not be ready, but at least it means we didn't re-execute the stuff that's up here, right? Which is what we wanted. And then, of course, eventually it's going to return Ready; then we give back the x, and now b is that value that was ready, and we can continue down here. So that's the basic idea: we need this ability to continue from where we stopped. But if you look at it, that's really quite complicated, because first of all we need to note where we returned from. But more importantly, when we continue: where does c come from? We've returned, right? So has c just gone away? We got c from here, but we somehow need c in order to continue from here; we don't get to re-execute this code. And this is where async/await gets tricky. Because c here is sort of... where is c even stored, right? It's sort of stored on the stack, but the stack is going to go away when we return. So where is c stored? And so what async really does is turn into this pattern down here...
"You're probably going to do this, but: mention the word generator." So, I don't really want to say "generator," because it isn't really a generator. It sort of is a generator: for those of you coming from JavaScript or Python, it sort of is like a generator, in that a generator also has to continue. But it's also not quite a generator, because it's asynchronous, and so it needs to deal with things like wakeups. But it is continuation-passing in the same style as generators. There is, in fact, in the RFC for async/await, down here somewhere, a mention of async-based generators... here: "that async function should be the syntax for creating generators." That's not what they've chosen to do, but you're right:
the mechanisms are pretty much the same. So, sorry. What async really does is use a pattern similar to what we sort of started writing out here for the very verbose check_foobar, except the compiler is going to generate it for you. So when it looks at this code, it's going to generate an enum, like CompilerMadeAsyncBlock with some unique identifier that is known only to the compiler, right? And so this is not a type that you will ever see. The async block is really just going to return an impl Future, but in reality this is really CompilerMadeAsyncBlockWhatever, and the compiler is going to generate an impl Future for this CompilerMadeAsyncBlock. The compiler is going to generate all of this, so you will never see this type; it'll be entirely private to the compiler. But it's going to generate this enum, and what this enum is going to be is sort of like... I think of it as a Step0. So Step0 is before the first time you poll, so there's no state... sort of, there's no state except whatever the async block may have captured from its environment. Right: so if there's a bar here, and here we do something so that we pull bar in, then Step0 would also contain a Vec... see, then Step0 also contains a Vec&lt;bool&gt;. So it's everything that you capture from your environment that's needed before you even start. And then, for every await, it's going to generate a new step. So there's going to be a Step1, and Step1 in our case is the await on TcpStream::connect. Right, and inside of each variant, it's going to store all the stuff that the block has made so far; in our case, that would I guess be z at this point. And it would also hold the connect future, which is just going to have some type. Because the first time we call connect, we're going to get back a future that we're going to have to keep polling; right, connect gives you back a future, and here we're saying await that future. So we're going to have
to store that future somewhere, and that's going to be here. And if we're stuck on Step1, we're going to keep polling... I guess this is really going to be something like a waiting_on field, and any time you poll on a step, you're really polling on whatever that waiting_on is. So this is going to be some impl Future, and the output is going to be something that Step2 needs. Right, it's not literally going to produce a Step2, but you can think of it as producing Step2. Here, Step2 is... well, it's still going to have the z; so let's just say that all of these have z and ignore it from now on, because it's not used. It is, however, going to keep the c, which is going to be the TcpStream. Because at this point we do have the c, right? Because Step1 was a future that we finished polling on, which eventually gave us the TcpStream, and then in Step2, when we get to this await point, we already have the TcpStream, because that poll eventually returned Ready. So this TcpStream we now have. And then what Step2 is going to store is another sort of waiting_on, and that's going to be an impl Future whose output is going to be, like, usize.
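The steps just described can be written out by hand. Everything here is illustrative: CompilerMadeAsyncBlock, Delay, and waiting_on are made-up names (the real generated state machine is an anonymous compiler type), and Delay is a toy stand-in for the connect future that becomes ready after two polls:

```rust
// local stand-in for the futures 0.1 Async type
enum Async<T> {
    Ready(T),
    NotReady,
}

// toy sub-future: ready with `value` after `left` more polls
struct Delay {
    left: u32,
    value: u32,
}

impl Delay {
    fn poll(&mut self) -> Async<u32> {
        if self.left == 0 {
            Async::Ready(self.value)
        } else {
            self.left -= 1;
            Async::NotReady
        }
    }
}

// the enum the compiler would generate: one variant per await point
enum CompilerMadeAsyncBlock {
    Step0,                       // not yet polled; holds only captured state
    Step1 { waiting_on: Delay }, // stuck at the first await
    Done,
}

impl CompilerMadeAsyncBlock {
    fn poll(&mut self) -> Async<u32> {
        loop {
            match self {
                CompilerMadeAsyncBlock::Step0 => {
                    // the code before the first await runs exactly once,
                    // then we transition to Step1 and keep going
                    *self = CompilerMadeAsyncBlock::Step1 {
                        waiting_on: Delay { left: 2, value: 42 },
                    };
                }
                CompilerMadeAsyncBlock::Step1 { waiting_on } => match waiting_on.poll() {
                    Async::Ready(v) => {
                        *self = CompilerMadeAsyncBlock::Done;
                        return Async::Ready(v);
                    }
                    // this is the "yield": remember we're in Step1 and
                    // resume right here on the next poll
                    Async::NotReady => return Async::NotReady,
                },
                CompilerMadeAsyncBlock::Done => panic!("polled after completion"),
            }
        }
    }
}

fn main() {
    let mut f = CompilerMadeAsyncBlock::Step0;
    assert!(matches!(f.poll(), Async::NotReady)); // Step0 code ran once
    assert!(matches!(f.poll(), Async::NotReady)); // resumed at Step1, still waiting
    match f.poll() {
        Async::Ready(v) => assert_eq!(v, 42),
        Async::NotReady => panic!("should be ready after two pending polls"),
    }
}
```

Storing the in-flight sub-future inside the enum variant is exactly what answers "where is c stored?": it lives in the state machine itself, not on any stack.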
So this is the future that's returned from c.write. But we have to be a little bit careful here, because we might start to wonder what's going on: c.write takes a mutable reference to c, right? This impl Future is going to sort of... not quite consume c, but it needs c, and so it's sort of tied to the lifetime of c. But c is stored here. So this future internally contains a reference to c, but c is stored in ourselves. And for those of you who have dealt a little bit with this, this is known as a self-referential data type. So imagine I have some type Foo, and inside I have some data, and data is, say, a Vec&lt;u8&gt;. Then I also want to store, like, a self-reference, which is going to be a slice, a &[u8] or something: half is going to be a slice into data. Right, so half is a pointer into ourselves. Actually, a better example of this (for reasons that will become clear later): data is a buffer that's stored inside of Foo, and half is going to point to half of that data. But this is really weird, because imagine we have some code that looks like: f is a Foo... how do I even construct this? So data is going to be something like 0..1024, but how do I set half? How do I set that field? I don't know how to set it, right? I don't have a reference to f yet. And let's say that, imagine that, somehow I managed to create it in the first place. Now I say z is going to be Box::new(f). So I'm moving f: instead of f being on the stack, f is now on the heap. But now f.half, or rather z.half, is still pointing to the stack. It's still pointing to where f was, because in this move I haven't updated half. So half is now a pointer into somewhere that I don't control anymore, somewhere where the data really isn't. Imagine sort of the trivial example of this:
Imagine the trivial example of this: I change `z.data[0]` to be, like — I guess — 1. The `z.half` still points into `f`, which has been dropped, so that's not okay. But even if it was — even if somehow that reference hadn't gone away — this value is now... it wouldn't see the 1, for sure. In fact, it would probably just see garbage, or something that's been reused. But we need to be able to express this kind of type in order to have async/await, because this future we're waiting on is using the stream. Okay, so clearly there's something very weird going on here; we need to have a way to express this. Now, there have been lots and lots of proposals for adding self-referential data types to Rust, and it turns out it's really quite hard. One way you could do this is to say that `half` is going to be a raw pointer that we manage ourselves. But we can't do that if the user is allowed to just randomly move `Foo` around — we have no way of guaranteeing that that pointer is always correct. And so really this problem turns into one of: can we guarantee that `Foo` does not move once we set `half`? Right? So at some point we're going to set `half` — we're going to set the value of `half` to point into `Foo`. Imagine there's some initialization step, like a `Foo::init` function or something, and that's going to create `data`, and that's going to create the pointer `half` into `data`. And of course there's no way for user code to deal with the state in between, so user code will only ever see the `half` pointer being set correctly. But if the user ever moves `Foo` — or specifically, if `data` ever changes memory location — then `half` would no longer be correct.
And so, in addition to there having been proposals for "how do we create a self-referential data type", it has sort of reduced to: how can we express that something isn't allowed to move? So there have been things like `impl !Move for Foo` — that's something we've seen. There have been a lot of proposals for this, but none that have really worked all that well. And this has been a major problem for how to express async/await, because we need it for async/await. And finally, a while ago, withoutboats and eddyb and some other people — I think withoutboats was sort of the first one to realize this — I think they link somewhere here to the blog post they wrote. (Rendered... withoutboats, that's new in my history, that's weird. Yeah, it's on a different URL, so it doesn't really help me. Sorry, it's on a different computer, so I can't even get to that. Here — ah, there we go.) Um, so, down here somewhere. What they basically did was spend a bunch of time thinking: how can we do async/await? So this starts out by talking about self-referential structs, then talks through, well, what do we actually need? How might we get it? And then, here, they basically realize that we have all the parts we need in the type system — we just need some very small proposals. This was back in January, and this was basically the breakthrough we needed for how to express this in Rust. And then there are a bunch of follow-up blog posts on how to polish that API — in fact, there was one very recently. Anything here that mentions pinning is probably worth reading if you want to understand it. The observation was that we can get support for this — basically the thing we want to express with async/await, that something can't move once we have it be self-referential — using the notion of pinning. ("Can you link to the voting site you made last time?" Ah, sorry.)
Yes — sorry. So, the pinning API is what they came up with in the end. And pinning has a lot of very sneaky details — there are a lot of intricate contracts being maintained — but I'll do my best to try to explain what's going on. So, pinning introduces two new types. There is a `Pin` type, `Pin<P>`, where `P` is sort of a pointer type — so, where `P` implements `Deref`. So there's a `struct Pin<P>`. And then there's a trait called `Unpin`. This is a marker trait, and it's automatically implemented for all types by the compiler, just like `Send` and `Sync`. ("This is the first time I really understand the need for the pin API and its relation to async/await." Oh, I'm glad to hear it.) Um, so — where this gets tricky is, remember how up here we need to be able to express the fact that once we've set this pointer, `Foo` will not move. And that is what `Pin` expresses. So, say you have a function `foo`, and you get some argument `x`, and that argument is a `Pin<P>` — `P` here is any kind of pointer type, so a mutable reference or a shared reference, or `Arc`, `Rc`. Let's say you get a `Pin<&mut T>`. What this tells you — so, if you get an argument of type `Pin`, there's a contract. Basically, if you get a `Pin`, whoever is calling you is promising the following: they are promising that either `T` will never move again, or `T` implements `Unpin`. Okay? That is the contract. Now, this does not say anything about `T` on its own. If you're just given a `T` — and `T` here is any type — then until you have it in a `Pin`, there's nothing special. All this is saying is that either `T` will never move again the moment you get a `Pin` of any pointer type to it — this is similar if I gave you, say, a `Pin<Box<T>>`: I'm saying that once you get this, `T` will never move again — or `T` is `Unpin`. And remember, I mentioned that `Unpin` is an auto trait.
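As a rough sketch of the easy side of that contract — all the type and function names below are made up for illustration — most ordinary types are `Unpin`, so pinning them places no real restriction:

```rust
use std::pin::Pin;

// A perfectly ordinary type: no self-references, so it is automatically Unpin.
#[derive(Debug, PartialEq)]
struct Plain(u64);

// Because Plain: Unpin, the safe constructor Pin::new is available, and we
// can freely get mutable access back out of the pin.
fn bump(mut p: Pin<&mut Plain>) -> u64 {
    p.0 += 1; // DerefMut on Pin<P> works here because the target is Unpin
    p.0
}

fn main() {
    let mut x = Plain(41);
    let pinned = Pin::new(&mut x); // safe: Plain is Unpin
    assert_eq!(bump(pinned), 42);
    // x was never actually prevented from moving; Unpin types don't care.
    let moved = x;
    assert_eq!(moved, Plain(42));
}
```

The interesting case — a `!Unpin` type, where `Pin::new` is not even available — is exactly what the rest of the discussion builds up to.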
So it's something that every type gets, unless it explicitly opts out of it with an `impl !Unpin`. Unless you have an impl like that, all of your types are going to be `Unpin`. Now, `Unpin` is a little bit of a weird term — there's still some discussion about what these things should be called. The argument for calling them what they are is that `Unpin` should be read as a verb. So you should read this as: if something is `Unpin`, it means that if you're given a `Pin` of it, you can unpin it, and that's okay. So, `Unpin` as in a verb. This is saying: if you have a `MyType`, and you have a `Pin` of a pointer to a `MyType`, you can unpin it. We'll see how that plays out. ("Is it auto-implemented because it's an empty trait?") No. You can have traits that have no members that are just a contract. Auto traits are special in that the compiler will automatically give every type that trait if all its members have that trait. That is not true for every marker trait. That said, I don't know of any empty traits in the standard library that are not auto traits — in other crates, marker traits of various kinds are common, but they're not necessarily auto traits. So yeah, `Unpin` is special: `Unpin` is known to the compiler, and it has been marked as auto. In the standard library, you're allowed to write `auto trait`, which basically tells the compiler this trait should be applied everywhere. You cannot define your own auto traits, at least not currently. So it is a reserved, special trait — it's not a reserved special name, but it's in the standard library, and the compiler knows it's an auto trait, and that's why it has this auto propagation. And so the way to think about `Unpin` is in terms of this contract, right?
Either `T` will never move again, or `T` is `Unpin`. And if something is `Unpin`, what that means is: it is not sensitive to being moved. ("Off-topic question about generics: can you specify that a generic must not implement a trait?") Um, not currently. You can only declare that something does not implement a trait; you cannot depend on something not implementing a trait yet, because it turns out to be fairly hard for the compiler to do negative reasoning. It would be nice for specialization, for example, but I don't think you can do it. So, think of our self-referential data type `Foo` up here. `Foo` would not be `Unpin`. Or, the other way to think about it: if something does not have any self-references, then it doesn't matter if you move it — it's not sensitive to being moved. So even if someone pinned it, and, like, you moved it, it's not like your data structure would break; nothing would be wrong if it got moved. And so that's why, if I just declare, like, a `Bar`, and that has a `v` which is a `Vec<bool>`, and maybe some `buf` that's a `Vec<u8>`, and maybe it has a `z` that's an — `AtomicUsize` is a terrible example — an `Arc<Mutex<HashMap<(usize, bool), String>>>`, like, who knows, some complicated type — this type is `Unpin`, because it's not sensitive to moving. If you move it, the data structure is just as valid as it was. `Foo`, on the other hand, is sensitive to moving. Right — but that doesn't mean you can't move `Foo`! It's just that once you rely on this `half` value, once that's important to you, then you can't move `Foo`. Once you know that you're going to start using `half` — once `half` is something you're going to access — then it matters that `Foo` is not moved.
So therefore, `Foo` is not going to be `Unpin`. So here, we're going to have, in our code, unsafe impl — sorry, it's not unsafe — `impl !Unpin for Foo`. This is saying that `Foo` is sensitive to being moved. And — I guess we can bring this a little bit closer — `!Unpin` is not unsafe. It is always okay to declare a type to be `!Unpin`; you're just doing yourself a disservice if you do it when it's not important. Because if you say that something is sensitive to being moved, then that just means people won't be able to move it while it's in a `Pin`. The reverse, on the other hand, is problematic: if you have something that is sensitive to being moved, but you don't mark it as such, then that would be problematic. Now, the name `Unpin` is a little weird, because you end up with this double negation, and that's part of what's going on in the discussion in the stabilization effort right now. We'll see whether or not it changes — there's some arguing for `Unpin` being the right word, but it reads somewhat weirdly; like, when you talk about it, it's a little bit weird. ("If you create a custom self-referential struct, it should `impl !Unpin`?")
Yes, exactly. I don't think the compiler is going to be smart enough to do this automatically, because it doesn't know that `half` here points into `self`. `half` could be a pointer to somewhere completely different, in which case `Foo` is `Unpin`. So it's only because the semantics of `half` are that it points into `self` that the type is self-referential. And furthermore, this only matters in the context of `Pin`. So `Unpin` on its own is not important; it is only important in the context of `Pin`, and we'll see exactly how that all ties together. But imagine that I set `half` and then just never touch `half` again — so `half` is pointing into `self`, but it's never used. Then, of course, `Foo` is effectively still fine to treat as `Unpin`, because even if `Foo` is moved, nothing's going to break in `Foo`. (I'll think of a better example of this — we'll see how it ties into futures later, but I think that's the basic idea.) Okay, so why does this all matter? Well: if you're given a `T`, whether it's `Unpin` or `!Unpin`, you can move it all you like — `Unpin` does not restrict moving in any way. However, if you are given a `Pin` of a type, then the contract is about the target. Let's actually write this out. There's an `impl<P> Deref for Pin<P> where P: Deref` — I need to make sure I get this implementation right; I think this is right, yeah — and, I guess, `type Target = P::Target`. Okay, so this `Deref` exists for `Pin` no matter what the type is, because if I give you a read-only reference to the target of `P`'s pointer, it's still not going to move — you're just going to read it, that's fine. `DerefMut` is where it gets interesting. So, `DerefMut for Pin<P>` where `P` implements `DerefMut`...
Sorry, the ordering here is a little bit important for clarity. So, notice that `Pin` always implements `Deref` into the target of the pointer — that's the thing up here — and it does so unconditionally, as long as `P` implements `Deref`. Which is, like: `Box` implements `Deref`, `Arc` implements `Deref`, `Rc` implements `Deref`, most things implement `Deref` — pointers and references, of course, implement `Deref`. `DerefMut` is where it gets interesting. So, imagine that we just implemented `DerefMut` without this restriction — we just said that `Pin` also implements `DerefMut`. Um, this is problematic, because this gives me a mutable pointer to `T`. So what I could write now is the code that we had above, up here. So I have a `Foo` — well, actually, I have a `Pin<Box<Foo>>`; remember, `Foo` is `!Unpin`, so it's sensitive to being moved — and I have one of these. Now, because it implements `DerefMut`, what I can do is use `mem::swap` or `mem::replace` on the deref of `f`. So, dereffing `f` is going to give me the mutable deref of the `Box`, which is the `Foo`. So this gives me a mutable pointer to `Foo`, and I can just replace that with some other `Foo`. Right? There's nothing stopping me from doing this — it's totally safe. The problem, of course, is that now I just moved `Foo`. So if I previously gave someone this `Pin`, then the guarantee that I gave them — namely, I told them that `T` will never move again if `T` is `!Unpin` (so this, the statement down here, implies that if `T` is `!Unpin`, then it will never move again; `Foo` here is `!Unpin`, so therefore it should never move again once I have it in a `Pin`) — but here I'm causing `Foo` to move. The old `z` here is now going to be the old `Foo`: the old thing that was inside `f` has now been moved. And so this is clearly not okay. And so the sort of insight here is: we're going to require `T` to be `Unpin` for `DerefMut` on `Pin` — for this to be safe.
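Here is that observation as runnable code: for an `Unpin` target, `mem::replace` through a `Pin` compiles and is harmless, while for a `!Unpin` target the very same line would be rejected, because `Pin` only implements `DerefMut` for `Unpin` targets. `swap_through_pin` is just an illustrative name:

```rust
use std::mem;
use std::pin::Pin;

// Replace the target through a Pin. This only compiles because String is
// Unpin: Pin<P> implements DerefMut only when the target is Unpin, which is
// exactly the rule that blocks the mem::replace attack for !Unpin types.
fn swap_through_pin(mut p: Pin<&mut String>) -> String {
    mem::replace(&mut *p, String::from("new"))
}

fn main() {
    let mut s = String::from("old");
    let old = swap_through_pin(Pin::new(&mut s));
    assert_eq!(old, "old");
    assert_eq!(s, "new"); // harmless: a String doesn't care where it lives
}
```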
This will not compile for `Foo`, because `Foo` does not implement `Unpin`. So this code will not compile, and in fact, now we guarantee that `Foo` will just never move once you put it in a `Pin`. There's no way to get it out of the pin — like, `Pin` doesn't have a way to destructure it — and you cannot `DerefMut` it, so you can't get a mutable reference to `T`, and so therefore you're not going to be moving `T`. You don't have a way to move it. So this is the core insight of `Pin`: you can only get a mutable reference to the thing inside of it if it is `Unpin` — if it's safe, if it's not sensitive to being moved. Right — now, of course, there's one thing you can observe here, which is: what if I have some code where I know that I'm not doing `mem::replace`? I have some code where I just want a mutable reference into `Foo`, because I'm going to change a string in there or something. That's all fine — I'm not moving `Foo`. Right? I'm not doing `mem::replace`; I'm not moving `Foo` in any way. Then I know, because of the code that I've written, that it is safe for me to look beyond the `P` in a mutable way. And so that's why there's a method on `Pin<P>` which is unsafe — an unsafe "as mut".
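For reference, in the API as it ended up, this escape hatch is `Pin::get_unchecked_mut`. A sketch of using it to mutate — but not move — a pinned `!Unpin` value; `Tracked` and `rename` are invented names, and `PhantomPinned` is the stable spelling of the opt-out marker discussed a bit later:

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// A !Unpin type: the PhantomPinned field opts it out of Unpin.
struct Tracked {
    name: String,
    _pin: PhantomPinned,
}

// Mutate a field of a pinned value without moving it. The unsafe block is us
// upholding Pin's contract by hand: we only mutate in place, never move.
fn rename(t: Pin<&mut Tracked>, to: &str) {
    // Safe DerefMut is unavailable because Tracked is !Unpin.
    let inner: &mut Tracked = unsafe { t.get_unchecked_mut() };
    inner.name.clear();
    inner.name.push_str(to); // in-place mutation: fine
}

fn main() {
    let mut t = Box::pin(Tracked { name: "old".into(), _pin: PhantomPinned });
    rename(t.as_mut(), "new");
    assert_eq!(t.name, "new"); // mutated in place, never moved
}
```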
I don't remember exactly what it's called off-hand, but the exact name is not important. You get some `&mut P::Target` out of it. So I can have this unsafe method — this unsafe "as mut" — which, given that `P` implements `DerefMut`, basically just does a `DerefMut` as well, but it's unsafe because it will only give you that mutable reference if you promise that you're not going to move the target. ("Does the compiler consider `DerefMut` similar to `Deref` because of the names?") Um, no — the compiler doesn't do that, although the trait `DerefMut` extends `Deref`, so in order to implement `DerefMut` you also have to implement `Deref`. But it has nothing to do with the names. ("Is it kind of sad that you can't get a mutable reference to mutate it but not move it?") Yeah — it would be sad if you had no way of getting at the thing that `P` points to when you know you're not moving it. And that is exactly what we're getting at here: we do have an unsafe way to get to the target, but the unsafety is that you need to promise not to move `T`. Basically, you need to manually uphold the contract that it will never move again. If your target is `Unpin`, you know it doesn't matter to the type whether it's moved, and so therefore it's fine — we can give out the mutable reference, because if someone chooses to replace it, that's fine, `Foo` doesn't care. The thing we promised in the past was just that if we move it, it's not going to matter. So this unsafe method comes along with the contract that you don't replace it. Notice, however, that `Pin` is only one level deep. So if, inside `Foo`, there was something like a — a `String`, say — so `Foo` itself now has a pointer to a `String` — notice that, using only safe code, it
is totally — oh, I don't know how to phrase this. Actually, a better example of this is the other way around: `Bar` has a `Foo`. If we have a `Pin<&mut Bar>`, then it is totally fine for us to deref this, because `Bar` is `Unpin`. So it is totally fine for us to now move the `Foo` inside of `Bar` — there's no problem with that. All we promised here was that `Bar` is not going to move. That's all this `Pin` promises; it's not saying anything about the `Foo` inside of `Bar`. Um — and this sort of brings us full circle to futures. So, `poll` takes a `Pin<&mut Self>`. Now think about what that means: when `poll` is called, we are promised that `self` will not move, or that `Self` is `Unpin`. And keep in mind here that `Pin` also has this `as_mut(&mut self)` method, which returns a `Pin<&mut P::Target>` — and notice that this one is also safe. This turns, for example, a `Pin<Box<Foo>>` into a `Pin<&mut Foo>`. That's necessarily safe: a `Pin<Box<Foo>>` promises the `Foo` will not move, and a `Pin` of a mutable reference to `Foo` also promises the same, so we've done nothing weird by doing this. And so this is an entirely safe method, because you still can't move `Foo`. If I give you a `Pin<&mut Foo>`, you would still end up having to either use that unsafe method, or the target type would have to be `Unpin`. And so this means that if we have a `Pin<Box<Foo>>` — imagine that `Foo` was a future — then of course now we can easily call `f.poll()`. That's fine, because this pin can be trivially turned into — well, I guess it would be via `as_mut`, right?
This pin can now be turned into a `Pin<&mut Foo>`, and since `Foo` implements `Future`, that is the same as a `Pin<&mut Self>` — that's what `poll` is given. And to bring this full circle to async/await: now that we have this `Pin`, we have all the guarantees that we need for async. The compiler-made async block will be `!Unpin` — we know it's not `Unpin` because there's self-referential stuff going on in there — but once we start polling, we know it's not going to move. And before we start polling, it's fine for it to be moved, right? So if you just have one of these, you can move it as much as you want; it is only the first time you call `poll` that the pin is established. And so this is why, in the async/await implementation, if we look at what it actually does — here, you'll see it pins the future that you give it. It creates the future (so that would be like the `Foo`), then it pins the mutable reference to the `Foo` — basically establishing the contract that from this point forward, I will not move `Foo` — and then it starts polling the future; it starts polling the pin of the mutable reference to `Foo`. And so now, inside of here, once we start going into step zero, step one, step two, all of those are calls to `poll`, which means that we're getting a `Pin<&mut Self>`, which means we know that the `Foo` will not move — and therefore this self-referential stuff is fine, because we know that this will not move anymore. But until you call `.await` — until you start polling it — you're free to move it, because we know that in this state, moving it is fine. ("Today we're going through all of futures and Tokio and async/await and pin.") I think it would be hard to follow from this point if you don't have any experience with it, but I recommend you go back and watch from the beginning.
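To make the `poll`-through-`Pin` mechanics concrete, here's a tiny future hand-polled with today's std API. `TwoPolls`, `noop_waker`, and `run` are illustrative names; the do-nothing waker is the usual `RawWakerVTable` boilerplate you need just to be able to call `poll` by hand:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A trivially Unpin future: ready after being polled twice.
struct TwoPolls {
    polls: u8,
}

impl Future for TwoPolls {
    type Output = u8;
    // Because TwoPolls has no self-references it is Unpin, so we can mutate
    // straight through the Pin with safe code.
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u8> {
        self.polls += 1;
        if self.polls >= 2 { Poll::Ready(self.polls) } else { Poll::Pending }
    }
}

// A do-nothing Waker: just enough plumbing to hand-poll a future.
fn noop_waker() -> Waker {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// A miniature "executor": poll the pinned future until it's Ready.
fn run(mut f: Pin<Box<TwoPolls>>) -> u8 {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        // as_mut: Pin<Box<_>> -> Pin<&mut _>, safely, as described above.
        if let Poll::Ready(n) = f.as_mut().poll(&mut cx) {
            return n;
        }
    }
}

fn main() {
    assert_eq!(run(Box::pin(TwoPolls { polls: 0 })), 2);
}
```

A real executor would park instead of spinning and would use the waker to know when to poll again, but the `Box::pin` / `as_mut` / `poll` shape is the same.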
I think we've done a pretty thorough job of going through all the things. ("So whether something is pinnable or not is governed via `Unpin` and `!Unpin`?") Sort of — every type is pinnable. You can always make a `Pin` of some type `T`. The question is just which contract you're establishing, and the contract for all pins, no matter what the type inside, is — so remember, the parameter of `Pin` is always a pointer — it's either that the target of the pointer will never move again, or that the target of the pointer does not care whether it's moved (so, for example, it's not self-referential). Yeah, so in some sense, `Unpin` or `!Unpin` dictates which contract you are tied to when you make a `Pin`. And so that is then what enables us to do this, because now, the moment you start polling this, you know that it will not move anymore. Or, in the case where you implemented your own future: if your own future doesn't have any self-references in it, then it's trivially `Unpin` — `Self` is going to be `Unpin` — and so therefore writing your future is fine. Because when writing your future, you're given one of these; `Self` is `Unpin`, therefore the `DerefMut` of the pin is safe, and you can use it. So you can basically just write `self.whatever` and you get a mutable reference into `self` — because it's `Unpin`, it doesn't matter. Okay. That is pin, and that is what enables us to have async/await, and that is currently what's about to be stabilized. Now, some of the names might change.
There's a lot of discussion in this thread about these double negatives. There's one other thing that the proposal for stabilization adds. `impl !` is currently a nightly-only feature — this is similar to if you want to declare that something is not `Send`, like `impl !Send for MyType`. This pattern of implementing a negative, you can only do for auto traits, because they're the only things that are not opt-in; this is basically a way to opt out. This is a nightly feature, but they want to be able to ship pin without shipping that feature. And so what they're adding is a type called `Pinned`. Which is sort of stupid — sorry, the idea is not stupid, the name is annoying. So in `std::pin`, they're adding a `struct Pinned`, which has no contents. It's similar to `PhantomData`, right? It contains nothing; it's a zero-sized type. And inside the standard library, they are of course allowed to use negative implementations, and so there's an `impl !Unpin for Pinned`. The reason they do this is: now, for `Foo`, instead of having this `impl !Unpin for Foo` — which requires nightly — we just add, like, a field of type `Pinned`. And now, because `Pinned` is `!Unpin`, `Foo` will be `!Unpin`, because the compiler only implements auto traits if all members have that trait. So adding this to the standard library means that now, even though they don't stabilize negative impls, you can have things be `!Unpin` on stable. ("I've been looking for a more approachable resource to understand all this information related to futures and async/await, and I must say this has been the best for that.") I'm glad to hear it.
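As it happens, this marker ultimately landed in the standard library as `std::marker::PhantomPinned` rather than `std::pin::Pinned`, but the pattern is exactly as described. A sketch using today's name (`SelfRef` and `assert_unpin` are invented):

```rust
use std::marker::PhantomPinned;

// Adding a zero-sized !Unpin field opts the whole struct out of Unpin,
// because auto traits only propagate when every field has them.
struct SelfRef {
    data: [u8; 16],
    half: *const u8,     // the self-referential part
    _pin: PhantomPinned, // makes SelfRef: !Unpin on stable Rust
}

// A compile-time probe: only callable for Unpin types.
fn assert_unpin<T: Unpin>() {}

fn main() {
    assert_unpin::<[u8; 16]>(); // plain data: fine
    assert_unpin::<Vec<u8>>();  // also fine
    // assert_unpin::<SelfRef>(); // would NOT compile: SelfRef is !Unpin

    // The marker really is zero-sized, so it costs nothing at runtime.
    assert_eq!(std::mem::size_of::<PhantomPinned>(), 0);
}
```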
I do think this is part of the reason I did this stream in the first place: there are a lot of interconnected components here, and there's a lot of stuff you need to understand, and I think it's actually valuable to go through all of it — sort of the entire ball of stuff — because it all interconnects, and understanding all of it is important. So I'm glad you find it useful. ("Did it require many more compiler features, or is it more at the library level?") So, that is one of the things that is really cool about pin, and why I recommend you go back and read those blog posts — it's this gradual realization that we can do this basically without adding anything. `Pin` is not special. It does require that you add an auto trait to the compiler, and while we don't want to add lots of those, it's pretty trivial: that's the only thing you add, and it didn't require a language feature. It just required adding a type, and an auto trait for it, to the standard library — and we already have all the other auto traits there. So there's no feature needed in the compiler at all to add pin. And then there's the realization that pin is all you need for async/await. Async/await itself does require compiler features, basically because the compiler has to generate this business: it has to take code that looks like this and turn it into this enum and the corresponding `impl Future`. So that is something the compiler now has to learn to do — basically, know which variables to capture and produce each step of this enum. But that's about it. So, I mean, that was somewhat complicated, but it was more the realization that pin is all that's needed to make that work. Does that answer your question? I think that was what you were asking. Um, okay. Right.
So, `Pinned` is the last thing that is proposed to be stabilized in stabilizing pin. The stabilization efforts that we're going through are: stabilizing `task` and `Future` — that needs to go into the standard library to get futures. `Future` requires `Pin`, so `Pin` is being stabilized. And then, for async/await, the RFC has landed; I don't think there's a stabilization issue for async/await yet. Yeah, probably not. So this is going to be the sequence of things that get stabilized: I guess `Pin` first, because it's needed in futures; then stabilize `task` and `Future`, because those are needed in the standard library in order to have async/await. And then, hopefully, we'll see the ecosystem move towards using the standard library future and task stuff. That would include things like Tokio having to move to it, which is a non-trivial effort, but the work is sort of underway. One of the problems we're having is that the pinning stuff, while it is really cool and probably basically what we need, is a pretty big change to how you write futures — you now need to think about pinning — and changing the whole ecosystem around futures, and especially everything that's built on top of Tokio, and Tokio itself, to use this just requires a bunch of work. But it is ongoing. And then, once this is stabilized, then at least hopefully we can finish the implementation of, and stabilize, async/await, and then hopefully we would just have all of it. But that is still a little bit of a ways off.
I think the hope is to have async/await in, like, two release cycles or something, I think. There's some discussion — I don't remember what the conclusion was in this thread — but I think the hope is to get it out pretty soon, although it's a trade-off, right? Because these are really tricky things, and you need really good documentation, which is one of the things that's currently kind of lacking. If you look at both the RFCs and the implementations in the standard library that they're proposing to stabilize, the documentation is okay — but this is why I made this stream as well, right? To try to give people a more wholesome introduction to all of the concepts that are involved. The documentation in the standard library isn't quite there yet, where it gives you that kind of full, interconnected picture of what's going on, and so my hope is that that's something we get into, or manage to make a part of, the stabilization effort. So we don't want to stabilize in a rushed way, but I do think we're getting to a design where we understand why all the pieces are there — and hopefully now you understand it too — in such a way that we can get async/await, which we all really want, in, like, a reasonable fashion. ("I wrote a little crate with one trait that wraps many pointer types into a pin type. It's called pinpoint.") That's a great name. ("You can call `.as_pin()` on a pointer.") Yeah, so there's a bunch of work on building this kind of thing. One of the things that's complicated about pin is: you want to be able to say, if I have a `Pin` — if I have a `Pin` of a `Box` of `Bar`, or I guess a `Pin<&mut Bar>` — is there a way for me to get to a `Pin<&mut Foo>`, for example? Specifically, to sort of project into this field. It might not be the best example, but let's go with that one. Anyway: is there a way for me to do this safely, through `bar.foo`?
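You can hand-roll such a field projection with `Pin::map_unchecked_mut`; the `unsafe` there is you taking on exactly the obligations that make this hard to offer safely and generically. The types below are invented for illustration:

```rust
use std::pin::Pin;

struct Inner {
    value: u64,
}
struct Outer {
    inner: Inner,
}

// A hand-rolled "pin projection": go from a pinned Outer to a pinned view of
// one of its fields. map_unchecked_mut is unsafe because WE must guarantee
// that the closure doesn't move anything, and that Inner stays pinned for as
// long as Outer is.
fn project(outer: Pin<&mut Outer>) -> Pin<&mut Inner> {
    unsafe { outer.map_unchecked_mut(|o| &mut o.inner) }
}

fn main() {
    let mut o = Outer { inner: Inner { value: 1 } };
    let pinned = Pin::new(&mut o); // safe: Outer has no self-references
    let mut inner = project(pinned);
    inner.value = 2; // Inner is Unpin, so DerefMut on the projected pin works
    assert_eq!(o.inner.value, 2);
}
```

Deciding which fields may be projected like this ("structural" pinning) per type is precisely the contract that helper crates try to encode.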
It turns out it's actually fairly difficult to express what the requirements are here. (Actually, that's a separate issue — exactly what you need for this is a little unclear.) There's a crate, though, that's intended to do this for you, which is this `pin-utils` crate — I think it was `pin-utils`; I could be wrong. It might be mentioned here somewhere... yeah, I don't remember where it is. Yeah, `pin-utils`, okay. So `pin-utils` has basically convenience methods for interacting with pinning; pinpoint is probably a similar kind of thing. And the hope is to provide some kind of mechanism for giving out this kind of transitive pinning, where you can say that if `Bar` is pinned, I can give you a `Pin` of its `foo` — but we don't have exactly the mechanism for that yet. The only other thing I wanted to mention — and you may have seen this already — is this: "creating a new `Pin` based only on a reference". In fact, we can look at this here. So, the RFC goes through basically everything we've gone through here, but for `new`... (this RFC is a little bit older — there was an older version of pin that had, like, `Pin` and `PinBox` and `PinArc`, and none of that is needed any more; what I've shown is the new formulation, so I guess this isn't the right document). Anyway, I specifically wanted to show you the constructors. There's a safe constructor for when the target is `Unpin`, and an unsafe constructor for when the target is not `Unpin`, and I want to explain why that is. So — I'm going to require that `P` implements `Deref`; all right, that's actually something I've omitted in all of this: all of these impls need to have the bound `P: Deref`. If you want to create one of these in general, that's the unsafe one — `unsafe fn new_unchecked(pointer: P)` — and it gives you a `Pin<P>`. So the question is: why is it unsafe?
Right — so there's a safe constructor, which applies if `P::Target` implements `Unpin`. That one is safe; it basically just wraps the pointer. But why is the general one not safe? Um, the reason is: in the case where you have a type that is not `Unpin`, we want to avoid ever giving out a mutable reference to `T` — because if we do, you can `mem::replace`, and now the value moves. Imagine that I write something malicious, if you will — like, I write a `BadBox`. And all `BadBox` does is contain a `Box<T>`. And I implement `Deref` for `BadBox` (just ignore what the implementation is), and then I implement `DerefMut` for `BadBox`, and it gives out a mutable reference to the thing underneath — `&mut self.0`. Right, so this all seems fine. But here, for a brief second, I have access to a mutable reference to `T` in `deref_mut`. Whenever the `Pin` chooses to go through `DerefMut`, I, as the `BadBox` implementation of `deref_mut`, can mutate. So imagine that `BadBox`, in `deref_mut`, does a `mem::replace` with, like, `T::default()` or whatever — or, who knows, `mem::uninitialized`. I just moved the `T`. But the target is not `Unpin` — and I moved `T` after it's been pinned. And so clearly that's not okay. And so the reason that we need `new` to be unsafe when `T` is not `Unpin` is that we need to guard against this: we need to make sure that at no point does anyone get a mutable reference to `T` and do something like this. So the unsafety here also includes, like, "I promise that my `DerefMut` does not move `T`" — that's basically the promise you're making in `new_unchecked`.
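The attack is easy to write down. The sketch below compiles and runs — in this toy there's no actual self-reference to corrupt — but it shows `BadBox` breaking exactly the promise that `new_unchecked` made us give (all names invented):

```rust
use std::marker::PhantomPinned;
use std::ops::{Deref, DerefMut};
use std::pin::Pin;

// A !Unpin type that carries a tag so we can see whether it gets swapped out.
struct Fragile {
    tag: u32,
    _pin: PhantomPinned,
}

// A "malicious" pointer type: its DerefMut quietly replaces the pointee.
struct BadBox(Box<Fragile>);

impl Deref for BadBox {
    type Target = Fragile;
    fn deref(&self) -> &Fragile { &self.0 }
}
impl DerefMut for BadBox {
    fn deref_mut(&mut self) -> &mut Fragile {
        // Moving the pinned value out of place: exactly what Pin forbids.
        let old = std::mem::replace(&mut *self.0, Fragile { tag: 999, _pin: PhantomPinned });
        drop(old); // the previously pinned value has been moved and dropped
        &mut self.0
    }
}

fn main() {
    // new_unchecked makes US promise, among other things, that the pointer
    // type's DerefMut never moves the target. BadBox breaks that promise.
    let mut p = unsafe {
        Pin::new_unchecked(BadBox(Box::new(Fragile { tag: 1, _pin: PhantomPinned })))
    };
    assert_eq!(p.tag, 1);   // read-only access goes through Deref: harmless
    let _ = p.as_mut();     // safe API, but it runs BadBox::deref_mut...
    assert_eq!(p.tag, 999); // ...and the "pinned" value has been replaced
}
```

If `Fragile` really were self-referential, that swap would have left dangling internal pointers behind, which is why the burden of vetting the pointer type's `Deref`/`DerefMut` falls on whoever writes the `unsafe` block.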
So, Pinned. This Pinned is really just this: it's a way to say that your type is not Unpin without having to write this line, which you can't write on stable. That is the only thing that Pinned does — Pinned is just this marker type, and that is the only thing it does. Now, if Foo contains something that is Unpin, that does not matter. Pinning is just a guarantee about the immediate dereference, and nothing else.

"I hope the documentation for new_unchecked will explain the required invariants and the reasons for them." Yeah, so this is one of the things that's going on in this discussion — there's been a lot of discussion about exactly what the docs should say, and I learned a lot of this from reading the stabilization thread. I don't know what it says now; it has changed a little bit. But the plan is certainly that all the documentation will explain exactly what all the invariants are, because they are really subtle for Pin. It's really subtle why all these things work and why it ends up being safe. And I am positive that, throughout the things I've told you, there are probably things that are false — but you've got to start somewhere.

Let's see... oh, you saw my paper in The Morning Paper. Yeah, that was pretty fun. It's interesting, because I've had a bunch of academics say it the other way around: that it's interesting to see someone who's an academic be in Rust circles.

Okay, I don't think there's anything else I wanted to talk about for futures or Tokio or async/await. Is there anything that you feel like you would like to hear more about, in any of the stuff we've talked about so far? Or other things you'd like me to cover about this stuff? Anything that's still unclear that you want me to go into, or that you want me to talk more about — the directions we're going, going forward? I think now is sort of the time.
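On that marker type, by the way: in today's standard library it's spelled `std::marker::PhantomPinned` (older drafts called it Pinned, as above). A quick sketch of what it does — `Foo` and `Wrapper` here are invented:

```rust
use std::marker::PhantomPinned;

// Containing this zero-sized marker opts Foo out of Unpin -- the same
// effect as the unstable `impl !Unpin for Foo {}`, which you can't
// write on stable.
struct Foo {
    data: String,
    _pin: PhantomPinned,
}

// Unpin is an auto trait, so a container of a !Unpin field is !Unpin too.
struct Wrapper {
    foo: Foo,
}

fn assert_unpin<T: Unpin>() {}

fn main() {
    // Most types are Unpin:
    assert_unpin::<String>();
    // Box<T> is Unpin even when T is not -- pinning is only a guarantee
    // about the immediate dereference:
    assert_unpin::<Box<Foo>>();
    // These lines would NOT compile, since Foo opted out:
    // assert_unpin::<Foo>();
    // assert_unpin::<Wrapper>();
}
```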
I think I've given you the entire package, and if there are still things that aren't clear, I think now would be a good time to cover them. While you're thinking about that, I will go pee.

Let's see. "What's coming for the next stream, and when do you plan it?" So, if you missed it in the beginning: I wrote this website — building it was a previous stream that's been recorded — where you can vote on which upcoming stream ideas you'd like to see. It uses ranked-choice voting; it's pretty cool. You just pick a unique username and then you drag and drop the ideas into the order you'd rather see me cover them. The next stream will probably be in three weeks. Probably — I have a bunch of plans coming up, so we'll have to see; I'm not entirely sure. It looks like the next thing is going to be porting flamegraph — which is a CPU profiling tool, or visualization tool, that's really cool — to Rust. We probably won't port the entire thing, but it would be a slightly different stream: I wouldn't talk so much about async and stuff, but rather do a more performance-oriented thing, which I think could be really fun. So it looks like that will probably be the next thing, but feel free to vote there; that'd be a good idea.

"What is this channel about?" So this channel is about the Rust programming language, and in particular introducing more advanced concepts, and talking about and writing more intermediate- or advanced-level Rust code, where you can see real code being written.

I think, yeah, I think watching the full recording later is probably a good idea. There are a lot of details here, and it might be hard to follow in real time. I recommend you also go back and draw some diagrams for yourself, and see if you can explain to yourself why this is correct. There are lots of details that I've skimmed over or skipped, so feel free to try to dig into some of this documentation yourself. That's a good point.
Tokio has recently done a big documentation effort — it's still underway — but if you go to the tokio.rs documentation, there's now this "Going deeper" section, which has recently been written. It goes into things like the runtime model: exactly how do the runtimes work, how do the executors work, how does the work stealing work, how do you interact with I/O and reactors and timers, how would you build a runtime yourself and what would you need. There's also "Tokio internals", which is still being written and goes into even more detail — even more stuff about reactors and non-blocking I/O. I really recommend you go read this.

There's also the doc-push repo, which is all about improving the documentation for Tokio. Especially now that you've watched this stream, I highly recommend you go look at it. There are a bunch of issues about where we would like more documentation to be written. If you feel like there are things that you now understand — or even if you're still learning — try to contribute to this, because you could really help people understand how futures work, how Tokio works, how the ecosystem works, how async/await and pinning work, for that matter. Contributions to this, small and large, would be really helpful in improving the documentation of the ecosystem. And that includes the standard library too: if you can help improve the documentation for the stuff in the standard library related to this — especially futures and pinning, once that lands — that's great.

"Is there anything in Rust for CSP or actor-based concurrency?" Yes — in particular, there is a crate called actix, which is all about implementing the actor model in Rust. I haven't used it too much myself.
It builds on top of Tokio — I don't know exactly how the integration works — but basically you have a bunch of things that are actors that operate on their own state, and they only communicate by sending messages to one another. This is very similar to the Erlang model, and it's supposedly very good. It's primarily built for something called actix-web, which is a web framework that uses actors to build a sort of microservice-oriented architecture for websites, and I've heard good things. And now you may be better equipped to understand how actix works behind the scenes, too. I don't know how well the asynchronous part of it is documented, but at least now maybe you have a better understanding of how the internals might work.

For CSP? Not really. There's been some discussion about adding support for a yield keyword in Tokio, but you sort of need it to be at a lower level than that — you almost need it to be a compiler feature, basically, to give you continuations. Now, with async/await, it might be that you can express CSP — and yield in particular — using async/await, but I don't know that there are any immediate plans to do that. I think that's something you would need in order to get closer to true CSP.

"When implementing my own future, I'm fairly sure I understand what to do if I need to return NotReady, but maybe do a two-minute example of a simple custom future?" A simple custom future — so, there's one more thing I want to highlight before I do that. One of the goals of async/await is that you shouldn't need to implement Future yourself as much, because you should be able to just use async functions and async blocks and async closures, and await, to write out what you would normally have done in the implementation of the future, right?
So this basically means that in many cases where you would previously implement your own future and keep an enum where you walk between different states — are you connecting to google.com? are you writing stuff out? are you reading stuff back? — manually handling the state machine of that future, you can just write an async function instead, using await, and it will be a lot easier to write.

As for the contract with NotReady: it really is just a matter of, any time you return NotReady, make sure that something is going to wake you up. And remember that you can always assume that if something else returned NotReady to you, then it has arranged for itself to be woken up, so you don't have to do it. So this contract applies transitively.

Usually — and this is actually worth pointing out — in the futures crate there's a macro called try_ready!, and what try_ready! does looks pretty much exactly like this. The idea is that if you have some struct, like MyFuture, and you have a bunch of fields, and one of them is an inner thing that implements Future of some kind, then when you're implementing Future for your own future, you're going to have a poll, and I'm going to return — so, remember that Poll is just an alias for this. Or, I guess, if we're going to do the new fancy futures, then really it's going to be this now, right? Remember that futures 0.3 just got rid of the error, so we can get rid of all of that error- and Result-related stuff. So if you implement Future for your own future, you're going to do a bunch of things in poll, and at some point you are going to poll the inner future. Then you have to be prepared for the fact that the inner future might return NotReady, in which case maybe you can make no progress. But it might also return something, in which case you can make progress — and the try_ready! macro lets you handle this.
It's basically like the question-mark operator, except for futures: if poll returns NotReady, then just return NotReady from me as well; otherwise, give me the thing that was inside Ready.

The reason this works: remember, this means you're returning NotReady, and the question is, have you arranged for yourself to be woken up? The answer must be yes, because this inner future must also maintain the contract that if it returns NotReady, it has arranged for the task to be woken up. It returned NotReady and will be woken up; so when you return NotReady, you will be woken up, because it will be woken up — and you're on the same task, right? Any chained futures are on the same task. And so, when you're writing your own futures, that is usually the way in which you uphold the contract: you're using other futures that are themselves upholding the contract.

"Doesn't await use the yield keyword internally?" I sort of lied. Sorry. Okay, so if we go back to... down here... this, right? Remember how we talked about what await sort of expands to — see how the RFC says it roughly expands to this. As it says, the yield concept cannot be expressed with existing Rust code. And it's not really yield, right?
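Before following that yield question: here's the try_ready! pattern from a moment ago as a self-contained sketch. The Poll type, Inner, and MyFuture below are my own simplified stand-ins (no real wakers or Context), and futures 0.1's actual macro also threads through errors:

```rust
// A minimal stand-in for the futures crate's Poll, so this runs alone.
// futures 0.1 spells this Ok(Async::Ready(v)) / Ok(Async::NotReady);
// futures 0.3 / std spell it Poll::Ready(v) / Poll::Pending.
#[derive(Debug, PartialEq)]
enum Poll<T> {
    Ready(T),
    NotReady,
}

// The essence of try_ready!, minus the error plumbing: if the inner
// poll says NotReady, bail out with NotReady ourselves.
macro_rules! try_ready {
    ($e:expr) => {
        match $e {
            Poll::Ready(v) => v,
            Poll::NotReady => return Poll::NotReady,
        }
    };
}

// Fake "inner future": ready only on the third poll.
struct Inner {
    polls: u32,
}

impl Inner {
    fn poll(&mut self) -> Poll<&'static str> {
        self.polls += 1;
        if self.polls >= 3 { Poll::Ready("payload") } else { Poll::NotReady }
    }
}

// Our future polls the inner one, then does its own work.
struct MyFuture {
    inner: Inner,
}

impl MyFuture {
    fn poll(&mut self) -> Poll<String> {
        // If inner is NotReady, so are we. The contract holds
        // transitively: inner arranged a wakeup, and we share its task.
        let value = try_ready!(self.inner.poll());
        Poll::Ready(format!("got {}", value))
    }
}

fn main() {
    let mut f = MyFuture { inner: Inner { polls: 0 } };
    assert_eq!(f.poll(), Poll::NotReady);
    assert_eq!(f.poll(), Poll::NotReady);
    assert_eq!(f.poll(), Poll::Ready(String::from("got payload")));
}
```

The macro really is the question-mark operator for futures: NotReady propagates outward, Ready unwraps.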
It's really: store all the stuff that's on the stack into the struct that I generated — into this special future, the async future — and then return NotReady. But it's also not just that, because it also needs to record where to resume next time. Basically, the way to think about this might be: when the compiler constructs this enum for something like this code... so, we sort of claimed that this async/await stuff turns into an enum like this. What it really does is a match on self. If it's step zero, then it does this. If it's step one, then it does, like, self.waiting_on — and it sort of does try_ready! of that, right? So this is like the next value. And then, if it does get to that, then it sort of awaits that, and then it moves: self = step two. And in here it's sort of: self = step one. You see what I'm getting at? This is really what async turns into — like, self.waiting_on.poll(), and then self = step three.

So this is how it gets around the problem of not re-executing the previous stuff: each of the awaits is in a separate step, and it uses self — because self is an enum, it uses it as a state machine, so that it can move between the different parts without executing the previous parts again. And so this is why the yield inside of await is not really a yield; it's more of a "set self" kind of thing. Yeah, so this is why Brian mentioned a while ago that you can sort of talk about async as generators — and that's true, you can sort of think of them that way — but they're not really generators. They're just magically, or not magically, but very carefully constructed enums with matching. And they really are special: they require the compiler to generate very particular code.
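Here's that enum-with-match desugaring written out by hand — a hugely simplified sketch with invented step names, a fake inner future, and no wakers; this is not the compiler's actual output:

```rust
#[derive(Debug, PartialEq)]
enum Poll<T> {
    Ready(T),
    NotReady,
}

// Pretend inner future that needs two polls before it's ready.
struct Connect {
    polls: u32,
}

impl Connect {
    fn poll(&mut self) -> Poll<u32> {
        self.polls += 1;
        if self.polls >= 2 { Poll::Ready(42) } else { Poll::NotReady }
    }
}

// Roughly what `async { let conn = connect().await; conn + 1 }` becomes:
// an enum whose variants hold the live locals at each await point.
enum MyAsyncFuture {
    Step0,                         // haven't started yet
    Step1 { waiting_on: Connect }, // parked at the first await
    Done,
}

impl MyAsyncFuture {
    fn poll(&mut self) -> Poll<u32> {
        loop {
            match self {
                MyAsyncFuture::Step0 => {
                    // Code before the first await runs exactly once;
                    // then we stash the future we're waiting on in self.
                    *self = MyAsyncFuture::Step1 { waiting_on: Connect { polls: 0 } };
                }
                MyAsyncFuture::Step1 { waiting_on } => {
                    // The "yield": if inner is NotReady, so are we.
                    // The next poll re-enters HERE, not at Step0 --
                    // that's how earlier code is never re-executed.
                    let conn = match waiting_on.poll() {
                        Poll::Ready(v) => v,
                        Poll::NotReady => return Poll::NotReady,
                    };
                    *self = MyAsyncFuture::Done;
                    return Poll::Ready(conn + 1);
                }
                MyAsyncFuture::Done => panic!("polled after completion"),
            }
        }
    }
}

fn main() {
    let mut f = MyAsyncFuture::Step0;
    assert_eq!(f.poll(), Poll::NotReady); // Step0 -> Step1; inner not ready
    assert_eq!(f.poll(), Poll::Ready(43)); // inner ready; finish
}
```

The key property: after the first poll, re-entering poll lands in Step1, so the code before the await never runs twice — that's the "set self" that stands in for yield.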
Someone mentioned the Patreon link — oh, I should remove that from YouTube. Yeah. Because I'm an international student in the US, it turns out that I'm not allowed to have other sources of income, and so I had to shut down my Patreon, which is kind of sad, but such is life, I guess. In some sense, I didn't start doing this to make money either. I think it's fun to educate people, I think it's fun to write code, and I think it's fun to have people to interact with while building things and getting feedback as we go — so that is my reward. It was a bummer, but there's nothing I can do about it, sadly. But thanks, I appreciate the sentiment.

I think that's all, then, unless there are any more outstanding questions. It's a lot of content we've covered, so going back and looking over it one more time is probably not a bad idea — and drawing things; pretty drawings too. Well, in that case, if there are no more questions, I think we're just going to end it there. Four hours — that's pretty good. Pretty good.

I hope you found this useful, helpful, educational. I hope the ecosystem around futures and async computation in Rust makes a little bit more sense now. If you have other questions, feel free to reach out. I'm on Twitter — I showed this earlier, but I am this person — so feel free to reach out there, or send me an email, and I will be happy to answer. I'm also on Mastodon now, for those of you who are a little bit more paranoid about privacy and don't want to give Twitter control over your social sphere — I'm here as well.
If you want to follow or message me there, I pay attention to that too. Reach out if you have questions — and if you have ideas for upcoming streams, too; I will happily add those to the voting site that we now have. It will remember your votes, so if I do one stream and then a second stream, it will still tally the remainder of your votes. But you do want to check back every now and again to see if there are new things to vote for that may be of interest.

I also really highly urge you to contribute to the Tokio documentation push. Part of it is not even about Tokio: the doc-push is all about documenting basically all the stuff we've talked about today, the futures and async ecosystem. Of course it has a focus on Tokio, but it also discusses just: what are futures like? What's the execution environment like? Contribute to that, because it's going to help other people find better documentation to learn from in the future.

And yeah, I'm glad you all came to join. Have a great rest of your Saturday, and I'll see you in three weeks' time, I guess. Bye!