Hey everyone, let's do a mic check this time as well. Can you hear me? If so, just type something random in the chat. I just want to avoid the case where, ten minutes into the stream, people go "hey, we can't hear a word you say." Fantastic. Thanks. Okay, so I just wanted to point out my Patreon again. I'll post there whenever there are upcoming streams and also go through thoughts for upcoming streams. So if you want to subscribe somewhere apart from Twitter, feel free to join in on that. Today we're going to continue looking at the tsunami crate that we started making in the first and second videos of the stream. The tsunami crate, just to do a quick recap, is a Rust crate for spinning up short-lived EC2 instances that are used to run some benchmarks. The example is probably the easiest thing to look at. The idea is that you specify that you want a certain number of machines of each type. You are given an SSH connection to every machine, and you get to run some commands to set each one up appropriately. Then, after all of the machines have been spun up, it runs a closure where you get SSH connections to all of the machines, and at the end it tears down all of the EC2 instances. It's a fairly straightforward crate. If you remember from when we wrote it, the problem we ran into is that you're trying to spin up lots of machines, but if you want to, say, SSH into one of the machines, you're doing a blocking call, right? You're doing some kind of SSH connection where you're issuing a command and waiting for that command to finish. And this is unfortunate because it means that in theory you're setting up just one machine at a time. The way we got around this earlier is we used rayon to do a parallel iterator over all of the machines of all of the types, and that way we can do blocking calls within each one.
But now we're limited: we're essentially going to spin up as many threads as there are machines, or in the case of rayon, a number of threads equal to the number of cores, which means you can only set up that many machines in parallel. The solution to all of this is to use asynchronous I/O, through Tokio and futures. The idea is that we send off all of the commands that we want to run on the different machines, and then as answers come back, we handle them. This means we could have a single thread that is managing multiple machines concurrently. The way to think about this is that you have multiple requests in flight at the same time, and futures are a neat way to express this. Now, in order to do this, we're going to start by looking at the API that tsunami provides. So tsunami, again, we have the example up here. The basic idea is that you have a TsunamiBuilder where you set a bunch of options for the cluster that you're setting up, and then you have a run method which takes a closure that's given, essentially, handles to all of the machines; it executes synchronously and at some point returns a result. There are two ways we could change this API. The first is to have just the setup run asynchronously and have the run method not be asynchronous at all: run stays a synchronous call that is given a handle to the machines, executes, and at some point returns. Or we can make this entirely asynchronous. So we could say that when you do run, you pass in a closure that returns a future, and the future that's returned by run, when it eventually resolves, means your benchmark finished. We will probably do the latter, just because it lets us do more futures things. But in theory you could imagine that we kept futures entirely internal to the library, to do things like concurrent setup, but still exposed a fully synchronous API.
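The two API shapes being weighed can be sketched roughly like this. This is a std-only sketch with a plain thunk standing in for a future, and every name here is hypothetical, not tsunami's actual signatures:

```rust
// Sketch of the two API shapes discussed (hypothetical names, not tsunami's
// real API). A plain thunk `FnOnce() -> Result<...>` stands in for a future,
// since the point is only who gets to drive the work to completion.
struct Machine; // stand-in for the real per-machine handle

// Option 1: run is synchronous; setup may be async internally, but the
// user's closure just executes and returns.
fn run_sync<F>(f: F) -> Result<(), String>
where
    F: FnOnce(&[Machine]) -> Result<(), String>,
{
    let machines = [Machine];
    f(&machines)
}

// Option 2: run is asynchronous; the closure returns a "future", and run
// returns one too, so the caller decides when to drive it to completion.
fn run_async<F, Fut>(f: F) -> Fut
where
    F: FnOnce(Vec<Machine>) -> Fut,
    Fut: FnOnce() -> Result<(), String>,
{
    f(vec![Machine])
}

fn main() {
    // synchronous style: blocks and returns the benchmark result directly
    assert!(run_sync(|ms| if ms.len() == 1 { Ok(()) } else { Err("no machines".to_string()) }).is_ok());

    // asynchronous style: nothing has run until we "resolve" the future
    let fut = run_async(|ms| move || if ms.len() == 1 { Ok(()) } else { Err("no machines".to_string()) });
    assert!(fut().is_ok());
}
```

The second shape is what lets a caller run two entirely separate tsunami clusters concurrently, which the blocking shape cannot express.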
I think I want to expose an asynchronous API here, but be aware that we have both options. At the core of tsunami, there are basically two things we need to do to interact with the outside world. We need to talk to the EC2 API for setting up machines, launching them, tearing them down, setting up firewall rules, all those kinds of things. And then we need a second crate for doing SSH access. The insight here is that if you want to do things with futures, the things that you depend on need to be futures-based as well; they need to be able to handle multiple things in flight. Or you need to use something like futures-cpupool, which is a crate. What futures-cpupool does, essentially, is spin up a bunch of threads on the side; you can give that pool of threads synchronous, blocking tasks, it will execute them synchronously on one of the threads in the pool, and it gives you back a future that resolves with the result of that blocking call. This lets you emulate an asynchronous layer on top of a synchronous one. But it still has the drawback that you can now only run as many things in parallel as you have threads, even if they're not really doing CPU work, so your CPU could handle more work. The way we're going to do this today is to use the updated Rusoto library. Rusoto is a Rust library that deals with the EC2 API. The previous version of Rusoto that we used was entirely synchronous, whereas now, with Rusoto 0.32, Rusoto is fully asynchronous. So if we look here, where are the migration docs? Yes, if you look here, the change from Rusoto 0.31 to 0.32 is basically to make everything asynchronous. With this setup, if we look at the methods, all of the methods that we can call in the API take some request and then return a RusotoFuture that resolves into some kind of result.
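The futures-cpupool idea just described can be sketched std-only, with a plain channel standing in for the returned future (the real crate hands back a proper Future instead; `spawn_blocking` here is a made-up name):

```rust
use std::sync::mpsc;
use std::thread;

// A minimal sketch of the idea behind futures-cpupool: hand a blocking
// task to a side thread and get back a handle (a "future") you can wait
// on later. This uses only std; the real crate returns a real Future.
fn spawn_blocking<T, F>(f: F) -> mpsc::Receiver<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f()); // run the blocking work off-thread
    });
    rx // recv() later plays the role of waiting on the future
}

fn main() {
    let pending = spawn_blocking(|| 21 * 2); // task is now "in flight"
    // ...we could submit more tasks here before waiting on any of them...
    assert_eq!(pending.recv().unwrap(), 42); // "resolve" the future
}
```

This also makes the drawback visible: each in-flight blocking task occupies a whole thread, which is exactly what real futures-based I/O avoids.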
And the way to think about this is that we can set up multiple requests and have them be in flight, and then we can get the responses back asynchronously whenever we want. We'll see how we end up using this. The second thing to look at is that we need something for asynchronous SSH access. This is one of the things that we wrote in the previous stream on top of thrussh, which is an asynchronous SSH library written in Rust. async-ssh is a higher-level abstraction on top of that which provides things like asynchronous reads over channels you set up to remote commands. So we'll be using this library that we wrote last time for doing the actual SSH connections. I think we'll just dig in. Let's see. We're going to go back to our lib file. Now, this file has grown a little bit since the last stream because, I don't know if you've seen this, but there have been a number of issues that people have filed, or that we found in previous streams, where people have then contributed fixes that actually make the library better. I've also been using it for some personal projects, and that expanded the feature set it supports. But in theory, if you were familiar with the past streams, the library should be basically the same, just with some additional calls to the EC2 API. At the heart of this, what we're going to do, in fact, let's start with making the internals of run asynchronous, and then we will expose an asynchronous API on top of that in a second stage. Whenever you interact with something that is futures-based, you are probably going to need some kind of executor. This is something that keeps track of all of the futures you have in flight and then lets you do things like "keep running all of the futures in flight until this one resolves," or "I want you to also handle this additional future." That's generally what's known as a Tokio Core.
So the Tokio crate is undergoing a bunch of changes. We're going to be looking at the old version of Tokio: this is tokio-core and tokio-io, from before the Tokio 0.1 reform, which you can read about on the tokio.rs website. The reason for this is that it makes things a little more explicit, so it's easier to understand what's going on under the hood. The transformation from one to the other should be pretty straightforward, and that's probably something we'll do in a later stream. At the heart of it, when you want to do anything asynchronous, you have a bunch of futures. Actually, let's pull up the docs for futures as well. There's a futures 0.2 coming out before too long that makes a bunch of changes; we're going to stick with the currently stable futures. At the heart of it, a future is just a trait, and the core thing it implements is a poll method, which essentially checks whether this future has a result yet. Think of it like this: you try to do a read from a file, and you can call poll to see whether the read has completed. Or, if you're trying to accept TCP connections, you call poll and it will return "not ready" if there are no connections, or it will give you a connection if one is available. There are a bunch of extra methods that we'll use; you can almost think of a future a little bit like an iterator, or like a result that's lazy. We'll see this in plenty of detail. The idea is that if you have a future, you're going to need something to drive that future to completion. Something has to keep calling poll until it returns a useful result, because in the meantime it's more like a promise from the JavaScript world: it doesn't actually have a value yet, it will have a value in the future. And the way you do this is with a Tokio reactor, in this case a reactor Core. A reactor Core is similar to the event loop in Node.js.
It's just a thing that keeps calling poll on all the things you've given it. Notice that there's a run method here on Core that takes a future. run will just keep polling all of the futures that it knows about until the one you gave it resolves, and then it will give you a result, because at that point the future has actually resolved. Also, if you have a Handle, which is one way to register additional tasks with the Core, you can spawn a future. Spawning a future, notice, doesn't return anything. All it does is take the future and put it into the pool of things that should continuously be polled. This is useful for things like background I/O; it's also useful if you're implementing your own futures, but we will not be doing any of that today. So we're going to need one of these in order to drive the entire progression of all the calls to EC2 and various other things like that. We will actually only need one Core. Normally, you could imagine that if you had a really heavy workload, you might want a pool of threads that all manage your futures. In this case we will not do that; we'll just stick with a single Core, because we're not doing anything particularly CPU-intensive on the client. So the first thing we'll do is add a bunch more crates. We're going to need tokio-core version 0.1 and futures version 0.1, and we're going to extern crate those two things. We will also need Rusoto 0.32, because that's the first version that supports async. And then down in run, down here, we are going to need a Core. Now, creating a Core can fail. I'm not entirely sure why, actually; I don't know what the error condition for creating a Core is. Oh, if it's unable to start the thread. Well, that's fine. This already returns an error, so we can just do this.
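The poll-until-resolved mechanics just described can be sketched with a toy trait. This is std-only; the real trait is futures::Future, whose poll returns a Poll<Item, Error>, and the real executor is tokio-core's reactor Core, which additionally sleeps on real I/O readiness instead of spinning:

```rust
// A toy version of the futures 0.1 idea: poll returns NotReady until a
// value is available, and something has to keep calling poll.
enum Async<T> {
    Ready(T),
    NotReady,
}

trait ToyFuture {
    type Item;
    fn poll(&mut self) -> Async<Self::Item>;
}

// A future that becomes ready only after being polled a few times,
// like a read that hasn't completed yet.
struct CountDown(u32);
impl ToyFuture for CountDown {
    type Item = &'static str;
    fn poll(&mut self) -> Async<&'static str> {
        if self.0 == 0 {
            Async::Ready("done")
        } else {
            self.0 -= 1;
            Async::NotReady
        }
    }
}

// A trivial "executor": keep polling until the future resolves, which is
// roughly what Core::run does (minus real I/O wakeups; a real reactor
// blocks until something becomes ready rather than busy-looping).
fn block_on<F: ToyFuture>(mut f: F) -> F::Item {
    loop {
        if let Async::Ready(v) = f.poll() {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(CountDown(3)), "done");
}
```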
If you look back at, I think, the first stream, maybe the second, we did error handling using the failure crate, which basically means that we can just propagate all errors, which is pretty nice. Arguably we should provide a context for this. Context is something provided by the failure crate that lets you add additional information to errors that you throw, so that any caller gets to know not just what the underlying error was, but what our library was trying to do at the time the error happened. So in our case: failed to create core for running EC2 API. All right, let's see. The next thing we'll have to do is look at how the Rusoto API has changed. I think it's mostly the same. So Ec2Client you make the same way; this is the simple constructor. In this case, one of the things that we've expanded on is that we let the user provide a credentials provider, so we do actually need to use new here. But that, oh, it's not working. So creating a new Ec2Client is fine. Down here. Okay, so this is where we start making EC2 requests. Now, with futures, there are multiple ways we could do this. We could just immediately run every future: currently, basically the only modification we would have to make to this API is to put a wait here (or sync, but let's use wait). What wait does is say: take this future and keep running it until it completes. It's basically saying, I don't care about any other things that are concurrently running, because all I want to run is this one future. And this makes it easier to write relatively synchronous code. In our case, that's actually fine to do here, because there's nothing we imagine you might do in parallel here, so we could use wait. However, just to be nice, we have this Core that we made up here; we might as well use that Core to immediately resolve this future. So we're going to core.run that.
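The context idea just described can be sketched std-only, with a hand-rolled error type standing in for failure::Error and its .context() method (ContextError and parse_port are made-up names for illustration):

```rust
use std::num::ParseIntError;

// Std-only sketch of the idea behind failure's context: carry the
// low-level error together with a description of what the library was
// doing, under one error type so `?` can propagate everything uniformly.
#[derive(Debug)]
struct ContextError {
    context: &'static str,   // what we were trying to do
    cause: ParseIntError,    // the underlying error
}

fn parse_port(s: &str) -> Result<u16, ContextError> {
    // map_err plays the role of .context("..."): the caller learns both
    // the root cause and the operation that was in progress
    s.parse::<u16>().map_err(|e| ContextError {
        context: "failed to parse configured port",
        cause: e,
    })
}

fn main() {
    assert_eq!(parse_port("22").unwrap(), 22);
    let err = parse_port("not-a-port").unwrap_err();
    assert_eq!(err.context, "failed to parse configured port");
    let _ = err.cause; // the underlying error is still available
}
```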
That's going to end up, so remember, in the new async Rusoto crate, creating a security group gives us a RusotoFuture that will resolve into a successful creation of a security group, or rather, that will resolve into a result of the same type as in the non-async version. So when we core.run it, that means we take that future, run it until it resolves, and then get back the same kind of result that we used to have in the old version of Rusoto. Just by wrapping core.run around this, we are essentially going back to the old way we did things. Alternatively, as I said, we could do .wait(), which is basically the same thing here because the Core is local. But you could also imagine, for example, that we let the user pass in a Core. So instead of making our own, we let the user choose where we're going to execute these EC2 API calls, and in that case, we really do want to run them on whatever Core the user gave us rather than on some other place that we decided. Um... gee, that's a good point. So one of the things that we might want to do here: notice that currently run is entirely synchronous. If the user calls run, it will block until everything is finished. Essentially, by putting either wait or core.run here, we are forcing the user to wait until this finishes. This means the user can't, say, spin up two entirely separate tsunami instances and run them both concurrently. They couldn't do that with the current scheme. That requires us to go into full futures mode, so I think we'll do that next. Yeah, let's do that next. This code is going to change a lot more when we move to making all of run a future. But for now, for building it asynchronously inside while exposing a synchronous interface, this is a totally fine way of doing it. Here we'll do the same thing.
You'll find that the transformation is pretty mechanical for most of these things where we are only doing one thing at a time. Futures then feel basically no different; they feel very similar to just using normal results. It's only when you start wanting to do multiple things in parallel that things actually change. So the first place we're going to run into something that's not just running a future on a core is going to be when we get to... Oh, this should also no longer be necessary, I think; the new Rusoto fixes this problem. The old code used to have this problem where sometimes the connection to the EC2 API would break, and we would have to manually reestablish it. This is now handled transparently for us. And so we just do... I don't think we want to break it. Oh, I guess this terminates all of them, so there's nothing to break. That looks good. Let's see, down here this needs the core. The first place where we will not just be using core.run is going to be further down, when we do the parallel setup. And in fact, arguably we should be doing that up here where we launch all the spot requests; we could do all of those in parallel too. We'll go back and look at that later. "Would it be possible for the stream to support a lesser quality option?" I don't know if Twitch lets you do that, unfortunately. I think it just takes the raw stream that I send; it doesn't do live transcoding. Also, at lower qualities it's pretty hard to read the text. I will post the recording of this video afterwards, and it will be posted to YouTube, which does provide lower-quality versions of the video. But I don't think there's a way for me to provide multiple quality streams; if I could, I would. Are those all the EC2 things? Looks like it. So now we've basically taken all the things that were async and made them not be async. In theory all of this should sort of work. Let's see where it breaks. Line 342. Ooh.
Is this no longer... how has this changed? So there's no longer a default TLS client. So I guess this has changed to some kind of... ooh, that's a big file. Oh, I see. So they only provide transcoding if you have it. I mean, I would also be open to switching streaming platforms, probably not during the stream, but for the next one. I don't know if YouTube Live lets you do live transcoding; I'm not sure. This now takes a default request dispatcher. I guess we will do that. So this is going to be the request dispatcher, and this is going to be this. Why does it not let me do that? Why did they change this? So the implementation of simple currently just constructs a request dispatcher. Oh, that's where it is. Okay, so it's in the reactor. "An option would be to keep the resolution below the bitrate and FPS." Ooh, I don't know if I can easily do that. I experimented with using a lower bitrate; the problem is that the thing that gets impacted most quickly is the text, and it's a little unfortunate to watch a coding stream if you can't read the text. It's possible that some particular tweaking of parameters would help, but I don't really want to mess with it mid-stream, because then suddenly the whole thing breaks down and that would be sad. "P may not live long enough." So, fine. Sure. 'static. I don't know why that needs to be 'static. Does Ec2Client::new require its argument to be 'static now? It does. Oh, the implementation requires it to be 'static. Okay, fine. Yeah, so here is one of the places where you run into issues. The Core is entirely single-threaded; it's owned by the current thread, so you can't move it into other threads. It's also mutable: you should only be able to execute one future at a time on any given Core. And this means that, I don't know if you remember, but we pulled this defer trick where, if the code below us crashes, we want to still terminate all the instances.
However, this means that we're sort of moving the core into this defer. Think of the defer as a closure that gets executed at the end. So here it's almost like we want our own core for the termination path: if we crash and return, then this teardown now needs access to the core. In theory this should be fine; I just don't think the borrow checker is quite smart enough to realize that you never end up with overlapping mutable borrows. The problem here is that the defer creates a closure that borrows the underlying core, but that closure lives for the entire duration of the function, which means we can't use the core anywhere else. It might be that we could use a Handle here. This would actually go away if we made all of run futures-based as well; we are rubbing against the grain of futures here, right? Okay, so I think the easiest way to get around this for now, given that we're going to make all of this futures-based later anyway, is to create a new reactor Core here. It's a little unfortunate, but I don't think it really matters. And again, this will no longer be the case once we make this entirely futures-based, because then this teardown is just going to be a map of the error over the future we end up building. This will be a lot clearer once we make the whole thing futures-based; I just wanted to show the first transformation, using an asynchronous library underneath a synchronous interface. Great. Okay, so notice what we just did: we changed the library to use an asynchronous library underneath us, right? So now we're using the asynchronous Rusoto in a synchronous implementation of tsunami, and we've made all the changes we need to make. Currently it compiles and everything works just fine. Of course, this is not really all we wanted to do. We still haven't done asynchronous SSH. There are a bunch of other things that we need to do.
The first of these, and this is where things start to get interesting, is that currently we use a thread pool, right? We use rayon to set up all the machines in parallel, and really what we want to do here is create a future for every machine that we want to set up, and then wait for all of them to resolve. There are a bunch of different ways to do this. We're going to do it the naive way first, and then we can improve on it later. So we're going to keep a vector of futures. Remember, none of these are going to execute immediately; we're just going to keep track of a bunch of machine setups in parallel. What we're going to do is loop over all of the machines, so this is `for (name, machines) in machines`, `for machine in machines`; this could arguably be improved. The idea is that this is how the code used to be before we started adding rayon. Notice what it does: it loops through all of the machines in every machine set that we have, synchronously connects to each over SSH, and then calls the setup function that the user provided over SSH. At the end, the machine is set up, but it does so with all of the machines one at a time, and that's not really what we want. We want all of this to happen in parallel. However, that means we need a single thread that has multiple concurrent SSH connections open and is managing all of them, and you can't do that with a synchronous SSH setup. So what we need to do now is move to asynchronous SSH, which is the other crate that we wrote. So instead of ssh2, we are going to be using async-ssh, which is currently version 0.1. Great. If we look at the docs for async-ssh, notice that there are a bunch of things we have to do. They're all a little bit sad, but they're pretty similar to what we had in the SSH module that we wrote.
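The pattern being described, start every setup so it is in flight, then wait for all of them, looks roughly like this std-only sketch. Threads and channels stand in here for the single-threaded futures (collected into a Vec and combined with something like futures' join_all) that the real code would use, and `setup_all` is a made-up name:

```rust
use std::sync::mpsc;
use std::thread;

// Start all "setups" first so they run concurrently, then wait for each;
// this mirrors pushing futures into a Vec and then resolving all of them.
fn setup_all(n: i32) -> Vec<i32> {
    let mut pending = Vec::new();
    for machine_id in 0..n {
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            // stand-in for the per-machine SSH connect + setup work
            let _ = tx.send(machine_id * 10);
        });
        pending.push(rx); // nothing has been waited on yet; all in flight
    }
    // only now do we block, after every setup has been started
    let mut results: Vec<i32> = pending.into_iter().map(|rx| rx.recv().unwrap()).collect();
    results.sort();
    results
}

fn main() {
    assert_eq!(setup_all(4), vec![0, 10, 20, 30]);
}
```

The key property is the same in both worlds: the loop only starts work, and the waiting happens once, at the end, over the whole collection.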
So remember, we wrote this wrapper around ssh2 that tries to connect until it succeeds, does the handshake, and abstracts away some of the things you need to do when interacting with the SSH protocol. We'll use that same wrapper, but have it use async-ssh instead, which also means that we'll have to make it entirely asynchronous. So in our case, we're essentially going to run this code, sort of. So if you try to connect to a given machine, what it will really do... actually, here's another question. One of the problems we ran into in the old version of this was that we'd try to connect to the machine, and often EC2 takes a while to spin the machine up, so we kept trying to reconnect multiple times. Doing this with futures is actually a little bit annoying; the best way to do a retry in futures is... maybe we could implement it ourselves. Let's not worry about that quite yet. So we're going to need tokio-core here as well, and the idea is basically that we are going to connect to the given address. connect is now going to have to take a handle, because remember, connect is going to be asynchronous, so it's going to return a future. So this is now going to take a tokio-core reactor Handle, and the idea here is that anything connect wants to do in the background also needs to be run by the reactor that we end up spinning up.
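The retry-until-the-machine-is-up logic from the synchronous version can be sketched generically. This is a std-only, hypothetical helper (not async-ssh's API); in futures land the same thing needs a retry combinator or a hand-rolled future instead of a plain loop:

```rust
// Retry a fallible operation a bounded number of times, the way the old
// synchronous SSH connect loop kept retrying while EC2 booted the machine.
fn retry<T, E, F>(mut f: F, attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut last = f();
    for _ in 1..attempts {
        if last.is_ok() {
            break;
        }
        last = f(); // e.g. the machine may simply not be up yet
    }
    last
}

fn main() {
    // simulate a machine that only accepts the third connection attempt
    let mut calls = 0;
    let r = retry(
        || {
            calls += 1;
            if calls < 3 { Err("connection refused") } else { Ok("connected") }
        },
        5,
    );
    assert_eq!(r, Ok("connected"));
    assert_eq!(calls, 3);
}
```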
So imagine, for example, that connect wants to accept connections in the background or wait for packets in the background. It can't spin up a new thread to do that; that would be antithetical to what futures are trying to do. Instead, connect needs to tell the Core that we have in the main library routine that it should also be watching these file descriptors, and that's what a Handle is for: handing off jobs to some other thing that's going to execute them for you. And what this is going to return is actually an impl Future. So, do we want to be nightly-only here? In the next stable version of Rust we're going to have impl Trait, where we get... we get item and error... impl Trait. So we could type `impl Future`, for example, and this basically says: I don't know exactly what this type is going to be, so, compiler, you just fill it out for me. In reality this would be really nice to have, but it is currently a nightly-only feature; it will not be available outside of nightly until we get to Rust 1.26. So the way to get around this for now is that we box the future. The way to read this is that connect returns a future that resolves either to a session or to an error. "Does impl Trait increase compile times?" impl Trait does increase compile times a little; there was a really bad bug at one point where it caused a non-linear increase in compile times, but that has since been fixed. So think of impl Trait as basically the same as naming the return type (not that it's relevant for every project, but it's a curious detail): it's the same as putting the actual type there, except you don't have to type it out. The difference with using Box is that Box actually hides what the implementation of the future is. Box will reduce compile times, because Box essentially adds dynamic dispatch, which the compiler can't look through. That means the compiler can't optimize
through it, and if the compiler can't optimize through it, the compiler has to do less work. You also get very slightly less efficient code, so there's a tradeoff here; in general, I don't think you'll notice it much. Okay, so connect is going to return a future that eventually either gives you a session, an SSH session, or errors. What it's internally going to do is first try to establish a TCP connection. What does this do? I don't think we need all these map_errs anymore. Well, sort of. So TcpStream::connect returns an I/O error, or... okay, it adds a little overhead: the compiler needs to resolve the type itself, but it's not too bad comparatively. The way to think about it is that impl Trait is exactly the same as putting the real type there, which does mean the compiler needs to do more work, but it is the same as having the real type there, as opposed to boxing a trait object, which hides things from the compiler, which means the compiler has to do less work. So TcpStream::connect returns a future where the Ok is the actual TCP stream and the error is an I/O error. Normally you'd imagine that we could do something like this, where you get back the connection and then we establish a session on top of that connection; if it was an error, then we'd just propagate the error, and if it was not an error, we'd get whatever session returns. The problem is that Session::new also returns a result here, but with a different error type than TcpStream::connect's; that's why these map_errs are here. However, since we're using failure, we can get around this pretty easily by saying map_err of e, and for e, what we really want to do is turn it into a failure::Error instead, because those are the same across all of these different error types. So we're going to do this with... I wonder if failure has context for futures. I don't think so yet, unfortunately. Yeah, so it has these extra methods that are implemented on Result, which is really convenient, but I
don't think it has them on futures yet, which is a little bit sad. That means we need to construct our own context. There are a bunch of different ways to do this; I think the easiest one is to just do, like, Error::new... is that true? So if you look at the error type, for anything that implements Fail, which I think is implemented for... yeah. So the way to do this would be e.into(): e here is an I/O error, and we're going to turn that into... I wonder if we can do this: we do e.context(...).into() and say "failed to establish SSH connection." That way we don't need any of these, because the error type at this point is going to be just a failure::Error. Then we're going to do the same for Session::new. If you look at Session::new here, establishing a new session... ooh, it gives you a Result directly, nice. So this one can actually just use the Result context method: failed to establish the session. And then, ultimately, we're going to have to connect somehow here. So if we look at the example, we need a key file. In our case it's going to be pretty straightforward, because we already have the path that's passed into this function. We're going to need thrussh-keys though, I guess, which is a little sad too. Ideally the async-ssh library should re-export this; I say this, but we wrote the async-ssh library, so in theory we should have done that. But we want thrussh-keys... thrussh... oh, it's 0.9. So in here we're going to need an extern crate; we're going to use thrussh_keys. And so now we can load the secret key from the path that we're given. In theory, this could all now be in memory, because thrussh-keys also has a decode_secret_key that can just take an array that's in memory, so we would never have to write it to a file, but let's ignore that for now. Okay, so that's going to return an error, this is going to return an error, and authenticating a key is going to take the key that we constructed above and map it out so that we get an authenticated
session, and then we're going to add some context. Does authenticate_key return a future or not? It returns a future. So this is going to be session.context("failed to authenticate SSH session"), and now all of those things go away, and this would be a session, in theory at least. So now this would be an async-ssh Session. We don't need to store the stream anymore; the stream is handled entirely in there. We do still need to retry the TCP connection. So let's see what happens. This will probably break in all sorts of ways, such as async_ssh not being found. async_ssh... let's see what it gives us. Oh, need to pass the real username, good catch. We are not trying to authenticate this username; we are trying to authenticate this username. See, this is why it's useful to do live coding: it's basically like pair programming, where other people catch mistakes that I completely missed. So notice the kinds of things we're going through. Remember that our run method is still entirely synchronous; we are only implementing async internally currently. And so just getting Rusoto to be async was fine; now that we also want to do parallel setup, that's when we need multiple concurrent SSH sessions, which means we need asynchronous SSH sessions, which means we need to modify the way we do SSH sessions to be... didn't bind the handle? Oh yeah, good catch. That's also why rustfmt didn't kick in. And so even if you want to provide a synchronous interface, it's very straightforward; if you want to also have concurrency, then that's additional work. "Can't wait for compile times to go down." Yeah, it is getting better, but it's taking a while. Incremental compilation helps a lot. Also, in this particular case, it's because I updated my nightly, so it had to recompile all the dependencies too; when you compile just the crate, it's usually fine. I guess we need to fix up all the docs; we'll do that later. This is going to be an async-ssh Session. What
is... oh, it's generic over the stream. Ah, so this is going to be asynchronous over the tokio-core reactor, yeah. Notice how, now that I've compiled all the dependencies, this compile is pretty quick. Let's see, we're failing with a bunch of things. Line 696: yep, connect takes four parameters, that's true; this now also takes a handle to our core. Notice that this is still not really doing all that much in parallel, so we're expecting this to not fully compile yet. "No method context found for BoxFuture" on 710. Yeah, so notice that this session is now actually going to be a future, and what we want here is then, which is called with the result that we get. There are a bunch of different combinators: and_then, map, map_err, or_else, etc. and_then means "if the previous future resolved correctly, then try this future with the result", whereas then, which is what we're going to use here, means "when the previous future has resolved, regardless of whether it was success or error, call the following function". And the result is easily mapped with context, so here we can just do r.context(...). This is probably what we should use; this should be then. "What's the state of the incremental compiler?" Incremental compilation is now on in stable, I think, and it actually works really well; recently red-green incremental compilation was also turned on, and that helped a lot. Here, why does this map the error? Oh, that's right: the map_err here just prints to standard out, which is fine, we can keep that. But notice that at this point this is still a future; we haven't actually connected yet, we're just saying that when you connect, if it's an error, then do the following. And so this question mark doesn't do anything anymore. The other thing is, if it didn't error, so in this case and_then, we're going to have an SSH connection here, and the question is what we're going to do with that SSH
connection. And realistically, what we're going to do is the following. Ooh, that's some not very nice formatting. "Can't wait to be that fluent in vim." It takes a little while to get used to, but you can be very efficient once you are; there are a bunch of plugins that help, too. Okay, so the idea here is that if we successfully connected to SSH on this machine, then we go into this closure where ssh is now the connection, and then we want to print some debug information. And now this is where we get into some issues, because now we have to call the setup function the user gave us, but that setup function is not currently futures-aware, and it's going to have to be. Even if run itself stays synchronous, the setup function the user provides is now going to have to be futures-aware, and the reason is that the SSH session it's given is asynchronous. So this is now going to be... well, it's true that it's an SSH session, but our Session is now asynchronous, so this is going to have to return a Box<Future<Item = ..., Error = ...>>, and it no longer needs to be Sync, which is nice. The other thing is that it will also be given a reference to our handle, in case it wants to spin up things that need to continue running in the background. "Yeah, I think 1.26 is going to be really exciting." There are a lot of cool things lined up. So this means that our f now returns a future, which is what we want. And really what we're doing here, and this is one of the things you run into in futures-land: we're going to give the machine setup function a mutable reference to the SSH session, and the question is how we get that SSH session back, because we still want to execute more commands against that machine later. If we give it to this function that returns a future, that SSH session is essentially consumed by it, right? This is basically going to take one of these. So the way you
often get around this is that ssh becomes an Rc of this. The reason we want to make this an Rc is that now we can clone SSH sessions very easily, so we're going to derive Clone on it. It does mean that Session is no longer Send or Sync, but it also means multiple outstanding SSH connections within the same thread, which is exactly the setup we wanted here. It also means that cmd_raw is just going to take &self, and similarly cmd, because there's no reason for them to take &mut self when all they have is an Rc. Now, the reason this is all fine is that if you look at async_ssh's Session, it follows a similar kind of pattern, and this is generally true for futures-based things... wait, that's not true. Why does Session take &mut self? Feels like that's wrong. And... ah, okay, no, this is fine. I take all that back; it's all true, but we don't actually need it. So this is still going to take &mut session. The futures it returns should be allowed to depend on the session... is that even true? One thing we could do is this; the concern is how we get the session back. Actually, no, this is all fine; we don't need any of this. The reason we don't need any of this is on Session: notice how Channel does not have a lifetime parameter. This means that once you have opened a channel, you get your session back, or rather, you can keep reusing the session. So what this means is that the setup function, if it wants to open a connection... oh, this is going to be terrible. Okay, here's how we're going to get around that: we're going to do this. I can't do that either. What I was thinking was: just pass the session along in here, and have it give the session back at the end of the future. But that would mean we need to wait for the entire setup routine to complete. So we are actually going to have to pull the same trick I initially started writing out, and you end up with
this so often in futures code, and it actually bothers me a little. We're going to need cell::RefCell, and then ssh is going to be an Rc<RefCell<Session>>. Now it can be cloned; it will no longer be Send; this will be Rc::new(...). cmd_raw is going to take &self, that's fine. So the setup we're getting at here is that any time you want to open a channel, for example... the intention is that you can now clone all the references to a session that you want, because you have a RefCell to it, and then whenever you need to modify that session, you know that you're the only thread that has that session, because Rc is not Send. So even though there may be multiple pointers to it, you know that you are the only one currently using one of those pointers, and therefore, using RefCell, you can borrow it mutably. Unfortunately, the downside here is that open_exec... wait, no, it's just tied to the command string. Oh yeah, okay, so we're fine. The intention here is that you would take the mutable reference to the session up here, and then you would open a channel, which gives you the ChannelOpenFuture, and then you can do whatever you want with that. At that point you no longer need the borrow of self, because notice that the lifetime 'a here is just about the command string, not about the session, and so at that point you have the session again. This in fact means that we can do this. Yeah, we don't need this. Great. Sorry for the flip-flopping, but the observation here is that if you give a mutable reference to the session to setup, then setup can spin up channels as much as it wants, and the future it returns is not tied to the lifetime of that session, which means we get to hold on to the session that we hand in. "In your previous stream you said you prefer to
touch your mouse as little as possible; do you ever use Pentadactyl with Firefox?" So I used the vim bindings for Firefox for a while, something like Vimium, but there were so many cases where it just stopped working, or slowed down the browser, or I updated to Quantum and then it no longer worked correctly. It's pretty sad. There's a pretty cool vim-style browser based on Servo that someone is building, which is neat. All right, back to where we were. The idea here is that f is going to give us back a future of some kind; f is this closure, we're going to give it a mutable reference to the session and a reference to the handle, and it's going to give us back a future. So this means that here we're going to give it a handle; arguably we should make the handle up here: handle = core.handle(). This is going to be &handle. And that future can do whatever it wants; we just want to execute it in the background somewhere. And at this point, remember, we have our ssh back, so in fact we can... yeah, when this resolves, that's the way to do it: when this resolves, we don't even care what the type is, we're going to map it back to the SSH connection. This means we'll hang on to the session afterwards. So actually it would have been fine for us to give ssh to the setup routine, but this feels more ergonomic somehow; I guess we'll see, we can adjust this later. Okay, and then we want then. So, depending on what happened with the... I think specifically we want r here. The reason we want the then in here is that if we put it out here, it would execute even if we failed to SSH to the machine, right? The way to read these future chains is that anything at the same level of indentation is part of the same error path; errors just keep propagating along that same line. So if this block here errors, if we have this context, then this map_err will execute.
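The then vs and_then distinction being talked through here can be sketched with plain Result values; this is a stdlib stand-in of my own (the names and_then_demo/then_demo are made up), since futures 0.1's combinators follow the same success/error shape:

```rust
// Sketch of the semantics of futures 0.1's `and_then` vs `then`,
// illustrated with plain `Result` instead of actual futures.

// `and_then`: runs only on success, and receives the Ok value.
fn and_then_demo(r: Result<i32, String>) -> Result<i32, String> {
    r.and_then(|v| Ok(v + 1))
}

// `then`: runs regardless of success or error, and receives the whole
// Result, so it can rewrite the error (e.g. attach context).
fn then_demo(r: Result<i32, String>) -> Result<i32, String> {
    match r {
        Ok(v) => Ok(v + 1),
        Err(e) => Err(format!("failed to connect: {}", e)),
    }
}

fn main() {
    assert_eq!(and_then_demo(Ok(1)), Ok(2));
    // and_then leaves the error untouched...
    assert_eq!(and_then_demo(Err("boom".to_string())), Err("boom".to_string()));
    // ...while then sees the error and can wrap it
    assert_eq!(
        then_demo(Err("boom".to_string())),
        Err("failed to connect: boom".to_string())
    );
}
```

The same reasoning explains why placing a then at the outer indentation level makes it run even when an earlier step failed: it is handed the whole result, success or error.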
But if you put a then down here, that then would also execute, because that's still part of the same future. This inner then, however, will only execute if this inner future failed, and so in here we can do r.context(...) so that we get the log. And then at the end we're going to map with the SSH session and print out that everything worked fine, and then we're going to end up yielding the SSH session that we now have constructed. All of this is to say that sesh here will be a future that eventually resolves to an SSH connection on which the setup routine has been run, which is what we wanted. "I missed it at the beginning: what's a Session in this context?" A Session here is one of these; it's really just a wrapper around an async_ssh session with some niceties for executing commands. We might not even need it, because open_exec gives you something similar: you get a ChannelOpenFuture, and on that channel you can then do various things. So we might not even need this wrapper anymore, but it's basically a wrapper around an async_ssh session. "The reason you need multiple references to it..." You're right, actually, we don't. It took me a while to realize, but we could have passed it in and then forced the future that the user writes to yield it back at the end. We could totally do that, but it means that the user code needs to take care to pass the SSH connection along all the way so we can return it at the end. This way, they don't have to do that; we keep track of the SSH connection. I don't know if that makes sense, but hopefully. Okay, so this is now one giant future that will in the end resolve to the right thing, and what we sort of want to do here, you can imagine, is futures.push(sesh), and then we wait for all the things in futures. However, if we do this, we lose the mapping
between which machine each SSH connection is for. We could imagine carrying this along all the way through here; let's maybe just do that. So if this were enumerated with which number machine it was... let's do that: .into_iter().enumerate(). That gives us an (i, machine)... or is it the other way around? No, I think that's right. And now, at the very end, this thing can map into (name, i, ssh): this is the SSH connection for the i-th machine of this type. The other way we could do this is to keep a RefCell over machines, so that the last thing each future did was update its own entry. Maybe that's even nicer... nah, this is fine. And then what we're going to do at the end is something like, whatever the syntax is, for (name, i, ssh) in futures.resolve_all() (that's not actually the syntax, but just for exposition), then machines[name][i].ssh = ssh. We're going to do something like this; it's not going to be quite that, but that's the intention of where we're getting to. Now, some of you may recognize this pattern: we're doing a for loop where we create a single value, push it onto a vector, and then collapse the vector at the end. It turns out you can do this with iterators instead, and that's a lot nicer. So we're going to do: let futures = machines.
into_iter().map(...), in fact .flat_map(...). So type inference works; the Vec::new will be a vector of triples, you don't have to do anything more. This is going to be over machines, and now what we're going to do is take machines.into_iter().enumerate(), which gives us an (i, machine), and produce this session future for each one. Now futures is an iterator that will generate futures for all of these things, which is really nice, and then we can move this up here. Notice that at this point we have not actually constructed any of the futures yet; this is just an iterator that would lazily produce all of the SSH connection futures. And so now the question is how we resolve them all. Well, we're going to have a let mut machines = HashMap::new(), because we're going to be stuffing them into somewhere, since we consumed machines up here. Actually, one way we could get around this is to drain here instead, but let's not deal with that. The way we actually collect these: if you look at futures, and you look at Future, which has a bunch of convenience methods, what we really want is join_all. join_all conveniently takes an iterator over things that are futures, or can be turned into futures, and gives you a future that resolves when all of them have resolved. "Does it resolve them in the same order they were provided? This is important." Actually, no, it doesn't matter for us. Is there a select_all... we don't actually care about the order; I wonder if there's a thing that lets me do this more efficiently if I don't. Probably not. Fine, let's just use join_all. So we're going to do futures::future::join_all, and we're going to give it... I guess let's call this setups. This is still a future, by the way; it gives us a future that will resolve when all of them have finished. And join_all takes an
iterator and returns a future whose Item is a Vec. Okay, fine. And so this means that now we can do for name... in fact, we can do basically this, but this time we actually do want to run the future, so we will do that on our core, and then we will do machines.entry(name).or_default()... or_default is not stable yet, that's too bad; or_insert_with it is. It's going to be a little awkward. "I need more Rust practice; even after a year I need that many things." Yeah, it's one of those things where you need to work with Rust a while before these things become second nature. "Not a Rust programmer, but there's a pipe missing on the setup declaration line." That is totally possible, and that might be why my rustfmt is not kicking in. Yep, you are totally right. It's still not kicking in, though; why is it complaining? 702... no. "Did you forget to close this?" Probably; so I forgot to close something. That all seems right; anyone spot what I missed? I definitely missed something. This is where I should have... there's a really neat vim plugin called something like rainbow parentheses that colors all your parentheses by depth, so that these things become easy to match up. Yeah, how about that. But I haven't really changed that... that should be this else... so that means... oh, this, of course. Yeah, like rainbow; it's the same kind of thing. Actually, I guess we don't really care about the i, do we? We don't care about the order of these machines, so that means we can just get rid of this and this and the i from here, and just make this .push(ssh). In theory. "What's the plugin for the relative line numbers?" No plugin, it's built into vim: you :set relativenumber and then you get it. Fewer errors than I thought. "Cannot find... in future": that seems unfortunate, but not terribly surprising. We want to use futures, and we want self... oh, I guess we don't need self, because we have the external... let's see.
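The entry-API detour above, where or_default turned out not to be stable yet, looks like this; the map contents are made up for illustration, but the pattern is the stdlib's:

```rust
use std::collections::HashMap;

// At the time of the stream, Entry::or_default was not yet stable, so
// or_insert_with(Vec::new) is the workaround; both create the empty Vec
// only when the key is missing.
fn main() {
    let mut machines: HashMap<String, Vec<String>> = HashMap::new();

    for (name, ssh) in vec![("server", "conn-a"), ("server", "conn-b"), ("client", "conn-c")] {
        machines
            .entry(name.to_string())
            .or_insert_with(Vec::new) // stable alternative to .or_default()
            .push(ssh.to_string());
    }

    assert_eq!(machines["server"], vec!["conn-a", "conn-b"]);
    assert_eq!(machines["client"], vec!["conn-c"]);
}
```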
That's telling me I don't need... I like to get rid of these warnings because they just pollute the output stream. Here, we don't need thread anymore, that's pretty nice. We don't need Duration... I don't think that's actually going to be true once we implement this retrying logic. Here, line 140: this is because this is now different; this is now a tokio_core::reactor::Core... no, a Handle. This is now a BoxFuture. "From what I can tell, Rust's typing makes hole-driven development harder." Hole-driven development... I sort of know roughly what it is, but I haven't used it; this is a thing you do more in Haskell, right, where you say "I haven't defined what this type is yet"? Yeah, you can't really do that, although the compiler will just tell you that it couldn't infer the type, and it will usually tell you that fairly late in the compilation process. So you can sort of do it, but it feels a bit different. This is going to be just various cleanup. 757:28, ssh: oh, this is going to be a map. So notice: what we get back here is a future, and and_then says "if the previous future succeeded, then execute the following future", but ssh here is not a future. We just want to ignore whatever result the setup procedure gave us, because it just gives us unit, and instead yield the SSH connection. That should help a little. 708, "question mark somewhere": ooh, that's a good question. Yeah, okay, so this is a little awkward. We used to be able to rely on the fact that everything is a Result, and so it's the same whether you're parsing an IP address or connecting to a machine. But parsing an IP address is a synchronous operation, so we need this Result to become a future. It turns out you can do this pretty nicely: Result implements IntoFuture. Now this is an error... we're also going to need IntoFuture, but it's complaining about 741. I should just get the rainbow plugin,
shouldn't I? "If you want to try some things out..." Oh yeah, the Rust Playground is really useful for just testing things out quickly. I still have something wrong. Oh, that's what I have wrong. That looks a lot nicer. So the idea here is that for every machine we iterate over, we first try to parse its IP address, then make that into a future. The way you make a Result into a future is you make a future that immediately resolves to whatever the Result was. Yeah, someone pointed out that it would be nice to have a Rust REPL; it's a little bit tricky, but it is doable. And then once we have the address, which is going to be immediate, if we do indeed get an address, then we try to make an SSH connection. This is going to have to be and_then, because ssh::connect gives you a future. This then... no, that seems fine. This then, as we discussed earlier, will also capture the error from here, but I think in this case it's fine to log the error there; it's about the same. 710: yeah, so we run into this issue of wanting the error type of this future to be... basically, I want the error type here to be failure's Error, not, in this case, a Context. So if you look over... yeah, I can't point at the screen, can I. This error over here gives us a Context, whereas what we really want is the error to be a failure::Error, which is something we can cast basically everything to. We can do this with map_err. 747: expected Vec, found... oh yeah, right, so of course the setups here could fail. Wait, what does join_all... when would join_all ever return an error? That's not really what I want. Well, maybe the observation here is that join_all is sort of like collect, where if you have an iterator over Results, it will give you either all of the successes, if all of them succeeded, or, if any of them failed... actually,
it gives you only one error. So the E here is the iterator item's IntoFuture error type, the error type of the future it resolves into. So we will be given a single error, which, arguably, is kind of what we want. Remember we had to keep this errors list; in reality we won't need that anymore. Instead, what we would do here is this, and if it's Ok, then we have all our machines and we want to... then we will do this, and we don't have to check whether errors is empty. On the other hand, if we get an error of any type... actually, no, we can do this even better, because we're still in this synchronous world. Oh no, we do want to continue; I see, the reason is that we return the errors down here. Yeah, no, so this is fine, we can just do this: let all = ..., and then we just do this. The idea here is that if any of the setup routines fail, then core.run would return whatever error resolving that future gives us, and we would propagate that error by returning from the current function, that is, returning from run, which is what we want. It does mean that we wouldn't get down here, but that's fine. So this must be Ok. Great. So now we have all the machines... oh, we sort of want to keep the machine info; this is where it gets a little awkward. All of these want the public IP, and the public DNS. The observation is that currently this is a map from name to a vector of SSH connections, but when we invoke the main run method, we really want to give the full machine description. "Is using an inline use within a function definition just a matter of style, or does it have any practical consequences?" As far as I'm aware, there is no difference between having uses inside a scope or at the top as far as compilation is concerned. It does mean that that name is only available within that function. I like to do it in some cases where a particular
use is only used in a single place in the code. I'm not entirely sure why I do it; arguably there isn't that good a reason. I think, if I had to justify it: I like the top of the file to indicate the kinds of things this file cares about, and the uses inside functions to be the kinds of things that function cares about. Exactly, as Reqnop mentions, I don't want to pollute the global namespace. Okay, so we're going to have to keep the machines around here, which actually makes this a little awkward. The way we're going to get around this is to have this just iterate over &machines. So now this is going to be that, and this is going to map over &machine, a reference to each machine, which is what we want. The reason we do this is that now this iterator is over references to the machines rather than owned copies, and this is useful because it means that after this entire future has been run to completion, we are no longer borrowing machines, which means that below we can modify machines. So now we do need this i again: we don't construct a new machines here, instead we just do get_mut. This should never be None, because name was extracted from machines in the first place, and we should be able to get_mut(i), and we know that will be Some, because i was extracted from the machine list in the first place. And then we want to set .ssh on the machine to Some(ssh). "Do you think, as a reader, you prefer this, having some of the uses be inline?" ... Expected... oh, it's a map, that's why; and name is already a reference. ssh, line 34, a mismatched-types error: expected reference. I can do that; I can give you a reference. Slowly but surely. async_ssh: this is because we need the Result... we already have that here, so why is it not letting me do that? It's saying it can't find context for
Result, because it doesn't implement ResultExt. Why doesn't it implement ResultExt? Because E must implement Fail. I see: so thrussh's HandlerError doesn't implement Fail; that's probably true. There's Compat, but that's not really what I want. So the problem we're running into is that Session::new returns a Result, that's true, but the error type is not compatible with failure. Arguably we should have fixed this by making Session use failure as well, but we can map_err this, I think. ResultExt requires the error to implement Fail, and Fail is implemented for anything that's a standard error, so this basically means that thrussh's error does not implement std::error::Error. That's a little awkward. Hmm, I don't know how to easily get around that. One way is just casting the error we get out to a string, but that's not particularly nice. Maybe we could just... actually, let's check. So where is our... async_ssh returns a HandlerError from thrussh, and thrussh's error implements From for various errors, but does not implement Display, and that's really where the problem is, because if it implemented Display we'd probably be fine. I don't have a good answer for you there short of fixing async_ssh. So I think what we'll do here, and this is going to make you all hate me, but it's fine, is... yeah, I know, I know. It'll work, though. Okay, 940: now this probably has the same error; this also needs to... yeah, it's the same thing. Also, we need to move the then in here, otherwise we have the same issues we discussed before. So we're going to do this the ugly way, like so, and this is supposed to return a future now, which means that this future is going to be returned, and then it's going to map the ssh into a Session if everything works out correctly. So now it's a future. Line 34: it also needs to be boxed, because we don't have impl Future yet; that's very sad. I guess actually I could Box::new... "Context found, Error": so I guess we have to.
"Lifetime required." That is true: connect now needs a handle that lives for as long as you want this future to live. I also wonder whether the session is tied to that lifetime; I don't remember what we ended up doing there. Yeah, the only requirement is that we keep driving this handle, if I remember correctly: the reactor behind the given handle must continue to be driven for any channels created, but the session is not tied to its lifetime. So self won't be tied to 'a, but the future will be, because we need to pass in the handle here, which means that the handle needs to be alive at the time when we get to here, which is going to be at some point in the future, right, not until the connection succeeds. That's why this handle reference needs to be valid all the way until this future resolves. Well, technically until this midpoint, but we might as well just say until it resolves. The username also needs to last that long; the key does not, because we use the key straight away. No?
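The lifetime constraint just described, where a deferred computation that captures a reference forces that reference to stay valid until the computation runs, can be sketched without any futures at all. Here a boxed closure stands in for the future, and deferred_greeting is a made-up name for illustration:

```rust
// A deferred computation (boxed closure, standing in for a future) that
// captures a reference must be tied to that reference's lifetime 'a.
// This mirrors why connect's returned future is tied to the handle: the
// borrow is only *used* later, when the "future" actually runs.
fn deferred_greeting<'a>(name: &'a str) -> Box<dyn Fn() -> String + 'a> {
    // `name` is read only when the closure is invoked, "at some point in
    // the future", so the borrow must stay valid until then.
    Box::new(move || format!("hello, {}", name))
}

fn main() {
    let user = String::from("ubuntu");
    let later = deferred_greeting(&user);
    // ... time passes; `user` must still be alive when we finally run it ...
    assert_eq!(later(), "hello, ubuntu");
}
```

If `user` were dropped before `later()` ran, the borrow checker would reject the program, which is exactly the "handle must live until the future resolves" requirement in the transcript.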
Still a little bit off. Sure, command raw... yeah, the API here is just different: open_exec is what it's called. So cmd_raw is now also going to be futures-based, I guess, is the other thing we've observed. The ChannelOpenFuture gives you a Channel, that's fine. Really, this is going to give you a box future where the Item is the bytes that you read and the Error is an Error, as usual. The way we're going to do that is: take the SSH session, open_exec the command, then map_err... I guess we're just going to do then, "failed to execute command". This starts executing something on the open SSH connection, and then, if the command opened, we're given a Channel, and on that channel we then want to... we don't need to send an EOF, because that's handled for us. A Channel implements AsyncRead, and AsyncRead is the way you do IO over asynchronous channels. The thing to look at here is tokio-io: tokio-io has a bunch of convenience methods for reading from and writing to asynchronous streams, and what they all have in common is that they're similar to the methods you have on readers in general, but they return futures instead of returning the result immediately. That's what we want here: we want to return a future that gives you all the bytes. So in our case we're going to need tokio-io, version 0.1, and up here we are going to use tokio_io. Now, down here, we have an open channel and we just want to read all of its bytes, and it turns out you can do that with io::read_to_end. Notice that it takes an AsyncRead and a buffer; the buffer can just start out as an empty vector, and then it gives you a future that will eventually give you all of the bytes. It also gives you the channel back, but we don't actually care about that in our case. So we're going to do this: read_to_end. Yep, that's all we want. So, read_to_end: see, we need to
give it a buffer, which can just be an empty vector. For the future, I guess we also want .then, just so that we can add context, and in this case our context is "failed to read stdout of command"; this is going to be a format!. Interesting, I wonder whether we still need this loop. It turns out that SSH connections sometimes do a read of size 0 and it doesn't mean that the stream has ended; I don't remember how we handled that in async_ssh, so let's look at its AsyncRead implementation. Yeah, it handles this automatically: if it reads a 0 and it's not yet at EOF, then it reports that it would block, which means we don't need any of this extra stuff; we have that handled inside async_ssh, so all of that goes away. We don't have to wait for close, I think; that's handled automatically by the read for the channel. We still have a TODO of checking the exit status, and a TODO of returning standard error, and that's all fine. And we don't need this "okay" anymore, because that's handled automatically by read_to_end. So we'll move these up here, and that's now cmd_raw. And cmd, of course, just interprets the bytes as a string, so it will be basically the same as it used to be, in that it will just call into cmd_raw and then try to parse the result as a string; but in futures-world we're sort of going to wrap this inside out: we're going to have cmd_raw, and then we're going to map the bytes that we get through String::from_utf8, "invalid utf-8". Let's see what that does. It probably has the same issue somewhere; we need to do this song and dance of mapping the error to a string. That's really sad; we should fix up async_ssh to not require you to do this, but fine. cmd_raw, mismatched types... oh right, we need to box the future. I so want impl Future, because I don't want to have to write all this boxing and wrapping stuff. Now what: expected Context, found failure::Error, line 59. Yeah, so
this is going to be more of the map_err(Into::into), I think. We can just do... let's see what falls out the other side of this. Expected Context, found... "if you used impl Trait it would only be unstable for 6 weeks" — that's true, I could just do it, but... yeah, I think for some of these live coding sessions it's also useful for people to see the workarounds for things, and in this case it's definitely a workaround — this is just how you have to do futures for now. Oh right, and it gives you back the channel as well, and in our case we don't actually care about the channel anymore, so we will... is there a close for Channel? I don't think so. Yeah, so I think we're just going to drop the channel and keep the bytes. Into... into... I wonder if I can make this error — oh, because we get a... I guess this is already an Error. This kind of error juggling is not normally something you have to do, although it is true that with futures, very often the problem I run into is that the types don't quite match for the thing that I don't care about. This is in theory something that failure should fix, but here... oh yeah, so the other thing here is the command that you want to execute: the future we return is actually tied to that command string, because we use it in all these print statements, right. So we want the command to be valid until the future has been resolved. We could remove this restriction — given that we use it straight away — by cloning the string, or by not including it in the debug output, or by creating the debug output immediately, but let's just add the lifetime in for now. Expected Context again — this also has to map_err(Into::into), because a String::from_utf8 error is not a failure::Error. Ooh, nice — so it type checks! Now we get borrow-check errors. That's cool; that means we're pretty close. Also, I want to get rid of rayon. Where's rayon?
Hi rayon — I don't need you anymore, we have futures now! With the power of futures. 748... um, machines. So this is where non-lexical lifetimes would fix our problems, which is also still unstable, right — it's slated for 1.26 or something. So the issue here is that we borrow machines here; that borrow technically only lasts until we resolve this future, because the borrow of machines is into setups, and setups is dropped here. But without non-lexical lifetimes, the borrow of machines is tied to this entire outer scope, and so we're not allowed to start modifying machines here, because it thinks that the reference to machines is still alive. This will be fixed with non-lexical lifetimes, which is currently an unstable feature. The workaround is this — and it's painful — but basically we do this: we essentially create an artificial scope, so now the borrow of machines only lives within the scope, and we use setups within the scope, so at this point the future is considered ended. So here we could, just for our own sanity... why is it lying to me? Okay, now what. "Cannot borrow machines as mutable" — still telling me that on 702. That's true, I do borrow machines there. 753 — is that the one? And it's telling me that the borrow ends at 771, so I must be missing something silly here. This should be totally legal; why is it not letting me do this? That borrow of machines should no longer be there. Interesting. It is lying to me. "How many hours did you spend on this project so far, approximately?" Let's see — so we started tsunami, we've done two four-and-a-half-hour sessions on tsunami, and then we did a five-hour session on asynchronous SSH, and then this will probably be... actually, this might be shorter, but somewhere around four hours. So it depends how you count, but on this crate alone I'd say a bit under 10 hours. "This method looks a bit long" — yeah, you're not wrong, we should arguably split this up. In fact, that would be a good first issue for someone to tackle. I've actually gone through all the
tsunami GitHub issues and tagged the ones that I think are useful for people who are either unfamiliar with Rust or unfamiliar with this project to start out with, so you might find some of those useful. So one way we could try to deal with this is: let's just turn on the NLL feature... and it's going to have to recompile all the dependencies. I'm pretty sure this code is correct, for what it's worth. I've actually found that this crate is useful, like, in the real world: I recently changed from some handwritten code to using tsunami, and the diff is just removing a bunch of code — it's pretty nice. And this is now for running lots of benchmarks on EC2 for a research project I'm working on. "Can you use Rust in your PhD project?" So, I am currently working on a new type of database that's written entirely in Rust. We've been working on it now for almost three years, and it's about 40,000 lines of Rust code. Rust was a somewhat questionable choice when we started out, but I think people have been excited about it by the end — the other people I work with on it, that is. If you want to see the code, it's on here; it's not particularly well documented yet, but that's where it is. Whenever you have to recompile a crate from scratch it takes quite a while, but as we've been seeing all the way through, if you're just recompiling — you made a change and you want to recompile — that's usually very fast. I'm really wondering why it's not letting me do this, because the borrow... I know what this is! We don't need NLL — it's because we need to do `let machines = &mut machines`... what?
Apparently I was not right. Well then, fine. That's very weird. That's very weird. For what it's worth, if NLL just gets rid of this issue, then I will just switch to it being nightly, and then we can also use impl Trait. So maybe I should hope that NLL fixes this problem — that way I don't have to think about boxing futures anymore. Alright, let's see what it gives us. There's a bunch of other errors here — what, command? I guess we can fix that while we're at it. So these are probably all... who? Hello? "Cannot borrow as mutable"... SSH, that's mutable... who? "Not so familiar with natively compiled languages — couldn't one speed up compilation time significantly by caching?" Caching what, exactly? So that's what incremental compilation does. And notice that normally it doesn't compile any of the dependencies: if I kill this now and then run it again, it will only compile my current crate and none of the dependencies. So it is caching all the dependencies. It also, in fact, does incremental compilation, so it caches stages of compilation, and so generally the recompile of the current crate, if you just make small changes, is very fast. NLL seems to be stuck here, which is its own kind of interesting. Let's make it happy about that. Do we think we found a bug in NLL? Should not be... I mean, it is doing some CPU stuff. Well, NLL, I hope you'll save us here. Why would this not be okay?
I wonder if it's the closures it's complaining about, because these closures do still have references to machine. So remember that machine here is also a reference into machines, and that means that when we use machine here, that's also a reference into machines. So it might not detect that the closures also disappear at the end of the scope — but adding that scope shouldn't fix that issue. Does not look like NLL is happy about this code; it's taking way too long to compile, so I will skip that. Okay, well, there's a straightforward way around this, I guess, by just creating a new copy of machines. It's a little bit sad, but sure, why not. We found a compiler bug, yeah... so let's see what this means. That sounds a lot like that one; I'll probably post something to that issue after the stream. That seems wrong. "If you've already used the same dependency in a different crate" — I don't think it caches dependencies across different projects. I think there have been some proposals for this, but it doesn't currently do it. Okay, well, we don't particularly care about performance here — this is not performance-critical code — so what we'll do is machines2. That's a terrible name for this, but we will just do this, and now this can be... and now this is free to modify machines. Terrible, terrible hack. "That's because Machine isn't Clone" — which is technically true, but machines should be clonable. That makes me a little sad. So the problem here is that Machine isn't really Clone, because you can't clone an SSH session — well, you can clone the SSH session if it's None. So I guess the way we're going to do this... this is terrible. I sort of want to implement Clone only for Machine, like, internally... but no, implementing Clone for Machine is way too risky. Here's what we'll do instead. Here's what we will do instead: we'll go back to having this be into_iter, we'll have this no longer enumerate, we'll do all the things we did previously
except just name and ssh, and then we will do for... let's see, machines. So this is basically the thing we started out doing and then thought we could get away with not doing. machines.entry(name)... and then we need to chain the machine along here, so we're going to do: let dns = machine.public_dns.clone() — this is just going to be dns... it's going to be dns... and this is now going to be: this machine, machine.ssh = Some(ssh), and then it's going to yield the machine, and then here we simply collect all of the machines again. Let's see how that works. It's going to complain about some things, specifically... now, this is because we're using into_iter: name here is now not a reference to a String anymore, it's an actual String. 751: "found reference to Machine" — no no, this is going to be into_iter as well; we're going to own the machine. "Does not live long enough" — sure it does. "Does not live long enough"... this is fighting the borrow checker. "Borrowed data cannot be stored here" — right, so here we're going to need to keep a separate clone of the name for every iteration, and then now f up here... actually, we're going to need a separate clone for each one. "Do you find the Rust compiler too verbose in cases like this, where you know what the errors are?" Like, if I basically look at a single line of the error, then yes. Or rather, what I really want is a cargo check max-errors flag or the like, just so I don't have to keep scrolling up, because I'm going to fix them one at a time, right. Ooh wait, this handle... not clone... see, I should not need to do any of this. I guess now this doesn't need to be moved, so that means... I'm a little bit surprised that I can't do this. I guess I can only move it once — that's the problem. Just bear with me. Boy, do we have to deal with all these usernames. So we're going to have to do the same thing here: we just want to capture a reference, we don't want to capture the whole thing. Do these all really need... it's only this that needs to be moved. Does it even need to be moved? I don't think it
needs to be moved, and if it doesn't need to be moved, this needs to be... no, this also doesn't need to be moved. Nah, it does. A comment from myself — oh yeah, how about that. It's true, that's the feature I want and have wanted for a while — look at the date on that one. What I mean: "usernames doesn't live long enough", needs to live until 744 — but it does! I feel like the compiler could be more helpful here. I don't know exactly how, but basically, I want to say which things to move and which not to, and I can't easily do that: either the whole closure is move or it is not move. setup_fns down here... now name is the reference again, finally, and machines... now name here is gonna have to explode again. 708. "Is there a plugin that would display the errors in the buffer with squiggly lines?" Oh, you mean like this? It needs to run in the background... like that, like this. So yeah, I use ALE for this as well, and it is really useful — it's just that I don't want it to run every time I type or every time I save the file, so I have it bound to a key. So I could just look at those, but it's sometimes useful to get the context — like, I want to know which part of this it's highlighting. "Piping to head works" — it's a little bit awkward, actually, because you lose color; you have to pass --color=always, and then it's a little bit annoying. It's now struggling with name, and the reason for that is that we have a flat_map. You know what, we don't need to do this — we can use iter(), because we allocate a new machines, so it's totally fine for all of this to use references. That means we don't have to clone name... ah, no we do, because we need the actual machine. That's why — that's annoying. Hmm, yeah, this is something I keep running into with flat_map. So the issue here is: imagine that you're iterating over the set of machines — right, this is Rust, not C++ — so imagine here that we're iterating over all of the machine types, and then within each machine type we want to iterate over all of the machines in that
type. We're moving the name into the closure, but we can't really do that, because there might be multiple machines. And so the question is: how do we get the name into here? And I think the answer is going to be, we're going to hack it a little, and the into_iter is going to become a drain — because then the names will remain, so we can keep a reference to the name. This is very much a hack and it makes me pretty sad. "name does not live long enough" — do they all need to be move now? Probably. "In Emacs it shows me the exact position of the error, and I get the error message to pop up via a shortcut" — yeah, so I have the same thing: Shift-L will run cargo check and keep all the errors, and then I can move through them with Ctrl-J or Ctrl-K, or I can open up all the errors. I just don't use that, because moving through them this way is basically equally fast. Let's see, so what is it actually complaining about now? "Cannot get handle" — sure I can. Well, we're getting closer. In theory we could do this by chaining the machine along instead of taking this reference to it. This shouldn't really be necessary — I really want all of this to not be necessary. Maybe we can get away with not having these moves; I think the only move should be the one... "Would you use racer for completions?" I use RLS, which uses racer. Or actually, in this case I use RLS for completion, and I use ALE — which runs cargo check with JSON output, over LSP — to get errors. It really doesn't want to make me happy here, because here we need to move in usernames. Okay, this is definitely borrow hell. Let's see — usually when you're in borrow hell, it's because you're doing something that you shouldn't be doing. In this case, what is it we're doing that's silly? This move... okay, so now it's saying "name does not live long enough", which is false. In theory I should be able to do: username = this, and let f = this — those are shared between all the machines, and we should only need references to them. And that fixed all our borrow errors! I
don't need either of these anymore. Yeah, so often, if you get all these borrow-check errors: take a step back and undo all the magic trickery you were trying to do. I think this can also go away... no, I think that's old — this is one of the reasons I don't press Shift-L very often: the errors stay there until I re-run the checker. Oh, you're saying that's why it suddenly allowed all my things to go through — because now it couldn't parse the file anymore? Yeah, that's too bad. Yeah, you're totally right. It is getting better, though. "name does not live long enough" — this should now just say username... "username does not live long enough". Why does username not live long enough? username is a reference to a thing that's outside of here, so I disagree. Yeah, that's too bad. So it's claiming — I think this ties back to our original problem — it's claiming that all of this has to live back to this line, which is just not true. So for whatever reason, at that point, there's got to be some other variable that I'm not thinking of. I want to undo all of this, back to where we thought it was NLL, and the reason for this is that this is how the code should look. It seems like all of these issues stem from the same thing. "Do you run into these borrow issues often?" No, this is actually the worst I've seen. It was harder when Rust was newer, and NLL has fixed a bunch of the issues. But this all seems to stem from one particular problem, which is that for some reason it thinks that the borrows made inside of here need to live down to the end of this else — which is just not true. What that suggests to me is that we're moving something into one of these closures that exists outside of this closure. Let's deal with the bottom ones first, because these seem straightforward. Line 63 — yeah, 63 is saying that the closure may outlive the current function, but it borrows cmd. 39 is probably the same, right? So here it borrows handle as a reference, so it's fine to move it. And then
my guess is 70 is going to be the same. So 70 needs... oh, this is a move closure — something has to happen. So here, let's see, it's going to live long enough. So remind me again: why can't we just not do this, not have to do any of these things, ignore the result of all of these, and instead, here, do machine.ssh = Some(ssh), and then return nothing? How about that! Inside of here we're going to have ip = machine... actually yeah, machine.public_ip. Now let dns = machine.public_dns, and machine... ip. So the intention here is to never need machines outside of here, and if we don't, then we can just modify it directly in here instead — so that might just fix all our borrow issues. Expected... expected Session — Machine has an Option of an SSH Session. It's telling me ssh is shadowed — oh thanks; shadowing is very nice, but in some cases this looks a lot nicer. Okay, so now it's going to yell at me for using machines down there, and we'll deal with that separately. It's telling me that "name does not live long enough", which I believe is just not true. username is going to be... now this is going to be... we no longer need the enumerate. It's probably going to require that we move into these now. "username does not live long enough" — in fact, usernames and fns are not used past here, right? So it's actually totally fine for us to not do any of these things; in fact, we can probably move those into this... stored outside of its closure... I really don't know why it's hating on this so much, but okay. setup_fns = setup_fns, and now in theory we should be able to move into here, because there's nothing really stopping us from doing that. username... those are just going to be straightforward: this is going to move in those references, which is fine; username is going to move in the username, because username isn't used elsewhere. Here, what is this? "Cannot move out of captured outer variable because it's borrowed" — that's fine. "name does not live long enough" — it's true that these will have to be ref name... not that any of that should matter. Very
weird. It's like it's tying all of this to machines — why is machines leaking here? Why does it think that we're holding on to it after... let's fix that issue. I guess we can move in both name and dns here, because they are just references. This ip as well should be a reference here. All of these moves should be totally fine. Well, that's a lot better! Really? Really. Yeah, so there's definitely some NLL factoring into that, but that was just machines leaking, and here I guess we just had to do the right song and dance to get all the things to be references. That's really awkward. Well, it does the right thing now, so that's good, and then it gives us the machine. So in theory we now have all the things we wanted — with that last part taking most of the time, which, you know, is all kinds of frustrating. Okay, so if we now check this with the example — so the example will not compile, and the reason for this is that the example currently is not futures-based. So in line 14, the ssh that comes in here — there is a command... see, this is all fine; in theory this should all do the right thing. Commit — yeah, that's right. I'm not surprised you lost track; that was a very deep and unfortunate dive into some annoying borrow checker details. When I upload this video I'll probably put in some annotations saying "we got stuck here for a moment, skip ahead to here where we solved that problem", because the code didn't really change that much — the biggest difference is just getting the right references in the right place, and this kind of borrow-checker fighting is not really all that useful to follow. But hopefully now we'll return to somewhere where things are interesting and useful. Let's see, so why does the example... so in theory our example should look just the same, right? Because run still has a fully synchronous interface — except for the SSH handle it's given: machine setup is given an ssh, but you still have command and you still have map; it's just that this will be a future, and
they won't really know — but that's fine, we don't really need the callers of this to know that things are running asynchronously. "The type of this value must be known in this context"... so let's see. The machine setup — MachineSetup takes an F, and it takes... oh right, this now takes a Handle as well. It might be that this handle is never useful; it might be we just want to hide the handle from the user, but it feels more right to include it. And here, the map we're using is really a futures map — remember, we have changed the internal interface here; it's just that the same code happens to basically work. futures 16... "expected Box, found Map". We can do better than this by doing the following. So the issue it's giving us now is that, currently, we've said that the closure f should be a function that returns a boxed future — this is where we'll have impl Trait, in the future. What we'll do is make this a little more ergonomic by saying that the return type of f is going to be some type parameter that's an IntoFuture with this item. So what we're going to box is something that takes an ssh and a handle, and we're going to call that with ssh and handle, and then call into_future on the result. So this is basically us wrapping the closure — yeah, this is Neovim — this is us wrapping the closure that the user gives us, so that the user doesn't have to do the whole boxing stuff. So this now lets the example compile without knowing anything about futures; in fact, it doesn't even need the use of the trait anymore, does it. Okay, so now what we have is a fully asynchronous implementation of run's setup, so in theory it should connect to all the machines in parallel and everything should be fine. The way we're going to test this is by running the AWS example that we have from last time and seeing whether it still works. But notice how... in fact, if you look at the example file, this file has — apart from now using the
Future trait — not changed. So the API we expose is still basically the same, except that it's futures in all the setup. Hmm. Of course, the next step is then going to be making run entirely futures-based, because externally, run currently looks like a synchronous function. We should see whether we end up getting some stuff here... arguably I should also commit all this. "I am using Xmonad" — although I don't particularly... Xmonad is fine, I don't really care about Xmonad; I do care that it's a tiling window manager with decent hotkeys, and Xmonad is that, for now. The bar at the bottom is Polybar, which I've actually been really happy with. A little faster... in fact, this will probably fail, because we don't have TCP retry. That's also a little unfortunate; in fact, arguably that's something we'll have to do before anything else. So — remember how, in the old version of the code, when we connect to a machine, what we did is we tried to SSH to... or rather, connect to port 22 on the machine, and if it failed, we sort of tried again for a little while? The reason for this is that EC2 can take a while to set up instances, and it might give you connection failures initially, but that doesn't mean that you just want to give up. I guess we'll see what this ends up giving us. Gave us some instances — that's a start. See if it actually does anything else useful. It's a good question, actually, how we're going to implement TCP retry. I think there's a futures loop — we'll do this tail-recursive loop. Great, so we'll use this thing... it actually did the right thing! That's amazing. "Do you not have Rust language server / racer / neomake set up?" So I do have it set up, but I don't want it to run on type or on save, so I have a hotkey for it: if I'm in the codebase I can press Shift-L and it will run all the checking for me. I do have auto-completion, and that's all through RLS and the language server. Well, that worked surprisingly well — but let's implement TCP retry anyway, just because it might be useful. So recall
what this code used to look like — source. So if you look here, we used to loop and retry the connection, and we're going to do the same thing here, except that we have to do it as a future, right. So we're still going to get the key here to use... fine. So what we're going to do is use futures' loop_fn. loop_fn takes some initial state — I don't think we actually need the state, so the state can be unit — and then it takes a closure (is that right?) over the state that returns something that resolves into a Loop. That's its own kind of weird. Okay, we're going to ignore the argument... does it need to eventually give an S thing? So what we want to do is basically the same thing as what we used to do, we're just going to make it asynchronous this time. So we're going to have this, but it's going to be the connect — yes, this will move — and what we're going to do is: either this just resolves correctly... I guess what we'll do is .then, and we're going to match on r, and if we successfully connected... "I've never seen anyone that has the Firefox UI at the bottom" — so, I actually really prefer it this way. Currently my screen is horizontal, but at work I have a rotated screen, so it's vertical, and it's fairly high up — I use it as a standing desk — and then it's just really nice to not have to look up all the time to find the tab bar at the top. It was a huge pain to modify the Firefox CSS to do this correctly, and it still sometimes does the wrong thing — like, notice where the auto-completion bar is, it's in the middle of the screen — so it's still not perfect, but it gets there. Okay, so what we're going to do is: in every iteration of the loop function, we try to connect, and then if we successfully connect, we return with... so if we successfully connected, then we want to break from the loop, so we'll do future::Loop::Break(c). What does the type of c need to be here? So if you break the loop, there are no
limitations on T, right? Great. And this LoopFn — so LoopFn implements Future, I assume, where the item is T — so the item is the thing we break with — and the error is... ooh, the error of the future that we return from here. Okay, so if we get an error, then: if we want to retry, we just do future::Loop::Continue — and this is an Ok, this is an Ok — and if it's actually an error, then we return the error. And now this is going to be the connection. So the idea is that we have a looping future that keeps iterating on itself until some condition is true — it's basically like a while loop, but inside of a future. So here what we're saying is: whenever it's called, try to connect; after the connection has either succeeded or failed, then if it succeeded, break from the outer loop — so resolve the outer loop future into the connection's value. If it failed, and we have not yet passed — so this is currently just, if it hasn't passed two minutes — then continue, so run the loop function again and then run its future. Otherwise, return an actual error — resolve the future into an error. Now, this almost works. The difference is that we also need to set a timeout for the TCP connection itself. So if we look at tokio_core::net::TcpStream — so, connect... is there a way to set... what is it we set here, connect timeout? I don't know if there is a connect_timeout. In fact, the way that probably works is we would need tokio-timer. Hmm — so the observation here is: we want to connect, but we want to set a timeout on that connection attempt. There are a bunch of different ways you can do this, but with futures, tokio-timer is probably the one you want, specifically Deadline. So with tokio-timer's Deadline, what you do is give it a future and a deadline, and it gives you back a future that either resolves, or returns with an error if the inner future did not resolve within the deadline. It does mean, however, that we're sort of masking the underlying error. So if we put a deadline around this, then
imagine that you try to connect and you get a legitimate error — like, a DNS name could not be resolved, or some error that's not going to resolve itself, like the box never booted — then eventually, the error you hit is a deadline error, and that's what we'll return to the user, rather than the underlying error, which is like "DNS could not resolve" or something. But I think that's fine. Let's do that. So I guess here we're going to need tokio-timer, so we're going to add an extern crate tokio_timer up here, and we're going to use tokio_timer::Deadline specifically. And here, what we're going to do is a Deadline::new over the connect, and the deadline is going to be an Instant — I guess just Instant::now() plus a Duration. How long is the timeout we set in the original code? 3 seconds. So what this does is: we create this connection future, but then we also put a deadline on it, saying that if it doesn't resolve within 3 seconds of now, then also just give an error. tokio-timer currently is using the new version of futures, it looks like... it feels just like the old version of futures, so why can't we use it... oh, I probably need to go update — that's all. Yeah, so this should give us exactly the same behavior as what we used to have, and then what we do is use that connection-that-may-time-out as the source for the connection in the future that we eventually return. I could just put this inline below as well, I guess. Somewhere... where is my syntax error? Hmm. But the basic setup here makes sense: the idea is that Deadline is just a wrapper — if the future hasn't completed within a certain amount of time, it automatically fails the future, regardless of what it may eventually have resolved to. Okay, and we ran this and it worked correctly. So this means that we've now basically done the first part here: implementing the internal concurrent setup, but exposing a totally synchronous API. And now we're going to
start phase 2, which is going to be implementing a fully asynchronous version of run. On the old version... "742, bothered by this indentation" — that just seems wrong... "which does not live long enough" — you're not wrong; we could also use start as the initial value and just feed it along, that would also be fine. Alright, so in theory this should now still work — it's going to have to recompile the dependencies, which is a little annoying — but now we can commit this as "retry tcp connection for ssh". Alright, so while that compiles, let's go back to our code, and now we're going to make all of run_as be a future. Are you ready for this? So: a future where the item is this, and the error is this. Now, the closure that the user is going to run — they are indeed going to be given this HashMap. So they're going to pass in a tokio Core... one thing to notice here is that the user passes in a handle, because they might have other asynchronous things running at the same time, and so we don't want to start our own Core, because we don't really have a way of driving it — we're just going to be returning some lazy computation. It's going to be up to the user to drive that computation forward. That means the user has to be the one that owns the Core that we're going to execute things on, and they give us a handle to the execution core that they have — which is basically where they are going to eventually call core.run. We are going to have to be bounded by... I don't know, actually; we're not... maybe — it's unclear. But the closure they give us is now also going to have to return a future, because otherwise... well, I guess we could... we don't really want to execute it synchronously, because this closure that they give us could totally want to do some SSH connections and whatnot. Yeah, thanks for coming out — it will all be posted afterwards, and then at least you can skip all the compilation, which is nice. So, the closure that the user gives us for run_as is now also going to be asynchronous,
right, it's also going to have to execute in the background and be able to do SSH connections and the like, if we want that to be asynchronous. So that means this now also needs to take a tokio_core::reactor::Handle. This is now similar to the closure we give for machine setup, right, which is given a Handle and then returns a future. It's going to be the same here: it's given a Handle, and it returns — we're going to be nice and introduce this additional type here, F: IntoFuture, where the Item is this. We are no longer going to be creating our own Core, and this is where we're going to rearrange a bunch of the code. "Are futures your favorite way to reason about async programming?" Um, I think once we have the await keyword, that's going to help a lot — async and await are going to help a lot. Currently it's a little bit painful, in part because of the interaction with borrows. This is the borrow-checker hell we got into earlier, where you know what the code is supposed to do and you think it's correct, but you have to convince the compiler that the interaction between all the closures you end up with is correct. I'm hoping that will be somewhat nicer once we have async and await. "They do work nicely with the functional style" — that's totally true. I also don't know that there are other models I prefer; there are other models, but it's unclear that they're better. Anyway, let's see. So how is this going to change things? Well, that has to happen synchronously — arguably we should return an IntoFuture. So the thing that's a little bit weird here is that the EC2 client connects immediately when you make it, because it means that when you call run_as it will immediately print some debug stuff even though you haven't started running the returned future yet. But I think we'll just have to deal with that, because the question is where do we start. I really wish the EC2 client itself was a future — is there not a way to make it do
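The signature being described — a `run_as` whose closure receives the machine map plus a reactor handle, and whose return value plays the role of an `IntoFuture` — can be sketched with plain std types. `Machine`, `Handle`, and the DNS string are hypothetical stand-ins here (for tsunami's machine handle and `tokio_core::reactor::Handle`), and `Result` stands in for the future; this is a synchronous shape-sketch, not the real API.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the types in the stream.
struct Machine { public_dns: String }
struct Handle;

// The user's closure gets the machine map *and* a handle, so it can do
// asynchronous work (SSH connections and the like) of its own.
fn run_as<F, T>(machines: HashMap<String, Vec<Machine>>, handle: &Handle, f: F) -> Result<T, String>
where
    F: FnOnce(HashMap<String, Vec<Machine>>, &Handle) -> Result<T, String>,
{
    // The async version would *return* the closure's future without
    // driving it; the user's Core is what eventually runs it.
    f(machines, handle)
}

fn main() {
    let mut machines = HashMap::new();
    machines.insert(
        "server".to_string(),
        vec![Machine { public_dns: "example-host".to_string() }],
    );
    // The "benchmark" closure just counts machines in this sketch.
    let n = run_as(machines, &Handle, |ms, _handle| {
        Ok(ms.values().map(Vec::len).sum::<usize>())
    });
    assert_eq!(n, Ok(1));
}
```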
that? It just returns a Self. So, a way for me to check credentials or something — like a useless check-authentication call? Probably not. It's fine. Okay, so the first thing we do that is a future is this, right? So here we can almost just search for core.run. Right — so remember, I guess we should undo this just so we can check that the previous thing compiles. No, we don't want to go back to that at all; where did I start messing with this? So let's see. This should work in theory, but just to see that the basic setup still works — compile faster! There we go: spin up a bunch of instances, slowly but surely. Okay, how about now? I almost wonder whether it's me running this command, which would be its own kind of disturbing. Let's just not run that command for a while, how about that. Well, all right, we'll try it again for the video on demand — I'll stitch this together as well, so it's fine. Let's just not run the command for a while and just not test our thing; it's probably fine. Where were we? Okay, so we're trying to make run be a future. We want the entire use of this library to be entirely asynchronous, so that the user can do other things while they're also spinning up these instances. And the way we're going to do that is by essentially constructing one really large future that does all of the steps necessary to execute the code the user wants us to execute. And the way we're going to do that is to start out with some future — in fact, arguably, the thing to do here is: we're going to start out with a future. The place where this is sort of weird is that initially we're going to execute a bunch of things just when the user calls run_as, and that's a little awkward, because the user isn't expecting anything to happen until they start to drive the future. So there are many ways we could do this; one is we could just create an empty starting future. Yeah, exactly — we could start with an empty future and then sort of build it up over time. That's one way we
could get through this. And the way we do that — let's take out the log first — so if you look at the futures documentation back here, you essentially start with something that's just an ok. future::ok just immediately resolves with whatever value you give it. So in our case — in fact, all of this is going to be one future, so we don't even need a variable here — all we really need to do is say futures::future::ok, give it the unit type, and then we can start with our long chain of and_thens. So the very first thing — this might not even be necessary, because it's still going to resolve immediately. Is there a lazy function? I think that's really what we want to do; let's do that instead. futures::lazy — is this lazy? What did I do? That's not what I meant to do at all. Yeah, so we just call lazy, and that takes a closure that's just going to be called, and whatever future it returns is what we're going to execute. So we're just going to have a lazy, and inside of this future is where all the magic is going to happen. So most of the things we do are going to be the same, until any point where we hit something that is a future; at that point we need to wait for that future to resolve before we do anything further. So in our case, this is the first place where we do something: the first thing we do is create a security group, and we're also going to do a bunch of and_thens here, just to add all the context that we need. And this is going to resolve into — actually, there are many ways we can do this; I think we'll just chain them outside. So after we get this security group — notice how it used to be "let res = core.run(...)". The way we're going to change this is to instead chain them, so that the rest comes next, right. The other thing we need to be careful about is also including any variables from the original code that we might
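The point of `futures::lazy` above — nothing should happen when `run_as` is merely called; work only runs once the returned future is driven — can be shown with a std-only sketch where a boxed closure stands in for the lazy future. The counter and return value are illustrative.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Synchronous sketch of what futures::lazy buys us: calling run_as_sketch
// has *no* side effects; work only happens once the returned computation
// is driven (the analogue of the user eventually calling core.run).
fn run_as_sketch(calls: Rc<Cell<u32>>) -> Box<dyn FnOnce() -> Result<u32, String>> {
    Box::new(move || {
        // e.g. "create the security group" would happen here, not earlier
        calls.set(calls.get() + 1);
        Ok(42)
    })
}

fn main() {
    let calls = Rc::new(Cell::new(0));
    let fut = run_as_sketch(Rc::clone(&calls));
    // Nothing has run yet — just like a lazy future before it is driven:
    assert_eq!(calls.get(), 0);
    // "Driving" the computation runs the deferred work:
    assert_eq!(fut(), Ok(42));
    assert_eq!(calls.get(), 1);
}
```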
need in the subsequent operations. So if we need, say, a group name from here later in the closure, we'd have to pass it along in the future that we pass along, because we no longer have shared variables between code that is further up and code that is further down. So we're going to do an and_then here, until the next point where we had a future, which is here. So this is now going to be — instead of running that, we're going to do a then with our context. This one entirely ignores its argument, which is a little interesting; it just uses the group ID from here, and that group ID I think we're going to need here as well. Yeah, so this basically reuses req — so really what we want here is a map, and then move along the request. That's not at all what I'm doing. So now this gets the request, so that it can keep modifying the request the way it used to. Then here it executes another future, and we're going to do the same thing with our context — it's not going to be a core.run anymore — like so. And that's where that and_then ends, and then we have a new and_then; this one also ignores its result. Actually, let's have all of this be indented for us — like so. And then we essentially just keep doing this chaining, right? So here, this is going to do this with a then and our context. Now, here there's a write_all. write_all is normally something you do on files, which is a little awkward, because we're using tempfile, which is a synchronous writer. So if you look, tempfile gives us back a File. Does File have async? That's a good question — I feel like there's an AsyncWrite for File. So we're going to have to use these IO helpers from tokio-io: write_all takes anything that is an AsyncWrite, and in our case — ooh, that's a good question — is there not a way to get AsyncWrite for files? That seems unlikely. AsyncWrite — no, not really; you could do a futures-cpupool thing. This is a little bit sad, but I think we'll just keep this one
synchronous for now. It's pretty unfortunate, but it should be a really short write, and it's unclear that the write should really block, because we're just writing to a regular file. So we don't do that. Here's another thing we're going to run into, probably more than once: sometimes you have an if, and you want to return a future from either branch. The problem, of course, is that there's no single type you can coerce them into. You could make them both be, like, a Box<Future>. The way to get around this is to use future::Either from the futures crate. An Either is an enum with two variants, A and B, that both have to be futures that resolve the same way, but the future types themselves can be different. So that's what we want in this case. Basically, this is an option I added after the previous two streams, where you have the ability to say whether or not you want all the instances that are spawned to be spawned together. And what we're going to do here is: if you want clustering, then we have to execute this code down here, so this is going to be Either::A — we're still going to do the then with our context, and then we have to run this code at some point. The else branch is going to be the Either::B, which is going to have some future inside of it. So notice how the pattern here is that one side of the if has an Either::A and the other side has an Either::B; you can do this with matches, with nested ifs, and you can have multiple of them nested in one another. In our case this is a little bit weird, because we sort of want — well, what do we want both of these to resolve to? That's another good question. This is certainly going to be a future; we don't want to actually assign it to anything. It's going to be like this, and then... something. We don't quite know what that thing is yet. For the Either::A, we want, if it succeeds, this stuff to happen — it uses the placement name, so we move that into here. It actually ignores its argument and then returns
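The type-unification problem Either solves can be demonstrated in std Rust using Iterator instead of Future — the trick is identical: two branches of an `if` return different concrete types implementing the same trait, and a two-variant enum unifies them. The `clustered` flag and the numbers are illustrative.

```rust
// The same trick futures::future::Either uses, shown with Iterator.
enum Either<A, B> { A(A), B(B) }

impl<A, B, T> Iterator for Either<A, B>
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
{
    type Item = T;
    fn next(&mut self) -> Option<T> {
        match self {
            Either::A(a) => a.next(),
            Either::B(b) => b.next(),
        }
    }
}

// `clustered` plays the role of the "do you want a placement group"
// flag: one branch does real work, the other yields a default — and
// the two branches have *different* concrete iterator types.
fn placement_ids(clustered: bool) -> impl Iterator<Item = u32> {
    if clustered {
        Either::A((0..3).map(|i| i * 10)) // a Map<Range<u32>, _>
    } else {
        Either::B(std::iter::once(0))     // a Once<u32> — different type
    }
}

fn main() {
    assert_eq!(placement_ids(true).collect::<Vec<_>>(), vec![0, 10, 20]);
    assert_eq!(placement_ids(false).collect::<Vec<_>>(), vec![0]);
}
```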
Some(placement), which means that the Either::B is just going to be a future::ok that just returns None. So notice here that they're both very different futures, but the futures have the same Item and the same Error associated type. In both cases the Error is just a failure::Error, and the Item type — the ok type — is going to be an Option<Placement>, right? So in this case — in fact, this is a map — in this case we try to create a placement group, and if we succeed, then we keep track of the placement we're going to use; if it fails, then it's a failure regardless, so we return the error. Otherwise we just return None, because we don't want a placement. So this is now going to be placement. Yeah — this is another thing that we didn't end up doing in parallel which we might want to do in parallel now, which is issuing all the spot requests, which I guess we're going to do now. So the thing we're going to do here is going to be similar to what we did with the parallel machine setup further down, right? This is a very similar kind of thing, where we're going to collect together a bunch of futures and then wait on all of them. So we're going to say futures — actually, no: let all = self.descriptors.iter() — this is going to look very familiar; it's going to be basically exactly the same as what we saw below. For each one, we're going to do a bunch of things which end up with a request being made, and so this is the future that we're going to end up executing for every spot request. And again, we're going to do this whole .then with our context so that we get our failure information; the rest will be this, like so. And this is now req_ids.extend — I don't think we'll actually need that. I think all we really want is to get all the spot instance request IDs back, so we're going to have this be the entire future. So all is now an iterator that's going to construct futures for requesting all of the spot instances, and then what we're going to do afterwards
is join all of them, using the same join function as we used before. Yeah, so remember, here we also can't use core.run anymore, because we don't have a Core. So what we'll do is use futures::future::join_all on that iterator, and that's just going to be a future, and at some point it will resolve with all of the responses. So resv here is a vector of all of the responses that we got, and what we want to do with all of those — what this map is basically doing is flattening that, to extract all of the spot instance requests from all of the responses. So we're going to say something like: spot_req_ids = resv.into_iter(), then flat_map — where is this name coming from, though? It's a little bit weird; we're now going to have to figure out what that is. Oh right, name is the name of the group. So this is why we weren't doing them in parallel in the first place: this name — remember, we're spinning up a bunch of instances, but those instances are not all the same. They're associated with some kind of machine setup, and so as we iterate over here, we're going to have a bunch of futures, but after we've joined all of those futures we need to be able to associate each spot request back with the machine setup we're supposed to use for that type. And the way we're going to do that is by mapping to include the name. So in this case I guess we'll just do name: with the response, we're also going to include the name. So now, what we're going to get back from this future is that all of these are actually going to be pairs, and this resv is going to be into_iter, filter_map — we're going to move — and so now we can flat_map all of the things. So notice what we're doing here: constructing a bunch of futures but not actually executing any of them, then constructing a future that waits on all the futures, and then we're saying: when you have that — we don't say that you should execute it; we're saying when you
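The "tag each response with its setup name, then flatten after the join" step can be sketched synchronously: here the joined responses are already a plain `Vec` of `(name, request-ids)` pairs (the IDs are made-up values), and we collect them into one name-to-ids map the way the flat_map over the joined futures would.

```rust
use std::collections::HashMap;

// Synchronous sketch of the post-join_all step: each "response" comes
// back tagged with the machine-setup name it belongs to, and we flatten
// all spot-request IDs into one name -> ids map.
fn collect_by_name(resvs: Vec<(String, Vec<String>)>) -> HashMap<String, Vec<String>> {
    let mut by_name: HashMap<String, Vec<String>> = HashMap::new();
    for (name, ids) in resvs {
        // flat_map-style: requests for the same setup name accumulate.
        by_name.entry(name).or_insert_with(Vec::new).extend(ids);
    }
    by_name
}

fn main() {
    let resvs = vec![
        ("server".to_string(), vec!["sir-1".to_string()]),
        ("client".to_string(), vec!["sir-2".to_string()]),
        ("server".to_string(), vec!["sir-3".to_string()]),
    ];
    let by_name = collect_by_name(resvs);
    assert_eq!(by_name["server"], vec!["sir-1", "sir-3"]);
    assert_eq!(by_name["client"], vec!["sir-2"]);
}
```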
have those ready — it's up to the Core to actually execute it — then do the following: iterate over all of the responses you've collected, and essentially collect them by name; that's what this does. So then we keep going. Here, we're waiting for the instances to spawn. This is another instance where we have to loop: we're retrying until all of the requests have been satisfied. So when you issue an EC2 spot request, what you really do is tell EC2 "hey, I would like these machines", and then you have to keep waiting until you actually have those machines. So in our case, this is going to be a futures::future::loop_fn — we still have no state, really, for this, so unit for the state — and this is going to be a future where everything inside of here — that's a pretty long loop; it ends down here, right. So in each iteration we'll try to describe the instances, get a response, and do some stuff. Yeah, so basically in every loop iteration we're going to construct a future again, and what that future is going to do is check all of the things that we care about. So in this particular case, we're going to first try to describe them, and then all of these things are actually entirely synchronous; they're really just checking whether all of the instances have successfully been set up. So in our case, then, just a then with the query code. It's a little bit sad — this timeout stuff we might have to deal with separately. We'll probably use something like Deadline again; that's exactly what we'll do. We'll do Deadline — we have Deadline here, tokio_timer::Deadline::new. So we want a deadline on — well, we really want a deadline on the whole loop, basically: if we don't have all of the requests fulfilled by then, we want to cancel and say we can't be bothered waiting for this. So we're going to have a tokio_timer::Deadline::new over this thing — this is that thing; this is going to be the loop — and this is going to be the timeout, which is going to be now plus... We only sort of want this to be
optional, so we're going to say "let wait = ...". "What's the relation between the futures crate and Tokio? Can you use one without the other, or do they complement each other?" So, the way to think about this is: futures is a way of constructing a computation, whereas Tokio is the way you execute a constructed computation. There are some blurred lines here — for example, Tokio also provides things like mechanisms for constructing a computation that does IO, because that's not really just a pure construction thing; you need some implementation machinery for how to interact with IO. But I think most of that, in the new version of Tokio and futures, is actually going to move into futures. So futures is all about constructing the computation, including whatever IO it needs to say that it's going to do, and then Tokio takes that description of a computation and decides how to execute it. And this includes things like whether it runs a CPU pool somewhere that executes the computation, how it figures out which things to run next, how it keeps track of the total set of things that need to be run, whether it runs things in the background, etc. Okay. Yeah, so if there is a wait limit, then we want to run this instance-waiting loop under a deadline; otherwise we do not. In both cases we want them to have the same signature, right? So here we're going to run into the same situation: this is an Either::A and this is an Either::B. However, if this returns an error, then what we really want to do is cancel the spot instances and basically give up. So if you look at futures — if you look at Future — we basically want a map_err... no, I want the other one: or_else. or_else is the way you say "if this failed, then run the following future"; it's sort of the opposite of and_then. We couldn't use map_err because we actually want to return a future here, since we're going to cancel the spot instance requests. So we're going to do an or_else(|e| ...), and in fact
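The futures-vs-Tokio split just described — first *describe* a computation without running it, then hand the description to an executor — can be shown in miniature with std Rust. A list of boxed closures stands in for the chained future, and a tiny `execute` function plays the executor's role; the arithmetic steps are purely illustrative.

```rust
// The construct/execute split in miniature: `describe` builds a
// computation as a list of steps without running any of them; `execute`
// (the "Tokio" role) drives the description to completion.
type Step = Box<dyn FnOnce(i32) -> i32>;

fn describe() -> Vec<Step> {
    vec![
        Box::new(|x| x + 1), // like an and_then in the chain
        Box::new(|x| x * 2),
        Box::new(|x| x - 3),
    ]
}

// Nothing has happened until the executor folds the steps through.
fn execute(steps: Vec<Step>, start: i32) -> i32 {
    steps.into_iter().fold(start, |acc, step| step(acc))
}

fn main() {
    let plan = describe(); // only a description so far — no work done
    assert_eq!(execute(plan, 5), 9); // (5 + 1) * 2 - 3
}
```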
in our particular case we're going to match on e, because if it was a deadline timeout — so what is a DeadlineError? Deadline has a DeadlineError; that's pretty unhelpful. I want to know whether — ah, is_timer, right. So: if e.is_timer() — I wonder why they did that and not just have it be an enum — otherwise return. Then we're going to do a bunch of stuff. So, I know I'm writing this code a little bit in an ad hoc way, to tidy up the things that need to go after the loop from the things that need to go in the loop. Okay, so looking at the loop now: this is where we're going to wait for all the instances to spawn. The way we're going to do that — yeah, wait for instances. So this main loop that we have: we want to first describe the instances; then, if the error is that the spot instances don't yet exist, we just want to loop again. Um, in this case, I wonder whether this can just be a map — oh, it already is, great. Then this can just be a Loop::Continue with unit. Otherwise, if it's some other error — so the IDs might not exist because you issue a request to the Amazon API and then immediately ask it again; there's some amount of delay before things show up in the API, so that's why we have to deal with this specifically. Otherwise we just want to return an error. Then down here, this is all just regular, straightforward code; we're in a then, so res here is basically just a Result, just the way it used to be. instances is its own kind of weird global thing up here, so that's probably what our loop state will be — no, no, that's fine. So then we just return with this. This is if there are no pending requests — that means we did everything correctly — then we're ready to break; we'll break with this. Otherwise we'll continue, although we sort of want to sleep here: currently this loop function is just going to hit the API as hard as possible, which is not really what we want to do. We want to be slightly nicer than
that. So the way that would work is we could make this loop — yeah, we would add an or_else after this that then issues a timer delay. So I think in the timer crate, in addition to Deadline there's also Delay, so we could just use one of those. So down here, basically what we want is: if r is a Continue, then we want to delay, right — futures::future::Loop::Continue — but it's going to be a tokio_timer::Delay::new of now plus a Duration of 500 milliseconds. If it's any other kind of okay — so if we break, then we just want to return the Break, and if it's an error, then we just want to return the error. We can't actually use the ? syntax here, but — so the problem here is, again, we need all these things to be the same type. The way we'll do that is — this will be future... no, we have to use Either here again. So what does Delay do if you make one? It just resolves to empty, so we'd have to map it to still return a Loop::Continue. So of the three possible futures — well, actually, really this is just an and_then, so it's just either one or the other. It really is just an Either::A, or it is an Either::B of a future. And we actually know that it must be an Ok, because this is an and_then. So what we're doing here is: if there was no error in the looping function, then if the outcome was a Continue, we delay and then continue; otherwise we just return. And now, afterwards — so that gives us the loop that's going to wait for spot requests, but it's still just a future that, when you execute it, will not return until it has done that thing. If we want to add a wait limit, then what we're going to do is wrap that in a Deadline, and then, if the deadline failed because of a timer, we'll cancel the outstanding spot requests and sort of just give up — we'll return an error all around. And that will look something like: cancel — we'll go up — then with all the context. Whereas if we succeeded, who cares what we do
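The Continue/Break-with-delay protocol being assembled here can be sketched synchronously: a `Loop` enum mirrors futures' `loop_fn` protocol, and a `thread::sleep` stands in for `Delay` so Continue iterations don't hammer the API. The loop body, pause length, and return value are illustrative.

```rust
use std::time::Duration;

// Synchronous sketch of the loop_fn protocol: the body returns Continue
// or Break, and between Continue iterations we pause instead of
// hammering the API (the Delay from the stream).
enum Loop<B, C> { Break(B), Continue(C) }

fn poll_until<F, B>(mut body: F, pause: Duration) -> B
where
    F: FnMut() -> Loop<B, ()>,
{
    loop {
        match body() {
            Loop::Break(b) => return b,
            // Delay::new analogue: be nice to the API between polls.
            Loop::Continue(()) => std::thread::sleep(pause),
        }
    }
}

fn main() {
    // A body that "succeeds" on the third poll, like spot requests that
    // take a few describe calls to be fulfilled.
    let mut attempts = 0;
    let result = poll_until(
        || {
            attempts += 1;
            if attempts >= 3 { Loop::Break(attempts) } else { Loop::Continue(()) }
        },
        Duration::from_millis(1),
    );
    assert_eq!(result, 3);
}
```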
there. We're going to have to — I don't know why this else is here — oh, right. Okay, so if the deadline expired, then we go in here, and then we want to do various requests, specifically the ones down here. In this case, let's see, what do we want to do? So we cancel them, and then we want to wait for a little while — so this is going to be the same thing, a tokio_timer::Delay::new of now plus that. And the observation here is that this is us taking what was a timer error, doing some things with the error, and then still returning an error at the end — right, that's what we want this to do. If it was, however, just a normal error, then all we return is just the normal error we had. So: we cancel the spot instances, we wait for a little while, and then — we don't do any core.run. I don't know why it describes here; it seems unnecessary, let's just cut that. So this is basically it: after cancelling all of them, it sort of checks the status of all of them — oh, it's so that we know which instances spawned; that's why. Okay. So this whole part of the program that we're in right now — what's going on here is: we tried to spin up a bunch of instances by issuing a certain number of spot requests, and then some of them succeeded and some of them failed. Once we realize something has failed, we want to cancel any of the remaining requests, and any that did spawn instances — we want to kill those instances, so we need to keep track of what they are. So really what this means is that this needs to resolve to an Ok case. So I think what we really want here is: this future is going to resolve into a list of instances, plus an "everything okay" flag. So in the case where — where's the break here — in the case where everything started correctly, what we really want to do is take the instances and return (instances, true), because everything was indeed okay. Whereas if the deadline timed out, then what
we really want to do is return all the instances but not continue with the program — so not run the setup and so on. And so this means that this will still do one of these: take all of the instances, return them as a vector, and map those instances into (instances, false). "In the end, the main thread should spawn some spot request futures and then wait for them to be ready?" Well, so the intention is that the future we're constructing is going to — as you run it — send a bunch of spot requests to EC2, and then it joins on all of them, right? So that means that if any of them fail, we're given a notification saying one of them failed. But if one of them failed, we still care about the ones that succeeded, right? Because we still want to kill the instances for the ones that succeeded. In fact, that's something we don't currently handle here, which is a little awkward. But the observation here is: in general, all the spot requests will go through — like, Amazon will accept them; they won't error on the call — but then some of them may not be fulfilled; Amazon might tell you "I don't have any more capacity left". "Trying to build some call graph of the futures?" Yes, so the current call graph of the futures is — you usually think of this as a sequence of futures, right? We're constructing a computation without executing it. The first thing we do is send all of the spot requests to EC2. The second thing we do, after all of those have gone through, is look for all of the instances that were spawned by those spot requests. Then what we do is: if there is a timeout, and that timeout expires without us having all the instances, we want to cancel all the spot requests and kill any remaining instances. If there was no timeout — so if we successfully got all the instances — then we want to continue with the experiment without tearing anything down. And so that's sort of where we get
to down here, eventually: the next thing that's going to happen, after the mess we're dealing with up here, is that we're going to get a set of instances that we know have booted, and we're going to get a boolean saying whether or not everything is okay. So everything is okay if all of the instances indeed spawned. And so basically, what we're doing in this "if there's a limit" phase is wrapping the waiting-for-instances in a Deadline, and then, if we time out, we have to sort of ask the API for all of the instances that it did spawn, and then we need to tear them down. Okay, so I think that does the right thing, but this is another case of if/else, so all of this is also going to have to be an Either. Notice that I haven't currently been threading through any of the variables, so we'll probably get some errors about that — like down here, for example, it's using req, and I think the req there is really the req up here, which we don't really have access to down here. So we'll see how this turns out; maybe we don't need them. Let's see what happens. Okay, where is my syntax error? Right, so we keep going. So down here, we have all the instances, and we're told whether or not everything is okay. Remember how, in the old synchronous version of the code, we used scopeguard to have this thing that, when it goes out of scope, terminates all the instances? With futures, we can do this slightly differently: we can essentially start a new scope of futures here, where any error that occurs inside will, at the end, map into terminating all the instances. The way to look at this is: once we get inside here, this is going to be a closure that takes whatever error occurred and terminates all the instances. This is the future that we're going to execute at the end. "I never knew that Rust is so ugly" — so, it is totally true that futures code can get pretty nasty this way, because you end up chaining all these long and_then chains. It gets a lot better when you get the await keyword, for example, because it basically lets us collapse all the and_thens. It's also in part because I could split this into many more functions. But if you look at the top, the top here is not actually that bad: it's just saying we're going to do this, and then do an API call, and then when that call comes back we're going to do a bunch more stuff and then do another API call. But you could imagine that if we instead had an await here — if we could do that, it would make things a lot easier; it would eliminate all of these and_thens in between. But for now, this is what we have to do. So what we're going to do here is have a cleanup future, if you will. So at the end, if there are any instances that were successfully launched, then we're going to return a future — think of this as being inside an or_else, so it's given an error and needs to return a future. And so what we want is this, but the real error is going to be — we're going to make sure to return the real underlying error that happened; arguably we could give a context here, but yeah. So here also we're going to have an Either, and otherwise we're going to have an Either::B, which is just a future::err that just forwards the error, essentially. So this is sort of the cleanup logic that we used to have, and that cleanup is now executed in the future instead. And what this allows us to do is — I guess, if not everything is okay, then we'll just immediately execute that future. So we will just — where is my syntax error now? Oh, did I still do something wrong? Yeah: so if everything isn't okay, then we can just immediately trigger an error with that closure above us. In fact, we could just immediately return that, but what we're instead going to do is: if everything is okay, then we're going to start this with a future that does not have an error, so we're going to do an Either::A of
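The "cleanup future" pattern just described — whatever error falls out of the chain, run the teardown and then forward the original error — can be sketched with plain `Result` closures standing in for the or_else-wrapped future. The flag-based cleanup here is illustrative, not tsunami's actual teardown code.

```rust
use std::cell::Cell;

// Sketch of the cleanup-future pattern: the or_else-style handler
// terminates the instances on *any* error from the chained computation,
// then forwards the real underlying error to the caller.
fn with_cleanup<T, E, F, C>(work: F, cleanup: C) -> Result<T, E>
where
    F: FnOnce() -> Result<T, E>,
    C: FnOnce(),
{
    work().map_err(|e| {
        cleanup(); // terminate any instances that did launch
        e          // forward the real underlying error
    })
}

fn main() {
    // On error: cleanup runs, and the original error comes back out.
    let cleaned = Cell::new(false);
    let res: Result<u32, &str> = with_cleanup(|| Err("spot request failed"), || cleaned.set(true));
    assert_eq!(res, Err("spot request failed"));
    assert!(cleaned.get());

    // On success: cleanup does not run here (the success path tears
    // down separately, after the benchmark).
    let cleaned2 = Cell::new(false);
    let ok: Result<u32, &str> = with_cleanup(|| Ok(7), || cleaned2.set(true));
    assert_eq!(ok, Ok(7));
    assert!(!cleaned2.get());
}
```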
future::ok — so it's just going to be an empty thing. Otherwise, we're going to give an Either::B that's going to be an error of unit type. Maybe, arguably, this should just include the error that happened, but it's fine for now. So this is going to be the start, and now what this allows us to do is say that inside here we do start.and_then, and then we start chaining again, right down all the way to the end, and then down here we do the or_else at the end. And what this will do is: whatever we end up doing inside the future we're constructing here, we know that at the end the cleanup code will be run, because it's going to map whatever error comes out the other end. So as long as the futures in between eventually end up giving us an error, we can deal with that. So here, what are we doing? We're waiting for all the instances to be up, so this is going to be another loop — it's going to be another loop future. So we're going to do futures::future::loop_fn — is it loop or loop_fn? I feel like — we also still don't need the state, I think, so ignore the state — and we're going to execute whatever is in here until it finishes, and then we're going to end up with a bunch of machines; good old machines that we used to deal with earlier, right. I guess, actually — so the future we're going to execute is describe_instances; that's going to give us back a bunch of reservations — that's all the instances we have running — and then we're going to have to decide what we want to do with those instances. So the idea here is that as long as not all instances are ready, we will not return from this loop; we will only return from this loop once all the instances are marked as ready. And the way we'll do that is: for reservation in reservations.reservations — sure, why not — and for instance in this, we're going to execute this stuff. Yeah, so what we're doing is just iterating over all of the instances
that we got back from this describe_instances call, and we're going to make sure that all of them have Some for all the values we care about. If they do, then we add them to the machines map; and if any of them are not ready, then Loop::Continue, because we know that if any of them are not ready there's no reason for us to keep checking. And then at the end of this, if we have not yet returned, we know the thing to do is to loop. This has to be — this can be a map now, because we're not executing any futures inside of here, and that should give us machines. Notice that there are a bunch of things we're going to end up running into here, like this id_to_name mapping that we keep — id_to_name we made far up here; we're going to have to thread a bunch of those things through. I'm just trying to get the main call graph to sort of work out. So here — now we have all the machines, and if we have all the machines, this is when we're going to do all the setup routine that we did before. So remember how this constructs a giant future, right? So now that can change a little. Okay, so that's going to return — I guess we do still need that to be setups; we don't need handle anymore here. If running is not the expected — let's do this the other way around, just because it avoids a level of annotation. So if that's wrong, then we do this instead. So let's see — then we can just make this immediately be an error; I guess we'll figure out what that error is later. Otherwise we'll do the main thing, which is just to join over all the setups — so we probably don't even need this — and then if, at the end of all of that, we have machines, then we can invoke the user's routine, with a context, a map_err, this — that's a future; whatever result they gave, we'll just echo that — an or_else at the end. "Those warn, info, crit — are they part of the standard library or an external crate?" So those are all part of the slog crate. If you
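The readiness check inside that loop — every instance must have `Some(...)` for every field we care about, and any missing field means Continue — can be sketched with a std-only function. The `Instance` struct and its two fields are simplified stand-ins for the EC2 describe-instances response.

```rust
// Sketch of the instance-readiness check: collect machines only if every
// instance has Some for all the values we care about; otherwise signal
// "keep looping" (the Loop::Continue case) by returning None.
struct Instance {
    public_dns: Option<String>,
    public_ip: Option<String>,
}

fn machines_if_ready(instances: &[Instance]) -> Option<Vec<(String, String)>> {
    let mut machines = Vec::new();
    for inst in instances {
        match (&inst.public_dns, &inst.public_ip) {
            (Some(dns), Some(ip)) => machines.push((dns.clone(), ip.clone())),
            // Any not-ready instance: no reason to keep checking the rest.
            _ => return None,
        }
    }
    Some(machines)
}

fn main() {
    let ready = vec![Instance {
        public_dns: Some("dns-a".to_string()),
        public_ip: Some("1.2.3.4".to_string()),
    }];
    assert_eq!(
        machines_if_ready(&ready),
        Some(vec![("dns-a".to_string(), "1.2.3.4".to_string())])
    );

    let pending = vec![Instance { public_dns: Some("dns-b".to_string()), public_ip: None }];
    assert_eq!(machines_if_ready(&pending), None);
}
```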
look back on — I think the first Tsunami video we did, or maybe the second — we went through and added all the logging stuff. It works sort of similarly to env_logger, which is also a pretty common crate in the Rust ecosystem; it's just really nice to have. The nice thing about slog is also that it compiles away any statement that is not at the current debug or error level — there's a minimum threshold for each compile level, and anything that falls below that threshold is compiled out.

Where's my syntax error? Nice. This is now going to not work at all, but we'll figure out why. First of all, the EC2 client sort of needs to be threaded through all of this; that's why we're going to move it out here, because it's not worth doing otherwise. "Cannot find setup functions" — so where do we pull out the setup functions? It's down here somewhere. So down here we're going to map whatever this produces — it produces a placement, it's going to give us the names and setup functions — and `expected_num` should happen here. Notice that `id_to_name`, `setup_fns`, and `usernames` are all things we want to keep track of later on; we need them further down in the program. One of the things you end up doing quite a bit in futures land is you map some value, and you map it to bring along the state that you need. It's a little bit problematic here. So now we get to carry these along; the reason these are in their own tuple is so we can pass them on as one thing. The reason this is a little annoying is because, remember, all of these are now futures that execute in parallel, so they can't modify these easily. We could wrap them all in `Rc<RefCell<...>>`; I don't really want to do that. So I think the thing we're going to do instead is a synchronous loop first — I sort of want to extract out those things first. `id_to_name` is not in use here, so `id_to_name` comes down here somewhere, I think.
That means we don't have to thread along `id_to_name`, but we do need to construct `setup_fns` and `usernames` here. Notice how we're modifying them here, right? What we want to do is just to map — in fact, that's what we will do: we'll map along those here instead, `setup.setup` and `setup.username`, like so. That way those can go away, and these can go away. `expected_num` I think we need somewhere further down; we can deal with that later. Now we don't need to map this at all. Some syntax error again... yeah, so now this `resv` has the name, but also `setup` and `user`. So this is where `id_to_name` comes into play, and down here we can also construct both of these pretty easily: `setup_fns.insert(name, ...)`. You can see that depending on how far down we use these, we're going to have to thread them quite far, and it's this annoying process of moving and mapping, moving and mapping, moving and mapping. I don't have a better way to do that, although there probably is one.

Let's see, why does it complain about... this needs the group ID, and the group ID is allocated up here. Yep, so here we're going to have to move this and bring along the group ID. This is now going to take the group ID, and we're going to have to map and move. We don't care about the result; we care about the group ID. Now this has the group ID, and here we actually want to thread along the key name as well, because we use that when spawning the instances. So we're going to move — we do care about the result here, so we'll do `(res, key_name)`. Now this is giving `(res, group_id, key_name)`; this also has to carry those along, so this maps the placement and gives us `(p, group_id, key_name)`. You see, this gets kind of tedious. All right, this now is the thing that uses the group ID, and it's also the thing that uses the key name, so at that point we don't have to thread them any further — that's nice. "Expected function"... what? Unresolved `id_to_name`? Where did we construct `id_to_name`? It's not down here? Yeah, and where is it needed?
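That "map and move" threading can be sketched with plain `Result`, whose `and_then`/`map` combinators chain the same way futures 0.1 combinators do. The function names here (`make_group`, `make_key`, `spawn`) are hypothetical stand-ins for the EC2 calls in the stream:

```rust
// Stand-ins for the fallible EC2 steps; each returns a Result the same
// way each futures-0.1 step yields a future.
fn make_group() -> Result<String, String> {
    Ok("sg-123".to_string())
}
fn make_key(_group: &str) -> Result<String, String> {
    Ok("key-456".to_string())
}
fn spawn(group: &str, key: &str) -> Result<String, String> {
    Ok(format!("spawned with {} and {}", group, key))
}

fn launch() -> Result<String, String> {
    make_group()
        // `make_key` doesn't need the group id past this point, but a
        // later step does — so map the result into a tuple that carries
        // the extra state along. This is the "moving and mapping" dance.
        .and_then(|group_id| make_key(&group_id).map(|key| (group_id, key)))
        // both pieces of threaded state are finally consumed here
        .and_then(|(group_id, key_name)| spawn(&group_id, &key_name))
}

fn main() {
    println!("{:?}", launch());
}
```

With futures the closures additionally have to be `move`, which is why every intermediate step that still needs a value has to keep re-packing it into the tuple.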
So we construct it here, and it's needed at 753, down here. Yeah, that probably makes sense, so we're going to have to bring it along from there to down here. So we're just going to map `instances` and `ok` to make that `(instances, ok)`, and also these three, because we're going to need all three: `id_to_name`, `setup_fns`... Here we could construct some kind of struct that would keep all the state for us. The reason I use tuples here is just that which things I have to carry along basically varies between every single step of this computation, so making a struct is... there's no good reason to, because I'd need one struct for every crossing, right? Okay, so that should bring those all along to here. "Did you mean usernames?" `private_key_file` — oh no, `private_key_file` is up here, so we're going to include `pk`; this is going to include `pk`, this is going to bring along `pk`, and now we have `private_key_file`. "Did you mean ok?" No? `private_key_file`... `expected_num` is needed at 783 — that's like the bottom of the program — but this one we can move all the way out here, because it just iterates over the descriptors; no need to do that. Surprisingly few errors. There are probably going to be more once the borrow checker kicks in.

Oh right, `run_as` is now going to have the same signature here, because they basically call one another. The biggest difference is going to be that it doesn't take a provider — so it doesn't take a `p` — and it also needs to pass along the handle. Ooh, so here are some `use` things that we're going to have to figure out. Someone mentioned earlier that they weren't sure about having `use`s inline as opposed to at the top of the file. This is one of the cases where it bites you, because here I need `rand::Rng`, and for whatever reason I've chosen to just `use` it in there; realistically it should probably be a top-level `use`. Oh yeah, that's true — in fact, this should probably just map, and I guess the private key... yeah, this kind of threading is pretty annoying. Is
there a version of `map` that lets me give it a `Result`, or is that not a thing? I want like a synchronous version of `and_then`... I guess not. Well, I guess it will be an `and_then`. Where did I mess up? 437. Yeah, so the observation here is that this is where we use a synchronous API inside what's supposed to be a future, and so we can no longer use the question mark operator, which is actually really sad. It means we need to do the song and dance of taking the result, turning it into a future, and then continuing to chain it along that way. So this is going to be the `and_then`. A file — this is just going to be how many bytes were written, so we don't really care about that. The private key file — we sort of want to print this. I guess we could say `path` is `f.`... and then we could map this and give out the path; this would need to do that, and this would be new. All right, this actually needs to return the private key, the `pk`, and it will give out `(pk, group_id, key_name)`, so we have to carry it along again.

"Expected String, found str." Let's see, so it's claiming that... oh, this is where we run into the failure types not matching. Specifically, we don't actually care about the fact that it's a `Context`; we just want to turn it into an `Error`, and the way we do that is `map_err` into the right error type. Why is it expecting this error to be... oh, it used to be here. So it's taking the error from this type and saying that it doesn't match the output of this, and then for this future we have to keep doing this dance of mapping the error into `Error::from` at all times. It's probably going to complain about the same thing: "expected Vec, found FlatMap". All right, this has to collect — we want this to collect into a `Vec`. And this is just a `Vec`, so we can't really map over it — we've done too many futures. So this is going to be all of those and those, and say that `v` is a `Vec`, and now we can do just `v` and `true` — no need for a map. 609 — slowly but surely we're progressing.
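The "no `?` inside a future" pain has the same shape in plain `Result` code: take a function written with `?` and rewrite it as an explicit `and_then`/`map` chain, which is what every fallible synchronous step inside a future combinator turns into (in futures 0.1 you'd additionally lift the `Result` with `future::result(...)` before chaining). A minimal std-only sketch:

```rust
use std::num::ParseIntError;

// With `?`: short and flat.
fn parse_sum_q(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

// Without `?`: the way you're forced to write it inside a combinator
// chain — every fallible step becomes an and_then/map link.
fn parse_sum_chain(a: &str, b: &str) -> Result<i32, ParseIntError> {
    a.parse::<i32>()
        .and_then(|x| b.parse::<i32>().map(|y| x + y))
}

fn main() {
    println!("{:?} {:?}", parse_sum_q("1", "2"), parse_sum_chain("1", "2"));
}
```

Two steps are tolerable; with five or six steps each carrying extra tuple state, the chain version gets long fast — which is exactly the tedium on screen here.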
"Expected Result" — yeah, so these are all okay. This is okay; I guess there aren't really any errors in here — is that true? Oh yeah, there is an error. And now 556: `tokio::timer::Error`. Why does this need to be a Tokio timer error? Where is the timer? Ah, it's down here. Yeah, so here, yet again, we need to do `.map_err` — we have to constantly tell the compiler that the error type I want to use for all of this is `Error`. This, too, is going to have to be an `Either` error. So this guy... that's weird — it should be generic over the type of error it takes. Apparently not. Let's see, did I get this? So in 615 it expected a Tokio timer error and found a failure `Error`. Why did it expect that? Probably here, 616, right, because this future now also needs to mask out the timer error. Actually, no — it can't, can it? This needs to actually be a timer error, because otherwise... wait, this doesn't give a timer error. We're talking about `Delay`, the Tokio timer `Delay` — that shouldn't ever return an error, right? Why would it error? Well, it should be fine for us to map away that error; that's the real question. So here — oh, the issue is that the future we're getting in has an error type of failure `Error`, but the future we're producing with the `and_then` has an error type of Tokio timer error, so we need to do this. Only a screenful now. 651 is probably the same, would be my guess — this also needs to be turned into a failure `Error`. I don't know why `Delay` would ever error, though; that's really weird. Oh, unless... no, I don't think it should. "Expected Error, found Context" — because this needs to be `Error`. Make all the things `Error`. I wish there was a nicer way to express this — to say that I want all my things to just basically... I want the question mark operator, but for futures, which I think maybe `await` will end up being. This will have to be a map, because it doesn't... oh, I guess this should really have a context, shouldn't it: "failed to find instances after cancel". 698: we can't take a deadline error — we don't want one. Where is the deadline error defined?
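The "make all the things `Error`" step is just error-type unification: every branch of the chain must agree on one error type, so foreign errors get `.map_err`'d through a `From` impl. A std-only sketch with hypothetical error types standing in for `tokio::timer::Error` and `failure::Error`:

```rust
// Stand-in for the timer's error type.
#[derive(Debug, PartialEq)]
struct TimerError;

// Stand-in for the chain's unified error type (failure::Error in the
// stream, which has blanket From impls doing this conversion for real).
#[derive(Debug, PartialEq)]
enum AppError {
    Timer(TimerError),
    Other(String),
}

impl From<TimerError> for AppError {
    fn from(e: TimerError) -> Self {
        AppError::Timer(e)
    }
}

fn delay() -> Result<(), TimerError> {
    Ok(())
}
fn run_command() -> Result<u32, String> {
    Ok(7)
}

// Two steps with different error types can only live in one chain once
// every error is mapped into the common type — the equivalent of
// sprinkling `.map_err(Error::from)` through the futures.
fn step() -> Result<u32, AppError> {
    delay()
        .map_err(AppError::from)
        .and_then(|_| run_command().map_err(AppError::Other))
}

fn main() {
    println!("{:?}", step());
}
```

`async`/`await` later made this nicer precisely because `?` applies the `From` conversion automatically at each await point.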
Here. See, a `DeadlineError` has a way of turning it into the inner error, `into_inner`. I wonder why they chose that API — that's really awkward, but okay. We're going to match... why would this error... `kind`, `inner` — what's the other kind? Yeah, it's elapsed. Okay, fine. So we match on the `into_inner`, and if it's `None`, then we know it was a timer expiry, and if it's `Some`, we can return that error. That should give us not a deadline error, but the error that was inside. 706: "closure expected to take a single two-tuple argument" — indeed, that's true. So few errors now; so close. 762: the error type is wrong, because... right, so the observation here is that if we timed out up here, we sort of want that to be turned into an error. The question is what that error should be. I think what we want it to be is really something that says "not all instances started", so I think this is going to be `format_err!`, which is from the failure crate, saying "not all instances started". 764: the error is not right — that is true; the error here is going to be... 815, let me guess: the error is wrong. The error is wrong, okay. So this — okay, sure, `format_err!`. A single page of errors now. 857: this private key file path. So now we're going to change this — that's not actually the private key file. I think we're going to no longer write the private key to a file on disk, but instead just get the private key, because that way this can be a map. We can get rid of the tempfile and instead just do this here, like so, and then `private_key_file` is just going to be the private key, and SSH is just going to take the key instead and decode the secret key. Now we don't write the secret key to a file. 803 — oh, it's so close I can taste it. This is now just going to be the private key. Yeah, I was worried about that. So this is — remember the whole machines debacle? The issue here is that we're sort of expecting this to be given a copy of `machines`, but this future doesn't actually return `machines`; it just mutates `machines` in place.
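The `into_inner` unwrapping described above can be sketched like this — the types here are hypothetical stand-ins for tokio 0.1's deadline error (where `into_inner()` yields the wrapped future's error if there was one, and nothing if the deadline itself elapsed):

```rust
// Stand-in for the chain's error type.
#[derive(Debug, PartialEq)]
enum MyError {
    TimedOut,
    Inner(String),
}

// Stand-in for tokio's deadline error: it either wraps the inner
// future's error, or represents the deadline elapsing.
struct DeadlineError(Option<String>);

impl DeadlineError {
    fn into_inner(self) -> Option<String> {
        self.0
    }
}

fn unwrap_deadline(e: DeadlineError) -> MyError {
    match e.into_inner() {
        // no inner error: the timer expired — in the stream this becomes
        // format_err!("not all instances started")
        None => MyError::TimedOut,
        Some(inner) => MyError::Inner(inner),
    }
}

fn main() {
    println!("{:?}", unwrap_deadline(DeadlineError(None)));
}
```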
So it's almost like what we wanted to do is run this entire future and then give away `machines`. I don't know that we can convince it to do that. I think if we construct the future inside, we should be fine. So this has to be mutable. (We're running Arch Linux, and the Vim theme is Atelier Dune from Base16 — it's really nice; it's the one I've stuck with the longest, actually.) So I wonder whether we can have this setup closure resolve inside of this future, like this, and then the move `machines`... I don't think it's going to be particularly happy about that. So let's think about this: when this `join_all` completes, then we know that none of the setup closures are still running, and therefore it should be perfectly fine for us to do `.map` and then give out `machines`. Unfortunately, there's no way this works, because the compiler is gonna... I mean, we could be very surprised about it — it'll type check, but the borrow checker won't let us, I think. And this, of course, has to have the right error type. "Expected future Result, found..." what?
"Expected type future Result with an error..." Oh, do we have a `return` up here? Yeah — `Either::A`, maybe. 800: "expected ..., found failure Error" — I feel like we've seen this before: `map_err(Error::from)`. This should also take a handle, which we have from long above. 890 needs to be a future. So these are the kinds of types you end up with in futures — this type covers my screen. We want `handle` to be... sure. The borrow checker is gonna yell so much at me. This needs to... I guess this should be whatever result we get back from the closure — which, actually, now that I come to think of it, we're currently requiring to return a unit type. There's no real reason for that; we could totally say that the user is allowed to write a run closure that actually evaluates to something. But let's just ignore that for now. This is gonna return `R`, and `start` — this is gonna be `R`. I'm guessing it's yelling at me because this needs to be a `Box`, right? We're gonna say `all_the_things` is gonna be equal to that, and then we're gonna return `Box::new(all_the_things)`, and the reason is that we can't write out the concrete future type — at the end, this thing needs to return a `Box`. It's not even true... where is it yelling at me now? 902: it's saying that it expected a `TerminateInstancesResult` — that's not true; that's really odd. It expected a `TerminateInstancesResult` and it found nothing... oh, it's the `Ok` value. This should drop whatever value it tried to output; we can ignore it entirely. 520: the variable is no longer needed. 656: that's where `all_the_things` is... it cannot infer the type from `d` in 656. Let's see — "cannot infer that this should be a vector" is what it's trying to tell me, and it's doing a really bad job of it. All right, borrow checker, tell me all the things I did wrong. 656 returns a future — that's true; this is a map. 680 is... oh, that's probably because of the same thing. "May not live long enough" — it's true, that is going to have to live long enough for us to execute the future.
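The reason the return value has to become a `Box` is that each combinator chain produces its own unnameable concrete type, so a function returning "some future" needs a trait object (`Box<Future<Item = ..., Error = ...>>` in futures 0.1). The same thing happens with iterators in plain std, which makes for a runnable sketch:

```rust
// Each branch builds a different concrete iterator type (every closure
// has its own anonymous type), so the function can't name one return
// type — boxing into a trait object unifies them, exactly like boxing
// a future did in futures 0.1.
fn evens_or_odds(evens: bool) -> Box<dyn Iterator<Item = u32>> {
    if evens {
        Box::new((0u32..10).filter(|n| n % 2 == 0))
    } else {
        Box::new((0u32..10).filter(|n| n % 2 == 1))
    }
}

fn main() {
    let v: Vec<u32> = evens_or_odds(true).collect();
    println!("{:?}", v);
}
```

The cost is one allocation and dynamic dispatch per call; `impl Trait` (and later `async fn`) removed the need for this in the single-type case, but with two differently-typed branches you still need boxing or `Either`.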
So this future will be tied to the lifetime of `f`. This is an `ff`... does not... why is this not okay? `ff` may not live long enough — that's true. `ff` also needs to be `'a`, so that this can be `'a`, so that this also needs to be `'a`. Now what? Getting one step closer each time — one step forward, two steps back. 363: all of them borrow `log` and `ec2`. `let ec2 = &ec2` — see if that's enough to make it happy. Probably not. Yeah, that is a little bit of an issue. So specifically the logger — the issue here is we have the logger that the user has set to be used, and we need to carry it along. We don't really have a way of expressing that, though, except by passing the logger along as well with all of the other values, which is pretty sad. We could make an `Rc` that holds the logger, but then we'd have to clone it into every closure, which is also not great. Hmm. I wonder... we may just have to pass it along. That's pretty sad. So we'll basically, here, map `(r, ec2, log)`, and we'll have to do that at the end of everything. That's terrible. Move closure — I was hoping we wouldn't have to do that, but it looks like `ec2` and `log` we have to keep carrying along. This is definitely pretty tedious. Here somewhere... slightly more happy with me now. 383: "closure may outlive the current function" — it borrows `cluster`, which is `self.cluster`; we basically need to deconstruct `self` here. Where's my Tsunami? Deconstruct all the values. So one way we can actually get around this is by deconstructing: anything that has `self.`
should just use these variables instead. 439, because it uses `cluster` — we're really going to have to move into all of these. 466, 511 uses..., 708 uses... what? 463, a capture of `log`. I wish there was a nicer way to thread these through without having to do it entirely manually. 471: it borrows `placement`. I guess all of these futures now need to... okay, that's fine; that's going to fix a bunch of issues. Luckily it does seem like we're getting many at the same time. So this — I can't steal `ec2`, because `ec2` is used inside of here. How are we even going to... oh, I'm going to have to collect here.

So the problem here — unless the EC2 client is `Clone` — this is actually sort of interesting. What we're doing here is creating an iterator of futures, and we're passing that iterator to `join_all`. The issue is that the iterator needs access to our EC2 client, and over here it also needs access to our EC2 client. But because all of this is asynchronous and lazy, if we give the EC2 client into this future, then we can't also give it along here, because here it's already in the future. The problem is the borrow checker doesn't know that by the time this map is called, the future will have resolved, so it thinks we're doing things we're not allowed to do. I wonder if the EC2 client is `Clone`... That's awkward. Well, in that case, the thing to do here is to collect this. The reason is that if we collect it, then all of the futures will be constructed immediately, and since `ec2` does not bind its own lifetime to the lifetime of the future it returns, this means `ec2` can be borrowed just as a reference in here — it's not moved into the closure — and that means we're allowed to move it afterwards. It's a little bit awkward, but... I may need an `into_iter` here — hopefully not. So that should fix a bunch of those. 496: it borrows `name` — that's pretty awkward. So we can move `name` into here, I guess, and then we can map. 510: "partially moved value: setup" — oh yeah, that's true; I'm just gonna have to do `setup.setup`,
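The collect trick has a runnable std-only analogue: build everything that borrows the client eagerly into a `Vec`, so the borrow ends immediately and the client can be moved afterwards. `Client` and `request` are hypothetical stand-ins for the EC2 client:

```rust
// Stand-in for the EC2 client; its methods only need &self.
struct Client {
    name: String,
}

impl Client {
    fn request(&self, id: u32) -> String {
        format!("{}:{}", self.name, id)
    }
}

fn build_all(client: Client, ids: &[u32]) -> (Vec<String>, Client) {
    // `client` is borrowed only while this Vec is being built — collect
    // forces every construction to happen now, ending the borrow...
    let pending: Vec<String> = ids.iter().map(|&id| client.request(id)).collect();
    // ...so we're free to move `client` out again afterwards. Had we
    // kept the iterator lazy, the borrow would still be live here.
    (pending, client)
}

fn main() {
    let (pending, client) = build_all(Client { name: "ec2".to_string() }, &[1, 2]);
    println!("{:?} from {}", pending, client.name);
}
```

In the stream, `pending` is a `Vec` of futures handed to `join_all`; the point is the same — eager construction decouples the lifetime of the borrows from the lifetime of the resulting values.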
and `user` is `setup.username`, so that we can then use them. 449: it borrows `log`. Okay, so here we're back to passing along `log`. This is gonna be `log` is `log2`, because luckily for us `log` is `Clone`, and so this means we're still allowed to use `log` outside of the closure. 442, from here, this can move... I don't know how to fix this, though, because it's sort of like I want to give a reference to this closure that I know will be valid until the future executes. 547: what? "The flat_map above also uses log" — but it doesn't move the value; I feel like that's not true. For sure I can do this, I guess, but it shouldn't be complaining about that. 549 is gonna complain about the same thing. I'm just gonna get rid of this thing that it says doesn't need to be handled. "Does not live long enough" — I mean, that's probably true. Oh, and this is also gonna need this and `ec2`, which means this is gonna have to map and move... it just never ends, does it? `log` is of type `slog::Logger`... this closure takes one argument, which is of type... slowly but surely. Do I need to declare the type? Nope. So it's saying now that it expected... in the error case... this can still be `r`, this can still be that, this has to be... oh man, that's gonna be terrible; I don't even want to think about it. Yeah, that's not great.

The problem here is that we've set this closure at the end to run if this future ends up resolving into an error. So we're gonna set up all the machines, and then, if anything fails at any point, run this closure. But how do we get `ec2` and `log` to that closure? Well, `log` we can get pretty easily, because `log` we can just get with `log.clone()`, so we don't actually need to pass in `log2`. But `ec2` is tougher — unless we're willing to make a new EC2 client, I guess. Which maybe we are? Yeah, maybe we are. Hmm. Because we could totally do this, right? We could, down here, make a new EC2 client right in here, and that way this could just take the error
that it used to, and everything would be great. The concern here is the provider — I think we've already used the provider, so I don't think we're allowed to move it again. Do we have fewer errors, though? Yeah. "Capture moved the provider" — so I think we would have to require that the provider is `Clone`. Actually, what does `ProvideAwsCredentials` require? It only requires `&self`. What about this thing that looks reasonably complicated? It does not implement `Clone`, but that does mean we could say the provider needs to be one of those guys. Wait, are references `Clone`? I think references are `Clone`, because if so we can just do this, right, and then this could be `provider.clone()`. "Trait bound EnvironmentProvider..." — right, but they don't implement `Clone` for any of these guys. That's awkward. I mean, ideally we don't want to make another EC2 client here; they make it really awkward, because we don't have a good way to keep multiple of these around except by connecting multiple times, and we really don't want to thread this through every error.

So, okay — what's a nice way we could get around this? We could use an `Rc<RefCell<...>>` here, because that way the termination could get its own... it's really ugly, though. Really ugly. So what we could do here is say `ec2` is `Rc::new(...)` — ooh, yeah, let's just do that. So we say that this is `Rc::new(...)`; then everything we had so far should be fine, except we can also do `ec2.clone()`, and now `ec2` can be `ec2.clone()`. So basically, because we're wrapping it in `Rc`, we're making `ec2` cheap to clone, and the reason this is okay in this particular case is that all of the methods on the EC2 client take `&self`. In fact, if we were really lucky — ooh, what about the logger here? Does it take `&self` or `&mut self`? It takes `&self`. What this means is that we could have an `Rc<Logger>` as well. The problem is that all the moves don't do the clone — that's the issue.
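The `Rc::new` trick is runnable with std alone: wrap the client once, then hand each `move` closure its own cheap clone of the handle. Since every method only takes `&self`, shared ownership is all we need. `Ec2` and `terminate` are stand-ins for the real Rusoto client:

```rust
use std::rc::Rc;

// Stand-in for the EC2 client; note all methods take &self, which is
// what makes shared ownership through Rc sufficient here.
struct Ec2 {
    region: String,
}

impl Ec2 {
    fn terminate(&self, id: &str) -> String {
        format!("terminated {} in {}", id, self.region)
    }
}

fn share() -> (String, String) {
    let ec2 = Rc::new(Ec2 { region: "us-east-1".to_string() });
    // each closure gets its own handle; cloning an Rc only bumps a
    // refcount, it does not clone (or reconnect) the client itself
    let for_setup = Rc::clone(&ec2);
    let for_teardown = Rc::clone(&ec2);
    let setup = move || for_setup.terminate("i-1");
    let teardown = move || for_teardown.terminate("i-2");
    (setup(), teardown())
}

fn main() {
    println!("{:?}", share());
}
```

If any method needed `&mut self`, this would have to become `Rc<RefCell<...>>` — the "really ugly" option mentioned above — because `Rc` alone only hands out shared references.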
So we'd still end up with this painful thing of having to clone it for every instance, which is all sorts of painful. I really hoped we'd manage to finish this up — we've been going for like five hours. In theory there's not that much left, but it's just a constant amount of pain. 608: `id_to_name` is not okay, because we move it into here... we should move it into... oh, it needs to modify `id_to_name`. Yeah, that's a little weird. Does it also need to modify the `setup_fns`? No, the `setup_fns` are set up there. So why does `id_to_name`... oh, because one is the spot-instance-ID-to-name mapping and the new one is instance-ID-to-name. Oh, why is that needed? I think this is just a new `id_to_name` — really this is `id_to_name` as a new hash map, and here, `id_to_name`, we're going to collect, and then we're going to break with `(v, id_to_name)`. And here — this is not the same `id_to_name` as we had above. This is just going to keep coming up, isn't it?

I realize that this particular piece is not particularly interesting to watch, so I wonder... I think maybe I should finish the rest of this offline, because this is a fairly mechanical process and not something that I think is particularly useful. Let me know if you disagree. In theory there's not that much more to it now: we basically have all the structure — all the things we need to do in order to make `run_as` be a future are done — and now we're just massaging the innards to fit all the borrow checker's rules. If you find that to be useful, then I'm happy to keep going, but I think that's probably not the case. It's telling me at 609 that it's not allowed to use it there, because it's moved at 535... it's so tantalizingly close, though. Yeah, it's that, right, isn't it — this `Rc`, which we sort of want to... I think I will probably end it there,
then. I realize it's a bit unsatisfactory to end without having something running, but we did get to the halfway point: exposing something entirely synchronous on top of the asynchronous inner parts is done, and exposing the asynchronous parts is what we're working on now. I think this particular threading part is not all that interesting, so what I'll do is probably end the stream, finish this offline at some point, and then announce when those changes have been made and highlight the changes we had to make since last time. I think that's more useful than continuing to walk through it here — it's just going to be a bunch of checking compile errors, checking borrow check errors, and going back and forth. There will be another stream at some point; I don't know exactly when. It will probably be a second stream on async SSH, which some people have been asking for. That one is going to be more technical in nature, because there we're actually implementing futures — in particular `AsyncWrite` — rather than, as in this case, just threading a bunch of futures through. If you want to stay tuned, feel free to follow on Twitter or Patreon or whatever. It's a little sad not to get further with this, but such is life. Okay — well, I'll announce the next stream, and thanks for hanging out and writing code with me! Oh, and I'll upload it, of course, as usual.