All right everyone, welcome back to yet another stream. I'll do the spiel from the beginning, just before we get into the stream. So, I'm John. I do a bunch of these live coding streams on various different things, but in general the goal is to produce somewhat more intermediate or advanced Rust material, for people who have some beginning grasp on the language, or are coming to it from a programming background, and want to learn or see the language being used in more advanced ways, or to build real stuff. I have a Patreon page where I post updates whenever I do video streams or recordings of past streams, and where I solicit suggestions for what streams to do next. So if you want to follow me there, you can do that. Otherwise, I also post basically all the announcements to Twitter, so either is fine. I post all of the recordings to YouTube afterwards. So on my YouTube channel there's a playlist for all the Rust live coding sessions we've done. It's really slow today. This playlist has all the old videos, and I'll keep adding new ones as long as we keep recording.

This stream is a little bit different, and I don't quite know yet how it's gonna work out, but I have this idea: over the course of my Rust programming, there are lots of times where I sit down and want to use something that someone else has written. I don't always want to write everything from scratch, even though it's fun. Every now and again you end up using software other people have written, either as a library or a binary; you need to depend on it in some way. Or even if you just want to expand your knowledge of Rust and contribute to something someone else has built, what usually ends up happening is I end up in some GitHub or GitLab repository and... now what?
And I figured it would be pretty cool to try to show what that experience is like: coming to someone else's codebase, figuring out what's going on, figuring out how the crate is organized, and also how we might contribute to it. And, sorry, someone in chat says they can't imagine not using other crates. It's possible; often you just don't want to do it. For example, one of the things I've done a lot is write relatively primitive crates, things like data structures, where you often end up not depending on any other crates, or very few, or only for testing and benchmarking. But in any case, I sat down and figured: how about we just do a stream where I do open-source contributions? Which is a little bit terrifying, I'm not gonna lie, because it means I'm gonna dive into some codebase I know nothing about and have never used, and try to modify it live. We'll see how it goes. I outlined this on Twitter and got a bunch of responses from people who seemed pretty excited about it. The basic idea is: we're gonna look at some crates that people have suggested, look at how each crate is organized, try to figure out what it does (look at the readme, the documentation, how the source fits together), and try to give some feedback, either for the authors if they're watching or just for our own sake: which things do we feel they've done well, and which things would you perhaps do differently?
If you were to write this project from scratch. And then we're gonna try, though we'll see how well this works, to contribute to the code in some way: whether that's improving the documentation, improving the readme, adding some feature, or looking for a known bug. It depends a little bit on the crate, how sophisticated it is, and how well-developed it is; some of these are relatively early in their development and some are further along. So we're gonna look at the bug tracker and see what we can find there. My hope is to spend maybe an hour to an hour and a half per crate, but it depends a little on how extensive the changes we want to make are.

Remember that this sort of ends up being pretty collaborative. I know very little more about these crates than you do. So if you see something you think we should change, something we should point out to the authors, something you'd like us to try to change, or something you don't understand and want to know why it's there, feel free to ask in chat, and I'll try to address it and go along with you. We'll see how this goes. One thing I would like: after the stream is over, I'll take a few minutes to sum things up with you and see if I can get feedback on how you think this format worked, whether you found it interesting, whether there are particular crates you'd like to see me do next, or whether you'd even like to see me do this again. We'll see; I think it can be fun.

I also want to point out that Nathan linitz just donated a monthly amount on Patreon, which I think is really cool. This is a person I have no idea who he is, but he decided to give me money, and that's amazing.
Thanks. Let's see, before we start there were some suggestions. How about reviewing regex? Okay, so I have gotten a couple of recommendations for really large crates, or really sophisticated and mature crates, like cargo, regex, Servo, and even just the compiler. While that would be interesting, that's a very different sort of monster. What I wanted to do here was look at things that are still in flux and still in development, where there's some hope we can understand what's going on in 90 minutes. That's not true for any of the large, very sophisticated crates, right? Even regex, which is a somewhat smaller crate, is a really sophisticated beast of a crate by now, even though I happen to know it's pretty well developed. Diving into that and doing something useful in 90 minutes would be pretty hard. One thing we could do is a stream where we pick a crate like that and just try to understand it, without even trying to contribute anything. We could do that; there would be a lot less programming in it and more just reading code. But we'll see; it's not a bad idea, it's just hard to balance.

Do I already have crates picked, or do I need suggestions? I do have some crates picked. I ran a Twitter poll over the couple of suggestions I got, and this was sort of the winner.
This is part of GraphQL implemented in Rust, and specifically the hyper bindings for it; we're going to look at that. We're also going to look at tokio-beanstalkd, which is a Tokio implementation of the beanstalkd protocol, which is similar to but simpler than the ZooKeeper stream we did in the past. And then Argonautica, which is a password hashing algorithm and an implementation of it in Rust. These are three relatively different things, which is why I feel like they match up pretty well, and they all got a decent amount of interest. There are some others too that maybe we'll get around to later. If you have other suggestions, ping me on Twitter or something and I'll try to queue them up.

Yeah, Nathan, I saw, I appreciate it. It would be great to get to 150, but that's also a bunch of work. I really do appreciate it, but this particular week a lot of stuff is going on; there was a deadline. But it worked out. Thanks, though.

Let's see... ooh, author of the... okay, so we have the author of this crate here. Fantastic. You can of course still suggest more crates. We probably won't get around to them in this stream, but if you put them in the chat or tweet them at me, I'll add them to my ever-growing list of things I want to cover in streams. Sadly,
I only have so much time. All right, let's dig into juniper_hyper first. In the same spirit as all the previous streams, I've tried not to do any work ahead of the stream. This is partly so you don't feel like you're missing any part of the experience, and also so you can observe my entire process of going through something new, as opposed to me having fully prepared before we sit down and knowing where everything is. I don't think that would be helpful to you, and it's sort of disingenuous, because that's not really what the experience is like. I want people to realize that programming doesn't just happen; you need to invest some work.

All right, so juniper_hyper. Let's find out what Juniper is first. So, Juniper: GraphQL. Okay, so GraphQL is some kind of data query language, and there's a Rust implementation. "Does not include a web server." Okay, so it's sort of like a database, but without a web server or any kind of API, and then there are "integrations for hyper". All right, I'm gonna take a wild stab here, just judging from the name: it seems like you set up graphs of data structures, you have some way of querying those data structures, and the hyper thing is going to be a wrapper around that so you can make requests to the stored data over the network. Let's see if I got it right.

Okay, it's a query language. Well, this is pretty unhelpful. How do I use it? Give me an example. Examples... there is no examples directory. What about here? No. Getting started, quick start... great, quick start. Yeah, okay, so you set up structs, they have relationships to one another, and yeah, you declare objects and you can search over them. You're querying for data in some structured manner. All right, now the question is what juniper_hyper is, because that's the thing we're actually looking at.
We're not gonna look at all of Juniper. Someone's pointing at a crate for pinning... yeah, can you ping me on Twitter instead, because this will get lost in the chat. Why not ask the author here? Well, we could, but usually you're not able to ask the author. Also, I think he's the author of just the hyper part, but I might be wrong.

Okay, so let's see what we've got here. Oh, an example. Good. An example is a good start; let's look at it. Oh, futures-cpupool? That looks weird. I don't think futures-cpupool should be here. Why would you need to build a CpuPool? This is using hyper, so this should just be Tokio; that way you wouldn't have to clone the pool in here. Regardless, this is setting up a hyper service. For those who aren't aware, hyper is a Rust web framework, for both clients and servers, and it's built entirely asynchronously around futures and Tokio. The idea is that you set up a bunch of services, and every service is responsible for routing the requests it gets. You're basically given a request and you have to populate a response, and it's all asynchronous. So a service is just a thing that can reply to requests. In this case, yeah, here you see this is basically doing routing: it's looking at the method and the URI and then doing some stuff in response. Okay, so this example is then basically the hyper part. Why is that in the example? Okay, let's look at the docs.
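To make the "service" idea concrete, here is a deliberately simplified, synchronous, std-only sketch of it: a thing that turns a request into a response, routing on method and path much like the juniper_hyper example does. All the types here are illustrative stand-ins I made up for this sketch, not hyper's real (async, futures-based) API.

```rust
// Simplified stand-ins for hyper's Request/Response types.
// (Assumption: these are illustrative only; hyper's real types differ.)
#[derive(Debug, Clone, PartialEq)]
pub enum Method {
    Get,
    Post,
}

pub struct Request {
    pub method: Method,
    pub path: String,
}

pub struct Response {
    pub status: u16,
    pub body: String,
}

// A "service" in hyper's sense: anything that maps a request to a response.
// Real hyper services return futures; here we return the response directly.
pub trait Service {
    fn call(&self, req: Request) -> Response;
}

// A router like the one in the juniper_hyper example: match on method and
// path, dispatch to a handler, and fall through to 404 for everything else.
pub struct Router;

impl Service for Router {
    fn call(&self, req: Request) -> Response {
        match (req.method, req.path.as_str()) {
            (Method::Get, "/") => Response {
                status: 200,
                body: "GraphiQL UI".into(),
            },
            (Method::Get, "/graphql") | (Method::Post, "/graphql") => Response {
                status: 200,
                body: "graphql handled".into(),
            },
            _ => Response {
                status: 404,
                body: "not found".into(),
            },
        }
    }
}
```

The point is just the shape: routing lives in user code, and the crate's job is only to supply the handlers that the `/graphql` and `/` arms delegate to.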
Okay, so juniper_hyper provides these two functions. Interesting. I wonder why just those two; that seems like a weird thing for this crate to provide. I would expect a crate that's supposed to provide web bindings to provide web bindings, whereas it looks like from this example the user has to come up with the actual routing, right? Because the routing is in the example code; the idea is that it would live in your code. So I guess the question is: what are these functions? Because it's routing these two different things. So it looks like GET /graphiql... okay, these two names are not clear. What's the difference between them? I would say the first thing that's lacking in this crate is documentation, so in theory that's something we could provide.

"graphql just forwards the request to Juniper." I see. But if you're just forwarding, then why is this crate necessary? I guess I want to know what this function does; let's go to the source. So graphiql just uses Juniper directly to render an endpoint and construct a body. Okay, so this is extremely straightforward, right? This method is just calling whatever that method is, and notice specifically that that one is in the juniper crate, not in juniper_hyper. So this wrapper seems pretty straightforward. I feel like it should be named something else.

"Twitch doesn't have communities anymore." I don't have a game category... oh, I mean, my stream isn't game programming. "GraphiQL is a UI for the GraphQL back-end." Yeah, but why... I see, so the response... so what you're saying is this call here actually prints some HTML of some kind.
We could probably just run this. I like exploring the high-level part of the crate first before trying to run anything myself, because it might be that we don't even need to run it. All right, so I guess graphiql just provides you with the GraphiQL UI. All right. So what does this other thing do? Because that's where all the seemingly interesting endpoints go, right? You go to /graphql and something else happens. So this takes a CpuPool, which I think is really weird. All right, so this further matches on the method. Fascinating. I feel like some of this stuff should probably go into a wrapper that's provided by the crate, so that might be something we could build.

So this takes... a root node, because a root node is the root of the data structure we're searching over. ctx is db, which is the database. Okay, so I don't know the difference... okay, so the database is the entire collection of objects and the root node is the one we start searching from, would be my guess. And then pool, unclear why they need a pool, I guess we'll find out later. And then they're passing the request in to this graphql function. So again, graphql is the main thing provided by this crate.
It seems like it matches on the method, and if it's a GET... I see. Okay, so mainly what this crate is doing is mapping HTTP requests to GraphQL requests, and then taking the responses and mapping them back. If you get a POST, you essentially parse the body and turn it into a request. So here, for example, notice it's basically taking the body of the request, turning it into a string, parsing it as JSON into a GraphQL request, and then doing this execute_request business, and the response that comes back... oh, I see, so execute_request is really the thing we want to look at.

I wonder why this uses pool.spawn. Why does this not use Tokio's spawn? Why are they using this futures-cpupool? It seems odd. request.execute... so request is the GraphQL request, okay, that's fine; that's what we parsed it into. Okay, and then it just does a new_response, which is presumably mapping a GraphQL response into whatever web response we want to give. So where is new_response? Okay, great, it just creates a response. So the main job of this crate is really mapping between HTTP and the underlying GraphQL. It also looks like all of this is a single file, which is a little sad.

"The CpuPool is needed because hyper is async and Juniper is blocking." So, Tokio has tokio_threadpool::blocking, which I think fixes that problem for you. That's another thing we could do: use Tokio's blocking to get rid of the futures-cpupool and have it be Tokio all the way through, to make it a little nicer to work with. You would also avoid having two thread pools; currently you end up with both the hyper thread pool and the futures-cpupool, which is a little unfortunate.

"Currently the stream is not easily discoverable for anyone who's not already aware you're streaming. There's a Programming category now." Sure, let's fix that. Actually, should we do that?
Sure. Well, what's the worst that could happen? Sorry, you want the category here to be Programming... great, update. Great, it's now a Programming channel. Nice.

"Not really sure what the difference was between CpuPool and Tokio's thread pool." So, Tokio's thread pool does work stealing, and it's also where hyper will be executing. Because hyper is based on Tokio, hyper spins up a bunch of futures and normally spawns them on a Tokio thread pool. So by using futures-cpupool you have two thread pools running: the CpuPool that you explicitly constructed, and also the thread pool that Tokio started. In particular here, when you do Server::bind, that's a hyper server, and if you look at... I mean, unless you're depending on a very old version of hyper... no, 0.12. Okay, so if you look at Server... Server::bind returns a Builder, right? So what do you do with this Builder? You call .serve. So if you call .serve, the thing you get back needs to be spawned somewhere. Currently in this example, the server you get back, you run with rt::run, and rt, so hyper::rt, if we go back here, rt is the default runtime. The default runtime for hyper is a Tokio runtime, and a Tokio runtime starts up its own thread pool. So what you end up with is: this is already starting a thread pool for managing all the hyper futures, and then in addition you're creating a separate thread pool for doing Juniper request processing. This is no longer necessary, because if you look at Tokio... Executor, where is the... oh, I guess this is probably now... that's a good question. There's a blocking function now that gives you a future that runs a blocking operation. This function.
Yeah, so this Takes a function and gives you back a thing you can pull and so basically it gives you back a future But it makes sure that it it like still reuses the The various threads that are already in the pool to execute the job So this is probably what you want rather than have a separate CPU pool And this might be the the conversion we want to make on this crate The creative category. Okay. Yeah, that's fine. I think I think the change I made was correct, right to the twitch category All right So so blocking seems like a good candidate here because it would avoid spinning up to thread pools And let's see the other thing it's doing is really just mapping between hyper requests and graphql requests and GraphQL responses and hyper responses, right? So graphql execute Really just Does an execute and what does it do with the response? So it does execute and then graphql response is one of these Okay, so this suggests that really it's just like Jason encoding the response probably All right, so the crate itself seems pretty straightforward. Like I think this is basically the entire contents of it the thing will want to So I think that the two things we could do here One would be to get rid of this extra pool, which I think is a pretty nice change to the crate The second would be to add some documentation to it, which like is maybe interesting, but it's unclear The third thing we could do here The third thing we could do is this business we could probably wrap up instead of it being an example In fact this is part of this example code should probably be in the documentation too, but instead of this just being in the example it seems pretty reasonable for This crate to provide this provide this service explicitly Like there's no reason for every user of this crate to have to write this code, right? 
I guess the question is: what if they have other things they want to match on? Maybe... I guess this does let you change the URL pattern. But it might be nice to provide a shortcut for users who just want the bindings. In fact, I almost wonder whether this shouldn't be an example but instead just a straight binary, maybe. In a PR? Ooh, let's look at the PRs. Hyper integration... yeah, see, I don't think it should... if you pull, so that's separate... oh hey, I know DGC, or rather, I have contributed with him on a different crate. Yeah, okay, so this is the observation, right? Okay, so maybe the goal then should just be to simplify the example, so that they can write whatever integration they want. All right, let's give this a try.

So I guess: fork. PR 230, that's the one I had open, right? All right, we have a fork; git clone this. Usually when I'm contributing to a crate, I like to have two remotes: origin pointing to my fork, and one called upstream. And then I point master to upstream, so master is not pointing at my fork, and I make a new branch. So in this case what we're gonna do is: remove-cpupool... and then, juniper_hyper, let's do a cargo check. Let's see if we can build this thing in the first place; that would be handy. Okay, so the real question we need to deal with now is... if I can get back... oh great, we can't build it. Fantastic. All right.

So we basically need this function to not take a CpuPool, right? Instead it should just give you back a future. And this could probably be impl Future; I don't think this needs to be boxed. It depends on how performance-critical this is, but let's focus on getting rid of the pool first. So here's what we're gonna do. Oh no, it's almost rustfmt-formatted... nope, oh, that formatted everything. Okay, so Juniper is not rustfmt-formatted. Fine.
We'll just ignore that formatting change later. All right, so over here, I guess we need to change Cargo.toml to get rid of futures-cpupool and make tokio a dependency. Making tokio a dependency is actually fine, because hyper already pulls in tokio, so we're not actually adding a dependency; we're making an implicit dependency explicit. It should be fine. So now there will no longer be a futures-cpupool; instead it'll be Tokio always.

"The hyper crate itself should be rustfmt-formatted." It mostly is; I'm running the latest nightly, which is probably why it's complaining. Specifically, the only real diff is this business down here; that's the only diff it makes. So that's probably a rustfmt nightly change, and I would not worry about it. Now it's obviously not gonna compile.

Okay, so we need to figure out how we want this to work. Specifically, I guess the concern is that when you call graphql, we're gonna have to do something that's blocking, which is fine; we're gonna use Tokio's blocking for that. Which means we have to pull in tokio-threadpool, because that's where the blocking function lives. Tokio has been moving in this direction: instead of having all the things exposed through the tokio crate, they have lots of other crates, and tokio just exposes the real essentials that are not expected to change. The reason is that they can then improve the other crates without having to issue breaking changes to tokio itself. So in this case we'll need tokio-threadpool 0.1.7.

So yeah, the question is what we're gonna do about graphql. I think it does not need to take a thread pool. Instead, what it's gonna do is just call... so I guess execute_request is the part that's blocking.
So I think all it really has to do is return a blocking... these arguments are not pretty, but I guess it's fine. I think this can also just be impl Future; it does not need to be boxed. Yeah, so down here, I guess both of these use execute_request, so we can just modify execute_request. Right, graphiql... this can also be... oh, I guess the reason why impl Future is a little annoying here is because we have different futures being returned depending on whether it's a GET or a POST or anything else. That's probably why it was originally a Box.

We can get around this using Either. So, futures... it's futures::future::Either. So we would do Either::A... the double A is a little annoying, but I'll show you in a second why it's necessary. So Either is... there are only two options, right? It's left or right, or A and B, and in this case we want to return three different things. Specifically, this is one, this is two, and this is three, and the way you get three from two binary choices is you nest them: the type of A contains an Either, and B is just this. So now that can be impl Future, and this can just be impl Future directly.

Anyway, graphiql... HTML response... so isn't this a blocking call? I guess we don't really know, but to me this seems like something that might block. But given the current code, I guess... let's look at graphiql_source. Is it blocking? Because if it is, it should be wrapped in Tokio's blocking too, right? We should mark this as also being a blocking future. So now execute_request is gonna return something that's an impl Future.
Okay, that's great. So here impl Future was used already... so my guess is the reason for the Box is that they tried impl Future, it didn't compile, and they went "fine, let's box it". The reason is that impl Future only works when there's a single future type being returned. If you try to do something like fn foo() -> impl Future, and then you do if a { some future } else { some other future }, this will not work, because impl Future says there's exactly one type here, I just don't want to name it. But in this case there is no single type for the return value, because these two futures have different types; even though they both implement Future the same way, they're different things. And the way you get around that is with Either::A and Either::B: now there's only a single type being returned, an Either, which is generic over its left and right future. All right, great, so we got rid of a heap allocation; that's good. Yeah, Either is fantastic.

Let's see what else we have. Okay, so we got rid of the pool from here, and now the question is what we do down here. So this doesn't actually have to be lazy anymore. This is now really just... why does this put the wrong number of parentheses somewhere? Probably... yes, here, and also here, and also somewhere else, line 52. An extra comma, and another extra comma. Great. So that was interesting.
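The Either trick can be demonstrated with plain std by using Iterator in place of Future, since the problem is the same: two branches with divergent concrete types behind one impl Trait return. The Either enum here is hand-rolled for illustration; the real one for futures ships in the futures crate.

```rust
// Hand-rolled Either, mirroring futures::future::Either.
// (Assumption: illustrative; shown with Iterator so it runs on plain std.)
pub enum Either<A, B> {
    Left(A),
    Right(B),
}

// Either implements the trait by delegating to whichever side it holds,
// so both branches of a function collapse into one concrete type.
impl<A, B, T> Iterator for Either<A, B>
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
{
    type Item = T;
    fn next(&mut self) -> Option<T> {
        match self {
            Either::Left(a) => a.next(),
            Either::Right(b) => b.next(),
        }
    }
}

// Without Either this would not compile: the two arms have different
// concrete types, and `impl Iterator` means "one type I won't name".
pub fn evens_or_range(evens: bool) -> impl Iterator<Item = u32> {
    if evens {
        Either::Left((0..10).filter(|n| n % 2 == 0))
    } else {
        Either::Right(0..3)
    }
}
```

Nesting an Either inside one of its arms extends the same trick from two branch types to three, which is what the double Either::A in the stream is doing.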
Okay, so the code now actually compiled even though there's no pool, and the reason is that now we're just executing this request directly on the current thread. You should think of Tokio like this: it spins up a pool of threads, and those threads are supposed to keep processing futures. What happens if you put an expensive blocking call somewhere? Imagine you do an expensive file system operation, or even just a loop that spins forever. Then that one worker thread in Tokio cannot process any more futures. So imagine you had four futures and all of them did a spin loop, or something else that blocks. If you only had four cores, there would only be four threads in that pool, and if all of them are spinning, none of the other futures you have get executed. So if another HTTP request comes in or something, it will not be processed.

The way we deal with this is to tell Tokio that a given worker is now blocking: it's doing something such that it won't be able to handle futures for a while. Tokio responds by essentially spinning up or keeping some extra threads, to make sure the rest of the world continues working. So in this case what we're really going to do is: this is going to return us a future, and then on the response of that we're going to do the mapping that originally happened. I guess... does this even do any... I think this is the map. I don't think this needs to be an and_then; this is the map.
Oh, I guess actually we probably want the blocking stuff to happen all the way down in execute_request. So we're going to have execute_request return an impl Future. So, here... this is now going to return an impl Future where the item is this and the error is BlockingError. Now, this is not something we want to expose to the user. In fact, why is this pub? That makes no sense; the request type is not pub, so this shouldn't be either. Yeah, so if you try to execute a request, what we're really going to do is execute it and return a future that will eventually resolve to it having finished. Oh, I see, this does multiple blocking requests. Fascinating.

All right, so this business... that's a juniper GraphQL... okay, so this is the actual blocking call. So here is where we're going to put tokio_threadpool::blocking, which takes a closure to execute, and then we're going to map that into a single response. Great. If you get a batch of requests, then we're going to do... I don't know if it's going to be okay for this execute to be on a reference to self. I guess this is where we have to find out what the Juniper API is like. So where is it... GraphQLRequest... yes, graphql, http, maybe here, request... oh. Execute returns a response that's tied to the request. That's awful. Yeah, that's kind of awkward, because the problem here, of course, is that this future is just spun up in the background. Hmm. Specifically, execute returns something that borrows the request, and I wonder if blocking needs to be 'static.
Maybe not. It does not; specifically, the closure does not. Hmm, this might not compile, which is a little sad. Let's see. Why not... "wrap the whole thing in a blocking call"? That is basically what we're doing; it's just that it wouldn't help. If blocking required its argument to be 'static, then we couldn't give it any closure, because any closure is gonna have to borrow the GraphQL request. So the execute here is... given the way... why... oh, I see. So really we do have self here. The problem is just: what's the lifetime of the returned future? I see, that's why. I mean, this might actually work; let's try it.

So specifically, if we get a batch of requests, what we want to do is an execute for all of them. There are a couple of ways to do this in Tokio if you want to essentially wait on a bunch of futures. Let's see, future... join_all I think is the one we want, and join_all, I think, takes a stream... no, it's a Vec. Okay, great, that is what we want. So what we're gonna do here is futures::future::join_all... I guess this comes from, if we just use the tokio::prelude, that brings in future as well. So we're gonna do a join_all across requests.iter().map(...), and this is now gonna be a blocking call, so we don't need to collect anymore. The join_all is gonna produce a vector for us that we can then just map into the final batch, and this is now an Either, right? Because the two different match arms are different types.

It's probably gonna complain... ah, line 85. All right, it's gonna complain that the error here... why is this generic over the error? I don't think that's true; this is specifically a tokio-threadpool BlockingError. Now, down the line this error might have to be more refined. Specifically, I think that error type is gonna have to be something derived from the Juniper error type, so that you can also return Juniper errors, right?
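The join_all pattern being used here, run one job per element of a batch, wait for all of them, and collect the results in order, can be sketched with std threads instead of futures. This is a conceptual analog (futures' join_all drives all the futures on one executor rather than spawning threads), and join_all_threads is a made-up name for this sketch.

```rust
use std::thread;

// std-only analog of futures' join_all. (Assumption: illustrative sketch;
// the real join_all polls futures, it does not spawn threads.)
// Run one job per element, wait for all of them, and collect the results
// in the original order, like turning a batch of GraphQL requests into a
// batch of responses.
pub fn join_all_threads<T, F>(jobs: Vec<F>) -> Vec<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    // Spawn everything first so the jobs run concurrently...
    let handles: Vec<_> = jobs.into_iter().map(thread::spawn).collect();
    // ...then join in order, which preserves the input ordering.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

The key property shared with join_all is that the combined result only becomes available once every element has finished, and the output Vec lines up index-for-index with the input batch.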
I guess the question is whether Juniper's execute — yeah, so execute here, my guess is, at some point will return a Result. But I guess the response maybe contains that error anyway. What's it complaining about? It's complaining about a lot of things. All right, let's look at line 51 first. So here it's saying type mismatch: the error should be a GraphQLRequestError. Right, because up here we're making the error type be GraphQLRequestError. So the question is, what do we even want to do here? I think what we want to do is ignore the blocking error and just make it a GraphQLRequestError. Sure, Invalid — I mean, it's not really invalid, but it's going to be something like "no more capacity". This would happen if we already have lots and lots of threads that are executing these blocking requests, and Tokio is saying "I'm refusing to spin up any more threads". You can set the limit on the pool for what you want that limit to be. In fact, we could hoist this — put this further in — but I think for now we're just going to cut it here. Now what? Type mismatch on wait? It's here; it's saying "expected signature" — oh, right, what does blocking actually return? It returns a Poll. Oh, I see, so we need a poll_fn, is what you're saying. So blocking, if I'm reading this correctly — blocking doesn't return a future. It just returns whether it is currently executing that blocking call or not, right?
So it might return you a BlockingError if that thing is not yet ready — if there are no threads available to do the blocking call. And so I think what we want here — so imagine that you have four cores, you're currently executing four blocking futures, and Tokio goes "no, I'm not going to spin up another thread for you"; then the call to blocking is going to return an error. And so really what we want to do in that case is, every now and again, retry it. Essentially we want to poll whether there are available threads, and the way we do that is by using future::poll_fn, which really just tries to poll the future until it succeeds, which is what we want in this case. So here, I guess the question is: what is a BlockingError, then? What is it? Oh, that's unhelpful. Oh — "if the thread pool has shut down". I see, so really this is not "no more capacity", because that would return NotReady. So in this case, this is really "the thread pool has shut down", which I think should never happen. So I think we can just do unreachable!, because this shouldn't ever happen. And so this is going to be a poll_fn around — line 198. Right, so actually we need — it's a little awkward, because we're carrying along the root node and the context into the closure in here, and we might return from this function almost immediately, right? Because we try to put it on the thread pool, and there aren't enough threads, and so it's going to run it later. And so these references need to stay alive for as long as this future is alive. And so what we're going to do then is we're going to have a 'f, and these and the future are 'f, and then we're also going to have to say that 'a outlives 'f. So basically the reference to self also has to live as long as 'f, because otherwise the future wouldn't resolve, right? And now it's probably going to complain about all sorts of ownership issues. So, line 109 here: "request does not live long enough", all these things borrowing.
They don't need to be borrowing. If they're not borrowing, that makes this a lot easier, because now this is no longer a reference, it's an Arc to that; and this is no longer a reference, it's an Arc to that. Now this doesn't have to be 'f anymore, so we're going to move into here. Yeah — mismatched type. I guess this is not going to borrow these. "Explicit lifetime required in the type of root_node" — RootNode takes a lifetime. "request does not live long enough" — well, the request is actually owned, so let's just have this take self instead. This now has to be generic over 'a, but just for the root node. That's a good question. Okay, so the problem here is that the response we're giving back is tied to the request, so we can't produce a result. The reason this is awkward, actually, is because we want the ability to have this freestanding execute-request bit. Because this thing — okay, so let me rephrase: the problem we're running into is that we're trying to execute a request, that's going to take a while, and the response is tied to the lifetime of the request. That means the request has to live for as long as we want to run this thing for. But we own the request, and we want to move it into where we execute it. The problem is that the response is still tied to that lifetime. So to phrase it differently: the thing we get back from executing the request still borrows the request. So if we just returned here, then request would be dropped, and the response would no longer be valid, right?
So I think really the thing to do here is to provide a map. So we're going to consume self. Crucially, the problem here is the response only lives for as long as request is alive. Request is still alive here, when we execute, but the moment we return, request is dropped, so the response is no longer valid. Which means that we basically need to map the response in here, to get rid of its lifetime. So what we're going to do is allow execute to be given a mapping function. And so there's going to be an F and an R, and this is going to return an R. So F is going to be an Fn, and it's going to be given a GraphQL response and have to give us an R. Right — does that make sense? So what we're now going to do is this .map, and same here. And so now, notice, the response is no longer tied to the lifetime of self; this can now be moved into the function. Now, line 199 — the execute function that we had up here somewhere — this is now going to pass in the mapping closure directly. "Expected GraphQLResponse, found type parameter." Okay, so the real way to fix this is just to not have this be a separate method. But I don't really want to pull that out. Okay, so the problem now is our mapping function is over a GraphQL response, but we don't have the GraphQL response until we've resolved all of these, and they're still all borrowing request — although that might be fine.
We might be able to have this just not move into this — not map here. So then the request is still borrowed at this point; this is now an Either A. All right, we already did that, and then the future that comes back from this, we map with map before we end up dropping self. Maybe — specifically, now a reference to the request is going to be passed in there. Yeah, the problem is there's no well-defined owner of request, is really what's going on. Yeah, mixing blocking and asynchronous code like this is always a bit of a pain. Hmm, yeah, I wonder whether it's just really problematic that the — why is the response tied to the lifetime of the request? This seems really odd. Juniper, why have you done this? So, execute... the response — why is the response — what does it contain? Oh. The lifetime is only there for the error. Oh, that's awful. That's a bad choice, Juniper. That's not worth it; errors should be rare. That's awful. You know, that makes this a pain. So notice that if execute's response were not tied to the request, none of this would be a problem. The problem is specifically that the 'a of self here is tied to the 'a in the response. No, because — well, the input bytes are only used if you return an error, and if you're returning an error, then cloning is fine, right? The common case should be not getting errors, so it's fine to make the errors a little more expensive. Well then — yeah, the real problem here is just that the request ends up being dropped. Hmm, but I don't know how to avoid that easily. So the way we could do this is wrap the request in an Arc, because — okay, this case is trivial, because here we just .map GraphQLResponse single. So that part is fine.
I think, because there we know the request doesn't get dropped. All right — after? It can't be after the poll. It's like: match on this, and if it's Ok(Async::Ready(v)) — so if we get the value out, then we want to produce map of GraphQL Single(v), right? So this erases the lifetime, and that's fine. And anything else — if we get Ok(Async::NotReady), then we really just — oh, I guess actually the nicer way to do this would be — that's one ugly future. So the idea is that we try to do this blocking, and if it succeeds, then we immediately map the result, and this is still happening — I guess, here, this is still happening while we have a reference to the request, and so that's all fine. The problematic case is this business, because here we can't pull the same trick, because there are multiple of these requests. So this is going to be an iter, and each — yeah, so here the real problem is we can't call map on the entire batch. Maybe you can convert it early on into a GraphQLRequestError? No, the problem is specifically that this is actually a GraphQLResponse, right? I need to get the value from the response, and there's no way of getting the value, as far as I can tell, which is kind of silly. Okay, so the other way to do this is to specialize this code a bit more and just have it happen inline. So what does — wait, this only looks at the — yeah, this serializes the response anyway. Okay, we can make this a lot simpler by just — we're just going to erase this lifetime. And — sorry — what does this do? So this is: if it's okay, then you respond with this code. All right, so really all this needs to return — let's ignore this map function — this just needs to return a String, which is the JSON encoding. Actually, it needs to return a hyper body, which is the body we're going to include, and it needs to return whether or not it's an error.
I think that's all this code relies on. So if that's the case, then this is pretty trivial, because it's just going to be — here, I guess — we're not going to produce a GraphQL response at all. We're just going to do... whatever value we get back — bear with me here. So we're just going to do this blocking, and if it succeeds, then we're going to do: the body is this, and then we're going to do is_ok — is v.is_ok — I guess we can call this res, and this is res, and this now returns is_ok and body. And then we're going to have to do the same thing down here. Right, so this join_all is a little bit more problematic, but we can pull a similar kind of trick, actually, which is: for each one, we're going to try_ready, and then — not going to give a response; instead — this is missing something; this is missing a future. This — all right, so we're going to poll for all of them; we're going to try to execute them synchronously. The problem now is that what this join_all is going to produce is a bunch of already JSON-serialized responses. It just so happens that, given those, we can easily produce the final JSON string. It's a little less efficient than what we would probably like. What we're going to do now is a map of this. So what the join_all is going to produce is a vector of results.
Where each result is an is_ok and a body, and we need to produce a single is_ok and body. And you can concatenate JSON pretty easily, right? So really what we want to do here is just: is_ok is results.iter()... — specifically, the batch is okay if all of them are okay — and concatenate the JSON bodies as an array, which is what we want to do. Right? Like, what this would have produced, if we did the whole thing and then JSON-serialized it, would really just be a serialization of a Vec of them. And the serialization of a Vec of things is just comma-separated, starting with an open bracket. And so we just need to modify the body to do something like: body is results fold into body, and for each one we're going to have, I guess, an accumulator and a body, and we're going to have to produce what the new body is going to be, and the initial body is going to be — actually, this. Right, so we're going to start out with an open square bracket, and then we're going to just add all the things with commas in between, and then finally this is going to return is_ok and body. So we now need — docs.rs, hyper — so, hyper Body. What is this? Oh! Ooh, ooh, that's even better. Yeah, let's do that. All right, so watch this — this is even better. So (tx, body) — it's going to be hyper Body, and then we're going to send one of these. What do I have to send on the sender? Oh, it's not an unbounded sender. That's awkward. I can only asynchronously add to one. All right, that's fine. Sure, that's fine. So we're just going to make the body up here.
This is not going to be a map down here. So we're going to start out our body up here. Sure, so we're basically going to be streaming — we're going to be streaming the body into this. So, this channel — we're going to stream the body as the results come in. So here, what we really want to do is, as these come back, do something like — and then — results — hmm, is_ok is going to be a little bit trickier this way, but the idea would be that instead of doing a join_all, we're just going to — I guess we just want to wait for all of them; we don't really care about the order. I think — no, we do. We have to produce the responses in order, too. So the real way to do this is then probably — what's the thing to turn — we really want to turn an iterator into a stream, I think. Really what we're going to do here — which we can do with futures' stream — deprecated — iter_ok? Yes, we want iter_ok. So what we're going to do, instead of this join_all business — because we don't actually need them all at once, just in order — is futures stream::iter_ok; that turns this iterator into a stream. So that gives us an IterOk, which is going to be a stream, and on the stream we're going to map each item, each request.
I guess it's going to be an and_then, so for each value we're going to do this blocking business, and then for each result we get back — so for each result that finishes the blocking — this does mean that we're not going to execute them in parallel, which is a little sad, but it might be fine. But I guess, if this is in fact an ordered batch, then it's not okay to execute them in parallel anyway, so this is fine. So we're just going to iterate over the requests one by one, which produces an asynchronous stream; for each one, we're going to do a blocking call; when the blocking call finishes with the result, then we need to do something to manage this is_ok value, and then we're really just going to forward — I guess it's going to be inspect to do some business — I don't know how we're going to deal with this is_ok yet — and then we're going to .forward into tx. Right, does that roughly make sense? So we're going to iterate over all of the — essentially, this is going to be a stream over all of the GraphQL responses, and for each one we're going to forward them into this body. Right. So my guess is that hyper's sender is a Sink? I hope. It's so unhelpful. Ooh, wrap_stream — yeah, okay, so we can just do that instead. So this is going to be — let's leave it at the idea for now. So this is going to produce a body. Yes, here we're going to map this into result — all is_ok — and this is going to be mapped into result.body. So for each one, this is going to be is_ok and body. So this body now is the concatenation of all of the streams. We're also going to have to inject the — inject commas, and the characters to wrap around. What's a good way to even do that? I think you sort of have to chain onto the stream, right? What's a good way of doing that — can I chain streams? Is that a thing that's doable? chain — there is. That's a concat.
That's not what I want. Yeah, okay, so we can do this by doing — okay, and we're going to use stream::once to produce a single-element thing, and that's going to be this. Right, and then we're going to chain that with this future. So, a lot of stuff. All right, so we're going to produce an iterator that first produces an open square bracket, then produces each of these, and then — I guess here what we really want is for this to alternate between producing a body and a comma. So we could probably use zip for that. No — there's a pair? That's not really what I want. They make this so difficult. Hmm. I also realize that this is fairly involved code, but hey, this is often what trying to build an asynchronous codebase is like. All right, so we're going to produce the open square bracket... I feel like this should be easier. Actually, right — there is a way to make this much easier. It's going to be less performant, but that's probably fine. All right, let's do this the simpler way. Futures — I guess future join_all; we go back to the join_all. We're going to take — we're going to map — this is going to give us results. let is_ok — so this is basically the code we had before: results.iter().all(...) — and then we're just going to concatenate the JSON bodies. And it's a little bit sad — like, it is definitely less performant than it could be — but much ado about nothing. We're going to do this by doing string concatenation. So, for result in results — I guess this is going to be results — we're going to do — actually, I'm going to do better than that: we have to join them. So it's going to be: bodies is results.into_iter(), and then we're going to say body is format!(...) this, this, this; body.join(...); and then we're going to return is_ok. This is an Either::B — that is — line 17 — there we go, that's better. try_ready is a macro of futures, so we do #[macro_use].
So we do macro use This is gonna be Right, this is now gonna map over is okay and body Right there this is no longer now generic over the F and R We no longer need the take a Also don't need it here 5 Future is not implemented for graph QL requests join all ready wrong so Consider giving bodies a type. Let's just have this not make a body. I should just make a string Expected string found body That's because this is not gonna be a body from Yeah, so this formatting is really sad Maybe use body chunks instead Lifetime static is required. I see yes. We do need this No, this just because this is tied to a ticket and this is tied to the Query T also needs to live long true context also All right so close This So this is one thing that could also be made more efficient Shouldn't be necessary to actually it might not be necessary at all. We might be that we can get around with this by doing This Because we want to move in the request, but we don't want to move in the The root node so he's complaining about what exactly? So the thing we move in here. Yeah, it's arch. It is arch indeed And now This Huh, all right now that compiles These clones are set and now GraphQL response is never used 19 GraphQL response is never used Whoo, are there tests? I don't think there are tests. I don't think I saw any. Ooh Yeah, this open SSL business is really sad. I don't think there's much to do about it. Sadly Yeah, my guess is the Juniper Depends on something Juniper is depending on some like old version of open SSL somewhere. It's pretty sad Through oh, why does it oh, they just haven't run cargo update in a while really What is it that's depending on? cargo tree dash D What is depending on Open SSL or something is depending on an old version of open SSL Hypertelus. 
Oh, it's an old version of reqwest, 0.9. Now, this might not actually compile, but — oh — all right, in theory that's all we need; in practice, who knows. That's funny, because it's a very elaborate way to remove a relatively minor thing, but it does mean that this becomes a lot nicer, because now this pool can go away, this can go away, this can go away, this can go away, and this can go away; the example is simpler. src/lib, line 285: no builder. What does it need a builder for? No pool, no pool, no pool. So satisfying, removing stuff from code. Line 322 — this is because that has changed. Question: "would it also be possible to use a single tokio blocking around the execute_request function?" So, the problem is the lifetime association between request and response. I think that's what makes it hard to do the blocking outside, but it might work. Yeah, that might work, too. It might simplify the code, although I'm not sure — you could try it. HeaderValue — why did they change all of this? It makes me so sad. This is in response.headers().get(...), which gives you an Option of &HeaderValue, and what this is trying to do is print it, which I think has to be — "so pool.spawn has different lifetime requirements?" Well, it's more that pool.spawn is not asynchronous, whereas tokio blocking is; that's what makes it tricky. You need to keep polling. So tokio blocking can return NotReady, whereas that's not true for futures-cpupool — a CpuPool future always succeeds and just queues up the request. That is not true for tokio blocking; that's what makes them different. If — matches — I guess this needs to use futures. Now there's no longer a need for the extra crate in the example. Go away, cpupool. Go away. These are now all different types, so this is either a — I really wish there was a nicer way to express these, because this just becomes ridiculous — specifically, this one. So let's call these both B; then this is B, and this is A. So this is one of those places where the Box is definitely nicer.
Yeah, maybe we should just let this stay a Box, especially given that it's an example. You really just want to show the user how the code works, and you don't want the example code to do low-level optimizations, right? So let's just — really? So the problem is that Box::new will normally just — so this is going to return a Box of an impl Future; this is also going to return a Box of an impl Future, but the two impl Futures are different concrete types. And so we need to turn this into a boxed future — we need to make the compiler realize that we actually want a Box of a trait object. "cannot be sent between threads safely" — listen — great. Whoo. All right, that was a process, but I think that's all; I think now we're all the way through. So let's submit a pull request. Hey, that's not bad on time, either. Good job, team. So this is where we want to be helpful. Let's see — "Use only a single thread pool for Juniper" — I don't know if actually moving this window down makes any difference. Hey, let's make this a little bit more helpful. Oh, why is thread_pool underscored, and pool? That's weird. All right. "The previous implementation..." — so usually when I write commit messages like this, I like to try to explain what the previous thing did, how this is different, and why the change is better. It's a good way to follow these. "The previous implementation used a futures-cpupool for executing blocking Juniper operations. This pool comes in addition to the thread pool started by Tokio for executing hyper... This patch uses tokio_threadpool::blocking to perform the blocking Juniper operations." While we're using the same — maybe rephrase, two more words to make it clear — "which simplifies the code and also the API, and also reduces..." Great. Let's do that. Push to origin, this branch. All right — Juniper — that's new — and then we — it's kind of nice, because most people don't know about this blocking function.
I'm also going to do this — I'm also going to link to this crate's pull request. All right. Good job, team. We did it: we implemented a pull request on an open-source repository, and it's been a little bit over 90 minutes, but that's not bad; I think I'm decently happy with that. All right, we did it. Oh joy. All right, we successfully contributed to open source; now we move on to the next project. I realize there was a lot of fiddling going on there that's fairly low-level, and it's unclear whether it's useful or helpful, but often this is the case when you're digging into a codebase — and especially making a change that's so oriented around async stuff, it might be hard to follow. Hopefully going back and watching this stream again might help you. But let's hope it was useful; I certainly think the commit makes the project better, which is good. All right, now we're moving on to project number two: tokio-beanstalkd. Let's see, what is Beanstalkd? A simple, fast work queue. Oh, I see — so this is similar to Sidekiq or Faktory. So you issue jobs — yeah, okay, so you issue arguments for jobs to some work server, it distributes them to the workers, and the workers pull that job from the pool and then do some stuff. Well, I'm glad you learned something. You are biased, in that this is your crate, so you presumably know more of the low-level stuff — but hopefully it's useful to other people, too; I also did not know anything about this, so hopefully I've explained the process that we went through. Okay, so Beanstalkd: a big to-do list for your distributed application. Yeah, so you queue up jobs, and then workers take jobs. That seems nice. Okay, so this is kind of cool.
So — there are many of these, but there's another one called Faktory, which is pretty similar, that I actually wrote the API bindings for. So it'll be interesting to see what the API bindings are like here — although my bindings are not asynchronous, whereas this one is, so it'll be interesting to see. Okay, so let's look at this crate. "Yes, a crate is like a package, and cargo is like npm" — that is basically accurate. So it can serve as a client for both the application and the worker. Okay, so yeah, it provides bindings both for issuing jobs and for processing jobs. That's fun; this is basically the same kind of library as the one I wrote — that's kind of fun — just for an entirely different backend, and also, you see, with this crate you bring your own runtime. Yes, you connect to a thing, and then you can do a bunch of operations. Okay, this is funny: this person has very clearly watched one of the previous streams, specifically tokio-zookeeper. Compare — this is the example that's given there, of operations interspersed with inspects or assert_eqs — and this is the same. So I think this person has watched this stream, or a past stream, which is kind of funny. That's cool. Okay, so you can put a job; reserve, I think, takes a job. "How did you get started doing commits in open source?" So, I actually got started basically because I found bugs that I wanted to fix, and so I started looking at the code and trying to figure out whether the bug I observed was the problem, right?
Trying to look at what the source was saying, and essentially whether I could track down the bug in the source. And then I'd try to fix it, and then slowly but surely — if you think about it, for all the software you use, any time something doesn't do exactly what you want, it is probably something you could fix; you just need to be willing to actually go in and fix it. And that introduces you to open source a lot, actually. "What is Rust even good for?" I don't know how to answer that question, except: Rust is the first language in a very long time that I really enjoy working with. I actively want to write Rust code — that's part of the reason for these streams, right? "Why prefer it over Go?" You can prefer it over Go in part because it has a better type system. You have things like enumerations that can actually contain data, which is really neat — Go does not have this. You have generics, which Go does not yet have (and the proposal is kind of lacking). You have much lower-level control over things like memory management, if you care about that — and you also don't have to care about it; you can use Rcs and clones and whatever. I would say that if you don't care about performance, and you want to just write a very large codebase that lots of other developers are going to be working on, Go is nice for network programming — so that's one place you might want to use it. Over C and C++, you'd prefer Rust because it's a much nicer language — a much higher-level language — without giving up any of the low-level things you can do in C and C++. It's just a nicer language to work with. I think at this point there are very few reasons to write C++, or maybe even C, because the experience of writing Rust is better than writing C++. They both compile with LLVM, at least most of the time, and you can link between the two. So unless you have a huge codebase that's already in C++, I don't see why you would.
That's already in C++ like I don't see why you would why you would All right, so I Don't know what this reserve business is It'd be nice to have comments in this to explain what it's doing. I guess we could take a look at the oh I guess this is the thing to work out. So put adds a job reserve gives you a job Delete removes the job. This is a nice way to phrase that. I like this What VM are you using? Oh, there's a just because I'm probably gonna get this question again So I did a stream a little while ago where we Looked at my setup so that if you look for this video Then that has all the details about all the setup that I have We can close all of these now. Ooh, that's nice. Bye. Bye The worker cannot finish can release Great. All right. So that makes a little bit more sense. So you Put a job to the server a worker reserves a job to operate on it It deletes it when it's done and it releases it if it can't I don't know what touches. I don't know what Barry is This is going to appear no Yeah, that might be handy. All right Okay, let's look at the docs so I think This example is is pretty contrived Because usually you you wouldn't ever write this program as a user I mean, it does say it's a contrived example But as a user you would never write this code, right? You're probably either a client or a worker and so it might be good to give the examples for the two separately Like basically give an example of here's a workflowed for a client and here's a workload for a worker So split them into multiple examples, maybe under different headlines Link to what exactly? If you just go to just go to YouTube and search for my name I guess maybe I can post it here somehow Like you're oh, I guess if you're on twitch, you can't see it here I think I can do this without it being too sad Really, it's not gonna give me a chat. 
Oh, my internet might not like that URL. Yeah, I'd probably split this into multiple examples, each one for a different use case. So, if we look at the crate I have for a similar kind of use case: notice that it has an example for if you want to submit jobs, and one for if you want to accept jobs, and I think that's a lot clearer for the user. Instead of — this is a test, really, right? It's not actually how anyone would write code to operate on it. It would also be nice if these were links. So you now have — why do I always struggle to find this? I feel like I search for this issue every single stream. Well, not terribly important, but you can — so in Rust, if you're writing some doc comment, if you just put square brackets around basically anything — let's say, what was the example we had over here — delete — yeah, this automatically produces a link to the appropriate function's documentation. And you want to rely on this pretty heavily, actually, because it interconnects your documentation a lot better. So that would mean that I could click on all of these to get to the appropriate documentation, which would be nice. There is an issue, though, where I don't know if this uses — yeah, okay, so given that you generate the README manually, it should be fine. "What Rust project are you working on?" For the stream, or in general? For the stream, we're currently looking at this one; the hope is to go through multiple. "Have you thought about streaming Advent of Code 2019?" Maybe — to me, Advent of Code is not that interesting. I mean, it's interesting, but it's not advanced Rust, in the same way — the goal of these streams is sort of to expose people to quote-unquote real-world development in Rust, and Advent of Code is not really real code; you're not building any real codebase.
You're not building a real code base. But maybe — I mean, who knows. All right, so the other problem with this example being so long is that all the stuff you really want to get to is at the bottom, really far down. So it looks like the only thing you really want to look at here has been pushed to the bottom. The structure is: you connect — what's the error here? It's a failure::Error. Okay, so this is relying on the failure crate for error handling, which is nice. The only thing I could imagine here is: if I try to connect, I might actually care why it failed to connect, which failure::Error will not give me. So I have this in — this is a crate of mine for automating and manipulating web browsers — and most of its methods just have a generic error response, but new has a NewSessionError. The reason for this is that I specifically want to be able to highlight reasons why connecting failed. You can imagine that the user cares about whether it failed because of the network or because the connection was denied. The way to think about this is: whether to use failure's generic error or not depends on whether the caller will care about which specific error occurred — are they going to match on it? And in the case of connect, they might actually match on it. Oh, here's a weird one. put — what is this? So put returns a Self. Okay, so it consumes self. Yeah, this is one of those things that in the world of async we haven't really figured out yet: what kind of receiver should you take for methods like this — should you take self, or &self, or &mut self?
In this case, it looks like they are opting to use self — to consume self — which means you can only issue one command at a time and you need to chain them. And that means this will be a little annoying to use with async/await, because with async/await — so let's say that I have some... it's not important. The current code is going to look something like: you do a put of foo, and what you get back is the bean — because the call to put consumed the bean — plus the response. There would be some other stuff too, but that's not important. And then you do something like bean.foo, and then you do an and_then, and that gives you another bean and a res, and you end up writing code like this, right? And that's all fine — it's a little annoying to have to chain them, but that's just how futures work. Now, with async/await, what we'll get is the ability to write something like let res = await bean.foo(), which is much, much nicer, right? The problem is this won't actually work in this setup: if you consume self, it's going to have to be let (bean, res) = await bean.foo(), which works just fine.
It's just that it would be nicer if the code were simply let res = await bean.foo(), which it can be with await, but only if these methods take &mut self. If they take self, then they always have to return self, and you have to play this game where you keep re-binding it. The problem, of course, is that if you take &mut self and you're not using await, then this gets really awkward, because the real way you write it is: you make the call, you get a future for the response, and then your next call has to go inside the combinator — it basically becomes no better. So notice that if you take &mut self, you can make it look like the method consumed self. It does mean you have to take care that you never have this pattern, right — because if put had this pattern, then when you call put, the bean is part of that future and you're not allowed to move it into this closure, because it's still owned by the future. And so this becomes really problematic. So the way I've ended up writing this is: if I have any of these, I turn one of them into a method that consumes self, and all the others take &mut self, and that, I think, ends up being roughly the right compromise. But it is a little bit awkward, and I don't have a good way to transition between them or to choose which one is better — they're just very different. So I would probably recommend having these take &mut self, and it also simplifies the return value. Okay, so why do they return Self and a Result, when the future already has an error type? Okay, so — documenting the arguments to this — okay, I think this is trying a little to be generic. So first of all, I would probably take a struct for this instead of having four arguments, which is a little annoying to deal with. And the response here is weird: this error, I feel, should just be part of that error. Although that comes down to this error needing to be introspectable, right?
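The receiver question discussed above can be shown without any of the futures machinery. This is a synchronous stand-in, with made-up types, contrasting the self-consuming chaining style against the &mut self style:

```rust
// Hypothetical stand-in for the beanstalkd client; synchronous here so the
// receiver question is visible without the futures machinery.

// Style 1: each method consumes self and hands it back, so every call must
// re-bind the client alongside the response.
struct ChainedBean {
    ops: Vec<&'static str>,
}

impl ChainedBean {
    fn put(mut self, job: &'static str) -> (Self, u32) {
        self.ops.push(job);
        let id = self.ops.len() as u32;
        (self, id) // the client travels with the response
    }
}

// Style 2: methods take &mut self, so the client stays usable in place.
struct MutBean {
    ops: Vec<&'static str>,
}

impl MutBean {
    fn put(&mut self, job: &'static str) -> u32 {
        self.ops.push(job);
        self.ops.len() as u32
    }
}

fn main() {
    // Consuming style: you must thread the client through every call.
    let bean = ChainedBean { ops: Vec::new() };
    let (bean, id1) = bean.put("foo");
    let (_bean, id2) = bean.put("bar");
    assert_eq!((id1, id2), (1, 2));

    // &mut style: with async/await this is the shape that lets you write
    // `let res = await bean.put("foo");` without re-binding the client.
    let mut bean = MutBean { ops: Vec::new() };
    let id1 = bean.put("foo");
    let id2 = bean.put("bar");
    assert_eq!((id1, id2), (1, 2));
}
```

With futures the trade-off is sharper than it looks here, because the borrow in the &mut version has to live as long as the returned future — which is exactly the closure-capture problem described above.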
Specifically, you need to know the difference between "you tried to put a job and there was something wrong with the job you tried to put" and "you tried to put a job and the server crashed" — because in one case you can just retry the job, or surface a request to the user, and if the server crashed, there's no reason to retry. So this is one of those cases where having a structured return value is probably better for you. So I think that's what I would do here. So in fantoccini, I think I made this change — but I could be wrong. No, those are also generic errors. Where did I make this change? Ah, tokio-zookeeper. So if you look — yeah, tokio-zookeeper has the exact same problem, right? This is also somewhere I should split up this example. Although, to be fair, in ZooKeeper you might actually want to do all these things in order; I don't think that's the case in beanstalkd — in beanstalkd, you're one or the other. But that should arguably be split up too. So here — so I do do this here as well. Okay, so this is another case where I'm pretty sure they've just followed what we did in tokio-zookeeper, because notice that here we're also consuming self, we're also taking lots of arguments, and we're also returning a triple. So arguably tokio-zookeeper should be updated to follow what I just said. This is something I've only recently noticed myself.
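The "take a struct instead of four arguments" suggestion from above can be sketched like this. All names and defaults here are made up for illustration — they are not the real beanstalkd parameters or defaults:

```rust
/// Hypothetical options struct replacing several positional arguments to `put`.
#[derive(Clone, Debug)]
struct PutOptions {
    priority: u32,
    delay_secs: u32,
    ttr_secs: u32,
}

impl Default for PutOptions {
    fn default() -> Self {
        // Made-up defaults for illustration; the real defaults differ.
        PutOptions {
            priority: 1024,
            delay_secs: 0,
            ttr_secs: 60,
        }
    }
}

fn put(data: &[u8], opts: PutOptions) -> usize {
    // Stand-in body: pretend the "job id" is just the payload length.
    let _ = opts;
    data.len()
}

fn main() {
    // Callers spell out only the options they care about; struct-update
    // syntax fills in the rest, and each field is named at the call site.
    let id = put(
        b"job-body",
        PutOptions {
            delay_secs: 5,
            ..PutOptions::default()
        },
    );
    assert_eq!(id, 8);
}
```

Compared to `put(data, 1024, 5, 60)`, the struct version is self-documenting at the call site and lets you add fields later without breaking every caller.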
So I would not blame this person for doing this at all, given that I did the same thing. Well — yeah, so notice here that the error for each given operation is actually a specific one. So if you look at create, these are all the ways in which a create can fail. Now, arguably this error type could be hoisted into that one, and then create would also include something like a protocol error. Yeah — I guess the reason to keep them separate is the inner Result: it means that all of these per-operation error enums don't also have to list ProtocolError. So maybe — it's unclear, actually; I don't know that one of these patterns beats the other. It's just a little weird to see a double error. Specifically, I think what I'm reacting to is the fact that this one is not introspectable — right, like, if the put failed with this error, I don't know how it failed. So ideally this should be something like PutError — if there are in fact multiple ways in which a put can fail, which I assume there are. If you go to the beanstalkd protocol — yeah, notice here there are actual responses you can get to doing a put, right? And if you get JOB_TOO_BIG, that's something you want to expose to the client, and they should be able to match on it. And so I think you'd probably want these to be semantically meaningful errors. The parse errors seem pretty reasonable, actually. I have the same objection here to the response type. I see — so this is where Buried comes in. So some of these are in fact parsed, because Buried is parsed, but this one, and this one, are not parsed and just returned as a generic error.
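The introspectable error type argued for above might look something like this. The enum and its variants are hypothetical, loosely modeled on the failure responses the beanstalkd protocol defines for put:

```rust
use std::fmt;

/// Hypothetical error type for `put`, mirroring the kinds of failure
/// responses the beanstalkd protocol defines for that command.
#[derive(Debug, PartialEq)]
enum PutError {
    /// The job body exceeded the server's maximum job size.
    JobTooBig,
    /// The server is draining and not accepting new jobs.
    Draining,
    /// The job was stored but buried (e.g. the server ran out of memory).
    Buried(u32),
}

impl fmt::Display for PutError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PutError::JobTooBig => write!(f, "job exceeds maximum size"),
            PutError::Draining => write!(f, "server is draining"),
            PutError::Buried(id) => write!(f, "job {} was buried", id),
        }
    }
}

impl std::error::Error for PutError {}

fn handle(err: PutError) -> &'static str {
    // Because the error is an enum, callers can actually branch on the cause —
    // the whole point of not erasing it into a generic failure::Error.
    match err {
        PutError::JobTooBig => "shrink the job and retry",
        PutError::Draining => "try another server",
        PutError::Buried(_) => "kick the job later",
    }
}

fn main() {
    assert_eq!(handle(PutError::JobTooBig), "shrink the job and retry");
    assert_eq!(handle(PutError::Buried(7)), "kick the job later");
}
```

A generic `failure::Error` forces the caller to string-match or give up; a concrete enum makes the retry-vs-report decision a plain `match`.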
That seems odd. But crucially — so there's just one giant response type here, whereas in reality the responses to a put are only these, right? So the current API indicates that put can return any of these things, but we know that's not true — put can only return one of these — and I think ideally that's something this API would expose. So maybe that should be the change we make: make all the responses be only of the appropriate type. Also, this probably doesn't have to be a 'static string — it could probably be generic, or take a &'a str, for any str. Same here. Okay, what else is there? So there's error and response. Okay, so the setup is pretty straightforward. Let's see if there are any known issues here. Known issues: great. "Better documentation" — well, I don't know what to do about that; I guess we could tag it as something we're working on. "All protocol commands" — okay, that seems good. All right, well, I guess we're working towards that then. So we'll fork. Great. Specifically, we'll need the protocol docs to refine these. In fact, someone should probably do the same thing to tokio-zookeeper — note that it does parse the result, and it only gives the appropriate result for each operation, but someone should go through and replace all these self receivers with &mut self. All right, let's see. So we now have a fork of this. So let's do a — this stream... no, that's not what I want, that's going to the original one. So: point master to upstream, and now we're going to do checkout -b, and we're going to name the branch. Specifically, we're going to have to figure out how this library works. So what's in src? lib and proto. What's in proto? proto::response.
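The narrowing described above — parse the generic wire response into a per-command response type, so put can only yield the variants the protocol actually allows — can be sketched like this. All type names here are hypothetical stand-ins for the crate's real ones:

```rust
/// Every response the wire protocol can carry (hypothetical subset).
#[derive(Debug, PartialEq)]
enum AnyResponse {
    Inserted(u32),
    Buried(u32),
    Reserved(u32),
    Deleted,
}

/// What `put` is actually allowed to return, per the protocol.
#[derive(Debug, PartialEq)]
enum PutResponse {
    Inserted(u32),
    Buried(u32),
}

/// Narrow the generic response down; any other variant is a protocol
/// violation and becomes an error instead of leaking into put's API.
fn parse_put(resp: AnyResponse) -> Result<PutResponse, String> {
    match resp {
        AnyResponse::Inserted(id) => Ok(PutResponse::Inserted(id)),
        AnyResponse::Buried(id) => Ok(PutResponse::Buried(id)),
        other => Err(format!("unexpected response to put: {:?}", other)),
    }
}

fn main() {
    assert_eq!(
        parse_put(AnyResponse::Inserted(7)),
        Ok(PutResponse::Inserted(7))
    );
    assert!(parse_put(AnyResponse::Deleted).is_err());
}
```

The payoff is at the call site: a user matching on `PutResponse` only has to handle the two cases that can actually happen, instead of every variant of the whole protocol.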
Yeah, so this comment is a lie, I guess. Let's first check that we don't — ah, cargo fmt does lots of things. I like to do all the formatting in a separate commit, just so I don't have to think about it, so I can save normally in my editor, because it formats on save. So, proto::response — and I guess here, so we know they're using failure, right? Yeah. So really what we want to do here is — so for error — I guess these aren't errors, actually. "I have to go now, I'll finish watching the uploaded video. Thanks again." No, of course — hope you enjoyed it! All right, so let's look at what these operations are: it's put, reserve, using, a bunch of them. All right, so for each one we're basically going to do something like — so this is going to be a PutResponse, and a put response, which, if we look at the protocol, can be Inserted, or Buried, or this thing, which is an error, this thing, which is an error, and Draining, which is an error. So these are the only things it's actually possible to get back. And now for — what was the other one? So, using. This should really be use, not using, but all right. The only response to use is USING, so: a UsingResponse. Let's see — so put does not return that, it returns a PutResponse. proto::mod — so this returns a PutResponse, and this — using — oh, I guess these are not in order. Reserve returns a ReserveResponse. I think this is going to be a *. Let's make this pub(crate) instead, and then this is going to be a pub use proto::response::*, so we don't have to enumerate them all the time. I think PreJob is also just pub(crate), and I think Job is also probably — let's see. So a reserve response can only be DEADLINE_SOON, which is not one of these, TIMED_OUT, which is also an error, or RESERVED. So a reserve response can only be Reserved. All right, so maybe we can simplify these a lot, actually: a put response can be a bunch of different things; a reserve response can only be Reserved, which gives you a Job. Great — so Job is actually public.
No, PreJob is also — okay, fine. So reserve does in fact give you a Job. Using gives you a Tube, and a Tube is just a Tube, so this gives you back a Tube, right? So we don't need ReserveResponse, because the only response to reserve is a Job, and we don't need UsingResponse, because the only response is a Tube. And delete — the only successful response to delete is DELETED, so Deleted is really just nothing, because NOT_FOUND would be an error. So here, delete really just returns one of these — it could be an Option of failure::Error, but I think we can just ignore that. Release can return RELEASED or BURIED — sorry, yeah, release can return RELEASED, BURIED, or NOT_FOUND. RELEASED or BURIED, okay, so this is going to be a ReleaseResponse, and that can be either Buried or Released — so this is going to be a ReleaseResponse. Touch — what does touch do? Oh, I guess bury — but we'll get to bury. So touch returns TOUCHED or NOT_FOUND, and TOUCHED has no contents, so this returns nothing. Bury returns BURIED or an error, so bury returns nothing. Watch — watch returns WATCHING. I guess let's keep the old one around. So Watching carries a u32, so this returns a u32. Ignore returns either WATCHING or NOT_IGNORED — and NOT_IGNORED is an "okay" response according to this, but all right, sure. So: fn ignore — that is an IgnoreResponse, and so we have this here, and that is either Watching or NotIgnored. What else do we have? That's the last one — great. Now, this is obviously not going to compile, because we haven't actually made anything use any of these new types we've added. But in particular, I think proto::mod — this here currently gives a Response somewhere, so this gives an AnyResponse, depending on the structure of this. Oh, this is using tokio-codec.
We should probably use that for tokio-zookeeper too. Yeah, so notice the compile errors we're getting now: look, you promised this was supposed to return one of these, but you really gave us an AnyResponse — what's going on? Which is what we're expecting to see, right? So here, I guess, all of these are AnyResponse. That example is also going to become a lot simpler now, which is nice. So this is now a PutResponse. This is a successful reservation: a Job. The only reply is — so notice how we can simplify this a lot now; in fact, it doesn't even need to say this, it could just say Job, right? So this simplifies the documentation a lot as well, right? Here — this no longer needs to talk about Deleted. This — so this is now a ReleaseResponse, which is different, because it used to say that the successful result is a release response, but that missed Buried. So I guess the question is whether Buried should be considered an error — it wasn't previously, so that might be something worth looking into. Touch — we don't need to talk about the return type at all. Bury — we don't need to talk about the return type at all. And this is going to be this, which is just the u32 now. And this, I guess, is — and there was another one, ReleaseResponse, I guess. So the successful one is — what did we say the successful one was for release? So this is going to be variant Released. Now, of course, when we try to compile this it'll still yell at us, because we haven't fixed — "cannot find Response in response". I don't think we care about Display; it shouldn't implement Display. "Cannot find Tube" — that's a good question. I'll use proto::Tube. This no longer needs to — all right, so now we just need to map all the response types. So let's see: release — so it gets the thing back; it is an error.
Okay, so it does remap release. Okay, so ReleaseResponse is just nothing as well, so the release response here is really just this. So in this case, map that into — great. All right, so I guess what really has to happen here is in handle_response. What does handle_response do? Handle response is — it's really just doing the mapping, right? So what we want to do here is .and_then, and we want to match on r — actually, it's a little more awkward than that: this is r.0, and we're going to match on r.1. "In the docs of release you would" — oh yeah, so this can go away now. Good catch. "How long did it take you to get comfortable with Rust?" Um, that's a good question. It depends a little on what you mean by comfortable. Like, there are still times when I get confused about why it's yelling at me, but at the same time, I think you become relatively proficient in the language pretty fast. It's mostly that every now and again you'll get really stuck, and it's really annoying — but I think that applies to almost any language. The difference is just that in Rust you get stuck at compile time, while in many other languages you get stuck with some bug at runtime that you can't track down. So I don't think it actually takes that long to become comfortable with it — I'd say it'll probably take you at least a few months of regular programming, and then of course it depends on how comfortable you want to be. Response, Response — did I do something stupid? No, I did want to leave that — release no longer returns a response. The return is always just Ok, because Buried just turned into an error. I'm also going to make our life a little easier here by having handle_response also let you do a map, and I guess — so this is now going to be given a Response — actually, no, it's going to be even better than that. Where is this? It's not going to be a map; it's going to be a mapping, and we're going to do a — this is going to be this, and it's going to be an AnyResponse. Ooh, in fact it can be even better. I mean, the question is how fancy do we want to make this macro. Do I even remember how to write this? Rust macros — where's the — there's a thing — the macro book, maybe. Specifically, I'm going to write a funky macro, yay. "In terms of the syntax and standard library?" Oh, that I think goes pretty quickly for almost any language — it takes you a few months and then you're proficient. Proceed. macro_rules — and I want the thing that lets me do multiple matches, which is this bit. Specifically — what's the Rust HashMap macro? Yeah, this is what I want to write. So I want a thing that gives me a — this might be overcomplicating things. I think this is probably fine; I'm just trying to make it even shorter, but I don't think it matters. "Do you look like Charlie from It's Always Sunny in Philadelphia?" I have been told that before. I have not seen the show, so I don't know to what extent it's true, but — let's see. So, AnyResponse — no, so put is supposed to give us — what did we decide it was supposed to give us? Inserted or Buried. Inserted, which has an ID, in which case it should map to an Ok PutResponse. I guess I'm a little surprised that Buried is not considered an error for put — seems wrong, but fine, let's not change the semantics of the application at least. So that's going to be AnyResponse::Inserted — and Buried apparently is not an error. Any other response, though, is an error. So where's this business — what does it do if it gets something else? Oh — it's not considered an error because for bury, BURIED is what you expect.
I think put response to just not Just return an ID think we should do this and have this return and then This now uses Which is The integer ID of the new job because now inserted gives you that and Buried gives you this and Anything else gives you There is a put error then why aren't these exposed so weird I don't actually know why they've chosen not to do that also If you can get can you actually get buried to a put? I don't think you can but you can and Buried still gives you an ID That seems wrong. Oh I see because technically this is not a consumer error. I feel like that. This is just wrong Like it might actually be buried, but Specifically, I think put I think this is specifically To Barry because now this can be error put buried You see, I don't think these should be wrapped in In failure error. I think this should actually give you a Put error Yeah, let's just do that Right, so now we can Semantically return what the actual error was without sort of trying to obscure it somehow. Oh I guess that doesn't know that should work. Yeah, so if you got buried, that's one thing and now we can actually Like actually interact with all of these, right? So if we got the other is our protocol errors Right, what are the other responses? Yeah, the errors are actually fine We can tidy that up later So if we did not get inserted and we did not get buried then this is some other error that we don't know what is and in that case I think what we want is a Unexpected response Put response, let's see if it is ooh Yeah, okay, so now it's expecting the same from all the others Which is fine. So for all these others See this is why I wanted the macro to do this because then the Last clause could be handle the same by all but it's fine. So I guess here. 
We're expecting a RESERVED job, and that gives a Job, and anything else is an unexpected reserve response. And I guess all of these are going to complain, so we might as well do all of them while we're at it. "Are you programming as a hobby, or professionally?" Both — so I'm a PhD student in computer science, and I do Rust programming for my main research project as well, which is why I get away with this. So using — that's a UsingResponse, which is a Tube. Tube is funny — it's a funny word. What else could you get from use? Just USING, right. Delete is similar, except what we're expecting to get is DELETED, right — yeah, and otherwise we got an unexpected delete response. And this is really use, even though the function is called using. Oh, and I guess — release already does the right thing, so that's great. Touch does not do the right thing yet — this should say TOUCHED. Bury should say BURIED. I guess here we want this to say "touch response", and we want this to say "bury response". So bury should give you BURIED. Watch should give you WATCHING — and ignore should give you either WATCHING, in which case — or it got NOT_IGNORED, like so. Let's see what it thinks now. "Variant NotFound" — did I misspell that? Probably. "I'm always wondering if I'm the only person who uses the right hand's fingers for navigating in Vim." As opposed to — who would use any other fingers? I'm confused. I mean, I've disabled my arrow keys in Vim; I don't know if that counts. "Error is not implemented for String" — oh, this is supposed to be a format error, I think. failure::Error::from should be — hey, well, great. "I navigate Vim using my left foot's toes." Actually, have you seen the Vim pedals? Someone bought racing-game pedals and mapped them to escape and insert mode, so they use their feet to switch between modes. Which is fantastic. Let's see how this works. This definitely changes the API, but I think it changes it for the better.
Oh right, now it's going to complain about these. All right, so that's sort of what we wanted to happen. Specifically, using no longer needs that — in fact, you no longer need this, or that. See, this is going to make this — this is why we made this change, right? This goes away, this goes away, this goes away. Here also — so usually, for this kind of code, if you're just asserting that something is true, you might as well use unwrap; that's what it's for, right? So I think this should just say assert_eq!(response.unwrap().data, ...) — there's no need for this extra song and dance. This similarly should just be bean.touch(...) — response.unwrap() — although we'll have to be a little careful here, because this is an as_ref. This is just going to be response.unwrap(), just to check that it's indeed Ok; then we do this, then we do this, then we do response.unwrap().id. So that makes that much simpler. Do I need to have beanstalkd installed or something? Probably. Here we're really just checking that it was released, so this is just going to be response.unwrap() — I guess as_ref. See how much nicer that code ends up now? The reason you want to use unwrap instead of is_ok is that it will actually print the error if something went wrong. We want there to be no return value — so currently it wants it there, but it doesn't care here, which is just a little weird. Oh well. Here it's doing the same thing, so this is just going to be job — it's going to be response.unwrap().id. This is another one that's just response.as_ref().unwrap(). Did I mess up here? "Expected that, found that" — oh, that's why: it's lacking that. This should be another response.as_ref().unwrap(). This is going to be — ooh. Yeah, put — we already know, it just does response.unwrap(). This — so much code, like — we don't need any of these.
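The unwrap-vs-is_ok point above is worth seeing concretely. This is a stand-alone sketch with a made-up fallible function, not the crate's tests:

```rust
/// Stand-in for a fallible operation in a test (hypothetical; the real test
/// would be talking to a beanstalkd server).
fn connect(addr: &str) -> Result<u32, String> {
    if addr.is_empty() {
        Err("empty address".to_string())
    } else {
        Ok(addr.len() as u32)
    }
}

fn main() {
    let response = connect("127.0.0.1:11300");

    // Weaker: on failure this only prints "assertion failed: response.is_ok()",
    // telling you nothing about *why* it failed.
    assert!(response.is_ok());

    // Better: unwrap() both asserts success and, on failure, panics with the
    // actual error value, so the test output shows what went wrong.
    let id = connect("127.0.0.1:11300").unwrap();
    assert_eq!(id, 15);
}
```

In test code a panic is exactly the failure report you want, so reaching straight for `unwrap()` (or `expect("...")` for a labeled message) is idiomatic there.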
I guess the question here is — okay, so we can leave this code comment in place, because you might want to do that, and then this is going to be assert_eq!(response.unwrap(), ...) — and then this is going to be response — wait, how did this test ever pass? That's not what you're supposed to get from that, right? Just ignore — return — an IgnoreResponse. Oh no, that is what you get. So that's an IgnoreResponse: Watching. "Do you have a GitHub or a GitLab account?" Yes, they're both under the username jonhoo. "Cannot move out of borrowed context" — that is also accurate. "Unable to spawn server" — yeah, so I probably need beanstalkd to actually run the tests, which is a little sad. Makes me a little sad to have to do this, but — compile all the things — nice. All right, how about now? Hey, it passed! Oh, it failed: use of undeclared type or module AnyResponse, on line 46. Oh — the doc test. That's awkward. That used to be the same — is the real question here. I think that's the same. I think we can just do this. Are these similarly indented? No, two less — just to minimize the diff a little. Let's see how that works. Hey, great. And I guess the README as well — the README as well, so that's going to be the same too, because we're not aiming to do all that stuff. Minus three. All right, let's look at the diff. So that's simplified, that's simplified — great. It passed all the tests, so I guess we submit it. "Make all return only" — "return only possible" — check that that's pub(crate). Yeah. Oh — I guess we should probably document this, huh?
Let's see — so where's ignore... "Return only possible" — or I guess: "Refine return types for all methods. Previously, all methods returned the generic response type; however, only certain variants are possible responses to each command. This forces users to manually match on the response even when that shouldn't be necessary. This patch does the matching inside the library, so that only the expected return value is exposed, and an error is returned if an incorrect variant is received according to the protocol. This simplifies the API." That's not quite how I'd say it — "besides returning..." "The first three hours?" Yeah, time zones are hard; I don't know how to get better at this. I've tried — there's a page called Every Time Zone that I use occasionally, which shows you what the time is in various local time zones. I should start linking to that again. I guess really what I wanted is — all right, we submitted another pull request. Good job, team. Yeah, we started at noon Eastern Standard Time, which is about three hours ago, and we've been so good at keeping on schedule — it's now three hours in and we've covered two crates, so that's an hour and a half per crate. Actually, let's check up on — a little birdie told me — look at that, our pull request was merged! We did open-source software! Hey, how about that — we made the world better. In theory, although it's a little unclear what failed the test — oh, okay; great. Did that, did that, we did that, we did all of these — nice. "Rust pointer protection is all right, but why is Rust hard to code?" I'm not sure what you mean by pointer protection — I guess you mean ownership. So Rust is hard to code in because it's really hard to write your code in such a way that you guarantee there are no data races, and the compiler enforces that in Rust, and that forces you to reason a lot more through your code. "It's empty in here." I don't know what you mean by empty in here. All right, great — third and final crate for the day: argonautica.
So this is an implementation of the Argon2 hashing algorithm. "My bio?" Oh no — is there a Twitch bio? Should I fill out the Twitch bio? Is it important? I mean, I guess I can do that. So, the Argon2 hashing algorithm — this one I actually know a little bit about. A while ago, there was a competition to come up with a new password hashing scheme. There are a bunch of people who just do a SHA-256 or MD5 hash or something of passwords, which is generally a bad idea: you want to salt them correctly, and you want the hash to be expensive to compute, so that if someone downloads your database and gets all the hashes, they can't easily compute the corresponding passwords. And Argon2 was the winner of the Password Hashing Competition in 2015, and this is apparently the Rust implementation — or one Rust implementation — of Argon2. "Do you ever work with Rocket?" I have done very little with Rocket. Um — so I guess the question is what we want to do with this crate. So, it's designed to be easy to use, robust, and to follow the Rust API guidelines — oh, I love those, great. Feature complete — oh, that's a good README. Yeah. Hasher — yeah, so there are a couple of others. Oh, and it uses SIMD. "Configuration for Hasher and Verifier, which is meant to be reasonably secure." Okay. That's a lot of text. Yeah, this is basically the tokio blocking stuff we talked about earlier — although my guess is this uses a — see, that's a good question: what's the CPU pool it uses? It uses futures-cpupool, but my guess is this crate doesn't actually do any — why are these not dev-dependencies? I think these should be dev-dependencies. But yeah, so my guess is they use futures-cpupool just for the CpuPool and not for any tokio stuff — like, I didn't see tokio listed here. Oh, scopeguard is nice too. Yeah, so there's no async stuff going on here — I think they're quite literally just using the CpuPool for compute, and not for anything else.
So I don't think this is one of those cases where we want to eliminate the additional pool. Yeah, they basically want to expose an asynchronous implementation. There's a question of whether they should rely on tokio for that: instead of spinning up their own thread pool, they could have a thing that relies on tokio, and that way the user doesn't have to configure this explicitly here. Like, you can imagine that if you did this, it would just use tokio's spawn — with blocking — to run the hashing on whatever thread is available in the pool, and then return a receiver for the result. So that might be one way to get rid of the CpuPool here for the non-blocking methods. It does mean that they're now relying on tokio — it could be that we want to add it behind a feature flag. "Any chance you'll give impressions of tokio-beanstalkd?" So we've already covered tokio-beanstalkd — you actually just missed it. Specifically, when the video eventually gets posted, if you go to about an hour thirty into it, that's when we start beanstalkd, and we talk about it for an hour and a half. So you actually just missed it, sorry — but the recording is there. "The day the CpuPool died" — yeah, you're not wrong. It's more that thread pools and CPU pools are really hard to get right, and it's good if we don't have many of them, right? Especially because the intention here is to interact well with future-heavy code. Like, you can do it on a CpuPool — it just means you're now likely to have more than one pool. It would be nice if this could share the pool somehow. Of course, relying on tokio means that if you have some other executor, then you're now relying on tokio instead, which is unfortunate. Um — they still do this business, even though they might already spin up multiple threads here.
That seems unfortunate. I like this form of documentation, though. I do worry that it's maybe slightly too verbose — like, I would have a less verbose description here with a link to the actual docs. This is a really nice README though; I'm a big fan, major props. bcmyers has written stuff before, but that's really cool. Let's look at the docs and see what we can find. Yeah, okay, so the docs are pretty much the same — which, yeah, okay, so they're using cargo-readme. That's why — cargo-readme is fantastic, because it means you can generate the README from the source. Lib — I wish rustdoc didn't set this ugly font. Specifically, why are they doing that? Just don't do that — let me use my own font. Stop setting my font. See — see how much nicer that is? Rustdoc, stop setting the font. Maybe that's the thing we should do. I guess: rustdoc... tools... rustdoc main — that's unhelpful. Rustdoc — maybe — this is something I want to submit a PR for that just removes the font overrides, because you should never be doing font overrides. lib... rustdoc, in ../ — there is no — rustdoc themes — that's so unhelpful. Rustdoc — that's still pretty unhelpful. Where did this — light.css — I know this is slightly tangential, but — rustdoc theme — right. Well, where are the external files? theme.css — really? CSS, and HTML, static — ha. Great. Where is it overriding my font? Where is it overriding my font? Eh — if I go back to this — where does this font stuff come from? style.css — oh, this is probably set by docs.rs, actually. docs.rs needs to stop that immediately. All right, well, I'll do this later — it seems not worth it.
Yes, I will fix this later. Stop doing that! So, back to our Argonautica. "docs.rs isn't run by the Rust team, last time I checked" — true, although the handover has probably taken place. I was thinking the style was set by rustdoc when it generated the file; it's actually set by docs.rs, and that's why I'm going to submit a PR to get rid of these overrides — it shouldn't be setting my font.

Let's see. Okay, so we have these docs, and the question is what's down here. See, I like this; that was very good. There's still an example here, which is probably not terrible... "The default Hasher does not have a CpuPool; it is only needed for hash_non_blocking and hash_raw_non_blocking." Oh, I see — so that's why they've set it up this way. If you haven't set up a CpuPool and you never call those, everything will be fine. Whereas with Tokio that wouldn't be the case: specifically, if we tried to spawn the hasher, there might not be a Tokio runtime running, and if there isn't, it would just fail. So we'd be adding this implicit dependency on Tokio.

Hmm, hash_non_blocking — let's look at what this does. hash_raw_non_blocking... huh, that's interesting. Oh, it moves the hasher. What does scopeguard::guard do again? "scopeguard::Guard: owns the value, with a deferred closure." But it's not owning it here — it's just &mut self. Yeah, if I remember correctly, tokio::spawn requires the future to be... oh no, it does hasher.to_owned(). Okay, I see — and the Guard has a to_owned... I guess it derefs. That's pretty weird. Why does this use a scopeguard Guard? Because this to_owned is going to go through the Guard's Deref to the hasher and just clone it immediately.
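To see why the scopeguard usage here is confusing, a minimal hand-rolled version of what scopeguard::guard does — a value plus a closure deferred until Drop, with the value reachable through Deref — makes the ordering visible: a `.to_owned()` call auto-derefs through the guard and clones immediately, while the deferred closure only runs when the guard leaves scope. This is a simplified sketch, not the real scopeguard API:

```rust
use std::ops::Deref;

// Simplified stand-in for scopeguard::guard: wraps a value and runs a
// closure on it when the guard is dropped.
struct Guard<T, F: FnMut(&mut T)> {
    value: T,
    on_drop: F,
}

impl<T, F: FnMut(&mut T)> Deref for Guard<T, F> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.value
    }
}

impl<T, F: FnMut(&mut T)> Drop for Guard<T, F> {
    fn drop(&mut self) {
        (self.on_drop)(&mut self.value);
    }
}

fn main() {
    let snapshot;
    {
        let guard = Guard {
            value: vec![1, 2, 3],
            on_drop: |v: &mut Vec<i32>| v.clear(),
        };
        // `.to_owned()` auto-derefs through the guard and clones the Vec
        // *now*; the deferred `clear` has not run yet.
        snapshot = guard.to_owned();
    } // guard dropped here; the original Vec is cleared
    assert_eq!(snapshot, vec![1, 2, 3]);
    println!("clone taken before deferred clear: {:?}", snapshot);
}
```

Which is exactly the observation in the stream: since the clone happens immediately, wrapping the original in a guard whose deferred closure clears it is equivalent to just cloning and then clearing directly.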
So the deferred closure is really just going to clear the original hasher immediately after the clone, before spawning. This is just equivalent to calling clear on the original hasher after you call to_owned — unless I'm missing something. I think that's right. Yeah, I think this should be fine. Let's try to get rid of the CpuPool. We've been making this the mission for the day, so we might as well continue, right?

Hmm, so we'll do the same thing as we've done before: we'll clone it, add upstream, make master track upstream, and then we're going to do a branch — tokio-over-cpupool. And I guess we specifically want to edit... let me first see that it actually compiles; I assume it does. I wonder if that's the only place they use the scopeguard — it's used in hash_raw too. I feel like that guard is just not necessary.

Hello — "failed to run"? Why? "Source path is not an existing regular file"? What? Uh... build.rs, line 43... so that's why. Great. All right, so it does test all the things.

So I think what this is really going to do is... the usual trick here is to create a oneshot channel if you're going to have something run on the pool and eventually tell you when it finishes. So what we're really going to do is set up a oneshot channel, give the sender to the future that we spawn, and then just send on it whenever the hashing finishes. Are there any other places where this CpuPool shows up? I guess in the configure step, and then hash_raw. Okay. (I am using vim — well, Neovim, technically.)

So the question now is where we want to change this. It's really just src... and Cargo.toml is going to no longer have futures-cpupool; it is instead going to have tokio 0.1.
We also need tokio-threadpool 0.1, because we need the blocking function. src/lib.rs — down here we're also going to need extern crate tokio and tokio_threadpool. Up here, the CpuPool use is going to eventually go away... that's going to go away, this... okay, so I don't think the scopeguard here is necessary, although it's fine — I don't need to change it.

But instead of doing this, it's going to do tokio::spawn. Well — tokio::spawn takes a future, and in this case the future is going to be... so this is the same trick as we played in beanstalkd, right? In fact, the trick we've played all through the day is using this blocking stuff. So we're going to need futures — we have futures here, prelude::*. And then this is going to be future::poll_fn, of tokio_threadpool::blocking, with a move closure, and that's going to be hasher.hash_raw(). We're going to make a tx and an rx — we're going to need futures::sync::oneshot::channel. And then really what's going to happen here is we're going to .map — this is the final hash — and we're going to send that on the transmit end of the channel, and then all we return to the user is the rx part of the channel. And that should be all.

Right, so the setup is... what is the error type here? And how can I... oh, I guess — okay, so that is a Result. Yeah, I think that's right: tx.send. And this is going to be a then... specifically we need to match out the r, so this can be... let's look at what the oneshot channel is — it's under sync; I don't want futures 0.2. futures::sync::oneshot::Receiver — so the error would be Canceled, which happens if the sender is dropped. That shouldn't happen, but it could. So this is going to be a Result of Result, and this is going to be an error... Canceled — where is Canceled defined?
futures... futures::sync::oneshot::Canceled. So the question is what that's going to be — I don't know yet. We do have to use tokio_threadpool... Argonautica... Argonautica...

Right, so the other question here is about BlockingError — that's for if the thread pool, if Tokio, is shutting down. And if Tokio is shutting down, we're just going to drop the sender and then handle that as Canceled, so all of those fall into the same sort of category. So I think here we just do map_err and ignore that error — we can ignore it because it's handled by the Canceled case.

There's no longer a default CpuPool, so we don't need that. We do need use tokio for tokio::spawn. default_cpu_pool... it's no longer a CpuPool, so this goes away. Why is that there? default_cpu_pool_serde — it's not actually used by anything, so I don't know why that's even there. That goes away. This goes away. Oh — Verifier also has a CpuPool. It'll work basically the same way, right? So we're going to go back to our hasher... right, there we saw it. For the verifier it's basically going to be the same thing: we're going to spin up a thing, move the verifier, do verifier.verify(), send — that's just going to be an Ok — and that's just going to be returned there. And of course now we need to fix all these uses: don't need this, don't need that. HasherConfig no longer has a cpu_pool — oh, that's why it was there. Okay, that's fine. cpu_pool goes away, cpu_pool goes away, cpu_pool goes away. There's also a VerifierConfig: that goes away, that goes away. Isn't it beautiful when so many things disappear? It goes away, that goes away, that goes away. verifier, line 242, no longer needs the cpu_pool... hasher... I guess 244 — no, it's the same one. What else did we do wrong? verifier, line 172 — oh, right, this still needs to have this. Line 282 — oh, this send can fail? Really?
I guess the problem here is really... I think we know this can't fail, because the sender hasn't been dropped, so we could do .expect — there's no way for that to be dropped, really... oh, actually, there is: if the user just drops the future we give back, then the send would fail. Which is fine — it's okay if the user decides they don't care about the result. Same down here; that's still fine.

So now of course the problem is that you need to run this under Tokio for it to work... "I'm not sure about that." We also still need to figure out what to do if it was canceled. It'll be canceled if the thread pool shuts down, which it shouldn't. I guess the question is what errors we even have available to express something like that. So what did the old code do with the CpuPool? The old code just did spawn_fn, and that just worked. So let's look at futures-cpupool — what does CpuPool::spawn_fn give you? A CpuFuture. And a CpuFuture implements... really? How is that possible? What this is saying is that there's no way for this to fail, except in the ways that the future returned from the closure fails. Which seems bizarre — I don't know that I believe it, actually.

Here's probably the way to do this, actually: we don't really need to do a tokio::spawn here at all. We could just do a poll_fn with blocking. Yeah, I don't think we need to spawn — which is nicer, actually. This can just be that, right? Because we don't particularly care: tokio::spawn would just put the hashing and verification on a different thread, and we don't necessarily care about that. All we care about is that we're not blocking the current pool thread. I think this is better. It does still rely on there being a Tokio thread pool running, but that should be fine. And so really the only then we need here is for when the thread pool is exiting. So then, what do we do... it's complaining about what, exactly?
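The distinction being drawn here — tokio::spawn ships the work off to another task immediately, while a poll_fn wrapping blocking just runs the expensive part on whichever pool thread ends up driving the future — can be illustrated with a std-only analogy, where the deferred computation is just a closure and the "executor" is whatever thread eventually calls it. `expensive_hash` is again a toy placeholder for the real argon2 work:

```rust
use std::thread;

// Toy placeholder for the real argon2 hashing work.
fn expensive_hash(input: &str) -> usize {
    input.len() * 31
}

// Analogous to the poll_fn(blocking(..)) version: nothing is spawned
// here; we return a deferred computation that runs on whichever thread
// eventually drives it.
fn hash_deferred(input: String) -> impl FnOnce() -> usize + Send + 'static {
    move || expensive_hash(&input)
}

fn main() {
    let work = hash_deferred("password".to_string());
    // The "executor" runs it; under Tokio this would be a pool worker
    // thread executing inside tokio_threadpool::blocking.
    let handle = thread::spawn(work);
    println!("{}", handle.join().unwrap());
}
```

The practical upside mentioned in the stream: no extra spawn, no channel, and the caller still never blocks a pool thread on the hashing, because blocking tells the runtime to compensate while the closure runs.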
I guess we should just do then(|r| ...) and then match on r: if it's any kind of regular result, it's just going to be r, and if it's an error, it's going to be a tokio_threadpool BlockingError. And then the question is what we do with that. So I think that should satisfy it. Yeah, okay. And then down here — I don't know this file, it's very frustrating — yeah, so this is also going to be simplified, because it's now just going to be hasher.hash_raw() here, and then it's going to do the same thing.

So the question is what we want to do if the thread pool is exiting. This is a case that the old implementation just ignored. I feel like it just panics — that would be my guess. It could be that the right thing to do is just to panic, but where does the error come from? The error is an ErrorKind plus a Display. The question is just: do we want to panic here? I'm basically trying to figure out whether there are other places they unwrap or panic... well, that's all in testing. Yeah, it doesn't look like it. So I guess what we do here is return an error where the ErrorKind is something like... what is ErrorKind here? use ErrorKind... what's an ErrorKind? Oh, I see. I think what we're probably going to do is add a new one. I feel like "thread pool" needs to be two words: "The thread pool used to asynchronously execute operations exited prematurely" — with a full stop, apparently. Because now this can just be an ErrorKind for the pool being terminated, and the same in verifier. Let's take a look at that. Oh, it's argon2 — yes, it is indeed argon2. Unused import — that is true; we no longer need the import of tokio, or maybe even futures, and also in hasher.
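The change being sketched — surfacing a pool shutdown as a new error-kind variant instead of panicking — looks roughly like this. The enum, variant names, and messages here are simplified and hypothetical; argonautica's actual ErrorKind has different variants and derives:

```rust
use std::fmt;

// Hypothetical, pared-down version of an error-kind enum; the real
// crate's variants and wording differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ErrorKind {
    HashError,
    // New variant for when the thread pool backing the non-blocking
    // methods has shut down (i.e. blocking returned a BlockingError).
    ThreadPoolError,
}

impl fmt::Display for ErrorKind {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ErrorKind::HashError => write!(f, "Failed to hash."),
            ErrorKind::ThreadPoolError => write!(
                f,
                "The thread pool used to asynchronously execute operations \
                 exited prematurely."
            ),
        }
    }
}

fn main() {
    // The non-blocking methods would map a BlockingError into this kind
    // instead of unwrapping.
    println!("{}", ErrorKind::ThreadPoolError);
}
```

Returning an error keeps the decision with the caller, which matches the project's existing style of avoiding unwraps outside of tests.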
We don't need any of those. Ah, we do need that, okay. Now I guess we still need to... okay, so this goes away, this goes away, and then here we need to say that the hashing is performed on the current thread. Let's start with putting it here... dangling references aren't welcome... "the hashing is performed on the current thread."

But I guess that raises a good question: we have changed the semantics of non-blocking here a little bit. Because the semantics now are that the hashing happens on "the current thread" — but the current thread is wherever you spawn the future, whatever thread runs this future. And another question: what happens if you call blocking and you are not currently running under Tokio? Probably panics, would be my guess. I guess we'll have to look at the source... yeah, I'm pretty sure this panics, because I think Worker::with_current is going to panic. All right. So I guess: "the hashing is performed on whatever thread executes the returned future" — but with a tokio_threadpool::blocking annotation, to ensure that the runtime is not blocked from polling other futures. I guess we do have to say that it now depends on Tokio.

"Leave at least two blank lines between functions" — so I would normally also leave two blank lines between functions, or at least one, but this is the way this project is set up, and so I'm not going to change it. You want to conform to what the original authors are doing, right? It's not your job to set the code style policy for someone else's codebase.
"...is not blocked from polling other futures. Note that the returned future relies on being executed by Tokio, and will not run if that is not the case." Same for the verifier: the verification is performed on whatever thread executes the future. And then I guess... okay, so we're going to have to run that example, actix... this has to return a future, huh. So the question is whether actix — I think actix-web does use Tokio. What? Oh, come on. Really? "On top of Tokio" — okay, so this should all just work fine, just without the CpuPool. Don't need the cpu_pool... don't need the cpu_pool anymore... don't need the cpu_pool, great. src/lib.rs still has some cpu_pool stuff in it: that goes away, that goes away, this entirely goes away, which is fantastic. I guess the cpu_pool serde stuff we can get rid of too.

All right, so it's only the README left. Diff the README... I did not change any of those; the only thing I changed was... did not change that, did change this, did change this, did change this. I just want to see that we haven't included any weird changes here — I don't think so. This is a pretty straightforward change, really.

Commit message: "Remove futures-cpupool in favor of tokio. Previously, a dedicated CpuPool was spun up to run hashing and verification in a non-blocking fashion. This is unfortunate, given that there is usually already a thread pool running under Tokio. This patch changes the code to run non-blocking hashing and verification on the Tokio thread pool instead, to avoid configuring and creating a separate one. It uses tokio_threadpool::blocking to ensure that it does not hold up other futures while doing so."

So one thing that's good to do, if you're submitting something that changes something relatively deep — especially something that changes the API without that being visible in the API itself; we've removed some functions, right, but the change isn't really expressed in the API here —
— then you want to explicitly point out that this does break the API. This makes a major change to the API that would probably require a major version bump. "Note that this removes the ability to configure the pool; instead it places that responsibility on whomever sets up the Tokio runtime. Furthermore, the non-blocking methods now rely on Tokio and will not work in other, non-Tokio asynchronous contexts." I guess here we could point out that actix-web runs on top of Tokio, so it will still work there.

All right: push to origin, tokio-over-cpupool. Argonautica, we have a pull request for you. Good issues here... nope. "Compare & pull request" — nope. Why is it not being helpful here and using the latter commit — the bigger commit, rather? And I guess we'll be nice and do the same thing we did in the other pull request, which is add some links to the relevant docs: this should link to tokio_threadpool::blocking, this should probably link to the tokio-threadpool crate, and I guess we want to link to the Tokio runtime specifically. All right, like so. Great pull request! We did it — we have now contributed to Argonautica as well.

This is a slightly weirder change... I guess one thing I'll point them to... I sort of actually want to get rid of the rustfmt commit. Can I do that? I'm going to swap these two commits around — so basically what I'm doing is getting rid of my rustfmt commit after the fact. The cpu_pool changes are in both of them... this is pretty straightforward... config, hasher... and now it's going to be the same, and I think all of these should be just the rustfmt changes. So we're going to do a reset, then git push --force, and now this should only have the one diff that we actually care about, and the files changed should be much more reasonable. Beautiful — beautiful is what it is. All right, I guess now we check whether anything has happened with the previous one we did — the beanstalkd one. Oh, it hasn't been merged yet.
Oh well — it did pass the tests, though, so that's nice.

"How did you get the Firefox tab bar at the bottom?" I did a live stream a few weeks ago where I went through my entire desktop and editor setup, so you can look up the video for that on the YouTube channel. But basically, Firefox lets you write CSS for the browser chrome, and so I wrote this CSS file — if you add it, it moves the tab bar to the bottom. It's not perfect, but I like it a lot better that way.

Okay, let's see. So we now have this — that was the first, this is the second. I think that's a pretty good day's work. We submitted a pull request for Juniper — about 100 lines added, 100 removed — one for beanstalkd — about 300 lines removed, 200 added — and for Argonautica, about 200 lines removed and 50 added. That's pretty good; I think we did well. I hope you feel that was useful.

I think we're going to stop there and not do another one. There are a couple of others that would be fun to do, but it's been about four hours now, and I think this is probably a good place to stop. So I'm going to repeat the message from the beginning: if you feel like this was useful — if you feel like this format of going through other people's code and trying to make changes was interesting and useful — then please let me know, because then I will probably do more of them. It's still a little bit stressful, and one thing that's hard is predicting how complicated the changes are going to be and how viewer-friendly they're going to be. Some of them are just really nitty-gritty details that may not be interesting to watch, and may not be something you learn from watching. But if you feel like the format was useful, please let me know. If there are crates you would like us to take a look at, please let me know. If you feel like it wasn't useful and you'd like to see something else, then you're probably not still watching — but if you are,
please let me know. I'm not entirely sure what the next stream will be. I've also had some ideas to do some more data structure work, so we'll see — maybe something from the standard library. I should do a big poll at some point to figure out what we're going to do for the next stream. Scheduling will probably be a little wild because I have some conferences coming up. But I think this was useful. I will post the recording on YouTube afterwards, as usual, and I'll make sure to try to link to useful checkpoints in the video — where we started a new crate, for example. So if you want to go back and look at some of the things we did, maybe at a lower speed while looking at the code at the same time, that's a useful way to do that.

I guess let's check if there are any last-minute questions. "Your desktop video made me really want to dig into mine." Yeah — there are still things I want to change. One thing I've found since last time — consider this an addendum to that video (and if you have any last-minute questions, ask them now so I can get to them) — is sxiv. sxiv is great: it's a really simple image viewer that just has vim bindings, and it's really handy if you live your life on the command line like I do. The other is dunst. dunst is nice if you're not running GNOME or KDE or something and you still want configurable pop-up notifications for things like receiving email and changing Spotify songs; it's really easy to set up and configure, and I've been really happy with it. I'm also considering switching my window manager away from xmonad — I just need to find something to switch to. But yeah, I think that setup stream was pretty fun. For some reason all of these links are dark blue, which is unhelpful. Yeah, the YouTube channel —
sorry, the YouTube channel is just /c/ followed by my name, or that link — I think this one is easier. All right, sounds good. Well, I hope you found that interesting. Follow me on Twitter or Patreon if you want to see upcoming streams. You can also, of course, subscribe to these pull requests and see whether there's any activity — whether they end up getting merged or not. And I guess thanks for watching, and have a good rest of your Sunday, for those of you still on Sunday given your time zone. All right, bye everyone. It was a joy to stream for you again.