Hello, everyone. Welcome back to another stream. I'm John. I do a bunch of Rust streams. I started doing these because I wanted to do some more intermediate Rust material for people who are either learning Rust, or have learned Rust and want to see something more advanced being developed. I have a Patreon page where I post ideas for upcoming streams, and where I link to a bunch of projects I maintain; if you want to hear about upcoming streams, you can follow me there. You can also follow me on Twitter, and I'll post basically everywhere. Today we're doing a somewhat more advanced stream than some of the past ones. In particular, we're writing a client library for Apache ZooKeeper, and we're going to write one that is fully asynchronous. We're going to be using Tokio; in particular, we'll be using the new Tokio with the runtime, as opposed to the old tokio-core stuff. Tokio still uses futures 0.1, even though futures are now in the process of being merged into Rust the language. We'll be dealing with Tokio because we're going to want things like timers and such, and it's just easier to do it this way. Now, ZooKeeper, as I discovered, is actually really poorly documented in terms of how its protocol works. Basically the page you have to go on is this one, which mostly talks about the high level of the protocol and about the guarantees the protocol gives you. But it doesn't really say what the wire protocol is, which is what we want. Normally when you implement a protocol like this, you'd also be using Wireshark. Wireshark is a really neat tool where (actually, I should start ZooKeeper first) you watch some interface, filter on the ZooKeeper port, and normally Wireshark is smart enough, let's see if we can... normally Wireshark will actually decode the protocol for you, and that makes it a lot easier to debug.
Unfortunately, Wireshark also does not support ZooKeeper yet, which is pretty unfortunate. Let's see: `set /foo bar`. Can I not do that? Oh, do I have to create it first? That's annoying. `create /foo bar`. Oh, it's on localhost. Stop that. Well, in any case, Wireshark won't actually help us all that much because it doesn't support the ZooKeeper protocol, and so we're sort of left on our own, which is a little sad. Luckily, someone asked about this on Stack Overflow: what is the protocol, how do I understand this? And Chris Nauroth posted a really good answer. It doesn't actually give you the full protocol description, but it does point you to all the places where you might want to start. In particular, ZooKeeper uses something called Jute, which is a Java serialization library, and the Jute file defines basically all the kinds of packets you can send over the wire. So this will be a good reference for us. In addition, there are Java definitions of all the classes that we might be able to use, and there's a file with various constants that might come in handy. There's also, in the ZooKeeper code (notice that I'm in the Apache ZooKeeper source code now), a server method called processPacket, which tells us at a very high level what a request looks like, and from there we could dig into how it's actually parsed. In particular, we can see things like: it first deserializes a header before it starts to switch on the type of the packet. So this is going to be useful later. The other thing that is useful when implementing a protocol is that there's already a rust-zookeeper crate, which is synchronous, and that, of course, has the entire protocol implemented. It has a src/proto.rs with a lot of these constants and the classes and the basic protocol and serialization stuff.
I think src/io.rs is the thing that has, yeah, how you connect to ZooKeeper. So we'll be using this a lot, digging through the existing code to try to build ours up. The other thing I found was that someone actually proposed a way to add ZooKeeper support to Wireshark, and wrote a full dissector for the ZooKeeper protocol. That's this one here; it's by someone called ato, I believe. This basically goes through how you parse an entire ZooKeeper packet, determine what it's for, and set the interesting metadata that Wireshark would normally give you. It hasn't been merged, and it's in Lua, I think, not C, so it can't be directly merged, but it might also help us. Let's see whether this has a... no, they haven't done a protocol page. Okay. If you have questions while we go (this will be fairly technical and we'll do a lot of digging into protocol specifics), feel free to ask them on Twitch, and I will try to answer whenever I can; I'll glance over every now and again. And yeah, this is going to be pretty low-level stuff, but hopefully it will be interesting. So we will start with `cargo new --lib tokio-zookeeper`, because that's what it is. At a high level, whenever you start a new crate, usually the thing that's good to start out with (wow, that's not at all how that's built) is some core structure, the type people will use to interact with our library. In this case, it's going to be a ZooKeeper struct, right? The ZooKeeper struct is what you use to issue commands, wait for responses, those kinds of things. So we're definitely going to have that struct. It's not entirely clear what's going to be in there yet; there'll probably be something like a TCP connection. We'll find that out later.
And then on ZooKeeper, we're going to have a bunch of methods. In particular, we're going to have a connect method. That's going to give us, good question, probably an IO result. Actually, it's going to be a failure::Error; we're going to use the failure crate, which is great, so Result<Self, failure::Error>. failure is a pretty neat crate that lets you do context wrapping for errors, so you can propagate errors up, including what caused an error. You can have complex errors, or chained errors with explanations, those kinds of things. So we have a connect method, and since we're building a protocol client, the first thing we'll want to be able to do is connect to ZooKeeper. I have no idea how hard it's going to be; I haven't interacted with ZooKeeper at an implementation level before. But I think what we want is... oh, actually, this is going to be a ConnectFuture. Because we're in futures land, remember that nothing is synchronous, and so in general, if you want to connect to something, that is itself going to be asynchronous. So in fact, this might just return a ConnectFuture. The very first test we'll want to pass is: we make a ZooKeeper. And then we're going to need futures 0.1 and tokio 0.1 here, so we add the futures and tokio crates, and then `use tokio`. The tokio prelude includes lots of different traits that are useful, things like Stream, Sink, and Future; basically most of the things you want that are not implementation details, but are for how to use Tokio. And so our test is basically just going to be connect and then disconnect. That's all we need to do; we want to be able to do that without there being an error. And so in this case, we will just use tokio (and `use super::*` so we get everything from above) and call tokio::run.
Now, tokio::run is a little bit weird in that it doesn't really do much. tokio::run just... sorry, it's very warm. tokio::run just spins up a runtime. A Tokio runtime is sort of a thread pool that executes futures; it spins up a bunch of timers, those kinds of things. It runs the future that it's given to completion, then it terminates the runtime and returns. And that's basically what we want here. We could be more efficient by using the current-thread runtime and whatnot, but let's do this in the most straightforward way; there's going to be enough complexity anyway. So all we want to do is resolve the ConnectFuture. And so the question, of course, is what this ConnectFuture is going to look like. Well, connect is going to have to take some kind of address, which for now is going to be just a string. (Got it just in time. Indeed, we have only just started.) So this ConnectFuture is actually going to be pretty straightforward. Ooh, actually, here we could, depending a little bit on how fancy we want to be... we could also use impl Future here and say that we return something where the Item is Self and the Error is failure::Error. Let me remember what the current version of failure is; it's 0.1 as well. Great. So we could just use impl Future here instead. Let's do that for now; it might not actually work in the end, because we might want to do more things in the ConnectFuture, but just for now, let's stick with this and see where that takes us. So for connect, the first thing we're going to have to do is connect to ZooKeeper. We'll check the Tokio docs: we want tokio::net::tcp::TcpStream, and we're going to call connect. Does it actually have to be a SocketAddr? That's a little sad. And once we have connected, I guess, actually, this returns a result.
And what we want here is to add context. For basically all of these, we want to add a failure context. I'm going to skip over some of the context adding for now, just because we want to get to the protocol stuff, but here you could do something like this and say "failed to connect". I don't think that will actually currently work. So instead, what we're going to say is: after we've connected, that gives us a stream, and with that stream, we're going to do self.handshake(stream). This means that there's going to be a handshake method, which takes the stream that we have (so this is a tokio::net::tcp::TcpStream) and returns an impl Future where the Item is Self and the Error is failure::Error. The idea here is that we connect, and then we have to do some other business: we basically have to tell ZooKeeper that we connected. This is because ZooKeeper might require a password, or might require some configuration changes; in general, you usually have to do some kind of handshake when you negotiate with a new server. And that's what we're going to do here. This is also, of course, asynchronous. Now, this is where it starts getting interesting. So the connect message, let's see. If we look at rust-zookeeper's src/io.rs: when it creates the IO, it creates a bunch of stuff, and then it sends a connect request, and a connect request is just a ConnectRequest. And notice this `to_len_prefixed_buf`; we're going to see this a bunch. If we look at the Lua file as well, the Wireshark dissector, it first reads the first four bytes as a number and considers that the length of the payload (there are some exceptions to that). Then, after that, it reads an xid from the next four bytes, and then an opcode, which is the next four bytes. So basically, I think this is in the Stack Overflow answer.
If you get something like this, that's going to be a header that tells you how long the following data is, which connection this is, and which operation you are executing. In our case, if you look at ConnectRequest, which I have open somewhere, in src/proto.rs: it contains all these fields. And my guess is, if we look at the Jute file as well... yeah, ConnectRequest here just contains these things, and is prefixed by this request header; that's the other two fields that we saw. And so what we're really going to want here (this is a trick you often end up doing in protocols) is some kind of wrapper around a stream, where we can write things and have them be automatically packaged into the appropriate serialization protocol, and read things back and have them be automatically unpackaged. So we'll probably add a `mod proto`. And then this is going to do something like `let request = proto::ConnectRequest { /* some fields */ }`, and then it will wrap the entire stream that we have in one of these packetizers, in a sense, so that we can write request things and it will serialize them correctly, including the headers. And then we'll be able to read back deserialized results where the headers have been unwrapped. So we'll do something like `let stream = proto::wrap(stream)`, and then `stream.send(request)`. This is still just pseudocode; remember, we haven't written proto::ConnectRequest, proto::wrap, or send at all. We're just setting up the infrastructure that we're going to use to communicate with the server, because we want an abstraction here instead of writing the serialization code over and over and over again manually. And so we're going to send a request. And then, that's a good question: what is the stream going to give us back when we send something?
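To make the header layout concrete, here's a small sketch of parsing the framing the dissector describes: a big-endian length prefix, then the xid, then the opcode (for regular requests; as we'll see later, the connect request is special). The function name and return shape are my own, not from any of the crates discussed.

```rust
// A sketch of the header layout the dissector describes: the first four
// bytes are the big-endian length of what follows, then the xid, then
// the opcode. Returns None if there aren't enough bytes for a header.
fn read_header(buf: &[u8]) -> Option<(i32, i32, i32)> {
    if buf.len() < 12 {
        return None; // not enough bytes for length + xid + opcode
    }
    let be = |b: &[u8]| i32::from_be_bytes([b[0], b[1], b[2], b[3]]);
    Some((be(&buf[0..4]), be(&buf[4..8]), be(&buf[8..12])))
}

fn main() {
    // length 9 = 4 (xid) + 4 (opcode) + 1 byte of payload
    let wire = [0, 0, 0, 9, 0, 0, 0, 7, 0, 0, 0, 1, 0xAB];
    let (len, xid, opcode) = read_header(&wire).unwrap();
    assert_eq!((len, xid, opcode), (9, 7, 1));
}
```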
It'll probably give us the stream back, actually, and we'll have to figure out exactly how that's going to work. But then we're going to do a stream.receive(), and that's going to give us back a response and the stream. And hopefully from that, we'll be able to construct a ZooKeeper. Right, so the idea is: we send a connection request to the server, eventually we get a response back, and with that response, we can finally construct the ZooKeeper client state. That will, of course, include the stream that we're dealing with, but potentially also state that the server gives us back with the handshake. And so at this point, this is indeed a future that will resolve to Self and can error with a failure::Error. Okay, so the question is what's going to be in proto. At a high level, the thing that's really going to be here is a struct that's going to be a packetizer, if you will. I don't know what's going to be in it yet. And there's going to be a function wrap, which takes a tokio::net::tcp::TcpStream (which lets you both read and write) and returns a Packetizer. And I guess, technically, if we wanted to be really good here, this would be generic over S, taking an S where S: AsyncRead + AsyncWrite (those traits come from tokio::prelude). Right, so the idea is that we can wrap anything where we can send requests and read them back. The reason we don't want to hard-code this to TcpStream is mostly for testing, but you could also imagine that you wanted to run this over some other kind of reliable protocol. I don't know if ZooKeeper supports Unix sockets, for example, but there's nothing inherently stopping us from supporting that. And so we should make this generic over only the trait bounds that we need. So this will probably be fairly simple; it will probably just be something like this, and that means this will hold a stream of type S.
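The same generic-wrapping idea can be sketched with the blocking std::io traits standing in for AsyncRead/AsyncWrite: because the packetizer only asks for the trait bounds it needs, a test can hand it an in-memory buffer instead of a real TcpStream. The names here are illustrative, not the actual library code.

```rust
use std::io::{Cursor, Read, Write};

// The packetizer only holds "some stream" S; it never names TcpStream.
struct Packetizer<S> {
    stream: S,
}

// wrap accepts anything readable and writable, so tests can substitute
// an in-memory buffer for a network connection.
fn wrap<S>(stream: S) -> Packetizer<S>
where
    S: Read + Write,
{
    Packetizer { stream }
}

fn main() {
    // Cursor<Vec<u8>> implements both Read and Write, so it works
    // anywhere a TcpStream would under these bounds.
    let mut p = wrap(Cursor::new(Vec::new()));
    p.stream.write_all(b"hello").unwrap();
    assert_eq!(&p.stream.get_ref()[..], &b"hello"[..]);
}
```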
But the more important thing is that we're probably going to implement Sink for Packetizer. Sink is a futures concept for something you can stuff things into; think of it as the sending end of a channel. So it's a Sink where you can put values and they go somewhere, where S is AsyncWrite. Remember that for sending things to ZooKeeper, we don't actually need to read from the channel, at least in theory; I don't know, the protocol could be different, we're about to find out. Right, so let's look at what the requirements for Sink are. In order to implement Sink, we need the following, so we will implement them. SinkItem is the kind of thing you can send into the Sink, and that's going to be ZooKeeperRequest. And SinkError is going to be just failure::Error, because everything is going to be failure errors, probably. Actually, that's a good question; it might not be that we want this to be the case. It might be that we want to support protocol-level errors as well, but that will probably be on the receive side: you send a request, and the thing you get back says that you tried to remove a key that wasn't there. There, we probably want to return a special kind of error that you can match on, whereas failure::Error sort of masks the underlying error. When you send the request, though, the type of error that you got is probably not important. What matters is that in the response you got back, there might be some server-defined error type that we want to expose. So the Sink trait has two main methods: start_send and poll_complete. The documentation has a bit more on them, but essentially, the idea is that start_send just wraps up the thing you give it so that it's ready to be sent, but it never does any blocking work. poll_complete is the thing that will actually drive the sending forward.
And the rule is that, for any given item you want to send, you should call start_send until it gives you Ready, and then you call poll_complete until that gives you Ready, and that's the point at which you know it's been sent. Just waiting for start_send is not sufficient. So in our case, for example, start_send will probably do the serialization, but poll_complete is going to be the thing that actually sends on the wire. And the reason for this is that serializing is something we're going to have to do regardless, and we sort of only want to do it once, whereas poll_complete has to try sending on the TCP stream, or whatever stream we have, and that might just block; it might be that the channel is full. In that case, poll_complete would have to wait, whereas start_send we don't really want to have to block. So we're going to have this enum ZooKeeperRequest, right? And the only thing we know for sure is that it's going to have a connect request; it's going to have one of these guys. And I trust the author of the original rust-zookeeper crate to have written this correctly, so there's going to be one of those. Now, here's the question. One of the things that often bites you in async land is that you need to be able to have something only half complete. Like, imagine that... yeah, I mean, in this case, we're sending a request, and you could imagine that the TCP socket that we're using only has room for, I don't know, half as many bytes as what you're trying to send. At that point, the next time poll_complete is called, you need to send only the remaining bytes; you don't get to send the first bytes again. There are many ways to deal with this. The way we're going to deal with it is to have an outgoing buffer and an incoming buffer. So when we serialize, we're just going to serialize and append all those bytes in one go, and then poll_complete is just going to try to flush that buffer.
This will make sense pretty soon. So there's going to be an outbox, which is going to be a Vec<u8>, and there's going to be an inbox, also a Vec<u8>. The outbox is bytes we have not yet sent, and the inbox is bytes we have received but not yet deserialized. So in start_send, what we're going to do is, huh, that's a good question. We're basically going to push a bunch of bytes: we're going to serialize the request. Let's do this the slow way first. We're going to take the item, and let's say that we require that ZooKeeperRequest can be serialized in some meaningful way. So in this case, we will do item.serialize(). Right? This is not going to be the final API, because this might be fairly inefficient, but let's go with it for now. And then we're going to have to do something like... oh, actually, remember how in the protocol we have to send this request header as well? And the request header has a connection id and a type. The type is basically what type of request you sent, so that the receiver knows what to deserialize it as. In our case, we sort of want to add that automatically. So my guess is that the serialize method we add on ZooKeeperRequest is going to produce both the type and the bytes following it, whereas the xid is only really known by the packetizer; it's tied to the stream. And the length as well: we don't really want serialize to have to deal with that. We sort of want to be able to serialize all the data, then count the length and fill it in afterwards. So this is going to be the payload; it's going to be the type and payload, right. And then the xid: there's going to be some way to get the current xid. The length is going to be the length of the type and payload, plus however long the xid is; in this case, we know it's four bytes. And then we're going to do self.outbox.extend, or I guess .push.
Okay, I'm going to write this in pseudocode first, and then we can discuss after. So the idea is something roughly like this, right: we first serialize the request itself, then we get the stuff that has to go at the beginning, and then we push all of that into the outbox, and then we return Ok. So notice, first of all, that there's basically no way in which this can fail: it just pushes into a buffer and does nothing more. And then poll_complete actually turns out to be really straightforward too. All it's really going to have to do is push stuff from the outbox that we have not yet pushed into the TCP stream. So we're going to have to keep track of the prefix of the outbox that has been sent, which is going to be out_i, or I guess outstart (and similarly instart for the prefix of the inbox). The reason this is useful: imagine that we have a buffer of 100 bytes, and we want to send that to the underlying TCP stream, and it comes back to us and says that it sent the first 50 bytes. Now, what we could do, of course, is just remove the first 50 bytes from the buffer and keep the last 50. But this is really inefficient, because now we have to copy all of the last 50 bytes to the beginning of the vector. Instead, what we sort of want to do is just keep track of the fact that the first 50 have been sent, and that the last 50 still need to be sent. And then it's only when we've actually sent everything that's in the buffer that we clear it. So this saves us a bunch of copies. It makes the code a little bit less nice, but I think it'll be fine. In this case, most of what we're doing is self.stream.write(&self.outbox[self.outstart..]), right? And then we're going to use the try... actually, that's a good question. Yeah, I think there's a try_ready!. Let's work with that.
So n is going to be... what try_ready! does is: it's a macro often used in futures land. The try_ready! macro will try to call the method you give it, and it expects that the method it's calling returns something like Result<Async<T>, E>. If it gets Async::NotReady, or if it gets an error, then it returns early; otherwise, it gives you the value. So you can think of it sort of like try!, or the question-mark operator, just for futures. And so n here is going to be the number of bytes that we wrote. In our case, what we want to do is self.outstart += n. And then, if we've now written everything that was in the outbox, we can clear the outbox and start from the beginning. So this means that the outbox could actually keep growing unboundedly if we get really unlucky with our TCP sends, but it means that we have to do way fewer memcpys, so I think on the whole this is fine. At the end, if it successfully wrote out everything that's in the buffer, then we can return Ready, right. So this is saying: only if that's the case; otherwise, we still have more stuff that needs to be written. So the idea here is that we try to write as many bytes as we can, and we get told how many bytes were written. (And yeah, thanks, I caught it without seeing it, I didn't cheat. Good that you're watching me, though.) So if the stuff that's been written out is everything that we needed to write, we're done, and we can say that we've successfully polled for completion. Otherwise, there are still more bytes that have to be written, and that's when we return NotReady. Of course, this code in start_send does not currently work, and probably won't work for a little while; in particular, we're going to need this serialize method on ZooKeeperRequest. So one question is what serialize is even going to do. I think we want to make this decently efficient.
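The outbox/outstart bookkeeping just described can be sketched synchronously. This is an illustration, not the real poll_complete: the Trickle writer (a made-up name) simulates a TCP stream doing partial writes, and the loop stands in for repeated calls to poll_complete.

```rust
use std::io::{self, Write};

// A writer that accepts at most `cap` bytes per call, to simulate a
// TCP stream doing partial writes.
struct Trickle {
    sent: Vec<u8>,
    cap: usize,
}

impl Write for Trickle {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let n = buf.len().min(self.cap);
        self.sent.extend_from_slice(&buf[..n]);
        Ok(n)
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

// The outbox/outstart bookkeeping: keep writing from outstart, and only
// clear the buffer once everything has been sent. In the real async
// code each iteration would be one poll_complete call, and a failed
// write would surface as NotReady or an error instead of `false`.
fn poll_flush<W: Write>(stream: &mut W, outbox: &mut Vec<u8>, outstart: &mut usize) -> bool {
    while *outstart < outbox.len() {
        match stream.write(&outbox[*outstart..]) {
            Ok(n) => *outstart += n,
            Err(_) => return false,
        }
    }
    outbox.clear();
    *outstart = 0;
    true
}

fn main() {
    let mut stream = Trickle { sent: Vec::new(), cap: 3 };
    let mut outbox = b"0123456789".to_vec();
    let mut outstart = 0;
    assert!(poll_flush(&mut stream, &mut outbox, &mut outstart));
    assert_eq!(stream.sent, b"0123456789");
    assert!(outbox.is_empty());
}
```

Note how no bytes are ever shifted to the front of the vector; outstart does all the work until the whole buffer has drained.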
So remember, currently, let's say that serialize returned a vector, right? Imagine for a second that it's something like this. Then we're going to allocate a vector, return it here for the type and payload, and then when we append it to the outbox, we're basically going to extend the outbox and copy all the bytes. Now, that's not a problem for a connect request, for example, which is pretty small. But in ZooKeeper, you can set and read relatively large values if you want to, and so we don't really want to copy all those bytes. So what we're going to do instead is serialize_into: it's going to take a mutable reference to the buffer (the ugly way, for now). So instead of returning a new vector, it's going to just extend the underlying vector. Now, we have to be a little bit careful here, because remember, there also has to be room for the xid and the length, and those have to come before the type and payload. And so what we're going to do here is a little trick where we push a dummy value for the xid and the length. Actually, the xid we can probably get regardless; for the length, we'll just push zeros, and then we'll change them after we know how long it is. That way, we never have to do this extra allocation and memory copy. So serialize_into is unimplemented for now. The idea is that it's going to serialize all the bytes of the current request: it's probably just going to match on self, write out the fields in the appropriate order with the appropriate types into the buffer, and return how many bytes it wrote. It's going to be fairly straightforward. It could even be that we just want to implement Serialize here, but let's do it the simple way. So this means that we're now going to need some way to get the xid of a Packetizer. My guess is that this just has to be stored in there. What is an xid? It's an int. What is an int in Java?
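The serialize_into idea, sketched: instead of allocating and returning a fresh Vec (which the caller would then copy into the outbox), append directly to the caller's buffer and report how many bytes were written. The ConnectRequest fields shown here are a made-up subset, just for illustration.

```rust
// A cut-down stand-in for the real ConnectRequest.
struct ConnectRequest {
    protocol_version: i32,
    timeout: i32,
}

impl ConnectRequest {
    // Appends the big-endian encoding of each field to `buf` and
    // returns how many bytes were written. No intermediate allocation.
    fn serialize_into(&self, buf: &mut Vec<u8>) -> usize {
        let before = buf.len();
        buf.extend_from_slice(&self.protocol_version.to_be_bytes());
        buf.extend_from_slice(&self.timeout.to_be_bytes());
        buf.len() - before
    }
}

fn main() {
    let req = ConnectRequest { protocol_version: 0, timeout: 30_000 };
    let mut outbox = vec![0xAA]; // pretend a header byte is already there
    let n = req.serialize_into(&mut outbox);
    assert_eq!(n, 8); // two i32 fields
    assert_eq!(outbox.len(), 9); // existing contents are untouched
}
```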
So let's see: ConnectRequest has an int for protocol_version, and the rust-zookeeper implementer chose i32 for that. So int is an i32, and the xid is going to be an i32. I wonder whether you can have multiple connections open; this connection id makes me think that you can multiplex connections onto a single stream, which we might want to support later, but for now, let's just ignore that. We'll probably want a way to set the xid of a Packetizer. We'll probably also want a new for Packetizer, because the outbox, outstart, inbox, and instart are things we don't want the caller to have to deal with. So this is going to be pub(crate), so you can only call it from within the same crate, because we don't want users to create Packetizers. In fact, Packetizer is not even going to be public. It will have to be available within the crate, though, because we're going to use it from src/lib.rs, for example. So new is going to take a stream and give you a Packetizer<S>. In theory, this could give you an uninitialized Packetizer; we could have a separate type for when it has not yet been connected, but I don't think there's a good reason for us to do that, at least not for the time being. So a new Packetizer is going to have that stream; outbox is going to be empty, outstart is going to be zero, and the same for the inbox, and the xid is initially going to be zero. (pub(crate), thanks.) So that gives you a Packetizer; in fact, let's just have that instead of this wrap function. ZooKeeperRequest is also going to be pub(crate); serialize_into is not. So what we're going to do here is, we're going to need a way to serialize and deserialize numbers into byte strings, because we have, say, an i32, and we need to write the appropriate bytes onto the wire. For this, one of the nice things to use is the byteorder crate, which does all sorts of endianness handling and whatnot.
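Putting the fields and the constructor together, a sketch of the shape being described might look like this (field and type names are my guesses at what the stream settles on, not the published crate's code):

```rust
// All the bookkeeping the caller never sees: buffers, their sent /
// consumed prefixes, and the connection's xid.
struct Packetizer<S> {
    stream: S,
    outbox: Vec<u8>,   // bytes not yet sent
    outstart: usize,   // prefix of outbox already written to the stream
    inbox: Vec<u8>,    // bytes received, not yet deserialized
    instart: usize,    // prefix of inbox already consumed
    xid: i32,
}

impl<S> Packetizer<S> {
    // pub(crate): constructible inside the crate (e.g. from lib.rs),
    // but not by users of the library.
    pub(crate) fn new(stream: S) -> Packetizer<S> {
        Packetizer {
            stream,
            outbox: Vec::new(),
            outstart: 0,
            inbox: Vec::new(),
            instart: 0,
            xid: 0,
        }
    }
}

fn main() {
    let p = Packetizer::new(Vec::<u8>::new()); // any stream type works here
    assert_eq!(p.outstart, 0);
    assert_eq!(p.xid, 0);
    assert!(p.outbox.is_empty());
    let _ = (p.stream, p.inbox, p.instart);
}
```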
So we add byteorder here, let's say version 1.2, and we need `extern crate byteorder`. And I think this gives us WriteBytesExt, which is really convenient for these kinds of things. WriteBytesExt basically gives us the ability to take anything that implements the Write trait and call, say, write_i32 on it, so we can write a value and have it be written directly. And Vec<u8> implements Write directly, which just causes the vector to be extended, which is basically exactly what we want. So in our case, this means that here we can do self.outbox.write_i32, and that's going to be the length. So actually, let's check what type that length is. Hmm, it doesn't really help. There's a RequestHeader; the xid, my guess, is an i32. But no, that's just what it writes out: big-endian. Great. So yeah, you see, the other ZooKeeper crate for Rust also uses byteorder, because it's really convenient for this, and this is why this file is going to be a really nice reference for us. This saves us a lot of work that we would otherwise have to reverse-engineer from the Java implementation. Notice that we have to tell it what endianness to use. So remember, from your computer science background (or not): when you have a number that spans multiple bytes, whether you write out the highest-value byte or the lowest-value byte first differs between computer architectures and between network protocols. And so what we're saying here is that the network protocol for ZooKeeper is big-endian: the big bytes come first. So when we write out this i32 onto the wire, we want to write it in that order; we're going to use BigEndian here, so write_i32::<BigEndian>. And remember that we don't know the length yet, so here we'll just write zero.
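For a quick sanity check on what big-endian actually means on the wire, here's what write_i32::<BigEndian> boils down to, sketched with the standard library's to_be_bytes rather than the byteorder crate:

```rust
fn main() {
    let len: i32 = 44;
    // big-endian: the most significant byte goes first on the wire
    assert_eq!(len.to_be_bytes(), [0, 0, 0, 44]);
    // little-endian, for contrast
    assert_eq!(len.to_le_bytes(), [44, 0, 0, 0]);

    // Appending to a Vec<u8>, much like byteorder's Write impl does:
    let mut buf = Vec::new();
    buf.extend_from_slice(&len.to_be_bytes());
    assert_eq!(buf, [0, 0, 0, 44]);
}
```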
And because it's a vector, we know that this should never fail, because it will just allocate a larger vector if need be. Right. And then we will write out the xid, which is going to be self.xid, and then we will do item.serialize_into, and we're going to give it the outbox. This also should never fail. So this gives us n, which is how many bytes it wrote. And here we now have to be a little bit tricky, because remember how we wrote out a length of zero above? We're going to have to overwrite that zero with the true value. This is a little bit more finicky, but basically, we can go n plus four bytes backwards in the outbox. So this is going to be &mut self.outbox[length_i..length_i + 4], where length_i is the offset of the length field, so that range is where the length is stored in the outbox. And then we just want to do the write again. In fact, if we wanted to, we could just push four dummy bytes here. Maybe that's what we should do: push a zero dummy length, then the xid, then the type and payload, and then set the true length. And then the question is... I think mutable slices also implement Write, but I'm not entirely sure. Yep, looks like it. How is that implemented? Right, so this would "fail" if you try to write more than the length of the slice: it would only write as much as fits, and then tell you it only wrote that many bytes. In this case, we know the slice has length four, so this should all be fine. And so now we've basically eliminated that one copy that we would have had otherwise, which is pretty nice. We are still, of course, missing what serialize_into is going to do.
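The backpatching trick just described, as a self-contained sketch (the function name is made up, and the "payload" here stands in for the serialized type and request body): reserve four zero bytes for the length, write the rest, then overwrite the placeholder once the real length is known, so the payload is never copied a second time.

```rust
fn write_frame(xid: i32, payload: &[u8]) -> Vec<u8> {
    let mut outbox = Vec::new();
    let length_i = outbox.len(); // remember where the length field starts
    outbox.extend_from_slice(&[0, 0, 0, 0]); // dummy length, patched below
    outbox.extend_from_slice(&xid.to_be_bytes());
    outbox.extend_from_slice(payload); // type + payload would go here
    // everything after the length field counts toward the length
    let len = (outbox.len() - length_i - 4) as i32;
    outbox[length_i..length_i + 4].copy_from_slice(&len.to_be_bytes());
    outbox
}

fn main() {
    let frame = write_frame(1, b"payload");
    assert_eq!(&frame[..4], &[0, 0, 0, 11]); // 4 (xid) + 7 (payload)
    assert_eq!(&frame[4..8], &[0, 0, 0, 1]); // the xid
}
```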
I think we might as well write this straight away, because we know we're going to need it. Another question here is whether serialize_into should take self by value, like owned self. It could very well be, because you can imagine some of these holding lots of bytes. Actually, here's what we're going to do: let's just leave it like this for now. So now we only have one request type, and that is the connect request. This does not need to be called Connect, this does not need to be called Request, it does not need to be called ZooKeeper; keep it easy. So we're going to have a Request::Connect. And if we get a Request::Connect... it's a little annoying that we have to repeat these. See, this is one of the things that's a little bit annoying about Rust enums: when you have an enum variant that contains a struct's worth of fields, you can't, in the code below, assume that it is that variant and just start using .field. "If you hit the expect with 'should never fail', will it be something like OOM?" So if you run out of memory, there's actually no... you don't get an error from vectors if you run out of memory. This is one of the things that people have been a little bit sad about with Rust: it's not easy to detect whether you've run out of memory. If you have a vector and call .push, there's nothing in the type to indicate that push can fail; or rather, you don't have a way of checking whether push failed because you're out of memory, the thread will just panic. And this is why, here as well, we know that the write to the vector will never return an error. It could panic if you run out of memory. So if you hit OOM, what you'll see is a thread panic. Okay, so we actually need to destructure this here, which is a little bit sad, but I think we'll have to live with it. And so the question, of course, then is: how do we serialize a Request::Connect?
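The destructuring annoyance looks roughly like this; the field names here are assumptions loosely modeled on ZooKeeper's ConnectRequest, just to show the shape:

```rust
// Sketch of why we have to destructure: an enum value doesn't let you
// use .field directly, you have to match on the variant first.
// The fields shown are illustrative, not the full ConnectRequest.
enum Request {
    Connect { protocol_version: i32, timeout: i32 },
}

fn serialize_into(req: &Request, buf: &mut Vec<u8>) {
    match req {
        // Destructure the variant to get at its fields.
        Request::Connect { protocol_version, timeout } => {
            buf.extend_from_slice(&protocol_version.to_be_bytes());
            buf.extend_from_slice(&timeout.to_be_bytes());
        }
    }
}

fn main() {
    let req = Request::Connect { protocol_version: 0, timeout: 0 };
    let mut buf = Vec::new();
    serialize_into(&req, &mut buf);
    assert_eq!(buf.len(), 8); // two big-endian i32s
}
```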
Well, I mean, that's pretty straightforward. We have to write an i32, which is going to be the request type. So this is where we need all these opcodes. This is going to be... actually, it's not going to be pub. And this is going to be #[repr(i32)]: notification is 0, create is 1, and so on. I love these macros. What's the Vim shortcut for uppercasing a character? Can't actually remember. That's too bad. Oh no, why is it complaining at me? Thank you. So the idea is that this opcode enum, because we define it as having the representation i32, in theory should just interact nicely with what we get from the wire. But it's complaining because this is now just Request. Oh, I didn't finish writing this opcode. Is there not an opcode for connect? That seems weird. Where's our ConnectRequest? Right, I see. So connect does not send an opcode. It probably then also does not send an xid. Okay, so we do have to handle connection requests separately. I guess we'll deal with that later, then. try_ready! is not in scope; these are various unrelated errors. All right. So we now have this OpCode enum that contains all the different opcodes we need. What this sort of suggests to me is that when you create a new packetizer, it's going to have to immediately enqueue the connection request. Actually, I guess we could work around this with an "if not let", but apparently I don't get to do that. Because we're also going to have to know that we shouldn't set the true length. Fine, fine. It's a little ugly, but it's how it's going to have to be; writing the dummy length as well should only happen for regular requests. So the observation is that when you initially connect, you don't send an xid; you basically don't send a request header, because the first connection is not a regular request. So if you see here, this is in the rust-zookeeper crate.
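The opcode enum looks roughly like this; the variants shown are a subset of ZooKeeper's OpCode constants, so treat the list as illustrative rather than complete:

```rust
// Sketch of an opcode enum with an i32 representation, so a variant can
// be cast directly to the value we write onto the wire.
#[repr(i32)]
#[derive(Clone, Copy, Debug)]
enum OpCode {
    Notification = 0,
    Create = 1,
    Delete = 2,
    Exists = 3,
    GetData = 4,
    SetData = 5,
}

fn main() {
    // `as i32` gives the wire value, which we then emit big-endian.
    let wire = (OpCode::Create as i32).to_be_bytes();
    assert_eq!(wire, [0, 0, 0, 1]);
}
```

Going the other direction, from a wire i32 back to an OpCode, still needs an explicit match or a conversion macro, which is where the derive macros mentioned above come in.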
So they have a ConnectRequest, and it implements the WriteTo trait... yeah, the WriteTo trait. Oh, they rewind it. That's how they do it. Oh, it's not terribly important. Where's the thing that makes a request... it's a good question. Let's see. So where's the thing that does connect and ConnectRequest? Oh no, there is an opcode. Okay, great. It's using an opcode. So this can all go away, and this is going to write the opcode. Does that return anything for us? So we're going to have to keep track of the length. Initially, we write zero; here, we've written four bytes. So we can go back to this. The RawRequest here, I think, is just the way that the author (I don't know who he or she is) represents requests. If we look for RawRequest... no... see here, source... no. So they keep a RawRequest. Okay, and how is this used? That's unhelpful. Oh, in inflight and buffer. Okay. Where is the thing that sends things? Ready channel, ready timer, sender, self.tx.send. Okay, so they send raw requests on a channel, so it looks like they have some kind of thread that's running in the background that does the actual writing. Okay, so where's this buffer? So, from the source: what do they do when you create a new ZooKeeper? ZkThread? Okay, that's presumably the thread that they spin up. So connect creates a ZkIo, and it has a watch thread, and it's the watch that's the sender... and I think that's the thing that actually sends requests out onto the wire. And yeah, that looks like it. So what do they do when they get a new raw request? That doesn't seem very likely. So where is this try_write? So it really just... connect_request pushes the request onto the buffer, this takes the request off the buffer, this writes the request's data. If that doesn't succeed... right, but where does it add the wrapper? This try_write would be my guess. So it first writes... okay, that's unhelpful. Huh. So where do they write the length, though?
So they create a channel, tx and rx. They buffer up the raw request, which is the connection. And now the question is: where does that disappear to? So in the zookeeper source, when you create a new one, they create a ZkIo, they get an IO sender, and the ZkThread runs io.run(). So what is this sender on the IO? So that just sends IO things... requests. Okay, so when you make a request, they make a request header, then length-prefix a buffer of request header plus request. Oh, I wonder whether the data is actually the whole thing. That's why. Okay. So, this business. Yeah, so this is where they pull the same trick we do: allocating a vector, skipping forward, writing out the buffer, then going back and writing the length. But this suggests to me that... if you look at the code that we had here: yeah, I think this ConnectRequest does not actually have an xid. I think the buffer is just the length and the connect request, and then there's an opcode on the RawRequest, but that's not actually something that's sent, which is a little confusing. But all right. So I think what we're going to do here is treat connect separately. For a connect we do one thing, and for the rest we do all this business. Specifically, for the connect, all we really need to do... actually, we do have a length regardless. That's interesting. So here, for a connect we're just going to serialize the item; otherwise, we're going to write out the xid and then serialize the item. So this is really just saying zero; otherwise, n gets this added, like so. This has to write out n. So what we're doing here is: we always add the length. Then, if it's a connect, we don't add anything more; specifically, we don't add a request header. Otherwise, we do add the xid, which adds four bytes. And then we call serialize_into, and our serialize_into for connect is just not going to write the opcode.
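The two framings just described can be sketched as one function; the enqueue helper and the Option<(xid, opcode)> shape are made up for illustration, but the byte layouts match what was worked out above:

```rust
// Sketch of the two framings, with big-endian i32s:
//   connect:  [length][payload]                 (no request header)
//   others:   [length][xid][opcode][payload]
fn enqueue(outbox: &mut Vec<u8>, header: Option<(i32, i32)>, payload: &[u8]) {
    let lengthi = outbox.len();
    outbox.extend_from_slice(&[0, 0, 0, 0]); // dummy length for both cases
    if let Some((xid, opcode)) = header {
        // Only non-connect requests carry a request header.
        outbox.extend_from_slice(&xid.to_be_bytes());
        outbox.extend_from_slice(&opcode.to_be_bytes());
    }
    outbox.extend_from_slice(payload);
    let n = (outbox.len() - lengthi - 4) as i32;
    outbox[lengthi..lengthi + 4].copy_from_slice(&n.to_be_bytes());
}

fn main() {
    let mut connect = Vec::new();
    enqueue(&mut connect, None, b"cc");
    assert_eq!(&connect[..4], &[0, 0, 0, 2]); // just the payload

    let mut create = Vec::new();
    enqueue(&mut create, Some((1, 1)), b"cc");
    assert_eq!(&create[..4], &[0, 0, 0, 10]); // 4 xid + 4 opcode + 2 payload
}
```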
And so that's how we end up not having a request header at all; it will just write the actual connect request. Whereas if we had, say, Request::Create or something, that would indeed write out the opcode as well, and that would add another four bytes, making up the full request. Right, so the question then is: what do we have to write out for a connect request? Here we can actually reuse code; we don't want to reinvent the wheel. So in this case, notice how, because this person is writing directly into a writer, they have to try! everything, whereas for us, we don't actually have to do that; we know our writes will always succeed. In this case: protocol version, lastZxidSeen, timeout, session ID. The password is a little interesting, because the password, let's see, is a Vec<u8>. So the question is: how are Vec<u8>s serialized? Again, we turn back to what this person is doing. So write_to is the way that they write requests out, and the question, of course, becomes: how is write_to implemented for Vec<T>? It is: you write out the number of elements as an i32, and then you write each of the elements; for a u8, that's just writing the u8. So in our case, we'll write an i32, which is going to be password.len() as i32, and then we will simply write_all, so that's going to be using io::Write, to write out the entire password. And again, none of these should fail. One way we could do this in a sort of neat way is... it's a little ugly; I don't really want to have to do that. Actually, let's have this just return a Result with an io::Error. The reason I want to do that is just that it makes this method easier to write: I don't have to unwrap everywhere, I can just use question marks, and then unwrap where we call serialize_into. So in this case, we write out these. Good.
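The buffer encoding for the password can be sketched like this; returning an io::Result is the convenience discussed above, even though writes into a Vec cannot actually fail:

```rust
use std::io::{self, Write};

// Sketch of the length-prefixed "buffer" encoding described above:
// an i32 element count (big-endian) followed by the raw bytes.
fn write_buffer<W: Write>(w: &mut W, data: &[u8]) -> io::Result<()> {
    w.write_all(&(data.len() as i32).to_be_bytes())?;
    w.write_all(data)?;
    Ok(())
}

fn main() -> io::Result<()> {
    let mut outbox: Vec<u8> = Vec::new();
    let password = [0u8; 16]; // e.g. an all-zero session password
    write_buffer(&mut outbox, &password)?;
    assert_eq!(&outbox[..4], &[0, 0, 0, 16]);
    assert_eq!(outbox.len(), 20);
    Ok(())
}
```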
And then we return Ok. And this is going to be, then, for the protocol version... plus... Actually, we don't even have to return the usize at all, because we have this thing called len, so we can figure out what the length is directly. That means we don't need this, don't need this, don't need this, don't even need any of this. The n we have to write is self.outbox.len() - lengthi - 4: how long the outbox is now, minus how long it was before we wrote the length, minus the four bytes for the length itself. So that's how much we wrote out, and that is indeed the length of the payload. That way we don't have to track this manually either, which is nice. These are all going to be pub. Right, so that writes out the connect. And then, of course, we need some way to pull responses back out of this, and the way we're going to do that is to implement Stream. So we have Sink for sending requests, and we have Stream for getting the responses back: impl Stream for Packetizer<S> where S: AsyncRead. Now, Stream, sort of the inverse of Sink, has a very similar signature. There's an Item, and that's the stream item, so that's going to be a Response. Error is still going to be a failure::Error, although keep in mind this is where we may want to provide more introspection: if the server returns an error saying "this node already exists" or something, that's a message we might want to propagate to the user, because they might want to do something with that particular error. All right, let's see. So Stream, as opposed to Sink, does not have two separate methods; it just has poll. And poll should return an error if there's an error. It returns Async::NotReady if it does not yet have an item to yield. It returns Async::Ready(None) if the stream has completed and no more items will be yielded. And it returns...
...Async::Ready(Some(item)) if an item is to be yielded. And so here we have a very similar problem to the one we had for Sink: namely, that we might do a read and only get an incomplete item, in which case we can't yield anything, but we have to remember the bytes that we read, because if we do another read, we're not going to get them back. And so we're going to pull a fairly similar trick. This is why we have the inbox on Packetizer; it's basically going to do the same thing. We don't even need this instart yet... let's see, for now. So what poll is going to do is first check whether it has a full item, so it's going to parse out the length. "First minutes of the stream; do you mind explaining what ZooKeeper does?" Oh, that's a good point. I don't think I even did this. So ZooKeeper: you can think of it sort of as a key-value store. Its API is fairly straightforward: you can create keys, and keys are paths, so they're slash-separated and hierarchical, and you can set any binary string as the value for any key. But it provides atomicity guarantees, such as: if I set a value, I know that everyone else will see that value. It provides things like compare-and-swap, sort of: you can write a value only if its current value is a given value and has not changed. So this is one way you can use it to maintain a configuration or a cache. It's sort of like a very highly consistent key-value store. Now, in a sense, the API for ZooKeeper has not really become important yet. All we're trying to do at the very beginning is be able to connect to ZooKeeper at all and set up the internal infrastructure we're going to need in order to send and receive requests. But down the line, of course, the API for ZooKeeper will become important, when we start designing the API for the library. I hope that roughly makes sense. So ZooKeeper: you should just think of it as a very powerful key-value store.
It's not very fast, but that is because it is so highly consistent and gives you these very strong operations that you often can't get in other stores. You can also run it in a very fault-tolerant fashion: you can run it on many machines, and if one machine goes down, the system still operates and you still get the same guarantees. Sort of like Redis on steroids, although you should think of it as not trying to provide data storage. It is usually used more for configuration, where you have a large deployment of servers and they all need to agree on who the primary server is, for example, or on the current configuration, or on where different files are located. You use ZooKeeper for that kind of meta-information, to ensure that all the servers see the same information, and that even if there are faults, like some machine failing, you still have guarantees about what the servers see and which operations succeed and fail. Yes, it is indeed Apache ZooKeeper. That is accurate. Okay, so our poll: what it's basically going to do is, first, it sort of has to check whether it has enough data. We don't want to do a read if we don't need to, right? We don't want to do an unnecessary read system call. So imagine that you do a read system call and you get back two full responses: you're filling up your buffer and it's now filled with two responses. If the user does one read, they get the first one, which is also what filled the buffer. If they then read again, you already have a response in your buffer, so you don't really want to do a new system call, because that might block (although in this case it wouldn't), and system calls are fairly expensive; you want to use the one that's already in your buffer. Well, I'll write this code the straightforward way first, and then we can iterate on it. So we know that the length... here we're going to need ReadBytesExt as well. We know that the length is going to be an i32.
We can use a question mark here just because we know it's not going to fail, and it's shorter than the alternative. Yeah, so what we're going to do is look at our... oh, actually, sorry, the reason I'm pausing is that it might be that we don't even have enough bytes to read the length. So I think what we want to do is: if self.inbox.len() - self.instart is less than four, so if we don't even have the length, then what we want to do is try to read from the underlying socket, which is self.stream. What's the prelude? AsyncRead. Oh, there's something like this for write as well, I think. But what's a BufMut? I have no idea what that is. Ooh, but it has nice interfaces. Sorry, I got distracted. So what we want to do is read from the underlying stream so that we get enough bytes to continue. So we're going to do .read. Can I do the same for the write? I think I just want to do .write there. I think there's a recommendation to use poll_write instead of write... no, that's for when you implement it. We also sort of need to do a flush here; I almost forgot about that. So here we now want to do self.stream.poll_flush(). Even if we've written everything out to the wire, poll_complete shouldn't complete until the stream has also been flushed, which is what we're saying here. For Stream, what we want to do is call the read method, I guess poll_read, and here &mut self.inbox[self.instart..]. Now, some of you may already see what's wrong with this. I'll just write it out, and then we can talk through it; this code will not actually work, but I'll explain why in a second. So we're going to try to read some bytes, and then, if n is zero... to-do; I'll talk about why that's special later. Actually, yeah, later. So we're going to try to read out the length, and then we are going to see whether self.inbox.len() - self.instart - 4, so that's how many bytes we have available to parse...
...if that is less than the length, then we're also going to do this. I realize this code is currently pretty messy; we're going to rewrite it a bunch. This is just to get the flow ready. So the idea is that if you don't have enough bytes, you read more bytes. If we don't have enough bytes to read the length field, we do reads until we can read the length field. Then, if we don't have enough bytes to deserialize the object (because now we have the length), we do another read. And at that point, we know that we must have enough bytes available, and so now we deserialize. So we're going to do something like Response::parse. The xid, I guess, is going to come from self.inbox. So here, self.instart += 4, because we've now read the length field, and then self.instart to self.instart + 4... actually, we can just use read_i32 on this. So this is reading out the xid, which we know is there. There's also the opcode, which we know is there. So at this point, += 4 for the xid, += 4 for the opcode. Actually, let's not read out the opcode. And then, when we want to call parse, what we're going to do is give parse self.inbox from right where the opcode starts, through length - 4, to the end of that payload. Does that roughly make sense? The idea is that we read until we know we have enough to parse a response, and then we parse that response. So that's what this piece is doing: extracting just the bytes that correspond to that response and trying to parse a response from them. Now, of course, this code is currently pretty messy, and there are a bunch of extra cases we have to deal with. For example, if you try to read and you get zero bytes, what that means is that the other side has hung up, and you're not going to get any more items from it. There's also the problem that poll_read will not extend a vector.
So in this case, our inbox is going to be pretty useless, because poll_read is just going to try to read into the bytes that we already have, which is not at all what we want. So this code is currently pretty broken, but I hope you see roughly the approach we're going to take. The way we're going to rewrite this to be a little bit simpler is... that's a good question. let mut need; need is 4. So while what we have is less than need, we're going to have some magic here: magic to extend the inbox. Actually, let's just write the magic straight away. So the idea is that if we find that we need more bytes, we grow the inbox by a little and then call poll_read. If poll_read succeeded, we're all good, and we try to parse. If poll_read did not succeed, we shrink the inbox again by changing its length. So here we're going to do self.inbox... I don't actually want set_len. I want... where's the method I want? resize. That's the one I want. So the target length is going to be the current length plus... and then the question is how many bytes we want to read at a time. Now, one thing we could do is just say plus need. So need here is going to be sort of a counter of how many more bytes we need than what we currently have; or rather, need is going to be how many bytes we need in order to correctly parse an element. So we could reserve just enough space for that. I think instead what we're going to do, just to amortize this cost a little... actually no, let's do that; this will be fine. So our target length is that plus need, and we're going to do self.inbox.resize. Now, resize takes a new length (if it's shorter, it will truncate; if it's larger, it'll grow the vector if necessary) and a value to set the grown elements to. In our case, we want to resize to the target length.
And we're going to fill it with zeros. Then we're going to match on poll_read into inbox from read_from. Right. So the idea here is basically that we take all the bytes that we have so far and allocate a bunch of memory at the end, to make room for the additional bytes that we have to read, and then we try to read into that segment of memory. So resize creates that segment of memory, and then poll_read tries to read into it. If we get Ok, then we get an n. Let's handle Ok(0)... this is going to be special. Actually, nothing; let's do that, or match that with a question mark. Or Ok(Async::NotReady). So Async::NotReady means that nothing was read: the call would have blocked, and so nothing was read. Let's check that that's actually a guarantee that holds. Pretty sure it is. But n == 0 means end-of-file. Okay. So NotReady specifically means... what are the docs for poll_read? "If no data is available for reading"... that's really vague. It doesn't explain whether you get Ready or NotReady on zero. I'm going to assume they've done something sane. So NotReady means that the underlying socket has not been closed; it's just blocking. And in that case, what we want to do is resize self.inbox back to read_from. Right, because we basically set the length back to what it was. I guess we have to pass a zero fill value. But can I do truncate? Is that a thing? I think truncate is... so we basically don't undo the allocation that we did, if any; we just shorten the vector, so that when we go through this loop again, we'll return. And in this case, we want to return Async::NotReady, because we tried to read the additional bytes we needed and we did not get them. What that means is that we're not able to yield another element yet. If we poll_read and get Ready, so some number of bytes were read, then what we want to do is self.inbox.truncate(read_from + n).
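The grow-read-shrink dance can be sketched with plain std::io::Read; a Cursor stands in for the socket, and the read_more helper is made up for illustration:

```rust
use std::io::Read;

// Sketch of the buffer trick: make room at the end of the inbox with
// resize, read into that slack, then truncate to the bytes actually
// received. A real Packetizer does this with poll_read instead.
fn read_more<R: Read>(inbox: &mut Vec<u8>, need: usize, stream: &mut R) -> usize {
    let read_from = inbox.len();
    inbox.resize(read_from + need, 0); // zero-filled slack to read into
    let n = stream.read(&mut inbox[read_from..]).unwrap();
    inbox.truncate(read_from + n); // keep only what we actually got
    n
}

fn main() {
    // The "socket" has a 4-byte length frame followed by 2 payload bytes.
    let mut stream = std::io::Cursor::new(vec![0, 0, 0, 2, b'h', b'i']);
    let mut inbox = Vec::new();
    let n = read_more(&mut inbox, 4, &mut stream);
    assert_eq!(n, 4);
    assert_eq!(inbox, [0, 0, 0, 2]);
}
```

Truncating rather than reallocating means a short read costs us nothing but a length update; the spare capacity sticks around for the next read.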
So this is saying that we got n more bytes, so now the real length of the inbox is where we started to read plus the n bytes that we read. And then we'll just let the while loop go again, which will tell us whether or not we get to continue. Of course, need might change here, because we might now be able to read out the length. Remember that in order to deserialize these things, we first need to read the first four bytes, which tell us how long the rest of the payload is, and then we need to read the rest of the payload. That's why need is initially set to four: initially, we just need the length. Actually, let's just do this separately, I guess. Yeah, it gets a little bit unfortunate. I don't really want to replicate this code. The silly way to do it would be: first the need is four; then the need is however long the length says, so four plus that; and then we do the whole thing again. But see how this duplicates a lot of code that we don't really want. And so instead, need is going to be: if self.inbox.len() - self.instart is at least four, then we know the length, so we know how much we need; we read out the length, and need is going to be length + 4. Otherwise, it's going to be four. So if we already have the length, then how much we need to read is the length prefix plus the payload size; otherwise, we at least need to read the length. And now, in the case where we do get some number of bytes, we're going to have to check: if self.inbox.len() - self.instart (I really want this to go away; where's Packetizer...) is now at least four, and need is still equal to four...
...then we parse out the length, and we set need += length. So what this is saying is: we're going to keep reading until we get the length, and when we get the length, we increase how many bytes we need, so that we keep reading. And so this while loop is going to continue until we do, in fact, have a full response available to us, and at that point, we know that need is set to length + 4. Here, if n is equal to zero, we'd like to deal with that separately; that's the case where the incoming connection has been closed. And now, what we want to do is skip the length field when we start to decode, then read the xid, then skip the xid, and then parse what's left, which is basically going to be where we started, plus how much we read, minus the length, minus the xid. Because need counts from the very beginning of the buffer, from where instart was, but we've already consumed the length and the xid, so we don't want to parse those again. And that's what we're going to end up parsing; that's going to be the response. In theory, this should have a question mark. Then what we need to do is self.instart += need - 4 - 4, and then we will return Async::Ready(Some(...)). Right, so at this point, we've successfully read out an element, and now we can return that element. Now, there's a little bit of trickiness here, in that this is going to end up doing some copying, but let's just not deal with that right now. We also want: if self.instart == self.inbox.len(), then self.inbox.clear() and self.instart = 0. This is just so that we don't end up accumulating more and more memory. Great. So the only way in which this stream can stop yielding elements is if the connection to the server went away, and in that case, what we want to return is Async::Ready(None).
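The whole framing loop can be condensed into a small sketch; poll_frame is a made-up stand-in for the real poll, the [length][xid][payload] layout is as worked out above, and Option<...> plays the role of Async::Ready versus NotReady:

```rust
use std::convert::TryInto;

// Sketch of the framing logic: `need` is 4 (just the length field) until
// the length is known, then 4 + length; only when that many bytes are
// buffered do we slice out one [length][xid][payload] frame.
fn poll_frame(inbox: &mut Vec<u8>, instart: &mut usize) -> Option<(i32, Vec<u8>)> {
    let have = inbox.len() - *instart;
    if have < 4 {
        return None; // not even the length yet
    }
    let len = i32::from_be_bytes(inbox[*instart..*instart + 4].try_into().unwrap()) as usize;
    let need = 4 + len;
    if have < need {
        return None; // length known, payload still incomplete
    }
    let xid = i32::from_be_bytes(inbox[*instart + 4..*instart + 8].try_into().unwrap());
    let payload = inbox[*instart + 8..*instart + need].to_vec();
    *instart += need;
    if *instart == inbox.len() {
        inbox.clear(); // everything consumed: reclaim the buffer
        *instart = 0;
    }
    Some((xid, payload))
}

fn main() {
    let mut inbox = vec![0, 0, 0, 6, 0, 0, 0, 1, b'h', b'i'];
    let mut instart = 0;
    let (xid, payload) = poll_frame(&mut inbox, &mut instart).unwrap();
    assert_eq!((xid, payload), (1, b"hi".to_vec()));
    assert!(inbox.is_empty()); // buffer was fully consumed and reset
}
```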
Now, in this case, we might want to add some debug information: if there were bytes left in the buffer, then it was an unexpected shutdown. So in fact, one way we could do this is: if there are bytes left in the buffer, then we want to return an error. The question, of course, is what that error is going to be; in fact, we're just going to use bail!. bail! is a macro from the failure crate that just produces an error, which is what we want in this case: bail with "connection closed with bytes left". All right, so now we have a way to distinguish between the case where the connection went away when there were no more elements, that is, at an element boundary, and the case where the server went away in the middle of sending a response. Treating one as an error and one as the stream ending is probably what we want. All right, so we now have a stream that goes both ways, and so what we want the handshake to do is: it's going to make a Request::Connect. We now know that that has a bunch of fields, like so. I don't know what those values are going to be yet, but let's set them all to zero for the time being; this one is going to be a Vec. This is going to be a Packetizer::new on top of the stream, and then what we're going to do is... oh, that's a good question: if you have a sink... where are my Sink docs? So remember, the Packetizer implements both Stream and Sink.
Sink is for sending things to the server; Stream is for receiving things. And so in this case, when we do a handshake with the server, we've now sort of popped up a level, right? We used to be looking at the protocol implementation (oh, actually, come to think of it, we haven't written Response::parse either; we used to be looking at the protocol level), and now we're stepping one level up, to the ZooKeeper library trying to connect. So what it's going to do is construct a connect request, send that on the packetizer's Sink, and then read back the response from the server. So for the Sink, what we're going to do is... where's the list of methods I want... send. Yeah, this is going to send the request. And then, what does send give back? send is a future whose item is the Sink itself. So in futures land, usually, if you try to send on some channel, the channel is consumed by the send until the future resolves, which is sort of how you want things to work, right? Imagine you have a TCP stream: if you send some bytes on it and you get a future back for when those bytes have been written, you don't want to be able to keep using the TCP stream in the meantime, because then you would have two writers to the same stream. So instead, the way AsyncWrite handles this, or at least the way futures in general handle this (not actually for TCP sockets, but in general), is that when you try to send something, the send future consumes the sink you're sending into, and when the send completes, you get it back. So that's why this and_then is given the stream back. And now, of course, now that we've sent something to the server, we want to read back the response, and the way we're going to do that, because we know the ZooKeeper connection, the packetizer, also implements Stream, is: if you look, there's a lot of things on Stream, but basically the thing we want is into_future.
So, into_future: if you call that on a stream... normally a stream is sort of like an iterator, it just keeps yielding more and more elements; in this case, we only care about the next one. So we're going to call into_future, which gives a future that resolves to the next item and the stream, whenever an item is available. If we look at into_future, that implements Future for StreamFuture, and the item we get back is an Option of the item, plus the stream. So the response here is going to be an Option<Response>, and the stream is going to be the Packetizer. And so if response is None, that's bad; otherwise, this is going to return... actually, this can just be a map. That's going to give back a ZooKeeper, and that ZooKeeper is going to hold the packetizer connection. So our ZooKeeper, we now know, is going to be generic over some S, and it's going to internally hold a connection, which is a Packetizer over that stream S. It's probably going to have some other things too; I don't know quite what yet, but at the very least, this is the basic way we're going to set this up. And so this is now a future that will eventually resolve into a ZooKeeper connection instance that holds a packetizer internally. So now, of course, the last thing, in theory, that we need for this all to work out is to be able to parse the response. Right, so this will now indeed send a connect request, and the question is: what do we get back? So we're going to have a Response, and we're going to have a response type similar to the request type, and we're going to have a parse method on that. So we can probably split proto into request and response. Let's open request.rs; request is going to have this, and also this. I think I want the Sink and Stream implementations to just stay in proto, probably, because the other things are going to get pretty large, whereas the Sink and Stream implementations probably won't change that much
in size. So `Response` is going to be pretty similar to `Request` in that it will just be an enum, and it'll be whatever we get back from connect — so this is just going to be a `Connect`. And similar to what we had for requests, we're going to need some kind of `parse` method. In our case we could implement parse — there's a parse trait in the standard library — but I don't think I actually want to do that; I think I just want to do this, for now at least. `super` — the reason this has to be `pub(super)` is because we want to be able to call this serializing method from our proto mod; if it were just `fn`, it would only be available from this file, which is not what we want. And what do we call this? We call it `parse`. So `parse` takes a buffer — some `u8` buffer — and at least in theory gives back a `Response`. It will probably actually be a `Result<Response, failure::Error>`, because it could be that what we get from the server is malformed in some way, in which case we want a way to report that. Another question is: how do we parse responses? Our string reader, buffer reader... it's a good question. Let's see what this does. So `ConnectRequest` just creates one of these... try-read-buf — what does that do? Well, that doesn't really help much. Let's look at the Lua implementation. `dissect_client`, as far as I understand, is how it dissects what the client sends; notice here it recognizes the length being the first bytes, the xid being the next four, and the opcode being the next four. `dissect_server` is probably the next method — also a length, also an xid. Okay, so it looks like the response to a connect — or the response to any request — is actually dependent on the server's current state. This makes me think that the server does not allow you to multiplex requests and responses on the same connection, because it seems
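The framing the dissector suggests can be sketched directly. This is an assumed layout — a 4-byte big-endian length prefix, a 4-byte xid, then a 4-byte opcode — and the function name and tuple shape are ours, not ZooKeeper's:

```rust
// Minimal sketch of the client-packet header the Lua dissector describes:
// length (4 bytes), xid (4 bytes), opcode (4 bytes), all big-endian.
// Returns None if we don't have a full header buffered yet.
fn parse_request_header(buf: &[u8]) -> Option<(i32, i32, i32)> {
    if buf.len() < 12 {
        return None; // not enough bytes for a full header
    }
    let len = i32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]);
    let xid = i32::from_be_bytes([buf[4], buf[5], buf[6], buf[7]]);
    let opcode = i32::from_be_bytes([buf[8], buf[9], buf[10], buf[11]]);
    Some((len, xid, opcode))
}
```

The server-side framing looks the same minus the opcode, which is exactly why the response type has to be inferred some other way.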
like there's a state of what the client last asked for. This xid could of course mean the different xids are multiplexed. Okay, so what this means is, in theory, if we know that we're expecting a connect response, we should just give that — although this is a little weird, because it means that our stream doesn't have a way to know what it should return. In particular, I don't think it reads an xid back... see, that's a good question. So the response seems to always have a length, it does always have an xid, and then for the other kinds of responses there are a bunch of other fields that are always included. Hmm. So here's the downside of us putting connect and the other responses in the same enum: the connect response actually has very different fields, whereas all the other responses share at the very least these fields. But maybe — no, I think I want it this way anyway. It does mean that there's actually no xid that comes back; it's just length and then payload, and the way we can tell what the response is, is by which request was sent. So I think this means the response type is dictated by the request type, which suggests to me that the packetizer should keep track of the last sent opcode — what operation are we waiting for a response for. That's going to be an `Option<OpCode>`, I guess — no, it's going to be an `OpCode`... `sent` is going to be — no, it's going to be an `Option`, because remember that when you send a connection request, a connection request does not actually have an opcode. And so this means that when you `start_send`, what we're going to do is `self.last_sent`... well, this is also a little weird. I think `last_sent` actually has to be a `VecDeque`, because the way we've set up this interface, you can actually have multiple requests in flight, right? You would push a bunch of requests, and then the stream would resolve each one in turn. So I think this should be a `VecDeque`. It's a little sad for
this to be a `VecDeque`. So this basically means that every time we send another request, we push its opcode, and every time we receive a response, we pop from the front of the queue which request that response must be in response to. So `last_sent` is initially going to be `VecDeque::new()`, then `last_sent.push_back(item.opcode())`. So on `Request` we're also going to have a `pub(super)` opcode method — actually no, right here I'm just going to match on `self`, and if it is a `Request::Connect`, it's going to be an opcode of... Is it guaranteed to be responded to in order? I don't actually know — this is one of the problems with the protocol being so poorly documented. I suspect that's true. Yeah, you both ask the same question. Remember, one thing we could do is enforce that you can only send or receive — this is totally something we could set up in the API — such that in fact it's not a `Stream` or a `Sink`, it's just a future where you send a request and you get a response. The reason I sort of wanted it to be a `Stream` is so that you could have multiple requests pending. But you're making a good observation: if it is in fact a send-a-request, get-a-response kind of API, then we shouldn't really use a `Sink` or `Stream`; we should just make the whole thing be a future that serializes the request, then reads the response, then returns. That's a good question. I wonder... this is one of the problems with the protocol being so poorly documented. Leader activation doesn't really help us; that definitely doesn't help us. The real question would be this xid: in theory we could have an xid per request or something, which might let us have multiple outstanding requests. The reason I suspect it supports this multiplexing is because you have the set-watches thing. If we look at the ZooKeeper docs here, one thing you can do with ZooKeeper is set up a watch — a watcher. Where is
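The in-order bookkeeping being described can be sketched with a tiny queue. This is a sketch of the idea only — names are ours, and it assumes the server really does answer in send order:

```rust
use std::collections::VecDeque;

// "last_sent" bookkeeping: push an opcode when a request goes out,
// pop from the front when a response arrives to learn what kind of
// response it must be. Only valid if responses come back in order.
struct LastSent {
    queue: VecDeque<i32>, // opcodes of requests still awaiting a response
}

impl LastSent {
    fn new() -> Self {
        LastSent { queue: VecDeque::new() }
    }
    fn sent(&mut self, opcode: i32) {
        self.queue.push_back(opcode);
    }
    fn response_arrived(&mut self) -> Option<i32> {
        self.queue.pop_front()
    }
}
```

The weakness is exactly what gets raised next in the stream: this only works if ordering is guaranteed, which the docs never state.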
this watch... Yeah, you can basically set up a thing that should be notified whenever a given path changes — basically, when a key changes. And this suggests to me that the server must be able to send us things even when we didn't ask for something. Let's look at what that gives us. If we go back to the ZooKeeper class — give me src/zookeeper — where's the thing that... okay, so our request is sent on that channel; it sends the request and then it just receives a response. Although, you know, see here: the author of the old library sets up a new transmit and receive channel for every request, and then the response comes back on that channel. So this does suggest to me that you could have multiple things in flight. One way we could make this concrete, without guaranteeing anything about the order — wait with the id until the response with the same id gets back, yeah — one thing we could do is just keep a mapping. Although it doesn't look like there are request IDs in the setup, right? Unless these xids are that, but that's not really well defined. It's sort of unhelpful, because this xid — I guess that could be transaction id. If it's transaction id, that suggests you could have multiple xids in flight. Actually, let's see how xid is incremented... xid — it's just `request.xid = self.xid` — ah, okay, so it does look like xid is a per-transaction id. This means that every request does have its own xid. (The current project is writing an asynchronous client library for Apache ZooKeeper. The AWS spot instances thing is still a library that's out there; I haven't done any work on it for a little while. There are a bunch of videos on YouTube of recordings of past streams around it, so you might want to check those out if that was a project you thought was interesting.) Okay, so this does suggest that every transaction has
its own id, which means that — ah, here's what we want to do. Okay, I got it. I think we still want the packetizer, because the packetizer is the way in which you send and receive things through the server, right? Imagine that it doesn't really care what the xid is, nor what the opcode is. Ah, here's what we want to do. Here's what I'm thinking: the stream is going to take requests... I'm thinking we sort of want a demuxer here. At the lowest level, what you want to do is send a request to the server, and you get byte chunks back with an xid, and then it's up to us to look at which future we should resolve now that we got those bytes back. So the way this is going to be set up: when you send a request, what you get back is a future for that request. Internally, what then happens is that the request is sent on the `Sink` — that's the packetizer's `Sink` — and then at some point the packetizer will get a response for that future. Yeah, that's totally the way this should be. Okay, the question is whether xids have to be strictly monotonic. Hmm, that's a good question. Okay, so here's the way that would work: this would be a `HashMap`, I guess, from `i32` to a `futures::unsync::oneshot::Sender` — an `OpCode` and a `Sender<Response>`. The idea here is that the whole packetizer is really a future where you can queue up requests, and then you just have to keep polling it; as you poll it, the appropriate other futures will also be woken up. Yeah, I think that's the way we want to do this. All right, that's going to change the design a little bit, but I think it's going to be for the better. So we're no longer going to implement `Sink` or, sorry, or `Stream`. What we're going to do is implement `Future` for `Packetizer<S>` where `S` is `AsyncRead` and `AsyncWrite`. The item is going to be nothing; the error is going to be a failure
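The demuxing map just described can be sketched with the standard library; here a `std::sync::mpsc` channel stands in for the `futures::unsync::oneshot` sender, and `String` stands in for the real `Response` type — both are assumptions for illustration:

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Map from xid to (opcode of the pending request, channel to deliver
// its response on). std mpsc is a stand-in for futures oneshot here.
struct Demux {
    reply: HashMap<i32, (i32, mpsc::Sender<String>)>,
}

impl Demux {
    // Enqueue a request: remember its opcode under its xid, hand back the
    // receiving end so the caller can wait for the response.
    fn enqueue(&mut self, xid: i32, opcode: i32) -> mpsc::Receiver<String> {
        let (tx, rx) = mpsc::channel();
        self.reply.insert(xid, (opcode, tx));
        rx
    }
    // Called when bytes for `xid` come back from the server.
    fn deliver(&mut self, xid: i32, r: String) -> bool {
        match self.reply.remove(&xid) {
            Some((_opcode, tx)) => tx.send(r).is_ok(),
            None => false, // unknown xid: real code would report an error
        }
    }
}
```

The remembered opcode is what later tells `parse` which response variant to decode, since the wire bytes alone don't say.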
error. And then, if we look at what's the future... The idea is that `poll` is going to both send things that are outstanding and read back things that are results. In addition we'll have a — I guess we could call it `start_send` — a separate method on packetizer for queuing up additional items. In this case an item will be a `Request` — actually, it'll be a `Request` and an unsync oneshot, right? So the idea here is that if you have a packetizer, you can enqueue a request, and the thing you enqueue is both a request and where to send the response. And then packetizer is also going to implement `Future` in the sense that you can keep polling it to try to — oh, actually, better yet: instead of this, this gives you back a future, where the item is a `Response` and the error is a `failure::Error`. Beautiful. And then this is going to make a channel that the response is going to be sent on. Whether this is sync or unsync I'm not entirely sure yet, but we'll stick with one. Right, so `self.reply.insert` — that's going to be xid, some number — so the xid, and then we're going to register `item.opcode()` and the transmit channel, and then we're going to return the rx channel. Does this make sense? On a packetizer, you have a way of enqueuing a request, and that gives you back a future that will resolve when that request finishes. (`unimplemented!` is a macro from the standard library; it's really handy — it basically panics if you ever end up calling it. You can also do `unimplemented!("have not done this yet")` or whatever, but usually plain `unimplemented!` is sufficient.) Right, so the reason we want packetizer to implement `Future` is because you need to keep polling it for it to keep doing reads and writes; and at some point, if the connection breaks down, that's also when that future would resolve. So I think what we're going to do here is that every time `poll` is called, we
want to both try to write and to read: we want to try to write out any buffered requests, and we want to try to read out more responses. Now, in order to keep ourselves a bit more sane, we're actually going to split these into two methods: a `poll_write` and a `poll_read`, and neither of them is actually going to yield anything. So notice that these are basically unchanged from the way they looked — wait, that means the response has to carry an xid, right? Or am I completely blind? Because otherwise, how would we know which response to pair it with? Okay, yeah, there is an xid in the response, so I was wrong down here — ah, give me back my xid. Yeah. So the idea here is that at this point — to-do — here we're going to find the waiting request, and this is going to be `(opcode, tx) = self.reply.remove(&xid)`. And then I guess this is probably going to be — what, response doesn't have an opcode? So yeah, you're right, responses don't have opcodes, but I was worried they don't have xids either — they do; why, even connect responses have an xid, so that's good. Find the waiting request future — yeah, so here we probably return an error if the xid was unknown. And then what we're going to do is parse this, give it the opcode that the request was made with, and then we're going to do a `tx.send` and send that response, and then just do `Async::Ready` — I guess I don't really want it to be ready... It's a good question. I think we actually want this to just be a giant loop. We want an outer loop as well, because we need to make sure we always keep polling the underlying streams; otherwise we won't get notified when new data is available. So while outstanding is not equal to zero, this, and then that. And here, this is just going to be a loop. The reason this has to be a loop is because — imagine that
we read one request and then just returned. The problem is there might still be a request sitting in our buffers, and the future would never resolve — we'd never end up getting to that future, because we wouldn't know that we need to poll again; the underlying stream might not have any more data for us. Imagine the server sends us two complete responses and then closes the channel. We read and return the first one — how does the caller know that they should poll us again? They don't, because the stream isn't ready; the stream doesn't have any data on it. And so this is why, in general, when you call `poll`, it should do as much work as it can without blocking, and that is basically what we're now telling it to do. Notice that this is not an infinite loop, because if we do a try-read here — we do a `poll_read`, and if that returns `NotReady`, we return immediately from the loop — which is indeed the behavior you want. "Are opcodes ZooKeeper's way to call some remote functions on it?" Yeah, basically — opcodes are the type of a request, that's the way to look at it. All right, why is this — I think this will actually never return. I think we can do this, but it might not work. Line 157 — did I do something silly? I must have... oh, what did I miss? I have a bracket problem here somewhere. There. Yeah, so the idea now is that we keep track of all the xids we've sent, what opcode the request with that xid was for, and where to send the response when we've parsed it. And now this is going to be a `HashMap::new()` — should just make this a `Default`... Why is it `request::OpCode`? This should probably just be `use request::Request`, and it's because we're going to want to use those outside of here. Cannot find `BigEndian` — that is true; that is because it's spelled `BigEndian`. What else? I'm probably going to need a bunch of these `use`s in request and in response. You're probably not allowed to do that either, am I? Oh, and `parse` now takes an opcode, which is going to
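The "do as much work as you can per poll" argument can be made concrete without any futures machinery. This sketch assumes the same 4-byte big-endian length-prefix framing as before and decodes *every* complete frame already buffered, stopping only when it would block:

```rust
// Decode every complete length-prefixed frame sitting in `buf`, removing
// the consumed bytes. If we stopped after one frame, the remaining frames
// would never wake us again, because the underlying stream has no new
// data to signal — exactly the bug described above.
fn drain_frames(buf: &mut Vec<u8>) -> Vec<Vec<u8>> {
    let mut frames = Vec::new();
    loop {
        if buf.len() < 4 {
            break; // incomplete length prefix: would block, stop for now
        }
        let len = i32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
        if buf.len() < 4 + len {
            break; // frame body not fully buffered yet
        }
        frames.push(buf[4..4 + len].to_vec());
        buf.drain(..4 + len);
    }
    frames
}
```

In the real `poll_read`, "would block" is `Async::NotReady` from the socket rather than an empty buffer, but the loop shape is the same.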
be a `super::request::OpCode`. All right, so the idea now is that on the packetizer you get to enqueue things, they give you futures back, and as long as the packetizer keeps being run, we're all good. Of course, we're going to have to create new xids, which I think is just going to be as simple as: `xid` is a `usize`, and `xid` is going to be `self.xid`, and then `self.xid += 1`. It's probably going to complain about this... we'll never actually get to that type. All right, so `poll` — what `poll` is going to do is call `self.poll_read`, then `self.poll_write`. It has to call both, because — imagine that both the read end and the write end of a socket are ready: if we just call `poll_read`, then we wouldn't be woken up to write again. So we need to make sure that we do both, and I think calling both here is just fine. If there is an error, we want to return it; if either of these returns `NotReady`, that's fine too. So I think we can just do this: match on `(r, w)`, and if they are both `Async::Ready`, then we also return `Async::Ready` — although that should basically never be the case. Ah, that is the case: yeah, so write would return `Ready` if it's written out everything and flushed it, and read would return `Ready` if the incoming stream has been closed. So if both of them are ready, then we resolve the packetizer's future, because there are no more responses coming in and the incoming socket has been closed, at which point there's no point in trying to send anything more. If the incoming socket has been closed and the outgoing socket is not, who knows what we even do in that case. I think in all other cases we do... okay, `Async::NotReady`. You could imagine that if the incoming reply channel was closed but the outgoing channel was not, then we return an error, because you could never get a response to those
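The xid-allocation step just described is tiny but worth pinning down, since it's a classic fetch-then-increment. A sketch, with the counter living on the packetizer as the stream settles on (field and method names are ours):

```rust
// Hand out fresh xids: return the current value, then bump the counter,
// so each enqueued request gets a distinct id to demux its response by.
struct XidCounter {
    xid: i32,
}

impl XidCounter {
    fn next_xid(&mut self) -> i32 {
        let xid = self.xid;
        self.xid += 1;
        xid
    }
}
```

Whether xids must be strictly monotonic is left open in the stream; this at least guarantees uniqueness until wraparound.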
futures. But I think we will just not do — actually, we do have to do that, so in this case we really want to notice that: "failed — outstanding requests, but response channel closed." What are we missing? Line 95 — yeah, this is going to be a `failure::Error`, this is going to be a failure. This is lib — oh, line 16 — right, this is now over a stream. `connect` is going to give you a ZooKeeper over a `tokio::net::TcpStream`. Same — ah no, this will take any `S`. That's so many errors. Oh, that's so sad — I can't read bytes from... I guess I need a `Cursor`. Found `io::Error`, expected `failure::Error` — I mean, those we can deal with. I guess the next thing we want to do at the low level is parse responses. So here, I think what we basically want to do is match on the opcode and then parse appropriately. In this case we only really have the connect response — and the connect response, what does the connect response have? `ConnectResponse`... so a connect response is one of these. I don't want any of these to be `pub`, at least not at the moment. What does that even mean, "is handled as i32"? Okay, let's just make it an `i32` then. Yep. And so we're going to match on the opcode. Now, one thing that's a little awkward here is there's actually not an opcode for connecting, as far as I can tell. So auth is actually used for auth'ing — I saw this somewhere — yeah, this is the add-auth thing, and that uses the opcode `Auth`. So I don't know if there's actually an opcode for connecting. Let's see whether there's one here... no. Huh, that's a good question. So I wonder whether what we want to do is actually invent our own opcode, because it's not actually going to get sent, right? So we have here `OpCode` — I think we want this to just return `Connect`, and I want `Connect` to be, like, minus 100. So we're going to match on the opcode — if it is, use that. So if we are deserializing a response to an opcode `Connect`, then we know that what we should do is construct one of the connect responses. And a
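The "invent our own opcode" trick is easy to sketch. The `-100` comes from the stream; the `Auth` value and the connect-response fields here are placeholders, not the real constants:

```rust
// The wire protocol has no opcode for connecting, so we reserve a negative
// value that is never serialized — it's only used internally to pick the
// right parser for a buffered response.
#[derive(Debug, PartialEq)]
enum OpCode {
    Connect = -100, // our invention; never goes on the wire
    Auth = 100,     // placeholder value for illustration
}

#[derive(Debug, PartialEq)]
enum Response {
    Connect { protocol_version: i32, timeout: i32 }, // fields assumed
    // ... other response kinds elided
}

fn parse(opcode: &OpCode, buf: &[u8]) -> Option<Response> {
    match opcode {
        OpCode::Connect => {
            if buf.len() < 8 {
                return None; // malformed / truncated
            }
            Some(Response::Connect {
                protocol_version: i32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]),
                timeout: i32::from_be_bytes([buf[4], buf[5], buf[6], buf[7]]),
            })
        }
        _ => None, // other opcodes not sketched here
    }
}
```

Since the opcode comes from the `reply` map rather than the wire, the invented value can never collide with a real server-sent one.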
connect response is made like this. Nice. Now, in order for us to use the read methods — it turns out that you can't actually do a read straight from a buffer, and that makes a lot of sense: imagine that we do a `read_i32` from the reader, or from the buffer in this case, and then we do another `read_i32` — unless we have some other state to keep track of this, the second read would just read from the beginning of the buffer again. And that's why a plain buffer does not implement `Read`. However, if you look at `io::Read`, it's implemented for a bunch of different things; in particular, `Read` is implemented for a reference to `[u8]`. But we have to look a little deeper: `read` requires `&mut self`, so it requires a mutable reference to an immutable slice. In particular, we can do `let reader = &mut reader` — I think that will work; could be wrong. What this will do is: the reader is going to change the slice so that it points later and later as you keep reading. Oh, that's right — yeah, so for vectors you basically read out a length field first. Hmm, so how do we want to encapsulate that nicely? One way we could do this is to just add a trait — read-buffer — and this is — I mean, we can probably just copy the method from here; it's basically the same thing. I guess we'll probably want the string reader as well. But notice that all this is really doing is reading out a length, and then it reads that many things from the underlying stream. And so in our case this should be over any — "all this needed the two type arguments"? What do you mean, expected two type arguments? Oh, `Result` — right. It's going to be a failure; we're going to want to go through and tidy up our errors a little. Yeah, so notice that at proto line 149 we have sort of the same issue: it's saying I can't read an `i32` from what's just a buffer, right? So really, what we do is — here — oh, it's going to be a little bit awkward. Here we're going to have `let buf` be a reference into this that
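The length-then-bytes helper being copied from the Jute reader can be sketched as a small extension trait over any `Read`. The trait and method names here are ours, not ZooKeeper's:

```rust
use std::io::{self, Read};

// "Read a length, then that many bytes": the vector/string framing the
// protocol uses. Works over any Read, including `&[u8]`.
trait ReadBuffer: Read {
    fn read_buffer(&mut self) -> io::Result<Vec<u8>> {
        let mut len = [0u8; 4];
        self.read_exact(&mut len)?;
        let len = i32::from_be_bytes(len) as usize;
        let mut buf = vec![0u8; len];
        self.read_exact(&mut buf)?;
        Ok(buf)
    }
}

// Blanket impl: every Read gets the helper for free.
impl<R: Read> ReadBuffer for R {}
```

A real implementation would also want a bounds check on `len` before allocating, so a malformed length can't ask for gigabytes.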
lasts from, er, here to `instart + need`, and `buf` is mutable. And so this means that we can now read from `buf`, and this can just be `+= need` down here. So the length we already read out up here — although we can technically read it again down here, I suppose. Read the xid, read that, and then this is going to be the rest of `buf`. So this is kind of neat: because the implementation of `Read` for a mutable reference to a slice is basically that, after it's read, it changes the slice to point only to the things it hasn't read yet, we don't have to keep track of how far we've read — which actually makes for slightly nicer code. Yeah, so this doesn't actually work, because — I think we need to do, like — which I don't know whether it'll let us do... Oh yeah, line 39: `need += length` — expected `usize`, found `i32`. "`poll_read` is not found for `S`" at line 122 — all right, that's because we've written `Packetizer` to not actually require anything, whereas `poll_read` and `poll_write` are only available where `S` is `AsyncRead` and `AsyncWrite`. We could add these as `where` clauses — actually, this one is `where S: AsyncWrite`, and this one is `where S: AsyncRead`. Line 148, expected `Async`, found `Result` — right, I already have the question mark, so I don't need that. Line 119, this is the same thing where I need to give it a mutable reference, so that it can try to advance the pointer. Line 110 — actually, I guess here I could give context; let's not do that for now. Line 100, `try_ready!` — expected `usize`, found `Async`. Oh, that's a little weird, because it shouldn't care. So why does it care? Expected `usize`, found `Async`... so `write` will return the number of bytes written, whereas this function is supposed to return async unit. So the question is why this doesn't — I mean, in theory you just get around this with `Async::Ready(n)` and return `Async::NotReady` — that's basically what `try_ready!` desugars to. So the question is why that wouldn't work. Expected
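The slice-advancing behaviour of `Read` for `&[u8]` that makes this work is worth a two-line demonstration (the function is ours, for illustration):

```rust
use std::io::Read;

// `Read` is implemented for `&[u8]`, and reading through a mutable
// reference to the slice re-points the slice at the bytes not yet
// consumed — so successive reads advance with no extra cursor state.
fn read_two_i32s(mut buf: &[u8]) -> (i32, i32) {
    let mut word = [0u8; 4];
    buf.read_exact(&mut word).unwrap();
    let first = i32::from_be_bytes(word);
    buf.read_exact(&mut word).unwrap(); // starts after the first 4 bytes
    let second = i32::from_be_bytes(word);
    (first, second)
}
```

This is exactly why the second `read_i32` in the transcript reads the xid rather than re-reading the length.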
`usize` — oh, do I need to use `poll_write` instead of `write`? It could totally be `AsyncWrite`... Yeah, I do need to use `poll_write`; so that's why. `write` is just the — sorry, the write method that's in the `Write` trait, whereas I actually want the poll version so that I can use `try_ready!`. If you look at `AsyncWrite`, notice how it basically says that the `write` method from the `Write` trait follows the following contract, whereas `poll_write` is the one that encapsulates this contract in a poll-based API — which lets me use `try_ready!`. Here, expected `i32`, found `usize` on line 89 — yes, that has to be `as i32`. No `length` found — line 78, xid — I want that to be an `i32`, not a `usize`. All right — line 45 should be `reply`; line 19 — oh, it seems like we're getting closer. So `Response`, for some reason — "ambiguous associated..." — oh, this is a `Connect`. Okay, on line 42 — wait, how is this ambiguous? `Response` is not a trait; `Response` is an enum. Oh, okay, fine. I'm so confused why it's saying that `Response::Connect` is ambiguous on 42... Oh, I didn't rename it. That's okay — that's a terrible compiler error message; someone should fix that. Okay, we're almost there, I think. Source, lib, line 20: this is now complaining that it's not actually getting an error. This, of course, we can clean up — these, for example, could be then our `.context()`, and then "failed to connect to ZooKeeper" — I think it's technically spelled "ZooKeeper", sadly, but that's how it's built. Line 22 — do you really need to do this? That's kind of silly. Yeah, so the problem here is I have a `Result` where the error type is `io::Error`, and I've told it that I'm going to return a `failure::Error`; I think I need to do this. Maybe — no, it's failing somewhere else now. Let's get rid of some of these warnings: response does not need `write_bytes` — nor futures, nor the `HashMap`, nor `Write`, or tokio, or tokio prelude — that makes a lot of sense, because there's nothing async in response. My guess is the point is
the same for request: it does not need `read_bytes`, because it's just going to write stuff. Proto mod does not need tokio, does not need `self` and `Write`, and does not need — that's what it is. Response — that means I'll read... request, line 67: "did you mean `Request::Connect`?" Is that not what I wrote? Oh — lib cannot find packetizer. So it's going to be — it has a `mod proto`, right? So we're going to have a `mod proto`, and this is going to have a `proto::Packetizer`. Right. So now we're getting back into the fact that we changed the packetizer, which makes this a little annoying, because you now sort of need a way to drive the packetizer. You can think of it like this: whenever you want to send a request, you now have to enqueue the thing on the packetizer, and then you need to drive the packetizer forward so that the response eventually comes back on the reply channel — the future you get back from the packetizer. So it's going to look something like this, where you do enqueue-request, and that gives you a future. And then the question now is — we're going to have something like — this is just going to be this map... But of course, the problem here is that if we in fact wrote the code like this, `zk` would now be dropped, which means the packetizer would go away and would not be driven any further — but it's a future that needs to continue to be resolved. So if you look at this now — I guess we can make it compile, just because it's not going to matter a lot; this bug is still going to be there. This does have to be `self`; this has to be `self`. And then it's going to complain — 159 — where is it complaining? 58 — it's complaining because of this `failure::Error`, right. So this now maps just through `Response`. I guess we could move the `zk` into here just to demonstrate the issue. Line 34, `read_only` is going to be `false`. Line 21 complains because "expected type parameter" — that's also a terrible error. "Cannot infer type for `B`" — yeah, so this is why I'm just going to `failure::Error` it for now. Line 39: "are we guaranteed to always get a response
there?" — sorry, right now I'm just going through all the errors and fixing them up, so that when we do start tackling the actual problem in src/lib.rs, I can talk about it without all the other errors getting in the way. All of these others are `unimplemented!`. "Compiler-driven development" — yeah, I know; I actually really like it. I think it's a pretty good way to work through your program, but maybe I'm alone in that. "Irrefutable let pattern" — why is it complaining about that? Requests — fine, I'll add another request type; that shouldn't be necessary. `unimplemented!`, and — almost there. Proto mod, line 86 — great. All right, so now we get a bunch of warnings, but see that this whole thing now compiles, right? But let me see if I can make this problem clear. The packetizer is a future that needs to be polled in order to send bytes to the network and read things back. When you enqueue something, all that means — remember from when we wrote enqueue — is that you serialize the thing and put it in a buffer. But that won't be sent on the wire, and your response won't be read, until we drive that future forward. And so that is why, with this packetizer — unless you drive it, the future you get back from the request you enqueue here will never be resolved. Because, remember, it gets resolved when the response channel is sent on; it's only sent on when the server has sent us a response that we parsed; and the server will only send that when we send the request. None of those things will happen unless the packetizer is being polled. So there are a couple of different ways to ensure that this will happen. The way we're going to do it is to use the new Tokio runtime API and basically just say `tokio::spawn(zk)`. If you spawn a future, it means it'll be on the thread pool — the Tokio runtime is going to make sure that it keeps getting polled. You can only call this from within the context of a
future; luckily, we know that we're in the context of a future, because we're already in this `and_then` of the TCP stream. The issue, of course, is that `spawn` is going to move the packetizer, so at this point we don't have a way to enqueue future requests — we don't have a handle to the packetizer anymore. And so what we're going to have to do is: `let enqueuer = zk.enqueuer()`, and `enqueuer.send(request)`... don't move `enqueuer`... "enqueuer" is a really hard word to type, wow. So let me see if I can explain why this happens, or what we're really doing here. The idea is that we're going to spawn the packetizer on the Tokio runtime — so that means Tokio is going to make sure that we keep polling it — and then we're going to have the packetizer expose some way to send it things, so that we can enqueue additional requests without holding on to the entire packetizer. Now, there are a couple of different ways you can do this. I'm going to do it in a very straightforward way: just with a queue — just with a channel. Here we can choose whether we want to use a sync or an unsync channel; I think I want the channel to be sync, probably. So we're going to add a channel here; that's going to be incoming requests — `rx`, which is going to be a `futures::sync::mpsc::Receiver`. Next, I think I actually want the enqueuer to just be returned when you make the packetizer. In fact, here's an even better API: this just returns a `sync::mpsc::Sender<Request>`, and it does the `tokio::spawn` for you. Should probably just import this, shouldn't I — `use futures::sync::mpsc`. So it's going to be an mpsc — this is going to be `mpsc::unbounded`, probably; `rx` is going to be the rx, and it returns the tx. Document that it calls `tokio::spawn`: one thing to be aware of is that when you're using the `tokio::run` and `tokio::spawn` methods, `tokio::spawn` assumes that there's currently a runtime running and that it is called within the context of that runtime. So once you
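The shape of that API — move the packetizer onto the runtime, hand back only a sender — can be sketched with std threads and channels standing in for `tokio::spawn` and the futures channel (everything here is a stand-in, including the request type):

```rust
use std::sync::mpsc;
use std::thread;

// The "packetizer" is moved onto its own task; the constructor hands back
// only a channel sender for enqueuing requests, so callers never hold the
// packetizer itself. thread::spawn stands in for tokio::spawn.
fn spawn_packetizer() -> (mpsc::Sender<String>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        // The real packetizer would serialize each request and drive I/O;
        // here we just collect whatever was enqueued.
        rx.iter().collect()
    });
    (tx, handle)
}
```

Note the same hidden dependency the stream warns about: the caller must be running "inside" whatever executor does the spawning, which needs documenting.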
start using these methods, you are essentially adding a dependency on Tokio that's sort of hidden from view. So you need to be sure to document the fact that — like, in this case — `Packetizer::new` does require that you're in the context of a runtime, otherwise it will not work. I don't think this is particularly onerous; it's just something to be aware of. This will be an `UnboundedReceiver`. All right — and now there will not actually be an `enqueue` method anymore; instead, what we'll have is an `fn poll_enqueue`. Also going to pick one of these... but how are we going to give the channel back? That's the reason it's annoying. Yeah, we're going to have to work around this a little bit. So it's actually going to get a pair of a `Request` and a oneshot `Sender<Response>` — remember that we need to give the packetizer a way to send the response back as well. So the channel that we send back — we could just do this oneshot response... well, we could do this; I probably want to wrap that up in a slightly nicer API, but we'll deal with that in a second. We're going to have this `poll_enqueue` business, and what `poll_enqueue` is going to do is going to be very similar to `poll_write` and `poll_read` — I know, I'll copy it — in that it is simply going to poll the channel we have that receives incoming requests, and then just serialize them. So it's basically the same as what we used to do when the `enqueue` method was called, but that process is going to be asynchronous as well. In this case it's simply — while — it's going to be a loop: `let item = self.rx.poll()` — it's going to be a `try_ready!` on `poll`. But I guess one question here is: what if it errors? I think this one in particular is going to be — mpsc, what is the error type for that? It's a good question. docs.rs, futures — where's our — so in this case we have a — why, I don't want 0.2 — where's `Sink`... `mpsc`... If you have an unbounded
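The drain-until-blocked shape of `poll_enqueue` can be sketched with std's `try_recv` standing in for the futures channel's `poll` (the function signature and the unit error type are our simplification):

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

// Keep pulling queued requests and buffering their serialized bytes until
// the channel would block — or report that the sending side is gone, so
// the caller can start shutting the connection down.
fn poll_enqueue(rx: &Receiver<Vec<u8>>, outbox: &mut Vec<u8>) -> Result<(), ()> {
    loop {
        match rx.try_recv() {
            Ok(bytes) => outbox.extend(bytes),           // "serialize" into the buffer
            Err(TryRecvError::Empty) => return Ok(()),   // would block: done for now
            Err(TryRecvError::Disconnected) => return Err(()), // no more requests ever
        }
    }
}
```

The `Disconnected` arm is the stand-in for the "sender was dropped, time to disconnect from ZooKeeper" case discussed next.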
It turns out that when you call poll on an UnboundedReceiver, there is no error type. So we're going to use try_ready!, and treat an error as meaning that no more requests are coming. Think of it this way: at some point the sending handle we use to enqueue things gets dropped, and at that point we sort of want to disconnect from ZooKeeper. We don't currently have a good way of doing that, but in theory it would be this method returning an error saying "I couldn't poll the enqueue channel anymore", and we'll have to deal with that in the poll method of the packetizer's Future implementation; we'll see that in a second. So this gets an (item, tx) and then does what the old enqueue method did, in a loop. Remember that this will not actually loop forever: it loops until the enqueue channel (this could be called enqueue_rx, I guess) is empty, that is, until polling it would have blocked, and then we return. Then our poll implementation will also call self.poll_enqueue() and match on the result: if it's Ok of any sort, just continue; if it's an Err, that means no more requests will be enqueued. And at that point, once poll_read and poll_write have finished, we really want to stop them; once poll_write has flushed, we can just close the connection. I don't know if there's an exit call, but if we look at rust-zookeeper, is there a Drop implementation or something? self.close()... yeah, close_session. Wait, what is create_session? I feel like create_session is what our open is. Let's not invent our own connect and use create_session as the reference instead. So, once we hit this error case and we know there are no more requests coming in, then once poll_read and poll_write have both returned Ready, we want to issue a close, and only when that finishes are we done. So here we add exiting: bool, initialized to false, and down here we'll have a case where... why is this complaining? That's not what I meant to do. Did I do something stupid? This seems totally valid... I must have introduced a syntax error somewhere, because it's really unhappy with me. "Unexpected token tx". Oh, that's why. So when we poll in the Future implementation, we set ourselves into an exiting mode, and we will only allow ourselves to exit if self.exiting is true; otherwise we return Async::NotReady, because we could still receive more requests. Now, there's actually another mode, for when we first enter the case of "everything has been flushed, everything has been replied to". Actually, I guess we now want to handle the case where the write channel has been emptied and we're exiting, though that doesn't mean there are no outstanding futures, which is a little awkward. The question is when to send the close_session request to close everything off. I think for now we're just not going to tear things down nicely, even though we ought to. Alright, how are we compiling? "Expected Sender, found UnboundedSender". So here I actually want this to be an Enqueuer type; there's going to be a pub
struct Enqueuer. Notice how almost all the code we've written is internal abstractions for our library; the external interface is still basically non-existent. But the hope is that most of what we've written we can reuse really easily, and on the outside we now have a good driver for running all of our internal futures. Enqueuer is really just going to be a wrapper around this sender, and its impl is going to be very straightforward: all it really has is a function enqueue that takes a Request and returns an impl Future with Item = Response and Error = failure::Error. Notice that we're using impl Future here to hide the mechanism, because we could totally imagine there being some more efficient way than a oneshot channel to communicate this; we don't want the user to know. All we want them to know is that you get back a future that will eventually evaluate to the response. All this really does is create the channel for you: it does oneshot::channel(), and then it does self.0.send... actually, unbounded_send. This is something that was recently added that I'm very happy about. Senders normally implement Sink, and if you call send on a Sink it consumes self; but unbounded senders also have unbounded_send, which just takes self by reference, which is very nice because it means we don't have to consume the Enqueuer. We send the request, plus the transmit channel to send the response back on, and then we return rx and map the error. So now in lib, enqueue goes through the Enqueuer, and we move the Enqueuer in here. So Packetizer::new is actually going to give you the Enqueuer. This takes self, and needs to be mutable... "no function new on Packetizer", oh right, proto::mod. The other thing that's nice about this is that the Enqueuer is not tied to the type of the stream, so you can make a packetizer and pass around Enqueuers without the Enqueuers having to know how we're communicating with ZooKeeper. That might not matter in many other contexts, but I think it's going to turn out to be nice for us, for testing in particular. "Expected Option, found tuple": right, polling an UnboundedReceiver gives me an Option, so we'll match on poll. So we have an UnboundedReceiver here; when we call poll, what do we get back? Let me find my prelude... Stream::poll gives one of these. Errors we want to propagate (I do still want try_ready!), but if we get Ok(Async::Ready(Some((item, tx)))) then we return (item, tx), and if we get a None then we return an error. It's a little weird, actually. "Expected Option, found Result"... "expected Sender, found Sender" on line 98, because this one has to be futures::sync. I'm using futures::sync rather than futures::unsync here so that you can actually have many different threads sending requests to and receiving responses from ZooKeeper at the same time. We're going to require Send + 'static in order to make a new one, which is a little sad, but there it is. The error that happened there was that poll_read and poll_write were lumped into the same impl block, but we actually only require Send + 'static for new, because new calls tokio::spawn, and tokio::spawn might spawn things onto a thread pool, for example. "AsyncRead is not implemented": right, so here S needs to implement Send, it needs 'static, it needs to implement AsyncRead, and it needs to
implement AsyncWrite. It needs Send + 'static because we're spawning it, and spawning might happen on a thread pool; it needs AsyncRead and AsyncWrite because we've only implemented Future for Packetizer where S is AsyncRead + AsyncWrite. Line 62: "expected failure::Error, found ()". Oh right, we don't actually get to have this return an error, which is also a little awkward. The problem is that the packetizer is now running out on some thread pool somewhere, and if that future errors, Tokio doesn't know where to return the error, and we don't really have a good way of communicating it back to the user. One way might be to resolve some future with that error the next time the user tries to enqueue something; that way we'd be guaranteed it gets propagated. I think what we're going to do for now is just unwrap, which isn't great, but we'll do it for now. Handshake is generic over the stream S... "S does not meet"... to use Tokio's traits, not futures', which is interesting. Alright, line 41, what's left? "Expected Response, found tuple": right, this doesn't get the Enqueuer, it just gets the Response. And now there's no None, and this now holds, ah, that's great, this holds an Enqueuer. Did I make Enqueuer pub(crate)? Yes indeed. Oh, that's great: this no longer needs to be generic over S, which is also beautiful, because it means we can simplify this, and this, and this; we still need handshake to be generic over any S. And line 22, "cannot move": unbounded_send, and that's why taking self by reference is nice. So in theory this should connect us the way we want to and make everything happy. Let's just throw it at the wall and see what's next. Oh, I guess this "failed to enqueue": this should just never fail. Pretty sure this should just never fail, because we know that our receiver shouldn't be terminating. I guess the way this would fail (this is sending a request to the packetizer) is if the packetizer has gone away. We know that normally the packetizer will keep reading until the sender side of this channel goes away, so it should always be available. So "failed to enqueue new request" can only happen if the packetizer goes away, and the same here. The other thing: just for my own sanity, I want to derive Debug, Clone, Eq, PartialEq, Ord, PartialOrd, and then I want Request and Response to be Debug; proto::mod. So the issue here is that this function doesn't actually return a Result, it returns a future, and I sort of want to return a future even if this send failed. The way I'm going to do that is to match on it. If it's Ok... I think the Ok from unbounded_send carries basically nothing (the mpsc unbounded_send result is nothing useful), so if nothing went wrong, we use the future::Either combinator, which is really handy: Either::A of that, or, if there's an error, I return Either::B of result(Err(...)).into_future(). Great. Now it's complaining about a bunch of things, in particular line 61, which is fine; a bunch of unused opcodes, which isn't terribly surprising; n, which is no longer in use; response, which we're not really using, although I guess we could print it out now that we derive Debug anyway. Proto::mod ignores length at 201, skip the length; line 44, that's gone away. So what do we have now? There are opcodes, Enqueuer, "connection to ZooKeeper is never used", and proto::mod line 208 does a send and doesn't do
anything with the result. Okay, so this is where we try to send the response to whoever enqueued the request, but the receiver has been dropped, and I think we just want to ignore that failure: the receiver doesn't care, so we don't either. Alright, cargo t. In theory we now have everything we need to connect and do the handshake, and all the other things should actually be a lot more straightforward. "Takes one parameter"... alright. Let's see: this is really the signature I want for connect. The question, of course, is how this chooses a SocketAddr. What's a SocketAddr, and can I even get one easily? Probably not, which is really unfortunate; I just want a nice API. It will take a SocketAddr, and we're just going to hard-code 127.0.0.1, port 2181, which is the ZooKeeper port, and .parse() it. How about now? Isn't there a way for me to parse a SocketAddr? Connect... I just want some code I can copy-paste for connecting with Tokio. Oh, is that all I need? TcpStream::connect, and do this. Great. Line 54: zk, I think, is an API we can use now. Is it complaining about something not found in tokio? Okay, so tokio::run is also a little bit annoying, because it takes a future and runs until that future completes, but that future can't return anything. Very often what you want is the ability to resolve a future and get its result, and that's not something you can do easily that way. The way to get around it is to create a Tokio Runtime yourself and then do rt.block_on(...). "Could not find Runtime in tokio"... then rt.shutdown_on_idle(), which says: keep running until this future resolves, and then shut down once the pool is idle. That's sort of progress. What is it saying? "Failed to enqueue new request: canceled". Canceled, you say. So the enqueue didn't work; that's interesting. We presumably get to the handshake part... yes, "about to handshake". First of all, I want these errors to go away: "variant is never constructed". What's the attribute... allow... wow, I cannot spell today. allow(dead_code), maybe? Yep, that does it. And I want the other warnings to go away too, so it's easier to read this code: allow(dead_code) for now, and this foo business. Great. So this still fails with "failed to enqueue new request". If I run this with --nocapture, what does it say? "About to handshake". Okay, so it does get to the point where it tries to handshake, it makes the packetizer, and it's the enqueue that fails. Well, I guess we'll have to look at what the packetizer does. We want to see wherever it decides that it's done, which is this place; in theory it shouldn't think it's done, right? Yeah, the packetizer is not done, the packetizer is being polled. So why is it failing to enqueue the new request? proto::mod line 27: it's trying to receive the response, and that's been canceled. This suggests that when we send the tx to the packetizer, that tx is simply dropped. So why is that tx dropped? That means we're either dropping something from reply, or we're never inserting it into reply. So here: eprintln!("got request"). The idea, of course, was that the tx would be stored here, and then somewhere down here... or is it here... "handling response to xid {} with opcode {}". Alright, let's see what we got. It does receive the request; the question is why it then drops the sender. That is a very good question, because that's the only place it could be removed. Actually, let me make a repo for this so other people can follow along with the code: tokio-zookeeper, "first non-working public push", and push it. So now the code is at github.com/jonhoo/tokio-zookeeper if you want to go see it. So the
question now is: why is the receiver being canceled, given that it's not being removed? It's almost like the packetizer isn't inserting it anywhere, but there's nothing that can fail between these two points. It definitely got the request, which means it definitely inserted it here. So why on earth would it go away, unless of course the whole packetizer is dropped? I'm not sure why that would be the case either. It is being dropped; so that's the reason it's being canceled, but the question is why. The packetizer should be returned here, which means it should be returned here. It's true that we're dropping the Enqueuer here, but we're not really dropping the packetizer, because the packetizer is just being spawned onto the Tokio runtime. I guess... I guess it had an error that we just dropped. That's what we get for taking shortcuts. Alright, let's see: "failed to fill whole buffer". But that shouldn't really be a problem; it should be allowed to not fill. Hmm. Let's dig into this some more: eprintln!("poll_enqueue")... my guess is it's in poll_read. Right: it gets to poll_read and then fails to fill the whole buffer, so it's somewhere here. Oh, that's totally what it is... no, we shouldn't be calling that. I want to know where exactly, but I don't think it gives me a backtrace. So the issue is specifically that at some point it's trying to read inside this function, and instead of just returning WouldBlock it gets an error saying it failed to read the expected number of bytes. I think the only way that could happen is if we use the read_i32 API, the byteorder reads, because we give it just a slice: it knows when it reaches the end of the stream, and therefore it gives that error instead of WouldBlock. Which I think means it gets here; my guess is it fails in response parsing or something. Yeah, I got there. So I'm going to guess that it gets here and did not get there; that's interesting. This is the connect response, so maybe the length there is wrong or something. "Got here with need = 4"... but that seems wrong. Oh, "more than or equal to four"? That's pretty silly: need greater than or equal to four just means we have the length. "Got here, still says need = 4, read more bytes"... this sort of suggests that it's not reading enough data. "Read more bytes, read more bytes"... how much does self have? Should it be need == 4 instead of need != 4? "Read more bytes, have 0... got here with need = 4". Oh, you're totally right, good catch [to chat]. It passed! We successfully connect to ZooKeeper; you're totally right, good catch. And let's see what we get back: look at that, that's something ZooKeeper gave us, at least in theory. I don't know why it says read_only: true; these things look a little bit sketchy, and why would it give us a negative session id? But okay, that's progress. Let's get rid of this "got here" business; I do probably want to keep these in for a while, but at least we can connect. Hooray; git push. So now let's try to run some other operations. My hope here, at least, is that writing more operations should now be basically trivial. I mean, it's probably not going to be, but I want it to be. So we're going to derive Clone and Debug for this, and then, if you have a ZooKeeper handle... let's see what the interface for the rust-zookeeper crate is. I'd like the simplest things we can add: create and exists. That seems great. Create, sure. Actually, you know, let's make this as simple as we can for now. It'll in fact just be this, and it's going to give you back an impl Future of String (I don't know exactly what's in there, but sure, why not) with Error = failure::Error. This is one of those places where
we probably want a structured error type. Now what this is going to do is self.connection.enqueue... and now we have to define the request. Where's our friend, the Jute file? So that's ConnectRequest, ConnectResponse, RequestHeader, ReplyHeader... CreateRequest, great. So it's going to be a new Request variant, and a CreateRequest looks like: path, data, acl, flags. All of these are by reference, so you might want Request to be... no, let's leave that alone for now. So this is going to be very, very similar to what we had for connect, because it's really just going to be written out the same way. Oh, there's some funky stuff going on here: these are all newtypes, so we have Strings and Vecs of various different types. Ah, so this is why rust-zookeeper has this WriteTo trait. Yeah, I think we want that trait; it seems like a good idea, so we're going to keep it. There are impls of WriteTo for u8, for String, and for Vec<T>, and also for Acl, although for now we don't really have an Acl type. Do I want the Acl type? Probably, don't I... what's in an Acl? No, I don't want it yet, it's too much stuff, so we're going to claim that this is actually an empty Vec, with a TODO. And so now, in self.iter()... why are there no... oh, T implements WriteTo. Are there any other WriteTo implementations here? ConnectRequest, RequestHeader, etc., that's all fine; these are all the same. Alright, so what this gives us is the ability to say path.write_to(buffer), data.write_to(buffer), acl.write_to(buffer). My guess is that the encoding of a Vec is just the number of elements first (yes) and for a String it's the length of the string first (yes). Okay, so these are very straightforward; it's just nice to have a convenient way to encapsulate it, so that we don't have to repeat the code for writing the length and the elements each time. Oh right, io::Result. So this is going to enqueue a request: proto Request::Create, with path: path.to_owned(), data: data.to_owned()... what else is in create... acl, which is going to be an empty Vec for now, and flags, which is going to be 0 for now. "Request, line 108, str": what I really want here is Deref; why does it not give me Deref? So I guess I'll just do impl<'a, T> WriteTo for &'a T, which just calls self.write_to(writer). How about that? Oh, it can't... fine, elaborate that, and then I think maybe I can get rid of this. It's because unit doesn't implement it, yeah, exactly, unit doesn't implement WriteTo, and I don't really want to implement WriteTo for unit; that seems like not the right thing to do. Response also needs to be able to parse a create response, and a CreateResponse is apparently just a String, so let's do it properly: that's just a read_string. I guess this is the opposite trait, which we probably also want to borrow from rust-zookeeper; it's very similar to the buffer reader we already have. And now, if we get an opcode... actually, here we do have OpCode::Create, and that's going to give us Ok(Response::Create(...)) where it's just the response. Notice how the core protocol handling usually ends up looking very similar between async and non-async implementations; what differs is what it means when there are errors. Now, line 38: Result. I would rather these be io::Result than failure::Error on methods that are basically I/O, so let's stick to that. It's going to complain about line 32, because here there's an ErrorKind, I guess WouldBlock, "read buffer failed". Source, line 55: there should be two Strings, I guess.
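The length-prefixed encodings just confirmed (a big-endian i32 count, then the elements) are easy to sketch with std alone. These helper names are mine for illustration, not rust-zookeeper's WriteTo API:

```rust
// Jute-style length-prefixed encoding: a big-endian i32 length, then the
// raw bytes. Strings and byte buffers share the same shape on the wire.
fn write_i32(buf: &mut Vec<u8>, v: i32) {
    buf.extend_from_slice(&v.to_be_bytes());
}

fn write_str(buf: &mut Vec<u8>, s: &str) {
    write_i32(buf, s.len() as i32); // length prefix first...
    buf.extend_from_slice(s.as_bytes()); // ...then the contents
}

fn write_buffer(buf: &mut Vec<u8>, data: &[u8]) {
    write_i32(buf, data.len() as i32);
    buf.extend_from_slice(data);
}

fn main() {
    let mut buf = Vec::new();
    write_str(&mut buf, "/foo");
    // 4-byte big-endian length (4), then the path bytes.
    assert_eq!(buf, [0, 0, 0, 4, b'/', b'f', b'o', b'o']);
}
```

Having one helper per wire type is exactly the convenience the WriteTo trait buys: the length-then-contents pattern lives in one place.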
Yeah, so what I was thinking earlier is that we might want Request to have a lifetime, so that you don't have to give it owned things; but that makes things a little annoying with the future, because you'd essentially have to guarantee that the reference stays valid until you read the response, which is a really weird lifetime to have and not one that's trivial to express. We'll see how this works out and whether it's nice. Line 53: that's going to enqueue and then map. So here's one thing that's a little bit annoying (let me finish typing this first): it's the fact that when you send a request for a create, you know the response also has to be a create response, but the compiler doesn't actually know that this is the case. We could sort of fix this by adding a trait bound here, saying that enqueue takes any request type and returns a future that resolves into that type's associated response type, and then only implement that pairing for CreateRequest and CreateResponse; that way you'd be able to pair them up. But I think this is nicer... although the trait version would prevent abuse, I guess. Let's find our proto::mod enqueue and leave a TODO. So the proposal is that we make enqueue generic: it takes an R: Request and returns an impl Future with Item = R::Response, something like that. We could have that kind of bound, which would guarantee that we couldn't return a different response type than the one that was requested, and that would be pretty nice. For now... I mean, we can do that later; I'm more trying to figure out whether there's a better way than using unreachable!, because this pattern bothers me a lot; it shouldn't be necessary. That typed approach would also mean that enqueue is a bit more of a pain to use, though, because you don't really know which response you're going to get back, and we wouldn't have this convenient enum type either. "Got a non-create response to a create request", line 63, why is this... Now, let's see how
straightforward that was to add. We're going to do connect, and then, with the enqueuer, we do zk.create("/foo", [0x42]); that's all I want to write there. That gives me a future, and I'm going to block on that future... actually, that's not even what I want to do; there are so many ways we could do this. I think what I actually want is this. So this is going to give us a zk, and then we do zk.create, and then print "created path {}". It did not like that; how about now? "Cannot infer type for T"... it doesn't like... let's see what we get this time. So it connects... oh, "read buffer failed"? That's not great. Hmm. I mean, I guess we did change read_buffer, but not by all that much. Wait, so this means that a request is failing: "read more bytes, have 4". How many bytes does it have when it tries to do this decoding? Wait, it works some of the time? That's terrible. 41 bytes. So sometimes it works and sometimes it does not, and also the path that comes back is empty, which is a little disturbing. Hmm, that's odd. The times when it fails: "read buffer failed, 41 bytes". Okay, let's try to inspect what these bytes actually are in each case. Here's a case where it fails, and we get those bytes; okay, and in a case where it succeeds we get these bytes. So that's kind of interesting: they're the same number of bytes, but for whatever reason one we can parse and the other we cannot. The xid we got back we parse just fine regardless. So let's look at our response parser. The response from a create_session should be: first you read the protocol version, which is four bytes; then you read the timeout, which is four bytes; then you read the session id, which is eight bytes: one, two, three, four, five, six, seven, eight. Hmm, I don't think that's right.
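The connect-response layout being walked through here (protocol version, timeout, session id, length-prefixed password, read_only flag) can be sketched like this. A minimal std-only sketch that assumes the outer 4-byte length prefix has already been consumed; not the stream's actual parsing code:

```rust
use std::convert::TryInto;

// Parse a ZooKeeper connect-response body: i32 protocol version,
// i32 timeout, i64 session id, length-prefixed password, and a trailing
// read_only byte. All integers are big-endian.
fn parse_connect_response(b: &[u8]) -> (i32, i32, i64, Vec<u8>, bool) {
    let proto = i32::from_be_bytes(b[0..4].try_into().unwrap());
    let timeout = i32::from_be_bytes(b[4..8].try_into().unwrap());
    let session_id = i64::from_be_bytes(b[8..16].try_into().unwrap());
    let pw_len = i32::from_be_bytes(b[16..20].try_into().unwrap()) as usize;
    let password = b[20..20 + pw_len].to_vec();
    let read_only = b[20 + pw_len] != 0;
    (proto, timeout, session_id, password, read_only)
}

fn main() {
    let mut b = Vec::new();
    b.extend_from_slice(&0i32.to_be_bytes()); // protocol version
    b.extend_from_slice(&4000i32.to_be_bytes()); // timeout
    b.extend_from_slice(&1i64.to_be_bytes()); // session id
    b.extend_from_slice(&2i32.to_be_bytes()); // password length
    b.extend_from_slice(&[9, 9]); // password bytes
    b.push(1); // read_only
    assert_eq!(parse_connect_response(&b), (0, 4000, 1, vec![9, 9], true));
}
```

If the parse starts at the wrong offset, every later field shifts, which is exactly the failure mode being debugged here.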
I find it very hard to believe that the session id is the same every time, but okay. Then it reads the password (the password, you say), which is length-prefixed. I don't believe that's right; I think we're missing something. And then the read_only. Because read_buffer reads four bytes as a length and then tries to read that many bytes, and there are just way too many bytes here, this can't be right; I feel like that "password length" is this zero or something. So we parse this... this is a different kind of response, though. The thing we get from the server is first the length, which at this point we've already parsed out (right, yes), and then the xid, which is the next four bytes. So what is this, then? This reads from offset four, which means you should not read an xid when it's a connect response. Oh, that's weird, right? You see this: it reads the length, which is 0x04, and then an "xid" which is four, but then it reads from offset four for a connect reply, which to me implies that the xid is not there. The part that's changing between runs is probably the session key, I agree [to chat], but this just means that we shouldn't be reading an xid for the connect reply, which is a little bit disturbing, because it means we basically need to know whether we're decoding the first response or a subsequent response. So this is going to be: if self.first, then we let xid = 0 and we know it's the connect response; otherwise, we read the xid; and then self.first = false. Yeah, it's not really "connected" so much as "first", but yeah. Alright, that seems all good, and that's better. And then for Ok... and then we're at create. Create reads... let's see, the response to a create should just be the path. Oh, and the opcode... no. Oh, right: I've completely forgotten about these other fields. That's right: ZooKeeper also keeps track of a bunch of timestamps that you need to pass around to make sure you keep progressing.
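The special case just discovered, that the connect reply carries no xid while every later reply starts with one, can be sketched as follows (helper name is mine):

```rust
use std::convert::TryInto;

// Decide where a reply's body starts. The very first (connect) response
// carries no xid; every subsequent response begins with a 4-byte
// big-endian xid. Returns (xid, offset where the body starts).
fn frame_header(buf: &[u8], first: bool) -> (i32, usize) {
    if first {
        (0, 0) // connect response: no xid, body starts immediately
    } else {
        (i32::from_be_bytes(buf[0..4].try_into().unwrap()), 4)
    }
}

fn main() {
    let reply = [0, 0, 0, 1, 0xAA];
    assert_eq!(frame_header(&reply, false), (1, 4)); // xid = 1, body at 4
    assert_eq!(frame_header(&reply, true), (0, 0)); // connect: skip nothing
}
```

This is why the packetizer needs a "first response" flag (or, more generally, a connection state) before it can decode anything.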
We're not currently parsing those out, so we do need to. Where does rust-zookeeper parse them? That's a good question... yeah, that's something that's just extracted over here somewhere, by the thread that it spins up; these additional ids are not generally something the user needs to know about. Here: it's a ReplyHeader, this thing, and the ReplyHeader is parsed here, and it holds the xid, the zxid (which is an i64), and an error, so an i32, an i64, and an i32. What does it do with the error? Right. And you can get a response to a close, that makes sense; this is where they handle connection responses. We don't currently implement timers and timeouts either, which is something we'll have to do for this whole thing, but crucially we do need to read out these fields. This would be the xid; things like the zxid and the error we're going to need later, but for now we just need to make sure we parse them out, otherwise they'll end up being part of the response. (Technically this should probably be a connection state rather than just "first", by the way.) Now: empty? The response is empty, very few bytes. Why is it empty? I mean, this sort of suggests... no, this can't be right: the length is 16, which is 4 plus 8 plus 4. So where's the content of the create response? That's a very good question. Let's fire up Wireshark, shall we: capture on loopback, and I want port 2181. Let's run this again. Right, now I want to follow the TCP stream. So: this is us sending the connection request, this is the connection response, this is us trying to create /foo, and this is the response we get back. In this packet, the first four bytes are the length, and that length is 0x10, i.e. 16, which matches the 16 bytes of payload that follow.
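The ReplyHeader fields just identified (xid: i32, zxid: i64, err: i32, all big-endian) can be pulled out like so. A sketch, not rust-zookeeper's code:

```rust
use std::convert::TryInto;

// Parse the 16-byte reply header that precedes every response body:
// xid (i32), zxid (i64), err (i32), big-endian.
fn parse_reply_header(b: &[u8]) -> (i32, i64, i32) {
    let xid = i32::from_be_bytes(b[0..4].try_into().unwrap());
    let zxid = i64::from_be_bytes(b[4..12].try_into().unwrap());
    let err = i32::from_be_bytes(b[12..16].try_into().unwrap());
    (xid, zxid, err)
}

fn main() {
    // xid = 1, zxid = 0x53, err = -1, like the packet in the capture.
    let mut b = Vec::new();
    b.extend_from_slice(&1i32.to_be_bytes());
    b.extend_from_slice(&0x53i64.to_be_bytes());
    b.extend_from_slice(&(-1i32).to_be_bytes());
    assert_eq!(parse_reply_header(&b), (1, 0x53, -1));
}
```

Note that 4 + 8 + 4 = 16: a reply whose total payload is exactly 16 bytes is header-only, with no body at all, which is the puzzle here.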
Then we get four bytes of xid, which is 1; that's correct, because that's the xid we assigned to the first request. Then the next eight bytes: 0x53 is the zxid, and then this is the error code, 0xff... Oh! So maybe an error occurred. Then how do we know what the error codes are? Hopefully that's also listed somewhere... it's not going to be easy to find, is it. Actually, let's look at the rust-zookeeper crate instead and see if there's any error parsing going on: source, error... ZkError, derived... it seems like something else... consts. Aha, that's what we want to see. So I want this whole thing: we're going to have error.rs with this whole file, make it repr(i32), and also derive Debug; this is ZkError. In proto::mod we're also going to have a mod error... actually, I guess we'll do this, and then, if error != 0, print the error. So now the question is how I construct one of these from an i32. I think I can just do this... I'd have to do From... do any of you remember off the top of your head? I don't really want to write a big match; you could use transmute, but that's not really... The num crate has macros; I don't really want to depend on that, but does this also give me... I guess num_derive, "derive numeric traits", I mean, I guess that's what I want. A little sad that I have to add a crate for it, but: num-derive 0.2, and then in src/lib.rs, extern crate num_derive with #[macro_use], derive FromPrimitive... "can't find crate for num_traits"? That seems weird. Oh, I did that stupidly... I don't really want to do it this way, so in that case let's just write it ourselves: a fallible conversion from i32 for ZkError, matching on the code. And then we want to substitute... no, I only want From here. Oh, did I do something stupid? I did, didn't I. Regular expressions... what did I miss; why is this not legal? Ah, the substitute preview, I think, is new in Neovim.
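A cut-down version of the hand-rolled i32-to-enum mapping. Only a few of ZooKeeper's error codes are shown here (values taken from the ZooKeeper sources); the real error.rs list is much longer:

```rust
// A few of ZooKeeper's error codes; the full list lives in error.rs.
#[derive(Debug, PartialEq)]
enum ZkError {
    MarshallingError, // -5: the server couldn't deserialize our request
    NoNode,           // -101
    NodeExists,       // -110
    InvalidACL,       // -114
}

// The hand-written replacement for num_derive's FromPrimitive: map the
// raw i32 from the reply header's err field back to the enum.
fn from_i32(code: i32) -> Option<ZkError> {
    match code {
        -5 => Some(ZkError::MarshallingError),
        -101 => Some(ZkError::NoNode),
        -110 => Some(ZkError::NodeExists),
        -114 => Some(ZkError::InvalidACL),
        _ => None,
    }
}

fn main() {
    assert_eq!(from_i32(-5), Some(ZkError::MarshallingError));
    assert_eq!(from_i32(42), None);
}
```

Returning an Option keeps the conversion total without a transmute: an unknown code is surfaced instead of becoming undefined behavior.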
It's new in Neovim, I think; I don't remember exactly when it was added. It's somewhere in here, I think; it's relatively new, but it's really nice. You'd have to look. It's somewhere in my vimrc, which is on GitHub: if you look at my repositories, there's one called configs that has all of this. So yeah, if you take a look there you should find it. All right, what error did we get? It's saying MarshallingError. Oh, so the request we sent to ZooKeeper was wrong. Interesting. Well, then it's not terribly surprising that it didn't want to do it. Actually, what this gives you back is: it resolves either with a response or with a ZkError. And then here (we'll probably tidy this up a little later) the result of this, and ZkError... this will hold a Sender of a Result. This looks like a place for a type alias. Because now, here where we send the response, we sort of want: error is None if the error code is zero, otherwise error is Some of this. And then: if let Some(e) = error, then tx.send(Err(e)); otherwise (this has to happen regardless, line 239) this is going to send an Ok. So basically this is just wrapping the oneshot so that we can send errors back as well. We might want to unify these rather than have two separate channels, but for now let's do it this way. Then we're going to match on r: if it's an Ok, we do this; if it's an Err(e), we panic with e; and anything else is unreachable. (I think plain Vim has the preview feature now as well, actually, but you'd have to check.) Great, so now it actually errors the way we're expecting it to: MarshallingError. So why is the string we send wrong? I'm fairly sure that, for some reason, it doesn't like this... I'm almost certain it's the ACL. Or maybe it's the flags. What do the flags have to be? So for ZooKeeper, if you do a create, you have to select the mode. Ah yes, we probably do have to give some kind of mode here, and that's what we're missing.
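The response-completion pattern described a moment ago, wrapping the channel so either an Ok or an Err goes back to the caller, can be sketched like this. Here `std::sync::mpsc` stands in for Tokio's oneshot channel so the sketch runs on its own, and `complete` is an illustrative name, not the stream's actual function.

```rust
use std::sync::mpsc;

// Stand-in error type; only the variant needed for the sketch.
#[derive(Debug, PartialEq)]
enum ZkError {
    MarshallingError,
}

// The packetizer holds one Sender<Result<_, ZkError>> per outstanding
// request. When a reply arrives, a nonzero error code in the reply
// header becomes an Err(..) sent back to the caller; otherwise the
// response body is sent as Ok(..).
fn complete(tx: mpsc::Sender<Result<Vec<u8>, ZkError>>, err_code: i32, body: Vec<u8>) {
    let error = if err_code == -5 {
        Some(ZkError::MarshallingError)
    } else {
        None
    };
    if let Some(e) = error {
        let _ = tx.send(Err(e));
    } else {
        let _ = tx.send(Ok(body));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    complete(tx, -5, vec![]);
    assert_eq!(rx.recv().unwrap(), Err(ZkError::MarshallingError));
    println!("ok");
}
```

With the real oneshot, the caller's future resolves with that `Result`, which is exactly what makes the API "resolves either with a response or with a ZkError".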
So specifically, the create request: when the crate makes it, what does it make it with? Mode as i32, yeah. And mode is probably not allowed to be zero... no, Persistent is zero, so I should be allowed to do that. Ah: "if the ACL is invalid or empty, InvalidACL is returned." So we have to send an ACL. I don't want to; that makes me really sad. I wanted something that was straightforward to implement. Ah, let's do exists instead. Yeah, let's do exists then. I don't think create will actually be that hard, except we'd also have to port the ACL stuff, which I don't want to do at the moment because we're already running very late. So: an exists request has a path and a watch, and I'm guessing watch is a u8 or something. It's a bool, okay, so that's fine. What exactly is a... how do I find one? proto, give me proto. So an ExistsRequest is a StringAndBoolRequest, okay, so the watch really just is a u8. And the response is a StatResponse; the StatResponse holds a Stat. And what is a Stat? Oh no, data::Stat... oh no, it has so many fields. All right, fine, we'll add a Stat. So I guess we'll do something like src/data.rs, or src/... I sort of want to call it "types", but it's not really types. Let's do types.rs for now and we'll deal with that later. So Stat goes in types, lib gets a mod types, and this is going to use Stat. What does exists return here? An Option<Stat>, okay, great. So we change this to be exists: it takes a path, it returns an Option<Stat>, and it makes not a Create but an Exists, with the path and a watch that's going to be zero, and for Exists it's going to get back a Stat. All right, so what is the response to this? An exists response is a StatResponse, which seems to always be Some. So when would this ever return... oh, ZkError::NoNode. I see: on ZkError::NoNode, it's None. That's how it does it. Oh right, and up here: this is going to be Exists, it takes a path and a watch, and the serialization is just going to do buffer.write_u8(watch).
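As a sketch of the serialization just described: Jute writes a string as a big-endian i32 length followed by the UTF-8 bytes, and the watch bool as a single byte. `write_string` and `exists_request` here are illustrative helpers under those assumptions, not the crate's actual code.

```rust
// Jute-style string: big-endian i32 length prefix, then the bytes.
fn write_string(buf: &mut Vec<u8>, s: &str) {
    buf.extend_from_slice(&(s.len() as i32).to_be_bytes());
    buf.extend_from_slice(s.as_bytes());
}

// Body of an ExistsRequest: the path string, then one byte for watch.
fn exists_request(path: &str, watch: bool) -> Vec<u8> {
    let mut buf = Vec::new();
    write_string(&mut buf, path);
    buf.push(watch as u8);
    buf
}

fn main() {
    let buf = exists_request("/foo", false);
    // 4 length bytes + 4 path bytes + 1 watch byte
    assert_eq!(buf, vec![0, 0, 0, 4, b'/', b'f', b'o', b'o', 0]);
    println!("{:?}", buf);
}
```

Note this is only the request *body*; the length, xid, and opcode framing around it is a separate concern, which is exactly where the bug below turns out to be.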
Our test, of course, is going to call exists. And I guess the response enum is now going to have an Exists, which is just going to hold a Stat; we use Stat there. We're going to get a Stat back, so we need a way to deserialize a Stat. What's the read_from... so where is ReadFrom implemented for Stat? There we go. We probably want to borrow more of these traits. This is, as I mentioned before: if someone else has written the protocol before, especially in the same language, it just saves you so much work. These aren't necessarily hard to write; if you look at them, there's nothing here we'd have a really hard time writing once we have the struct. It's just really convenient not to have to. Because now, in here, Exists gives us an Exists where stat is Stat::read_from... I messed up my syntax here somewhere. What else do we have? ZkError... use types, use proto... there's clearly a bunch of cleanup we could do for the organization of this library; I just want to get to the point where all the base functionality works. Oh, that's already there, so I just want proto. So now, line 59: this doesn't work because "expected Result, found Option". And line 57... all right, let's see: does /foo exist? Still a MarshallingError. That's so weird. Is there a ZooKeeper log? "Failed to process..." establish session, get data, "unreasonable length". Oh, I know what we're doing wrong. It's when we're sending the request. Where is this... let's see. The port is 2181; what is it we're sending out when we send our request? This one. We send a length... wait, the first four bytes should be the length. Why does it think the length is that long? That's very long, whereas clearly the length is set correctly for the connect. So I think this is the length for the string. So in the request we send out, we first give four bytes of length, then four bytes of xid, then four bytes of opcode, then a length, and then the string, 2f.
That seems far too long. Oh wait: four; one, two, three, four; and then one, two, three, four; and then one, two, three, four... yeah, something's not right here, because we're supposed to send the length. No, I shouldn't need to send a zxid; the request is just a length, followed by xid, followed by opcode, and all of those are four bytes. Yeah: the request header is just xid and opcode, the opcode is just an i32, and the length is an i32. Am I miscounting here? Because if this is indeed where the data starts, then it's four bytes for the length, one-two-three-four bytes for the xid, one-two-three-four bytes for the opcode... what is the opcode for exists? The opcode for exists is 3, and there's no 3 here. So this makes it seem like we don't actually send the opcode, or like we send the first three bytes of the opcode and then it gets overwritten or something. Oh, I wonder whether that's what it is. Yeah, it's right here: we're not actually sending the opcode. This needs to send OpCode::Exists. Great, so now we get back "exists: Some(...)", and that's actually because I created it earlier. If I do zkCli (give me a command-line interface), help... delete /foo. Now if I run it, I get "exists: None"; right, this is None. And if I create /foo banana and run it again, I get Some with a data_length of six, which is how long "banana" is. Yay. Okay, so we now have connect, and we have an arbitrary one of the calls; specifically, we have exists. I think that's pretty good. This is basically how far I wanted to get today: to the point where we have a fully running asynchronous client. It's obviously nowhere near feature-complete, but if you've been following along, hopefully you've seen that we've built all the internal infrastructure we wanted for the asynchronous stuff, and you even saw just how simple it was to change from create to exists. It's very straightforward, and of course the hope is that adding the other methods is just as straightforward.
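The framing bug we just chased down can be summarized in a sketch: every request frame is a big-endian i32 length (not counting the length field itself), then the xid and opcode (both i32), then the serialized body. `frame` is an illustrative helper name, not the library's actual function.

```rust
// Build one request frame: length prefix, then xid + opcode header,
// then the request body. The bug in the stream was that the opcode
// write was simply missing.
fn frame(xid: i32, opcode: i32, body: &[u8]) -> Vec<u8> {
    let mut buf = Vec::new();
    // length covers the 8 header bytes plus the body, not itself
    buf.extend_from_slice(&((8 + body.len()) as i32).to_be_bytes());
    buf.extend_from_slice(&xid.to_be_bytes());
    buf.extend_from_slice(&opcode.to_be_bytes()); // the missing write
    buf.extend_from_slice(body);
    buf
}

fn main() {
    // body: exists("/") serialized as length-prefixed path + watch byte
    let body = [0, 0, 0, 1, b'/', 0];
    let f = frame(1, 3, &body); // 3 is the exists opcode
    assert_eq!(f[..4], [0, 0, 0, 14]); // 8 header bytes + 6 body bytes
    assert_eq!(f[8..12], [0, 0, 0, 3]); // opcode is actually present now
    println!("{:?}", f);
}
```

This is also why the server logged "unreasonable length": with the opcode missing, the body's string-length bytes slid into the opcode slot and everything after was misinterpreted.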
There is still some more complication in terms of adding things like watchers, though. If you look in the original API, there's also this notion of... you can give it a watcher. No, that's not it. There's a way where you can... yeah, "listener", I think, is the one. You can basically set it to watch, or listen, for changes on a given path, and you'll be notified whenever that path changes. And of course, the way we could implement this is: you call watch on some path and it gives you back a Stream over the Stats of that path, for example. So that would be a way in which we could actually give the user a nicer API than what you currently get from the existing crate, because we can integrate all of this very nicely. I think we're probably going to call it right around here. "Is it pub/sub?" Yeah, so with the watcher API you can basically use ZooKeeper as pub/sub, except that it's not quite pub/sub: it's whether a given path has changed, meaning the contents of a path or its flags. You don't get an infinite queue that you can just keep pushing things onto; you can watch a given path, or I think you can watch anything under a given path, but I'm not sure. We'd have to dig into that more. So, one of the reasons I want to stop here is that we've been running pretty long and we've gotten to cover most of what I wanted to cover today. The other reason is that the stuff we've covered so far has basically been independent of ZooKeeper: it's mostly been about how you implement a protocol, how you implement it async, how you use Tokio on top of that, and how you package and test this, at least in a somewhat trivial way. But the core that we have for tokio-zookeeper now is actually pretty solid. The way it's implemented internally, its internal state machine, is now extensible to the point where we could add
another method or two, and they'd be pretty straightforward to add to the existing setup. And so that means the next stream in the series will probably be adding a bunch of the different methods and seeing how they tie into the infrastructure we've now built, and also adding things like listeners and watchers and other things that are more ZooKeeper-oriented, as opposed to the almost-general protocol-level infrastructure. In fact, one thing that would be really cool is if there were a way to take the packetizer we have, in particular, and write it in a more generic way, so that any protocol could use it as its internal state machine. But I think that's a task for a much later and different time. All right, I think that's where we're going to call it for today. I'll push all the changes, and maybe tidy up some of the debug output, but I'll push all of it. If you have questions about today's stream, about ZooKeeper or the implementation we're doing, or about upcoming streams, just ping me on Twitter or on Patreon, and I'll take a look and try to get back to you. My plan going forward is that the next stream will be on this as well. It looks like it might be feasible for us to get pretty far with this library over the course of one more stream, and I think it will be at most two more streams of this. I also want to do some more things on standard-library implementations, like trying to implement our own mutex, maybe. But there are a lot of cool projects we could tackle, so if you have ideas for those as well, let me know. I hope you've found this interesting and educational, and that it felt like you were debugging with me, as opposed to just watching me tear my hair out. But yeah, I think that's all for today. Thanks for coming out to watch, and I'll see you next time, I hope. All right, bye!