And we have Matt. Do you want to call that quorum for a quick update? Sure. Who do we have on the QUIC side? You guys want to, actually, I'll scoot out and you can huddle in front of the laptop. So we have Dan and Ben here from the Google QUIC team. Hey, one sec. Do you want to take the big room? I don't think so, doesn't matter. Are you just joining over there? Hey, how's it going? Awesome. Do you both want to just give a little status update? That would be great. OK. Here's roughly what we have done and what's going on now. Right now we've finished all the platform implementation, and we're able to build all the QUIC core libraries as an external dependency. On the QUIC pipeline, for the interaction between Envoy streams and QUIC streams, we have the Envoy buffer to QUIC mem slice conversion. On the QUIC listener side, we have receive message, the time system, and alarm scheduling. We also have a ProofSource, but it's a fake implementation; it still allows us to do integration tests, but eventually it will be a real one. That's what we have done. I'm also working on send message, to allow setting the source address. So can I actually just recap that for people less familiar with the QUIC stack? Yeah. Essentially, part of the way the QUIC library was built was around this platform-neutral layer; I think Matt's familiar with it too. So there are a bunch of platform utils, the way QUIC does its address wrappers and alarms and all that, and that's basically done, along with the basic utilities. Dan is now working on the larger building blocks to get things working. We have the fake ProofSource, which isn't going to do real cert validation, but that will obviously happen next. And then what are the other major chunks to do now?
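The Envoy-buffer-to-QUIC-mem-slice conversion mentioned above can be sketched roughly as follows. The type names here are illustrative stand-ins, not the real Envoy `Buffer::Instance` or QUICHE `QuicMemSlice` APIs; the point is just the shape of the operation: re-wrapping a list of raw buffer slices without copying the bytes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative stand-ins for Envoy's buffer slices and QUIC's mem
// slices -- NOT the real APIs. The real conversion also hands
// ownership/reference counting across; this sketch only re-wraps
// the raw pointers without copying.
struct EnvoyBufferSlice {
  const char* data;
  size_t len;
};

struct QuicMemSliceView {
  const char* data;
  size_t len;
};

// Convert an Envoy-style buffer (a list of raw slices) into
// QUIC-style mem slices, dropping empty slices along the way.
std::vector<QuicMemSliceView> ToQuicMemSlices(
    const std::vector<EnvoyBufferSlice>& buffer) {
  std::vector<QuicMemSliceView> out;
  out.reserve(buffer.size());
  for (const auto& s : buffer) {
    if (s.len == 0) continue;  // skip empty slices
    out.push_back(QuicMemSliceView{s.data, s.len});
  }
  return out;
}

// Total payload length across all slices.
size_t TotalLength(const std::vector<QuicMemSliceView>& slices) {
  size_t total = 0;
  for (const auto& s : slices) total += s.len;
  return total;
}
```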
Right now I'm doing send message, to allow setting the source address. I'm also working on building the QUIC HTTP library, which still depends on some upstream fixes, because we depend on the SPDY code. And then, really, making that code work for Envoy. Got it. OK. And do we feel like we have a good plan for how we're going to do the actual integration with the HTTP connection manager? Or is that TBD once we figure out all of the plumbing with the library? For the HCM codec stuff, the question is basically how we want to do the monolithic QUIC codec. A long time ago I had a change to mock that up, but back then we didn't have the QUIC library yet, so it's basically just a document about the interface: here we should send a QUIC stream, a codec stream, and so on. But at a high level, we think we'll essentially hand each packet up to the dispatcher, it'll dispatch it to the QUIC codec, and then the QUIC codec behaves like the HTTP codec does. Yeah, that's my current thought: we'll essentially hard-code the listener so that we can instantiate an HCM, and then we'll have a QUIC codec that just spits out the normal messages the HCM already expects. And I think everything else should mostly work. Yes. I don't have a detailed design for the QUIC codec yet. That's fine; it's basically the same as what you're thinking. In the project planning doc I shared before, there's a QUIC codec section, with a link to the change I tried to make in the past, about how the HCM interacts with the QUIC stream, but we don't have the QUIC stream depending on that yet. OK, so just so I understand: based on the current status, do we have a very rough estimate of when you're planning on having a proof of concept working? Is that one month away, three months away?
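The packet flow described above (packet, to dispatcher, to QUIC codec, which emits the stream events the HCM already understands) can be sketched like this. Everything here is a hypothetical reduction; these are not Envoy's real dispatcher or codec interfaces, and the real codec would decrypt packets and demux frames rather than forwarding payloads directly.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A received UDP datagram, already parsed enough to know its QUIC
// connection ID (illustrative; real parsing is more involved).
struct Packet {
  uint64_t connection_id;
  std::string payload;
};

// Stand-in for the HTTP-codec-style callback surface the HCM expects.
using StreamDataCallback =
    std::function<void(uint64_t connection_id, const std::string& data)>;

// Per-connection "codec": in this sketch it just forwards payload as
// stream data; a real QUIC codec would decrypt and demux frames first.
class QuicCodecSketch {
 public:
  explicit QuicCodecSketch(StreamDataCallback cb) : cb_(std::move(cb)) {}
  void ProcessPacket(const Packet& p) { cb_(p.connection_id, p.payload); }

 private:
  StreamDataCallback cb_;
};

// Dispatcher: routes each packet to the codec for its connection,
// creating a codec the first time a connection ID is seen.
class DispatcherSketch {
 public:
  explicit DispatcherSketch(StreamDataCallback cb) : cb_(std::move(cb)) {}
  void OnPacket(const Packet& p) {
    auto it = codecs_.find(p.connection_id);
    if (it == codecs_.end()) {
      it = codecs_.emplace(p.connection_id, QuicCodecSketch(cb_)).first;
    }
    it->second.ProcessPacket(p);
  }
  size_t num_connections() const { return codecs_.size(); }

 private:
  StreamDataCallback cb_;
  std::map<uint64_t, QuicCodecSketch> codecs_;
};
```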
The reason I'm asking is that I'm starting to think about the L4 hashing component. I'm going to do a doc, probably in the next couple of weeks, about essentially finishing the basic UDP proxy and then building a QUIC hashing component that would hash on connection ID. That would allow people who don't have a hashing L4 load balancer to stand up an L4 Envoy that hashes on QUIC connection ID and forwards the packets to the backends that actually do the L7 termination. So I'm just trying to get a feel for it; it would be nice to align those things within a couple of months, so that people who want to do a proof-of-concept alpha could stand up the entire system. So do you have an idea of when we could get an end-to-end proof of concept? For an integration test that only supports QUIC on a single worker thread, we can have it in maybe two months. OK, all right, cool. I think that probably roughly aligns with the time frame in which I'd be working on the end-to-end L4 proxy stuff. So what I'll do is target getting a doc going within a month, where we can discuss what we want the L4 hashing component to look like, and I think by then, hopefully, we'll have, like you say, a simple end-to-end integration test. And my assumption is that you're going to work on enough client support that we can do an integration test locally, right? Yeah, that's actually something I want to talk about now, because there's just one blocker to using the current QUIC client code, which is GURL; our QUIC client code just assumes it's available. Sorry, what's that? GURL. What is that? Well, I don't think we should aim to use the existing QUIC client. I think that with fairly minimal work, we should be able to plug the codec class into the client side. Yeah, that's my thought. Underneath the codec, we just use the QUIC client directly. Right, but then you still need...
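The core of the L4 hashing component described above, reduced to a sketch: pick the same backend for every packet of a QUIC connection by hashing the connection ID, so that the connection stays pinned to one L7 terminator even if the client's address or port changes. This assumes the connection ID has already been parsed out of the packet header; `std::hash` is stable only within one process run, which is enough for a sketch, while a real component would need a hash that is stable across the fleet.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Map a QUIC connection ID to a backend index. Every packet carrying
// the same connection ID lands on the same backend, regardless of the
// client's source IP/port, which is what makes client rebinding safe.
size_t PickBackend(const std::string& connection_id, size_t num_backends) {
  assert(num_backends > 0);
  // Stable within one process run; a production component would use a
  // fleet-stable hash so all L4 Envoys agree on the mapping.
  return std::hash<std::string>{}(connection_id) % num_backends;
}
```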
Oh, the underlying library codec assumes that we have a URL class? Yeah. Oh, actually, if we use a codec... So maybe we need to talk about that. Yeah, I need to look more into whether we really need it. What I think is that on the client side, it's OK not to use a codec at all: we just use the QUIC client, give it a URL, and it will send the request for you. Well, I think we really want to have... once we have the actual codec class that takes a packet and spits out streams, it should be pretty easy to use the existing Envoy client test library rather than the existing QUIC simple client. Yeah, and that's where we want to go, also because on the Envoy Mobile side we're going to turn that into a production-quality implementation that does TCP fallback and a bunch of other things. So anything we can do on the integration test side to get us to a non-hacky solution, so that we can have a proper client, would be good. Obviously, we don't expect you to do all of that production-quality work for an integration test, but it would be great to think through what it would take to fit within the existing Envoy concepts on the client side. Yeah, so if it gets you up and going faster, I think it's OK to have your first synthetic test land with the Google simple QUIC client. But essentially, we should be able to reuse 90 percent of the work we're doing for the Envoy codec abstraction, to be able to just say: create a codec, and use it on this client. Yeah, that's my thinking also: if we can make this work on the server side, we should be able to have a codec client that basically supports QUIC.
And again, I admit I don't know all the details, but with all the plumbing work you're doing, it feels like it shouldn't be too bad. So basically I'm fine with the raw packet passing stuff and just a QUIC stream interface. Yeah, essentially the way the Envoy abstraction works is that the codec layer pulls in data and spits out streams. So for the server side, we've talked about pulling data in from the socket and handing it to the HCM through the codec, and the codec spits out streams. That same code, with minor differences between client side and server side (do I expect a URL and a method, or do I expect a status code?), is used in Envoy to create the Envoy test clients. For GFE, we have Jetstream and a completely synthetic test client, but in Envoy the codec class is used on both the server side and the client side. So once you've written that code, you're 90 percent of the way to having it. And once we move toward a production-quality implementation for actual client-side production traffic, the more work we can do to fit that into the existing codec client abstractions, the closer we'll be to having that work. OK, right. So we obviously don't have to figure that out right now, but it's something we should talk about and track toward. And it's something I'm going to increasingly spend time on; I'll be spending more time on QUIC this half. I'll be focusing mostly on the L4 portion for now, because that's work that you both obviously are not doing, and then once the L4 hashing component is done and I finish UDP proxying, I'll be available to help with the rest. Yeah, I was going to say, essentially, whatever remains, right.
It depends how different the client versus server codec is on the QUIC side, but it could be that if we land the server side, most of it carries over. Yeah, and this actually comes back to some of the conversations we've had, where there is some overlap in how, on the client side, we might want to support Happy Eyeballs and QUIC-to-TCP fallback and things like that. Oh, yeah. I've got some ideas on how that might work, and again, the expectation is obviously not that the two of you do that, but insofar as we can get closer to that direction, that would help. And I will say, absolutely, when you get to that point, we'll want to sync up, because we have five years' worth of lessons learned for you. Absolutely, right. And again, we'll be putting together design docs for all of that, so I don't expect to do any coding before we all review. So, all right, great. Is there anything we can be helping you both with that would make things go faster, or do you feel like you're mostly unblocked at this point? Other than the GURL issue, I think nothing is blocking us. But it would be great if, since I've been creating some standalone issues, someone could pick those up. OK, yeah, I don't know what the time frame for that will be, so let's just keep opening issues and then we can obviously track those. The other thing worth doing is calling them out on the Envoy UDP Slack channel, because I know not everyone who's interested in helping out is on this call. Yeah, and I think most people are also on that mailing list, envoy-quic-dev, so you can either post in Slack or send an email to that list if there are particular issues you'd like people to pick up.
And like I said, I think my initial focus is going to be the L4 hashing component, mainly because everyone that isn't Google doesn't have that. For anyone other than Google to do a production QUIC deployment, they'll all have to have this thing, so I'd like to get that going. Just calling out: I do think it's super worthwhile to do, but when we initially hit prod and were deployed and live, we made it all the way to Chrome Stable just hashing based on IP and port. Really? Oh, wow. It just essentially worked. We instrumented the client side and basically said: if you're getting bad QoE and you did a port rebinding, you just disable QUIC, and you latch that you had a port rebinding. Got it. Once we did all the connection-ID work, we got an extra five percent penetration or something, which is good, and it's worth doing, but you can totally use QUIC without it as long as you've got the client-side behavior. Right, right. You know, it's something I want to do anyway, just because it'll also cover the basic UDP proxy case, which is something people have been asking for. Also, for anyone who has a transparent proxy deployment, you absolutely want that, because one of the biggest QoE gains we got was when we stopped rejecting QUIC at transparent proxies and just forwarded it through. Yes, yeah, huge difference. Plus it was a nightmare dealing with it. So yeah, OK, well, I'm just saying it's not as critical as you'd think, as long as it works, which took a while. Interesting. OK, I just assumed it basically wouldn't work, but that's good to know. Yeah, it does actually depend on what idle timeout you use: you can use a much, much longer idle timeout if you're willing to do connection-ID hashing. Yeah, the hashing, I mean, that's really what it comes down to. Yeah, we use a 30-second idle timeout currently. Then it doesn't matter.
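The IP-and-port ("flow") hashing that carried Google to Chrome Stable can be sketched as hashing the UDP 4-tuple. This is a hypothetical illustration, not real load balancer code: it shows why the approach works (same tuple, same backend, deterministically) and why the client-side rebinding latch mattered (a port rebinding changes the tuple, so the flow may land on a different backend).

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>

// The UDP 4-tuple identifying a flow.
struct FlowTuple {
  std::string src_ip;
  uint16_t src_port;
  std::string dst_ip;
  uint16_t dst_port;
};

// Pick a backend by hashing the 4-tuple. Deterministic for a given
// tuple, but a client port rebinding changes src_port and can move
// the flow to a different backend -- unlike connection-ID hashing.
size_t PickBackendByFlow(const FlowTuple& t, size_t num_backends) {
  assert(num_backends > 0);
  std::hash<std::string> h;
  size_t v = h(t.src_ip) ^ (h(t.dst_ip) << 1) ^
             (static_cast<size_t>(t.src_port) << 16) ^ t.dst_port;
  return v % num_backends;
}
```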
Rebindings are actually extraordinarily rare on that timescale. Got it. OK. Intuitively, if you haven't done a lot of UDP before, you might assume that a lot of people have hard-coded 30-second timeouts even with traffic flowing, but that turned out not to be the case. So yeah, OK. Well, like I said, in the next couple of weeks I'm going to write a small doc on this topic and we can just discuss it there. Sounds good. OK, great. Awesome. Thank you, that fully answered my questions. Is there anything anyone else wanted to chat about on the QUIC side? That's pretty much it; I think other things we can discuss after we have the... Oh, sorry, there is one other thing I wanted to point out. Just so everyone is aware, there's a group of people moving forward with trying to get Envoy working with OpenSSL versus BoringSSL now. So before we all freak out, here's what I'll say. What we have said is that we are not going to support OpenSSL in the main repo; we will only support BoringSSL. The other thing that has happened is that what this group is doing, where it makes sense, is putting in an abstraction layer to basically abstract BoringSSL and OpenSSL away from the underlying TLS needs. My standpoint is that I don't want any of you to be blocked on trying to get QUIC working in any way with OpenSSL. We will just assume that it's BoringSSL only, and essentially it'll be up to them if they want to try to get it working; until then, we can just completely disable QUIC on an OpenSSL build. I don't think you can do otherwise, given that BoringSSL has the custom API surface QUIC needs. And actually, I thought that having QUIC disableable, compiled out as much as we need, might be really good anyway, to make sure we do plan on being able to compile it out, because people are going to want to have that.
Yeah. So basically I just wanted to warn you all that this is happening, and make sure you understand that there's no expectation in any way that you support OpenSSL. So feel free to just move forward assuming that BoringSSL is the only thing. But I bring it up because you might see PRs, or you might see people commenting in certain ways, and you should feel free to push back. Yeah, I brought it up with them with regards to that stuff. Yeah, well, it's a blessing in disguise, because that will keep the OpenSSL stuff in check. Yep. OK, great. Cool. Well, they're trying to bring over all the things. Oh, interesting. Well, again, I think that's something they can totally do, and something we totally have not signed up for, but if they make it work, it'd be cool. Yeah. So if they want to do the work to make it work with OpenSSL, they can absolutely do that; I just don't want us to spend time supporting it, because that would stop our progress. Actually, out of curiosity, what were the hooks? Do you know? Is it just zero-RTT? I think it's the zero-RTT stuff. OK. Is there something else? I mean, I'm pretty sure it's all zero-RTT, because BoringSSL implements TLS 1.3 but OpenSSL does not. Oh, it's the 1.3 stuff. OK. OpenSSL doesn't implement 1.3 with zero-RTT yet, as of the last time I checked. Got it. I haven't rechecked in the last week. So essentially, tip-of-tree works for the 1.3 stuff. OK, thanks. Cool. Thanks. All right. Thank you. Is there anything else anyone wanted to chat about? I guess I'll just give a quick reminder that the CFPs for EnvoyCon are due July 12th, so in about one and a half weeks. So if anyone wants to do a proposal or needs help, it would actually be super awesome to have a proposal on the QUIC stuff.
So if either of you are interested in doing a talk on the QUIC integration work, that would be, I think, really interesting, and I would encourage you to do a proposal for that. I'll fill them in on the details. They look very skeptical. OK, I don't really want to do a talk. That's right. Yes. OK, cool. Does anyone have anything else? Oh, another thing: while figuring out the build stuff, I realized I probably need to make some changes to the listener, because, as Alyssa said, we want the Envoy core to still build without the QUIC dependency. Yeah, that wasn't considered in the design doc, so I will make some changes to the listener stuff. Sounds good. Yeah. And on that topic, we can do it in the simplest way possible, which might just be that if someone specifies QUIC config and it's not compiled in, it just throws an error; we don't need to do anything more intelligent than that. Yeah, great. OK, thank you all for coming and giving that update. Right.
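The "simplest way possible" behavior discussed at the end can be sketched like this: fail configuration loading with a clear error if a listener asks for QUIC and support wasn't compiled in. `ENVOY_ENABLE_QUIC` here is a hypothetical compile-time flag, not Envoy's real build define.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical compile-time switch; the real Envoy build define for
// QUIC support may be named and wired differently.
#ifndef ENVOY_ENABLE_QUIC
#define ENVOY_ENABLE_QUIC 0
#endif

// Reject a listener config that requests QUIC when the binary was
// built without QUIC support, instead of failing later at runtime.
void ValidateListenerConfig(bool wants_quic) {
  if (wants_quic && !ENVOY_ENABLE_QUIC) {
    throw std::runtime_error(
        "listener requests QUIC, but this build was compiled without QUIC");
  }
}
```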