Hey there, happy new year. Got Harvey here as well. Since we're starting with the V3 API update, I think that's up to you.

Okay. So for V3 alpha, many of the major PRs were merged last week. Anyone who's working with the APIs internally: if there's a V3 alpha alternative, you should be using it. Most of Envoy internally is updated now. In theory, this should make absolutely zero difference to configurations, whether bootstrap or config that comes over the wire. If you notice issues, please speak up and say something, because we are planning on freezing the V3 API in the coming days and also cutting the 1.13 release, ideally before end of week. So a lot of this relies on us getting useful signal from folks who are actually working at master HEAD and who might notice issues that arise as they do rollouts and bump past where we've cut V3.

We've noticed in a couple of recent PRs that there has been some memory increase, per endpoint and per cluster. The per-cluster one is not significant. For the per-endpoint one, I'm actually about to merge a change which will increase the minimal endpoint overhead by about 20%. That has to do with V3, specifically with switching away from hosts to ClusterLoadAssignments. I don't know exactly why, but you may also notice some additional CPU usage if you do a lot of configuration processing, for example ingesting large configurations. Some of this is to be expected. We haven't quite quantified what it is; we expect it not to be a factor-of-two blowup, but it could be single digits or low double digits of overhead.
So we would like to learn more as we progress, but I don't think any of these things are going to be showstoppers for the 1.13 release.

On to the 1.13 release: as you know, it's time for our quarterly release, and a lot of stuff has happened in the last quarter. One of the major things will be the V3 API, and there are lots of other things which are going to make their way in there. I'll be looking at cutting that release later this week if there aren't any fires burning. I'm not sure what else there is to say there.

Okay, I think that's a good summary. So are we shooting for the end of this week, or do we think that's not realistic? I don't know; how are we doing on deprecation? Yeah, I think deprecation is probably the long pole, that's my guess. I have everything set up; it'll take me about an hour per PR once you land yours. I was reminded last night — oh jeez, I messed up. Okay, I'll do that today. Did you do the changes to the runtime thing? Yeah, we'll sync. I should be able to get one of mine, if not both of them, in today, and the other one in tomorrow. Again, 98% of the work is done; it's just merging Harvey's changes and doing a tiny bit of cleanup, like spelling of config strings and tests.

Okay, cool. On those runtime keys, my personal opinion is that we can play it a little fast and loose, in the sense that I think people generally are not setting them, and if they were setting them, they should have told us. So I think it's okay to just change the keys, as long as we change them to something that makes sense. So yeah, I think optimistically end of week.
If there's potential slippage it could inch into next week, but definitely no later, because we have a bunch of other reasons it can't slip further. In particular, right now we would like folks to refrain from merging any high-risk PRs — though obviously we can't remain in that state indefinitely. You should post that in the maintainers channel, because not everyone's here. I did, on the channel. Thanks.

Okay. I just wanted to briefly mention — this has come up before, and it came up again recently with the increasing number of extensions that people are trying to merge into the main repo — I think we are going to have to start thinking about, initially, doing multiple Docker images. So you could think of a core Envoy version with only security-vetted things, or something like that. And then, per that tweet, there's the Envoy ultimate edition, which has all of the extensions. But I think the next logical step here, which is something the community is really going to have to talk about, is whether longer term we keep every extension in the main repo, or we end up having something like envoy-filter-example that becomes a little more official. All of the overflow extensions would go into that repo, and the bar for merging those extensions would be lower. And then if people want to get them back into the main repo for inclusion in the core images or something like that, we would require fuzzing, various other security-type things, documentation, and so on. There's no real policy here yet; I'm going to put something together for community discussion in the next couple of weeks. But if folks have thoughts here, I think that would be useful — or if anyone has any thoughts on the call right now.
I think the thing to notice is that people submit their extensions to Envoy proper for a variety of reasons. One of these is that, as part of the official Envoy distribution, the extension is going to be in everybody's Envoy binary. Another is to get CI coverage. Another is so that it's considered one of the high-quality, first-class Envoy extensions. I figure we can at least provide the CI coverage via this external repo, and that's essentially what we'll be shooting for, right? The other benefits, you're going to have to work for, at probably a higher bar. Yeah. So if you're interested in this topic — and if you're a vendor, you probably are — I would say definitely reach out. I think it would be an interesting conversation to have.

Oh, quick question, Matt: does this then bring us back to the question of loadable modules at some point? Yeah, it does, and there's an open issue there, which again I don't really think is that hard — I think someone could implement it. But I don't know that it solves the problem, because even with loadable modules, you're still probably going to expect some type of bundling, and especially with C++, they all have to be compiled at the same time, yada yada yada. So I agree that loadable modules potentially make it better, but I don't know that they really fix the underlying problem, in the sense that many vendors today, from what I've seen in the Envoy ecosystem, are expecting people to bring their own Envoy and have it just work — and that means all of the extensions are already bundled in.
So just from a community perspective, there's been some ongoing work in having config dumps say which extensions are compiled in. This is a very complicated topic where we have a tension between people who want a very limited trusted computing base and vendors who want the kitchen sink so that they don't need to worry about deployment issues. It's quite complicated. Yeah, and I think if we want to go to loadable modules, WebAssembly aside, C++ modules will be hard to keep from breaking significantly on an ongoing basis, given that we have no internal API stability. Anyway, if you're interested, please reach out to me on Slack; I'll type something up and we can hash it out in a doc.

The only other thing that I wanted to mention is the stable release policy — this has also been kicking around for a long time. Post-1.13, we're probably going to start a policy where we maintain multiple stable branches. That will include backporting security fixes to at least the last two releases, to give people at least a three-month window to upgrade. It will probably also mean, in conjunction at least with the Istio team, backporting bug fixes as well. However, this will require community stable maintainers. We've talked to various people who might be interested, and I would expect another policy update here probably in the next couple of weeks. But we will need community help, so if you're interested, please also reach out. I thought Istio wanted at least a year's worth of releases? Right — I think if people want more than X, they're going to have to pony up the resources, but that's something we will have to hash out.
I don't think it's reasonable to ask the primary Envoy security team, the upstream maintainers, to backport security fixes across that many releases for the Istio team — it would take a lot of resources, and it's just too time consuming. But these are the kinds of things we will have to hash out for sure. I mean, technically they already have resources, because they're already doing it. Right — "getting resources" may just mean that the Istio people agree to do it, and if they don't do it, it just doesn't get done. But I think it's really important to spell out what the upstream maintainers will and won't do versus what the community will and won't do. For example, it's great if the community does X, but I think it's more important to say that the upstream maintainers will only backport security fixes to the last two releases, or something like that. But again, we'll help with the scaffolding — for example, allowing people with early access to do more. Yup, I'm done.

Welcome in, Yan, for anyone who hasn't met Yan. There's been a lot of awesome work in areas like security fixes from Netflix, issues around codecs, around bootstrap, node enhancements for exactly the kind of extension discovery Matt was talking about, and also a bunch of contributions toward the V3 API. So yeah, that's been super awesome. And I don't know, Yan, do you want to say anything about the way that we think about this?
Well, I think the plan right now is to focus on some of the data plane work and make sure we get some sanity there, maybe do some refactoring, and in general help with reviews and other stuff. Great.

Next up, file system normalization. Yeah — most of this discussion happened on the OSX dev channel. It just turns out that C++17 std::filesystem remains completely unusable, or at least unpredictable, across POSIX, Mac, and Windows. So we finally realized it's easier to just deal with all of this in terms of POSIX or Windows specifics directly. We've almost got a patch together that gets rid of std::filesystem altogether from the working tree. I'm also looking at the fact that there's a tremendous amount of overlap between the IoHandle API we use for sockets and the file system API. It looks like there ought to be a superclass of the two, or we should be able to abstract this enough that all these things look similar and are usable by everybody without a great deal of hassle. So that's our current project, and why we've kind of gone off the rails again in terms of getting core master compiling on Windows — this problem had to be dealt with first.

Yeah, on the std::filesystem side, I think that makes sense to me, just because it will likely allow us to upgrade to C++17, since we already know that on Envoy Mobile std::filesystem is not going to work for the foreseeable future. So my advice is: we already have the file system abstraction; as long as we plumb that through everywhere, then even if it's not implemented yet on Envoy Mobile or on Windows or whatever, we can get that done. And then on the IoHandle stuff, there's more work that needs to be done there.
So if you could float any proposal by myself and Dan, who's working on QUIC, she and I can take a look. Happy to do that.

Actually, I realize now why you're pointing at me, because the next agenda item is mine. I just wanted to put in a plug before I release it: we've been iterating on a proposal to experiment with improving header map performance in the worst case, and it's almost ready to be distributed. I just wanted to see if you are interested in reviewing it; I'll of course put a link on the Slack channel as well.

Could you give a quick summary of what it is? Sure. There are some O(n^2) operations around headers which the system doesn't know about yet. There's also a little bit of conflation between being fast and coalescing behavior, which I think would be better to clean up a little. The doc does not actually specify what to do; it just gives a bunch of options and pastes some performance graphs from what I experimented with about a month ago, just to get that out there. And I wanted to get somebody full-time to work on this.

There's also another issue: every time we show this code to security auditors, they zoom in on it. I know Matt's looking away because he has solutions here, but historically, every audit, folks focus on this code. It's not that the code is incorrect; it's that it burns time with the auditors, because they're looking at very low-level memcpy, malloc, buffer-resizing kind of stuff. With the absl replacements, whatever we do should hopefully simplify things and make it so that this is not the focus of auditors' time — auditors can focus on the interesting stuff.

The only thing that I would say is, I think there are two separate things here.
What I would suggest is that the proposal I put out to replace all of the low-level stuff with absl — I'm hoping that will be non-controversial, because algorithmically it should roughly replace the current implementation with something that is much easier to understand. So there shouldn't be a lot of discussion around perf; we can do basic benchmarks, but it should be roughly the same. I think what Josh is talking about — Josh, this is based on that PR that you did a month or two ago, where you did this adaptive thing that occasionally makes a map, right? There are a couple of other options, which I never prototyped, but I listed them out there.

Yeah, but functionally, the last time we did audits, I was doing analysis for the N-squared header stuff, and there were a couple of reviews where someone adds a new header lookup and you have to reason about what finding that header is going to cost and add it all up. It would be nicer to not have to worry about it, and to be able to say the data structures are efficient enough that we don't have to stress about them again. Also, one thing Josh and I were discussing the other day was: could we just rethink everything, start with a simple map, and go from there? Maybe that isn't performant enough, but let's actually do some simple benchmarking and start from that. The hope is that the simplicity here could allow us to remove a lot of the other things that have historically been complicating factors. I understand the original motivation, but over time trade-offs change — the performance of absl::flat_hash_map might be different from that of std::unordered_map, all that kind of stuff.
And we should just get a clear understanding of what the complexity is buying us where we have it — what is essential complexity and what is actually optional. I think that's all fine; that sounds like a great use of time, and looking at this from a holistic perspective is something I've wanted to do for a long time. My point is more that I think we can land a functionally equivalent code cleanup relatively quickly, which will probably be non-controversial and make auditors much happier. So I would suggest we just do that, because I don't think it's that hard, and it will give us a little more breathing room to look at this from a top-down perspective. Also, Josh's doc links to the four in-flight open issues we have regarding header cleanup and says: let's do all of it. Yeah, I think that's the most uncomfortable part — although if we make some data structure changes, I'm not sure the other data structure swaps will still make sense. No, they probably won't. It's just a matter of timing again: we really have weeks of analysis to do before the real one lands, and you could probably do the actual swap in a day. And on that point, with all the fuzzing we have now and a bunch of other stuff, does it really matter? Probably not. So I'll defer to all of you to decide what we should do there. But yeah, it would be great — this is one of the areas where the code is really from three and a half or four years ago, and it hasn't had any rigorous performance analysis since then.

I think that's it for the agenda. Does anyone have other things they want to talk about while we're here? Yeah, this is Gary. I just had a quick question about the dynamic forward proxy. I saw that it was checked in, Matt, back in the summertime.
According to the documentation, it's in an alpha state, and I haven't seen a documentation update since then. I've been playing around with it a little, and it seems to be stable; I didn't have any issues with it. Do you have an update — is it expected to be stable, or are there other things? To be perfectly honest, the way in the past that we have gone from alpha to stable is when I hear about people using it in production without issues. (Sorry, there's some bad echo there.) So I think it's probably stable, and we can probably remove that tag. Again, we don't have any rigorous process right now for moving from alpha to stable. I think it's marked as robust to untrusted upstream and downstream, so it's sort of surprising that it's still in the alpha state if it's marked like that from the security perspective. This probably needs a little more rigor in terms of how we mark different things. The security marking reflects the intention, meaning we would fix any issues that come up there; the alpha status was more that, when we put out a new filter that doesn't have a lot of production coverage, we're just trying to say it doesn't have a lot of production coverage. But I do know that there are people using it in prod, so I think it's probably okay to switch it now. But again, that's yet another process thing we probably need to look at, so I will make a note there. Okay, that's great, thank you.

Has anyone done any sort of scalability testing, in the sense of how it scales with the number of dynamic hosts? I don't know, but I think the code has all of the right circuit breakers in place to avoid unlimited expansion and things like that.
So I think you could do your own measurements there; we definitely wrote the code with limited memory use in mind, but you'd have to measure for your own workload. Okay, great, thank you. And the last question I had: the HTTP CONNECT support isn't present yet — are you aware of anyone that's planning to work on that? There's someone who said they were going to do it, but I haven't seen any progress, so I would assume it's not being worked on right now. It's something I would love to see land; I think we have a pretty good idea of roughly what needs to be done, and it would just require a development resource. Thank you so much.

Anyone else? Hi, can you hear me? Yeah. I want to enable Envoy CI on the arm64 platform, but I have a few questions about it. The first question is: which cloud platform does the Envoy CI use, and does that platform support arm64? Right now we're doing all of our builds through Google's RBE service, and I don't know, but I doubt that there are arm hosts available. So it seems like, at least if we do it through that system, it would have to be cross-compiled. I know there are a couple of people running arm builds right now — there's an open issue for it somewhere that has people linking to their builds — but as far as I know, there's nothing official. Okay. And is it on Google Cloud? Yeah, right now the RBE system runs on Google Cloud instances. My advice would be to follow up with Lizan, who's not on the call right now; he's our build guru. I don't know that anyone on the call right now is going to be able to give you in-depth, low-level details on this.
And as a heads up, Lizan is out of the country right now, so there's a bunch of time lag; email may work better than Slack. Okay, thanks. Or Slack, with the understanding that there's going to be an eight-hour time lag, is also fine.

Hi, I have a question. You're currently using Google's QUICHE for QUIC — would you be interested in another implementation, using ngtcp2, if someone could make it work? Probably not, at least not without someone doing a ton of work on their own. Getting it working at all is an absolutely herculean effort. I would say from experience — I TL'ed the QUIC launch over at Google — it took us like two and a half years of tuning to sort out all the weird corner cases, the pathological ACK patterns and such. I wouldn't want to go through that again, which is why we're doing the QUICHE integration for Envoy: it's what we wanted. I don't think we have objections, but it's more like the OpenSSL-versus-BoringSSL thing: maintaining two is going to be uncomfortable. So if we have a good reason, let's do it, but without one, it's easier to have just one. Yeah — we did build the QUIC listener as an extension, so it should be technically possible to plug in a different QUIC listener. But that's the kind of thing where you would probably be mostly on your own, and I'm honestly not sure what technical problem you would be solving. Okay, thank you. Sounds good. See you in a couple of weeks. Bye. Thank you.