Do we have any agenda? I'm not sure, I'm trying to think. Does anyone have anything for the agenda? Yes, we have a question about the compilers. Okay, we'll put that on the agenda.

What was the end result of C++17? Can we use it if we don't use certain features? Is that it, or what was the outcome? The end result was that Lisa and Mike and I talked about his timeline for getting to C++17, because we were able to compile iOS with C++17 with some flags, but it crashes at runtime, and he just didn't have time to look at what that was. So I told him that we could prioritize that. I know, I don't think it's that big of a deal, but I mean... Are you gonna move Envoy to C++17? It's being discussed. I think we would like to, but the issue right now is that for Envoy Mobile, which runs on iOS and Android, there might be some issues. Okay.

Anyway, we can get going. What was your question about compilers? Yes, we tried to build the unit tests of Envoy with GCC and they were failing. So we were asking if you're going to support GCC, or only Clang for Envoy. Which version of GCC? 6.4, I think. Definitely not six. So this came up on the last call. Right now we do compile the source code with GCC; we don't compile the tests. Someone from Red Hat was gonna do a PR to compile the tests also with GCC. I'm fairly certain that GCC 6 is way too old. So I think... I think it's seven or eight. Yes, it's GCC 7. Yeah, so seven or eight will probably be required. I would recommend reaching out to the folks at Red Hat, and between you and Red Hat, if someone could just do the CI change to make GCC compile the tests. Right now, if you look, there's a CI job for GCC, but it only compiles the source; it doesn't compile the tests. So someone just needs to go in there and turn GCC on for the tests. And for the CI? Okay. Yeah. How can we reach the Red Hat folks? Is there... I think it's Brian Avery.
Is that right? Or maybe Demetri, I'm not quite sure. I mean, you don't even need to talk to them. Someone just has to go in and do the PR against CI to make GCC compile the tests. It's a pretty simple change. Okay, okay. We'll try. Yeah. Thank you.

Anyone else have anything? So, people from Google joined a couple of minutes ago because of Zoom issues. I don't know if there were any questions for us or anything. I don't think so. We just got started and we were talking about GCC again. So we're gonna... We talked about this last time, I remember. Right. Yeah, so hopefully we'll get that fixed. Did the folks at Google have anything that you wanted to chat about?

I had a question. What company authored the EGDS spec? Was that Alibaba? Alibaba, yeah. That is something that we as a community have to review. It's on my list for today to actually go through that. I would really encourage folks from Google to also look carefully at that. I think Snow had looked at it and possibly had some concerns. My personal feeling, for what it's worth, is that at this point, any time we add a new xDS API we have to put it through a lot of scrutiny, because it's something that we're gonna live with for quite some time. And this code is already so complicated that I'm just wary of adding more complexity unless we're all agreed that it's the right way forward. So it would actually be great if the folks at Google that care about what we can call sliced EDS or incremental EDS took a look. I think we all agree that it's a need that we have, but we should all make sure that we're aligned on what the API should look like. Right, yeah. Do you have someone, Josh, on the control plane side who could possibly help with reviewing that spec? Well, the obvious person is Harvey, and he'll be back next week. What about Hannah? Well, I mean, I was thinking more of someone who works on the Google control plane. Yeah, probably we'll need to have Hannah look at it. Okay.
Yeah, we will have someone. Do you think it would address... oh, sorry, go ahead. Yeah, I was gonna say, keep going; I have a different topic, but we can keep going on this one.

I only had one topic, and I don't know what the resolution might be. It's Nighthawk, and the way we manage the relationship between Nighthawk and Envoy right now. With some large-scale refactoring that we had recently, we've been running into problems where Nighthawk doesn't build — we import Envoy into the repository, and we have issues building both products together. So I was wondering if maybe there is potentially some solution, maybe there is a way for us to be more proactive about making sure that Nighthawk builds at Envoy's master HEAD. Or, I don't know, maybe that creates some other problems. I just wanna put it out there that maybe that's something that's worthwhile looking at. I think there's... sorry. I was just gonna say, we have the same problem with Envoy Filter Example. Envoy Filter Example does something similar: it pulls in Envoy, and every once in a while the build breaks and no one notices for a while. And then we file a PR. I don't think we've reached a solution yet, but it would be nice to have one, yes. Yeah, it also seems like the repositories outside of Envoy get very little traffic from maintainers. I had some PRs sitting in Envoy Filter Example for quite a while which were actually fixing a build breakage. So I don't know, maybe we need to tighten up notifications, or, I'm not sure, have some automated way to sync these repositories to Envoy's HEAD. I don't know if it's worthwhile to make it part of CI — I'm not suggesting that — but maybe have a better way other than just having somebody randomly look at it and say, hey, it doesn't actually build, let's do something.
Yeah, so thinking through a few things, I just wanted to point out that I think this problem is going to get worse in the next six to nine months, because I'm going to be proposing soon that we have a separate sandbox repo for filters that are either alpha quality or haven't passed the security bar, et cetera. We're just reaching a point at which the number of people that want to contribute filters is outpacing our ability to actually review them. And I think we all probably agree that WebAssembly is the future here, but if we're realistic, it's probably two years before WebAssembly becomes the de facto way by which people actually write extensions. So I just wanted to point out that I think we have to think through this, because this problem will get worse.

I don't have any answer here, but just brainstorming a couple of things: I think we could do a Nighthawk CI job like we do for Envoy Filter Example. We do actually import Envoy Filter Example and look for build breaks there. It doesn't catch all of the build breaks, but it catches some of them. We could also look into some other things which are more operational — like Slack notifications, or some type of status page for when our ancillary repos are broken, or low-priority emails that would go out to some public list telling us when something is broken. I think there's some low-hanging DevOps fruit that we could probably pick that would make this easier. Maybe put it on the maintainer list to at least look at it once a week. Yeah, I mean, I would rather have some type of notification, though, whether that's email or something else. And to be honest, if we do this thing around — let's call it a filter sandbox, for lack of a better phrase until we have a proposal —
I would expect the community to actually get in there and fix the filter sandbox, because the maintainers and primary PR owners are not necessarily going to be doing that. So I think we might actually need a public email list, like Envoy build breaks or something like that, that anyone can subscribe to. We'll subscribe the maintainers, and maybe the maintainer on call would look at it and fix it if it's easy, but the expectation would be that the community would go and fix it. That would at least raise awareness — people would know if it's broken. Would it kind of run nightly? I would think it would either run nightly, or maybe on every master merge or something like that. I think we can sort that out. I'm open to anything, really. Okay. All right.

I would suggest that maybe — I don't know how we want to track this — one of the things that I have noticed, and I hate to propose yet another new repo in the org, is that we actually don't have a place for discussing organization-wide issues. We kind of need a place where we can open issues against things that affect multiple projects. So I'm wondering if I should make a community GitHub project, which would only accept issues, and we could use that for discussing org-wide issues. I'm not sure if that would be useful. That's just a thought — we could use it for these types of things. Right. I don't know. I don't have a good sense, because I'm not a power user of GitHub, so I don't know how visible it is, how easy it is for community members to come in and say: I have a community-wide problem, and hey, here's a thing that affects more than just me. I would imagine that if someone did have a community issue, they would first go to the issues page, and we could pin an issue that says, you know, file your community issues over there. Yeah.
And these types of things don't happen so often, so I feel like if someone opens it in a project and some maintainer realizes that it's a community-wide issue, we can just move the issue. My general mode of operation is to try to have as little process as possible, figure out what's working or not, and then we can always fix it later. But I feel like there is some low-hanging fruit here that we can probably pick. Okay. And on this topic, this is the kind of thing where, if we come up with a list of things that we want from a tooling perspective, this is something that CNCF is actually pretty good at. We can find a contractor to come in and help us build some scripting or some stuff. We just have to clearly specify what we want. Okay. Yeah. So I would suggest as a next step, whatever you're comfortable with: either I can make a hosting ground for an issue we can discuss in GitHub, or we can start a Google doc. I don't think it really matters. Well, let's try the GitHub hosting — you know, the separate project for that. Okay. I will make that today, and I will send out an email to everyone about it, and then we can just use that for org-wide issues. Okay. Sounds good.

Okay, I did just wanna briefly give an update on this header map thing that I'm doing, and then maybe, since we have the folks from Google here, we can briefly discuss next steps. Does that sound good? Yeah. So this refactor has been extremely heinous, but I'm actually almost there. I have one more massive PR, which is mostly changing types — moving to the right types — and then I have one final PR which actually splits the interface, which means that request headers won't have a status and response headers won't have a method. And the nice thing about this PR is that I've had to fix all kinds of tests which had pretty busted assumptions about using the wrong headers.
So it actually has cleaned up quite a few things. I would suggest, though, that once I land the next two PRs, that's probably the time to stop and talk about next steps, because I think there's a lot of things we could do at that point. I have a pretty clear idea of how to implement static registration of O(1) headers, but then there's the question of whether we wanna get rid of that system entirely. So I would love to hear from all of you: what do you think? Once we get this portion of the refactor done, it'll give us a lot of flexibility in terms of what we do next. Is the next step to pause and then do some perf analysis? What do you all suggest? So yeah, definitely do some perf analysis. I'm trying to get a use case — a real worst-case scenario, basically — and see how it works, when it works well and when it doesn't. And I would suggest trying different underlying implementations. Maybe, you know, request headers will be one type and responses will be another, but the metrics are still an open question.

Okay, well, if this works for you, I think what I'll do is land the next two PRs, because in my opinion it can only make performance better: request trailers now have no O(1) map at all, response trailers have like two entries in the map, and the request and response headers are now basically split, so they're using half the memory. So it can only make perf better. And then at that point, I can do a small write-up on what I think would be involved in implementing static registration of O(1) headers, but I'm not going to implement it, because I don't want to waste the effort if we collectively decide that that's not the way we want to go.
Yeah, I kind of agree that the key next step would be to set up a benchmark that we feel is representative — actually, probably several different benchmarks representative of the different scenarios we might have, like whether Envoy is an edge proxy or an internal proxy, and the common patterns we would see in those environments. I'm not actually sure of all the things to vary, and maybe we can brainstorm in that doc, or a new doc, what would be a good, simple request flow. Oh, I guess another thing that would probably come into play here is when there are path matchers that involve header matching, where we don't know at compile time what the names of the headers they want to match on are. Correct. So I'm just thinking of a few things, and I'm sure there are things I wouldn't think of on the spot. Yeah, let's collect that. And then we can maybe set up some Nighthawk-based benchmarks, where we can just have a script that runs through all of those — takes an hour or whatever — and gets us some numbers, so that we can make rational decisions on what the data representation should be.

Yeah, that sounds great. And I'm torn here, because from a pure programming-fun perspective, I feel quite confident that we could make the O(1) map fully extensible. I'm quite positive that we can make it extensible both at compile time and even probably at config time, so that people could basically configure all the headers they care about. With that said, I agree that the complexity is probably not worth it. So, whether we iterate on the doc or in GitHub, I think it is worth exploring from an operator perspective: do we want it to work well in the general case? Are we okay if we tell operators that they need to specify the headers they care about?
So the only thing I would suggest is that beyond benchmarks, I think we should look at some user stories and just try to make sure that we capture the ways people would realistically use it. Yeah, that's a good question. I think we've tried to get this information before. I think for us it's challenging because it's considered PII. Yep. We can't just simply go and scrape a whole bunch of... Yep, makes sense. And say, oh, this is how people use it. Yep, yep, yep. Even though I wish we could. So I think it would be good to also get input from the community. Agreed. This actually was the biggest data point for us: how people outside of Google use it. And unfortunately, to us it's somewhat opaque at this point.

And yeah, just one last point I was gonna make — and again, I'm not actually proposing that we do this — but as part of this whole refactor hell, I've fixed a ton of assumptions where basically there was code assuming that the only implementation of HeaderMap was HeaderMapImpl. That's all fixed now. So we do actually have the potential, relatively easily — because there are very few places that actually make header maps in the prod code — to have different implementations. So again, not proposing that we do that, but I think this is an interesting problem because we have a huge menu of possible solutions here. Right. But by the way, thank you for doing this huge refactoring work. I personally think it's really valuable to separate by types and have essentially static enforcement of behavior. I think it's a really, really good thing for a critical piece like the header map. So kudos. I caused the problem in the first place, so I feel like it's my duty to fix it. We've all done that, but thanks for stepping up and doing all that. I was scared of that one. Yeah, it's been really awful.
Actually, not in the prod code, but the tests are just heinous, because there's no easy way to just refactor them with find and replace. It's very manual and awful. Yeah. Anyway, I'm almost there. I'm sorting through various final issues. Okay. Okay, great. That sounds like a great plan. I'm super excited about that.

I have one more thing to bring up. Similar to how we ran the hack-it, we're also running an internal fuzz-it for two or three days next week. And I have a list of proposals and proposed tasks for fuzzing. But I also wanted to solicit input: if there's anything community-important, people can file an issue or follow up with me or whatever, if they want a specific area fuzzed or they have an idea of something that's important to them. Awesome. Are you able to share the list that you have now? I think I can scrub through it and make sure I can. I would love to share it with you just so that you can see it. I have a doc that's kind of a collective tracker of all of our current fuzzers. Okay. And I think that would also just be helpful to have. Maybe I will publish the current ones onto our fuzzing README page as a sort of status update. Yeah, that would be good. I'll actually do that. Yeah, I think that would be really helpful to everyone: a sort of status of current fuzzers. Yeah, that would be great. I'm happy to look at the list under NDA, but I think from a community perspective it would be great if we could do a cleaned-up version and just let them know what you think the status is. And if that's something that would better go under a GitHub issue, should I just file a GitHub issue with proposed fuzz targets and have people comment? It's up to you.
I mean, it's hard to keep this stuff up to date, but having some markdown page in the repo with some status would be useful, I think. And I already made a label called fuzzing. So what I would actually do is triage all of the issues currently marked fuzzing in GitHub, and then for future work, or ones where you want to solicit people, I think that would be good. Oh, the other thing I was gonna point out real quick is that this week I'm gonna put some Google Summer of Code projects up for Envoy. One of them I'm gonna do around documentation, but I feel like fuzzing might actually be a good Summer of Code project. So if you have any ideas there, we could put something up. I have a really big class of fuzzing projects that I was debating whether or not to bring up in a channel. Just generally, we have dictionaries you can fuzz with — like keywords and specialized things — and I think it would be really interesting to somehow plug interesting fuzzing-produced headers into our integration tests. Yeah, that would be great. Something like that. I was thinking about that, but I don't have a good design for how we would do it. But I think that's something we might want to do: have a way of linking our fuzzing-produced stuff with the integration tests that we have, and vice versa. I would suggest, if you have the time, just trying to capture all of your current thoughts in future issues in GitHub, and we can at least use that for some group discussion. I would be hesitant, for Google Summer of Code, to do anything that's too complicated. But if you have some low-hanging fruit that has never risen very high on the priority list but you kind of want done, that's a perfect Summer of Code project. Okay.
And from what I understand, Google Summer of Code projects are due March 1st. Yeah. So you and I can talk offline; I can tell you where to post things. I'm going to do one on docs. If you want to do one on fuzzing, you can. There's no pressure. Sounds good. Yeah. Okay.

I have a quick idea for one that I just wanted to bounce off you. I hadn't started to write a bug about this, but it's to have a more user-friendly view of the statistics on the admin port, directly within the server. Sorry, what's that? To have a better user interface for looking at the statistics. Right now you just get a raw dump of all of the statistics that you have to refresh to get an update. And there's an issue about this. Sorry. And by user interface, do you mean like an HTTP interface, or... okay, the JavaScript kind of thing, where you actually interact with these things. My personal opinion there is that we should probably make that extensible in some way. I think it would be really easy to make some of the core admin handlers extensible. So if we wanted to put some HTML handlers in there, we could have that be one of the extensions in the build, just because I think most operators probably don't care about that. And, you know, in the interest of less attack surface, it feels like we should make that separable. Sounds like it goes back to the admin UIs and extensions project. Well, I mean, it's an extension project, or at the very least I feel like it should probably be compilable out. Oh yeah, definitely. Yeah. So as long as it's compiled out, that makes sense.

You know, there's another entire class of project here which I've talked about with Alyssa off and on. I think it relates to stats and relates to runtime. Stats and runtime right now are basically free-form, right? In the code base we have a bunch of strings.
And from an operational standpoint, that actually is kind of difficult. For a long time we've talked about what it would mean if we actually had a proto schema for every stat or, you know, every runtime value — a schema that had things like help text, whether it's a gauge, or what the runtime ranges are, with validation. I mean, it's not a small project, but it's also not impossible. And I feel like there would be so much benefit here, because not only could you have better checking of what runtime values there are, and deprecated features, and all of those things, but you could write tools — you could have automated help text. So that's something that, if we were looking towards an intern project or something along those lines, I think would be really amazing. And it would be along these lines of having better debuggability, like better dumping on the admin port. Yeah, there's a lot you could do there. Yeah. And that would also allow — I mean, if you think about where you can go with that — today the runtime discovery service is basically a flat struct; you could move to sending an actual proto that can be validated. Or in the future, if we fetch stats out via an API, instead of having a flat map with strings, you could basically dump an actual proto. So I think it's something to think through; it could be very powerful. Yeah. Cool. All right. I have to drop, but thanks. This is a great discussion. Thank you. Thank you.

Hey guys, do we have a recording of these sessions? I want to watch the compiler talk from last week. Do you have a recording on Slack or something? I don't think we are recording any of these sessions. At least, I'm not aware of it being recorded.
We could potentially bring it up, but somebody would have to be present on the call, record it, and then put up the recording. Okay. I think it's doable, but I don't think we are doing that right now. Okay. I think you can just bring it up on Slack — you know, maybe in Envoy users, or I don't know where it would go — or maybe send it to the Envoy maintainers list. Yeah, it's certainly recording here. See, on the Zoom, it says it's recording. So I thought maybe you're uploading it somewhere. No, I think you can just do it for yourself. Ah, okay. Got it. So mine seems to be off. Okay. I don't know. Yeah. But maybe it is. I'm not very familiar with this product, so it could be that there is something recording somewhere. Okay. Thanks. Thank you guys. All right. Bye.