Hi, Greg. OK, here we go. You doing it? Yeah, we're in. That's the best we can do. So usually, this is the one taking all the notes. Nice bow tie. We're going to go ahead and take notes. Oh, vice versa. Do you use an app? Just glad they didn't block it. OK, single S, right? Did you just say that? Yes. I guess let's give it a couple minutes and then we can get going. Do you want to come around or are you just going to? I'll just listen in and stay at my seat. OK, if there's things that folks want to discuss that are not currently in the minutes, please go ahead and add them. So far, we have bots, more discussion on filters and extensions, and a chat about whether we should try to merge the data plane API and Envoy somehow. Can people hear me? OK, I don't think I can hear you. You sound muffled. OK, everyone's muted. Got it. Should we get started? Yeah, OK. First up on the agenda is bots. So I had opened a GitHub issue — which, let me actually find it real quick — bots tracking. I'm going to link it into the document. And the general feeling from the maintainers at this point is that there's a whole bunch of random things that bots could probably help us with: helping to actually fix DCO, doing release tasks, doing cleanup, helping people who aren't maintainers get assigned issues. And the feeling also is that there's a bunch of people out there who either don't know or don't want to learn C++, or they're just not interested in doing network programming, but who would like to help. So I think our thinking is that we'd like to get a pretty good idea of all of the different types of automation that we would actually like, and then possibly try to reach out to the larger community to find people who might be interested. So that's the general thinking.
So I don't know that we have to discuss it at super length today, but I would love to hear people's thoughts on either what types of automation people would like to have, any ideas on venues in which we could find people that might want to help, or anything like that. I'll throw in one plug for a PR I have up right now, which is about automating issue creation for deprecations. I think I wrote that in an hour or two, and it's just using PyGithub and GitPython. And within 50 lines of code or whatever, you can do a lot. This isn't a particularly heavyweight thing to do. In fact, I think most of the work here would be just making sure that this is operationalized. We have literally probably 20,000 lines of Python code at Lyft that does this type of stuff. So if you're ever looking for a different job, you can bring this script to the big leagues. But yeah, I mean, you can do super amazing things with the GitHub API. So I think this is part of a larger issue, which it would be really nice to figure out: we could definitely use a bot that we write and deploy. It's not super clear to me where we would deploy it. So I think that's probably a question that we could take offline with Chris from CNCF, just to talk about: should we deploy it on something like Heroku, or do we want to get some kind of small AWS or GCP account where we could deploy on a micro instance or something like that? I just don't really know how these things are typically done. Yeah, I mean, also, some projects are using, you know, more complicated things, let's say webhooks and so on. For Envoy, at the scale of our development, I think a cron job which runs every minute would be better. Yeah, that's a great question. There are super simple things to do. That said, having done a bunch of GitHub programming, doing the actual webhook stuff is also super simple.
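As a sketch of how little code the webhook side takes: the dict below follows the shape of GitHub's issue_comment webhook payload, and the "/assign" command is the hypothetical bot behavior discussed in this meeting, not an existing bot.

```python
# Minimal sketch of handling an issue_comment webhook delivery for a
# hypothetical "/assign" command. The payload fields (action, comment,
# user, login) mirror GitHub's documented webhook JSON; the HTTP server
# receiving the delivery and the API call to actually assign the issue
# are left out. As noted in the discussion, deliveries can be lost, so
# a cron-based sweep would still be needed as a backstop.
from typing import Optional


def parse_assign_command(event: dict) -> Optional[str]:
    """If a newly created comment's body is exactly "/assign", return
    the commenter's login so the bot can assign them; else None."""
    if event.get("action") != "created":
        return None
    comment = event.get("comment", {})
    if comment.get("body", "").strip() == "/assign":
        return comment.get("user", {}).get("login")
    return None
```

A real deployment would sit behind a small web server verifying GitHub's webhook signature before calling this.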
So if we actually have a host running — the problem with the webhooks is that the webhooks are lossy, which we found. So you still probably have to do some type of cron job anyway, but you could do a cron job that runs less often and then use webhooks to make things faster. You probably have to do webhooks for some things: if someone typed /assign, you would want that to happen right away. Or if we did a bot that could help people fix DCO, you'd want the bot to be able to comment back and forth. But I think that if we find a place to host a bot, we can then find people to work on it and actually work on deploying it. It sounds simple in theory, but if we have a bot deployed somewhere, we need to figure out source control for the bot code, how we deploy it, permissions for deploying. There's just a bunch of non-trivial things to sort out. So, OK, I guess I would suggest that maybe we could take it to the GitHub issue, and we could just put all of our ideas there in terms of things that we would like to see. And then maybe we can talk to Chris over at CNCF to figure out how we would host something. That's probably what I would suggest. And on a related topic, I find as a reviewer that the GitHub interface is pretty terrible at letting me see when a particular PR was last modified. As I do my sweeps every few hours, I would really like something better. I don't know if anyone's aware of other tools that interface with GitHub's API which allow you to ask, what's changed that I'm assigned to in the last six hours? I feel that would actually help improve the velocity of review turnarounds, at least for the ones that I'm doing. Yeah, I don't know of anything off the top of my head.
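The "what changed that I'm assigned to in the last six hours" sweep just described might look roughly like this; the dicts mirror the JSON shape of GitHub's pull-request listing (`updated_at`, `requested_reviewers`), while fetching the data and sending the email or Slack notification are left out, and the function name is our own, not an existing tool.

```python
# Rough sketch of the periodic review sweep: given PR metadata as
# returned by GitHub's pulls API, pick out the PRs that have a review
# requested from a given person and were updated recently.
from datetime import datetime, timedelta, timezone


def recent_prs_for(reviewer, prs, hours=6, now=None):
    """Return PR numbers with a review requested from `reviewer` that
    were updated within the last `hours` hours."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=hours)
    result = []
    for pr in prs:
        # GitHub timestamps look like "2018-06-01T10:00:00Z".
        updated = datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00"))
        reviewers = {r["login"] for r in pr.get("requested_reviewers", [])}
        if reviewer in reviewers and updated >= cutoff:
            result.append(pr["number"])
    return result
```

A cron job could run this every few hours per reviewer and email or Slack-message the resulting list.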
That doesn't mean that it doesn't exist, but even for what you just talked about, we could write 100 lines of Python code that would basically do a sweep and send people emails. So I think there's pretty low-hanging fruit there. We could do some Slack integration also, so you can just ask. We're just going full hipster here. Yeah, Slack, that's right. Yeah, we can. Yeah, that's true. So at Lyft — again, not to plug our totally hipster development setup — we have a whole set of bots that are on Slack and also actually go through and talk to GitHub. And you'd be surprised at how little code that actually is. The Slack API is super simple. The GitHub API is super simple. So it's more just: where do we host the code? So, OK, how about this? Why don't we figure out where to host some code? Actually getting a Slack token and a GitHub token that we share is super simple, and then I suspect that we could hack on some stuff pretty quickly. Sound good? OK. OK, are you taking that action item? Yeah, so I was going to say, whoever's typing, just assign the action item to me, and I will figure out where to actually host code. OK, anyone out there have any further comments on bots or automation or anything? OK. Next topic: so people might have seen, I've started to do PRs for the repo reorg. The goal is to get the extension code organized in a more known and consistent way, both to make it easier for folks to discover extensions and learn from them, and also because in the future, I think one of the ideas is that we would like to be able to have code owners for extensions who are not actual core maintainers. And I think this will help us scale. So this discussion item is an open-ended discussion.
I don't really have an answer right now, which is: in the future, as we increasingly get people that want to have extensions that are potentially in the Envoy repo — you can think of how the Linux kernel works with device drivers — there's a whole process in terms of how we actually scale that in terms of CI. And that ranges from things where we have extensions in the repo that are not actually tested in CI and are only given a cursory look by maintainers to check that they're somewhat sane, to extensions that are tested as part of CI, to some type of dual sign-off process where we have first-level owners who do most of the reviews and then a maintainer who just does a quick sanity pass. So there are lots of different models here. I don't really have an answer now; there are a bunch of thoughts. So I guess I just wanted to start this discussion and see if anyone has any thoughts or strong opinions on, if we were to start opening the repo to allow different organizations and companies to have filters or extensions — whether they be for stats or access logging or filters — how do we actually want to make that happen? So one thing I'll throw out there is I think it is a good idea to try and maintain a reasonably high quality bar as we do this, because one of the main advantages of upstreaming code from companies and having all this code in the central repository is that as we evolve Envoy, as things break for these extensions, we fix them. If these extensions don't have suitable tests and that kind of stuff, that's going to be much harder, right? Yeah. Yeah, so one thing that had occurred to me is that what we could do is actually have a new repo, called something like envoy-extension-sandbox or something like that. And that repo is basically a total free-for-all.
Well, it's not a free-for-all in the sense that everyone gets commit access, but it's more of a free-for-all in the sense that every extension in that repo is not necessarily endorsed by the core Envoy maintainers; it's a place where people could actually collaborate. And if an extension meets a particular quality bar, or if the people that work on that extension want to promote it into the core Envoy repo, they would have to match Envoy style, they would have to do CI and tests, they'd have to do code coverage. And then they would likely also have to essentially volunteer to be owners and maintainers of that extension. And that would involve not having a single point of failure — so having at least two people that can do reviews. And I think, again, this is not a fully formed idea, but what that would do is allow people to actually host extensions in the Envoy org. And then if the extension looks promising, it would allow people to agree to a higher bar that would then allow them to promote that extension into the main distribution. I see one comment. Oh, Chris is double-booked. OK. Yeah, so that was my rough thinking, just in terms of having that kind of dual layer. And again, all of this has to be worked out, but the way that I would see it happening is almost very similar to the way that CNCF has started to talk about their sandbox versus incubation graduation levels. So it's the kind of thing where, to get into the sandbox extension repo, you just have to get the endorsement of one maintainer or something like that, right? But then to actually get into the main Envoy repo, you'd have to go through the entire review process. You'd have to agree to being owners and doing code reviews and stuff like that. So that was my super rough thinking.
And I feel like that would be a way of scaling this out to allow people to do extensions, but not bog down the main repo with extensions that might not be as well tested or as well supported. We would need to solve a similar sort of problem that we have with envoy-filter-example for CI, right? Yep, yeah. So likely what would happen there — and again, I'm just brainstorming now — but I think the way that I envision fixing envoy-filter-example is, again, with bots. Every day we basically bump the Envoy SHA and run CI, on envoy-filter-example and on the extension sandbox. If CI breaks, there'd be some email or some type of notification, but that would be more of a best-effort fix: someone would come along and basically have to fix it. Whereas obviously if we break CI in the core repo, that's a very real problem. But my thinking here was — there's an issue that I opened, and I was going to write down some thoughts and give people the chance to comment. I honestly don't know what the right answer is here. My fear is that — well, it's not a fear, it's the current reality — Envoy is becoming popular to the extent that there are increasingly going to be a lot of companies that want extensions. They're going to want extensions for their products, whether those be security products or logging products or stats products. And I can just see that there would be an explosion of extensions in the main repo. And I feel that if we don't get ahead of this, it's going to become chaos. I mean, I like the idea of at least having differentiated standards for review, and the separation, in that it will allow our reviewing load to scale. Yep. Just from a purely selfish point of view. No, no, no, no.
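The daily "bump the Envoy SHA and run CI" job described above could be sketched as below; the `ENVOY_SHA` variable name and pin-file format are assumptions about how a downstream repo like envoy-filter-example might track upstream, and the git commit/push and CI-trigger steps are omitted.

```python
# Sketch of the nightly bump step for a downstream repo such as
# envoy-filter-example: rewrite the pinned upstream commit in a
# WORKSPACE-style pin file, after which the bot would commit, push,
# and let CI report any breakage as a best-effort notification.
import re


def bump_envoy_sha(pin_file_text, new_sha):
    """Replace the pinned commit hash in a pin file containing a line
    like: ENVOY_SHA = "abc123"."""
    return re.sub(r'ENVOY_SHA = "[0-9a-f]+"',
                  'ENVOY_SHA = "%s"' % new_sha,
                  pin_file_text)
```

A cron job would fetch the latest upstream commit, rewrite the pin file with this helper, and push the result so CI runs against it.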
I mean, there's absolutely no way that we can require the core maintainers to be reviewing, you know, 16 different stats extensions and 14 different logging extensions. It just doesn't make sense. So I think there has to be this kind of multi-layered approach. I've come to the opinion, though, that trying to do a multi-layer approach within the main repo is probably going to be pretty chaotic, because you're either not going to test it in CI, or you're going to have relaxed standards for this one directory, which is kind of horrible, right? So to me, if you're in the repo, you basically have to adhere to the standards. And then we have this other repo, which is a little bit more of a — it's not a total free-for-all, but it's a little more of a free-for-all. And your expectation for the other repo then would be that if we have some CI failure, the person responsible for checking in the code would either fix it, or we would nuke the code from that repo — or what recourse do we have there? I don't know. I mean, those are good questions. I think what we'd have to do is — there are going to be changes. For example, if we change the filter API, today what we do is we'll go through and basically fix all the extensions. Of course, if we change the filter API and there's this giant sandbox repo, we'd probably have to go through and fix that too. To some extent, I think it's self-correcting, in the sense that as Envoy becomes more popular and there are more extensions written, the bar to change these APIs just gets higher and higher. So I do feel that if people change core functionality in a way that breaks a bunch of sandbox filters, they should probably go and actually fix them. But this all has to be codified.
So that's why I think, before we declare open season in the main repo — there are already people emailing us off-list to say they'd like to do filters for X, Y, and Z — I'm really hesitant to start letting people commit filters until we really think this through. Yeah, Matt, one other thing, again thinking about scaling: if we only run best-effort CI on the extension repo, the sandbox repo, then as we get more and more developers and more people breaking things, we'll get essentially hidden failures and things like that. And we'll actually slow down velocity there unless we adopt a fully presubmit-oriented CI approach. Yeah. At that point, is there a huge difference between that and the proper repo itself? I mean, you're on the hook. Yeah, I mean. Yeah. I was guessing maybe we could put in place some policy around, hey, if this remains broken through no fault of our own — Yes, right, of course. Yeah, right, right. Then we would nuke that code from orbit. Yep. And I think that's totally reasonable. But I do think that, per this discussion, there's going to be a pretty large document that comes out of this, whether it be in a Google Doc or Markdown. And I really feel that we have to nail this policy before we start; otherwise it'll be total chaos. So my current thinking is basically that for extensions today that are either written or endorsed by core maintainers, we just keep going with the status quo. So, for example, I'm going to add a tap dump extension next quarter, and that's something that, as a core maintainer, I will own — I will make sure that it works properly. But for other organizations that aren't core maintainers that want to own extensions, this is where we have to get this policy down.
So what I can do here — since I don't foresee anyone else champing at the bit to sign up for this — is go through and do a straw proposal on what this would look like. But again, I don't really have the answers here. I think this is going to take some iteration, so I would love to work with other people who are interested in this. So if you're interested in this topic, definitely reach out. I'm sure Josh would be, so I can definitely work with Josh on this. But if there are other folks that are interested, let's chat. And I would actually suggest that we start a small working group with three people or so, and just try to hammer out this proposal. And then we can get it out for people to actually review. Yeah, I would also be interested in that. Yeah, I think a Google Doc would be better than a GitHub issue, because it'll allow us to explore the multiple threads of conversation that happen in parallel. Sure, OK. Yeah, so you can assign that to me, just to at least do an initial straw proposal. But if you're interested in actually helping with the proposal — I don't have the answers here, so I think it's going to be a collaborative process. One other question I think is worth working out is: what about the extensions that are currently in the main repo? Do we, status quo, just leave them there? Yeah, right. So I think extensions that are in the current repo are already blessed extensions. They're being used in production; we maintain them as core maintainers. And, to be clear, I see those extensions growing, but I think the point is we have to keep the quality bar high, and more importantly, for extensions that none of the core maintainers run in production, it is critical that there are OWNERS files in there and there are reviewers who can actually do first-line reviews. Yeah.
Okay, on this topic real quick: something came up this morning in code review, and since people are here, I wanted to discuss it really briefly. So I've been moving the code over into the extensions folder, and there was a question as to whether we should move TCP proxy and the HTTP connection manager. I had put some verbiage in the GitHub issue on why I actually moved them. The TL;DR there is that I moved them because they are loaded as extensions, even though they might not be able to be compiled out. So my preference was, just for consistency, to keep all of the extensions together. The alternative is to keep them where they are, which I think is a little worse from a code discoverability standpoint. And then option three, which I threw out, is to actually make a new directory called something like core extensions; those extensions would follow the same directory structure as extensions, but they would not be able to be compiled out. So those are, I think, our three options. I wanted to throw that out there for discussion. Yeah, I mean, that comment on the GitHub issue was largely just a reflection of the to-dos you had in there to reduce the dependencies on things like the HTTP connection manager and TCP proxy. Do you think it would be possible, in general, to remove the dependency of all of core Envoy on the HTTP connection manager and TCP proxy? It would be possible if we made the admin handler optional. Given that I don't think that will ever happen — probably not. Given that the admin handler depends on the connection manager, I don't see us ever actually being able to compile that out. So the reason that I moved it was not so that it could be compiled out; it was just for consistency. And I still see value in that, but I don't feel super strongly about it. So I guess my opinion would be: either move it with the rest, just to allow people to learn from the code and have all of the extensions be together even if they're not optional.
Or I would propose that we do this core extensions directory, where it's very clear that everything in there is not optional — it's basically always compiled in. I think the existing structure is fine. I think what would be nice is that since we're making an exception to the rule — which says you're allowed to depend from core code on TCP proxy or the HTTP connection manager, just those two, or maybe there's a third one, whatever — we actually put this into check_format. We can easily just analyze the build files and verify. Yeah, that would — OK. Yeah, I will make a follow-up issue, because as I'm doing this, there are a bunch of follow-ups that are becoming clear. For example, in order to compile out Redis, you actually have to have pluggable health checks, which is something that we probably want anyway. So there are a bunch of follow-up items here that will come out of this. So I'm going to do a tracking issue for the follow-ups, and I'll be liberal with my TODOs. OK. OK. We have this one last item on the agenda. We can address it next time around; it's not really pressing. Maybe just leave time for any questions. Sure, sounds great. If there are no questions, we could talk about it now. Awesome. So, in the two minutes that we have: what's come up recently is that Istio is considering moving to a different model. They currently have a model similar to ours, where their APIs are separated from the core. There's a very good reason for the separation: you don't want to force a dependency on all of Envoy for consumers of the APIs, and logically the APIs are a form of specification, while the Envoy proxy is an implementation. Those are essentially the two guiding reasons, at least for us, and I assume for Istio. But it is a lot of overhead. I'm sure everyone's experienced the issues of having to check in docs in one repo: make a change here, change the docs there, change the API in the data plane API repo, then change the SHA back in the main repo, and so on.
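The check_format-style verification suggested above — analyze the build files and flag any core-code dependency on an extension outside a small allowlist — could be sketched like this; the Bazel labels are illustrative, and a real check would pull the dep lists out of the BUILD files (for example via `bazel query`) rather than take them as input.

```python
# Sketch of an allowlist check for core -> extension dependencies.
# Only TCP proxy and the HTTP connection manager are permitted, per
# the exception discussed; any other //source/extensions/ dep is
# reported as a violation.
ALLOWED_CORE_EXTENSION_DEPS = {
    "//source/extensions/filters/network/tcp_proxy",
    "//source/extensions/filters/network/http_connection_manager",
}


def disallowed_extension_deps(core_deps):
    """Return core deps pointing into //source/extensions/ that are
    not on the allowlist, sorted for stable output."""
    return sorted(d for d in core_deps
                  if d.startswith("//source/extensions/")
                  and d not in ALLOWED_CORE_EXTENSION_DEPS)
```

check_format could run this over the extracted dep lists and fail the build when the returned list is non-empty.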
There's a lot of friction there which could be avoided if we were all in a single repo. Now, obviously that has a disadvantage: if you just do it naively, you force a dependency on all of Envoy. The way this is solved — I believe it came up in the context of Kubernetes, and I think Turbine Labs also mentioned they had done something similar — is you essentially just have periodic bot jobs, cron jobs essentially, which go in (they can be called bots if you want) and actually synchronize just the API subdirectory from your main repo into its own standalone repo, and then folks form the dependency there. It seems like that would be a lot lower overhead for developers. Yeah, that sounds pretty awesome to me. I think if we can get the tooling, we should definitely do that. The other disadvantage I see to having the separate repos is reading the history: it would be really useful, when you see a code commit, to also see the docs in the same diff, just so you can tell, hey, this is exactly what the behavior is. Agreed. Yeah, so that seems like a perfect compromise to me: basically move the API and the docs into the main repo, and then do a nightly sync out to the existing data plane API repo. So I would suggest that we open a tracking GitHub issue for that, or maybe just link it to the bot issue, but this will be blocked on us having some type of bot cron system. Yeah, I'll take that action item and open the tracking issue. OK, I think we're getting kicked out of the room. Unless you would like Harvey to come in every morning and do that sync. Oh, absolutely. Yeah. All right. Cool. All right. Thanks everyone. Thanks. Bye. Cheers.
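The periodic sync job discussed in this last item could be sketched roughly as below: mirror the `api/` subtree from a checkout of the main repo into a checkout of the standalone data plane API repo, after which the job would commit and push. The paths, the `api` subdirectory name, and the function name are assumptions for illustration; the git steps are omitted.

```python
# Rough sketch of the nightly API-subtree sync between two local
# checkouts. Copies every file under <main_repo>/<subdir> into the
# mirror repo, preserving relative paths, and returns what it copied.
import shutil
from pathlib import Path


def sync_api_subtree(main_repo, mirror_repo, subdir="api"):
    """Mirror main_repo/<subdir> into mirror_repo; return the sorted
    relative paths of the files that were copied."""
    src = Path(main_repo) / subdir
    copied = []
    for path in src.rglob("*"):
        if path.is_file():
            rel = path.relative_to(src)
            dest = Path(mirror_repo) / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
            copied.append(str(rel))
    return sorted(copied)
```

A cron job would run this after pulling both repos, then `git add`/`commit`/`push` in the mirror if anything changed.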