Okay, I don't have much to say at this point, but thank you everyone for coming. There were some great talks, and I thought it was a really good mix of end-user talks and technical talks, so I hope you all enjoyed it.

I just wanted to reiterate that the project continues to need help, and we need it across a wide variety of domains. Not just people with deep networking expertise; we're always looking for maintainers across all of the different filters and components that Envoy has now. Beyond core networking, we support so many different protocols, and as people have mentioned during this meeting, we have a lot of security-related filters like OAuth, ext_authz, and JWT. There's a lot of domain-specific expertise involved in knowing how all of those things work, and frankly, the maintainers don't always know; there are so many standards involved. So if your organization is really passionate about something like JWT, we would love to have maintainers who are specific to certain filters. If you're using particular protocols or particular filters and you want to maintain only that thing, that is definitely possible, so feel free to reach out on that side.

We're always looking for people to do release tooling and backports, so obviously reach out if you would like to do that. Documentation is an easy way to get involved, whether that's fixes or examples. Ryan has done a tremendous job with all of our sandbox examples; I think we have around 30 of them now, and they help people ramp up a lot, so working on those is fantastic. CI is an ongoing pain point, and there's always work that can happen there. There's just lots to do, so please reach out over Slack or email. You can always reach me, and I'm happy to route you to other people.

And with that, thank you again to our diamond sponsor Tetrate. We've got a bit of extra time.
There are a couple of maintainers here, so I just wanted to open it up and see if anyone has any general questions that weren't covered previously. Either I can try to answer them, or there might be someone else here who can.

The question is: with Envoy Gateway, there has not been a standardized controller for Kubernetes ingress, and will that change? I think people are going to have different opinions on this. For better or worse, there's been a large vendor ecosystem around Envoy. I can't speak for others, but I think that's been fantastic; it's one of the reasons Envoy has grown so much, and I'm very thrilled for all the vendors that have invested so much in Envoy. The flip side of a large vendor ecosystem is that it tends to be confusing for certain end users in terms of how they should approach the project or how they should use it. So it's both good and bad.

And again, for better or worse, if you go online today and search for Kubernetes ingress, you find blog after blog about Nginx. Nginx is a fantastic project that has been around for a long time, and I don't have anything bad to say about it, but Envoy is a different piece of software with different capabilities. From the project perspective, we're biased: we would like people to use Envoy when they're using Kubernetes, and there's obvious synergy with the CNCF ecosystem. So we think that experience can be better. Whether that can be achieved with the vendor ecosystem is a different question. I think Envoy Gateway has the potential to provide a better out-of-box experience for running Envoy on Kubernetes, and if we do it well, we can still allow the vendors to provide value on top. So I actually think we can get the best of both worlds. I'm hopeful that in two years, when you Google "Kubernetes ingress," you see Envoy Gateway, and not the Nginx controller.
And then I'm hopeful that all the vendors that have discrete solutions today will be providing value on top. That's my personal hope; whether it will happen, I don't know. Oh, JP found keys. If you're missing your keys, let me know.

The question is about load balancer extensibility, and why we are trying to move new load balancers into extensions. Envoy is seven and a half years old, so there's a lot of legacy in how the project has evolved over time. First, in general, we're trying to move as much as possible into extensions so that people who don't want certain features don't have to take them into their main build. Second, for load balancers specifically, there are now non-Envoy xDS clients, namely gRPC, and gRPC does not want to implement all of the load balancers that Envoy offers. By making each load balancer an extension, it's much clearer that it's an optional implementation. We would like that to happen so things are consistent, but again, that's the seven and a half years of debt.

The question is about the build system and how people do extensions. This is another one that's been talked about for years and years. There's been an issue open, I think for seven years, about why Envoy does not support dynamically loadable modules. We can talk about this over drinks, but there's no easy answer here. Static compilation has a lot of benefits; being able to load dynamic modules has a lot of benefits. I don't think the project is opposed to dynamically loaded modules, but no one has done the work, so clearly no one has cared about it enough to actually implement it upstream. And whoever does it will have to deal with the binary compatibility issue, even if that's just matching the SHA that Envoy was compiled with against the modules.
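To make the load-balancer-as-extension model concrete, here is a rough sketch of a cluster selecting its load balancer through the extension point rather than the legacy built-in enum. The field names and extension names follow the v3 API as I recall it, so treat this as illustrative rather than authoritative:

```yaml
# Sketch: an Envoy cluster that picks its load balancer via the
# load_balancing_policy extension point instead of the legacy lb_policy enum.
# Extension name and @type are assumptions based on the v3 API.
clusters:
- name: some_service
  connect_timeout: 1s
  type: STRICT_DNS
  load_balancing_policy:
    policies:
    - typed_extension_config:
        name: envoy.load_balancing_policies.round_robin
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.load_balancing_policies.round_robin.v3.RoundRobin
  load_assignment:
    cluster_name: some_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: backend.example.com, port_value: 8080 }
```

Because the policy is just a typed extension, a build that doesn't need a given balancer can compile it out, and non-Envoy xDS clients like gRPC only have to implement the policies they actually care about.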
I think the other side of it is that a lot of people use Lua, like a lot. For many of those dynamic use cases, Lua is widely used, and now we have WebAssembly. I would eventually like to see more Rust within the Envoy code base itself, but that's not really going to solve the static compilation problem; that's its own compilation problem. So I'm not sure. There are pros and cons in terms of being able to write a module, host it on GitHub, have it be easy to plug in, and maybe share that code a bit better. I wish there was an easy answer here, but there isn't. If people want to help out with dynamic compilation or other mechanisms, the project is certainly amenable to that.

The question is about the tap filter. I wrote that code a long time ago, and as far as I know, it works fine; I don't think there's been much development since. I think there are lots of interesting use cases, so I would love to see more work done there. One area that I think is really cool, and this is just me speaking, is that there could be a mashup of the transcoding filter with the tap filter to let people search for requests that, for example, have protobufs in them. I think that's technically possible, and it would be amazing. That work hasn't been done, but it could be, and it would be pretty cool. When you couple it with systems like Backstage, or other tooling that allows more hands-on development interaction, or what the folks at Lyft have done with their work on developer offload, there's amazing potential for request and response capture and other things like that. The Lyft folks have solved it in a slightly different way, by injecting their proxy server in there and capturing traffic that way, and that's fine.
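For anyone who wants to experiment with the tap filter, here is a rough sketch of enabling it so that tap configuration can be pushed at runtime through the admin endpoint. The names follow the v3 API as I remember it and may not match your Envoy version exactly:

```yaml
# Sketch: install the HTTP tap filter and let the admin /tap endpoint
# drive it at runtime. The config_id is an arbitrary label you choose.
http_filters:
- name: envoy.filters.http.tap
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.tap.v3.Tap
    common_config:
      admin_config:
        config_id: dev_tap
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

With something like this in place, POSTing a tap configuration with a matching `config_id` to the admin `/tap` endpoint streams matched request and response traces back to the caller, which is the building block an idea like the transcoding-plus-search mashup would sit on top of.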
But I think that general area is quite cool. Luke?

Yeah, just to comment on what we've looked at at Lyft: we've looked at the tap filter for exactly that reason, in the last slide I showed. What if you could turn on debug just for the duration of a request? We looked into doing a similar thing: what if you could turn on tap just for this request? Then for every single request and response flowing through, the developer would have enough insight to figure out what was actually going wrong. It's one of those things we kind of want to do but haven't gotten around to. In the interim, we use the custom ingress proxy filter that we have, which shows us every request that goes in and out if you offload to it. That's how we're doing it now, but the tap filter has been on our roadmap docs for a while. If we're ever successful in doing it, we'll be sure to tell the community.

Cool, any other questions? All right, well, thank you all for coming, and I think there's a group reception after this. So thank you again.