Alright, thanks everyone for coming to the Envoy Gateway project update. My friend Arco from Tetrate was supposed to be up here talking with me, but unfortunately he couldn't make it due to some last-minute travel issues. So before I get started, quick show of hands. How many of you have heard about Envoy Gateway before? Awesome. Has anyone checked out the docs or even tried out the project yet? Cool. Well, hopefully I can convince the rest of you to at least check it out.

So, quick introduction: what is Envoy Gateway, and why should I care about it? Envoy Gateway provides a batteries-included Envoy Proxy experience. It's built on the Gateway API as our primary configuration language, and because of that it supports multiple user personas. You've got multiple different resources, so your SREs and your admins can handle certain resources, and you can leave only the routing resources to your application developers so they don't have to worry about the rest of the config. Envoy Gateway is extensible, and it's built with community horsepower. As you know, there are a ton of different ingress and gateway projects; two of them, Emissary-Ingress and Contour, are actually CNCF projects. Envoy Gateway is built to be the official opinionated wrapper around Envoy Proxy. So we brought in a lot of different people who've worked on these projects. We've got Emissary folks, we've got Contour folks who've helped contribute, and the main goal is to start with all of the learnings people have had from working on those different projects and bring them into Envoy Gateway, a new project built from the ground up as a collaborative open source core that everyone contributes to.

So real quick, once you've got it running, you've basically got an instance of Envoy Gateway that watches static config and dynamic config, and it will create and manage a fleet of Envoy Proxies.
And Envoy Proxy is of course what actually handles the different requests that are coming in and gets them to your applications.

A little info on what our resources look like and what they translate to. First off, we've got the Envoy Gateway resource. That gives different types of config about what you want Envoy Gateway itself to do, such as setting the controller name and a bunch of other fun stuff. This is something that is only read at bootstrap, so if you change it, you do need to restart the controller for that to take effect. Since Envoy Gateway is actually a separate deployment from Envoy Proxy, you can totally restart the control plane without interrupting the data plane.

Next up, you've got your GatewayClass resource, and that is going to help you configure things like the protocols... or sorry, getting ahead of myself, that's the Gateway. The GatewayClass controls which Gateways we're going to be watching and configuring through Envoy Gateway, since you might have other projects in your cluster that are being configured via the Gateway API. We also have an EnvoyProxy resource that you can attach to your GatewayClass, and that lets you configure different things about the Envoy Proxies that we're creating. Since you don't install those deployments of Envoy Proxy on your own, you need different ways to configure aspects of them. We give you options to override things like the image of the Envoy Proxies that you're running, set annotations on the services that we create so you can do things like set up your cloud provider load balancers, and set pod annotations on the Envoy Proxies themselves so that you get support for service meshes like Istio and Linkerd.

Next, you've got your Gateways. Like I said, that's where you've got support for the different protocols and how we're going to be listening for requests. Those roughly translate to xDS listener resources.
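To make that resource chain concrete, here is a rough sketch of a GatewayClass pointing at an EnvoyProxy resource via parametersRef. The names and annotation values are hypothetical, and the exact EnvoyProxy field layout has evolved between releases, so treat the field names as illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  # Attach Envoy Gateway's EnvoyProxy resource to customize the managed proxies.
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      # Annotations on the generated Service, e.g. for cloud provider load balancers.
      envoyService:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
      # Pod annotations on the generated Envoy pods, e.g. for mesh sidecar injection.
      envoyDeployment:
        pod:
          annotations:
            linkerd.io/inject: enabled
```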
Then you've got your HTTPRoute, your GRPCRoute, et cetera, your general routing resources, and those control where we're going to send the traffic. So you've got support in there for how we're going to match the traffic, any kind of mutations you want Envoy to do to that traffic, and then where we're sending it. Those translate to xDS routes, and when those HTTPRoutes and GRPCRoutes reference a backend service, those create xDS clusters.

A little bit about the architecture at a high level. At the bottom we've got this idea of a provider, and this is what is responsible for watching the different types of configuration, static and dynamic, and then translating it into the xDS IR and the infra IR. The reason we have this setup is that we can have different providers for the different places you might be pulling config in from. We have a Kubernetes provider, which is how we watch Kubernetes resources, we can have a file provider, and in the future we can add new providers if we need to, so that we can watch configuration from different places. Then we split it up into the two IRs as an internal API, and that helps us control exactly what we're doing with all that information internally. So we can separate the internal API from the public-facing API, and that way we decouple how you're configuring it from what we do internally.

Once we've converted all of those resources into xDS IR, the xDS translator subscribes to that, and as new updates are pushed from watching configuration, the xDS translator processes all of that IR into new xDS resources. After that we have the xDS server, which subscribes to updates from the xDS translator and pushes those updates out to the fleets of Envoy Proxies. On the infra side, as I mentioned, we have support for automatically creating the Envoy Proxy instances.
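As a sketch of that translation, a minimal Gateway plus HTTPRoute looks roughly like this (the names and the backend service are hypothetical): the listener becomes an xDS listener, the rule becomes an xDS route, and the backendRef becomes an xDS cluster.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    - name: http            # roughly translates to an xDS listener
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  hostnames: ["www.example.com"]
  rules:
    - matches:              # how the traffic is matched; becomes an xDS route
        - path:
            type: PathPrefix
            value: /
      backendRefs:          # the referenced backend becomes an xDS cluster
        - name: backend
          port: 3000
```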
When you create a Gateway resource for our GatewayClass, that will trigger us to go and create a new instance of Envoy Proxy. So you can have multiple different Gateways that are being managed by a single instance of Envoy Gateway, and what that does is it lets you have multiple different deployments of Envoy Proxy that can have totally different configurations. You can keep them separate like that.

So, a quick recap of our 0.3.0 release back in February. We added support for some of the experimental APIs in the Gateway API, such as GRPCRoutes, which give you first-class support for your gRPC services. And then we added support for TCPRoutes and UDPRoutes, so that lets you handle raw TCP and raw UDP traffic. You can do stuff like TLS passthrough with these if you know you just want to operate on TCP and UDP traffic at a high level. We also added support for request mirror filters and response header modifier filters. The former lets you configure where else you'd like to mirror requests to, other than the place where we actually send them. If you get a response from the place you mirrored to, we don't do anything with it, but that lets you do stuff like shadow traffic and whatnot.

The other main theme of 0.3.0 was support for advanced APIs. The Gateway API has this idea of an ExtensionRef filter, a filter you can put on things like a GRPCRoute or an HTTPRoute, and it lets individual implementers of the Gateway API decide what they want to do with it. So you can add support for your own custom resources. Envoy Gateway has introduced a couple: so far we have rate limit filters and authentication filters. You attach these to your GRPCRoutes and your HTTPRoutes, and this lets you configure rate limiting and authentication. Right now the rate limit filter only has support for global rate limiting, and you can match based on the headers sent with the request.
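A quick sketch of the request mirror piece, using the standard Gateway API RequestMirror filter on an HTTPRoute rule (service names here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: mirrored
spec:
  parentRefs:
    - name: eg
  rules:
    - filters:
        - type: RequestMirror
          requestMirror:
            backendRef:
              name: backend-shadow   # responses from the mirror target are discarded
              port: 3000
      backendRefs:
        - name: backend              # the actual response comes from here
          port: 3000
```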
And then you can also do matching based on an individual IP or a CIDR range. Like I said, it is global only right now, but we have designed the API in a way that will let us do different per-route rate limiting in the future. For authentication, right now we only have support for JWT filters, and similarly we have built out the API so that we can add new types of authentication filters in the future, such as OIDC. For the JWT filter you just add your JWKS location, your audiences, and your issuer, and then you're off to the races.

So, quick updates about our 0.4.0 release that's coming. We've already cut the RC last week, and I will be cutting the final release when I get home from QCon on Friday. The main thing we wanted to prioritize for 0.4.0 is the user experience. First, we added Helm support, since it is broadly adopted by the cloud native community, and that should help people manage their installations a lot more easily. Second, we added a new egctl CLI tool, and this lets you do a couple of fun things. Right now there are two primary features. One is that you can actually pull config from your managed Envoy Proxies and check, hey, what are all the xDS clusters I've got in here? You can just dump the entire config. The other is a translate command, and this has two main purposes. You can provide Gateway API config and see what xDS resources Envoy Gateway would create from that config, and it also lets you validate that Gateway API config. So if you want to create, say, an HTTPRoute and you want to know whether it's going to be valid before you even try to kubectl apply it, you can use egctl to validate a bunch of that config and just see whether it's going to error out before you apply it.

The next theme of the 0.4.0 release is extending gateway functionality.
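As a sketch of how those custom filters attach, here is roughly what the rate limit and JWT authentication filters look like, referenced from an HTTPRoute via an ExtensionRef filter. Field names may differ slightly by release, and all the values (header name, issuer, JWKS URI, service) are hypothetical:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: RateLimitFilter
metadata:
  name: header-limit
spec:
  type: Global                 # only global rate limiting is supported so far
  global:
    rules:
      - clientSelectors:
          - headers:           # match on headers sent with the request
              - name: x-user-id
                value: someone
        limit:
          requests: 10
          unit: Minute
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: AuthenticationFilter
metadata:
  name: jwt-auth
spec:
  type: JWT                    # JWT is the only supported type right now
  jwtProviders:
    - name: example
      issuer: https://www.example.com
      audiences:
        - foo.example.com
      remoteJWKS:
        uri: https://www.example.com/jwks.json   # the JWKS location
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  rules:
    - filters:
        - type: ExtensionRef   # Gateway API's hook for implementation-specific filters
          extensionRef:
            group: gateway.envoyproxy.io
            kind: RateLimitFilter
            name: header-limit
      backendRefs:
        - name: backend
          port: 3000
```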
So we added support for custom bootstrap config for Envoy Proxy, via the EnvoyProxy API that I mentioned earlier. If you want to add config that Envoy Gateway doesn't normally support, you can completely override the bootstrap that we're giving to Envoy with your own. There's also support for custom Kubernetes settings in there, such as what you want the resource allotment to be for the different Envoy pods that we're creating.

And the last one is one that I'm really excited about, which is the control plane extensions. One of the main goals for Envoy Gateway is we want it to be this open core that a bunch of different people can build their own value-added solutions on top of. What the control plane extensions framework lets you do is hook into the translation that Envoy Gateway is running, via these gRPC hooks, and that gives you control over the resources that Envoy Gateway is generating. As an example, let's say you wanted to modify the routes Envoy Gateway is creating. You could create an extension, and then every time Envoy Gateway creates a route, that extension gets the chance to modify the route before we finalize it. So you can do cool things with this, like introduce your own resources: Envoy Gateway will watch those resources, hand them to your extension, and then your extension can do fun stuff with them and modify the xDS.

Real quick roadmap for what's coming up. The 0.5.0 release is up next, and it's going to be focused on observability, performance, and scalability. We already know that we want to add access logging, metrics, and tracing, and we really want to fine-tune the performance of Envoy Gateway so that you can feel comfortable running it in production.
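For the Kubernetes settings side of that, a rough sketch of tuning the Envoy pod resource allotment through the EnvoyProxy resource might look like this (the field layout is illustrative and has shifted between releases, so check the API reference for your version):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        replicas: 2
        container:
          # Resource allotment for the managed Envoy pods.
          resources:
            requests:
              cpu: 100m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```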
We also have a future wishlist item of an xDS patch API, which would provide a public API for end users to create resources in the cluster, or on the file system, so that you can patch xDS resources with your custom config.

Lastly, I want to give a huge thanks to all the contributors. I couldn't fit everyone on this screen, but I grabbed as many people as I could. We couldn't have gotten this far without all the hard work from the community, so big thanks to everyone who has helped contribute to the project and everyone who's gotten involved in the time since. And last, of course, if you want to get further involved, we've got our docs up here, and the project is at github.com/envoyproxy/gateway. Thanks for your time. Does anyone have any questions?

Thank you for the talk. I have one question. Imagine you have a TLS connection with an HTTP protocol inside it, and you want to offload the TLS but pass the HTTP stream from one end to the other without looking into it, essentially TLS offloading, the legacy classic version of that. Is that something we can do with a TCP listener in the Envoy Gateway configuration?

I don't actually know that at the moment, but if you want to hop into our Slack, there should be a link on the docs, and we'd love to chat further about that.

Thank you.

Hi, all right, great presentation.
I have a question about rate limiting, about the API that you proposed. Two things. One is, when you say global rate limiting, is it centralized rate limiting? Do the Envoy instances coordinate between each other so that they share a fixed amount, or is it decentralized, so each Envoy instance has its own?

Yeah, so the way the Gateway API sets up resources, they usually point back to the Gateway, and then the Gateway points back to the GatewayClass. Envoy Gateway will have one GatewayClass, you can have one or more Gateways below that, and each Gateway will correspond to a different Envoy Proxy. Global, in the sense we use for rate limiting, means that you attach it to a Gateway and then it is global for that instance of Envoy Proxy, as opposed to different rate limits per route. So global in the sense that if you create a rate limit for header x-foo, it will be that rate limit on that header for all routes for that instance of Envoy Proxy. But if you have another Gateway for a different instance of Envoy Proxy, it wouldn't necessarily apply to that one, unless you attached the same rate limit to that other Gateway.

I see, cool. The other question, I think, is more general. Could I rate limit based on a specific filter that I created? I think that's really the more general question: how can I attach my own filters, my own Wasm filters or Envoy filters? I could compile them into my Envoy instance, but with a Wasm or dynamic filter, how can I rate limit based on a header that it added to a request, or something like that?

Gotcha, that's a great question. Actually, since you mentioned Wasm filters, that is something we have been kicking around as an idea for a future release. But for the topic of rate limiting based on a filter, I don't
think there is any way to currently do that, especially since the existing filters are mostly only the ones that are prescribed by the Gateway API. But I do believe that if you have one of those filters and it adds a header, so you have a request header filter that you add on, and it's only adding a specific header to one route, and you know that header is not going to be on any other requests, then I think it should work.

Thank you.

Hi. I noticed quite a lot of overlap, it seems, at least with the concept of a gateway in Istio. Is that something that is going to get consolidated at some point, or are these projects going to do their own version of this?

Like I said, there are a bunch of different projects that implement ingresses and gateways, and even then, there are probably a bunch that are going to start using the Gateway API. We don't currently have any plans of consolidating with Istio or anything, but we do want to make sure that people who want to use service meshes can use all the service mesh functionality if they want to use Envoy Gateway as their ingress solution, as opposed to, you know, the Istio gateway. So it would be kind of a choose-your-own situation, whether you want to use Istio's solution for a gateway, or whether you want to use Envoy Gateway and then also use Istio as a service mesh.

Hi, next question here on the side. Sorry. No worries. You mentioned in the beginning that this is meant to manage individual Envoy proxies as well as the ones in Kubernetes, but the docs mostly describe installing Envoy Gateway in Kubernetes. Can we also run it outside of Kubernetes?

We've designed the API in a way that in the future you will be able to, but at the moment it is Kubernetes only.

All right, then a follow-up question. Is the intent to just manage ingress gateway kinds of capabilities through Envoy Gateway, or also all the other capabilities of Envoy? Sorry, like what specific
capability? So mostly things that pertain to the Gateway API as we have it in Kubernetes, so all kinds of ingress functionality, like rate limiting. What about rate limiting between services, not just at the gateway?

I think that would definitely be more of a service mesh use case. I don't think that's something we currently plan on exploring.

Right, so what if we want to build our own service mesh on top of this? Gotcha, oh, you mean via the extension framework I mentioned, for example? Yeah. I think in theory it should be possible, but I don't know exactly what that would look like off the top of my head. The goal with the extensions framework is just to give people who want to develop an extension total control over the xDS. Right now, for 0.4.0, it's just the initial framework, and we're going to be building on it more over time. So, like I mentioned, right now you can hook into the xDS resources and modify them as they're generated, but an upcoming goal is to also add support for extensions in the infra pipeline, so that you can have total control over how Envoy Gateway generates infrastructure as well.

All right, thank you. Well, thanks everyone for coming.