Hello, okay, cool. Hi everyone, hope you're having a good start to your week, and thanks for coming. You just saw a little bit of a demo of Envoy Gateway, and now we're here to give you some updates about the project and a little context about what we've been working on and why. My name is Alice Wasko. I work for Ambassador Labs, and I'm a maintainer on Envoy Gateway. And I'm Arko Dasgupta, a software engineer at Tetrate, and I'm also a maintainer on the Envoy Gateway project.

Really quick, just to set the scene: we're going to share a recap of what we've been working on, a little info about the future, and some context around why we've been focusing on what we've been focusing on. What this talk is really about is hopefully piquing your interest in Envoy Gateway, if you haven't tried it out yourself or you've just been following the project from a distance. We're hoping we can convince you to actually give it a try after this talk. Really, it's mostly about getting more people excited about running Envoy proxy in more environments.

Really quick, some high-level things from a couple of our last releases that I wanted to draw attention to. For one, we've added a bunch of extra support to Envoy Gateway for configuring exactly how it watches resources, so now you can limit it to watching certain namespaces. This was a hotly requested feature, specifically for multi-tenancy or multi-business-unit situations where you want to make sure that each Envoy proxy, or each Envoy Gateway, is only allowed to watch resources that are relevant to it. Next, we've added support for TLS termination for TCP routes.
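As a rough sketch of the namespace-scoped watching just described, the restriction lives in the Envoy Gateway controller's own startup config. Field names below follow the v1alpha1 API as best I recall, and `tenant-a`/`tenant-b` are placeholder namespaces, so verify against the project docs:

```yaml
# EnvoyGateway controller config (loaded at startup, typically via a ConfigMap).
# Hypothetical sketch: restricts the controller to two tenant namespaces.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
  kubernetes:
    watch:
      type: Namespaces
      namespaces:
        - tenant-a
        - tenant-b
```

The docs also describe a label-selector variant of the watch mode for selecting namespaces dynamically rather than by explicit list.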
That was a really important feature for us. We've also done a ton of work this last release around observability, and access logging is a big part of that. Now you have a bunch of control over exactly what we log and what the format is, and you have all sorts of ways to push the logs and metrics to different sinks. Lastly, we're really excited to add support for Gateway API v1.0 GA, and we've added CEL validation to our own custom resources, so you get earlier, apply-time validation instead of having to apply your resources, check the logs, check the statuses, and see what went wrong.

We're also super excited to announce a refresh of our docs website. We used to build our docs on Sphinx, and we felt they looked a bit too much like developer docs in progress. They've gotten a fresh new coat of paint: we're building them with Hugo now, so the docs have a brand new look. I think most people would agree with me when I say they're a lot nicer to look at now, and we hope they're way easier to navigate. So if you haven't given our docs a read, we'd love for you to check them out, and if you did and encountered friction in the past, we're hoping it's a much nicer experience now.

Apart from just generating and translating xDS, a core responsibility of Envoy Gateway is to manage the Envoy proxy fleet infrastructure, so we've added features that let users support different deployment modes. Many of our users wanted to run many Envoy Gateways in a single cluster, each controller isolated from the others, to support different tenants or business units. We've enabled that by letting users specify a list of namespaces or namespace selectors, limiting the watched resources to just a few namespaces.

When we started this project, based on the Gateway API semantics, we decided to map a single Gateway resource to a single Envoy proxy fleet. Once users started using it, the feedback we got was that they wanted to map multiple Gateway resources to a single Envoy proxy fleet to optimize CPU and memory, so we introduced the mergeGateways field to achieve exactly that. We also added more fields to the envoyService and envoyDeployment sections to expose more use cases.

As Alice mentioned, our focus has been on day-two operations, and we've added support to expose control-plane and data-plane telemetry. Metrics can be pulled from an endpoint, but can also be pushed to an OpenTelemetry collector. Access logging can be configured in various formats and pushed to different sinks. We've also pre-built Grafana dashboards that our users can use.

I'm excited to announce that Envoy Gateway v0.6 now supports Gateway API v1.0. This Gateway API release allows implementations to share a conformance report, aka a scorecard, and all core and extended tests for the HTTP and TLS profiles pass for Envoy Gateway. This release also introduces CEL-based validation, which is big: it's baked into the CRD itself and executed in the Kubernetes API server, eliminating the need for a validation webhook, an admission controller, and a dedicated gateway system namespace, reducing the operational burden on the end user. This release also added support for HTTPRoute timeouts, which we've now implemented.

While we're super excited about the Gateway API project hitting its GA milestone, we also recognize that there's a ton of functionality in Envoy proxy that just isn't supported by the Gateway API. Realistically, they can't, nor should they, add support for every single Envoy field, because they're aiming to be a general API that's used by a bunch of different projects.
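The gateway-merging behavior mentioned a moment ago is driven by a single field on the EnvoyProxy resource. A minimal sketch, with resource and field names from the v0.6 API as I remember them (the resource name `shared-proxy` is hypothetical):

```yaml
# Hypothetical sketch: an EnvoyProxy config enabling gateway merging.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: shared-proxy
  namespace: envoy-gateway-system
spec:
  # All Gateways under the owning GatewayClass share one Envoy proxy fleet,
  # instead of one fleet per Gateway resource.
  mergeGateways: true
```

The EnvoyProxy resource is wired up by referencing it from a GatewayClass via its parametersRef; check the current docs for the exact linkage.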
So to extend the Gateway API, we've introduced three of our own custom resources, which you saw demoed earlier. These resources are meant to collect all that extra configuration, mostly specific to how Envoy proxy operates, and condense it into three separate areas based on where we're talking about handling the traffic. The idea is that they're designed so that many different people can work on them at the same time if you have features you want to see added into the Gateway API, or sorry, into Envoy Gateway. And if the Gateway API ends up adding support for certain features we have in these resources, at some point in the future we'll start deprecating them and go with whatever the Gateway API prescribes, if there's feature parity there.

The first of these resources is called the ClientTrafficPolicy. This is a resource meant to consolidate all the configuration about how Envoy proxy talks to downstream clients. We've added support for a brand-new feature here in this latest release, which is TCP keepalives. The thing to note about all of these policy resources is that they have to be in the same namespace as whatever they're targeting. This one targets Gateways only, and you can target the entire Gateway to set blanket defaults for everything in that Gateway, or, if you want, you can target specific listeners by setting a section name, so that you can have different configurations for different listeners on that Gateway.

The next resource is called the BackendTrafficPolicy. This is both for configuration about how Envoy proxy talks to your backend services, and also for configuration that might be route-dependent, such as you saw earlier. We took our previous resource, the rate limiting filter, and we've now folded it into this policy resource. So you can configure rate limiting here just as you could in previous versions using that filter; all of that same config now lives here.
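As a sketch of the ClientTrafficPolicy targeting rules just described, here's what enabling TCP keepalives on a single listener might look like. The names `my-gateway` and `https`, and the keepalive values, are placeholders; verify the exact field names against the v0.6 API reference:

```yaml
# Hypothetical sketch: ClientTrafficPolicy enabling TCP keepalives for one
# listener on a Gateway in the same namespace.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: keepalive-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-gateway
    sectionName: https   # omit to apply blanket defaults to the whole Gateway
  tcpKeepalive:
    idleTime: 20m
    interval: 60s
    probes: 3
```

Dropping sectionName would apply the same settings to every listener on the Gateway, per the targeting behavior described above.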
We've also added support for a bunch of other cool features. The last of these resources is called the SecurityPolicy. We wanted to separate this one out from the other policy resources because we know that a major concern for a lot of organizations is locking down who has the ability to view and edit certain resources in the cluster. A really common use case we see is that people might have access to configure route-level things, but they might not have access to more sensitive, security-related information. All that sort of stuff is meant to live in this resource. So we've also taken the previous authentication filter and folded it into the SecurityPolicy resource you saw demoed earlier. We also added support for a brand-new feature, CORS, which is new in this release. We thought that was a big table-stakes feature we needed to support, and we put a lot of priority on it.

Something to note specifically about the SecurityPolicy and the BackendTrafficPolicy is that you can also attach them to route resources. So unlike the ClientTrafficPolicy, which can only attach to Gateways, you have the option of setting cluster-wide defaults by attaching them to Gateways, or, if you just want to attach one to a single route and set the configuration there, you can do that. There's a concept of implied inheritance: if you set it at the Gateway level, you get defaults; if you set it at the route level, you get route-specific config; and the route-level config will always win in a conflict.

Thanks, Alice, for sharing all those features we're building into this project. But as an end user, what happens when you want to unlock a feature in Envoy proxy that is not available in Envoy Gateway today, maybe because Envoy Gateway is still playing catch-up, or we don't think it's a common use case today? To solve this problem, we introduced the EnvoyPatchPolicy API, an API that lets you modify the xDS resources generated by Envoy Gateway. The patching operation is based on JSON Patch semantics, a widely used spec that's familiar to many developers.

When you see the word "patch," you're probably thinking this could be misused, so we disable this API by default, only allowing admins to enable it at startup. This API can only target Gateways and must live in the same namespace where the Gateway is created, limiting access to a few roles such as platform admins. Patches are also very hard to get right, so we've added a status field that provides feedback on the operation of the patch: has the patch succeeded, and is the xDS resource that the patch is targeting valid or invalid? We even run the Validate helper method associated with each xDS resource to make sure the xDS output is sane. This API is also part of our egctl translate command, which lets you generate the xDS output offline before even applying the resource in a cluster. TL;DR: you can continue to tune Envoy config without forking the project.

Release v0.6 adds support for ServiceImport as a backendRef, enabling multi-cluster ingress use cases when used alongside an MCS controller like Submariner. A lot of the updates you see on these slides are driven by users who've been running Envoy Gateway for a while, have seen bottlenecks at scale, and have contributed back to improve the runtime performance. One such example is using the same reconcile request signature, which eliminates the thundering herd problem by coalescing all the reconcile requests during periodic syncs. For more information, please visit our website or our GitHub repository and get involved.
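To make the EnvoyPatchPolicy flow we walked through concrete, here's a hedged sketch of a JSON patch against a generated listener. The xDS resource name format, the target names, and the patched field are all assumptions for illustration; egctl translate is the way to inspect the real generated names offline:

```yaml
# Hypothetical sketch: EnvoyPatchPolicy applying a JSON patch to a
# generated xDS Listener. Disabled by default; an admin must enable
# the feature at controller startup.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: tweak-listener
  namespace: default          # must match the target Gateway's namespace
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-gateway          # placeholder Gateway name
  type: JSONPatch
  jsonPatches:
    - type: type.googleapis.com/envoy.config.listener.v3.Listener
      name: default/my-gateway/http   # generated resource key; verify via egctl
      operation:
        op: replace
        path: /per_connection_buffer_limit_bytes
        value: 32768
```

The policy's status field then reports whether the patch applied and whether the resulting xDS resource still validates, as described above.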
Please try out v0.6 and give us feedback. The progress we've shown today would not have been possible without the contributions of all these people, and Dependabot, so thank you for your hard work. Thanks, folks.

We got a bunch of questions earlier in the previous talk about Envoy Gateway, but if there are any additional ones, we'd love to answer them for you. Cool, thanks. Oh yeah, what's your question?

Yeah, it's definitely more of a super-user feature; we have it disabled by default, like Arko mentioned. The main priority for this resource is giving people who already know exactly what Envoy config they want to set the ability to do so. Now, we do have the egctl CLI tool, and you can totally use that to write your patches beforehand and figure out whether they would work or not. As far as future thoughts around the evolution of that feature, I'd have to defer to Arko, since this is more of his brainchild.

So we have two extension points. You can either apply a patch at runtime using this API, or we've also provided a way to add gRPC hooks and modify xDS resources at runtime using a sidecar or another extension server. But as you can see, we can never guarantee that a patch will keep working, because across versions the way we translate xDS resources might change, and if we upgrade to an Envoy proxy version where that field goes away, that's something we cannot guarantee. Good question, though. Good question.

In regards to people wondering about this versus Istio ingress: a lot of meshes have started offering ingress solutions. What I would say is that our goal is to work well with meshes; if you want to use the mesh for the mesh and you want to use us for ingress, I'd say that's the ideal combination. We're really focused on solving the ingress problem; the meshes have spent a ton of time solving mesh problems, and there are a bunch of other solutions for mesh, like Linkerd. We want to work nicely with all the mesh providers. The most common thing I hear is from users who start off knowing they want a mesh for security reasons; they're still learning about Kubernetes, or they're scoping out what configuration the ingress providers have and whether they should go with them. For simple cases, a lot of the time people are well served by the ingress solutions offered by Istio, but we're more focused on being the expert for the ingress use case. We're planning to have more features and better first-party support if you want us to handle the ingress; that's where we're going to focus, and we want to play nicely with meshes. So it's really up to you; if you have a simple use case, you might totally be well served by Istio ingress.

To add on to what Alice said: we're optimizing the ingress case, and we've added features like rate limiting that you might not see in other controllers. The other thing with respect to Istio is that, as a control plane just for ingress, we are in a way a control plane managing far fewer Envoy proxies, so the design decisions we can make are different.
For example, we've enabled delta xDS by default, which other controllers can't easily do because it's hard to keep that much xDS state in memory; that's another use case we're trying to optimize for.

I think right now we don't have a super long-term roadmap. We're mostly focusing on the key adoption blockers: the core features or behaviors that people need to see change in Envoy Gateway to feel really confident and good about running it in production. As far as inner service-to-service communication goes, that's definitely not going to be a focus area of ours. That's an area best left to meshes, and I think most people I've talked to who have really big clusters or really huge use cases are best served by having a dedicated ingress and a dedicated mesh.

Backend policies, as in... gotcha. I'm not sure; I don't think that's currently a high priority for us, but we'd definitely be open to it. That's a good question.

So we've added enough support in our infrastructure APIs to let you run beside, let's say, an NLB. We think it's a great fit for them to sit side by side, where you can use a cloud load balancer to spray traffic into different layer 7 proxies that add some intelligent routing at the layer 7 level and then load balance to the backends.

A use case we see really commonly among people who use all sorts of different ingresses, and one that Envoy Gateway should play nicely with regardless of your solution, is this idea of layered ingresses, like Arko mentioned. You might have one master ingress that all the traffic flows through, which makes really simple, kind of dumb traffic decisions about where to send it on to other gateways, so you don't overload any one thing. The one at the very top just handles very bare-bones routing decisions, and then you have different environments for different business cases, different tenants, solutions like that, where you can offload the complexity into separate gateways.

We're getting kicked out. This is my day job as well, so I've got a couple of things I'll add when you all are done. Are there any other questions? Cool.

Yes, as I said, this is my day job, so just to point out: the NLB in front of Envoy Gateway is a thing, but, especially if you're on managed Kubernetes like EKS, there are a few little configuration items you'll need to set. I've written that up; I think it went up as a blog on the Tetrate website. In terms of Istio and Envoy Gateway, if you want end-to-end TLS, which you probably do, you currently need to have Istio run a sidecar alongside Envoy Gateway, as if Envoy Gateway were just another workload in the mesh. That is a bit fiddly to configure, but again, I worked that out and it's documented somewhere on the Tetrate site, I think on our docs page. So this all works, but it's kind of early days with some of these integrations. I just thought I'd mention it because I was the one who spent the time working both of those out, and they're published, if that would help anybody. Cool, thanks very much.