All right. Well, thank you everyone for taking time out of your busy schedules to learn more about Envoy Gateway. My name's Daneyon Hansen. I'm an engineer with Tetrate. And my name's Alice Wasko. I'm an engineer with Ambassador Labs. So this is an Envoy Gateway project update. Six months ago, we released the project publicly at EnvoyCon, or at KubeCon EU, and we received a lot of positive feedback on the project. When we released it, it was really nothing but a design spec and a public announcement that we were forming the project. And when I say we, it was Tetrate, Ambassador Labs, as well as VMware that came together to announce the project. And thanks to Matt Klein for shepherding the group together and getting us going. We've accomplished quite a bit in the last six months, which we're going to cover during today's session. But before we get into what the project's been up to, I wanted to take a moment and give a quick history, a few pieces of which I've already mentioned. As most of you know, Envoy started back in the fall of 2016. And even though Envoy became synonymous with service mesh, it actually started as an edge proxy within Lyft. And as I mentioned, in the spring, Envoy Gateway was born with a focus around these two areas. How do we bring Envoy to the masses? Envoy has become very popular, both as a sidecar proxy for service mesh and as an edge proxy, but we wanted to take Envoy to the masses, take it even further. And one of the areas we needed to address is: how do we make the user experience simple? How do you make it simple to consume and start getting value out of Envoy proxy? And so why should you care about Envoy Gateway? Envoy Gateway provides that batteries-included experience, right? I don't have to set up my own xDS control plane. I don't have to go and manage a bootstrap config file on my own.
I don't have to know all the details of xDS to start deploying Envoy in my infrastructure and routing traffic through Envoy to my backend services. And what you'll see here in a moment is that two very simple commands will get Envoy installed and start routing traffic to your backend. One of the key aspects of Envoy Gateway is that it supports multiple user personas, and the way that's accomplished is through Gateway API. So what you're going to see here in a moment is some of the system internals of Envoy Gateway. We decided on using Gateway API to provide our user interface, as opposed to creating some project-specific API. Gateway API was created, I think, about three or so years ago. I was actually one of the founding maintainers of the project and have since given up my maintainership there, but we have other maintainers and contributors to Envoy Gateway that are Gateway API maintainers, so we're tied very closely to the Gateway API community. And it is a set of APIs that is gaining wide adoption for service traffic management. We decided to adopt Gateway API, again, instead of creating our own project-specific APIs, and that's what gives us this multiple-user-persona support. So what you're going to see is that there are certain resources that would be used by an infrastructure team for creating the Envoy proxy infrastructure, and then a separate set of resources that would be used by your app dev team. The primary ones there would be the routing resources, for example HTTPRoute, for routing traffic to your backend application. And it's extensible, not only through Gateway API, but also through some native extension points within Envoy Gateway. It is extensible in the sense that we define through APIs a basic common level of configuration and management, and these extension points allow Envoy Gateway to run, let's say, in different providers, or to support different types of functionality like authn and authz.
And again, those extension points are both native as well as through Gateway API. One of the focus areas moving forward with the project is these extension points. What the project has been focused on for the v0.2 release, which shipped just a couple days ago, is developing a solid foundation. And as we transition from establishing that solid foundation to focusing on advanced use cases, these extension points are where we're going to be spending a lot of time in the near future. And I say it's built with community horsepower, right? Creating an open source project is pretty easy to do. Creating a community around an open source project is more challenging. And I'm really proud of where we're at with the project. We have a ton of different contributors, which you'll see in a future slide, that span a bunch of different affiliations. So this is not a project that is run by a particular company or affiliation. We've got quite a bit of diversity already in the project, and we're excited to see that grow. Right. And so if we look at Envoy Gateway, it's pretty straightforward: it consumes configuration, both static and dynamic. The static configuration is used to start up Envoy Gateway, and the dynamic configuration is used to configure its runtime. That dynamic configuration is expressed, again, through Gateway API resources. And then it manages a fleet of Envoy proxies. Let's dive into Envoy Gateway a little bit deeper, right? Just like the picture we saw back here, now we're zooming into Envoy Gateway, and we see a few things that Envoy Gateway is comprised of. The first is that we have a provider, right? Currently the only provider that Envoy Gateway supports is the Kubernetes provider, and Alice is going to show you a demonstration of Envoy Gateway using the Kubernetes provider.
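As a rough sketch, the static startup configuration mentioned above is a small file. In the v0.2.0 timeframe it looks something like the following — the apiVersion and field names here are a best-effort reconstruction from the docs, so treat them as approximate and check gateway.envoyproxy.io for the current schema:

```yaml
# Sketch of Envoy Gateway's static startup configuration (schema approximate).
apiVersion: config.gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
  # The Kubernetes provider is the only one implemented so far.
  type: Kubernetes
gateway:
  # GatewayClasses carrying this controllerName are managed by this instance.
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```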
A provider is responsible for certain tasks: watching the resources, the dynamic configuration, right? How are those resources expressed? Through Gateway API resources. And with the Kubernetes provider, those Gateway API resources reside in the Kube API server. So a user creates those resources, and when you start Envoy Gateway and tell it to use the Kube provider, it uses a Kubernetes client, specifically the controller-runtime client, to communicate with the Kube API server to watch those resources, as well as any of the dependent resources for managing the Envoy proxy infrastructure. The provider is also responsible for storing, persisting, that data. Again, with the Kubernetes provider that's pretty straightforward: it's stored in etcd. It's responsible for service discovery as well, and Kube provides that via kube-dns or CoreDNS, depending on how your cluster is configured, but typically CoreDNS. And if you think about how Envoy Gateway takes this configuration, especially the dynamic configuration, just think of it as a pipeline. As configuration comes in, the provider, after that configuration has been validated and so forth, publishes it. All these different system components communicate over a message bus via pub/sub, and the way we implement that message bus is through what's called the watchable library. All these details are in the project documentation at gateway.envoyproxy.io. We have all of the system design documented there, and it's a really good place to start; we've had contributors come in and give us really positive feedback on the system design documentation. This diagram, and all the information I'm going to cover, is in the system design documentation. But again, all the different components communicate over this message bus.
And it's a simple translation pipeline that translates the Gateway API resources into our internal representation, or internal data model, what we call our IR. And you see the IR is actually split between an infra IR and an xDS IR, right? So the resource translator will take those Gateway API resources and translate them into our internal data model. One of the things not depicted on this diagram is that it should actually have an arrow going out of the system as well. That arrow is for updating the status of the resources wherever those resources are persisted. Again, with the Kubernetes provider, those resources are persisted in etcd. So the resource translator not only translates the resources inbound, from Gateway API resources to our internal representation, but outbound it's responsible for updating status, right? As it translates, it says: oh, this is a good translation, we're good to go, let's go ahead and update the status of the GatewayClass or the Gateway or the HTTPRoute. But again, as we go through this configuration pipeline, taking the dynamic configuration and putting it through the pipeline, we have the resource translator translating those external resources into our IR. And you'll see that we've got an infra manager and an xDS translator; those two services subscribe to whatever the resource translator publishes, right? The only implemented infra manager right now, again, is the Kubernetes infra manager. That infra manager takes the infra IR as input, and as output creates the necessary Kubernetes resources to manage an Envoy proxy fleet. So it manages the Deployment resource that's used for deploying Envoy proxy, it's responsible for managing the Service that fronts the fleet of Envoy proxies, and also a ServiceAccount that's used to secure the Deployment.
And then there's a very similar pipeline here on the right-hand side, where the xDS translator consumes the xDS IR, publishes to the xDS server, and the xDS server, via delta xDS, pushes configuration to the managed proxies. Let me hand it over to Alice. So Daneyon just mentioned a couple of times that we are using the Gateway API resources for configuring Envoy Gateway, so let me go into that in a little bit more depth. As of the 0.2.0 release, which is our first functional release, we have full core support for all of the Gateway API fields in GatewayClasses, Gateways, HTTPRoutes, and TLSRoutes. Really quick, let's go over the first one, GatewayClass, which is how you're going to get started. The main thing here is this controller name. It's a string, but I had to cut it off since the slide's a little short. That is what's going to let Envoy Gateway know that it is responsible for managing this GatewayClass and all of the Gateways that reference it, so that when you create those resources, Envoy Gateway is updating its configuration and watching them. So the next thing you're going to create would be your Gateway. This one is going to, again, reference that GatewayClass, and you'll set up your listeners here for how you want Envoy Gateway to listen for requests. You can add multiple listeners on your Gateway. And once you create this in your Kubernetes cluster, it's going to trigger Envoy Gateway to create some resources for you. The first thing it's going to do is spin up a Deployment that has Envoy proxy running, a Service for that Envoy proxy with the port that you established in the Gateway, and then a ServiceAccount for that Gateway as well. As far as getting traffic to your backend services, you'd be using HTTPRoute or TLSRoute.
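The GatewayClass and Gateway just described look roughly like this, modeled on the v0.2.0 quick start — the resource names and the full controller string are illustrative, so check the docs for the exact values:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: eg
spec:
  # This string tells Envoy Gateway it owns this class and every Gateway
  # that references it.
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    # Creating this Gateway triggers Envoy Gateway to spin up a Deployment
    # running Envoy proxy, a Service exposing this port, and a ServiceAccount.
    - name: http
      protocol: HTTP
      port: 8080
```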
And for these, you can again refer to the parent Gateway, which is going to trigger Envoy Gateway to configure that specific Gateway to route traffic to the backend service that we've defined in here, on the provided port. You can limit it to certain hostnames, which is optional, not required; you can have it match all hostnames by just leaving that field out. There's also support for filters, redirects, things like that right now. The main benefit — again, why we chose Gateway API resources for configuring Envoy Gateway — is, one, it's established, and two, it's well known. We want people to get the benefit of not having to jump between a bunch of different APIs, with different projects creating all their new CRDs, where you have to memorize: okay, is that CRD specific to this project, or is that a broader Kubernetes resource that I can use in different things? This takes a little bit of that cognitive load off. Additionally, you benefit from what Daneyon said about the multiple user personas: you can have your cluster operators, your admins, worry about your Gateway and GatewayClass resources, and then let your developers, the people worrying about traffic getting to a specific backend service, just focus on the HTTPRoute, and limit what they need to be aware of and the number of resources they need to edit to get traffic going to their service. So I'm going to quickly make sure the wifi is going well, and then I'm going to try to do the live quick start for you guys. Give me a sec, since I've got to pull this over to this other screen. Yeah, while Alice gets that set up, I just want to emphasize that in that last slide you saw three different resources: GatewayClass, Gateway, HTTPRoute. If you look at the Gateway API documentation, there are other resources as well. There's, you know, a resource for being able to share secrets across different namespaces.
This is called a ReferenceGrant. There's TLSRoute, UDPRoute, TCPRoute — basically protocol-specific routing resources. We support GatewayClass, Gateway, HTTPRoute, and TLSRoute, and we have plans to support some of the other resources as well. But even so, you see that those three primary resources are used. Again, in the early days I was a part of Gateway API, and we went from, okay, we've got an Ingress resource, to, now we're going to have multiple resources. And it is needed, because of the feedback we were getting from the user community: you have these different user personas, and a single resource does not work as intended. Right. So we need these different resources so that the cluster administrator, the infrastructure admin, has resources that control the gateway infrastructure — creating the Envoy proxy and then exposing that Envoy proxy on the actual infrastructure. And even though we have these multiple resources, from a developer standpoint it's really just interacting with the routing resources. And that's not to say that those are the only resources we're ever going to support, or that we're only going to support core fields; those are just what we have available, ready for you to try today, in the 0.2.0 release. We're going to be adding support for more features and more fields in the extended support for Gateway API in future releases. So let me see if I can get started with the quick start here. These are the Envoy Gateway docs at gateway.envoyproxy.io. And I'm just going to jump over here and walk you guys through the quick start really quick. All right. So the first thing you'd probably want to do when getting started is hop over to this quick start. Like Daneyon said, you can get started with just two quick commands and a cluster.
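Those two commands, as best they can be reconstructed from the v0.2.0 quick start — the release URLs are assumptions, so copy them from gateway.envoyproxy.io rather than from here:

```shell
# 1. Install Envoy Gateway: the Gateway API CRDs, RBAC, certgen job, and
#    the Envoy Gateway deployment itself.
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v0.2.0/install.yaml

# 2. Install the example resources: a GatewayClass, a Gateway, an HTTPRoute,
#    and a sample echo backend to route to.
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v0.2.0/quickstart.yaml
```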
And you can do this in a real Kubernetes cluster that has load balancer support, or you can try it in a local cluster. So I'm just going to copy the first command up here, which is installing the Gateway API resources. Yeah — so this is essentially installing Envoy Gateway, right? And in addition to installing Envoy Gateway, it installs the Gateway API CRDs, it's going to install the RBAC that's used by Envoy Gateway, and it's going to install a certgen job. What that job is responsible for is creating the TLS assets that are used to secure the communication between Envoy Gateway, which is the control plane, and the managed data plane, the Envoy proxies that it manages. So we have mutual TLS. It'd be easier as a backup plan — we also had a recorded demo too. Sorry, give me just a sec. Yeah. Yeah. The screen is not showing up on Alice's desktop; she can't see anything going on. Give her one second here. But as I was mentioning, the first command here is installing Envoy Gateway, and by the end of that installation, Envoy Gateway should be up and running. You could tail the logs, you can look at the deployment and see that it's running. You've got a backslash at the end of it. One more. There you go. Hey, magic. There we go. Like I said: the CRDs, the RBAC, the Service, the Deployment. And you do see that certgen job again, that's used to secure the communication — it creates the TLS assets that get stored as a Kubernetes secret. Go ahead, Alice. Thanks. All right. So those are just the CRDs mostly. And the next thing we're going to do is get started with the rest of the Envoy Gateway install. So this next command — right, so Envoy Gateway is up and running. Can you do maybe a get deployment or something, just to show that it's up and running? So: kubectl get deploy. Envoy Gateway, there it is, in the envoy-gateway-system namespace. And it is ready, and it is available.
So it's ready now to have users declare their desired state of the proxy infrastructure and then start routing traffic through that proxy infrastructure, and that's what the next step is going to be: creating those resources. Yep. So this will deploy a Gateway, a GatewayClass, and a simple HTTPRoute. And one last piece that it deploys in this next part of the installation is a sample backend application as well. What we use is an echo server that's part of the Kubernetes ingress conformance testing. What's cool with that is that, when we're testing through curl commands, you can actually see some of the details of the pod that responds to those requests — the namespace and the pod name and so forth. So when you start doing traffic routing, it gives a good illustration of traffic routing scenarios. Really quick, I'm just going to show you again what we have installed from the demo that I just played. So we can see we've got our GatewayClass right here, and that has the controller name letting Envoy Gateway know that it is responsible for managing this one. Then I'm going to quickly show you the Gateway as well. You can see that once it was created, it got assigned an address, after the Service it spun up got a load balancer. And really quickly, I'm going to take a peek into this Gateway. The main things to pay attention to here are the attached routes. We can see right now we're just using HTTP, and that is for the HTTPRoute that we just created as well. And the status is really key here, right? Because, you know, the infrastructure admins go ahead and deploy their GatewayClass and Gateways and are able to see that, okay, it's ready to go. Same with your dev admins, or your developers: they just go and list Gateways and say, oh look, here's the Gateway that I can attach to. Let me just start creating routes.
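A minimal HTTPRoute along the lines of the one in the demo might look like this — the backend Service name and port are hypothetical; the quick start routes to an echo server:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: backend
spec:
  # Attach to the Gateway the infrastructure team created.
  parentRefs:
    - name: eg
  # Optional: leave hostnames out entirely to match all hosts.
  hostnames:
    - "example.com"
  rules:
    - backendRefs:
        # Hypothetical backend Service receiving the traffic.
        - name: backend
          port: 3000
```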
And there's the deployment status of the Envoy proxies along with a Service. We plan on expanding all the plumbing that goes into surfacing status for Gateways, so keep an eye out for that as well. For sure. So we read all these resources and process them, and if there are any errors with them, any issues, we report that back in the status of each resource. So you can tell not only that there's a problem — you don't necessarily have to dig into the logs to determine what it is. The status has a lot of useful info about, maybe, certain fields that aren't supported at the moment, certain things that you might have misconfigured, or any other issues that it was able to detect. So this is our HTTPRoute really quick here. You can see that this one is only accepting traffic for hostname example.com, so any requests that are not for example.com will not work. But we're going to use a curl command really quick to test traffic to this. So I'm just going to scroll down here to section four, sending requests — that's for when your cluster supports external load balancers, right? So scroll down some more. And it's going to be right where your mouse is — go to the right, down. There it is. Double click on that. Thanks. I can't read it. Yeah, no worries. So that's just setting the environment variable for the status address of the gateway that was shown here, which is 34.121.122.105. That has an issue. There is — let's see here — "arguments in resource/name form must have a single resource"? We've got a bug in that stuff. Oh, we need the Envoy service. So I see: we do still need the export of the Envoy service. Can you just bring this back to your — yeah, maybe just slide it over, go through the commands, and then you can kind of just scroll back. Give us one second. It's a little challenging trying to do this where you can't see what you're working on. Oh, I see. I was copying the wrong command. Yes.
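For reference, the steps being run here boil down to something like the following — a simplified sketch: the demo also exports the name of the generated Envoy Service, and the resource name and jsonpath below are assumptions based on the quick start:

```shell
# Grab the address the Gateway reported in its status.
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

# Send a test request. The Host header must match the route's hostname,
# and 8080 is the listener port configured on the Gateway.
curl --verbose --header "Host: example.com" "http://${GATEWAY_HOST}:8080/"
```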
The first thing I was trying to do is just grab the service, and then the next thing I'm going to do is grab the host of the gateway. All right, and the last thing we want to do is just send a curl request to test out the gateway. So in this curl request we're passing in the Host header for example.com, since, like I mentioned, this is not going to accept requests for anything other than example.com. Additionally, as we saw in the gateway that I showed back in the slides, we've got 8080 set as our port, same as this one. So when we make that request, we're going to make the request to port 8080. But you can of course change that to 80, or 443 if you're doing TLS traffic. And this is just a request to the echo service, which is going to print back the request headers and information about the request, but it's also going to be able to tell us which pod served the request, and a little bit more info. So I am really quickly going to jump back over to the docs and see if I can show you guys the TLS traffic working. Really quick, Alice, because I see that we're getting the one-minute signal: the other part of the demo we were going to do is just patch the gateway and add HTTPS termination on that gateway. It's one command. Basically, what that gateway can then do is terminate TLS traffic, and it uses a secret that a user — typically your infrastructure admin — would create, which is used to terminate that TLS traffic. But go ahead and please take a look at the documentation and give it a try yourself. Yeah, so we had a little bit of friction with the demo, but really quick before we finish off, I just want to go over some of our contributors to Envoy Gateway. So far we've already got a lot of contributions from many different people. We've got a lot of expertise on this project, people coming from various different projects. I work on Emissary-ingress.
We've got people working on other projects as well, like Contour, people with experience on Istio, as well as people who are working on Envoy itself. So we've got a lot of people working on making this the best solution for an API gateway, bringing in a bunch of experience from people who have learned lessons building out these API gateways. With Emissary, when we got started on that, there were a lot of issues and design decisions that weren't great, because we didn't really know what we were doing. A big goal with Envoy Gateway is to combine all that experience and build something from the ground up on the lessons we've learned. And really quick — we might have time for questions. Man, I know they're telling us to stop. Do we have — okay, one question. Yeah, we'll take one question, and then afterwards Alice and I will be right here; you're more than free to come ask us. Go ahead, you in the back. Yeah, so like Dan mentioned, there are going to be multiple extension points we're looking at in the future, specifically for auth stuff, but we're also going to be looking at adding an xDS patching mode, so you can have sort of more direct access to the underlying config, for people who kind of outgrow the needs of Gateway API. If you are not familiar with the API, I encourage you to go look at the docs for it and check out all the different things you can do with it, because its config actually maps surprisingly well to xDS config. So you can actually accomplish a lot with the CRDs, but we definitely know that'll be a limiting factor for some people, so we are already planning out ways to extend the functionality. Yeah, we have an issue defined for this kind of native xDS mode. We're not going to try to just duplicate everything that xDS does already, so we're focused more on that initial-to-intermediate use case.
And when you get to a point where you've got a lot of advanced functionality going on, you can use the escape hatch, that xDS mode, so that we're not duplicating everything that xDS has to offer. All right, well, thank you. I appreciate it.