I'm Shane Utt from Kong, and I am one of the Gateway API maintainers. And I'm Rob Scott from Google, and I'm also one of the Gateway API maintainers. Today we're going to cover everything about Gateway API: why you might want to switch from Ingress to Gateway API, and exactly how you can do that. So let's do a quick overview of these APIs and take a look at what we're talking about. The Ingress API, which everybody's familiar with, is pretty simple. It can do host and path matching, forward to a Service, and handle TLS configuration. It's been around for a long time, and there are lots of implementations, 22-plus at this point. But the Ingress API has some limitations. It doesn't have enough features, and that ended up leading to custom extensions everywhere. The biggest problem was extensions that weren't portable: traffic splitting, header matching, and sticky sessions were some notable ones. It became an annotation Wild West, and it had an insufficient permission model. Gateway API intends to be the next generation of Kubernetes routing and load-balancing APIs. It's designed to be expressive and role-oriented. We have 15-plus implementations right now, and we graduated to beta in July. If you take a look at the diagram on the left, you'll see that Gateway API has three main kinds of resources. We have GatewayClass: if you're familiar with the IngressClass resource before it, GatewayClass is nearly identical. Then we have HTTPRoute and a bunch of other route types at the bottom. We'll focus on HTTPRoute because it's the most similar to Ingress, and we'll show you very soon that HTTPRoute and the Ingress resource are also very similar. Gateway is a new concept in this API, and we'll talk about it a little later. Now, Gateway API has a ton of features.
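For reference, a minimal Ingress exercising the capabilities just listed, host and path matching, TLS, and forwarding to a Service, might look something like this (all names here are illustrative, not from the talk's slides):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx          # which implementation handles this Ingress
  tls:
  - hosts: [example.com]
    secretName: example-com-tls    # TLS cert stored as a Secret
  rules:
  - host: example.com              # host matching
    http:
      paths:
      - path: /                    # path matching
        pathType: Prefix
        backend:
          service:                 # forward to a Service
            name: web
            port:
              number: 8080
```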
I know we've had feature requests throughout Kubernetes networking for many of these things, whether it's request redirects, rewrites, header matching, or method matching. There are a lot of things in here that we've had feature requests for for a long time, and Gateway API finally allows you to do many of them. But some of you may say, hold on, I can do this with Ingress. There are a few things you can do with the Ingress API today, but everything in that blue box is entirely new with Gateway API. So there's a lot that's net new in Kubernetes because of Gateway API. You may be wondering how on earth we can provide all these new features when not everyone can implement everything. To accomplish that, we introduced the concept of conformance levels. This is a very important concept to understand in Gateway API. Every feature, every field in Gateway API is assigned one of these conformance levels. To start, most of our features and fields are at the Core conformance level. This is similar to Ingress: every single implementation has to implement that field or feature, and do it in a conformant way that passes our conformance tests. But we recognize that not every implementation can implement everything, so we have another level called Extended. An example of this is header matching: where prefix matching might be Core, header matching is something not quite everyone can support. We have conformance test coverage for it, and if your implementation can support it, we document that well. But otherwise, we recognize that not quite every implementation of the API is going to be able to cover it. And finally, we have a Custom category, and there's very little in it. But is anyone here familiar with regular expressions? Has anyone dealt with regular expressions? Okay. Yeah. They're a special case. With regular expressions, you may realize there's some variation depending on the underlying implementation.
So whether you're talking to NGINX's version of regular expressions, or Envoy's, or HAProxy's, or whatever cloud you're using, the variant of regex supported is going to be slightly different. Custom means we understand this concept, but it may not be completely portable across implementations, so you have to understand what is actually implementing it. Now, I promised I'd explain Ingress and HTTPRoute and how they parallel each other, so let's dive a little deeper and look at them side by side. First off, we're going to look at the simplest Ingress and HTTPRoute resources I can think of. We'll walk through them step by step and see the parallels. First, in Ingress we have the concept of an ingressClassName. In this example, our Ingress is being implemented by NGINX. In HTTPRoute, what we have is the parentRefs concept, which says: hey, attach this HTTPRoute to the Gateway called nginx. You can attach HTTPRoutes to more than just Gateways, but we're focusing on Gateways for now. And I know many of you don't even know what a Gateway is yet; we'll get there, I promise. For now, we just point up to a thing called the nginx Gateway. Next, we want to do a prefix path match on /login, and you can see these APIs look nearly identical. And finally, they're both forwarding to the auth Service on port 8080. So starting off, you can see there are a lot of similarities here. But let's extend this example a little. Let's say we want another implementation to serve our HTTPRoute. In Gateway API, that means I want not just the nginx Gateway but also the contour Gateway to implement this HTTPRoute. In that case, you just attach another parentRef. You're saying: not only do I want the nginx Gateway, I also want the contour Gateway to implement this.
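A sketch of that multi-gateway HTTPRoute, assuming both Gateways live in the same namespace as the route (resource names assumed):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: auth-route
spec:
  parentRefs:            # one route, two implementations
  - name: nginx
  - name: contour
  rules:
  - matches:
    - path:
        type: PathPrefix # prefix match on /login
        value: /login
    backendRefs:
    - name: auth         # forward to the auth Service
      port: 8080
```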
Now with Ingress, as you may be familiar, you'd need to create an entirely new Ingress resource to do this. So instead of two Ingress resources, it's a single HTTPRoute with just one additional parentRef. Now let's say you wanted to add another path match. For example, in addition to matching paths that start with /login, maybe we want an exact match on /auth. This is pretty straightforward in HTTPRoute, just a couple of lines. You can do the same thing in Ingress, but again, it's a little more verbose. You may see a pattern here: the same things are possible, but they're a little simpler with Gateway API. Now let's add a host match: say I only want to match requests to example.net. Again, this starts out very similar, but some of the people paying extra close attention may see where I'm going with this one. Let's see what happens when we try to add another host match, so we match both example.net and example.com. With HTTPRoute, you just add one item to the list and you're done. With Ingress, you add an entirely new rule. So again, it's one step simpler: one item in a list versus an entirely new rule. So Rob alluded to Gateways via parentRefs; parentRefs point to Gateways. In Gateway API, Gateway is an actual resource. We have GatewayClass, which is like IngressClass: it tells your controller which routes it's responsible for. And as we just discussed, Ingress and HTTPRoute have similarities, but we now also have Gateway, a Kubernetes resource. This lets you represent your load balancer or your proxy as a Kubernetes resource. You can define your listeners and addresses on it, attach your routes to it like you saw before, and the configuration stays the same across implementations. In this example, we have a basic Gateway set up with an HTTPS listener: HTTPS on port 443 for the hostname example.com, with your TLS config, and routes can attach to this listener.
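That Gateway might be written roughly like this (the gatewayClassName and the Secret name are assumptions, not from the slides):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class   # which implementation provisions this Gateway
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: example.com
    tls:
      certificateRefs:
      - name: example-com-tls       # Secret holding the cert, same namespace
```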
There are a couple of common types of implementations today. I represent Kong, an in-cluster implementation. So we deploy a Gateway resource, and that ultimately results in the proxy, the gateway, being deployed in a pod on the actual cluster. The addresses are usually provisioned with a Service of type LoadBalancer. This behaves the same way everywhere and is portable; it works on any Kubernetes cluster. And I work at Google on GKE's implementation of Gateway API, which is representative of a cloud provider implementation of this API. When you create a Gateway with a GKE GatewayClass, what happens is we provision a cloud load balancer behind the scenes to represent that Gateway. In the case of GKE, we have an ILB gateway and an XLB gateway, depending on what you're most interested in. The advantage of a cloud provider implementation is that you can load balance directly from your cloud LB to pods without any kind of intermediate hop. Of course, because it's a cloud provider implementation, it's usually limited to clusters managed by that cloud provider. Talking a little more about route attachment: routes attach to Gateways with parentRefs fields, as we saw. The same route can be attached to multiple Gateways, so if you need to migrate between implementations, or you need two implementations to serve the same route, you can do that. In this case, we have nginx and contour both serving the same route. Gateway owners, the ones who create the Gateway resources, can specify where those routes can come from. This is our concept of allowedRoutes. Developers can attach their routes to Gateways in different namespaces. In this example, we set trusted namespaces for attaching routes to a Gateway. There are two listeners here, both just doing HTTP. store.example.com only allows attachment of routes from namespaces labeled store.
And api.example.com only allows attachment from namespaces labeled api. This is optional, because same-namespace attachment just works by default, but it's something you can do to make that distinction. So let's walk through a couple of advanced examples. As I said, we get a lot of feature requests. I used to be an end user of Kubernetes, though I may have gotten a little detached since. Back when I was using Kubernetes more, I thought canary routing, or traffic splitting, was a pretty common thing I wanted to do. Anyone else want to do traffic splitting? Anyone? Okay. Good. We put a lot of effort into this, so I'm glad it's something people will appreciate and use. Throughout these two examples, I'm going to show you how it was possible with Ingress today, thanks to some very creative implementations that built things on top of the Ingress API, and then how we've made it easier with Gateway API. First, to understand what canary routing or traffic splitting is doing here: the idea is I want to match requests to example.com and send 90% of traffic to my prod service and 10% to my canary service. You can think of many different variations of traffic splitting. I want to highlight ingress-nginx, which has a great implementation, about as good as you could possibly do with the Ingress API. As we alluded to, annotations were the way you extended the Ingress API. So you have one Ingress resource that sends your traffic to production, and then you create a second Ingress resource that sends 10% of your traffic to your canary endpoint, using those annotations. This was possible with some creative implementations of the Ingress API, and we tried to make it easier, native, and portable with Gateway API.
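The ingress-nginx approach sketched above takes two Ingress resources. The two annotation names below are ingress-nginx's actual canary annotations; the service and host names are illustrative:

```yaml
# Primary Ingress: production traffic
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prod
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prod
            port:
              number: 8080
---
# Second Ingress: annotations divert 10% of traffic to the canary
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary
            port:
              number: 8080
```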
Instead, you have a single HTTPRoute with traffic splitting built in. You just say: my canary service should get 10% of my traffic, and my production service should get 90%. So it's a little less configuration than you'd need with Ingress. All right, another advanced use case: some organizations, especially the larger ones, did not feel comfortable having their TLS certs in the same namespace as the rest of their configuration, their application, their Ingress, et cetera. Some implementations had a creative solution to this. They said: hey, you can put your secrets in a different namespace. I'm going to highlight what Contour did. Contour has an annotation on Ingress that lets you say your certs actually live in a different namespace. That secret name down there, test-secret-tls, actually references a secret in the prod-certs namespace, a completely different namespace. This is a powerful concept, and we wanted to make it easier to do with Gateway API itself. So our certificate references on Gateway just have a namespace field. That sounds dangerous, Rob. You're right. Some of you in here may have seen this and thought: hold on a second, this doesn't look quite right, this doesn't seem safe. And if you had those eyes of caution, good for you. I understand this is something that does not seem good on the surface, because just being able to reference any secret in any other namespace without any safety mechanism is probably not the best. So in Gateway API, we built something to enable this safely: a handshake mechanism. We created a resource called ReferenceGrant, and this allows you to complete the handshake. You have one resource, in this case a Gateway, saying: I want to reference this secret in another namespace.
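The cross-namespace certificate reference on a Gateway listener looks roughly like this (namespace and class names are assumptions); without a matching ReferenceGrant in the target namespace, a conformant implementation should reject the reference:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: gateway-infra          # namespace name assumed
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: example.com
    tls:
      certificateRefs:
      - name: test-secret-tls
        namespace: prod-certs       # the Secret lives in a different namespace
```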
And then you have a ReferenceGrant in that namespace that says: I own this secret, and I trust references from Gateways in that other namespace. It's basically a way to complete that two-way handshake and ensure that both parties, the owner of the secret and the owner of the Gateway, agree that the reference should take place. We use this same model in a few places in the API, and it enables some really powerful use cases. So you're probably wondering: should I switch? Maybe not. If the Ingress API is doing it for you, if you have everything you need there, you may not want to switch. But one important thing to realize is that Ingress is really not going to get new features; it's essentially complete. Gateway API, on the other hand, is going to get a lot of new features. It's more expressive, it's extensible, it's portable. And in addition to that, we're also going broader than Ingress: we're working on use cases for mesh and L4 load balancing. We've graduated to beta, so it's ready to use. And because it's a CRD-based API, you can install it on any cluster running Kubernetes 1.16 or greater. If you're using something older than 1.16, we'd like to hear from you afterwards; we're genuinely curious what's going on. To get started, like I said, it's delivered as CRDs. You just deploy the CRDs, pick an implementation, and follow our guides. They all work pretty much the same: you'll deploy a Gateway, create HTTPRoutes, and attach them, everything we just showed you. Here are several great implementations. This isn't an exhaustive list; you can find the full list on our website's implementations page. Pick the one that's right for you, and our guides should walk you through the rest.
Now we're going to do a little demo using the two different types: the in-cluster gateway and the cloud provider gateway. If you got this far, you may have read our talk description. One of the things we promised is that we'd show you everything from start to finish, from creating a cluster all the way to using it. We want to show you exactly all the steps involved; there aren't many to use this API. In this case, I'm going to use a GKE cluster, but of course any Kubernetes cluster will work. On GKE, we offer an option to manage these CRDs for you, but again, you can just install them on any Kubernetes cluster and the same thing will work. To have us manage the CRDs for you, you just add a gateway-api flag when creating or updating a cluster, and we'll run through that process now. I've sped this up a little; as you can see, it usually takes a few minutes for a new cluster to come up, and I didn't want you all to wait through it. At the end, we'll have a new cluster. I'm using GKE 1.24, a simple cluster, nothing too fancy. Now we want to get our GatewayClasses, because when we enable Gateway API on GKE, we also bundle a couple of GatewayClasses for you. In this case, the GXLB class, our global external load balancer, and the regional internal load balancer are available by default on these clusters. Now we can apply a few base resources we'll use throughout the rest of the demo: a Service and a Deployment for the v1 version of this application, and the same for the canary version. The application in question is just echoserver. If any of you have worked with Ingress or anything else, it's a really easy way to print out information about the request the pod received, plus the pod name, namespace, and some other useful metadata.
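The base resources would look something like this, one Service/Deployment pair per version; the names and the echoserver image tag are assumptions, not taken from the demo's actual manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: store-v1
spec:
  selector:
    app: store-v1
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-v1
spec:
  replicas: 2                 # two pods per version, as in the demo
  selector:
    matchLabels:
      app: store-v1
  template:
    metadata:
      labels:
        app: store-v1
    spec:
      containers:
      - name: echoserver
        image: registry.k8s.io/e2e-test-images/echoserver:2.5   # image assumed
        ports:
        - containerPort: 8080
```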
With all that, let's actually create a Gateway. It's a really simple Gateway: a GKE gateway using our internal load balancer in this case, listening for HTTP on port 80. We'll create that Gateway, and in a little bit we'll get an IP we can make requests to. Again, I've sped this up, because it takes approximately three minutes to get a new gateway up and running. Eventually you'll see we have some IPs we can make requests to. But this on its own is not very useful, because we just have a Gateway, and a Gateway doesn't know where to route things; it's really just the entry point to the system. So let's create an HTTPRoute so it has something to route to. In this case, we've got an HTTPRoute that points to store-v1 on port 8080. And you see that parentRef there? It's attaching to the GKE Gateway we just created. We're going to be reusing this HTTPRoute a bunch, but this is our starting point. So we'll go ahead and apply that HTTPRoute, and it's good to go. As you may have noticed, I said I'm using an internal load balancer for this demo, so I actually have to get somewhere inside the network. The easiest way is to exec into a pod and run a few curl requests. And you can see we're hitting a store-v1 pod. Each Deployment has two pods, so if we make a few more requests and grep for the pod name, you'll see we're being split between the two pods. So at Kong, we're working on the Kong Gateway Operator. This is in technical preview right now in our Kong incubator. It's an in-cluster implementation, which means the gateway, the proxy, actually runs in the cluster. However, you'll see that the APIs all work pretty much the same as GKE's. Unlike GKE, we'll have to deploy everything into the cluster: our CRDs, our deployments and so forth, then wait for the deployments to be ready. And then this is our operator.
Sorry, I messed that up. Okay, we'll wait for our operator pod to come up. The operator is responsible for watching the GatewayClass we talked about earlier, building Gateways for it, and attaching routes to them. And yes, the pods are running. So, a GatewayClass, like we said, is pretty simple; this is very similar to the ones in GKE. It just tells the operator to be responsible for these Gateways and the routes attached to them. We'll apply the GatewayClass. It doesn't come with the cluster, of course, because this is the portable implementation. Once the operator sees it, it picks it up and marks it as accepted: I'm ready to serve, I'm ready to create Gateways. The Gateway is almost exactly the same as what GKE had, except with the kong gatewayClassName. Same thing, we just apply that and wait for the Gateway to be ready. Behind the scenes, this is, as I just said, actually creating the pod on the cluster via a Deployment. Once it's ready, you'll have a load balancer IP address you can use to make requests to the gateway. At this point, this is a gateway with no configuration yet, so that's just the default response. Earlier we set up the store-v1 pod and so forth and routed traffic to it with an HTTPRoute; in this case, we're just going to add the Kong gateway to that same route, keeping the routes and everything we already had. Okay, so it's still empty; we hadn't configured it yet. We update the HTTPRoute to add Kong as a parent, so it's also serving that traffic. And then you can see we're getting the same backend: store-v1. So, traffic splitting. We talked about that a little earlier.
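A 50-50 split on that shared route can be sketched like this, with both gateways as parents (the Gateway and Service names are assumed, not taken from the demo's manifests):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store
spec:
  parentRefs:
  - name: internal-http   # the GKE Gateway (name assumed)
  - name: kong            # the Kong Gateway (name assumed)
  rules:
  - backendRefs:
    - name: store-v1
      port: 8080
      weight: 50          # 50% of traffic to v1
    - name: store-canary  # canary Service (name assumed)
      port: 8080
      weight: 50          # 50% of traffic to the canary
```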
We're going to update the same HTTPRoute, which both implementations are serving, and say we want a 50-50 weight between the two backend Services. So store-v1 and the canary should each get 50% of the traffic. We update the HTTPRoute to do that and start making requests, and as you can see, Kong's implementation is going 50-50, hitting the different backends. Now we'll step back a minute and make the same requests, but against the GKE gateway IP, and you'll see we get the same responses. So again: two different implementations of the API attached to the same HTTPRoute, and everything works. We just demoed two, and I think there are 15 or more implementations of this API; they'd all work the same way. Again, the goal here is a portable experience no matter what underlying implementation you're using. If you read our talk description very carefully, you may have noticed we promised a demo of cross-namespace traffic splitting. I'm sorry; we write these CFPs like five months in advance, and that's a long time. We thought we'd have this out of experimental by now; it's like two weeks away, we just missed the timeline. This is moving from our experimental channel to standard in a couple of weeks. We know what it does and how it works: it's that ReferenceGrant idea we demoed earlier. You can use the same concept again, but to reference a backend in another namespace. So far we've been traffic splitting within the same namespace; that's the default, the easiest thing to do. But there are some legitimate use cases where you want to traffic split to another namespace, and this enables that. In this case, you create a ReferenceGrant that says: I trust references from HTTPRoutes in the stable namespace to my Services in the namespace I'm in, the canary namespace.
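A sketch of that ReferenceGrant: it lives in the namespace being referenced (here, canary), and depending on your installed Gateway API release this kind may still be served at v1alpha2 rather than v1beta1:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-stable-routes
  namespace: canary                   # grants live alongside the referenced resource
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: stable                 # routes in "stable" may reference...
  to:
  - group: ""                         # ...core-group Services in "canary"
    kind: Service
```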
So again, that's just another powerful use case of ReferenceGrant. Now I'm sure some of you are saying: hold on, this looks like a ton of work, yet another new Kubernetes API to learn. I'm sorry. But we are trying to make this transition a little easier, so we've created a tool called ingress2gateway. Just out of curiosity, how many of you have Kubernetes clusters right now with a bunch of Ingresses running in them? Okay, great. I would love it if you would try this out. The whole idea is that this tool looks at your cluster, whatever's currently in your kubeconfig, gets all the Ingresses in it, and tries to print out equivalent Gateways and HTTPRoutes for you. This is still early in development; it works for me on my cluster, but we'll see. If it doesn't work for you, please let us know; we're trying to make it better. Again, we're trying to make this as easy as possible. And as Shane said earlier, you may not need to move from Ingress to Gateway API, but if you're interested, we'd love to get your feedback on this. Yes, please. So what's next for Gateway API? Service mesh, as I mentioned earlier, is one of the bigger ones. We have what's called the GAMMA initiative, where we're looking at taking these APIs, like HTTPRoute, and seeing how they can apply to the service mesh use case. GatewayClass, Gateway, and HTTPRoute all recently went to v1beta1, and we're trying to get to v1 very soon. TCPRoute, UDPRoute, and TLSRoute need more feedback and implementations, but we're hoping to get them to beta quickly. GRPCRoute was just released as alpha, so go check that out. And ultimately, one of our big stretch goals is for Gateways to be an alternative to a Service of type LoadBalancer. If you want to get involved, we have two weekly meetings.
The one every Monday is our regular Gateway API meeting, covering ingress and so forth, and the Tuesday meeting is GAMMA, the mesh-related meeting. We'd like anybody who wants to come to come, all backgrounds, even if you just want to listen, but we'd certainly like feedback and ideas about how to move these things forward. Our website is gateway-api.sigs.k8s.io, we're in #sig-network-gateway-api on Kubernetes Slack, and gateway-api is our repo under kubernetes-sigs. One last call-out before we get into questions; I forgot to put this in the slides, but Keith is here, and John is maybe here, I'm not sure. If you're interested in the mesh side of this, there's another talk at 11 a.m. tomorrow, I think. Look for it; really interesting things are happening with this API. We focused on the ingress side, but it's much broader than that. With that, any questions? [Audience] Have you given any thought to multicluster in Gateway API? Yes. I know at least one implementation supports that: in GKE we combine this with multi-cluster Services, so one of the things you can forward to, in addition to the Services we focused on, is a ServiceImport, building on top of the multi-cluster Services API for multi-cluster gateways. [Audience] You showed how in the GKE example, GKE took care of setting up the load balancer. Shane, I'm wondering, for Kong, you have a GatewayClass defined. When you create a Gateway, does the operator do anything with, say, creating a Service of type LoadBalancer, or is that still left to the cluster operator? Yeah, the operator does that under the hood. It's supposed to be opaque to you unless you care. [Audience] So are you able to influence things like whether I want an NLB, those sorts of cloud-provider-dependent things?
Yeah, we're at that level. [Audience] Okay, thank you. One of the big advantages of Ingress, of course, is that it comes built into Kubernetes when you first install it, which is helpful for bootstrapping. Are there any future aspirations to have Gateway API also come with Kubernetes, rather than as a CRD? That's a good question, and I know it's been discussed a lot. But from my perspective, the goal is to keep the Kubernetes project from growing even wider. There's already so much in the core Kubernetes repo that we're trying to pull things out. Most people, but not everyone, using Kubernetes need something like Gateway API; not everyone needs load balancing in their cluster. So this is something you can bolt on as basically an extension of your cluster, and that's the direction a lot of new APIs are going. AdminNetworkPolicy is following the same idea, and I think a lot of future Kubernetes APIs are going to be developed this way. [Audience] Do you know if there is support for HTTPRoute in ExternalDNS? I've lost track of that. I know there's been work with ExternalDNS and Gateway API, and it has some support, but I can't remember the current status. Me neither, sorry. [Audience, in the back] For traffic shifting, is there an idea of using the scaling API or something to make it easier to drive from KEDA or some other traffic monitoring tool, so traffic can shift automatically rather than by setting configurations? Like a sub-resource. If I'm understanding correctly, are you talking about something like draining a zone and then spilling over? You're trying to attach that to the traffic splitting? Is that specific to the Kong operator? No, I think he's talking about, and I could be wrong...
I think you might be talking about whether there's a way in Gateway API to express attaching weighted traffic splitting to scaling your deployment. I don't think we have anything like that in the API today, a way to express that. Something like it may have come up in the past, but come to the meetings, get on GitHub with us, and tell us about it, because we'd love to see if we can fit in a use case like that. Sounds cool. Yes? [Audience] Progressive delivery, basically the traffic shifting part of it, but progressing over time. How are you envisioning doing not just a 10% canary? What if you want to go to 100%, if you want your canary to become production? Yeah, I think today we're leaving that up to operators to figure out, as opposed to making it something we express in this API, but we're open to it. If it's something that can be generally applied across most implementations, and we also have Extended and other conformance levels to capture things that can't be done everywhere, we'd be open to putting something like that in. I'd also recommend watching the KubeCon talk from yesterday, if you missed it, by Sanskar, who's over there, demoing how to use this API with, in that case, Flagger: something that helps automate that transition process for you. I think those kinds of automated transitions are going to require something on top of this, like Flagger. I can't remember if Argo supports this API yet; I thought they were working on it. As you're heading out, by the way, if you wouldn't mind leaving feedback for us, we'd really appreciate it. Yes, please.