Okay, let's get started. Hello everyone, thank you very much for joining us for today's CNCF webinar, The Evolution of Ingress Through the Gateway API. I'm Jerry Fallon and I'll be moderating today's webinar. We'd like to welcome our presenters today, Kaslin Fields, Developer Advocate at Google, and Bowei Du, Staff Software Engineer at Google. Just a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There's a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I'll hand it over to our presenters for today's webinar. Hello and welcome, everyone. As you mentioned, this is The Evolution of Ingress Through the Gateway API. Like you said, I'm Kaslin Fields and I'm a Developer Advocate at Google. I focus on DevOps, Kubernetes, containers and all sorts of things, and I'm also a Cloud Native Computing Foundation Ambassador. And today I have with me... Hi, I'm Bowei, one of the leads on Kubernetes Networking and GKE Networking at Google. Cool, so let's get started. So today we're gonna be talking about Gateway API, and I wanted to give you all a little background on how I found out about this. I am a Kubernetes kind of person, so I'm in all of these Kubernetes groups and things, and Gateway API was one of those things that I heard about at some point, and I was like, wow, I would love to learn more about that.
So I learned about Bowei and the team creating Gateway API and I started learning about it. So what I'm gonna share with you all today is what I've learned so far. I'm gonna give you all a little bit of a demo and Bowei's gonna jump in a little bit deeper. So to start off with, what is Gateway API and where does it come from? So in terms of Kubernetes Ingress and the ways that networking and traffic within Kubernetes are handled, traditionally these types of roles have been a single type of role, very self-service in style. But over time, as more and more companies are adopting the DevOps methodology for dealing with Kubernetes and adopting cloud native technologies, this is changing. There are now many roles that touch this space, and so the permissions and the ways that people interact with these tools are changing as these roles diversify. So there are several tools in this space. There are cloud load balancers, which I'm sure most people are familiar with, and there are middle proxies like nginx and envoy and these types of things. Traditionally these categories of tools have been more separated, but as the DevOps methodology takes hold and these roles are transforming, these tools are also transforming, and there's more and more overlap between their feature sets. So the landscape around Ingress and networking has been evolving. So the goals of the Gateway API project are to better model the personas and the roles that are involved with services and load balancing. Like I mentioned, they've been changing with the adoption of DevOps, and we want to support modern load balancing features while maintaining portability, or maybe predictability. We'll talk more about what that means in a little bit. And we wanna have standard mechanisms for extending APIs in this space.
So that as the API grows and as vendor-specific behaviors are needed and implemented, everything is kept consistent and standard. So how is Gateway API going to accomplish these things? For dealing with personas and roles, you have to incorporate that into the resource model of how you do Ingress, how you create load balancers, and how different roles use those resources. So part of that is the resource model, and part of that is RBAC, role-based access control. And if you're familiar with Kubernetes, or lots of other things, you probably have some familiarity with RBAC, but I'll go into a little bit more detail on that in a minute. In terms of modern load balancing features and making sure that they stay portable and predictable, the Gateway API is gonna do that with levels of support, creating a specification for everything to conform to, and doing things like conformance testing. And I'll talk more about that. And for standard mechanisms for extension of the API, part of that is in the resource model, and there's a polymorphism-like strategy that the team has adopted here, which I'll talk more about again. So, did you have something to add, Bowei? Nope. Okay, sorry. So, modeling. Let's first talk about services, Kubernetes services. A service as a resource handles a lot of jobs. If you are just getting started with Kubernetes, the first exposure that you get to services is probably as a method of exposure for your applications that you're running in Kubernetes. So there are a variety of methods for doing that. You can have ClusterIP-type services, NodePort and LoadBalancer-type services. And I know in some of the early tutorials I did with Kubernetes, you'd actually do like kubectl expose on whatever your pod was. And that is how you enable sending traffic to that application, how you connect it to other applications. So services are a method of exposure.
Another important point that sometimes people forget about or don't think about enough is that services are a method of grouping backends. A workload is often not just a single pod, it's often several pods altogether. And so you give those all selectors, and the service connects back to all of those different selected pods. So it's a grouping of backends. And it's also a way for you to place attributes on your service. For example, you can describe how to handle traffic for the service with externalTrafficPolicy, and then there are things like session affinity. So services have a lot of jobs, and as such, they can be kind of complex to handle. So there's this concept of Ingress that Kubernetes introduced, and we're introducing Gateway API to separate out the chunks so that they make more sense with the different roles that have to deal with services. So let's go over those roles that we keep talking about. The three roles that we're gonna focus on here are, first, the infrastructure provider. This is gonna be whoever is providing the infrastructure for cluster creation. This could be a cloud provider or, for example, an internal platform-as-a-service team. There's also the cluster operator or cluster administrator, network operations, site reliability engineer. These types of roles do cluster management once the cluster has been created. They're responsible for overall policies, for example, what services to expose to the internet. Very important in terms of security especially. And then of course there's the application developer, who is actually building the applications, and thus they have some stake in the services that expose those applications and some stake in defining traffic routing to those services. And this is how we think about these roles. Now I mentioned RBAC earlier. I wanna give a brief overview of what Kubernetes RBAC means, or what RBAC means in general. RBAC is a way of describing roles using clauses.
So an RBAC rule has a user, a role, a verb and a resource. Our example here is Alice, our user, who can act as a cluster operator, which is our role, with permission to update (the verb, the thing that Alice can do) the configuration of a gateway (the resource that Alice can do this action to). So the important thing here is to remember that there are many different types of roles, different personas, that need to be able to do different things. They need different permissions. So this is a way to control that. And we need to divide up the resources so that roles can be given different types of access depending on what they need to do. And I wanna mention that the Kubernetes API implements RBAC, so Kubernetes APIs mostly have the same set of verbs. And so if you're familiar with Kubernetes RBAC, you'll know a little bit of this already. And this will relate back to the way that Gateway API is implemented, which you'll see. So I wanna talk for a second about Ingress. Kubernetes has this concept of an Ingress class and an Ingress resource. And the Ingress resource in Kubernetes is more of that self-service model I mentioned, from before the roles started changing over time. So we think this is not as representative of what these roles are doing now that DevOps has started to become the way of things. So Ingress is a self-service model. The Ingress class is the part that's created by the infrastructure provider, if we're talking about those roles, and then the application developer manages both the Ingress and the service. And Ingress is limited to fairly simple L7 descriptions. So it kind of ignores the cluster administrator altogether here. This is arranged differently in the Gateway API model, and it's all centered around those roles that we just talked about. So the idea here is to decouple along the role concept. So the first one, we talked about infrastructure providers.
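As a concrete sketch of what that Alice rule could look like as Kubernetes RBAC objects: the role and binding names here are made up for illustration, and the API group is the alpha-era Gateway API group, so treat the exact strings as assumptions rather than a definitive setup.

```yaml
# Hypothetical sketch: Alice, bound to a cluster-operator role,
# may read and update Gateway resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-operator          # illustrative name
rules:
- apiGroups: ["networking.x-k8s.io"]   # alpha-era Gateway API group
  resources: ["gateways"]
  verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-cluster-operator    # illustrative name
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-operator
  apiGroup: rbac.authorization.k8s.io
```

The same pattern, with different verbs and resources, is how the other personas (application developers on routes, infrastructure providers on gateway classes) would be scoped.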
They are gonna deal with the gateway class, which basically sets up a template: these are the types of load balancers that you can have in Kubernetes for your various services. Then the cluster administrator will take those templates and actually implement them as actual resources in Kubernetes, which are called gateways. And the application developer cares more about routes and services. So this is a little bit closer to what they actually care about. It's all about the types of traffic that are coming in and where that traffic goes, making sure that it's going to their services in the way that they want it to. So in terms of concepts, the gateway is all about exposure and access. It's kind of a load balancer. It's taking in traffic, and then the routes define where that traffic is going to go. They're all about routing and protocol-specific attributes. And then the services are, like we said, about grouping pods, dealing with backends and allowing for that type of selection. So here's a look at what the API looks like. You'll see this several times, and it'll make a lot more sense in the demo if you wanna check that out. So there's the gateway class, which I mentioned is kind of a template, and there's the gateway itself. And then we've got these two HTTP routes that are defining where traffic goes, and then they go to the backend services. So you can get an idea of what this flow looks like, and Bowei is gonna go into more detail on that in a minute. There's one more thing I wanted to talk about here, which is portability and predictability. I used these words earlier. What does that really mean and how is it being implemented in Gateway API? So one challenge here, and one thing that a lot of people have been really excited about with Kubernetes for example, is custom resource definitions. You have a very specific use case and you want Kubernetes to do something very particular, so you implement a custom resource definition.
The challenge is that sometimes the ways that these are implemented aren't very standardized, and so they can be kind of challenging to deal with. So in developing the Gateway API, we know that it's gonna grow over time, and we wanna make sure that extensions to that API make sense, that they're predictable and really easy to use. So we're doing this with these few concepts. We've got the core API of the Gateway API, which is the core functionality. It should be 100% predictable and portable. You should know what it looks like, and it should be pretty easy to use. Then there's gonna be this concept of the extended API, where you've got core API concepts that you're extending just a bit further out, and we wanna make sure that those conform to a standard specification. So that's the idea with the extended API. And then implementation-specific APIs are gonna be all of those edge cases, where I really want it to work this way for my specific use case. You can still do that, and it doesn't have to conform to our expectations if it's just a one-off type of thing. So core must be supported. Extended is a feature-by-feature thing. It might be supported, but it really must be portable. So we wanna make sure it conforms to our specification and is part of the API schema. And then for implementation-specific, there are no guarantees there. It's whatever you need for your specific use case. And how will we make this happen? With conformance tests. So if you're doing an extended feature definition, then we're gonna require that you have self-contained conformance tests, to make sure the feature conforms to the specification and everything makes sense between the core API and the extended API. And all extended features must be checkable statically. So: conformance all around for the extended portions of the API, basically.
And extensibility. So I mentioned earlier that there's a polymorphic style that is being adopted to try to make this really extensible. What we mean here is that gateways can refer to different kinds of routes, for one thing. You can have HTTP, TCP, or even custom routes. So we've got this concept of route, and it can be used in many different ways. And you can also route to backends in a polymorphic way. And if you're familiar with the concept of custom resource definitions in Kubernetes, like I mentioned earlier, this is using custom resource definitions to make this work. Cool. And now Bowei is gonna go into more detail on all of this. Yeah, thanks, Kaslin. So now we're gonna give a taste of the API. So, next slide. We'll go through a user story where we have the different roles. Imagine we have Alice, the IaaS provider. We have Bob, the SRE. And we have Carol, the application developer. Next slide. So first we start with Alice, the IaaS provider. She's the person who's providing, let's say, the cloud provider, or in many cases the person who sets up the overall infrastructure for your Kubernetes clusters. And as part of this, she has basically two flavors of load balancers that she wants her users to use. She has one that's external, and this basically creates a load balancer that's accessible on the internet, and one that's internal, and this is internal to the VPC. This kind of comes as part of the infrastructure for the Kubernetes clusters that she is providing. The way she will express this in the API is that she will create two gateway classes that come basically pre-installed with the Kubernetes cluster. She will create a gateway class called external, and because it's Alice's cloud, she has a controller called Alice.io that's going to facilitate all the Gateway APIs. In addition, she has another gateway class called internal, and similarly it's controlled by the same controller.
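A gateway class along these lines might look roughly like the following sketch. The `controller` field name follows the v1alpha1 drafts of the API (later versions renamed it), and the controller string is the story's made-up Alice.io, so treat the exact fields as illustrative.

```yaml
# Hypothetical sketch of Alice's two pre-installed gateway classes
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: external          # "put my app on the internet"
spec:
  controller: alice.io/gateway-controller   # Alice's implementation
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: internal          # "private LB inside the VPC"
spec:
  controller: alice.io/gateway-controller   # same controller, different flavor
```

The point of the class is exactly this indirection: users ask for `external` or `internal` and never touch the controller details.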
What this gives Alice's users is a way to talk about these functionalities without actually talking directly about the Alice.io gateway controller or knowing the internal implementation details. For example, Alice may be changing her controller, or she may be changing attributes about how the controller works and is set up, and so forth. And the users of her cluster actually don't need to understand that. They just need to understand: oh, I want a load balancer on the internet, I'm going to use a gateway class of external. Similarly for a private load balancer, I'm just going to use internal. So it allows a bit of decoupling between the underlying platform and the users of the cluster. Next slide. So now that Alice has set up this overall infrastructure, next comes Bob, the SRE. Bob is in charge of managing the stuff that's actually running inside the cluster, not just the infrastructure itself. And he's also responsible for various policies, especially around load balancers. For example, it would be bad if arbitrary users could just expose their applications on the internet. He'd probably be quite unhappy. So Bob wants a couple of things. First, Bob wants that only certain namespaces can deploy external LBs. So imagine that he's going to control how namespaces are labeled, and he basically says, if I label your namespace with internet-external, you can create an external LB; otherwise, no, I don't want you to be able to go on the internet. The second thing that Bob wants is that, since he knows developers want to be able to experiment and just do their dev work, anyone can deploy an internal LB. And finally, anyone can deploy an in-cluster proxy for testing. Bob knows that Alice.io is going to be charging for these LBs, and maybe he has some custom configurations, so he wants to use the ACME.io proxy. So he basically deploys that as well.
So first we have these gateway class definitions, and Bob actually edits one of the definitions, for external, to basically say that only namespaces labeled internet-external can create gateways of this class. For internal, he's going to allow everyone; that's the default. So he's going to allow everyone to create the internal class. And then actually, Kaslin, can you click? He's going to install an ACME.io proxy, and this basically adds another gateway class. He calls it test. He says, hey, if you want to test things out, use gateways of this class. Next slide. So now we get to Carol. Carol's the dev. She's going to be developing the application and writing descriptions about how her application is routed and attributes about it. She has two applications. One is store, one is checkout. For the first one, store, she writes a route. So this is store.acme.io, and it has a bunch of rules about matching: let's say the root goes to the store service, and then the search path goes to the search service. And her other application is a different hostname. It's checkout.acme.io and it goes to a checkout service. So just to highlight the alpha: basically the schema looks like this, but one thing I want to note is that we added in a bunch of advanced, well, not really advanced, but basically modern load balancing features that you can express with HTTPRoute. And one of the ones that I think has been commonly asked for is header matching. So a lot of the common HTTP load balancing features have been added into the new API, as opposed to the previous Ingress. Okay, next slide. So to continue the story: Carol, the dev, wants to test out her applications. So she creates a gateway of class test in her namespace. You see that this is a gateway definition, she calls it test, and she defines a bunch of listeners that refer to routes.
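Carol's store route, with the root path, the search path, and a header match, could be sketched roughly as below. The `forwardTo` and `matches` field names follow the v1alpha1 drafts, and the service names, ports, and header are the story's examples or outright made up, so this is illustrative only.

```yaml
# Hypothetical sketch of Carol's route for store.acme.io
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: store
  namespace: carol
spec:
  hostnames:
  - store.acme.io
  rules:
  - matches:
    - path:
        type: Prefix
        value: /search          # search path goes to the search service
    forwardTo:
    - serviceName: search
      port: 8080                # illustrative port
  - matches:
    - path:
        type: Prefix
        value: /                # root goes to the store service
      headers:
        values:
          env: canary           # made-up header match, one of the new features
    forwardTo:
    - serviceName: store-canary
      port: 8080
  - forwardTo:                  # default: everything else to store
    - serviceName: store
      port: 8080
```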
And actually, by default, your gateway can pick up all the routes in your namespace. It's just an easy way to use it. And you'll notice that there's a dashed box around all the resources. These are all created in Carol's namespace, so she has control over all the objects in here. So click. So, to get into more detail about what this API represents: a gateway is actually a request for an LB, and you'll notice here that it's actually quite underspecified. For example, I'm requesting a listener for protocol HTTP, but I didn't fill in the port and I didn't fill in the address I wanted, and it's up to the controller to fill in the blanks. This basically makes it easy for the application developer to specify only the things that they care about. For example, in Carol's case, she only cares about exposing her applications over HTTP for testing. She doesn't necessarily require a particular IP address, or hosting it on a different port than port 80. And the controller will basically look at this and say, I can satisfy your request for exposing your services and routes in this way, and I will stamp it out, in the next slide. Yeah, so here you see that she only specifies HTTP. It's underspecified, but it's okay; the controller can fill in the blanks. Next slide. All right, so once she has created the gateway, this is where the controller machinery kicks into action. The gateway is somewhat the cornerstone resource that's going to tie in gateway classes, routes and services altogether to actually create the load balancer. And like I said before, it picks up all the routes in its local namespace. Next slide. Yep, next slide. Okay, so now it's all wired up. And actually this wires up the gateway with the ACME.io proxy class that Bob installed previously and has given access to every single dev on the cluster. And this lets Carol test her stuff. So next slide. Yep, so she does curl.
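Carol's underspecified test gateway could be sketched roughly like this; field names are from the v1alpha1 drafts and the comments mark what the controller, not Carol, decides.

```yaml
# Hypothetical sketch: a deliberately underspecified gateway request
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: test
  namespace: carol
spec:
  gatewayClassName: test     # the in-cluster proxy class Bob installed
  listeners:
  - protocol: HTTP           # no port, no address: the controller
    routes:                  # fills in the blanks (e.g. port 80, some IP)
      kind: HTTPRoute        # by default, picks up HTTPRoutes
                             # in this same namespace
```

The controller reports what it actually allocated (address, port) back in the gateway's status, which is what Carol curls against.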
You'll notice that in the status, it's just a raw IP address, because this one probably doesn't integrate with DNS in any way, and she can test out her stuff. Next slide. Okay, so one thing to note is that Carol cannot create an external class gateway and serve production traffic. She's just not allowed. And you'll see here, basically, if she tried to reference the external class, it's just denied by the controller. Next slide. Now, Bob the SRE manages external gateways. So clearly at some point Carol, hopefully after she checks in all her code and it all works, wants to put it into production. So what does Bob do? He can explicitly reference her route, and it's actually in a different namespace. So you'll see this namespace, bob. Seems like it's not great practice to name your namespaces after yourself, but this is his production namespace. And from that namespace, he creates a gateway of class external. He's allowed to do so because his namespace has the appropriate labels, and he references a route in Carol's namespace. So he brings Carol's routes into this gateway, and doing so exposes them to the internet. Funny enough, this has been a very commonly requested feature for Ingress: being able to refer across namespaces. But the important thing to note is that you have to be able to do this safely. With cross-namespace references, it's very easy to end up in situations where someone can expose something that you didn't intend to expose, or the other way around. So Bob has to create a reference from the gateway to the route, and, next slide: Carol also needs to allow her route to be exposed to the prod gateway. Now, there are some easy defaults. In this case, Carol has used the default: just allow every gateway to target my route. But there may be cases where you want to restrict who can expose your route.
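The two halves of that cross-namespace handshake could be sketched roughly as follows. The `namespaces`/`selector` shape on the gateway and the `gateways.allow` field on the route follow the v1alpha1 drafts, and the labels and names are made up, so this is an illustration of the pattern, not the definitive schema.

```yaml
# Hypothetical sketch: Bob's production gateway selects Carol's route
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: prod
  namespace: bob
spec:
  gatewayClassName: external   # allowed: bob's namespace has the label
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      namespaces:
        from: All              # look beyond the gateway's own namespace
      selector:
        matchLabels:
          app: store           # made-up label pointing at Carol's route
---
# Carol's side of the handshake: which gateways may bind this route
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: store
  namespace: carol
  labels:
    app: store
spec:
  gateways:
    allow: All                 # the easy default; could be restricted
  hostnames:
  - store.acme.io
  rules:
  - forwardTo:
    - serviceName: store
      port: 8080
```

Only when both sides agree (the gateway selects the route, and the route allows the gateway) does the traffic get wired up.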
For example, imagine that Carol can also create gateways, albeit just for the test class; it would be bad if she could suddenly, arbitrarily expose other developers' routes in random namespaces. Next slide. So here, after all this is wired up by Bob, Carol is able to basically deploy her application to store.acme.io, and it works. Next slide. Okay, so, a recap on the user stories. What are the roles of all these API objects? So gateway class basically allows you, as the IaaS provider or even the SRE, to support multiple classes of load balancing, multiple implementations, side by side. And it reflects the capabilities of the IaaS or the underlying deployed infrastructure. In fact, assuming you have permissions, you can install new gateway classes, which lets you mix and match different implementations as you see fit. It also gives a layer of abstraction from the underlying platform. For example, if you're managing a cluster and you're building a platform for your users, you can hide from them the fact that, oh, this is a specific kind of load balancing that we're using, versus just: oh, this one will be put on the internet, this one's cheap, this one's internal, this one's special in some other way. The next API object is gateway. This defines how your application is exposed. What is the virtual IP and port? What kind of proxy is going to be used? And this is also the thing that ties it all together to kick off the controller action to expose your application. Finally, we have route and service. Route defines routing for your application for a given protocol. And then services: I think what service does really well is define how your backends are grouped. And this is the decomposition that we're hoping we can move towards.
As mentioned earlier, how do we take these things that were all crammed into service, or the combination of service and Ingress, and decompose them into their different roles? And once you have decomposed them into different resources, you can assign them to different roles. Next slide. So, also going into what we saw in the user story: we can deploy multiple classes of load balancing, we can control access to different gateway classes, and we can also control access to different gateways. So basically this lets you do cross-namespace sharing in a safe manner. And you can think about it as access being a handshake between gateways and routes. Next slide. So if you are astute and thinking through what's happening here: clearly, if you have a bunch of routes, they get merged into a gateway, and the gateway and routes may not be compatible. When you do this sort of merging, where what used to be a single object, Ingress, is now split into multiple objects, gateway, routes and so forth, you can end up with conflicts. A very common one would be same host, same path, right? So how do we handle this in the API? Next slide. So it turns out the details are tricky, and you should read the comments in the documentation, but we have three main principles that we are following in terms of how to handle conflicts. The first one is: do no harm. When you have a conflict, you don't want to break things that are working. For example, if you have a load balancer serving the store, after the store is all wired up, you want to drop as little traffic as possible if someone accidentally pushes a conflict into the system. You want to be consistent. So you want to provide consistent behavior when conflicts occur.
So if you had a state where object A existed and then object B got created, it should probably be equivalent to having B and then A, because in Kubernetes the notion of time is quite loose, and you can't really guarantee that A happened and then B happened unless it's stably expressed or recorded somehow. Then, prefer more specific matches over less specific matches. I think this is just behavior that many people intuitively expect. And, as I said before, you should decide all these conflicts using stable properties. What I mean by stable properties is basically things written in the database: the creation timestamp, and then maybe some canonical order such as namespace and name. Finally, the thing that's really confusing with conflicts is that they're very hard for humans to understand. So we should make it clear which configuration has been chosen, and all conflicts will be communicated via the object status. That's something that we've been working on extensively in terms of defining the APIs, to really make it clear in the status. So, next slide. Now let's talk about extension points. Right now, in the alpha, we have a couple of extension points. As Kaslin alluded to earlier, they use this sort of polymorphic-like reference. So we have a bunch of extension references that can reference arbitrary CRDs. This is where you can plug in your own custom CRDs for, for example, your vendor-specific properties, as well as just letting you reuse the API without having to throw all of the different fields into one big resource. There are a couple of existing extension points in the alpha. Gateway class has an extension point for parameters. Gateway listeners each have an extension ref for customizing listener properties. And routes have this notion of filters, which allows you to have extensions in terms of how the processing works.
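One way those route filters and extension refs could look is sketched below. The built-in header-modifier filter shape follows the v1alpha1 drafts, while the `RateLimitPolicy` CRD, its group, and the header name are entirely made up to show where a vendor extension would plug in.

```yaml
# Hypothetical sketch: built-in filter plus a vendor extension ref
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: store
  namespace: carol
spec:
  rules:
  - filters:
    - type: RequestHeaderModifier     # core/extended filter
      requestHeaderModifier:
        add:
          x-env: staging              # made-up header
    - type: ExtensionRef              # implementation-specific hook
      extensionRef:
        group: acme.io                # vendor's API group (illustrative)
        kind: RateLimitPolicy         # hypothetical custom resource
        name: store-limits
    forwardTo:
    - serviceName: store
      port: 8080
```

The extension ref is just a typed pointer to a CRD the controller understands; the core API stays small while vendor-specific behavior lives in the referenced resource.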
And then backends can be referred to as more than just services. Ingress actually has this in its GA form: you can point to more than just services, and the same thing gets carried over to the Gateway API. This is an area where I feel like we really need feedback, to understand: okay, are these super awkward? Should there be broader investment in API machinery to make this work more smoothly, and what does that look like? So definitely, feedback is needed. Next slide. One thing that we're very excited about in the Service APIs group that we're working on is that we're actually very close to the alpha. It should be cut in a matter of weeks. What's currently in the alpha: we have support for the basic API types, so there's gateway class, gateway, and all the different routes listed there, and support for TLS and defining how to deal with TLS. We have implementers working on both in-cluster proxies, which is a merging style where gateway definitions get merged into a single piece of infrastructure, as well as a provisioning style, where you create a gateway and it provisions a load balancer for you. What remains to be done? Of course, a lot remains to be done. I think the key thing is to get feedback from users. We need to define those conformance tests that we talked about before. And then one thing that we punted on for now, which was in previous conversations about gateway, is how to do delegation. So not only delegating from gateway to route, but actually from route to other routes. Can you carve off, say, sections of your website and delegate them to a different team, a different namespace? So not only can you share a gateway cross-namespace, but you can also share pieces of the routing itself cross-namespace. And with these things, hopefully we can gain confidence towards beta. Next slide. All right, now we're ready for a demo.
One thing I wanted to mention before we go into the demo, which I wanted to mention at the beginning and forgot, is that even though Bowei and I are both from Google, this is an open source effort with more than just us involved in it, right? Do you want to say anything there, Bowei? Yeah, there's a landing page at the bottom. It's a sub-project of SIG Network, and we have a lot of participation from many different vendors. And really, we're looking at creating a portable standard, and a lot of the discussion is around how to do that; that's where the difficulty is. Yeah, so now, like I mentioned, I'm pretty new to this stuff, but I got to do some cool hands-on trying this out. If you're interested and want to do that as well, feel free to join us at this bit.ly link or find the thing on GitHub when I move over to that in a minute. Sorry, I need to change this. Surprise, there was a terminal under there all along. Okay, so what I've got up over here in GitHub is Google Cloud Platform slash GKE networking recipes slash gateway. So this is a fun little tutorial if you want to try this stuff out for yourself. It's pretty easy to get started with. I didn't really have too much trouble with it, so I would recommend it if you want to check it out. And I'm just going to run through it a little bit. So if you are new to this like I am, which I assume pretty much everyone is, this is a good way to get hands-on. What you need is a Kubernetes cluster. You'll install the API's custom resource definitions, and you'll install Istio. And I think I've already got this stuff here, but I'll run it again just to make sure. So I'm going to need these parameters set. I'm going to copy this and paste it over just to make sure that I've got all the setup. There we go. Cool. So then we hop into the tutorial. So we talked earlier about gateways and the different styles that you can use gateways in.
You can have a gateway that routes to a single service. You can have a multi-service gateway. You can have a gateway with multiple routes that all connect to the same load balancing gateway resource, and you can do traffic splitting, which is pretty cool. This tutorial is going to cover all of those different situations, and it's going to show you how to do the traffic splitting too, which I think is pretty cool. So first off, I've got to apply the demo app, which I think I already have running. kubectl get... yeah, I think that is the demo app. I guess I should check for foo and bar. Is that what it creates? Namespace, deployments, yep, we can see it creates the deployment foo and it creates the deployment bar. So I already ran this command, but if you're following along at home, watching the recording or something, run the kubectl apply command for the demo app, and you'll want to be in this repo that we're in right now, so you'd have to get it from GitHub and do that. So let's do something actually cool with it. You can see here, I could open it up over here too, but I'll just point over here: this is what a definition of a GatewayClass looks like. We talked earlier about how a GatewayClass basically defines a template for Gateway resources that you'll be able to create later. Since I have Istio installed, just as an easy way to try this out, these are going to be Istio-style gateways. And I can apply that. I wonder if I already have one. Let's see: kubectl get gatewayclass... so I already did this. I've got my GatewayClass up and running, but how you would do it is just to run the command from this repo. So I've got my GatewayClass. The next thing is to create some Gateways, and this is what a Gateway definition looks like. The important thing to know about here is the listeners for your gateway: what traffic is going to be coming into this gateway that it's going to have to manage? So we're listening on port 80.
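For reference, a GatewayClass on its own is tiny: it is cluster-scoped and mostly just names the controller that will realize Gateways of this class. This sketch assumes the v1alpha1 API group (networking.x-k8s.io) from this era of the project and Istio's controller name; both are assumptions about this particular demo, and the tutorial's actual manifest may differ.

```yaml
# GatewayClass: a cluster-scoped template naming the controller
# that will realize Gateways of this class (assumed: Istio).
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: istio
spec:
  controller: istio.io/gateway-controller  # assumed controller name
```

Gateways created later reference this class by name, so swapping implementations is a matter of pointing at a different GatewayClass.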
We're listening for HTTP protocol traffic, and it's going to connect to HTTPRoutes. We talked about our gateway routes earlier, so this gateway will be able to connect to routes of the HTTPRoute kind, and you can see a cool diagram of how all of that works here. And here's what the HTTPRoute definition looks like, if you want to check that out. It defines where traffic goes: the name of the service that your incoming traffic is routed to. This one is the single-service style, I think. So we're going to apply that. What I'm doing here is creating the HTTPRoute. I already had my gateway, like I showed earlier. Well, I had the GatewayClass, I just created the Gateway, and now I'm going to create the route. Let's just copy-paste that in here. Cool. So now I should have a gateway set up and a route set up. Yay. How do I know that that does anything? We're just going to run a curl command with those parameters that I set up earlier, and you can see here that it's returning some information. I really like the pod name emoji; that's a lot of fun. So you can see that we sent some traffic to our gateway, to the port that our gateway was listening on, and had it routed: the metadata saying foo tells us that we're hitting the correct pod on the backend. Cool. So let's try doing it with multiple services on the backend. First, let's delete what we just made, at least the route anyway. Then what we're going to create is a new HTTPRoute, and this one is what's defined right here: you've got foo and bar, both of those services defined in this one HTTPRoute. So we're routing our traffic to multiple services. If we send in some traffic that is intended for the foo service backend, we can do this curl command, specifying the host foo.com, and you can see in the metadata that it spits out that it's hitting the foo backend service.
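To make the binding between these pieces concrete, here is a sketch of a Gateway listener plus a single-service HTTPRoute in the v1alpha1 draft of the API. The names (demo-gateway, foo), the ports, and the label-selector binding between Gateway and route are assumptions for illustration; the tutorial's actual manifests may differ.

```yaml
# Gateway: listens on port 80 for HTTP and binds HTTPRoutes by label.
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: istio
  listeners:
  - port: 80
    protocol: HTTP
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: demo-gateway
---
# HTTPRoute: matches the foo.com host and forwards to a single Service.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: foo-route
  labels:
    gateway: demo-gateway   # matched by the Gateway's route selector
spec:
  hostnames:
  - foo.com
  rules:
  - forwardTo:
    - serviceName: foo
      port: 8080
```

The multi-service variant just lists several hostnames or rules in one HTTPRoute, each forwarding to a different Service.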
And we can do the same thing with bar, so we know that both of our backends are available and we can hit either depending on what we send. That's hitting the bar service on the backend. So, cool, we did multi-service. This multi-route one gets pretty cool, I think. We've already got our multi-service HTTPRoute defined; now we're going to define a new one pointing to yet another service. First, I've got to deploy that app, which I think I actually already have running. kubectl get deploy... I don't. So I will run this to deploy the new backend service that we're going to route some traffic to. I mean, it said it deployed, so I don't really need to check, but there you go, there it is. Then we're going to create another new HTTPRoute that's going to route some of our traffic to our baz service on the backend, from the baz.com hostname. So let's apply that HTTPRoute and then send some traffic to it. So now we've got a multi-service route and we also have multiple routes that can send your traffic to different backends. So that's pretty cool. And then traffic splitting. I mentioned earlier that it's pretty cool that the Gateway API also includes traffic splitting capabilities. So we're going to create another HTTPRoute to our foo backend and, well, create another backend. We're going to have 20% of our traffic going to one backend and 80% going to another. So let's create a new foo store backend for this, and we're going to apply our new traffic-splitting HTTPRoute, which is the definition that we were talking about earlier with the weighted traffic splitting. I just copy-paste that and apply it. And then if you want to learn more about what's going on, you can run this kubectl describe command, which will show you more about what it's doing. Right now the total weight is fully on v1 of our foo store application.
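A weighted split like the one described here is expressed by listing several backends under one rule with weights. This is a sketch against the v1alpha1 draft; the service names, port, and the exact 80/20 values are assumptions standing in for the tutorial's manifest.

```yaml
# HTTPRoute with weighted traffic splitting: roughly 80% of matching
# requests go to foo-v1, 20% to foo-v2. Setting the weights to 0/100
# later shifts all traffic to v2, as in the demo.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: foo-split
  labels:
    gateway: demo-gateway
spec:
  hostnames:
  - foo.com
  rules:
  - forwardTo:
    - serviceName: foo-v1
      port: 8080
      weight: 80
    - serviceName: foo-v2
      port: 8080
      weight: 20
```

Because the split lives in the route rather than in the load balancer config, moving from 80/20 to 0/100 is just a kubectl apply of the edited route.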
So now I'm going to create another tab here, and I'll have to run back up to the top of this and export our parameter names again for this new tab; otherwise this wouldn't work, because that would be silly. Not that I did that the first time, because I definitely did. Then I'm going to set up a continuous curl command so that you can see as we flip over. So all of our traffic is going to v1 of our backend right now. Now we apply the new traffic-splitting route, the one that actually splits it. The last one was 100 and zero; this new one is going to do 20/80, so 20% of the traffic is going to start going to v2. You'll start seeing v2 pop up here every now and then, but mostly it's going to be v1. And then we can switch our traffic completely over to v2. Once I do that, you should start seeing, see, these are all v2 now, whereas before it was only sometimes v2. And that's pretty much it. So what you saw here was a single-service gateway, using the Gateway resource to load balance traffic to a single service on the backend; then creating a gateway with a route that routes traffic to multiple services; and then creating a gateway with multiple different routes, so you can use one load balancer to route your traffic in multiple different directions. Pretty cool stuff. And then we did traffic splitting too. If you want to try this stuff out for yourself, all you need is a Kubernetes cluster. I used GKE for this because I work at Google, so that's what I have, but if you wanted to run it on something like minikube, there's also an export for NodePort if you need to use a NodePort-type service. So yeah, check this stuff out, and that's kind of what I've got for you. Yeah, thanks. That's really cool. It's always good to have bleeding-edge software not completely blow up. It didn't blow up. So yeah, we're a SIG Network sub-project, and we're basically trying very hard to get the alpha out right now.
And then it's going to be part of the upstream. The project homepage is there; all these links will be available in the slides. If you want to contribute, there's a community page. We have meetings basically weekly, which is kind of intense, two days a week actually while we try to get this alpha out, and they're going to alternate between AM and PM times because we want to be inclusive of both the APAC and European time zones. And there's a meeting code. So please check out the website, check out the calendar. The demo link is also available in the slides, and it's freely available; you can do whatever you want with it. Yeah, so thanks for coming. I think we can move to questions. Okay, thank you both for a wonderful presentation. We have a little less than 10 minutes left for questions, so please feel free to drop them into the Q&A box. Looks like we have a couple here. Lucas wants to know: what sort of implementation do you anticipate for on-prem clusters that can't rely on a public cloud controller like the one on alice.io? Is there going to eventually be an nginx gateway controller and a Traefik gateway controller? Yeah, basically along with the alpha there's a bake-off of different implementations. Clearly there will be some clouds represented; for example, I'm going to represent Google Cloud and we'll have an implementation there. But some of the other participants are, for example, Contour implementers, so there are going to be others, and you already saw one: the demo actually uses Istio as the basis. So we expect many different implementations, and it supports both cloud-based and in-cluster proxies. In terms of flexibility, you should be able to use many different ways to get at the API, to implement the API.
The whole goal is to make this experience portable across all of these different ways of doing it, which ties back to that very first slide about trends: not just in-cluster proxies, but service mesh and cloud are all merging together, and we find that to be more and more the case going forward. Any thoughts on adding authn or authz for services? Yeah, that's a good question. It's actually an issue on the repo, so there's plenty of discussion there. There's a couple of things. One of them is that we did spec out what it means to add configuration knobs for mTLS, so that's one way to do authentication. The other is that if you have a very specific requirement, there are two extension points that can be used. One of them is on the listener: you can say, okay, I'm listening on HTTP and I want to apply this special thing that's inserted there. The other is in the filters of the route itself, so while you're matching requests for a particular service, you can add your authn and authz configuration there. Really, we would like feedback on how to make that generic, because a lot of the focus will be: okay, how do we make this work for an nginx implementation, an Envoy implementation, a Contour implementation, a cloud implementation? That's where all the conformance and the extended-versus-core distinction comes in. If there's an extended, portable way to do authn and authz, we are definitely super interested, and there's a conversation; I can forward you the issue if you ping me on Slack. Do we have any other questions at this time? Someone asked: does this depend on service mesh? That was the question. So no, it doesn't. There's no service mesh requirement. Yeah, there's no service mesh requirement. In fact, the demo is quite self-contained.
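As a purely illustrative sketch of the route-level extension point Bowie describes, a v1alpha1-style filter can reference an implementation-specific resource. The ExtensionRef filter type is from the draft API, but the example.com group, the AuthzPolicy kind, and my-authz-policy are all hypothetical names; there was no spec'd portable authn/authz mechanism at this point.

```yaml
# Hypothetical use of the route filter extension point to attach a
# vendor-specific authz policy while matching requests for /admin.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: secured-route
spec:
  rules:
  - matches:
    - path:
        type: Prefix
        value: /admin
    filters:
    - type: ExtensionRef
      extensionRef:
        group: example.com     # hypothetical vendor API group
        kind: AuthzPolicy      # hypothetical CRD
        name: my-authz-policy
    forwardTo:
    - serviceName: admin
      port: 8080
```

The listener-level hook works the same way conceptually: the portable core stays small, and anything implementation-specific hangs off a named reference.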
If you look at the demo, you can just run it. It's just like installing Ingress: you install it, you can get rid of it. I think that was the last question we had, from Nick. Anyone else who would like to chime in with questions? We have a few minutes left, so if you have any questions you'd like answered, now's the time. Seems like we're good to go. All right, then, we will wrap this up. Thank you, everyone, for joining us today, and thank you to our presenters for a wonderful presentation. As I said before, today's webinar will be posted later today to the CNCF webinar page along with the slides. Thank you both again for a wonderful presentation, and thank you, everyone, for joining us. Stay safe, take care, and have a wonderful weekend. Thank you. Thanks.