Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm your host today. My name is Whitney Lee. I'm a CNCF Ambassador and a Developer Advocate at VMware Tanzu. Every week, we bring new presenters to showcase how to work with Cloud Native technologies. We'll build things, we'll break things, and we'll answer your questions. Today we have Dan Garfield and Rob Scott here with us to talk about the new Gateway API plugin for Argo Rollouts. Now, this is an official live stream of the CNCF, and as such, it's subject to the CNCF Code of Conduct. That basically means just be nice to everyone: be respectful to the presenters, to the other people in chat, and to me too, please. Friends who are joining us live, please do say hello in the chat. I love how global our audience is. Tell us where you're tuning in from. As always, if you have questions as we go, please put those in chat too. We'll answer them in the moment if we can, but if we can't, we'll definitely get to them by the end. We'll have a Q&A section at the end too. With that, I'm going to hand it over to Dan and Rob to kick off today's presentation. Hey, everyone. Thanks so much for having us. We're really glad to be here with you. For those of you that don't know me, my name is Dan Garfield. I am the co-founder and chief open-source officer of a company called Codefresh. We make CI/CD and GitOps tools, including an enterprise version of Argo that we ship to our users. I am also an Argo maintainer, and it's in that capacity that I'm here today to talk about some really cool work that has been done by our team at Codefresh to create a Rollouts extension that allows you to take advantage of everything in Gateway API. This basically means that you can define your progressive delivery once, and it will work with any Gateway API provider. So that means any Ingress provider.
So to help me do that, I asked Rob if he would join, and he's coming to us from Google and he works on that project. Rob, do you want to introduce yourself? Sure, yeah. I'm Rob Scott. I'm a software engineer at Google. I've been working on Gateway API since, well, the very beginning. This project started at KubeCon San Diego, which is nearly four years ago now. And it's been quite a process. I'm really excited about Gateway API. I like to call it the most collaborative API in Kubernetes history. We have something like 150 different people that have contributed to it. And it really is the next generation of many APIs. We'll get into that shortly, but I've got lots to talk about there. At a high level: I'm Rob Scott, and I do lots of Kubernetes OSS things at Google. That's perfect. And if you don't follow Rob on Twitter, I think he is @robscott. Is that right, Rob? Oh, Robert J. Scott. I wish; my name is way too common. That is my GitHub handle, though. So, halfway. Yeah, robscott on GitHub. Okay, Robert J. Scott on Twitter. You can follow me on Twitter, and today was awesome. And as we go along, if you have questions or comments, throw them in the chat. If we don't see them immediately, our host will help out and interrupt us to let us know about them, or at the end we'll also have time for questions. And we also have a giveaway, I think, at the end. So stick around for that, and you'll have a chance to get some good stuff. With that, the plan today is pretty simple. We're gonna introduce Gateway API. We'll talk about what that is and how exactly it works. And then we'll talk about using it with Argo Rollouts. We'll give you a little demo, and we'll have time for questions and break for high fives and lunch. So with that, let's pass it over to Rob. Why don't you introduce Gateway API? Awesome. Well, thanks for the introduction.
So yeah, Gateway API is, as I mentioned, a project that's been going on for quite some time now behind the scenes. But we're really getting to the point where the momentum has just gotten incredible. I think we have more implementations of Gateway API today than we ever had of Ingress. We are about to go GA in a few months. And this project has really taken off. But it would be easy to have gotten this far and maybe not noticed everything that's happening with Gateway API, because we've been doing a lot in the past few years. So let me just take a few minutes to explain what I think of when I think of Gateway API. This is a bit of a jumble of words, but I think of Gateway API as a single, unified, extensible, role-oriented API for Kubernetes service networking. We'll dive into some of these words in a little more detail because it's a lot. But first, let's talk about the scope. The scope of this is really large. It includes all Kubernetes services, both at L4, so that would be Service type LoadBalancer, and L7, which would be Ingress, and service mesh. So we're trying to capture a lot within the scope of this API. Then extensibility, this is such a key part of this API. For those of you who may have used the Ingress API before, you may have noticed that the Ingress API really went for this idea of the lowest common denominator, right? With Ingress, we tried to have an API that worked for every implementation, which meant that we only covered the common features, which was a very small set. We didn't provide a good way for extensibility either, so that just meant annotations everywhere. It was implementations working with what they had, but it was kind of a mess. So with Gateway API, we wanted to do two things. One, develop a much larger base of functionality defined in the core API itself.
And then two, we wanted to provide extensibility: lots of ways to plug in implementation-specific extensions on top of the API so implementations wouldn't be stuck with more annotations. There's lots of work going on there, but I'm excited about the features that we already have and the ways that we have to build on top of this API that are not just annotations. Then the role-oriented part of this is so key to how we designed and developed this API. The Ingress API, when you think about it, is really just one key resource. You have an Ingress resource and, more recently, an IngressClass resource as well. With Gateway API, before we defined the resources, we wanted to define the roles we were seeing in so many organizations, right? Because Kubernetes authorization via RBAC is very much dependent on the actual resource types. So you could grant someone write access to an Ingress, but you couldn't say, I want this person to be able to control just TLS, for example. Or, I want this person just to be able to control routing configuration. With Gateway API, we tried to bucket those resources in sensible ways so that larger organizations that may have different people fulfilling these roles would have that kind of access control and would be able to say, okay, this specific role, say an infrastructure provider or a cluster operator, is going to have this level of access to this set of resources. So that's been defined well throughout the API, and we'll get into that in just a little bit. And then finally, this is a Kubernetes API. Although this API is built with CRDs, it's Kubernetes-native, it's an official Kubernetes API, it's open source, it's portable, and we've invested a lot in conformance testing. All of this means that you're going to have a consistent experience across any of these implementations. We've got over 20 implementations of the API today, and it just keeps on growing.
And also, because these are CRDs, it means you can install the latest version of this API on any Kubernetes cluster you're running. You don't have to wait for Kubernetes 1.28 or 1.29. It's available immediately for you. All right, so... Before you move on, there's a comment that I want to acknowledge just about vocabulary. There's some confusion around API Gateway and Gateway API. Can you help untangle that, please? Yeah, that's a really good question. We're working on improving our docs to clarify that a little bit more. But I think of API Gateway as a fairly broad category of products. And I think of Gateway API as, well, an API that enables some subset of API Gateway functionality. But API Gateway products are usually broader in scope than just Gateway API. And Gateway API also covers some things, like mesh now, that aren't usually in scope for an API Gateway. So I know that name is confusing, but that's probably the best way of differentiating those. Thank you. Cool, good question. Great question. All right, so let's dive into the resource model here. When you're talking about Gateway API, there are really three key kinds of resources. First, we've got a GatewayClass that defines the type of infrastructure that a Gateway would be provisioned with. Then you have a Gateway, and you say, I want to... we'll use GKE as an example because that's the product I work on on top of Gateway API. In GKE, we provision a few different GatewayClasses, one for an external load balancer and one for an internal load balancer, as an example. So you might create a new Gateway of class gke-l7-gxlb. And that's basically saying, hey, I want an external load balancer provisioned by GKE. That's what the Gateway represents here. And then finally, that next step is the routing layer, and the routes are the real power of this API. In this example, I have an HTTPRoute and a TCPRoute.
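Concretely, the GatewayClass-to-Gateway relationship Rob describes looks something like the sketch below. The resource names are illustrative, and the class name is an assumed GKE-style example; substitute whatever classes your provider ships.

```yaml
# A Gateway requesting an external load balancer from a
# provider-supplied GatewayClass. The class name gke-l7-gxlb
# is an assumption; class names vary by provider.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-gateway
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http        # routes attach to this listener
    protocol: HTTP
    port: 80
```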
We have routes for lots of different protocols, including gRPC, UDP, and TLS, beyond the ones shown here. And I expect there'll be more as time goes on. But the routes are very, very full of capabilities. As an example, here we're showing that an HTTPRoute can do traffic splitting between service A and service B. You could also use TCP routing to do traffic splitting as well. There's a long list of features that we'll get into in just a little bit. But you can't look at the resource model without also thinking about the roles we had in mind here. So first up, we have infrastructure providers. In the case of a cloud provider like GCP, clusters that you create will come pre-provisioned with a set of GatewayClasses that we're providing for you. In some other organizations, you may instead choose to provide your own set of GatewayClasses for each cluster. So we're thinking of that as the infrastructure provider level. Next, we have Gateways, which we expect are going to be owned largely by cluster operators. In that case, you would have a cluster operator that might set up an external production load balancer, an internal production load balancer, maybe a test LB somewhere. And then you can define the set of namespaces that can expose their applications through that load balancer, for example. So that's a really high-level idea of one way that Gateways could be managed. And then finally, I think one of the most consistent things we see here is that application developers are who we expect to be managing all of the routing logic across your applications. So although the actual load balancers, the GatewayClasses, et cetera, are things that we expect are going to be managed exclusively by people above that, like cluster operators or infrastructure providers, I would expect application developers will be managing most of the routing logic here. So this is one of several ways you can do this.
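The traffic-splitting HTTPRoute described here can be sketched roughly as follows; the service names, ports, weights, and the parent Gateway name are all hypothetical.

```yaml
# A 90/10 traffic split between two backends on one route rule.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: split-route
spec:
  parentRefs:
  - name: external-gateway   # the Gateway this route attaches to (assumed name)
  rules:
  - backendRefs:
    - name: service-a
      port: 8080
      weight: 90               # 90% of requests
    - name: service-b
      port: 8080
      weight: 10               # 10% of requests
```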
Of course, it's entirely possible for one person to do all of these things in the stack. But by separating this out into different layers, you have a great deal more flexibility as you define your authorization policies throughout your organization. All right, so keep on moving here. There's a lot of features this unlocks. First up, let's look at what Gateway API enables. There's TLS config, there's HTTP matching, a lot of new things: header, method, query param. We're enabling the ability to cross namespace boundaries, which probably seems rather scary at first, but it's super powerful, and we've made sure there's a two-way handshake, so both sides of the connection are agreeing to that. What that means is you can say, I want a Gateway to be defined in my infrastructure namespace, and I want it to be able to connect to routes in these four application namespaces, for example. And similarly, cross-namespace forwarding: you can connect routes to services in different namespaces as well. There are a lot of filters in the HTTPRoute, like header modification, request mirroring, request redirects, URL rewrites; there's a lot here. Now some of you may say, well, Ingress did have some of these things, and you're right, but I'll just highlight what the Ingress API included. It's a very, very small subset of what Gateway API already has. And the difference is the Ingress API is basically frozen in time. It will continue to be supported, but it's not going to grow. It is what it is. Gateway API is continuing to grow rapidly, and the feature set, I expect, will only continue to expand as we go. All right, we talked about the huge set of features that these APIs have, but maybe we should just take a step back and focus on the similarities here. I have a simple Ingress and HTTPRoute example here, and I wanna show you the similarities, and that this API might actually feel pretty familiar to you.
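The two-way handshake for crossing namespace boundaries can be sketched with a ReferenceGrant; the namespaces and names below are made up for illustration.

```yaml
# Created in the namespace that owns the Services; it explicitly
# grants HTTPRoutes in the "app" namespace permission to forward
# to Services here. Without this grant, the cross-namespace
# reference is rejected.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-app-routes
  namespace: backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: app
  to:
  - group: ""        # core API group, i.e. Service
    kind: Service
```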
So in Ingress, if you wanted to say your Ingress was implemented by, say, nginx, you'd say ingressClassName: nginx. In HTTPRoute, what you'd do is say, hey, I want this route to be attached to my nginx Gateway, and you do that via parentRefs. At the same time, you could add another parent to the same routing configuration and have it implemented by more than one Gateway. So that's a really neat feature, but fundamentally, these concepts feel pretty similar. And then we wanna do a path prefix match on /login. Again, you can probably see the similarities between how those are configured. And finally, in both cases, these are forwarding to the auth service on port 8080. So if you're trying to do something pretty simple, the API is gonna feel very familiar. The big difference is that HTTPRoute unlocks a great deal more functionality on top of this, but the basic things are pretty familiar and similar, whichever API you're using. All right, so next up, let's dive into Gateways themselves, because we just saw that HTTPRoute and Ingress are awfully similar, but Gateway is kind of this new concept in the hierarchy. And you may be wondering, well, why exactly do we need a Gateway? In Ingress, we had Ingress and IngressClass. And there was some significant variation in implementations here. For some implementations of the Ingress API, all Ingresses were kind of merged together behind one load balancer. And for other implementations of the API, every Ingress resource was mapped to a different load balancer behind the scenes. With Gateway, we've introduced a resource to actually represent that level of the hierarchy instead of just having implementation-specific behavior there. So in this case, we have a gateway, foo-lb, and you can attach as many routes as you want to that Gateway, or you can segment out Gateways depending on different groups of infrastructure, different groups of applications, environments, whatever it might be.
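Side by side, the two manifests being compared look roughly like this; the resource names and the Gateway name are illustrative.

```yaml
# Ingress version: class name, path prefix match, backend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: auth
            port:
              number: 8080
---
# Equivalent HTTPRoute: parentRefs plays the role of
# ingressClassName, and the match/backend shapes are similar.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: auth-route
spec:
  parentRefs:
  - name: nginx-gateway    # assumed Gateway name
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /login
    backendRefs:
    - name: auth
      port: 8080
```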
So Gateways really represent an instance of a load balancer or a proxy. They define listeners. In this example, we've got a listener for HTTPS on port 443, with some basic TLS configuration associated with it. And you can attach routes to these Gateways, but the key thing with the entire API is this config stays the same across any implementation, whether this is provisioning a cloud load balancer directly or provisioning something in the cluster itself. Let me just talk really briefly about that. First, we've got in-cluster gateways. In-cluster gateways will be really familiar to a lot of Ingress users today: when you deploy a Gateway resource, you're going to get a group of pods created in your cluster, often backed by something like a Service of type LoadBalancer. These are going to be your data plane, your L7 load balancing infrastructure for this Gateway. The beauty of these kinds of implementations is that they behave the same way on any Kubernetes cluster. On the other side, you've got a cloud provider implementation, like the GKE implementation that I work on. When you deploy a Gateway, that maps one-to-one with a cloud load balancer. So a Gateway is going to result in spinning up a cloud load balancer, and it enables you to load balance directly from the cloud load balancer to pods without any kind of intermediate hop. Usually, this is only available on clusters managed by that cloud provider, though. So there's a lot here. I've really run through it, but I just want to get into what's next real quickly. This API continues to evolve. There's a lot happening. We actually have two releases planned between now and October, to give you an idea of the velocity here. v0.8.0 is just on the cusp of releasing; we're just starting the API review process now. GAMMA, which is our mesh standard, is about to go experimental.
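The listener described here, HTTPS on 443 with TLS termination, would look something like this; the class name, certificate Secret, and Gateway name are assumptions.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: foo-lb
spec:
  gatewayClassName: example-class   # provider-specific class (placeholder)
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate               # terminate TLS at the gateway
      certificateRefs:
      - name: foo-tls-cert          # hypothetical Secret holding the certificate
```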
So that means it's going to become part of a release and something that is formally supported. Believe it or not, I haven't had enough time to talk about the mesh side of things here, but we have three meshes that are already fully conformant with using this API for mesh. That includes Istio, Linkerd, and Kuma. And it's really exciting to see portability across service mesh implementations: the same API you can configure load balancers with, you can now also configure meshes with. Then, routability is a new concept coming in v0.8. That allows you to define, in a standard and portable way, where the Gateway can be reached. So if you want a public, private, or cluster-local Gateway, that's a new concept that's coming in this coming release of Gateway API. And then 1.0 is a release we're targeting for October. That's a huge milestone for us; we're anticipating that Gateway, GatewayClass, and HTTPRoute are all going to go to GA. And along with that, we're gonna have a lot of effort on conformance to ensure that conformance is fully covered across the full feature set, so that we can provide every user of every implementation of this API a consistent experience. With that, I think that's all I've got for Gateway API. We'll have plenty of time at the end to answer any questions unless I missed some that came in here. There's one great comment that I just wanted to single out: that Gateway API's vocabulary feels so much more familiar than Ingress and its idiosyncratic annotations. Concepts that this person, Jesse, has learned from other tools cross over with Gateway API, and they really appreciate that. That's awesome. Yeah. And then we do have a couple of questions, but would y'all like to get at those now, or would you prefer to wait till the end? I'll defer to Dan on this one. So there is a question about the guiding philosophy for Gateway API.
Let's save that one till the end, because I think that's a really good discussion that'll take just a minute. And in the meantime, let's kick off and start talking about how this is gonna tie in with Argo Rollouts. Sounds perfect. So for those of you that aren't familiar with Argo Rollouts, we'll do just a very brief introduction, and for those of you that are already familiar, this won't be too long. If you're not familiar with the Argo project, this is a CNCF project that is now graduated, and it's made up of four tools: Argo Workflows, which is a general-purpose workflow engine for Kubernetes; Argo Events, which is for triggering those workflows from events; Argo CD, which is a GitOps operator, so you can define a source of truth and a target environment and it will keep things in sync and follow a policy; and of course, Argo Rollouts, which is for doing progressive delivery. So these are the four tools within the Argo project. We're gonna be talking about Argo Rollouts. Now of course, Argo Rollouts and Argo CD do integrate very well together, but one thing I wanna call out before I move off this slide is that we have something called Argo Labs. Argo Project Labs is essentially where people can go and create add-ons, plugins, and tools that are meant to work with Argo, and work on them in an open-source fashion. So we've got a couple of tools in Argo Labs. If you've used Argo CD Autopilot, this is something that the team at Codefresh built, which is basically an opinionated way of setting up Argo CD. And of course, we're gonna be talking about a plugin for Argo Rollouts that is in Argo Labs today. So first of all, how does progressive delivery work? It's very simple. You have canary releases or you have blue-green deployments. For a canary release, you deploy a new version of your app. You give a percentage of traffic to that new version and, if it works, then you work your way up to 100%. So it's very simple.
And most people at this point are familiar with the concept of a canary release. The reason you do a canary release is because it reduces the blast radius of catastrophic failures. Many of you are familiar with the concept of using things like feature flags, which are really for testing user interaction with features. Canary releases are really a way to de-risk your deployments. Every time you make a software change, there's some risk that maybe you're gonna introduce some sort of breaking change, some sort of regression. And a canary says, well, you know, I'm gonna give this to 10% of my traffic. We'll do a health check. If everything works, it moves on. If it doesn't, and I had some sort of impact on those users, I'm able to roll it back very quickly, and it's only a small subset of my users that saw that impact. So it's a way of reducing the risk, reducing the blast radius; it's very effective for that. Blue-green is a similar strategy, except with blue-green you bring up your entire stack separately and then you switch over the traffic all at once. That's a way to make sure that the new version is gonna work before it's exposed to users as well. So these are both great options for de-risking your deployments. We're gonna talk mostly about canary today in this context. So within Argo, this slide says Argo CD; it should say Argo Rollouts, sorry about that. But you can see that in Argo Rollouts we use what's called a Rollout object. This is a custom resource in Kubernetes, and it is essentially a Deployment with some additional information. You can actually take an existing Deployment and create a Rollout that consumes that Deployment, so you don't even need to modify your Deployments if you don't want to. If you did want to change a Deployment into a Rollout, you literally just need to change the kind from Deployment to Rollout and the apiVersion from apps/v1 to argoproj.io/v1alpha1. And then you can add your additional steps.
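A minimal sketch of what that conversion yields; here the Rollout consumes an existing Deployment via workloadRef rather than inlining the pod template. All names, labels, step weights, and the analysis template reference are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-rollout
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  workloadRef:               # reuse an existing Deployment's pod template
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  strategy:
    canary:
      steps:
      - setWeight: 20        # send 20% of traffic to the canary
      - analysis:            # health check via a metrics provider
          templates:
          - templateName: success-rate
      - setWeight: 60
      - pause: {}            # wait for manual promotion
```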
So you can see here we've got a strategy defined to do a canary, and this is gonna run an analysis template. The analysis template is basically how we're gonna do a health check. Analysis templates integrate with Prometheus, with Datadog, with basically any metrics provider, so you can run that. And then it's gonna set some arguments and it's gonna set some steps. So that's the simplest way of looking at how these rollouts are gonna function. And actually, I'll show you this first. We had a great slide earlier about the structure of gateways and APIs and things like that. As an example, if we were looking at Istio: Istio has this Istio ingress, and it provides this gateway that allows you to select which service stack is gonna receive which traffic, basically which one's gonna be exposed to users. The way that this is gonna work with a canary is you're basically gonna bring up a new service. In the case of Istio, you would attach it to that VirtualService, and then you would pass arguments to the VirtualService to set a percentage of traffic routing to the canary version. Now, I've done this manually for years, right? Way back in 2017, I think we built our first canary step, and it was inside of a CI/CD pipeline, and it was kind of complex. But now with Argo Rollouts, it is so much easier to do because it's all declarative, it's all done as a matter of policy, and you can just operate it that way, no problem. So using Argo Rollouts today, we have a number of different providers that we support, and each of those requires its own arguments. For example here, you can see a collection of different configurations. These all do the exact same thing, except each of them has a different traffic routing argument, and the traffic routing specifies the arguments for Ambassador, for an Amazon load balancer, for Traefik, for... what is this one doing?
This is, oh, this one's Istio, and then we've got another one over here that I think is doing NGINX. So each of these requires its own specification, and so each of your rollouts needs to be specialized for the gateway that you're using. That's the way it works today. And just to show you what these look like in action, I've got my Codefresh dashboard here where we're exposing Argo Rollouts, and if I look at this canary that's currently in progress, or sorry, this one that's finished, you can see this one was set up to send 20% of traffic, then it was gonna pause, go to 40%, go to 60%, 80%. This is what they look like, and I can actually kick off a new one right now where I run this and sync it, and this is gonna kick off a new canary so you can actually see it in action. And this is just how Argo Rollouts works today. So that's very easy and nice and makes things pretty smooth. Let's see, did it run? Should we see this sync happen? Let me kick it off again. What's out of sync? Oh, it says it's out of sync. Not necessarily critical; I didn't look at this one beforehand, so maybe there's some syncing thing going on. Not a big deal, though; this is just to give you an idea of how a canary functions in general. So let's go back and look at these templates. You can see I've got all my different specialized templates here available, but what if we could use a single API? This is what got us at the Argo project so interested in Gateway API: every time we wanted to add support for some different gateway provider, we had to build it from scratch. We had support built in for a handful, and then people were always opening issues and saying, hey, what about Kong? What about Gloo? What about this? What about that? And it's very time-consuming, basically because us maintainers have to go and learn each of these different gateways and figure out how they function in order to add the support in Argo Rollouts.
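To make the "specialized for each gateway" point concrete, here's roughly how the provider-specific trafficRouting stanzas differ between two of the built-in providers. The service and resource names are made up; the field shapes follow the Argo Rollouts traffic-management configuration.

```yaml
# Istio flavor: canary weights are written into a named VirtualService.
strategy:
  canary:
    canaryService: demo-canary
    stableService: demo-stable
    trafficRouting:
      istio:
        virtualService:
          name: demo-vsvc
---
# NGINX Ingress flavor: weights are applied via a canary Ingress
# derived from the stable one.
strategy:
  canary:
    canaryService: demo-canary
    stableService: demo-stable
    trafficRouting:
      nginx:
        stableIngress: demo-ingress
```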
Well, we don't have to do that anymore, because thanks to Gateway API we developed a plugin called rollouts-plugin-trafficrouter-gatewayapi. That's a great name; we're gonna have the marketing folks work on that one. But basically this provides a unified interface for Argo Rollouts to consume any Gateway API-compatible router and routing mechanism. We'll throw the link to that project in the chat here; go give it a star, give it a like, we'd appreciate that, it helps the project get more visibility. And this is a pretty new project. This is really where we're announcing it. It's been in use for the last couple of months by a few different teams, and so far all of the feedback has been very positive, but today we really wanna open it up and have everybody start using it and giving us that feedback. So what does this provide us, and what are the limitations? Well, the great things about it: first of all, it really expands our support for traffic management providers within Argo Rollouts. And that's awesome. From my perspective, hey, it makes it easier because we don't have to reimplement it for every provider. But it also is easier for you as the end user, because those templates that I showed you earlier, where you have to specify a different argument for each provider, you don't have to do that anymore. You can just have one unified template that you use. And of course there's portability across these providers. So if you have a service and a rollout that you've defined, you can use that in lots of different places, and you don't need to tweak it or modify it for these different providers. Now there are some limitations, and we'll talk a little bit more about these, and Rob, keep me honest; I'm not the Gateway API expert. I work on Argo, and I'm just a consumer of Gateway API. So Rob can tell me if I get anything wrong here.
There are some limitations right now where, with the granularity of control for routes, you may not have as many options with certain providers. For example, if you wanna do something like header-based routing, you may actually need to extend within Gateway API; there are provider-specific settings for those kinds of extended things. So if you wanna do more advanced things, there are ways to do it, but it may be a little bit more complex than if you were going with just a native built-in provider. So that's a minor complaint and something that the Gateway API team is aware of and, I'm sure, is always improving. And we can talk more about those. But if you're using Argo Rollouts today, you have native support for working with Ambassador, AWS ALB, Istio, NGINX, Apache APISIX, Traefik, and SMI, which is Linkerd. So those are all natively supported in Argo Rollouts today, but by using Gateway API, we are also able to support all of these, I think, except for Linkerd. I don't think SMI is supported within Gateway API, but I think everything else is supported by Gateway API in addition to all of these other ones: Envoy Gateway, Flomesh, Gloo, GKE's load balancer, HAProxy, Consul, Kong, Kuma, LiteSpeed, STUNner, Contour, Cilium, BIG-IP, Acnodal EPIC; all of these additional options are supported. So this is 3x; this is the Apple-presentation part of this: we now have 3x more support for different gateway providers. So if you are using any of these, if you have a variety of them, and you may have situations where you wanna use one over another, you don't necessarily have to be all in on one in order to use Argo Rollouts with them; this just provides a universal way of doing it. And so this is actually now my default way of adding rollout support: to not care at all about what my underlying gateway is when it comes to configuring my rollout, because I can just do these in a generic way.
So let me just show you... There's a question that's very relevant that I think is just a different angle on the same thing that you're saying, but does this mean that Argo Rollouts support for providers will track one-to-one with Gateway API support? Yeah, so let me explain how that works. There is a Gateway API plugin that we're showing off today, and that provides the interface for Argo Rollouts to talk to Gateway API. And I don't expect that there is any requirement on this plugin to be updated regularly to support what Gateway API is providing. Because that API is essentially universalized, it shouldn't actually need to be updated very often to support different things. It should be: hey, if somebody adds Gateway API support to their provider, then it should just automatically work with Argo Rollouts. Does that make sense? I think so. And it's worth mentioning that you have to pass conformance tests, right, to be an official implementation of Gateway API. And so therefore, if it's passed that conformance test, it's something you can work with in Argo Rollouts. Yeah, exactly. That's something that we've been working so much on with Gateway API: a very broad set of conformance tests. And just in the past month or so, we've started to add some centralized reporting, so you can very clearly show not just conformance results, but which implementations support which features of the API. Because there's a pretty broad core that everyone supports, but then there are some extended features, like Dan was referring to, that not everyone is able to support. But we're gonna have a centralized way of showing: okay, if I need this feature, these are the implementations that have support for it. And maybe I'll just jump in real quickly and also mention that SMI bit: the Service Mesh Interface. The teams behind that have decided to go all in on Gateway API. So that includes Linkerd, Kuma, et cetera.
So that's what's called GAMMA — Gateway API for Mesh Management and Administration. It's a fun acronym. But anyway, that's coming in the upcoming release, and so Linkerd and other meshes will be supported by Gateway API as well, natively, which is really exciting. There have been some great contributions from the Linkerd team to get this going. Oh, excellent. Yeah, okay, that's awesome. And then of all of these — I mean, we were talking about provider-specific implementations, so for example, header-based routing. I think only 30% of these support header-based routing out of the box. So that's one of those examples, right, Rob, where that's still a little bit provider-specific, because it's a little bit more advanced of an implementation. Yeah, I'd have to look at the specifics. I think header-based routing is actually fairly broadly supported, but header modification is something that is less broadly supported. But there are examples like that, certainly, of some features in the API that are not going to be supportable everywhere. Yeah, okay, all right, perfect. So let's go into the demo a bit, and obviously, you know, keep the questions and comments coming. For the demo, I have something very simple here to show you. And as far as demos go, it's almost a little boring, because the whole point is that this stuff is just universal now. So I showed you earlier what our rollout would look like using a different provider, but here you can see I've got an Argo Rollout. It's deploying an Argo Rollouts demo. And under my strategy here, I've got my canary, and I've specified which services are being interacted with for our canary service and our stable service. And under traffic routing, I'm specifying the plugin, argoproj-labs/gatewayAPI. And then I just pass in the route that we've created there and the namespace. That's all I really need to do.
And this basically means that my traffic routing is going to go through the Gateway API, so I can define it once. And if my underlying provider changes, it doesn't matter, because the plugin is basically going to provide the pass-through to the Gateway API. So here you can see I've got a canary going that's set to set the weight at 30%, then 60%, and then 100%. And in this case, it's actually not doing an analysis template — it's relying on me to complete the promotion. So just to show you what that looks like: if I look at a current rollout that I have going on — here's one. This is actually in GKE. You can see that it's currently paused and has been going for five days, quite old. And this one's just using GKE's built-in load balancer, so I didn't install any additional provider — it's just baked in. And you can enable that pretty easily. Let me jump around really quick — oh, here's that canary happening that I started earlier. Let's see, I was going to show you: if I look at the example documentation here, you can see under the plugin, we have a couple of examples. So under Google Cloud, all you need to do is enable Google Cloud to use the Gateway API standard. You can modify an existing cluster by passing the argument --gateway-api=standard, or you can do it when you're creating a new cluster, and it will enable Gateway API support within GKE. And of course it's going to be a little bit different for each provider, but that sets up the basics. The point is that rollouts is universal and extended, right? So with my canary that's going right here, I can actually promote this. So let's do kubectl argo rollouts promote — let's promote our rollouts demo. And this is going to basically set the next traffic progression to happen. So you can see it's now progressing. It's spun up four additional pods here, and so it's getting more traffic — it's going up to that 60% range.
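For reference, here's a minimal sketch of a Rollout spec like the one described in the demo — canary and stable services, the argoproj-labs/gatewayAPI traffic-routing plugin pointing at an HTTPRoute, and 30%/60% weight steps with manual promotion (100% is implicit after the last step). Service, route, and image names are placeholders; check the plugin's documentation for the exact field names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue   # placeholder image
  strategy:
    canary:
      canaryService: canary-service            # placeholder service names
      stableService: stable-service
      trafficRouting:
        plugins:
          argoproj-labs/gatewayAPI:
            httpRoute: demo-route              # the HTTPRoute the plugin reweights
            namespace: default
      steps:
        - setWeight: 30
        - pause: {}                            # wait for a manual promote
        - setWeight: 60
        - pause: {}
```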
These are all running, and it is once again paused. Of course, we can stop this again — but let's do another promotion and we'll watch this progress again. So you can see, it should currently be in healthy status. That's great. We've got those 44-second-old ones. There's a slight delay happening while it spins up pods, it looks like. But you get the idea — we could continue to promote this until it's completed. Let's run it again. So yeah, you can see now it's actually shut down the old version, everything has moved to the new version, and we've actually completed our rollout. So we're done with our rollout and we've got 100% of traffic routed to the new version. So that's basically it. It's really that simple, it's really that easy to use, and you can see the spec is universal. I could be using this with any provider — if I was using Kong or Contour or any of these other providers, it's gonna work just the same. So easy peasy, lemon squeezy, all thanks to Gateway API. We extended our support in Argo Rollouts and we provided an easier interface to use. As we pass into the questions section, I wanna give a call to action for you to become a GitOps champion. If you're not familiar with this program, we have a thriving Discord community and we offer exclusive content there. And to help you get started, we're providing 100% off on GitOps certification. So if you have not done GitOps certification with Codefresh, we're now offering this for free to everybody who came to this. That's not something we do very often, but it's gonna teach you how to use Argo CD and Argo Rollouts to do canary releases, how to use ApplicationSets, how to manage your repos. And you can use that code, APIme — just make sure the API part is in all caps. You can scan that QR code, and it does expire — I think it says on the 19th, so it's supposed to expire today. I'll go double-check that code while we take questions.
But in the meantime, you can use that code to get 100% off on GitOps certification. It is the most popular and fastest-growing certification for GitOps in the world today, so I appreciate the chance to share that with you. Again, the code is APIme. So with that, let's go into questions — you can pull my screen down — and we can maybe come back to that question that came earlier about the philosophy for Gateway API. So let's see, who was it that asked? I think it was Jesse. Jesse Adelman asked: is there a guiding philosophy, and first principles, that the work on Gateway API has been grounded on? Yeah, that's a really good question. We set out to, well, do a few things. First was defining the scope, right? Trying to figure out what we were trying to solve for. And that was really the next generation of Ingress — trying to fix some of the problems that we had seen pretty broadly in the Ingress API. But then two — I think I kind of alluded to this — the idea that we wanted to provide a broad and portable core set of capabilities. So the portability part is, one, ensuring that what we're defining can be broadly implemented, but then two, really more than any Kubernetes API we've worked on before, emphasizing conformance. Because we have a lot of Kubernetes APIs today — NetworkPolicy comes to mind as another example of an API that doesn't have a built-in implementation. There are lots of implementations out there, but there are some unexpected inconsistencies depending on the implementation you're using. We wanted to do everything we could to avoid that case. And so that meant starting very early, from the outset, with conformance tests: this is what it means to implement this API.
And then along with that, trying to do as much as we could to work with a broad set of implementations. You'll see a lot of the people that have contributed to this API are people that have worked with previous Ingress or service mesh implementations, or built their own custom APIs. So for example, Istio, Contour, et cetera, were very involved in the development of this API. And we learned from not just the Ingress API, but their experience developing their own APIs. So again, I think collaboration, extensibility, and just trying to learn from mistakes we made in the past. Excellent. There's one more question about the development of Gateway API, also from Jesse: how does the core team keep centered on the original concepts? You said step one was to define the scope — how do you keep at that level and not let it creep out? Oh, that is a challenge every day. We've gone through different challenges as we've evolved as an API. Honestly, the first challenge was just getting enough people interested to work on this together, getting something out that we all agreed on, and then getting it implemented. And so at that point, we were just trying to get something going and working. Now we've gotten to the point where scope creep is a very, very real concern, because we keep on getting more and more enhancement proposals that all independently make sense — but unfortunately, there are too many to take on all at once. So what we've done is we've focused on clearly defining what the scope of this API is. And that basically is: is it L7 load balancing? Is it L4 load balancing? Is it service mesh? And then within that, agreeing with the group of maintainers on what our roadmap is and what the priorities are going forward. So there's fairly little where we've said, well, this doesn't belong in the API ever — but we have said, well, our short-term roadmap for the next year needs to focus on these things.
So it is a constant challenge. And a general call-out: we can always use more help. Yay, open source — there is always more work to do than there are people to do it. So for anyone on this call, if you're interested, come check us out, get involved. We can always use more contributors. And if you wanna guide the next generation of APIs for Ingress, mesh, load balancing, whatever — we'd love to have you. It's good and important work. I have a question about the rollouts plugin. You did a great job of showing the problem and the solution — chef's kiss, I'm all in, I bought in. But my question is: what about functionality that goes above and beyond the Gateway API spec? Well, first of all, is there such a thing? And are there ways that plugins are limited versus being directly in Argo Rollouts? Yeah, so for Argo CD, there's not really a limitation in how the rollouts plugin operates. The way that we've actually been scoping Argo Rollouts as a project is that we want more of these things to be plugin-based, because trying to maintain integrations with 100 APIs — that's really hard. It's such a hard problem that somebody actually started an entire project just to work on it, called Gateway API. So we don't wanna be doing it in the Argo project. We want Rob to do it. That's his job now. We're glad to pawn it off on him. So there's not really a limitation when it comes to how these rollouts operate within the context of Argo CD or Argo Rollouts. The limitations are only in regard to the spec. So we mentioned earlier that, for example, if you wanted to do header modification — I think that's one that Rob brought up — that's something that isn't supported by everything. And I'm not sure exactly — I think, actually, Rob, there's something in the Gateway API spec for classifying features like that. It's called "extended features," is that what it's called?
Is that what it's called in Gateway API? Yeah, you're completely right. So we have a concept of support levels in Gateway API. One is core, and that means we expect every implementation to support this feature. So that would be like path matching — prefix path matching, everyone supports that. On the other hand, header modification is something that would be extended support. That's something we recognize not everyone's gonna be able to support, but when they do, it's going to be in this way — we have conformance tests to cover it. And just one more shout-out in that direction: we have this concept of supported features that is coming in an upcoming release, which will allow every gateway implementation to publish, via GatewayClass — just in status — hey, these are the features that this GatewayClass supports. And we're hoping to take that to the next step and actually provide standardized warnings, and maybe even prevent you from using configuration that doesn't work with the implementation you're using. So again, ongoing work to try and surface that. Right now we're trying to balance having a good set of features while also recognizing that not every implementation can support every feature. Very smart. Y'all are so impressive. Jesse has another quick question: are the conformance tests available? Are they public? Yes — great question. If you go to the Gateway API repo, there's an aptly named conformance directory within it. I'm happy to explain how they work a little bit more — we don't have as much documentation around that as we should. So definitely another thing I should call out: there's a #sig-network-gateway-api channel on Kubernetes Slack, and there's a great group of people there. We're always happy to chime in and answer questions like this. And yeah, we'd love to talk about conformance tests.
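As a sketch of that core-versus-extended distinction, here's an HTTPRoute rule combining a prefix path match — a core feature every implementation should support — with a header-modification filter of the kind Rob describes as less universally supported. All names are placeholders, and whether the filter works depends on the implementation you're running:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: support-levels-example
spec:
  parentRefs:
    - name: example-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix               # core: prefix path matching
            value: /api
      filters:
        - type: ResponseHeaderModifier     # header modification: not universal
          responseHeaderModifier:
            add:
              - name: x-served-by
                value: canary
      backendRefs:
        - name: api-service
          port: 80
```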
We've got a few other sets in progress, but honestly, I think this is the most extensive set of conformance tests I've seen for any Kubernetes API. We have a question about Argo CD and Rollouts: would you ever merge those two together? Yeah, I don't think so. There's not really any advantage to merging them as a project. In fact, we see it as a huge advantage that they're separate and can be used independently. We have users — for example, Salesforce uses Argo Rollouts basically to handle all of their canary and progressive delivery everywhere, but they don't necessarily use Argo CD everywhere. So being able to use them independently — we view that as a huge advantage. And look at the way they're architected: Argo CD can be installed in one cluster but then be connected to a hundred different clusters, so you're deploying and managing updates to all of those clusters. Argo Rollouts must always be installed on every cluster it's gonna be used in, because it's part of the internal routing and management of Kubernetes itself. So to do progressive delivery, you always need to install Argo Rollouts anyway — whether you're not using Argo CD at all, or you're using Argo CD in some sort of central management cluster. So there's not really any reason to merge them and version them together. They operate independently, and we view that as a huge advantage. So I think this is the last question from the audience: for those of us who are really excited about Gateway API and wanna get started, what's the best way? How do we implement Gateway API in our clusters? Yeah, so again, I'd definitely recommend joining the #sig-network-gateway-api Slack channel, because there's a great group of people there that have already implemented the API and can provide tips. Also on our website, which has been linked previously, there is an implementer's guide. So that's also a starting point.
But because Gateway API is open source, and almost all implementations of Gateway API are also open source, if it were me, I'd take a look at some of the implementations that already exist. We have implementations that translate to cloud load balancers — that would be GKE — and we have ones that translate to Envoy, HAProxy, NGINX, lots of underlying data planes. So depending on what you want to actually implement it with, there's probably already an example for you. I'd also recommend looking at previous work, and maybe just contributing there too, depending on your use case. Yeah — I think he's asking just, hey, how do I use this with my cluster? If you're using EKS, I don't think you actually need to do any enablement for Gateway API with EKS ALB load balancers — I think it's just enabled by default. For GKE, you do need to enable a flag that basically just says gateway-api standard. So you just have to look up specifically how to enable it for your cluster or your specific gateway. But most Gateway API providers don't need any special flags. If you're using, you know, Gloo Mesh, I think you can just start using it with Argo Rollouts and you don't need to do anything special — it's just enabled by default. But you'll have to look up the specifics for your provider to see if there's something you need to do. Yeah, I think right now the support on AWS is limited to their VPC Lattice product, and then you can install other implementations — say Envoy Gateway, Contour, et cetera — on EKS if you want to use it there. If someone, like this person Elsky, knows that they have non-standard protocols and dynamic ports, is Gateway API still for them? It depends on how invested you want to be in this. Certainly, depending on how non-standard these are, there may be other people with your same use case.
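As an example of that per-provider enablement, this is roughly what turning on Gateway API looks like on GKE, matching the --gateway-api=standard flag mentioned earlier. Cluster and location names are placeholders, and the exact flags may change, so check Google's current docs:

```shell
# Enable Gateway API on an existing GKE cluster (placeholder names)
gcloud container clusters update my-cluster \
  --location=us-central1 \
  --gateway-api=standard

# Or enable it at cluster-creation time
gcloud container clusters create my-cluster \
  --location=us-central1 \
  --gateway-api=standard
```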
And if there are, it's definitely worth bringing that to the Gateway API community and seeing if you can open source it. At the same time, we've seen others develop their own custom route types. So there's somebody I know that's working on an IPRoute right now, for example. And the beauty of that is you plug into the rest of the Gateway API ecosystem — you have all the infrastructure around it — but you can build your own routing mechanism that may work for your custom protocol or use case. So definitely interested in whatever that use case is, but we've tried to make the API pluggable so you can replace different components as necessary. One last comment from Jesse, just saying that Gateway API isn't very Googleable, but he's not really sure how to solve it. Ha — we are absolutely awful at naming. The first name for this was the Services API, which was almost universally as bad for finding things. Gateway API may be even worse, I don't know. But since we already did the rename once, I think it's just never going to be renamed again. So for better or for worse, we're stuck with our non-Googleable name, but we'll just become more popular than all the API gateways eventually. If you put it in quotes and hyphenate it, it works pretty well. So just one more time: we have the website on the bottom of the screen right now, and also the Slack workspace. If you can't Google it, you could ask the folks in the Gateway API channel in the Kubernetes Slack workspace to help guide you in implementing Gateway API. And I think that's it. Is there anything y'all would like to say in closing before I do my closing statement? Oh, I think Dan's frozen. Oh, y'all are quiet, so— No, thanks so much for the time. It's been great to present and talk. I always love to talk about Gateway API. Thanks for the great questions and discussion, and I'm so excited to see that Argo integration here. This presentation was stellar.
I appreciate you both so, so much. And I appreciate everyone who tuned in today and watched live and participated in chat. Also those of you who watch the recording, thank you so, so much. It was great to have our new friends, Dan Garfield and Rob Scott here. Thank you again for sharing your time and expertise, especially about Gateway API and the Gateway API plugin for Argo rollouts. Here at Cloud Native Live, we bring you the latest in Cloud Native Code on Tuesdays and Wednesdays at noon US Eastern. Thanks one more time for joining us today and we'll see you at the next one soon. Thank you so much everyone, appreciate you. Thanks Whitney.