All right, thanks everybody. Welcome to our talk: Flagger, Linkerd and Gateway API. So, hello, everyone. I'm Sanskar. I'm a software engineer at Weaveworks. I maintain Flux and Flagger, and I also try to contribute to Gateway API and Kubernetes upstream sometimes. I graduated college like five months ago, so this is my first KubeCon, my first talk, everything. So I'm pretty excited, and a little bit nervous as well. So, Jason?

Awesome. Yeah, I'm Jason Morgan. I am a technical evangelist for the Linkerd project at Buoyant, which means it's my job to tell you to check out Linkerd and why you should use it. And thanks so much for coming. You can find me on Twitter and on GitHub, if you want that for some reason, and you can also find me if you have questions or want to yell at me after the talk, as @Jason on the Linkerd Slack. Thanks a lot.

Okay. So, how many of you have actually tried out the Gateway API, or even just, like, gone through the docs or something like that? That's a pretty good number, actually; I wasn't expecting that. For those of you who do not know about the Gateway API, it represents the next evolution in how we do traffic management in Kubernetes clusters. That's a pretty fancy tagline. What it actually is, is a new set of objects and APIs, packaged as custom resource definitions, that lets us do networking and load balancing in Kubernetes clusters in a robust manner. So how many of you have, like, a bookmark for your Ingress annotations, like a thousand of them, right? This is actually trying to address that, by making sure that us as maintainers of projects and you as end users have a nice time when you're configuring networking in our Kubernetes clusters.

All right, awesome. So let's talk about the why for Gateway API. I'm not going to read this because it's a pretty dense block of text, but it's loosely taken from the Gateway API site. My translation is that what you're getting with Gateway API, the why here, is that if we can get a shared set of standards that allow different tools, different projects and different vendors to work together without actually needing to work together directly, we all win. And the bigger why is that there's actually a lot of stuff in the cloud native space, right? A lot of projects, a lot of tools, a lot of different ways of working. And as you get into this realm of different custom resource definitions and the ways different people work, the more you can do in a standard Kubernetes fashion, the better off you'll be. Which kind of gets into: how does the Gateway API help you as users, us as projects, and the companies that provide more value to folks? Well, first and foremost, we get more and easier interoperability between our projects. And that's what today's talk is, right? Sanskar and I got together and we built an integration between Linkerd and Flagger without either of us coordinating directly: I didn't look at Flagger and he didn't look at Linkerd, and we were just able to make this thing work. Beyond that, if you do it well, and if this Gateway API standard really becomes the new way of defining how traffic flows into and around our clusters, we're going to have fewer custom resource definitions to manage, so we can handle our traffic, you know, together, right?
What we're hoping for is that, you know, Gateway API is going to help projects like Linkerd and Flagger get together and pair up to slay the predator of complexity in Kubernetes. And thank you for the pity laugh.

Right. So I want to discuss more about progressive delivery. So, a quick show of hands: how many of you are implementing progressive delivery in your clusters, or, you know, have tried it out? Okay, that's pretty good. For those of you who haven't, let me try and convince you that you should be implementing progressive delivery in your clusters, and you should probably be using Flagger to do it. Okay. So progressive delivery solves a very specific set of problems, and that problem is basically: how do you introduce new software versions into your cluster? You have a v1 of your application, and if you want to introduce a v2 of your application, how do you expose that v2 to your end users? You could do the easy thing: just, you know, do a rolling update, just replace v1 with v2, and everything would work fine, right? So if you take a look at this very complicated, fancy diagram, you could just go from step one, where you have v1 of your application up and running, directly to step six, where you have v2 of your application up and running. The problem with that is: what if v2 is not working as expected? What if there are bugs? What if you have broken features unintentionally? You do not want your users to end up using a buggy version, right? You do not want them to have a bad experience on your website. So the problem that progressive delivery addresses is: how do you make sure that your blast radius is as confined and as configurable as possible? And progressive delivery answers that using iterative steps, in a progressive manner, to make sure that your users have a better time experiencing your website. So how it basically works is that instead of replacing version one of your application with version two, you leave it up and running. And then, alongside version one, you create another deployment, which will run version two of your application. And then something like Flagger will start slowly shifting traffic to version two of your application. And that means some users, a very small subset of your users, are going to end up experiencing version two of your application. And one of the prerequisites here is that you need to have a robust observability stack, something like Prometheus, Datadog, whatever works in your organization. It doesn't matter, as long as you have something which can track the performance of version two of your application, you're good to go. What happens behind the scenes, which you don't really see, is that Flagger hooks into your observability stack and makes sure that version two is working as expected. So it can measure stuff like latency, or, I don't know, request success rate, you know, like how many 503s are there, or how many 404s are there. Whatever fits your SLOs and KPIs, Flagger validates that. And only when that validation is met does more and more traffic start shifting to version two of your application. So more and more users start experiencing version two of your application, and all of this happens in a progressive, iterative manner.
So as to make sure that if version two is not a good version of your application, the impact, the disaster and the related risks are very, very limited, right? So here we see that some traffic is progressively shifting to the new version. And once we reach a threshold, in this case 50%, we can be confident that, you know, version two is working amazingly well, we are amazing software developers, so, you know what, it's time to go ahead and promote this version. And by promotion, what I mean is: right now, if you notice, we have two deployments, right? The green one and the blue one. The blue one is running v1 and the green one is running v2. But the thing is, we don't really need v1 right now, because we are confident that v2 works just as well, right? So we can start replacing the blue pods with the v2 application. So what happens here, basically, is we're doing a rolling update. Slowly, the first deployment, the primary deployment, is going to start running v2. And then eventually we don't need our canary deployment, so we just get rid of that. And that's our canary deployment strategy. Yeah, so this is how you do a canary run.

All right. And so now that we know a little bit about progressive delivery and canaries, can you tell us about Flagger and how it helps automate that process?

Sure. So Flagger, if I were to describe it in one line, is a Kubernetes operator that automates the entire process of progressive delivery, everything I explained in the last slide. It automates that entire process for you, for free, right? So that means it lets you shift traffic to your, you know, v2 applications in production gradually, and it rolls back safely. So if you were to deploy on a Friday, let's say, and for some reason, unfortunately, version two of your application was not what you were hoping it to be, Flagger will detect that and automatically roll all traffic back to your last known stable version, so that you don't get like a thousand alerts on PagerDuty or something like that. Yeah. Okay. So the next thing is that it has a very extensive webhook mechanism, a very flexible webhook mechanism, which allows for load testing, acceptance testing; it really lets you have a lot of control over how you want your canary analysis to be driven forward. And it also has a lot of support for observability platforms, like Prometheus, Datadog, CloudWatch, InfluxDB. I'm not going to go through all of them, but it's a lot. And this used to be one of my favorite things about Flagger, just how many things it can work with. You know, it can work with pretty much every ingress or service mesh out there. I'm not going to go through all of them, but pretty much everything is supported, except for, like, two or three. But this was the past, and sort of is the present; the semi-present and the coming future have Gateway API in them. I am really, really excited about that, because what it means for me as a maintainer, for us as maintainers, is that we don't need to write thousands and thousands of lines of custom integration code, looking at how Istio does VirtualServices or how SMI does TrafficSplits, right? All we need to do is be compatible with Gateway API, and we are automatically compatible with the entire Kubernetes networking ecosystem. And that is something which really excites me. Yeah.
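For a concrete sense of that webhook mechanism, here is a minimal sketch of the analysis.webhooks section of a Flagger Canary, modeled on the Flagger docs; the flagger-loadtester service, the test namespace and the URLs are illustrative assumptions, not something shown in the talk:

```yaml
# Sketch of Flagger's webhook mechanism (illustrative; assumes the
# flagger-loadtester from the Flagger docs is running in namespace "test").
analysis:
  webhooks:
    - name: acceptance-test
      type: pre-rollout                     # gate: must pass before traffic shifting begins
      url: http://flagger-loadtester.test/
      timeout: 30s
      metadata:
        type: bash
        cmd: "curl -s http://podinfo-canary.test:9898/healthz"
    - name: load-test
      type: rollout                         # runs on every analysis interval to generate traffic
      url: http://flagger-loadtester.test/
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```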
So we talked a bit at the beginning about, you know, the better interoperability, right? And the whole theme of this talk is that if Flagger can learn less about service meshes and ingresses, you all benefit as consumers of tools like Flagger, because people like Sanskar can focus on writing new features and making that tool better, and not on "how do I support the next new tool that's going to arrive in this somewhat gigantic ecosystem?" Speaking of the gigantic ecosystem, let me tell you the good news about service mesh. Linkerd is an open source project, created by the folks over at Buoyant, the people that pay my salary here. And I'm going to tell you that it is the lightest weight, the fastest and the most secure service mesh for Kubernetes out there. I'm going to say that for a bunch of reasons, but one of them is: I believe it's true. It is the only service mesh to hit graduated status within the CNCF. So that means we are ranked among the most mature projects in the space, and you can rely on us for production use. On top of that, we're used by small companies, by very large companies, by governments, by nonprofits; we work in a bunch of different spaces. In fact, if you didn't see it at KubeCon Valencia, the folks over at Xbox Cloud Gaming gave a talk about how they use Linkerd to handle multi-cluster and provide encryption for all their applications, and the story was fairly simple and straightforward in spite of it being a very large and complex environment.

So, a quick view: how does Linkerd work? Or how does a service mesh work in general? Here I've got a picture of a Kubernetes cluster. We've got some standard components: the ingress, or what hopefully we'll start calling the gateway; a web front end; and two back ends, foo and bar. Linkerd, or a service mesh in general, works by installing a control plane, which is your command and control interface for the tool, and then by routing all your traffic through a series of small load balancers, in our case the linkerd2-proxy. Those proxies make up our data plane, and they'll do stuff with our traffic. The stuff in question is things like providing mutual TLS, allowing us to get standard metrics out of every single application without needing to instrument the actual application, improving on built-in Kubernetes load balancing, and other things that you can hear more about if you check out the Buoyant booth later today.

So with that, we want to go back to Gateway API and the point of this talk: how does this all work together? And we want to give you an overview of some of the objects. Right. So these objects here basically represent the core, the backbone, of the Gateway API project. So as I said in my previous slide, Gateway API is nothing but a bunch of APIs, custom resources, that have been modeled together, and these represent those objects. So we'll take the first one: that's called the GatewayClass. Pretty simple, very similar to IngressClass. It basically represents a type of gateway, a Gateway API implementation. So, for example, if you have an infrastructure provider who's provisioning clusters for you, they will probably install this GatewayClass object for you, because they decide the type of load balancer you get. Yeah. Right.
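For reference, a GatewayClass is a tiny object; a minimal sketch, where the name and controllerName are hypothetical values chosen to echo the foo-lb example coming up:

```yaml
# Sketch of a GatewayClass, typically installed by the infrastructure
# provider. The name and controllerName here are hypothetical.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: foo-lb              # cluster-scoped: no namespace
spec:
  controllerName: example.com/foo-lb-controller
```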
And just to stop you there: if you see on the left-hand side, we've got pictures of happy folks doing jobs, right? The goal here with Gateway API is to give different objects to different folks, so that we can use Kubernetes role-based access control to decide who's doing what, and in what way, right? So that we can make the blast radius of security changes smaller. So we've got the GatewayClass, which is the type of load balancer we're going to get, or the way we're going to bind to that backend network. And then our cluster operators, or our platform engineers, or whatever we call them in our organization, they're going to be the ones concerned with the Gateway, which used to be the ingress: how are we going to build that front door for our cluster? How are we going to begin getting traffic in? Now, past that front door, we have a new object type. Right. That's going to be our HTTPRoute.

Yeah. So there are actually a lot of route types: TCPRoute, GRPCRoute, and so on. We're not going to concern ourselves with those; we're going to talk about HTTPRoutes. So as Jason said, the Gateway is basically the front door, or the load balancer, right? Ingress used to do this differently: Ingress had your load balancer configuration and your routing configuration in the same object. This has been split up, which in my opinion is a very good decision, into two different objects. The Gateway defines your load balancer configuration, and HTTPRoutes define how you forward traffic: which services receive what traffic. And this is something which your application developers and SREs are really concerned about, because they are the ones who are going to determine that "I want traffic which has v2 in the path to go to v2 of my application", right?

Okay. So it's time for some pain, er, YAML, because there is nothing better than YAML, apparently. So we're going to look at some YAML. So here we have a Gateway object. It has a name, it has a namespace, and then it has something called a gatewayClassName. So GatewayClasses are cluster-scoped objects, they're not namespaced, so you don't need a namespace to refer to a GatewayClass. Each Gateway object is basically an instantiation of a GatewayClass; here, this object is of type foo-lb, that's all it's saying. Then, since Gateways are basically load balancers, they're a layer on top of load balancers, and load balancers listen for traffic, you need to define a bunch of listeners. So here we define one listener, which listens on port 80, and it listens only for HTTP traffic. And one of my favorite features of the Gateway API project, this is really powerful, is the robust ACL mechanism you have here. One of the big missing features of Ingress was that you could not reference things across namespaces, right? If you have a Service in another namespace, that Ingress needs to be there too; it was very hacky if you wanted to do something else. Gateway API really takes that into consideration and builds a very, very nice and robust model for how to address those concerns. So here you see we have something called allowedRoutes, and it says that it's only going to allow routes of type HTTPRoute to assume responsibility for this traffic. That means any other route, of, like, TCP type or gRPC, cannot assume responsibility for any traffic that flows into this listener, right?
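Here is a hedged sketch of a Gateway along the lines being described, including the allowedRoutes restrictions and the namespace label rule that comes up next; all names, the namespace and the label are hypothetical:

```yaml
# Sketch of the Gateway being described; schema follows
# gateway.networking.k8s.io/v1beta1, names are hypothetical.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: foo-gateway
  namespace: gateway-ns
spec:
  gatewayClassName: foo-lb        # cluster-scoped reference, no namespace needed
  listeners:
    - name: http
      port: 80
      protocol: HTTP              # this listener only accepts HTTP traffic
      allowedRoutes:
        kinds:
          - kind: HTTPRoute       # only HTTPRoutes may bind to this listener...
        namespaces:
          from: Selector          # ...and only from namespaces with this label
          selector:
            matchLabels:
              gateway-access: granted
```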
Furthermore, that HTTPRoute needs to be in a namespace which has that label, right? Otherwise that HTTPRoute is not allowed to handle the traffic flowing into that listener.

So does that mean, if I've got, like, a backend namespace where I don't intend to expose anything to the internet, I can ensure that folks can't create routes that bind to this gateway? Yep, exactly. Exactly. And the vice versa is also true: if you want someone to have access to that, you can enable that explicitly as well. Great.

Now let's look at an HTTPRoute. It has a name, it has a namespace. One thing I would like to point out is that the namespace of the HTTPRoute and the namespace of the Gateway are different, right? And this is another example of cross-namespace referencing, which was not possible earlier, but is really, really easy to do here, and in a much more intuitive manner. Then we have a parentRef, which basically says that this HTTPRoute's parent is that Gateway object. So each HTTPRoute needs to have a parent to get traffic from. Once the Gateway is established, the load balancer is there, the HTTPRoute needs to say: this is the load balancer I want traffic to be coming from, and I want to do something with that traffic. And what gets done, what rules are applied to the traffic that comes into an HTTPRoute, is determined by the rules specification. And you can really go to town on this: there are so many extensive configuration options for what you can do. You can modify request headers; recently we added support for modifying response headers as well; you can determine where traffic goes, the weights, et cetera. You have a lot of flexibility in what you want to do here. Here we have a pretty simple rule that just says: forward all traffic to the backend foo service on port 8080.

All right, so this is the messiest diagram, and one of our last slides, so let's talk through it real quick. What I want to do is talk about the star of this show, which is the work that we didn't have to do to make this actually happen. Or, more importantly, the interface between Flagger and your service mesh, in this case Linkerd: that is this HTTPRoute object, that orange thing in the center of our diagram. All right, so I'm going to talk through this a little bit. We have Flagger on the right-hand side, in green. Flagger is the operator that's going to actually automate that canary rollout that we're all here to hear about, right? And Flagger gets its instructions from a custom object called the Canary. So the Canary defines: hey, what deployment do I care about? What are my testing criteria? What are the steps that I'm going to use? And how am I going to validate that? On top of that, when we install Flagger, we give it a connection to the Prometheus that comes with Linkerd, or whatever Prometheus is storing Linkerd's metrics data. So we talked a little bit about Linkerd, but one of the things it gives you is standard metrics for every single application in your environment, things like: what is the request success rate? What is the latency? And we're going to use that information in this actual demo, or Flagger, more accurately, will use that information to decide: is this a good new version, or is there a problem and we should roll back? All right, so Flagger gets its instructions from the Canary; it sees a change in a deployment object and decides it's time to do a new canary rollout.
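A hedged sketch of the HTTPRoute just walked through, sitting in a different namespace from its parent Gateway; all names are hypothetical, carried over from the Gateway sketch above:

```yaml
# Sketch of the HTTPRoute described above. Names are hypothetical; note
# the cross-namespace parentRef back to the Gateway.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: foo-route
  namespace: app-ns               # not the Gateway's namespace
spec:
  parentRefs:
    - name: foo-gateway
      namespace: gateway-ns       # cross-namespace reference to the parent Gateway
  rules:
    - backendRefs:
        - name: foo               # forward all traffic to the foo service...
          port: 8080              # ...on port 8080
```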
So it's going to spin up a new version, a new canary service to receive traffic, and then it's going to tell the HTTPRoute: please shift some percentage of the traffic; in our example, 80 percent to the canary, 20 percent to the primary. And that's where it stops. We don't want it to understand the internals of our service mesh or ingress or whatever the tool is. We want it to express its will and, you know, we'll be like the captain in Star Trek, right? Like, make it so.

So with that, we're going to talk about the GAMMA extension. The GAMMA extension is an extension to Linkerd that we actually wrote as part of this demo exercise. And we built it not by talking to the folks at Flagger, but instead by looking at the HTTPRoute specification and figuring out how it works, then watching that HTTPRoute and translating the changes that it sees into some Linkerd objects, specifically the Linkerd ServiceProfile. That's important to me, because I care about Linkerd internals; it shouldn't be important to anybody else here, but it's on the slide, so I've got to talk it through. After that, our Linkerd destination service, which controls how traffic gets shaped inside your cluster, is going to read from that internal Linkerd object and make the actual changes in real life.

So with that, who wants to see it in action, live, with an internet-facing demo? All right, let's do it. It's demo time. There we go. All right. So we've got an application running with our happy cuttlefish. If you go to podinfo.cevo.59.io, you'll be able to actually see this thing happen live, but good luck getting to it on conference Wi-Fi. On top of that, if you want to see Linkerd's view of this environment and what's going on, you can go to dashboard.cevo.59.io and actually look at the traffic split, because I just exposed it all to the internet. Yeah, you know, why not? One thing I want to note: the folks at Civo provide really easy to get Kubernetes clusters; they spin up fast. Check it out; they have a booth downstairs on the conference floor, and they've been really helpful to us in doing this demo.

There's a lot going on on the screen, so I'm just going to tell you what's happening real quick. In the top left, or my top left, hopefully it's the same for you folks, you're going to see some pods. We have a generator, which is actually generating some traffic for podinfo. And we have our podinfo application, specifically two pods called podinfo-primary: that is the primary service that Flagger has configured for, you know, the stable version of our application. For all these pods, you see that we have two containers running per pod: one for the application, one for the Linkerd proxy, which actually does all the service mesh work. On the right-hand side, you see an active HTTPRoute, called podinfo, for the podinfo service. All the traffic is going to the primary; none is going to the canary. And at the bottom, we have Linkerd's view of traffic in the cluster. So it sees podinfo-primary, the service, getting 50-something requests per second. But we're going to go ahead and change that right now. Is the font visible for everyone, or does that need to be zoomed in or anything? Okay, cool. All right, I like the thumbs up. So first off, we're going to look at the Canary object, just to give you a sense of what's going on here. Right. Sounds good. Do you want to? Yeah. Thank you.
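For reference, a Canary object along the lines about to be walked through might look roughly like this; a sketch based on Flagger's Gateway API provider docs, where the gateway reference, namespaces and port are illustrative assumptions rather than the exact demo values:

```yaml
# Sketch of a Flagger Canary using the Gateway API provider. The gateway
# reference, namespaces and port are illustrative assumptions.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: gatewayapi            # drive traffic shifting through HTTPRoutes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo                 # the deployment Flagger watches for changes
  service:
    port: 9898
    gatewayRefs:
      - name: foo-gateway
        namespace: gateway-ns
  analysis:
    interval: 5s                  # shift traffic every five seconds
    maxWeight: 95                 # promote once 95% flows to the canary
    stepWeight: 5                 # move 5% of traffic per step
    metrics:
      - name: request-success-rate   # built-in: catches 5xx spikes, e.g. 503s
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration       # built-in: request latency
        thresholdRange:
          max: 500
        interval: 1m
```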
So let's get past all this, and then we have a spec here. So here we're basically defining how Flagger is going to drive the analysis forward. We say that we want to shift traffic every five seconds, that's the interval. And we want to do it until a weight of 95% is reached, and we want to shift 5% of traffic with every step. So basically what it means is: it's going to shift five percent of traffic to the canary service every five seconds, until 95% of the traffic is going to the canary. At which point it's going to say: okay, since 95% of traffic is going to the canary, I am satisfied, Flagger is satisfied, with the second version of your application, and it's going to go ahead and promote it. And the way it determines whether your version two is working fine or not is by looking at these metrics that you have defined here. So Flagger comes with a few built-in metrics, because the Flagger philosophy is batteries included. Here we have a request success rate, which measures things like 503s, and then we have request duration, which is basically the latency of your requests. And then we say that we want to use the Gateway API provider, because this is a Gateway API demo, right? And then we say that the targetRef is a Deployment and the name is podinfo; that is the thing we want to target. Pretty simple.

All right, and with that, we're going to make a change to our podinfo deployment. The Canary instructed Flagger to look at that deployment and detect changes, which it will do in a second. And last time I did this, I made a mistake, so we're going to try really hard to do it right. All right, and so we're going to replace our cuttlefish with a logo that demonstrates how well we all work together. All right, so let's go back to the other one. What we're going to see here, God willing, is a set of new pods come up right here. So this is the moment of truth, where we check whether or not I fat-fingered that. And, oh, come on, 2/2. It'll happen. It'll happen. There we go. So we see our pods have come up. We have a new deployment, and what we're going to see here in a second is a new service getting traffic: that is the podinfo-canary service. If you look at the top right, we see that right now the podinfo HTTPRoute is specifying 90% of the traffic on the primary, 10% on the canary. One thing to note: at the bottom we have an active view of Linkerd's insight into the services, but it's at a 10-second polling interval, so it goes a little bit slower than what you're going to see in the HTTPRoute, all right? And we can hop over now and look at our actual application. So one, if you feel like going to the dashboard, you're going to see the new deployment starting to receive some traffic, and some metrics about it. And if we look at our cuttlefish, they'll wait patiently and be replaced by our successful demonstration of getting ready to slay the predator of complexity together. There we go. And that's the demo, folks. Thank you very much for coming to our talk. I hope this has helped. Yeah. Thank you.

Thank you so much for the talk. We have eight minutes for questions. Any questions?

Hello, that was a great demo, thank you. Quick question: in the Gateway API spec, you mentioned you could reference resources across namespaces. Can you talk about why that's a feature? To me, that sounds like a flaw, to be honest. Okay.
Because if you delete namespaces, then other namespaces are impacted, things like that. So, pretty much, I mean, we have a Gateway API maintainer right here, he can probably answer it way better than I can. But from what I understand: when you have different teams, like an application team which is working on application A and another on application B, they don't want to provision new load balancers for each application, right? So when you have a team which owns one namespace and another team which owns another namespace, you want to be able to hook up to the same gateway, but do it in a secure manner. That's why both the Gateway and the HTTPRoute have these ACL mechanisms: the HTTPRoute explicitly has to say "I want to hook up to this gateway", and the Gateway object has to say "I want to allow this HTTPRoute to be hooked up to me", right? So I understand it can be a security concern, cross-namespace references always carry higher security risks, but this is built in a much more robust, ACL-driven way. So, yeah. And if you want more clarity, he's always there.

Is the Flagger and Linkerd integration feasible with the current Ingress spec, or is it something which is feasible only after migrating to Gateway API?

Yeah, great question, and thank you for that. So Flagger and Linkerd work together now, without the need for the Gateway API stuff in particular. If you look at our docs, or you see some other examples we have of talks showing this integration, we do it with the service mesh interface spec today, right? What we're hoping is that Gateway API will become a standard tool for ingresses and service meshes to work together, and we can all use that instead of what we're doing with SMI. All right, does that answer your question? So, just to answer your question more, I don't know, visually: you can go to the Flagger docs; right now it uses something called TrafficSplit, as Jason said. So it works already, but we wanted to build something SMI- and Linkerd-agnostic.

Yeah, that was a wonderful demo. So with Gateway API sort of unifying, standardizing the feature set across all ingresses, what do you suspect will be the new criteria for choosing one ingress over another, if they're all basically supporting the same set of features with the Gateway API?

So the Gateway API maintainers have been really smart about that, I would say. They've done two things. First of all, there's a stable channel and there's an experimental channel. The stable channel has everything which is agreed upon by the community as a hard requirement, and all of the good stuff, all of the experimental features, obviously are in the experimental channel. And the second thing is that there's a division of what kinds of features need to be supported. There are three divisions, sorry: there's core, there's extended, and there's custom. Core features need to be supported by every project that says "okay, I am Gateway API compliant"; they need to do that, otherwise they can't claim that. Then extended features, sorry, I said experimental, I mean extended; extended features only need to be supported by some service meshes or ingresses. They can decide whether they want to support them or not.
And custom features are basically Gateway API's way of saying that if you have a very custom thing, like a cookie policy that no other ingress or service mesh provides, you can have that built into the Gateway API. But when you switch over to another ingress, like if you're migrating from, let's say, NGINX to, like, Linkerd, because you decide it's time to try a service mesh, that might or might not be supported. So you need to be careful about which divisions you're actually using. All right, do we have more questions? Yeah, one in the middle there. Sorry if there was anyone to the left, I wasn't looking that way.

That was a good demo. How does this canary rollout, or, sorry, this canary deployment, compare to, like, Argo Rollouts' canary deploys?

Okay, I have never really tried out Argo Rollouts, I've just gone through the docs, so take this with a grain of salt. But Argo Rollouts, from my understanding, has its own custom resource definition, which basically tells you to take your deployment spec and put it into their custom resource, basically migrate your deployments into their specific custom resource. We decided not to do that, because if you are a big organization with 1,000 deployments, we do not want users to go through each deployment and convert it into a different custom resource. So what Flagger does is it looks at your deployment and it creates an exact replica of it. And then one deployment becomes the primary, the stable deployment, and one deployment becomes the canary deployment, where you run all sorts of tests, right? So I would say that's the main difference between Argo Rollouts and Flagger.

I can tell you, from the perspective of a consumer, not someone writing that: they're similar, but they have different philosophical approaches, right? That's the big thing I noticed. Today, the integration between your service mesh, or at least Linkerd, and Argo Rollouts and Flagger is both at that service mesh interface level, right? And hopefully, in the future, it'll stay the same from our perspective, where we'll all work together at that HTTPRoute layer, coordinating at that HTTPRoute layer, not at specific implementation nuances. Does that help? Yeah, thank you. Cool.

I think we have time for one more, if there's one last question. Right, in the back. Thanks. I was just looking at the Flagger docs, and it looks like there was already something there about Gateway API. So did you add that after you implemented the GAMMA component? And if you didn't, what does the GAMMA component actually do? Can you say what you said again? Just, it looks like the Flagger docs already said something about support for the Gateway API, and you mentioned you had to implement something in a component you called GAMMA. So what does GAMMA do? What did you implement? Oh, great question, thank you so much for that. So we didn't make any changes to Flagger as part of this, right? On our side, the Linkerd folks' side, we built an extension to do traffic splitting on HTTPRoutes instead of on the service mesh interface. GAMMA is a subgroup within the Gateway API specification that is concerned with how we take Gateway API and make it work with service meshes, and not just ingresses. So I called the little extension GAMMA because, yeah, that was it. Does that answer your question? Thank you so much.

Awesome, we are out of time. Thank you so much for a great demo and talk, and let's give them a round of applause.