So welcome to the Gateway API workshop. I'm Flynn, and this is Mike. I'm a tech evangelist with Buoyant; I work primarily with Linkerd. In a past life I was also the original author of the Emissary-ingress API gateway, and these days I also work with Gateway API and I'm a co-lead for the GAMMA initiative. Over to Mike.

I'm a product manager at Microsoft; I'm currently working on our upstream open-source service mesh team, with Istio and Gateway API. I've been involved with the Gateway API project for over two years now, and prior to that I was at HashiCorp working on Consul service mesh. So I've been around the service mesh space for quite a bit. And you also used to be a co-lead of GAMMA. Yes; I was one of the founding co-leads of GAMMA, along with John Howard from Google and Keith Mattix from Microsoft.

So what this means is that you have a couple of people up here who now work in marketing and management, trying to talk to you about technical things. Wish us luck. I should also apologize in advance if you see me making weird wincing faces or something; it's because I broke my collarbone a week and a half ago. It's not a commentary on Gateway API, or because we broke our demo. Well, we might have broken the demo anyway; we'll find out.

We also haven't really done a lot of workshops in this format before, so on the one hand, I don't actually know if it's going to take an hour and a half to get through everything. Feel free to ask questions; I believe there are a couple of microphones out in the audience if you want to, or just yell out, and we'll try to help people out. And if we finish early, then we finish early, and that'll be great, because it will mean that everything went swimmingly and it's really easy to use.

So, who are we here for?
Yeah, we're here for platform engineers, application developers, infrastructure people; really, anybody who's trying to work with applications in Kubernetes. If you are doing applications in Kubernetes, you will always have to solve the problems that Gateway API is here to solve. And if you are doing Kubernetes but not for the purpose of applications, then I'm not sure what you're doing, exactly, because nobody runs clusters just to say they're running a cluster. Everybody's trying to do something with the cluster. So that's what we're here to talk about. And one of the strengths of Gateway API is really that it's also for platform engineers. So even if you're not running the applications yourself, it lets you empower the application developers on your team, the ones you're building for, to do things autonomously, on their own. Very much so.

On the agenda today, we will talk about the ingress problem, which is separate from the Ingress resource and separate from ingress controllers. Yeah, we'll talk about how Gateway API relates to the ingress problem, and to service meshes as well. Then we're going to do a workshop to let everybody get their hands dirty, and hopefully things will work.

You will need a Kubernetes cluster. The two of us are running k3d, because the two of us really wanted to set things up ahead of time to make sure it was going to work; with conference Wi-Fi, a local development cluster is the safe bet. You don't have to use a k3d cluster; you can use your favorite cloud provider. If you don't have a favorite cloud provider and you want to use a Civo cluster, there's a link up there that will get you set up. You will need kubectl.
You'll need Helm. We are going to demo things with the bat command, which is just kind of a more polite version of less; if you don't have that and you don't want to install it, just type less instead, no big deal. And yq shows up a couple of times. Nice. I use bat instead of cat: it colorizes the output, and it also pages. It's just a nice tool; it works out really nicely. yq is a similar tool for YAML, so it can colorize, and you can filter with it in the same way that jq works with JSON.

The workshop is currently set up to download either the linkerd CLI or the istioctl CLI, depending on your choice of service mesh as you go through the workshop.

For the workshop source, I apologize for not putting a QR code in there: you can clone BuoyantIO/gateway-api-workshop. (See, this is what we moved at the last minute. There we go; like I said, gateway-api-workshop.) You can clone that, and you will see all of the resources we're using, all of the scripting we're using, and the README is actually the executable code we'll be going through. So that's a good resource to follow along with. Or you can frantically try to type things while we're doing them on the screen; it's up to you.

Whatever you are doing, though, please make sure you have an empty cluster. I don't think the workshop will necessarily break your production cluster, but I don't know. Let's not risk it. Let's just not. I threw a little kubectl cluster-info in there at the very beginning, just to make sure you're on the right one. Yeah, and if you want to use a k3d cluster for this, there's a script in the repo called create-cluster.sh that will try to do the right thing for you. Given that you are on conference Wi-Fi,
the right thing might be a little tricky to come by, but, you know, we'll see what happens. And again, if you want to use Civo, that's the repeat of the link.

Okay. Let's talk a little bit about the ingress problem. When you work with Kubernetes, you start off by getting a bunch of workloads running inside a cluster; that's what your cloud-native application is. And you will instantly run into this problem: your users are outside of the cluster, but your workloads are inside the cluster, and one of the things that clusters try really hard to do is to prevent people outside the cluster from messing with things inside the cluster. This is the ingress problem. You have to have a way to let people use the things inside your cluster, or what's the point? This is the first problem you're going to have to solve with cloud native, always.

So we tend to do this by sticking some sort of a thing right there on the edge of the cluster, whose purpose in life is to provide you some control over who can get through that boundary. We refer to this thing as (whoops) an ingress controller; or, as you will see in some of the stuff we do here, we talk about gateway controllers, because Gateway API uses Gateways, as opposed to the old Ingress resource. But that's its purpose in life: to provide you control, to route requests, and to do all of the fancy stuff that you would like to do at the edge.

The terminology can be a bit confusing, so we will try to be explicit when we're disambiguating between lowercase-i ingress, as in the functionality of bringing traffic into your cluster, versus the capital-I Ingress v1 Kubernetes API. We'll go over some history of that at the beginning, and then we'll move on and mostly be talking about Gateway API. I was going to say, I think these next couple of slides are the only places we talk about the capital-I Ingress resource.

Back in the bad old days, Kubernetes invented this thing called the Ingress resource, with the intention of using it to solve the ingress
problem, and I think we can say at this remove that it didn't work that well. It has been widely used; so yes, there's a problem that everybody has, yes, it is everywhere, and we've seen the pain points with it. Ingress is actually a really nice example of the way widespread use does not necessarily mean something is a well-designed API for that use.

Okay, I apologize, I was about to do the slide points out of order; terribly sorry. One problem that we found with the old Ingress resource was that you tended to have one, or a small number, of Ingress resources, irrespective of how many workloads and how many developers you had. So it tended to be a major point of contention, where a developer would want to go add a new workload, and they would have to go edit the Ingress resource, or they would have to ask their platform team, which carefully guards it because they don't trust any of their application teams not to screw it up. Yeah; if you have this one big centralized resource that controls a critical piece of functionality in your cluster, the ops folks tend to get very paranoid about letting the developers mess with it. And so this tended to make things slow.

There were also a lot of things Ingress could not do. For example, there was at least a long time where it couldn't configure TLS termination and such, and, you know, that's kind of basic functionality.
You need this sort of thing. So what ended up happening was that people just threw annotations at it, to deal with their particular ingress controller, and of course everybody did that in their own special-snowflake, unique way. Oh yeah; I think Nick is fond of calling it the Wild West of annotations. This is actually a big reason why Gateway API does not use annotations for things: some of the Gateway API maintainers were so badly burned by annotations on the Ingress resource.

So overall, the Ingress resource both helped to solve the ingress problem and created another raft of different problems. Go ahead, go back one second. Yeah, so another part of it was just that there were poor extensibility mechanisms, and that's why people were resorting to these annotations: there was no structured way to extend it. So that was definitely another guiding principle in the design of Gateway API: knowing that we can't build one API that will solve everything, finding better ways to enable that extensibility in a well-understood pattern. I think I might go so far as to say that the Ingress resource had no extensibility mechanism.
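To make the annotation problem concrete, here's an illustrative Ingress v1 resource (not from this workshop; the Service name and path are made up) leaning on a controller-specific annotation. The rewrite annotation shown is understood only by ingress-nginx; other controllers invented their own, incompatible spellings of the same feature:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Only ingress-nginx understands this annotation; other controllers
    # ignore it or use a different one for the same behavior.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /gui
        pathType: Prefix
        backend:
          service:
            name: example-gui   # hypothetical Service name
            port:
              number: 80
```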
I think you're right. Yeah.

So that led us to Gateway API. You're going to see this diagram everywhere people talk about Gateway API. (Why, thank you.) It is a project within Kubernetes SIG Network. We have this set of different CRDs in here that were kind of originally intended to be "Ingress version two", or at least to take a different cut at solving the ingress problem, in a way that can be more extensible and more structured.

It's not on this slide, but another thing that I think is really important about Gateway API is that it explicitly acknowledges that there are multiple roles within one application in Kubernetes: you will have people whose job it is to maintain the infrastructure on which your cluster is running, then possibly a different set of people whose job it is to keep the cluster healthy, and then a different set of people writing the applications. If you're doing this in a four-person startup, all of these roles might be filled by the same person; but if you're doing it in a large company, they are probably different people in different organizations. They all have to work together, though, and so this is another thing where Gateway API was deliberately trying to tackle that explicitly, up front.

Yeah, and the reason it's not actually "Ingress v2" is that doing a v2 of anything in Kubernetes is very, very painful. So that's why it became its own project. It is still explicitly a subproject within SIG Network, but it has moved out of tree, and there are no plans to move it in-tree in Kubernetes, because we're actually really happy with the flexibility and speed that we have moving as an out-of-tree project. And yeah, it reached GA with the 1.0 release last October. So it is at a point where the core resources are stable; there's absolutely still a lot of experimental work happening at a pretty quick pace.
So keep your eyes on this space and watch for continuous improvements to it. And yeah, we'll get into more of the design principles on the next slide.

Do we have a slide about release channels? I don't remember. I don't know. The versioning scheme is a little bit complex, but it's really given us the flexibility to offer both a stable API for consumers, and some of the freedom to experiment and iterate and really work collaboratively to design something for the future. And yeah, this is the successor to the Ingress v1 API, which is effectively frozen at this point; it's been around a long time, and new work isn't really happening on it. Everyone who's doing ingress things now is pretty much contributing to Gateway API. I think that's true. Sorry: it's definitely true that no real work is happening on the Ingress resource, and I believe it's true that everybody doing work in this space is involved with Gateway API; at least the vast majority. There are dozens of implementations at this point now. Yeah.

Since I don't think we have a slide on release channels, and since you will run into this as you look at Gateway API: there's a concept of a stable release channel and an experimental release channel. As we go through the workshop, we will be using the experimental channel, because of one feature that we'll talk about later. The thing that I think is important there is that, at this point, I'm not sure there's really a lot of significant difference between experimental and standard, except for timeouts, right?
Or sorry, except for the... yeah. The experimental channel is a superset of the standard channel. The intent is that when we're adding new APIs, new fields, new enum values, potentially even small things like that, we can do them in this separate channel and make sure that they're working as intended before we eventually promote them to standard, at which point they're part of the stable API that has those v1 guarantees. Yeah. We'll see if we can get this written down while we do this. All right.

We talked a little bit about the role-oriented design. "Standard, generic API" is an interesting way of phrasing this one, to me, but the idea here is this: if you think about working in Kubernetes right now, you learn how a Deployment works, and then you can carry that from job to job, or cluster to cluster, or role to role, because you already have this knowledge. That transferability is a very useful thing in this ecosystem. So that's part of the point of Gateway API: you should be able to learn how an HTTPRoute works, and then use it with Istio, or use it with Linkerd, or with your favorite other ingress controller or service mesh or whatever, and be able to carry that knowledge with you, rather than having to relearn every time how your particular thing at this particular job works.

There is extensibility built into Gateway API. We're not going to talk about it a lot in this workshop; we're really trying to focus on the things that everybody is doing together. But yeah, there are a few things: there are HTTPRoute filters, and there are ways for implementations to add their own functionality in a way that is predictable and integrates well with the rest of these resources. "Integrates well" and "predictable": those are important concepts here. We're going to stick in this workshop primarily to the basics of what you can actually get done today, without having to go and do crazy
stuff.

Should you start using Gateway API now? Probably. If you're starting fresh, you should definitely look really hard at it. Being able to learn something that you can carry with you as you move around the ecosystem is very useful. If you are not starting fresh and you have an existing deployment, it might be more useful to just learn how it works, rather than immediately looking to port your entire world over to Gateway API. It's still a fairly young thing in Kubernetes land; there are still things that Gateway API cannot do, and we're working on that. But yeah, it's still good to learn about it; you're going to run across it for sure. And yes, by all means, come and help out building it.

I don't remember if the "most collaborative API" claim was from looking at individual contributors, or organizations, or both, but it's really amazing how many different people from how many different places have been working on this. And it shows; it's really nice. I think it's both, if I remember the talk that Rob Scott gave last October at the US KubeCon, talking about how many people from so many different projects have come together. A lot of the core in-tree Kubernetes APIs are really built by just a handful of folks, and the way that we've been able to structure this project has been great for keeping a low barrier to entry, so that many different implementations and users can help with even little things: our conformance tests, adding implementation details for their own implementation needs, and helping shape the APIs as we're building them.

So yeah, on traffic management before Gateway API: you may be familiar with some of these resources. Istio's VirtualService; the SMI (Service Mesh Interface) spec's TrafficSplit, which Linkerd used; the Ingress v1 API; and a handful of others.
There was definitely a proliferation. This slide causes me actual pain; I'm going to skip it. With Gateway API, hopefully you should just be able to focus on the Gateway API resources. There are really two of them in the standard channel right now: the Gateway resource that you attach to, which represents your infrastructure, and then the HTTPRoute, which is how you write your routes. There's also a thing called a GatewayClass, which is not on here, but it's less important. Less important, yeah; somebody has to create it, but, you know, it may not be you.

Okay. Things you can do with Gateway API right now. Remember, it got to v1, and the question then becomes: so what can you actually do with it? We actually didn't put "you can use this to route traffic" at the start, but hey, you can use this to route traffic. You can do things like: if something comes in on /gui/, it goes to this workload, and /face/ goes to this other workload. Awesome. You can do fancier things, like traffic splitting, including things like routing for progressive delivery, or failover, or whatever. You can do dynamic stuff based on the headers of a request, or based on the HTTP method. You cannot do filtering or routing based on the body, because that would probably be silly.

We should also put on here: you can do routing for both your gateway controller and your service mesh, and if you are doing both of them, then you get things like progressive delivery anywhere in the call graph, instead of only right at the edge, where only the gateway can see.

We have a demo app that we'll be using to show this off: the Faces app.
It has a GUI that talks to a workload called face, which talks to a workload called smiley and a workload called color. The smiley workload, when things are going well, returns a grinning smiley. The color workload returns the color blue. The face workload puts the two together and then hands it back to the GUI. So when all is well, you see a grid of grinning faces on blue backgrounds. We also have two other workloads here: smiley2 returns heart-eyed smileys, and color2 returns orange.

In the middle of this picture, or rather right at the edge of the cluster, we have a gateway controller, which we configure with Gateway API. We also have a service mesh in the cluster. You get to pick between Linkerd and Istio; Flynn will be deploying Linkerd, I'll be deploying Istio, and we'll be showing that, with this mesh deployed, we have a single set of custom resources that we'll be deploying after the initial configuration. You should be able to walk through this exercise and have one configuration that works in either your Istio service mesh or your Linkerd service mesh. We were very tempted to have me do Istio and him do Linkerd, but we thought that might be a little dangerous.

Okay, let's go ahead and get this started. I'll let you start, and then I'll be ready to show the Istio side of it.

So, I have created a cluster already. When you do this, you can do it with either DEMO_MESH=linkerd or with DEMO_MESH=istio. I will do it with Linkerd; Mike will do it with Istio. Feel free to follow along as we're doing this. One of the fun things with this one is that we don't actually know if it's really going to work, because, as always, we make changes up to the last minute. So I guess we'll see.

The first thing that I did here was create a namespace for the Faces application.
I have now made sure that the linkerd CLI is installed. I'm now going to make sure that my cluster can actually work with Linkerd; oh good, it can. And we will go ahead and install Linkerd's CRDs, and then Linkerd itself. I'm also going to install Linkerd Viz, which is the visualization tool. Fancy.

And while this is going, how about we switch to the other laptop and Mike gets started with Istio? Yeah; this is taking a minute for the core control plane to become available, so we can switch over to me. All right. So I've got my cluster up and running, and now I'm going to run with DEMO_MESH=istio. We'll make sure our cluster is correct, and then get going. I'm using the istioctl that I downloaded; it's going to be version 1.20.3. Our pre-flight check looks good, and we're going to install the minimal profile. The reason for doing this, rather than the default profile, is that Istio's default profile installs an older version of the Istio ingress gateway, which uses bespoke Istio configuration, and it also creates that infrastructure automatically. Because we're going to be deploying this ourselves, we don't want that: it would create conflicts around, I forget if it's the port or the hostname that it reserves. So to avoid that conflict, we're just going to do a minimal install and get going.

See, I always thought you did that because k3d can't handle the full install, but I guess I was wrong. No, k3d is actually fine; it's just to avoid having two ingresses getting in the way of each other. Yep.

So we can flip back to this laptop for a moment. Let's take a minute. Yeah, so Viz finished installing; I'm going to run linkerd check to make sure that everything is okay. I actually did not back up at the beginning and emphasize this, and should have: I am currently running an edge release of Linkerd.
So this is the completely open source version of Linkerd, and I'm using either the latest version or one prior to the latest. And, you know, it's taking a little bit to finish this last check. There we go; we're ready to go on my side. Want to switch back to... oh, there we go. Oh no. All right, we can continue here, then.

The last thing that I did was to annotate the faces namespace to tell Linkerd: any pod that appears in this namespace, just go ahead and bring it into the mesh. This is the simple way of dealing with that on the Linkerd side. Now, while we are running this bit, we shall install the Gateway API CRDs themselves. That curl looks really funny in such a narrow window.

So, Linkerd does not ship with its own gateway controller, so I am installing Envoy Gateway to use with Linkerd. And once I start this going, I think this is a fine time to switch back to the Istio install while we wait for Envoy Gateway. All right, this should come up on the screen in a moment. Oh, well. We can switch. All right, well, let's just finish this out for a second.
The last thing happening here, after getting Envoy Gateway going, is that I will annotate the envoy-gateway namespace, so that when we go through and create the proxies, the Envoy proxies, Linkerd will automatically bring those into the mesh as well. I can't do that first, because a lot of the sidecar-based world has this issue where, if you try to run a cron job, the job gets stuck waiting on the sidecar to go away. This is a thing that's addressed by KEP-753, but KEP-753 only just landed: it's not in Kubernetes by default until Kubernetes 1.28, and while we were putting this workshop together we were on 1.27, so I couldn't use that, and this is the way around it. But yeah, good news for anyone using a sidecar-based service mesh: if you have issues with your application or your sidecar starting out of order, one before the other, there's a better way to do that in newer versions of Kubernetes now, which is nice.

Okay, let's go back to his laptop for a moment here. There we go. All right. So now we've got Istio installed, and we're going to continue; we're going to do a very similar thing.
We're going to create the namespace, and similarly we're going to set it up for service mesh injection by labeling it. And now that it's labeled, we're going to install the Gateway API CRDs, and then create our ingress gateway. As we pointed out, we're using the experimental channel of Gateway API 1.0 here.

And I guess I should mention one of the reasons that we're installing the Gateway API CRDs ourselves: currently, because they're out of tree, they are not necessarily going to be in the version of Kubernetes that you've installed. So, depending on your cloud provider or distribution, some may ship with a version of the Gateway API CRDs installed, and some cloud providers may manage those for you, keeping them updated or at a certain version depending on your install. We're explicitly installing them manually to make sure that we have the latest version. In particular, I'm not sure there are any cloud providers doing 1.0 yet. Are there? I don't think so. I don't think so.
Yeah, so 1.0 is recent enough that the cloud providers are still rolling out some of this stuff, and for the most part they're going to be using the standard channel anyway. So, the experimental channel: like I said, there's one feature in here that we need the experimental channel for, and so we need you to do that install manually.

And yeah, so this is what the definition for a Gateway API Gateway looks like. It's pretty small and minimal right here. You'll notice the only difference between what you're going to see on Flynn's laptop in a moment and mine is going to be the gatewayClassName. The reason that needs to be distinct is that it tells which gateway controller implementation should look at this resource, pay attention to it, process it, and turn it into infrastructure. In this case, with Istio, the Istio gateway controller is the one that we're referencing from the Gateway that we create; it's going to use the istio GatewayClass (we mentioned GatewayClass earlier). And yeah, the rest of it is going to be identical. So we're setting up just a single listener on port 80 with the HTTP protocol, and we're allowing routes to attach to it from any namespace. That part is important: I believe the default, if you don't have this allowedRoutes configuration, is to only allow routes to attach from the same namespace. Yeah, and that's pretty different from the way a lot of the earlier ingress controllers work. So as you're switching over to Gateway API, this is a great way to get confused. Not that I have any personal experience with creating a Gateway, forgetting to do that bit, and then sitting there tearing my hair out, going: okay, why isn't my route working?
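Reconstructed from the narration (so treat it as a sketch rather than the workshop's exact file), the Gateway being described looks roughly like this: one HTTP listener on port 80, routes allowed from any namespace, with only the gatewayClassName differing between the two laptops:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ingress
  namespace: default
spec:
  gatewayClassName: istio   # "envoy" on the Envoy Gateway side
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
        # allowedRoutes can instead scope attachment with a
        # namespace label selector, e.g. (hypothetical label):
        # from: Selector
        # selector:
        #   matchLabels:
        #     gateway-access: "true"
```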
Yeah. It's also a really critical part of that role-oriented design: you can have your gateway administrator create this Gateway, and the allowedRoutes configuration also supports things like label selectors for namespaces. So you can potentially scope it so that only a single namespace is allowed to write routes that attach to this Gateway, or you can open it up to everything, or you can have your application teams create Gateways and keep them scoped to only their own namespace. So it's really flexible, and yeah, this is one of the really nice things about this design and where we've ended up with this API.

So, you want to go ahead and apply that one, and then flip over and show the Linkerd one? Yeah; I'm going to wait a minute for this Gateway to get ready, and we'll switch back over to Flynn's laptop. Yep.

So, this is going to look remarkably like the one that he just did for Istio, except that, since I am using Envoy Gateway as my gateway controller (whoops, I pressed too hard on my mouse), I'm using this controller name. I'm also calling mine the envoy GatewayClass, and my Gateway is named ingress, just so we can do this with an even more confusing set of names. So I'm using the envoy GatewayClass, and then everything else is exactly the same as for Istio. And now I'm going to wait for my... okay, now my gateway controller is actually working.
All right. So I will start the Faces application installing, and then we'll flip back and make sure things are okay with Istio. This helm install command looks disturbing, but there's an explanation in the comment up there. Basically, normally when you install the Faces application, it's set up to deliberately be terrible, because it was originally written to show off a bunch of resilience patterns. So most of what's going on there is saying: hey, we want both color and color2, and smiley and smiley2, but only smiley2 should be bad right now. Yeah, and this is just a demo application; it's not any part of the actual stuff that you should really care about for your workload. It's just a way to demonstrate, in the GUI, the different types of patterns we're going to show, as far as traffic shifting and switching.

Yeah, I want to switch back to the other laptop, please, and we'll see how Istio is coming along. All right, so I've been watching the resources coming up, and everything is running now, so I will jump back. One of the cool things about this demo is also that, for all that Istio gets a bad rap, including from people like me, you can see that it's taking us about the same amount of time to do both of these. Honestly, the limiting factor in both cases has been pulling the images down. It's kind of fun to see that, for all that we've ribbed each other, a lot of this stuff really does work well.

Okay, let's go back to this laptop for a moment, please. Okay, so, like it says, we should be able to hit reload in the browsers and see good things. Or then again, maybe not. This is actually not because the application is broken; this is because of the bit that we did not do. Any guesses? Anybody? We've created a GatewayClass, we created a Gateway; any suggestions for what we have not yet done? Excellent. Thank you.
Yes: we did not yet actually route any traffic here. We also did not break our demo, as I was scared about when I initially walked through this. I've got to confess, I tweaked the demo to do this and I forgot to tell Mike about it, so he was very concerned that the world had suddenly come to an end. Sorry.

So this is our first HTTPRoute, where, for our parentRef... whoops. Actually, let me back up. This is an HTTPRoute; it's in gateway.networking.k8s.io/v1. It is an HTTPRoute; I'm going to name it faces-gui-route, in case I need to deal with it later, and it's in the faces namespace. This is why we told the Gateway, hey, you should allow routes from any namespace: it's very, very convenient in most cases to allow application developers to put their routes in with their application, because that way your RBAC is simpler, people can more easily find the routes, et cetera, et cetera.

If you have to deploy your routes in a different namespace than the actual backends that you're routing to, that's also possible with Gateway API. There's a little bit more permissioning structure to allow that, to make sure that it's safe, so that you can't just have anybody directing traffic to anything else in your mesh. You'll want to look at the Gateway API ReferenceGrant resource if you need to do that kind of topology. Right, and that is a kind of common thing if you have a single platform ops team that's managing your gateway and all of the routes for it, and really wants that tight control over delegating to the application teams, rather than letting them manage their own routes. On the other hand, if you can let the application teams manage their own routes, that's often a better way to do it, because it permits faster development.
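A hedged sketch of what that cross-namespace topology's ReferenceGrant might look like; the namespace names here are hypothetical. The grant lives in the namespace being referenced, and says who may reference what:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-services
  namespace: faces          # where the referenced Services live
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: routes       # hypothetical namespace holding the routes
  to:
  - group: ""               # core API group, i.e. Services
    kind: Service
```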
So HTTPRoutes are always associated with some parent, or possibly multiple parents, although we're not going to show that one. The parent here is our ingress Gateway in the default namespace. We are going to match any path with a prefix of /gui/, which will get routed to the Service called faces-gui on port 80, and we are going to use a filter to rewrite the URL to just /. The reason for that is that it's what the application wants: the GUI application expects that it will be seeing rewritten paths and nothing else. So this is how we can expose it to the public internet at a different path than the one the application might expect. This is kind of table stakes for any ingress controller: you have to be able to do path rewrites, you have to be able to do path matching, and we're going to demonstrate that. So let's do that. If I come back to the web browser, it now loads the GUI, and we get grimacing faces on purple backgrounds, because what the GUI is doing for each cell is trying to fetch a different path, /face/, and we have not routed that one yet. So let's go through and put in a route for that. The face route is pretty much the same thing, except that it has a different prefix and it uses a different backendRef. So the /gui/ prefix is getting routed to the GUI workload, and the /face/ prefix is getting routed to the face workload, but they're basically the same idea. And if I do this, then things start working and we have grinning faces on blue backgrounds. And now, if we can jump over to my laptop, I will walk through the same thing with Istio, and it will look exactly the same. That is the hope. So we're just going to wait a minute and see if we can get the laptop switched over to my screen. All right, so as you can see, we're using the same HTTPRoute, the same file. It's literally the same file; it's not just that it looks similar.
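A route along the lines just described might look like the following sketch. The names (faces-gui-route, the ingress Gateway in the default namespace, the faces-gui Service) follow the demo narration, but the exact file isn't shown here, so treat this as a reconstruction rather than the workshop's own manifest:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: faces-gui-route
  namespace: faces
spec:
  parentRefs:
    - name: ingress            # the Gateway, assumed to be in "default"
      namespace: default
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /gui/
      filters:
        - type: URLRewrite     # strip the /gui/ prefix before forwarding
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: faces-gui
          port: 80
```

The face route described next would be structurally identical, with /face/ as the prefix and the face Service as the backendRef.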
It's literally the same file. And, yeah, so we will now kubectl apply it, jump over to our browser, refresh, and there we go. There's our first one: we've exposed the GUI, but the GUI is trying to go back through the ingress to reach that face app, and it's still not able to do that. So, similarly, we're going to deploy the same face route to expose that to the public internet as well; again, it's the identical file. We'll kubectl apply it, and if we jump back over to the browser... It feels a little silly to be showing, hey, we did the same thing and we got the same result. But remember that the infrastructure underneath this is completely different between these two clusters. So that part's kind of cool. We have Linkerd, with its Rust-based proxies, with Envoy Gateway, a completely separate product, in front serving as the ingress; and then Istio, with its Envoy sidecars and its integrated ingress that's part of the Istio product. So it's really kind of cool just how flexible this is: you can use this same API to control these completely different implementations. It's a little depressing that we're still relying on Envoy in both of them. We've got to fix that. Yeah, we'll get around to that. Okay, how about we come back to this laptop again, please? You look like you're live. Okay. So, what else can we do here?
Well, another table-stakes sort of thing for most gateway controllers is the whole canary concept, this A/B thing. So let's start by showing that. Depending on how you look at it, you can see this as a canary test or as progressive delivery. What we're going to do here is take a certain amount of the traffic going from the face workload to the color workload and shift it over to color2, so that instead of getting blue backgrounds, the requests that get shifted over to color2 should show orange backgrounds. We will do that by applying another HTTPRoute, just configured differently. There are a couple of substantial differences with this one. We should jump back over to the slides for a minute, just so we can show, on the mesh diagram, what we're going to be doing here. We can do that; this is why there are two of us up here. So, yeah, Gateway API: we just showed how to do it for ingress traffic. That's what both of those routes were doing. The GUI was even expecting a public address, so even though they're both inside the same cluster, it was actually going out of the cluster and reaching back through the public address to reach that face service. So both of those were ingress routes. Well, the GUI is the one trying to talk to the face workload, yes, so that request is coming from outside the cluster, so it has to be the gateway controller doing the routing. Yes. So, as a little bit of background: the GAMMA initiative was a project that we started in 2022 to figure out whether it might be possible to use Gateway API for service mesh implementations. This is not the first time we've tried having a common API for meshes, because, in spite of how many different service mesh offerings there are, they really solve very similar problems.
They have different ways of doing them, different pros and cons, different functionality in some cases, but at the core they do a lot of the same things. An earlier iteration of this that you may have heard of is the Service Mesh Interface, or SMI, spec. That's where Flynn and I met, actually, through some of that work. And that's also where I met Keith Mattix, and some of the discussions started in that project about, hey, this Gateway API thing has a lot of traction; there are a lot of people working with it, and it actually looks really similar to the thing we're doing with our TrafficSplit. Maybe we can use that. So, yeah, we reached out to John Howard from Google's Istio team, reached out to Flynn at Linkerd, and started to think about: hey, this seems like a thing that might be a viable solution. And a whole lot of other implementations were betting on this too, and that's really what the critical mass was. Because there were so many vendors invested in it, it felt like there was a high probability of it going somewhere, of it actually being a thing we could count on to be the future of how you configure your service mesh. Yeah, so we saw that you could hopefully make things better for users by adopting these common APIs. And, yeah, Linkerd started using Gateway API for mesh traffic routing in 2.13, one of the relatively recent releases. It's funny that that's relatively recent and it's also, I want to say, a year old maybe. Yeah, that's about right. And then Istio started using Gateway API for ingress much earlier on, in 1.9, so that's been around quite a while, but only added mesh traffic routing much more recently, in 1.19. Another thing I want to emphasize, one of the really interesting things about the GAMMA initiative, is that it's very easy for people to say: oh hey, meshes are routing HTTP, we should just go ahead and do that. But it turns out to be complex.
So it was a much more challenging thing to do, partly because it ends up being more challenging than we thought it was. There are a bunch of things that you can't actually do yet for service meshes. Some of them are complex things, but some are fairly basic things, like retries, which you would think would be really easy. It turns out that people handle retries and timeouts and things like that very, very differently across implementations of gateway controllers and service meshes. So there's a lot of work going on, very actively, to improve this situation. It's also one of the reasons why it's really great having so many different implementations active in Gateway API: it would not be as successful as it is today without the feedback and perspectives of people who are approaching these same problems differently, and it pushes us to make more extensible solutions that allow for different ways of doing things. Linkerd and Istio have different ways of handling retries. I think Envoy recently added support for doing retries in a way very similar to Linkerd's method, so that's something where we see that, hey, if we can add this in Gateway API, we know there are multiple implementations that could support it. That's very critical for things going into the upstream API: making sure it's useful to more than one implementation, and not just something that should be one of those implementation-specific options. But, yeah, we see a path forward for this kind of stuff. Yeah, it's pretty exciting. And timeouts were one of the things that, until very recently, were not available, but there's now a GEP, a Gateway Enhancement Proposal, that merged a few months back, and support is now available in, I think, the latest Istio major release and one of the latest Linkerd edge releases.
Yeah, that's from, like, six months ago. Timeouts turned out to be ridiculously hard, looking at the different implementations. But the core thing here, the core thing for GAMMA, is the bits where, for ingress, you use a parentRef referring back to your Gateway; for the mesh, you use a parentRef that refers to a Service. And in both cases you use backendRefs to say where you want the traffic to go. I am going to skip these slides... oh, go back to that one; that's the important one. Yeah, this is a demonstration of what we're about to do differently with this canary deployment: some of the traffic will get routed over by the mesh, from face to color2, instead of going to color, and the rest of it will stay with the color workload. Okay, there we go. So, yeah, here we have, like we were saying, a parentRef that is a Service. We're talking about the color Service; you need to specify which port it's on. And then 90 percent of our traffic will continue on to the color backend, and 10 percent will go to color2. Now, this may look like there's a circular route here, because we're saying: take 90 percent of the traffic for color and send it to color. Wouldn't it just get re-split, over and over again? This is one of the things that's complex with service meshes. When we talk about a parentRef, we're talking about the part of a Service that allocates a ClusterIP and a DNS name; when we talk about a backendRef, we're talking about the part of a Service that is a bag of endpoints. The two are not the same thing, so there is no circle here. This is one of the more confusing things about Gateway API for mesh.
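A mesh canary route like the one being described might look like this sketch. The route name and port are assumptions; the parentRef/backendRef structure is the point:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: color-canary     # name assumed; the workshop file isn't reproduced here
  namespace: faces
spec:
  parentRefs:
    - group: ""          # core group: the parent is a Service, not a Gateway
      kind: Service
      name: color        # traffic addressed to the color ClusterIP / DNS name
      port: 80
  rules:
    - backendRefs:
        - name: color    # 90% stays with the original workload's endpoints
          port: 80
          weight: 90
        - name: color2   # 10% is shifted to the canary
          port: 80
          weight: 10
```

The parentRef matches traffic sent to the Service's front end (its ClusterIP and DNS name), while each backendRef selects a bag of endpoints, which is why the color-to-color line is not a cycle.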
I will leave it as an exercise to the reader to determine how many cells should be orange if 10 percent of the traffic is getting routed, but you can see that some of it is. Let me do one more step, and then we'll flip over to Istio. If we change the weights, we would expect the fraction of traffic that's orange to change. So in this case, instead of a 90/10 split, we're going to do a 50/50 split, and when I apply that we should see a lot more orange over there instantly. And yeah, now let's flip over to Istio, and we'll see it happening over there. While we wait for the video to switch, I'll point out that there is no requirement that the weights add up to 100 in the HTTPRoute. The important thing is just the ratio. So you can do a weight of one and a weight of three, and that works out to 25 percent and 75 percent, for example. Doing percentages just tends to be a little bit easier for me. Well, it's been easier to explain, in some ways. Yeah, can we switch to the other laptop, please? Thank you. All right, so as you can see, mine are currently still all blue, and I'm about to apply the exact same resource, the color-canary.yaml. I did make one tiny difference here as we were working on this. Oh, that's right, I forgot to pull that. I need to go back and double-check whether it's something where Istio and Linkerd have slightly different support, whether it's something specific to Linkerd, or whether it's something standard that I should go add to Istio. But in the group that you'll see here, because Service is in the core Kubernetes API group, I have an empty string for the group. It's a little bit awkward. On Linkerd, you will have seen that Flynn had the word core there. An empty string works for both, so that's typically what you'll be writing. I need to double-check whether it's actually okay to specify core, but that didn't work for me.
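To make the ratio point concrete, this backendRefs fragment (a sketch, not a file from the workshop) produces exactly the same split as weights of 25 and 75:

```yaml
# Only the ratio of the weights matters, not their sum.
backendRefs:
  - name: color
    port: 80
    weight: 1    # 1 / (1 + 3) = 25% of the traffic
  - name: color2
    port: 80
    weight: 3    # 3 / (1 + 3) = 75% of the traffic
```

Weights of 25/75, 1/3, or 100/300 all describe the same distribution; percentages are just easier for humans to read at a glance.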
So mine says empty string, but an empty string works for both. I just hand-edited my version to use the empty string to verify that it worked; it does work with Linkerd. And yeah, now that makes me feel much better. Now that we have that 10 percent, you can see a few orange cells coming through. And, same thing, we're going to switch to 50/50 and apply. And you have 50/50 now. Why don't you go ahead and do the next step as well? The next step is to fully switch it all the way over, 100 percent, to our new orange service. This is a thing I wanted to show in particular, because I just said the important thing was the ratio, and then we just demonstrated a route where one weight is zero, so the ratio gets to be a weird concept. Zero is kind of a special case that says: don't send any traffic to this backend. On the other hand, you can't have two backendRefs that both have a weight of zero; that will not give you a 50/50 split. I don't know exactly what it does; I forget whether a controller would reject it. We'll try it later. But it's handy to be able to use a weight of zero for this, because if you're doing progressive delivery, it makes it easy to go all the way over and then decide: okay, things are great, now I can go ahead and clean up and use the new version. Which we will let Mike demonstrate. All right, so, jumping back to here: the first thing I'm going to do is delete the old deployment, now that we've successfully done our switch-over. We want to make sure we don't have this awkward route in place forever, redirecting things that are going to the color workload over to color2. Ideally, we get back to having our single workload just named color.
So, from a functional point of view, having color traffic always routed to color2 forever would work fine. But operationally it will get you into some troubling situations, because six weeks down the road people will forget about it, and then somebody will do something with the color workload and go: oh my god, color is not working, I need to restart it. And then they restart it and nothing happens. So yeah, it's definitely best practice, as you're doing these kinds of transitions, to clean up after yourself. So what we're doing now is looking at this color-replacement.yaml, which we're going to deploy. We're deploying the color2 code, but back under the original name. Then we'll be able to get rid of that route, and you'll see that traffic just drops back to the original Service, which is now running the new actual workload. The only difference in the YAML is the environment variable that sets the color, so it's not a very profound change for this particular application. So, as you see, I deleted the original workload, and everything is still going through to that new one currently. I'll apply the YAML for that color deployment again and roll it back out. It's rolled out, and nothing is going to go to it yet. But this is the change that should be invisible: we're going to delete that HTTPRoute, removing the redirect now that we've successfully deployed back under the original name for this Service. So we delete the HTTPRoute, we jump over here, and there's no change visible to users. It's always nice when things work. Okay, and I'll hand it back over to Flynn to talk about rollbacks. Can we switch over here? Rollbacks are, I don't know...
They're a little underwhelming to watch, but they're an important thing to be able to do. Okay, so what I'm going to do now is a 50/50 split for the smiley workload, between smiley and smiley2. When I do this, what I should see is that half of the cells will be grinning smileys and the other half will be heart-eyed smileys. But if you were paying attention earlier when we installed this thing, you'll remember me saying that I deliberately configured smiley2 to be bad. This is exactly the same resource we showed earlier for the 50/50 split, just with smiley instead of color. So when I apply it, I do see some heart-eyed smileys, but I also see some cursing faces, because that's where the smiley2 workload fails. This is something that will happen. Yeah, this is the reality: you deploy a new thing, it has a bug. Sure, you think your new feature launched, but maybe you need to pull it back because something's not right. Fortunately, this is not a difficult thing to do. We just delete the HTTPRoute and pretend it never happened. There you go. That's it. We've just deployed a bad service and then rolled back, and it was "horribly" difficult. Right. And that's just because we're not deploying directly over the old deployment: we're creating a new deployment, and we're creating this HTTPRoute to shift only a small percentage of traffic over, so we can test it, hopefully catch those errors early, and be able to roll back before it gets out to all of our users. So at this point, the next couple of steps in the demo are prepping for the next thing I'm going to do, which is to make smiley2 not error all the time. But let's go over to the other laptop to make sure this works for Istio while I'm doing that. So again, the same route.
Yes, the same route; applying it, and let's jump into the browser to check it out. Same errors. It really is nice when we get identical behavior in different environments. And we're going to do the exact same thing to roll back: we just delete that HTTPRoute, and with it gone, our errors start disappearing. Cool. All right, can we come back to this laptop again, please? Merci. So another thing we can do: you remember that we were talking about being able to do routing based on HTTP headers, which is sort of the basis of an A/B test. Instead of just picking some random fraction of your traffic to divert to the other workload, you pick it based on specific values, like the user's name or whatever. So I'm going to do an A/B test here based on the X-Faces-User header, specifically with traffic going to smiley. If the request has a header that says X-Faces-User: testuser, then take it to smiley2. And if you look carefully at this resource, you'll see that this is all one stanza.
So this backendRef is a peer of this matches clause, while this other backendRef does not have a matches clause at all. That one is the default case: if we have the testuser header, we go to smiley2; if we don't, we go to smiley. And this is something you can have your A/B test infrastructure do, where you have it inject headers like this into your requests. Yeah, LaunchDarkly and similar things; there are a bunch of them. This is also table stakes: you need to be able to do this to have a functioning API for solving the ingress problem. What I have here is one browser that has no user associated with it, and in my other browser it says user testuser, because I'm just using ModHeader to inject X-Faces-User: testuser into all the requests. So when I do this, we should see that bottom browser window instantly switch over to heart-eyed smileys, while the upper browser stays the way it was. And let's switch over, and you can see whether you can do the same thing as well. I don't think I set up my browser to do the header injection, but I'm going to see if I can do it manually. So, yep, there we go: deploying the same route, kubectl apply, and you can see that nothing is going to heart-eyed smileys currently. What is the best way to change that user to testuser? ModHeader. Or you could just show it with curl, come to think of it. Hmm. All right, I'm going to skip over this for now rather than spend a few minutes trying to get something working. Sorry for forgetting to check that with you. Yeah, it's all right; Mike's doing great. Thanks for throwing me under the bus, Flynn.
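The header-based A/B route being described might be sketched like this; names follow the demo narration, but the file itself isn't reproduced here:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: smiley-ab        # name assumed for illustration
  namespace: faces
spec:
  parentRefs:
    - group: ""
      kind: Service
      name: smiley
      port: 80
  rules:
    - matches:
        - headers:
            - name: X-Faces-User
              value: testuser      # exact match is the default match type
      backendRefs:
        - name: smiley2            # header present: go to the B variant
          port: 80
    - backendRefs:                 # no matches clause: the default case
        - name: smiley
          port: 80
```

For manual testing without a browser extension, something along the lines of `curl -H "X-Faces-User: testuser" http://<gateway-address>/face/` (the path here is a guess based on the demo's routes) would exercise the first rule.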
I really appreciate that. But as you can see, without that header applied, traffic is still going directly to the regular service. Okay, how about we switch back to the other laptop, then, and we'll continue a bit. All right, so this is another one where we could modify this route. If we decide that our user base just really loves the heart-eyed smileys, and we want to do everything with heart eyes, we could do that by inserting another matches clause into the previous route, but it's a lot easier just to delete that backendRef entirely. This is now a route that will unconditionally take all the traffic that was going to the smiley Service and send it to the smiley2 Service. So when I apply this one, everybody gets heart eyes. Now, we mentioned earlier that this is not a state you should leave your cluster in, especially because right now we have a smiley workload that's getting no traffic whatsoever, and that will be very confusing operationally. So I'm just going to delete the A/B test route, which switches everybody back to normal grinning smileys, but, you know, it cleans up for this next little chunk of the demo. Let's flip back over to me, so I can show the same thing. We're going to be deleting that other backendRef, removing that match, and making sure that everything goes over to smiley2 instead. So that everything is still going to smiley... actually, you could just do that one. Yeah, it's fine; you won't notice anything. So let's go ahead and talk quickly about timeouts. Timeouts are a little bit interesting, because if you look over at the browsers, you'll see some of these cells kind of fade away, and that's because the call took too long. So we can use timeouts to make this a better experience for the user. Demoing timeouts is a little bit weird, because timeouts are really about returning agency to the client.
They actually tend to increase load on your workloads, which can be interesting, and visually it can be a little weird to make sure we demo what's going on properly. So what we're going to do is start down in the call graph. We're going to add a timeout to the color service first, and what you'll see in the GUI, if the timeout fires, is pink backgrounds instead of orange or blue. So if I do... oh yeah, and this is a very important point. I'm ashamed to admit it, but Linkerd is not all the way up to Gateway API 1.0 yet, so this is the only place in this demo where we have to do things differently between Istio and Linkerd. To do this with Linkerd, I have to use a different API group; we're getting this fixed. So in the Linkerd version, you'll see that I'm now using this policy.linkerd.io resource. It's a long story, but this was the way Linkerd started experimenting with Gateway API before it was possible for a mesh to be conformant with Gateway API, so this was the way we kind of had to do it. Same thing with the parentRef. Here we're adding a timeouts clause to the route rule. We're saying: overall, if the request as a whole takes more than 300 milliseconds, cut it short and return a timeout. And when I do this, we should start seeing some pink cells showing up. It's a little trickier to see with the orange cells than I would have thought, but that's okay. We can do the same thing with smiley: again, a 300-millisecond timeout, and for smiley we'll get a sleeping face whenever it times out. So you can already see, first, that there are fewer fading cells now. What happens is that the face workload has to talk to the smiley workload and the color workload, so it will always take longer than either of them. But now that we're cutting those short, deeper in the call graph, the front service actually behaves differently.
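With the standard Gateway API, the timeout being described sits on the route rule. A sketch of the color timeout (names assumed from the narration; the Linkerd variant in the demo differs only in using the policy.linkerd.io API group instead):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: color-timeout    # name assumed for illustration
  namespace: faces
spec:
  parentRefs:
    - group: ""
      kind: Service
      name: color
      port: 80
  rules:
    - timeouts:
        request: 300ms   # fail the request if it takes longer than this overall
      backendRefs:
        - name: color
          port: 80
```

The smiley timeout described next would be the same shape, with the smiley Service as both parentRef and backendRef.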
So we see fewer fading-out cells, but we still see some. Let's flip over to this laptop. Uh-oh, Mike is editing something; that makes me nervous. I'm just making sure the API group and version are correct for the Istio one. That's a good thing to check, yeah. I was the one who added the timeout part to the demo, so it's entirely possible that I copied the file and then forgot to change the API group. So, yeah, we can just switch over to my laptop again. I'll be able to show the one difference in this file that, hopefully, in a future release of Linkerd, will get back over to the official Gateway API. This is not a permanent thing for Linkerd, definitely. Could we get the other laptop, please? Thank you. All right. So if you can see just the top line of this file here: instead of the special Linkerd API group and version, we have the official gateway.networking.k8s.io/v1. And then I'm going to try to deploy this smiley timeout, which is the same as the other one; I just had to change it. The color timeout is the same, just with gateway.networking.k8s.io, and I'll jump over to deploy it. So the color timeout, on Istio, looks like that, and we should start seeing some pink cells appearing. Yep, and there we go. So now we'll deploy the next one to add the timeout handling on the smiley workload, which, again, that API group is the only reason we have different files. This will reconcile in the very near future; yeah, hopefully not too "eventually". And now you start seeing those smileys with the snoozy icon, because they're taking a nap. That service is basically being cut off by the face service, which says: yep, sorry, you took too long; I'll just return what I have and ship it off to the end user, rather than making them wait for potentially ten seconds or something like that.
Yep. So why don't you go ahead with your laptop here. The next thing we're going to do is add that timeout for the face workload itself, but this one we're going to do with the gateway controller, not with the service mesh. If you look at... I think it's going to show you the next resource. Oh, right, there's some explanation in there, because the GUI gets to make kind of interesting choices about what it wants to do on a timeout. What the GUI currently does, if the face service itself times out, is just show the old data, but it increments a counter in the corner so you can see how often it happens, and if it keeps happening over and over again, we fade the whole cell out as a timed-out cell. So hopefully we'll see some of that. Let's take a look at this resource. You'll notice the parentRef is now back to the ingress Gateway, because we're asking the gateway controller to take action: the gateway controller is the one mediating the connection into the face workload. You could do this with the mesh as well, but the ingress controller is often a more graceful place for it. And we're back on the common API version as well, because here we're applying this to Envoy Gateway or Istio's ingress. Right, and we're actually editing the existing face route, because we already have a route that does this; we just want to modify it slightly by adding a timeouts clause to it. You'll also note that the timeout here is deliberately less than the 300-millisecond timeout on either of the backend workloads, mostly because it makes it easier to see something different. All right, so I'm going to apply this, and then hopefully we should start seeing a counter up here. All right, I'm going to zoom in a little so you can see that more easily. You'll also notice that we see a lot fewer of the pink backgrounds or sleeping faces, because now the GUI is protecting us from that with the shorter
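The modified face route might look like the sketch below. The route structure mirrors the earlier ingress routes; the 200ms value is an assumption, since the talk only says the front timeout is less than the backends' 300ms:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: faces-face-route   # name assumed for illustration
  namespace: faces
spec:
  parentRefs:
    - name: ingress        # back to the Gateway: the gateway controller enforces this
      namespace: default
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /face/
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      timeouts:
        request: 200ms     # assumed value; deliberately shorter than the 300ms backends
      backendRefs:
        - name: face
          port: 80
```

Layering a shorter timeout at the edge over the mesh-level timeouts is what lets the GUI recover quickly while the deeper timeouts keep load off the slow backends.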
timeout all the way at the front. If we let this run long enough... yeah, there we go. We got a cell that faded out a little bit, or that turned less opaque; there's got to be a better word than "fade" for that. So, yeah, this is how you can have a more robust timeout architecture within your mesh, so that you don't end up with extraneous long-lived requests putting inordinate load on your services inside. You're able to manage the health of the services inside your mesh more predictably, while still having that control at the external layer to protect the user from seeing all of the things you're doing inside your microservice architecture. So let's come back to the other laptop, please. That's the end of the workshop stuff; let's talk a little bit about gotchas. Very important: when things are not working, the first thing to look at is the status on your various Gateway API resources, especially HTTPRoutes. One of the most common failures you run across is that you create an HTTPRoute, but the parentRef is wrong, or the permissions are wrong, so no gateway controller, no implementation of Gateway API, actually claims it and does anything with it. And you will see that in the status. Really, you'll see that there's nothing in the status, and that tells you that nothing claimed it. This is exactly the problem I ran into when I had that wrong API group where it said core. I did exactly this: I did kubectl get httproute with the name of the route, in the faces namespace, and checked the status. There was nothing there, because the Istio Gateway API controller didn't understand what the group core was. Once I switched it to the empty string, I was able to check the status of that route and see: oh, it was recognized.
It was populated: the controller will list something that says Accepted: true, it'll say Conflicted: false, and a handful of other things to let you know that the route was recognized, applied, understood, and then actually programmed into the underlying infrastructure. I've had the opposite experience with Linkerd. If you don't specify the group, the group empty string or group core for us, it usually assumes that the group is gateway.networking.k8s.io, which, if you're trying to use a Service, is wrong. Yeah, same thing: what is going on, check the status, oh, okay, now I get it. And there are also links there to the API specification reference on the Gateway API website. You can look for GatewayConditionType, GatewayConditionReason, RouteConditionType, and RouteConditionReason, and you'll find a detailed explanation of each of the possible standard values. Each implementation can also add some of their own, if they have unique things they're able to express. Right, and as a human, usually you can just read it and it makes sense, which is kind of nice. You remember I mentioned earlier that there's a distinction between the parentRef and the backendRef in how a Service gets interpreted. This is kind of important. You can't do things like this in Gateway API: you cannot say that requests going to foo might go to foo or bar, but then requests going to bar will get split between bar and baz. What ends up happening is that traffic going directly to bar will be split between bar and baz, but traffic going to foo will never be split once it reaches bar. The reason is that the parentRef is only interpreted as what we call the front end of the Service, the thing with the DNS name and a ClusterIP. So if you address traffic directly to one of the endpoint IPs, we're not going to split it and do anything funky like that. The end
result is that if you try to set up an architecture like this, you are actually setting up something that looks like this. And this is not a bug so much as it is a thing to be aware of. I don't see this changing anytime soon, because if it does change, oh my god, things get complicated. It really is there to protect you from really nasty cyclical things, where you start losing a complete understanding of where your traffic is getting shipped off to. It's fun to go down the rabbit hole, but it's a bad idea, right? Yes.

Oh, can we jump back over to my laptop for a second? I just want to show an example of what that status is going to look like, the one we were talking about. Yeah.

You know what, actually, let me talk about this slide really quickly, because we're at the end and we should be opening up for questions anyway. So we talked about using Gateway API for north-south and east-west routing in the same cluster at the same time, as we mentioned earlier. Yeah, you should definitely consider Gateway API for new stuff, and you should learn it even if you're not wanting to use it for new stuff right now. There are pretty easy ways to use tools like Argo and Flagger to automate Gateway API for progressive delivery, with all of their funky stuff. Linkerd works well with those; Istio works well with those. I don't think I've actually personally tried it with Envoy Gateway, but hey, it's Gateway API, it'll work. Sure.
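The non-chaining behavior described above can be sketched as two mesh-style routes. The service names foo, bar, and baz come straight from the slide and are purely illustrative:

```yaml
# Route 1: traffic addressed to foo's cluster IP is split between foo and bar.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo-split
spec:
  parentRefs:
    - group: ""        # a Service parentRef: the GAMMA (mesh) attachment pattern
      kind: Service
      name: foo
  rules:
    - backendRefs:
        - { group: "", kind: Service, name: foo, port: 80, weight: 50 }
        - { group: "", kind: Service, name: bar, port: 80, weight: 50 }
---
# Route 2: only traffic addressed directly to bar's cluster IP is split here.
# Traffic that reached bar via Route 1 arrives at bar's endpoint IPs, so this
# route never applies to it: the splits do not chain.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bar-split
spec:
  parentRefs:
    - group: ""
      kind: Service
      name: bar
  rules:
    - backendRefs:
        - { group: "", kind: Service, name: bar, port: 80, weight: 50 }
        - { group: "", kind: Service, name: baz, port: 80, weight: 50 }
```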
And you can check on the SIG Network calendar for the Gateway API and GAMMA meetings, both of which tend to alternate time zones to try to make it convenient wherever you are in the world.

And with that, yes, let's go back to this laptop so that we can take a look at some status. Yeah, so this is just an example. I did kubectl get httproute in the faces namespace, and the face route is the one that I'm looking at as an example, piping it to yq. We talked about that in the beginning: it's just like jq, but for YAML, to be able to colorize and filter, so I'm only showing the status field.

And you'll see the parentRef at the bottom there is saying that this HTTPRoute is attached to, or is targeting, the Istio ingress gateway. Well, it's targeting, sorry, a Gateway by the name "ingress", which happens to be an implementation by Istio in this case. If Linkerd did the same thing, this would actually look the same. It would have the same name, but it would be a very different gateway controller.
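A status block along those lines looks roughly like this. This is an abbreviated, hand-edited sketch rather than real demo output, so names, timestamps, and the controllerName value are illustrative:

```yaml
# kubectl get httproute face-route -n faces -o yaml | yq .status
parents:
  - parentRef:
      group: gateway.networking.k8s.io
      kind: Gateway
      name: ingress
    controllerName: istio.io/gateway-controller   # assumed value; check your install
    conditions:
      - type: Accepted
        status: "True"
        reason: Accepted
        lastTransitionTime: "2023-11-06T16:00:00Z"
      - type: ResolvedRefs
        status: "True"
        reason: ResolvedRefs
        lastTransitionTime: "2023-11-06T16:00:00Z"
```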
Yes. So yeah, the controller name, though, you'll see is the Istio gateway controller. That's how you can tell that this is being recognized, parsed, and reported by my controller for Istio, and not the Linkerd controller.

And then in the list of conditions there, you'll see when it was updated, you'll see the type of the condition, and then the status, and then the reason. For the two basic ones: Accepted true means the controller says that it's syntactically valid and is trying to apply it. And ResolvedRefs means that the services you defined in the backendRefs are real things that it understands. So if you make a typo there, you might see something like ResolvedRefs false, and that's a great place to check if traffic is not going where you expect it to go.

And that's basically it. So if there are any questions, now is a great time. If there are no questions, I'll be very surprised. Yeah, go ahead.

So the question is: are there plans to integrate egress functionality into Gateway API, and if so, when? Is that correct? Yeah, yes. That's a really interesting question.
It's definitely something that has come up. There are a few vendors that are interested in working on egress functionality. There was an attempt at writing a GEP to propose some of this. I think that we ended up holding it for now, because we couldn't find enough common ground between different implementations to really understand what the best way to do this was going to be. One of the real challenges with Gateway API is that as you start looking at different functionality, you realize that the different implementations can do very, very different things at a details level, and yeah, life can be really complex with that. And egress in particular has maybe even more different use cases.

So, like, when I'm looking at egress from the perspective of a mesh like Istio, I want an allow list of specific domains, or something like that, that my traffic is allowed to exit to. When you have, say, a telco provider, or somebody building gateways for them, working with egress, what they're thinking of is: I want to make sure my traffic exits on a stable IP from the tower. That is a different concern, even though they're both in the realm of egress gateways.

So yeah, one of the really entertaining things that happened was when some of the folks dealing with 5G telephony started coming to Gateway API meetings, and all of us who were not in 5G telephony went: oh my god, you guys are... wait, wait, what? Explain that again. It's really exciting; there's a lot of potential, maybe long term, but it's definitely not something that's on the immediate roadmap. There are also other efforts to deal with egress traffic underway in network policy, and we want to make sure that we're not going off to do something terribly different from those. So yeah, making sure that we're trying to have those conversations and reconcile our approaches across the Kubernetes landscape with other SIGs is going to be an important part of how we approach egress as well.

Go ahead. Yeah. Thank you, the workshop was very fun.
Thank you. I've got a question about how the priorities of matchers work, because in essence HTTPRoutes are decentralized: you can have multiple different objects matching on slightly different things, and it is reasonable to assume that the same matcher appears in different HTTPRoutes. Which one will actually take precedence?

The short answer to that question is: you need to look at the Gateway API documentation, and there's a whole section on exactly that question. Very broadly speaking, more specific things win over less specific things, but there's stuff to read there. For at least a good chunk of that, there are conformance tests, so most implementations should behave the same way.

That's where you'll also potentially see things like: if your routes are targeting completely different HTTP paths and they're able to be applied simultaneously without conflicting with each other, then that might be fine and that might work. If they're trying to do two different things, like attaching a GRPCRoute and an HTTPRoute to the same listener, you might get an error like a Conflicted condition raised, and the implementation will say: sorry, you can't do that.

Can I ask another quick question? What happens in a situation where the backend is not healthy from the point of view of Kubernetes, so the service itself doesn't have any endpoints backing it? If the service exists but it has no workloads, I think traffic to the route gets 500s, but I would have to go look that one up exactly. And if there are several services backing it, would automagic load balancing happen to fail over? That's actually going to depend on the implementation and how it handles that one.

We are being told that we are out of time, so I'm about to apologize to the folks whose questions we didn't get to. We will both be up here, so absolutely come on up and ask your questions anyway. And you can find us in these places, or the CNCF Slack, or via email.
So thank you very much. Thank you, everyone.