Hi, everyone. Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm Annie Telasto. I'm a CNCF Ambassador, I lead Marketing and Vision, and I'll be one of your hosts today. We have Shariah here with us hosting as well. Hi, Shariah. There we go. He's going to be handling the Q&A for us today. Every week we bring a new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us live every Wednesday, and sometimes on a few other days of the week as well. This week we have Flynn here with us to talk about circuit breakers and dynamic request routing with Linkerd 2.13. Very excited for this. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. So please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and presenters. With that said, I'll hand it over to Flynn to kick off today's presentation.

Thank you. Happy to be here as always. Let's turn the screen share on if you could, and we'll go from there. There we go, yes. We're talking about circuit breaking and dynamic routing, which really, I should have pitched as circuit breaking and dynamic routing: the good, the bad, and the ugly. There's a lot of good, there's some stuff that might be bad, and there's definitely some stuff that's still kind of ugly. We'll be breaking all that down as we go.

A lot of this will be a demo. You can follow along if you like; the workshop source is at that URL, or you can just scan the QR code there. I will be doing this with a Civo cluster. It works fine with a k3d cluster; it doesn't really matter. Choose what you want, or just follow along with the demo and don't mess with anything on your end. If you do want to run this on your own cluster, make sure you're using an empty cluster. The setup assumes that we can do whatever we want with it. Please do not run this against your production cluster. Very bad things could happen, and that would definitely involve breaking things.

All right, with that in mind, the agenda: we're gonna talk briefly about circuit breaking and dynamic request routing. We're gonna do a demo of dynamic request routing, then of circuit breaking. Then we'll come back to the slides for a second to talk about gotchas. And then we'll do a little bit more in terms of debugging things, if we have time. Everybody cross your fingers on that one, right?

Okay, dynamic request routing. This is a thing that kind of originated with Linkerd 2.12. Earlier, we'd been able to do some of this for a while using TrafficSplits from SMI, but that was limited to very coarse-grained routing, where you could say, okay, 1% of the traffic to foo, I want you to peel it off and send it to a new thing, and the rest of it goes to the original one. Or you could say things like, take all of this traffic and throw it over to another cluster, if you're running multicluster. The point of dynamic request routing in 2.13 is that you can route on many, many more things, like HTTP headers or verbs. You cannot route on bodies, because that would be weird.
But this permits you to do a bunch of really interesting things, like progressive delivery anywhere in the call graph instead of right at the edge where the ingress can control it. Likewise, you can do A/B testing deep in the call graph, or per-user canaries, or whatever. There are a lot of really interesting things you can do with this.

These are configured with the Gateway API HTTPRoute resource, no longer with the SMI TrafficSplit. Part of this is that Linkerd is actively participating in the GAMMA initiative to bring the Gateway API into the world of service meshes. So this is kind of a new capability here. You use the parentRef of the HTTPRoute to bind the route to a Service; with GAMMA, you can bind to Services, not just to Gateways. And then the backendRefs are where you describe where you want the traffic to go. We will show demos of this when we get to the demo.

This is a good opportunity to talk really briefly about the Gateway API and GAMMA, if you're not familiar with these things. The Gateway API got started a few years ago, in 2020. Broadly speaking, it got started because a bunch of people looked at the perpetually-in-beta-forever Ingress resource, which was not really as expressive as we would like, and were trying to figure out: how do we do better than this? The Gateway API itself is at version 0.7 at this point, hoping to reach 1.0 this year; another one to cross your fingers on there. Last year, an initiative got started within the Gateway API, GAMMA, to figure out how to use the Gateway API to configure meshes as well as ingress controllers, simply because HTTP routing, for example, is a thing that applies pretty well in both of those areas. And GAMMA actually does stand for something. I really should remember what it stands for off the top of my head, especially because I became one of the co-leads this week, but I do not, and I'm terribly sorry about that. I'll look it up before the next one of these.

Linkerd the project, and I personally, have been very active in GAMMA because it's pretty interesting to us, right? We started to use it with 2.12; we picked up the HTTPRoute CRD at that point. The goal within Linkerd is to use the Gateway API as our new standard way of talking about classes of HTTP traffic, including gRPC. We wanna use it for traffic shaping and retries and timeouts and auth policy and dynamic request routing and all of this great stuff. We like the Gateway API for this because it is quite powerful, it's quite flexible, it's on a good path to probably already be in your cluster, and, maybe most importantly, somebody else can maintain all these CRDs as opposed to just us.

There are also things that we're actively working on. There are a lot of things you cannot yet do within the Gateway API; retries comes up immediately as a thing where you just cannot do that. Likewise, conformance is kind of a big deal. The Gateway API defines a standard set of conformance tests, so that if you want to say you support the Gateway API, you need to pass the tests. Originally, the conformance tests required that you be an ingress controller to pass them, and Linkerd obviously is not, so we could not do that. As a direct result of that, we are actually still serving HTTPRoute, for example, from the policy.linkerd.io API group rather than the official Gateway API group.
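To make that concrete, here's a minimal sketch of a GAMMA-style HTTPRoute, with a parentRef binding it to a Service rather than to a Gateway. The names are placeholders and the exact API version suffix depends on your Linkerd release, so treat this as illustrative rather than an exact workshop manifest:

```yaml
# Minimal GAMMA-style HTTPRoute sketch. Names are placeholders; note the
# policy.linkerd.io group that Linkerd 2.13 serves HTTPRoute from, rather
# than the upstream gateway.networking.k8s.io group.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: my-route             # hypothetical name
  namespace: my-namespace    # must be the namespace of the workloads
spec:
  parentRefs:
    - name: my-service       # binding to a Service -- this is the GAMMA part
      kind: Service
      group: core
      port: 80               # the Service's port
  rules:
    - backendRefs:           # where the matched traffic actually goes
        - name: my-service
          port: 80
```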
Conformance is an area that's changing very quickly. In particular, there's been a lot of good work around conformance profiles, so you can talk about Gateway API conformance for a mesh as opposed to for an ingress controller. So keep an eye on this; it'll likely just not be an issue for too much longer.

All right, I'm not seeing any questions so far, so I'm gonna keep going with circuit breaking. If we do have any questions, just toss them out in the chat or whatever. Circuit breaking is even newer than dynamic routing. The idea behind circuit breaking is that if you have a failing workload endpoint, it probably does not make sense to keep hammering the failing endpoint with yet more traffic and make it fail harder. So when we see a failure, or some group of failures, the idea is that the circuit breaker will open so that traffic to that endpoint stops. After a little while, you try another request, see if things are working, and if they're working again, you close the circuit breaker and let requests go through again.

The implementation in 2.13 is a little bit limited. Like I said, this is a very, very new thing. 2.13 can only open the breaker when it sees too many consecutive failures to a given workload endpoint, and "failure" here means an HTTP 5xx response. In the future, you'll be able to do response classification, and you'll be able to do things based on gRPC statuses and not just HTTP statuses, but this is the way it is for 2.13. Another important thing here is that 2.13 currently configures circuit breakers using annotations on a Service, which is a little bit ugly, but permits us to get feedback really quickly about how well it's working for people, so that we can make sure that what we're building is what people want to use. You'll also note, if you look at the docs, that all of the annotations currently have "failure accrual" in their names, which is the internal name for the implementation of circuit breakers. Honestly, partly that's there just to remind everybody that, yeah, this is gonna be changing; the annotations are not the way we're gonna be doing this forever.

Okay, some quick examples of circuit breaking, and we will be demoing this. To break the circuit after four consecutive request failures, you say, hey, the failure accrual mode is "consecutive", which is the only one that's currently supported, and the consecutive max failures is four. If you then set the failure accrual consecutive minimum penalty to 30 seconds, then as soon as the breaker opens, it will stay open for at least 30 seconds. Then it'll retry, and if that retry fails, the delay will get longer and longer, up to the consecutive max penalty, which here I've shown how to set to two minutes. And again, we will be showing all this in the workshop, so don't feel like you have to remember all of this right off the top of your head.

Okay, I'm gonna describe the architecture of the demo itself, and then we will go in and look at the demo. What we're doing here is we start with a cluster. Outside of the cluster, we're gonna use the web browser as the GUI. It's getting this single-page web app called the Faces GUI; there's a grid of cells in the Faces GUI. The GUI then reaches in and calls the face service repeatedly. The face service inside the cluster will call the smiley service, which is supposed to return a smiley, and the color service, which is gonna return a color.
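As an aside before the demo architecture continues: those failure-accrual annotations described a moment ago would sit on a Service roughly like this. The Service itself is hypothetical; the annotation names are as the 2.13 docs describe them and, as noted, are expected to change in later releases:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                # hypothetical Service
  annotations:
    # "consecutive" is the only failure-accrual mode in 2.13.
    balancer.linkerd.io/failure-accrual: consecutive
    # Open the breaker after four consecutive 5xx responses from an endpoint...
    balancer.linkerd.io/failure-accrual-consecutive-max-failures: "4"
    # ...keep it open for at least 30 seconds before probing again...
    balancer.linkerd.io/failure-accrual-consecutive-min-penalty: 30s
    # ...and back off no further than two minutes between probes.
    balancer.linkerd.io/failure-accrual-consecutive-max-penalty: 2m
spec:
  selector:
    app: my-app                   # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```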
When things are going well, the smiley service will return this grinning smiley, and when things are going well, the color service will return the color green. The face service will then compose the two together and hand them back to the GUI. We also, in this particular cluster, have a smiley2 service, which returns a heart-eyed smiley, and we have a color2 service, which returns the color blue. At the beginning of this demo, nothing is talking to either one of those. So if you see blue heart-eyed smileys, something is going wrong. If you're familiar with the Faces demo from elsewhere: in many demos, we deliberately start off with Faces horribly broken, so that we do not often see grinning smileys on green backgrounds. In this particular case, we're going to start with Faces not broken, so we actually expect to see a field full of grinning faces on green backgrounds.

All right, we're using Emissary-ingress as the ingress controller here to mediate access between the GUI and the rest of the cluster. We're not really using the ingress controller for anything fancy, so it doesn't matter which one we choose, but that's the one I'm using. And of course we have Linkerd all over the place in the cluster, and we are very much going to be using things that Linkerd can do inside the cluster.

Okay, so let's see what happens. And as Annie mentioned, breaking things is always a possibility. All right, well, the first thing that's gonna happen here is that I am going to quit this, and you get to see a little bit of behind-the-scenes stuff, because I was testing this right beforehand and disabled a bunch of it. There you go, folks; that's how you know we're doing it live. The tool I'm using here, demosh, is also open source. You can find that on GitHub if you like; ask questions. Okay, let's try this again. And there's our demo architecture, as shown.

Okay, yeah, there we go, this is a start. We've got two significant features we're gonna be talking about here: dynamic request routing and circuit breaking. As it's running right now, the GUI should be showing all grinning faces on green backgrounds, and let us see if that's actually what we're gonna get. Okay, we are in fact getting grinning faces on green backgrounds. Everybody can see that; this is a good thing. All right, that's the same browser, I should point out. That is not a different web browser; that's the same one I was just showing you, just scaled down and fit into the corner there so we can see the text and the web browser at the same time.

All right, as I mentioned before, that green color in the background comes from our color workload. The color2 service, or color2 workload — wow, I cannot talk today — color2 gives the blue color instead of the green color. We're now going to shift traffic over to color2 with an HTTPRoute resource. This is the HTTPRoute resource we're going to use. The name isn't all that relevant. The namespace is relevant, though: we need this HTTPRoute to live in the namespace with the workloads it's trying to modify. In this case, that's the faces namespace. And what we're gonna tell it here is: when you see traffic going to port 80 on the color service, route 90% of it to the pods behind the color service and route 10% of it to the pods behind the color2 service. An important point here is that you'll see these port numbers scattered through.
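The route being described looks roughly like this — a sketch reconstructed from the description above, so the route name and API version are illustrative:

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: color-canary        # the name isn't all that relevant
  namespace: faces          # the namespace very much is
spec:
  parentRefs:
    - name: color           # traffic addressed to the color Service...
      kind: Service
      group: core
      port: 80              # ...on the Service's port (more on this below)
  rules:
    - backendRefs:
        - name: color       # 90% to the pods behind color...
          port: 80
          weight: 90
        - name: color2      # ...and 10% to the pods behind color2
          port: 80
          weight: 10
```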
Those port numbers must be the port in the Service resource, not the port the workload itself is listening on. The workload is listening on port 8080; don't use that port, it won't work. Use the one in the Service, which is port 80. So when we apply this one, we should see about 10% of those cells shift over and get a blue background. There we go. I shall leave it as an exercise for the reader to determine what 10% of a four-by-four grid is, but close enough.

Now, let's take a quick look back at the demo architecture, just to follow exactly what's going on here. Normally, traffic coming through is going to hit Emissary-ingress, which will hand it off to the face service, which will hand it off to the smiley service. What we're going to do though is hit the wrong button immediately — that's pretty awesome. Yeah, all right, what we're going to do here is basically use the HTTPRoute to add this dashed line to send part of the traffic over to color2. But from the perspective of the face service, it's not doing anything different. And this is a relevant thing, because it means that you get to change the routing, you get to do all of these funky things, without your application needing to care about it at all.

All right, back over here, we can change the amount of traffic that the color2 service is getting by changing the weights in our canary route. So instead of a 90-10 split, I can change it to a 50-50 split just by editing this thing. If we do that, we should now be seeing 50% of the traffic going to color2, so roughly half of our cells get a blue background instead of a green background. We can also, once we decide that we're really happy with this — like, I don't know, maybe everybody likes blue instead of green — switch everything so that all of the traffic goes to color2. We could do this just by deleting that backendRef, but instead I'm gonna do it by changing the weight to zero, simply because that can often be easier to do in a patch than deleting it. If you've ever tried to delete a stanza within a resource using kubectl patch, for example, it's really annoying, but changing a weight is really easy. So we're gonna show that that actually works here. I apply this one, and we should see no green backgrounds at all once this takes effect. There we go. It's always nice when the things that are breaking are me driving the demo instead of the demo itself. I always appreciate that from the demo gods.

All right. Another thing we can do is A/B testing, and for this one, we will end up using two browsers rather than just one. We have one browser, which is our normal browser that we've been looking at so far. That one is not sending anything fancy, and if you look at it, you can see that it says "user unknown" up here, which is telling us that this browser is not doing anything strange with any headers. But there is another browser here that's using the ModHeader extension to send an X-Faces-User header with the value of testuser. This browser right now looks exactly the same as the other one, except that it says "user testuser". So here, I now have both browsers visible. And again, these are the same ones that I just showed you, with the normal user on the top and the one that's actually sending the header via the browser extension on the bottom. So this is the normal-user browser; this is the test-user browser.
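The header-based route we're about to apply looks roughly like this — again a sketch from the description, with the testuser spelling of the header value an assumption:

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: smiley-ab           # hypothetical name
  namespace: faces
spec:
  parentRefs:
    - name: smiley
      kind: Service
      group: core
      port: 80
  rules:
    # Requests carrying X-Faces-User: testuser go to smiley2...
    - matches:
        - headers:
            - name: x-faces-user  # lowercase on purpose; matching is case-insensitive
              type: Exact
              value: testuser     # assumed spelling of the demo's header value
      backendRefs:
        - name: smiley2
          port: 80
    # ...and everything else keeps going to smiley.
    - backendRefs:
        - name: smiley
          port: 80
```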
So if I come over here, we can apply this HTTPRoute to switch between smiley and smiley2, depending on the header. What's going on here is the same thing, right? This has to be in the faces namespace. We're gonna act on traffic coming to the smiley service on port 80. Then, if a request matches a header with a name of X-Faces-User and an exact value of testuser, all of that traffic will be routed to the pods behind the smiley2 service. Otherwise, the traffic will be routed to the pods behind the smiley service. You'll also note that I wrote the header name in lowercase. That's because HTTP/2 mandates that header matching not be case-sensitive, so we just smash everything to lowercase and call it good.

So if I apply this one, we should immediately see the bottom browser — the one that's using ModHeader to send X-Faces-User: testuser — switch to heart-eyed smileys rather than normal smileys. And what this has shown us is a thing that I'm not gonna fix. If I come over — I have to figure out the right way to show you all this now — so this is my normal-user browser, which you'll notice is showing us grinning faces, and this is my test-user browser, which is showing us heart-eyed smileys. And if I come back to this one — the problem here is that I cleverly picked the wrong window when I was doing those shares. Like I said, I'm gonna leave that, and we'll call that a thing where I've broken something.

Okay. Once we get to the point that we're done with the A/B test and we decide that everybody really, really likes the heart-eyed smileys, this time I will just delete the stanza. There's no real point in leaving the header match in but giving it a weight of a hundred and the other one zero; we can just delete it. That's simpler. And as soon as I apply this one, both browsers will get the heart-eyed smileys, and then it won't matter so much that I picked the wrong one and labeled the test user normal and the normal user test.

Okay. Any questions so far? I don't think so, but there was a hi from Chile, so hi back there. Hola, Chile. All right. Now let's show off some circuit breaking stuff. The first thing we're gonna do here is actually switch the UI so that, in addition to seeing the matrix of faces, I'm also gonna click the "show pods" button, which brings up this other display that shows me which face pod is giving me what kind of result. This just makes it a lot easier to see when circuit breakers open and close. I should also point out that this display of the pods is not really relying on any service mesh magic; I just arranged it so that the face service hands back its pod ID in a header, so that the browser can show it to us.

All right. Again, same browser. At this point, we're getting responses from two face pods. Those pods are part of the face deployment. The face deployment is set up so that it gives each of its pods a particular label, and the face service then selects the pods that carry that label. This is kind of Kubernetes 101; there's nothing particularly magic here. But it's important to bear this in mind, because I'm gonna use it to cheat: I'm going to add more pods that are part of a different deployment, so I can have them run different code, but I'm gonna give them the same label, so that the face service will multiplex across all of them.
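The shape of that cheat, as a sketch — the actual Faces manifests will differ, and the service=face label and the image name here are assumptions for illustration:

```yaml
# The face Service selects pods purely by label...
apiVersion: v1
kind: Service
metadata:
  name: face
  namespace: faces
spec:
  selector:
    service: face                   # assumed label for illustration
  ports:
    - port: 80
      targetPort: 8080
---
# ...so a completely separate face2 Deployment, whose pod template
# carries the very same label, gets multiplexed in alongside the
# real face pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face2
  namespace: faces
spec:
  selector:
    matchLabels:
      app: face2
  template:
    metadata:
      labels:
        app: face2
        service: face               # same label as the face pods
    spec:
      containers:
        - name: face2
          image: example/face2:broken   # hypothetical, broken-by-design image
          ports:
            - containerPort: 8080
```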
And the reason I'm gonna do this is that the things I add are going to be broken by design. So as we do this, we will eventually see some face2 pods showing up. The point of face2 is that those pods almost immediately get stuck in an error state, which we'll show here as a meh face on a pink background. And you can see that, in fact, that is exactly what's going on. So now we have a bunch of face2 pods that are returning a bunch of errors; all good.

So now we're gonna go through and annotate the face service to enable the circuit breaker, to hopefully get rid of the pink-background meh faces. As we showed before, we're gonna activate the consecutive-failures mode. We're gonna tell it that it needs 30 consecutive failures before it does anything, and the minimum penalty will be 10 seconds. Once I do this, if you look down over here, you should see these numbers; those numbers are the count of failures for that particular pod. So once I apply this, nothing should happen until they go up by another 30 or so, and then we should see the breaker open, and those pink meh faces should just disappear. Everybody cross your fingers. It doesn't take that long to get to 30 failures in this demo, thankfully, which makes it a little bit more interesting. Let's hope so. Yeah. All right, come on. There we go.

So now you can see a couple of things that are interesting. One of them is that we only have blue heart-eyed smileys. The other is that the face2 pods have almost vanished from the pod set — but every so often, we see one of the pink faces come back. What's going on there is that Linkerd is allowing a request through to see whether that face2 pod has recovered. And it doesn't do anything artificial; it just allows through an actual application request. So one of the interesting things with circuit breakers is that once they open, you will actually still occasionally see a real failure come all the way back through to the client. This is another thing that, I don't know, might change in the future. It's fairly difficult, though, with circuit breaking to both probe the service to make sure it's happy and not let any actual application request through.

Okay, so that's it for the demo. We actually do have time for questions, which is great, but I'm not seeing any right now. I'm gonna go back to the slides to talk a little bit about gotchas and debugging. By all means, if y'all in the audience have questions about this, go ahead and sing out.

So let's talk about gotchas. There are always gotchas. The biggest gotcha of them all with this stuff is that in 2.13, service profiles from 2.12 and earlier do not compose with the shiny new Gateway API features. So if you have a service profile that defines a route, it will take precedence over an HTTPRoute, and it will also take precedence over circuit breaking, and also it will — I think the audio might be a bit muffled, but let's see if it improves. Is that better? No, it's really good. Yeah, okay, sorry about that; I got a little bit too far from the mic. So yeah, if you have a service profile that defines a route, it will take precedence over an HTTPRoute with conflicting routes, and it'll take precedence over any circuit breaker for a workload that that service profile covers.
And the reason for this is that if we did it the other way around, where HTTPRoutes took precedence, we were able to come up with lots of very surprising behaviors on upgrades that would have badly confused people, and possibly broken their clusters, when they went from 2.12 to 2.13. And that did not seem like a good idea. The problem, of course, and the reason I called this out as a huge gotcha, is that there are still a bunch of things you have to do with service profiles in 2.13, because you can't yet do them with an HTTPRoute, because the Gateway API doesn't yet offer us a way to do them. Retries are really the biggest one here. We're actively working on making all of this better very quickly. Just yesterday, I think it was, we released edge-23.6.2, which includes support for GEP-1742 timeouts, which is a way of adding timeouts to HTTPRoutes. So this is getting better quickly — but, sorry, this is getting better quickly, and for the moment, yes, there are still going to be things that you must do with service profiles and that you will not be able to do with HTTPRoutes. I apologize for that.

If you're doing things with this that get confusing, some rules of thumb for debugging. Make sure you don't have any service profiles if you're trying to use HTTPRoutes; that's the biggest one. There's another one along those lines, though: if you're trying to use HTTPRoutes, they don't work, and then you realize, oh hey, I actually have a service profile here, then when you delete the service profile, you may actually need to restart the pods for the thing you're trying to apply the HTTPRoute to, or the thing that had the service profile attached to it. The reason for this is that the Linkerd proxy kind of has to decide whether it's gonna be a 2.12 proxy or a 2.13 proxy, as far as this is concerned. In most of the cases that I've seen, it has managed the switch correctly when you delete and reapply resources and such; occasionally, it can be a little problematic.

Another thing worth noting is that there is a new linkerd diagnostics policy command that can help with this stuff. So let's go back for a moment to our demo to look at that command. I'm going to warn you, this is going to show a lot of information. Basically, what I'm doing here — I can't quite scroll up there — is I just asked it: hey, show me all of your policy stuff associated with the smiley service in the faces namespace on port 80. And it dumps a lot of stuff out. It starts by, for example, confirming that there is an HTTPRoute associated with this. You'll also notice that this says HTTP/1, because Linkerd treats HTTP/1 and HTTP/2 very similarly. It can go through and talk to you about backends, which are all smiley2 at the moment; there's only one backend in here. It can go through and talk to you about, oh look, I can actually match everything. And then — I think the audio is breaking. Yeah, exactly, we noticed at the same time. Come on, internet, work. Basically, there's a lot of information here. You should get used to using things like jq or whatever to go through and look at output like this. But there's a lot of very useful information in here, where it can tell you a lot about what it's actually talking to and what it's willing to do. I hope that was audible.
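For reference, the invocation here is roughly the following — the syntax is as I recall it from the 2.13 CLI, so double-check against linkerd diagnostics policy --help:

```sh
# Dump everything the policy machinery knows about traffic to the smiley
# Service on port 80 in the faces namespace. The output is large; piping
# it through jq, yq, or a pager helps a lot.
linkerd diagnostics policy -n faces svc/smiley 80
```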
So what we're going to do here is restore one of the color routes; we're going to switch back to the 50-50 color canary that I did earlier. And just in case you're curious what that looks like with both the circuit breaker and this: I actually turned off the browser so it wasn't shoving requests in the background, which is why the pink faces came back — it had been long enough that Linkerd was willing to retry those, and then it took a little while to build up to 30 failures again. But now you can see the 50-50 mix of blue and green again. And now we can check linkerd diagnostics policy for the color service, port 80, in the faces namespace. We can see, okay, great, the color canary is there. But now, if we come through, we see multiple backends buried in here — I actually have to scroll back and forth because it runs over. We can see it say, hey, yeah, I'm splitting traffic between these two things, and it confirms a 50% weight on this backend and a 50% weight on that backend. So this can tell you an enormous amount about what's actually going on.

Now, this is slightly incorrect in that linkerd diagnostics policy will talk to you about circuit breakers, but it can be much more helpful to actually use Linkerd Viz to go through and look at where the traffic is actually going, to get a sense of whether your breakers are actually working. If we look at traffic to the face deployment, we see 100% success at 5.9 requests per second, which is pretty reasonable. We also see that the latency is absurd, but that's not the point of this demo; it is in fact configured to be absurd. We can also go through, though, and ask Linkerd Viz to show us pods in the faces namespace that match the same label selector that the face service is using. This will show us all of our pods, whether they're the good face pods or the broken face2 pods. And if we do that, we can see that, oh look, these two are only taking a tiny amount of traffic, because the circuit breaker has kicked in, and these two are taking almost all the traffic.

A couple of things are interesting to notice here. One of them is: really, what's up with this almost-100% success rate? Well, there are a couple of answers, but the biggest one is just that we're not feeding it very much traffic, and so the computation comes out skewed. Also — oh no, actually — Linkerd will consider a 4xx response a success for this graph, but everything coming back here is a 5xx. The other interesting thing is that the failures are much lower latency than the successes. If you remember, before the circuit breaker kicked in, you saw a bunch more pink faces in the demo than you did the real ones. That's because Linkerd prefers to route to lower-latency endpoints. So if you get into a situation where an endpoint is failing really, really fast, Linkerd will actually end up preferring to give the failing endpoint more traffic, until it figures out that it's a failure. Another reason why circuit breaking can be really, really useful. Okay, and another point: linkerd viz stat computes things over time, and that can take it a little bit. If you go through and change something, it's gonna take a little bit to catch up. So as we go through and play with things, you may need to wait a second or two to see the change show up.
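Roughly the commands being used here, as a sketch — the service=face label selector is an assumption for illustration:

```sh
# Success rate, RPS, and latencies for the face deployment as a whole.
linkerd viz stat -n faces deploy/face

# The same stats per pod, selecting on the same label the face Service
# uses, so both the good face pods and the broken face2 pods show up.
linkerd viz stat -n faces pods -l service=face
```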
I feel like my screen share just decided that it was going to break. So, Annie, I might need to get you to swap back and forth there. Let's see. Don't worry. There we go, good enough. All right. Except that that's not the slide that I'm sharing. Annie, can you drop the share and then put it back on for a second? Yes, I can. Okay, there we go; dropped, and it's coming back. I'm gonna have to turn it off and then turn it back on. So everybody, this is gonna look weird for 30 seconds or so, I think. No worries. That's the universal rule of IT support. Or something like that, yeah. Yeah, exactly. All right. While we're waiting for that to kick up, I wanna remind everyone of the Q&A. If you have any questions for Flynn, you can leave them in the chat, and we will get to them after we're done with the main content, so to say. Yeah, the demo, I guess.

All right, so what I'm gonna do here is cheat slightly. This is, as advertised, the "break things" part — surprisingly, in the slide deck portion, which is extra fun. And now, of course, my video is frozen while it's waiting for the share to reactivate. I'm used to worrying about the demo breaking; I'm not usually used to worrying about the presentation breaking. So this is really quite entertaining. Oh, now it's coming up. Perfect. And the middle part as well. Okay. I had to cheat and go back to just directly sharing a browser window, instead of using OBS and all the fancy stuff. So if we wanna go through and switch views, that might actually be a little slower now.

All right, so this is another gotcha about dynamic request routing. You cannot sequence dynamic routing like this, where you've got one route that splits foo between foo and bar, and then another one that splits bar between bar and baz. That will not work. If you send traffic to bar, it'll do the right thing. But if you send traffic to foo, it will never follow the baz leg at all. The reason for this is that HTTPRoutes have to distinguish between what we've taken, in GAMMA, to calling the frontend of a service and the backend of a service. The frontend is basically the DNS name and the cluster IP. The backend is the collection of endpoints that provide all the compute for the service. Routing decisions happen at the frontend, which is shown in blue. But once the decision happens, traffic goes straight to an endpoint — straight to the backend. So when the foo route decides, oh, I need to go to bar, it's gonna go directly to an endpoint; it will not go through the bar frontend, so bar never gets a chance to make another routing decision. Effectively, if you try to set this up, what you're actually setting up is this. And this is a thing that I'm calling out not so much because we think it's really common, but because if you don't call it out, it's really confusing when it happens; if you don't understand at least a little bit about this, it's really hard to figure out what's actually going on.

So, that is it; let's move to the Q&A. We're having... is the line breaking a bit again? I don't know. Did you hear that in the discussion? Now it's good. It was kind of breaking, but now it's working, I guess. Okay, so, I don't know. I think... yes, okay. No, it does happen, right? It happens live; that's how you know we're doing it live, yeah. Thanks so much. Okay, so let's just work through the Q&A. So, nice stuff: what is next for the Linkerd project, and any sneak peek that you can give us today?
I don't have any sneak peeks in terms of demos. A lot of what we're working on right this second is really trying to get to a place where the new HTTPRoute world has feature parity with what we could do in 2.12, so that we can get rid of that gotcha about how they don't compose by just saying: yeah, they don't compose, it's okay, everything that you used to be able to do you can still do in the brave new world. Like I mentioned earlier, the first one of those is timeouts — timeouts according to Gateway API GEP-1742. If anybody listening is not familiar with the GEP process: modifications to Kubernetes go through a thing called a Kubernetes Enhancement Proposal, or KEP, and modifications to the Gateway API go through a Gateway API Enhancement Proposal, or GEP. One of them that got to an implementable point fairly recently dealt with including timeouts in HTTPRoutes. This is something that a great many people have been after, and it turns out to be way more complex than you might think — oh, it's just a timeout. Yeah, but there are like two dozen implementations of the Gateway API, and most of them have differing sets of things they can set timeouts on. So trying to come up with some sort of universal timeout proved challenging. There's gonna be a similar effort going through to do a GEP for retries, and that's gonna be also kind of challenging. Yeah, these are things that you don't learn until you start digging into the design of the APIs themselves, right? Yeah, learning by breaking, right? The same thing. Yes — actually, ideally, learning by thinking about what you'll be breaking if you do something in a particular way, and then finding a way that won't break things. But yeah, sometimes we learn by doing it and then going, oh crap, this doesn't work.

So can you share some more good resources, so that people can dive deeper into the topic? Yeah — whoops, not that way though. Let's use this browser window for a moment. So this one, gateway-api.sigs.k8s.io — I'm gonna have to change. Yeah, yeah, yeah. Let's do this. I forgot that I can't just change that. All right, this is the main page for the Gateway API itself. This is a good place to start learning about things that are coming up for the Gateway API, and the things that you can start expecting to see in Linkerd. If you take a look in the reference section, it's got lots of reference material, but it also has the enhancement proposals, where we talk about lots of things coming up and what to do about them. So that's a good place to keep an eye on. This is the one that I mentioned earlier, GEP-1742, about timeouts.

Other things, let's see. There's always the Linkerd GitHub repo, of course, where we have obviously everything that is going on with Linkerd. And there are a couple of other places — actually, I will put back the slides that you mentioned. There's the Service Mesh Academy, always a good resource. Next month we'll be coming back to Linkerd certificate management again, because we realized that we haven't talked about that in a year or so. There's also the Linkerd forum, and we have the Slack channels. The Slack channels are great; they're a good place to go and get lots of information. But it's very difficult to search things in Slack, and it's particularly difficult when messages roll off of Slack's history. The forum is intended to fix those two things. In particular — what else?
Those are the best resources, and I believe I'm going to put up how to find me on Slack, too. Any other questions that come to mind? Okay. I don't have any more questions, I guess, but I think there's something you could add: if someone wants to get started with this, how can they start? If that's something you want to add. So for that, I would recommend 100% going to the Linkerd Slack channels and talking to us there. And also, there are always open issues for Linkerd, so there are always things you can look through here with the "good first issue" label. These are all things that might be nice to go and take a look at. But yes, talking to us is always good, because that's a good way to both get some guidance and get some direction in terms of things that would be particularly useful to work on. I also forgot that I had this slide in terms of resources: there is a Fundamentals of the Service Mesh online course now, with a bunch of hands-on labs, and that's a good way to get started as well. So yeah, cool. I was also going to mention you can go take a look at Buoyant Cloud in terms of managing things. And yes, you can reach me at flynn@buoyant.io for email — wow — or as @flynn on the Linkerd Slack. Man, it's not just the demo — my English is breaking, my presentation is breaking. There's lots of breaking going on today. Yeah, when it rains, it pours, properly. Yeah, clearly, clearly. And he's sitting back going, I don't know if we should invite this guy back, man; he can't present, he can't talk. Yeah, good stuff. It just happens, right? Yeah, glad to be here; thanks for having me.

Okay, yeah. That was awesome, actually; happy to have you in the session. Cool, okay. So yeah, I guess we're done with the demo. If you have any questions, feel free to add them. But as we're done, I guess we can end this session, right? So let's end the session. Thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. Thanks for joining us today, and we hope to see you again soon.