Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm Manny Talasto, I'm a CNCF ambassador, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week, we have Flynn here with us to talk about Linkerd 2.13 and the Gateway API, very exciting. As always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. So please do not add anything to the chat or any questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants as well as the presenters. With that done, I'll hand it over to Flynn to kick off today's presentation. Thank you. Happy to be here. So today, we do have a few slides since we're talking about what's coming in Linkerd 2.13. There will be a demo, don't panic. Hopefully, the demo will even work. It's a pretty simple agenda: just an overview of the 2.13 features, and then we'll go directly into a dynamic request routing workshop. If you want to follow along, you can. Otherwise, you can just watch what's going on and mock me if the demo gods don't smile upon us. I am going to be using a k3d cluster for this. It doesn't really matter if you have some other kind of a cluster. There are a few commands that you'll need, most notably kubectl, the Linkerd CLI, things like that. So 2.13 is a major release for Linkerd. There are a couple of significant features that are showing up. I'm only going to talk about two of them here. I'm going to talk about dynamic request routing here. I'm going to talk about circuit breaking. I'm really briefly going to talk about changes in Buoyant Cloud. I'm not going to talk about FIPS and the Azure Marketplace at all.
I actually meant to take that off of that slide, sorry. Dynamic request routing. Those of you who are familiar with Linkerd 2.12 and earlier will know we had this thing in 2.12 where you could use SMI TrafficSplits to control where requests would be routed inside the mesh. The TrafficSplit pretty much allows routing for things like, okay, for this particular workload, go and split 90% off to this other thing. In this case, you can take 1% of the traffic over to the new version of a foo workload and take 99% to the original workload, which is the sort of thing you would do for progressive delivery or a canary rollout. You could also do things like just completely take over traffic for one service and route it someplace completely different, which might be the sort of thing where you've realized that the foo service in your cluster is dead, and so you're gonna route all of its traffic over to a different cluster entirely with Linkerd multi-cluster. That's about as far as you can take routing in 2.12. In 2.13, we changed the world around. So in 2.13, you can route based on most attributes of the request: you can route based on headers, the path, the verbs. You don't get to route on the body; this is still a service mesh, not an application firewall or something like that. But even just being able to route on headers and verbs and such gives you a lot of flexibility that Linkerd 2.12 simply did not have. These are not configured with SMI anymore. These are configured instead with the Gateway API. Again, if you're familiar with Linkerd 2.12, we introduced the HTTPRoute object from the Gateway API in Linkerd 2.12, but in 2.12, it was just a thing that you could hang policy off of; you could hang authorization policy off of it specifically. In 2.13, it's the same object, but now you also get to control routing with it as well as hanging policy off of it. You can, of course, continue to hang policy off of it as well; that still works.
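The 2.12-era split just described can be sketched as an SMI TrafficSplit. This is a hedged sketch: the service names foo and foo-v2 and the namespace are illustrative stand-ins for the 99/1 example, not anything from the demo.

```yaml
# Hypothetical SMI TrafficSplit, Linkerd 2.12 style: send 1% of the
# traffic for the "foo" service to a new version. Names are illustrative.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: foo-canary
  namespace: default
spec:
  service: foo          # the apex service whose traffic is being split
  backends:
  - service: foo        # original workload: 99% of requests
    weight: 99
  - service: foo-v2     # new version: 1% of requests
    weight: 1
```

Note that a TrafficSplit can only divide traffic by weight; it has no way to look at headers, paths, or verbs, which is exactly the limitation 2.13 removes.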
This permits you to do some really fascinating things. If you have a workload deep in your call graph, but you want to do header-based A/B testing so that some of your users get one version and some of your users get another version, and you're far away from the ingress where this would normally happen, you can do that now, which is kind of cool. You could do a per-user canary rollout. You could do canary rollouts still the old way, but you have a lot more control over them now. One of the other ones that's kind of interesting is that you can conceivably use this to do things like sharding based on what region a request is coming in from, to try to knock down latency and things like that. So let me talk very briefly about the Gateway API here. If you are not familiar with the Gateway API, it was originally designed for ingress traffic: traffic coming from outside the cluster going to inside the cluster. Over time, as people have been looking at this, they kind of realized that there should be no particular reason that we can't use the same resources, like HTTPRoute, to control routing within the service mesh as well. This has actually given rise to the GAMMA initiative: Gateway API for Mesh Management and Administration. That's a project where a group of us are kind of getting together, looking at the Gateway API and trying to figure out how we need to change it to make it work for the mesh as well as for ingress. So in the future, the goal there is that you could have both your ingress controller and your mesh using Gateway API resources in ways that make sense, that are fairly unsurprising if you're already used to the Gateway API for ingress traffic routing, and that give you a lot of flexibility over what's going on in your service mesh as well.
Linkerd also has a bit of an ulterior motive here, which is that if we can use Gateway API resources for this, we don't have to invent our own custom CRDs and teach everybody about them. We can just teach everybody about the Gateway API, which we would kind of rather do, really. So as of 2.12, we have adopted the Gateway API as the core mechanism that we're going to use within Linkerd for describing classes of HTTP traffic, by which I mean things like, you know, here's a chunk of traffic for a given user going to this workload, or gRPC calls to this particular service, that sort of class. Using the Gateway API for that gives us a lot of flexibility to do things like auth policy, dynamic request routing, circuit breaking, all this kind of stuff where we want to identify a particular chunk of the HTTP or gRPC traffic in a cluster and then be able to take actions that are specific to that particular chunk of traffic. In the future, we expect to extend this beyond HTTP traffic, to TCP traffic, whatever, but at the moment we're focusing on HTTP traffic. There's also one caveat that we have to list here, which is that in 2.13 we are actually still copying HTTPRoute out of the Gateway API's API group into the policy.linkerd.io API group. The reason that we are doing that is basically to do with conformance testing for the Gateway API. The Gateway API itself has a fairly detailed set of conformance tests that are all structured with the assumption that you're running a conformance test for a Gateway API based ingress controller, which Linkerd is not. We're just now in GAMMA getting to the point of defining what conformance looks like for a mesh. So real soon. Is there an issue with the microphone? Can I read something? Now I think I can hear again. All right, let me try something else then. Yes, it sounds a bit wonky, but now I can at least hear. Can you hear me now? Yes, I can still hear.
It's a bit low and a bit interesting, but better. All right, give me one moment to see if I can fix this properly. Okay, no worries. It's always good to take a moment; technology always hits you. Technology, yay. Yeah, exactly. And while you take a moment to fix that, I wanna remind the audience that you can ask your questions throughout the whole show. So even if you're right now wondering, oh, I'm wondering about this part or something like that, just send your question in immediately, and then we'll get to it when we have a good spot, or I'll just ask it immediately when it comes in as well, so you can get immediate answers. That's always a lot of fun. How about now? Am I still audible? I think it's good now. Yeah, it's quite good, yeah. So this is how you know we're doing this live. Exactly. Where was I? I think I had finished talking, well, okay, just to back up in case: I was mentioning that we copy HTTPRoute out of the Gateway API group into the policy.linkerd.io group for reasons of conformance. Everybody heard that, right? Yeah. Okay, good. So we're kinda just now getting to the point where we are defining in GAMMA what conformance for a Gateway API mesh looks like. So hopefully real soon now, we'll be able to use the real Gateway API HTTPRoute object, because we will be able to have Linkerd pass the conformance tests for the Gateway API itself using the GAMMA stuff. We will still continue supporting policy.linkerd.io. Don't worry, you won't have to immediately go and switch everything over. Hopefully that conformance thing will be coming in 2.14, but I cannot yet promise that. In the long run, we expect Linkerd to deprecate the SMI extension. We also expect it to deprecate the older ServiceProfile CRD, but both of those are still things we're gonna support for some time going forward.
We have no plans to show up next week and say everyone must change their cluster config, so don't panic about that. All right, I believe, oh yes, gotchas. We should always talk about gotchas. There's only one big one, but the big one is that ServiceProfile and HTTPRoute don't compose. If you have a ServiceProfile that defines routes, it will take precedence over an HTTPRoute that conflicts with it. The reason for this is that the ServiceProfile is the older mechanism. We do not want it to be the case that you have a cluster running using the 2.12 mechanisms, and then somebody just randomly decides to throw an HTTPRoute into play and breaks everything. So this is going to be the case for the foreseeable future. The big changes here are that HTTPRoute will gradually gain more and more capability, so that by the time we actually fully deprecate ServiceProfile, you'll be able to just define an HTTPRoute that does all of the same things that your ServiceProfile did in the past. In 2.13, HTTPRoute is still pretty limited, because this is still pretty early days. So this is a great place to pay attention as we go forward. The next thing I'm going to do is, actually, yeah, let's talk about circuit breaking here, and then I'll talk more about the specifics of what these routes look like during the demo. Let me reiterate what Annie was saying: if you have questions, throw them in the chat. Now's a great time to get to some of these. In the meantime, I'm going to talk a little bit about circuit breaking. So circuit breaking, the idea here is you're sitting here, you're running along, everything is great. Then one of the endpoints of one of your workloads starts to fail for whatever reason. We don't really want to keep just hammering it with traffic while it's already failing. It would be better to stop routing traffic to the broken endpoint, give it a chance to recover, and then come back to it.
This is what circuit breaking is supposed to be able to do. You're supposed to be able to notice, oh, there have been too many consecutive failures, or oh, the success rate has fallen too low over the last minute or two or whatever, and then route traffic away from that workload endpoint until things get better. Sometime later, you'll try it again, see how things are going, and hopefully be able to bring it back in. Now, there's a little bit of hand-waving involved in the next slide or two. The reason for that is that 2.13 hasn't shipped yet, as those of you who have paid a lot of attention to Linkerd will have noticed. The demo that I'm doing is actually gonna be using the latest edge release, which has HTTPRoutes in it and has dynamic request routing, but doesn't actually have circuit breaking turned on. So on this next slide, I'm gonna talk a little bit about what we kind of expect this to look like in 2.13. But please note that point at the bottom: these are not yet set in stone, these things could change. In the beginning with 2.13, you will be configuring these using annotations. We will probably have annotations that allow you to do things like tell it, oh, I want you to do circuit breaking when you see more than some number of consecutive failures. We'll probably have an annotation for things like, how about if you see a certain number of failures over a certain span of time, consecutive or not, or if your success rate falls too low. So we expect that we will see annotations that look kind of like these. It is possible that these will change. So this is another good place to keep an eye open. All right.
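To make the shape of this concrete, here is a purely hypothetical sketch of what such annotations might look like on a Service. As the speaker stresses, the names were not set in stone at the time, so every annotation key below is an invented placeholder, not a real Linkerd API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: color
  namespace: faces
  annotations:
    # HYPOTHETICAL keys, for illustration only: trip the breaker after
    # some number of consecutive failures to an endpoint...
    example.linkerd.io/max-consecutive-failures: "7"
    # ...or when the success rate over a window falls too low.
    example.linkerd.io/min-success-rate: "0.9"
spec:
  selector:
    app: color
  ports:
  - port: 80
```

The point is the mechanism, not the names: annotations on a Service, read by the proxies, no new CRD required for the first pass.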
So really briefly, one of the interesting things that we've been learning about Linkerd is that as we look over people who are running it, there are a surprisingly large number of Linkerd installations that are not being kept up to date and don't have any practical way of finding out when something has gone wrong, and in fact are running with things being wrong. This is not just Linkerd, for the record. This is probably true of every service mesh deployment out there. If you go and look at any of the other ones, they probably also have lots of problems with keeping their installations up to date. This is a big problem for meshes, though, because the whole point of a mesh is that it's fundamentally a tool for adding security and reliability to your application, but we can't do that very well if Linkerd itself is not being kept up to date. So very quickly: we are opening Buoyant Cloud to the community. Previously Buoyant Cloud was entirely a paid product. Now with 2.13 we are opening up a free tier for it to try to help with this particular problem. Happy to answer more questions about that later, but for now let's go on and do the demo. In our demo, we're running a cluster. We have a single-page web app outside the cluster which is talking to a single service creatively called face. The face service talks to the smiley service and it also talks to the color service. The smiley service should always return a smiley face. The color service should always return the color green. The face service takes both of those things, puts them together, and then hands that back to the GUI. The GUI has a matrix of these things, and so if all is going well, you should always see a grid of smiley faces on green backgrounds. In our cluster, we also have a smiley2 service which returns a heart-eyed smiley and a color2 service which returns the color blue, but nobody's talking to those in the beginning.
We're using Emissary-ingress to mediate access from outside the cluster into the cluster, and much more relevantly here, we're using Linkerd to mediate all of the communications happening within the cluster, even communications to the two services that nobody is talking to yet. So let's see if the demo gods are gonna smile upon us, shall we? All right, we're gonna start by installing the most recent edge release of Linkerd, and if you look closely, you'll realize that this looks remarkably like the process for installing the latest production version of Linkerd. It's just that instead of using run.linkerd.io/install, we use install-edge. If we do that, it comes through and points out that I already had edge-23.3.3 downloaded, and so it's now gonna make that the default, and at this point on my machine, running linkerd will in fact get me the edge-23.3.3 CLI, not the stable 2.12.4 CLI. At this point, I don't actually have Linkerd installed into the cluster. I do have Emissary and the faces demo installed, just to make the demo a little bit quicker. So let's go ahead and install Linkerd first. We'll run linkerd check --pre. We get all green check marks; we can in fact install Linkerd into this k3d cluster, always a good sign. We will install the Linkerd CRDs, as you've seen from installing Linkerd 2.12, and then we'll install Linkerd itself. And then we'll run linkerd check again to wait for Linkerd to come up and be happy and be running. Nobody is asking me questions, so I have nothing to fill space with. That makes me so sad. Not so far, but let's hope that people will ask soon. Maybe so. I don't know, Annie, do you have any questions? That's always a tough question. Yeah, maybe I have a question. What are you the most excited about for the 2.13 release? You're actually gonna see it in the demo.
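Reconstructed from the narration, the install sequence looks roughly like this. It needs a live k3d (or other) cluster, and the edge version string will have moved on since, so treat it as a sketch of the workflow rather than something to paste blindly:

```shell
# Install the edge-channel CLI; the only difference from the stable
# instructions is install-edge instead of install.
curl -sSfL https://run.linkerd.io/install-edge | sh

linkerd check --pre                          # can we install into this cluster?
linkerd install --crds | kubectl apply -f -  # CRDs first...
linkerd install | kubectl apply -f -         # ...then the control plane
linkerd check                                # wait for everything to come up healthy
```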
The dynamic request routing stuff can do some exceptionally cool things, which is probably worth pointing out, actually, because I don't really know what people know about my background. But my background in the cloud-native world has always been about looking at this technology, not for the sake of the technology itself. Nobody runs Kubernetes just to say they're running Kubernetes. People run Kubernetes because they have a problem they want to solve. And I've always been looking at this from the perspective of, okay, if you're an application developer and you're trying to run an application in the cloud, how do we actually make that seamless? And how do we make that easy? And some of the stuff that you can do with dynamic request routing dovetails directly into that. And so I think that's really cool. All right, so we got that going. The faces demo is running. And now I'm gonna get both Emissary-ingress and the faces demo into the mesh. I'm gonna do that first by telling Linkerd, hey, anytime you see a pod appear in these two namespaces, go ahead and inject it into the mesh. And then I'm gonna do a rollout restart, again in both of those namespaces, to actually allow Linkerd to see pods being created and get them into the mesh. Normally doing this demo, I tended to do the restart and then the wait and then the restart and then the wait. So this time I'm trying restarting them both in parallel and waiting kind of in parallel, so we'll see if this is any faster. Silly Kubernetes tricks. Okay, so now Emissary is going okay and we can let the faces demo come up as well. And all of this is kind of boring from really everybody's perspective; you don't really get to see anything going on from here, but this looks good. All right, so at this point, everything should be in the mesh, and I should be able to show y'all a couple of web browsers showing the faces GUI actually running. One of my web browsers is going to be totally normal.
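The "anytime you see a pod appear in these namespaces, inject it" step uses Linkerd's standard injection annotation. A minimal sketch for the faces namespace (the emissary namespace gets the same annotation):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: faces
  annotations:
    # Tells Linkerd's proxy injector to add the sidecar to every pod
    # created in this namespace from now on. Existing pods only pick it
    # up on re-creation, hence the kubectl rollout restart step.
    linkerd.io/inject: enabled
```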
And in the other one, I'm using the ModHeader extension to insert the header x-faces-user: testuser into all the requests going down into the application, and we will use those later. So here's my normal browser. It says user unknown because it's not actually sending an x-faces-user header. And it is indeed showing us a grid of smiley faces on a green background, which is kind of nice. And here's my other browser. My other browser, you'll notice, says user testuser. This is the one that's inserting the x-faces-user header into the requests every time it fetches one of these faces. All right, so those are the same two browser windows, by the way; you can tell because I just selected that. Perfect, and we actually have a question. Oh my. Yeah, from Oliver. Show us what being in the mesh means. Let me come back to that, because I can. Yeah, I can show that. Let me go ahead and come back to that at the end after we run through and take a look at this stuff, just because we could go fairly deep down the rabbit hole on some of that, and I wanna make sure we get through the demo before we do that. Although it looks like we have plenty of time, so that's good. Okay, the simplest sort of dynamic request routing we can do is just a canary, where we're going to take some fraction of the traffic going to a particular workload, and we're gonna send it to some other workload. This is a pretty basic thing that you do for progressive delivery. It's a pretty basic thing you do just to make certain that a new version of your service is actually going to function before you just throw all the traffic at it. I know testing in prod is a thing, but maybe having some control over testing in prod is good. So here's an HTTPRoute that will do a canary rollout for us. A couple of things to pay attention to. This is the color canary, because we're going to do a canary of the color service. It lives in the faces namespace because the service that we want to affect lives in the faces namespace.
Its parentRef: basically, this is a route that's going to be attached to the Service named color in the faces namespace. And I want to call a little bit of attention to the word Service here. If you're familiar with Linkerd, you will also recognize that there are things in Linkerd called Servers, as opposed to Services. In this particular case, I want the Service. I'm basically saying, hey, Linkerd, anytime you see traffic that's trying to be sent to the cluster IP associated with the color Service, this route applies to that. Because we're talking about traffic to a Service's cluster IP, we also need to talk about the port, and the port here is the port number defined in the Service resource. That's why it's 80, as opposed to some cluster-based port number here. Of the traffic there, we're going to send 90% to the pods associated with the Service called color, and then we're going to send the other 10% to the pods associated with the Service named color2. And again, we talk about the port numbers, and these are, again, the port number defined by the Service. I'm hammering on that one a lot because it was very confusing to me when I first started working with this, and I got it very wrong. It felt natural to me that the backendRefs would talk about cluster ports. No, they're talking about Service ports still. Another thing that's worth pointing out is that these numbers actually don't have to sum to 100. The important thing is the ratio between the two numbers, but for me personally, I kind of like to make them sum to 100 because it makes it really easy for me to just look at them and go, oh yeah, percentages, I can deal with percentages. So let's go ahead and apply this and see what happens. What should happen is that 10% of the color traffic should in fact go to the color2 workload. The color2 workload will return blue instead of green.
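Put together, the route being walked through looks roughly like this. It uses the policy.linkerd.io copy of HTTPRoute discussed earlier; the exact version suffix (v1beta2 here) is an assumption and may differ between edge releases:

```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: color-canary
  namespace: faces          # same namespace as the Service it attaches to
spec:
  parentRefs:
  - name: color             # attach to the color Service's cluster IP...
    kind: Service
    group: core
    port: 80                # ...using the port from the Service resource
  rules:
  - backendRefs:
    - name: color           # 90% of requests to the original pods
      port: 80              # again, the Service port, not a pod port
      weight: 90
    - name: color2          # 10% to the canary
      port: 80
      weight: 10
```

As noted, only the 90:10 ratio matters; weights of 9 and 1 would behave identically.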
So we should start seeing 10% of the faces in these grids showing us blue backgrounds instead of green backgrounds. I shall leave it as an exercise to the reader to determine what is 10% of 16. But you can see, there we go, we're seeing a little bit of blue, still mostly green. So overall, this is working. Good, the demo gods are still smiling upon us so far. What's happened here, if we go back and take a look at the demo architecture: instead of having all of this traffic go over to the color service, we're adding another path where 10% of it is getting routed over here, happening in the mesh. I also want to point out that we are far away from Emissary at this point. Typically you do things like this by altering things at the ingress. Emissary would not be able to do this, because Emissary has absolutely no control over this traffic, but Linkerd can do it. Yeah, there is an audience question from Chewy: how does it actually split the traffic? Is it spinning up more pods in a respective sector? It is not. What's happening is, back at the demo architecture, I'm using Linky the Lobster, the Linkerd mascot, to represent the sidecars. Well, to kind of represent the sidecars, the sidecar proxies that are attached to these pods. The face workload actually only has one proxy, so I should really be showing this Linky going around talking to this Linky, but that just makes the diagram ugly, so I didn't do it that way. But what's happening is that the Linkerd proxy attached to the face workload is seeing a request coming from the face workload to the color workload, and this proxy knows about the HTTPRoute, knows about the weights on the two backendRefs, and it is making a decision about whether to forward the traffic to the color workload or to the color2 workload. So we haven't changed anything about the number of pods in the replica set. We haven't done anything different inside the pod.
We've simply told that one proxy that it should be doing something different with traffic routing, which is part of the reason why this is really powerful. It means that we don't have to consume more resources. We don't have to slow anything down any further. If that did not answer your question, Chewy, please elaborate. All right. So let's come back over here. And to further demonstrate that, we can go and edit those weights. There's nothing particularly magic about a 90-10 split. So if we change this to 50-50, we should now see half of them turn blue. And, you know, I mean, worth pointing out, we're talking about percentages, we're talking about randomness and, you know, stochastic splits and things like that. So we're not exactly going to always have eight blue backgrounds and eight green backgrounds, but we can look at this and see, yeah, you know, roughly 50%. I could also install Linkerd Viz and then go through and look in Linkerd Viz to see much more detailed information. But, you know, for 50-50 splits with stuff like this, one of the things I like about this demo is that you can just see it. Another thing that we can do is edit one of these backendRefs to have a weight of zero, which does the intuitive thing that you would expect, where it just makes all of the traffic go to the other one. A weight of zero means don't send any traffic here. And so now we see all blue backgrounds. And again, we've done this without changing the pods that are running. In fact, there's only one pod running for each of color and color2 right now. Don't do that in production, people. This is a very, very bad idea for anything but demos. On the other hand, running it in k3d in production is also, oh, okay, running it in k3d on your laptop is also a bad idea for production. Running it in k3s in a cluster someplace works really well. Okay, so, there you go. There's a really simple canary deployment.
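The final edit described above, with one weight taken to zero, leaves a rules block along these lines (same route as before, just the weights changed):

```yaml
  rules:
  - backendRefs:
    - name: color
      port: 80
      weight: 0     # weight 0: send no traffic to the original workload
    - name: color2
      port: 80
      weight: 100   # everything now goes to color2
```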
If there are any other questions about that, now would be a great time to bring those up too. No questions so far, but Chewy did say that's amazing, that's very clear, so. Oh good, I'm glad to hear that. Now, another thing that I mentioned earlier that I think is really cool is that you can also do A/B testing very far away from your ingress controller, down deep in the mesh. And so we're gonna do that one next. The way we're gonna do this one is that we're going to arrange it so that if we see that x-faces-user: testuser header on a request, then that is gonna get routed over to the smiley2 service, which will return the heart-eyed smiley instead of the normal smiley face. And again, we've got one browser window that is not sending that header, and we have one browser window that is sending that header. So, once I do this, it should be the case that one of our browsers shows the heart-eyes and the other one does not. So, let's take a look at the HTTPRoute for that. Once again, we are doing this in the faces namespace because we're working with workloads in the faces namespace. Same thing as before, we are attaching to a Service. Only this time we're attaching to the smiley service rather than the color service. This is making me realize I should really rename the face service just to, anyway, nevermind. So, this is a new one where we introduce this matches clause. This is saying: if you see a header named x-faces-user (you'll notice this is all lowercase, because per HTTP/2 normalization rules, headers get normalized to lowercase) and its value is exactly testuser, then we will match this backendRef and send it to the smiley2 service. Otherwise, we will send it on to the smiley service, the same one that we did before. So, if we do that, what we should see again is that the front browser window should end up still showing us the normal smiley faces, and the back browser window should end up showing us things that have heart-eyed smilies.
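The A/B route just described looks roughly like this. The route name smiley-ab is an invented label, the version suffix is again an assumption, and the header value testuser matches what the ModHeader browser is sending:

```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: smiley-ab
  namespace: faces
spec:
  parentRefs:
  - name: smiley
    kind: Service
    group: core
    port: 80
  rules:
  - matches:                # lowercase header name, per HTTP/2
    - headers:              # header normalization rules
      - name: x-faces-user
        value: testuser
    backendRefs:
    - name: smiley2         # matching requests get the heart-eyed smiley
      port: 80
  - backendRefs:
    - name: smiley          # everyone else gets the normal smiley
      port: 80
```

Two rules: the first matches only requests carrying the header, the second is the catch-all for everything else.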
And that is actually what we see. It's so nice when the demo actually works. I really appreciate that. And again, this is the same thing: we're not changing anything about the deployments. Both smiley workloads are still running; both of them are doing the same things. We're just affecting the routing. If we want, we can then go through and edit this as well. Same sort of thing: if we just delete all this stuff so that we're left only with the backendRef that points to smiley2, then everything will get the heart-eyed smilies rather than only the ones coming from the one browser. So this would be an example where you finished your A/B test. Everything is okay. You want the heart-eyed smilies; you don't want to use the normal ones anymore. We could of course edit this thing again and point them all back to the normal smilies, or just delete the route entirely. So there you go. There's a very simple demo of the dynamic request routing functionality in Linkerd 2.13. Like I said, I tend to think this is pretty cool because there's a bunch of stuff you can do with it that is really, really amazing, especially if you're accustomed to only having control of things with the ingress. There is one important gotcha that I'll point out here, that I really should have gone back and added to the demo but neglected to, which is that, and this is gonna sound really obvious: if you want to do header-based routing, the header you're after has to be present at the place where you're doing header-based routing. Right? So in this particular case, the Faces demo has to pass the x-faces-user header from the browser through the face service on to the smiley and color services. If it doesn't do that, the information that you want to be doing routing on is simply not present. Again, that's not anything specific to Linkerd. That's just the nature of trying to do header-based routing at a particular point in your application.
The headers have to be there. So with that said, let me see about showing some of the stuff that Oliver was asking. All right. So that's your typical, just show me all the pods that are running in the faces namespace. You know, actually I realize I lied. I'm actually running two replicas of everything in here rather than one. I thought I'd turned that back down to one, but okay, fine. So I have two of them. An interesting thing that you might notice here is that each of them says it has two containers rather than one container. And if I pick one of these, let's pick on the smiley service for a minute, and repeat that command with the correct namespace added, you can come through here and see, in our containers, all of this stuff. Yeah, don't show people your certificates, people. That's a horrible idea. But you can see that this container is running the Linkerd proxy rather than running my application code. And then there's another container in here that is actually running my application code. So the first thing: if you wanna make sure that something is in the mesh, you can always go down to this particular low-level place and take a look at what containers are actually present. And while Annie is asking the next thing that she wanted to ask, I am gonna install Linkerd Viz, which I didn't do earlier, and then I'll show you another way to answer the question. Yeah, perfect. So we have Jeep asking: how will Linkerd handle conflicting HTTPRoutes, assuming we can have multiple configurations? The Gateway API defines a very elaborate mechanism for resolving conflicts between Gateway API resources, and that is what Linkerd will follow. I am not actually gonna try to summarize that one, because some of the choices that they make, what's the right way to phrase this one? It's actually, you know what? I take it back. I will try to summarize that. It starts with: more specific things take precedence over less specific things.
So for example, if you have an HTTPRoute for a path of /foo, that will take precedence over an HTTPRoute with a path of /, because /foo is more specific. There are a lot of corner cases that get fascinatingly complex that I'm not going to try to talk about. I will instead refer you to the Gateway API documentation. The corner cases are all defined in a way that arranges it so it's really not possible to have a tie. There is always a route that will have greater precedence than the other one, and so there's always a way for one to win, and it's always a deterministic way for it to win, even though it can sometimes be very convoluted. Okay. So hopefully that answered that question. Yeah. And then Julie had another question, but we can also obviously go through all the various questions. Okay. So: use the linkerd viz stat command. In this case, I'm saying, hey, linkerd viz, give me stats on the namespace faces. And you will notice that it first says, oh, all of these are meshed. This is actually 11 out of 11 deployments, sorry, 11 out of 11 pods in this namespace in the mesh. Traffic through here currently has a success rate of 100%. We're doing 35.3 requests per second, which is a little higher than I would have expected, but that's okay. The fact that it's a little higher than I expected mostly tells me that I've kind of forgotten other things I have running here. We get to see the 50th percentile latency, and the 95th and the 99th, from which you can infer that, wow, the Faces application is really slow. And you would be right; it's designed that way. And the number of active TCP connections going on here. There's actually a bunch of other stuff we can do with that, but I think I'm going to stop there because I want to get to other questions. And yeah, linkerd viz is a great way to go run down lots and lots of rabbit holes, as you can see from its help. There you go. There's all of them.
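Going back to the conflicting-routes question for a moment, the /foo-versus-/ example can be sketched as two Gateway API HTTPRoutes. This is a hedged sketch, not the demo's actual manifests: the route names, namespace, Service names, and ports are invented for illustration, and the `policy.linkerd.io` API version may differ depending on your Linkerd release.

```yaml
# Two routes attached to the same Service. For a request to /foo, both
# match, but Gateway API conflict resolution gives the more specific
# /foo PathPrefix precedence over the catch-all / route.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: foo-route            # hypothetical name
  namespace: faces
spec:
  parentRefs:
    - name: color            # hypothetical parent Service
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /foo
      backendRefs:
        - name: color2       # hypothetical backend
          port: 80
---
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: catch-all-route      # hypothetical name
  namespace: faces
spec:
  parentRefs:
    - name: color
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: color
          port: 80
```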
And I can see that in emissary, we're only running two pods, both of them meshed. Faces is fully meshed. Linkerd itself is of course meshed, as is Linkerd Viz. So you can get a really quick overview of what's in the mesh and what is not right now. Okay. So if people have more questions about that, that would be good. I'm going to do one other thing really quickly here. Yeah. I was looking over, trying to get to the Viz dashboard, and the Viz dashboard is not working right now, which honestly doesn't surprise me too much because I haven't tested it in this configuration in quite some time. So, moving on. Does Linkerd automatically mount the containers per pod? If you look at this namespace, this annotation tells Linkerd: whenever you see a pod created in this namespace, go ahead and inject that pod into Linkerd's mesh, which really means inject the proxy container into the pod. So whenever that annotation is present on the namespace, or on a deployment or something (actually, really, you need to put it on the pod template in a deployment so that it appears on the pod), as long as it's present on the namespace or on the pod, Linkerd will automatically go through and do everything there is to do. Did that make sense? Describe the dashboard: let's see if I've got the dashboard running someplace else that I can quickly take a look at. Yeah, Chewy, one of the things worth pointing out here is that Linkerd is designed to be operationally simple. Operationally simple means it should be really easy to get things into the mesh; you should not have to do a lot of work for that. And so there are a couple of different ways to do this depending on your workflow, but they're all pretty simple. Jibus faucet, or Jibus faucet, sorry if I'm mispronouncing your name: is there a cleanup procedure or technique? Delete the resources, which I realize sounds specious, but for example, there you go. There are all my HTTPRoutes.
If I want to go through and get rid of one of them, I can just delete it. And in fact, let's do that, Chewy. So if I delete the color canary (I should probably point out, yes, I have aliased k to kubectl; sorry, I should have said that earlier) in the faces namespace, we'll instantly go back to having green backgrounds and everything. And that's probably the easiest way to go through and clean things up. Now, coming back to the dashboard for a moment. That's the wrong one. There we go. Here's the Linkerd Viz dashboard. I have to be careful not to poke too much at things, because this is the live demo for kubecrash.io, which is happening right now. But that's okay, I'm not going to change anything or anything like that, right? In this case, this is roughly what the dashboard looks like, where you can see that for the kubecrash demo, we have quite a few more namespaces, and we were a little more selective about what we brought into the mesh and what we did not. But if I look at their faces namespace, I can come in here and immediately see: oh, the face service talks to the smiley service and to the color service. And we can come down here and see there's color, color2, face, the faces-gui (that's the one that actually serves the HTML for the single-page web app). You can see that the faces-gui isn't really getting any traffic, because we tend to load the web app once and then never hit it again. You can see that the color, face, and smiley services are all hovering around eight requests per second, which makes sense because the Faces SPA has 16 cells that are each refreshed nominally every two seconds; that works out to eight requests per second. We can see that color2 is currently taking no traffic, nor is smiley2. And that tells us something about how this particular demo configuration is set up at this very moment. So yeah, that's basically what the dashboard looks like.
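For reference, a route like the color canary deleted above is, in spirit, a weighted HTTPRoute along these lines. This is an assumption-laden sketch, not the demo's actual manifest: the name, namespace, backends, weights, and API version are guesses for illustration.

```yaml
# Hypothetical canary route: splits the color Service's traffic between
# the original color backend and the color2 canary. Deleting this route
# sends all traffic back to the original backend.
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: color-canary       # assumed name
  namespace: faces
spec:
  parentRefs:
    - name: color          # assumed parent Service
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: color      # original backend
          port: 80
          weight: 90
        - name: color2     # canary backend
          port: 80
          weight: 10
```

Cleanup is then just a delete of the resource, e.g. `kubectl delete httproute color-canary -n faces` (again, assuming that name).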
We can also do things like: if we click into the face deployment, we can see who's talking to it, and we can see whether it's working. And we can come down here and see the top requests happening, where for the most part, Emissary-ingress is calling over to the /cell/ endpoint, and then face is calling the / endpoint on color and smiley. Something just happened there. Oh no, Prometheus showed up, that's right. So yeah, there's a bunch of information in there. And we can even click on this thing and get live traffic showing up as we go. So the Viz dashboard's pretty cool, but you can do all of this from the command line as well. All right. Great. And then there was a comment-slash-question from Julie as well, about how their company was looking into Istio several years ago and was scared off by the complexity. So we very often get the question: hey, service mesh has this reputation for horrible, horrible complexity. Is that warranted? And the answer kind of depends on what service mesh you look at. Linkerd is designed to be operationally simple, and we like to think we actually succeed at that. We're doing a thing at KubeCon coming up in Amsterdam, the first ever, inaugural Linkerd Day, where you can drop by with your laptop, fire up Linkerd, and have it working in five minutes, which will get you a shiny new Linkerd hat. Sadly, I don't have a Linkerd hat on me to show you what it looks like, which is what I get for letting my kids steal my Linkerd hat, huh? But we hear regularly from companies that have spent weeks or months trying to get a proof of concept working with service meshes and failed, and then they get it working with Linkerd in hours. So operational simplicity is a big deal for us. It's a thing we take very seriously.
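On the earlier auto-injection question, the annotation in play is `linkerd.io/inject`. A minimal sketch follows, with the caveat that the namespace name, workload, and image are just examples, not the demo's real resources:

```yaml
# Namespace-wide: every pod created in this namespace gets the
# linkerd-proxy container injected automatically.
apiVersion: v1
kind: Namespace
metadata:
  name: faces
  annotations:
    linkerd.io/inject: enabled
---
# Per workload: the annotation goes on the pod template, so it ends up
# on the pods themselves rather than only on the Deployment object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face
spec:
  replicas: 1
  selector:
    matchLabels:
      app: face
  template:
    metadata:
      labels:
        app: face
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
        - name: face
          image: example/face:latest   # hypothetical image
```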
All right, if there are no other questions... or maybe a different way to put that is: go ahead and get your questions in if you have any more. In the meantime, yes, the inaugural Linkerd Day is happening on Tuesday, day zero of KubeCon Amsterdam; hope to see you there. And if you want to try out Buoyant Cloud, go to buoyant.io/demo; we hope you will. And on May 18th, in the Service Mesh Academy, we will be doing a deeper dive into circuit breakers and dynamic request routing, in much more detail than we did here. Other than that, that's how to reach me. I'm also Flynn on the CNCF Slack, if that's easier, if you're already on that one, or you can drop me a line. And yeah, if there's nothing else, we actually finished with time to spare this time; I think that might be the first time this has happened. That works too, but there could be a lot of questions coming in. So let's see. Great, great demo, by the way. Really great stuff. And as Flynn mentioned there, now is the perfect time to ask questions. We've already had a lot of questions, which is always nice to see, but if you have any more, start typing them away so we can get to them as well. We have about 10 minutes if you have any questions, but obviously we can wrap up early as well, if everyone already got all of their questions in. Yeah, and Linkerd Day sounds wonderful; looking forward to that. Linkerd Day is actually really interesting: as far as we know, it is the only day zero event at KubeCon that is all end users, no vendor talks at all. And that should be kind of a lot of fun, really. So yeah. And there's a question asking: is there a good hands-on tutorial for Linkerd? There is. And this is one of those where I should really just remember this URL, and I don't, so hang on a second while I get it for you. All right. So I've stuck a link in to get copied over to the chat.
And yeah, there's a Getting Started quick-start guide that is pretty quick. In the meantime, our backstage magicians, hopefully... there we go, will go through and copy it up. Obviously that is the 2.12 Getting Started, because 2.12 is the current stable release. https://linkerd.io/2.13/getting-started/ will work as soon as 2.13 is released. I didn't explicitly mention this when I went through the slides, but Linkerd 2.13 is happening soon, like really, really soon now. I just knocked on my wooden keyboard case, because, well, yeah. But yeah, I'm looking forward to 2.13 being out. And I'm also looking forward to Linkerd Day, which should be cool. Yeah, looking forward to that. KubeCon is going to be amazing as a whole, I think. So everyone... oh, the in-person one is sold out already. I was just about to say get your tickets, but online, everyone can still attend, all of that. It'll all be recorded and available after the fact. So that is good. I was a little bit surprised to find out how early it sold out, but I probably shouldn't be surprised by that. Yeah, a big change, of course, being that for all the day zero events, now you get one ticket and you can go to all of them, which we're definitely looking forward to. For sure, it's very nice. Very easy for the attendee, for sure. Right, right. Okay, I think we're approaching final call for questions. We could talk about KubeCon here for probably hours, but why not? Yeah, it might not be all that helpful. Yeah, exactly. But if you're typing a question, send it in now. Yeah, and if you are at KubeCon, whether you have questions or not, you can always drop by the Buoyant booth. There's... sorry, Buoyant will have a booth where we will be happy to talk to you about all things Linkerd. There's also going to be a Linkerd booth in the CNCF Project Pavilion, where we will also be happy to talk to you about all things Linkerd. I will be at one of those booths for most of the con, I suspect.
I'm not sure if that's the sort of thing that makes people more or less likely to come by the booths, but you know, whichever, it's a fact. Sounds good, sounds good. It's always good to be available, so that's nice. And there will also be Linkerd maintainers and business folks on the Buoyant side and all that. So a lot of opportunities to get answers both to technical questions about Linkerd and to business questions about Buoyant. Perfect. And since we see probably no more questions coming in, we'll start wrapping up soon. But before that, I do want to say that I saw a really fun comment after the demo, where Parza said "demo gods be smiling," and I enjoyed that and had a bit of a chuckle here. That was very nice, a good smiling demo, yeah. It's always so nice when the demo gods are smiling upon us, especially when the demo is about smiling faces, yes. Exactly, perfect. But yeah, let's start wrapping up. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about Linkerd 2.13 and the Gateway API. And we really loved the interaction and the questions from the audience. Tune in in the coming weeks as well, because we bring you the latest Cloud Native code every Wednesday, and as I said, we have some really great sessions coming up, so tune in then. Thank you for joining us today, and I'll see you all next week. One more link that I'd like to get pasted, if you can. Oh, perfect. This was in the presentation as well, but this way maybe people will be able to copy and paste it if they want. That's the direct link to the source code for this demo. Okay, perfect. Well, that's even better. Everyone can go in there and see how it goes. Cool, thank you. I appreciate it. Looking forward to next time. Thank you, everyone.