Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Maddie Talvasto, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. So join us every Wednesday to watch live. And if you want to host these kinds of sessions yourself, on September 21st the online programs calendar for the next quarter will be made available again, so look forward to that. This week we have Jason Morgan here with us to talk about locking down your Kubernetes cluster with Linkerd. Very exciting. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. So please do not add anything to chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants, as well as the presenters. But with that, I'll hand it over to Jason to kick off today's presentation.

Oh, awesome. Thank you so much. Hey, folks. Hello and welcome. Today I'm going to talk to you about how we're going to lock down a cluster with Linkerd. I'm going to show you how to set up mTLS. I'm going to show you how to restrict traffic so that only traffic within a namespace works. And then I'm going to show you how to use the new HTTP routes that come from Linkerd 2.12 and, actually, the Gateway API; I'm going to show you how to use that to specify, by verb and by path, what can happen. And hello, people from Raleigh and India, and hopefully other folks. I'm in Washington, DC. It's great to meet you all. Is it okay if I share some slides to start with? Yeah, that's perfect.

All right. So before I even do this, let me tell you who I am. I'm Jason Morgan. I am a technical evangelist for Buoyant, the company that makes the Linkerd project. It's my job to tell folks how awesome Linkerd is and try to encourage you to use it. So today I'm going to be talking about locking down clusters. But before I do that, I really have to say what authorization policy is and what I'm talking about. Well, the standard setup in Kubernetes is to allow traffic to and from any pod. The standard setup in Linkerd is also to allow traffic to and from any pod, with the caveat that when you're using Linkerd, you have mTLS everywhere. So authorization policy refers to how we restrict who's allowed to talk to whom in your cluster. And we call it authorization policy because things that aren't authorized don't get to work.

So let's go to the next slide here. For clarity, with Linkerd we are using service mesh based policy to restrict traffic. So what does that mean? Well, it means that we need Linkerd in the loop. We can restrict traffic to pods that have the Linkerd proxy, because that's how Linkerd does everything it does. We can restrict individual requests. That is, we can go and say, hey, listen, this service account is allowed to talk to this web app on this port, using this verb, and hitting this path. So yeah, we can do it in a very fine-grained way. And just a quick heads up so we're clear: what is authorization policy versus network policy? Network policy, at least when I think of it, I think of as a firewall. I think of application policy, or service mesh policy, as layer seven.
So what layer are we operating at? With Linkerd, we use the workload identity. In general, in Linkerd every pod gets its identity based on the service account that you configure in Kubernetes, and we'll show you a little bit of that when we get into the demo. It automatically includes encryption. It's enforced at the pod level: because we've got the proxy running beside your application, we enforce it right there at the individual pod level. And we use layer seven language and layer seven semantics. Hopefully you'll think, after today, that it's pretty easy.

And that's actually all I'm going to do for slides. Just as a note, as we go today there's going to be a lot of stuff happening. The more you can interrupt me and ask questions and get clarification, the better this will go for everyone, so please feel free to interject whenever you can. And for those who aren't aware, here's our Linkerd site. You can go to linkerd.io to check out Linkerd, and we've got a getting started guide. If this is over your head and you're not sure what I'm talking about, check out our getting started guide. It'll show you how to get up and running with Linkerd in 30 minutes or less.

So I want to talk about what we're going to do today. I've got a cluster. It's got an application called BooksApp. It's got the Linkerd control plane, and it's got the Linkerd dashboard installed. What we're going to do is go to this BooksApp namespace and set up, effectively, namespace jail. We're going to make sure that only things inside this namespace can talk to each other, with the exception that we will allow our dashboard to talk to the applications in this namespace so that we can get some statistics. I hope that makes sense; it's a great time to say something if it doesn't. And then we're going to go do a demo and I'm going to show you how that works.

Okay. For reference, our BooksApp is broken up into four parts. We have a traffic generation service, we have a web front end, and we have two backends, authors and books, that power our site.

All right. So I'm hoping you all can see my terminal. Hello from Nigeria. Oh, awesome. So I'm hoping you all can see my terminal. At the top here is where I'm actually going to be putting in commands; at the bottom left and bottom right I'm just going to be showing you some watches. The bottom left is all the pods that are running in my cluster, because we're going to do some stuff and I want you to see what's happening with our pods as we do it. The bottom right is going to be the current state of authorization policies, so what is authorized in Linkerd for the BooksApp namespace.

So we're going to start. Our BooksApp pods exist, but none of them are part of the service mesh; that is, none of them have a proxy running beside them. We can tell this because when we look at them, they have one container per pod, and the proxy adds a second container. So for something to be in the mesh, you need at least two containers per pod. So let's fix that. I'm going to grab the deployments for the BooksApp, and I'm just going to add an annotation.

Yeah, there's a first question from the audience: isn't something similar covered by Istio? If it is, what's the difference? Yeah, great question. Let me go a little bit further and then I'm going to hop into that question. So first, let's get this running. I want to talk about what I'm doing here: I get the deployments and I'm going to add one line to each YAML manifest.
That line is going to say linkerd.io/inject: enabled. It's just going to tell the Linkerd admission webhook to go ahead and modify these deployments. So let's do that, and while that's going, I'll answer your question. So now we see new pods getting created, and they've got an additional proxy container.

So yeah, Istio and Linkerd are both CNCF projects now. They're both service meshes, and they do some really similar things. The main difference is that Linkerd, and hopefully you'll see this over the course of this live stream, is really easy to use. We also believe that Linkerd performs a lot better than Istio, because we use a different proxy.

Okay, so we've got our app injected and it's running. Let's go ahead and, on another screen, run a little port forward and take a look at our app. We're running this locally. We can see that I've got an app, and it's working. One thing to notice: that was pretty easy, right? And this is actually the start of our differences with other service meshes. To get up and running with Linkerd, you don't need to use any custom resource definitions or add complexity like that. Yeah, so great question, Gara: Linkerd doesn't require you to use gateways, virtual services, anything like that. The core Linkerd just uses Kubernetes services and an annotation to set up mTLS, and we're off and running. So with no custom resources, my app still works, and I can get some good statistics about it. If I go back here and launch my dashboard, we can see some details about BooksApp. We can see how it works and how it's talking to the various components, which hopefully looks pretty similar to what you're seeing here, and we can get some details about what's going on in our environment.

All right, back to the actual work of locking this down. So to start, what I've done is I've got mTLS on all my connections, even though everything in my cluster is still allowed to talk to everything else right now. When we look at our policies in the bottom right, we can see that we've got two policies called default all-unauthenticated, one for the main route and one for the probe, the probe just being that health check. And we can see that nothing is unauthorized, because this default allows everything to occur all the time. But we're going to change that right now. We can look at our deployment and see some statistics about what's going on; this is just some high-level detail about the traffic here in the BooksApp namespace.

Going beyond that, right now we're going to start getting into custom resources and advanced configuration for Linkerd. The first thing I want to do is set a policy inside my BooksApp namespace that says your default behavior should be to deny traffic. We don't want anything to work unless we tell it to work. All right, what's notable here, and let's actually go change what we're showing, what's notable is that even though I turned on my default deny policy, I'm still seeing traffic flow through. The reason for this is that the default policy for a given proxy is set at startup time for that proxy.
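For reference, a rough sketch of the two annotations in play here, assuming the demo's booksapp namespace; the deployment shown is illustrative rather than the exact BooksApp manifest:

```yaml
# Sketch: the one-line injection annotation goes on the pod template,
# so the Linkerd webhook adds the proxy when new pods are created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp              # illustrative; one of the BooksApp deployments
  namespace: booksapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
      annotations:
        linkerd.io/inject: enabled         # the added line
    spec:
      containers:
        - name: webapp
          image: buoyantio/booksapp:latest # placeholder image tag
---
# Sketch: the namespace-level default deny discussed next. Proxies read
# this default at startup, which is why a rollout restart is needed.
apiVersion: v1
kind: Namespace
metadata:
  name: booksapp
  annotations:
    config.linkerd.io/default-inbound-policy: deny
```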
So this is a really important caveat if you're watching: when you set a default policy, you have to restart pods in order for that policy to take effect. We can see that the pods are still running, and we can see the stats from the deployment; sorry, I'm showing you that already. Now, if I trigger a rollout restart, what we're going to see is all the pods restart and all our traffic totally fall off and die.

I just want to show you something really quick. If we look at our pods: if you used policy in Linkerd 2.11, you're going to expect different behavior than what you see here, so let's talk about this really quick. We see all of our pods restart and stay ready. That is because the default behavior in Linkerd 2.12 is to allow health checks to keep working all the time, except in one very special case that we'll get to at the end. So it's going to keep your readiness and liveness probes succeeding even though the app is receiving no traffic. One of the unfortunate consequences of this is that these numbers are going to slowly taper off to zero and then disappear, and all of our stats from the dashboard for this app are going to die; they're going to disappear. What's happened is we've told Linkerd to deny all traffic and we haven't authorized anything. So we've got our deny working great, all the traffic is down, and now we're going to need to authorize something explicitly so that we can get some traffic back. I hope this is still making sense. Folks in the chat, if you can give me the occasional thumbs up if you're getting it, I would be really grateful.

Yeah, great timing. Yes, it does cause downtime if you aren't prepared. If you do this without creating the right policy first, you're going to take downtime. I'm doing this step by step in my example to show you how it works, but this can easily be a one-shot go where we apply everything immediately and it all works. When we first announced policy, my boss, you know, said loudly: this is the biggest foot gun we've ever given Linkerd users. This is a great way to create an outage with your service mesh if you're not careful about what you're doing. That being said, there is no requirement to take downtime to set up policy. You can actually do it all in advance and then go ahead and set something like a default deny.

So what I'm going to do is start allowing some traffic back. The first thing I'm going to do is create a server resource. We've got a couple of resources in play here; I think we're dealing with a total of six custom resource definitions that you have to deal with in Linkerd. So it's more than none but less than 15, so it's not that much to handle. And again, you don't need to deal with this unless you're trying to add in policy. So we're going to create a server for our admin port. This is going to map an individual port in our application to an object that has policy applied to it, and I'll show you the YAML in a second. I'm just going to go ahead and apply them and get that started. So after I make a server object, I'm going to create a policy to start allowing admin traffic, so Linkerd's dashboard can start understanding what's happening.
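Since the YAML goes by quickly on screen, here is roughly what that admin-port server and its authorization look like; a sketch only, with object names assumed, and the API versions may differ slightly between Linkerd releases:

```yaml
# Sketch: a Server claiming the proxy admin port on every pod in the namespace.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: booksapp
  name: linkerd-admin
spec:
  podSelector:
    matchLabels: {}            # empty selector: every pod in booksapp
  port: linkerd-admin          # the named admin port on the injected proxy
  proxyProtocol: HTTP/2        # declare the protocol so detection is skipped
---
# Sketch: authorize the viz extension's collectors to hit that port.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: booksapp
  name: linkerd-admin-viz
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: linkerd-admin
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: linkerd-viz-collectors
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: booksapp
  name: linkerd-viz-collectors
spec:
  identities:
    # Identities follow <serviceaccount>.<namespace>.serviceaccount.identity.linkerd.<trust-domain>
    - prometheus.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
    - tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
```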
So now that I've got this, we're going to start seeing some statistics come back. It's going to take a second, but we're going to see statistics come back, because now we've authorized Linkerd to ask about the admin port.

So let me show you what we did here. First, I created, let me make this a little smaller, first I created a server. What it did was look for pods in the namespace: inside of its namespace, it was looking for any pod that matched any label, and specifically pods that had a port called linkerd-admin. And it told Linkerd that, hey, on that port, the traffic is HTTP/2. So instead of having Linkerd try to detect what the traffic is on this port, we just told it explicitly so that it can do what it's doing a little bit faster. And then we apply a policy to this port. So let's look at that. The next thing we do is create a policy, and what we're saying here is that for that server in the booksapp namespace, we want to accept meshed TLS connections, and specifically meshed TLS connections from our Linkerd Prometheus service account, which is the thing that collects the data, and from our tap service account, which shows some cool metrics about what's going on in our app.

Oh, we're not broadcasting? Okay, we are broadcasting to YouTube, but LinkedIn is down. Got it. But, you know, we're not there right now. Okay, great.

So that's what we did. We've now allowed, let's go back to our diagram real quick, we have now allowed the Viz extension to talk to BooksApp. But right now, all of these other links here are down; they are all being denied by policy. The only connection is from our Viz extension, from this linkerd-viz namespace, and specifically only the Prometheus and tap service accounts are allowed to open connections here. That's why, when we look at Linkerd, we have data, but there's not much going on here, because our app's pretty quiet.

So let's show you what's next. Now that we've got our initial stats working, we're going to go ahead and allow some in-app traffic. What I have is four apps, and one port per app, or actually three ports, because our traffic generation service doesn't accept any traffic. So for authors, books, and web app, we're going to define a server that claims the actual application port on each of these instances. We created one for authors, one for books, and one for our web app. They're all pretty much identical, so I'm just going to show you one of them. And then, after we create those, we're going to set up a policy that allows only service accounts within our namespace to talk to other services in the namespace.

So let's show you how that's done. First off, what you're going to see now is some new things. Right now, in the unauthorized column, you're going to see those numbers start to drop, and you're going to see success rates and requests per second increase for actual authorized requests. Same thing on the left-hand side: we can see that our app is talking to itself. Of course, our app is broken; it's a demo app, and it's broken on purpose. If you want to see more about how that works, I can send a link to a talk I did on debugging applications with your service mesh. So we've got some traffic going.
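As a reference for the "one of them" being shown, here is roughly what the authors server looks like; a sketch, with the labels and port name following the BooksApp manifests as described next:

```yaml
# Sketch: claim the authors application port so policy can be attached to it.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: booksapp
  name: authors
spec:
  podSelector:
    matchLabels:
      app: authors
      project: booksapp
  port: service                # the named container port in the authors deployment
  proxyProtocol: HTTP/1        # optional, but set explicitly to skip protocol detection
```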
So let's look at these objects. First, we're going to look at the authors server. This is what's going to tell Linkerd, hey, listen, map to the application port on this application. So we give it some data: what are you, what namespace are you going to be in, how do you find your pods? Well, you look for a pod that matches the app name, authors, and the project, booksapp. You're looking for a port called service; that's the name of the port that you define in your YAML manifest on your deployment. And then we give it the proxy protocol, which we don't have to set, but I like to set it so that we can skip protocol detection.

Once we do that, we're going to actually allow some traffic. So this is the part that I'm excited about. We say we want a policy called booksapp-only, and we want it to exist in the namespace. So we're saying: if you are a policy target in the booksapp namespace, we want you to use this booksapp-accounts meshTLS authentication. And this is the meshTLS authentication object. So this is another new custom resource definition. We had servers for picking ports. We have authorization policies for mapping policy to servers. And then we have two types of authentication objects: meshTLS authentication, for which service accounts we should allow to talk to this port, and network authentication, for which IP ranges we should allow to talk to this port, and we'll see one of those here in a minute. What I've done here is set an identity rule: if the identity of the workload certificate that's part of the mTLS connection, the one that's auto-generated by Linkerd and tied to the Kubernetes service account, matches anything within the booksapp namespace, we want to allow traffic on the application port. So every app can now talk to every app, which is why we see traffic here. And if we go back to our diagram one more time, we can see we've got these links basically set back up, so these apps can now begin talking to each other again.

So we've restored basic traffic. Right now, in what, 20 minutes, we've gone from nothing to mTLS between every single app inside the booksapp namespace, where only explicitly authorized connections between our apps are allowed. And if we look at our BooksApp here, I'm just going to restart my port forward, we can go check out some of the books. We can view our authors. We can do whatever we want. We can create a new book, create a new author. It's going to be me, because why not? We can create a new author; that's cool. Now add a book. Shoot, I should have thought this through. "How to Linkerd?" Page count: five. It's a short book, but a good one. Oh, it doesn't work. It's okay, we'll fix that in a minute. But we're able to create authors and do things within our application.

I hope, if you're watching, you're slightly psyched about this and you see that it's not a huge journey. Going back to those objects, really, what did we do here? We created servers that mapped the ports on my various applications, and then we created a policy that said, if you're in the namespace, you can accept calls from any meshed, mTLS-verified app in the namespace.
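Spelled out, that namespace-wide authorization pair looks roughly like this; a sketch, with the object names assumed from the demo:

```yaml
# Sketch: allow any meshed workload whose identity lives in booksapp.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: booksapp
  name: booksapp-only
spec:
  targetRef:
    kind: Namespace
    name: booksapp             # applies across the namespace rather than to one Server
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: booksapp-accounts
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: booksapp
  name: booksapp-accounts
spec:
  identities:
    # Any service account identity in the booksapp namespace.
    - "*.booksapp.serviceaccount.identity.linkerd.cluster.local"
```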
So nothing that isn't mTLS'd is going to be allowed in, beyond the liveness and readiness probes from the kubelet. Nothing else is allowed in. So you've already drastically hardened this down. Cool. I'm open to questions. That's cool, works for me, but audience, if there's any questions, concerns, comments, anything, just send a message into the chat and we'll get them answered. Yeah, if I get through this too fast, folks, you're in for terrible networking jokes and more slides. So up to you.

And there's a comment from the audience saying awesome, so that looks good. But then there's also a question: is it possible to create stateful policies rather than stateless, similar to what a firewall does? Yeah, would you be able to rephrase that? In general, Linkerd policy is not IP-based. That being said, we can specifically authorize calls from IP ranges. What we're doing with policy in Linkerd is validating the identity of a workload based on that mutual TLS that Linkerd got us for our cluster. So when we set up Linkerd, when we put the proxies in, every proxy gets an individual workload certificate, and that workload certificate is tied to its Kubernetes identity. Mutual TLS means both sides of the conversation: we have encryption between the pods, but we also have the identity validated for each workload. And we can use that identity, from the server side of the conversation, to decide: should we accept this request? So, Hossam, and I hope I said your name right, I don't understand what you mean by stateful policies versus stateless. These policies will survive application restarts, if that's what you're asking; it's just something that you store in the Kubernetes API and that the Linkerd proxy will check when it authorizes a given request. But I'm happy to dive in more; I clearly misunderstood, so if you can explain it to me in a different way, I'm happy to dive in more.

Okay. So now that we have that, let's lock it down even more. Say we decide our authors service is a really sensitive service. Going back to our little diagram: we're okay if traffic talks to web, web talks to authors, books talks to authors, all that stuff. But what we want to do is make sure that if you're talking to authors, only certain accounts are allowed to do certain things. So we don't want the traffic generator talking to authors, and we specifically want to specify who can do what on which path. So we're going to create some policies that use HTTP routes, which are, oh, sorry, let me step back. In Linkerd 2.12, another big part of what we did is that we're beginning to adopt the Gateway API specification, which we're really excited about; there's great work coming out of the Gateway API group. And we're using HTTP routes to actually build policy for Linkerd. As we look at what we're doing next with Linkerd, we want to continue to use Gateway API specifications to do that.

Okay. So let's go back to our demo. Let's isolate authors. And I want to show you a little bit about our first kind of edge case.
So when we built these connections originally, what we saw is that all of our pods stayed ready, because by default Linkerd sets a default exemption for liveness checks and readiness checks, what we call probes. But when you set up HTTP routes, you have to explicitly authorize them. Hold on, I'm going to change this; now I just want to get the pods in the booksapp namespace. So let's take a quick look at it. We're going to create a route for authors, and we're going to create a policy. And you know what, I didn't do this right, so give me one sec. I just want to show you what we did here. Sorry, folks. Let's just look at this object.

So this is the thing that I created, and once I created it, we saw the authors service go from ready to not ready; one of our application pods became unready. What happened is that when we create an HTTP route, it overrides the exemptions that we make for health checking. So let's take a look at this route really quick. We see this route, and it defines what server it applies to: it's attached to that authors server I defined previously, and it's saying, hey, on either of these paths you can do stuff. But when we created this route, we didn't also create the exemption for the probes, so we're going to have to fix that next.

So let's create a probe exemption. Here I've got a new document called authors-probe; let's actually look at that. It will allow the health check probes to pass. So we're going to create this and then we'll talk about what we did. The authors-probe manifest creates a route that specifies the health check path, which, if you look at the YAML manifest for our app, is /ping. It creates a network authentication object, and it maps that network authentication to the route. So what we're saying is: hey, listen, if you're not in the mesh, but you have any IP address that the cluster could possibly have, we're going to let you hit that health check URL. And this is our mapping object that actually builds that. I hope this makes sense. We're going to hide this now. And after we created that, we saw the authors service become healthy once again, because the kubelet was able to check in on that ping address.

Now, beyond that, let's actually go look at our app again. If we go look at an author now and try to add a book, we're going to get a failure, because right now our web app isn't allowed to do anything but a GET. We only authorized GET; we didn't authorize PUT or DELETE. So only the GET verb is allowed, and we fail when we try to update. So we're going from coarse-grained, where we locked down the namespace, to really fine-grained on just the authors application, where we're setting a different policy per verb.

Okay. So let's keep going. We're going to create a route that allows modifications. And again, I should show you what this actually looks like, so let me fix that.
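Before moving on to the write side, here is roughly what the read-only route and the probe exemption just described look like; a sketch, where the paths, object names, and the HTTPRoute API version are assumptions that vary between Linkerd releases:

```yaml
# Sketch: a GET-only route attached to the authors Server.
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  namespace: booksapp
  name: authors-get
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: authors
  rules:
    - matches:
        - path:
            value: /authors.json        # illustrative BooksApp read paths
          method: GET
        - path:
            type: PathPrefix
            value: /authors/
          method: GET
---
# Sketch: a separate route for the kubelet's /ping health checks...
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  namespace: booksapp
  name: authors-probe
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: authors
  rules:
    - matches:
        - path:
            value: /ping
          method: GET
---
# ...authorized for any source IP the cluster could have, meshed or not.
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  namespace: booksapp
  name: cluster-network
spec:
  networks:
    - cidr: 0.0.0.0/0
    - cidr: "::/0"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: booksapp
  name: authors-probe
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-probe
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: NetworkAuthentication
      name: cluster-network
```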
When we look at the modify route here, what we're going to see is: hey, listen, for things that use this route, you're going to be able to do DELETEs, PUTs, and POSTs on these paths. And when we create it here, we see that our modify route gets created. Great. That means that we still don't have... yeah, Martin, that's a great question. I'm going to get to that just as soon as I'm finished here. Thank you so much for sharing that. Sorry, I didn't mean to speak over you there.

So now that we've created the route, we're going to apply a policy that will attach that route to our server and set some rules here. So we're going to create it, and let me show you what we did. We took that route we just created, that modify-route route, which is kind of a redundant name, but whatever, and we said: hey, listen, restrict yourself to whoever's mentioned here in the authors-modify authentication. And specifically, what we're saying is that we're going to allow the web app to do this, and only the web app. So books isn't allowed to use PUTs or DELETEs or POSTs; books is only allowed to do GETs. Make sense? I hope that makes sense. And that is the end of our demo. Exciting.

Yeah. What we did here over the last 35 minutes, just to clarify: we took our app that was working, and we didn't do anything to adapt the app to our service mesh. The core belief in Linkerd is that if you have an app and it works in Kubernetes, you should be able to add it to Linkerd and it still works with no changes; try getting that deal with any other service mesh. On top of that, you get mTLS and statistics about what's going on. Then we showed you not just how to add mTLS, but how to take that namespace and say, bam, absolutely nothing that isn't explicitly authorized will be allowed; that's what we did when we changed the proxy behavior to default to denying anything unless it's authorized.

Then we went in, and first we took our apps and said: okay, for the Linkerd admin port, that is the proxy admin port, across this whole environment, we're going to allow connections to the proxy admin port from the Linkerd viz dashboard, so that we could get our fancy metrics about what's going on and our fancy details about the environment, so we could do things like tap the live traffic and see who's talking to what in our cluster, and what the performance of these various components is, or go look at books. There we go. Man, books isn't receiving a lot of calls, so that's not really that exciting, but you get the gist of it. It's what allows us to get these statistics.

And beyond that, we showed you how we allow application traffic to everything within the namespace. So we set up a policy that said: hey listen, *.booksapp.serviceaccount, and so on; if your identity matches anything in the booksapp namespace, we're going to allow you to send application traffic. So right there we had a little box around our namespace that protected us from anything that wasn't in-namespace traffic.
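For reference, the write side of that authors lockdown, the most fine-grained piece of the demo, looks roughly like this; again a sketch, with object names and the exact identity string assumed from the demo (the demo also scoped the matches to specific paths, omitted here for brevity):

```yaml
# Sketch: a route on the authors Server covering the write verbs.
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  namespace: booksapp
  name: authors-modify
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: authors
  rules:
    - matches:
        - method: DELETE
        - method: PUT
        - method: POST
---
# Sketch: only the webapp service account may use that route.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: booksapp
  name: authors-modify
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-modify
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: authors-modify-accounts
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: booksapp
  name: authors-modify-accounts
spec:
  identities:
    - webapp.booksapp.serviceaccount.identity.linkerd.cluster.local
```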
Even Linkerd itself now, if it tries to connect to the web server on one of these pods, is going to get denied, because we didn't authorize it, so it's not going to happen. If we add an ingress, which obviously you'd want to do if you wanted to serve this traffic, you'd have to explicitly authorize the ingress to talk to that port. And you can do that by adding another identity alongside that *.booksapp one, for the ingress's service account in its namespace. And then we went further: we fenced off authors so that only books and web can talk to it at all, and only one app is allowed to make changes to our authors database. Now, this is an easy app, but to do this across a bigger environment, like anything, you'd want to test and go in steps.

I forget who asked earlier, let me see if I got this. Somebody asked if we had to take an outage. Oh yeah, I think it was Grov who asked if we had to take an outage doing this. No, you just have to plan, you have to prepare, you have to test. Yeah.

Would now be a good time to get to Martin's question, or? Yeah, absolutely. Yeah, Martin asks: in your experience, how much does the resource consumption of a given deployment increase when the Linkerd proxy is injected? Yeah, so Martin, great question. We actually care a lot about the performance of Linkerd. To go broader: our intent is for Linkerd to be the easiest to use, the fastest, and the most secure service mesh on the market. I believe those things are true, not only because I'm paid to say that; I actually believe it, but y'all can test for yourself and see how you feel. We did some benchmarking of Linkerd versus another popular service mesh, and we have some statistics about exactly what the memory, CPU, and latency footprint of Linkerd is compared to something else. That'll give you a sense, but you're really only going to see what it changes when you actually deal with your real app, because no two apps are the same. We were using a benchmarking harness made by the folks at Kinvolk, a generic service mesh benchmarking setup, so that's one set of numbers, but real application traffic is going to drive the performance and the resource utilization of your proxies. I hope that was a good answer for you, Martin.

Yeah, let's hope so. Martin, if you want to know something more, of course ask, and we'll get it answered. If you liked any of this, is it okay if I do my little plug now? Yeah, of course. All right. If you liked any of this, I hope you'll come to our in-depth Linkerd-in-production workshop. While none of this is that complicated, the workshop gives you a sense of all the pitfalls you might run into before you run into them. It's happening the day before KubeCon. I'll be there, there'll be other people, you can say hi, and you can get a hat, if you haven't seen our hats; they're pretty cool. Lots of reasons to join. On top of that, if you haven't seen it, we're doing, well, a bunch of folks are doing a little conference right before KubeCon as a warm-up. It's free, it's online, check it out; it's called KubeCrash. I hope to see you there. Last but not least, if you have thoughts on this, or you want to say hi.
You want to be like, oh, Jason, Istio is the best? Come join me on the Linkerd Slack and tell me what you think and why. I'd love to hear from you, love to see what you're working with, and see if there's any way I can convince you to run Linkerd in production.

Perfect. The hat seems really cool; I think everyone should be lining up to get one of those now. Sounds really great. We have the link to the benchmarking, as well as the Slack, added to the comments now, so everyone can hop on over there to check those out. But obviously don't leave quite yet, because you still have your chance to ask your questions. If you have anything more for our speaker, ask away; we still have a bit of time. Anything else that you want to add now, Jason, while we hopefully have a lot of questions coming in? Let's see.

The only big one: I just want to send you the Getting Started guide. If you haven't seen it, I promise you can get through it in 30 minutes, and 30 minutes is a generous estimate. Linkerd is easy to use. When I came into Kubernetes, I was working with another vendor, in another space, and I believed that service mesh was really complicated and really painful; while it was valuable, you had to be really good to use it. Linkerd is straightforward. It's simple. It's fast. Go through the Getting Started guide, and if you can't get through it, come see me; I'll say sorry, I'll buy you a beer, or a non-alcoholic beverage, whatever your preference is. Yeah. It is very fast; I've done it a few times and it's always very nice. Yeah. Awesome. Thank you. Cool.

Yeah. But to kick off the Q&A, a question from me while we see if anyone else has any questions: do you have any kind of sneak peeks? What's in the future for Linkerd? Is there anything exciting coming up? Yeah, fantastic question, thanks for asking. We're really excited about the Gateway API and what it allows us to do. Linkerd's philosophy has been to really limit the number of custom resource definitions we add to your cluster. The reason we do that is we believe that for every custom resource definition you add, you add some element of complexity to the environment. The Gateway API becoming, potentially, part of core Kubernetes gives us a lot of really powerful tools for manipulating traffic: for doing traffic splits, for setting up things like retries, timeouts, header-based routing, egress control, all sorts of great tooling in a standard, Kubernetes-native way. With the next release, so Linkerd 2.12 just came out not that long ago, I don't remember exactly, but it came out in August, and it's been really cool and gives you a lot of new functionality. We're working diligently right now on Linkerd 2.13, because we want to do a small release next that adds in something a lot of folks have been asking for, specifically circuit breaking and header-based routing. We're excited to see what's going to happen there, and we expect to get that out this year. I'm really looking forward to it, and I think it just adds more power to an already pretty powerful tool.

Perfect. Sounds really cool. We like useful standards; some standards can, on occasion, end up doing more harm than good, but the Gateway API standard seems really good, so we're really optimistic. Great. So, final call for questions: if anyone's typing away right now, please push enter and send as soon as you can.
But as always, if you realize later on, oh, I should have asked that question, you can obviously hop on the Linkerd Slack, or you can hop over to the CNCF Slack, where there's a Cloud Native Live channel, and ask there as well. Though I think the Linkerd Slack is probably the best place for Linkerd-specific questions. So everyone has a lot of resources on their way. Perfect. Since I can't see any more questions currently, and we already had a lot of questions throughout the session, we handled the Q&A as we went, so that was lovely. Do you have any final words, Jason, before I wrap up?

I do, actually. If you liked this but you want a much longer, in-depth version, check out our Service Mesh Academy site. Next week, I think, next week or the week after, we're going to do a really deep webinar on policy in Linkerd with my colleague Flynn. He's awesome. It's going to be similar to this, but better and with more information. So check it out if this was good at all. Perfect. We had Flynn a few weeks ago on Cloud Native Live as well, so people might be familiar with him from there. Yeah, and that one got a good review as well, so great session and everything.

But yeah, let's start wrapping up. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a really good session about locking down your Kubernetes cluster with Linkerd. I really loved the audience interaction this time as well; thank you for all the questions. And as always, we bring you the latest cloud native code every Wednesday, so stay tuned. We have a lot of great content coming up in the coming weeks as well. So thank you for joining us today, and see you next week.