Hello, everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury. I'm a director of open source here at Aqua Security, I'm also a CNCF ambassador, and I'll be hosting today's show. On Cloud Native Live, every week we bring you a new set of presenters who showcase how to work with cloud native technologies. They build things, they break things, and they answer your questions. Feel free to ask questions as we go along in the chat and we'll do our best to answer them. This week we have Jason Morgan here with us from Buoyant to talk about the Linkerd service mesh. Before we get started, I just want to remind everyone that KubeCon North America is coming up soon in October, so if you haven't registered, it's going to be both in-person and virtual; really looking forward to that. A quick reminder and a bit of administration: this is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, just be respectful of your fellow participants and presenters. With that, Jason, would you like to introduce yourself?

Hey, thanks for having me on. My name is Jason Morgan. I'm a technical evangelist with Buoyant, and it's my job to talk to folks about Linkerd and encourage people to use it. Thanks for having me.

All right, cool. We're here to talk about Linkerd. Do you want to give just a one-minute background about it?

Yeah. So Linkerd is a service mesh, and it's built specifically for Kubernetes. A service mesh is a tool that you add to your Kubernetes cluster that will intercept and work with the traffic between your applications. A service mesh works by adding a number of proxies beside your applications to handle the traffic between them, and it lets you enhance your apps with additional observability features, make your requests more reliable or more performant, or add security into the environment, like ensuring that communications between apps are all mutually authenticated.

Great. So that's service mesh in general. Did you have a specific use case or story that you wanted to cover today?

Yeah. Today we're going to go over multi-cluster Linkerd. We're going to connect an application between New York and London using Linkerd's multi-cluster functionality, and we're going to see how easy it is to set up and how you would reproduce something like this.

Sounds good. All right. Should we kick off?

Yeah. All right. If you don't mind, I'm going to share screens, so give me one second. Screen two, allow. All right, y'all see my screen?

Okay, we see it.

All right. So I've got two clusters. Well, I'm going to have two clusters by the end of this. We've got a cluster in New York, in the Civo Kubernetes New York data center, and one in their London data center. Now, what we're going to do here is route traffic from our user, in this case the top-hat person with the monocle. We're going to go over the internet into a Traefik ingress, through to a front-end application in our Kubernetes cluster. Then Linkerd is going to take that communication from Traefik to the front end, to this multi-cluster gateway, through a layer 7 connection between these two gateways, and then over to a back end sitting there in London. So, we're going to deploy an app.
We're going to deploy some clusters. We're going to show the front end not working until we connect it to the back end over in London. And that's what I've got for slides; we can hop over to the demo.

So the scenario is that the application is kind of spread across two regions or something like that. These are supposed to be components of the same application, right?

Yeah. So we've got a front end and a back end for this application, and we need to connect the front end to the back end to make it work, and the back end is living in our London data center, right? So this particular one is a bit contrived just so it's easy to demo. A more common use case might be, I've seen examples of folks that have an application at such massive scale that it needs to live in its own cluster, so they use multi-cluster to connect one app to another in that circumstance. Or another one might be: we've got a customer who is centralizing their common functionality, like logging and metrics collection, in a single cluster. Then they essentially add their logging and monitoring agents to the mesh in the other clusters and send messages through to this central cluster, so that the individual application clusters don't have to worry about running Logstash and Prometheus and things like that.

Makes sense. Yeah, thanks.

All right, so let's get started. The first thing I'm gonna do is use the Civo CLI to create a couple clusters. So let's do that here. Can you see my terminal okay?

Yes.

So what I'm doing here is creating a cluster called NYC2 with one node, and it's gonna be a small cluster. Then we're gonna wait for that to finish, and then we're gonna do something similar in London. The terminal on my left is gonna be my New York cluster; the terminal on my right is gonna be my London cluster. So while those spin up, we'll get going. The first thing I have to do to use Linkerd in a multi-cluster manner is generate certificates for Linkerd that use a shared trust root. What happens is I've got these two clusters, and we're gonna do mutually authenticated traffic from one cluster to the other, so they both have to trust a common root certificate. We're gonna use a tool called step to generate a root certificate and then two different intermediate issuer certificates. Stepping back: Linkerd has a component that automatically generates a new certificate for every application inside your Kubernetes cluster that's part of the mesh, and it generates that certificate with this issuer certificate that it has. So our step now is to make both issuers trust the same root, so one issuer will trust certificates from the other, and I'm gonna use a tool called step to do that. We're just gonna go ahead and short-circuit this because it's taking a second. So all of these clusters create... oh, okay, I just got impatient. We're gonna go ahead and generate a new root certificate. First, I'm gonna move into a temp directory and just make sure it's empty. So we've got an empty temp directory and we're gonna create a new root CA. I've got this tool called step; it's basically just an easier way to do OpenSSL if you're not already good at OpenSSL. I can send a link to it. It's also under our multi-cluster docs. Let me actually pull this over.
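(For anyone following along at home, the cluster creation he ran a moment ago with the Civo CLI looks roughly like this. This is a sketch: the exact size and region values are assumptions, so check civo kubernetes create --help for what your account offers.)

    civo kubernetes create nyc2 --nodes 1 --size g3.k3s.small --region NYC1 --wait
    civo kubernetes create lon1 --nodes 1 --size g3.k3s.small --region LON1 --wait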
We're gonna follow generally this multi-cluster tutorial that you can find right here. I don't know, is it all right to put it in the chat, or what's the best way to share that out?

If you could just put it in the chat, yeah. Okay. We'll try to put it on the screen as well.

Okay, cool. So we're essentially following that tutorial here, which has all the commands for how we do something like this. So we're gonna create a new root certificate right here. With this root certificate, we can now create an issuer. I'm gonna create one issuer for New York and one issuer for London. So let's do that. Here I'm creating a certificate for identity.linkerd.cluster.local, the issuer file is gonna be called issuer.london.crt, and we're using the root CA that we just created. All right, so we've got our one issuer, and then we're gonna create the other for New York. All right, so if we look in our directory now, we've got the root CA, a New York issuer, and a London issuer. Let's go ahead and export some kubeconfigs. We're gonna use the Civo CLI again just to generate a new kubeconfig and save it off. Export the kubeconfig, escape that. All right, so I can do k get nodes, and we can see I've got my node. It's got this node name, and it's in the Civo New York data center. I'm gonna create one for London. So we're grabbing our config now for the London cluster; export the kubeconfig. All right, so you should be able to see in my terminal, right here in the prompt, it tells you what Kubernetes cluster and what namespace we're in, so you've got a sense of where we're working. Again, the left side is gonna be New York, the right side is gonna be London. So now we need to install Linkerd, and we're using the Linkerd CLI to actually do the install. By default, the Linkerd CLI is gonna generate new certificates for you every time. In this case, we want it to use the certificates that I created, so we're giving it flags for what the root CA is and what the identity certificates are gonna be, so that it can create this environment. Then we're gonna go ahead and apply the resulting YAML to our cluster. We're also gonna run a linkerd check at the end so we know what's happening. So here's the New York install with the New York certificates, and we're gonna do the same thing for London, and we'll let that install run. Any questions as we're going? Does this make sense so far?

So, we bootstrapped empty Kubernetes clusters. Civo, they're a Kubernetes service provider, or actually a whole infrastructure service provider, but we're using their Kubernetes service offering. We're spinning up a K3s cluster in these two data centers. Right now I'm installing Linkerd, so there are no apps here beyond the default Traefik instance that installs with K3s. We did our install, and we used the linkerd check command just to validate that the install took correctly and that we're ready to go on this cluster. Now we're gonna add on. So Linkerd has components: we installed core Linkerd, which is the functions of the mesh that you need to actually do work. That's adding the proxy beside your applications, giving the proxy certificates, common components like that. But we don't have what we need to do multi-cluster, or what we need for a dashboard, so we're gonna install those next. We're gonna do the multi-cluster install. So let me just clear this up.
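(The certificate and install steps he just walked through come from the multi-cluster tutorial above. Roughly, they look like this; the issuer filenames are assumptions, and you would repeat the issuer creation and install once per cluster.)

    # one shared trust root for both clusters
    step certificate create root.linkerd.cluster.local root.crt root.key \
      --profile root-ca --no-password --insecure

    # an intermediate issuer per cluster, signed by that root
    step certificate create identity.linkerd.cluster.local issuer.london.crt issuer.london.key \
      --profile intermediate-ca --not-after 8760h --no-password --insecure \
      --ca root.crt --ca-key root.key

    # install Linkerd with the shared root and this cluster's issuer
    linkerd install \
      --identity-trust-anchors-file root.crt \
      --identity-issuer-certificate-file issuer.london.crt \
      --identity-issuer-key-file issuer.london.key \
      | kubectl apply -f -

    linkerd check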
So, linkerd multicluster install, and again we're just gonna pipe that over to kubectl apply -f -. Then we'll do the same thing on the other side. That was pretty fast, actually. Sorry, we did it in New York; now we're doing it in London. We're also gonna go ahead and install the Linkerd dashboard so that we can pop open a dashboard if we want. This is the visualization component: it's got a Prometheus, a Grafana, as well as the Linkerd dashboard, so that you can view things in a nice UI. But this is not part of the default Linkerd install as of Linkerd 2.10. The goal of Linkerd is to be an extremely lightweight, extremely fast, extremely easy-to-use service mesh, trying to make it as low-complexity as possible. In general, the setup of connecting clusters between regions can be fairly difficult, but what we do with the routing is a layer 7 connection. I didn't set up any special routing rules between London and New York. Any two places that can make an HTTPS connection between each other, we can bridge clusters over that connection. There's no special stuff that you need to do to make that work. The only complex part is ensuring that all of your Linkerd instances use the same root certificate.

I guess maybe people would assume that they would need to create some kind of network-level tunnel between the data centers to make the connectivity work between the services, but the service mesh abstracts that.

Yeah, absolutely. All we're gonna do inside of our app, and I'll show you exactly what we're doing, is make a DNS call to the appropriate service, and then it's gonna route traffic for us automatically. There are no special objects created; it's still just a Kubernetes service. Once you're in the mesh, you can use a Kubernetes service to talk in that way. So we now have Linkerd in each cluster, so let's just do a quick check: k get pods -A. We're just gonna look at all the pods we have on this cluster; make it a little bit smaller. You know, we've got our kube-system install, so we've got standard stuff as well as some Civo components. We've got Linkerd: the identity service, the proxy injector, and our destination service. That's our core Linkerd components. For multi-cluster we have our Linkerd gateways; this is the actual ingress point. So again, let's look at our diagram. What we have right now is the New York and London clusters. We have Traefik (we actually have Traefik in both, but I'm only showing it in the one, because this is the ingress we're gonna be using) and we have these gateways. But what we haven't done is told these gateways to talk to each other, right? So that's the next step: we're gonna link these clusters. So let's do that. I wanna do it on this side. Okay, so let's talk about what we just ran there. What we're running is the command linkerd multicluster link. We're saying, hey, we're gonna link this London Kubernetes cluster, and we're gonna generate a YAML manifest. So let's actually run that again. Just ignore that, one sec. Oh, yeah, okay. So I can just run it again.
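(Recapping the commands in this step; the kubectl context names are assumptions. The viz extension is what provides the dashboard he mentions.)

    # install the multi-cluster extension on each cluster
    linkerd multicluster install | kubectl apply -f -

    # install the dashboard / visualization extension
    linkerd viz install | kubectl apply -f -

    # from London, generate the link manifest and apply it to New York
    linkerd --context=london multicluster link --cluster-name lon \
      | kubectl --context=nyc apply -f -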
So, what we do when we run the first part of that command is generate the pod security policies, services, service accounts, role bindings, that sort of stuff: the permissions that we need to tell one cluster it can talk to the other. Then we need to apply it. So from the London cluster, I generate this YAML and apply it over on the New York cluster. Now I can do linkerd multicluster gateways and see: okay, my New York cluster now sees the London cluster; it sees that link. Go look at our London cluster, and it doesn't have any peers. So what we've done is given New York access to London, but we haven't given London access to New York, because in our case we don't need it. It's not bi-directional: our front end is gonna live in New York, and it's gonna talk to London, and that's the end of that story. So now we've got our gateways connected, and again, it's a one-way connection here. We've built this connection so that we've got our link set up; that's what we do with that multicluster link command. Now we can actually start deploying some applications. Here on my New York cluster, I'm gonna deploy my front-end application, and then we'll take a look at the YAML after we do that. Oops, we're just gonna go back to our redirector. So we've created the front-end app, and we're gonna look at what all we made there. I'm just gonna go deploy the app on the other side as well. On the first side, I deployed my front-end web app, which is just a little NGINX instance, and over here I'm gonna deploy an app called podinfo to this cluster. That's gonna give me my back-end service, which is podinfo. So we're creating a namespace, a service, a deployment, an HPA which I don't really need, and an ingress which I don't really need either. Then on this side, if I do k get all... oh, I'm in the wrong namespace. So, k get deploy: I've got one deployment, front end, and it's not ready. I can look at our pods, same thing: it's failing, and we'll look at why here in a second. We can check out our services: we've got our front-end service, and we have an ingress (k get ingress) that's set up to route to our front end. So here, let's look at the YAML. It's basically saying anything that comes in to this application, I want you to send it to the front-end app on port 8080. We can actually look at this IP and call it, and we see that the service is unavailable, because the pod that's going to populate that service isn't working. Let's talk about why. If we look, I created a namespace, and I set the linkerd.io/inject annotation on the namespace. So I told Kubernetes, or really I told Linkerd, that anything that gets created in this namespace, I want to add my proxy to it, so that it's part of the mesh. But I've got a little config for this NGINX instance. My front end is really just a little NGINX server, and in its config we're telling it: talk to this address, http://podinfo-lon on port 9898. So what's happening is, because there's no valid service for podinfo in this cluster, it's not able to hit the back end. So it's crashing; that's why we're at our failure.
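(A minimal sketch of what that front-end NGINX config might look like. The upstream name podinfo-lon and port 9898 come from the demo; the ConfigMap name, namespace, and everything else here are assumptions.)

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: frontend-nginx
      namespace: frontend
    data:
      default.conf: |
        server {
          listen 8080;
          location / {
            # proxy to the service mirrored over from the London cluster
            proxy_pass http://podinfo-lon:9898;
          }
        }
    EOF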
And then otherwise I just have the config for the rest of my front end. Let's go back to that link command really quick. With the link command, one of the things we do is specify the cluster name. When we export a service from one cluster to another, the service name gets appended with a dash and then the cluster name, so that you know where it's from. And so if you look at the podinfo config, the NGINX instance is looking for podinfo-lon, L-O-N for London, and here our cluster name is lon. So that's how we're gonna get that service in. Ctrl-L to clear, if I can Ctrl-L. So if I do k get service on this side... sorry, I need to change the namespace. And that's podinfo. I've got a podinfo service, but we see it on our London cluster, not on our New York cluster; we don't have it there. So we actually have to edit the podinfo service to export it, to tell Linkerd to share it between these clusters. That's what we're gonna do next. Still making sense?

Seems reasonable to follow along.

And if you check out that multi-cluster link that we put in the chat, that will have these steps in a lot more depth, so you can go at your own pace. So let's do k edit service podinfo. Here, what I'm gonna do is just add a label to the service. We have to instruct Linkerd multi-cluster which services we wanna share out, and we do that by adding a label. Just so I don't type it wrong, I saved it off to the side: it's mirror.linkerd.io/exported = true. So I just add a label telling the Linkerd multi-cluster component that this service should be shared from London to New York. With that edit, we're gonna see a new service here called podinfo-lon (I really should have called it london). Then we'll do a watch kubectl get pods, and we're gonna see this front-end pod go from a failure state to succeeding, and we're gonna see our app go from no connection to a connection between New York and London any second now. Any questions while we're waiting?

Yeah, I don't see anything coming up in the chat. I guess it was a really good demo, everything was clear. And yeah, let's see the magic happen.

All right, well then the next thing we'll show is... you know, we've got this going. I can restart it, but it does just pick it up on its own; I guess it's just in a crash loop. We can come back to it. Yeah, we'll go ahead and break it out: k delete pod, whatever. Right, so now we'll see the new one come up, and it's gonna start successfully because it's got a back end. You don't actually have to delete it, but I didn't wanna wait for the crash loop backoff. So now we're two of two running, and we see a success. And if I go back to this page, instead of seeing "service unavailable", I've got "hello from London". Hey Matej. Our back end over in London has now connected, and we're sending traffic.

Awesome.

Yeah. So, seeing as we have way more time than I thought we had, we can take a look over at... so I also did this earlier, and I created an environment, left it actually run for a while, and I connected those clusters to Buoyant Cloud.
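(For reference, the export step he ran a moment ago boils down to a single label; a kubectl label command would do the same thing as the edit. Context names are assumptions.)

    # mark podinfo for export on the London cluster
    kubectl --context=london -n podinfo label service podinfo \
      mirror.linkerd.io/exported=true

    # watch the mirrored service appear on the New York side
    kubectl --context=nyc -n podinfo get service podinfo-lon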
So Buoyant Cloud is a commercial product around Linkerd that Buoyant created to make it easier to do multi-cluster management, or just to check on the health of your Linkerd instances. It's free for up to 50 workloads. So I connected my two Civo clusters in, and we can look at what's been going on with the ones that we created previously. The thing I wanna show you is this trust root. So, thanks. What we've got here: one of the things you wanna ensure when you're doing multi-cluster, to make that connection work between multiple clusters, the key component (I've stressed it a bunch, but it's one that is really worth remembering) is that I have to share that common trust root. Here I can easily identify what my trust root is by its signature and ensure that they match, and that's the big component in making this work. Then we can do some fun stuff. Right now, if we go look at the workloads, if I go check out the actual stuff, we could sort by HTTP metrics. I could sort by requests; let's see it this way: the things that are really getting hammered. So I've got a traffic generator out there. What I have is this traffic-generator service over in London, which is actually hitting this Traefik ingress, sending a message from Traefik over to our front end, and then we go from the front end to our gateway, and then from the gateway over to our back end. We'll make it curve around a little bit. If you haven't seen it, Excalidraw is a great tool; it's free and you can do nice little diagrams. Now our user can send a message over to our Traefik ingress, and it will get routed appropriately from New York to London and back. And here I can see the data on my requests and successes in this environment. And if I do something a little bit destructive, I can go start generating a bunch of errors or failures. Say we had a change to our front end or a change to our back end, and we started throwing occasional errors: what we're gonna see is that data reflected in the actual live traffic in our apps. So I can see that podinfo is starting to send some 500s; that's cascading down to the front end, which is going over to Traefik. Latency still looks good, but we're seeing some errors get introduced, and our request volume is also going up. Then we can add something like a delay, where for whatever reason, as traffic goes up, we're also seeing a delay. What we're gonna see is this latency also start to spike as we inject issues into our environment, or as we have issues in our environment.

Yeah, that's really nice. Since there are no questions coming up, maybe I can add a few.

Yes.

So I see you used Traefik for the ingress here. Is there any relationship between Traefik and Linkerd, or did you just choose something that you like?

Yeah, so you actually asked me at the beginning what was different about Linkerd. Linkerd, unlike some other service meshes, is just focused on the in-cluster traffic, or multi-cluster traffic if we're connecting clusters: essentially the traffic inside the mesh. What we don't worry about in Linkerd is how the traffic gets from the internet into your application.
And so we work with Traefik, we work with Ambassador, NGINX, or whatever ingress you're using; we'll connect to it and bring traffic into the mesh. And we actually did kind of a detailed session with the folks over at Civo on the do's and don'ts and hows and whys of integrating your ingress with your mesh. Sorry, that was a bit of a loop; the short answer is, we don't have any special integration with Traefik. Traefik works great with Linkerd, and as far as I can tell, every ingress I've worked with, we haven't had a problem integrating. There are some details about the way ingress works that make the integration slightly different depending on the actual ingress controller, but they're all supported.

Cool, very cool. What are the other use cases you've seen? In multi-cluster, we've seen metrics being reported and collected, like error rates, latencies and so on. Are there any other use cases for Linkerd?

Well, if you step back, the reason to use a service mesh is kind of why you go to Kubernetes in the first place. You go to Kubernetes because, ultimately, you have an objective, and that objective is: I want the business logic inside my applications to work, and work well, and work reliably when I need it to. So Linkerd is there to allow developers who are working on a platform that includes Linkerd to focus more on business logic and less on common functionality like mTLS. It may be important from a regulatory standpoint that all my traffic between applications is encrypted. Say I'm in AWS and I've got three AZs: I want to know that the traffic between availability zones really is encrypted the whole time, as an example. You can get that for free on your platform by adding these services to your service mesh. On top of that, there's way more I want to know about my environment than just whether things are encrypted. If you've ever dealt with hundreds of microservices, it's hard to get consistent metrics from every application. I can't tell you the struggle that I had at one of my previous jobs trying to get people to agree on a standard, or even just: hey listen, everyone, we need a /metrics endpoint, because it's part of how we determine whether or not your app can go to production. That was a struggle. With Linkerd, you don't have to go bother those application teams, because you're going to get common metrics from every single application. I didn't instrument anything in this environment; I added Linkerd, and now I can see the success rate, the requests per second, the latency. I can go further in and see what paths within the APIs on these apps are being called, and what the failures or response times are on those particular paths. Let's actually pop that up real quick. So here in London, we'll do linkerd viz dashboard, and that'll pop open. This is just the open-source dashboard that comes with Linkerd. I'm connecting over to London, so there's a bit of a delay. I'm actually going to change clusters, because this is the one that I'm not hammering with a bunch of errors, so let's make it a little bit more interesting. Sure, export. I'm going to go to the London one. Oh, shoot, one sec. Sorry, one sec.
civo kubernetes config --region LON1: I want my London cluster there, and I want to export that config to .kube/configs/london1. There we go. So I'm just grabbing the config. kns podinfo, switching over to the podinfo namespace. Let's run that linkerd viz dashboard again. Right, so I'm getting things with the mesh like access to this dashboard, which gives me detailed metrics so I can see all my namespaces. That same info I was seeing in Buoyant Cloud; I could click over into the built-in Grafana instance and set up my own queries there, or connect it to the enterprise Grafana instance I have. I can go into podinfo, and I can see specifically what service or what deployment inside this namespace is having issues. So it's not my generator; my generator's happy. It's podinfo here. Looking at that app, I can see the map of my traffic, where things are coming from, and then I can even see which route is returning errors. Well, it turns out I've got an endpoint called 501 that just echoes whatever status code you send it. So this has been artificial, but you can see you can diagnose where that's coming from, and get the latency: which of these has the worst response time? Oh, it turns out my /delay/1 endpoint has been adding a one-second delay every time it's called. So again, another artificially introduced thing, but we can get this information to our SRE team and our application developers without having to get them to instrument anything. They just get it by being on the platform, and you, as a platform engineer, just get the tools to deal with it. Now, if I wanted, I could add in a service profile and say: hey listen, when you hit podinfo, no matter what, I want a maximum 500-millisecond timeout on any given call, as an example. That's probably aggressive, but you get the point. I could then, as a platform engineer, adjust this to fixed timeouts, or do retries on a particular path in my API, or whatever it is that I'm looking to do. Did that answer your question?

Yeah, very much. Yeah.

Cool, yeah. So I don't see any other questions coming up, so maybe you can tell us where we can go to learn more about Linkerd, and if people wanna experiment with Linkerd, how can they get in touch with you?

Yeah, so I'll post a couple of things in the chat. The best place to get started is to check out the Linkerd Slack, at slack.linkerd.io. That's where you can join our community. We're super active, and there are a lot of folks who will help respond to your questions and help you get going.

Thank you very much.

There's also our getting started guide (the getting started... there we go), which will walk you through the initial steps of actually setting up Linkerd. And there's that multi-cluster demo that I sent you to. That multi-cluster demo is a bit more complicated than what I just did, as it includes using a traffic split to share traffic between east and west, or between two clusters. But the basics are still there: what we have to do is connect the clusters, export the service, and then just have anything inside the mesh call out to that service, just like you would something else that's local to your cluster. So again, keep it simple. Don't make new custom resource definitions.
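(The service profile he mentioned a moment ago is one of the few optional CRDs in this story; a minimal sketch of a per-route timeout might look like this. The route name, regex, and 500ms value are illustrative assumptions.)

    kubectl apply -f - <<EOF
    apiVersion: linkerd.io/v1alpha2
    kind: ServiceProfile
    metadata:
      name: podinfo.podinfo.svc.cluster.local
      namespace: podinfo
    spec:
      routes:
      - name: GET /delay/{seconds}
        condition:
          method: GET
          pathRegex: /delay/[^/]*
        # fail calls on this route that take longer than 500ms
        timeout: 500ms
    EOF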
So we don't want people to have to deal with new object types just to use Linkerd or to use multi-cluster. It's Kubernetes, so we want Kubernetes objects to be the primary thing that you work with. And our guides are great ones, so those are the big things; check those out. And, you know, we actually had another... yeah, Augustris asks about K3s. It absolutely works great on K3s, and there's a cool check that you can use. The Linkerd CLI comes with a test to let you know if it'll work. So if you want to do it on... like, I'm running K3s on Docker in WSL2 on a Windows box, so there's a lot of room for wonkiness in here, but I can just check: does this work? So let me spin up a new K3s cluster: k3d create cluster native-live... cluster create, maybe? I don't type this enough. Yeah, so I'm going to create a new cluster. This is a K3s cluster running in Docker on WSL, so a bunch of weird stuff, but then we can just validate whether this is going to be a safe target for what we're doing. Or a kind cluster, or whatever you're working with. Then we can just run linkerd check --pre, and this will tell us: do we have the permissions? Do we have the right object types? Is the API the right version? Can we do what we're trying to do here and install Linkerd? And it'll just tell you at the end. In this case, our green checkmark tells us we're good to go.

Very streamlined user experience, I must admit.

Awesome, yeah. Well, try it out; it's really not too bad. What we see is a lot of adopters find that the experience with Linkerd, by doing things like reducing the cognitive load that you're under (what you need to know to make it work) and keeping that really low, allows you to get value out of the mesh, some business value for your organization, without becoming experts in the way the proxy works or in the custom resource definitions. Like, I haven't played with any custom resource definitions yet. Well, I'm lying: I actually created the link, but that was on the back end using the Linkerd CLI. So I've got one CRD created, or instantiated, in these two clusters, and now I'm doing a connection between New York and London. And it's even less when you're staying in-cluster: you need to work with zero custom resource definitions. We use your Kubernetes objects right out of the gate. Oh, one more, sorry: gRPC load balancing. If you're using gRPC to do connections... gRPC is great because it allows you to multiplex connections, so you can send a bunch of requests over one HTTP/2 connection, which is awesome. But in Kubernetes, because Kubernetes does what's called connection-level load balancing, what we're gonna get is essentially hot pods if you have a gRPC service. One responder for your gRPC service is gonna get hotter than the others, because it's gonna get all the traffic. And so with something like Linkerd, what you get, right out of the box, is request-level load balancing for your applications, so you're gonna balance that across all the different components. Cool.
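(The pre-flight check he describes is just two commands; the cluster name here is an assumption.)

    # spin up a throwaway K3s-in-Docker cluster
    k3d cluster create native-live

    # verify this cluster is a viable install target before installing anything
    linkerd check --pre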
And I bet that's as easy to set up as every other thing that we've seen here.

Yeah, you've seen me set it up already. It's just there, and it just works out of the gate; there's no configuration for you. The design principles of Linkerd are: be fast, be secure, be really easy to implement.

Great, I'm convinced. I hope our audience is as well. So if there are no other questions, I guess we can conclude this one, and feel free to ask any other questions that you may have while playing with Linkerd in their Slack, or in the CNCF Slack under the cloud-native-live channel. Jason, thank you very much. It's been a really nice, quick, fast (just like Linkerd) demo. So thank you for that. And I'll see you all next week on Cloud Native Live, next Wednesday. Every Wednesday.

All right, awesome. Appreciate it. Yeah.