live. Glad to have you all here. This is Cloud Native Live, where we dive into the code behind cloud native. Very happy to have you all here. I'm Annie Talvasto, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I'll be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your burning questions. So join us every Wednesday to watch live, as you're doing now, or watch on demand. Very much welcome. This week we have Jason Morgan with us to talk about Service Mesh 101, an introduction with Linkerd — a very exciting topic. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. So please do not add anything to the chat or questions that would be in violation of that Code of Conduct — basically, please be respectful of all of your fellow participants as well as presenters. We will be taking questions throughout the whole session, so if you have any questions, comments, concerns, anything, just send them to the chat and we will get to them. Hi to Ahmed from Egypt, for example — thank you for letting us know you're watching and engaging. Perfect. But with that, I'll hand it over to Jason and we will get started. All right. Hey, folks, how's it going? My name is Jason Morgan. I work for Buoyant, the company behind the Linkerd project. Linkerd is a graduated project from the CNCF, and we're a service mesh. Just a quick question to you — is Abu Bakr going to join also? He's moderating the Q&A, to my understanding, if he wants to join up first. Happy to — I just wanted to know if anyone else was supposed to be talking. Well, so I just wanted to start off with a little bit of: what is a service mesh? Can I share my screen? Yes, perfect. All right.
So I've got a little whiteboard here, and I have an example application living in a Kubernetes cluster. So imagine this whole box is a Kubernetes cluster. I've got an application — we're actually going to deploy an application that looks just like this in a little demo right afterwards. We've got some kind of web front end and two back-end services, and so this is your app on Kubernetes. And I want to show you what a service mesh is and how it interacts with the rest of an application. So the long story short is: a service mesh is a tool that shifts some responsibility from application developers onto platform operators, right? And it does it by installing a bunch of proxies in between your applications, and those proxies manage network traffic. So they can handle things like service discovery, adding observability, doing things like request-level load balancing, and a bunch of other things — mTLS, if you care about encrypting and mutually authenticating connections between services in your environment. The process of installing a service mesh is fairly simple. First off, you're going to take your Kubernetes cluster and you're going to install a control plane. The control plane is the interface between us platform operators and our service mesh, right? So we install a control plane in our cluster and then we begin adding applications to the mesh. And the way we do it is we install a little proxy and we sit it beside each application, and that proxy then handles all of its traffic on the network, right? It is your proxy, or your arbiter, for things that happen in app-to-app communication. So the proxies will handle setting up encryption, they'll handle giving you some insight into how your application works, as well as anything else that you want that service mesh to do. Does this kind of make sense so far? Yeah, really good stuff.
And if the audience — if you have any questions or something doesn't make sense, please do let us know so we can take you through the explanations on everything. Yeah, awesome. So with that, we can hop into a quick demo and actually go from the slides to real life. Sound good? Perfect. All right. So I'm going to a new screen. Can people see that — is the writing big enough? I can see it quite okay. Maybe it could be a bit bigger, but I think it's probably fine. I've essentially taken my whole screen, so then it works at least. All right. So let's start, right? Let's see what the process is to install Linkerd. Right now I have a Kubernetes cluster. It's running locally and it's got one app going, right? And that app is very similar to the one that we described: there's a web front end, there are two back ends, and then there's a traffic generation service. So let's just get started. First and foremost, I want to install Linkerd, right? So, to set this up one more time: the goal with Linkerd is to make it as non-invasive as possible. We want you to have apps that run well in Kubernetes, and then install Linkerd and have your app continue to run the way it was running before. So if we go over here — kns emojivoto — I'm just going to swap namespaces over here on the bottom left. Then I port-forward service/web-svc — I think it's web-svc on 8080 to 80. And I go to my web browser, go to localhost:8080, and I can see I've got my application going. I can vote on things, I can check out my leaderboard, right? So I've got a web app, it's running in Kubernetes, and right now it works. And we're going to install a Linkerd service mesh, then we're going to add our application to the mesh, and everything is going to continue to work, right? That's our starting point with Linkerd. So what I did there is I ran the curl command to install the Linkerd binary on my laptop, right? It was actually already installed, because I do this professionally, but you get the gist of it.
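(For reference, the namespace switch and port-forward he just did look roughly like this — `kns` is a namespace-switching helper; the service name and ports are from the Emojivoto demo app, so adjust them for your own install:)

```shell
# switch the current context to the demo namespace (what `kns emojivoto` does)
kubectl config set-context --current --namespace=emojivoto
# forward the web front end to localhost
kubectl port-forward svc/web-svc 8080:80
# then browse to http://localhost:8080 to see the voting app
```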
And we're going to go — let's see, it's 12:09 Eastern. We're going to have a fully installed Linkerd instance by 12:14. So we're going to do zero to mesh in five minutes. After I install the binary, I just extend my path. We can go check our Linkerd version, right? So I'm running the latest stable, which is 2.11.1. There's a question, by the way, if there's a good time to take one — it's going to mess up a bit of your timing and the goal. No, I think we're good. We'll do questions and still hit our target. So: what is the difference between a service mesh and networking in Kubernetes, like Calico, Weave Net, and other networking tools? Yeah, absolutely wonderful question. Thank you so much. So think of the OSI network stack, right? You've got these seven layers — physical, data link, network, and so on. There are seven layers; I can't remember them all. It's been a while since I've done my networking exam. But essentially, your networking tools run down at layer three or layer four of the stack. And they worry about IP address to IP address, maybe namespace to namespace, whatever that is. That's where they live. Service meshes live up at layer seven, so they're an application-focused tool. They take the network and just use it however it works, and they don't concern themselves with the networking. They let the network handle getting a packet from one place to another, and they provide application-level logic and control in your environment. You'll see it a bit when we get into the demo, but they'll do things like inspect the traffic between your applications so you can see how the individual API calls are going. They'll do things like change the load balancing from connection-level — which is: I open a connection from one point to another point, and everything sits there — so that instead they see each request, and they balance those requests against all the available connections, as opposed to just one.
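(The binary install he mentions is a one-liner; this mirrors the standard getting-started flow, but check the Linkerd docs for the current script and install path on your platform:)

```shell
# download the linkerd CLI and add it to the PATH
curl -fsL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version   # shows the client version, and the server version once installed
```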
I hope that answers your question — and I'm sorry, I don't know how to say your name. So with that — and please feel free to ask more if I didn't quite get it — oh, awesome. With that, we're just going to test that our cluster works. So I'm actually running a k3d in-memory cluster on Docker Desktop on my Windows 11 box. So I've got a lot of weird stuff going on, and I just want to check that this is going to work. The linkerd CLI gives me this handy pre-check flag, so I can validate that things are going to be okay if I install — or at least that it thinks it'll be okay. I get a bunch of green checkmarks, which is awesome, and I feel confident to move forward. So with that, we're going to type linkerd install and see what happens. And I've still got two minutes, so we'll see if I hit my target. So when I type linkerd install, instead of anything actually changing on my cluster — this here is all the active pods on my cluster — what I get is a bunch of YAML, right? And this is the Kubernetes manifest that's going to install Linkerd. So I could put this in a Git repo and add it with a GitOps flow. I could generate this YAML using the linkerd CLI, or I could generate it using Helm. And both the CLI and Helm in Linkerd use the exact same YAML templates, so an argument that you provide to the CLI will work on the Helm chart and vice versa. You don't see a ton of space between the Helm chart and the CLI. So to actually do the install and meet my target — I've only got a minute left — I type linkerd install and I pipe it over to kubectl apply. When that happens, I get three new deployments being created. I have an identity service, a destination service, and one other that I don't remember — oh, right, the proxy injector. So these are the three services that are actually going to be the core control plane. So going back to that graphic, that's this control plane.
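(The render-then-apply install sequence he describes, as a sketch:)

```shell
# validate the cluster first
linkerd check --pre
# render the control-plane manifests and hand them to the Kubernetes API
linkerd install | kubectl apply -f -
# watch the core control-plane deployments come up:
# destination, identity, and proxy-injector
kubectl -n linkerd get deployments
```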
That is a tool to actually distribute our proxies, a tool to know where traffic should go, and a tool to give our pods their valid identity so that they can talk to one another. And we're at 12:14, and I've got my core control plane up and running. We'll run a linkerd check just to validate that it's up. And — status check is green, and it's 12:14. So I feel like I'm there. So that's the core Linkerd install. But as of right now, if we take a look over at our Emojivoto app, none of this is actually in the mesh. So the second half of what we do, after we install that control plane, is we actually have to add these proxies in. We're going to go ahead and do that next. Actually, I'm slightly lying here, because what I'm really going to do, now that I've got the Linkerd control plane installed — as of Linkerd 2.10, we broke the control plane into a couple of different components. So, a viewer asks if pods are installed on every node of the cluster. The answer is not necessarily. When you install Linkerd in production, you'll want to install it in HA mode, so you'll want at least three replicas of the core control plane components. But there's no daemon set with Linkerd that installs one per node. I hope that answers your question. So now that I've installed the main control plane — because I'm doing a demo, I'm also going to want to install the Linkerd dashboard. That's our cool graphical interface, and it allows us to do some neat stuff that I'm going to show off in just a minute. So I'm going to install linkerd viz, and it will again generate YAML templates, and we're going to hand them off to the Kubernetes API to actually get the dashboard installed. Make sense? Hopefully. So by default, the linkerd viz components are installed in a different namespace. I want to be clear: this visualization component, this dashboard, is optional. It's handy, and I'd recommend it for a lot of use cases, but it's not required.
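(The optional dashboard install follows the same render-then-apply pattern:)

```shell
# render and apply the viz extension (dashboard, tap, Prometheus, Grafana)
linkerd viz install | kubectl apply -f -
# verify the extension is healthy
linkerd viz check
```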
Now that I've got it, it installs a bunch of things. We've got this tool called tap, which allows you to see the metadata about every request between these proxies, and actually do a bit of Wireshark-style inspection of the calls between the applications. And it includes a Prometheus and a Grafana component, right? Prometheus is a time-series database that collects scrape data from the various proxies, and Grafana is a visualization dashboard for it. We're going to run a linkerd viz check. There's another question. Yeah, let's go. Is there any performance impact on the apps due to the proxy container in the middle? Yeah, great question. So yes, there is. Whether or not it's a negative performance impact is going to depend on your application and the scale at which you're running, right? We've got lots of examples — if you look at KubeCon North America, some folks over at Entain, they're an Australian company, they run a betting platform. What they found is, because they were doing a lot of gRPC connections, adding Linkerd into their environment increased their performance and allowed them to get over a 10x increase in the number of requests they could handle. So the very short answer is: adding any additional step in your network path is going to slow down any given request, right? But as you scale, or as you do more and more traffic, the benefits of request-level load balancing and intelligent endpoint selection can outweigh the cost of having those additional proxies in the path. Does that answer your question? Yeah, I think it was actually a really good explanation. And I think there are also really great Linkerd case studies in the CNCF case study area. I read a few of them — really amazing results. Oh, awesome. Yeah, there's a ton out there.
Let me know if you have any more questions on all this. I'll put up the link to the Linkerd Slack at the end — feel free to hop in and ask us questions directly. You can find me right there all the time. So now that I've installed the Linkerd dashboard, what I'm going to do — you'll see there's this component, this tap injector, right? Linkerd adds the proxy by way of what we call a mutating webhook. What that does is it sees an object type and changes it on the back end, so you don't necessarily see it. So when we inject Emojivoto, it's going to get the tap configuration as well as the basic configuration in there to actually get things set up. So — there's someone asking for beginner-level Kubernetes guidance. Do you have any tips? Absolutely. So the Linkerd docs have this getting-started guide, which walks you through basically everything I'm doing today, right? In depth, step by step, how you get this up and running. If you're totally new to Kubernetes and don't know how to stand up an individual environment, the thing I would recommend is to try something like Docker Desktop or k3d, depending on what you're on, so that you can get a local Kubernetes environment to get started with. The Kubernetes docs themselves also link to a getting-started guide that's pretty good — although it's been a while since I've used it, it was pretty good when last I checked. We're just going to look at the pods in the linkerd namespace. Cool. Pop up a dashboard? Sure — this isn't totally necessary yet, so I'm actually just going to skip that right now. We'll pop up a dashboard in a second; I just don't want to do it quite yet. So what we're going to do now is we're going to add — yeah, you can absolutely install Linkerd on any Kubernetes distribution.
So whether it's managed, whether it's your own version, it's fine. There's no special networking or Kubernetes API requirement, right? It just works wherever you're going. And you can always test it: if you're not sure whether or not you can do the install, run that linkerd check command — linkerd check --pre, right? That'll tell you whether or not you have the right permissions and whether your Kubernetes environment is set up correctly to install Linkerd. So you can always test that before you run anything. So here, what I want to do is — let me go back to that diagram, sorry to keep flipping around here. Going back to that diagram, what I want to do — oh, you're so welcome — what I want to do now is add the proxies to these components, right? So I'm going to inject my environment; that's what we call it, injecting the proxy. What I'm going to do is get all the deployments in the emojivoto namespace, output them as YAML, and send them over to the linkerd CLI command, which is linkerd inject, right? And what that's going to do is look through the objects that you send it. It's going to see: hey, is this object a deployment, a stateful set, or a replica set — and probably one more, a daemon set. And if it is, it'll add a little annotation that says linkerd.io/inject: enabled. I can show you that in a second. And then we're going to send it back to the Kubernetes API, and that will tell the Linkerd webhook to go ahead and change that deployment object and inject the proxy. So that's a lot of talking. So you can just see it: what you're going to see is, when I hit enter, these pods are all going to get restarted, and now there are going to be two containers instead of one. Let's go. I want to just talk about a pod real quick, right? Kind of a funny name, right? This all came out of Docker — we had a little whale.
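(The inject flow is one pipeline, and all it really adds is a single pod-template annotation. The annotation name here is the real one; the YAML fragment around it is illustrative:)

```shell
# on a live cluster, the whole injection step is:
#   kubectl get deploy -n emojivoto -o yaml | linkerd inject - | kubectl apply -f -
# what inject adds is just this annotation on each pod template:
cat > /tmp/inject-fragment.yaml <<'EOF'
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
EOF
grep 'linkerd.io/inject' /tmp/inject-fragment.yaml
```

The mutating webhook then sees that annotation and adds the proxy container behind the scenes.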
And so a pod was like a pod of whales swimming together. So a pod is like a container, basically, or like a little namespace for containers, right? And the way a service mesh works is it sits a second container beside your first one, and that second container is the proxy that does whatever it's going to do. So we see the number of containers per pod changing when we inject Emojivoto — it goes from one container per pod to two containers per pod, right? Yeah, thank you very much, Amit. So now we have Emojivoto injected. The other ones have cleaned up and gone away. Yeah. So both FaZe and Puffet have asked some questions about Envoy and Istio, which I'll hop into in just a couple of minutes, if that's okay. I think that sounds good. Everyone can wait a few minutes, but we will get to it, guys — so don't worry, we will get there. So what I'm doing here — but don't hold me to it, right? I promise to provide some answers; I just want to show this injection process first. So what we're going to do now is just check that this proxy is ready. I can check it in a bunch of ways. I can just look over here, and I know that it's ready because my pods are running. But let's go ask linkerd check: how's the health of the proxy in the emojivoto namespace? It's going to tell me, hey, this is how the overall thing's going — and also, the data plane's looking correct. So, great. Why did I do this? I don't need this. So let's now take a look at the dashboard and see what we can see about our environment. Because before we added Linkerd, all I could see is that I had four containers. But I didn't really know very much — I didn't know what it looks like as they're talking to each other, or any details. So let's fix that.
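(A quick way to see the container-count change he's pointing at — meshed pods gain a second container named linkerd-proxy:)

```shell
# list each pod with its container names; injected pods show "linkerd-proxy"
kubectl get pods -n emojivoto \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
```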
I'm going to type linkerd viz dashboard, and I'm going to pop up a dashboard, and we're going to use it to take a peek at our environment. So this is in that viz component — when I installed linkerd viz, this is what I was really installing. I now have a view of my cluster, all my namespaces, and for each namespace, how many containers are in the mesh. And then I can see what we call the golden metrics for each namespace. So that is: what is the success rate? Of all the application-to-application calls, how many are returning successful response codes and how many aren't? What's the volume of requests — the requests per second that are hitting it? What's the latency, by latency bucket? And then, if we want, we've got this handy-dandy Grafana button where we can pop out a Grafana instance that's able to talk to our Prometheus and lets us set up our own dashboards, or use the dashboards that are already built. To be clear, if you already have Prometheus, there's documentation and it's very well supported to use your external Prometheus instead of the in-memory Prometheus that comes with linkerd viz. That's a very common scenario. Okay, so let's first take a look. We can sort all these namespaces by the success rate — that is, what percentage of the calls are actually successful. And the other thing I want to do is that port-forward command again, because I want to see: does my application still work the way it was working before? So now I'm going to do a port-forward — that's all that's happening there, just so you can see it. And let's go check out Emojivoto. So if I refresh, Emojivoto still works great, right? I can go click on my various emoji — oh, I didn't mean to do that — and I can vote on them. I can see the current leaderboard.
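(The same golden metrics are available from the CLI, which is handy when you don't want the browser:)

```shell
# success rate, requests per second, and latency percentiles per deployment
linkerd viz stat deployments -n emojivoto
```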
So I didn't create a new custom resource type to say, hey, this is the virtual gateway so I can get my traffic over here, or a virtual server, or anything like that, right? I'm just using standard Kubernetes primitives, and my application still works as designed — with the addition that I now have all these metrics about what's going on. I can go in — Emojivoto is broken, it's deliberately broken — and we can go see why it's broken and how, right? I'll answer that in a second. We can see why it's broken and how, we can see all sorts of details about what the actual environment looks like. We just have much more data, and we also now have all our traffic mTLS'd in our cluster. So before I go any further, let's answer questions. Is that right? Yeah, I think that sounds great. So we had the questions about Istio, as well as the sidecar and Envoy, and then there's the cloud services one as well. Yeah, so let me answer the one about Istio and Linkerd. Istio and Linkerd are both service mesh projects, right? Linkerd is a CNCF service mesh. Istio is run by another foundation — I don't know which one — but they have different design philosophies, right? Istio uses the Envoy proxy: instead of building its own proxy, it uses an existing one called Envoy, which is a CNCF project, and it's a great tool with tons of features and capabilities. Istio has a lot of features that you won't see in Linkerd, right? You'll hear people talking about running Wasm plugins or other stuff that you can do in Istio because of Envoy — so it's got a lot of features, but those features tend to come with a bit of a cost, right? So, you know, when we look at it — sorry, I'm just going to finish this — it comes at a bit of a cost, right? Istio can be a little bit complex to use. With Linkerd, we don't require you to use any sort of custom resources to work with Linkerd, right?
We do everything in a Kubernetes-native way, using Kubernetes primitives, and we stick to that, right? With Istio, you need to make your app work with Istio, so you need to write custom objects and custom YAML to support Istio. It also tends to be a little bit more complex from an operator perspective to use and run, especially when you get to scale. So that's our big delta. The other one is, we've been releasing performance benchmarks lately — now, we're the folks that make Linkerd, so take the benchmarks with a grain of salt — but we got this benchmark suite from the folks over at Kinvolk, who wrote basically a testing suite to decide which mesh performs better than another. We ran the tests earlier last year, and then recently, just the other day, with our 2.11 release, and we found that Linkerd performs really well compared to Istio, especially in terms of resource consumption as well as the actual speed to send traffic along. Now, it's faster — but if you need the features that Istio has, you're not going to find all of those features in Linkerd, period. I hope that answers your question, Baze, and if it doesn't, let me know — I'm happy to share more. Oh, thanks for the benchmarks. I'll post them up. The other one is: we're not using Envoy as a sidecar, so that's a great point. Linkerd doesn't use Envoy. The folks over at Linkerd decided to write a Rust-based proxy for Linkerd. There are some advantages to Rust that we see, one of which is that a lot of the work on modern networking is happening in Rust today, and by building on top of those Rust libraries, we're able to take advantage of advances in Rust networking to make our proxy more performant. The other one is that Rust is inherently more memory-safe than C++, so it allows us to avoid a lot of the memory-management vulnerabilities that you see with a language like that.
So the Linkerd proxy, we know, is extremely small, extremely fast, and extremely secure compared to other proxies. It is also much more limited in the scope of what it does. It is not a general-purpose proxy — you can't use it that way. If you want to set up an ingress with the Linkerd proxy, good luck; it doesn't work. It only works with the Linkerd mesh, because it only exists to support Linkerd. There are great projects like Emissary, from the folks over at Ambassador, and a ton of others that are amazing ingresses built on top of Envoy — Envoy's a great tool for that use case. We just think it's too much for what we're trying to build with Linkerd. I hope that answered your question, Puffet, and let me know if I missed anything. One more — someone asks about the difference between AWS, Azure, and Kubernetes. That's actually way too long a topic to get into now. Kubernetes is just a container scheduling tool that can run on top of whatever infrastructure you choose to build it on — whether that's bare metal or VMs, that's all I can think of right now — and it works in whatever cloud you want to work in. What type of communication happens between the control plane and the proxies? Is it heavy? No, it's not very heavy. So, Srinni, I think — it's just command-and-control traffic. The big thing is the proxies have to ask the control plane what the available endpoints are for any given service, so that they can do that intelligent load balancing you get with Linkerd. We call it EWMA — it's a long story; it stands for exponentially weighted moving average. It's basically just a really good way to pick the fastest pod to respond to a given request, and you can read more about that in our docs. Let me know, Srinni, if that was an okay answer. So with that, I'd love to pop back into this dashboard and just explore what's broken in Emojivoto. Does that work? Sounds really good. Did you answer that other question already? Say again? The one about... No, I didn't.
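(If you want to see the endpoint list the proxies load-balance over, there's a diagnostics subcommand for that; the service address below is the Emojivoto web service, so substitute your own:)

```shell
# list the endpoints the destination service hands to the proxies
linkerd diagnostics endpoints web-svc.emojivoto.svc.cluster.local:80
```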
Thank you so much — I totally missed that. Do I inject Linkerd on all pods somehow related to my application? What if I want to have a database within the cluster, too? Yeah — oh, wonderful question. So you inject Linkerd where you want the benefits of the mesh, right? And the benefits are essentially security, observability, and reliability. We'll do better load balancing than you'll see in Kubernetes, we'll give you metrics and statistics about your environment that you won't see otherwise, and we'll give you mTLS, right? So if you need that, you inject it. You can absolutely run databases, connect to them, and mesh them in Linkerd, right? The thing you have to do is be aware of whether the traffic is HTTP or gRPC, or if it's some other TCP protocol. Depending on the type of traffic, you may want to tell Linkerd to treat that connection as just a generic TCP trunk, right? There's a lot more on that in our docs. And like I said, I'll post the Linkerd Slack — I'd love to get into it in a bit more depth with you if you want. But there's no reason not to inject databases, and we do it all the time. We run Cortex for our production applications, and we have Cortex fully injected with Linkerd, and it behaves great. Okay, let me finish this demo and then I'll answer more questions. Does that work? Sounds good. All right. So here we have our Emojivoto app, right, that we were talking about earlier — and it's broken. We see our success rate is below 100%, and it shouldn't be, right? There's no reason it ought to be broken. So let's go take a look. I click on the emojivoto namespace, and I start with this little graph. It tells me: what does the communication look like in this app? So instead of just a bunch of pods, I see a little service map that tells me who's talking to whom, right? Now I can see, on a per-deployment basis: what is the success rate? What is the latency?
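(For the database case he describes, the usual mechanism is marking the port as opaque so the proxy forwards it as a raw TCP stream. The annotation name is the real one; the namespace and port here are examples:)

```shell
# treat port 5432 (e.g. Postgres) as an opaque TCP stream
# for every meshed pod in the namespace
kubectl annotate namespace my-databases config.linkerd.io/opaque-ports=5432
```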
What is the volume of requests? All that stuff, right? I can see that voting and the web service both have sub-100% success rates. So looking at it, my problem is somewhere between web and voting, right? It's not vote-bot and web, it's not web and emoji — it's web and voting. All right, cool. So let's click on web and see what's going on. With web, I can see who's talking to web — I've got Prometheus and vote-bot talking to the web service — and who web is talking to: well, it's talking to voting and emoji. And here's our success rate, requests, all that stuff. But beyond that, because I've got tap in the flow, I can actually see the live API calls that are being exercised in this environment, right? So let's just filter all the calls that are going on by their success rate. I can see that from vote-bot to the web service at /api/vote, I'm only seeing a 90% success rate. And I can see, to the emoji service, all the various calls that are going on. I'm actually missing something — it hasn't popped up yet, so I'm going to give this a refresh and see if I can get my appropriate error here. Yeah, here we go. So from — or sorry, to — the voting service: from web to voting, when I post to this path on the API call, I get a 0% success rate. So we can now dive into voting and see what's going on. Does this make sense so far for the folks that are watching? I would love to hear something in the chat if it's useful. I'm sorry, I didn't mean to speak over you. No worries. We're getting answers — yes, everyone is there. Thank you so much, folks, I really appreciate it. So now we're in voting, right? And we can see, from voting's perspective, it's talking to two services — or three, because tap is also in there. And thank you, everyone who responded, I really appreciate it. So we've got the web service talking to voting, and we can look. Now, again, let's go to the live calls that are coming in.
Let's sort them by the success rate, right? And I'm at the will of whatever actual requests come through. So I've got a traffic generator that's sending me some messages, and I can actually probably vote on the donut. Oh, no, it didn't work. Let me try this again — see if I can't make Linkerd pick it up. There we go. Come on, buddy. Oh, there we go. So I can see that from the web deployment to this vote-donut call, I have some failures, right? And of course, we can see that in the UI if we try to vote on donut. In spite of it being by far the best emoji in our list, it's not getting the recognition it deserves on the leaderboard, right? And that's because when we make the call from web to vote for donut, we get an error. We can dive in a little bit deeper and actually do a tap. So I want to do the live tap on this traffic, right? And let's start it. And again, I'm looking at the voting service. Have I started? Yeah, I've started. So we just have to wait for something to call it. There's your problem. There we go. So we see some requests coming in. All right, let's stop now, actually. What we see is, when you vote for the checkered flag or for the woman shrugging or clap, we get an HTTP status code of 200 and a gRPC status of OK. When we vote for donut, however — because this is a gRPC call — we get an HTTP status of 200, so the HTTP call works great, but we have a gRPC error of "unknown." So it's not that it doesn't know what's going on: this is literally a gRPC error code of unknown being raised. And we see the gRPC error code right here. So we've got everything we need to say: hey, folks that make the voting service — voting service team — please go fix vote-donut, because it's clearly having a problem. Now, this is a bit of a contrived example, but you get the gist of it.
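(The tap he's running can also be done from the CLI, filtered down to the failing call; the path below is the gRPC method the Emojivoto demo deliberately breaks, but double-check it against your version of the demo app:)

```shell
# watch live requests into the voting deployment, filtered to the broken vote
linkerd viz tap deploy/voting -n emojivoto \
  --path /emojivoto.v1.VotingService/VoteDoughnut
```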
It's there to give you an easy way to just hop in, look at what your environment is doing, and get to the root of the problem as quickly as we can. And this whole time, we've gotten things like gRPC load balancing. We have all our connections upgraded to HTTP/2. We have mTLS everywhere. If we want, we can turn on policy so we can restrict what service is allowed to talk to what service. So there's a lot of power there, but there's no required complexity. If I look at Linkerd, let's go back to that issue of comparison. If I do `kubectl get crd` and grep for Linkerd, I've got three custom resource definitions in this environment. There's actually four, because there's another one, the traffic split CRD from SMI. So there are four custom resource definitions in Linkerd, none of which are required to use Linkerd. None of this is critical path to make the thing work. Right. Yeah, then there's another question about comparing between a gateway, Kong, GraphQL, or a service mesh like Linkerd. Yeah, so this is a really great question. Let me go back to my diagram here. And there are more questions coming in. Got it. So what's missing from this picture, right? What's missing from this picture is a way to do what we call north-south traffic, or traffic from outside the cluster to inside the cluster. There's nothing there, right? There's no built-in ingress or gateway in Linkerd to bring traffic from outside the cluster in, except for when you're talking multi-cluster, which is a much different story. So Linkerd doesn't include an ingress. So if you use Kong, if you use NGINX, if you use Ambassador, whatever it may be, you basically just add that ingress into the mesh, and then you get all the benefits of Linkerd plus whatever that ingress gives you natively. Does Viz integrate with Kubernetes RBAC? Will users only be able to see their own namespaces?
So Viz is a really very simplistic UI, right? There's no user login, there's no log in, log out, there's nothing like that, right? If you're looking for role-based access control for a Linkerd dashboard, that's where we have products like Buoyant Cloud, which is a commercial product that's free to use for anybody for up to two clusters. It's got RBAC rules and stuff, but the open source Linkerd Viz dashboard doesn't have any sort of role thing. There's some cool stuff you can do, though. If you're using the Ambassador Edge Stack, you can decide on an account-by-account basis who's allowed to access the dashboard and who isn't, but it's all or nothing once you're in the dashboard, right? The positive thing is this dashboard doesn't give you access, even when I do that tap, to any of the actual traffic. It just gives you access to the metadata about the traffic. So when I start this, I don't actually see any packets. I don't see any data, right? And it's also read-only. So there's not a ton of need to expand it, and not a ton of need to add a ton of authentication in here. I leave a version of this Linkerd dashboard exposed to the internet all the time. It's my job to talk about Linkerd, and I've never had any incident related to it. So yeah, I hope that answers your question, Pharma, I think. And I'd love to hear from you if that was useful. Puffet is asking. Okay. Yeah. So, right. Oh geez, great question, both of you. So no, again, the Linkerd dashboard is all or nothing, right? And it doesn't even come with basic auth enabled, right?
It's just: can you get to it or can you not, right? That's the answer. Oh, sorry, Bruno, I just saw that, got the names now. Were there any other questions, or have I answered everybody so far? Yeah, I think those were the two new ones. But if there's anything more, please everyone ask away. Yeah, that's the bulk of what I was going to show you. As for the exciting things lately, right, we just published another set of benchmarks based on that Kinvolk testing suite, not something that we wrote, where we had great performance against Istio. But again, that comes at a trade-off, right? If there are features in Istio that you love and need, you're not going to get them. Oh, right, Srinni, I did see that question and I ignored it. I'm so sorry. So Srinni asked: does this support communication with services running outside of Kubernetes? So it depends what you mean. Yes, obviously you can talk with things outside Kubernetes into Kubernetes, but you're doing it through whatever your standard ingress path is. You don't have the mesh itself out there; these proxies, right now, we have no ability for you to deploy them and extend the mesh beyond the cluster. The mesh is an inside-Kubernetes situation only, right? You can do multi-cluster, but again, it's only in Kubernetes. That being said, tune back in in the back half or middle of next year, because we are looking at whether it's reasonable to use Linkerd for situations beyond Kubernetes. So that may be something that we do in the future. And I would really love to hear from you about your use case, because I think it'd be cool to get an understanding of what folks are doing. Speaking of which, here's our Slack: slack.linkerd.io, right?
I can't actually post anything in the chat here, but hold on, I'll put it in the private chat. Yeah. And if you want to share anything, you can send it to us and then, yeah. Got it. So slack.linkerd.io will allow you to join the Linkerd Slack, and you can just reach out and ask me questions. I'd love to field them. The Istio folks are also extremely cool, or at least as cool as we are, because, you know, we're all working in tech, so you've got to keep that in line. Femi asks, or maybe Yusuf: will this Linkerd version work with any release of Kubernetes? No. There is a limit to how far back any given release goes. If you want to check, just run that Linkerd... sorry, I've got a lot going on here. Here we go, let me make this way smaller. If you ever want to check whether the Linkerd version you have will work, `linkerd check --pre` will test whether or not you can install it in the cluster. If I run it now, it will fail, because I can't install Linkerd when Linkerd is already installed. But on a blank cluster, it will tell you. Raphael asks: can we see traffic to MQs like Kafka and Rabbit? Yes. Yes, kind of, right? Great question. This level of detail that you see here, where we can pop into web and see the individual API calls and understand that traffic, that exists because the Linkerd proxy is actually aware of HTTP and gRPC calls, and we're able to show you this level of depth. For something like Kafka or RabbitMQ, when you add it to the mesh, what you have is just a point-to-point TCP connection that we'll encrypt and that will give you some TCP statistics, but we don't have any application-aware data for it, right?
It's just a generic layer 4 connection. So it's kind of limited. That being said, if you do have a big use case where you're looking for inspection of Kafka traffic, that is something we've been looking at building into Linkerd, and your feedback, user feedback and real-world scenarios, is what drives what Linkerd is and does, right? So we depend on y'all out there to tell us what to put in Linkerd. And a great place to make feature requests is over on Slack, and also GitHub. You can find Linkerd on GitHub, at linkerd/linkerd2. This is where you can raise issues if you have something that you're looking for in terms of features, though Slack tends to be a really good place to put that in as well. Yeah, Srini, I'm happy to chat more about your scenario, honestly, it would be great to hear about what you're doing. Raphael, same thing, man, would love to hear your perspective and talk to you. If y'all can join the Slack, it would be fantastic to meet with you. I'm just @jason, and I'll look at who joins the Slack after this and come say hi to y'all individually. So, Puffett, oh, sorry, that was Bruno, I believe: as you mentioned, the pods are already doing mTLS. Is this the default when injecting the sidecar? Yeah. So it is possible to turn off mTLS in Linkerd, but it's not simple or straightforward, right? And in general, it is the default. You just get mTLS everywhere. Actually, I don't know that it's not simple; I just don't know how to do it, because I've never tried. And if you are trying to turn it off, I'd love to hear why and what the use case is, because I think what you'll find is there's no performance reason to disable mTLS when you're running Linkerd, right?
The better thing to do, if you have a given connection that you don't want meshed, is to tell the proxy to ignore all traffic on a given port, right? Which is a viable way to do that should you need to. But in general, it's better to keep mTLS on and just skip individual ports as required. Perfect. All right. How are we doing on time? We have a few minutes left, so if you have any final things, any audience questions, we still have a few minutes. Yeah, well, I'd love to dive in if y'all have any particular questions. Some people, okay. So Bruno says some people are allergic to TLS inside a k8s cluster. You can certainly specify custom certificates. So the way Linkerd works when it gets installed, and this is actually another good one to put on here, is that there's a three-tiered certificate authority architecture. In general, when you're looking at certificates, you build a root certificate that establishes the trust on a given domain, right? And that root certificate you keep really private, and you keep it offline. Then you'll do what's generally referred to as an intermediate certificate. So any one use case, in our case any one Kubernetes cluster, gets an intermediate certificate that is signed by the root that everybody trusts, right? And that intermediate certificate goes to the control plane. So the control plane holds onto the intermediate and uses it, both the public and private key, to create and sign individual proxy certificates. So every proxy gets an individual certificate that is generated by the control plane. But that root and that intermediate can both be generated by you. And honestly, for production environments, we almost always recommend that folks generate their own certificates and use them.
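As a concrete sketch of that root-plus-intermediate setup: the Linkerd docs use the `step` CLI for this, but plain openssl works too. The file names and subjects below are illustrative, not required values; the one hard requirement worth noting is that Linkerd's identity issuer expects an ECDSA P-256 certificate:

```shell
# Generate a root CA key and self-signed root certificate (kept offline).
openssl ecparam -genkey -name prime256v1 -noout -out root.key
openssl req -x509 -new -key root.key -sha256 -days 3650 \
  -subj "/CN=root.linkerd.cluster.local" -out root.crt

# Generate an intermediate (issuer) key and CSR for one cluster.
openssl ecparam -genkey -name prime256v1 -noout -out issuer.key
openssl req -new -key issuer.key \
  -subj "/CN=identity.linkerd.cluster.local" -out issuer.csr

# Sign the intermediate with the root, marking it as a CA so it can in
# turn sign the per-proxy certificates.
printf "basicConstraints=critical,CA:TRUE\n" > ca.ext
openssl x509 -req -in issuer.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 365 -sha256 -extfile ca.ext -out issuer.crt

# Sanity-check the chain: the intermediate must verify against the root.
openssl verify -CAfile root.crt issuer.crt
```

You would then hand these to the control plane at install time, along the lines of `linkerd install --identity-trust-anchors-file root.crt --identity-issuer-certificate-file issuer.crt --identity-issuer-key-file issuer.key` (flags per the Linkerd 2.x docs).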
And then every cluster should use its own intermediate, all signed by the same root. Because if you're signed by the same root, then you can link clusters together using Linkerd multi-cluster. You can connect clusters, but you can allow each cluster to be its own security boundary. So if you decide that one cluster is compromised for whatever reason, you can revoke that cluster's intermediate certificate, and all of a sudden it can't talk to the rest of the mesh, right? But each and every other cluster is still set up. I hope that answers your question, Bruno. Tanny asks: is the thing that issues the client certificates pluggable? Oh, yeah. It is not, right? So, great question, Tanny, and I'm sorry if I said your name incorrectly. The identity service generates those certificates. That being said, the source of those certificates is pluggable, right? So you can bring whatever CA architecture you want. If you want to use something like cert-manager, as an example, to generate certs out of Vault or out of some AWS certificate issuer (I'm not that great at AWS, honestly, so I don't know what they have in terms of options for generating intermediate certificates), you can use cert-manager to generate certificates from whatever authority you choose. Sreeni, maybe? I'm sorry, I'm not sure how to say your name. Sreeni asks if we can suggest any resources for a deep dive into Linkerd. You know, the docs are really good. I'll be honest, the docs are great, and just start using it. The nice thing about Linkerd is you don't have to go super deep, right? Go check out the getting started guide to get a sense, and then go through the tasks, right? I only joined Buoyant in February, and the first thing I did is I just started going through these various tasks, and they were great, right?
Because I was like, okay, how do I bring my own Prometheus? How do I set up retries or timeouts? This one, debugging HTTP applications with per-route metrics, that was awesome, right? Really good. It shows you how to use some of the custom resources that do come in Linkerd to get better statistics, better insight into what you're doing. And then, with all this, check out Buoyant Cloud, buoyant.cloud, if you actually want to try the hosted product, right? There's a paid subscription, and there's a free subscription that's good for up to two clusters and 50 workloads. It's a good tool. It makes it easier to see some of the stuff that's going on in Linkerd, and it gives you alerts if you've done any kind of misconfiguration, so you'll know that right away. But yeah, I would initially check out the tasks. It's how I learned it and got a little bit deeper, and I found it really valuable. And the other one is popping into the Slack, because it's a great place to just ask questions and talk to folks about it. And yeah, as the CNCF already said, all of the sessions from Cloud Native Live can be viewed afterwards as recordings. So no worries, you can play back all of the details you want. I think we have four minutes left officially, so if there are any quick final questions, we do have some time. And usually when I say this, the longest question always pops in immediately at that point, but there's not too much time. But do you have, Jason, any final words, wrap-up, anything? Yeah, the other thing I'd say is, if you're looking to try it out, a good one, once you've done that getting started guide, which you should 100% do. And did we post that in the chat? If not, hold on. Yeah, just post it to the chat and we will get it shared. So the getting started guide is a great place to go after that.
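For anyone following along later, the getting started guide Jason keeps pointing at boils down to a handful of commands (taken from the Linkerd docs; they need a running Kubernetes cluster, so treat this as a sketch rather than something to paste blindly):

```shell
# Install the CLI, verify the cluster, install the control plane,
# then add the viz extension for the dashboard and metrics.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
linkerd check --pre                       # can this cluster run Linkerd?
linkerd install | kubectl apply -f -      # install the control plane
linkerd check                             # wait for it to come up healthy
linkerd viz install | kubectl apply -f -  # dashboard, tap, Prometheus
linkerd viz dashboard                     # open the UI shown in this session
```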
Check out the multicluster task. It's a fun one, right? Because it will require you to do the install, but customize it. You can't use that `linkerd install` command as-is with no flags, because when you're doing multicluster, you have to create and specify a specific certificate authority for each cluster, right? So you generate two different intermediate certificates and connect them. I've got a talk that I did for the Civo folks where I connected clusters. Oh, sorry, I mean, it's Buoyant. Hold on, let me take you over to buoyant.io. There we go. Buoyant, as in it floats. And here, I'll post that link. But yeah, it's Buoyant. We're the folks that make Linkerd. That's kind of our claim to fame. If you're using Linkerd and you're in production, we'd love to hear from you, at a minimum if you add yourself to the adopters list. So if you go to the Linkerd repo, if you're using it in production, add yourself to adopters, and we will talk to you and send you some Linkerd swag. So if you want a very cool Linkerd hat or some Linkerd shirts and stickers, add yourself to adopters, and we'd love to hear from you. And also feel free to just pop in on Slack, happy to talk to you. We'd love to get you into production with Linkerd if you're not there, and, you know, love to answer your questions if you're concerned about stuff around service mesh. Perfect, perfect call to action to finish off this, and no new questions there. But I think everyone will now rush over to the Slack to ask their other questions later there. But yeah, it's been an absolutely wonderful hour here with everyone. So many questions, so much interaction. Thank you so much, everyone. Thank you everyone for joining in today for the latest episode of Cloud Native Live. It was really great to have Jason Morgan talking about Linkerd today.
And as I mentioned before, we always really love the interaction. Thank you so much for joining. Thank you so much for your questions. It was a lot, but I think it was the best way to spend this hour. Next week, we will have a session about multi-architecture Kubernetes clusters. So tune in at the same time next week. Looking forward to seeing you there. Thank you for joining us today, and see you next week. Thank you so much.