All right, everyone, hopefully you can hear me. Sorry, I had some Bluetooth issues, but awesome, fantastic. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, a senior developer advocate at HashiCorp, where I focus on all things infrastructure, application delivery, and developer experience. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. Join us Wednesdays at 11 a.m. Eastern Time. This week we have Jason Morgan here with us to talk about Linkerd 2.11 and walk through the new policy features.

Some housekeeping: this is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be excellent to one another. With that, I'm very excited to hand it over to Jason to start today's presentation. Jason.

Awesome, thank you so much. I've got a little screen share I want to kick off for those that are watching, and I'll do a very, very brief overview of what a service mesh is and go from there. How does that sound? That sounds perfect.

All right, I've got a little whiteboard going on here. I'm going to show you a fictional app that is very close to the actual app we're going to be working with today, and I want to talk about what it is to install and use a service mesh with it. Assume this app is running inside of our Kubernetes cluster, and each one of these components represents a pod.

Before I get to how a service mesh works — and sorry for the distraction — if you look in the chat, there are a couple of links coming out. First is the link to the Linkerd Slack. If you join that and you have questions, you can hit me up directly. There's also a pretty large community of folks that understand and like talking about Linkerd, and I would love to have you join us. Beyond that, you'll also see a link to our getting started guide. That is what I'm going to start with before I get into policy. It's this page here, and it tells you everything you need to do to install Linkerd; you'll also see some of it as I go through and do it.

But before we install anything, let's talk about how it works. I have an app. It's got a front end and two back ends, and the front end communicates with the back ends over some sort of call on the network. Well, when we install Linkerd, or any service mesh, what we do is install a control plane, which is an interface between the platform operators and the service mesh. It's got some components that actually run in our environment. Then we inject a number of these proxies, these little load balancers, in between your application pods. We use what's called the sidecar model in Kubernetes to connect the proxies to your application, and then we change the network traffic so that instead of going directly from front end to back end, it goes through the proxy to the proxy on the other side. That addition of proxies allows us to take a lot of things that you might put directly in your application and instead give them over to your platform team to run and manage. As a quick example, imagine your environment is highly regulated, and it's important to you that you encrypt the traffic between every pod inside your Kubernetes cluster.
Well, these proxies, among other things, can handle mutually authenticating the connection — adding TLS and authenticating each side of the conversation. What we're gonna show you today is, one, how to do this, where you install and add an application to a mesh. We're also gonna show you how to use policy to decide whether or not this connection is allowed to happen, right? So we can use policy to say, sorry, you're not gonna be able to talk to that, or from that app to that app, as an example, or add policy so that we are allowed to do it. And that's the basic picture. So I hope that was helpful. Again, check out the links, join us on Slack — I would be super grateful to hear from you and any feedback that you have — and also check out our getting started guide, which will walk you through it.

One more tip: you'll see there's a link to dashboard.civo.59s. I'm gonna do a little bit of the demo on a local cluster, and then I'm gonna do the rest live on this active cluster. So you can actually go to this dashboard, see what's happening, and watch me break and unbreak the EmojiVoto app right here. Please feel free to hit that up. This is also the app that we're gonna break and unbreak later — emojivoto.civo.59s.io — and I think someone will post that in the chat in a second.

So with all that being said, let's install Linkerd. Here I've got a terminal. On the left-hand side I'm gonna do my actual work; on the right-hand side you see all the pods that are currently installed and running in the environment, just so you know I'm not faking anything. And we're gonna go five minutes from us starting to us having an active service mesh with an application. That's my commitment here — so by 11:12 Eastern time, and feel free to interrupt me with questions so that I don't hit that deadline.

I start off with linkerd install. So I'm gonna run the linkerd install command. It's gonna generate a bunch of YAML, and it's gonna hand it off to the Kubernetes API. And Antonore, there are great getting started guides actually just on the Kubernetes docs themselves, and there's a bunch of other resources out there, and great videos on YouTube that'll show you how to get going. I recommend — like, I'm running Docker Desktop, I love it — you can use Kubernetes right there to get going.

So here I've installed Linkerd, but we don't know whether or not Linkerd is working yet. So I'm gonna use the Linkerd CLI — I could also use Helm or a number of other resources that install Linkerd, but in this case I'm doing the CLI — and I'm going to run a linkerd check just to see whether or not my Linkerd service mesh is healthy. A lot of text scrolls through with green check marks, which fills me with confidence, and I also see "status check results" with a green check mark. So now I'm feeling great about it.

So that's Linkerd installed. That's our core control plane, but we still wanna add some more, right? I wanna see a dashboard — that UI, if you go into that link — and that's in a separate extension. So we're gonna show you how to install that. I run linkerd viz install and do that same k apply -f. For those that don't know, k is just an alias for kubectl; I can't reliably type it, so I use k all the time. Less is more. Yeah, it's too many letters, right? So I run the linkerd viz install; again, it outputs YAML, applies it to the Kubernetes API, and we see something coming up and running.
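For anyone following along at home, the install steps from that segment look roughly like this — a sketch based on the Linkerd getting started guide, not a verbatim capture of the stream:

```sh
# Install the Linkerd CLI (from the getting started guide)
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

# Render the control-plane YAML and hand it to the Kubernetes API
linkerd install | kubectl apply -f -

# Verify the mesh is healthy: a wall of green check marks
linkerd check

# Add the viz extension (the dashboard UI), then check just that extension
linkerd viz install | kubectl apply -f -
linkerd viz check
```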
So let's once again do a check, but I'm just gonna check the viz components — I know Linkerd's healthy. We're at two and a half minutes in right now, so I think we're gonna make it, but let's see. Let me do one more thing: I'm gonna copy and paste my next command so that I don't waste time here. Great time for questions if you have them. So far I've got just an empty cluster, no application running in it that's worth talking about, right? But we're gonna remediate that in just a second.

I think one question I've got for you, Jason, is: what are some of the different ways in which people can get this onto their cluster? Is the CLI tool the best way? Helm charts, CRDs — what are some happy pathways to get that onto your cluster?

Yeah, so the nice thing about Linkerd is the CLI and the Helm charts share the same base templates, and an argument that you can use for one works for the other. So I'm not gonna tell you that the CLI is better than the Helm chart, or the Helm chart is better; it depends on your flow. We do see it's more common for production users to be using the Helm chart rather than the CLI. Also, if you're interested in doing it with Argo CD or Flux, there's a ton of material out there on running Linkerd with Flux or Linkerd with Argo, and more stuff being generated all the time. So I'm happy to share some resources around that if you have questions. Let me install an application, and then I'll show you how to add that to the mesh.

Oh, Femi — I feel like we spoke last time also. So Femi Yousif is asking: are the proxies pods by themselves, or are they running as a container within the application pod? Thank you. Yes, it's the second thing, right? And we'll see that in a lot more depth in a second. Right now I have the EmojiVoto app, right? We're gonna add that to the mesh, but it's not in the mesh yet. So right now I'm gonna install four individual containers, where it's one container to a pod. But after I add my Linkerd annotation and add it to the mesh, there are now gonna be two containers within these pods, and you'll see that happen right now.

So let me show you one way to do this (I'll recap the commands in a sketch below). We're gonna do k get deploy -n emojivoto -o yaml. So I'm grabbing all the deployments in the emojivoto namespace and just outputting YAML, right? And oops, didn't mean to do that. But I wanna transform this YAML and add an annotation that says: please inject Linkerd into these pods. We've added that into our CLI — linkerd inject - will just add a single-line annotation, which you can apply yourself, or you can apply at the namespace level. And then we're just going to send that right back to the Kube API. So now we see new pods getting created, and instead of one container per pod, they all have two containers per pod. So now we've taken EmojiVoto, we installed it, it's a working app on its own, and then we added it to the mesh — haven't made any custom resource definitions, haven't Linkerd-ified it or anything like that. I've just got to add it in.

A to Z Ice is asking if there are performance impacts for adding Linkerd. Yes, definitely, and there are tons of things you can do to manage or to measure it, including a really cool service mesh benchmarking tool from the folks over at Kinvolk, right? We used that tool when we compared Linkerd versus another very popular mesh in the environment.
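Here's the mesh-add step recapped as commands — a sketch assuming the demo app was installed from the getting started guide's manifest:

```sh
# Install the demo app: one container per pod to start
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

# Re-emit the deployments, let `linkerd inject` add the inject annotation,
# and send the result straight back to the Kube API
kubectl get deploy -n emojivoto -o yaml | linkerd inject - | kubectl apply -f -

# Pods are recreated with 2/2 containers: the app plus the linkerd-proxy sidecar
kubectl get pods -n emojivoto

# The namespace-level alternative mentioned above: new pods get injected automatically
kubectl annotate ns emojivoto linkerd.io/inject=enabled
```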
It's a great testing harness, and we'd recommend you check it out. And then, if you have questions about the overall performance impacts of adding a mesh, it's really great to test it in your environment and see what happens. So this is kind of the end of the getting started guide, or about all I'm gonna show you, although you can see more and go through it in more depth if you go to the getting started guide itself. So now, with that out of the way — Taylor, good time to get into policy and break some stuff? Always. Come on, let's see the bits fly.

All right, so what we're gonna do is I'm just gonna change my kube context to a live cluster. This is a cluster running in Civo Cloud — if you haven't checked it out, Civo has a nice Kubernetes-as-a-service offering, and it's really very inexpensive if you wanna run your own cluster. I personally use it all the time. So what we're gonna do is set up another watch here on the right: k get pods -n emojivoto. Because I'm gonna make life hard for EmojiVoto, and I wanna see the pods in the namespace as we do this.

So now, once you do that getting started guide, you have Linkerd installed and you have an application in the mesh. Now I wanna go back to this little diagram really quickly. The proxy is the tool that handles setting up the mutual authentication. It's what handles giving us data like, hey, what's the success rate or request volume or latency for the various components. I get all that because my traffic is passing through the proxy. So in order to do policy, we have to have a proxy on that component, right, the one that is going to receive the traffic.

In Linkerd 2.11 we introduced server-side policy. What that means is, in this conversation between front end and back end A, front end is the client and back end is the server. So as of Linkerd 2.11, I can tell this proxy what to accept or not accept on behalf of back end A. We do not have policy that says, from front end, where can you go or what can you do — although that is coming in the Linkerd 2.12 release, and we'd love to hear your thoughts on it. Again, join us in the Linkerd Slack, tell us what you'd like, and feel free to participate in the design discussions actually going on in the Linkerd Git repo around 2.12. So yeah, I'm gonna show you server-side policy.

Okay, a couple of important caveats. When you use policy, you have to opt into it with Linkerd. As of now, EmojiVoto is using zero custom resource definitions to do its job, right? But to work with policy, I'm gonna begin to use custom resources. The first thing I wanna do is set the default policy for either the cluster, the namespace, the deployment, or the pod — the workload or the pod, right? I'm gonna set policy at the namespace level, and what I'm gonna set is an annotation that says: no matter what, if you don't have an explicit rule saying you may do X, Y, or Z, it's gonna deny it. So let's do that. I'm gonna do k annotate ns emojivoto, and I'm gonna add an annotation called config.linkerd.io/default-inbound-policy=deny — I should have copied and pasted this. So I want it to deny traffic unless it's been explicitly authorized to do something. And the first thing you're gonna note is that nothing happened, right?
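That annotation step, cleaned up (the namespace name is from the demo; the annotation key is Linkerd's documented default-inbound-policy setting):

```sh
# Default-deny all inbound traffic for workloads in the emojivoto namespace
kubectl annotate ns emojivoto config.linkerd.io/default-inbound-policy=deny
```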
If we go to the EmojiVoto app — which you can go to, it's just emojivoto.civo.59s.io — you can refresh, you can vote on things, and everything still works. So what happened here? Well, I haven't restarted any of these pods, and that default policy annotation is only picked up on pod restart. So let's restart some pods.

But before I do, does anyone wanna guess what's going to happen? If you guess correctly, I will find you and I'll send you a Linkerd hat. Does anyone wanna guess what's gonna happen to EmojiVoto when I restart the deployments in this namespace? To be clear, this is the hat that I've got. It's cool. It doesn't work well with a headset or on my head, but it's a hat, and if I have to, I'll send you my hat. So before I do this, does anyone have a guess as to what's gonna happen? Taylor, you wanna guess? Absolutely. I'm gonna guess that we're going to see some connectivity fail. Excellent, great guess. All right, all right, we got a lot of "it's gonna crash." I'm hearing you. You're wrong, however.

It's really unpleasant. And the reason I'm pointing this out is that policy is tricky, and with policy, for the first time in Linkerd, we have given you a tool that will allow you to thoroughly shoot yourself in the foot. So you've got stuff here that can harm your environment. It's good, and there's power there. There's jelly in those donuts, but they're hard to get.

So the problem is — you see my new version of the app? I'm getting restarts. Yeah, thank you, Sonosingle. The health checks don't work, right? Cause the first thing that's happening — hold on, let me show you in the actual traffic analyzer thing that I have here. So now that my things are hitting a new policy — or, darn it, is this the right one? I don't know, it's not shown here — but basically there are health checks that occur on the admin port, and none of them are allowed. So I can't even restart my application. These folks are gonna eventually end up in CrashLoopBackOff, and they're not gonna get anywhere.

So let's fix the admin port. Let me do a quick read on something — that's just an alias for a tool that reads text in a YAML-aware fashion. So let's look at this: manifests, emojivoto. So I've got a policy here. It's like allow-admin, allow-health — there we go. We're gonna allow some health checks through. Fun fact: if you do go in here and check out the Linkerd dashboard, you're gonna see it looks really unhappy, right? Cause it sees these new things spinning up, but it's not seeing traffic for them. But it's still seeing requests, because the old version of the app still works and things are flowing. So, yat is an alias for the bat command, and that's a CLI tool that does a better reading of files — or better, a different version of cat. And it does it in a language-specific fashion, so sometimes it looks good.

So let's look at this policy. What I'm gonna do is create a custom object. So now we're getting into the first custom resource definitions in Linkerd that you're gonna have to use if you want policy. The first thing I create is a Server. A Server is a tool, very similar to a service, that will match on some number of pods, and it gives them a name. So I need a pod and a port to match on, right? I'm gonna match on any pod in the namespace, and I'm looking for the linkerd-admin port. And then I'm gonna apply a ServerAuthorization policy, which is an allow rule on that server.
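A sketch of what that Server/ServerAuthorization pair can look like — the object names here are illustrative, not necessarily what's in Jason's manifest:

```sh
kubectl apply -f - <<EOF
# Server: match the proxy's admin port on every pod in the namespace
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: linkerd-admin          # illustrative name
spec:
  podSelector:
    matchLabels: {}            # any pod in the namespace
  port: linkerd-admin          # the admin port, where kubelet probes land
  proxyProtocol: HTTP/1
---
# ServerAuthorization: allow unauthenticated clients (e.g. kubelet health checks)
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: allow-health-checks    # illustrative name
spec:
  server:
    name: linkerd-admin
  client:
    unauthenticated: true
EOF
```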
So I point at the Server that I defined above, and then I'm just gonna say: hey, listen, allow all unauthenticated connections to that port. Because when Kubernetes talks to my pods, it's not using the service mesh — so it's an unauthenticated connection. So we're gonna go ahead and apply that. Okay, apply the policy.

Now, does anyone wanna guess what's gonna happen? Drumroll please. I burned everyone earlier. What's gonna happen is, eventually these things are gonna come out of CrashLoopBackOff, and then the actual health check will start applying. So policies — like server authorizations — apply live as you make changes. I don't need to restart anything. Oh, thank you, Femi, I totally missed that. So it turns out nothing's gonna happen. Yeah, you got him again. Thank you so much. So now I've got policies, things are gonna start running. Great catch. So no, I don't need to restart the sidecar — great question, Sonosingle. I don't need to restart it; they take effect automatically. However, if they're in CrashLoopBackOff, that's the Kubernetes API saying, you know, hold up for a minute, something's badly wrong here, right?

Yeah, great. Okay, so we'll see the old pods going away and the new pods coming up. Now we have broken EmojiVoto, right? Even though the pods are working, it's busted. Uh-oh. We have all kinds of problems, including the problem that my ingress, which talks to EmojiVoto, is no longer allowed. No one's allowed to do anything here. And I can look at Linkerd, and I can see: oh, look at this, my requests per second have dropped precipitously, because the only requests getting through are those Linkerd health checks — sorry, the Kubernetes health checks. Nothing else is passing, including the calls from Prometheus that scrape data about what's going on in the environment.

So the next thing we're gonna fix is this: Prometheus talks to our pods and gets some data from the Linkerd sidecar, and we're gonna allow that. Let's do that with allow-prom. This is very similar to the file you saw before: I have a Server, which is the Prometheus port on the proxies, and then I have a ServerAuthorization, which allows traffic to that port to come from the Prometheus application. With this policy there's not really anything exciting to show — we can just start getting more data about the fact that nothing is talking to anything, because we don't have any app traffic, right? But we're now in a place where we can actually fix the app traffic. So let's do that.

Let me do one more: yat, policy manifests, og-policy or something — there we go. Let's take a look at this. And thankfully, the folks on the team who do useful things, instead of just talking to folks, have spent a lot of time writing good annotations about what's going on in this environment. But essentially we're setting up a Server that looks for the pods behind the emoji service and marks the protocol as gRPC — or rather, it just selects them, right? And now we have an authorization that says who may talk to it. Oh, sorry, I have another Server — these are my two gRPC ones, internal gRPC. So it's a bunch of things. All right, so here's my authorization that says: you may talk to gRPC on either of these servers. If you look at both of our Servers, they have a label that says they're internal gRPC.
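Sketched out, that internal-gRPC pattern looks roughly like this — the pod labels and service account are the standard emojivoto ones, but the Server names and the shared label are assumptions:

```sh
kubectl apply -f - <<EOF
# Two Servers for the gRPC back ends, each carrying a shared label
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: emoji-grpc
  labels:
    role: internal-grpc       # assumed label, used by the authorization below
spec:
  podSelector:
    matchLabels:
      app: emoji-svc
  port: grpc
  proxyProtocol: gRPC
---
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: voting-grpc
  labels:
    role: internal-grpc
spec:
  podSelector:
    matchLabels:
      app: voting-svc
  port: grpc
  proxyProtocol: gRPC
---
# One ServerAuthorization selecting both Servers by that label, allowing
# only meshed clients running as the web service account
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: internal-grpc
spec:
  server:
    selector:
      matchLabels:
        role: internal-grpc
  client:
    meshTLS:
      serviceAccounts:
        - name: web
EOF
```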
So with these two Servers, I create a server authorization that selects them and says: you may now have some traffic. On top of that, I create a Server for the web front end, and I allow all traffic to the web server, from anywhere. So let's do this, and God willing, I'm gonna have something that works. This is the fingers-crossed moment, folks. Apply the emojivoto policy — great, we've created some things. Let's take a look at what happened.

So, refresh our dashboard. This isn't promising. Did I add the prom stuff? Sorry, did I apply the prom manifest? I did. Okay, so allow-prom allowed this. No — again, we didn't restart our pods since we added policy. And, yeah, there we go, we have traffic. Phew, I was sweating there for a second. And let's go to EmojiVoto, see if we can get to it. What's this? EmojiVoto working, with traffic, with policy, right? We can view our leaderboard — okay, great. We can vote on our favorites. And that's the story of policy in Linkerd. It's dangerous, but you can get a lot of value, and once it works, it just works.

And then the folks that make Linkerd — that's the company Buoyant, who I work for — we make a product that lets you get a little bit more information about your Linkerd environment and turn it into a bit more of a managed service. If you go to Buoyant Cloud, you can add your cluster — up to two clusters for free — and just check it out. See how it works with policy; you'll be able to see policy violations in progress and some other neat stuff like that. But yeah, that's really the heart of the story. I didn't mean to go through it quite so quickly. I don't know, does anyone have any questions or anything I can dive into?

Sorry, I know A to Z asked about performance impacts. The thing I'll tell you is: adding a proxy adds a computational tax, right? That proxy is running beside your application, so it takes compute to run, and it also adds a hop in that transaction from one application to another. So by default it must add some amount of latency. Now, the question is, what does it do for you overall? As an example, the folks at Entain published a case study with the CNCF where they talked about how adding a service mesh allowed them to not only get better performance out of their application than they were able to see before, but to realize a 10x increase in throughput. So not only did they get faster by adding a service mesh, they were also able to scale higher than they could previously, and all sorts of other gains. You can check this out — I'll send the link to the CNCF folks and they can post it in the chat. But they were able to see huge benefits. And yeah, there's plenty of stuff out there about performance. Sonosingle — oh, thank you so much, CNCF folks.

Oh, great question, A to Z Ice. They ask: is it possible to write authorization for specific users rather than pods? No — that is absolutely not in Linkerd 2.11. This is entirely server-side, and it is fairly coarse-grained: I am saying, specifically, which Kubernetes service accounts a given server will accept. Linkerd 2.12 will allow you to make that more fine-grained, where you can say which client is allowed to hit which path on which server, but it has no particular support for "is this user allowed to do X, Y, or Z."
Yeah, that's the big story there. Did that answer your question, A to Z Ice? I hope so. How about traffic egress? Yeah, great question — stay tuned for that in Linkerd 2.12. Let me actually show you the roadmap doc, which just got updated: github.com/linkerd/linkerd2. So, one, this is the place to go if you are curious about what Linkerd is doing, or you like Linkerd and you wanna give it a GitHub star (it's always nice to have), or you wanna get involved in the project. It's also where we have our roadmap, which we're gonna find right here. If you're trying to learn more about what's planned in Linkerd, come check this out, ask questions, raise issues if you're looking for a specific bit of functionality. And yeah, we'd appreciate that.

But sorry, Sonosingle, I blew past your question. Stay tuned for Linkerd 2.12 for what we do vis-a-vis ingress or egress. All things like egress are the province of client-side policy. Let me go to this diagram real quick. Right now, the decision happens here, as to whether or not to accept the request. Client-side policy will shift that, so the decision occurs here: whether or how to make a request. Egress is essentially, should I go from in-cluster to something off-cluster — a third service, whatever, out here — and that would be an egress decision. So I can't tell you exactly what's going on, because I don't know and things are still in development, but Linkerd 2.12 is the release that will support things like egress, or allow you to build things like egress. I hope that was useful.

Antonore asks if there are good first issues, and we absolutely have issues that are marked good first issue. So we'd love to see you get involved. If you're thinking about contributing and you don't know where to start or what to do, come join us on Slack — that's where you can talk to folks like the maintainers, and there's a contributors channel for folks that have contributed to Linkerd and are looking to go further down that road. Yeah, that's that.

I released an article today — oh, not this one — where one of our engineers talks through, step by step, everything you would need to lock down traffic inside of a namespace. So you can go a little bit further with it if you like, and let me share this, so you can learn about locking a namespace down and how to make that happen. It'll talk a lot more in depth about the server authorizations, servers, and policies. Oh, right, and our docs — sorry, one more. If you're looking at this and you're like, geez, I wish someone could explain this but way more slowly, go into our docs and you'll see the authorization policy page, which talks about everything we showed you today and links to some more in-depth stuff in the policy reference. These should pop up in the chat in a second. It's not super fun to read on stream, but essentially you've got a couple of different options for the default policy, on your cluster or on a namespace: do I wanna allow things that are unauthenticated, or only authenticated things — and if you go authenticated, remember, your pods won't start if they have health checks unless you add an authorization policy that lets that admin port come through; in-cluster authenticated or in-cluster unauthenticated, which is, if the traffic originates in your cluster, fine, and if it doesn't, sorry, we're tossing it; or deny, which is what I would generally use.
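For reference, the documented values for that default policy, sketched as comments around the annotate command:

```sh
# config.linkerd.io/default-inbound-policy accepts, per the 2.11 policy docs:
#   all-unauthenticated      allow everything (the shipped default)
#   all-authenticated        allow only meshed, mTLS-authenticated clients
#   cluster-unauthenticated  allow anything originating inside the cluster
#   cluster-authenticated    mTLS clients originating inside the cluster only
#   deny                     drop anything not explicitly authorized
kubectl annotate ns emojivoto config.linkerd.io/default-inbound-policy=deny --overwrite
```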
Deny just denies all traffic all the time, and you only allow what you wanna allow, right? Cause the best way to do something like policy is to be explicit about what you allow; don't worry about individual denials, cause it's way harder and less secure — I guess it's just those two, it's harder and worse, so don't do it. Yeah, so those are the docs. I guess really I'm waiting on questions from folks at this point.

One question I have for Jason is — I know that Linkerd is kind of in the business of routing things around, right? I've used Linkerd a little bit, but not enough to know whether it stores state for each request, or how to go about debugging or building policy. So, do you know if there's anything upcoming, like a tool that would help — let's say I'm taking an application and trying to add a service mesh to it, and say I'm not able to start with deny — is there anything coming where I could actually take a look at a potential policy and see, like, okay, this is going to block 16 of 100 requests that it gets? Any iterative approach to adding on policy, or do you have any recommendations on some good things to be mindful of as you write that policy?

Yeah, so, tons of stuff. One: if you're gonna start doing policy, at least consider setting up Buoyant Cloud — we're actively building features to make policy a lot more simple and straightforward. In general, one of the nice things about Linkerd — and now that we've got traffic again, we can see our map — is that it gives you a lot of tools to debug your application. So I've got EmojiVoto, and even though it's working now, it's actually secretly broken, right? And we can see that, because when we look at the namespace, or if we look at all our namespaces, we see that the success rate of EmojiVoto is down here at 95%, and there's no reason we should be seeing failures. So we can go in, look at our individual deployments and see their success rate, dive into an individual deployment, see who it's talking to and who's talking to it, and then go snoop in on live calls between these components.

So if I wanna know, what do I need to authorize on what port? Well, just going to the Linkerd dashboard, I know that it makes calls to emoji, right, and on what paths, and it takes calls from vote-bot and it takes calls from the Ambassador service — or the Ambassador Edge Stack, their ingress. Those are the components that talk to it. And if you go slam this URL a little bit, you'll see it pop up live. So you've got some insight there — all the tooling you need to track it. And then we also had a webinar just yesterday, which is recorded, that talks about running Linkerd in production and how we debug things. I'm grabbing the link right now and I will send that into the chat. Sorry, I have to ask someone, because I didn't think to get that available in advance. So, a couple of different pathways.
One thing to note, folks: if you're looking at this and you're like, gee, I love this data, but I hate web interfaces, I hate using a GUI — cause I used to hate GUIs all the time — you can also get all this data on the terminal. And Catherine tried to send it but she's unable to, so I'm trying to grab the link here. There we go, got it. Folks, if y'all don't mind posting it — this is just a webinar that dives deep into debugging, right, and how you would do some of this live. But again, if you are using Buoyant Cloud, it's really easy to see when you have a policy violation error, and you'll get alerted to that fact.

All right, so — I'm sorry, A to Z asked: is there any way in Linkerd to analyze traffic which is not part of the K8s cluster but accessed within the pods? You're gonna have to clarify that, cause I don't quite understand it; I'm sorry for that. Okay. So, thank you, Cloud Native Foundation, for posting that link in the chat. That's a recording of our production webinar, and there's a whole webinar series if you're interested in learning more about service mesh in general and Linkerd in particular — cause obviously that's what we talk about, we're the Linkerd folks. Yeah, it's a great place to dive in.

Anyway, just cause we've got a few minutes, let's go see what's wrong with EmojiVoto. How does that sound? Yeah, that sounds fantastic. All right, what the heck? So I've got my success rate, I've got the POSTs. I can also, again, filter on paths on these individual calls and see what the success rate is. Well, looks like when web calls the voting deployment and makes a POST to this path, it has a 0% success rate. It's tried to do it 34 times since we've been watching, and it never works, right? We can go look at voting, you know — if you're wondering how I do this so fast, I've debugged this app like 500 times at this point; I still haven't fixed it, though. So what does that say? We can go look at voting. It sees all calls coming from web, and again, it sees nothing succeed when we call VoteDoughnut.

We can get a little further: we can tap the traffic. More specifically, I'm just gonna change this path here. What I'm gonna do is look in the emojivoto namespace, from web, and check anything that hits the voting service, and see what live calls I can capture. So this is just linkerd tap. We've got a couple calls coming through, so let's stop it here. This is kind of neat. You'll notice that the calls to VoteDoughnut are failing, right? Yet the HTTP status code is 200. That's because web is talking to voting as a gRPC call: the connection works from an HTTP perspective, but gRPC is throwing up an error code. So this isn't an unknown status — it is an error called "unknown". That's unfortunate nomenclature, but that's what it is. So we're seeing a gRPC error of "unknown" pop up from this service when we call this path. So I've got more than enough information to say: hey, person that makes the voting service, I can't vote on the darn donut. I can go confirm it — and y'all can do it as well — and hit our very misleading error, because it's not a 404, it's just actually a generic failure. But you can try and vote for donut, and in spite of it being the best emoji option, it is sadly underrepresented in the voting.
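The terminal equivalents of that dashboard digging look roughly like this — a sketch using the viz subcommands, not the exact invocations from the stream:

```sh
# Golden metrics (success rate, RPS, latency) per deployment
linkerd viz stat deploy -n emojivoto

# Who talks to whom, and with what identity, on each edge
linkerd viz edges deployment -n emojivoto

# Per-route stats from web to voting (needs a ServiceProfile for real routes)
linkerd viz routes deploy/web -n emojivoto --to deploy/voting

# Snoop on live calls from web to voting, like the tap above
linkerd viz tap deploy/web -n emojivoto --to deploy/voting
```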
So we've got a problem, and we can fix it, and we know essentially where to go to fix it. If we go explore the emojivoto voting service, we're gonna find the problem pretty quickly. Yeah, donut gate indeed. Thank you, Taylor.

So that's a little bit of debugging. Again, when it's a policy problem, you're gonna see traffic drop precipitously for that service, because no one's gonna be allowed to talk to it. That's gonna be the heart of it. And that debugging-in-production session will give you a lot of the tools you need to dive a bit deeper. And there's lots of material. If you haven't tried Linkerd, try it — I think you will like it. It's extraordinarily easy to use, it doesn't require you to transform your app, and it gives you a ton of benefits right out of the gate. But if you do like it and you're looking to go to production, there's a lot of tooling out there to help you, including a Buoyant production runbook — hold on, I know I've got that one — where we talk about what you need to do, or what you need to think of, before you take Linkerd to production. Including things like answering the question of: do I really want to run an in-memory Prometheus and Grafana, or do I wanna externalize that, and how am I gonna handle that? What am I gonna do about things like my Prometheus data, and some other interesting tidbits. So I'd recommend that highly. And that's — oh, thank you for already sharing that, y'all are great.

Okay, yeah, that's the bulk of it. You can see our policies taking effect now. If you wanna see it in Buoyant Cloud: right now I get the TLS status by port — are we using mTLS, or plain text, or application-level TLS, what identity is involved, and what policy is taking effect — so we can see why it's working at any given time. I really don't wanna hammer on that, though. If you have questions, I've got a whiteboard, I've got time, and we've got an environment to break, if you all like.

I really do like how straightforward Linkerd is, both to get installed and to turn all of the knobs, so to speak, within the configuration. Are there add-ons, or is it really just a service mesh and then, kind of batteries-not-included, you can bring in other things at a later point in time? What does that look like?

Yeah, great question. So, if you get the chance, check out this dashboard link — I'll send it again. What you're gonna see here is a bunch of things going on. I've got Flux — so if you know GitOps, Flux is a tool in the GitOps space — and I'm using Flux to install this. I'm using the Ambassador Edge Stack to route traffic to it. I can use Telepresence or a number of other great toolings. And the nice thing is, Linkerd is a well-behaved CNCF project. We hit graduated status back in July, which means we hit the highest tier of maturity for an open source project, on par with Kubernetes, Prometheus, all sorts of other things. But we behave in a way that allows us to natively integrate. I did a demo for the folks at Ambassador on how you integrate with Telepresence: well, I install Telepresence, I add it to the mesh, and everything just works, because the integration point is that native Kubernetes service object. We don't expect you to do something special to talk to apps. Same thing with Ambassador.
The integration with Ambassador just works: I add Ambassador to the mesh, and my traffic routes through. If you look, I've got TLS on all the websites that you're visiting, and that's because the Ambassador Edge Stack generated a certificate for me with Let's Encrypt — again, I'm using their native objects and letting their flow happen. I've got Flagger in here, so I could do a progressive delivery rollout against one of my applications, and I could do it at the service mesh level or at the ingress level. I have options, because no one's constraining me to behave a certain way, and we don't have expectations for the projects we work with. We do one thing, which is the service mesh, and we do it well. Linkerd is not an ingress. It is not an API gateway. It is just a service mesh, and that's what we do.

Sonosingle asked: is it a good idea to inject Linkerd into our ingress controllers? Yes, it is. It's harder, right? You need to pay attention, and there's documentation for every ingress on the Linkerd docs page. I'll tell you that I know for sure Emissary works great, and NGINX has a really easy way to do it. There's one thing you'll see when you read our docs, which is that we talk about ingress mode versus regular mode. As of 2.11, we are hoping to move away from ingress mode. So we want you using an ingress in its native mode, and not trying to have Linkerd do anything special — but please check the documentation for your specific ingress. That would be my recommendation. Yeah, any other questions?

Awesome — none that I can think of. If you have any other questions, please feel free to throw them into chat; otherwise we can get things closed out. Thank you so much, Jason. This has been kind of fun, to find out all about this and really dig in and see, you know, what was going on with that donut vote, too. That's good to know, and I'll have to keep it in the back of my mind as I spin this up for demos myself. It's a great broken application that you can use; it's a lot of fun. Speaking of Telepresence, if you go check out the Telepresence folks' getting started guide, they'll show you how to fix it. So you can actually learn how to resolve the problem, if you like, using Telepresence. And yeah, that's all I've got. Long story short: add the proxy and get all the value of being in the mesh. Thank you, Sonosingle. Well, hopefully those ideas mesh for everyone.

Perfect, wonderful. Well, with that, I guess we can close out. Thank you so much, Jason, and thank you everyone for joining the latest episode of Cloud Native Live. It was great to hear from Jason today about Linkerd 2.11 and all of the new policy features, as well as the broken application — DonutGate — and what's upcoming for 2.12. Really enjoyed all of your interaction and all of your questions today. Next week we will be off due to the winter holidays, and we'll be kicking off again in the new year. So thank you so much for joining us today; we'll see you soon. Jason, do you have any parting wisdom or closing remarks to share? No, I don't — I'd just love to see you in the Linkerd Slack. Please feel free to reach out to me. I'd love to make you successful with Linkerd, and I'd love to help you on your journey to production with Linkerd. Awesome, awesome. Thank you so much again, Jason. The only thing I have for y'all is: let's keep production boring. Let's figure things out.
Wishing you all a wonderful rest of your days, weeks and months as well. Thank you all so much for joining us. We'll see you later. See ya.