Welcome, friends, to Cloud Native Live, where we dive into the code behind cloud native. I'm your host today, Whitney Lee. I'm a CNCF ambassador and a developer advocate at Tanzu by Broadcom. Every week we bring new presenters to showcase how to work with cloud native technologies. We'll build things, we'll break things, and we'll answer your questions. Flynn especially has promised to break some things today. So we have Flynn here with us to talk about Beyond Kubernetes with Linkerd 2.15 and SPIFFE. But before we get to Flynn, I want to remind you that this is an official livestream of the CNCF, and as such it's subject to the CNCF code of conduct. So basically, please don't add anything to the chat that would be in violation of the code of conduct. Be respectful to your fellow chatters, to me, to Flynn. It's going to be great. Everyone's kind — I believe this. So far, in my experience hosting Cloud Native Live, that's been correct. Anyway, folks who are joining us live, please say hello in the chat and say where you're from. I love that we're all part of a global community; I think it's the coolest thing ever. And as always, if you have questions during the presentation, please do post them to the chat and we'll get to them as soon as we can. And with that, I'll hand it over to Flynn for today's presentation. Hi, Flynn.

Hi, Whitney. I am Flynn. I'm a tech evangelist, really for Linkerd, working at Buoyant. I have a few slides and then we're going to go into the demo, so everybody pray to the demo gods. Most of the time I'm up here, I'm fairly confident I'm going to be building things; every so often I'm doing one where breaking things is more of a possibility than usual. So we'll see how it goes. We have a new ebook out about zero trust — you should go download that one. We have a certification course for Linkerd; you should go do that as well. It's pretty awesome. You can find it at buoyant.io, or you can just scan the QR code. Oh yeah, I'm awake — I literally could not remember "QR code" for a second there. It's going to be a great stream. Oh yeah, it's going to be awesome.

But right now, let's talk about mesh expansion. We're going to do a very quick discussion of what I mean by mesh expansion, then we're going to go straight into the demo and find out what gets built and what gets broken. Most of the time I say, oh yeah, you can follow along with this and that and the other thing. This time I'm just going to say: you can't really do that yet. The source for this workshop is going to be available in the Service Mesh Academy repo. This particular demo is pretty early — in fact, I and a couple of the engineers have been rampaging through it up, right, left, down, and sideways earlier today — so we're not at the point where we've posted the source code yet, but it will soon be available at buoyant.io/service-mesh-academy. There's a bunch of awesome stuff in there; this particular one will show up as the 2.15 mesh expansion workshop. The other thing I'll point out as we go into this is that Linkerd 2.15 has not yet shipped. So today we are using linkerd edge-24.1.3. That's the most recent edge release — it came out last Friday, I believe — and it's the first edge release that has all of the mesh expansion stuff in it. So it's pretty cool. All right. Mesh expansion.
What we're talking about here is just the ability to allow workloads that are not in Kubernetes to participate in the service mesh, along with the things that are in Kubernetes. Linkerd is gaining this as of Linkerd 2.15, which is coming out soon. I don't want to hold anybody to a date, but I believe that's a small number of weeks away, so keep an eye out for it. We're on the bleeding edge of technology right now — so, so bleeding today. The other thing I meant to put on this slide is that even when you're running non-Kubernetes stuff with Linkerd, you will still need a Kubernetes cluster, because that's where the control plane lives, and the sources of truth for what services are around, things like that, live in Kubernetes if you're doing this with Linkerd.

So the way this looks with Linkerd is basically that you start off with a cluster like normal. You've got all the Linkerd sidecars like you'd normally see. But what we want to do is arrange it so that some of those are no longer inside your cluster — in fact, they're just random Linux boxes. They're not even other clusters, like you've seen in the multicluster version of this slide. You might want to do this for several reasons. Maybe you have some legacy workloads that are off not running in Kubernetes, but they need to start interacting with things in Kubernetes. Maybe you're out on a factory floor and you're trying to do edge computing on Raspberry Pis or something — the Linkerd proxy is lightweight enough to do that, which is really cool; I'm hoping to show that off in a few weeks, actually. But this is a thing that shows up in a lot of different industries.

The way we do this under the hood is that we run a Linkerd proxy and go mess with iptables out on the Linux boxes, and we basically do all of the things Kubernetes would normally do for us. That messing with iptables, or its equivalent, is a thing that happens in the linkerd-init container when you're using normal Linkerd, or it's a thing we use the CNI plugin to do. When we don't have Kubernetes, we have to do it more by hand. But it's the same code running in pretty much the same way — just a lot more "oh, crap, Kubernetes isn't doing this for us, so we have to handle it all on our own." And we're going to talk a fair amount about that during the demo part of this.

There are some challenges with this, obviously. The biggest one is the root of trust, and identity in general. Linkerd in Kubernetes relies on Kubernetes service accounts for identity, and we don't have those outside the cluster — so, in fact, we're going to be using SPIFFE for identity outside the cluster. The network itself is an interesting challenge: you need good network connectivity. DNS — we'll talk a little more about that; we are massively cheating when it comes to DNS for today's demo. And also, we currently assume that you're running on Linux. Although it's worth noting that I'm doing this demo for you on an Apple Silicon Mac, so almost everything I'm running here is ARM64 code, and I can literally take a bunch of these same containers and go run them on a Raspberry Pi, which I have done. You'll see more of that in February, so stay tuned.

Yeah, we have a question here: could you tell us what is the difference between cluster mesh versus Kubernetes service mesh? In my mind, those are mostly the same. There is a thing out there, Network Service Mesh — was that the thing you were asking about? There is no single Kubernetes service mesh.
There are several service meshes that operate in Kubernetes, and we loosely talk about those as running in the cluster, or being a mesh in a cluster. Most of them, definitely including Linkerd, also have a way to extend a mesh so that it encompasses multiple Kubernetes clusters within one mesh, which is a little bit like what we're doing now — but with clusters, as opposed to just random Linux boxes. Dave also comments that he and I have never done a demo where we did not chat massively about DNS, which is kind of sadly true and kind of entertainingly true. It would be lovely if, in the 21st century, DNS were not a thing we needed to constantly be thinking about. But guess what?

All right, that's all I had for slides. It is now demo time. Like I said, everybody pray to the demo gods and let's see what happens. Hey, that's a good sign so far — the screen switch happened. I'm going to start by running a k3d cluster on my laptop. I am explicitly telling it what Docker network I want it to use, because I'm going to spin up three Docker containers to act as my non-Kubernetes workloads on the same Docker network. But all of this is currently running on my Mac. Most of that is because it makes it a lot simpler to put this in a form where other people can try it later, as opposed to saying, okay, first you need a Macintosh and 47 Raspberry Pis. So let's go ahead and get that running. The rest of the parameters in there are mostly boilerplate for me, so don't think too hard about them; I tend to turn off Traefik and the metrics server just to shed a little bit of load, because I'm rarely using those things.

Next up, let's make sure that I have the correct version of Linkerd. I do, in fact, have linkerd edge-24.1.3. There we go. So we can go ahead and install Linkerd. Linkerd's installation happens in two stages right now: the first one is the CRDs, and then we install Linkerd itself. For a lot of demos, this would be one line — linkerd install piped to kubectl apply -f -. We're not going to do that this time, because one of the things that will be very important later is that SPIRE have access to the same certificate that we're using as the trust anchor for Linkerd, so that our trust hierarchy works. We'll talk more about the ways in which this is cheating shortly. But yeah, in this case it's very important that I install using certificates I have access to, with both the public half and the private half of the certificate.
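If you want a rough idea of what those commands look like, it's something along these lines — bear in mind the cluster name, network name, and certificate file names here are my placeholders, not necessarily what I'm running in this demo:

    # Create a k3d cluster on a Docker network we control, so that plain
    # Docker containers can join the same network later. Traefik and the
    # metrics server are disabled because we don't need them.
    k3d cluster create mixed-env \
        --network mixed-env-network \
        --k3s-arg '--disable=traefik@server:0' \
        --k3s-arg '--disable=metrics-server@server:0'

    # Install Linkerd in two stages: CRDs first, then the control plane.
    # Supplying our own trust anchor and issuer is the important part,
    # because SPIRE will later sign identities with that same trust anchor.
    linkerd install --crds | kubectl apply -f -
    linkerd install \
        --identity-trust-anchors-file ca.crt \
        --identity-issuer-certificate-file issuer.crt \
        --identity-issuer-key-file issuer.key \
      | kubectl apply -f -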
I think it might be worth quickly saying what SPIFFE and SPIRE are, for the uninitiated. You want me to? Yeah, okay. So you can tell that I am the wrong person to be giving a quick sound-bite version of SPIFFE and SPIRE. SPIFFE is an acronym for something — I don't remember what it stands for; you can look it up. It basically is a way of... you know what, actually, let's back up a moment. A core thing that you run across constantly, if you're trying to do production-level microservices or production-level networking in general, is: how do you know that the entity you think you're talking to is really the entity you are talking to? This is, at its core, a problem of identity. It also gets into authorization, and it gets into authentication. It is a really, really nasty problem.

In Linkerd, we use mTLS to provide an identity that we can convey across the network, so that you can make reasonable decisions — including authentication and authorization — based on that identity. In Kubernetes, we also have these things called service accounts that we can use as a basis for that. But if you're not in Kubernetes, you don't have that. SPIFFE is a project that's actually fairly old at this point, I think — it's a graduated project; it's been around for several years. Its purpose in life is to provide a mechanism where you can create identity without relying on things like Kubernetes service accounts. This is a phenomenally hard problem. We at Linkerd are using SPIFFE for non-Kubernetes identity precisely because it is a phenomenally hard problem and we did not want to solve it on our own. It's much, much easier to rely on the work that SPIFFE has already done to tackle this really difficult problem. SPIRE, I believe, stands for the SPIFFE Runtime Environment, or something along those lines. SPIRE is a collection of libraries and processes and such that implement the protocols SPIFFE created to deal with this problem. We are using both of those here: SPIFFE as the basis of identity, and SPIRE — literally the released SPIRE binaries downloaded off the SPIRE site — to implement this stuff. Again, because we at Linkerd are fond of not reinventing wheels. I think that's brilliant. Yes — when we do have to reinvent them, we're perfectly willing to, but when we don't, yes.

Is entity identity a real problem when we are in a closed environment? Absolutely, because there's actually no such thing as a sufficiently closed environment for it not to be a problem. There are a lot of situations where you don't care, which is kind of like it not being a problem, but it actually is a problem everywhere. It's just that in a lot of environments we decide it's not a problem we want to worry about.

Whitney, you will know better than I do if there is a recording. There is a recording on the CNCF YouTube channel — if you go under the Live tab, you'll be able to find it easily. Awesome. And for the demo tooling, I will put a link to that in chat; I should have remembered this was going to be a thing I'd get asked. I have put a link to demosh, which is the tool I'm using for all of this.

Okay. So we have Linkerd running. We've run linkerd check, and it says everything is okay. The yellow Buoyant warnings are because this particular version of the Linkerd CLI knows how to talk to the enterprise version of Linkerd, which I am not using — it's just capable of doing it, and since it's capable of doing it, it's going to bug me about not being commercial. So we have Linkerd running. We can create a namespace in there, and we'll put in a workload. This workload is pretty normal. We have a service called k8s-workload that will deal with workloads tagged with the my-workload label in the cluster, and we have a deployment that carries those labels, so it lines up with that particular service. This is running Buoyant's bb workload — I actually don't know what bb stands for; somebody wrote it a while ago — but the way we're using it right now, it's basically just an HTTP server where you hit it and it replies "hello from $POD_NAME". And that's all it does. It's a very profound workload. So we're going to go ahead and apply that.
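If you want a rough feel for what that manifest looks like, it's something along these lines — don't hold me to the exact names; the namespace, label scheme, ports, and bb arguments here are my approximations, not the exact file from the workshop repo:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-workload            # assumed name
      namespace: mixed-env          # assumed namespace
    spec:
      selector:
        app: my-workload            # assumed label scheme
        location: k8s
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-workload
      namespace: mixed-env
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-workload
          location: k8s
      template:
        metadata:
          labels:
            app: my-workload
            location: k8s
          annotations:
            linkerd.io/inject: enabled      # put the pod in the mesh
        spec:
          containers:
          - name: bb
            image: buoyantio/bb:latest
            # bb's "terminus" mode is just an HTTP server that echoes a message.
            args: ["terminus", "--h1-server-port", "8080",
                   "--response-text", "hello from $(POD_NAME)"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            ports:
            - containerPort: 8080
    EOF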
I will get the link to the Zero Trust book, but I'm going to continue with the demo and grab that later. Okay. We're also going to start a pod that we can go and curl from. This is a really boring pod, so I'm not really going to show how we built its image or whatever — we just took the Debian Bookworm Slim base image and added curl and jq, so that we could do more than just sit on it and have to install things later. We're going to use that to create a pod. I'm not even using a Deployment here; it's just a pod whose name is curl-meshed. It's in our mixed-env namespace, and we're explicitly going to associate it with a service account named curl-meshed as well. It is running sleep so that I can kubectl exec into it and run other things. Like I said, we're living on the bleeding edge. All right, give that a chance to come up... and it is now running. So we should be able to run a curl. And oh, look — we can talk from our meshed curl pod to our Kubernetes workload, and it works. So far, this is all just setting the stage: plain old, boring, using Kubernetes.

Okay. Now we need to talk about the not-Kubernetes part. This will take a little bit, so bear with me. The first thing is that we need SPIRE and the Linkerd proxy and our workload and certificates and other stuff in the container. And as you can see, in whatever color that is — magenta-ish — this is not the way you would do this in production. The way you would do this in production is you would have only the SPIRE agent in your container, and it would talk to some other SPIRE server, and then there's this whole SPIFFE thing called attestation, which deals with the problem of knowing that it's okay to do this. We are going to ignore all of that for this demo, because this is a demo. So I've already built this image, and it's pretty much the same thing, right? Take Bookworm Slim, slap on curl and jq, then throw on the Linkerd proxy and the SPIRE bits, and dump it all in one place. I'm not really going to go into the Dockerfile right now, because we have enough stuff to talk about.

We will need our Docker containers to route to the pod CIDR range through the node that we have running. This is kind of the basic thing you have to do if you want Docker containers on a flat network with a k3d cluster to be able to talk to things in there. We need the node IP for that, which we can get by digging into the output of kubectl get node. We also need the pod CIDR, which we can get, again, by digging through the output of kubectl get node. So in this particular case, our node is running on 172.27.0.3, and we will need to route through that to reach 10.42.0.0/24. And we will do this through the clever conceit of literally running ip route add in the Docker container.

Now we get to DNS, the root of all evil or something like that. We are going to cheat. The reason this is a challenge is that, on the container, you would like to be able to refer to services in the cluster using their normal DNS names, which requires you to have access to the cluster's DNS server. We are just going to tell the container: hey, container, use the cluster's DNS server — just use it for everything. And that works okay for the demo. This is not how you would do it in production; in production, you would probably set up a caching forwarder and do this the right way. I may have the bandwidth to make that setup for the February demo, but nobody hold me to it — because then again, I may not. We will see. It is easy to get the DNS IP, basically by grabbing the service address for the kube-dns service. It is weirdly not easy to get the service CIDR range, because that's basically just a thing you set when you start up Kubernetes. But you will notice that the kube-dns IP we just found is in 10.43.0.x, which is part of the service CIDR, so we will further cheat by allowing our container to just route directly into that.
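Spelled out as commands, that digging looks roughly like this — the variable names are mine, and I'm assuming a single-node k3d cluster:

    # Internal IP of the (single) k3d node -- the external containers will
    # route pod and service traffic through this address.
    NODE_IP=$(kubectl get node -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

    # Pod CIDR assigned to that node (10.42.0.0/24 in this demo).
    POD_CIDR=$(kubectl get node -o jsonpath='{.items[0].spec.podCIDR}')

    # Cluster IP of kube-dns -- the demo cheats and points the external
    # containers straight at this for all DNS lookups.
    DNS_IP=$(kubectl get svc -n kube-system kube-dns -o jsonpath='{.spec.clusterIP}')

    echo "node=$NODE_IP podCIDR=$POD_CIDR dns=$DNS_IP"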
And yeah — come back in February, and hopefully we'll have less of this cheating going on. So finally, after all that, we also need our Docker container to actually run the Linkerd proxy, which requires setting up some iptables stuff. We are going to rely on talking to the DNS in the cluster. We are going to rely on routing as we described earlier. We are going to rely on having a SPIRE agent and a SPIRE server in the container, which is cheating. And clearly this is not something we want to do by hand, so there is a bootstrap script in there. It is weirdly short for the amount of magic that's in it. It's going to run the SPIRE server, and then it's going to run the SPIRE agent. It's going to use spire-server entry create to actually create the identity that we're going to feed over to the Linkerd proxy. We start the workload running before anything else, because in this case it's easier to run the workload in the background than to run the proxy in the background, in terms of knowing and controlling what's going on. We tweak iptables, and then we start the proxy running.

And this is what all of that looks like. Here we run the SPIRE server. Here we get a token that the SPIRE agent can use to connect to our SPIRE server. Here we fire up the agent. Here we go through and create an identity, telling it what parent ID to use, which we set up here. After that, we start our same bb workload running, but we tell it different response text so we can tell the difference. The endpoint-instance thing is because, if we wanted to get really fancy, we could set up multiple endpoints that all have the same identity — and I didn't do that for this demo. We set up iptables. I'm not going to go through those rules in detail, because this is literally what happens in the linkerd-init container — you can go dig through the source code; it's a bunch of weird stuff that, if you really enjoy iptables, you will really enjoy reading about. For the rest of us, moving on: there are four specific environment variables we have to set to actually run the proxy. We need to tell it what identity it's going to use. We need to tell it its DNS name, effectively. We need to hand it a specific JSON blob so it can tell the Linkerd world, "these are the outbound policies I'm interested in, please give me the right ones." And we need to do the same sort of thing so it can talk to the Linkerd destination controller correctly. After all that, we use run-proxy to actually run the proxy. I'm not going to look at run-proxy — it's two pages of setting other environment variables to boilerplate values that don't need to be edited, and then literally running /opt/linkerd/bin/proxy.
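For the curious, the SPIRE half of that bootstrap script is roughly the standard SPIRE CLI dance. The config paths, trust domain, and SPIFFE IDs below are placeholders I'm making up for illustration — the real values will be in the workshop repo — and the iptables and proxy steps are only sketched as comments, because those environment variable names are the part you should read from the actual source:

    # Start the SPIRE server (cheating: server and agent live in the same
    # container for this demo).
    spire-server run -config /opt/spire/server.conf &

    # Mint a one-time join token the agent uses to attest to the server.
    # (The output looks like "Token: <uuid>", hence the awk.)
    TOKEN=$(spire-server token generate \
              -spiffeID spiffe://example.linkerd.local/agent | awk '{print $2}')

    # Start the agent with that join token.
    spire-agent run -config /opt/spire/agent.conf -joinToken "$TOKEN" &

    # Register the workload's identity, parented to the agent we just joined.
    spire-server entry create \
        -parentID spiffe://example.linkerd.local/agent \
        -spiffeID spiffe://example.linkerd.local/external-workload-1 \
        -selector unix:uid:0

    # Then: start the bb workload in the background, set up iptables the way
    # linkerd-init would, export the proxy's four environment variables, and
    # run the proxy binary (see run-proxy in the workshop repo for details).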
So there we go. Everybody cross your fingers. We will actually run three of these. We need NET_ADMIN so we can go mess with iptables. We need to use the same Docker network that our k3d cluster is on, or networking falls apart. We're going to give each one a name, because I like having names instead of IDs — it makes them easier to type. We'll point it at the DNS IP that we found earlier. The workload-instance and endpoint-instance variables we set so the bb thing will do the right thing and we end up with distinct identities. And finally, we're going to use the my-vm image, as mentioned before. Then we run two ip route add commands. By the way, if you're curious, we could have done this by passing variables and having the bootstrap script set up the routes as well. Realistically, if you were doing this with hardware instead of random Docker containers, this would just be part of setting up your network when you set up your hardware, so it felt like it made more sense not to have it be part of the bootstrap script. So this one is the pod CIDR, and this one is the service CIDR. We're going to do exactly the same thing for workloads 2 and 3, and since it is exactly the same thing, I decided to just put it into a shell-script loop.

So there we go — we've got all three of those running. This is a royal pain to read on this screen, but here is external workload 3, external workload 2, and external workload 1, all of which are running. And it took me nearly a minute to go through explaining what was going on after I started external workload 1. You also get to see the k3d node containers, rather, and you can see that, yeah, I used Docker BuildKit. Living on the edge or something like that.

Okay, the last thing we have to do is tell Linkerd where these external workloads are. There is a new resource in Linkerd 2.15 called an ExternalWorkload resource. It's kind of analogous to a deployment or a pod, in that you're saying: here is a thing — you don't really manage it, but you can go and talk to it. We will find the IP address of our workload 1 container, and then we'll use this funky envsubst command to substitute some variables in a template, and we end up with this ExternalWorkload. It has the name external-workload-1. It lives in our mixed-env namespace. It has some labels on it. This is the newest bit here, where we are explicitly telling it: this is the TLS identity that you should be using in the mesh, and this is your server name. We also need to tell it where the external workload lives, so that when somebody tries to talk to something through this external workload, it gets vectored the right way. I should really have said this is more like a service than a deployment; I'm not sure what I was thinking when I said deployment. The other thing that's a little bit weird here is that you might not be accustomed to seeing a status in YAML when you're creating things. The reason this is important is that the part of Linkerd that manages external workloads actually populates DNS and EndpointSlices and things like that, and it requires seeing a condition of type Ready with status "True" show up before it will allow people to talk to that external workload. We are also making a service — oh yeah, this is why I said it was more like a deployment — we're making a service that references this particular external workload as well, so that we can go and talk to it.
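To give you a rough feel for the shape of it, here's approximately what that template expands to. Treat the API version, field names, identity, and server name here as my reconstruction of what's on screen, not the authoritative schema — check the workshop repo and the 2.15 docs for the real thing:

    # IP address of the workload-1 container on the shared Docker network.
    WORKLOAD_IP=$(docker inspect \
        -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
        external-workload-1)

    kubectl apply -f - <<EOF
    apiVersion: workload.linkerd.io/v1beta1     # approximate group/version
    kind: ExternalWorkload
    metadata:
      name: external-workload-1
      namespace: mixed-env
      labels:
        app: my-workload
        location: vm
        instance: external-workload-1
    spec:
      meshTLS:
        identity: "spiffe://example.linkerd.local/external-workload-1"      # placeholder
        serverName: "external-workload-1.mixed-env.example.linkerd.local"   # placeholder
      workloadIPs:
      - ip: ${WORKLOAD_IP}
      ports:
      - port: 8080
        name: http
    status:
      conditions:
      - type: Ready          # the controller wants to see Ready=True
        status: "True"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: external-workload-1
      namespace: mixed-env
    spec:
      selector:
        instance: external-workload-1   # selects just this one external workload
      ports:
      - port: 80
        targetPort: 8080
    EOF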
There's a question: shouldn't the identity be part of an SVID? How does the identity get into the SVID X.509 cert? The short version of all of that is that the identity got created by the SPIRE agent running in this workload, and Linkerd also knows how to talk to SPIRE agents. That may or may not completely answer the question, but that's as much as I want to dive into right now, because we could talk about it for a very long time. My apologies if it doesn't — if there's more, maybe we can pick it up in chat or on Slack later.

All right. So we're going to apply that ExternalWorkload, and do exactly the same thing for the others. Another question: does the SPIRE agent generate self-signed certificates? The SPIRE agent actually generates certificates that are signed by the parent ID from the SPIRE server. Did that make sense? When we were looking at the bootstrap script, you saw it start the SPIRE server and then the SPIRE agent. The SPIRE server has a certificate that it uses to sign all of the identities it hands off to the agent, and we deliberately set this up so that the server creates identities signed by the same certificate Linkerd is using for its trust anchor — which is important for mTLS to work. We have another question: if we use mTLS, can we use HTTPS for this configuration? Everything that's happening here is HTTPS under the hood, with mTLS. The places where I'm running curl with plain HTTP, I'm deliberately running curl with HTTP, because I want Linkerd to be doing all of the advanced routing and things like that, which it can't do if you throw an encrypted connection at it. Basically, one of the weird things that happens when you use service meshes is that your application workloads should use cleartext within the cluster; they should let the service mesh handle the TLS for them, so that it gets to do things like retries, per-request load balancing, circuit breakers, all that kind of stuff — because it controls things at the protocol level. If you're slinging around HTTPS connections on top of the mesh, it pretty much ends up having to treat them as opaque TLS connections.

So: we have a Kubernetes cluster with Linkerd running. We have one workload running in the Kubernetes cluster. We have three containers outside the cluster that are also running workloads. They have gotten identities with SPIFFE and connected to the service mesh running in the cluster, and we have made our ExternalWorkloads for them. Given all that, we should be able to actually talk to the things running in my Docker containers. In here we have services called k8s-workload and external-workload-1, -2, and -3. External-workload-1, -2, and -3 refer to specific external workloads, so each will only talk to its one container. Here's one of them — this is our external-workload-1 service. It's selecting for things labeled as external workload 1, and it has a cluster IP and all that good stuff. So if we go into our meshed curl pod and ask it to talk to external-workload-1 specifically... oh, that makes me very sad. That should have given us an actual answer. Okay, moving on from workload 1. Workload 2 gave us an answer. Oh, good. You know what, let's see if workload 1 has woken up now — workload 1 might just have gone out to lunch or something. Do you remember when I said we were going to break things? You promised we would. That was the first thing we broke. Awesome. Let's see if it's alive enough to talk back inward. Okay, it's alive enough to talk back inward. I don't know what's going on with workload 1; I'm not going to worry too much about it. Here you'll notice I used docker exec rather than kubectl exec. Up here, I'm using kubectl exec to run a curl command in my meshed curl pod in the cluster, talking outward to an external workload. Here, I'm using docker exec to start on the external workload and go back into the cluster.
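Roughly, those two directions look like this — pod, namespace, service, and container names as assumed above:

    # From inside the mesh, out to an external workload:
    kubectl exec -n mixed-env curl-meshed -c curl -- \
        curl -s http://external-workload-2.mixed-env

    # From an external workload, back into the cluster:
    docker exec external-workload-1 \
        curl -s http://k8s-workload.mixed-env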
And yeah, you know, two-thirds approval from the demo gods. Yeah, I break that rule. We have another person saying that rule number one of tech is never do a live demo; I tend to phrase that one as "never precede a demo with a comment more predictive than 'watch this'." Finally, we can start — oh yeah, sorry, let me also point out that when I went to external workload 3, I got "hello from external workload 3, endpoint 1". When I started on my external workload and then spoke into the cluster, I got "hello from k8s-workload". Here I'm starting from an external workload and going to a different external workload, and again I get "hello from external workload 2". The cool thing about this is that that external-to-external bit is not actually flowing through the Kubernetes cluster. It's just going directly across, because the proxies have been able to talk to the control plane and get all the information they need to directly make an mTLS connection across to the other workload.

All right. Now, this works, but it's kind of not terribly interesting. What is more interesting is when we start layering higher-level functionality on top of this, and this is where we really get to wonder about the demo gods — let's just see what happens. So I'm going to create a couple of other services here. I'll start with the second one: we have one called external-workload, which is explicitly asking for things tagged with my-workload that are marked with a location of vm. So this should span all three of my external workloads — or maybe both of my working external workloads. And then up here we have another one called combined-workload, which doesn't use a location selector at all, so it should include all of them: the external workloads and my workload in the cluster. So I'm going to apply those, and this is going to be fun — I've never tried this when one of my workloads didn't seem to be working. But okay, there we go. It's doing the right thing, and it's ignoring the one that's dead. I am really fascinated to find out what is up with the workload that's dead. You may also be looking at this and going: wait, why isn't it just round-robining — 2, 3, k8s, 2, 3, k8s? And the answer is that, especially at very low request volumes like this, there's a lot of stochastic behavior in load balancing. If the latencies stay really low — like you'd expect for a really fast workload that's only getting one request a second, or in this case a third of a request per second — then it kind of doesn't matter which one you pick, and so Linkerd will randomize things, mostly so that things work better at scale.

Okay. We can also do some fun things. Let's use an HTTPRoute and see if we can do a canary between some of our externals — this will be kind of fun, to see what happens. A question from chat: what do the local Docker logs say for the one that is dead? I don't know — I'm deliberately not looking at that yet, because I don't want to go down that rabbit hole. I'm going to make a service here called canary, and I want to draw some attention to this: this is a service that deliberately selects no workloads. The only useful thing you can do with a service like this is attach an HTTPRoute to it.
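Here's roughly what that pair looks like — the selector-less Service plus a weighted HTTPRoute hung off it. I'm showing Linkerd's HTTPRoute flavor and the 80/20 split we end up with after the fix described next; the API version and the route name are approximations on my part:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: canary
      namespace: mixed-env
    spec:
      # Deliberately no selector: this Service exists only as something
      # to attach an HTTPRoute to.
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: policy.linkerd.io/v1beta3    # version approximate
    kind: HTTPRoute
    metadata:
      name: canary-route
      namespace: mixed-env
    spec:
      parentRefs:
      - kind: Service
        group: core
        name: canary
        port: 80
      rules:
      - backendRefs:
        - name: external-workload-2
          port: 80
          weight: 80
        - name: external-workload-3
          port: 80
          weight: 20
    EOF

Changing the split later is just a matter of editing the weights (50/50, then 0/100) and re-applying.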
I am, in fact, going to go — in a window that you can't see — and change this file, because we know that external workload 1 is dead. So I am going to make that external workload 3 instead. I've just saved that... and I don't know how to get back to reload that. Oh, wait, I know how to do it this way, though. All right. So now we've got external workload 3 and external workload 2. That's likely to work better. 20% of our requests should go to external workload 3, and 80% of our requests should go to external workload 2. So let's go ahead and apply that, and then we'll just start that running, and eventually maybe we'll get a workload 3 in here. Remember I was talking about randomness at low traffic levels? There we go. So it's not completely broken — it's just delivering random numbers that are a little bit differently random than I would like.

So it's not weighted like you just said it was? It is weighted, but what happens is that at very low request volumes — actually, let's back up a second. Linkerd in general tries really hard to route to the endpoint that has the lowest latency at any given moment. At this point, all of the latencies are so low and the traffic is so low that it just doesn't really matter, and so we end up seeing fewer requests than we would expect going to workload 3. But what we're going to do here is I'm going to start this one going, and start that one going a little bit faster, just so it has a few more chances. And then I'm going to come over here — one of the users got ahead of me on this one — and we're going to change the weights. Let's make this a 50/50 split instead. When I apply this, we should start seeing a lot more 3s show up in my little inset window. And we do. Isn't that cool? Just for the fun of it, we can also make all the workload 2 responses go away. Now we're only getting workload 3, because I changed the weights to route all of it over to workload 3. I could also have deleted that backend entry entirely, but it's kind of more fun to just set the weight to zero and watch what happens. So yeah, the fact that these workloads are external doesn't really affect what Linkerd can do with them.

All right, Dave — I knew somebody was going to ask this; thanks, Dave. Can you configure Linkerd to use SPIFFE for everything? The answer is that right now you cannot. Within the cluster, we still stick with Linkerd identities. That is a thing we're looking at; I don't know if it's going to make 2.15, or if it's going to be later, or exactly what's up with that. But yeah, that's a frequent question, and it's interesting to implement. Thanks, Dave. Dave and I have worked together on a lot of occasions, so that's why he's decided to ask things like that.

All right, let's also talk a little bit about authorization, because one of the other things that comes up a lot with service meshes is, in fact, authorization. Right now, even a non-meshed workload can talk to everything. So I'm going to start by creating a non-meshed workload. I'm going to point it at a new service account called curl-not-meshed. But also, if you read carefully through this, you will not see the annotation that says to inject the Linkerd proxy — so this is not going to be in the mesh. If I apply that and give it a chance to come up and run... it's running.
And if I then try to use it to talk to an external workload, this should work. You'll also notice that we've changed things a little bit, so I'm just getting an HTTP code back. And I get a zero, because that's external workload 1 I'm reaching out to. Yeah, crap. And I'm saying crap because I'm going to be doing that a little more often in this demo, and I don't really want to restart it. All right, well, let's cheat a little bit more, then, because, you know, these are demos — demos are all about cheating. Whoa. Okay. Now I've kind of backed up to there, and this is the thing that's going to fail, so I will let it fail. Then I shall talk to workload 2 instead, and we get a 200. All right. Don't try this at home, kids.

We're going to start by preventing this. We're going to use a Linkerd Server resource, and we'll use that Server resource to select all of our external workloads and explicitly make it default-deny. So we have this new externalWorkloadSelector in our Server resource, and we get to say: pick everything tagged with my-workload that's marked as being in a VM, and make it default-deny. As soon as we do this, then this should fail — except it's going to fail in a way that I don't want it to fail. Now I get a 403 going to workload 2, and I still get a zero for my dead workload 1 — not that I'm bitter. All right, so far so good. There is a minor difficulty with this, though, in that I have also broken access from my meshed workload, which is kind of suboptimal, right? So what we need to do to make this work out is to add a Linkerd authorization policy, so that we can allow that access again. This is what we're going to do: we have an AuthorizationPolicy that is attached to our external-workloads Server, which we just made a second ago, and it says you must have this MeshTLSAuthentication, called meshed-client-mtls. That's here, and it explicitly says that the curl-meshed identity in the mixed-env namespace is a thing we are going to allow. curl-meshed.mixed-env.serviceaccount.identity.linkerd.cluster.local is the full name of the Linkerd identity associated with my meshed curl workload. It is a mouthful, but you don't have to say it very often — unless you're doing live demos, which, as we know, violates rule number one of tech. So let's go ahead and apply that. And this is probably, once again, going to go to our dead workload 1; we'll just let that die. Workload 2 now gives me a 200. Great. And in this bit — there we go — access from an external workload to another external workload is still blocked, and access from our unmeshed curl pod is also still blocked. And I'm going to have to do that again... still blocked.
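Pulled together, the three policy resources look roughly like this. The names, the label selector, and the API versions are my approximations — the important bits are the externalWorkloadSelector on the Server and the mesh identity in the MeshTLSAuthentication:

    kubectl apply -f - <<'EOF'
    apiVersion: policy.linkerd.io/v1beta1      # version approximate
    kind: Server
    metadata:
      name: external-workloads
      namespace: mixed-env
    spec:
      # New in 2.15: select ExternalWorkloads instead of pods. A Server with
      # no authorization attached is effectively default-deny for its port.
      externalWorkloadSelector:
        matchLabels:
          app: my-workload
          location: vm
      port: 8080
      proxyProtocol: HTTP/1
    ---
    apiVersion: policy.linkerd.io/v1alpha1
    kind: MeshTLSAuthentication
    metadata:
      name: meshed-client-mtls
      namespace: mixed-env
    spec:
      identities:
      - "curl-meshed.mixed-env.serviceaccount.identity.linkerd.cluster.local"
    ---
    apiVersion: policy.linkerd.io/v1alpha1
    kind: AuthorizationPolicy
    metadata:
      name: allow-meshed-curl
      namespace: mixed-env
    spec:
      targetRef:
        group: policy.linkerd.io
        kind: Server
        name: external-workloads
      requiredAuthenticationRefs:
      - group: policy.linkerd.io
        kind: MeshTLSAuthentication
        name: meshed-client-mtls
    EOF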
I think, thankfully, that that is the end of what I was going to show, because, man, I'm getting sick of Ctrl-Z'ing out to go through and type things. Oh, that was terrible — I didn't wrap that line. I can't even blame that one on the demo gods; that's just me. Anyway, there you go: basic mesh expansion. We've seen ExternalWorkloads used to bring external stuff into the mesh. We've seen some of the ways they can communicate — external to external, external into the cluster, cluster out to external. And we got to see some of the higher-level stuff like routing and auth policy and things like that. There will be a deeper dive into all of this at the next Service Mesh Academy, which is on February 15th. We put a link to that in the chat already, but if you can toss it up again, that might be nice.

And you can reach me for feedback at flynn@buoyant.io, or as Flynn on — I say the Linkerd Slack, but basically all the CNCF Slacks — that is me. And I think we actually have a little bit of time for questions, right? Yeah. Can I just say that I like that the demo broke a little bit? It shows me that it's all real — that it's actually for real. Yeah. And now I actually think I will go through and — oh yeah, we have time; we have another slide or two as well, I think. All right, we have Q&A while we're waiting to see if other questions come in. I mentioned, yes, I am using the version of the Linkerd CLI that wants to talk to enterprise Linkerd, and it was yelling at me because I don't have that set up. Go check it out at buoyant.io — enterprise Linkerd. It's cool. And Service Mesh Academy, February 15th: the deeper dive into exactly this topic, where the big difference between this one and February's is that hopefully my external workload won't die. But there are also a couple of other things I'm hoping we'll be able to do, like showing the external workloads running on actual hardware with blinking LEDs and things like that, because that would be fun.

And in the meantime, let's debug a container, because I'm really, really curious what is going on here. Also, everybody should be able to tell why exactly I use things like demosh for this — typing live is hard. Wow. I've literally never seen that before. Awesome. All right, well, let's do this, shall we? Because I'm kind of curious. I expect it was some horrible thing where the demo gods didn't like me, but now I'm kind of curious. I'm scrolling through and trying to find my line. Okay. So a 504 is an interesting one — that's Linkerd telling me, hey, there are no endpoints that... oh, you know what, I'll just bet you that... probably not the IP address, 172... sorry. A 504 is Linkerd telling me, dude, there are no live endpoints for this thing you've just asked for, which made me immediately go: oh, I bet my Docker container isn't really 172.27.0.2 anymore. So let's see. But it is. Oh, that's really weird. Still unhappy. Well, yeah — it's a different mystery now. Oh no, I did exactly the same thing. Oh, wow. That's kind of awesome, for some value of awesome that isn't awesome at all. Again, I like it — it shows me that you're human, Flynn. Oh, you know what, there's one other thing we can try here, because it could also be quick. Does anybody remember the other thing I said needed to happen when we got these things running? We also need to set up IP routing. Now, it might be too late for that, so I might at this point need to go back and reboot some stuff and things like that, and I'm just not going to try that. Let's go back instead to the slides, where you can look at the wonder that is Service Mesh Academy, which is so much better. It was a good go. We have one more question, I think, that we didn't get to. Oh yeah: is it possible to set up mutual TLS between the external and internal services? Not only is it possible, it happened automatically and I didn't have to configure anything for it. Which might also be a fun thing to go poke at in the deeper dive in February — I bet we can set up something where we use Wireshark or something to prove that it's going on. Awesome. This has been a blast. Thanks so much. Yeah, thank you. It's always good to be on. It's, yeah —
And yeah, it's been so long since you and I talked. He's joking, by the way — we streamed together yesterday on the DevOps Toolkit YouTube channel. But it's always good to hang out with you; it's always fun. Oh, 100% — I feel that way about you too. And thanks to everybody for joining in and asking questions, too. Yeah, and thanks to everyone who watches the recording. So I'm going to do the closing script here at Cloud Native Live: we bring you the latest in cloud native code on Tuesdays and Wednesdays at the same time, noon U.S. Eastern. So, thanks again to Flynn. Thank you — much appreciated. What a finish. And I do remember a couple of folks had a question or two about SPIFFE or SPIRE that I didn't want to do all the digging into today — oh yeah, bring those over to the Linkerd Slack; that would be the best place to tackle that stuff. And with KubeCon coming up, I hope to see you all in person at KubeCon Paris. Will you be there, Flynn? Yeah, I believe I will be. Awesome. Awesome. Cool. I don't actually remember how many talks I'm giving this time around. You lost count? Awesome.