Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud-native technologies. They will build things, they will break things, and they will answer all of your questions. So join us every Wednesday to watch live. This week we have Matei David here to talk with us about mutual TLS on Kubernetes with Linkerd. A few other exciting announcements from the cloud-native sphere: the Cloud Native Survey 2022 is happening as we speak, so remember to give your answers and provide input. If you attend the marketing committee meetings, that meeting will be this week. And if you are looking to have these kinds of great events happening in your company, Q3 calendar booking for CNCF events has kicked off, so get your slot now. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants as well as the presenters. But now I'll hand it over to Matei to kick off today's presentation. All right, thank you very much, and welcome everyone. I'm just going to start by sharing my screen, as we typically do with these presentations. Cool. So yeah, welcome to mTLS on Kubernetes with Linkerd. I'll go to the next slide in just a second. All right, so my name is Matei. I'm one of the Linkerd maintainers, and I'm also a software engineer at Buoyant. I also don't look like my picture anymore, but it's the only one I have, so we'll have to make do for now. I've been with the Linkerd project for about two years and a bit now.
I actually started as an external contributor when I was a student, and I got involved as a mentee through the CNCF Community Bridge program. I've been around the project for a long time and have worked on a lot of different bits and pieces. If you have any questions after this presentation, you have my handles right here on the slides; you can reach me on Twitter, Slack, or GitHub. So say hi, or if you have any questions, let me know. Cool. So today we'll talk about mTLS, but before we actually get into what mTLS is and cover protocol details, I'm going to give a brief introduction to computer security. We'll talk a little bit about authentication and authorization and things like that. We'll move on and cover TLS as a protocol, then mTLS (they're super similar, just a bit of a spoiler there), and then I'll have a demo of how you can get mTLS out of the box with Linkerd. Super simple. I usually like to be a bit more theoretical, but for the purpose of this presentation we're going to see more of a demo: we'll sniff some traffic with tshark and just try to understand what's happening there and how we can verify mTLS in a Kubernetes cluster. Cool, so with that out of the way, I'm pretty much ready to get started. Like I said, I want to cover some computer security basics first. If we talk a little bit about authentication and authorization, what those words actually mean, and how they fit into the computer security context, then the value that mTLS brings, and why we need mTLS to begin with, will be a bit more evident. Authentication is the foundation of communication security, or at least that's what I like to say. I'm not a security expert, by the way, by any means; this is all just stuff that we deal with and that I've been learning on the job and in my personal time. But when we say authentication, we pretty much mean the process of identifying a user, a system, or a device.
So it's nothing too complex. It just means that when we have a communication channel, we want to make sure that the party we think we are talking to is the party we actually want to talk to. This matters in computer security because people often think that once you have encryption you have security, that the two can be used interchangeably as synonyms, but that's pretty far from the truth, at least in communication security. When we talk about communication security, we typically want more than just encryption; we want some guarantees placed around our communication channel to give us this security. Generally, security experts and people in our industry like to use a simple model when they first start to design security systems or put security into place. This model is called the CIA triad; it's a handy mnemonic to remember it by. CIA refers to confidentiality, integrity, and availability. When we say confidentiality, we basically mean that we want the data to be confidential and only accessible to the parties that participate in this communication. This is where encryption shines, and also where authentication comes into play: first of all, we want to make sure that our data is encrypted and nobody else has access to it, but we also want to make sure that we only send data to, or receive data from, the party that we want to communicate with. Data also needs to have integrity: you don't want your data to be tampered with while it's in flight. You send some bytes over the wire; it would be a bit awkward if you received some other bytes that don't make sense. And availability sort of speaks for itself, right? You can have a very complex and secure system, but if it's not available, then, well, you haven't really done anything with it. Perfect.
There's a question from the audience already. Denan asked: will the slides be shared afterwards? Yes, I can share a link to the slides, no problem. I'll try to do that after the presentation, but we'll see, maybe I can do it while I'm presenting. Speaking and typing is not really my strong point, though, so we'll have to see how it goes. But yes, I'll share the slides. Perfect. The recording of the session will also be available on YouTube afterwards, so everyone can check that out as well. Cool, thank you very much for the question. Anyway, what I want you to remember from this slide is that authentication falls into the confidentiality part of security, and the mTLS and TLS protocols give us confidentiality through authentication and through encryption, as we'll see in just a second. But before we go to the TLS protocol, there's one more thing that I want to cover, and that's authorization. In my job I sometimes see the misconception that authentication and authorization also mean the same thing. That's likewise far from the truth, because authentication powers authorization. A really handy way to tell these two apart is by the questions they're asking. With authentication, we're generally concerned with: are you who you say you are? That's the question we want to be asking, and in the real world we implement it through tokens and certificates. Authorization, on the other hand, asks: are you allowed to do what you want to do? And obviously that requires authentication, because to tell a device what it's allowed to do, you first need to identify that device. In the real world, authorization comes in the form of access control lists or policies, generally, but not always. Awesome. So we're ready to start digging into TLS as a protocol. But before I get into the diagrams, here's a really quick example of how TLS looks in the real world.
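The split between those two questions can be sketched in a few lines. This is a hypothetical toy (the token table, the ACL, and the function names are all made up for illustration), but it shows how authentication resolves an identity first and authorization then consults a policy for that identity:

```python
# Hypothetical sketch: authentication answers "are you who you say you are?",
# authorization answers "are you allowed to do what you want to do?".
USERS = {"token-abc": "alice"}   # token -> identity (authentication data)
ACL = {"alice": {"read"}}        # identity -> allowed actions (authorization data)

def authenticate(token):
    # Resolve a credential to an identity, or None if unknown.
    return USERS.get(token)

def authorize(user, action):
    # Check the identified user against an access control list.
    return action in ACL.get(user, set())

user = authenticate("token-abc")
assert user == "alice"                 # authenticated
assert authorize(user, "read")         # and authorized to read
assert not authorize(user, "write")    # but not authorized to write
```

Note that `authorize` is meaningless without `authenticate` running first, which is exactly the "authentication powers authorization" point.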
I don't want to assume any prior knowledge about any of the stuff I'm presenting, by the way, so that's why I'm going to walk through all of it. A real-life example is when you use a browser. You go to Google.com, you connect to Google, and in the address bar you will see a little lock. What that lock basically means is that once the connection is established between your browser and Google's server, behind the scenes the two negotiated encryption. As part of that whole process, the server sent a certificate with some key material, the browser looked over the certificate and the key material, and it knows to trust it because it's been signed by another certificate, by an authority that browsers implicitly trust. Out in the world, there are a relatively small number of such root authorities, and they're the ones that generally issue certificates for other people and other devices. That's why your browser trusts it: it's basically bootstrapped to trust whatever certificate signed Google's certificate. So with that in mind, and don't worry if it doesn't all make sense yet, we're ready to go into the belly of the beast, so to speak. Over here I have a very simple diagram that details what happens in the TLS protocol. TLS stands for Transport Layer Security. It's a connection-oriented security protocol, and it runs on top of TCP. I'm stressing that it runs on top of TCP because that gives it a lot of flexibility: once you have a protocol that runs on top of TCP, whatever application-layer protocol you use, whether that's HTTP or gRPC over HTTP (which I guess is still HTTP), you can still use it. So it's pretty much protocol agnostic if you think about it that way. But anyway, I'm digressing. Oh, sorry, there's an audience question again already, which is great, by the way.
Pieceman asks: "SSL certificate?" Yes, it is an SSL certificate. I'll get to certificates a bit later; they're not specifically tied to SSL or TLS or anything else, but yes, you can think of it as an SSL certificate. Cool, thank you for the question. I'll go forward now. So, in the example I gave with the browser, we have a client, the client is our browser, and we connect to a server, and that's Google. At a surface level (I'm not going to get into the nitty-gritty of the protocol details), we connect to the server, the server sends us back a certificate, we as the client look at the certificate, and if everything seems okay, we negotiate encryption and then we have secure communication. So in just two steps, we have secure communication. In reality... oh, another question. We can also save it for later if you want, but there's someone asking: can we use a self-signed certificate as well, or does it have to be a CA-signed certificate? That depends on the system. With plain TLS on the public web, you would probably need a CA-signed certificate, but with Linkerd, for example, you can use a self-signed CA to bootstrap identity in your cluster. So it depends; that's the answer. I know it's not very straightforward, but it will always depend on what workloads you have and what sort of communication you have going on. Cool, thank you very much for the question. So, basically, when we have this connection between the client and the server, we negotiate some things and then everything works, but in reality it's not super straightforward. TLS has a handshake mechanism in the same vein as TCP's handshake. The client will connect to the server, it'll send a ClientHello, and it'll say: okay, server, I'm ready to negotiate some TLS here. It'll send some configuration parameters.
Then the client and the server agree on the configuration, the server replies with its certificate, and it asks the client: okay, are we good, can you verify me? The client verifies the server, they both agree on an encryption secret, and after that you pretty much have established secure communication. Easy peasy when you just talk about it. Anyway, before we move on to mTLS as a protocol, some quick nerd facts about all of this. TLS, like I said, is implemented on top of TCP. It's built using asymmetric cryptography, so we rely on a key pair, a public key and a private key: one key encrypts, the other key decrypts, and this is how we make use of certificates in TLS. The TLS setup is known as a handshake: we agree on configuration, we authenticate, and then we negotiate encryption. Now, generally there is a latency cost paid when you do TLS and mTLS, but this cost is usually paid at connection establishment; once the connection is established, it's not actually felt as much. And finally, a point that I really want to make known: authentication relies on certificates, and thus relies on public keys, and public keys should be available to all parties. It's kind of in the name, you know, they're public. But I do see a lot of people who send me manifests to look at, or repro steps, with the public keys obfuscated or hidden. Just so you know, it's totally fine to share public keys and certificates, because they're public. And with that being said, we're finally on the topic of mTLS. And... can we go to audience questions before we... Yeah, perfect, yes, of course. There are a lot of questions coming in; thank you all for being so active. If it starts to be too much, we can take them in chunks, but whatever works. No, just keep it coming. Yeah, perfect.
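As a rough illustration of the client side of that handshake, Python's standard `ssl` module is enough to see the defaults at work. This sketch only configures the context; the commented-out part shows where the actual handshake would run (example.com is just a placeholder host):

```python
import ssl

# The default client context bootstraps trust from the system's CA bundle,
# i.e. the small set of root authorities the browser example relied on.
ctx = ssl.create_default_context()

# The defaults already encode the guarantees discussed above:
assert ctx.verify_mode == ssl.CERT_REQUIRED  # the server must present a valid cert
assert ctx.check_hostname is True            # and the cert must match the name we dialed

# Connecting would then run the handshake (hello messages, certificate
# verification, key agreement) before any application data flows:
#
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

The point of the sketch is that authentication and encryption are negotiated up front, at connection establishment, which is where the latency cost mentioned above is paid.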
Next, more of a statement than a question: a self-signed cert needs to be pre-configured as trusted, since it's not from one of the CA folks. Any thoughts there? Yeah, I understand the point. If you're referring strictly to Linkerd itself, then the root does not need to be signed by one of the major CAs, the authorities that usually sign intermediate CAs. With Linkerd (I know I'm going to give some spoilers here, but if it answers the question, I don't really mind), when you install it, you generally have to provide a root CA. We call it a trust anchor. The trust anchor can be self-signed; we actually encourage people to make it self-signed. So it does not need to be signed by someone else. You can sign it with your enterprise CA, you can sign it yourself, whatever works. The thing is, this forms the trust root and bootstraps the whole identity for Linkerd, so for Linkerd it's enough if you trust it. Generally with self-signed certificates, if you bootstrap your identity system with that self-signed certificate, it's enough for you to know that you created it, and then implicitly everything will trust it. But if you have a publicly accessible web service, again like Google or anything else on the web, then it's generally better to have it signed by one of the major folks. I'll quickly give you an example before I move on: if you have ingress termination in Kubernetes, you would very likely expose that ingress point to a broader audience, so in that case you might want to have your CA signed by one of the major CAs. But what happens inside the cluster itself can be self-signed, because you bootstrap the identity yourself. I hope that answers the question. Perfect. And then there are two more.
Suter asks: what TLS version is supported or used in Kubernetes, and what's the preferred TLS encryption strength? Well, it depends. I'm not sure exactly what version the Kubernetes API server uses; I would have to check the docs. In Linkerd, I think we use TLS 1.2 or 1.3. That's kind of the answer. Should we take a few more now, or move on? We can maybe take one more, then progress the presentation a bit and get to the rest. So: can we have multiple CAs sign server and client certs? It depends on what you mean by that. You can definitely have a chain of certificates, where one certificate signs the next, which signs the next, and so on until you get to the leaf certificate at the bottom, which doesn't sign anything itself. But you cannot have two CAs sign the same certificate; it doesn't really work like that. I hope these answers were satisfactory. I love seeing that you're all so enthusiastic about this; it's actually really great. Great, yeah, let's progress a bit and then take more questions. There's more exciting material, I promise. You'll have more questions, at least I hope you do, if I've done my job right. Cool. Anyway, mTLS: really easy to understand once you know TLS, because it's symmetric authentication. As before, we have a client and a server, the client connects to the server, and they negotiate all of those TLS parameters, or mTLS parameters, we should say. The client authenticates the server, the server authenticates the client, and then the communication is secure. You might be wondering: well, how does this help? Is this more helpful than just doing TLS? And the answer is: it depends. Basically, you would use TLS when you have a public website where people connect and you don't actually care about the identity of who connects to you, and that is certainly the case with Google.
Google doesn't really care what browser connects to it or where you're from; well, they might care, but not for the purposes of security. But if you have an API gateway or an API service, the scenario changes, because once you have an API that's accessed by other clients, first of all you want to authenticate them, because you want to prevent malicious requests; you don't want just anyone to hit your API endpoint and get data off of it. And then you also want to know which clients connect to you, so you can do a bunch of cool things such as rate limiting, right? If you don't know who connects to you, how can you rate limit their calls? You need to have some information there. This is where mTLS shines, because it allows you to do this at the platform level. Suddenly (and I'm moving on to the next slide now) you have cryptographic proof of both the client's and the server's identity. So mTLS is exactly like TLS, but with the extra steps of performing symmetric authentication of both parties. This gives you cryptographic proof, and because you have this cryptographic proof, you can do really cool things such as client-based authorization or rate limiting or anything else that crosses your mind and can use this identity. One more thing about mTLS in general: in cloud-native environments it allows you to do zero trust. That's a bit of a buzzword, or at least I've noticed it's becoming one. What I mean by zero trust is that you basically do not trust the network; you implicitly assume everyone on the network is out to get you, and you establish trust explicitly at the security and platform level. In this way, when you do mTLS, you can get true workload identity, because you have very clear security boundaries. You're not tied to the network topology, right? You don't care what subnet a request came from, or things like that.
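In code, the only difference between TLS and mTLS is that the server also demands and verifies a client certificate. Here's a minimal sketch with Python's `ssl` module; the file paths are hypothetical placeholders for certificates issued by your CA, so the load calls are left commented:

```python
import ssl

# Server side: a plain TLS server would leave verify_mode at CERT_NONE.
# For mTLS, we require the client to present a certificate too.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED
# server_ctx.load_cert_chain("server.crt", "server.key")  # the server's identity
# server_ctx.load_verify_locations("ca.crt")              # trust anchor for client certs

# Client side: verifies the server as usual, but also presents its own cert.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# client_ctx.load_cert_chain("client.crt", "client.key")
# client_ctx.load_verify_locations("ca.crt")

# The handshake now fails unless *both* sides authenticate. Afterwards the
# server can read the client's identity (e.g. via getpeercert()) and use it
# for authorization or rate limiting.
assert server_ctx.verify_mode == ssl.CERT_REQUIRED
```

The symmetric part is visible in the configuration: both contexts end up with a certificate to present and a trust anchor to verify the peer against.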
And enforcement is granular: everything happens at the pod level. You can also extend it to arbitrary cluster topologies: for example, if you have two or three clusters, you can still do mTLS as long as all of these clusters use the same root CA. And you don't have to implicitly trust anything, right? Everything comes to your server, your server authenticates the client, your client authenticates the server, and nothing is implicitly trusted. You also have a mechanism for secret loss at almost any level. There's a bit of an asterisk there, because sometimes your root CA's private key gets lost, or you have to revoke the root CA; so you do have a mechanism for secret loss, but depending on what level it happens at, it might be a bit painful to recover. Still, it's certainly better than having people access your data when they shouldn't. Do we have any more questions? Yes, there are a few. There was a question from Khardeeb: other than the identity of the server and client, what other advantages does it provide over just having an HTTPS server serving APIs? Okay, other than identity, what other advantages? Well, it's a tough question, and I'm glad you asked it. The basic thing you gain is the ability to look at the client's cryptographic identity. Other than that, it doesn't really have any other advantages over plain TLS; that's exactly what makes mTLS better, that authentication happens both ways. So beyond that, it doesn't necessarily have any advantages, at least that I know of. I don't want to be overly generic here and claim that maybe it does, but for the purpose of this presentation, and for what we're doing in Linkerd, what we're really interested in is authenticating the client as well as the server. Great.
And then there was a comment from Oksana: multiple CAs are implemented in Istio, though. Yeah, it depends again on what you mean by multiple CAs being implemented in Istio. If you mean multiple CAs signing the same certificate, I'm not sure that would work, due to how signatures are actually done. But I'm not going to say it doesn't happen; I would definitely love to read up on it, so if you have a link, let me know. Or if you have a feature request for Linkerd and you think that would be a thing, and it can be done, we're also not opposed to it. Great. And then there's one from Indang: this is a bit off topic, but could you elaborate a bit more on certs? Yeah, I can definitely elaborate a bit more, but I'm not sure exactly what you'd want to know. So why don't we do this: if you tell me exactly what you'd want me to elaborate on, or what I could say to make it clearer, I can come back to it and try to explain it a bit more. Great, waiting for more info from you, Indang, and looking forward to that. And the last one so far: best practices for storing TLS certificates? Okay, I can give you some best practices, or at least how we think about the operational model in Linkerd. First of all, in Linkerd's operational model we deal with a trust anchor, which we call the root CA, and we deal with an identity issuer, which is an intermediate CA. For the trust anchor, we really strongly advocate that you do not keep your private key in the cluster. Ideally you use this private key on your local machine, or wherever you bootstrap your PKI, whatever bootstraps your root certificate, and you never put it in your cluster. You only put the public information of the certificate in your cluster.
For our intermediate CA in Linkerd, we keep it in a Kubernetes secret, because that's how we can mount it into the workload that needs to do things with it; only that workload mounts it, because it has access to the private key. The workload I'm talking about actually signs certificates for all of our proxies in Linkerd, and we mount the secret, but only that pod mounts the secret. We generally do not advocate that you mount the private key anywhere else. The public key you can distribute however you want: you can use a ConfigMap, you can store it in memory, you can store it in the environment, you can store it in a volume, you can pull it off the web. It doesn't matter, because it's public. But with private keys, you need to be very careful what you do with them. So my general rule of thumb, at least in Linkerd land, is: your trust anchor's root key should stay off-cluster, and your identity issuer's key should stay in a secret and only be mounted by the pod that actually has to use it. I hope that answers it. Perfect. And then there was a bit of extra info from Mingdong that hopefully helps; thanks, cool. How it works, I guess. That's maybe a big question, but if you have any thoughts there. Yeah, I do. Wait just two or three minutes and I'll get to the certificates bit and try to expand there. I won't forget, I promise. Anyway, just to end the chapter on mTLS: is mTLS all you need in Kubernetes, or in whatever distributed system you have? The answer is no. If I said yes, I would probably have to expect a bunch of security experts with pitchforks outside my window. So no, it is not all you need, but it does provide a lot of value when it comes to on-path attacks. That's when someone wants to sniff traffic while you try to connect to a target and someone's in the middle; because you have encryption, you have confidentiality, right?
So they can sniff the traffic, but there's just nothing for them to understand there. And we're actually going to do something similar when we get to the demo. It protects against spoofing attacks, where an attacker pretends to be the client or the server, because you do authentication; so again, that's on the confidentiality side. And it protects against malicious requests, because, like I said before, TLS only authenticates the server, but now you also authenticate the client, so you can simply not accept any requests that are not authenticated, and you also have the basis for authorization. However, it will not protect against malicious attacks that come from localhost, unless you also want to do TLS over localhost, but that's a bit of a waste: if someone has access to your pod's network namespace in Kubernetes, then someone sniffing traffic is the least of your worries, at least in my opinion. It does not protect against unauthorized access to nodes, so an attacker might be able to access keys or do nasty things if they have access to the host. This tends to happen especially if your proxy runs on the host, because it becomes susceptible to things like the confused deputy problem. And it does not provide encryption at rest: TLS is about communication security; it does not encrypt your database or your disk or anything else. Now I'm going to talk about certificates. First, an analogy. If I want to fly to a different country this summer, to go on holiday (and I most certainly do), I need to go to the airport, and they need to see that I'm authorized to go to whatever country I want to go to, and that I can travel. Generally, they need to know my identity, right? They need to know my name, my date of birth, where I live, what nationality I am, and so on.
If I write all of this on a piece of paper and I go to the airport and present it to someone, in the best case scenario I will just be turned away and laughed at, and in the worst case scenario, well, it's probably going to be pretty bad. But if I go with my passport, all of that changes, right? Even though it's the same information; whatever information is in my passport, I could just write on a piece of paper. The difference is that my passport was issued by a government agency, and government agencies are implicitly trusted by people (unless you're an anarchist, I guess). That's why, when I present my passport, they can look at that information, look at me, and say: okay, I trust that this is your identity, and now I can let you through or tell you to leave. But with the piece of paper, there's really no trust for them to check, right? They have absolutely no guarantee that any of that information is real. They don't trust me. Certificates work in a very similar way. You have a name, or some identifying piece of information, and you have a public key. This public key is bound to the name, or to the identifying information, by a signature, and the signature comes from another certificate; a certificate that they trust. If they don't trust that certificate, they have to check whoever signed that certificate, and so on. And this is probably where I'm going to start talking a bit about root CAs. I wasn't planning on doing this, but I think it is valuable to know. When we verify certificates, we generally go through what's known as a verification chain. I'm sorry, I don't actually have a diagram for this, so you'll just have to listen to me talk. But basically, when a client and a server talk to each other, each has what's known as a leaf certificate. That certificate is just signed by other certificates.
The leaf certificate doesn't sign anything itself. Every certificate has a signature, and that signature is what binds the public key to the name. So each certificate has a public key, a name, and a signature. Another certificate will produce the signature with that other certificate's private key, and then we can use that certificate's public key, which we get from the certificate itself, to undo the signature and check that it's been signed. If we can actually decrypt the signature, then it means it works, and the public key belongs to that certificate; or rather, the public key and the private key are associated. I'm not sure if I'm making too much sense, but basically, picture this: you can use a public key to encrypt or decrypt, and you can use a private key to encrypt or decrypt. The thing is, only the other corresponding key can do the reverse. So when you have a bunch of certificates, they all sign each other; the rootmost certificate is responsible for signing the next one, that one signs the next, and so on. When you go through this verification chain, you basically end up at the root certificate, and with that root certificate there are two choices: either it signs itself, or it is signed by one of the agencies that sign certificates, like Let's Encrypt. With someone like Let's Encrypt, everyone sort of unanimously trusts them, kind of like the government. But a self-signed certificate is a bit different, right? Generally, in a cluster, if you use a self-signed certificate, you implicitly trust it because you created it, right?
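The "one key undoes what the other did" property behind those signatures can be shown with textbook RSA on deliberately tiny, insecure numbers. This is purely a toy to illustrate the mechanism; real certificates use large keys and sign a hash of the data, not the data itself:

```python
# Toy RSA key pair (numbers far too small for real use).
p, q = 61, 53
n = p * q                 # modulus, part of both keys
e = 17                    # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)       # private exponent: modular inverse of e

msg = 42

# Encryption: the public key locks, only the private key unlocks.
ciphertext = pow(msg, e, n)
assert pow(ciphertext, d, n) == msg

# Signing is the reverse: the private key produces the signature,
# and anyone holding the public key can verify it. This is how a
# CA's key binds a name to a public key inside a certificate.
signature = pow(msg, d, n)
assert pow(signature, e, n) == msg
```

Walking a verification chain is then just repeating the second check: verify the leaf's signature with the intermediate's public key, the intermediate's with the root's, and the root's with its own.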
So you create it, let's say on your local machine (probably not the best example, but you create it on your local machine), and then you put the public key in the cluster, or you start taking CSRs, you get an intermediate CA, and that signs other things, and so on. When you get to the rootmost certificate, you trust it implicitly, because that's what bootstraps the whole identity. Does that make sense? Have I answered the question now? Hopefully; Ying Dong, you can let us know how it went. And there are a few questions as well; shall we take them now or in a while? Yeah, let's take them now. So, Rackage asks: are certificates refreshed frequently? Yes and no. Certificates have an expiry date, and they're refreshed as often as you want to refresh them. With Linkerd, for example, the proxy certificates are refreshed every 24 hours; that's the maximum amount of time they can go without refreshing. The issuer certificate that signs proxy leaf certificates can be refreshed every three days, every seven days, every ten days, and so on; it depends on how you want to do it. Generally, our recommendation is that intermediate certificates are rotated every week. For some people that's a bit overkill, because it's an operational responsibility. So yeah, the answer is: it depends. The quicker you refresh them, the better, because it lessens the chance of them actually being stolen, so to speak. But it depends on how your system works. Great, and there's one more question, but before that, we got a review from Mingdong: passport example is perfect. Cool, thanks again. Absolutely, well done there. And then Hitesh asks: how can we validate that mTLS is enabled in the cluster, or for services in Kubernetes? Okay, that's actually part of the demo, so you'll have to stick around until the end to see.
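The refresh policy described in that answer can be pictured as a simple remaining-lifetime check. The 24-hour lifetime matches what was said about Linkerd's proxy certificates, but the one-third threshold and the dates below are hypothetical, purely to illustrate rotating well before expiry:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical proxy certificate validity window.
not_before = datetime(2022, 6, 1, tzinfo=timezone.utc)
not_after = not_before + timedelta(hours=24)  # 24h lifetime, as with Linkerd proxies

def needs_rotation(now, threshold=1 / 3):
    """Rotate once less than `threshold` of the certificate's lifetime remains."""
    lifetime = not_after - not_before
    return (not_after - now) < lifetime * threshold

# Two hours in, 22 of 24 hours remain: no rotation needed yet.
assert not needs_rotation(not_before + timedelta(hours=2))
# Twenty hours in, only 4 of 24 hours remain: time to rotate.
assert needs_rotation(not_before + timedelta(hours=20))
```

Rotating on remaining lifetime rather than at the expiry instant is what keeps a refresh from ever racing the certificate's deadline; a compromise, as discussed later, still means kicking off rotation manually.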
I'm not selling you on it, I promise, but I'm going to cover it up next, because that's, oh no, first I'm going to talk about Linkerd, and then I'm going to actually do the demo. So, real quick introduction to Linkerd. So Linkerd is a service mesh, which is yet another buzzword, kind of like zero trust. But a service mesh is basically just a platform tool. You put it in your cluster, and it provides you with observability, reliability, and security at a platform level, instead of having to do this in your application. What I basically mean by that is, Linkerd as a tool ships with a data plane. We call that a data plane. That's made up of a bunch of proxies. These proxies will run next to your application, and they will intercept your application's traffic. They will talk between themselves. So the client proxy intercepts the client's traffic, it sends stuff to the server proxy, and then the server proxy sends everything on to the server. But the proxies between themselves, they do mTLS, and they also pull all of the metrics that you sort of want to look at. So that will be success rate, latency, throughput, and so on and so forth. And because the two proxies talk together, it also allows them to do retries, timeouts, load balancing, and everything in between. Now, the real value comes from not having to implement all of this stuff on an application-by-application basis. So imagine, for example, you had three or four microservices that talk to each other, but two of them are written in a different stack. So you have a Rust service, you have a Java service, and a C++ service. Suddenly you have to do mTLS in all of them, and you have to get metrics in all of them, and you have to handle retries in all of them. And while for three microservices that might not be too hard, it does make it more cumbersome to do. So that's sort of the problem that service meshes want to resolve.
Instead of having to do all of this in your application, trust me, implementing mTLS is really not as easy as it sounds. The spec is a bit underspecified, there's a lot of stuff to go through, and there are a lot of corner cases and edge cases. So basically, all of this happens at a platform level, without you having to make any changes to your application. So I think it's pretty sweet, but I'm also very biased. Anyway, demo. Should we take, I see there's one more question, should we quickly take it? Yeah, sure. It's from Vinod, and it goes, during any compromise, is it possible to rotate the certificate automatically, or is it a manual process? It depends. I think if you know it's been compromised, say, you know, there's a security incident, or you think there might be a security incident, and your certificate expires in 10 days, and you're like, no, we need to change it now, otherwise we're liable for stuff to happen, you would have to do it manually, or you would have to kick off the process in whatever tool or PKI that you use inside of the cluster to manage these certificates. So for example, with Linkerd, you can manage the certificates yourself, or you can use some PKI tool like cert-manager to manage these certificates for you. So depending on what process you use, you'll definitely have to kick off this process manually if you think they've been compromised. Anyway, let me quickly go to my terminal, and I don't know if the font is big enough. Can I get a quick ack if it's better now? It's a bit better, right? It could be a bit bigger, so. Okay, how about this? It's starting to look quite good to me, but if the audience has any wishes, they can let us know. It could be a bit bigger maybe, I'm guessing. Okay, gonna go, okay. Okay, I think it's hopefully good now, yeah. Cool. I am running a bit out of time, so I'm gonna speed through the first part, and the first part was just to show you kind of how Linkerd looks. So let me just look at what kube cluster I'm using.
So sorry, I'm gonna be using an alias here, and K stands for kubectl, and yes, I am pronouncing it cube cuddle. But anyway, over here, and of course, sorry in advance for the clicking, over here, I already have a cluster with Linkerd installed, and this is the Linkerd control plane. So the control plane, I forgot to mention, we have a data plane and a control plane. The control plane is responsible for pretty much managing configuration. We have an identity service that serves proxy certificate signing requests, we have a destination service that does service discovery and policy discovery, and then we have a proxy injector, which is an admission webhook server in Kubernetes. And I also have a demo application called Emojivoto, where I have a couple of microservices running, and what I quickly wanted to show you is the state that we wanna end up in after we install everything fresh in a cluster without Linkerd. So I'm gonna use the Viz CLI. The Viz CLI has a tool called tap, which allows us to tap into the traffic stream, and this is basically going to confirm for us that we have TLS. So we tap into the traffic stream, we look at everything that comes in, everything that goes out, and we have a confirmation here that TLS is actually enabled. And I'm going to also check the dashboard, and the dashboard should, you know, spoiler alert, say the same thing. Let me just go over here. So if I go into my Emojivoto namespace, I'm not gonna spend too much time talking through the dashboard, because, you know, I think it's not what the presentation is really about, but basically, at the bottom of the screen here, we can see the edges for this particular pod, and we can see which namespace and which service traffic is coming from, and whether that edge is actually secured. So Linkerd already gives you the tools to check for mTLS.
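On the command line, that verification flow looks roughly like this (a sketch; the deployment and namespace names come from the Emojivoto demo, and the viz extension must already be installed in the cluster):

```shell
# Tap the live traffic stream for the web deployment; each request summary
# line reports tls=true when the proxy-to-proxy connection is mTLS'd.
linkerd viz tap deploy/web -n emojivoto

# The edges command shows, per client/server pair, whether the edge is secured.
linkerd viz edges deployment -n emojivoto
```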
But I figured, you know, since this is a presentation about security, you might not wanna take the word of a random person you've just met on the internet, and you might want to verify everything yourself. Which you probably should do in most cases. I mean, we are pretty trustworthy, but, you know, it's always good to double-check these things. Anyway, over here, I have a new cluster, all namespaces, sorry, it's sometimes hard to type and speak at the same time. Like I said, multitasking, not my strong point. But this cluster doesn't really have any Linkerd stuff in it. It doesn't have any manifests deployed. So it's just whatever is shipped with a cluster. I'm using k3d, by the way. So we don't have anything in there. And what I'm going to do the first time around is deploy a different set of microservices called BooksApp. So BooksApp has three, four services. It has a web app, and this web app is like a frontend. It listens on port 7000. It connects to two other microservices to get information about authors and about books. And it also has a traffic generator that sends, you know, synthetic traffic to the frontend. As part of this BooksApp, I also have another container. It's a debug container. So Linkerd does ship with a debug container. You can pretty much inject it into your service so you can check stuff. But I wanted to cheat a little bit and just add it here, because I want to have access to a tool called tshark that we're going to use to sniff the network. So yeah, just a normal debug container. It has some tools inside. I give it some system capabilities to avoid, you know, running into any issues, because live demos aren't always smooth. So anyway, I'm going to first create the namespace, and then I'm going to apply this thing in the namespace, and I'm sorry if I get any notifications here. Cool. Do we have any questions while I'm waiting for this thing to get deployed? Is it all making sense so far? It is, it is wonderful.
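The "cheat" he mentions amounts to adding an extra container to the pod spec, something like the fragment below (illustrative only; the image tag and the exact capability set are assumptions, not copied from the demo manifest):

```yaml
# Extra container sharing the app pod's network namespace, so tshark
# inside it sees the same interfaces the application uses.
- name: debug
  image: cr.l5d.io/linkerd/debug:stable-2.11.4
  securityContext:
    capabilities:
      add: ["NET_ADMIN", "NET_RAW"]  # required for packet capture
```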
And there are two questions so far. So there is one from Z naught. Can we use AWS Certificate Manager to issue the certificate for mTLS? Can we use, yeah, I suppose you can. So if, for example, you want to generate the trust anchor in your PKI, then you can definitely do that. You'll just need to make sure that the public information is available in the cluster. So like I said, keep the key in AWS Secrets Manager, but then you'll need to make sure that when you install Linkerd, you provide the certificate to Linkerd, just the public bit. But yeah, you definitely can. Great, there's two more then, if we have some waiting time still. So, what is the difference between Anthos Service Mesh and Linkerd? It's a good question. And I do not mean to sound super ignorant, but I do not know much about Anthos Service Mesh. So if you can give me some material to read up on, I can give you more information about it. I can tell you what stands out with Linkerd in general. And I'm not sure that the team from Anthos does the same thing, but we run our own proxy. So we do not rely on Envoy, and our proxy is written in Rust, and it's purposely built for Linkerd's control plane and for Linkerd's philosophy of keeping things simple, and operationally simple in general. But if you give me more information, and if you wanna reach out to me, I can definitely talk to you more about it. Perfect. Is the wait over, since there's two more questions to go, or do you wanna take them now, or how does it look? I will quickly go through this, just so I have more time at the end for questions. So anyway, we have our deployments running, and I'm just going to exec. You can see the auto-complete there. I'm going to exec onto the pod, and I'm going to do it in interactive mode. I'm gonna choose the debug container, and the command that I wanna execute is bash. So I planned on giving you, like, not in-depth, but a bit of an introduction to tshark. I may not have the time.
So if in doubt, just do man tshark and have fun reading through all of that text. It's not hard to make sense of, but it's also not super fun to read. Anyway, so tshark is a capture tool. It can sniff traffic and capture on interfaces. We can see what interfaces we have available if we do ip link show, and that will show whatever interfaces we have attached to the pod's network namespace. So in this case, we have ethernet, a tunnel, and loopback, and when you start capturing packets with tshark, you can specify the interface that you wanna capture on. So I'm just going to capture on ethernet. I'm going to say I only want TCP traffic, and I want to capture on port 7000. This is gonna start a capture, and it's just going to print out the packet summaries. So over here, I'm just going to do a quick kubectl get pods, just so you can see how the IPs that we have for the pods correlate with what we're seeing here. So for example, in the last packet summary in this capture, we can see that the source of the traffic here, 10.42.0.12, corresponds to our traffic pod, and the traffic pod is the source, and the destination, 10.42.0.9, corresponds to our webapp pod. Over here we have some TCP things, including the TCP flags here, the sequence number, the ack number, the length of the packet, and so on and so forth. So these things are not super important right now, but basically, where I'm going to go with this is, I'm going to show you how, using tshark, we can actually sniff the traffic, we can tell what HTTP information we have in plain text, and then after we install Linkerd, we'll see that it's not possible anymore, at least I hope it's not possible anymore. No, I'm kidding, it won't be possible.
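Inside the debug container, the steps just described boil down to two commands (a sketch; the interface name and port are the ones from this demo setup):

```shell
# List the interfaces attached to the pod's network namespace.
ip link show

# Capture only TCP traffic on the frontend's port, printing one summary
# line per packet ("tcp port 7000" is standard BPF capture-filter syntax).
tshark -i eth0 -f "tcp port 7000"
```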
Anyway, so with tshark, what I'm going to do is decode, so that's what the -O flag means, I'm only going to decode details for HTTP packets, and I will have the output as JSON, and again, I'll choose TCP, so this time I'll go for port 7001, and this is one of the services that our web app is connecting to to get information. So we'll see that, again, we have a lot of things that are being printed out, but this time we also have information about all of the other protocols. So protocols are sort of multiplexed on top of each other in a packet, and we'll basically take all of this, and we can see, we start from the layers, where everything is multiplexed: we start from the frame, we go down to the internet information, IP, TCP, and finally we get to our application layer protocol, which is HTTP. And you'll notice that here we get all of the information that has to do with HTTP. We can get the server, we can get the content length from the header, we can get the response number, and also the file data. So we're sending everything as application/json, and the content, I think, is somewhere up there, but basically, if we combine this with something like grep, and we say we want to see file data, we're going to be able to see everything that's being sent here in plain text. Now, obviously, this is not ideal, but imagine that someone would run something like this and they would sniff your traffic. They don't have to be exec'd onto the pod like I am now, they can just sniff the traffic and see all of this stuff. So imagine these books are super secret, maybe they are, maybe they're not, but yeah, you'd be in a lot of big trouble, basically, especially if you process credit card numbers or just anything. I think the rule of thumb is that we don't want people to see what we are actually sending through. So now I'm going to install Linkerd.
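That plaintext-sniffing pipeline can be sketched as a one-liner (approximate; the flags are from the tshark manual rather than a literal recording of the demo, and the port is the demo's books service):

```shell
# Emit full packet dissection as JSON, restricted to the books service port,
# then pull out just the HTTP bodies, which are visible in plain text.
tshark -i eth0 -T json -f "tcp port 7001" | grep "http.file_data"
```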
Linkerd install is just going to actually print out the manifests, it's not gonna install anything, so you can have a look through the manifests, again, don't take my word for it. But I'm just going to install Linkerd, pipe it to kubectl, and wait until everything is deployed. And while that's happening, should we take some questions? Yes, perfect. So there is a question from Hariharan, who asks, with service mesh enabled, sometimes troubleshooting might be a bit difficult. Any pointers on how troubleshooting is better in Linkerd versus Istio or any other service mesh? I'm going to be general here and sort of lob them all into the same pot, so to speak. I think yes, troubleshooting might be a bit more difficult, because suddenly traffic is being taken over by something else that you don't necessarily own. But what's really great about service meshes in general is that everything is open source, so you can go and look through the code yourself. And more than that, we basically make all of these metrics available for you to use, so I would argue it actually makes troubleshooting a bit easier, because suddenly you have metrics available where stuff breaks, and you can correlate them with logs. And usually proxies as software have a variable log level, so you can go as verbose or as quiet as you want to be, and with the proxy itself, actually, we support modifying the log level on the fly. So you can say, okay, everything is at an info level, and now suddenly I want to make it debug, because I have a problem, let's see all of these logs. And you look at the logs, and then you can correlate them with the target IP or the client IP, and actually it becomes way, way easier to troubleshoot what's going on. And I'm saying this as someone who loves to help people out on Slack, and I have people coming to me with issues and problems, and you have to basically solve these problems without ever having access to the environment, and I'd say that if the proxy wasn't built as well as it is and
you wouldn't have all of this information, it would actually be much harder to track what's going on. But with Linkerd, we make all of these tools available, and we also have the metrics stack that helps you sort of troubleshoot this further, because we don't believe that you have to actually be a proxy expert to realize what's happening there. Anyway, so my deployment is done. What I'm going to do now is, I'm going to get the booksapp namespace, and I am going to pipe it through linkerd inject, and what this is going to do is just add this inject annotation, and then whenever new pods come up and they have this inject annotation, the admission webhook is going to take them over and inject the sidecar. Now you can do this on a pod-by-pod basis, or you can do it on a namespace, and then the pods will inherit this from the namespace, and that's what I'm going to do now. It's a bit easier than annotating all of our workloads. So I'm going to inject it, pipe it, apply it, and then if we get the pods in booksapp, you'll notice that nothing changed. That's because we have to restart all of the deployments so they can be picked up by the admission webhook. Cool. And now this should be pretty quick. I have time for a quick, quick question. Okay, perfect. Is there any authorization policy supported, like OPA or [inaudible], for the mesh? Yeah, so Linkerd does its own server-side policies. So we have our own server-side policy CRDs that basically let you restrict traffic, or allow traffic, or require mTLS, and stuff like that. We don't have any OPA integrations ourselves, but it's something that you could integrate with OPA if you wanna spend time doing that. But generally, we found that for user experience, it's just a bit easier for us to roll out our own stuff, and then the user experience around it is sort of dictated by us, and you don't introduce another tool. But check it out. I think if you just do a quick Google for Linkerd authorization policy, you should be able to find information on that.
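Put together, the install-and-inject flow from the demo looks like this (a sketch; it assumes the linkerd CLI is on your PATH and pointed at the right cluster, and note that newer Linkerd releases install the CRDs in a separate first pass):

```shell
# Render the control-plane manifests and apply them, then wait for health.
linkerd install | kubectl apply -f -
linkerd check

# Annotate the whole namespace for injection, then restart the workloads
# so the admission webhook can add the proxy sidecar to each pod.
kubectl get ns booksapp -o yaml | linkerd inject - | kubectl apply -f -
kubectl rollout restart deploy -n booksapp
```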
Anyway, what I'm going to do is go into this web app, and I'm going to go back into the debug container. And again, I'm going to start a capture, and I'm going to skip everything that I've done before, I'm just gonna go straight in for the kill. So I'm gonna decode the packets as HTTP, JSON, TCP port, I think it was 7001. And then I'm gonna grep for HTTP file data, and I'm gonna wait. And nothing displayed so far, but the packet count is going up. So we're capturing packets, but nothing is being displayed. And I wonder why, but secretly I know why; that was just a prompt. It's because we no longer actually have any HTTP flags and HTTP packets in here. Just gonna scroll up. Right, instead of HTTP, we now have SSL. So this is curious, because before, with the exact same configuration, we had access to the HTTP data and everything that had to do with the HTTP protocol, but now this has been overtaken by SSL. So something clearly is happening here. I'm gonna use a different command. I'm gonna use -P, which means print packet details, and -x, which means do an ASCII dump. I actually didn't choose the port. I'm gonna say only TCP traffic, port 7001. And again, it's gonna start to capture, and this is just going to print the exact same thing as before, but it's going to do it in a different format. And we can see here that this would normally print application traffic and application content, but now there's just this application data for TLS, and well, we can't actually make sense of whatever else is in here. And again, don't take my word for it. You can definitely try it. But let's see, if we start the same capture, but on localhost instead, which is not mTLS'd, whether we can see anything different. And yeah, we can. Basically here, we have everything as HTTP.
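The two detail-dump captures he just ran can be sketched as follows (approximate; -V prints the full packet tree and -x adds the hex/ASCII dump in current tshark versions, which may differ slightly from the flags used live):

```shell
# Full packet details plus hex/ASCII dump on the meshed interface:
# the HTTP layer is gone, only opaque TLS application data is visible.
tshark -i eth0 -V -x -f "tcp port 7001"

# The same capture on loopback, where the proxy talks to the app
# unencrypted: the HTTP layer is back in plain text.
tshark -i lo -V -x -f "tcp port 7001"
```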
So I guess what I wanted to show with all of this, sorry, I don't actually have a dramatic ending to this presentation or to this demo, is that, you know, Linkerd adds out-of-the-box mTLS, and it adds tools for you to verify that mTLS is working. But if you wanna roll up your sleeves and do it yourself, you can definitely do it. And, you know, you can use a packet-sniffing tool like tshark, but basically what I tried to do with tshark here is show that before Linkerd, we could sniff traffic. So we didn't even have to be in this container. We didn't have to be exec'd into this container, and we would see everything in plain text. And now you can't do that, except if you're in the container and you're listening on localhost. And like I said, if someone gets access to your pod's network namespace, say they can exec into the pod, then, you know, mTLS is not gonna save you from that anyway. So, but yeah, that kind of wraps it up. So I'm just gonna go back to my slides here. And I think we have a quick minute for questions, if we have more. Yeah, we have about two and a half minutes, maybe. So rapid-fire questions are actually coming up. Yeah, so there's a question: is this the same functionality as Kiali? Kali is more of an operating system, right? Or do you mean Kiali, the tracing thing? It's not really the same functionality, you know. Okay, great. So someone's asking, what do you use for autocomplete? I use fish shell, and that gives me the completion automatically. Perfect, then there's a question. Does Linkerd work with eBPF? It depends. It does work with eBPF and CNI plugins, but it doesn't do any eBPF on its own. We still use iptables for routing. But we have people who use it with Calico or Cilium, so it's not a problem. Great, then there is, can you demo how to uninstall Linkerd if I happen to run into an unexpected issue? Yes, of course. I'm just going to exit from here, and I'm going to do linkerd uninstall.
I'm gonna pipe it to kubectl delete. Well, I guess I don't need that flag. Well, I first need to uninject all of these. So I am going to get the namespace. I guess I'm, do we have an uninject command? We do, the more you know. Actually, I do not uninstall it very often. I just delete the cluster itself. But then we roll out the pods, and then we do linkerd uninstall. It still complains about something, but you kind of get the point. Right, then I think there's maybe time for one more. So, what do we use for cert rolling? Cert rolling as in rolling certificates? I'm not sure I really understand the question. I'm sorry about that. Great, that was great. Or we can continue the discussion in the Cloud Native Live Slack channel, by the way, for everyone. Okay, cool. Yeah. You can ping me there. Perfect. Then there's a few questions that were asked there that are a bit out of scope, and then there is a final question. So, are existing apps' connections encrypted with TLS at the ingress controller end? So, if I understand the question correctly, you can also inject your ingress controller. Linkerd is not going to handle any SSL termination at the ingress level. Your ingress controller is going to take over that, but Linkerd is going to take over everything after that. So it's going to encrypt anything that goes from your ingress controller to whatever target you have. Great. So there are about three questions that we didn't get to, but as mentioned before, everyone can hop into the Cloud Native Live Slack channel within the CNCF or Kubernetes Slack, essentially. And I see that you have something about the academy here that you want to mention, maybe. Yes. So just a quick announcement. We also present all of these things related to Linkerd and service meshing at what we call the Service Mesh Academy. So this is all free training and free knowledge. We do love to spread the knowledge.
Yeah, if you want to see more material, especially around certificate management or failover or multi-cluster, check it out. And with that, thank you very much for your attention. Yeah, thank you everyone for joining the latest episode of Cloud Native Live. It was an amazing session by Matei. Thank you so much. We also really loved the interaction, so much interaction from the audience today. Really great to see that. And as always, we're back with a new episode of Cloud Native Live every Wednesday. So join us in the upcoming Wednesdays as well. See you later, everyone. Thank you, everyone. Bye-bye.