Flynn: Hey folks — I'll give you all a second to sit down. I want to start off just by saying thank you so much, all of you, for coming to our talk. Today you're going to hear us tell you how to do end-to-end encryption from your browser to your pod with a couple of CNCF projects: Emissary-ingress and the Linkerd service mesh. So we appreciate your time, thanks. I'm Flynn, from Ambassador Labs. I've been working with the Emissary-ingress project since its humble beginnings in 2017. You can send me email at datawire.io, you can find me on GitHub, and you can find me probably most easily on our open-source Slack as Flynn — we'll have links and things like that. Over to you — thank you much — my co-conspirator, Jason Morgan.

Jason: Hey, I'm Jason Morgan. I hope this isn't as loud as it feels from right here. I'm a technical evangelist for Buoyant, so it's my job to talk to you about Linkerd and try to convince you that it's the best service mesh on the market and that you want to use it. If you have comments you want to make about what I just said, feel free to email me at jason@buoyant.io — I'm looking forward to hearing from you.
Jason: You can also, if for any reason you want to, find me on GitHub at Jason Morgan, and you can find me on the CNCF and Linkerd Slacks as @jason.

Flynn: So we're going to be talking today about a couple of different problems. You hear talk a lot about north-south traffic and east-west traffic: north-south being traffic from outside your cluster coming in, east-west being service-to-service traffic within your cluster. As Jason mentioned, we're going to be talking about how to set things up so that all of that traffic is encrypted, all the way from your browser, all the way back through to the services. We're going to use Emissary-ingress to do the TLS termination, which means we're also going to use Emissary-ingress to handle the certificate that secures communication between the cluster and the browser, and then we'll get on to using Linkerd within the cluster. It's important to note that you cannot do TLS in any meaningful way without certificates, so we're going to come back and talk about that at a couple of points.

If you're not familiar with Emissary-ingress, it's an open-source, cloud-native, self-service, developer-centric API gateway. If you have a cluster — where's my laser pointer? there we go — if you have a cluster over there with a bunch of services in it, and you have users who want to use it from outside, Emissary's purpose in life is to sit right at the edge of the cluster and mediate all of that conversation. It's a CNCF incubating project. Emissary is powered by Envoy, although it's also designed so that you can get started quickly and not have to spend six months of your life becoming an Envoy expert. We started, as I said, in 2017, and since then we've seen pretty wide adoption.
Flynn: We're running in thousands of different places. I want to emphasize that focus on self-service: it turns out to be an important factor in our adoption, and it turns out to be something that really does work out well.

So, I said it's an API gateway. One of the fundamental things API gateways do is route traffic. If we have our user Jane out in the cloud, and she wants to request a quote, Emissary can field that request and hand it off to an upstream service that actually provides it. If a user Mark hits the same endpoint, he can get a quote too. Depending on how you have things set up, it might be the same pod handling that, it might be a different pod, it might be a different service entirely. You might be doing this with session affinity; you might just be doing it to handle scaling. There are lots of different ways you can do this, and Emissary makes all of them pretty simple.

However, that's not the only thing API gateways do. They're not just dumb proxies; they're also a good place to set up things that you want to manage centrally so that your developers don't have to worry about them. Probably the most obvious application there is security. Maybe Jane is allowed to update quotes but Mark is not, and rather than making the person who wrote the quote service worry about authentication, you can bundle that in with Emissary. And that's not all: you can also take care of observability, rate limiting, resilience if things go down, and hooks that make life easier when developing applications. If you're thinking about this a bit, you might notice that some of these overlap with things you can do in a service mesh. That's okay.
Flynn: That's by design: API gateways work nicely with service meshes, and we tend to set things up so that you can decide where you want to do each of these functions. You can mix and match and put each one at the place that makes the most sense for your organization.

This is a quick example of some of Emissary's configuration language. All of these resources are things that an application developer — a person in the developer role — would use to configure traffic routing through to their service. And then there are other CRDs for things like configuring the ports you listen on, protocols and hostnames, authentication, all that kind of stuff; those are more the ops role. You can have one person filling both of those roles — in a lot of organizations that makes the most sense — but as things get more complex and you have more people involved, it can also make sense to have the ops role be separate, so that the operations person worries more about the infrastructure of keeping the cluster happy while the developer role focuses on the applications themselves. Since Emissary does these with separate resources, it's very easy to separate these concerns and these roles, and to arrange it so that the developers don't have to wait for the ops role, the ops role doesn't have to wait for the developers, and neither steps on the other's toes. And on that note, over to Jason to talk about Linkerd.

Jason: Yeah, thank you very much. For those that aren't aware, Linkerd is a service mesh.
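As a sketch of what that developer-facing configuration language looks like — the service name, prefix, and namespace here are invented for illustration, not taken from the demo — a single Mapping resource is enough to route a URL prefix to a Kubernetes service:

```yaml
# Illustrative Emissary-ingress Mapping: requests whose path starts with
# /quote/ are routed to the "quote" service. Names are hypothetical.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
  namespace: default
spec:
  hostname: "*"
  prefix: /quote/
  service: quote
```

Because routing lives in its own resource like this, a developer can own Mappings for their service while the ops role owns Hosts and Listeners separately, which is the role split discussed above.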
Jason: It's built to run in Kubernetes. It is, in my opinion — and I work for the company that makes it — the lightest, fastest, and most secure service mesh that you can use in Kubernetes. Again, feel free to email me if you have thoughts about that, and we're going to try to show some of that today. It was created by the folks at Buoyant, it's been in production for a really long time, and we have an active community. We are also, by the way, of all the service meshes in the CNCF, the only one to have graduated status in terms of open-source maturity. We're in use by all sorts of folks, including Microsoft — and I think in just an hour you can listen to the folks from Xbox Live talk about how they're using Linkerd at massive scale to connect a bunch of clusters together and how it's made their lives easier. If you want to check out the edge, there are weekly edge releases where you can try all the various upcoming features.

That's all I've got on slides, but the thing I'd say is that the theme of this talk is that Emissary-ingress is a really solid gateway and ingress that you can use for all sorts of stuff, and we're going to integrate it with Linkerd in a really seamless fashion. We don't make assumptions about the ingress or what it's going to do; Linkerd focuses on doing the job of a service mesh — not the job of an ingress, not the job of an API gateway. We leave that to others who have more expertise in that space.

Here I've got a little diagram. If you're not already familiar with a service mesh, at its core, what we do is take some common functionality that you might otherwise put in your applications — how are we going to do TLS, how are we going to get standard metrics and observability from our apps, how are we going to do endpoint selection —
— and move it out of your application code and into an entirely separate process, through which we route all of the traffic in and out of that application. So here's how we integrate with Emissary: Emissary sits there at the front door of your cluster and handles that north-south traffic, and then we add a Linkerd proxy to Emissary, and that puts it in the mesh. When I talk about adding things to the mesh, all I mean is sitting a proxy beside a workload and letting it begin to handle that workload's traffic. The Emissary instance here is going to have our valid TLS certificate — the one that gets us that nice friendly lock when we load up a web page — and Linkerd is going to handle another set of certificates, so that each workload is identified individually, they can trust who they're talking to, and you can do all the stuff you want from a service mesh at its core. It's going to be pretty straightforward. And with that we're going to stop with slides and do a demo — wish us luck as we hop over to a terminal. Does this make sense so far? Any questions before we go on, or do you all want to save them for the end?

Audience: [asks about performance overhead]

Flynn: The question was: is there a performance hit going through Emissary? The answer is yes, but it's tiny. The part of Emissary that wrangles your data is actually Envoy; Emissary serves more in the control-plane role, wrangling the Envoy configuration. Envoy is really, really fast. Can you measure it? Yes, but it's very, very small.

Jason: All right, so we're going to hop into the demo — and thank you very much for asking that. There will be more opportunities to ask questions as we go. So, starting off — oh, I'm sorry — actually, you can talk through this. Just to give you a heads up:
Jason: It looks like I'm typing. I'm not typing, but it is happening live. On the right-hand side here you can see all the pods running in my cluster — is that big enough, or do we need to make it bigger? ... How's that? It's going to be hard to see, but there's just a bunch of pods running on the right, so you can see I'm not tricking you about what's going on. The bottom right is going to show the custom resources that we're using for Emissary to get our routing to work, so that we can get traffic from the browser to our environment, and the left-hand side is where all the action is. If you can't see this, raise your hand and I'll make it bigger.

Flynn: Thank you so much, folks. So while he's doing that — we're going to make him type while I talk for a little bit. In the lower right you'll notice there's an error right now: the server doesn't have a resource type for the Ambassador CRDs, because we haven't installed them yet, because we haven't installed Emissary. So off we go — let's get Emissary installed. We're doing this the way that we document it. The first step is to add the Datawire Helm repo, which is where the Emissary Helm chart lives. We kind of lied — we already did that earlier, so we get an error message; that's okay. The next step is very, very important: we're going to install the Emissary CRDs. You must do this for a new installation, and you must also do it when you're upgrading, to make sure you have the latest and greatest definitions of the custom resources that configure Emissary. If you look at the upper left, you'll also see a shiny new namespace named emissary-system, with a deployment called emissary-apiext. That chunk of code is there to handle automatic conversion between the getambassador.io CRD versions — actually, the laser pointer works better from over there.
Flynn: All right. That handles conversion between the getambassador.io/v2 CRDs and the getambassador.io/v3alpha1 CRDs. It needs to be running, but you shouldn't have to worry about it — we just need to wait at this point for it to be running so that everything works. So we'll wait, and since it's already running, this won't really do anything, but that's all right. I should also point out — actually, never mind, I'll come back to that in a minute. We're going to actually install Emissary using Helm right now. This command looks kind of ugly, but it's not that bad: we install into the emissary namespace, which we're willing to create; we name the installation emissary-ingress; we use the datawire/emissary-ingress chart; and since this is just a demo running on a laptop, we set the replica count to one so we only get one replica running. Don't do this in production. Then, again, we wait for Emissary to actually be running. You'll see now the emissary namespace, with various things running, and I'd like to draw your attention to the fact that all of these show one container running per pod — in the emissary-ingress-agent and the emissary-ingress pods themselves.

Jason: The reason we're highlighting that is that, for now, we have one process per pod. But as we add things to the service mesh, we're going to add a second process — the Linkerd proxy — and that's the thing that puts you into the mesh.

Flynn: And there we have a running Emissary. Nice. You'll also note in the lower right that the error message has changed from "no Ambassador CRD types" to "no resources" — the types are there, we just haven't configured anything yet. So let's fix that. Oh no — let's do TLS first.
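The install sequence Flynn narrates above follows the documented Helm flow, roughly like this — a sketch, not the demo's literal terminal history; the CRD manifest version shown is illustrative and should match whatever Emissary release you install:

```shell
# Add the Datawire Helm repo, where the emissary-ingress chart lives.
helm repo add datawire https://app.getambassador.io
helm repo update

# Install the Emissary CRDs -- required for new installs AND for upgrades.
# (Pin the version to match your chart; 3.x shown here as an example.)
kubectl apply -f https://app.getambassador.io/yaml/emissary/3.9.1/emissary-crds.yaml

# Wait for the emissary-apiext CRD-conversion deployment to be available.
kubectl wait --timeout=90s --for=condition=available \
  deployment emissary-apiext -n emissary-system

# Install Emissary itself. One replica is fine for a laptop demo --
# don't do this in production.
helm install emissary-ingress datawire/emissary-ingress \
  --namespace emissary --create-namespace \
  --set replicaCount=1

# Wait for Emissary to be running.
kubectl -n emissary wait --timeout=90s --for=condition=available \
  deployment -l app.kubernetes.io/instance=emissary-ingress
```

At this point `kubectl get pods -n emissary` should show each pod with a single container — the "one process per pod" state Jason calls out before Linkerd enters the picture.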
Flynn: TLS is important. We mentioned earlier that to do TLS in any meaningful way, you must have a certificate. There are a lot of different ways to get one; this happens to be one that we generated ahead of time from Let's Encrypt. Let's Encrypt is cool — strongly recommended — but since we're at a conference with conference Wi-Fi, we downloaded the certificate ahead of time and saved it on the laptop. For Emissary to be able to use the TLS certificate, we need to install it into a Kubernetes Secret. There we go — we've got the Secret ready — and now we're going to actually configure Emissary.

This is what the configuration looks like. I'm not going to read it line by line, but a couple of relevant things: the Host resource there is telling us that for this demo we're going to use the emojivoto.k8s.59s.io hostname — you'll see that in the browser. We have a Mapping saying that any prefix arriving at that host, emojivoto.k8s.59s.io — anything at all — gets routed to the web service in the emojivoto namespace. If you could page down, please — and right at the very bottom we have some Listeners that say we're going to do HTTP on port 8080 and HTTPS on port 8443. Happy to take questions about that; we can talk about it in as much detail as you want right now.
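Put together, the configuration Flynn describes looks roughly like the following — a hedged sketch: the Secret name, resource names, and file paths are invented for illustration, while the hostname, target service, and ports are as stated in the talk:

```shell
# Load the pre-generated Let's Encrypt certificate into a TLS Secret.
# Secret name and file paths are illustrative.
kubectl create secret tls emojivoto-cert -n emissary \
  --cert=./fullchain.pem --key=./privkey.pem

# Host, Mapping, and Listeners roughly matching the demo's description.
kubectl apply -f - <<EOF
---
# Host: serve emojivoto.k8s.59s.io, terminating TLS with the Secret above.
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: emojivoto-host
  namespace: emissary
spec:
  hostname: emojivoto.k8s.59s.io
  tlsSecret:
    name: emojivoto-cert
---
# Mapping: any prefix on that host goes to the web service in emojivoto.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: emojivoto-mapping
  namespace: emissary
spec:
  hostname: emojivoto.k8s.59s.io
  prefix: /
  service: web-svc.emojivoto
---
# Listeners: HTTP on 8080 and HTTPS on 8443.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: http-listener
  namespace: emissary
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: https-listener
  namespace: emissary
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF
```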
Flynn: Let's go ahead and get it applied, and as this runs you'll see resources appear in the lower right. I didn't talk about it, but we also have Hosts and Mappings and things like that for the dashboard Jason will show you. All right, onward. At this point we should be good to go, and we should be able to flip over to our web browser and show that Emojivoto is running. Ta-da!

So at this point, you'll notice a little padlock up there. If he clicks through, the connection is secure; there's a note saying the certificate is valid. The browser is talking to our running cluster, talking to Emissary; Emissary is terminating TLS, presenting the certificate to make the browser happy, and then sending the request through to the upstream Emojivoto service. And it all works: we can vote for emojis, we can view the leaderboard, we can do all of our stuff. Nothing I've said so far has anything to do with Linkerd, because this has all been about getting Emissary running — although you'll note it wasn't all that difficult. So, over to Jason to actually install Linkerd.

Jason: Thank you so much. So, what we've got now is TLS termination happening at Emissary-ingress — we're halfway there: browser to Kubernetes cluster is secured with TLS. Now we're going to figure out how to encrypt from that ingress all the way through to the various pods that make up Emojivoto. We're dealing with three services: a web front end and two back ends, one for giving us our awesome emoji pictures and one for registering our votes. So first off, I'm going to install Linkerd. We're going to do a curl-to-bash — I'm sure folks have opinions about that, and we're going to entirely sidestep them for this conversation; again, feel free to put it in an email, I'm happy to listen. So we're doing the install.
Jason: There's a little bit of us lying to you here, but we're lying in an honest way: I already have the Linkerd CLI downloaded, because I'm not relying on conference Wi-Fi. But these are the exact steps you'd use to install the Linkerd CLI on your laptop. And like I said, I'm not really typing, but when I press enter, real commands are being run. So we've got the Linkerd CLI; we update our PATH to make sure it's available, and now we check the version. I'm running the latest Linkerd, 2.11.2 — that's on the client side — and I have nothing on my Kubernetes cluster yet. I'm going to make this smaller, because we're going to see a bunch of stuff happen when I install. But first, let's validate that this is actually going to work. Here, I'm running a k3s cluster on Docker Desktop using WSL on Windows 11, because I like to live dangerously.

Flynn: Because he's insane.

Jason: So we'll test just how crazy that is, and see if we can really do what we want to do, using the Linkerd CLI. What I'm hoping to see is a bunch of check marks — and there's my final "all checks passed." And with that, we're going to install Linkerd. Here the Linkerd CLI generates a bunch of YAML — y'all are probably used to that here at KubeCon — and I submit it directly to the Kubernetes API. You can install Linkerd via the CLI, via your favorite GitOps tool, with Helm, or with any other fantastic set of continuous-delivery tools you feel like using. And once this is done, we check that Linkerd is installed and happy and healthy. So let's watch the action: we can see some pods popping into existence at the bottom here. These are the components of the core Linkerd install. We have an identity service, which is going to
generate the certificates that our individual workloads use as they identify themselves and talk to each other on the network. We have a proxy injector, which is a mutating webhook: when we add an annotation to a workload or a namespace that says "please add the Linkerd proxy," it goes ahead and adds the proxy for you, and all you write is one line of YAML. And last but not least, we have the destination controller, which helps Linkerd decide what to do with your traffic. Now that our check marks are once again happy — green check mark — we're going to go forward.

It's a demo, and Linkerd is broken up into a couple of parts, so I want to show y'all what a fancy dashboard looks like with Linkerd. You don't need to use it, but I use it every time I demo. So I'm going to use the Linkerd visualization extension, and I'm going to install it with the Linkerd CLI — the exact same process you saw before. While we're waiting, we'll present you, the audience, with an option: you can ask a question, or you can hear me tell a terrible joke. ... All right, you've chosen your fate. I have a UDP joke for you — you may not get it.

Flynn: I'm worried that next time we do this talk, you're just going to render the Linkerd check in interpretive dance.

Jason: Oh, that's the next one. So what we've got here is the visualization components being installed. We have a little metrics server, a Grafana, and a Prometheus. This is an in-memory Prometheus; when you go to production with Linkerd, we recommend — and have documentation for — using your own Prometheus instance, or federating this in-memory Prometheus with the rest of your Prometheus infrastructure. That's a bit of advice.
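The sequence Jason runs here is essentially Linkerd's documented getting-started flow — a sketch; the pinned version (2.11.2 in the demo) will be whatever the install script resolves for you:

```shell
# Install the Linkerd CLI (the curl-to-bash discussed above) and add it to PATH.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version

# Validate that this cluster can run Linkerd at all.
linkerd check --pre

# Generate the core control-plane manifests and apply them, then verify.
linkerd install | kubectl apply -f -
linkerd check

# Optional: the viz extension (metrics API, dashboard, Grafana,
# and an in-memory Prometheus -- federate or replace it for production).
linkerd viz install | kubectl apply -f -
linkerd check
```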
Jason: Otherwise, you get maybe four to six hours of metrics at best, which isn't necessarily what you want. We've got our green check marks, so we're going forward.

Up until now, all I've done is get Linkerd running. So now we're going to actually integrate — we're going to add some things to the mesh. We've got that Emojivoto application, and it works. We have Emissary-ingress, and it works. The idea here is that if you use a service mesh, you should be able to add applications that already work in Kubernetes, and after you do it, they still work — just marginally better. That's the hope, and y'all can keep me honest; let me know if that turns out to be true.

So here is the "horror" of integrating Emissary-ingress with Linkerd. We're going to get the deployment — again, there are tons of ways to do this, and I'm doing it very manually. I get the deployment, output it as YAML, and send it to the Linkerd CLI, and the Linkerd CLI says: is this YAML? Yes, awesome. Is it a Deployment? Yes, awesome. And it adds one line of annotation — linkerd.io/inject: enabled — and that's it. When we run it, we get a couple of things. The first is a new instance of the Emissary-ingress pod that, instead of one container, now has two: the Emissary container itself and the linkerd2-proxy. We also get a warning, because I installed this with Helm and then modified it manually, and Kubernetes wishes I wouldn't have done that.

Flynn: Presumably it's just as easy to add that annotation directly using your favorite GitOps stuff?

Jason: Yeah, absolutely — and if you want an example, GitOpsCon is coming up in the not-too-distant future, and we'll be able to show you that there. So that's the Emissary integration — 100% of it. We're not modifying the Emissary CRDs.
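The "one line of annotation" injection Jason describes can be sketched like this, using the deployment and namespace names from the demo:

```shell
# Fetch the Emissary deployment as YAML, let "linkerd inject" add the
# linkerd.io/inject: enabled annotation to the pod template, and re-apply.
kubectl get deploy emissary-ingress -n emissary -o yaml \
  | linkerd inject - \
  | kubectl apply -f -

# After the rollout, each Emissary pod shows 2/2 containers:
# the emissary container plus the linkerd-proxy sidecar.
kubectl get pods -n emissary
```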
Jason: We're not swapping certificates out; we're not doing anything beyond that. Now, to get it all the way through to our Emojivoto application, we do the same thing we just did, but for the Emojivoto components. Once again, we add an annotation — I could have done this at the namespace level, but I wanted to show you all the glory of the CLI — and we put that in. Once again, we see the pods roll: Emojivoto just doubled in size, because we got all-new components, all of them now with the Linkerd proxy added in. This isn't crazy exciting, because everything just worked.

Flynn: Well, we'll see. Let's load up the app and check that that's still true.

Jason: So let's vote on our favorite — oh, it still works, you see that? I've still got my valid TLS certificate, my connection is secure, and I can vote on my sunglasses. That's good. We lost all our votes from before, because everything was stored in memory and it's gone — but that's life. We're going to turn on a little auto-refresher; this just reloads the page over and over so we can run some traffic through, and we'll watch the browser-to-Emissary-to-Emojivoto traffic through the dashboard. Speaking of which, we have the Linkerd dashboard. Right before, there was no backing service; once we configured Emissary to route to the dashboard, all we had to do was actually make a dashboard exist. Refresh, and now I've got a bunch of metrics about my environment that I didn't have before. I haven't modified the Emojivoto app — you can take that on faith, or you can go play with the app itself; there's nothing Linkerd-specific happening in it at all. We added the proxy, and now we're ready to go. A little fun fact — well, no fun facts when you don't have enough time. We've got a graph of the way it works.
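Meshing the application components is the same move, repeated for the emojivoto namespace — a sketch of both the per-deployment route used in the demo and the namespace-level shortcut Jason mentions:

```shell
# Per-deployment, as shown in the demo: inject every deployment in emojivoto.
kubectl get deploy -n emojivoto -o yaml \
  | linkerd inject - \
  | kubectl apply -f -

# Or, the namespace-level alternative Jason mentions: annotate once,
# then restart the workloads so the injector adds the proxy.
kubectl annotate namespace emojivoto linkerd.io/inject=enabled
kubectl rollout restart deploy -n emojivoto
```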
Jason: We've got a view of all of our components, and we can look at our web front end and see that it knows it's talking to Emissary-ingress. Going all the way through, we can see the individual API calls being made in our environment — both from Emissary and from the other components — and whether or not those calls are successful. I haven't instrumented tracing — there's no Jaeger behind the scenes or anything like that — although this does integrate with Jaeger if that's something you're interested in.

Flynn: But is there some way we can verify that it's actually doing mTLS from Emissary to the service?

Jason: That's a great question. I'm telling you there's mTLS here, but how do we know for sure? Well, you're still mostly going to have to trust me, but I'm going to show you some output from the Linkerd CLI that looks at the individual requests and tells you what it believes the TLS status is. I'm going to do a tap: grab the metadata about every single request flowing through the environment. There's a lot going on here. I'm going to output it as JSON — it's going to be messy, but we'll figure it out together.
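The tap Jason runs is roughly the following — a sketch; the exact resource targeted in the demo may differ:

```shell
# Stream live request metadata for the emojivoto workloads as JSON.
# Each event includes source, destination, and a "tls" field --
# tls=true is the evidence that Linkerd mTLS'd the connection.
linkerd viz tap deploy -n emojivoto -o json
```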
Jason: Here I can see an individual request — a lot about it, including the source, the destination, all sorts of stuff. In this case, our source was Emissary-ingress — which is convenient, because that's what I'm trying to prove — and our destination is the Emojivoto web component, and it was TLS the entire way. And that's our whole story. Thank you.

Flynn: One thing I want to reiterate through all of this: we started off by installing Emissary and configuring the routing for Emissary, then we installed Linkerd. We did not touch the Emissary configuration after we installed Linkerd, other than injecting the one annotation Jason talked about on the Emissary deployment itself. We didn't go back and touch the Mappings or any of the other resources in the cluster. We simply put the two things together and they worked — which is what CNCF projects are supposed to do. So again: no configuration changes after that.

Jason: Yeah, thanks so much for joining us. Any questions? Go ahead — he'll come to you with a mic, or he'll pick who gets it. We'll let Randy do his job.

Audience: Thank you for the wonderful talk. My question: how does Linkerd solve cross-cluster communication — say I have two clusters in two different regions?

Jason: Yeah, absolutely. So the question was, I believe: how does Linkerd do cross-cluster communication?
Jason: We use those Linkerd proxies. Essentially, when you add Linkerd multicluster, you get into more complex Linkerd — but it's not brutally complex. You install a component called the Linkerd multicluster extension. It creates new services, one on each cluster, and those services get an external load balancer; they're Linkerd proxies. Then we link those clusters together with a CLI command: we take some service accounts and permissions and role bindings and all that goodness from one cluster, add them to the other, and then the clusters begin speaking. They require a layer-7 HTTPS connection between them — so no routing requirements, no assumptions about your network. And when a pod goes to make a call between clusters using Linkerd, it calls what looks like an in-cluster service — something like remote-service.my-namespace.svc.cluster.local — and Linkerd handles routing it to the gateway in your cluster, across that HTTPS connection between your clusters, and then from the gateway to the application. If you'd like to know more, I'd love to hear from you by email and give you a whole walkthrough of multicluster. Thank you — really appreciate the question. Anybody else?

Moderator: We've got an online question: any plan to integrate Linkerd with gRPC proxyless mesh?

Jason: Proxyless mesh — I don't know, but with Linkerd there wouldn't be a lot of point, right? Linkerd does gRPC load balancing for you — it's one of the first use cases it tackled. If you're doing gRPC in Kubernetes and you add Linkerd, you get request-level load balancing and proper distribution of your gRPC requests across all of their servers, and you'll find pretty dramatic performance improvements.
Jason: In fact, there's a CNCF case study for a company named Entain. They built a massive gambling platform on top of Kubernetes, and they said that with Linkerd they achieved over a 10x improvement in their scale while driving down costs — and that's a CNCF case study they published.

Audience: Thank you — great demo. First of all, I'm really interested in encryption, and you showed us how to inject a certificate. I was wondering if you could integrate cert-manager, how secure the internal service-to-service communication is, and whether you could generate a different certificate for each connection.

Jason: So we do already: you get a unique certificate for every single pod, those certificates by default last 24 hours, and they're tied to the Kubernetes service account the pod runs with. For production use — if you're looking at running Linkerd in production, look up the Linkerd production runbook. We publish it; it tells you what to check and what to do before you go to production with Linkerd. And again, you have my email — hit me up and I'll talk you through it, get you ready. One thing we do want you to do is use something like cert-manager to rotate the intermediary certificate that Linkerd uses to issue workload certificates — and if that's a lot, again, I'm happy to talk about certificate architecture in Linkerd at length. Did that help — did I answer your question? Awesome, thank you so much. Anybody else?

Audience: Well, Linkerd uses a sidecar-based model, right? How do you look at the eBPF things that are now starting?
Jason: Yeah — so, eBPF. That's kind of a lot. eBPF is a way to get functionality into the kernel without having to modify the Linux kernel itself. There's all sorts of stuff you can do with it — say you're using something like Cilium: you can replace the way iptables works with their eBPF module. And as for how we work with Cilium — actually, you're doing a demo on that tomorrow, I believe, right? — that shows you how you use it. Linkerd consumes the network; we don't make assumptions about it, we just ride on it. If you think of the TCP/IP model, we're at layers four through seven. We don't care about layer three; we let you do that however you want. So if you're running an eBPF module that replaces the way networking works, or changes it to make it more efficient — go nuts, we love it, we consume it natively without really interacting with it. Did that help? Awesome, thank you — and feel free to bug me afterward if you want me to go longer.

Audience: First, thanks for the demo. Can we achieve a zero-trust network with Linkerd?

Jason: That's a big question. I'm going to go ahead and say yes, because no one can challenge me right now — we're out of time. But I would say: if you define zero-trust networking as every single interaction between your services in Kubernetes being based on an allow-only model — where I can say that only pods with identity X are allowed to talk to pods with identity Y — then you can do that with Linkerd policy, as of Linkerd 2.11. And there are some big improvements coming in the way we do policy.
Jason: So right now, Linkerd 2.11 is the first iteration in which we introduced the ability to do policy — or "zero trust," which is a bit of a marketing term — but that concept came in in 2.11, and there's a big refactor coming in 2.12 that should make it a fair bit easier to use, as well as more granular. Linkerd 2.12 should be out fairly soon. Did that help answer your question? All right — awesome. Thank you so much, folks, this has been amazing.

Flynn: Thanks very much, everybody.

Jason: Thanks, everybody!