Hello, everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury. I'm director of open source at Aqua Security, and I'm also a Cloud Native Ambassador, and I'll be hosting today's show. Every Wednesday, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, they will answer your questions. And this week, we have Jason and Daniel here to talk to us about Linkerd and Emissary. Before we get to that, just a quick reminder that this is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. So please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, just be nice to each other. So hi, Jason and Daniel. Would you like to introduce yourselves? I'll let you go first, Jason. OK, sounds good. Hey, everyone, my name's Jason. As you can see, there's my Twitter handle, if you're looking for uninteresting comments on Twitter. I do technical evangelism for Linkerd, so I talk to folks about the open source project, why it's great, and why you should use it. Hey, everyone. Daniel Bryant, director of DevRel at Ambassador Labs. You may know us formerly as Datawire; we rebranded earlier in the year. My background is Java development, and I moved through to solution architecture, did a bit of operations. I was always the build person, the classic thing, right? And I fell in love with Kubernetes when it came out. So much like Jason, I spend my days these days chatting about the goodness of the cloud native tech scene, helping folks learn, because sometimes this tech is quite complicated. And I like the way the session was introduced, about breaking things. I guarantee you in the live demos we'll break something, because I always do, right? But it's part of the fun. Yeah. Great.
All right, so let's start with the basics, which is a little bit about Linkerd and a little bit about Emissary. Yeah, so let's go with Emissary first, do you mind, Daniel? Yeah, sure thing. So Emissary is a CNCF incubation project; we got accepted a few months ago now, so we're in the process of moving everything across. You can check out the emissary-ingress repo on GitHub, with a lot more awesome stuff there and links to getting started, which we'll run through in just a moment. But if you are looking for an Envoy-powered ingress, I recommend Emissary-ingress, right? Others do exist, I should say, both in the CNCF and in the general ecosystem, but it's fundamentally a way to get user traffic to your back-end services. So whatever you're doing, you always need to get that user traffic, whether it's literally browser traffic or mobile app traffic or maybe curl, that kind of stuff, through to the back-end services. An API gateway, or ingress, is somewhat of an overloaded term, but we often handle a lot of cross-cutting concerns, or non-functional requirements as some folks call them, at the ingress, at the gateway. So things like TLS, transport layer security, things like rate limiting, things like auth, all that goodness, centralized at a single point. Separation of concerns, because then your back-end apps do not have to worry about all those good things. So that's why you want to look at an API gateway or ingress to handle what's traditionally called north-south traffic. When we used to draw our network diagrams in vertical space, the user was at the top, so traffic going north-south enters at the edge of your data center. And that nicely leads into what a service mesh is, because that deals with east-west traffic. Over to you, Jason. Yeah, I muted myself right when I was ready to start talking there. So yeah, great segue. So Linkerd is a service mesh.
For those that haven't heard the term before, or at least have heard the term and maybe don't have a great sense of it, a service mesh is essentially what you get when you add a number of proxies, little load balancers, in between every service in your environment. So in Kubernetes, we use what we call a sidecar model to put a little proxy beside every one of your applications inside your cluster, and then those connections between the proxies make up a mesh of services, where the proxy can handle things like encrypting the traffic between services, or introducing failures, or adding metrics, right? So generally think of the features of a service mesh as being related to security, reliability, or observability: making it so that more of your calls succeed, and giving you better insight into your calls. If you've got 20 different apps in five different languages, you don't have to have everybody put in metrics instrumentation, right? Instead the proxy collects standard metrics about everything and then feeds that up into the control plane for the mesh. When we talk about a service mesh, we're talking about a control plane and a data plane: something for humans or computers to interface with the mesh, and then the actual layer that carries your data. So Linkerd is the original service mesh, or at least we say it's the original service mesh, and I work for Buoyant, the folks that make Linkerd. It recently graduated from the CNCF, right? So that means it's a CNCF project that has met the criteria for graduation. That happened I think two, three weeks ago, which is big news for us. Congratulations. Oh, thank you. Yeah, we're really excited. And then Singh asked in the chat if this talk is for beginners, and I would say absolutely. What we're gonna do is get started with Emissary and then get started with Linkerd, and show you how you can use these two CNCF projects to get the best of both worlds, right?
You get all that rich functionality that you get from Envoy at the ingress without necessarily taking on some of the complexity that Envoy can introduce when it's the proxy in your mesh. Sound good? Yeah, so we talked about some functional and non-functional features that both projects deliver, and the differentiation between them is that one is mostly concerned with traffic that is coming from the outside world into your cluster, which is Emissary, right? And the other one is Linkerd, which is mostly concerned with what happens within the cluster after the traffic is there, right? Absolutely. All right, good. So how easy is it to make the two work together? Well, spoiler real quick: super easy. There's effectively almost no integration to do between them, and it's really very nice. And we'll do that as we go through our quick start guides. A one-liner is always good, right? Famous last words, but it should be one line in the CLI. All right, yeah, let's get to it. Awesome, can I get a screen share, please? Apologies to folks for seeing me looking slightly to the side. I've got my restream window, my broadcasting window, to the right, so I'll be looking at that just to check it's all working. I think at the moment I'm seeing just the three of us. Yeah, I'm not seeing your screen share. Oh, it's not? Are you sharing? Yeah, is it not working? Oh, there you go. It paused there for a second. Does that look good? It does, yeah. I think it's my bad, some browser issue there. Awesome. So just to recap, if you do want to pop along, like we mentioned, there's the Getting Started for Emissary-ingress. We've also got the GitHub repo, so you can pop along to emissary-ingress, scroll down, and all your Getting Started links are there as well, so it's a good resource. We'll share all of these links in the CNCF Slack channel later on as well, so if you do miss them, don't worry, we'll share them.
And then pop along to linkerd.io. See Jason's face on the front page here, he's famous, right, Linkerd famous. It's a nice jumping-off point to land on the Linkerd page here. And Jason also shared earlier the Linkerd 101 service mesh intro and how to get started. I've learned a lot from the Buoyant folks, from William and so forth, over the years. I think the first time I heard service mesh being talked about was on the Buoyant website, pretty much on the Linkerd website. So these are great references if you want to get started. I often talk about building mental models: you have to understand the tech at a fundamental level before you really get the full value, and these blog posts are a great way to do that. Awesome. So if we go to this one here, this is the Emissary GitHub repo; scroll down to the Getting Started and you can jump into our tutorials here. I'm gonna install with Helm, I think this is the easiest way to get started. I'm gonna use Helm 3, the latest version. And it's the easiest jump towards a production-like environment. You can use YAML, there's also a CLI tool, but if you're running in production, you're probably gonna be using something like Helm. You're actually probably gonna be using Helm, right? So we're gonna use Helm to get started with Emissary. So if I click install with Helm... I have in my browser window, just to show nothing up my sleeves here, a blank Kubernetes cluster, courtesy of Civo; Jason connected us up this morning. So you can see here, I've done a K; I've aliased K for kubectl, or kube-c-t-l, it's controversial how you pronounce it, but we'll just use K in this example, right? That is my kubectl CLI tool. I'm saying K get services at the top here: blank, just the list of your standard Kubernetes services coming back.
And I've done a get pods, and there's a couple of things that Civo has installed around the CSI and some node management and so forth, but fundamentally, nothing up the sleeves, empty cluster, right? And we shall get started with running through the Getting Started guide. So I have installed Helm locally on my Mac, I have added the Datawire repo (we used to be called Datawire before Ambassador Labs, so that's why we've still kept the Datawire branding here), and I will literally run this command here, helm install ambassador. I'm gonna change it slightly just to make it easy on some things we've got later on. I go to my cheat sheet over here, and I just want to put a namespace in there. So I do helm install ambassador, namespace ambassador, looks good. We set enableAES false, oh, I've left my config file open, my bad. We set enableAES false in the command-line options for Helm; we could do it via values.yaml instead, and that just installs the open source Emissary. So we've got a commercial offering, open core, that adds more value on top, but for Emissary the open source project, just set that flag and you've got Emissary installed. Oh, and I did not create the namespace, rookie mistake on my part, there we go. Already I've got some issues, right? Let me create the namespace; this was guaranteed. I'll clear my throat. Great things. Yeah, exactly. For a second, Daniel: we've got a bunch of links in the chat, so the things that you see Daniel going through, or that you'll see me going through when we get to it, are all covered in the links that have been shared. So if you're missing any of them, please shout and we'll send them again. Awesome stuff, Jason. And it's a good thing to remind everyone that if you have questions or comments, please type them in the chat; we're monitoring it so we can address them. Super, this is all ticking away. We're getting some warnings there because of different versions of Kubernetes and things being deprecated.
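For reference, the Helm steps Daniel just walked through look roughly like this. The chart repo URL and flag names are taken from the Emissary getting-started docs of that era and may have changed since, so treat this as a sketch:

```shell
# Add the chart repo (kept under the historical Datawire name)
helm repo add datawire https://www.getambassador.io
helm repo update

# Create the target namespace first -- the step Daniel initially missed
kubectl create namespace ambassador

# Install open-source Emissary; enableAES=false skips the commercial
# Ambassador Edge Stack layer and installs just the OSS project
helm install ambassador datawire/ambassador \
  --namespace ambassador \
  --set enableAES=false
```

The same `enableAES` value could equally live in a `values.yaml` passed with `-f`, which is the more GitOps-friendly shape for production.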
So I know that some of those are raised issues that we're tracking as well; depending on what version of Kubernetes cluster you're installing into, you may see different warning messages there. But that looks good. If I do K get service, all namespaces, now you see at the bottom here, we've got our ambassador-admin and ambassador. All set up and running. Great stuff. If I now follow the instructions, I can create a mapping here. First we'll install the quote of the moment service as our demo service. I'll copy that text, pop back to the terminal, and just clear the screen to move everything up to the top. To kubectl, I'll apply that link, which should install the deployment and service for the quote of the moment service. Could you define what a mapping means? Yeah, great point. So a mapping is a custom resource. We've created a custom resource that maps a URI or path onto a backend service. So I haven't actually... I'll spin that up in just a second. Right now I've just installed my service and my deployment. Hopefully folks are roughly familiar with that in Kubernetes land: we're spinning up a container within a pod via a deployment, plus a service. And then to your question, you can see here what the mapping looks like. We created the custom resource definition as part of the Helm install; we define what a mapping is, and the mapping is quite a rich construct. You can start super simple, which we've done here. If I bump up the resolution: we've literally said create a mapping, call it quote-backend, and prefix slash backend. If you hit the IP address of our ambassador service slash backend, you'll be routed to the quote service running on port 80 by default. Does that answer your question, Itay? Yes, and then a follow-up question: how is this related to the Ingress resource? Yeah, great question. We also support Ingress. So it's very similar, to be honest.
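As a sketch, the Mapping Daniel describes (routing `/backend` to the quote Service) looks something like this. The `apiVersion` varies between Emissary releases, so this is illustrative rather than exact:

```shell
# A Mapping is a custom resource that maps a URL prefix onto a
# backend Kubernetes Service
kubectl apply -f - <<EOF
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
EOF

# Because it's a CRD, the standard kubectl verbs work on it
kubectl get mappings
kubectl describe mapping quote-backend
```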
You see a lot of the other ingresses doing their own sort of thing: some folks use annotations to define the routing, the mapping; some folks have created custom resources like we have. When we created Ambassador, now Emissary-ingress, it was like three or so years ago, and custom resources weren't a thing initially. We were using annotations, and Ingress wasn't very well defined. So our evolution has been from annotations to custom resources, and then we also went back and supported the Ingress spec when it became more solid as well. If folks do want to know more about this, I'm happy to chat, probably a bit separate from what we're talking about today, but there is quite a storied history of Kubernetes ingress, because it's not simple. That's the honest answer: getting traffic from the user to the backend is not simple, and the SIG Network folks have done an amazing job over the years. And we've sort of followed in their footsteps with a custom resource for Emissary-ingress called Mapping. Great, thanks. Thank you, good question. Yeah, I see this day in and day out, right? But I forget that this is a new concept to a lot of folks, so it was a great question there, Itay. Awesome, so we have installed our service. Looks good, right? Now I'll just copy that... you know, actually, I think I've already got that set up. Oh, where's my terminal gone? There we go, my laptop is playing up today. If I just bring up ll, you can actually see I've got the quote-backend.yaml already. If I just cat that, you can see it's exactly what we had on the interwebs, right? I've just saved that locally. Let's get rid of that. If I now do a K apply dash f quote-backend: looks good, we've got a mapping. And because it's a custom resource, I can do K get mapping, like that. And I think it's quite cool: regardless of the project, if it's got a custom resource, it's Kubernetes native, right? You can get extra info.
I could describe, for example, the mapping if I'm looking for more info. So super useful. That's why I think following along with the Kubernetes-native way, and Jason will touch more on this later on, both Emissary-ingress and Linkerd really embrace the Kubernetes resource model, the Kubernetes way of doing things. And it makes our lives as developers and operators that much easier, because it follows the principle of least surprise. If you can do K get pods, you can do K get mappings, same deal, right? So that looks good. So let's pop back to our... oh. There was a question about how mappings get impacted by using network policy. So what's the relationship between the two? Yeah, so I'd say that the mappings are sort of more fundamental: you're literally mapping a path, or some other details, onto a backend service. If you want to layer on additional security, you can do some of this via service meshes, so Jason can answer more of that as well, and you've always got to bear in mind other policies you've got in place. But if you're learning this, I would start with a blank cluster like we're doing here, and layer up your learning. Get your routes first, get your mappings, a very basic front end, user to backend service; then layer in your service mesh and see all the value you get there; and then start looking at things like Calico, right, one of the lower-level networking constructs, or OPA, Open Policy Agent, super popular. You can layer all those things on to add extra security, add extra protection, and they are great for production use cases. But if you're learning, my advice is start small and layer things on top. And just to add a tiny bit there, right? Network policy is really like layer three, right? It's like firewalls and components like that. When I say layer, I'm talking about the OSI model, right?
And the different layers of your network stack. When you think of an ingress, or in this case Emissary and its mappings, that's all really layer seven stuff, up at the application side. So obviously, if your ingress can't connect to a backend, then yes, network policy will have an impact. But generally it helps to think of them in totally different places, right? Put network policy in that network security bucket. Yeah, good point, Jason. And if you are using a cluster within, say, your commercial environment, your company, you may bump into exactly what Jason said there, where by default certain network policies do disallow things. So that's totally worth checking. If you can start with something local, like kind or minikube, that can remove all that challenge for you. Awesome. So let's go grab the IP of your Emissary ingress. I've literally copied this; pop back into my terminal, and I'll just clear the screen again to make it a bit more obvious. Oh, I've missed my namespace. Put the namespace in there: dash n ambassador, like so. And hopefully if I echo, famous last words as well... if I just echo, for you folks watching what's going on there, we can see we've got an IP address. Again, if I was just to do K get service across all namespaces, you can actually see our ambassador service within the ambassador namespace has got the external IP. We've set it up as a LoadBalancer-type service. This is a little cheat sheet just for very quickly getting the IP address of your ambassador instance. Awesome. Let's pop back to the web page. If I now just do a curl on this in the terminal, we're literally curling the IP address we just loaded in there.
Slash backend, because if you remember going back to the mapping, or just switch back to the screen, we set up the mapping to be slash backend, routing to our very simple quote backend service that we deployed. And you can see it running. We get a very nice quote back, and I can keep hitting that endpoint and we just get witty quotes. Flynn, our lead Emissary-ingress engineer, comes up with very witty quotes, hopefully, there. So that is pretty much it; there's some other stuff you can follow below if you want to go through it. I've talked about K get mappings. If you do want to set up TLS, there's a nice page, we can share the link here, using cert-manager. The Ambassador Edge Stack does a lot of this stuff automatically, but for Emissary, the open source project, it's best to use cert-manager. I've got one pre-baked, which I can share later on if you like, but if you do want TLS termination at the edge with a free TLS cert (loving Let's Encrypt, loving the ACME protocol), this guides you through using Helm and the Jetstack cert-manager and gets you all installed and set up there. I won't do that now; I'll move on to the Linkerd install, but I always advise TLS, particularly if you're in prod, for example. I'll pause there, Jason. Anything you want to add on that before I move on to the Linkerd stuff? Now, just for folks that are watching, what we've got now is traffic from the internet to our cluster. So that's our north-south. We've gone to the front door of the cluster, which is Emissary; our mapping tells Emissary where to send traffic, right, where our traffic should go. And now we're going to add Linkerd. We're not gonna do a bunch of special config, and we're gonna be able to get statistics on the traffic that's coming in, as well as encrypt everything from the outside to the front door, and then from the front door into the actual service inside the cluster.
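The IP-grabbing cheat sheet Daniel mentions can be sketched like this. The jsonpath expression and variable name are illustrative, and some cloud providers expose a `hostname` rather than an `ip` field on the load balancer:

```shell
# Pull the external IP of the ambassador LoadBalancer Service
AMBASSADOR_IP=$(kubectl get svc ambassador -n ambassador \
  -o "jsonpath={.status.loadBalancer.ingress[0].ip}")
echo "$AMBASSADOR_IP"

# Hit the /backend mapping we created; each request returns a quote
curl "http://$AMBASSADOR_IP/backend/"
```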
And something you said earlier, Jason: you know, my background's Java, but I did a lot of Go and Ruby. And I remember when I worked on my first microservices projects, I had to re-implement all the things Jason just mentioned there in language-specific libraries. If I wanted observability in Java, a Java library. If I wanted observability in the Ruby services, a Ruby library, a Go library. Linkerd, by abstracting some of that to the proxy, means that as a Java developer, as a Ruby developer, as a Go developer, I don't need to worry about those individual libraries now. And more importantly, I don't need to maintain them, because Linkerd maintains that for us. So I remember when I first bumped into Linkerd, I was like, this is awesome, absolutely, as a polyglot-type programmer, right? Cool, so I've now fired up the Linkerd 2.10 Getting Started guide. Again, we'll share the links in the channel. I have already checked my kubectl version is good to go; nice that they check that in the docs. I have also already installed the latest Linkerd CLI, just because downloading off the interwebs can be a bit dodgy when we're doing a live demo. So I'm all set there. I'll now start from the linkerd version command. So I'll copy that, go back to my terminal, and clear the screen again just to make it a bit easier to read. I just pop in linkerd version. Good to go, right? 2.10, client version. Excellent. I will be chopping and changing here, but I'll go back to pre-flight checks, of course. So this is a Civo cluster, right? They install a cloud-hosted K3s. So we can check that this K3s cluster is a valid target for Linkerd, and that Daniel has the right permissions for what he needs to do, just by running this CLI command. I love running the checks as well, because it just looks so good, right? So I'm a big fan of the Linkerd CLI, super easy. Right, I'll just paste in now the install command. Oh, I obviously did not copy-paste. Clear the screen, paste that in.
It's all just going through all the install stuff. And now this looks great. Then if I scroll down, in the background we'll run our checks again once Linkerd is installed; again, you get that nice visualization, that nice feedback. So I did a demo with Thomas Rampelberg for KubeCon EU, I think a year or two ago, where we did multicluster Linkerd, and the checks were just fantastic there. We're not gonna dive into multicluster much today, but the Linkerd check commands, they can seem sort of trivial at times, because you see lots of green ticks, but when stuff's going wrong, they're super useful. So I'll run that again. It's really nice to check all the pods are up and running, and when you're doing multicluster, can the clusters talk to each other, do the connections work? So do not underestimate the value of these check commands; they are super useful. Just a little talk track while we're seeing this go through, right? So what happened there? You saw in the command linkerd install, right, piped over to kubectl apply. That is generating all the YAML that you're gonna use for the install. We've got examples doing it with a GitOps flow, or installing via Helm. So first thing to note: the CLI and the Helm charts use the same templates, right? Same templates, same templating engine. So the values that you set in one are applicable to the other, which allows you to go back and forth really easily. And then it'll just generate standard YAML that you can save and share out. I guess that's really the big thing with the install command: if you don't pipe it over to kubectl apply, it's just gonna output a bunch of YAML right to your terminal that you could save off somewhere. I love the GitOps flow, Jason, that's right. When we're in prod, we typically would do that, because it's just easier to manage, easier to upgrade and so forth.
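The install flow described above, sketched end to end. These commands follow the Linkerd 2.10 getting-started guide; newer releases may differ in detail:

```shell
# Pre-flight: is this cluster a valid target, and does the current
# user have the permissions Linkerd needs?
linkerd check --pre

# Generate the control-plane YAML and pipe it straight to kubectl;
# drop the pipe to save the YAML off for a GitOps flow instead
linkerd install | kubectl apply -f -

# Verify the control plane came up healthy
linkerd check

# Same pattern for the viz extension (dashboard and metrics)
linkerd viz install | kubectl apply -f -
```

Because the CLI and the Helm charts share the same templates, the values set here carry over if you later switch to a Helm-based install.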
But I love the CLI for getting started; yeah, plus one on the YAML on the install. Status checks look good. And just to Jason's point there, if we now do K get service across everything, you can see there's a lot more stuff installed now. We've got our ambassador, we've got our quote service in the default namespace, and then we've got a linkerd namespace with all the goodness installed there as well. I'm gonna install a few more things now. That all looks good. I'll just clear the screen again and move on to the next install instructions. And this is for installing the visualization tools, Linkerd Viz. It's actually what Jason was saying in terms of piping through: running the install, generating the YAML, piping it through on the command line to kubectl apply. The trailing dash is basically saying, take what we're piping through and apply it. And that's what you're seeing here with all this coming through. So while this is running, we can maybe address one question here, about whether Linkerd can provide functionality for non-HTTP applications. And actually there's a good opportunity to clarify what kind of proxy Linkerd is using. Yeah, so it is a great question, thank you very much. So Linkerd is actually, I don't know if it's unique among the service meshes, but most service meshes use Envoy as that sidecar proxy. Linkerd does not. Linkerd uses a custom-built proxy, linkerd2-proxy, that's written in Rust. And I can talk about that in way more detail than anyone here probably wants right now. But it is very fast and it's very simple. So for non-HTTP traffic, we can get metrics for TCP connections. I can probably show some of that when we do the demo, although we may have to take that offline. When you have non-HTTP, non-gRPC requests, right, we can give you some details about the bulk TCP stream, but you're not gonna get request-level information, right?
Because it's just a bulk connection. Going back to that OSI layer thing: if it's layer four, we can tell you that it connected and how much data is flowing. But that's really all we're gonna see without understanding how to read the underlying protocol. So you generally get interesting information for HTTP and gRPC traffic. Very nice. Just a shameless plug as well: I did a podcast with Oliver from Buoyant, and my colleague Wes Reisz from InfoQ did another podcast with William, I think, as well. So if you do wanna know more details about the Rust proxy, like why Buoyant chose Rust and how the libraries around it evolved, I learned a bunch from Oliver and William. So check out those podcasts on InfoQ if you do wanna dive in, because I think it's just super interesting to know about the tech, all right? That, I think, is a good point to hand over to you, Jason. Yeah, for the integration. Sound like a plan? Yes, just gonna figure out how to get off mute, but yeah, absolutely. Nice, let me stop sharing my screen in a minute and find the button in Restream. And voila. All right. So we're sharing our Civo cluster just via the kubeconfig files. So, Jason, we're jumping in, following my footsteps. All right, let me know when we can see my terminal. We see you. All right. So right now, let me probably bring that back. So right now we've got a bunch of stuff running in the environment; hold on a sec. I've got a little laser pointer that I try and show off every chance I get. So we have a bunch of things going on. We've got the Linkerd pods, right? These are the components that make up the control plane for our service mesh. We also have the Linkerd Viz components, which are the dashboard, right, which I can show you all in a minute. And the dashboard is a nice way to visualize, that's why we use the word viz, what's happening inside your cluster, right?
But you'll note, for all of them, when we see ready, we see two of two, right? So that's two containers per pod. The reason there are two is because there is both the app that does whatever thing it's supposed to do, and then the Linkerd proxy sitting beside it, right, so that we can have it in the mesh. Now, our ambassador pods, we've got our three ambassador pods in that deployment: none of them are in the mesh. Same thing with quote of the day, right, it's not meshed. So if I pop out, I didn't think I was gonna need this, but if I pop out another window for my dashboard... so I've got one right here. So linkerd, no, give me just one sec for our export. Yep, linkerd viz dashboard, sorry that's so small, let me make that a little bit bigger. If I pop open a dashboard here, right, I'll be able to see into my cluster, but I'm not actually gonna see anything interesting, right? There's very little in the mesh beyond Linkerd itself. But we're gonna fix that. We're gonna inject both our application and ambassador. So let's do that, starting with quote of the day. So let's do K get deploy, dash n default, right, it was in the default namespace. We've got quote, right? So we'll just specify it and output it as YAML, right? We're gonna output the deployment details as YAML so I can use the Linkerd CLI to add the proxy to it. And all we're doing when we do it is adding an annotation to the pod spec that says linkerd inject enabled. So: add the Linkerd proxy to this, and then Linkerd will do the rest. So I've got lots of opportunities for mistakes here, because I mistype constantly, right? So: get the deployment as YAML, add our annotation, send it back to the Kubernetes API. I alias kubectl to K because I can't reliably type kubectl, or kube-c-t-l, or however you wanna say it.
So now if we go K get pods, we're gonna see, just inside this namespace, the old ones going away and the new ones starting up. With some data, go back over to that dashboard, right, and now all of a sudden default is coming in, and we're gonna start getting some traffic for it, right? And if I send some requests over to the backend, we'll see that pop up. Oh, hey, look, every call is successful. You know, the latency is super tiny and we have very few requests per second, but still nothing for ambassador. So let's fix that. So now, same thing: K get deploy, dash n ambassador, ambassador, right? So we're gonna grab ambassador itself, linkerd inject, dash dash ignore... oops, not ignore. It's skip, that's it: skip inbound ports. All right, so what we're doing here, this is the ingress. In general, when we inject a pod or deployment inside our kube cluster, what we wanna do is get traffic both incoming and outgoing from that pod, because that pod is only talking, or generally gonna be talking, to other things inside the mesh. For Emissary-ingress, or any ingress that you're using, we're never gonna care about traffic coming into the ingress from a service mesh perspective. Right, we care about east-west traffic, traffic between services in your cluster. So that's a lot of words for: let's just skip inbound web traffic to this thing, because we don't really care about it. And then we're just gonna go ahead and pass that right back to the Kubernetes API. So it's the same inject command with a little bit of extra flavor. You don't need to do this if you don't want to, you don't need to skip the ports, but it's worth doing. And, oh, sorry, let me fix this: I forgot to output it as YAML, right? linkerd inject works with YAML.
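Both injections can be sketched as below. The port list on --skip-inbound-ports is illustrative; use whatever ports your ingress actually listens on:

```shell
# Mesh the quote deployment: fetch it as YAML, let the CLI add the
# linkerd.io/inject annotation, and send it back to the API server
kubectl get deploy quote -n default -o yaml \
  | linkerd inject - \
  | kubectl apply -f -

# For the ingress, skip inbound ports: north-south traffic arriving
# at Emissary isn't the mesh's concern
kubectl get deploy ambassador -n ambassador -o yaml \
  | linkerd inject --skip-inbound-ports 80,443 - \
  | kubectl apply -f -
```

Either command triggers a rolling restart of the deployment, which is why the old pods drain away while new two-container pods come up.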
Cool, it gave me a warning because, you know, we created this with Helm, not with a k apply, but it's totally fine. So now we can do k get pods -n ambassador. Right, we see that there are new pods spinning up for Ambassador now with two of two, right? So we have the normal ingress plus a Linkerd proxy. And in a minute, if we keep refreshing this, we're gonna start to see traffic coming from Ambassador through to our quote of the moment, quote of the day. One sec, let's force this guy to refresh a little bit faster. Sorry, lots of refreshing going on. It might not hit it yet, right? Cause we've only got one of the three. Oops, wrong namespace, sorry. So let's see how long this is gonna take. Let me just, one sec, k get pods -n ambassador. All right, well, we've got this pre-baked, so actually, is now the time to swap over clusters? Yeah, I think so, Jason, it sort of takes a while just to spin up all the different pods in there and get everything aligned. You can see from our previous experience, we have learned to have pre-baked things good to go to shortcut the time, right? So this cluster, we did the same thing, right? And we can do k get pods -n ambassador, right? On this cluster, we're actually using the Ambassador Edge Stack — I like it, it's got some features that I use an awful lot. But we see that the Ambassador ingress, in this case I only have the one pod, is injected, right? Now we can actually see some traffic. Now what we've done, so you saw the mappings that Daniel showed earlier, right? We can actually get our mappings, right? And the thing I love a little bit about CRDs, or custom resource definitions, is that all my native Kubernetes tooling continues to work the way I expect. Right, so I just tab-complete, I don't know, mappings.getambassador.io, right? I just start typing map, hit tab, it completes for me. I want to look at all namespaces — it's the standard CLI that I'm used to.
And I've got a bunch of stuff going on. Like, I made it easy on myself: instead of having to do that linkerd viz dashboard, actually anyone who feels like it can just hit this, dashboard.cbo.59.io, and you'll see this — there we go, I knew I had the link somewhere — you'll see this, right? So we've got, you know, we can see Ambassador itself, right? So we can see the deployment, we can see what pods that deployment is talking to, right? We can see the total number of requests per second heading through, our response time, every endpoint that it's going to, right? So we're hitting a quote service and that's responding really quickly. So going and looking at the same thing that Daniel just showed us, right? Or the same thing that Daniel just installed, we can see that from Ambassador, we're getting a GET method to the root of this path. So I changed it from backend just to the root so it would be a little bit easier to route to, and, you know, we're entirely successful. We could tap live traffic if we wanted to, right? So let's just see, hey, what's coming in, right? And this isn't stuff that's instrumented in quote, right? We didn't have to put in a metrics library. We don't have to do anything special, right? We're just getting this data because the proxy's there. And on the Emissary side, while we do need a Mapping — a custom resource — for the routing, right? Inside Linkerd, everything else is just — oh, great, I'm getting an error, but we're going to close that, right? Inside Linkerd, right? Because it just works with Kubernetes native services and Kubernetes constructs. We didn't — you know, I haven't created a virtual gateway or service or anything special, right? I'm using Kubernetes services. I'm using a standard Mapping, or an Ingress if I want to use an Ingress via Emissary, although I find the Mapping really easy to do.
So I use the mappings and all our stuff just continues to work the way we expect, but marginally better. And that integration point — so we were digging, we used to have a more complicated setup in our docs for integrating with Ambassador Emissary, right, because we assumed that, you know, there'd be something in particular about the way it worked that we'd need to override. But we were digging in preparation for this, and in preparation for just giving people best practices for using Emissary and Linkerd, because we think the combination's great. And it turns out Emissary's default behavior is entirely Kubernetes native, and so now we can stack these two CNCF projects: get header-based routing, get rate limiting, get, you know, that nice stuff at the ingress, with these detailed metrics about what your environments are doing. Right, I can look through all my namespaces. I can see, you know, where are things going well? Like where do I get the high success rates, and where do I have apps that have a problem? Right, all done, all integrated with that ingress, right, with no special configuration. So I think that's pretty cool, and that's the bulk of what, you know, we really want to show today. I love that as a key takeaway, Jason, because that's something you and I would discuss, but like this is really easy — it just works. But again, that's the power of, like, standardization here within the CNCF, right, all the great work going on here. If you follow the Kubernetes resource model, follow all — like, you know, I know you perhaps want to talk about SMI, things like the Service Mesh Interface — all these good things.
If you follow the standards, it kind of just works — or it should just work, and mostly does — and you're also not locked into certain things as well. You know, that is one argument for some folks using the Ingress rather than the Mapping custom resource, because our Mapping custom resource is not directly interchangeable with, say, you know, another ingress, for example. But in reality, what's the chance of swapping out ingresses? I remember back in my Java days, I always, you know, wrote defensive code around databases swapping out, and in my like 20-year Java career, I think I swapped out one database, like Postgres for MySQL, and that was a completely custom reason why we did that, right? So look for your abstractions, but again, I'm with Jason — I'm obviously biased, but for me, the Mapping resource is super simple, whereas the Ingress stuff tends to be more complicated. Powerful, but a bit more complicated. I'm a big fan of minimal code. You know, the less code I write, the less config I write, the less stuff I've got to maintain. So that's why I like the Mapping, that's why I like that the Linkerd config is super, super easy. Yeah, and to expand on that, right — ingresses, people think they're interchangeable, but they're not, right? Like there's a difference, and every ingress is going to have bits specific to the one that you're using, right? So they tried to, they worked on — like, I know the networking group has worked on ingress routes and expanding, or was it the Gateway API spec? That's right, yes, yes. Which I believe Emissary fully supports, right? Or at least — yeah, at least the current version, but I think it's still in beta, but we do support that in Emissary, yeah, yep. So, like, that's changing anyway, right? So I don't think there's any concern about using it. Like, you can see it while you're here — let me just show a mapping, right?
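The kind of Mapping he pulls up next can be sketched like this — hostnames and service names here are illustrative placeholders, not the demo's exact values:

```shell
kubectl apply -f - <<'EOF'
# The simple case: a name, a hostname, a path prefix, a target Service.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote
  namespace: default
spec:
  hostname: "quote.example.com"   # which hostname to respond to
  prefix: /                       # which path to match
  service: quote                  # which Service to route to
---
# The "complex" case: the Linkerd dashboard needs WebSockets,
# which in Emissary is one extra stanza.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: linkerd-dashboard
  namespace: default
spec:
  hostname: "dashboard.example.com"
  prefix: /
  service: web.linkerd-viz:8084
  allow_upgrade:
  - websocket
EOF
```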
So I've got like five mappings in this document here. So let's just do — let me close this out, right? So here's the quote mapping, right? So if you're familiar with an Ingress, this isn't crazy complicated, right? We give it a name, you know, the prefix that we're using — so what path are you going to hit on the API? What host, so what hostname do I want to respond to? And then what service am I going to route to, right? And that's the extent of it, right? It's a pretty simple and straightforward thing. Here's a complex one. So here we see — let me make that a bit bigger, right? Here's the one for our dashboard. So the Linkerd dashboard uses WebSockets, so we have to get a bit more complicated. Like, I was working with a different ingress at one point and trying to get WebSockets to work over that ingress. And the pain and suffering that I went through — going through the docs, trying to get the config right for that particular component — was pretty high. When I looked at how to do it in Emissary, it's like, oh, allow_upgrade: websocket and I'm done, right? Anyway, it's not a far step if you're looking at Ingress versus Mapping, would be my takeaway there. Well, that's really awesome, I have to say. And if anyone has any questions, you can type them in the meantime. What other things could we do with either Emissary and the Mapping, or Linkerd? Like, we got the setup working. Now, what can we make of it? Yeah, so all sorts of stuff, right? So we've got, on this one, every one of these URLs — like, y'all can hit them up in the chat, right? They've got a valid HTTPS cert. And I didn't do anything beyond, you know, defining a host, right? So this was actually with the Edge Stack, but in the Edge Stack, right? I just define a Host.
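The Edge Stack Host resource being described looks roughly like this — the hostname and email are placeholders, and automatic ACME is an Edge Stack feature rather than core Emissary:

```shell
kubectl apply -f - <<'EOF'
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: demo-host
  namespace: ambassador
spec:
  hostname: quote.example.com
  acmeProvider:
    email: you@example.com   # used for the Let's Encrypt registration
EOF
```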
I say I want to use my email address for the ACME provider — which I shouldn't share on the internet, but there you go. Please don't spam me, folks. I just, you know, I just put in the hostname, and then it's going to go through and auto-generate certificates for me, right? Which is really, really super handy, right? And then the other stuff that I love, or what I really love about this integration, is — for us, it can sometimes be a challenge to have an ingress integrate directly, because we have to say, hey listen, should it do some non-standard behavior? Here's how we override it so that it works well with the mesh, right? Especially when we start getting into SMI constructs. So SMI is the Service Mesh Interface, which allows you to do more complex things with traffic than the basic Kubernetes services do — like multi-cluster, like traffic splitting — or, you know, if you look at Argo Rollouts or Flux, right? They use the SMI spec to shift traffic between things. Well, they create — No, no, no, no, just a clarification. SMI is a different CNCF project, right? It's like an abstraction over service meshes, and Linkerd is one of them — just to clarify for those who don't know, yeah. Yeah, sorry about that. Thanks for adding that detail. You know, so — but they rely on this kind of, we call it an apex service, inside Kubernetes, so that you can shift traffic around. Well, the service mesh needs to handle that, right? And it does it with intelligent routing. Now with Emissary, because it defaults to routing to the cluster IP, we can handle things like multi-cluster or, you know, complex rollouts with no special configuration.
But if you're looking, for a particular service, to do sticky sessions, or you want to route to a particular pod based on some criteria, you can do all that in Emissary and still have Linkerd carry out the default behavior for the rest of your traffic, you know, with no special configuration beyond what you do in Linkerd. Does that make sense? Just to give it a bit more context, Jason — I guess from the Emissary side, we love all the service meshes, right? But the native sort of integration is much simpler when everything dials into the Kubernetes best-practice way of doing things. Like, a lot of the service meshes will use endpoint resolvers — Endpoints in Kubernetes — as opposed to actually looking at the Services and getting the metadata, the IP addresses. And there's good reasons for doing that, but it also adds a lot of complexity on top. And I've felt the pain, Jason. I won't mention any service mesh names, but there were a few like, why is that doing that? And it's just because the responsibilities overlapped, right? Like the North-South and East-West, the Emissary and the other service mesh — if I anthropomorphize it, it was almost like an argument between the two things: that's my responsibility, no, it's my responsibility. Whereas there's a clearer separation of concerns with Emissary and Linkerd, because we literally hand off at the abstraction points that are native to Kubernetes — the Service, in this case, right? And if folks haven't bumped into things like Endpoints, and maybe even haven't gone deeper into pods, it's worth a little Google. The Kubernetes docs are fantastic. And just understanding how EndpointSlices work, particularly when you go to multicluster stuff — I learned a bunch from Thomas when I was learning about Linkerd multicluster.
You know, again, I don't personally code much at that level, but I still enjoyed learning about what's happening underneath. And it gave me a bigger appreciation for: well, if you just stick to the Services, stuff just gets easier, right? Yeah, yeah, absolutely. Okay. What are some other Emissary features that maybe you can highlight? I mean, we've got the traffic in. What can we do with Emissary, something more than just bringing traffic in? Yeah. Encryption? Yeah, for sure. And in some ways, one thing I often say to folks I'm chatting to in the community is, when an ingress is doing its job well — and I think this really applies to the service mesh too — you actually don't notice too much about it. So doing demos for all these things is really hard, right? Because it should just work. But you always wanna think about things like security — that's a big one, 100% what you said there. So transport-level security: Jason showed you Edge Stack with integrated Let's Encrypt ACME protocol support. You can use cert-manager for Emissary-ingress; I've got a demo of running that. So you definitely wanna secure the transport layer, the TLS. That's super easy to do. The next thing you wanna do, typically, is integrate authentication. We've got some demos on the Emissary-ingress site of using a very simple authentication service that we've written — I think in Go or Node; I think it's in Node. And it uses basic auth. So it actually uses the Express framework in Node and basic auth. And it's a really simple way to just do authentication at the edge, because Emissary exposes ext_auth — it's almost like an API, an interface, a standard Envoy-type interface. So you can plug in anything that implements that ext_auth API. We've got some commercial offerings in that space, there's open source offerings, there's stuff on the interwebs. Be careful what you choose, because authentication is super important, right?
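Hooking an external auth service into Emissary is one small resource; the backend name below is a placeholder for whatever implements the ext_auth interface:

```shell
kubectl apply -f - <<'EOF'
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: authentication
  namespace: ambassador
spec:
  auth_service: example-auth:3000   # placeholder auth backend
  proto: http                       # grpc is also supported
  allowed_request_headers:
  - x-example-auth                  # headers forwarded to the backend
EOF
```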
Yeah, just bear that in mind if you're pulling stuff down off GitHub and you think it's doing auth for any ingress — double-check it, because if the auth is compromised, game over, right? Really, really quite tricky there. But we do expose the standard APIs for authentication. We also expose the Envoy rate limit API in Emissary-ingress too. So rate limiting is sort of closely related to security, because obviously you wanna secure transport, you wanna authenticate and authorize the human coming in, but then you might wanna stop things like denial of service — people accidentally abusing your service. Maybe you've got a freemium product and the app just runs away and starts calling the backend a lot, and it degrades the overall experience. So you often wanna think about rate limiting or load shedding. Emissary makes that super easy. I wrote an example rate limiting service in Java — you can find it on my GitHub repo — and Flynn and the team wrote a Go-based rate limiting example, which you can find on the interwebs too. But those are the most common things: transport-level security, authentication, and rate limiting, I would say. And then hooking it up to observability is closely related to that — hooking it up to Prometheus, like Jason's talked about in the service mesh context too. And then often, if you're doing things like distributed tracing, you wanna start the traces at the edge too. So we integrate with Zipkin and Jaeger and a bunch of other things there. So observability is often thought about quite a bit too. Awesome. We have a question about doing traffic splitting with Emissary. Maybe it can also be applied to Linkerd, but generally, can you guys address this? Yeah, shall I take that one? So yes, there is a canary weighting in the Mappings. Check it out. And that's what we use when we integrate with Argo. So we've done a lot of work — and hat tip to Kostis from Codefresh, my buddy from the community. He ran a Summer of K8s session.
It's like our online free learning thing we're doing over the summer. And he broke down how to use Emissary and Edge Stack with Argo Rollouts. And then I followed on with Argo CD afterwards as well — how to do canary releasing with all that tech. So you can do it manually just by changing the canary weighting, effectively, on different Mappings. So you have, like, mapping-stable and mapping-canary, and you just change the weights manually and, you know, do a kubectl apply. But Argo is amazing. Argo CD, Argo Rollouts — the whole Argo series of projects are amazing. If you're looking to do canary releases, I suggest having a look at those. And then just to add on — like, you might have seen it — so it's a traffic split, depending on what you mean, right? There's a TrafficSplit object from the SMI spec, right, which is implemented by Linkerd, right? And when you're connecting these two, it just works. So you might have caught it in the dashboard, but there's actually an Argo Rollouts install, and Podinfo is using Argo Rollouts. So the routing to podinfo.cbo.59.io — that's actually going to a TrafficSplit object. Right now 100% of the traffic is going to our green deployment, but, you know, we'll do — in the Summer of K8s session, is it next week, Daniel? Week after. So folks can check out our Twitters for that, we'll put the details up, yep. Summer of K8s, the week after next, we're gonna show traffic splitting with Argo Rollouts, Linkerd, and Emissary, all kind of rolled into one. But yeah, that's kind of the way you do it. So there's a ton of options. And again, highlighting that, because both projects have a clearly defined set of boundaries and try to do one thing really well, right? They work together super well, and there's no special configuration to do. Great, looking forward to that session actually. I think there are no more questions.
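For reference, the two splitting mechanisms discussed above can be sketched side by side — a weighted Emissary Mapping for manual canarying, and an SMI TrafficSplit like the one behind Podinfo. All names and weights here are illustrative:

```shell
kubectl apply -f - <<'EOF'
# Manual canary at the ingress: a second Mapping with a weight;
# the stable Mapping for the same prefix gets the remainder.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: podinfo-canary
spec:
  hostname: "podinfo.example.com"
  prefix: /
  service: podinfo-canary
  weight: 10            # send ~10% of this route's traffic here
---
# Mesh-side split: an SMI TrafficSplit, implemented by Linkerd
# and driven automatically by tools like Argo Rollouts.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
spec:
  service: podinfo        # the apex Service clients call
  backends:
  - service: podinfo-stable
    weight: 900m
  - service: podinfo-canary
    weight: 100m
EOF
```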
Is there anything you wanted to add before we conclude this one? One thing I'll share is: do get involved in the community. Both of these are CNCF projects — Linkerd is graduated, we're incubation stage. The community thrives from folks like yourself watching the stream, right? Jump on the GitHub repos, have a look at issues. Docs are super, super important. Like, Jason and I were just tripping over a couple of doc issues that I'm gonna go fix later on. It's so hard to keep some of the projects up to date. So whether you're an engineer, a doc writer, whatever skills you've got, get involved with the CNCF. And we'd love you, obviously, to get involved with Emissary-ingress and Linkerd. But if that doesn't work for you, pick a project, get involved. Yeah, great call out. And we shared the link in the chat, so you can go to each project's GitHub, docs, and everything. Yeah, and then to add on: slack.linkerd.io is the Linkerd-specific Slack. If you wanna talk to maintainers or get involved there, they'd love to hear from you. I hang out in both the Linkerd Slack and the DataWire Slack. I find them both really helpful. I gotta do a shout-out too. So if you go to — there's Ambassador — it's basically a8r.io/slack, and you can find our Slack there as well. And we've got Telepresence, which is another CNCF tool which we steward, or we help steward. So you can chat to us there. Like, I hang out in all the Slacks — the CNCF one, the Buoyant ones. Yeah, so you can find us there. Great. And you can see on screen also just a reminder that KubeCon North America is upcoming. So registration is open for in-person and virtual events, and so hope to see you either there or on screen. Thank you very much, Daniel and Jason. This has been great. I really enjoyed it. I hope to see everyone here again next week — every Wednesday we're here. So thank you guys again. Thank you, Itai, for hosting. Sorry, Daniel. Spoke over you there. My bad. I was just gonna say thanks to you, Itai.
Thanks to the whole CNCF team. Really appreciate all the support here. Yeah, same. All right, thank you. Have a good day, everyone. See you.