Hello, KubeCon, and welcome to Multicluster Is Easier Than You Think with Linkerd and Ambassador. We're going to be showing you how to do multicluster the easy way, and at the end, I'm going to give you a couple of great use cases for multicluster that you can go use on your own. Before we get started, let me introduce my co-presenter, Daniel Bryant. Daniel, why don't you tell us a little bit about yourself? Thanks, Thomas. Hello, everyone. Daniel Bryant, Product Architect at Ambassador Labs here. I've been working on the Ambassador API gateway project for the past three or so years. Loving this technology, of course. Loving Linkerd, loving service mesh in general. The combination of Ambassador and Linkerd for multicluster is super interesting, so I'm looking forward to bringing you the demo today. And my name is Thomas Rampelberg. I'm a software engineer at Buoyant, the creators of Linkerd, a CNCF service mesh. Before we get started, let me introduce the two projects that we're going to be using to build out our multicluster solution. The first project is Ambassador. Ambassador is an API gateway. In Kubernetes, you need to have an ingress controller; Ambassador does that and quite a bit more. In fact, to get into production, you need a lot of things like authentication, rate limiting, and request filters, and Ambassador does an absolutely fantastic job of all of those. It is open source, and I would recommend checking out their website to go and kick the tires on it if you've been looking for an ingress controller. I work on Linkerd, which is a service mesh. In fact, it is an ultra-light, ultra-fast, security-first service mesh for Kubernetes. You get observability, reliability, and security right out of the box, which covers all of the normal service mesh checkboxes. Obviously, the thing we're going to be talking about today is multicluster, and Linkerd makes that even easier than you can imagine. So let's talk a little bit about what we're going to go over.
To do multicluster the easy way, you really need four separate things. You need to understand how service discovery works and be able to discover services across clusters. You need to have cross-cluster access: you need to be able to get requests from one cluster to the other. You need to be able to route the requests in an intelligent fashion. And my favorite part of multicluster the easy way is profit. So to get started, Daniel, why don't you give us a little bit of a sneak peek into the demo and everything that we're going to be working on here? So welcome to the demo. Let me start at the end; this is actually where we're going to end up. You can see I've got a command-line terminal showing an east Kubernetes cluster. This is a GKE cluster running the frontend and podinfo services. And you can see I've got a local Docker Desktop cluster down here, under the context of west. I'm again running frontend and podinfo here, but I'm also running a service mirror, podinfo-east. So the podinfo-east service in my local Docker Desktop cluster is actually pointing to the GKE cluster; it's pointing to the mirrored service there. And you can see I've exposed, using Ambassador, localhost onto the frontend service running in my Docker Desktop cluster. And because I've got some traffic splits set up using SMI, which I'll show you at the end, you can see as I'm actually making requests (I've got some refreshing going on in the background against localhost) that we're hitting the west and east versions of the podinfo service 50/50. That's how the traffic split is set up. So although I'm always hitting the Ambassador ingress instance running in my Docker Desktop cluster, sometimes the response is being served by the podinfo service running in Docker Desktop, but 50% of the time it's being served by the podinfo service running remotely in GKE. And that's all thanks to the power of service mirroring. And it's all secure traffic between the two clusters.
And we'll now run you through how you can actually set this up on your own. So for today's Linkerd and Ambassador multicluster demo, I'm going to be using two Kubernetes clusters. I've spun up one already in GKE to save time, using the gcloud command-line tool. I've spun that up in the US East region, and I'm going to be calling that cluster the east cluster. That's my top-right window here. I've also spun up a local Docker Desktop cluster. That one's going to be shown in the bottom-right window here. And I'm going to be running my commands against the clusters using the two contexts, east and west, and running all the Linkerd commands and the Ambassador commands in my console on the left here. These clusters are both empty at the moment, so let's pop along to the K8s Initializer and use this to bootstrap the clusters. This is a free tool. You can install a bunch of stuff: Ambassador for ingress, Prometheus for monitoring, Jaeger, Knative, Argo CD, et cetera. You can select the configuration, auto-generate the YAML, and simply apply it to your cluster. So for Docker Desktop, for example, my west cluster, I'd select Docker Desktop. I don't want to terminate TLS; again, this is a toy example, and you do want to look at TLS in production. Once I've selected my config, I can select more down here if I want, but today let's keep it nice and simple. I go to review and install, and voila, I get some kubectl apply commands I can run against my cluster. Now, I've got some pre-configured commands I've prepared already, because I want to put the contexts in, so I'm not going to be copying and pasting from here, but this is exactly what I would do. Let's go back and I'll show you with the GKE one. It's exactly the same: select GKE, I'm going to be using an L4 load balancer, I don't want to terminate TLS in this toy example, a public hostname of star, and I review and install. And again, I just apply that against my cluster.
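To make the two-cluster setup concrete, here is a minimal sketch of how it could be bootstrapped from the command line. The project name, zone, and cluster name are assumptions for illustration, as are the generated YAML file names; the demo uses the K8s Initializer to produce the manifests.

```shell
# Hypothetical sketch: create the remote GKE cluster and rename the
# contexts to "east" and "west" so they match the demo's windows.
gcloud container clusters create east-cluster --zone us-east1-b --num-nodes 2
kubectl config rename-context \
  "gke_my-project_us-east1-b_east-cluster" east

# Docker Desktop's built-in cluster becomes "west".
kubectl config rename-context docker-desktop west

# Apply the YAML generated by the K8s Initializer to each cluster
# (file names here are placeholders for whatever you downloaded).
kubectl --context east apply -f east-initializer.yaml
kubectl --context west apply -f west-initializer.yaml
```

Renaming the contexts up front is what lets every later command be written as `kubectl --context east ...` or `kubectl --context west ...`.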
Let's actually do that now with my pre-configured cheat sheet. First, I'll run my watch commands on the Kubernetes clusters. At the top here, I'm going to be watching the east context, our GKE cluster, looking for services in all namespaces. So as we add more things, you'll see more services appear here. And in the bottom window, I'll do the same for the west context, my Docker Desktop cluster, so you can see as more things appear there too. Right, let's install Ambassador in both clusters. With the east cluster now done, we can see in the top window we've got our Ambassador services in addition to our Kubernetes core services. Let's now install Ambassador on Docker Desktop. Now we've installed Ambassador on Docker Desktop, so we have our ingresses set up in both clusters: the remote GKE cluster and the local Docker Desktop cluster. Let's now move on to the Linkerd multicluster setup. The first thing we're going to do for our multicluster setup with Linkerd is create a shared trust anchor: public/private keys shared between the two clusters that enable them to communicate securely. I'm going to use the step CLI here; you could use openssl or whatever you like. I'll generate my certificates into my directory here. Once that's done, let's run the linkerd install command in both our clusters, using a little bit of a cheat here to do both east and west at the same time. And you'll notice in the windows on the right that as Linkerd gets installed, you'll see some of its services popping up. This is all looking good now. You can see the linkerd namespace and the Linkerd services, both in my GKE cluster and in my Docker Desktop cluster down here. Let's do a quick check on the Linkerd config. The status checks look good. That's one thing I very much like about the Linkerd CLI: you can run all the health checks and it presents everything in a very nice way.
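The trust anchor and install steps above can be sketched roughly as follows. This follows the Linkerd multicluster documentation of that era using the smallstep `step` CLI; exact flags may differ between Linkerd versions, so treat it as a sketch rather than a copy-paste recipe.

```shell
# Create a shared trust anchor (root CA) plus an issuer certificate,
# as described in the Linkerd docs.
step certificate create root.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure

step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key

# Install Linkerd into both clusters with the SAME trust anchor, so
# the two meshes can authenticate each other, then run the health checks.
for ctx in east west; do
  linkerd install \
    --identity-trust-anchors-file root.crt \
    --identity-issuer-certificate-file issuer.crt \
    --identity-issuer-key-file issuer.key \
    | kubectl --context "$ctx" apply -f -
  linkerd --context "$ctx" check
done
```

The crucial detail is that both installs share `root.crt`: that shared trust anchor is what makes cross-cluster mTLS possible later on.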
We can scroll up and down and look at all the various config parameters and know everything is good. Thanks, Daniel. It is really cool to see what it takes to get Ambassador up and running on not one cluster but two, as well as Linkerd all set up and ready to go. Now, for those of you who are hopefully going to be trying this out at home, I would like to note that while that looked pretty quick, it will probably be a little bit slower on your own clusters, because I used a little bit of movie magic there and sped everything up. Great. So with all of the components for Ambassador and Linkerd set up, let's talk a little bit about what's required and what all of those components do. To get multicluster working the easy way, the first thing to dig into is service discovery. To talk about service discovery, though, let's first go into what it means in Kubernetes. I'm going to be using the podinfo service here as an example. Kubernetes has a resource type called Service. When a service is created inside a namespace, you get DNS for it automatically. I've got that up here on the slide: it's podinfo.test.svc.cluster.local, and the frontend app that I've got listed here can address podinfo directly. When it does that, the DNS response from CoreDNS returns a cluster IP. That cluster IP gets rewritten through the magic of kube-proxy, and the request ends up directly at the pod. This is great. It works out of the box; Kubernetes is fantastic at service discovery. But once we add another cluster, there's a little bit of a problem we run into. Very specifically, you can't share resources between two Kubernetes clusters out of the box. How do we make the podinfo service visible and addressable from the frontend app if podinfo is in the east cluster and frontend is in the west cluster? Well, very, very, very naively, here's your answer.
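The in-cluster discovery path just described can be poked at directly. This is a small sketch assuming a `podinfo` Service already exists in the `test` namespace, as in the demo; the `busybox` debug pod is an assumption for illustration.

```shell
# The Service and its ClusterIP, as created by Kubernetes.
kubectl --context west -n test get svc podinfo

# From inside the cluster, the automatic DNS name resolves to that
# ClusterIP; kube-proxy then rewrites it to a pod IP on the wire.
kubectl --context west -n test run dns-test --rm -it --restart=Never \
  --image=busybox -- nslookup podinfo.test.svc.cluster.local
```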
kubectl --context east, get the podinfo YAML, and pipe it into the west cluster. That's pretty much all you really need to do. Now, there are a couple of downsides to this, and we're going to go into why we've installed a component to do this instead, but that copies the podinfo service over from the east cluster to the west cluster, and now there's DNS that frontend can address. But as I'm sure some of you have already thought through, one of the biggest problems is that it isn't going to get updates. If podinfo was changed, or if it was deleted, you wouldn't get those notifications on west. And so in Linkerd, we've introduced a controller. Kubernetes has a controller pattern: deployments are managed through controllers, services are managed through controllers, and those controllers basically watch the API server for resources to be updated and then do something. The Linkerd service mirror watches the east cluster's API server, and when it sees podinfo created, deleted, or updated, it goes and does the same thing on the west cluster. That's pretty much all it does. The one important thing to point out here, though, is that it adds the east cluster's name as a suffix to podinfo so that you can address it directly. Great. So all we've done is tackle service discovery. We're going to go into a little bit about how the cross-cluster access works next. But before we do that, Daniel, why don't you give us a little demo of what it takes to install the service mirror with Linkerd on a cluster? Let's now look at installing multicluster in very much the same way. This is the command we'll be running here. I won't be installing the gateway; you'll notice here --gateway=false, because we've already set up Ambassador and we're going to be using Ambassador as our gateway. With service discovery set up, we can now address services in the east cluster. But how do we actually get packets to go over there?
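Both the naive one-shot copy and the multicluster install described above can be sketched like this. The flag shape follows the demo's narration for the Linkerd 2.8-era CLI and may differ in later versions.

```shell
# The naive approach: export the Service from east and apply it into
# west. Works once, but gets no updates if podinfo changes or is deleted.
kubectl --context east -n test get svc podinfo -o yaml \
  | kubectl --context west apply -f -

# The service mirror does this continuously instead. Install the
# multicluster components without Linkerd's own gateway, since
# Ambassador is acting as the gateway in this setup.
linkerd --context east multicluster install --gateway=false \
  | kubectl --context east apply -f -
linkerd --context west multicluster install --gateway=false \
  | kubectl --context west apply -f -
```

In practice the exported YAML also carries fields like `clusterIP` that you would need to strip before applying, which is one more reason the controller approach beats the naive copy.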
To figure that out, let's talk a little bit about a resource type in Kubernetes that is kind of under the covers; some folks don't spend very much time chatting about it. That is the Endpoints resource. The Endpoints resource in Kubernetes looks like the YAML I've got up here: really just the pod IP addresses for a service. The endpoints controller watches services, takes the pod selector from those services, goes and looks up the pods, and adds their IP addresses to an Endpoints object. Just like our service mirror is syncing the services across, we could stop here and sync the Endpoints resource across the clusters, attach it to our mirrored service, and let everything go through. Unfortunately, that presents two problems. Problem number one is that pod IP addresses would need to be routable between cluster west and cluster east. If you imagine a multicloud scenario where you're running Kubernetes clusters in two different managed providers, that would be possible, but pretty difficult. The overlay configuration there would not be something I would want to do. Perhaps the bigger problem, though, interestingly enough, is bandwidth. The Endpoints object here has all of the IP addresses for the podinfo service in it. So anytime those are updated, they'll get synced across from the east cluster to the west cluster. Not only is that going to chew up a bunch of bandwidth, but it will also have some issues around keeping everything up to date. Imagine a pod IP going away in the east cluster and not getting updated in the west cluster: you could potentially have some lost packets. So what's the solution to this problem? Well, if you've been waiting, the solution is actually Ambassador. Why not use the same Ambassador that you've already got managing your ingress for the multicluster communication? Ambassador already has a public IP address that's routable from the west cluster.
There's no need to pay for an extra load balancer, and it can manage the traffic for you as part of its production setup already. So what do we need to do to get that all wired up? Well, all we need to do is tell our service mirror that instead of moving all of the endpoints from the east cluster over to the west, it should just grab the public IP address of the Ambassador load balancer and stick that into the endpoints. One thing to note here is that when the service mirror syncs the podinfo service from the east cluster to the west cluster, it also removes the pod selector, to make sure the built-in Kubernetes endpoints controller doesn't go and modify any endpoints for us. Pretty cool, right? That's it. Now all of our traffic from frontend can get correctly addressed to podinfo-east, and it will be forwarded by Kubernetes over to Ambassador, which will finally pass it on to the podinfo pods in the east cluster. I'd like to add one quick note here around the routing, because all of this traffic is going over the greater internet: security is pretty important. Linkerd provides mTLS out of the box for both sides. Because there are proxies on the frontend side in the west cluster and in front of Ambassador in the east cluster, that communication across the internet is encrypted. And because it's mTLS (remember the mutual part), it's also authenticated. So Ambassador only allows traffic from the west cluster, or more specifically traffic signed with the same trust root, into the cluster to get to podinfo. Your traffic is secure and access is also secured, so you don't need to worry about the security there. That said, security is an amazingly deep, deep, deep dive, and I've added a link to this slide so that you can go and look into all of the nitty-gritty details if that's something you're interested in. Fantastic.
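To visualize what the service mirror produces, here is an illustrative sketch of the mirrored service's Endpoints object: one gateway IP instead of a list of pod IPs. The IP address and port below are made-up placeholder values, not output from the demo.

```shell
# Illustrative only: roughly what
#   kubectl --context west -n test get endpoints podinfo-east -o yaml
# would show after mirroring. Note there is no pod selector involved;
# the single address is the east cluster's Ambassador load balancer.
cat <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: podinfo-east
  namespace: test
subsets:
  - addresses:
      - ip: 203.0.113.10   # placeholder: Ambassador LB IP in east
    ports:
      - port: 443          # placeholder: gateway port
EOF
```

Because only this one address ever needs syncing, the bandwidth and staleness problems of mirroring full pod IP lists go away.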
Daniel, can you show us what it takes to get Ambassador set up as a gateway for all of this traffic in a multicluster solution? It looks good. This is probably the most interesting patch: here we're patching the gateway to open ports on Ambassador to allow Linkerd to talk through to the mirrored services on either side of the cluster. That all looks good. You can see now lots of extra things are spinning up there, and the timing is nudging the window a little, so the display is slightly strange. We'll now check that the rollout is working successfully. Here's a nice command-line cheat we can use to do that. Everything is successfully rolled out, looking great. And we can also check the multicluster aspect from the Linkerd CLI as well. I'll now get all the Linkerd multicluster config. Everything is looking good; everything's up and running here. And now I can link the two clusters with this command here. You can see I use the linkerd command here from the east context, linking in the west, and then kubectl apply into the Docker Desktop cluster. That all looks good. Let's now run our checks once again: context west, check multicluster. You'll notice here we can now see the cluster east gateway, ambassador/ambassador. That's our GKE cluster, now configured. Linkerd running in Docker Desktop in the west cluster can now access the gateway of the GKE cluster. Now, to make the display more interesting, I'll stop watching all namespaces and switch to the test namespace as we set up some services to play around with the service mirroring. I still can't believe how easy it is to get Ambassador set up so that it works as a gateway for multicluster. It's pretty awesome. Thank you, Daniel. Now that we've got the gateway set up and we've figured out how to do service discovery across clusters, we have shown how to get packets across securely, so now let's talk a little bit about request routing and why that matters.
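The linking step just described can be sketched as follows. The command shape follows the Linkerd 2.8-era multicluster CLI, so double-check against the docs for your version.

```shell
# Generate link credentials from the east cluster and apply them into
# west, so west's service mirror can watch east and reach its gateway.
linkerd --context east multicluster link --cluster-name east \
  | kubectl --context west apply -f -

# Verify from the west side: the east gateway should show as alive.
linkerd --context west multicluster check
linkerd --context west multicluster gateways
```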
Being able to address services in other clusters is cool. As you'll see here, we're directly addressing podinfo-east. But you know what would be even cooler? Being able to shift the traffic over without needing to change code or even restart anything. Isn't that the cloud native dream, after all? So far, we have had a bit of a theme: we have built on top of Kubernetes primitives and only Kubernetes primitives. The reason for that is that the Kubernetes ecosystem can compose with and build on top of those very easily. We've used services, we've used endpoints, we've used load balancers, and we've used ingress controllers. All of those are standard, off-the-shelf components. This is perhaps the first non-standard component, though it's part of the Service Mesh Interface (SMI) spec, and what it does is allow us to split traffic. And because all of our service mirroring is using services, all it does is split traffic between two services. So here in our spec, we have said that any traffic addressed to podinfo should be split 50% to the podinfo service on the west cluster (because we're going to be talking about the west cluster here) and 50% to the podinfo-east service on the east cluster. All you need to do is modify this Kubernetes resource, and all of the traffic coming out of the frontend application addressed to podinfo will get split 50/50. And this is how it ends up looking. Being able to modify the eventual destination of the traffic to podinfo really opens up a bunch of interesting possibilities. And because this is built on top of Kubernetes primitives, it opens up the world outside of service meshes and API gateways. Daniel, do you think you can install the frontend and podinfo services on your clusters in west and east and show us how the traffic splitting actually works in the real world? What I'm doing is I'm just watching in the test namespace.
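The TrafficSplit described above looks roughly like this; service names follow the demo, and the weights split traffic 50/50 between the local and mirrored services.

```shell
# SMI TrafficSplit: traffic addressed to "podinfo" is split between
# the local west service and the mirrored podinfo-east service.
cat <<'EOF' | kubectl --context west -n test apply -f -
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
spec:
  service: podinfo           # the apex service clients address
  backends:
    - service: podinfo       # 50% stays on the local west cluster
      weight: 500m
    - service: podinfo-east  # 50% goes via the gateway to east
      weight: 500m
EOF
```

Shifting traffic later is just a matter of editing the weights and re-applying; no application code changes or restarts are needed.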
This is the east cluster, the GKE cluster, at the top here, and at the bottom I'm again looking for services in the test namespace, but this is in my Docker Desktop environment. Now I'll run a simple script from the Buoyant team that goes to the Linkerd GitHub and downloads some example services, some of them based on podinfo from Stefan Prodan, and some other simple apps as well, which we can use to demonstrate the service mirroring between the Docker Desktop cluster and the remote GKE cluster. This is going to take a minute to install. With our deployment complete, we can now see that in both clusters we have the frontend service and the podinfo service: the GKE cluster at the top here and the Docker Desktop cluster at the bottom. Let's now expose a very simple mapping in the Ambassador gateway, just to map the prefix of slash. I'm exposing Docker Desktop on localhost. So localhost, at the root, is going to map to the frontend service running in the test namespace on port 8080. Let's do that now. Now, if I pop along to the browser and go to localhost, voila, we are now looking at the frontend and podinfo services running in my Docker Desktop, west, local cluster. All looks good. Now let's do some service mirroring. I'll clear my terminal and copy and paste in this command here: I'll use linkerd multicluster export-service to add some annotations to my services. You can see I'm configuring the gateway name as ambassador and the gateway namespace as ambassador, thus overriding the defaults of the Linkerd gateway, but that is nice and straightforward, hopefully. And we should see, in my Docker Desktop cluster in the bottom right here, we've now got a service named podinfo-east. This is effectively a mirror of the podinfo running remotely in the east cluster in GKE. If we want more info on that, I can run a command here.
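The Ambassador mapping mentioned above can be sketched like this, using the Ambassador 1.x `Mapping` resource; the resource name is an assumption, while the prefix, service, namespace, and port follow the demo.

```shell
# Ambassador Mapping: route requests for "/" on the west cluster's
# ingress to the frontend service in the test namespace on port 8080.
cat <<'EOF' | kubectl --context west apply -f -
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: frontend
  namespace: test
spec:
  prefix: /
  service: frontend.test:8080
EOF
```

After this, hitting http://localhost/ on Docker Desktop lands on the frontend service, which in turn calls podinfo.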
I can basically get the endpoints for the podinfo-east service running in my Docker Desktop cluster, and I can also get the Ambassador external load balancer IP address as well and compare them. And there you go: you can see that the west podinfo-east service endpoint IP is equal to the gateway IP in our east cluster. So basically, the Docker Desktop podinfo-east service mirror is pointing at the gateway of the GKE cluster, the east cluster. Now, we're running some traffic generation in the background as part of the demo, so you can do all the cool, regular stuff you can do with Linkerd. You can get your stats to see what kind of latency and requests are going through. You can also do a tap and look at the actual traffic, which is quite nice. You can see here also, and I'll just pause it a second, TLS is true. Although Ambassador is not terminating TLS in this toy example, because we've got our shared trust anchor, we are of course encrypting traffic between the clusters courtesy of Linkerd, which is great too. The Linkerd command line is fantastic, as is the dashboard, for debugging and poking around the services; I frequently find myself in there making sure I've got my traffic config all set up correctly. Now, I could simply expose podinfo-east via my west gateway, my west Ambassador, but to make it more interesting, we're going to be using the SMI, the Service Mesh Interface, config to do some traffic splits. So, as you can see here, I'm going to apply, in my Docker Desktop cluster, a traffic split on podinfo across the podinfo that's running in the Docker Desktop west cluster, and I'm also going to split 50/50 to the podinfo-east service, which mirrors to the GKE-based podinfo service in the east. Let's apply this now. All looks good. If I pop back to my browser and do a refresh, oh, we're already seeing east. Perfect. And there we go: we see some west, and we obviously see some east. Looks good.
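The comparison and observability commands from this part of the demo can be sketched as follows; the `ambassador` namespace/service names match the demo's gateway config, and the jsonpath expressions are one way to pull out just the IPs.

```shell
# The mirrored service's endpoint IP in west...
kubectl --context west -n test get endpoints podinfo-east \
  -o jsonpath='{.subsets[0].addresses[0].ip}'; echo

# ...should equal Ambassador's external load balancer IP in east.
kubectl --context east -n ambassador get svc ambassador \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo

# The usual Linkerd observability tools work across the mirror too:
linkerd --context west -n test stat deploy
linkerd --context west -n test tap deploy/frontend
```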
How cool is that? We've now gone through getting multicluster set up the easy way, and that's pretty much it. We've shown what you need to do to get service discovery so that you can see services from one cluster in another. We've outlined what it takes to get cross-cluster access and make it secure. And we've shown everything you can use to do request routing. So the most important step of all is right now: profit, where we sit back in our chairs and talk about all of the amazing things that we've enabled with this. So what have we enabled? Well, one of the most interesting use cases for me is cluster isolation. Imagine that you're managing credit card numbers, and that's a compliance environment that you need to have audited. Instead of having one big Kubernetes cluster that has everything in it, you can now go and stick that workload into a very small cluster and make your auditor love you, because they have a smaller surface area to check. You've still got a less-restricted cluster that you can open up to the rest of the company, and you don't need to go and put some crazy policies in place. Another use case for this is disaster recovery. Imagine a very important service failing. The frontend can now get automatically redirected to your backup cluster. Or, perhaps more interesting, the backup cluster can be tested with real traffic during normal operations. So instead of having to hope and pray that your backup cluster is going to work, you can go and test your disaster recovery during the day, at any time that you want. Perhaps the most exciting use case for me, though, is development. I know we're virtual, so I can't see anybody's hands raised, but I'm going to raise my hand: how many of us have worked on microservices and had them get to the point where they don't fit on your laptop anymore? I sure have. In fact, I hate it even more when it does fit, because it slows my laptop down and it's a pain in the butt to use. You can use this multicluster setup to have a shared cluster now.
So any services you're not actively working on, just go use them in a remote shared cluster, and you can work on your service locally. And that's pretty much all you need. So that's everything that we've got for multicluster. Just a couple more slides here to talk about. First off, I'd like to introduce the Linkerd Community Anchor program. We really like to get the community involved, and especially if you're using Ambassador and Linkerd, we would love to know about it. So jump onto that link down at the bottom of the slide if you're interested in becoming a cloud native expert and getting a little bit of help from us to tell your story. Also, join both of our communities. I've got links to our GitHub and Slack accounts. Daniel and I are both Slack junkies and are sitting around waiting for your questions with bated breath, so please dive into those. And then finally, here are links to everything that we've gone through so far today. I would really love to see everyone here go through the tutorials and deep dives and get up and running with multicluster on your own clusters. Thank you so much, and have a great rest of your KubeCon.