Welcome to ServiceMeshCon. Today we're going to see how to run Kuma in a multi-cloud and multi-cluster way. My name is Marco Palladino. I am the CTO and co-founder of Kong.

But before we look into Kuma, let's ask ourselves why service mesh became such a popular pattern. We learned many years ago that our monolithic applications, once they grow, become very hard to scale and very hard to deploy in a reliable manner. They're hard to use, hard to deploy, hard to contribute to. Therefore, we decouple our monolithic applications into separate services: microservices. When we do that, we get speed, velocity, agility, and a lower time to market, because we can reuse some of these components when we create new applications. But on the other hand, we're also introducing more and more connectivity at the backbone of our modern systems. When connectivity is down, our applications are down. Connectivity glues together all the services that we're creating, and the more services we create, the more connectivity we generate.

Our application teams should not be in charge of managing that connectivity by themselves. When they do, they do it in a fragmented way, and it's not their job. They should instead get that connectivity from the infrastructure that we, the enterprise architects, provision for them across Kubernetes and virtual machines, across multiple clouds and multiple clusters.

There are different connectivity types that our application teams are going to ask us for. They want connectivity at the edge, when some of their applications must be consumed from outside the organization by an ecosystem of partners, developers, or mobile applications. There is going to be connectivity across our applications, when different teams expose APIs that another team can consume. And then there is going to be more and more connectivity inside of our applications.
Whenever our applications become microservice-oriented, all of the services that make up the final application communicate over connections, and those connections have to be secure, reliable, and observable. This is why at Kong we originally created Kuma: to simplify how this connectivity can be offered by the enterprise architecture team to the application teams, no matter if they run on Kubernetes or virtual machines, and no matter if they run in a standalone or distributed way.

We didn't stop there. We also donated Kuma to the CNCF. Therefore, Kuma is now available with the same neutrality and open governance as any other CNCF project in the foundation, like Envoy proxy, for example, which, by the way, Kuma uses. Today it is a sandbox project. Over the next few months, we're obviously going to be climbing the CNCF ladder into incubating and, one day, graduated. Because of the very unique set of features that Kuma provides, and because it is easy to use yet very powerful, Kuma is being used today by over 900 organizations in very important and mission-critical use cases.

From a very high-level standpoint, Kuma is a control plane for service mesh. It runs Envoy under the hood, but it doesn't require any Envoy expertise to use. Kuma is universal: it's a first-class citizen for both Kubernetes and VMs, and really for any other containerized environment. As a matter of fact, you could be running Kuma entirely on VMs with no Kubernetes dependency whatsoever. We created this universal support because we have a very clear understanding that we, as enterprise architects, must support both the teams that are very far along in their Kubernetes journey and the ones that are behind that curve. So we want a service mesh that can run across the board on both VMs and Kubernetes. As a matter of fact, we sometimes want that to be hybrid as well, and we're going to see this in the demo.
We can run in a multi-zone or single-zone capacity. We can run in a multi-cloud, multi-cluster capacity, and we can easily scale, upgrade, and operate this service mesh. Simplicity is a feature, not a nice-to-have, and we built Kuma with that in mind.

Kuma comes with a lot out of the box. There is a GUI out of the box. There is an HTTP API out of the box. On Kubernetes, there are native Kubernetes resources; on VMs, there is declarative configuration. There is an HTTP API and a CLI that allow us to integrate Kuma with our CI/CD workflows. Kuma also comes with much more than that: observability charts, and very easy-to-use integrations with gateways and existing API management solutions. It supports any containerized environment, not just Kubernetes and VMs, but also environments like AWS Fargate or ECS. And enabling features like zero-trust security, fault injection, and traffic routing is one click away: we have native policies that allow us to do that.

Kuma can be deployed in a multi-zone capacity. A zone can be a cloud, a cluster, VMs, or containers. We can create one deployment of Kuma that spans all the environments we want to support, including private data centers. And on top of that one deployment of Kuma, we can create as many virtual meshes as we want, to compartmentalize how our teams and applications set up their mesh policies. We obviously want to give the application teams a degree of freedom to change the behavior of their applications without us, the architects, always being the bottleneck of every request. Yet we want the underlying infrastructure to be provisioned for them, so they don't have to worry about it. We have automatic policy storage and propagation with the concept of a global control plane, which is the primary control plane, and remote control planes, which are the secondary control planes.
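As a rough sketch of what one of these native policies looks like, here is a FaultInjection policy in Kuma's Kubernetes CRD form (the schema follows Kuma 1.x conventions; the service tag values are hypothetical examples, not from the demo):

```yaml
apiVersion: kuma.io/v1alpha1
kind: FaultInjection
mesh: default
metadata:
  name: delay-backend
spec:
  sources:
  - match:
      kuma.io/service: frontend_default_svc_80   # hypothetical source service
  destinations:
  - match:
      kuma.io/service: backend_default_svc_80    # hypothetical destination service
      kuma.io/protocol: http
  conf:
    delay:
      percentage: 50    # inject a delay into half of the requests
      value: 100ms
```

On VMs, the equivalent policy is written in Kuma's universal declarative format (`type`/`name`/`mesh` fields at the top level) and applied with `kumactl`.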
All the policies we create can be created from the global control plane, and the global control plane will automatically reconcile and propagate these policies across all the zones that we want to support. Not only that: we can also enable connectivity from one zone to another in a very automated way. Therefore, if a service wants to consume another service, neither of these services has to know in what zone the other is running or how it is being deployed. As a matter of fact, we can also mix and match hybrid and Kubernetes environments. The way we deploy it can be very, very complex, yet it is very easy to use and simple to manage.

So in the demo, I want to show you how we can run Kuma in a multi-cloud and multi-cluster capacity on GCP and AWS: three different Kubernetes clusters on GCP and a VM-based zone on AWS. This is a demo about containers and VMs, hybrid, mixed together within the same mesh. So let me pull up my environment. I have my AWS EC2 zone with a remote control plane and my demo application running on VMs, and I have my Kubernetes clusters, the global one, the East, and the West, which also have the demo application running. Every time I increment a counter in Redis from the demo app, depending on which zone we're hitting, we're incrementing a different Redis instance, so we're going to see different clusters respond.

Now, if I connect to the global control plane, I can also expose the GUI of Kuma to understand the status of the current mesh and fetch the status of my zones and my workloads. So I'm port-forwarding the control plane in the kuma-system namespace. Once I do that, we can go ahead and explore. First and foremost, there is an API, an API that we can use to automate everything that the GUI can do. But if we go to /gui, we can also see it. And we can see that this is running on three different zones: AWS EC2, GKE East, and GKE West.
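The global/remote topology from the demo is typically set up with `kumactl`'s installer. A minimal sketch, assuming a Kuma 1.x CLI where the secondary mode is called `remote`; the zone name and the global sync address are placeholders, not values from the demo:

```shell
# On the global cluster: install the global (primary) control plane
kumactl install control-plane --mode=global | kubectl apply -f -

# On each zone cluster: install a remote (secondary) control plane,
# pointing it at the global control plane's sync address
kumactl install control-plane \
  --mode=remote \
  --zone=gke-east \
  --kds-global-address grpcs://<GLOBAL_SYNC_ADDRESS>:5685 | kubectl apply -f -

# Expose the GUI and HTTP API locally, as done in the demo
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681
# GUI:  http://localhost:5681/gui
# API:  http://localhost:5681/
```

The remote control planes register with the global one, which then reconciles and pushes policies down to every zone.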
And we do have a few services running inside of our systems, as well as all the data plane proxies that are connected to our service mesh, which is running in a distributed way. So right now I can load my... oops, that's probably the wrong address. This is the right one. There we go. I can load my demo application and increment my Redis counters.

Now, because mutual TLS is disabled in this current setup, we need to enable mutual TLS in order to allow for that cross-zone connectivity. So right now, the default mesh has mutual TLS disabled. Let's go ahead and enable mutual TLS in one click. Kuma will automatically generate the certificate authority for us, although we can provide our own, and it will automatically rotate the certificates for us. The entire certificate management is taken care of, no matter if you have one service or 10,000 services. So let's go ahead and create this zero-trust security policy in the global control plane. As soon as I create it, the global will propagate it to my other clusters. The certificates will be generated, and as soon as they are, we're going to see cross-zone connectivity from VMs to containers, across multiple clouds, multiple regions, multiple environments, hybrid. This is how easy it is to enable zero-trust security with a service mesh.

Of course, there is much more to it that I would like to show you, but there is not enough time. So if you want to get started with Kuma, first and foremost, you can install Kuma by going to kuma.io, as well as get in touch with our community. And please do not miss the talk today about a federal use case on how we can support 10K transactions per second with Kuma, given by a user of Kuma itself. So thank you so much, and enjoy the rest of your day.
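The one-click mutual TLS enablement from the demo corresponds to a small change on the Mesh resource applied at the global control plane. On Kubernetes it looks roughly like this (Kuma 1.x schema; the backend name `ca-1` is arbitrary):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin   # Kuma generates and rotates the CA; a "provided" backend lets you bring your own
```

Once applied, the global control plane propagates the change to every zone, certificates are issued to all data plane proxies, and traffic between services, including cross-zone traffic, is encrypted and authenticated.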