Welcome, everybody, to this live demonstration of Kuma. I'm very excited about this session, because we're going to see the results of many months of work with the community and with our users on building the best multi-cluster and multi-cloud support that any service mesh has ever seen. My name is Marco Palladino and I am the CTO and co-founder of Kong. Kong is the organization that first created Kuma and then decided to donate it to the CNCF, where it is available today as a sandbox project, which means that Kuma can be used with the same openness, the same neutrality, and the same governance as any other CNCF project. When we first looked at creating a service mesh integration for our enterprise customers at Kong, we didn't want to build Kuma. We really wanted to leverage the existing service meshes available in the industry, so that we could package one of them into a solution we could ship to our users. But none of the service meshes out there worked for us, and let me tell you why. We wanted something that was easy to use, simple to deploy, and simple to scale, but for many users, service mesh has been a very complicated technology to deploy and to use at scale in production. We needed something that would support not just Kubernetes deployments, but also virtual machines and other containerized environments that were not Kubernetes, for example AWS Fargate or ECS, and none of the service meshes out there allowed us to do that. We needed something that would let us support multiple zones in a distributed service mesh across multiple clouds, multiple clusters, and a hybrid of VMs and Kubernetes, and no service mesh was allowing us to do that. So we decided to build Kuma. We built Kuma and we have donated it, making it the first Envoy-based service mesh ever donated to and accepted into the foundation. Kuma is built on top of Envoy.
We're strong believers in Envoy as the networking technology for our data plane proxies. And because of the very unique set of features that Kuma has built over the past years, we are seeing incredible growth in community adoption as well as enterprise adoption, where Kuma is being used today to support a service mesh for the entire organization in mission-critical use cases. Kuma has been designed for the enterprise architect. We want to make sure that as the application teams build more and more services and applications, they do not have to manage the network that comes as part of the infrastructure. So today Kuma is being used by central teams and architecture teams to provide under-the-hood connectivity via a service mesh across any environment, any cloud, any architecture that the application teams are using for their applications. Kuma has been used for many different use cases, but we can sum them up in the following ones. Kuma has been used to enable service connectivity across all the services we're building, to discover those services and make sure that the traffic among them is reliable. It has been used to enable a zero trust security model in one click, in order to enhance security across all the workloads and to rotate the data plane proxy certificates in an automated way. And of course, the more services and the more traffic we have, the bigger the requirement for strong observability. So Kuma is being used today to capture traces, logs, and metrics, and either visualize them in Zipkin, Jaeger, Splunk, or Logstash, or via the out-of-the-box Grafana dashboards that the project provides. From a 10,000-foot standpoint, Kuma is a control plane that implements the Envoy xDS APIs. It can run on Kubernetes and VMs, and it can run in a single-zone and multi-zone capacity.
It is built for the enterprise architect who must support the entire organization, and because it's part of the CNCF, it is a vendor-neutral technology. Kuma provides a very unique set of features that we built because we couldn't find them elsewhere. Kuma supports multiple virtual meshes, providing multi-tenancy for each team or each application that we want to support, in order to reduce team coordination and improve the compartmentalization of our service meshes. So we can deploy Kuma once and create as many meshes as we want, as opposed to deploying one service mesh per application or per team. It is universal: it runs on Kubernetes and VMs. It supports custom attributes that we can use in our policies, for example to keep traffic within a country, and it supports the best built-in multi-zone connectivity, which we're going to see live in a demo today. Service mesh in general is important because it centralizes how we manage connectivity, which is going to be one of the most important things we need to manage as we get more distributed and more decoupled. It also makes the application teams more efficient, because they don't have to reinvent and rebuild all the things that the service mesh provides. And because Kuma is built on top of Envoy, and we are strong believers in Envoy, we can leverage all the Envoy functionality inside of Kuma. We can implement zero trust security in one click by using the mutual TLS and traffic permission policies that Kuma offers out of the box. And, like I said, we can integrate Kuma with whatever observability tooling we may be using today, as well as use what Kuma provides out of the box. Now, of course, one of the biggest reasons for using a service mesh is to implement blue-green deployments, canary releases, and traffic shifting across different data centers, and Kuma can do all of it with the routing capabilities it provides.
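To illustrate the multi-mesh point above, this is a sketch of how two independent virtual meshes could be created on the same control plane on Kubernetes; the mesh names here are hypothetical, not from the demo:

```yaml
# Two isolated virtual meshes served by one Kuma deployment.
# Workloads joined to one mesh cannot see workloads in the other.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-payments
---
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-orders
```

Each team then gets its own policies, its own certificate authority, and its own isolation boundary without a separate control plane per team.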
And in the demo today, we're going to see it live across multiple clouds and multiple regions, across VMs and containers simultaneously. When it comes to deploying Kuma, there are two different ways to run the project: in a single-zone, standalone mode, or in a multi-zone mode. Multi-zone is really what makes Kuma so interesting in an enterprise organization. When running a service mesh across multiple zones, multiple platforms, multiple clouds, and multiple clusters, there are two main challenges to solve: propagating the service mesh policies across each zone, and enabling cross-zone connectivity from one zone to another. Kuma automates both of these problems by providing a separation between a global control plane and remote control planes to automatically synchronize the policies. The global control plane is the entry point for setting all the service mesh resources, and those resources are automatically propagated to the remote ones, while the built-in service discovery and the Kuma ingress that comes out of the box enable cross-zone connectivity from one zone to another. Even if one zone is VMs and another zone is Kubernetes, or one zone is in one cloud region and another zone is a physical data center, Kuma makes no assumptions as to where we're running the service mesh, with the goal of supporting every workload in the organization. And of course, it provides a GUI, a CLI, and an API out of the box, and I'm very excited about all the things we're building to improve how users interact with the service mesh. In Kuma, we deploy a service mesh and then we can apply policies on top of our workloads: policies like traffic route, mutual TLS, traffic permissions, health checks, circuit breakers, and so on and so forth.
Kuma has also been used to accelerate the transition to Kubernetes, by supporting virtual-machine-based zones and Kubernetes zones simultaneously and then determining, with the traffic routing rules, how much traffic should go to the VM-based version of a service as opposed to the container-based version, including environments like AWS Fargate and ECS, which other service meshes typically do not support. This is a function of the universality we've built into the project. And of course, it integrates with existing gateway technologies. At the edge, a service mesh is not applicable if we want our APIs to be consumed by a client outside the organization, because we cannot force a sidecar deployment onto them, and we don't want their sidecar to talk to our control plane. So we can integrate with gateways, which become the ingress and the egress of the service mesh, and we have that native, full-stack, end-to-end integration built into the product. In the product, you can see gateway data planes that can be assigned either to support edge requirements, with an ecosystem of partners or mobile apps, or inside the organization to enable different teams to talk to each other via an abstraction layer provided by a gateway. So let's not spend any more time talking about all the things that Kuma can do; let's watch them live. I'm going to pull up my infrastructure right now so I can show you what Kuma can do. First and foremost, I am running Kuma right now in a multi-zone deployment that spans both virtual machines on EC2 and Kubernetes clusters on GKE, on GCP. We're going to see here that on GKE we have Kuma East and Kuma West, my East and West zones, as well as a global zone for our global control plane.
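A gateway data plane of the kind just mentioned can be declared, in universal (VM) mode, with a `Dataplane` resource whose networking section is marked as a gateway. This is a minimal sketch; the name, address, and service tag are illustrative assumptions, and the exact schema may differ between Kuma versions:

```yaml
# Hypothetical gateway data plane definition (universal mode).
# Traffic enters the mesh through this proxy; no inbound
# sidecar interception is applied to it.
type: Dataplane
mesh: default
name: edge-gateway
networking:
  address: 192.168.0.1
  gateway:
    tags:
      kuma.io/service: edge-gateway
```

The gateway process itself (for example Kong) then runs alongside this data plane and fronts the services inside the mesh.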
And then we also have the remote control plane and the ingress deployed on a virtual machine on EC2. Now, in order to show you this demo, I've built a very simple application: a front end that allows us to increment a counter stored in Redis. I'm loading my application from EC2, and as you can see, if I press the increment button it will increment a counter in a specific zone. This is the zone where Redis lives, so different Redis instances in different zones may end up with different counters, depending on how often we increment them. I can generate some traffic here if I want, but most importantly, let's go to the global control plane and see what we have running live in our mesh. First and foremost, if we explore the namespaces, we see that there is a kuma-system namespace, and if I look into the pods of this namespace, we see that there is one Kuma control plane pod and a service that we can access. So I'm going to port-forward the Kuma control plane service from the kuma-system namespace so we can access it. Now, the GUI that I'm going to show you is built on top of the same RESTful API that you can integrate with your own automation. This is the RESTful API: if I go to /meshes, I can explore the meshes and see the resources of each mesh, and if I go to /gui, I can see the GUI, which is built on top of the API I just showed you. This GUI is showing us all the resources that we have in the mesh. We have three different zones: one on AWS on virtual machines, one on Kubernetes East, and one on Kubernetes West. And then we have a series of data planes running, where we can see that there is a Redis running on AWS, the one I just showed you, as well as a demo app running on AWS on virtual machines.
But we also have the demo app running on GKE East, and we have Redis running on GKE East and West, and so on. We also have the three ingresses that enable cross-zone communication out of the box, one for each zone: AWS, GKE East, and GKE West. When it comes to Kuma itself, we can go to kuma.io/install and see all the different installation methods that we support. We are about to release Kuma 1.0, and we're also going to introduce support for Windows, which is now being released in an alpha version in Envoy proxy. All right, so let's go ahead and start using the mesh. Right now the mesh is not really doing anything. Besides having one virtual mesh called default, everything is disabled and there are no resources; all I have is the data planes registered to it. So let's go ahead and make this service mesh a little more interesting. Let's, for example, enable zero trust security by enabling the mutual TLS policy. When it comes to mutual TLS, there are different certificate authorities we can choose from: we can use the built-in certificate authority, which will automatically generate a CA for us, or we can provide our own root certificate and key. Applying these resources is very simple. On Kubernetes, we can use kubectl to update our default mesh to enable mutual TLS with a built-in backend that rotates the certificates of our data plane proxies every day. If we were running this on virtual machines, we could use a very similar declarative YAML config, but instead of kubectl we would use kumactl. So this is truly a universal service mesh that can support all kinds of environments. But because we're running on Kubernetes, I'm going to use this policy to change the state of my mesh.
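A sketch of what that policy looks like on Kubernetes follows; the backend name is an assumption, the one-day rotation matches the description above, and exact field names may vary between Kuma versions:

```yaml
# Enable mutual TLS on the default mesh using the built-in CA.
# Data plane proxy certificates are rotated every day.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
        dpCert:
          rotation:
            expiration: 1d
```

On VMs the same spec would be applied with kumactl instead of kubectl, with `type: Mesh` and `name: default` as top-level fields instead of the Kubernetes metadata.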
So if I go back to my editor, I'm going to run the command that changes the state of our default mesh by enabling mutual TLS. If I do this, mutual TLS will now be enabled for every service in this mesh, and by default, without a traffic permission, which is another policy we need to add, our traffic will stop working. And I'll show you this. If I open a new terminal and apply my resource on the global control plane on Kubernetes, we can see that the traffic stops working. That is because we have enabled zero trust security, and we must have an explicit traffic permission to determine which services can consume which other services. The traffic permission is another resource that Kuma provides. It allows us to determine which sources of traffic can consume which destinations. As you can see here, we can use attributes that are associated with every workload in Kuma. These attributes are ones we can customize; some of them are also auto-generated, and we can find them from the GUI, the CLI, or the API. So anyway, by allowing every service to consume every other service, we are effectively re-enabling all the traffic to flow again. Now, by default, whenever we enable this traffic, the traffic will flow across every zone that we support, which means the Kubernetes zones as well as the Amazon zone. We can limit and change that behavior by using a traffic routing rule. But if I do this now, we can see that not only is the traffic resumed, but it is also flowing from one zone to another.
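The allow-everything traffic permission applied here can be sketched like this; the resource name is an assumption, and the wildcard service tag matches every workload in the mesh:

```yaml
# Allow every service in the default mesh to consume every
# other service. Narrower rules would replace the wildcards
# with specific kuma.io/service values or custom attributes.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
```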
What you are looking at right now is a multi-zone deployment running on multiple Kubernetes clusters and on virtual machines, on different clouds in different world regions, with zero trust security enabled, with traffic permission ACLs enabled, and with the traffic flowing from one zone to another out of the box, automatically discovered and automatically secured. Of course, the counter is going to be different depending on which Redis instance we're hitting in the specific zone visualized down here. But now let's say that we want to change this and force traffic to go to specific zones. That is very easy to do with the traffic route policy, which allows us to determine how we want the traffic to flow from one zone to another. So I'm going to pull up my editor again, and we can use the attributes again: we can say that all the traffic generated by the demo app and going to Redis should go to the Redis in a specific zone. So let's say that we want all the traffic to go to GKE East, for example. We can create this traffic route and apply it on Kubernetes, but I'm going to do this next to my app so we can see what happens. Like I've done before, I'm applying this resource on the global control plane. The global control plane is now automatically synchronizing this resource across all the remotes so that the change can take effect. And if I do this, we can see how the traffic is now forced to GKE East. But let's say that I want a little bit of Amazon and a little bit of GCP. That's great: I can go back to my configuration and split the weight in the following way.
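A weighted traffic route of this shape can be sketched as follows; the resource name and the zone tag values (`aws`, `gke-east`) are assumptions, and the exact schema of the routing conf has changed across Kuma versions:

```yaml
# Split demo-app -> redis traffic roughly 50/50 between the
# AWS zone and the GKE East zone, using zone tags as selectors.
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: redis-split
spec:
  sources:
    - match:
        kuma.io/service: demo-app
  destinations:
    - match:
        kuma.io/service: redis
  conf:
    - weight: 50
      destination:
        kuma.io/service: redis
        kuma.io/zone: aws
    - weight: 50
      destination:
        kuma.io/service: redis
        kuma.io/zone: gke-east
```

Setting one weight to 100 and removing the other entry would force all traffic to a single zone, as in the first step of the demo.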
And I can say that some of the traffic goes to AWS and some goes to GCP. So right now we can see that approximately half of it will go to AWS and half to GKE East. I can update my resource and pull up our application again, and if I do this, we're going to see the traffic going a little bit to VMs, a little bit to containers, a little bit to one cloud, and a little bit to another cloud. It's that easy to run a distributed multi-cloud, multi-cluster service mesh with Kuma. Now, obviously this is a very simple demo that demonstrates zero trust security and the traffic permissions, but there's a lot more to it. You can explore the policies that we have, you can use any filter that Envoy provides with the proxy template policy, and you can see the metrics, the traces, and so on and so forth. This is how easy it is to use the project. As one of the most important announcements I would like to make: we have released Kuma 1.0, which brings so much more to the project in performance improvements and improvements to this multi-zone deployment. It is available on the website today: on kuma.io you can download it, use it, and push it to production. So we have seen today what Kuma is, what Kuma can do, why Kuma is very different from other service meshes, how simple it is to run, and how distributed it can be in a multi-zone capacity. And we've seen a live demo of probably one of the most complex service mesh deployments anybody could run: multi-cloud, multi-cluster, hybrid VMs and containers. So thank you so much. You can check out Kuma at kuma.io/install, and you can chat with the community on Slack by going to kuma.io/community. Thank you so much, and I hope you've enjoyed this talk.