All right, so first and foremost, it was great to see some real people, real faces at a real event. I think it's been, what, two and a half years since the last event, so it's just amazing to be here. I know it's virtual and it's hybrid, but it's good to be here. My name is Marco Palladino. I am the CTO and co-founder of Kong. Today, I want to chat about a couple of things. First, I want to talk about the efforts that we've been putting into Kuma, a CNCF sandbox project, and then some features that we've released for Istio. In my role at Kong, I spend lots of time trying to educate organizations on what service mesh is and why they need it. I think the educational component is something that has been missing, generally speaking, and some folks really don't understand why we need a service mesh. The reality is, service mesh is inevitable. Whenever an application team builds anything that's decoupled and decentralized and goes over the network, well, the network, by default, is not secure, not reliable, and not observable, and we have to build something that gives us all of these properties on top of the network. So there are only two options: either we build it into every single service that the teams are creating, or we provide all of these capabilities at the infrastructure level. The question is how we build it, but there's no question about having to build it, right? And this is one of the biggest things that I'm trying to explain to the technology leaders in the organizations that I work with: it has to be built no matter what. And the fun part about service mesh, and about our decentralization journey, is that we're just getting started. This is just the beginning. We're going to be creating many more services in the future, many more applications that are going to need something like service mesh in order to function properly.
This is the time for us to think about what service mesh technologies we want to adopt, to rethink the decisions we made before, and to make sure that we have the right infrastructure in place to support our teams. No matter if they're working on containers, Kubernetes, or virtual machines, on this cloud or that cloud, this infrastructure component has to be ubiquitous across all the platforms that our teams are using. So in order to understand the work that we've been doing with Kuma, we really have to take a step back and look at the evolution of service mesh throughout the years. Service mesh is a new pattern; it came out in 2017. The first service mesh projects back in those days were quite experimental. Some of them were JVM-based. Some of them had architectural decisions that made the product very hard to deploy, to scale, and to upgrade. Those were the early days, as we know, of service mesh. But then service meshes figured out how to work very well in a single-application environment, which is great if you're a team building an app: you need a service mesh, you deploy it in your cluster, and you have it. In my capacity at Kong, I work with platform teams that are going to be providing this technology to every other team, without the application teams having to worry about doing it by themselves. We want to make sure that we can accelerate the efficiency and the performance of the application teams by providing a service mesh to them on demand, whenever they need it. And this is why the evolution of service mesh really goes above and beyond the single application and really spans across all the applications that the teams are creating. Some of these applications are going to be created on Kubernetes. Some of these applications are going to be still running on virtual machines. And some of these applications are going to be running on a single cloud or across multiple clouds.
So when I work with the platform architects that are going to be providing this technology to the teams, I wonder how we are going to be doing this across every team so that we don't have to reinvent the wheel every time. And this really is the reason why we created Kuma. Kuma is a service mesh that is a CNCF sandbox project; we're going to be starting the incubation process quite soon. It's built on top of Envoy proxy. Envoy is the data plane proxy technology that, at the end of the day, is going to be managing all the requests across all the services that we have. But then on top of this we've built our global and zone control plane separation, which allows us to essentially deploy a global primary control plane and a secondary zone control plane for each cluster, each cloud, each environment, whether that's containers, VMs, or a mix and match of both across the organization. Whenever there is a new change within the system, that change goes through the primary global control plane, and the control plane automatically synchronizes and propagates that configuration across the board. By doing so, we have access to cross-zone discovery and cross-zone connectivity. As a matter of fact, we can also connect virtual machines and container-based applications and services transparently, by creating this abstraction layer that can span across multiple clusters and multiple zones. This is native to the product. It's not an afterthought. The product has been built in such a way that it can be deployed across multiple zones. On top of that, we have added one of the lessons that we learned with Kong Gateway. Kong Gateway has the concept of plugins, and Kuma has the concept of policies. We want to make sure that we can deploy the service mesh, but then we also want to provide an easy way for the teams to wrap their heads around all the features and functions that they want to apply in a service mesh, things like circuit breakers or traffic routing or observability with metrics and tracing.
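To make the global/zone control plane model concrete, here is a hedged sketch of how a multi-zone Kuma deployment is typically bootstrapped with kumactl on Kubernetes. The zone name and the global control plane address below are placeholders, and exact flags can vary between Kuma releases:

```shell
# Install the global control plane in one cluster
# (cluster context switching omitted for brevity).
kumactl install control-plane --mode=global | kubectl apply -f -

# In each workload cluster, install a zone control plane that
# connects back to the global control plane's sync (KDS) endpoint.
# "us-east-1" and the address are placeholders for illustration.
kumactl install control-plane \
  --mode=zone \
  --zone=us-east-1 \
  --kds-global-address grpcs://<global-control-plane-address>:5685 \
  | kubectl apply -f -
```

Once a zone control plane connects, policies applied at the global level are synchronized down to it automatically, which is the propagation behavior described above.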
And so we created this concept, this abstraction called policies, that allows the teams to select what features they want from the service mesh on demand and then apply them in their own zone automatically. Finally, this system is built with an abstraction that natively supports virtual machines and containers. As a matter of fact, it's not only Kubernetes, it's any container-based system: it can be AWS Fargate, it can be ECS. Virtual machines, and the support for virtual machines, really are first-class citizens. As a matter of fact, there are organizations that are deploying this on virtual machines only, with no Kubernetes whatsoever. They do that because they want to lift and shift those virtual machine workloads into containers, and by connecting the legacy VMs with the new containerized applications, that journey becomes much easier for them. And then of course, as you know, Kong provides on top of all of this an API management solution that wraps everything up together when it comes to both full-lifecycle API management and service mesh. Over the past 12 months, we've seen a 4x increase in the growth and adoption of Kuma in the world, with more than a thousand organizations using it in production. As a matter of fact, we made more than 40 releases of Kuma in the last 12 months. The momentum of the project is quite strong, and that's the way we want to do it. We want to keep shipping new features, new releases, and performance improvements every time we have them. We believe that building something and not shipping it is essentially holding back value from the community. And so whenever we think the timing is right, we'll make a release.
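As a hedged sketch of what a policy looks like in practice, here is a Kuma TrafficPermission resource in its Kubernetes form (pre-2.0 sources/destinations style); the service names are hypothetical:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: web-to-backend          # hypothetical policy name
spec:
  sources:
    - match:
        kuma.io/service: web      # hypothetical consuming service
  destinations:
    - match:
        kuma.io/service: backend  # hypothetical target service
```

The same sources/destinations selector shape applies across the other policy kinds, which is what makes the feature set easy for teams to pick up on demand.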
And across the board in the past 12 months, some of the features that Kuma has released are, for example, native support for Vault and Vault PKI for mutual TLS and identity for our services; things like FIPS-compliant encryption; things like policies for health checks and circuit breaking, traffic routing, and monitoring. There are 15-plus policies available to the teams to get up and running with service mesh today. As well as automatic propagation and synchronization of TLS identities across multiple zones, including a hybrid of VM and container zones. Again, the goal for this project is to support the entire org, which means containerized Kubernetes workloads as well as virtual machines. Why not? There is a service map out of the box, which ships in addition to the 70-plus charts that Kuma provides out of the box on top of Prometheus and Grafana. As well as improvements in the performance of the system: just last month we shipped performance improvements that essentially double the number of data plane proxies that can connect to the control plane in an efficient manner. Performance obviously matters. In my role as the CTO of Kong, I spend lots of time not only talking about the technology itself, but trying to provide the right guidelines and materials for the decision makers in the organization to understand the value of a service mesh. And nothing explains that better than looking at the use cases of service mesh. With a service mesh, we can implement zero-trust security without having to ask the teams to build it, of course. And everything is being automatically propagated and issued by the service mesh itself. The entire lifecycle of issuing identities, rotating those certificates, decommissioning those certificates, all of that is fully automated. The teams don't have to build it. As well as, of course, observability, being able to understand things out of the box.
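To show what enabling mutual TLS looks like in this model, here is a hedged sketch of a Kuma Mesh resource (Kubernetes mode) using the built-in certificate authority; the backend name is arbitrary, and a Vault-backed CA would be configured as a different backend type where that integration is available:

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    # Which CA backend issues workload identities
    enabledBackend: ca-1
    backends:
      - name: ca-1      # arbitrary backend name
        type: builtin   # control plane generates and rotates certificates
```

With this applied, the control plane handles the identity lifecycle (issuing, rotating, decommissioning certificates) for every data plane proxy, which is the zero-trust automation described above.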
And this is something that teams love, because they can get immediate value out of everything that they're building: 70-plus charts plus a service map to determine what the performance of their services is, how they are related to each other, which service consumes which other service, so they can understand the topology. As well as load balancing. Actually, this is an interesting one. This is something that wasn't obvious at first, but it creates immense value in the organization. When we think of load balancing, load balancing for the most part hasn't changed. Centralized load balancing is a remnant of the legacy monolithic architectures and way of doing things, and it doesn't have a place in a decentralized, containerized environment. Organizations are using centralized load balancers, or elastic load balancers in the cloud, in front of every service that their teams are shipping. But centralized load balancers, guess what? They're an extra hop in the network. They add latency. They are not efficient. They are slow, and they're not portable, especially when running in a multi-cloud environment. So service mesh can be used to implement client-side load balancing in such a way that we can get rid of those enterprise load balancers or cloud load balancers. As a matter of fact, I was speaking with the CTO of a Kong customer that was able to get rid of 16,000 elastic load balancers on AWS, one in front of every service, while at the same time saving seven figures on those load balancers. Cost adds up real quick. And so we're working towards Kuma 2.0. We welcome the community to contribute in our community channels to the definition of what Kuma 2.0 really is. One of the things that we're working on is a new spec for selecting policy sources and destinations across our services. So if you're familiar with Kuma, this is the chance to give your input into defining what Kuma 2.0 really is, at kuma.io/community.
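To illustrate the client-side load balancing point, here is a hedged sketch of a Kuma TrafficRoute policy (pre-2.0 sources/destinations style) that makes each client-side proxy pick the upstream instance itself, so no centralized load balancer sits in the path; the policy and service names are hypothetical:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: backend-lb              # hypothetical policy name
spec:
  sources:
    - match:
        kuma.io/service: '*'        # any client in the mesh
  destinations:
    - match:
        kuma.io/service: backend    # hypothetical service name
  conf:
    # Each client-side Envoy balances across backend instances directly
    loadBalancer:
      leastRequest: {}
    destination:
      kuma.io/service: backend
```

Because the balancing decision happens in the local proxy, the extra network hop and the per-service cloud load balancer both go away.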
But then, and I'm going to wrap this up, the last thing I want to talk about is being able to expose the services from a service mesh. Exposing a service from a service mesh is not only a load balancer concern. It's not only an identity concern. We want to apply governance to how we expose our services to the outside world. And to do that, we want to make sure that we have an API gateway that allows us to manage the consumers, the clients, the applications, and to govern each one of them in a different way. Maybe we have a tiering system. And so these are very important use cases for exposing any sort of API; it doesn't even have to be in a mesh. And so one of the things that we announced at Kong Summit two weeks ago was the integration of Kong Gateway with Istio. So essentially, we natively support services that are running in an Istio service mesh and expose them through Kong. And then Kong allows us to natively determine how we want to govern the APIs that we're exposing from Istio, as well as giving us access to hundreds of plugins that range from authentication, security, and traffic control to monitoring, observability, and so on and so forth on top of our APIs. Kong, of course, is open source, so this works in an open source capacity as well. As well as, of course, developer portals and catalogs and all that we need in order to properly onboard users to our APIs. And then, of course, the Kong Gateway and Istio configurations are going to be converging on the Gateway API spec, which is the next-generation Ingress spec for Kubernetes. Kong has contributed to this new spec, the next-generation Ingress, with the Kubernetes community, and these configurations are going to be converging over the Gateway API. And quite frankly, it's very easy to get started on konghq.com. All right, so we spoke about service mesh being inevitable.
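For reference, here is a hedged sketch of what routing to an in-mesh service through a Gateway API implementation such as Kong's might look like; the route, Gateway, and service names are hypothetical, and the apiVersion depends on which Gateway API release is installed:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: orders-route            # hypothetical route name
spec:
  parentRefs:
    - name: kong                # a Gateway backed by Kong (assumed to exist)
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders          # hypothetical in-mesh service
          port: 80
```

Because HTTPRoute is a vendor-neutral resource, the same configuration should carry over as gateway and mesh implementations converge on the Gateway API spec.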
We spoke about Kuma and some of the things that we've been doing when it comes to the open source service mesh, and we introduced Kong Gateway on top of Istio. Thank you.