Hello. This is Nikola Nikolaev. I'm a tech lead at Kong, and I'm going to talk to you about Kuma, the service mesh that we develop. So what is a service mesh? A good definition can be found in the cloud native definition by the CNCF. Essentially it says that cloud native technologies empower organizations to build and scale applications in modern, dynamic environments, and that service meshes exemplify this approach. So what are these modern, dynamic environments? When we think about it, it's usually the cloud. And typically, modern applications are composed of microservices: small building blocks that you compose into your applications. You can stack them up, scale them up and down, interconnect them, and build your final application. But alongside them you have your virtual machines, where you probably run your monoliths, and you might have your bare-metal machines, where you run your huge databases or compute-intensive applications that still need dedicated hardware. To form your application, you have to interconnect all of these participants. And suddenly you end up with all these connections that you have to manage and observe; you want to know what's going on inside. This is where the service mesh comes into play. The promises of the service mesh are, essentially: it gives you observability, so that you can see what's going on in the communication between all these participants; it allows you to secure all these connections; it gives you resiliency, so if one of these connections gets dropped, it can automatically be re-established; and it lets you control the traffic, so that you can redirect particular communication between various versions of a service, and so on. It's also a great development tool.
A service mesh can be used during the development of your application. So this is where Kuma comes into play. Kuma was developed by Kong. It started in early 2019, and by September 2019 the very first minor version, 0.1, was released. Nine months later it was donated to the CNCF, and it's a sandbox project now. In July we released version 0.6, which includes multi-zone support; we'll talk later about what this multi-zone support consists of. And in October we published our first major release. So how do you install Kuma? We support a number of installation methods. For Kubernetes deployments, we have our own command-line tool, kumactl, which can be used throughout the lifetime of Kuma to control it; at installation time, you can use it to deploy Kuma into your Kubernetes cluster. We have Helm charts available on Helm Hub. We have a set of CloudFormation templates that can be used to deploy Kuma on ECS. We have Docker containers, of course; a number of users actually run Kuma in pure Docker without even enabling Kubernetes. And we have a set of shell scripts that let you download and run Kuma on the Linux distribution of your choice, on your virtual machines or bare-metal servers. All of this is documented on our website. Once deployed, in a very simple scenario, Kuma looks like this: you have your control plane, and you have your data plane, which is essentially a Kuma agent plus an Envoy proxy running as a sidecar alongside your application. The communication between the data plane and the control plane happens over Envoy xDS, the control-plane protocol used by Envoy. If you want to scale up and go to multiple zones or multiple clusters, this is the multi-zone mode that we already mentioned, released in 0.6. There the concept becomes a little more complex.
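To make the Kubernetes installation paths just mentioned concrete, here is a short sketch of the two most common ones. This is illustrative rather than authoritative: the Helm repository name and URL, the namespace, and the exact flags are taken from the Kuma documentation of that era and may differ between Kuma versions.

```shell
# Option 1: kumactl renders the control-plane manifests;
# piping them to kubectl applies them to the current cluster.
kumactl install control-plane | kubectl apply -f -

# Option 2: install the control plane via Helm
# (chart repository name/URL assumed from the Kuma docs).
helm repo add kuma https://kumahq.github.io/charts
helm repo update
helm install kuma kuma/kuma --namespace kuma-system --create-namespace
```

Either path ends with the Kuma control plane running in the cluster, ready for data planes to connect to it.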
At the top you have your Kuma global control plane. Then, locally in each zone, you deploy a Kuma remote control plane with all the services and everything, as if it were a standalone deployment. The communication between the global control plane and the remote control planes happens over the so-called Kuma Discovery Service (KDS) protocol, which is also based on xDS. So effectively, the control planes and the data planes talk over xDS, and the communication between the global control plane and the remote control planes in a multi-zone deployment also leverages a form of xDS. In this mode, the communication between the different clusters, the different Kuma deployments, can be direct; you don't need any intermediate proxies. Once you deploy Kuma in this mode and you have a public IP accessible from the other cluster, the services can communicate directly. As we will see later, this also holds for the concept of the mesh: you can have a single mesh that spans multiple clusters. If we zoom in a little on the specifics of each type of deployment, the most basic deployment you can have with Kuma is standalone Kubernetes. You deploy your Kubernetes cluster, then use the Helm charts or kumactl to enable the Kuma service mesh in it. Once it's deployed, you annotate your namespace with a specific annotation, and this allows the Kuma Kubernetes controller to start injecting the Kuma data plane as a sidecar. We are leveraging the sidecar pattern here, and that's how you get your Kubernetes cluster enabled with the service mesh. If you want to scale up, you can actually run several Kubernetes clusters side by side. There are various use cases for this.
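The global/remote split and the sidecar-injection annotation described above can be sketched roughly like this. The zone name and the global control plane address are placeholders, and the flag and annotation names follow the Kuma documentation of the 0.6/1.0 era (newer versions renamed "remote" to "zone" and use a label for injection), so verify against the docs for your version.

```shell
# Global control plane, for example in a dedicated management cluster.
kumactl install control-plane --mode=global | kubectl apply -f -

# Remote (zone) control plane in each workload cluster; it connects
# back to the global control plane over KDS.
kumactl install control-plane --mode=remote --zone=zone-1 \
  --kds-global-address=grpcs://<global-cp-address>:5685 | kubectl apply -f -

# Enable sidecar injection for a namespace, so the Kuma controller
# starts injecting the data plane into new pods there.
kubectl annotate namespace default kuma.io/sidecar-injection=enabled
```

After this, pods created in the annotated namespace come up with the kuma-dp sidecar and register with their zone's control plane.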
You can go across multiple geographies, multiple data centers, or even different availability zones within the same data center; it's up to you. Effectively, Kuma provides you with the tools to join your services together and distribute them across different Kubernetes clusters. There is also the universal mode, which is used when you want to enable the service mesh on your virtual machines or bare-metal servers. It works much the same way, except that you deploy your service and then manually run the kuma-dp process alongside it, and all of this gets connected to the Kuma control plane. You can bind them together, you can have multi-zone deployments of universal clusters, and so on and so forth. My favorite deployment mode is hybrid. In hybrid mode, you can mix and match services running in Kubernetes with versions of the services running on bare metal in universal mode, and all of this can be bound together and controlled from the global control plane. One of the basic concepts in Kuma is the mesh. Effectively, a mesh is a communication isolation domain. If several teams want to share the same infrastructure, they would typically use different meshes. Meshes are a cross-zone concept: one mesh can span multiple zones, even mixing Kubernetes and universal deployments. And all the policies that we'll see on the next slide, the security settings, observability, all of these apply per mesh. So all the rules that you apply to your traffic go per mesh, and each mesh forms one communication isolation domain. Policies are the basic building blocks in Kuma, the tools that let you implement all the nice promises that service meshes make. You have mTLS, which is for security. You have your health checks.
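As an example of a per-mesh policy, here is what enabling mTLS on a mesh might look like on Kubernetes. The resource shape follows the Kuma 1.x `Mesh` custom resource with a builtin certificate authority; treat the field names as a sketch to check against the docs for your release.

```shell
# Turn on mutual TLS for the "default" mesh using Kuma's builtin CA.
# Once applied, traffic between data planes in this mesh is encrypted
# and authenticated automatically.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
EOF
```

Because the policy lives on the `Mesh` resource itself, it applies to every service in that mesh, across all zones the mesh spans.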
You have fault injection, which can be used during development; metrics; logging. And the most recent one, my favorite policy, is external services, which effectively lets you take a service that runs outside of the mesh and consume it from within the mesh. As for our next steps, we have a number of them here, but the most important one is CNCF incubation. We're working very hard towards it: we are engaging with the community, gathering use cases, and expanding our maintainer pool, the number of committers and people actively working within the community, so that we can move into incubation. This is our high-level goal for the months to come. You can find us at our website, kuma.io. We have a Slack channel, which is linked here. And twice a month we have community calls. Come find us, and join our community if you're interested in learning and exploring more. That's it. Thank you.