All right, we're going to go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's webinar, "Kuma: Build, Secure and Observe Your Modern Service Mesh." I'm Taylor Wagoner from the CNCF and I'll be moderating today's webinar. We'd like to welcome our presenters today, Marco Palladino, CTO and co-founder at Kong, and Kevin Chen, developer advocate at Kong. Before we get started, there are a few housekeeping items to go over. During the webinar, you're not able to talk as an attendee. There is a Q&A box located at the bottom of your Zoom screen. Please feel free to drop your questions in there rather than the chat window, and we'll get to as many as we can at the end or throughout the call. Also, this is an official webinar of the CNCF and, as such, it's subject to the CNCF Code of Conduct. Please do not add anything to the chat or the Q&A that would be in violation of that Code of Conduct. Basically, we ask that you're respectful of all of your fellow participants and presenters. With that, I'll hand it over to Marco and Kevin to kick off today's presentation.

Hey, thank you so much, and thanks for the opportunity to be here. It's amazing to be able to address the CNCF audience and tell them what we've been working on at Kong. I'm here with Kevin Chen, developer advocate at Kong. Here is the agenda: I will start by introducing service mesh and Kuma. Then we're going to see a live demo of Kuma, so we're going to fire up the terminal and see Kuma in action. And of course there's going to be some time for questions and answers at the end, but you can also ask questions along the way via the Q&A function of Zoom, if you wish.

So there is something happening in the world, and we're seeing it every day around us. Every product in the world is becoming a digital product. And once products become digital, they become cloud native. We focus a lot on technology and platforms, but what's really driving all of this is growing our business. Growing the business is the main driver of every transformation. That means we want to focus on our products. We want to make existing customers happy and then get new customers. And to do that, we adopt the cloud. We adopt modern technologies. We stand on the shoulders of giants. The application teams that are focusing on building the best products rely on us, the architects, to provide the best infrastructure for their applications to use.

I like to say that running a modern infrastructure is like running a city. The architects build the underlying infrastructure, the roads and the bridges, that the teams can then use. And when everything is in place, the teams can finally focus on building their products. They can focus on the things that matter: the users and the customers. Effectively, the architecture that we're building for them is a partner to their success. But like running a city, we need to connect the different places together. We need to enable the flow of information from one building to another. We need to enable that traffic. We need electricity, security, police departments, routing, street lights and street signs. And the more buildings we build, the more people we have in motion, and the more infrastructure we need to make sure that everything runs in an organized way.
And really there is no way around it. If we don't build this for our teams, then the teams will attempt to build it themselves. They're going to create fragmentation. They're going to create bits of architecture within their applications, which in turn will make them less productive over the long term. Sometimes teams don't even worry about this; they do nothing at all, and that hurts the business. A real, functioning city has to be built by somebody in order to grow the business in an effective way. But this can also be very challenging. Different teams are running on different platforms. They require different services. And adopting modern patterns can sometimes be very hard.

It's all about connectivity. As we decouple and distribute our applications, we are replacing the reliability of the CPU with the unreliability of the network. This is a key point: the reliability of the function calls within our applications is being replaced with the unreliability of the network. And just as we don't want to build our own data centers, we want to leverage the cloud, likewise we don't want to build our own network management; we want to leverage something that can do that for us. There are patterns that can help us with that.

Introducing service mesh. Service mesh fundamentally improves connectivity among the different services within our architecture. The name implies having a mesh of services, which is certainly the case more often than not. But the benefits of service mesh, improved connectivity, improved observability, and improved security among all of these different services, are benefits we get regardless of how many services we have running in our systems. It can be a thousand services, or it can be a monolith talking to a database. Even in that scenario, we still want connectivity to work, and we don't want the teams to be building that themselves.

Until now, service mesh has been very hard to implement. We work with lots of practitioners and operators, and they're all very confused when it comes to implementing service mesh and looking at the solutions that are out there. That's why on September 10, a month and a half ago really, we introduced Kuma, to help these operators create a universal service mesh that can work not just across modern Kubernetes environments, but can also help traditional applications running on virtual machines transition to those modern environments. So it's for both. And we made a significant effort to build an abstraction layer that can work simultaneously on traditional infrastructure, virtual machines and bare metal, as well as run very easily on Kubernetes and modern architectures.

Kuma, effectively, is a control plane for service mesh. It's built on top of Envoy and relies on Envoy for the sidecar proxy functionality. For those of you who are not familiar with service mesh, the concept is very simple. It doesn't have to be hard. Every time we make a request from one service to another, that request can fail because of the network. That request is, by default, insecure unless we protect it. That request needs to be monitored, logged, and traced, so that if it fails, we know what happened and we can improve it.
Now, traditionally, developers have dealt with these issues in one of two ways. Either they write more code in their applications to create smart clients, if you wish, that can make those requests to the outer world; but that means more code to maintain and update, that means technical debt, and that means replicating those smart clients across different languages if we adopt more than one framework or language within our system. Or, even worse, developers didn't bother to build anything like that at all. So the connection fails, the application fails, and we lose business. Either we build something ourselves, or we don't and we hurt the business.

And because we don't want to write code (the more code we write, the more technical debt we create), we want to outsource that functionality to something else. This is where service mesh comes into play. With service mesh, we deploy an out-of-process proxy that runs alongside our services, and the proxy intercepts every outgoing request that the service makes. From then on, it takes care of that request. That means that, from a developer's standpoint, if I'm building an app, I can make a request and make one assumption: that the request will work. Everything else is taken care of by this sidecar proxy. So as a developer on a team, I can focus on building the actual product. I don't have to worry about making sure that all of those requests are going to work. And the more decoupled and distributed we become, the more requests we're going to be making.

On the other end, when we receive a request, we have another proxy that receives it and, for example, terminates mutual TLS, logs the request to a third-party business intelligence tool, or enforces traffic permissions. On the receiving end, the application gets the request, and as a developer building the app, all I need to know is that at some point I will receive a request and that it will have been taken care of. So when we talk about connectivity, whether making outbound requests or receiving inbound requests, as an application developer I don't have to worry about it. This is what service mesh tries to fix. And this is a benefit that we can get on any platform: containers and Kubernetes, as well as virtual machines. And it's a benefit we get no matter how many services we're running in our system. This all works at L4, which means that any traffic to any database, any caching system, or any other service, using any protocol, can benefit from this pattern.

But the more sidecar proxies we have, the harder it is to configure them all. We don't want to manually redeploy, restart, or reconfigure the sidecar proxies. We want to do that from a centralized location and then push that configuration to the sidecar proxies, or allow the sidecar proxies to retrieve it. And that is the role of the control plane. The control plane is the source of truth for all the configuration that the sidecar proxies dynamically fetch, receive, or ask for in order to enforce all the features we want to enforce. Think of the control plane as that source of truth for all the configuration. So we use the control plane to push configuration, and then, in an eventually consistent way, that configuration is applied to the sidecar proxies.
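To make that control-plane/data-plane relationship concrete, here is a minimal sketch of how a sidecar could be registered with Kuma in universal mode, where each proxy is described by a Dataplane entity. The field names follow early Kuma releases and may differ between versions, and the address, ports, name, and service tag are illustrative assumptions:

```sh
# A hypothetical Dataplane definition applied with kumactl (universal mode).
# The control plane stores it and uses it to configure the Envoy sidecar.
kumactl apply -f - <<EOF
type: Dataplane
mesh: default
name: backend-1
networking:
  inbound:
    # Traffic arriving on 192.168.0.1:8000 is forwarded to the app on port 8080;
    # the service tag is how policies will identify this workload.
    - interface: 192.168.0.1:8000:8080
      tags:
        service: backend
EOF
```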
It's a very similar concept to the control planes and data planes that come from the networking space. Back in the day, in our physical data centers, we used to have many racks. Each rack had hundreds of switches, and the switches relied on a control plane to apply their configuration to them. It's the same thing, but applied to software.

So what Kuma really is, is a universal control plane. Lots of organizations are building their own control planes because existing control planes have a few problems. Either they focus on one platform only (but in an enterprise organization, we know that we don't run on only one platform), or they are a little too hard to use, with very high operational costs and lots of moving parts. So we built Kuma with simplicity in mind. We built Kuma to run everywhere and to be simple to use, and we'll see that in the demo later today. The Kuma control plane talks to the sidecar Envoys, and all of this happens as an integrated experience. You are not left wondering whether you need to get Envoy separately and then start Kuma separately; all of this is provided as one nice experience. Kuma can run on top of any platform. It runs in containers and on Kubernetes, as well as in universal mode (that's why we call it universal): it can run on top of any other platform, like virtual machines or bare metal. So you can deploy it on Red Hat, or on an AWS EC2 instance, for example, and have the same benefits of service mesh across the entire organization. Different teams go at different speeds, and different teams are going to be using different platforms, but we want to provide the service mesh functionality to all of them. Because, again, like we said before, if we don't do that, either the teams are going to do it themselves or, worse, nobody is going to do it.

Kuma comes from the learnings we have had with hundreds of enterprise organizations and enterprise customers. If you're not familiar with what Kong Inc. does, we are the makers of, among other things, Kong Gateway, which is one of the most popular open-source API gateways today. And we built Kuma as another open-source project joining the family of our open-source projects, like Kong and Insomnia, in order to tackle the connectivity problem. When we spoke with our customers and users, the most important feedback we received was that existing service meshes are too hard to operate. We don't want to start a new cluster for each team; we want to start one control plane, and from that one control plane, provision different meshes for each team. This makes a couple of things easier. Number one, it makes it easier to run Kuma in an enterprise environment: you only have one thing running, as opposed to multiple clusters, which makes it easier to operate Kuma at scale. And number two, it makes it easier for us architects to understand which teams are using the service mesh, so that we can consolidate those teams later on, because everything is in one place.
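As a rough illustration of that one-control-plane, many-meshes model, here is a sketch of provisioning an additional, isolated mesh for a second team in universal mode. The mesh name is made up, and on Kubernetes the same thing would be expressed as a Mesh CRD:

```sh
# Hypothetical example: one control plane serving multiple isolated meshes.
# Each mesh gets its own CA when mutual TLS is enabled, so teams stay isolated.
kumactl apply -f - <<EOF
type: Mesh
name: team-payments
EOF

# List all meshes served by this single control plane:
kumactl get meshes
```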
These were very important features that we decided to build from day one in order to build a pragmatic service mesh: a service mesh that you can use today, to deliver business value today, on any platform, without making adoption contingent on a transformation to containers or to Kubernetes, which may take multiple years to achieve in an enterprise environment. So Kuma has been multi-tenant since day one. It's very simple to use. We're going to see this with Kevin later today, but you can install Kuma on Kubernetes with one command, and it's very easy to start on any other platform as well. Kuma provides, among other things, identity for your workloads out of the box. It allows us to implement traffic permissions so that we can determine which services can consume other services, implementing security on that front, and it implements traffic logging so that we can extract observability and metrics out of our infrastructure, out of the box. We can then push these logs and metrics into any existing business intelligence tool we might be using (think of Splunk, think of Logstash, and so on); we can push this to any third-party TCP server. It also provides integration with Kong Gateway. Kong Gateway operates in a north-south ingress capacity, and by using the open-source Kong Gateway with open-source Kuma, we're now able to take care of the full life cycle of requests. We're able to protect and secure connectivity within the data center, and then we're able to expose those services to other teams or other data centers in the organization through the Kong ingress. And there's really much more. Kuma is a modern project that is living and breathing thanks to the community and community contributions, so I encourage everybody to go check out the GitHub repository and give it a try.

Long story short, all the work the teams have been doing to fix connectivity, to secure connectivity, to make sure that those requests never fail, can now be removed from the applications and delegated to Kuma and Envoy. It really makes it easier to improve connectivity. And connectivity, by the way, is a problem that will only grow over time: the more teams we onboard, the more products we create, the more platforms we decide to support. There really is no cloud-native, modern architecture without a modern way of dealing with connectivity. And this is really what Kuma is about.

Kuma, like I mentioned, can run on two different platforms, in two different modes, if you wish. It can run natively on Kubernetes. When Kuma is deployed on Kubernetes, it automatically injects the sidecar into your pods, without you having to change your applications. It automatically detects all the data planes, and it really takes one command to install and run Kuma. Because we took the feedback from our users and customers to make Kuma easier to operate, we also made sure that Kuma is only one component. Kuma is built in Golang, and it's one thing: you can scale it horizontally by adding more replicas, and that's pretty much it. There are no other separate components that you have to worry about. So it's really simple to install, really simple to wrap your head around, and very simple to operate.
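Since the control plane is a single component, scaling it out really is just a replica count. A minimal sketch, assuming the default install names the deployment kuma-control-plane in the kuma-system namespace (names may differ by version):

```sh
# Scale the control plane horizontally by adding replicas
# (deployment and namespace names are assumptions based on the default install):
kubectl scale deployment/kuma-control-plane -n kuma-system --replicas=3
```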
When running on Kubernetes, Kuma leverages the underlying Kubernetes API server to provide all its functions, whereas in universal mode we have one dependency, Postgres, in order to store all the configuration that you apply. On Kubernetes, you can configure Kuma 100% by using Kubernetes CRDs, and that's really the only way to configure Kuma on Kubernetes, whereas on universal, you can configure Kuma either via the HTTP API that we provide or via the CLI client, which can be integrated into your CI/CD workflow. The CLI client effectively consumes the HTTP API to perform all its functions. So you can version your configuration, if you wish, and apply it via the CLI as infrastructure as code. It follows best practices regardless of whether you're running on Kubernetes or on universal.

This is a lot of information. Hopefully with the demo that Kevin will give, we're going to see all of this running live, and it's very simple. We put a lot of focus into making things simple without excluding the possibility of going deeper into those more complex use cases where, perhaps, you want to manually configure the underlying Envoy. So Kuma provides nice abstractions, nice primitives, to do pretty much everything 80% of architects want to implement, and if you want to go deeper, there is a way to do that as well. Kuma wants to be simple and easy to use without taking away the powerful capabilities that the underlying technology, Envoy, provides. In fact, we are contributing to Envoy as well; we're part of the Envoy ecosystem in addition to building Kuma.

With that said, I've spoken for way too long now. I want to leave the screen share to Kevin so that he can show you how Kuma works live. And again, if you have any questions about anything that I've said or anything that Kevin is going to say, please ask them using the Q&A tool in Zoom and we'll answer all of them at the end of this presentation. So I'm going to stop my screen share now and leave it to Kevin.

All right, thank you, Marco. Can everyone see my screen? I'm going to give a demo of how Kuma works alongside a sample application that we built. What we have here is a sample marketplace application. On the left, we have the browser, which is just a front-end UI through which you can access the application. The browser accesses the API service. The API service is a Node application that makes requests to two other services within the infrastructure. First, we have Elasticsearch, which stores all the items that you might be selling within your marketplace. The second one is Redis, which is in charge of storing all the reviews that pertain to the items you are selling. Just like when you shop on Amazon.com and can look at the reviews for each item, the Kuma marketplace also supports that, allowing people to understand what they are buying. So if we jump back to the front-end UI here, I have the application up and running. This is without Kuma alongside it; this is purely the marketplace. You'll see that all the items are populated on the front page, you can easily scroll, and you can even search: say you want a dad Hawaiian shirt, you can find that. And we can also look at the reviews for them, right?
So this is a very basic application: if we go back to the diagram, it's three services. And even with three services, even with such simplicity, you still have to implement the traffic logging, identity management, and traffic permission logic within each one of the services. What we want to do with Kuma is abstract away that redundancy of code and time, and find a simpler way to deploy that logic, which enables you to move faster and secure your application.

So let's go ahead and dive right into how you would install Kuma. Marco gave you a glimpse earlier, but it really is as simple as using the kumactl command. You can download Kuma at kuma.io; I already have it downloaded and installed, so I can go into the bin folder, and you'll see that I have kumactl here. With kumactl, you can check the available commands. What we're going to do is install Kuma: we run kumactl install control-plane and apply its output to Kubernetes. Make sure everything's right, okay. With that, you'll see that it installs Kuma with all the necessary CRDs, pods, services, and deployments in one step. So, as Marco said earlier, it's fairly straightforward to get started if you want to deploy Kuma on Kubernetes.

While we're deploying it, let's take a look at our pods. Everything sits in the kuma-system namespace, and you'll see everything is already up and running. We have the control plane itself (let me increase the font a little so it's easier to see), which is in charge of configuring all the data planes that are going to have a sidecar proxy in them, and also the Kuma injector, which is in charge of automatically injecting that sidecar proxy alongside your services.

So right now we have our application running up here. This is what we saw previously, and I'm port-forwarding the application, but you'll see that everything still has only one container running inside it (or this one has two, because there's a front-end and a back-end service), but it's missing that sidecar proxy. In order for the Kuma injector to inject those sidecar proxies, we have to delete the existing pods and let them be redeployed. So let's just kubectl delete all the pods within the kuma-demo namespace. All these instructions are available in the Kuma repository, so you can easily follow along and try it at home. Okay, let's take a look at our pods in the demo namespace: everything's up and running again, but now you can see immediately that all of them have an extra container running alongside them. That is because the Kuma injector recognizes that these services need a sidecar proxy, so it injects that additional container alongside them.

If we go back to the diagram, this is essentially what we did: within one step, one CLI command, we installed Kuma, and it deployed the additional sidecar proxies needed. So now, whenever you want to configure those Envoy sidecar proxies, you simply talk to the Kuma control plane that sits in the middle of them, and it handles all the additional steps and configuration for your Envoy proxies. Awesome. So before we add mutual TLS, let's take one more step and explore kumactl a little more, because that tool can offer you a lot more control over your control plane.
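For reference, the installation steps just shown condense to a few commands. This is a sketch based on the walkthrough; the kuma-demo namespace is taken from the demo, and your pod names will differ:

```sh
# Install the Kuma control plane, CRDs, and injector on Kubernetes in one step:
kumactl install control-plane | kubectl apply -f -

# Everything lands in the kuma-system namespace:
kubectl get pods -n kuma-system

# Delete the demo pods so the injector re-creates them with a sidecar container:
kubectl delete pods --all -n kuma-demo
kubectl get pods -n kuma-demo   # each pod now shows one extra container
```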
Right now, our Kuma control plane is sitting within a Kubernetes cluster, so we have to port-forward it to allow kumactl access to it. Let's port-forward our control plane: we're going to forward the control plane pod (the pod name is on the top right here) in the kuma-system namespace, and the port we want to expose is 5681. Once that port-forward is set up, we can configure kumactl to reach the control plane at that address. Let me go grab that command; it's in the Kuma repository, so you can find it. What I'm going to do is use kumactl config control-planes add with that control plane address. Right now I'm exposing it at localhost:5681, but you can use kumactl to control any control plane that sits anywhere, as long as you have access to its address. I'm going to give it a name; we'll call it kubernetes. You'll see that it automatically adds the control plane of our minikube Kubernetes cluster and switches to it as the active control plane.

Now we run kumactl get meshes. Awesome: we can see the default mesh that was created when we deployed Kuma. Let's take it one step further and look at the data planes. If we look at the diagram we have here, we have three data planes, and that should be shown by a kumactl command. So we run kumactl inspect dataplanes (let me format this a little), and you'll see that we do indeed have three data planes here, represented accurately: the Kuma demo application itself (the Node service), Elasticsearch, and Redis.

Okay, so now everything is working fine. The application itself is still working fine, right? Nothing has... oh, I need to port-forward it, sorry. Let's port-forward our application after we redeployed it. And nothing has changed, except that traffic is now being routed through Envoy, the sidecar proxy. We haven't enabled mutual TLS and we haven't had to configure any traffic permissions, so things still work as they are: the reviews still work, and the Elasticsearch service endpoint still works as well.

So now let's go ahead and add mutual TLS. Mutual TLS is a very important capability; one of the core features of service mesh is the ability to encrypt all your service-to-service traffic. We're going to go ahead and use this deployment, and here you'll see that I'm simply saying, for the mesh resource (which we already know exists, the default mesh), that mutual TLS is enabled.

Kevin, sorry to interrupt you. I just want to point out that, because we're running in a Kubernetes environment, as you can see, we are configuring the system with CRDs. And that really is the only way to configure Kuma on top of Kubernetes.

Thank you for the clarification, Marco. So yeah, we're configuring the Mesh CRD, and we'll see that the default mesh has been configured. Now, if we try to access our application again with mutual TLS enabled, it doesn't work, right? We can't even fetch the items or see the reviews, because now all the traffic is encrypted and the Envoy sidecar proxy is not allowing access to the service without the correct traffic permissions. So to make our application work, to get it up and running again, let's go ahead and add a traffic permission. Let me copy and paste that properly.
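While that gets pasted in, here is roughly what the steps in this part of the demo look like when pieced together. The pod name is a placeholder, and the Mesh spec follows the schema of early Kuma releases, so treat the exact fields as assumptions:

```sh
# Expose the control plane's API locally and register it with kumactl:
kubectl port-forward -n kuma-system kuma-control-plane-<pod-id> 5681:5681 &
kumactl config control-planes add --name=kubernetes --address=http://localhost:5681

kumactl get meshes          # shows the "default" mesh
kumactl inspect dataplanes  # shows the API, Elasticsearch, and Redis sidecars

# Enable mutual TLS on the default mesh, using Kuma's built-in CA:
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabled: true
    ca:
      builtin: {}
EOF
```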
You'll see that I'm configuring the TrafficPermission CRD and giving it the name "everything." I'm basically saying, for the rules, that any source matches any destination; essentially, any service can talk to any other service. With that CRD in place, we can use kumactl to check the traffic permissions, and we can see that the traffic permission is indeed applied to the control plane. The control plane will now enforce it and apply it to all the data planes out there. So, going back to our front-end application: our application works again. That's how easily you can enable mutual TLS across all your services. We can check our reviews endpoint; reviews work as well. Awesome. So up to this point, it really doesn't take a lot of effort to secure your service-to-service traffic with Kuma. All the traffic that runs between the API service and Elasticsearch is encrypted, and all the traffic between the API service and Redis is encrypted.

With mutual TLS knocked out of the park, let's talk about adding logging to your service mesh. If we take a look at kubectl get namespaces, we have a logging service running, which essentially ships our logs via Logstash to a hosted service called Loggly. With that up and running, let's configure the Kuma service mesh to send all our traffic logs to Logstash. We have this CRD right here. Once again, to walk through what's happening: we are configuring the Mesh CRD again, but this time, alongside mutual TLS, we're enabling logging and telling it exactly where to send the logs. On top of that, we also have to define exactly which logs we want to send. With the TrafficLog CRD, you'll see that it looks very similar to TrafficPermission, where you have rules with a source and a destination. You say: for any service that matches X talking to destination Y, I want to send the logs to Logstash. Right now, we're just going to log all the traffic that runs through our application so we can trace everything. So we'll see that our mesh resource is configured, and we also create a new TrafficLog CRD. With that, everything is going to be sent to the kumademo Loggly account.

Let's generate some sample activity within our application. We can check out some dad shirts, some fedoras, awesome. And let's look at the reviews. Sometimes it takes a while for the logs to reach Loggly; it's not instantaneous, because it's a hosted service, so I just want to give it a few minutes. At the end of the demo, we'll come back and look at everything that happened throughout, so we'll get back to logging again.

The next step after logging, while we wait for the logs to populate in Loggly, is restricting traffic permissions. We showed how we can enable traffic permissions from all services to all services; it was simply defining a rule saying, for source to destination, I want to match all. But let's say, hypothetically, your marketplace is being bombarded with fake reviews. Maybe your competitor doesn't want you to sell any items, so they start creating a lot of fake reviews and giving one star to all your items. You want to quickly shut down your Redis review endpoint so people can no longer access it. So we're going to go ahead and apply another traffic permission.
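Before that, for reference, the two policies applied so far look roughly like this: the allow-everything TrafficPermission, and the TrafficLog that ships everything to a named logging backend. The schemas follow early Kuma releases as described in the demo, and the backend name is an illustrative assumption:

```sh
# Allow-everything permission: any source service may talk to any destination.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: everything
spec:
  rules:
    - sources:
        - match:
            service: '*'
      destinations:
        - match:
            service: '*'
EOF

# Log every request between any pair of services to the "logstash" backend,
# which the Mesh resource maps to a Logstash TCP address (assumed here).
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficLog
mesh: default
metadata:
  name: all-traffic
spec:
  rules:
    - sources:
        - match:
            service: '*'
      destinations:
        - match:
            service: '*'
      conf:
        backend: logstash
EOF
```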
This time, you'll see it looks fairly similar to the first one, except now we're defining a specific source service and destination service. We're going to change the "everything" traffic permission so that the source, kuma-demo-api (which is the Node service), can only match the destination elasticsearch, essentially cutting Redis out of the whole flow of your application, okay? With that configured, what we're going to see is that traffic to Redis will no longer be allowed; Envoy will terminate that traffic because it sees the traffic permission you defined. But Elasticsearch will still work, because you allowed it specifically in the traffic permission rules. And since we haven't used kumactl in a while, let's use kumactl to get our traffic permissions. We'll see that the traffic permission "everything" still exists, and if we output it as YAML, you'll see that the destination elasticsearch has to be matched with the source kuma-demo-api. Awesome.

So we can go back and take a quick look at the logging from the last ten minutes. Just to sync back on the logging: you'll see that logs are indeed coming in as we make requests to our service, so logging is working, okay? Now we go back to our final application. Elasticsearch, which populates our items, is working, and we can keep searching and querying (my favorites are still the Hawaiian dad shirts and the fedoras), but if we make the same request to reviews, you can see that reviews no longer work, because you defined those specific traffic permissions that disable the review service. And with that, within ten minutes, I would say, we were able to enable mutual TLS across all your services, enable traffic logging, and enforce traffic permissions. That is the end of the demo.

We are really proud of what we've built. We are nearly at 1,000 stars, despite being only about one month old. So please help us out: go to our GitHub, it's github.com/Kong/kuma. Check out the repository, star it, follow us. If you find issues, please open them; we love people engaging with us and helping us improve what we have built. If you want updates about Kuma, you can sign up for our community updates at kuma.io, or you can join our community Slack channel, for which I'll have a link later in one of the slides.

To summarize: we demonstrated that Kuma is a universal control plane that can help you configure all your data planes, regardless of whether they're in Kubernetes or universal mode. The policies are easy to use: we want to ensure that people can get up and running, create the policies that matter to them, and enforce those policies across their mesh. And it's also platform-agnostic. Today we only demonstrated Kuma on Kubernetes, because this is a CNCF webinar, but we will have tutorials on how to deploy it in universal mode as well. If you want to learn more, we will be at KubeCon NA, so please join us. We also have a co-located event the day before; you can register now. It will focus on the API gateway in the morning and Kuma in the afternoon. Or you can always swing by our booth: we'll have tons of swag, and we'll have engineers and demo labs that you can play with to learn how to utilize Kuma for your architecture.
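Before the resource links, here is a hedged sketch of the restrictive policy from that last demo step. As before, the schema reflects early Kuma releases, and the service tag values (kuma-demo-api, elasticsearch) are taken from the demo, so treat the exact names as illustrative:

```sh
# Replace the allow-everything rule: only the API service may reach Elasticsearch.
# Redis is no longer a permitted destination, so Envoy rejects that traffic.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: everything
spec:
  rules:
    - sources:
        - match:
            service: kuma-demo-api
      destinations:
        - match:
            service: elasticsearch
EOF

# Confirm what the control plane will enforce:
kumactl get traffic-permissions
kumactl get traffic-permissions -o yaml
```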
These are the resources we have. kuma.io is our main web page; you can get quick-started there and find the links as well. You can also join our Slack channel that I mentioned earlier; it's chat.kuma.io. I'll leave this up for a second so people can digest the links and write them down. And one last thing: download Kuma. Go to kuma.io, install it, run it, and let us know what you think. Thank you. So Marco, do you want to lead the Q&A?

Yeah, sure. I've been answering questions throughout your presentation; there were many questions in the Q&A tool of Zoom as well as in the chat, and I've been answering both. There are two more questions that I haven't gone through yet.

One of the questions is: are the traffic permissions stateful? So you allow the API to reach Elasticsearch, but not the other way back. Today, traffic permissions cover both directions of a connection. When we allow the API to consume Elasticsearch, that means the API can initiate a request to Elasticsearch. Elasticsearch, of course, does not initiate any request to the API service; it receives the request and provides a response. That's why we did not have to create a traffic permission rule allowing Elasticsearch to talk to the API: the connection is not initiated by Elasticsearch, it's initiated by the API service. I hope that answers the question, if I understand it correctly.

Then there is another question around mutual TLS: mutual TLS is good, but who provides the certificates? Is there any equivalent of Citadel in Kuma? The answer is that we have a built-in certificate authority. We did that with the vision of removing as many dependencies as possible that you'd have to take care of when running Kuma. So Kuma itself provides that for you, and as part of our roadmap we're going to enable the option of using third-party certificate authorities as well. The project was released a month and a half ago, so there are a lot of things we want to build, but this is one of the things that will come up quite soon in the mutual TLS feature. I hope that answers your question.

There was another question around dependencies that I really want to make clear. I mentioned that Kuma can run on Kubernetes and on universal. When it runs on Kubernetes, there are no dependencies; there is no Postgres dependency. When running on Kubernetes, Kuma relies on the underlying Kubernetes API server to store all the information, which means there is no Postgres whatsoever. Postgres is only required if you're running Kuma in a virtual machine environment. Why? Because there we obviously cannot leverage an underlying Kubernetes API server, and we have to store the configuration somewhere. So you can store it in Postgres, and you can also start an AWS RDS instance, for example, or any managed Postgres, so that Postgres is out of sight, out of mind. But for universal, there must be a place to store this configuration. We chose Postgres for the simple fact that it's one of the simplest systems to run, and it's quite a good database as well. We're pretty sure that in universal environments it's going to be straightforward to start a Postgres instance, or to find plenty of Postgres-as-a-service offerings that you can use with Kuma. So those are also options.
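For those curious what that universal-mode dependency looks like in practice, a minimal sketch of pointing the control plane at a managed Postgres might look like the following. The environment variable names are assumptions based on Kuma's configuration conventions, and all values are placeholders, so check the docs for your version:

```sh
# Hypothetical universal-mode startup: store state in Postgres instead of
# the Kubernetes API server (all values are placeholders).
KUMA_ENVIRONMENT=universal \
KUMA_STORE_TYPE=postgres \
KUMA_STORE_POSTGRES_HOST=my-rds-instance.example.com \
KUMA_STORE_POSTGRES_PORT=5432 \
KUMA_STORE_POSTGRES_USER=kuma \
KUMA_STORE_POSTGRES_PASSWORD=secret \
KUMA_STORE_POSTGRES_DB_NAME=kuma \
kuma-cp run
```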
There have been some questions around the dashboard: how can we visualize all these traffic permissions, and is there a dashboard for Kuma? This is a very good question. As we speak, we are in the process of building a dashboard for Kuma that will show up in the next major version of Kuma, 0.3. So you're going to see a dashboard for Kuma, available via your browser, starting from the next major version. That's going to be the first step for us toward building more and more visualizations on top of everything that's happening in Kuma; in fact, I'm quite excited about some of the things we're planning to build. Like Kevin mentioned, you can also sign up for the Kuma community newsletter so that as we build these features, you get notified and can get access to them as soon as possible.

In universal mode, you can configure Kuma using YAML, yes. You can configure Kuma with kumactl, yes. You can also leverage the underlying API: think of kumactl as a glorified HTTP client we've built that leverages the underlying HTTP API that Kuma provides. You can also use curl, if you wish, or, because it's an API, you can integrate the Kuma API with pretty much any system out there. The API also exists when running Kuma on Kubernetes, but unlike universal, on Kubernetes the API is read-only. So you cannot make changes via the API, and you cannot make changes with kumactl; you can only make changes through kubectl and CRDs, and that's really how things should be done on Kubernetes. On universal, there is no kubectl, so kumactl takes that role: kumactl lets you make those changes, as does the HTTP API.

If you have any more questions, I'll be happy to answer; I'm finishing answering the last ones in the chat tool. And as you start to explore Kuma, install it and play with it, if any questions arise then, you can join our Slack channel. We're always on there, always talking to our community, so you can get a quick answer. And if you run into an issue or a bug, just open an issue on our GitHub repository. The link for the Slack is up on the slides again, so feel free to join us there.

Perfect. Let me see if there are any more questions in the chat; I think I've answered those. One question that was very important: is there a mutation controller to automatically inject the sidecars? The answer is yes. Everything is automated on Kubernetes; the sidecar is automatically injected, so you don't have to worry about it.

Another question: how fast is Kuma compared to Istio and Linkerd? Kuma shares one thing with Istio, which is the usage of Envoy. The data plane we're using at runtime for Kuma is Envoy, and that is also what Istio uses. Unlike Istio, Kuma can work across multiple environments, not just Kubernetes, and it's much simpler to use. There are far fewer moving parts, so it's easier to scale and operate a service mesh for the entire organization, plus there's the capability of multi-tenancy since day one, which means you can provision isolated meshes, each mesh with its own CA (certificate authority), so that different teams can create independent meshes from the same control plane.
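To make the earlier point about the universal-mode HTTP API concrete: since kumactl is just a client of that API, you can also drive it directly with curl. A small sketch, assuming the control plane is reachable on its API port 5681 as in the demo (the resource paths are assumptions based on the API described in the talk):

```sh
# List meshes through the control plane's HTTP API (what kumactl does under the hood):
curl http://localhost:5681/meshes

# Inspect the registered data plane proxies in the default mesh:
curl http://localhost:5681/meshes/default/dataplanes

# Or keep policies in version-controlled YAML and apply them from CI/CD:
kumactl apply -f traffic-permission.yaml   # file name is illustrative
```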
Linkerd, instead, implements its own data plane; I believe it's built in Rust, and we have not done benchmarks against Linkerd yet. But when building Kuma, we made a very deliberate decision: we don't want to reinvent the wheel on things that the industry has already contributed to the world. We don't want to rebuild our own low-level networking capabilities; we want to leverage the de facto industry standard, Envoy, to take care of that. That allows us to put more focus into making Kuma the best control plane for service mesh, while leveraging all the contributions that the community and industry have been making to Envoy on the data plane technology. That's why we decided to use Envoy as opposed to building our own data plane. And we're going to contribute back to Envoy, proposing pull requests for any improvement we think is necessary to fix any bottleneck you might experience.

Any more questions? I think we are good then. Looks like everyone is set; like we said, if you have any questions, you can always follow up with us on the other channels that we have open. I want to thank everyone for joining us today on this webinar, and thank you, CNCF, for having us. We are a CNCF Gold member, so we always try to give back to this community and work with this community to make Kong and the CNCF ecosystem better. Thank you so much, everyone.

Thanks, Kevin and Marco, for a great presentation, and thanks to all of you for joining us today. The webinar recording and slides will be available later today on the CNCF website; I just posted the URL into the chat. We look forward to having you again at another CNCF webinar. Thanks so much.