Okay, I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Kuma Service Mesh and the Future of Application Connectivity. I'm Ariel Jatib. I'm a business development manager for Cloud Native Technologies at NetApp and also a CNCF ambassador. I'll be moderating today's webinar and would like to introduce today's presenters, Marco Palladino, CTO and co-founder at Kong, and Kevin Chen, a developer advocate at Kong. A couple of housekeeping items before we get started: during the webinar, you're not going to be able to talk as an attendee. There's a Q&A box at the bottom of the screen; feel free to drop your questions in there and we'll get to as many of those as we can at the end. This is an official CNCF webinar and as such subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page, cncf.io/webinars. With that, I'll hand it over to Marco and Kevin for today's presentation.

Thank you so much, and welcome everybody to this webinar. Today we're going to be talking about service mesh, application connectivity, and Kuma. My name is Marco Palladino. I'm the CTO and co-founder of Kong, and I'm joined today by Kevin Chen, who's a developer advocate at Kong. This session is going to be split in two parts: there's going to be my presentation, and then we're going to be seeing a live demo of Kuma. Kevin will lead that effort. So today we're going to be talking about service mesh, and to really understand why this is the time to talk about this new pattern, we need to zoom out, take a step back, and understand that we have entered, for a few years now, a new era of software development. We have transitioned away from large monolithic applications that baked all of the functionality and all of the features into primarily one codebase, and we have started to decouple and distribute those large codebases into smaller, decoupled services. These services are going to be powered by an API, because they're separate: we're not going to be using function calls in our monolith to consume different parts of our application, but we're going to be making calls over a network to do that. And really, this transition from monolithic to microservices was started by a few technologies that came out in 2013 and 2014. One of them is obviously Docker, which popularized the adoption of containers, and the other one, in 2014, was Kubernetes, which gave everybody else in the entire industry a platform that anybody could use, self-service, to deploy their applications. Some companies did this transition prior to Kubernetes, of course. Microservices have existed for a long time. If we think of Google internally, first and foremost, but also Amazon, and companies like Netflix, they did transition to microservices prior to Docker, prior to Kubernetes. The difference is they had to build their own tooling from scratch in order to be able to run those microservices. But after 2013, after 2014, there is a new ecosystem in the industry that provides this tooling out of the box for everybody else. So we don't have to build our own monitoring solutions.
We don't have to build our own orchestration platforms; we can go to landscapes and ecosystems like the CNCF and get the software we need without having to build it ourselves. In this sense, 2013 and 2014 were pivotal moments in the industry and created this new technological transformation in the world, a new era, effectively. And really, this transformation connects with pretty much every other technology transformation we're doing in the world. At the end of the day, the goal that we have, I have, you have, everybody has, really, is to grow the business. And to grow the business, it is important that we can capture new users and monetize the existing ones, and that we can provide a better, more reliable digital experience. As a result, we're going to be making our applications more reliable. We're going to be distributing them. We're going to be decoupling them over time. We're going to be leveraging different regions, different cloud vendors. We're making our software more distributed and more decoupled in order to make our development speed faster and to make our services, our entire experience, more reliable. So as we're transitioning from centralized to decentralized architectures, we're really transitioning from more static architectures, if you wish — monolithic apps that we deploy all at once, with multiple people and all the teams working on one codebase — to a more dynamic, more elastic architecture. We can decouple these services. We can deploy them independently. We can build them with different technologies. We can deploy them without too much coordination across the different teams. As we do that, the number of services in our systems increases over time. It's inevitable as we go from large monolithic applications to smaller, decoupled microservices. And as the number of services increases, well, we're introducing some new challenges that we didn't have before. Some of them are around control and visibility over our overall architecture. In the monolithic world, we had a handful of systems running. At the end of the day, they were hard to scale and hard to build, but there were just a few of them and we could deploy them very well. In microservices, this gets harder, because the number of things we have running increases to a much, much bigger scale. And as we are decoupling our monolithic applications into microservices, we are also introducing a new variable into our architectures — one that always existed, even before microservices, but that now becomes a much bigger component, a bigger part of these modern architectures. And that is the network. We are decoupling our architectures into separate services that we can consume via an API. And by the way, the API is not necessarily just an HTTP API. It can be any API with any protocol that runs over the network. It can be gRPC. It can be a Kafka event. It can be a Kafka stream. It can be a more traditional service-to-service request. It can be anything. But the point is, we're going to be having APIs that we're going to be accessing on a network. And when we do that, we're making the network and the network's reliability part of the overall picture in a much bigger way than with monolithic applications. The network, as we all know, is not secure. The network is unpredictable. It can be slow at times. And so when the network becomes such a big part of our architecture, problems in the network are going to affect our end experience in bigger ways.
And again, this is part of this transition we're making from running everything on a CPU: we are effectively replacing the reliability of the CPU with the unreliability of the network. We are replacing our function calls, our objects in a monolith, with network calls. Now, we always had network calls, even before this. Even in a monolithic world, the monolith, if anything, was consuming a database, and that was a network request. And if that network request went down, then the monolith was down. What has changed now is the scale of those network requests. We are making many more network operations, network calls, and network requests overall in microservices than we ever did with monolithic applications. The network — this is the keyword: service connectivity over the network. Now, the network is a problem because, like we said, it's not secure. We need to encrypt the network. We want to be able to build zero-trust security models by assigning an identity to every workload that runs on the network. If a service wants to consume another service, we want to make sure that the service is, first and foremost, a real service — we can verify the identity of that workload — and then we can grant it permission to consume other services. We want to be able to set up ACL rules that determine which services can consume other services. If we have a very restricted service, for example, one that is exposing sensitive data, user information, we don't want every service to be able to access that specific service. We want to be able to set up rules that determine how we're going to segment our traffic. Services are going to have different versions. They're going to have feature-flagging requirements. So we want to be able to enable some features only if certain services, or certain versions of our services, are making those requests. We want to implement some complex routing and versioning over the network. In a monolithic application, we would either create a new implementation of an interface, or we would redeploy the entire thing over and over again. But with microservices, the overall application is not in one place; it's made of these services running all together, and different versions of these services running together. We need to have something that allows us to determine what the behavior of the network in our system is. We also want to be able to deploy our services in a safer way. We want to implement canary releases. We want to be able to observe all of this network connectivity that's going from one service to another. We want to be able to collect metrics, to log all the requests, and to trace them, so we can find bottlenecks and find where the problem is. In microservices, performance also becomes a much bigger requirement. Monolithic applications had many flaws, but at the end of the day, invoking a function was quick. The underlying Java Virtual Machine, for example, was quite quick at consuming objects within the context of the JVM. But with microservices, we're not doing that anymore. We're making network requests that go to the outside world, outside of that microservice, to consume another microservice. Therefore, performance is much more impacted by the network. And so we want to be able to trace that network. We want to be able to observe it in greater and better ways. There's a whole set of concerns that we need to take care of.
And traditionally, even in monolithic applications, we would have the network, and we would be writing more code — the application teams would be writing more code — to take care of this network. So we would write smart clients, for example, that would perhaps retry a request to the database if that request failed, that would log the exception, that would log any problem that the network experienced. The problem is that as we are transitioning away from a handful of large codebases to more and more microservices, we want to be able to implement this network management everywhere — not just in one place, but on pretty much every service. And so over time, if we don't do anything about this, each team, each application, and each service will implement, in one way or another, its own smart client or its own network manager. Over time, this creates lots of fragmentation, especially if we're going to be using different programming languages across the board, because then we have to reimplement this logic across different languages. This creates lots of fragmentation and lots of problems, including security problems, compliance problems, observability problems, and incompatibility across these different services, as different teams create this extra code to manage the network. The teams should not be managing the network. The same way we don't ask the application teams to manage the data center — we give them an abstraction that they can use to deploy their services — we want the application teams to focus on creating the service, not on managing the network. If they do manage the network, this will inevitably lead to fragmentation and poor implementations. It's not their job. Their job is to create the service, the end-user application, and make sure that that experience is reliable. Managing the network is a side job that today they're implementing themselves, and we don't want them to do that. If we manage the network in a fragmented way, in a duplicated way across the board, eventually this is going to hurt the business. It's going to create unreliable experiences. We want the teams to focus on creating the apps. We want to abstract away that network manager. What if we take our code that manages the network — that retries requests, enforces security policies, logs and observes everything that's happening when we make an outbound request or receive an inbound request — and we extract it away? We separate that from our service. What if we make this particular code that's managing our network portable? We don't want this code to be tied to a specific programming language. We want it to be portable across the board, so that if a team wants to build a service in Python, in Ruby, in Golang, in Java, they can do that. This separate network management executable, if you wish, is going to be managing all of the network concerns for us, regardless of what language or what technology the service is built with. If we did extract this as a separate executable, we could also use it for the services that we're not building but are using — for example, a database or something that we're downloading and running. Now, for this to work, we need that component, that network management executable, to be on the execution path of our requests. We want that component to be able to take over those requests and then proxy them to another service.
The originating service that's making the network request should not have to worry about managing the network, because this executable will do it for it. Now, of course, there is the question of latency: we're adding a new component to our requests, and this component is going to inevitably add some latency. But this is the catch: if the latency is very small, in such a way that it doesn't really affect the overall end-user experience, while the benefits it provides are so high, well, then it's still worth adopting it. To reduce that latency, we're going to be deploying this executable that manages our network — this proxy, effectively — next to our service, on the same underlying host, virtual machine, or pod. Basically, we want the connections between the service and the network management proxy to always happen on localhost; we don't want them to go out over the network. Otherwise, we're defeating the purpose of having this executable in the first place. We want this to be as close as possible to the services. In Kubernetes, this would be a sidecar container, which effectively is a way to tell Kubernetes to deploy this network management proxy in the same pod as the service that we're running. And because, as part of the tasks that this executable should be doing, we also want to be able to encrypt the connections, we're also going to be having this proxy on the other end when receiving those requests, in order to enforce encryption out of the box without the services ever knowing that any of this is happening. By doing so, if you look at this picture, we are effectively abstracting away the network management from our services. This means that we can take this code, this executable, push it alongside every service, and get network management out of the box. The teams that are building the services are never, ever going to worry about managing the network ever again. All they care about is triggering those requests or being able to receive those requests; how those requests are being secured, how they're going to be observed, how they're going to be retried, and so on and so forth, is not a concern of the service itself. Therefore, we can build services in many languages, in many technologies, and out of the box we get network management via this portable executable that we're shipping and deploying alongside our services. We can build our own, or we can use something that already exists in the landscape. If we want to use something that has already been built out there and get going very quickly, we can definitely use something like Envoy. Envoy is a proxy that we can use for these kinds of use cases. It implements network management functionality that we can leverage across the board alongside our services, so we don't have to build it ourselves. Because Envoy runs as a separate executable alongside our services, all this network management comes out of the box regardless of what platform we're using — including containers, but also virtual machines. There's nothing about Envoy that makes it specific to containers, or that prevents it from being executed on virtual machines or even bare metal if we want to. It's a portable executable. Now, if we do use Envoy, we don't even have to build our own network management executable, because it comes out of the box from the community, from the ecosystem.
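To make the sidecar idea concrete, here is a minimal sketch of what a hand-rolled sidecar deployment might look like on Kubernetes: one pod running the application container and an Envoy container side by side, so traffic can be handed to the proxy over localhost. The names, image tag, and bootstrap config path are illustrative assumptions; in practice a mesh like Kuma injects and configures this container automatically, as described later.

```bash
# Hand-rolled sidecar sketch (illustrative only; a service mesh normally injects this).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-service                        # hypothetical workload
spec:
  containers:
    - name: app
      image: example/my-service:1.0       # the service itself (placeholder image)
      ports:
        - containerPort: 8080
    - name: envoy-sidecar
      image: envoyproxy/envoy:v1.12.2     # the network-management executable
      args: ["-c", "/etc/envoy/envoy.yaml"]  # bootstrap config, e.g. mounted from a ConfigMap
EOF
```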
Now, if we take a step back and look at the big picture: as we make the transition to microservices, as we introduce more and more services into our architecture, we're going to be having, alongside each one of these services that we're creating, an Envoy proxy. The Envoy proxy is going to be responsible for processing the outbound requests to other services and receiving the inbound requests from other services. On top of these requests, we can add security, encryption, and routing functionality by leveraging Envoy, without having to build it ourselves in our services. Effectively, it's as if we're creating a sort of network overlay. The services are unaware of all the complexities of the network out there, but we're creating this overlay, provided by Envoy out of the box, and that overlay will make our network requests more reliable. Now, of course, because we're going to be having many instances of our Envoy proxy alongside our services, it can become challenging to configure the behavior of the network. The behavior of the network is something that, over time, we might want to change. We might want to change the permission settings. We might want to change how we observe our traffic. And every time we make a change, or every time we want to expose a new service to another service, we don't want to manually go and push that configuration to all of these Envoys. We could, but that wouldn't be very smart. It would be quite a painful process, because we're going to be having many of these data plane proxies — Envoy is called a data plane because it sits on the execution path, the data path, of our requests — and we would have to manually go ahead and reconfigure these proxies every time we want to make a change. But we don't want to do that. What if we leveraged another component, the control plane, whose only job is to connect to these proxies and push that configuration? The terminology of data plane and control plane is actually quite common in the networking world. Nowadays we don't manage our own data centers anymore, we use the cloud; but if we did manage our own data center, we would have a bunch of racks and servers sitting in a building, and each of them would have its own switches and routers and so on. Every time we want to change something in the behavior of our network, we don't want to physically go into the data center, connect to each rack, and update the configuration of every switch, for example. We want to be able to leverage a centralized source of truth, and that source of truth will be responsible for propagating the configuration to our switches and routers and so on. The same thing is happening here. As we deploy our data plane proxies across the board, every time we make a change, we don't want to do it manually on every single proxy. We want to be able to leverage a source of truth, the control plane, that connects to the data planes to push that configuration. Now, the catch here is that the control plane is never on the execution path of the service-to-service requests. The control plane connects to the proxies only in order to push that configuration. The actual service connectivity flows through the data planes, not the control plane. So technically, the control plane could be down, and if the control plane were down, that would not affect the service-to-service traffic.
The reason why Envoy is quite popular these days for these kinds of use cases is that Envoy provides an API that the control plane can implement in order to push that configuration very easily. Those APIs — the xDS APIs that Envoy provides — come out of the box, and the communication between the control plane and the data plane in Envoy is done via gRPC. Now, likewise, alongside our data plane proxies, we would be deploying our control plane next to our application so that the control plane can connect to these data plane proxies. And just like that, we have learned what service mesh is. Service mesh is a pattern that implies having a data plane proxy running alongside every service that we're running, so that the network management can be abstracted away from the services we're building into this proxy. One implementation of these proxies can be Envoy, for example. And then it implies having a control plane that can connect to these proxies, so that we can reconfigure the network behavior without having to manually push that behavior into the proxies themselves. So the control plane becomes this source of truth that dynamically pushes the configuration to the proxies. If we want to change the network behavior, we log in to the control plane, or we change the state of the resources in the control plane, to make that happen. We don't go directly into the proxies themselves. This is service mesh. Service mesh is not really a new concern. Even in a monolithic world, when a monolith wants to make a request to a database, to something that's outside the codebase, we have to make a network request. And we can use service mesh in a portable way, not just on Kubernetes, but everywhere. There's nothing that prevents service mesh, as a pattern, from running on virtual machines, for example, if you wanted to. And also, from a pattern standpoint, this is something that we can use not just for microservices, but for anything else we might be running today. Even when a monolith talks to a database, chances are we're managing that network. With service mesh, we can make those monoliths, those services, much simpler to build by abstracting that network management away into the service mesh. This is what service mesh is. Now, likewise, Envoy is an implementation of a sidecar proxy that we can use for managing the network. For the control plane, there are also different implementations out there, each one with its pros and cons. And one of them — the one that we're going to be addressing today — is Kuma. This is the old logo of Kuma; we're updating Kuma with a new logo coming in the next version, and this is going to be the new logo of Kuma. Kuma is also a project that is in the process of being donated to the CNCF as a sandbox project. Last Friday we started the process — there is a process to follow — but the goal of Kuma is to be a vendor-neutral, open control plane built on top of Envoy. So we're going in that direction. So let's talk about Kuma for a little bit. Kuma is a control plane. It's open source. It was released on September 10, 2019 by Kong, and it's an Apache 2.0-licensed project. It's written in Golang, and it provides a native Envoy integration. So from a technical standpoint, Kuma is a control plane that implements the xDS APIs so that it can communicate with Envoy. It has been written with a very clear design and goal in mind. First and foremost, Kuma has been built by Kong, and at Kong we really value ease of use of the API gateway.
We think that simplicity in using the product really is a feature. And service mesh has been very complex for a long time, but it doesn't have to be that way. So with Kuma, we wanted to create something that was simple, portable, and extensible. Kuma is first and foremost easy to use. It's a very simple, lightweight, and extensible control plane that supports Envoy out of the box. It provides policies that we can use out of the box for managing our traffic, for securing it, for monitoring and observing it. And it comes with support not only for multiple platforms, but also for multi-tenancy. So Kuma can be executed on Kubernetes in a native way, and when running on Kubernetes, Kuma will automatically inject the Envoy sidecar proxy without us having to do anything about it. It's just going to happen, and Kevin will show you later how that works. But basically, by using Kuma, we don't need to know how to use Envoy. Kuma abstracts away that complexity, so that all you need to know is how to deploy Kuma and use the policies. And that's it. Of course, if we are power users and we want to go deeper and change how the Envoy configuration is being created, well, we can still do that, but it's not required. Kuma comes out of the box with native support for Kubernetes, but Kuma can also run on any other platform. Like I said, there is no reason why service mesh as a pattern cannot be implemented on Kubernetes as well as on virtual machines. At the end of the day, if we can manage to deploy our sidecar proxies and we can deploy a control plane, the service mesh pattern can be used pretty much everywhere. If anything, if you are transitioning to Kubernetes, implementing service mesh on virtual machines will make it easier to transition some workloads to Kubernetes, because we're removing one extra concern — the network management — from the migration process, therefore reducing the surface area of the things that we have to migrate. So if anything, it enables that migration to be smoother, because the network has already been taken care of for us. So it runs on Kubernetes, it runs on pretty much any other platform, it supports hybrid deployments, and it's quite easy to scale. Kuma is one component: we add more nodes, more replicas, if we want to scale it, and we remove them if we don't need as many. It's been multi-tenant since day one. With many other service mesh implementations out there, we need to start a new cluster for each line of business or for each team that requires a service mesh, and over time that becomes operationally very expensive, because we then have to manage all of these clusters. With Kuma, that's not an issue, because we can start one instance of Kuma and then create as many meshes as we want, and then we can determine on our end whether we want those meshes to use the same underlying CA (certificate authority) for provisioning those identities, or whether we want to use different certificate authorities. And all of this is dynamic inside of Kuma. So it's quite simple to get up and running with it, and Kevin will show you in a second what the look and feel of Kuma is. It comes with native CRDs for Kubernetes, and it comes with a native CLI that we can use across the board. So we've built this abstraction layer that abstracts away how we retrieve the Kuma resources on Kubernetes and non-Kubernetes.
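As a rough sketch of what that multi-tenancy looks like in practice — assuming the Kuma 0.x-era Mesh CRD shown later in the demo, so field names may differ in newer releases — one control plane can hold several Mesh resources, each with its own certificate authority settings:

```bash
# Two isolated meshes served by a single Kuma control plane (sketch).
kubectl apply -f - <<'EOF'
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-a            # one mesh per team or line of business
spec:
  mtls:
    ca:
      builtin: {}         # this mesh gets its own auto-generated root CA
    enabled: true
---
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-b            # a second mesh, isolated from the first
spec:
  mtls:
    ca:
      builtin: {}
    enabled: true
EOF
```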
It provides a GUI out of the box that we can use to get up and running with Kuma, and quite frankly, we're doing lots of work to make sure that the entry point for Kuma is as easy as possible. Of course, this is a community-driven project, so we are always looking for feedback from the community. And as a matter of fact, tomorrow we have a community call on Kuma that you can attend if you want. You can check out the details for that call on kuma.io/community. You can find the Slack channel, you can find all sorts of things, including the information for the community call. Again, Kuma is community-friendly. It's the only control plane built on top of Envoy with an open governance, so there is a journey for contributors to become maintainers of Kuma. We have bi-weekly community calls — like I just said, the next one is tomorrow — and we are also the first Envoy-based control plane that is going to be donated to the CNCF. So if you want to leave your plus-one, that is the CNCF issue that has been opened to kick off the donation process into the sandbox. The velocity of the project is quite high. We're trying to learn about the kinds of requirements and feedback the users have around Kuma, and we try to keep quite a good velocity when it comes to implementing our roadmap. We're going to be talking about some of these roadmap items in the community call, but long story short, there are policies out of the box that you can apply once you deploy Kuma to manage the network: things like traffic permissions, mutual TLS, tracing, observability, multi-tenancy, fault injection, and so on and so forth. And as part of the roadmap, we're working towards integrating with more and more complex network deployments, so that we can create a mesh that can run simultaneously on Kubernetes and virtual machines — as part of that overall picture of transitioning, or of integrating some of the greenfield new things that we're building with the brownfield applications we already have running. We're also looking at making it easier to manage Kuma, with SMI integrations and other open integrations, and so on and so forth. These are all items that we are working on with the community, and we prioritize them depending on how many people want them. Simple as that. So this was a small introduction to service mesh, the pattern, and to Kuma, the project. I'm going to leave it up to Kevin now to fire up the terminal and see Kuma in action. Kevin, you're there?

Yep. Thanks, Marco. Can you hear me?

Yeah, I can hear you. I'm going to stop sharing my screen so you can share yours. And then I'll take over once you're done.

Sounds good. All right. Let me try to grab the screen share here. And Marco, can you see my screen?

Yes.

All right. Thank you. So, hey everyone, today I'm going to be illustrating how Kuma works through a demo. And to do that, we built a demo application in order to illustrate how Kuma might run in your production environment. Our application here is a marketplace that sells clothing items and is split up into four services, to represent how you might break apart your monolith and distribute the logic of the application. We have a front-end app which allows you to visualize the marketplace, a backend API built in Node, and then Postgres and Redis databases to store the items in Postgres and the reviews of each item in Redis.
So, as Marco mentioned earlier, there are various ways you can deploy Kuma, and for this demo application we built out a deployment path for both Kubernetes and Universal. But since this is a CNCF webinar, today I'm going to be focusing on Kubernetes. I highly encourage you to check out the Universal path, because I think one of the big value propositions of Kuma is how easily you can get it up and running on Universal as well. Everything I cover here today can be found in the Kuma demo repository, which houses the demo application and the deployment instructions. You can also find Kuma itself on GitHub at kong/kuma, without the demo at the end. So, as you can see here, as you follow the Kubernetes deployment guide on GitHub, everything is written out and you can navigate it through the table of contents. To save some time and not have you watch my containers and cluster spin up, I already have the application deployed. But to illustrate how easy it is, it's merely a kubectl apply of this manifest that we already have built out, which includes the entire demo application across the four services. And if I do kubectl get pods in my kuma-demo namespace — let me increase the font a little here, there you go — you'll see that I have these four pods up and running, and each of them correlates to one of the services that you see up here: the backend, the frontend, the Postgres, and the Redis. But one thing to notice is that in each pod right now, we actually have two containers. That's because the first container is the application itself, and the second one is that Envoy sidecar proxy that Marco highlighted. That's what's going to be doing all the network logic that you're abstracting away from your application. Cool. And to show you how to install Kuma on Kubernetes as well — it's something that I already have installed and up and running — it's merely downloading the kumactl command line tool, and then you use kumactl install control-plane and pipe it down to kubectl. Those two commands will basically get the entire Kuma application up and running on your local machine, your local cluster, wherever you choose it to be. I have it up and running in Minikube. So let's take a look at what this demo application looks like. I already have a port-forward here; as you can see, I'm port-forwarding the front-end service on port 8080. And if I navigate there, you'll see — voila — we have a very simple Kuma marketplace (we're going to have to update the logo here, obviously), where you can shop for horrendously expensive dresses, which are all stored in the Postgres service, and you can look at the reviews, which are stored in Redis. So this shows you that the entire application is working as we would expect it to. Oh, sorry — there you go, lost my cursor for a second.
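For reference, the setup steps just described look roughly like this — a sketch assuming the 0.x-era kumactl, with illustrative manifest and service names; see the kong/kuma-demo repository for the exact commands:

```bash
# Deploy the demo application (manifest name is illustrative).
kubectl apply -f kuma-demo.yaml

# Each pod should show 2/2 containers: the app plus the injected Envoy sidecar.
kubectl get pods -n kuma-demo

# Install the Kuma control plane: kumactl renders Kubernetes resources
# that get piped into kubectl.
kumactl install control-plane | kubectl apply -f -

# Expose the marketplace front end locally (service name is an assumption).
kubectl port-forward -n kuma-demo service/frontend 8080:8080
```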
But this is not enough. As Marco mentioned, by default the network is insecure and not encrypted, right? All the communication between the front end and the back end, and between the back end and Postgres or Redis to fetch these items and reviews, is not secure. So how can we make it secure with Kuma? Well, it's very simple: we just have to visit the mutual TLS policy. So I'm just going to jump to that section here — you'll see that all the policies are listed in the table of contents — and we're going to jump to mutual TLS. Mutual TLS gives you the capacity to add encryption across all your services in the mesh, right? And Kuma ships with a built-in CA, which initializes with an auto-generated root certificate. We also support third-party CAs, and you can configure that by looking at the docs, but today I'm just going to use the built-in CA. Since mutual TLS is not enabled by default, we have to configure the mesh resource to basically say, hey, we want mutual TLS within this mesh. And how do we go about doing that? It's as simple as updating the mesh CRD with enabled: true. So by default, this is what our mesh looks like, right? This section that I'll highlight right here is what the mesh resource originally looks like, without enabled: true. So if we add that enabled: true to our mesh, you'll see that mesh default is now configured. And if we revisit our application, you'll see that the product API now has an issue, right? It no longer has the right permissions to communicate with our back-end API. And if you were to access the back-end API directly within the container and curl to try to query the Postgres or Redis databases, you'd get the same issue. The Envoy sidecar proxies will not give you the permission to do so. So very quickly, by editing our mesh resource here, we've added a level of security and encryption that did not exist by default — or that previously you would have had to build out within each part of your application, each service that you're deploying. Cool. So now that we have mutual TLS enabled, our application no longer works, and we still need to get it up and running, right? So this is where the next policy comes into play, and that's traffic permissions. Traffic permissions give you the capacity to determine how your services will communicate. Once you have mutual TLS enabled, you have to specify how you want your services to talk to each other. You can be very granular or very overarching. Let's just apply a blanket permission across our mesh. You'll see here, I'm just going to say, for our traffic permission named "everything", I want to add a spec that matches any source to any destination — basically saying I want to allow any service to talk to any destination at once. I'm going to go ahead and apply this using kubectl. You'll see that the traffic permission "everything" is created. And let me close this tab; we're going to go back to our application and refresh. The application works again, because now all these services, with mutual TLS enabled, have the right permissions to communicate with each other. So as an end user, I'm not disrupted at all, but from a networking standpoint, everything is more secure. And that's exactly what we want as we distribute more and more of our services. Before I go on to explore just a few more policies, I want to take a step back. You know, we edited our mesh to enable mutual TLS, and we added some traffic permissions — how do I get a better overview of what's happening within Kuma? And this is where the GUI comes into play. Marco mentioned earlier that Kuma ships by default with a GUI, and this is what it looks like. The GUI gives you an easy overview; eventually we're going to add more functionality onto it, such as an onboarding wizard. But you'll see here that it gives you the capacity to see exactly what our mesh looks like, and what data planes and policies we have in our mesh. If I click the data planes tab here, you'll see that we have the four data planes online, as we have deployed the services across four pods.
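The two resources applied in these steps look roughly like the following — a sketch based on the Kuma 0.x CRDs used in the demo; exact field names may differ in newer versions:

```bash
kubectl apply -f - <<'EOF'
# Enable mutual TLS on the default mesh, using the built-in CA.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    ca:
      builtin: {}
    enabled: true
---
# Blanket permission: any service may talk to any other service.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: everything
spec:
  sources:
    - match:
        service: '*'
  destinations:
    - match:
        service: '*'
EOF
```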
Each of these services, these data planes, can have a tag, and we can eventually do some routing or traffic tracing based off these tags. Okay, so let's dive back into policies. We applied a very blanket policy earlier, but I think we want to be more specific, right? You're never going to just say you want all services to communicate with each other — I think that's a little bit too broad. So we can start looking into very granular traffic permissions. To do that, let me delete the existing traffic permission we have. So I'm going to delete "everything". There you go. And what I'm going to do next is add traffic permissions that basically say, hey, I only want the front end to talk to the back end, and the back end to talk to Postgres. You'll see these two traffic permissions over here. What I'm leaving out here is Redis, right? I'm not giving permission for the back end to talk to Redis. So with that, if I apply these permissions and then try to fetch any reviews, we should not be able to see those reviews. So let's go ahead and apply this. As you can see, I'm going to leave out the Kong front end here; that's another part of this demo application that you can explore, showing how Kong would play into this entire stack. So let me apply this here. There you go: I have two new traffic permissions, front end to back end and back end to Postgres. And if I refresh the application, it looks normal, right? You can see the items on the screen. But if we try to read the reviews, the back-end API no longer has the ability to talk to Redis without the permission. So this is how you can use that granularity to lock down services or to shut off services you don't want. Awesome. So I've just explored adding mutual TLS and traffic permissions. There's a lot more: you can do health checks, you can do traffic routing based off tags, and I'll leave you a link later to explore the different policies we have built out. But there's one more thing I do want to show you, and that's observability — the ability to see what's happening within all the data planes and your entire network as traffic flows through it. And this is where Kuma comes into play with Prometheus and Grafana. The ability to do traffic metrics using these two tools is really powerful. As you can see, we're going to go back and use kumactl install to install the necessary Prometheus and Grafana components into our cluster. I actually have everything installed already, so, perfect. If I do kubectl get pods in the kuma-metrics namespace, we have all the necessary Prometheus and Grafana components up and running, and they're all configured to work alongside Kuma. So it's a really quick way to go about it. Once we have metrics installed, all we have to do is revisit that mesh object we have right here. Earlier, we edited it to enable mutual TLS, but now we want to enable Prometheus, so we need to include this metrics Prometheus section. By editing this, we're basically telling Kuma: hey, we want to collect metrics from all the traffic flowing along our Envoy data planes into Prometheus and visualize it using Grafana. Oh, one thing — I notice that I have the built-in CA here, but this does not have enabled: true, so this version will have it. Now that that's applied, you'll see the mesh object default is configured again.
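Put together, the granular permissions and the metrics setup from this part of the demo look roughly like this — the service names and fields are assumptions based on the walkthrough, so check the kuma-demo repository for the exact tags:

```bash
# Granular permissions: frontend -> backend and backend -> postgres,
# deliberately leaving out backend -> redis.
kubectl apply -f - <<'EOF'
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: frontend-to-backend
spec:
  sources:
    - match:
        service: frontend    # assumed service tag
  destinations:
    - match:
        service: backend
---
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: backend-to-postgres
spec:
  sources:
    - match:
        service: backend
  destinations:
    - match:
        service: postgres
EOF

# Install the bundled Prometheus + Grafana stack.
kumactl install metrics | kubectl apply -f -

# Enable metrics on the mesh by adding a metrics section
# (defaults fill in the Prometheus port and path).
kubectl apply -f - <<'EOF'
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    ca:
      builtin: {}
    enabled: true
  metrics:
    prometheus: {}
EOF

# Later, to view the dashboards (service name is an assumption):
kubectl port-forward -n kuma-metrics service/grafana 3000:3000
```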
We can go back to the GUI to make sure it is. You'll see the mesh entity here has the Prometheus section, and these are the default parameters — you can change them as you see fit. And now, if we were to port-forward the Grafana dashboard, we'd be able to visualize the metrics flowing through our marketplace. But before I do that, I'm just going to, you know, query for some more sundresses, see what absurd prices we have here, generate some traffic, read some reviews — we still can't read reviews, because we don't have the traffic permission. And now let's go ahead and port-forward that. Oh, there we go. Sorry, the Grafana pod. There we go. We still have to port-forward in kuma-metrics. And what? Sorry, 3000, right. There we go. Okay. So if we access localhost:3000, you'll see our Grafana dashboard. By default you log in with admin/admin, and we'll skip this. And here you'll see three dashboards that we have built out for you. First is the mesh dashboard, where you'll see your overall mesh: you can see the number of data planes, the data planes connected to the control plane, the bytes flowing through Envoy as a whole. All these great metrics are starting to flow in here. But you can dive in more granularly and look at specifically what's happening within a data plane. Oh, I'm running long on time — I'm going to go right through this. So, you can look at your data plane metrics, based on which data plane you pick here. And then lastly, you can look at explicitly what's happening between two services. So you can choose, say, the front end as the source and the back end as the destination, and see exactly what's happening between those two. This gives you a really good way to visualize your network and get observability there. So, okay, Marco, since we're running long on time, I'll hand it back to you.

Thank you, Kevin. I was answering a few questions in the Q&A section. Let me share my screen again. There we go. Can you see it?

Yep, looking at it.

Yeah. So, one of the questions that's very common, that has been asked, is how Kuma differs from Istio. We were looking into extending Istio, and we found that there were some fundamental problems with extending Istio that made us want to create a new control plane. One of them is the fact that Istio is not an open project: it doesn't provide an open governance, it's not being donated, and we needed to have something that we could contribute to that was open. That's one of the reasons why we made Kuma open, with open governance — and it's the only control plane that supports Envoy that does that. There are some other control planes that are also open and donated, of course, but they're not built on top of Envoy. The second thing is that Istio has been, for some of the users we've been working with, quite complex to deploy. The deployment modes of Istio have changed in the past, but we decided to provide an easier way to deploy the system since day one, without having to go back on the fundamental architectural decisions we made when creating Kuma. We also work with use cases that are not 100% Kubernetes-ready yet. Kubernetes is a journey for many users out there, and some of them are transitioning their VM-based workloads into Kubernetes, or they're building new workloads in Kubernetes but still want to integrate them with virtual machines. So we needed a system that could run across all of these different environments, and not just for greenfield Kubernetes applications. Therefore, we built a system like Kuma that can run on pretty much anything.
It's portable. You can run it on virtual machines, you can run it on Kubernetes, and we can mix and match — it doesn't really matter, as long as the data planes are able to retrieve their policies from the control plane. We also built an API abstraction layer that allows us to integrate Kuma with CI/CD, in order to retrieve the Kuma resources that have been created in an agnostic way, by using either the HTTP API or the CLI, in addition to the Kubernetes CRD/kubectl integration and the GUI. We have also made Kuma multi-tenant. As you know, if we want to support the entire organization, ideally we would want one large mesh for all of the workloads; but pragmatically, different teams are going to be adopting mesh at different times, and in certain industries, especially financial services, they are requesting some form of isolation between one mesh and another. So with Kuma, we can create as many meshes as we want, but we don't have to do that by creating one Kuma cluster for each mesh. We deploy Kuma once. It's quite simple to use and quite simple to maintain across every environment, and we can do all of that from one single control plane. And most importantly, we can do that in a vendor-neutral way, since Kuma is in the process of being donated to the CNCF. If you have more questions around Kuma, I'll be happy to answer them in the Kuma Slack chat, as well as in the community call tomorrow. I'm going to be sharing a few links after wrapping up this presentation that you can check in order to get in touch with the core team and ask any questions you may have. So today, we looked at a few things. We are transitioning the way we're building our software from monolithic to microservices. As we do that, we get many benefits: we can build software in different technologies and deploy it independently. But we're also going to be introducing more and more service connectivity across the board. The network becomes a bigger part of the overall picture, and we have to manage the network. We don't want teams to be managing the network themselves in their own applications. We want to be able to delegate that to a data plane proxy and a control plane that can manage how the network behavior is being enforced. That is service mesh. And one implementation of service mesh is Kuma. You can download Kuma from kuma.io, and you can check out the GitHub repositories for Kuma and the Kuma GUI. The Kuma GUI, by the way, ships already built into Kuma, so we don't have to deploy a separate component; it's all built in. There's also the Slack channel and Twitter. Well, thank you so much. I've been answering some Q&A, but perhaps, Kevin, do you want to proxy some of those questions to me so I can answer them live?

Yeah, so we have one that came from Kalkish. He asked: does Kuma only work with Envoy, or are there plans to consider other data plane providers?

Today, Kuma works with Envoy. We really like Envoy. We believe that Envoy has been doing a great job at providing a very solid networking primitive for managing all of these network requests. So Kuma is leveraging Envoy for both L4 and L7 communication. That makes Kuma suitable not just for, let's say, more traditional HTTP traffic; Kuma can really be put in front of anything. We can put Kuma in front of anything that's listening on a TCP port. We can put Kuma in front of databases. We can put Kuma in front of systems like Kafka.
We can put Kuma in front of anything that's using TCP as the underlying protocol. And as an extension of that, Kuma also supports gRPC, HTTP, and so on, but really, it can be used for any sort of traffic. So today, we're leveraging Envoy for these kinds of things, and we have not found any limitation in Envoy that prevents us from achieving our goals, which are to create a more secure and manageable network overlay. We have contributed back to Envoy in those instances where we needed something that Envoy didn't provide, and the Envoy community has been very helpful and very collaborative with us. So far, we're not planning to support other data plane proxies, but of course, things change. I'll be happy to hear your feedback in the community channels if you think that Kuma is not doing something it should be doing, or if you have suggestions on supporting other data plane proxies. But for the foreseeable future, Envoy is going to be the data plane of choice.

Awesome. And Marco, we could update the slides real quick: the kuma-mesh Slack link is for folks that have already joined, so I want to include the sign-up link — I'll just send it out to the chat right now. In order to sign up for the Slack, it's actually chat.kuma.io. Just want to make sure everyone gets the correct link there. And then, one last question before we wrap it up, since we're out of time. So, Ariel, do you want to take it from there?

Yeah, those links we'll be sharing. Thank you all for joining today. Marco, Kevin, great stuff. Thank you. Excited to hear that this project is in the process of getting donated to the CNCF, and a great overview, including the Istio comparison. For those who attended, the webinar recording and the slides will be available later today, and those links, like the chat one and the Slack one, will be included at that time. Thank you all for joining us today, and see you at a future CNCF webinar. Have a great day.

Thank you. Thanks, everyone.