Hello, good afternoon everyone. Thanks for joining this session. My name is Wen Han, and I'm a solutions engineer at Kong in Japan. Today, I'm going to introduce Kuma, the service mesh project started by Kong, and show how to use Kuma to do tracing for your cloud-native microservices.

First, today's agenda. I'm going to introduce Kuma, the service mesh, first. I heard some of you already know Kuma, so I'm really very happy to hear that some of the members know Kuma. Next, I'm going to introduce what OpenTelemetry is. The third part is how we can bind Kuma and OpenTelemetry together, and why this combination makes a good solution. And at the end, I'm going to do a demo and show you how easily you can use Kuma and OpenTelemetry to do tracing.

First, let me do a quick introduction of myself. My name is Wen Han. I was a developer, and I have also done technical support. Right now, I am a solutions engineer, which is a pre-sales role, at Kong in Japan. So if you have any questions, or you have some comments on my slides, feel free to leave me a comment on Twitter (X) or drop me a mail. Thank you very much.

So first, the topic is: what is Kuma? These are the famous kumas in Japan — "kuma" means bear in Japanese. You can see the most famous one, Kumamon, from southern Japan; the "beware of the bear" signs from northern Japan; and the famous Kuma from the anime One Piece. And we have another Kuma: Kuma the service mesh. Kuma is currently a CNCF sandbox project, originally developed by Kong, the company. Today, I'm going to introduce the Kuma service mesh.

First, let's talk about network design patterns for connectivity. At the very beginning, when I was a software developer, I built very big monolithic applications. The application provided features that were exposed through one API, and that one API could be exposed through a single API gateway. This is the very traditional way. In the cloud-native way, we have multiple APIs, because we break the monolithic application into multiple services. So you have to control the connectivity between those APIs, and you need logging, authentication, and authorization for each of them. So you put an API gateway between them, and you need the gateway to manage more and more APIs. The gateway handles what we call north-south traffic: traffic coming from outside the cluster to the inside.

The next pattern is the API gateway combined with a service mesh. Imagine that you have a lot of APIs. With the design pattern in the middle, you may have about 10 or 20 APIs, and you can still handle it. But in some big projects you will have thousands of APIs, and new APIs will join your cluster day after day. In that case, it is not practical to attach every API to the API gateway. Instead, we can use a service mesh to take control of all of the APIs, and manage the traffic inside the cluster, which we call east-west traffic. So for east-west traffic — the traffic inside one cluster — you can use Kong Mesh or other service mesh products to control it, and with the gateway you control the traffic from the outside to the inside, the north-south traffic. So today, I'm going to talk about the service mesh side.
So in this diagram, you can see we have a service mesh control plane, and we have data planes, which are the pink parts in the small boxes. Each service has one data plane, and the data plane sits next to the pod and manages the traffic between the pods and across different clusters. These are the basic pieces of a service mesh.

So let's go deeper into how you can use the service mesh more smartly. In the control plane, what we can do is push new configurations down to the data plane proxies. In the control plane we can also set policies, which add new features: for example, zero-trust networking, logging, or rate limiting to control the traffic. You use these features as policies in the control plane, and then the control plane pushes the settings down to the data planes.

The next piece is monitoring. The control plane oversees the whole cluster, so every data plane that is handling traffic reports its metrics and logging information to the control plane, and the control plane exposes those logs and metrics to a third-party platform, like Prometheus, Datadog, the ELK stack, and so on. From the operations point of view, the users — the administrators — can access the control plane and do configuration, apply new CRDs, or use the API endpoints directly to control it. So this is the overview of what a service mesh is.

So the next one is Kuma. Kuma is one service mesh, and everyone knows there are a lot of other service meshes, like Istio. I think some of you may have a question: compared to Istio, what is the good part of Kuma? From my personal point of view, there is no good or bad here, no black-and-white answer. Every service mesh product has its own features and its own strong points. There is no best service mesh product, only the one that best fits your environment.

So back to Kuma. When you are not using a service mesh and you are going to deploy new applications — for example, you have two services that need to communicate with each other — in a real-world development case, you may have a different team for each service. When team A builds its service, the team members need to think: what should I do for logging? What should I do to keep my endpoint safe and expose only the right APIs? How do I handle authentication, rate limiting, and so on? So every developer, while developing their API, must solve the same common problems that every other developer meets. With Kuma, or with a service mesh in general, the developers can focus only on developing the core features of their APIs. For the other things, you can let the Kuma sidecar do it. Kuma takes control of the logging, the authentication, the rate limiting, and the other common tasks. You lift those concerns up a layer and put the common tasks in the service mesh layer, so developers do not need to redo the same boring stuff on every API. This is the benefit of using Kuma as the service mesh for your cloud-native development. So next, I'm going to talk about how you can deploy Kuma.
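As a quick illustration before we move on: this is roughly what enabling one of those mesh-level features looks like. It's a minimal sketch of a Kuma Mesh resource with the builtin mTLS backend turned on, which is the foundation for the zero-trust policies I mentioned; the names `default` and `ca-1` are just example values.

```yaml
# Minimal sketch: enable mutual TLS for the whole mesh.
# With mTLS on, traffic between data planes is encrypted and
# identity-verified, which is what zero-trust policies build on.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default          # the mesh to configure
spec:
  mtls:
    enabledBackend: ca-1 # which certificate authority backend to use
    backends:
      - name: ca-1
        type: builtin    # let Kuma generate and rotate the CA itself
```

You apply this once at the mesh level, and every service gets encrypted, authenticated traffic without any code change.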
So this is the very simple one, which we call the standalone deployment. In this deployment, we have one control plane, and each service has one data plane as a sidecar. The sidecar reports to the control plane, and the control plane pushes new configuration to the data plane. The use case for this architecture is, for example, when Kuma only needs to be available in one cluster or one VPC.

The next one is maybe the best part of Kuma deployment: the multi-zone deployment type. In this type, we have a global control plane and zone control planes. With this, you can have multiple isolated service mesh zones, one for each of your clusters. And Kuma is not only available for cloud-native environments; it is also available for bare metal, for VMs, and for other traditional deployments. So you can mix them, which means you can have one zone for Kubernetes and one zone for bare metal or VMs, all controlled from one global control plane.

You can also see that we have a zone egress and a zone ingress here. If you deploy the same service in each zone, this acts like a high-availability feature across zones: if one of the services goes down, the egress and ingress will find the same service in another zone. So if a service goes down in zone A, Kuma will find the same service in zone B and keep providing the service with no issue. It's a failover mechanism, and Kuma can do that too. In a real use case, you can create your own Kuma mesh zones across multiple regions and combine all of them under one global control plane, which reduces the cost of management and operations.

So the next topic is policies. A policy means you can add new features at the service mesh level without changing anything in your service code. In these examples, we can have mTLS, we can use ACLs, and we can use Kuma's ingress and logging. All of these policies can be done in the Kuma service mesh layer. Currently, we have more than 20 policies, and they are very easy to use. Applying a policy is very simple: you just run a kubectl command to apply a CRD, and the CRD applies the new policy to the services, with no code changes needed.

So here is an example of how you can work with policies in a service mesh deployment. I have a sample application here: from the browser, I access the demo app, and the demo app saves a counter to a Redis server. So the demo app talks to the Redis server over a service connection. On the left, there is one policy, called TrafficPermission, which we will set. This is the zero-trust part: if I remove these permission entries, you can see on the right side that the demo app can no longer connect to the Redis server. That means the change was applied to the control plane, the control plane pushed the settings to the data plane, and the data plane said: I don't have any permission for traffic between these two services, so the connection is denied. So you can see this is very easy: you can control the permissions from one service to other services just using Kuma policies, OK? OK, so that's it for the Kuma part.
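For reference, a TrafficPermission policy like the one in that demo looks roughly like this. This is a sketch: the service tags follow Kuma's naming convention from its quickstart demo, but the exact values here are assumptions, and your service names will differ. Note that these permissions are only enforced once mTLS is enabled on the mesh.

```yaml
# Sketch of a TrafficPermission policy: only the demo app may
# talk to Redis; other sources are denied once mTLS is enabled.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: demo-app-to-redis
spec:
  sources:
    - match:
        kuma.io/service: demo-app_kuma-demo_svc_5000   # assumed service tag
  destinations:
    - match:
        kuma.io/service: redis_kuma-demo_svc_6379      # assumed service tag
```

Deleting this resource is what cuts the connection in the demo: with no matching permission, the data plane denies the traffic.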
For the next one, I'm going to introduce what OpenTelemetry is. OpenTelemetry is a collection of tools that can be used to export telemetry data, including traces, from your software. The red part is an example for one application: you can see this request costs about 70 milliseconds, and inside the application we can see which part costs the most. You can use this tracing information to troubleshoot your applications, especially for performance. Sometimes you may feel that your application's performance is a little bit laggy, or some delay is happening, and you want to see which part you need to change and which part has an issue. With OpenTelemetry tracing information, you can easily pinpoint which part has the issue, OK? OK, so that's it for the OpenTelemetry part.

So the next one is combining Kuma and OpenTelemetry. What happens when you use Kuma and OpenTelemetry together? Imagine a real-world use case: you have a very complex cloud-native system, and your user tells you, "I just feel that something is working, but it's a bit slow. Can you troubleshoot that?" In that kind of very big cluster with very complicated applications, it is very hard to pinpoint which part has the issue. In the traditional way, you may need to read the logs, or use tshark or Wireshark to follow the packets from here to there. I did that before, when I was a support engineer, and it is a nightmare to trace the traffic from this interface to that interface until you can finally say, OK, here is the performance issue. With Kuma and OpenTelemetry, you can very easily see which part has the issue, because you can see the tracing information across your applications. So that is the good part of using Kuma with OpenTelemetry.

Using tracing in Kuma is very simple: we use a CRD called MeshTrace. You create a MeshTrace, and in the config you set which mesh you want to output the tracing data from. By default, it is the mesh called default. You can have multiple service meshes in different zones, and here I am setting it to output the tracing traffic from the default mesh.

Let's go down to the spec. The backend type here is OpenTelemetry, and here is the important part: the endpoint is the OTLP collector, the OpenTelemetry Collector, for this mesh tracing. So we do not send the tracing data directly to a platform like Splunk or New Relic; we use an OpenTelemetry Collector to collect the tracing data first, and then the collector sends the data on to the vendors. Why do we do that? Because the collector gives you extra benefits when you have multiple tracing sources. If you want to send traces from multiple zones to one vendor, you can use the OTLP collector to gather all of the information in one place, so you can reuse the same collector setup. And the collector also has the ability to send the tracing data to different vendors: Honeycomb, Jaeger, New Relic. It can do that very smartly.
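To make that concrete, here is a minimal sketch of what an OpenTelemetry Collector configuration for this setup could look like. The Honeycomb endpoint and the `x-honeycomb-team` header follow Honeycomb's documented OTLP ingest conventions, but treat the exact values, and the environment variable name, as placeholder assumptions.

```yaml
# Sketch of an OpenTelemetry Collector config: receive OTLP traces
# from the mesh, batch them, and export them to Honeycomb.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # the mesh's tracing backend sends here
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}                      # batch spans before export

exporters:
  otlp:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${HONEYCOMB_API_KEY}  # assumed env var; keep keys out of the file

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

To fan traces out to Jaeger or New Relic as well, you would add more exporters to the same traces pipeline.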
So I'm going to show how to configure this OTLP collector in the demo. For the demo part, today I'm going to create some microservices, and I will show you how to apply the Kuma service mesh to them. Then I'm going to create a MeshTrace policy for my Kuma mesh, and the MeshTrace policy will send the OpenTelemetry information to the OTLP collector. The collector will send the data to the OTLP backend, which for this demo is Honeycomb. OK, I think this is the end of my slides, so let me show you my demo environment.

First, let's see the collector. This is the OpenTelemetry Collector configuration. The most important part is here: the endpoint points to Honeycomb as the OpenTelemetry backend, and I send my API key with it. Don't worry, I will revoke this key after the session, so no worries. And actually, with the collector you can do more things: for example, you can do batching for your tracing information. And you can have multiple ports: here, a gRPC port to receive OpenTelemetry data, and an HTTP port to receive OpenTelemetry data. Easy to use, isn't it?

For the next one, I'm going to create a new application and make it join the service mesh cluster. First, I'm going to create a namespace called mesh-for-devs, and I'm going to install the application in this namespace. Before that: if you want to add new applications to the Kuma service mesh, all you need to do is label the namespace. So I label the mesh-for-devs namespace — oh, I'm sorry, devs, dev, OK, a typo — with a new label, the sidecar injection label kuma.io/sidecar-injection, set to enabled. After this one, every pod in this namespace will get a sidecar injected, and the sidecar will take care of the traffic from the pod. Next, in mesh-for-devs, I apply the manifest, which creates two services and two deployments. OK, you can see it here: if I get all the resources in mesh-for-devs, you can see there are two containers in each pod. One of them is the application, and one of them is the service mesh sidecar.

So next, I'm going to expose these applications to the outside. For that, I'm going to use an Ingress — I'm using the Kong Ingress Controller, by the way. It's very easy: just apply this Ingress. OK, now I have pods, I have services, and I have an Ingress to expose them. So let me try to access it. This is the default endpoint of my demo environment, and I'm using /work as a subpath to reach my API endpoint. OK, it looks good: I worked for 1.5 seconds and went to four meetings. So everything looks good.

OK, next I'm going to show how you can use MeshTrace to export the OpenTelemetry information. Let me see, my data is at, yes, the OTLP MeshTrace. This is the CRD — how you define a CRD to send the OpenTelemetry data to the OTLP collector, just as I said. The important part is here, the backends: I'm using OpenTelemetry as the backend type, and for the endpoint I'm pointing at the OTLP collector.
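For reference, the MeshTrace policy I'm applying looks roughly like this. This is a sketch: the collector's service address is a placeholder for whatever namespace and service name your collector actually runs under.

```yaml
# Sketch of a MeshTrace policy: tell every data plane in the
# default mesh to ship traces to the OpenTelemetry Collector.
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: otel-tracing
  namespace: kuma-system
  labels:
    kuma.io/mesh: default       # which mesh to trace
spec:
  targetRef:
    kind: Mesh                  # apply to the whole mesh
  default:
    backends:
      - type: OpenTelemetry
        openTelemetry:
          # assumed collector service: <name>.<namespace>.svc, gRPC port 4317
          endpoint: opentelemetry-collector.observability.svc:4317
```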
So the name is a bit long, but it is the OpenTelemetry Collector service. Kuma will send the OpenTelemetry data to this endpoint, and the collector will send the tracing data on to Honeycomb, outside the cluster. And I'm just adding some tags, like OSS-Japan2023, and we have some headers here. OK, everything looks good, so I'm just going to create it — maybe I have already created this MeshTrace. kube — a typo — kubectl... OK, it already exists, no problem, sorry.

So now I'm going to generate some traffic to let the application produce some tracing information, OK? I'm going to use the Insomnia application to send the traffic: I can run it automatically, repeating on an interval, sending one request per second. OK, it's a little bit laggy, but it's OK. I think it has already collected some of the information.

So yeah, this is the Honeycomb endpoint for the application's OpenTelemetry information, and the dataset is called meeting. Here, I can see we have some traces. Yep, let's see the details for this trace. You can see that my application call costs one second, and it has four sub-spans, which take 250 milliseconds each, OK? So you can see my application working: it went to four meetings. These are the four meetings of my application, each meeting costs about 250 milliseconds, and my day costs one second for four meetings.

So with this example, you can see it is very easy to export your application's OpenTelemetry data to an endpoint. I'm using Honeycomb for my endpoint, but besides this one, you can use other OpenTelemetry endpoints, like Jaeger, the ELK stack, or New Relic. This is an open standard, so you can choose whatever you want, OK?

OK, so this is my backup, just in case my demo went wrong. OK, thanks for listening. This is all I have for my presentation. Do you have any questions you want to discuss? OK, no problem. OK, thank you very much.