Hello, everyone. Welcome to my presentation. I feel very honored to be here. Today, I will share about sailing multi-cloud traffic management with Karmada.

First, let me introduce myself. I'm Zhonghu Xu, an open source engineer from Huawei. Previously, I worked in the upstream Kubernetes community as a contributor in SIG API Machinery. Since 2018, I have been part of the Istio community, where I became a co-maintainer of Istio, and I am a steering committee member. I love open source, so if you are interested, please feel free to connect with me on GitHub.

Let's first look at the agenda for today. My presentation includes three parts. In the first part, I will talk about multi-cloud benefits and challenges. Next, I will talk about Karmada, a young CNCF project which handles multi-cloud workload orchestration with the Kubernetes API; it is natively suitable for multi-cloud application management. Third, I will talk about multi-cloud traffic management. This may be the trickiest point users face when adopting multi-cluster on multi-cloud.

First, let's look at multi-cloud. The multi-cloud strategy is widely adopted, so let me give a quick explanation. Multi-cloud is when an individual or organization uses more than one cloud provider for their IT needs. This approach enables companies to better support their business technology and service reliability requirements while mitigating over-reliance on a single cloud provider that might not accomplish all tasks effectively. A multi-cloud strategy can encompass private, public, and hybrid clouds, and allows businesses to seamlessly manage multiple providers and their virtual infrastructure while achieving greater efficiency. As the figure indicates, 89% of the respondents report having a multi-cloud strategy. It is the de facto standard, but the ways in which organizations arrive at multi-cloud vary depending on their needs and the mix of providers chosen. Most are taking a hybrid approach: you can see in the figure that 80% of the respondents adopt a hybrid cloud.

Okay, why do we choose multi-cloud? Speaking of multi-cloud benefits, it is commonly compared with single-cloud. I have listed four points here.

The first is avoiding vendor lock-in. This is the most persuasive multi-cloud benefit. It keeps organizations from being locked into one vendor or service provider. A multi-cloud approach allows businesses to deploy services from multiple specialist providers instead of relying on one vendor for all their software requirements.

Second, meeting compliance requirements. As data privacy regulation continues to ramp up, the need for companies to maintain strict levels of compliance is increasing. Geographic locations need to be accounted for in order to satisfy regulations such as Europe's GDPR. Aside from a company integrating its own on-premises data centers, a multi-cloud strategy is typically the most effective and efficient approach.

Third, enhanced resilience. Outages can happen at any time for any cloud, which makes it very risky for organizations to rely on one vendor alone. A multi-cloud strategy offers businesses improved security, better failover options, and enhanced disaster recovery. It ensures data and storage resources are always available, making the organization's cloud deployment more resilient for the long term.

The last point is improved flexibility and scalability. With data volumes increasing exponentially, a multi-cloud architecture is an ideal solution for organizations looking to store and process their data, enabling businesses to scale their storage requirements up and down.

Next, let's look at the multi-cloud challenges.

The first is management complexity. The more cloud vendors you work with, the more resources your IT team has to manage, and each vendor has its own technology stack. Although Kubernetes is widely used and mitigates this, distributing workloads across multiple clouds is still a challenge.

Second is security concerns. The responsibility for protecting data in the cloud is shared between your organization and the cloud providers. Most vendors offer built-in cybersecurity tools to protect customer data; nevertheless, working with multiple cloud services creates additional security risks. For example, you need a multi-layered security approach when working with several cloud-hosting environments. It is also a good idea to carefully monitor your cloud resources and cooperate with trusted cybersecurity partners.

The third is communication across clouds. Each cloud may have its own private network. It is simple to communicate within one network; across clouds, however, this can be a big problem. We need to figure out a way to make traffic flow across clusters and across clouds as if they were a single network. The most reliable way is a direct connection, but it is very expensive.

The fourth is the monitoring system. Each cloud provider makes it simple to monitor your own applications on that cloud, but across clouds the monitoring systems are not consistent, so a unified multi-cloud monitoring layer is required. This is also a big challenge.

Next, let's take a look at the challenges of managing multiple Kubernetes clusters. Firstly, there are too many clusters if users want to manage their own clusters on multiple clouds. You need to create and manage the lifecycle of each cluster, and deal with repetitive setup and the fragmented API endpoints of different vendors. You can use cloud vendor-managed Kubernetes services like GKE, EKS, or Huawei CCE, which have all passed the CNCF conformance tests and provide an API consistent with upstream Kubernetes. Such clusters are fully hosted, so users are freed from cluster lifecycle management, but there are still other challenges, as listed in the figure. The first is workload fragmentation. The second is the boundary between clusters: how to do resource scheduling, how to make applications highly available, and how to do auto-scaling across clusters. The third is vendor lock-in: if we use a commercial product to manage multi-cloud Kubernetes, we may be locked in by the SaaS provider.

Okay, let me introduce Karmada here. Karmada is an open, cloud-native multi-cloud orchestration engine. With Karmada, it is easy to build an infinitely scalable cluster fleet and use multi-cloud clusters just like a single Kubernetes cluster.
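To make this concrete, here is a minimal sketch of how a member cluster is represented in the Karmada control plane once it has been registered, for example with karmadactl join. The cluster name, API endpoint, and secret reference below are illustrative, not taken from the presentation:

```yaml
# A registered member cluster appears as a Cluster object
# in the Karmada control plane.
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1                          # illustrative cluster name
spec:
  apiEndpoint: https://172.18.0.3:6443   # illustrative API endpoint
  syncMode: Push                         # control plane pushes resources to this cluster
  secretRef:                             # credentials for accessing the member cluster
    namespace: karmada-cluster
    name: member1
```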
Why do we choose Karmada? I have listed six points here. The first is the Kubernetes-native API. It is open and neutral: you can join the community, and you can use it to avoid vendor lock-in. It works out of the box, so you can use it directly. It provides rich multi-cluster scheduling policies, like cluster affinity, multi-cluster splitting, and rebalancing of applications. And it provides a centralized control plane, so you can manage everything from one central place.

Next, let's look at the Karmada architecture. The Karmada control plane consists of three components. The first is the Karmada API server, which provides a Kubernetes-compatible API. The second is the Karmada controller manager, which includes several different controllers; I won't explain them one by one here, but if you are interested, please feel free to check them out on the official website. The third is the Karmada scheduler, which schedules resources to different clusters according to the policies users provide. etcd stores all the Karmada resources, but neither the controller manager nor the scheduler talks to it directly; they talk to the Karmada API server to access them.

Okay, next, let's look at the Karmada API workflow. This is a little hard to understand, but let me explain. As you can see here, the Karmada API includes three kinds of APIs. The first is resource templates: these are Kubernetes-native API objects like Deployments, Services, Namespaces, and so on. The second is the propagation policy. A propagation policy is used to define multi-cluster scheduling and spreading requirements; it specifies which clusters you want to spread the resources to. The third is the override policy. An override policy provides a standard declarative API for specifying cluster-specific configuration automation. For example, we can override the image prefix for a different cloud or image registry, or override the storage class according to the cloud provider.

The following diagram shows how Karmada resources are involved when propagating resources to member clusters. First, the policy controller and the Karmada scheduler bind the resources to different clusters. Once bound, the workers create the resources in the member clusters.

Running a multi-cluster application with the vanilla Kubernetes API is very simple, as the example shows. On the right side, we define a Kubernetes-native Deployment, just as we would in a single cluster. On the left side, a propagation policy is used to schedule the Deployment to different zones.
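Here is a minimal sketch of that pairing, modeled on Karmada's quick-start sample; the nginx Deployment and the member cluster names are illustrative:

```yaml
# A plain Kubernetes Deployment, exactly as in a single cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
---
# A PropagationPolicy tells Karmada where to spread the Deployment.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:        # illustrative member cluster names
        - member1
        - member2
```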
Okay, let's talk about the propagation policy in detail. A propagation policy includes two important spec fields.

The first is the resource selector, which matches the resources that the propagation policy will apply to. It is very flexible: we can select one resource by name, or even by label selector.

The second is the placement, which represents the preferences for propagating resources. It has three sub-fields. The first is cluster affinity, which can be defined to express the preferred clusters to go to, just like node affinity. The second is cluster tolerations: similar to pod tolerations and node taints, we can define taints on member clusters to reserve them for special usage, and use the tolerations here to tolerate those taints. The last is spread constraints, the constraints for spreading resources among member clusters. The example shows that we want to spread the resources across three different zones.

Okay, now look at resource customization among clusters using the override policy. This example does some customization of the deployments in different clusters. We have three clusters: cluster1 and cluster2 are located in DC1, which is an on-premises environment, while cluster3 is a hosted public cloud Kubernetes cluster. To save image download bandwidth and latency, we may override the image registry per cloud. Here, for the deployments in cluster1 and cluster2, we want to download the image from a local registry, and for cluster3, we want to download the image from the registry managed by the cloud provider. We can use an override policy like the following to achieve this.
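A sketch of such an override policy using Karmada's image overrider; the registry hostnames are made up for illustration:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-registry-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    # Clusters in the on-premises DC pull from a local registry.
    - targetCluster:
        clusterNames:
          - cluster1
          - cluster2
      overriders:
        imageOverrider:
          - component: Registry
            operator: replace
            value: registry.dc1.local          # hypothetical on-prem registry
    # The hosted public-cloud cluster pulls from the provider's registry.
    - targetCluster:
        clusterNames:
          - cluster3
      overriders:
        imageOverrider:
          - component: Registry
            operator: replace
            value: registry.cloud.example.com  # hypothetical cloud registry
```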
Okay, let's take a look at a typical multi-cloud traffic flow. Multi-cloud traffic mainly contains two kinds of traffic.

First, north-south traffic. It originates from end users, who can come from anywhere in the world. It first goes through a load balancer, which could be a hardware device like an F5 or a software LB like LVS. Then it goes to the ingress gateway before entering a cluster. For north-south traffic, I think there is no big difference from a single-cluster deployment.

Second, east-west traffic: cross-cluster inter-service communication. Applications can be deployed in any cloud, and an application can even be deployed across clusters. Sometimes they need to communicate with each other. Then how can we do service discovery? How do we build connectivity across clouds? How do we make sure the traffic is secure? These are all challenges of multi-cloud traffic management. Next, I will talk about them separately.

Since Kubernetes is so widely adopted, we are talking about challenges based on multi-cloud Kubernetes. The first is network reachability: the container network is unreachable between different clusters. Of course, we can build connectivity in other ways, like a direct connection, but that costs additional spending. We can also build VPC peering, but that is very complex.

The second: native Kubernetes has no way to do cross-cluster service discovery, so we have to make use of external service registries, such as Eureka or ZooKeeper, to register the services of remote clusters by hand.

The third: a remote cluster's service domain is unresolvable in the local cluster. Kubernetes is only capable of resolving local cluster services. For a service that spans different clusters, the domain is resolvable locally, but the remote service instances are hard to access, because Kubernetes does service load balancing via iptables, which only redirects requests to local cluster service endpoints.

The fourth: load balancing in Kubernetes is round-robin, and it only handles L4 protocols; higher-level protocols are hard to support.

And the fifth point is weaker security. Even if the inter-cloud network is built, it is very risky to talk to each other in plain text, especially over the public internet.

Now let's look at the Karmada way: how can we solve these challenges?

First, Karmada can build cross-cluster network connectivity based on Submariner. Submariner is a tool built to connect the overlay networks of different Kubernetes clusters.

Second, Karmada can export and import services between clusters with the Multi-Cluster Services API. The Multi-Cluster Services API aims for minimal additional configuration, making multi-cluster services as easy to use as single-cluster services, while leaving room for multiple implementations.
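Here is a minimal sketch of that flow with the Multi-Cluster Services API; the service name and namespace are illustrative. With Karmada, the ServiceExport and ServiceImport objects are themselves delivered to the right clusters with propagation policies:

```yaml
# In the cluster that owns the Service: mark it as exported.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve        # illustrative Service name
  namespace: demo
---
# In the consuming cluster: import it.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
  namespace: demo
spec:
  type: ClusterSetIP
  ports:
    - port: 80
      protocol: TCP
```

In Karmada's implementation, the consuming cluster can then reach the remote service through a derived service (derived-serve in this sketch).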
The third way is to integrate with a mature service mesh solution like Istio. Istio is the most popular service mesh project, able to support application-level management, especially traffic management, security, and observability. With the help of Istio, Karmada can seamlessly make east-west traffic flow between multiple clouds as natively as in a single cluster. With Istio, layer-7 traffic like HTTP and gRPC can be well controlled, and the inter-cloud traffic is securely encrypted, even over the public internet. Next, we will focus mostly on the Istio way to facilitate multi-cloud traffic management.

Okay. Istio requires all the applications to be injected with a sidecar, and the traffic in and out of them is intercepted by the sidecar. The sidecar is then responsible for traffic routing, load balancing, and encryption. How does Istio achieve this? The behavior differs a little depending on the network model.

In a flat network, Istio list-watches all services and endpoints from the underlying clusters managed by Karmada, generates xDS from them, and then pushes the xDS to the sidecars. A sidecar can see the pod IP addresses of the remote clusters, so it can redirect a request directly to a pod IP address. At the same time, the caller is completely unaware of this.

In a non-flat network, Istio still list-watches all the services and endpoints from the underlying clusters, and then generates and pushes xDS to the sidecars. But the generated EDS does not contain the pod IP addresses of remote clusters; it contains the address of the remote gateway instead. So the sidecar redirects the request to the remote cluster's gateway. This is the biggest difference from the flat network.

Next, look at the DNS resolution Istio provides. Istio introduced a DNS proxy for multi-cluster service resolution. Previously, we needed an additional component, istio-coredns. It was not very friendly: it required service entries to be created to shadow the services. Istio extended a new kind of xDS, NDS, to facilitate DNS resolution. NDS is used by the DNS proxy to fetch DNS name tables from the Istio control plane, and it then builds a DNS lookup table to serve DNS resolution for the local application. The name table contains all the services across clusters. For a Kubernetes service, we can resolve a remote cluster's service to its ClusterIP address.

Okay, next, take a look at the flat network, also called the single network model. All the clusters reside in the same network, and a pod in cluster one can talk to a pod in cluster two directly. Thus, no gateway is needed, and cross-cluster communication will not increase latency. All the capabilities of a single cluster are inherited, such as load balancing and native traffic routing. But it also comes with drawbacks, like the complexity of building the connectivity and weaker security, since all the workloads are within a single network and there are no boundaries. And it requires non-overlapping service IP ranges across the clusters.

Compared with the flat network, let's take a look at the multi-network model. Here, a service mesh can span different networks. Each cluster resides in its own network, and pods cannot talk directly to pods in other clusters. It provides better isolation: each cluster is independent and more secure. Cross-cluster communication must go through an east-west gateway, so the challenge is cross-cluster service communication, which depends on the east-west gateway. It works in TLS auto-passthrough mode, which requires a network filter, sni_cluster, to map the SNI from the TLS handshake to cluster names. The gateway then redirects the request to the destination cluster.

Okay, that is all of today's presentation. Let's have a recap. First, we talked about the multi-cloud evolution and the challenges it brings. In the second part, we talked about what Karmada can do for multi-cloud. In the third part, we talked about inter-cloud communication with Istio. I hope you have gained a good understanding of multi-cloud applications from my presentation. Thank you. Now it's Q&A time. Thank you, everyone. Bye-bye.