Hello, my name is Kevin Wang, open source architect from Huawei Cloud. I have been contributing to Kubernetes for years. Today I want to talk about automating multi-cloud workloads with Kubernetes-native APIs.

You have probably seen the report that 93% of enterprises have a strategy to use multiple cloud providers. The fact that Kubernetes has matured over the years along with the cloud market will hopefully unlock programmatic multi-cloud managed services. However, there are still many challenges. As more and more applications run on top of Kubernetes, you may end up in the following situations: first, too many clusters, with repeated setup caused by incompatible cluster lifecycle APIs and fragmented YAMLs; second, no simple way to do per-cluster customization for applications; third, functionalities such as resource scheduling and autoscaling are limited to the boundary of a single cluster. Even worse, you may find that you are still locked in somewhere due to the gravity of existing systems and data.

As early participants in and adopters of Kubernetes Federation, the major lessons we learned from previous work are as follows. Coupled APIs that embed the application definition, placement requirements, and customization requirements together are complicated, and the API incompatibility really slowed down adoption. The one-to-one mapping between the federation APIs and the workload APIs is redundant, which means too many fields to fill in every time you create an application. And the building blocks are insufficient: the lack of a turnkey solution and too many custom extensions result in no standard.

Therefore, we started thinking about a new project, Karmada. We are targeting Kubernetes-native API support, plus a set of extra policies that provide active-active HA and remote disaster recovery for different scenarios.
To avoid vendor lock-in, besides integrations with mainstream cloud providers, we also provide automatic application allocation and migration across clusters, as well as advanced multi-cluster scheduling features such as cluster affinity, multi-cluster splitting, and rebalancing. Karmada is also able to provide a location-agnostic, centralized API endpoint and access point for clusters, whether they run in public cloud, on-prem, or on edge. Notably, this project was jointly initiated by users from the internet, finance, manufacturing, and telecom industries.

Here's the architectural overview of the Karmada project. Karmada runs its own API server to provide template APIs, which have exactly the same definitions as Kubernetes. Policy APIs, including PropagationPolicy and OverridePolicy, are also key concepts provided by Karmada. In particular, synchronization between the Karmada control plane and the member clusters can be done either by a centralized controller or by agents, which is very useful in different network environments.

A brief example: say you want to deploy your application across three zones to achieve higher availability. The PropagationPolicy API is ready to help. Simply configure the spread-constraint fields as shown on the slide, and you can make the policy apply to all the deployments. And if you have a dedicated image registry for the clusters in, for example, data center one, you can also use an OverridePolicy to replace the image prefix with the specific registry address. Then, when you deploy the application, you can just submit exactly the same YAML that you applied to a single Kubernetes cluster. That's how PropagationPolicy and OverridePolicy are shared and reused.

Here's the link to the Karmada project. Please have a try and let us know your feedback. We also have another talk at QCon that covers more details about the project. All right, that's all for my talk. Hope you enjoy the show. Thank you.
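The two policies described in the example can be sketched roughly as follows. This is a minimal sketch against Karmada's `policy.karmada.io/v1alpha1` API; the Deployment name `nginx`, the cluster name `dc1-cluster`, and the registry address are hypothetical placeholders, not values from the talk:

```yaml
# PropagationPolicy: spread the nginx Deployment across three zones.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx            # hypothetical workload name
  placement:
    spreadConstraints:
      - spreadByField: zone  # group candidate clusters by zone
        minGroups: 3         # require replicas in at least three zones
---
# OverridePolicy: rewrite the image registry for clusters in data center one.
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-dc1-registry
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - dc1-cluster                      # hypothetical cluster name
      overriders:
        imageOverrider:
          - component: Registry              # replace only the registry prefix
            operator: replace
            value: registry.dc1.example.com  # hypothetical dedicated registry
```

With both policies applied to the Karmada control plane, submitting the unmodified single-cluster Deployment manifest (e.g. `kubectl apply -f deployment.yaml` against the Karmada API server) is enough: the workload is propagated across three zones, and the image prefix is rewritten only for the data-center-one cluster.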