Hello, KubeCon. Good morning, everyone. I'm Dennis Gu from Huawei. I'm very glad to share some thoughts and visions about the intersection of cloud native and AI. The topic of my keynote today is unleashing the intelligent era with continuous open-source innovation.

According to Epoch, the computing power required by foundation AI models increases 10 times every 18 months, which is five times the growth rate of general computing power indicated by Moore's Law. This new "Moore's Law" brought by AI, especially large-scale AI models, imposes key challenges on the cloud-native paradigm. First, low average GPU/NPU utilization results in high cost for AI training and inference. Second, frequent failures in large model training clusters limit training performance. Third, parallelism policies and hardware acceleration configuration make large-scale AI model development more complicated. And lastly, massive AI inference deployments suffer from unpredictable user-access latency and privacy risks.

So what can be done against this new wave of challenges? During the past few years, Huawei Cloud has already made substantial progress on addressing these challenges in the AI innovation domain with several cloud-native projects.

KubeEdge is a CNCF project built upon Kubernetes that extends native containerized application orchestration and device management to the edge. By leveraging the advantages of KubeEdge, Huawei Cloud solved the difficulty of understanding ambiguous natural-language input for robots: a large language model on the cloud side performs NLP understanding and translates the input into precise command sets that can be understood by the robots on the edge and terminal side. Furthermore, KubeEdge brings cost-efficiency benefits: it reduced the end-to-end deployment cycle by 30%, improved robot management efficiency by 25%, and shortened the integration of new types of robots from months to days.

Volcano is another CNCF open-source project from Huawei Cloud, built to better support AI and machine learning workloads running on Kubernetes with a rich set of job management features and advanced scheduling policy enhancements. Xiaohongshu is a top content-sharing community in China with more than 100 million monthly active users. Its recommendation model has nearly 100 billion parameters, and one training job takes hundreds of parameter servers and workers. With the help of the Volcano project, algorithms such as topology-aware scheduling, bin packing, and SLA-aware scheduling were introduced, so overall training performance improved by 20% and O&M complexity was greatly reduced.

Looking ahead to Cloud Native Next, we are considering crafting a new generation of serverless AI platform to address the productivity and cost-efficiency challenges mentioned above more comprehensively. For developers, a serverless AI platform needs to allow resource-free model development and deployment, so that AI developers can focus purely on model architecture and simply specify SLAs and SLOs for their training and inference jobs, without choosing hardware flavors of CPUs, GPUs, or NPUs, manually configuring cluster scaling thresholds, or defining 3D parallelism policies. They would no longer suffer from frequent failures on large-scale training clusters, and would not even need to care about the exact deployment locations of their inference models.
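To make the "specify SLOs, not hardware" idea concrete, here is a minimal, hypothetical sketch in Python. The TrainingJob and TrainingSLO structures, their field names, and the submit() call are illustrative assumptions, not an existing Huawei Cloud or Kubernetes API; the point is what the developer does and does not have to declare.

```python
# Hypothetical sketch: a developer declares only the model and its SLOs;
# the platform derives hardware flavors, parallelism, and scaling on its own.
from dataclasses import dataclass


@dataclass
class TrainingSLO:
    max_time_per_step_ms: float      # target step latency for training
    max_cost_per_hour_usd: float     # budget ceiling the scheduler must respect
    recovery_time_objective_s: int   # how fast the platform must recover from failures


@dataclass
class TrainingJob:
    name: str
    model_image: str                 # container image holding the model code
    dataset_uri: str
    slo: TrainingSLO
    # Note what is deliberately absent: no GPU/NPU flavor, no replica count,
    # no 3D-parallelism layout, no autoscaling thresholds, no placement hints.


def submit(job: TrainingJob) -> None:
    """Illustrative stand-in for a serverless AI platform endpoint.

    A real platform would profile the model, pick device types and counts,
    derive data/tensor/pipeline parallelism, and keep the job within its SLOs.
    """
    print(f"submitted {job.name}: platform will size resources to meet "
          f"{job.slo.max_time_per_step_ms} ms/step under "
          f"${job.slo.max_cost_per_hour_usd}/h")


if __name__ == "__main__":
    submit(TrainingJob(
        name="recsys-pretrain",
        model_image="registry.example.com/recsys:latest",
        dataset_uri="s3://example-bucket/clicks/",
        slo=TrainingSLO(max_time_per_step_ms=350.0,
                        max_cost_per_hour_usd=120.0,
                        recovery_time_objective_s=60),
    ))
```

Everything below the SLO line becomes the platform's problem rather than the developer's, which is exactly the division of labor described above.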
From the cloud service provider perspective, a serverless AI platform needs to improve resource utilization and power efficiency: maximize spatial and temporal resource sharing across multi-tenant hybrid workloads of training and inference deployments; achieve higher throughput and resource utilization for distributed training via automatic resource scale-up and scale-down and automatic failure detection and recovery; and support unified AI resource scheduling across multiple regions and edge nodes in the cloud, enlarging the range of resource provisioning with further improved cost and energy efficiency.

Finally, let's take a look at the reference architecture of the serverless AI platform. The key idea is to build a resource auto-driving engine for large-scale AI based on three layers. The bottom layer focuses on the runtime OS and virtualization, so that CPU, GPU, and high-speed interconnect network resources can be divided at fine granularity, with the right size determined by resource profiling of AI training and inference workloads. The middle layer aims to accomplish control-plane smart scheduling of the multi-tenant, fine-granularity resources specified at the app layer, so that they can be perfectly mapped onto the underlying physical resource pools spanning multiple data centers, with resource utilization maximized and app-layer performance SLAs satisfied at the very same time. In the upper layer, app-driven resource profiling and elasticity play a very important role in bridging the gap between apps and resources: precise resource profiles are derived from both theoretical simulated performance modeling and the monitored performance data of large-scale AI training and inference workloads (a small sketch of this profiling step follows below).

Furthermore, we hope that this target serverless AI platform can be built as an open ecosystem with the joint efforts of the community. Okay, that's all for my sharing. Thank you for listening.
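As promised above, here is a minimal, hypothetical Python sketch of the upper layer's app-driven profiling: it blends a utilization estimate from a theoretical performance model with monitored samples, then rounds the result up to a fine-grained GPU slice, the way the bottom layer might right-size a partition. The function names, the blending weight, and the one-eighth-GPU slice granularity are all assumptions for illustration, not part of any shipped system.

```python
# Hypothetical sketch of app-driven resource profiling:
# combine a theoretical performance-model estimate with monitored
# utilization samples, then right-size to a fine-grained GPU slice.
import math

SLICE = 1 / 8  # assumed finest GPU partition granularity (one eighth of a device)


def profile_gpu_demand(modeled_util: float, monitored_utils: list[float],
                       model_weight: float = 0.3) -> float:
    """Blend simulated and observed GPU utilization into one demand estimate.

    modeled_util: utilization predicted by a theoretical performance model
    monitored_utils: recent measured utilization samples for the workload
    model_weight: how much to trust the model versus the measurements
    """
    observed = sorted(monitored_utils)
    # Use the 95th-percentile observation so the allocation covers peaks.
    p95 = observed[min(len(observed) - 1, math.ceil(0.95 * len(observed)) - 1)]
    return model_weight * modeled_util + (1 - model_weight) * p95


def right_size(demand: float) -> float:
    """Round demand up to the next slice so the SLA holds with minimal waste."""
    return math.ceil(demand / SLICE) * SLICE


if __name__ == "__main__":
    demand = profile_gpu_demand(modeled_util=0.42,
                                monitored_utils=[0.31, 0.38, 0.45, 0.52, 0.40])
    print(f"estimated demand: {demand:.2f} GPU -> allocate {right_size(demand)} GPU")
```

In this toy run the blended demand comes out to 0.49 of a GPU, which right-sizes up to a 0.5-GPU slice; the middle layer would then place that slice onto the pooled physical resources.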