OK, good morning. My name is David Zhang, marketing director for cloud and NFVI at ZTE. Does everybody here know ZTE? ZTE is one of the leaders in the telecommunication solutions area. ZTE's telecommunication solutions are widely used by operators in more than 200 countries and regions. We have not only our own hardware but also a full set of open infrastructure software; for example, ZTE has its own OpenStack distribution. Today I have come here to share ZTE's 5G-ready, mixed, on-demand cloud infrastructure solution with you, because 5G is coming, and it brings a lot of new requirements on the cloud infrastructure. How to build a 5G-ready cloud infrastructure for 5G services is a challenge. My presentation is divided into two parts. The first part covers trends, evolution, drivers, and so on. The second part covers the technical aspects: how to understand "mixed", and how to understand "on demand". First page: 5G is coming. Today I am not here to talk about what 5G is or why we need 5G. I want to emphasize that 5G is coming. The new 5G services, for example URLLC or enhanced mobile broadband, put new requirements on the infrastructure, such as low latency, high bandwidth, and high speed, and also requirements coming from machine-to-machine communications. These requirements demand changes in the cloud infrastructure. I think nowadays most of your cloud infrastructure is centralized: you deploy everything in one centralized data center, or in two with active-standby or geo-redundancy. Your cloud EPC, your cloud IMS, your cloud OSS/BSS, your IT systems: all centralized. But because of the different requirements on latency, bandwidth, and speed, centralized deployment can no longer fulfill all service scenarios. We are looking for a multi-layer, distributed cloud infrastructure architecture.
So today I emphasize a distributed cloud infrastructure. But what are the key technologies and key characteristics of this kind of distributed cloud infrastructure? How do we understand it? OK, let's look at this picture. I will assume the cloud infrastructure will be a hierarchical architecture divided into at most three layers. First, we have the core cloud, where you deploy the centralized, latency-insensitive elements; for example, your PCF and SMF in 5G, and the gateway control plane at the core site. We also have edge data centers and access data centers, so you can deploy different kinds of applications at different levels of data center. For example, take the access data center. We assume the access data center is the closest to the subscribers, providing the lowest latency and the best bandwidth. What kind of elements go there? For example, your cloud RAN and its control plane, multi-access edge computing (MEC) services, and the UPF, the user plane in 5G. These services have extreme requirements on latency and bandwidth, so we should deploy them as close as possible to the subscribers, and some things we deploy in between. I am talking about three levels of data centers, but this is not a strict rule. Maybe your country is not big enough, so you only have a core cloud and an access cloud; or your country is really small, so you only have one level. But I am talking about China, and China is quite big, so there may be three levels of data centers. In total, it is a hierarchical data center architecture. For each data center at the core site, you will have generic hardware, whether an x86 platform or an ARM platform, and so on, and you will have a centralized management system: multi-resource-pool management, across vendors, across private cloud and public cloud, and also AI-based O&M tools. And at the edge, you don't only have generic hardware.
Maybe you also have some accelerators, for example GPUs, FPGAs, or SmartNICs, to provide better performance. And in the edge data center you have not only common hardware accelerators but also different kinds of hardware: for example, integrated hardware, all-in-one nodes, and multi-node hardware, to fit the constraints of edge and access data centers. Also, from the OpenStack point of view, at the access site or edge site you are not looking for a full OpenStack installation. You expect a lightweight OpenStack installation, to save resources: CPU cores, memory, disk space. So you deploy everything on demand. This is the full picture of ZTE's understanding of a distributed cloud infrastructure. In the next few slides I will go into a little more detail and emphasize the most important technical aspects. OK, let's have an overview: what does what ZTE calls the "full-mix, on-demand" distributed cloud infrastructure look like? How do we understand "full mix"? First is the mixed operation and maintenance approach: automation and intelligence on demand. Across the whole lifecycle of the network you will have software installation, software upgrades, data maintenance, and optimization. We have end-to-end maintenance tools to make your network management automated and intelligent, and they are AI-engine based, so you can choose an O&M approach as you demand. Second is deployment, at scale, on demand. You can install a full cloud platform here, but you can also install a lightweight OpenStack there, or even merge compute and storage OpenStack into one node. This flexible deployment mode on demand depends on the location, the kind of data center, and the kind of service you would like to deploy. And also mixed acceleration scenarios: we can say that the CPU alone is not enough now.
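The tiered placement idea described above can be sketched in a few lines of Python. The latency thresholds, function names, and tier labels here are illustrative assumptions, not figures from the talk:

```python
# Hypothetical sketch: choosing a data-center tier for a network function
# based on its latency budget. Thresholds are made-up placeholders.
from dataclasses import dataclass

@dataclass
class NetworkFunction:
    name: str
    latency_budget_ms: float  # roughly how much latency the function tolerates

def place(nf: NetworkFunction) -> str:
    """Pick the tier closest to subscribers that the latency budget demands."""
    if nf.latency_budget_ms <= 5:     # e.g. cloud RAN, UPF, MEC applications
        return "access"
    if nf.latency_budget_ms <= 20:    # e.g. latency-sensitive gateway functions
        return "edge"
    return "core"                     # e.g. PCF, SMF, OSS/BSS

for nf in [NetworkFunction("cloud-RAN", 1.0),
           NetworkFunction("UPF", 4.0),
           NetworkFunction("PCF", 100.0)]:
    print(nf.name, "->", place(nf))
```

The point is only that placement is driven by each function's latency budget, with latency-insensitive elements falling back to the core tier.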
We are looking to GPUs and FPGAs, and in the OpenStack community there are projects ongoing that focus on these acceleration scenarios. ZTE has made a lot of contributions and effort on hardware acceleration: how to manage different kinds of accelerators, how to stay compatible with different accelerators, and how to open this capability to the upper-layer applications. We manage different acceleration capabilities that you can use on demand. And also, mixed cloud resource pools: you may have not only virtual machines and bare metal, but also containers and Kubernetes. You may have not only ZTE resource pools, but also other vendors' OpenStack, for example Red Hat's, or even resource pools from VMware. And nowadays some operators are asking: do we really need to deploy the virtualized EPC or virtual IMS on top of the public cloud? I think this will be one technical trend; it is possible from a technical point of view. In that case you need to manage public cloud resources, like Amazon or Alibaba Cloud. So from a cloud-management point of view, you should have such capabilities: private cloud and public cloud, heterogeneous management. You can choose the cloud resource type on demand, whether container or bare metal, whether ZTE or third party, whether Amazon, whether private cloud or public cloud. So that is how to understand this mix. As I just mentioned, you can manage not only the ZTE cloud, ZTE virtual machines, bare metal, and containers, but also VMware, third-party OpenStack, or even public cloud resources: one cloud management layer to manage all types of cloud resources. This is a very powerful cloud management engine, completely designed by ZTE. And second: nowadays you hear a lot of things.
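The "one cloud management layer over heterogeneous resource pools" idea can be sketched as a driver abstraction. All class and method names below are hypothetical; a real implementation would call the Nova API, the vSphere API, or a public cloud SDK inside each driver:

```python
# Illustrative sketch of unified management over heterogeneous clouds.
# Each backend implements the same small driver interface.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    @abstractmethod
    def create_instance(self, name: str, flavor: str) -> str: ...

class OpenStackDriver(CloudDriver):
    def create_instance(self, name, flavor):
        return f"openstack:{name}"   # a real driver would call the Nova API

class PublicCloudDriver(CloudDriver):
    def create_instance(self, name, flavor):
        return f"public:{name}"      # a real driver would call a cloud SDK

class CloudManager:
    """One management plane registered over many resource pools."""
    def __init__(self):
        self.pools = {}
    def register(self, pool, driver):
        self.pools[pool] = driver
    def create(self, pool, name, flavor="small"):
        return self.pools[pool].create_instance(name, flavor)

mgr = CloudManager()
mgr.register("zte-core", OpenStackDriver())
mgr.register("aws", PublicCloudDriver())
print(mgr.create("zte-core", "vIMS-1"))
```

The design choice is the usual one: the manager stays backend-agnostic, and adding a VMware or container pool means adding one more driver, not changing the management plane.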
For example: Kubernetes on OpenStack, OpenStack over Kubernetes. You have virtual machines, bare metal, and containers. We are also looking at a solution that deploys Kubernetes over OpenStack, so we can use a unified OpenStack management framework to manage the virtual machines, the containers, and the bare metal. OpenStack has been around for many years, while Kubernetes is still relatively young, so we can introduce some mature OpenStack models into Kubernetes and get better maturity: a better Kubernetes, built on OpenStack, to build services on. This is what we call the dual-core drive: Kubernetes and OpenStack, two cores driving service innovation. OK, how do we understand deployment on demand? This is also easy to understand. First, you may have a centralized OpenStack deployed in your centralized data center. But for your edge data centers or extended data centers, you may be looking for a lightweight OpenStack. Or you adopt an OpenStack Cells v2 deployment: you deploy a local message queue and Nova cell node in each data center. Or you even use the availability-zone mode, with everything managed by your centralized data center. Different approaches and alternatives give you flexible deployment of the OpenStack cells, to save your resources: a complete set, a lightweight set, or even nothing local, managed by the centralized one. It is very flexible, to fulfill your diversified requirements. And for the containers, as I just mentioned with the dual cores, you can deploy containers in virtual machines or on bare metal. All the bare metal and virtual machine resources are unified and managed by OpenStack, so OpenStack is the unified resource manager for both virtual machines and bare metal, and you can build containers on bare metal and in virtual machines.
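The three deployment alternatives just listed (full stack, lightweight Cells-style, or nothing local) can be modeled as a small lookup. The service lists below are rough placeholders, not an exact ZTE or OpenStack manifest:

```python
# Toy model of deployment-on-demand: each site picks one of three modes.
# Service names are illustrative, not a precise OpenStack component list.
DEPLOYMENT_MODES = {
    # mode: (services running locally, which control plane manages the site)
    "full":        (["keystone", "nova", "neutron", "cinder", "glance"], "local"),
    "lightweight": (["nova-compute", "local-message-queue"], "central"),  # Cells v2 style
    "none":        ([], "central"),                                       # availability-zone style
}

def plan(site: str, mode: str) -> str:
    services, manager = DEPLOYMENT_MODES[mode]
    local = ", ".join(services) if services else "nothing"
    return f"{site}: runs {local}; managed by {manager} control plane"

print(plan("core-dc", "full"))
print(plan("edge-dc", "lightweight"))
print(plan("access-dc", "none"))
```

The trade-off the talk describes is visible here: the lighter the local footprint, the more the site depends on the centralized control plane.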
So these are the two cores driving it. But of course you can choose: if you don't need OpenStack, because OpenStack occupies resources, you can deploy everything directly on bare metal, directly on Kubernetes, with no OpenStack at the bottom. That is also possible. Now, acceleration. Nowadays we have different scenarios that require acceleration: for example 5G RAN acceleration, 5G connectivity acceleration, NFV acceleration, a lot of scenarios. The CPU alone is not enough now, so we must introduce GPUs, FPGAs, NPs, and other third-party accelerators into your network, and the OpenStack infrastructure should manage this kind of acceleration, this kind of third-party hardware. We cooperate with our colleagues at Intel, and we adopt their FPGAs and similar hardware to provide acceleration capability to the upper-layer services. For example, the SBC, the session border controller, one element in the IMS. The team made a trial: today we use Intel CPUs to do the transcoding, but we found it is not a cost-effective solution. With a GPU we found that, for transcoding, the efficiency of one GPU is equal to around 107 CPU cores. So using GPUs for transcoding is a cost-effective solution in NFV deployments, and OpenStack should manage the GPU and open this GPU capability to the upper-layer services; the SBC is one such service. Another kind of acceleration is what we call OVS offload acceleration, for the gateway or the UPF. We use FPGA-based SmartNICs to do the acceleration, so we no longer fully depend on CPU-based OVS packet forwarding. We found that one FPGA-based SmartNIC plus two cores delivers almost the same performance as 12 Intel cores, at up to 30 to 40 gigabits per second.
So adopting SmartNIC FPGAs is also a cost-effective solution. Next, my colleague and friend Mr. Ding will share the Intel product portfolio. Thank you, Mr. Zhang. Mr. Zhang just mentioned that in his 5G infrastructure there is a lot of utilization of Intel's accelerator devices. In this part, I will talk about most of Intel's accelerator products across the computing, network, and storage areas. For computing, we recently released the latest generation of the Intel Xeon platform for high-performance computing, and we also have GPU devices integrated with the Xeon platform. Also in the computing area we have FPGA devices; as you see here, these are two generations of FPGA products, the Arria 10 and the Stratix 10. FPGA devices can help a lot in the 5G infrastructure setup, as we know. In the networking part, we have a SmartNIC product like this one. The SmartNIC integrates an FPGA chip inside the NIC device, and the FPGA can do a lot of networking packet-processing offload functions, which increases networking bandwidth performance a lot. In the storage area, we have several different types of devices. This one is a standard NVMe-interface SSD card; we call it the NVMe-interface SSD device. And this one is very important: we call it Optane DC Persistent Memory. Its hardware interface is the DIMM interface on the platform board, the same as a memory chip, so it can provide very high bandwidth for storage, and the performance is also good. The capacity can be very large, for example up to 2 TB of persistent memory for the whole platform. OK, that's all for the introduction. Thank you very much. Thanks, Mr. Ding. So let's come to the last two minutes. We come to the last mix: mixed O&M, automation and intelligence on demand.
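The two cost comparisons quoted above (one GPU equivalent to about 107 CPU cores for transcoding; one SmartNIC plus two host cores equivalent to about 12 cores for OVS forwarding) reduce to back-of-envelope arithmetic. The prices below are invented placeholders used only to show the shape of the calculation:

```python
# Back-of-envelope sketch using the rough equivalence figures from the talk.
# Any actual prices would depend on hardware generation and vendor.
def cores_freed_by_gpu(gpu_equiv_cores=107):
    """CPU cores one GPU replaces for transcoding (figure quoted in the talk)."""
    return gpu_equiv_cores

def cores_freed_by_smartnic(equiv_cores=12, host_cores_used=2):
    """Net cores freed: the SmartNIC matches ~12 cores but still uses 2 itself."""
    return equiv_cores - host_cores_used

def offload_pays_off(accelerator_cost, cost_per_core, cores_freed):
    """True when the accelerator costs less than the CPU capacity it frees."""
    return accelerator_cost < cost_per_core * cores_freed

print(cores_freed_by_gpu())       # 107
print(cores_freed_by_smartnic())  # 10
print(offload_pays_off(5000.0, 100.0, cores_freed_by_gpu()))
```

Under any plausible per-core cost, replacing 107 cores with one GPU clears this bar easily, which is the "cost-performance" argument the talk is making.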
Actually, we can also provide end-to-end operation and maintenance tools across the whole lifecycle of a project. For example, when you do installation and upgrades, when you do operation and inspection, when you do intelligent monitoring, fault management, and log analysis: for each approach, each step, we have dedicated tools that enable automation and intelligence. And behind all these tools is an AI-driven engine. We collect different performance data, logs, and alarms from the equipment, from the IT environment, from third-party environments, and from compute and storage nodes, into a centralized AI-driven engine. There we have offline training, a knowledge base, online analysis, a big-data store, and machine learning, to learn the health situation: anomaly detection, root cause analysis, capacity forecasting and optimization, and automatic closed-loop scale-out and self-healing based on alarms. It is a totally end-to-end closed loop. This AI-driven operation and maintenance engine frees up your effort and gives you full network automation and intelligence. So this is all I wanted to introduce today: the full-mix, on-demand distributed cloud from ZTE. So many thanks, everybody here. Thank you.
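The closed loop described here (collect metrics, detect anomalies, trigger healing) can be sketched minimally. A simple trailing-mean threshold stands in for the machine-learning models mentioned in the talk; the metric, window size, and healing action are all illustrative:

```python
# Minimal sketch of a detect-and-heal loop. The 3-sigma threshold is a
# stand-in for the offline-trained models described in the talk.
from statistics import mean, stdev

def detect_anomalies(samples, window=5, k=3.0):
    """Flag points more than k standard deviations from the trailing mean."""
    anomalies = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and abs(samples[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

def self_heal(metric_name, index):
    # A real system would scale out or restart the affected workload here.
    return f"healing action for {metric_name} anomaly at sample {index}"

cpu_load = [0.30, 0.31, 0.29, 0.32, 0.30, 0.31, 0.95, 0.30]
for i in detect_anomalies(cpu_load):
    print(self_heal("cpu_load", i))
```

The closed-loop property is that detection feeds directly into an automated action, with no operator in the path, which is what "end-to-end closed loop" means in the talk.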