Right, I believe I've talked about this: we use OpenStack for managing our commercial virtualization platforms along with the storage planes, including both centralized and distributed storage, and also the commercial SDN and NFV solutions. On top of that, we also have our own full stack of virtualization solutions, developed on top of open source products: KVM for virtualization, Ceph for storage, and OpenStack Neutron with Open vSwitch for SDN. So basically, OpenStack is used for managing our legacy IT resource pools and the CT cloud network resource pools that are still being built out. For container resources, we have also integrated Kubernetes into our adaptation layer, and we are planning to add the Prometheus project to the adaptation layer for monitoring purposes as well.

This is our deployment architecture; we call it a three-layer deployment architecture. First, on the left-hand side, we have the central portal deployed in Beijing. It provides unified management of all of China Telecom's resource pools across the country, with direct management of the resource pools located in Beijing. The second layer is the regional, or provincial, layer. It is deployed in the provincial capital cities, providing unified management of the local DCs through the OpenStack adaptation layer. The provincial portals and the central portal are linked together through an API gateway, so that the operators in the China Telecom Group's NOC center in Beijing have a grip on what the usage is like, and also have the ability to manage the lifecycle of instances running on resource pools across the country. The third layer is the city-level DCs plus the edge nodes. In order to manage those resources, the CMP has an additional aggregation layer, which you can see just on the right there.
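The adaptation layer described above, managing OpenStack and Kubernetes resource pools behind one interface, can be sketched roughly as follows. This is a minimal illustration, not China Telecom's actual code; all class and pool names are hypothetical, and the drivers return placeholder data instead of calling real APIs.

```python
from abc import ABC, abstractmethod

class PoolDriver(ABC):
    """One driver per backend type that the adaptation layer can manage."""

    @abstractmethod
    def list_instances(self) -> list[str]: ...

class OpenStackDriver(PoolDriver):
    def __init__(self, region: str):
        self.region = region

    def list_instances(self) -> list[str]:
        # In a real system this would query the Nova API for the region;
        # placeholder data keeps the sketch runnable offline.
        return [f"{self.region}-vm-1", f"{self.region}-vm-2"]

class KubernetesDriver(PoolDriver):
    def __init__(self, cluster: str):
        self.cluster = cluster

    def list_instances(self) -> list[str]:
        # A real driver would list pods or workloads via the cluster API.
        return [f"{self.cluster}-pod-1"]

class AdaptationLayer:
    """Aggregates heterogeneous resource pools behind one interface."""

    def __init__(self):
        self.drivers: dict[str, PoolDriver] = {}

    def register(self, name: str, driver: PoolDriver) -> None:
        self.drivers[name] = driver

    def all_instances(self) -> dict[str, list[str]]:
        return {name: d.list_instances() for name, d in self.drivers.items()}

layer = AdaptationLayer()
layer.register("beijing-openstack", OpenStackDriver("bj"))
layer.register("edge-k8s", KubernetesDriver("edge1"))
print(layer.all_instances())
```

The point of the pattern is that the portal layers above only ever see the `AdaptationLayer` interface, so adding a new backend type (for example a container pool) means adding a driver, not changing the portals.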
This is just to enhance unified management at the provincial capital level. The CMP relies on this dual adaptation-layer model and interconnected VIM gateways to achieve what we call unified management over the resource pools all over the country, which is very important in telco business applications. The VIM gateway also provides APIs to external systems: for example, the OSS and workflow systems (OSS and BSS are traditional telco business systems), and the VNFM and NFVO for managing VNFs.

This is the current state of our China Telecom CMP system. The numbers have been blurred out for privacy reasons, except the one in the top left corner, which is the overall number of resource pools across the country: our CMP manages more than 300 resource pools in 50-plus cities. You can also see the overall resource usage rate, with top-five rankings of indexes such as CPU allocation percentage, along with RAM and storage allocation percentages, on the right there, and some of the virtualization vendor proportions in the pie chart on the left-hand side. In general, our CMP provides heterogeneous resource management, with decentralized deployment and unified management of various ICT resource pools, including both the IT and CT clouds.

OK, so next up is the CMP's usage in IT and CT scenarios. For IT clouds, we have a hierarchical deployment in central and provincial DCs, carrying "4 + 31 + X" IT cloud resource pools in total and realizing unified management of all the different IT systems and service platforms across the country. Just a brief breakdown of those numbers: the number 4 stands for the four national-level, or central-level, DCs, which are located in Inner Mongolia, Guizhou, Beijing, and Shanghai. At this level, we have achieved a unified IT system and a unified business platform.
The number 31 stands for the 31 provinces, which are labeled in purple on the map to the right. The IT systems and the external systems, for example the ICT, government, and industrial clouds, are managed through this layer. And last but not least, the number X. Why X? Because building out our edge nodes is still an ongoing process, and we don't know exactly how many edge nodes are going to be connected to our system yet, so we just use X. It stands for the city-level nodes and edge nodes. At this level, the CMP will manage both the local government and the industrial clouds.

Here we have more snapshots of the IT cloud systems in the CMP. We manage more than 10,000 nodes for private clouds, including our very own CT business cloud, the Jufon insurance private cloud, and the rather glamorous Beijing second airport cloud and civil aviation clearing center cloud. For industry clouds, we manage more than 500 nodes, including our deployments in the Zhejiang industrial cloud and the Suzhou Taihu cloud. And for government clouds, the CMP manages more than 1,500 nodes, but I cannot tell you where those have been deployed, sorry.

Right, so that's enough about the IT cloud. With the development of our network cloud over the last two years, our CMP has actively participated in the process of network cloudification. Our entry point is basically the CMP's VIM, the NFV VIM: it is hierarchically deployed at multiple levels, from core to region to edge locations, managing multiple VNFs in pilot projects in some of our provinces. Just a quick explanation of what the layered decoupling test is: basically, we are using our CMP VIM, which is responsible for docking with the hypervisors from different vendors and providing an aggregated API through the northbound interface to systems like the VNFM and NFVO.
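The aggregated northbound API just described, where per-pool usage is merged into one view such as the top-five CPU allocation ranking shown on the dashboard, might look something like the following sketch. The pool names and percentages are made up for illustration; the function names are hypothetical, not part of the real CMP.

```python
# Hypothetical sketch: the VIM aggregates per-pool usage figures and
# serves one northbound summary, e.g. a top-N ranking by CPU allocation.
pools = [
    {"name": "guangdong-1", "cpu_alloc_pct": 82, "ram_alloc_pct": 70},
    {"name": "jiangsu-1",   "cpu_alloc_pct": 91, "ram_alloc_pct": 64},
    {"name": "beijing-1",   "cpu_alloc_pct": 77, "ram_alloc_pct": 88},
]

def top_n_by(pools: list[dict], key: str, n: int = 5) -> list[str]:
    """Return pool names ranked by the given metric, highest first."""
    ranked = sorted(pools, key=lambda p: p[key], reverse=True)
    return [p["name"] for p in ranked[:n]]

# Ranking as it would be served to the central portal's dashboard.
print(top_n_by(pools, "cpu_alloc_pct"))
```

The same helper can be reused for the RAM and storage rankings by changing the `key` argument, which is roughly how one aggregation endpoint can back several dashboard widgets.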
During that process, we have successfully tested our CMP VIM with more than 40 vendor combinations and passed testing in more than 90 different environments, covering six or more VNFs, including vIMS, the 5G core, vBRAS, vCPE, vEPC, and vFirewall. It has been deployed in production for vBRAS, vOBB, vOTI, and vIMS. The 5G core is still in an experimental phase, but we have already achieved some progress in experimental environments. This is just a more detailed list of which VNF has been tested in which location, which in our terms means which province.

The second part of today's presentation will be focused on what I just mentioned: China Telecom's cloud network transformation. There has been a new trend in carrier network development, which is network cloudification and intelligent network transformation. This can be achieved through the use of SDN (software-defined networking), the network cloud, open source of course, and DevOps integration. During that process, China Telecom released its own vision of how the network is going to transform, called CTNet 2025. It is our vision to achieve a cloud-native, virtualized network with large-scale, on-demand services, et cetera. Our transformation of China Telecom's network cloud includes three phases. Phase one is the proprietary-device phase, where the apps are independently deployed and configured on top of dedicated hardware; in this phase, telcos have very little control over the full stack. The second phase is the virtualization phase, where the infrastructure consists of bare-metal servers plus virtual machines, and the VMs all run on general-purpose hardware.
The services and resources are orchestrated and managed through the user-plane, control-plane, and storage-plane VNFCs. This is what we had already achieved in 2018. The third phase is our final goal: the full cloud-native phase. With the addition of containers as infrastructure, the user-plane, control-plane, and storage-plane components can all be converted into microservices, the hot topic of the hour. This way, the carriers have maximum control and agility over the apps running on top, with very little constraint from things like vendor lock-in.

In order to achieve this vision, China Telecom has promoted NFV applications with help from our industry partners. We used the ETSI NFV as our reference architecture, and we have been working very closely with partners from industry, like Huawei, Ericsson, ZTE, H3C, et cetera. As I mentioned on the previous slide, we have completed the layered decoupling test with more than 40 different vendor combinations, and we have had a large-scale test run in our production environment in one of our provinces. So this is where our entry point is. As I mentioned before, we are using the cloud management platform's VIM (virtualized infrastructure manager) as the NFV VIM here. During our tests, China Telecom has also released its own enterprise-level specifications, covering almost all the interfaces in ETSI's NFV architecture, including the NFVI and MANO interfaces. Just to summarize the key capabilities our CMP VIM has achieved in order to reach this vision, there are four in total: NFV feature support, a unified northbound API, VIM gateway enhancements in terms of monitoring and alarm capabilities, and integration with multi-vendor hypervisors and outside systems.
Breaking those down: in terms of NFV features, our CMP VIM supports features like vCPU-to-pCPU pinning, NUMA affinity scheduling, huge pages for RAM, DPDK, SR-IOV with direct pass-through, and affinity/anti-affinity deployment. The VIM gateway plays a huge part in the unified management module: with the addition of the OpenStack adaptation layer, it can provide an aggregated resource API from our resource pools in the different regions to external systems such as the NFVO, the VNFM, and many more.

This is a more detailed view inside the CMP VIM. Besides the just-mentioned unified management module, there is our OpenStack adaptation layer. In this layer, we use OpenStack to collect physical and virtual resource status and alarm information through the monitoring and alarm management module. It triggers alarms according to the analyzed resource status and writes alarm notifications to the interface gateway. We use Zabbix for physical resource monitoring and status collection, while OpenStack Ceilometer is used as the telemetry service for virtual resource monitoring and status collection, correspondingly.

The last, but also very important, capability is multi-vendor heterogeneous virtualization adaptation. In order to manage the virtualized resources through the OpenStack adaptation layer, there are two solutions we can adopt. The first is working with commercial hypervisors. In our use case, we use vendors like VMware and Huawei, and they all have their own mature commercial virtualization products, generally with a virtualization management module already running on top. Those modules are then responsible for providing the drivers to our OpenStack adaptation layer, and the docking process is achieved through this method. The second solution is to develop our own hypervisor stack, which is based on KVM.
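For the NFV features listed above, OpenStack expresses CPU pinning, NUMA placement, and huge pages as Nova flavor extra specs. The snippet below builds such a spec set using the standard Nova keys; it is only an illustration of the mechanism, as the talk does not show China Telecom's actual flavor definitions, and the `validate` helper is made up for this sketch. In a live deployment the dict would be applied via the Nova API or `openstack flavor set`.

```python
# Illustrative NFV flavor definition using standard Nova extra-spec keys.
# SR-IOV pass-through, by contrast, is requested per Neutron port
# (binding:vnic_type=direct), not through flavor extra specs.
nfv_extra_specs = {
    "hw:cpu_policy": "dedicated",  # pin each vCPU to a physical CPU
    "hw:numa_nodes": "1",          # keep the guest on a single NUMA node
    "hw:mem_page_size": "large",   # back guest RAM with huge pages
}

def validate(specs: dict[str, str]) -> bool:
    """Hypothetical check that a flavor carries the NFV-critical specs."""
    required = {"hw:cpu_policy", "hw:numa_nodes", "hw:mem_page_size"}
    return required.issubset(specs)

assert validate(nfv_extra_specs)
print(nfv_extra_specs)
```

A VIM that validates flavors like this before scheduling can reject VNF deployment requests that would silently lose their pinning or huge-page guarantees.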
By using the KVM strong-coupling mode, the OpenStack adaptation layer can directly deploy agent drivers onto the KVM hosts and achieve direct management that way. There are also some other features that should be taken into account. For example, during this process we have done very extensive development on top of the native OpenStack interfaces, adding functions like decentralized access, dynamic authorization, fault alarms with subscription notification, resource reservations, et cetera, to our CMP's VIM.

Just a quick summary of the things we have achieved in our telco network transformation: the CMP VIM has been thoroughly tested with more than 40 decoupling vendor combinations and has passed testing in more than 90 different environments. By the end of 2018, it had already been deployed in our production environments with VNFs ranging from vBRAS, vOBB, and vOTI to vIMS, and we have already achieved a successful deployment with a 5G core running on top of our clouds, though that is still in the experimental phase.

Next up, I'm going to introduce my colleague, Chen Tian, and she's going to give you a bit more of an inside view of our research and practice on edge-to-cloud coordination based on OpenStack and StarlingX.

First, thank you, Henda, for the excellent presentation, and also thank you for encouraging me to speak English. OK, let me introduce myself. My name is Chen Tian, and I come from the China Telecom Intelligent Network and Terminal Research Institute in Guangzhou. First, some background, starting with the concept. Our work on cloud computing began around 2009 to 2011, or even before. In our opinion, cloud computing is naturally distributed, and we call that the classic cloud. Under the promotion of new services and new technologies, the cloud is extending further to the edge of the network, and even to the user side, and we call that edge computing.
We say that edge computing is complementary to and cooperative with the classic cloud, and we have studied many core technologies, like fog computing and MEC. There are differences in the definitions, architectures, use cases, or capabilities of those technologies, but essentially they are the same: they are all specific implementations of the concept of edge and cloud technology, convergences of the two. By the way, ITU-T Y.3508, a standard we worked on, was published in 2019.

Next, the use cases. For carriers, we divide the services mainly into three types: first video, second IoT, and third cloud network. Video, such as AR, VR, and video monitoring, provides video processing and coordinated processing and analysis services to the customer. Second, IoT for various industries, for example transportation, manufacturing, and agriculture, which will provide the capability of data collection from massive numbers of devices, and also cooperative data processing, storage, and analysis between the edge and the cloud. Third, cloud network: the network is the key capability of carriers, and so is the cloud; they are equally important. So cloud-network convergence is very important for carriers, and it will provide new solutions for traditional services and also new service experiences for customers in the new trends.

Now the industry dynamics. In my opinion, the whole industry is falling in love with the concept of edge plus cloud plus intelligence. For cloud service providers such as AWS, IoT plus cloud is a very important new service. For the open source foundations, more and more new edge projects have appeared, such as the ones everybody knows: Akraino, Airship, StarlingX, and also KubeEdge, ioFog, IoT Edge, et cetera, many, many. And for carriers, network cloud, 5G, and also white-box hardware are exactly the key capabilities and keywords.
On the right side, you can see the Hype Cycle for Cloud Computing, 2019, the Gartner report: distributed cloud is just on the rise, and edge computing is at the peak. Gartner has also listed the empowered edge and the distributed cloud in its top 10 strategic technology trends for 2020.

OK, let's come to the China Telecom distributed cloud. Our cloud needs to support various service requirements: ICT, every IT system, service platforms, network cloud, et cetera. This is our distributed cloud, a large-scale and hierarchical distribution which can be divided into four levels: the group-level cloud, the provincial-level regional cloud, the edge convergence cloud at the city level, and the edge cloud at the city edge. On the bottom are the main requirements for the edge, which everybody knows: massively distributed nodes, heterogeneous solutions and requirements, resource constraints, and unreliable network connections; also, the edge node needs autonomous management capability, unified management of the hosted resources, and support for various new edge services.

Next comes the framework. This is a common cloudification framework that everybody knows, including physical resources, virtual resources, and also the management components on the right side. We should mention that we add acceleration hardware and acceleration resources, and also both hypervisors and containers, in this picture. All of these resources are configured on demand according to the various service requirements. For example, in the core cloud, GPU acceleration capabilities should be configured to provide capability for big data processing and AI training; at the edge, GPU and FPGA capabilities are configured for edge applications. Next, what we focus on is the management system.
This is similar to what was mentioned by Henda, one center and four levels, but this picture contains more levels: we use a convergence layer to extend the management levels down to edge convergence and edge. This convergence layer can also provide open standard interfaces to outer systems, for example the NFVO and the VNFM, to increase flexibility. Also, for the edge, we take many different kinds of flexible solutions: for example, a single compute node, or just a virtualization system on the edge node, or lightweight solutions such as multi-V or multi-AZ deployments, and we can also use an integrated delivery system. What we chose for the integrated delivery system is StarlingX; we chose it as an option for edge-cloud coordination management. As everybody knows, StarlingX now has two releases: release one is for the distributed cloud, containing one central cloud and several sub-clouds, and release two is containerized. Because release two is not yet mature, our work for now is focused on release one.

This is the architecture of our management system, the edge-cloud coordination management system based on StarlingX. There are three levels: the upper layer is the cloud, the China Telecom CMP unified management center. In the middle is the edge-cloud management platform, including a CMP StarlingX adaptation layer and also StarlingX Central. On the bottom are the StarlingX sub-clouds. Our work focuses mainly on six points. The first is the CMP StarlingX adaptation layer. The second is Keystone for unified authentication. The third is the API: we add an API proxy in StarlingX Central. The fourth, fifth, and sixth are in the CMP unified management center. Let's see the details, starting with the StarlingX adaptation layer.
Because of some specific encryption and authority-management requirements, we use the Keystone and cloud components of our adaptation layer to replace the Keystone and cloud components in StarlingX Central. So we use this solution to bring StarlingX Central's authorization under our management system.

Second, unified authentication. In detail, the Keystone component in our adaptation layer cooperates with the DC Manager component in the system controller to synchronize the tokens of the StarlingX sub-clouds down to each sub-cloud, so the users of the sub-clouds can use those tokens for authentication operations.

Third, the API proxy: we add an API proxy, an nginx reverse proxy, in StarlingX Central to receive the API requests for OpenStack services in a unified way and route each request to the corresponding StarlingX sub-cloud.

Fourth, the unified VIM gateway. As mentioned above, the VIM gateway does the mapping, handling, and filtering of the OpenStack API services. So we add the StarlingX handling and filtering information into the VIM gateway, so that the VIM gateway can handle the API requests for OpenStack on the sub-clouds.

Fifth, the unified resource model. For the StarlingX sub-clouds, we treat each sub-cloud as corresponding to a region in our unified resource model. On the left is the resource model of our CMP, where the sub-cloud corresponds to the region, the cluster to the cluster, and the host to the cloud host. Through this, we can manage the whole resource pool of the sub-clouds from our CMP.

Sixth, the unified portal: we add the capabilities of edge resource management, plus some monitoring and alarm management capabilities, into the portal of our CMP.

This slide shows the POC environment of our work, including six nodes.
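The sub-cloud-to-region mapping and the proxy routing described above can be sketched as follows. This is a toy illustration of the idea, not StarlingX or CMP code: the endpoint URLs, the `region-` naming convention, and both function names are invented for the example.

```python
# Hypothetical sketch: each StarlingX sub-cloud is registered as a region
# in the CMP's unified resource model, and OpenStack API requests are
# routed to the matching sub-cloud endpoint (URLs are made up).
SUBCLOUDS = {
    "subcloud-1": "https://subcloud-1.example:5000",
    "subcloud-2": "https://subcloud-2.example:5000",
}

def region_for(subcloud: str) -> str:
    # CMP model mapping: sub-cloud <-> region, cluster <-> cluster,
    # host <-> cloud host.
    return f"region-{subcloud}"

def route(region: str) -> str:
    """Pick the sub-cloud endpoint an API request for a region goes to."""
    subcloud = region.removeprefix("region-")
    try:
        return SUBCLOUDS[subcloud]
    except KeyError:
        raise ValueError(f"unknown region: {region}")

print(route(region_for("subcloud-1")))
```

Because the mapping is one-to-one, the CMP can reuse its existing region-scoped views (quotas, monitoring, lifecycle operations) for sub-clouds without a separate edge-specific data model.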
There is one node for the CMP, two for StarlingX Central, and three for sub-clouds, based on StarlingX release one. In this environment, we deployed an edge video-processing application, developed by our colleagues in another department. It is deployed on a Kubernetes architecture across two nodes: one StarlingX sub-cloud node supports the filtering and the engine of the edge video processing, and the other node supports the Kubernetes master and the other management features. We are also now cooperating with our MEC project team to support MEC deployment; that work is ongoing. OK, that's all I wanted to say, thank you. Any questions?

[Audience question, inaudible]

My personal opinion: the CT cloud and the IT cloud are mainly separated in our cloud resource pools. But for the edge, we consider MEC as an ICT cloud node, a fixed-mobile convergence node. We may put some IT applications on the MEC node, together with the CT applications, maybe. But for many other scenarios, they may be separated. Any more questions?

Audience: Without the mic, it's going to be pretty short. Can you say what platform you are using for MANO, for network function management and orchestration?

Right, in terms of that question, we actually have two vendors. The first one is HP, and the second is developed by my colleagues, and her colleagues as well, at the China Telecom Guangzhou Research Institute; we call that E-MANO. So basically two manufacturers. Any more questions? Just an addition to the gentleman's question back there: besides those two manufacturers, we have some more MANO implementations which are provided by the VNF manufacturers, because usually they have a full-stack solution.
Audience: I have another question about the evolution strategy for the CT core network. Do you think cloud native is the third step, especially for the CT core network and all the VNFs? I mean, is CNF necessary for CT operators, especially in the core network?

For now, our 5G core VNFs, for example, are VMs, and we deploy them with microservice-architecture VNFs from the solution vendors. But we think the trend is containerization; the big trend can't be stopped.

OK, we're past time, so thank you everyone for coming again. If you have any more questions you'd like to share or discuss with us, you can always come to us afterwards. Thank you.