Good morning, good afternoon, good evening everyone, and welcome to our session. This is Qihui Zhao from China Mobile, together with Seshu Kumar from Huawei. Ever since 2019, cloud native has become one of the hottest topics within the telecommunication industry. China Mobile, as an operator, has done a lot of research and has participated in many SDOs and open source projects. So in today's session we will introduce our lessons learned and thoughts on telco cloud native evolution and open source practice.

First, let's look at part one, our thoughts on cloud native evolution, starting with China Mobile's cloud native situation. China Mobile has three types of clouds: Mobile Cloud, IT Cloud, and Network Cloud. Mobile Cloud is our public cloud. It provides general, standard cloud services to individual users, as well as customized cloud solutions to enterprise users, and it also carries some of CMCC's service platforms, such as the AI platform and the big data platform. The second cloud is the IT Cloud, a private cloud that carries CMCC's internal IT systems, for example our customer service system, charging system, and email systems; the overall design of this cloud is based on the requirements of our IT applications. The third cloud is the Network Cloud. This is the most important cloud for us as an operator. It carries 4G and 5G network functions, value-added network functions such as multimedia messaging services, and some network-function-related management systems, for example the MANO and EMS. This cloud is also a private cloud with strong telecom features.

Looking at all three clouds: since Mobile Cloud and IT Cloud have fewer telco features, they move faster on cloud native evolution. Mobile Cloud has long been providing PaaS services for applications; containers and microservice architectures are offered as services of Mobile Cloud.
Our IT Cloud also has a PaaS platform, and the containerization rate of its systems is very close to 100%. However, as the Network Cloud has very strong telco features, its cloud native evolution is much slower. So for today's topic, the cloud native evolution we discuss is mainly about this cloud.

Okay, here let's have an overview of CMCC's Network Cloud. Since 2018, CMCC has been building a centralized network cloud across different regions in China. Our 5GC, IMS, and EPC are running on this cloud, and the cloudification ratio of CMCC's network is currently up to 35%. We are also planning to build edge clouds in each of our 31 provinces. The technical architecture used by the Network Cloud is currently NFV plus SDN; the major infrastructure right now is virtual machines, and OpenStack is used as the infrastructure management system. The network functions are all VNFs deployed in virtual machines. Containerized and microservice network functions are now moving into the cloud, but they are currently wrapped inside virtual machines, so for now they are neither manageable nor even visible to us. In the future, we will gradually introduce manageable and visible containers and CNFs into our Network Cloud.

After introducing the basics of our Network Cloud, let's look at the motivation for telco operators to evolve towards cloud native. The Network Cloud has been commercially deployed and running stably for more than three years, so the next thing that comes to mind is optimization. And after seeing so many successful cases of IT cloud native evolution, we think it is time for telcos to use cloud native technologies and practices to optimize the Network Cloud and improve its agility.
So the first problem we think cloud native may solve is to increase service flexibility under different use cases. Currently the network functions are deployed as monoliths: each network function is a well-packaged unit deployed in the cloud. When we want to use it for different use cases, we can only reuse one deployment with different configurations for those use cases. But as we all know, toB customers have very diverse requirements, so if we continue with this deployment strategy, it may cause inflexibility and resource waste in the future. If we could design the network functions as microservices, and those microservices supported being orchestrated differently under different use cases, it would become possible to design customized private 5GCs. So this is the first problem cloud native may solve.

The second problem cloud native may solve is the resource utilization rate. As we all know, in many telecom deployments and standards we use very high levels of redundancy to ensure reliability; redundancy may exist at the network function level, server level, resource pool level, and so on. But if we allow some slight service degradation, and use containers to achieve stateless network function services with multiple replicas, we may reduce the redundantly used resources. Also, as NFs are now usually designed as large virtual machines, server resources may not be fully used. For example, if a server has two physical CPUs and we deploy two virtual machines on it, one with 20 vCPUs and the other with 10 vCPUs, the large, fixed flavor sizes may leave capacity stranded that no further large VM can fit into, which is a waste of resources.
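The fragmentation argument above can be sketched with a small first-fit placement simulation. This is purely illustrative: the 32-vCPU host capacity and the flavor sizes are hypothetical numbers, not CMCC's real sizing.

```python
# Illustrative sketch: large VM flavors fragment hosts, small pods pack better.
# All numbers are hypothetical, chosen only to show the fragmentation effect.

def first_fit(host_capacity_vcpus, hosts, requests):
    """Place each request on the first host with room; return (placed, stranded)."""
    free = [host_capacity_vcpus] * hosts
    placed = 0
    for req in requests:
        for i, f in enumerate(free):
            if f >= req:
                free[i] -= req
                placed += 1
                break
    stranded = sum(free)  # capacity left over across all hosts
    return placed, stranded

# Monolithic NF as 20-vCPU VMs on 32-vCPU hosts: only one VM fits per host,
# and the 12 vCPUs left on each host are too small for another VM.
vm_placed, vm_stranded = first_fit(32, 4, [20] * 6)
print(vm_placed, vm_stranded)    # 4 VMs placed, 48 vCPUs stranded

# The same workload split into 4-vCPU microservice pods packs the hosts tightly.
pod_placed, pod_stranded = first_fit(32, 4, [4] * 30)
print(pod_placed, pod_stranded)  # 30 pods placed, 8 vCPUs free
```

The smaller the schedulable unit, the less capacity is stranded, which is exactly the utilization point made next.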
But if we can design the network functions as microservices, with each microservice consuming a smaller piece of resources, then the utilization rate can increase.

And the third problem we think cloud native may solve is automating the process of network function upgrades. Here is an example of the traditional upgrade process that is still used by many operators. First, we generate some requirements, then the vendors do the development. We then receive a new version of the deployment package delivered by the vendors, and we do manual testing of the upgraded network functions. Next, we select several commercial sites for trial deployment, and if the feedback from the trials is good, we do the full offline upgrade. This process is complex, affects a wide scope, and usually happens at night. It is definitely not automatic and should be improved.

So these are some examples of why evolving towards cloud native is important to telco operators. But because the telecom industry has many special characteristics, we cannot simply copy the successful experience of IT cloud native evolution; we have to explore our own way. So here I have listed several features of the Network Cloud that differ from traditional cloud computing. First, the network functions are our core applications; we do not have many other diverse applications. Second, the network functions and cloud platforms of telco operators are usually bought from vendors: we define the product requirements through standards, and we do not do the design and development ourselves. Third, the products we buy usually come from multiple vendors. Fourth, we require high reliability, usually up to 99.99%. And the other feature is that the NFV architecture has been in use for many years and is quite mature now.
So if we want to introduce new technologies, we have to consider their influence on the NFV architecture and workflows. Also, in telecom environments we have relatively solidified operation and management systems and processes. Many things differ from the IT industry, but we did find some common features that IT systems and telecom systems can share: microservices plus stateless design, which is about the application design; agile infrastructure; continuous delivery; and effective operation and management.

By looking at all these features, differences, and similarities, we have summarized three research points that we are exploring. The first is how to achieve network function microservice design and management, because the application is always the vital point, and we have to start from the application level. The second research point is the platform capabilities needed to support cloud native network functions: the platforms are the carriers of the network functions, so providing every capability the network functions require is quite important. And the third point is how to adapt the current management systems for NF management, for example the MANO systems and workflows, and the OSS management systems. So these are the high-level research points for the Network Cloud to achieve cloud native.

And this page shows what China Mobile has done to explore the cloud native evolution. We have been doing technical research on containers, microservices, and PaaS. For containers, we have worked out technical architectures and done testing and trials.
Although our attitude on introducing containers into the core is relatively conservative, a technical standard is ready for use and we have started using it at the edge. For microservice design and management: by looking at the network function designs from many partners, we have drawn the conclusion that there is no standard for network function microservice design. Mostly, designs still follow the structure of the physical network functions, which contain a load balancer module, some business processing units, an OAM unit, and a data storage unit, and developers are working on splitting these units into smaller pieces for microservice management. We do not have a solution for this yet, but knowing whether to manage, what to manage, and how to manage microservices is the direction. For PaaS, we have been working on the PaaS structure and capabilities in the Network Cloud. How to introduce PaaS and merge it with the NFV architecture, or even change that architecture, is one of the key questions we are trying to answer right now, and which PaaS capabilities and corresponding use cases are needed is another thing we are exploring. This work is happening in the XGVela project.

On standards promotion, our team has also done a lot. We have relatively mature technical standards and interface standards for the container layer; these are enterprise standards. We have designed principles for cloud native applications in the Network Cloud, and we plan to publish a white paper on this topic. The cloud native maturity evolution standard for applications and the technical standard for PaaS of the telco network cloud are two standards we started this year. In ITU we will lead one standard named "Functional requirements of PaaS for cloud native applications", and in ETSI we are following the NFV-EVE 019 report on the VNF generic OAM functions, because this is an important use case for PaaS to implement the OAM functions. Now, on to the open source practice.
We follow the CNCF projects, including Kubernetes and Knative, and we also started a new project named XGVela to explore the platform capabilities required by the network cloud. We have found that cloud native is closely related to firsthand experience of network function design, development, and operation, so it requires us as an operator to engage in all these procedures and to look into the network functions at the code level. So we think starting from open source practice is a good way. That is all for my part, and I will hand over to Seshu to introduce XGVela, the most important open source project we have been participating in.

Thank you for the introduction. Hi everyone, this is Seshu Kumar Modiganti from Huawei Technologies India Private Limited, and I will be covering XGVela and how XGVela can solve some of the problem statements Qihui explained before. To start with, XGVela is an open source telco PaaS platform. We know there are many PaaS platforms in the current landscape, and we want to expose on top of them the telecom-specific functionality that is needed. So XGVela is going to address the issues that come up in the telco world, and we will look at how we are trying to solve them. XGVela is a project that started in April 2020 and was adopted as a sandbox project in LFN in January 2021. Currently we have 10 TSC members working on XGVela, participating in different functional aspects. If you look at the architecture of XGVela, it mainly consists of extensions to the PaaS platform. The general PaaS, shown in the blue box here, is the part we all know; XGVela's scope is mainly the red dotted line and the red area, the key scope being the solid red line, which consists of two main parts.
One is the adaptation layer, which is an extension of the general PaaS itself but adapts it to specific telecom needs; the other is the functionality that has to be implemented as a telco PaaS. I know it is a little confusing at this initial stage to see all of this as building blocks; we will cover it in detail on the next slide. The other important aspect to understand here is that we also have interfaces to the general PaaS wherever necessary — that is what we are depicting with these blue lines. The general PaaS itself is completely out of scope for current XGVela. Our main scope is to enhance the functionality here, the telco-specific PaaS capabilities mainly used by network functions and telecom management systems in the telco scenario, plus the adaptation layer that makes sure these telco-specific functionalities adapt properly to the general PaaS, and the APIs on top of it. The northbound consumer can again be any application: it could be a third-party hosted app, or an NF coming from any SDO — that is, from any of the open source or standardization bodies' definitions. And of course we can also have integrations with other open source projects. So we are looking at all the ways we can do SDO integration and open source integration for this telco PaaS platform, while also leveraging what already exists in the general PaaS platforms of today.

Having said that, let me illustrate further what we just discussed. As you see here, there are different functional areas already provided by the general PaaS. This is not an exhaustive list, but these are the key aspects we want to talk about, including functionalities like telemetry, monitoring, and logging.
Other aspects we also consider are the service mesh, tracing, logging, security and compliance, and so on. What we have tried to build on top of this is the telco PaaS layer, which covers FMaaS, metric management, fault and configuration management — the telco-specific functional areas. The adaptation layer acts more like glue between these two: it includes some protocol enhancements, for instance over telemetry and monitoring, which we do not want to use as-is but may need to fine-tune on a per-need basis. So if you look at the PaaS capability requirements, they are divided into three categories: first, PaaS capabilities required to implement the NF functions; second, PaaS capabilities required to manage the NF functions; and third, PaaS capabilities to expose NF services to external consumers. The external consumers we are talking about are the layers above, such as the OSS/BSS, ONAP, or an NFVO-style orchestration layer, and the telemetry and metrics layer. The other part we have is the horizontal part, which is useful across different aspects: we need the catalog, the LCM part, the image repository and registry, and so on — that is our horizontal section. This diagram illustrates where and what we will be reusing and what we will be building on top of it.

Having said that, the current seed code of XGVela has been contributed by Mavenir; it comes from their MTCIL platform. As most of us may know, it is a fairly advanced PaaS platform provided by Mavenir, and it is in production. Some of its key features have been adopted as part of the seed code.
Currently we have CMaaS, the configuration management service, which provides day-0, day-1, and day-2 configuration for the telco PaaS; TMaaS, topology management as a service, which provides us the 3GPP MOs, the managed objects for NFs; and FMaaS, the fault management service, which will be our key focus area for release one. There are also the REST gateway, the CIM, and the Helm-based packaging framework; as the names suggest, these are supporting systems for us to build on. Delivering these functional areas is what we have in the release plan. Right now we are focusing on FMaaS as the top priority; the others will follow in the coming months, but I can say that FMaaS, TMaaS, and part of CMaaS are what we are trying to deliver in release one, with FMaaS first.

The way we want to do it is that we will build on the general PaaS capabilities coming from OKD, the community Kubernetes distribution behind OpenShift, which already has an operator-based implementation — the K8s operator pattern is what they have used. The CIM will be extended and integrated with the specific functional areas we talked about, and together these will constitute the functionality defined by the telco-specific standardizations. So we will be considering 3GPP, and the TMF standards will also be addressed. That is the key work and roadmap, crystallized and put crisply. As I said, the OKD operator-based enhancement is what we want to have. We are mainly looking at three major integration points as of now from the open source collaboration side. One is the OKD part, which I just discussed: it will be the PaaS platform on top of which we build the complete telco functionality. The ONAP integration is the critical part we are tracking.
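Since FMaaS is the release-one priority, here is a minimal sketch of the alarm-list behavior a fault-management service of this kind provides. This is not XGVela's actual FMaaS API — the class and field names are hypothetical, loosely following a 3GPP-style active alarm list (raise, clear, de-duplication).

```python
# Minimal fault-management sketch: alarm raise/clear with de-duplication.
# NOT XGVela's FMaaS API -- names are hypothetical, modeled loosely on a
# 3GPP-style active alarm list keyed by (source, alarm type).

class AlarmList:
    def __init__(self):
        self.active = {}  # (source, alarm_type) -> severity

    def raise_alarm(self, source, alarm_type, severity):
        key = (source, alarm_type)
        if self.active.get(key) == severity:
            return False             # duplicate: already active at this severity
        self.active[key] = severity  # new alarm, or severity change
        return True

    def clear_alarm(self, source, alarm_type):
        return self.active.pop((source, alarm_type), None) is not None

fm = AlarmList()
print(fm.raise_alarm("amf-1", "LINK_DOWN", "CRITICAL"))  # True: new alarm
print(fm.raise_alarm("amf-1", "LINK_DOWN", "CRITICAL"))  # False: de-duplicated
print(fm.clear_alarm("amf-1", "LINK_DOWN"))              # True: alarm cleared
print(len(fm.active))                                    # 0
```

In a real FMaaS the alarm list would be fed by NF events and exposed northbound to the management systems mentioned earlier.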
As I said, we have two major aspects there: the day-0/day-1 configuration and the day-2 operations. The monitoring part will be an integration with DCAE, where we will have VES-based metric management exposed to DCAE, and we will enhance the telemetry part of DCAE, which is a critical component of ONAP. The SO, the Service Orchestrator, will be used for the LCM part of application deployment; the CNF orchestration part is what we are looking at for integration. The other integration we are analyzing at the moment is Nephio, a new project launched in LFN, coming from Google. It also has certain integration points we are analyzing, and we will come back with more details by H2 this year. The CI/CD pipeline is in progress; we have almost finalized everything, with Argo CD being the most likely candidate for us. We are also looking at other functional areas we can cover. Again, this is something we want to extend as part of the kits themselves: a CI/CD pipeline that can be adopted so that developers can set up their build environments easily, take the existing code, and add more functionality. The prototype build is something we are also working on, doing functional integrations with the general PaaS and deploying and building a prototype. This is one area where we are working constantly, taking inputs from different operators. Currently CMCC is giving us a lot of input from their functional areas, and we are also taking inputs from other operators who are watching from a distance for now, but whom we want to have collaborating in the near future.
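To make the VES-based metric management concrete, here is a sketch of building a VES-style measurement event for a DCAE collector. The field names follow the VES common event header, but this is a simplified, hypothetical payload: the exact required fields and the collector URL depend on the VES spec version and the actual DCAE deployment.

```python
# Sketch of a VES-style measurement event for a DCAE collector.
# Simplified and hypothetical: real VES payloads have additional mandatory
# header fields and a richer measurement body defined by the VES spec.
import json
import time

def make_ves_measurement(source_name, metric_name, value, seq):
    now_us = int(time.time() * 1_000_000)
    return {
        "event": {
            "commonEventHeader": {
                "domain": "measurement",
                "eventId": f"{source_name}-{seq}",
                "eventName": f"Measurement_{metric_name}",
                "sequence": seq,
                "priority": "Normal",
                "reportingEntityName": source_name,
                "sourceName": source_name,
                "startEpochMicrosec": now_us,
                "lastEpochMicrosec": now_us,
            },
            # Simplified stand-in for the VES measurement fields block.
            "measurementFields": {metric_name: value},
        }
    }

event = make_ves_measurement("smf-1", "cpuUsage", 42.0, seq=1)
payload = json.dumps(event)
# In a real integration this JSON would be POSTed to the DCAE VES collector
# endpoint with authentication, e.g. via an HTTP client library.
print(event["event"]["commonEventHeader"]["domain"])  # measurement
```

The point is that the NF or PaaS side only needs to serialize its metrics into this agreed envelope; DCAE's telemetry pipeline handles the rest.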
The other important thing, as I said, is the continuous telco PaaS functionality and operations layer, where we want to give many more demonstrations using XGVela; that is what we plan to do from H2 onwards, along with demos. For the demo right now, we are wiring up the seed code: there is OKD, there is ONAP, and the CNFs to start with. Again, I want to emphasize: we are not building a product here. We are building a platform that can be used to demonstrate different use cases and to have the functional areas verified.

With that, here is the key information from the XGVela point of view. XGVela has its own GitHub, and XGVela has its own wiki under LFN. The TSC meeting happens every Tuesday at 1 p.m. UTC — please do join us; the meeting ID is provided here. There are two main ways to communicate with XGVela if you have any questions: one is the mailing lists, and the other is the Slack channel and the WhatsApp group; the group details are provided in the wiki. The GitHub is where we had our initial discussions before we became part of LF Networking — we were using GitHub even for documentation, so you will find both the code and the early discussions there. As for the mailing lists, we mainly use the xgvela-tsc list at lists.xgvela.org, where the major discussions happen about what we have to do and what is going on right now — it is pretty active. We also have the main list at lists.xgvela.org, where we can take up any other functional discussions. With that, I conclude, and I will open the floor for any questions. Thank you.