Good morning, good afternoon, good evening, everyone, and welcome to our session. This is Qihui Zhou from China Mobile and Seshu Kumar from Huawei. Today's topic is our thoughts on telco cloud-native evolution and open-source practice. Ever since 2019, cloud native has been one of the hottest topics in the telecommunication industry, and China Mobile has done a lot of research and participated in many SDOs and open-source projects. In today's session, we will introduce our thoughts on telco cloud-native evolution and one of our most important open-source practices.

Firstly, let's look at China Mobile's cloud computing situation. China Mobile has three types of cloud: e-Cloud, IT cloud, and network cloud. e-Cloud is our public cloud. It provides general, standard cloud services to individual users as well as customized cloud solutions to enterprise users, and it also carries some of CMCC's service platforms, such as our AI and big data platforms. IT cloud is a private cloud carrying CMCC's internal IT systems, for example our customer service, charging, and email systems; its overall design is based on the requirements of those IT systems. The third cloud is the network cloud, the most important cloud for us as an operator. It carries our 4G and 5G network functions, some value-added network functions such as the multimedia messaging service, and some network-function-related management systems such as MANO and EMS. This cloud is also a private cloud, but it has very strong telecom characteristics. Looking across the three clouds: because e-Cloud and IT cloud have fewer telco features and less telco history, they are moving faster on cloud-native evolution. e-Cloud already provides PaaS services that help applications become cloud native, and containers as cloud-native infrastructure are already online.
Our IT cloud also has a PaaS platform, and the containerization rate of its services is now 100%. However, the cloud-native upgrade of the network cloud is much slower: its infrastructure is still virtualized resources, the automation rate is still very low, and containers and microservices remain invisible. So for today's topic, cloud-native evolution is mainly about this cloud.

Here, let's have an overview of CMCC's network cloud. Since 2018, CMCC has been building a centralized network cloud across eight regions in China. Our 5G core, IMS, and EPC run on this cloud, and the cloudification ratio of CMCC's network is up to 75%. We are currently planning to build edge clouds in each of our 31 provinces. Our network cloud uses NFV plus SDN as its technical architecture. The infrastructure is virtual machines, with OpenStack used for infrastructure management. The network functions are all VNFs in virtual machines. Containers are involved, but they are wrapped inside virtual machines and are neither visible nor manageable. We will gradually introduce manageable containers and CNFs into our network cloud.

Having introduced the basics of our network cloud, let's look at the motivations for a telco operator to evolve towards cloud native. The first motivation comes from service development. 5G and edge computing now serve major enterprise customers, whose requirements are far more diverse than those of individual users. Let's take 5G core slicing as an example: different industry use cases need different 5G core network slices. The slices required by NB-IoT applications such as smart metering mainly need signaling functions, with bandwidth of only a few kbps. However, the slices used by video broadcasting applications mainly need data transmission capability, with bandwidth varying from 10 Mbps to 100 Mbps.
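To make the contrast between those two slice types concrete, here is a minimal Go sketch. The type and field names are illustrative, not a real slice template format; the SST values follow the 3GPP TS 23.501 convention (1 = eMBB, 3 = MIoT).

```go
package main

import "fmt"

// SliceProfile is a simplified, hypothetical model of a 5G core
// slice template, loosely inspired by the S-NSSAI idea.
type SliceProfile struct {
	UseCase     string
	SST         int    // Slice/Service Type: 1=eMBB, 2=URLLC, 3=MIoT
	MinBandKbps int    // illustrative downlink bandwidth floor, kbps
	MaxBandKbps int    // illustrative downlink bandwidth ceiling, kbps
	Focus       string // dominant capability of the slice
}

// profileFor returns a made-up profile for the two use cases from
// the talk: NB-IoT smart metering vs. video broadcasting.
func profileFor(useCase string) SliceProfile {
	switch useCase {
	case "smart-metering":
		// signaling-heavy, only a few kbps of user-plane traffic
		return SliceProfile{useCase, 3, 1, 10, "signaling"}
	case "video-broadcast":
		// data-transmission-heavy, 10 to 100 Mbps
		return SliceProfile{useCase, 1, 10_000, 100_000, "data"}
	}
	return SliceProfile{UseCase: useCase}
}

func main() {
	for _, uc := range []string{"smart-metering", "video-broadcast"} {
		p := profileFor(uc)
		fmt.Printf("%s: SST=%d, %d-%d kbps, focus=%s\n",
			p.UseCase, p.SST, p.MinBandKbps, p.MaxBandKbps, p.Focus)
	}
}
```

The point of the sketch is simply that one fixed network design cannot serve both profiles well; the slice templates differ by four orders of magnitude in bandwidth and in which functions dominate.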
To better meet customers' requirements, the network cloud needs to improve its agility and flexibility, and that agility and flexibility can be achieved through cloud native. The second motivation comes from the technical evolution of the whole industry. Cloud-native technologies such as containers and microservice architecture are already used in the existing network cloud, but invisibly. In the future, we want to make them visible, manageable, and maintainable so that we can fully benefit from cloud native. The third motivation comes from the need to optimize our network cloud. As more network functions move onto the cloud, we definitely need to improve resource utilization and product delivery speed, and reduce cost and management difficulty. So we think that moving towards cloud native is an inevitable trend.

We believe it requires the combined evolution of network functions, the network cloud, culture, and design mindset to achieve cloud native well. Firstly, an agile infrastructure platform should be built and platform services should be provided. Containers and Kubernetes should be selected as the new infrastructure and orchestrator, and a PaaS containing all the common, reusable services should be built, so that network functions can focus only on business logic and the time to market of new network services can be shortened. This is the base for cloud native. Then, network functions should adopt a microservice architecture to achieve flexibility, which in turn requires that the platform can support the running and management of microservices. This is the key part of cloud native. CI/CD and a DevOps culture should also be introduced to achieve automation, which can help us speed up lengthy and slow purchasing and go-live procedures. And last, all developers and maintainers should follow cloud thinking in their design and management, applying the ideas of decoupling, sharing, resilience, and automation in their development and operations.
These are the high-level action points for the network cloud to achieve cloud native. Up to now, China Mobile has done a lot to explore cloud-native evolution, and this page is a summary. We have been doing technical research on containers, microservices, and PaaS. For containers, we have worked out a technical architecture and done testing trials. Although our attitude towards introducing containers into our core cloud is relatively conservative, a technical standard is ready for use, and we have started using it at the edge. For microservices, design and management are the topics. Looking at network function designs from many partners, we draw the conclusion that there is no standard for network function microservice design. Commonly, they still follow the structure of physical network functions, which contains a load balancer, a business processing unit, an OAM unit, and a data storage unit, though developers are now working on splitting these big units into smaller pieces. For microservice management, we don't have any solutions yet, but we think that knowing whether to manage, what to manage, and how to manage is the direction for microservice research. For PaaS, we have been working on the PaaS structure and capabilities in the network cloud. How to introduce PaaS and merge it with the NFV architecture, or even change that architecture, is one of the key questions we are trying to answer now; which PaaS capabilities to provide, and their corresponding use cases, is another thing we are exploring, and this work is currently happening in the XGVela project.

So much for technical research. On the standards promotion side, our team has also done a lot. We have relatively mature technical and interface standards for the container layer. We have design principles for cloud-native applications in the network cloud, a standard for cloud-native maturity evaluation of applications, and a technical standard for PaaS in the telecom network cloud; the latter two are what we started this year.
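As a rough illustration of that first decomposition step, the classic four-unit structure just described can be modeled as independent microservices behind a small round-robin dispatcher. This is a toy sketch; the role names and service names (amf-proc-0 and so on) are hypothetical, not any vendor's actual design.

```go
package main

import "fmt"

// Role names the four units the talk says NF designs still follow.
type Role string

const (
	LoadBalancer Role = "lb"
	Business     Role = "business"
	OAM          Role = "oam"
	Storage      Role = "storage"
)

// Dispatcher is a minimal round-robin load balancer over the
// business-processing replicas, standing in for the LB unit.
type Dispatcher struct {
	replicas []string
	next     int
}

// Route sends a request to the next replica in round-robin order.
func (d *Dispatcher) Route(req string) string {
	target := d.replicas[d.next%len(d.replicas)]
	d.next++
	return fmt.Sprintf("%s -> %s", req, target)
}

func main() {
	// An NF decomposed into per-role services instead of one VM image;
	// each role can now scale and be managed independently.
	nf := map[Role][]string{
		Business: {"amf-proc-0", "amf-proc-1"},
		OAM:      {"amf-oam-0"},
		Storage:  {"amf-udsf-0"},
	}
	d := &Dispatcher{replicas: nf[Business]}
	fmt.Println(d.Route("registration-req"))
	fmt.Println(d.Route("registration-req"))
}
```

The splitting work mentioned above is essentially about breaking the Business entry in that map into many smaller, independently deployable services.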
In ITU, we lead one standard named "Functional requirements of PaaS for cloud-native applications", and in ETSI we are following the NFV report on VNF generic OAM functions, because that is an important use case of PaaS for implementing cloud-native OAM functions. So much for standards promotion. On open-source practice, we follow CNCF projects including Kubernetes, Knative, and many others, and we also started a new project named XGVela to explore the platform capabilities required by the network cloud. We have found that cloud native is closely related to first-hand experience of network function design, development, and operation, so it requires us as an operator to engage in all these procedures and take an internal look into the network functions at the code level. We think starting from open-source practice is a good way to do that. Now I'll hand over to Seshu to introduce XGVela, the most important open-source project we have participated in.

Thanks, Qihui, for the handover and the nice introduction of the problem statement. Hi, everyone, this is Seshu Kumar from Huawei, and I'll be talking about XGVela. XGVela is a next-generation PaaS platform: an open-source, cloud-native telecom PaaS that tries to address some of the issues discussed in the previous slides. Let's jump into the details of XGVela and how it is all set up. Next slide, please. This is the overview of XGVela. XGVela started last year, in April 2020. We started with 13 TSC members, and we also have other contributors who are more than eager to join us in this cause. The main scope of XGVela, as you see here, is that we are a "plus plus" on top of general PaaS; you can call it "general PaaS plus plus", mainly looking at telecom-specific problem areas.
By that we mean that we build on the general PaaS platform but specialize in the specific problems that arise in telecoms. General PaaS as such is pretty exhaustive; it takes care of a lot of things coming from the enterprise world and is mainly designed for enterprises. So when we try to apply it one-to-one to telecoms, we find that many of the integrations are not smooth. With that in mind, and to keep standards alignment, we want to build a PaaS platform above it. As you see in the diagram, we have the IaaS and the CaaS; the CaaS can run on bare metal or on top of the IaaS, and either is fine for the way we want to take it. Then we have the general PaaS, the blue box shown by the dotted area within the red lines. On top of it we have two major parts, one green and one yellow, and the scope of XGVela is mainly these two areas. They constitute our main scope, which covers the extensions to the general PaaS and the adaptation layer. By adaptation layer, we mean the area in which we adapt ourselves to the general PaaS, or extend the general PaaS, to do the specific jobs required by anything built on top. The lines that go from top to bottom are out of scope. At the top, the green lines are the APIs we expose from the telco PaaS itself, and the blue lines go to those specific areas that we need to access directly from the general PaaS. I know the diagram is a little confusing at first glance; I'll try to explain it further as we move on. The applications are what we run on top, shown in purple.
The purple boxes are where we intend to have applications, and there are different kinds of applications we are looking at right now. For phase one, the first release, we are trying to realize the green box and the yellow box, with the yellow box integrated against a specific general PaaS such as OKD, the community Kubernetes distribution behind OpenShift. That is where we are doing the integration work and will showcase some of the functionality. Next slide, please.

Right, so this is the extension I was talking about; here we get a better picture of what we meant in the previous diagram. On the right-hand side is the PaaS management platform, the vertical box, which covers the functionality required vertically across the PaaS layers, whether specialized or generalized. The general PaaS provides the key areas that cater to the major functionality required to have a general PaaS running; the telco PaaS adds the key telecom functions, in short the FCAPS part; and the yellow part is the adaptation layer, an extension to the general PaaS that makes this integration smooth. So, in summary, we are mainly working on three major capability groups: the PaaS capabilities required for implementing the network functions, the PaaS capabilities required for managing them, and the PaaS capabilities for exposing network function services to external consumers. By external consumers we mean the end users, or the users sitting above the platform.
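Those three capability groups can be encoded as a small Go taxonomy. This is only a sketch of the split; all interface and method names here are illustrative placeholders, not XGVela APIs.

```go
package main

import "fmt"

// 1. Capabilities an NF consumes while implementing its business logic.
type NFRuntime interface {
	Discover(service string) (string, error)
}

// 2. Capabilities used to manage NFs (the FCAPS side).
type NFManagement interface {
	RaiseAlarm(nf, severity, text string) error
}

// 3. Capabilities that expose NF services to northbound consumers
// (orchestrators, analytics tools, control loops).
type NFExposure interface {
	Expose(nf, api string) (string, error)
}

// inMemPaaS is a toy, in-memory implementation of all three groups.
type inMemPaaS struct{ registry map[string]string }

// Compile-time checks that the toy satisfies every capability group.
var (
	_ NFRuntime    = (*inMemPaaS)(nil)
	_ NFManagement = (*inMemPaaS)(nil)
	_ NFExposure   = (*inMemPaaS)(nil)
)

func (p *inMemPaaS) Discover(service string) (string, error) {
	addr, ok := p.registry[service]
	if !ok {
		return "", fmt.Errorf("unknown service %q", service)
	}
	return addr, nil
}

func (p *inMemPaaS) RaiseAlarm(nf, severity, text string) error {
	fmt.Printf("ALARM %s %s: %s\n", severity, nf, text)
	return nil
}

func (p *inMemPaaS) Expose(nf, api string) (string, error) {
	return "https://paas.example/" + nf + "/" + api, nil
}

func main() {
	p := &inMemPaaS{registry: map[string]string{"smf": "10.0.0.7:8805"}}
	addr, _ := p.Discover("smf")
	fmt.Println(addr)
	_ = p.RaiseAlarm("smf", "minor", "heartbeat missed")
}
```

The design point is that the same platform serves three very different audiences, so keeping the three interface groups separate keeps each audience's contract small.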
Those consumers could be a service orchestrator, an analytics tool, a control loop, or any other design tool that takes this PaaS into consideration. That is how we integrate this specific PaaS layer upwards to the other consumers. Next slide, please.

This is the code-specific view of where we stand with XGVela as of today. The XGVela code has been seeded from Mavenir's MTCIL. It is a pretty versatile platform that already has a lot of adoption in commercial deployments, and we have taken the seed code from there; thanks to Mavenir for contributing it. We are still working on it to make it a complete end-to-end stack with certain use cases, as we will see in the coming slides. The current seed code contains CMaaS, TMaaS, FMaaS, the VES Gateway, the CIM, and a Helm-based packaging platform, with descriptions shown here. The major parts we have covered: CMaaS takes care of the Day 0, Day 1, and Day 2 configuration flows. TMaaS, the topology management service, discovers Kubernetes services and builds up the 3GPP managed objects, what we call MOs, for the network functions, so they can be consumed later. FMaaS, as the name says, is fault management as a service; it mainly takes care of alarms and events, with threshold-crossing alerts working via MMaaS. MMaaS is the metrics management layer, which talks to Prometheus and collects metrics from the managed objects.
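The TMaaS idea just described, turning discovered Kubernetes services into 3GPP-style managed objects, can be sketched as a simple mapping. The label keys and the distinguished-name shape below are assumptions for illustration, not the actual TMaaS schema.

```go
package main

import (
	"fmt"
	"strings"
)

// K8sService is the minimal slice of a discovered Kubernetes
// service that this sketch needs: its name plus some labels.
type K8sService struct {
	Name   string
	Labels map[string]string
}

// moDN derives a 3GPP-style distinguished name for the managed
// object, e.g.
// "SubNetwork=sn-east,ManagedElement=me-1,NFType=AMF,NFService=amf-svc".
// The label keys (subnetwork, managed-element, nf-type) are made up.
func moDN(svc K8sService) string {
	parts := []string{
		"SubNetwork=" + svc.Labels["subnetwork"],
		"ManagedElement=" + svc.Labels["managed-element"],
		"NFType=" + strings.ToUpper(svc.Labels["nf-type"]),
		"NFService=" + svc.Name,
	}
	return strings.Join(parts, ",")
}

func main() {
	svc := K8sService{
		Name: "amf-svc",
		Labels: map[string]string{
			"subnetwork":      "sn-east",
			"managed-element": "me-1",
			"nf-type":         "amf",
		},
	}
	fmt.Println(moDN(svc))
}
```

A mapping like this is what lets traditional management systems, which speak in 3GPP managed objects, address workloads that actually live as Kubernetes services.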
Along with that, via MMaaS and Prometheus, the fault manager collects the events and then manages them further based on subscriptions. One of the integration points we are looking at is ONAP: we are VES 7.1 compliant, which is what ONAP's DCAE understands. So we can work as an agent for DCAE, handing over the CNF events and metrics, via Prometheus or VES, to DCAE for further closed-loop operations. The VES Gateway, as I said, is based on the ONAP VESPA project, with enhancements to support multi-NF streams, so we can configure multiple streams and have data delivered through XGVela to the northbound services. The CIM, the CNF interface module, is an important concept here. It is a sidecar that acts as the API integration point for applications; it takes care of all those interaction points for us and also carries some of the load, in the familiar sidecar style we know from Istio. And then everything is delivered as Helm-based packages, so it is easy to bring in programmability on a need basis. The MMaaS part is still in progress; we expect its seed code to be done around the end of this year, and we can expect more on the Prometheus side, with Prometheus-based metrics driven by the use case. Again, the use case is not what we want to deliver as such; we keep a use case as a verification vehicle to ensure the end-to-end functionality can be showcased. With that, this is the overall picture: the current seed code is somewhere around 55K lines of code, and the seeding was done in December 2020.
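To show what VES 7.1 compliance means in practice, here is a Go sketch that marshals a minimal VES common event header. The field set follows the VES 7.1 event listener specification's required commonEventHeader fields; the concrete values (event name, entity names, version strings) are illustrative, not taken from the XGVela code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// CommonEventHeader carries the required fields of a VES 7.1
// common event header, as consumed by collectors such as ONAP DCAE.
type CommonEventHeader struct {
	Domain                  string `json:"domain"`
	EventID                 string `json:"eventId"`
	EventName               string `json:"eventName"`
	Priority                string `json:"priority"`
	ReportingEntityName     string `json:"reportingEntityName"`
	Sequence                int    `json:"sequence"`
	SourceName              string `json:"sourceName"`
	StartEpochMicrosec      int64  `json:"startEpochMicrosec"`
	LastEpochMicrosec       int64  `json:"lastEpochMicrosec"`
	Version                 string `json:"version"`
	VesEventListenerVersion string `json:"vesEventListenerVersion"`
}

// newFaultHeader builds a fault-domain header for an alarm raised
// against the given source. Event name and entity names are made up.
func newFaultHeader(source string, seq int) CommonEventHeader {
	now := time.Now().UnixNano() / 1000 // epoch microseconds
	return CommonEventHeader{
		Domain:                  "fault",
		EventID:                 fmt.Sprintf("fault-%s-%d", source, seq),
		EventName:               "Fault_CNF_LinkDown", // illustrative
		Priority:                "High",
		ReportingEntityName:     "fmaas",
		Sequence:                seq,
		SourceName:              source,
		StartEpochMicrosec:      now,
		LastEpochMicrosec:       now,
		Version:                 "4.0.1",
		VesEventListenerVersion: "7.1.1",
	}
}

func main() {
	h := newFaultHeader("amf-0", 1)
	b, _ := json.Marshal(map[string]any{
		"event": map[string]any{"commonEventHeader": h},
	})
	fmt.Println(string(b))
}
```

An agent posting JSON of this shape to a VES 7.1 listener endpoint is the basic contract that lets DCAE subscribe to CNF faults without knowing anything else about the platform.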
Since then we have been working on different aspects of it. The complete code is Apache 2.0 licensed, so it is easy to distribute and also to use or consume in commercial cases. The primary languages are Golang and Java; most of it is done in Golang to make it easier to adapt to different platforms, and Java is used in specific areas of the current seed code. This is not the complete code, as I said; it is the current seed code, and we expect more in the coming days.

Okay, so here are the important links for XGVela; if you want to come and join us, please make a note of these. We have the wiki pages, which are pretty elaborate so far about what we have done; that is the XGVela Confluence. We also have a GitHub wiki, which we used before we were adopted as a sandbox. In fact, XGVela was incubated as an LFN sandbox project in January 2021; before that we were using the GitHub wiki, and once we were incubated we got our own space in the LFN wiki. The TSC meeting happens every Tuesday at 1 pm UTC; the meeting ID is there, so feel free to join us. We also have a subgroup meeting where we discuss the use cases, currently 5G slicing. We are expecting a 5G-slicing-based use case for an end-to-end slice, or at least part of an end-to-end slice to start with, for the coming releases, and then we will take it forward. There are also chat tools; we use two, WhatsApp and a Slack channel, and both are available. We have an admin for the WhatsApp group; if you are interested, you can shoot a mail to the TSC mailing list and we'll be able to add you there.
We also have a Slack channel; if you want, feel free to join us there for further discussions. The code repositories are on GitHub, at github.com/XGVela, where we have the complete code repos. The current code is the combination of what we just discussed, and you are free to download it. We also have a very detailed developer guide along with the code, to help users understand it and get it up and running. For the mailing lists, as I said, we have two main ones, main and TSC; both can be used for different purposes. If you have any questions, feel free to ping either of the groups and we'll assist you further. Next slide, please.

All right, this is the release info I mentioned earlier. Release one is expected sometime this month. Its main contents will be the reference architecture, the architecture diagram we have been discussing; the telco PaaS capability code contributed by Mavenir, that is, the seed code from MTCIL; and the developer guide. The latter two we already have ready and are improving further. If anyone wants to help with that, feel free to join us; we will be more than glad to take your help in whatever way, so please provide your suggestions and corrections, and if you see anything, we can discuss it and take it forward. The future plans are ready as well; it's a huge list, so we have cut it down to something simple here. One item is an open-source NETCONF solution for CMaaS; this is something we are thinking about, with Netopeer being one of the candidates. The other is the CI/CD pipeline.
XGVela doesn't have a complete CI/CD pipeline yet, because the code we have taken so far is only one part of it; it will become functionally complete as we move on. We want continuous integration and continuous delivery to be part of this, and we also want to enhance the DevOps side to make sure we have build-time checks, daily builds, and code corrections with respect to static checks and so on. All of this is being taken into consideration. Prototype building is something we are working on as part of the use-case work: we want to start with a simple use case, or rather a verification, and that is where we want to integrate with a general PaaS to build a complete prototype. We are working with OKD, and Red Hat is doing a great job helping us make progress here. The demos involve the seed code plus a CNF plus ONAP; we want a demo that covers the orchestration part, the design and onboarding, and the integration of the CNF, or XGVela itself, with ONAP. As I said, we have two major integration points with ONAP. One is at the orchestration layer, where we do the instantiation. Right now the cloud itself is a black box for ONAP; with the XGVela integration it becomes much easier to understand which clouds are being configured, and we can also orchestrate the clouds themselves as the applications being run require. The other is DCAE, where we do the data collection; it helps with Day 2 operations, the control loop, and Day 2 integrations, letting us manage the system better. It comes in after instantiation, where we can do better integration for the entire end-to-end solution.
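Going back to the NETCONF-for-CMaaS item in the future plans: a front end built on a NETCONF server such as Netopeer would ultimately send edit-config requests against a YANG model for the CNF configuration. As a sketch of the wire payload only, the Go program below assembles such a request and checks it is well-formed XML; the YANG module and path (cnf-config/log-level) are made-up examples, not a real CMaaS schema.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

// editConfigRPC builds a NETCONF <edit-config> request against the
// running datastore. The cnf-config model is a hypothetical example.
func editConfigRPC(msgID int, logLevel string) string {
	return fmt.Sprintf(`<rpc message-id="%d" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <config>
      <cnf-config xmlns="urn:example:cnf-config">
        <log-level>%s</log-level>
      </cnf-config>
    </config>
  </edit-config>
</rpc>`, msgID, logLevel)
}

// wellFormed checks that the payload parses as XML before it would
// be written to a NETCONF session.
func wellFormed(payload string) bool {
	dec := xml.NewDecoder(strings.NewReader(payload))
	for {
		if _, err := dec.Token(); err != nil {
			// io.EOF means we consumed the document cleanly;
			// any other error means malformed XML.
			return err == io.EOF
		}
	}
}

func main() {
	rpc := editConfigRPC(101, "debug")
	fmt.Println(wellFormed(rpc))
	fmt.Println(rpc)
}
```

Plugging such Day 1/Day 2 configuration pushes through a standard NETCONF/YANG interface is what would let existing OSS tooling drive CMaaS without a bespoke API.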
The telecom PaaS functionality and the operations layer are also being continuously explored; this goes hand in hand with everything else, and we need all your support to make it better. Feel free to join us, and any questions, we can take them forward. Thank you.