Good morning, everyone. I'm Xulie, from China. We have three speakers today; let them introduce themselves first. I'm Kai from 99Cloud. I'm Hu Wei from Intel. Today's topic is China Unicom's administration system for 5G edge, managing StarlingX, Kubernetes, and OpenStack edge DCs. Let's have a quick review of China Unicom's 5G edge work. At the Berlin summit last year, together with Intel and 99Cloud, we delivered a session comparing open edge projects with ETSI MEC. We showed the design of the China Unicom edge platform based on Open Infrastructure projects, aligned with the ETSI MEC reference architecture. Last year we had the design; today we have an online system supporting the MEC business, and we will show you the deliverables and the back-end details. First, let me introduce China Unicom CUBE-Edge. 2019 is the 5G commercial year for China. Five days ago, on October 31st, China's 5G networks went live and China Unicom launched its 5G brand with the slogan "Let the future grow." We are committed to innovative technology, enabling industries and bringing unlimited experience to users. From 2G to 5G, the technology is constantly updated. Now China Unicom's edge work is focused on digital transformation for the industries. With MEC, the edge cloud is a key infrastructure for next-generation IT; we will reform traditional business with MEC and support innovation on MEC with AI, data, and mobility. Today China Unicom is supporting 60 edge projects across more than 20 provinces around China. For example, we cooperated with Tencent in the cloud gaming area and with BMW in the intelligent connected vehicle area. We have also incubated innovative products and explored new business models such as intelligent manufacturing, smart healthcare, new retail, smart parks, and so on. China Unicom is building 1,000 edge nodes around China. The CUBE-Edge service platform has three layers.
There are the infrastructure layer, the capability layer, and the app and service layer. The infrastructure layer provides virtual machine and container resources; the capability layer provides network capabilities such as load balancing and virtual firewall, and application capabilities such as vCDN, AI, and so on. We have a central site for operation and maintenance. We have built a business support center, including BSS, OSS, marketing support, and customer-service support, and we also do the management and orchestration of apps in the central site. Next, let me introduce the architecture overview for CUBE-Edge. The number of MEC sites in the future will be very large, involving three levels of DCs: central DC, regional DC, and edge DC. In the central DC, we have the OSS, the MEC developer center, and NFVO and MEO orchestration. In the regional DC, the MEPM takes charge of the lifecycle management of MEC apps, and we also have regional monitoring and cloud-edge orchestration. In the edge DC, we have lightweight IaaS and CaaS; on top of them are the VAS and the apps. Let me introduce the deployment architecture. The MEO is deployed in the central DC, MEPMs are deployed in 31 regional DCs, and MEPs are deployed at edge sites. This three-tier architecture lets edge applications be quickly deployed to the edge as needed, and the central MEO is connected to BSS and OSS. According to the 3GPP standard, traffic rules are configured by the MEPM, and the UPF, VNF, and MEP modules will be deployed at edge sites as needed: the UPF is deployed by the NFVO and the MEP by the MEPM. In most cases the edge DC, a shared-infrastructure model, will be the first option for application hosting, with clients agreeing on SLAs for latency, bandwidth, and security. In some 2B cases an on-site client DC, meaning a dedicated-infrastructure model, is also supported, especially for strategic partners. Okay, next over to Li Kai.
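As a rough illustration of the traffic rules the MEPM configures, the following sketch loosely follows the TrafficRuleDescriptor shape from ETSI GS MEC 010-2; all identifiers and values here are hypothetical, not taken from the CUBE-Edge system.

```yaml
# Hypothetical traffic-rule sketch, loosely following the
# TrafficRuleDescriptor from ETSI GS MEC 010-2.
# Every identifier and value below is illustrative only.
trafficRuleId: rule-vcdn-01
filterType: FLOW                    # match on individual flows
priority: 10                        # rule evaluation priority
trafficFilter:
  - srcAddress: ["10.10.0.0/16"]        # UE-side source range (example)
    dstAddress: ["203.0.113.20/32"]     # edge application address (example)
    protocol: ["TCP"]
    dstPort: ["443"]
action: FORWARD_DECAPSULATED        # steer matched traffic to the local edge app
```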
Okay, I will take over for the demo part and the detailed technology behind this demo. This is a video, around a two-minute demo. First, a diagram showing one edge DC; you can see the different parts inside a specific edge DC. This shows the list of IaaS — actually not only IaaS, it also includes Kubernetes and Docker infrastructure. And we have a geographic view — this is just a demo, it's not the live production system, but you can see the functions: a map view of those edge DCs. This is the deployment of an application. First, you have a very basic application: you input the application descriptor, description, name, and so on. Second, you select the IaaS to deploy to. The third and fourth steps are very important for edge, because you need to define the rules, especially the network rules, including the DNS rule and the traffic rule. This is something we will talk about later: in our application descriptor design we need to define policies related to those two parts. Okay. Sorry — it's just replaying. So, define the rules, define the traffic: destination, source, network information. Then we deploy the application. We will talk about this later: if you're familiar with VNFD, the VNF descriptor, you will know the TOSCA descriptor, which is used to define a VNF, right? We also implemented that for edge applications, but that is not all, because we have some cloud-native applications, and we use YAML to define those; we will introduce this later. This shows a scale policy defining the increment by which you scale out or scale in and the maximum number of application instances. This is a demo of the scaling change — scale out and scale in. We will also demo an upgrade. This is actually a VM-based application; we can handle both VM-based applications and Docker, container-based applications.
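The scale policy shown in the demo might be sketched like this; the field names are hypothetical, not the platform's actual schema.

```yaml
# Hypothetical sketch of the scale policy shown in the demo.
# Field names and thresholds are illustrative, not the real schema.
scalePolicy:
  minInstances: 1            # never scale in below this
  maxInstances: 5            # never scale out beyond this
  stepCount: 1               # instances added or removed per scaling action
  triggers:
    - metric: cpu_utilization
      scaleOutThreshold: 80  # percent; scale out above this
      scaleInThreshold: 20   # percent; scale in below this
```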
We need to define the application descriptor to enable those scale-out and scale-in policies. When uploading an application from a raw image, you need to add some information — this is actually the process of defining the descriptor. We developed a self-service tool that enables third-party users to import their application as a raw image, and we add the descriptor information in the backend. This is the upgrade. Okay, that's it. We have shown the full lifecycle of an application: deployment, scale out, scale in, upgrade, and import of the application package. And this diagram actually covers more than the application — sorry, some words are still in Chinese. It shows the path from the official production repository to the edge DC. We showed the lifecycle covering edge application distribution and edge application deployment management. We also have a lifecycle before the application: we allow ISVs to upload the application as an image, and we also provide a source-code base, including CI/CD and artifact processes, to enable the development lifecycle for applications. The lower layer shows the chain. This is for source code, and we open some APIs to let them call the capabilities we expose at the edge; they can use those APIs to enable their applications. This is an image, a raw image. After the raw image, we need to add the descriptor information, in YAML or TOSCA; it then becomes an official image and goes to the production repository. Okay, now we are going to introduce the backend technologies behind this design. We will cover four topics. This is a high-level picture: some modules we have already implemented in this system, and some are in our plan to implement. For example, we enable OpenStack, Kubernetes, and StarlingX — we are compatible with those underlying infrastructures.
We are also investigating some other components to see whether we can enable them in our solution. We use some ops tools to do data collection for operations information. Another module we are currently investigating is OpenNESS, for the data plane part; we will introduce this later. This is very special if you are managing an edge DC, because there is traffic from the user side to your edge applications. And again, this is a full lifecycle. First, we will talk about the options for the edge DC. When we look into this diagram, there are actually several different types of objects we are going to orchestrate. We have the CT part: the UPF, or in 4G a gateway — what we call xGW. This is actually defined in the ETSI standard; it's the data plane part. We also have applications that expose services — value-added services, in short what we call VAS. And we also have firewalls managing the IT/CT boundary and the cloud/edge boundary. So you can see we have different types of objects to be orchestrated. And this shows, for each type of object, the target infrastructure that will host it; for some objects we can cover the CI/CD and microservice governance processes, while several are not included. In summary, we have requirements for VM-based applications — especially some VNFs, or what we call the MEP, i.e. network functions — that need DPDK acceleration or originally only support VM-based deployment, so we need VMs. And we also have some functions, like the vehicle services China Unicom will expose in the future, and FNS, functional network services exposed to third-party applications. Those services will be cloud native, and we are trying to build a cloud-native service backend, so we need Kubernetes as the backend infrastructure.
And StarlingX is still moving from version one to version two, right — still in the process of becoming a production IaaS at the edge. So we support all three of these different types. Now we have different objects and different target infrastructures to host them, and how to describe the application is a problem. That brings us to challenge two: the edge application descriptor. When we look into the ETSI standard, in this white paper they have an application descriptor defined. Looking at the details, most attributes are quite straightforward, like the name and the provider, but some specific attributes relate to the edge: traffic, DNS, and latency. Those attributes are very important at the edge. So, given this reference application descriptor proposed by ETSI, how do we design a real descriptor to serve the orchestration purpose? There are several candidates. First is TOSCA: the VNF folks are quite familiar with TOSCA because VNFs are defined in TOSCA format. We also have existing YAML descriptors for cloud-native applications. In OpenStack we are quite familiar with Heat, which is actually YAML-style but for VM-based applications. And we also have Helm charts. So this is how we define the different drivers and the application descriptor formats. For VM-based applications and the data-plane/MEP network-related objects, we use TOSCA because it is mature; when you look into features like scale-out, you can easily find the reference in the TOSCA format. And for VNFs, there are a lot of VNF validations in telcos, and those experiences can be reused for applications.
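A minimal TOSCA-style sketch of a VM-based edge application with a scaling policy, in the spirit of the TOSCA descriptors mentioned above, might look like the following; the node names, sizes, and policy properties are illustrative, not taken from the CUBE-Edge descriptors.

```yaml
# Minimal TOSCA-style sketch: a VM-based edge application plus a
# scaling policy. All names and property values are illustrative.
tosca_definitions_version: tosca_simple_yaml_1_2

topology_template:
  node_templates:
    edge_app_vdu:
      type: tosca.nodes.Compute        # VM hosting the edge application
      capabilities:
        host:
          properties:
            num_cpus: 4
            mem_size: 8 GB

  policies:
    - app_scaling:
        type: tosca.policies.Scaling   # generic TOSCA scaling policy type
        targets: [ edge_app_vdu ]
        properties:                    # property names assumed for illustration
          min_instances: 1
          max_instances: 5
          default_instances: 2
```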
But for cloud-native applications, our original design used TOSCA as well. Later we found that, for a lot of developers, cloud-native applications are defined in YAML. So the process was: we translated the format from YAML to TOSCA, and then, when it went down to the underlying layer — what we call the PaaS — it was translated from TOSCA back to YAML. We found that this was maybe not a good design, so we changed it back and now use raw YAML as the descriptor for cloud-native applications. And one thing I missed: in China Unicom there are some extensions. Partners like Tencent, Baidu, and Ali have their own edge IaaS and software stacks, with application ecosystems behind those platforms. For those, we need to follow their standards for the application descriptor, because that makes it easy to import their back-end ecosystems via their standards. Okay. The fourth part is about a module that is not that familiar to IT folks. In ETSI, when you look into the reference architecture, people might be confused about the MEP part. The MEP's major purpose is handling the lifecycle for applications — things like health checks and subscriptions — and it also covers some networking scope. So we had different approaches to building the MEP. First, we could build the code from scratch — but that's not easy. Second, there is an Intel open source project called OpenNESS. It's actually a new open source project; hopefully it can be brought into the OpenStack Foundation. Wei will introduce it later. When we look into OpenNESS, it can cover some of the MEP scope, like service authentication and service registry, and especially the DNS part.
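As a concrete example of the raw-YAML route, a cloud-native edge application would typically be described with a standard Kubernetes manifest like the one below; the names and the image reference are hypothetical.

```yaml
# Standard Kubernetes Deployment manifest -- the kind of "raw YAML"
# descriptor used directly for cloud-native edge applications.
# All names and the image reference are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-vcdn
  labels:
    app: edge-vcdn
spec:
  replicas: 2                  # matches a simple scale policy of two instances
  selector:
    matchLabels:
      app: edge-vcdn
  template:
    metadata:
      labels:
        app: edge-vcdn
    spec:
      containers:
        - name: vcdn
          image: registry.example.com/edge/vcdn:1.0   # hypothetical registry
          ports:
            - containerPort: 8080
```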
They have a DNS service mapping IPs to domain names, and — very important — DNS local redirection, which is the key for edge redirection. It also has some capability for managing the N6 interface; if you are familiar with the NFV architecture, there are interfaces there, and it can handle the data plane from the UPF or the gateway. Okay, that's it for the demo and technology details part. Thank you. So, I'm Wei from Intel; I work in a software department at Intel. Our role in this case, with 99Cloud and China Unicom, is to support our ecosystem partners in leveraging open source projects to speed up their edge computing application development and deployment in their real production environments. For my part, I would like to give you some updates on the open source projects with which we helped the 99Cloud and China Unicom people in this case. Let's first start with the StarlingX project. I took this slide from our StarlingX project update in the keynote. This case is well aligned with our collaboration with China Unicom, because StarlingX is actually a relatively new open source project: it was open-sourced last year, in May, at the Vancouver summit. We had our first release that October, and at that time we began some collaboration and early evaluation with the China Unicom and 99Cloud people to see how we could help them with their edge DC project. And then, as Kai mentioned, we are now at phase two of StarlingX: we had a very important 2.0 release in September, and it is, I would say, a very big change for StarlingX.
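Conceptually, DNS local redirection means the edge DNS answers queries for a chosen domain with the local edge application's address instead of the central one. A hypothetical rule, loosely following the DnsRuleDescriptor shape from ETSI GS MEC 010-2 rather than the actual OpenNESS schema, might look like:

```yaml
# Hypothetical DNS redirection rule, loosely following the
# DnsRuleDescriptor from ETSI GS MEC 010-2. Illustrative only;
# this is not the actual OpenNESS configuration schema.
dnsRules:
  - dnsRuleId: redirect-vcdn
    domainName: "cdn.example.com"   # domain to intercept at the edge
    ipAddressType: IP_V4
    ipAddress: "203.0.113.20"       # local edge application instance
    ttl: 30                         # short TTL so clients re-resolve quickly
```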
The most important part is that we integrated the Kubernetes platform into the stack, which means we provide container support. As a community we are growing steadily, with a lot of ecosystem partners like China Unicom and 99Cloud. We also eliminated a lot of patches, which means StarlingX will run on completely unmodified OpenStack releases — as StarlingX, we don't keep local patches. In the future, as you can see from what we are doing in this case, our goal is to partner with OSVs and service providers to scale to real deployments of StarlingX. That's our next phase. On this page I would like to give you some updates on StarlingX 2.0. As I mentioned, it is a very big and very important release, so I list here some very high-level updates for release 2.0. It actually turns everything upside down. First, as a community, we did a lot of work to build the technical ecosystem and invite contributors from all over the world, and the StarlingX community is governed in a very open source way: we elected a technical steering committee with members from different companies. So there was a lot of work in this area. Second, as I mentioned, we made a huge stride in evolving the architecture of the project; I will give a specific slide for that. The rest of the big updates include: we added significantly to our documentation suite, did some security enhancements and ease-of-deployment enhancements, and for the networking part we did a lot of work, especially for edge computing, integrating high-performance components into the stack.
We also integrated the PTP protocol for highly accurate synchronized timing. Here are some details on what we have done to achieve these goals. First, as I mentioned, when we announced StarlingX, and also in the StarlingX 1.0 release, StarlingX was actually a hardened OpenStack platform, with some enhancements and some added components — what we call the flock services — in the stack. Starting from 2.0, we integrated the Kubernetes platform, and StarlingX is now a cloud-native platform supporting VMs via the OpenStack Nova component, bare metal via the OpenStack Ironic component, and now containers via the Kubernetes platform. The current architecture is: on top of the OS, the first layer is the Kubernetes platform, and on top of Kubernetes, OpenStack runs as containerized services. With that architecture, StarlingX provides a lot of flexibility for end users and our partners with different requirements for their use cases. As Kai mentioned in the previous slides, in some cases they need VNF workloads, and in some cases they need container workloads for their use case. So that's a big change for StarlingX from the architecture perspective. For the other things I mentioned: we eliminated patches against upstream OpenStack. Actually, in 2.0 we were down to five patches against Nova, and I think the latest status is that these patches have been accepted by the OpenStack upstream. That means from now on StarlingX will run on completely unmodified OpenStack. And we migrated from Pike, in 1.0, to Stein now.
So we will keep pace with upgrades of the upstream OpenStack components. That is the big update from the architecture preview. Some other features: as I mentioned, we did a lot of work on the documentation suite. If you are interested in StarlingX, go to the wiki site; you will find a completely refreshed wiki with a lot of content there, whether you are a developer or a user. For the security part, we have enabled TPM devices to store secrets and enabled UEFI secure boot on the stack. For the configuration and deployment side, for the initial host environment we use Ansible, and after that we use Armada and OpenStack-Helm to deploy the OpenStack services on top of Kubernetes. For the network part, we support dual-stack IPv4 and IPv6, with Calico as the CNI, and we integrated Multus and SR-IOV for high-performance networking capabilities. We also integrated the PTP protocol for accurate time synchronization. And with all the architecture changes, we refreshed many components to align with OpenStack Stein and updated the storage part of the stack, Ceph, to the Mimic version. So that's it for StarlingX. The second open source project that we think can help China Unicom and 99Cloud in this collaboration is called OpenNESS. OpenNESS was previously called the Intel NEV SDK, the Network Edge Virtualization SDK. It is now open source and renamed to OpenNESS, which stands for Open Network Edge Services Software.
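For the Multus and SR-IOV integration mentioned above, a pod obtains its high-performance secondary interface through a NetworkAttachmentDefinition; a minimal sketch looks like the following, where the resource name and IPAM parameters are hypothetical examples.

```yaml
# Minimal Multus NetworkAttachmentDefinition using the SR-IOV CNI.
# The resource name and network parameters are hypothetical examples.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-data-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_net  # device-plugin resource
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24"
      }
    }
```

A pod then requests this network with the `k8s.v1.cni.cncf.io/networks: sriov-data-net` annotation, getting a direct SR-IOV virtual function alongside its default Calico interface.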
I got this one from the OpenNESS website, with a brief introduction and some points on why we do this project. Basically, OpenNESS is an open source reference toolkit to enable the ecosystem, especially integrators and application developers, to create and deploy new edge applications and services. It makes things easier for them because of our goals here. First, as you can see, it abstracts away the complexity of the underlying network on the edge node. It also enables some very important features like secure onboarding and management of applications with a GUI web portal. It provides common functions for the edge node, like access termination and traffic steering — all those kinds of things. And it exposes standard APIs to edge application developers. So that is our goal. This slide gives the project information; since it is relatively new — younger than StarlingX — you can find more information on GitHub, and the source code is there. You can have a try with this project. The last slide for this part goes into a little more detail on the OpenNESS project. You can see OpenNESS includes several important modules, including the DNS service, the traffic policy, and — I would especially like to mention — what we call the data plane services, in the red block on this diagram. This is a DPDK-based data plane software implementation. Especially for this case, I think 99Cloud would like to leverage this OpenNESS project, especially this NTS component, to have a try in this China Unicom project.
Other things include edge node configuration and edge node interface configuration, because we have different traffic from the UE side to the edge node side, and we need configuration for the different traffic types on the edge node, on the edge platform. So, yeah, that's what I have for my part. How are we on time — do we have time for Q&A, or have we run out? Any questions? "What 5G standard do you support, SA or NSA?" In this architecture, we can support both SA and NSA, and also the 4G EPC. Any other questions? Okay. Thank you.