Ladies and gentlemen, good afternoon, everyone. I'm Chen Dan from China Unicom. It is my great honor to give this presentation, Comparison Between Open Edge Projects and ETSI MEC Standards, together with my partners, Li Kai from 99Cloud and Ding Jianfeng from Intel. Let's start.

In the first part, I'll introduce the architecture of China Unicom's new network, that is, CUBE-Net, as well as the strategic layout and large-scale pilots of China Unicom's MEC edge cloud. As we know, with the development and combination of SDN, NFV, big data, and artificial intelligence technologies, the 5G network will become the key infrastructure in the digital transformation of all industries. 5G services have the characteristics of lower latency, larger bandwidth, and more extensive connections. The traditional vertical network architecture has many deficiencies in the aspects of resource sharing, agile innovation, flexible expansion, and simple operation and management. So, to effectively meet the service requirements of eMBB, mMTC, and URLLC in the 5G network, as well as to enhance their industrial competitiveness, most of the global telecom operators have started a network transformation aiming to establish DC-centered new networks. As shown in this picture, the 5G network of China Unicom will be an elastic network based on three layers of DCs, that is, regional DCs, local DCs, and edge DCs, which will quickly respond to new services and shorten their deployment time. Towards 2022, China Unicom will construct the regional DCs, 600 to 700 local DCs, and more than 6,000 edge DCs, with new management and business models. It is well known that multi-access edge computing is a result of ICT integration. So, for the telecom operators, tens of thousands of edge DCs are the best high-quality resources compared with the OTT companies, such as Tencent, Alibaba, and so on.
Now, China Unicom is committed to building an open edge cloud services platform and providing resource capabilities and unified APIs for application developers, aiming to accelerate the incubation and commercial use of innovative 5G services. There are three main characteristics of our edge platform. The first one is openness: we can provide open cloud APIs, open management APIs, and open network capabilities, such as LBS, RNIS, QoS, AI, real-time transcoding, cloud rendering, and so on. The second one is agility: we can provide agile IaaS and PaaS services, as well as agile orchestration and management, so the customers can apply for resources online, on demand. There are also many challenges in the process of edge DC construction. For example, specifically customized servers and lightweight OpenStack or containers are needed to adapt to the harsh environment of the telecom access offices.

This slide shows the edge business progress of China Unicom. At the 2018 MWC exhibition in Barcelona, China Unicom announced large-scale edge cloud pilots in 15 provinces and cities in China, including Beijing, Shanghai, Guangdong, and Shenzhen. The pilot scenarios include, but are not limited to, video and games, industrial manufacturing, traffic and V2X, venues, intelligent security, tourist attractions, water projects, and so on. As an official communication service partner for the 2022 Winter Olympics, we have started to establish the network infrastructure and verify the related services, such as 360-degree VR broadcast, based on 5G and edge computing technologies. The services in this video are based on 5G networks and edge computing technologies.
Now China Unicom is expecting to work with more industrial partners in the edge cloud pilots, projects, and commercial deployments, aiming to co-build a 5G-oriented edge ecosystem. Up to now, we have more than 100 partners focusing on HD video, enterprise and government services, intelligent traffic, and industrial IoT. Besides, we have established the edge cloud industry alliance in Beijing, and we are investigating the StarlingX project together with Intel and 99Cloud. In the following sessions, Li Kai and Ding Jianfeng will introduce our research and test results in detail. Welcome, Li Kai.

Thank you, Dr. Chen. So the coming two parts are about the ETSI MEC reference architecture, and our decisions and our practice. Before we go into detail: there are some differences between the telco and IT environments. In telco, there are mandatory standards, and there are reference standards, like ETSI's. So far, we only have one clear standard defined by ETSI for MEC; they define the framework, the interfaces, and the modules that should be included, but it is not mandatory, it is recommended. Still, that's what we have. If you want to build an edge, there is no successful reference that we can learn from. So we follow the standard and see what the gaps are, what we can get from the open projects, especially the open edge projects, and what we can do based on those projects. There are three open projects related to the edge. They are Akraino, announced by the Linux Foundation; VCO, which is also under the Linux Foundation, and this morning Red Hat announced that its code base may be coming; and CORD, Central Office Re-architected as a Datacenter. They have different progress and different missions.
Overall, they are all meant to be deployed in the edge data centers of telcos. We are not talking about the confusing relationship between the Linux Foundation and the OpenStack Foundation here; all of them are open technology projects. The underlayer, the edge infra, should be the same: most of them include OpenStack and Kubernetes. But the upper layers, especially the MANO and SDN layers, are different; they select ODL, ONAP, ONOS, or XOS, the last being what CORD has selected. So, in summary, OpenStack and Kubernetes should be the underlayer infrastructure options, but the MANO layer, especially the application infrastructure layer or the data plane layer, might be different. Currently, you know, Akraino was announced this year, and VCO actually started in 2016. As for released versions: StarlingX just released its first version, VCO has two PoC demos, and CORD is already at version 6.0. So those are the current versions of the three projects; let's take a look at their architectures.

First, VCO. We cannot find a code base online, but we have seen press releases saying there are two demos already. The technology they use is an open-source software stack, and this project is led by the OPNFV community and integrated with OpenStack and some networking projects. Next is CORD. It uses OpenStack as the foundation of the infrastructure. They also use XOS to do the workflow design for the orchestration, and they use ONOS as the data plane layer. They have put some effort into hardware design, using Open Rack white-box switches as the hardware layer, and, although there may be some license issues, they also use some open VNF projects. So this is the CORD architecture.
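To make the comparison above concrete, here is a minimal sketch in Python of the three stacks as described in this talk. The groupings and component lists are a simplification for illustration only, not official project taxonomies, and the empty lists simply mean the talk named nothing public for that layer.

```python
# Rough, illustrative summary of the three open edge stacks described in
# this talk. Groupings are simplifications, not official taxonomies.
EDGE_PROJECTS = {
    "CORD": {
        "infra": ["OpenStack", "Kubernetes"],
        "orchestration": ["XOS"],
        "data_plane": ["ONOS"],
        "hardware": ["Open Rack white-box switches"],
    },
    "VCO": {
        "infra": ["OpenStack"],
        "orchestration": [],  # PoC demos only; no public code base found
        "data_plane": [],
        "hardware": [],
    },
    "Akraino": {
        "infra": ["OpenStack", "Kubernetes"],  # via StarlingX / Airship
        "orchestration": ["ONAP"],
        "data_plane": [],  # varies by blueprint
        "hardware": [],
    },
}

def shared_infra(projects):
    """Return infra components common to every project's stack."""
    layers = [set(p["infra"]) for p in projects.values()]
    return set.intersection(*layers)

print(shared_infra(EDGE_PROJECTS))  # {'OpenStack'}
```

The one component every stack shares is OpenStack, which matches the talk's summary that the underlayer infrastructure converges while the MANO and data plane layers diverge.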
The next page is, I should say, more like a blueprint architecture. It's not real yet; it's the architecture announced by AT&T when they first released it. It's the most complex architecture among these three projects: you can see there are upper-layer lifecycle tools, the underlayer, and also some ops tools on the right side. OK. So two projects are related here: one is StarlingX and another is Airship. They have somewhat different scopes: StarlingX is more focused on the infrastructure within one edge site, while Airship does things like deployment to multiple sites. They have made some progress at this summit, and we saw some synergy efforts happen in the past months, but they still have some overlapping scope, especially in the deployment effort.

So this is what we have at the architecture level for those three projects; now let's take a look at the ETSI architecture. This is the ETSI MEC. There is another architecture, called MEC deployed in an NFV environment, which in more detail splits the MANO module into NFVO and MEAO, but let's look into this basic framework first. In the full architecture design from ETSI, there are several modules.
The first one is, I should say, the customer-facing service (CFS) portal, and there is also the user app LCM proxy to redirect requests to different edges. They also have the MEO, which is the application orchestrator, and the MEPM, which handles lifecycle management for the apps and the infrastructure, and also rule and request management; that rule and request management calls the VNF or PNF APIs to enable the traffic. In most deployment models, these might sit in the aggregated data center, while the mobile edge host is the edge NFVI, which hosts the applications, services, and VNFs on the edge side. So this is the overall architecture in the ETSI framework. In the white paper you can also see some detailed definitions of the reference points: Mx is about the interfaces to external users, Mm is about the management interfaces, and Mp is about the platform interfaces.

If we map this architecture to the projects we introduced before, we will skip VCO, because we didn't find a very detailed architecture or source code, and just compare the ETSI architecture with CORD and Akraino. This is the diagram. The green boxes are the modules that we have in CORD: they use ODL as the data plane, they use OpenStack and Kubernetes as the open infra, they use a tool to deploy OpenStack, and they also use XOS as a tool. It's a very rough mapping, and you can also see some other things, like the portal, or the OSS; in the OSS they provide an analytics module to pull the information from the different VNFs or infrastructure and surface it up to the OSS layer. When we go to Akraino, this is another picture, and it's still a rough design. I actually asked the guy from AT&T what the intent was when they designed the big picture; the answer is that the AT&T framework is just a reference, so we can only do a rough comparison. You can see that, for the VIM, StarlingX does the network control, compute, and storage. They also define some network edge deployment models, like unicycle
and tricycle, and they will use ONAP for orchestration; but we think it might be too heavy, and they have also designed a lightweight edge orchestration model. They also have some designs for an admin interface and a user interface. This is what we have, and from the Akraino design we saw some tools that are very important to telcos, like the OSS tools: AI tools, the testing framework, those things, and inventory tools.

Next, we will introduce our decision making and what we are going to design based on those projects. So here we are going to talk about China Unicom's MEC architecture. Before that, we have some slides to explain the decision factors behind the selection we made among those three projects. This is a diagram showing the three angles. The first is readiness: is it mature enough to support a production environment? The second is urgency, because the edge business is emerging and we should get into this area as soon as possible. The third one is the size of each circle: when we replace things with this open technology, what is the return we can get? This table shows some options for the challenges, and our consideration is: first, we need to focus on the more ready and more urgent modules; second, we will go into the hard ones later, like open VNFs, which we think might be a little bit difficult, and the white-box infrastructure, which is ongoing. So this is our strategy: first use the more mature technology, and second go to the difficult ones. Regarding CORD and Akraino, this is a diagram where we compare them from six angles: VNF capability, hardware redesign, mobile edge application, edge NFVI, multi-edge deployment, and OSS data feeding. From our perspective, CORD covers some scope like the VNFs and the white-box hardware redesign. This is very important to the edge, because some edge data centers need to be re-architected based on the hardware, but we think it might be difficult
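The three decision angles just described (readiness, urgency, and return) can be illustrated as a simple weighted score. Everything in this sketch, the weights, the ratings, and the candidate names, is a hypothetical example, not China Unicom's actual evaluation data.

```python
# Illustrative weighted scoring over the three decision angles described
# above. All weights and ratings here are made-up examples, not real data.
WEIGHTS = {"readiness": 0.4, "urgency": 0.35, "return": 0.25}

def score(ratings, weights=WEIGHTS):
    """Combine per-angle ratings (0-10) into one weighted score."""
    return sum(weights[k] * ratings[k] for k in weights)

candidates = {
    "adopt mature modules first": {"readiness": 9, "urgency": 8, "return": 6},
    "open VNF (harder)":          {"readiness": 3, "urgency": 5, "return": 8},
    "white-box infra (ongoing)":  {"readiness": 5, "urgency": 6, "return": 7},
}

# Rank candidates from highest to lowest score.
for name, ratings in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(ratings):.2f}  {name}")
```

With these example numbers the ranking reproduces the strategy stated in the talk: the mature modules score highest, and the harder options (open VNFs, white-box infra) come later.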
and for Airship plus StarlingX: because StarlingX comes from Wind River, there has been some validation in the past one or two years, based on the Wind River platform, to support the NFVI and features like acceleration. So we think this might be more ready to deploy into the edge environment. But I should say there is a bit of a conflict, because if CORD used a distro from StarlingX, that would be doable, I think.

And this is the design we have made. There are several colors. The purple ones already exist in China Unicom, like the OSS, the NFVI platform, the NFVO, and the VNFM; these are the existing ones. We also add two black boxes in this diagram; these are mostly commercial, like the VNFs and PNFs of MEC, or the UPF we use. The yellow ones and the green ones we port from open source and use as a base to start our edge journey. The blue ones are third-party applications, like the MEP and the ecosystem applications that support different edges; that's why, besides, the ecosystem is very important. And the red ones are the modules we are going to develop. Take the portal level: we might need three portals, one for users, another for the admin, and the third for ISVs, who will upload applications to the edge, which we need to check and then put into the application repository. On the lifecycle side, in the ETSI design there are application lifecycle and VNF lifecycle management; we add another one, IaaS lifecycle, and in the future we might need some PaaS lifecycle management. And on the lower layer, there are existing modules to manage the data centers for the cloud, but in the future we need some data center management for the edge, so we want to add some MEC DC managers in the VIM layer. OK, so that's it, and Jianfeng will go through some of the validation progress we have done in the past months on StarlingX.

OK, thank you. So next I will
share some more details of StarlingX and the StarlingX validation results from our lab collaboration. This page is just a high-level picture of the status of the StarlingX project. As we know, the StarlingX project was announced several months ago, at the last summit, and recently the StarlingX code base was officially released, which is good news for StarlingX. StarlingX's position, compared to the ETSI MEC spec, is that StarlingX will be the infrastructure basis for all the edge computing clouds. From this architecture, at its core, the StarlingX project is based on OpenStack, but StarlingX has a lot of enhancement components compared to vanilla upstream OpenStack. In particular, there are configuration management, fault management, host management, and some other software and application lifecycle management components. This is the part I want to talk about here: comparing to the ETSI MEC spec, to give a clear mapping relationship between the actual StarlingX code base and the ETSI MEC spec.

The first component I want to talk about is fault management, together with the event suppression features. The left part is from the ETSI MEC spec: for the fault manager, there are definitions of the interface and the feature requirements, among others. The right side is a snapshot of the UI of StarlingX actually running. In the upper picture, the right block is where StarlingX has done enhancement work compared to upstream OpenStack, for advanced fault management features. The second picture, at the bottom, is about the event suppression and event management features; these are also add-on features compared to the upstream OpenStack code base. From the testing and validation, you can see this part is very valuable to our customers. The next component is the system configuration. The system configuration mainly provides advanced system resource discovery and reporting features for the
deployment. Again, the bottom-left picture is the architecture of how the StarlingX project does the advanced system configuration. Since time is limited, I will not go through all the details of the design and the architecture, but from the snapshots of the UI we can get a feeling for StarlingX's system configuration features. It's very important to mention, in the first picture, that the StarlingX UI can show the network topology. We know that upstream OpenStack can provide the tenant network topology in a visual way, but for the provider network, that's where StarlingX has a more advanced feature for easy deployment management. And again, in the middle and bottom pictures, we can see that through the UI or the API, StarlingX can provide more detailed information about the hardware resources. For example, for the network NICs, StarlingX can provide detailed information about the MTU, the bandwidth capability, and other hardware acceleration information; in the bottom picture, this web page provides a lot of detailed information about every network port. That is the system configuration feature.

The next is virtual machine availability management, which we have validation-tested. The left web page shows that during our testing, we simply shut down the physical machine to trigger StarlingX's VM availability management to do the live migration from the failed node to a live node. From the result of the ping command, we can see there are three lines where the network cannot be reached, meaning the virtual machine cannot be accessed; but in our testing, after 30 seconds, the CentOS virtual machine could be used again. So from the validation, the data is very
promising: 30 seconds is quite quick for a regular scenario. And here is the code of the StarlingX component for the HA control; we can see from the code that this component is written in C++. Next is the controller optimization component, for the availability management of OpenStack and the other control plane services. On the left is a picture showing the StarlingX deployment and scale definitions. There are three different deployment modes for the StarlingX project. The left one is a single physical node. The middle one is the two-node mode: two nodes are deployed with the same control plane services, backing each other up. And the third one is three or more nodes, which is more like a regular OpenStack deployment for a common private cloud. Here we are talking about the controller HA control for the middle deployment mode, the two-physical-node deployment. In our testing, we had three different cases for the controller HA features, shown in this table. In the first one, we manually stopped one controller service (here we used the nova-compute service, and we just stopped it), and after one second, the service was restarted successfully. In the second test case, we disabled the service on the controller node; the result was that 15 seconds later, the nova-compute service was started and enabled again. Very promising. And in the third one, we shut down one controller host entirely. What about the result? Maybe this is a gap where we need to do more work: we can see that the neutron service needed to be restarted manually. That is what our results show. And again, this is a picture of the UI: during our validation, after we disabled, stopped, or shut down the node or the services, the warnings and the alarms were shown in the fault management page. Very visible management. The next is the inventory management.
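Before moving on, the ping-based downtime measurement used in the VM recovery test above can be sketched as follows. The sample list and the 10-second polling interval here are fabricated stand-ins for illustration; the real test simply watched the output of the `ping` command.

```python
# Sketch of deriving VM downtime from periodic reachability samples, in the
# spirit of the ping-based failover test described above. The sample data
# below is fabricated; a real run would poll `ping` against the VM itself.
def downtime_seconds(samples, interval_s):
    """Total unreachable time, given one boolean reachability sample per
    `interval_s` seconds (True = ping succeeded, False = unreachable)."""
    return sum(1 for ok in samples if not ok) * interval_s

# E.g. polling every 10 s: the VM is up, then three pings fail while it is
# recovered on another host, then it is up again -> 30 s of downtime,
# matching the "three unreachable lines, back after 30 seconds" observation.
samples = [True, False, False, False, True, True]
print(downtime_seconds(samples, 10))  # 30
```

The same helper applied to the controller tests would report 0 seconds for the one-second service restart case at this sampling granularity, which is why finer-grained polling is needed when the expected outage is short.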
This feature is about discovering detailed hardware information and exposing it to the higher-level software for smarter, more efficient scheduling. The first type is DPDK-related information: some network hardware works better or worse with DPDK, and the DPDK software can get the detailed information about that for better scheduling decisions. The next is, for the physical network, the bandwidth data and the hardware acceleration device information in the edge computing cloud environment. In particular, there are two important capabilities we can discover from the hardware: the SR-IOV capability and the SmartNIC capability. Those are very important for smart scheduling. That's my part on the details of StarlingX and the mapping relationship to the ETSI MEC spec; the next part is the conclusion, and I would like to invite my partner to give it.

OK. Again, one comment to add to the slide shown on this page: the dual-node model is actually very important to the edge. In most of our cloud deployments, three nodes are normally required to avoid split-brain in the database, but StarlingX provides a two-node model, especially for the edge, where space is very limited. So the conclusion here is: among the three projects that we showed in the slides, CORD and Akraino have code bases and have made progress in version releases, and we think they are more mature for us to start from. Between CORD and Akraino, CORD has more scope in open-source VNFs and hardware integration for central offices, while Akraino is more focused on edge infra and VIM things. According to China Unicom's roadmap, we will keep watching those projects; it's not a final decision, and we will see the progress. And mapping to the ETSI standards, which are not mandatory but a reference, we see some gaps in those three projects, especially in mobile edge application orchestration,
like the question asked in the last session about how to do application roaming between different edge sites, things like the MEP, and the lifecycle modules. There is also some work where the OpenStack companies or providers may bring their experience to integrate with VNF and PNF functions, like how to call those APIs to enable the traffic, DNS handling, those things. It's not totally mature yet, but I think it's a good start. OK, that's it. Any questions? OK, we are almost out of time.

Question: Excuse me, can we take one question? To what extent do you think your traditional telco VNFs will share resources with these new MEC applications? Will there be a new IaaS layer for the MEC, or will they share the same infrastructure with your telco services?

Answer: Sorry, can you repeat the question?

Question: Are you planning to use the same OpenStack for your telco services, like your UPF, and for the MEC? The same OpenStack?

Answer: We will use the same infrastructure to support the VNFs and the MEC apps.

Question: Are you planning to use the same infrastructure?

Answer: They can be the same for the underlying infrastructure, because on the technology side they can be the same. But in some cases, when you look into a telco, they have different operation teams to support the VNF operation and the application operation, so they might require some isolated environments to support the VNFs and the applications.

Question: Just to confirm, how is China Unicom planning to separate or use the same infrastructure layer?

Answer: For China Unicom, and I think for almost all the global telecom operators, the infrastructure of the VNFs, such as the UPF, and of the MEP are the same. But I think the most important thing is that at the access office, the scale may be very small, so we have several smaller servers. So we have specific requirements for the servers: we need small-scale servers, and we need high-performance servers supporting AI and other computing capabilities. So for China Unicom,
and when it comes to the construction of the edge DCs and the regional and local DCs, the servers may be different; but at the edge DC, we use the same infrastructure for all the VNFs and the MEP. OK, thank you.