So, let's get started. Our topic today is StarlingX Enhancements for Edge Networking. My name is Kai Leng, I'm a software engineer from Intel. And this is Dr. Chen Dan, Senior Director of Edge Computing from China Unicom. For today's presentation, I've divided it into six sections. In the first section, I will introduce what Edge Computing is and why we are talking about Edge Networking today. I will also present the StarlingX project, which was open-sourced at the last Vancouver Summit. Then I will go deep-diving into the technology details, especially focusing on the networking enhancements for the Edge. After that, Dr. Chen Dan will share a business case from the deployment of StarlingX in China Unicom. In the last two sections, I will first show you the status of the StarlingX networking project and then share our future plan for the Edge. So, here comes the first question: what is driving Edge Computing? As you may already know, traditional cloud computing has a data-center-centric view, which means that all your workloads should be run in your data centers. But with the growth of the amount of data and devices, as well as the diversity of your workloads at the Edge, the data-center-centric view may not meet those requirements at the Edge. So, here comes Edge Computing. For example, in autonomous driving use cases the vehicles may need the latency to be less than 5 milliseconds, so latency is important. Other cases may need higher bandwidth to ensure your user experience. We may also need data locality for some use cases where the data should be kept private and secure. Last but not least, we also need connectivity and security to be enhanced at the Edge. For example, the user may want to continue their services even when there is very limited network connectivity.
And since our smart devices today have more and more data stored and transferred, the added level of security is also crucial. So, as we just presented, Edge Computing is quite demanding and charming, and there are also some challenges associated with it. For example, we need to improve the service capabilities in AR and VR applications. We may need to comply with data locality when dealing with healthcare use cases. We may also need to significantly reduce application latency in the autonomous driving use cases. So why are we talking about Edge networking here? Because we found that the Edge computing driving factors have quite a lot in common with the networking requirements. For example, latency and bandwidth require a more performant and efficient network. We may also need remote management of complex non-homogeneous networks for data locality and scalability. For connectivity, we need reliability and autonomous site operations with limited connectivity. And we also need enhanced network security, CAPEX, OPEX, as well as TTM, not only for Edge computing but also for Edge networking. So networking plays a key role at the Edge. This means that if we want our Edge infrastructure to land in real life, we first need to fill those gaps in Edge networking. So here comes the StarlingX project. What is the StarlingX project and what is it doing to meet those requirements? The first problem StarlingX is solving is that the data growth is massive, so StarlingX is providing enhancements for a smarter network. The second thing StarlingX is trying to do is to deal with the distributed scenario, where the architectures can be different and manageability can be an issue. The third thing StarlingX is trying to do is to enhance the reliability of your Edge site. So think about what you would need today to build up your Edge infrastructure.
You may need to first scope your hardware requirements and evaluate a bunch of open source software components, including OpenStack. You may pick up OpenStack and do quite a lot of reconfiguration to meet your business requirements. Your business case may be one of those listed on the right: it can be drone surveillance, agriculture, and so on. So the intent of the StarlingX project is to provide a packaged solution, a combination of a bunch of open source components, to give you pre-configured, proven technology for the Edge cloud. It also provides system-wide orchestration and simple deployment to geographically diverse remote Edge regions. It is also a deployment-ready solution with quite a lot of enhancements, including reliability, high availability, and so on. So StarlingX is an Edge virtualization platform which is in general composed of two parts. The first part, in the middle, is a combination of StarlingX services. The other part, named upstream projects, is composed of, first, the OpenStack components, including Nova, Neutron, Cinder, Swift, and so on; and second, Kubernetes, OVS, DPDK, libvirt, and so on. In today's presentation we mainly focus on how StarlingX is dealing with the network requirements at the Edge. As we just said, benefiting from the packaged solution, StarlingX can scale small or large easily. It can have single, dual, or multiple server deployments to meet your hardware constraints and also adapt to your business cases. So in the next section let's go deep-diving into the technology details of the StarlingX networking enhancements. As we said, the first Edge networking requirement is performance and efficiency. When we talk about network performance today, there are two words that come to mind: one is bandwidth, the other is latency.
For bandwidth, StarlingX provides node-to-node, VM-to-VM high-performance networking by enabling OVS-DPDK, SR-IOV, and PCI passthrough. It is also working with OpenStack upstream to enable SmartNIC and FPGA, the next generation of hardware acceleration technologies. On the latency side, StarlingX has done quite a lot of enhancement in KVM to provide real-time and low-latency capability. This includes the reduction of the variability of interrupt latency as well as the reduction of high-resolution timer latency. So what is different from OpenStack upstream is that StarlingX provides mission-ready network performance, with OVS-DPDK enabled by default and with all the enhancements in that packaged solution. There is another session tomorrow that will talk about hardware acceleration technologies, if you're interested. StarlingX also provides a component named configuration management. It provides installation and configuration ability for your Edge cloud. It first provides the ability of auto-discovery of your new nodes at the Edge site. And think about what you would need to do if you wanted to reach your best network performance today: you might need to do quite a lot of reconfiguration of your network parameters. This component provides the facility for those configurations. Benefiting from the system inventory agents deployed on each host, we can do node-level configuration, including the network interfaces and huge page numbers and sizes, for all your hosts at the Edge. It also provides the ability of inventory discovery: if your host is associated with one of the hardware acceleration technologies like SR-IOV or some others, it can be easily discovered and managed. On the network efficiency side, StarlingX is doing enhancements based on OpenStack Neutron from several aspects.
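To make the node-level inventory idea concrete, here is a minimal Python sketch of the kind of per-host record such inventory agents might collect. The `HostInventory` class, its field names, and the example values are purely illustrative assumptions, not the actual StarlingX system inventory schema.

```python
from dataclasses import dataclass, field

# Hypothetical per-host inventory record; names are illustrative,
# not the real StarlingX sysinv data model.
@dataclass
class HostInventory:
    hostname: str
    # interface name -> properties (link speed in Gbps, SR-IOV capable or not)
    interfaces: dict = field(default_factory=dict)
    hugepage_size_mb: int = 2
    hugepage_count: int = 0

    def accelerators(self):
        # Report interfaces that expose hardware acceleration (e.g. SR-IOV),
        # so higher-layer software can discover and manage them.
        return [name for name, info in self.interfaces.items() if info.get("sriov")]

host = HostInventory(
    hostname="edge-compute-0",
    interfaces={"eth0": {"speed": 10, "sriov": True},
                "eth1": {"speed": 1, "sriov": False}},
    hugepage_size_mb=1024,   # 1 GiB huge pages for OVS-DPDK
    hugepage_count=8,
)
print(host.accelerators())  # -> ['eth0']
```

In a real deployment this information would be reported by an agent on each host and queried centrally; the sketch only shows the shape of the data.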
For example, it does some bulk operations and removes some unnecessary operations for L2/L3 scheduling and rescheduling. For the L2/L3 agents, it introduced an event-driven sync task to replace the traditional periodic sync task, so that your system can be more responsive. StarlingX is also doing some concurrency scenario enhancements by handling stale RPC messages, to reduce the number of invalid scheduling operations in your Edge cloud. StarlingX is also introducing, based on L2 population, a registration mechanism so that some other components like floating IP, BGP, and BGP VPN can leverage that L2 population technology to reduce the ARP messages across all the sites. StarlingX is also trying to support VLAN transparency, and is doing some enhancements in quality of service, BGP VPN, service function chaining, and so on. So the next requirement that StarlingX is trying to fill is the management of complex and non-homogeneous networks. First, StarlingX introduces a host management component which can provide full lifecycle management of the host via REST API. The first benefit of this component is that it can detect and automatically handle failures and initiate recovery of your host. It also supports automated and user-initiated cluster connectivity tests, so that your Edge operator can easily find where a failure comes from. It also improves the way the physical network topology is presented to your Edge operator, so you can know which ports of your host are connected to which external physical infrastructure. All of this provides improved low-touch manageability of your Edge site. Host management also does some enhancement for reliability, since it monitors the processes on your host as well as the resource utilization. StarlingX is also doing another enhancement based on OpenStack Neutron to introduce network segment range management.
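The event-driven sync idea can be sketched in a few lines of Python. This is a simplified illustration of the concept, not Neutron's actual agent code; the class and method names are assumptions. Instead of periodically resyncing everything, the server pushes change notifications and the agent syncs only what changed:

```python
import queue

# Illustrative sketch of event-driven agent sync; not real Neutron code.
class EventDrivenAgent:
    def __init__(self):
        self.events = queue.Queue()  # change notifications pushed by the server
        self.synced = []             # resources this agent has resynced

    def notify(self, resource_id):
        # The server pushes a change notification instead of waiting
        # for the agent's next periodic full sync.
        self.events.put(resource_id)

    def run_once(self):
        # Sync only the resources that actually changed, so the agent
        # reacts quickly and wastes no work on unchanged state.
        while not self.events.empty():
            self.synced.append(self.events.get())

agent = EventDrivenAgent()
agent.notify("router-1")
agent.notify("dhcp-net-7")
agent.run_once()
print(agent.synced)  # -> ['router-1', 'dhcp-net-7']
```

A periodic sync task, by contrast, would wake up on a timer and re-fetch all resources whether they changed or not, which is both slower to react and more expensive.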
As we just discussed, at your Edge site the architectures are different, and you may want segment range 0 to serve business 0, segment range n to serve business n. If you wanted to do this in the current OpenStack deployment, you would need to interact directly with the host configurations and restart your Neutron server. But with this feature that StarlingX provides, you can manage the underlying segment ranges by REST API, where you do not need any direct host interaction anymore, and this also enables full network orchestration. For the cloud administrator, it gives them the privilege to control the segment ranges globally or on a per-tenant basis. So you can assign tenant 0 to use segment range 0, which will eventually serve business 0, and you can assign tenant 1 to use segment range k, and so on. As the external physical infrastructure can change its configuration quite frequently at the Edge, StarlingX also provides scaling of the segment ranges to meet your requirements. The third thing that StarlingX is trying to do is to provide enhanced reliability and autonomous site operations with limited connectivity. StarlingX is enhanced based on OpenStack Neutron to provide L2/L3 rescheduling. Today it can support automatic rescheduling of DHCP servers as well as routers from the L2/L3 agents, to newly started agents or away from agents whose load is unbalanced. Here is an example of the DHCP rebalancing ability. You may see in this illustration that the three DHCP agents have different loadings: some are overloaded but some are almost empty. So StarlingX introduces a threshold-based algorithm to automatically rebalance the loads of those DHCP agents. StarlingX is also working with upstream to provide this rescheduling ability via REST API or via a script approach.
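The threshold-based rebalancing idea can be sketched as follows. This is a deliberately simplified illustration under assumed semantics, not StarlingX's actual algorithm: networks are moved one at a time from the most loaded agent to the least loaded one until no agent exceeds the threshold (or a move would no longer help).

```python
# Illustrative threshold-based rebalancing pass for DHCP agents.
def rebalance(loads, threshold):
    """loads: {agent_name: number of hosted networks}, mutated in place.
    Returns the list of (source_agent, destination_agent) moves made."""
    moves = []
    while True:
        src = max(loads, key=loads.get)  # most loaded agent
        dst = min(loads, key=loads.get)  # least loaded agent
        # Stop once no agent is above the threshold,
        # or when moving one more network would not improve balance.
        if loads[src] <= threshold or loads[src] - loads[dst] <= 1:
            break
        loads[src] -= 1
        loads[dst] += 1
        moves.append((src, dst))
    return moves

# One agent overloaded, two almost empty, as in the slide's illustration.
agents = {"dhcp-agent-0": 9, "dhcp-agent-1": 1, "dhcp-agent-2": 2}
print(rebalance(agents, threshold=4))
print(agents)  # all three agents end up at or below the threshold
```

A production scheduler would of course reschedule real DHCP ports rather than counters, and could weigh in CPU and memory as mentioned on the next slide, but the control loop has this shape.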
StarlingX is also evaluating redistribution based on an external monitoring system with more information, like CPU, memory, and so on. Furthermore, StarlingX provides a fault management component, which provides a framework for the infrastructure services to report their errors, alarms, or events via API. All these alarms and events are stored in a centralized logging and alarm system, so that they can be managed by the operator via REST API. All the alarms and logs can be platform-related: they can concern physical as well as virtual resources. Nowadays StarlingX supports network fault management, including network connectivity, Neutron agents, ML2 drivers, BGP peers, and so on. Here is an overview of what StarlingX is doing with infrastructure high availability and orchestration. With the components that I've introduced in the previous slides, like fault management, configuration management, and host management, StarlingX uses an external entity named the infrastructure service to manage and orchestrate VM-level high availability and live migration. Furthermore, StarlingX provides a complete stack, composed of controller-level failover as well as service-level monitoring and migration. The last Edge requirement that StarlingX is trying to fill is enhanced network security. There are quite a lot of firewall driver solutions currently available upstream. StarlingX is enhanced based on OpenStack Neutron to select the OVS-DPDK firewall driver. Some solutions are stateful, some are stateless, some are native, some are non-native. Currently StarlingX is using an OpenFlow plus conntrack based OVS-DPDK driver, which is a totally userspace, stateful, and native solution. Apart from this, StarlingX provides patching support via software management, with which you can update your software versions to mitigate some of the network vulnerabilities. So in the next section I will pass the presentation to Dr.
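A toy sketch of the centralized alarm idea described above, in Python. The `AlarmService` class, its method names, and the alarm IDs here are hypothetical, invented for illustration; they are not the actual StarlingX fault management API. Infrastructure services report alarms through one API, and the operator queries the centralized store:

```python
import time

# Hypothetical centralized alarm service; not the real StarlingX FM API.
class AlarmService:
    def __init__(self):
        # Active alarms keyed by (alarm_id, affected entity).
        self.active = {}

    def raise_alarm(self, alarm_id, entity, severity, reason):
        # Infrastructure services report errors/alarms/events via this call.
        self.active[(alarm_id, entity)] = {
            "severity": severity, "reason": reason, "timestamp": time.time(),
        }

    def clear_alarm(self, alarm_id, entity):
        # Services clear the alarm once the fault condition goes away.
        self.active.pop((alarm_id, entity), None)

    def list_alarms(self, severity=None):
        # Operator-facing query, e.g. what a REST endpoint would serve.
        return [key for key, value in self.active.items()
                if severity is None or value["severity"] == severity]

fm = AlarmService()
fm.raise_alarm("300.005", "host=compute-0.agent=dhcp", "major",
               "Neutron DHCP agent unreachable")     # illustrative alarm
fm.raise_alarm("100.101", "host=compute-1", "minor",
               "High CPU usage")                     # illustrative alarm
print(fm.list_alarms(severity="major"))
```

The point is the shape of the framework: one reporting API for both physical and virtual resources, one centralized store, one query path for the operator.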
Chen Dan, to share with you some business insights from China Unicom. Good afternoon everyone. I'm Chen Dan from China Unicom. Before I present the StarlingX test results, please allow me to introduce China Unicom's full-stack cloud network architecture and the edge service platform. As we know, with the development and combination of SDN, NFV, Big Data, and artificial intelligence technologies, the 5G network will become the key infrastructure in the digital transformation of all industries. The 5G services have the characteristics of lower latency, larger bandwidth, and more extensive connections. The traditional vertical network has many deficiencies, for example in the areas of resource sharing, agile innovation, flexible expansion, and simple operation and management. So to effectively meet the service requirements of eMBB, mMTC, and uRLLC of the 5G network, as well as to enhance industrial competitiveness, almost all global telecom operators have started network reconstruction and transformation, aiming to establish a DC-centered new network. As shown in this picture, the 5G network of China Unicom will be an elastic network based on three layers of DC, that is, the regional DC, the local DC, and the edge DC, which will quickly respond to and shorten the deployment time of new services. From 2020 to 2022, China Unicom will construct 70 to 80 regional DCs, 600 to 700 local DCs, and more than 6,000 edge DCs, with new management and business models. As you know, multi-access edge computing technology is a result of ICT integration. For the telecom operators, tens of thousands of edge DCs may be the best high-quality resources compared with the OTT companies such as Tencent, Alibaba, Facebook, and so on.
Now China Unicom is committed to building an open edge cloud service platform, providing rich service capabilities and unified APIs for application developers, aiming to establish the incubation and commercial use of innovative 5G services. There are three main characteristics of our platform. The first one is openness: we provide open management APIs and open network and application capabilities, including LBS, AI, real-time transcoding, cloud rendering, and so on. The second one is agility: we can provide IaaS and PaaS services, as well as job authorization and management, so customers can apply for their resources on demand. However, we are also facing many challenges in the process of edge DC construction. For example, specific customized servers and a lightweight OpenStack or container stack are needed to adapt to the harsh environment of the telecom edge offices. But the good news is that we have found that the StarlingX project could well satisfy the edge requirements of our edge platform. As we know, the StarlingX project was announced at the last OpenStack Summit in Vancouver, and it officially released its source code several days ago. StarlingX is an edge-computing-specific cloud management project with many of the modules and advanced features we need. So China Unicom decided to use StarlingX as the base of its edge computing testing at the first stage. Next, I will go through these edge services in more detail. This page is talking about the fault management service in StarlingX and its event suppression feature. This part can be mapped to the fault management interface spec of the ETSI MEC documents. The left picture is a document piece from the ETSI MEC spec, and the right side shows two screenshots of StarlingX web pages.
The content inside the red block is from the StarlingX fault management service, which is an enhanced feature compared to the original OpenStack components. The second screenshot of the StarlingX fault management UI shows the log event processing result. And this page is about the system configuration service. What is system management for? System management is a service to provide management functions for discovery and configuration. For example, it can auto-discover the new nodes in the edge site, and it can manage the installation and configuration parameters, such as the Neutron config, the agent parameters, etc. The left picture is also the related content from the ETSI MEC spec, and the three pictures on the right side are UI screenshots. There is a very important feature here, which is the provider network topology operation UI. This is a missing feature in upstream OpenStack and is very helpful for deployment automation in edge scenarios. Meanwhile, the UI can show many of the low-level details of the network NICs as well. Here is the VM HA management and orchestration. This is the validation test in China Unicom's cloud laboratory: by killing host nodes manually, we triggered the operations of automatic live migration. Currently, the testing results are very good. We can see that the live migration time of a regular VM may need only about 30 seconds. The bottom-right picture shows the source code status of this module; it is written in C++ with high efficiency. In this slide, the controller HA optimization is also very critical for edge computing, though it is not part of the ETSI MEC spec. StarlingX has three kinds of deployment scale: the single node, the dual nodes, and the multiple nodes. The controller HA feature focuses on the dual-node deployment. The pictures on the right side show the test results in our lab.
The first test is to stop one controller node's compute service manually; then it can be restarted after one second. The second test is to disable it instead, and it needs 15 seconds to be restarted. If we shut down one controller node and restart it again, most of the controller services can be restarted, with only one exception, the Neutron service, which is a gap currently. On this page, the inventory management is also not part of the ETSI MEC document, but it is very important for the edge computing cloud. These types of detailed information can be collected by StarlingX for higher-layer software: the first one is the DPDK network interface parameters; the second one is the information of the physical NICs; the third one is the hardware acceleration devices, such as SR-IOV, SmartNIC, and so on. So, with the strong support from Intel and Nandan Cloud, we have done a full validation of StarlingX in the past six months. StarlingX can improve the efficiency of high availability at both the VM and the controller level. It can also optimize the required node number to fit the edge deployment scenarios. Besides, there are the features added in fault management, loading and upgrading, inventory discovery, and VNF acceleration, which have interfaces recommended by the ETSI spec. So StarlingX can provide the capabilities for VM applications and VNF hosting, and it can also be extended to support containerized applications in the future. StarlingX is one of the top strategies for China Unicom to build an open edge platform: to provide open interfaces, to support ecosystem application hosting, and to avoid vendor lock-in with open infrastructure technology for edge computing. So StarlingX will play an essential role in China Unicom's edge strategy. So, for the next session, back to Kai Leng. So, let's talk a little bit about how StarlingX is cooperating with OpenStack upstream.
You may have some confusion because StarlingX has quite a lot of components in common with OpenStack upstream. Actually, the StarlingX upstream is composed of two parts. The first one is the OpenStack-related components, including Nova, Neutron, Cinder, Swift, and so on. The other part is the open source blocks, including Kubernetes, OVS, DPDK, libvirt, KVM, and so on. StarlingX has defined an upstream workflow, which makes us first analyze the legacy patches in those staging repos and send the reports to the upstream community, to review them together. This will decide the direction of how these legacy patch enhancements will be dealt with: they will either be dropped, be rebased, be kept, or be upstreamed. If they are decided to be upstreamed, we will work with the upstream projects, for example, to write up the specifications, bug fixes, or patches to have them merged into those upstream projects like OpenStack. So the eventual target of StarlingX upstreaming is to align with upstream. This means there will be zero patches in all the staging repos, including stx-neutron, stx-nova, etc. And StarlingX will update to OpenStack Stein for the StarlingX July 2019 release. This is the progress of the StarlingX networking project. We have around 150 patches in Neutron and neutron-lib, and we have divided them into 18 functions, including QoS, DHCP, and so on. Currently, we have seven RFEs to review in the PTG session, and continued development following the alignment with the OpenStack community. We have some other enhancements and bug fixes already merged or being reviewed. On the downstream side, StarlingX is continuing to do some enhancements for the edge, like the OVS-DPDK firewall driver we've just mentioned, as well as vswitch configurability, DPDK support, Rx multi-queue affinity support, and so on.
On the other side, StarlingX is trying to support containerized OpenStack services, to fill the gaps there with the Kubernetes deployment of OpenStack services. StarlingX networking is also enabling the vswitch functions based on node labels. So, let's talk a little bit about our future plan for the edge. StarlingX is targeting the container architecture for the next generation. StarlingX networking first needs to support that container architecture by two means. The first one is to support the containerized services, to fill the gaps in OpenStack-Helm to support OVS-DPDK. The other one is to support the containerized VNFs, which we call CNFs in that architecture, and to have the accelerated network performance hardware features enabled in the containers. We also need to fill the gaps in the container support of multiple tenants and multiple interfaces. We are also evaluating service function chaining in containers. On the other side, StarlingX is doing some enhancements for the edge, like time-sensitive networking and network edge virtualization SDK integration. StarlingX is also doing some high-level integration with orchestration systems like ONAP, the Open Network Automation Platform, to have a high-level orchestration system take control over StarlingX, which means to take control over your edge site. I think that's all for our presentation. I'm glad to answer any of your questions. So, within the StarlingX project, you have a number of different parts. Will it be possible to use some of those parts, but not the whole system, on top of an OpenStack cloud? I'm thinking of things like the configuration management or the host availability. The answer to the question, I think, is yes, for sure. But as you know, as I've just presented, one of the benefits that the StarlingX project provides is that packaged solution.
But in the future, there will be only the StarlingX services in the StarlingX project, with no other staging repos like stx-neutron, stx-nova, et cetera. So you can pick up any of the StarlingX services, like configuration management or fault management or host management, with your own edge infrastructure. But you cannot expect all the functionalities to work well, since we may miss some of the other external entities, for example the infrastructure service, to handle the orchestration or live migration. You can pick up some of them to meet your use cases. Okay, any other questions? I have a simple question: which specific use cases are you looking at in China? Okay, I would like Dr. Chen to answer that question. Which specific use cases are you looking at? Our specific use cases. You know, towards 2020 China Unicom will have deployed large-scale pilots in 15 provinces in China, including Beijing, Shanghai, Shenzhen, Tianjin, and so on. The scenarios include such things as HD video, cloud gaming, the stadium, the smart city, the smart agriculture, the sports venue, and so on. So this year we have deployed the trials in those 15 provinces. If you are interested, maybe we can chat later back in China. Okay, that's all.