Hello, everyone. Welcome to the Open Infrastructure Summit. My name is Hao Wang, from FiberHome. My colleague Lin Xiangchen and I will share our topic about edge computing today. As we know, edge computing is now developing rapidly all over the world. Since 2019, FiberHome has had its own edge computing solution based on StarlingX and other open source projects. We are trying to improve efficiency and innovation in different areas like 5G, the industrial internet, intelligent plants, and so on. So in this breakout session, we will share some experience about the issues we solved and also some thinking about the future of edge computing infrastructure. Today, we are going to talk about the edge computing solution of FiberHome, use cases and the issues we met, and our thinking about the future of edge computing.

Okay. So first, I want to introduce the history of edge computing at FiberHome. FiberHome is an international information and communication network products and solutions provider. We have focused on cloud since 2010. Then, in 2018, we started to research and develop the edge computing infrastructure cloud platform. In 2019, FiberHome joined the StarlingX community to make our contributions, and also led the China national research project on edge computing. Now we have released the FitOS Lite 2.0 platform for the edge. Over those years, FiberHome has gained some certifications and awards in the edge computing area, like the edge cloud leading solution award at OpenInfra Days China, and has participated in making three industry standards and white papers on edge computing to promote its development in China.

With the development of edge computing, we met more and more customer requirements, which we have summarized in this list. You can see there are different requirements between the edge cloud platform and the edge node. For infrastructure hardware, more kinds of hardware are requested for the edge node than the edge cloud needs. The same goes for the operating system.
For the cloud platform, users basically need containers, virtual machines, and bare metal, all together. But for the edge node, a lightweight virtualization solution is a significant request: we need to run containers on bare metal with limited hardware resources. For operation, customers pay more attention to low-touch deployment and failure handling for remote edge nodes. In addition, AIOps technology will be brought into the edge area too. For security, we think they need a full-stack security solution for both the edge cloud platform and the edge node, including network security, virtualization and container security, data security, OS security, and so on.

So, to meet those requirements well, FiberHome is building the technology stack of edge computing to support different industry application platforms, like electronic equipment manufacturing. We build the edge cloud to work like a brain, providing standard APIs to support those applications and also managing the edge nodes, which are much closer to the user. Through different kinds of networks, such as 5G, LAN, and PON, we provide three kinds of edge nodes to meet various scenarios and requests. Like the edge cloud, they may also include IaaS-to-SaaS services, but lighter and more flexible. Of course, the edge cloud and the nodes are not alone; they interact with other, bigger clouds with more computing power, like the public cloud and enterprise management clouds owned by the users themselves.

To implement this full stack, we need a reliable, flexible cloud platform. That's why we chose to develop FitOS for edge computing based on the StarlingX project. Based on it, OpenStack and Kubernetes clouds can be supported together to provision virtual machine, container, and bare metal workloads. We enhanced the virtualization layer and the platform to add more components like AIOps and security control.
So next, it's my pleasure to introduce my colleague Lin Xiangchen to show you the use cases we did and the issues we met and our thoughts.

Next, let me share with you some use cases and issues encountered by FiberHome in the edge computing field. Earlier this year, we participated in a POC pilot project for edge cloud at a domestic telecom operator. The main requirement of this project was to deploy edge clouds into cities. These edge clouds can provide virtual machine and container services, on which we then deployed VR and face recognition applications, while the edge cloud resources will also be provided for vCDN and other business use in the future. An edge cloud management platform is deployed in the central city to achieve unified resource management and application management. This project was a good test for our edge cloud platform based on StarlingX. Thanks to the StarlingX community.

The next edge computing application scenario we are working on is a new VR experience provided by edge-cloud collaboration. We deploy the edge cloud in areas close to users, install 8K VR panoramic cameras in business halls or scenic spots, and upload the video to the edge cloud after it has been processed by the VR stitching workstation. Users can then obtain the VR content nearby, achieving lower latency and a better user experience. In the future, we can continue to study the use of the powerful computing capabilities at the edge in combination with AI to provide users with an intelligent media experience, such as free perspective and emotional perception.

In the process of these edge cloud engineering implementation projects, we summarized three scenarios and modes of the operator customers for edge cloud. In most scenarios, the edge cloud system is divided into three layers, and the top layer is the edge cloud management layer. It is responsible for the unified resource scheduling and management of multiple edge clouds.
This layer generally implements unified scheduling and management of the two resource pools of OpenStack and Kubernetes. The regional edge cloud in the middle layer is responsible for processing slightly larger edge services and for managing the remote edge nodes. The bottom layer is generally a remote edge computing node or edge gateway, which is a relatively lightweight implementation of the edge. The layers are generally connected through Layer 3 routing.

Among them, the regional edge cloud in the middle is further divided into three modes, namely the edge container cloud platform, the edge virtualization platform, and the co-deployed edge container and virtualization platform. The first two modes can both be achieved through StarlingX. For the third, which supports both the container cloud and the virtualization cloud, we need to provide a virtual machine or bare metal on top of the existing StarlingX and then implement the configuration and installation of the FitCloud container platform. We will briefly introduce our implementation scheme later.

As mentioned earlier, operators and industry edge computing have different requirements and definitions for the edge cloud. In the aforementioned edge cloud POC pilot project, it was required to use a far-edge node solution. The picture on the left shows the distributed cloud solution provided by the StarlingX community: at the edge it is a complete subcloud with control plane functions, instead of the remote computing nodes shown on the right. The scenario on the right needs to extend the edge nodes and solve the Layer 3 routing communication problem. Therefore, we implemented it based on secondary development of StarlingX: the StarlingX worker node is deployed and configured across Layer 3 under an agreed latency of 50 milliseconds. This reasonably solved the problem of the remote computing nodes.

We encountered an issue during the implementation and development process when using the All-in-one or Standard mode.
When there are more than 10 computing nodes, the application-apply of stx-openstack failed. The error prompted was that Armada had failed to deploy the OpenStack Neutron Helm chart. After investigation and analysis, it was found that the reason was that, in the Helm chart of OpenStack Neutron, the relevant configuration items of the Neutron agents on the computing nodes were duplicated, which caused Armada's integrated manifest file to exceed the 1 MB size limit of etcd. Therefore, the way to fix this problem was to extract the common configuration items of the Neutron server and the Neutron agents by modifying the openstack-helm Neutron chart.

The next issue we encountered is IPv6 dual stack. In the current transition from IPv4 to IPv6, customers such as operators have increasingly urgent demands for IPv6 dual stack. However, the current StarlingX community support for IPv6 is slightly insufficient and basically does not cover dual stack. Therefore, we focused on this feature, starting from the network configuration of the StarlingX system, to enabling the IPv6 dual-stack feature gates in Kubernetes, and then to the Calico dual-stack configuration. We then modified the relevant openstack-helm charts to configure OpenStack for dual stack, and finally achieved virtual machines and pod containers that can obtain both IPv6 and IPv4 addresses. The configuration on the left is the result of the final modification, as well as the parameter configuration items used when deploying controller-0.

We mentioned the three scenarios and modes of edge clouds earlier. In some cases, an IaaS virtualization management platform already exists at the edge. Therefore, customers need to deploy the Kubernetes container platform inside their OpenStack virtual machines or on bare metal. Our FitCloud container platform is based on StarlingX, because StarlingX is a very good solution for the reliability and the operation and maintenance of Kubernetes-based basic services.
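The Neutron chart problem described above comes down to serialized size: repeating the same agent configuration for every compute node inflates Armada's integrated manifest past etcd's roughly 1 MiB value limit, while extracting the shared items keeps it small. Here is a minimal sketch of that effect; the configuration keys, node count, and document layout are made up for illustration and are not the real chart values:

```python
# Sketch: duplicated per-node settings vs. extracted common settings,
# measured against etcd's default 1 MiB value-size limit.
# All keys/values below are invented for illustration only.
import json

ETCD_LIMIT = 1024 * 1024  # etcd's default max value size: 1 MiB

# A block of agent settings that is identical on every compute node.
common_agent_conf = {f"option_{i}": f"value_{i}" for i in range(4000)}

def manifest_size(num_nodes: int, deduplicated: bool) -> int:
    """Approximate serialized size of the per-node overrides section."""
    if deduplicated:
        # Fix: store the shared settings once; each node only references them.
        doc = {
            "common": common_agent_conf,
            "hosts": {f"compute-{n}": {"use": "common"} for n in range(num_nodes)},
        }
    else:
        # Bug: the full agent configuration is repeated for every node.
        doc = {"hosts": {f"compute-{n}": common_agent_conf for n in range(num_nodes)}}
    return len(json.dumps(doc).encode())

# With more than 10 computing nodes, the duplicated form blows past the limit,
# while the deduplicated form stays far below it.
print(manifest_size(12, deduplicated=False) > ETCD_LIMIT)  # True
print(manifest_size(12, deduplicated=True) < ETCD_LIMIT)   # True
```

The real fix modified the openstack-helm Neutron chart templates rather than a JSON document, but the size arithmetic is the same: shared configuration factored out once grows with the config size, not with the node count.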
So we needed to try to install the StarlingX Kubernetes platform again in a virtual machine or on bare metal provided by the containerized OpenStack in StarlingX. In response to this problem, we started with OpenStack PXE-booting the virtual machine, modified the StarlingX kickstart script, then used a Heat template to create the OpenStack virtual machine and network resources, and finally implemented the nested StarlingX deployment using the StarlingX Deployment Manager, which achieved zero-touch provisioning. Okay, that's all for the introduction of the second part. I'll give the rest back to my colleague, Hao Wang. Thank you, everyone.

Okay, thank you, Lin Xiangchen. Next, I will introduce what we are thinking about the future of edge computing: what important points we should focus on in the next step. As we all know, security is a most important point that we should focus on, especially at the edge. Unlike those big data centers, edge clouds and nodes are more complex and harder to protect. So we think there are five layers that we need to pay attention to: the physical environment, the computing environment, the cloud boundary, the network, and management. A stack of security technologies also needs to be considered. As this stack shows, we consider that the edge cloud platform should have security protection features from the bottom layer to the top layer. Most of the features we have already done, but some work still needs to be implemented, like data security management, which includes security configuration, security policy management, security alerts, and so on.

Okay, that's all we want to share with you. Thank you, everyone. Bye.