How many of you have heard of the KubeEdge project, and how many of you are using KubeEdge? Not many. So today I would like to give you a general introduction to this project, and I hope you will go to our website for more information. Let me start with a quick introduction: I'm Montefou from Huawei, and I'm currently leading the KubeEdge project. I already asked you these questions, so I'm sure you are interested in edge computing. Let me go through the main content.

First of all, edge computing is a very hot domain after cloud computing. It brings specific benefits, but also challenges. First, the network between the cloud and the edge is not always stable; sometimes it is a public network, sometimes a private one, and bandwidth is limited. The nodes and devices on the edge exist in very large numbers. In scenarios such as industrial control, the hardware and protocols are heterogeneous. The devices on the edge are also quite small, so resources are constrained, and maintenance cost is a challenge. On the other hand, a data request can be served at the edge and return very quickly, so you achieve data locality and security; compared with cloud computing, it is more secure.

Now about the KubeEdge project. There are several key pieces of information. KubeEdge targets edge computing with Kubernetes. In March this year, KubeEdge was accepted by CNCF as a sandbox project, and we are now working toward incubation. The main feature is synergy between cloud and edge: our main idea is that the edge is an extension of the cloud. A lot of vendors already have strong existing services on the cloud, so if you connect the cloud and the edge, users can enjoy the more comprehensive services from the cloud as well as the fast, localized services on the edge. Yesterday we released KubeEdge 1.0; before that, we had four minor releases. And in the white paper, KubeEdge is listed as a reference architecture.
There are now over 100 contributors in our project; that is, over 100 people have contributed to it. Many companies contribute as well, including cloud companies in China such as China Mobile, and international companies such as Intel. This slide shows the community activity over the past six months. The chart on the left is quite small, but generally speaking the community is quite active, and the level of activity is increasing. The 1.0 release and the previous releases introduced support for edge devices, which I will describe in more detail later.

These are our vision, mission, and goals. As I already mentioned, this is an edge computing platform based on Kubernetes, so we hope to use the same primitives used to manage nodes in the cloud to manage the nodes distributed on the edge, as well as the applications on the edge and the connected devices. We hope this kind of management works across both cloud and edge. The API should follow the design style of Kubernetes, and we need to support the Kubernetes native API. In terms of scale, we want to support a massive number of nodes and devices.

This is the latest KubeEdge architecture; the edge part is enlarged, and the cloud part is drawn quite simply. The main controller is EdgeController. Its main function is shadow management for the edge: for example, the node is not in the cloud, but a pod is scheduled to the edge. EdgeController writes the latest status of these edge applications back to the Kubernetes API server, so that other components can see the pods, the nodes, and their configuration, and can process them the same way as in the cloud. To achieve coordination between cloud and edge, EdgeController sends its messages to CloudHub.
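The "shadow management" idea above can be sketched in a few lines. This is an illustrative Python sketch, not KubeEdge's actual code (which is written in Go): a cloud-side controller receives status messages from edge nodes and mirrors them into the API server's store, so other cloud components read edge objects like any other.

```python
class FakeAPIServer:
    """Stands in for the Kubernetes API server's object store."""
    def __init__(self):
        self.objects = {}  # (kind, name) -> object dict

    def update_status(self, kind, name, status):
        obj = self.objects.setdefault((kind, name), {"kind": kind, "name": name})
        obj["status"] = status
        return obj

    def get(self, kind, name):
        return self.objects.get((kind, name))


class EdgeControllerSketch:
    """Mirrors status reported over the cloud-edge channel into the store."""
    def __init__(self, api_server):
        self.api = api_server

    def handle_edge_message(self, msg):
        # msg is what an edge node would report upstream
        self.api.update_status(msg["kind"], msg["name"], msg["status"])


api = FakeAPIServer()
ctrl = EdgeControllerSketch(api)
ctrl.handle_edge_message(
    {"kind": "Pod", "name": "camera-agent", "status": {"phase": "Running"}})
print(api.get("Pod", "camera-agent")["status"]["phase"])  # Running
```

The point is that the cloud side never talks to the edge node directly when reading; it only consults the mirrored ("shadow") copy.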
CloudHub and EdgeHub are based on WebSocket. When you create a WebSocket connection, you get two-way communication, so the nodes on the edge can sit in a private network; you don't need a public IP. As long as the node can reach CloudHub, it can communicate with the cloud consistently, in both directions.

For edge device management, we use the concept of a device twin; some other edge computing platforms also have this concept. When you have a device, you can control it through its device twin, which maps it into the cloud; changing fields on the device twin sends commands that control the edge device. We also decoupled the mapper, because on the edge there are many different types of devices. A lot of scenarios use MQTT, but other devices use Bluetooth, or Modbus, or OPC-UA, which is an industrial control protocol, among many others, and device vendors may bring new protocols. In KubeEdge we provide mappers for Bluetooth and Modbus, and of course MQTT is built in already. If a device vendor would like to add another protocol, they can create a mapper, which gives an abstraction over the communication. Currently, the mappers communicate over MQTT with EventBus: EventBus is an MQTT client, and it can also communicate with the devices on the edge.

This is our implementation of the Kubernetes node agent. After some analysis, we found that native Kubernetes has a lot of functions that are not really needed on the edge; for example, a lot of the built-in storage plugins are unnecessary. So we decided to rewrite a lightweight version. There is also MetaManager, which manages the metadata consistently; the lightweight storage is SQLite. After synchronization, MetaManager writes the messages into this lightweight store. A vanilla Kubernetes node does not persist metadata locally, so if the node breaks down while disconnected, the pods from the cloud cannot be recovered.
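The MetaManager idea can be sketched with Python's standard `sqlite3` module. The schema and key names here are illustrative, not KubeEdge's actual tables; the point is that metadata received from the cloud is persisted locally, so it survives a restart while the node is disconnected.

```python
import json
import sqlite3

def open_store(path=":memory:"):
    # A real deployment would use a file on the edge node's disk.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
    return db

def save_meta(db, key, obj):
    # Upsert the latest copy of the object received from the cloud.
    db.execute("INSERT OR REPLACE INTO meta (key, value) VALUES (?, ?)",
               (key, json.dumps(obj)))
    db.commit()

def load_meta(db, key):
    # On restart, pods are recovered from this local copy instead of
    # the (possibly unreachable) API server.
    row = db.execute("SELECT value FROM meta WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else None

db = open_store()
save_meta(db, "pod/default/sensor-reader", {"image": "sensor:1.0", "replicas": 1})
# ...network to the cloud is lost, edgecore restarts...
print(load_meta(db, "pod/default/sensor-reader")["image"])  # sensor:1.0
```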
With this local persistence, even in a disconnected scenario, after the node recovers, the pods and other data can be restored. DeviceTwin is responsible for the information of all the devices, and a copy is saved in the consistent storage on the edge. ServiceBus handles HTTP, since some devices expose their interfaces through HTTP.

In this box, the edge part, edgecore, is one binary, and its memory footprint is about 10 megabytes; even if your device has only 128 megabytes of memory, it can run. Of course, the total resource consumption is not just this component: the MQTT broker and Docker are what we support in the first version, and with them the resource consumption may be over 100 megabytes. We are now working on containerd support, and we will continue to support more lightweight runtimes.

We also introduced a key feature, EdgeMesh. It originates from the concept of a service mesh, and its main purpose is data communication across edge nodes. In the early versions we could only access services by IP and port; we had no service discovery, and so on. EdgeMesh is essentially edge service networking for KubeEdge. In the current 1.0 version, we cannot yet do cross-network edge-to-edge communication; 1.0 relies on the edges being reachable from each other. Going forward, we plan to connect private networks, and communication relayed through the cloud will be a very important path.

Also, in the versions up to 1.0, most of the support in KubeEdge was aimed at small edge devices; for edge sites with strong computing power there was no specific support. In the KubeEdge architecture, each node connects directly to the master, so when a node is disconnected from the master, there is no capability for dynamic rescheduling, yet users still require scalability for their applications. So in the 1.0 version, we introduced EdgeSite.
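The device-twin mechanism described above can be sketched as follows. This is an illustrative model, not KubeEdge's API: the twin holds a "desired" state written from the cloud and a "reported" state from the physical device, and a protocol mapper (MQTT, Modbus, Bluetooth, ...) translates the difference into device commands.

```python
class DeviceTwin:
    def __init__(self):
        self.desired = {}   # what the cloud wants the device to be
        self.reported = {}  # what the device last reported

    def delta(self):
        """Properties where the device has not yet caught up with the cloud."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}


class RecordingMapper:
    """Stand-in for a protocol mapper; a real one would speak MQTT/Modbus/etc."""
    def __init__(self):
        self.sent = []

    def send(self, prop, value):
        self.sent.append((prop, value))


def reconcile(twin, mapper):
    for prop, value in twin.delta().items():
        mapper.send(prop, value)     # push a command toward the device
        twin.reported[prop] = value  # device acks; reported state catches up


twin = DeviceTwin()
twin.desired["power"] = "on"    # set from the cloud via the twin
twin.reported["power"] = "off"  # last state reported by the device
mapper = RecordingMapper()
reconcile(twin, mapper)
print(twin.delta())  # {} -> desired and reported now agree
```

Because vendors only implement the mapper interface, new device protocols can be added without touching the twin logic, which is the decoupling described above.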
With EdgeSite, we sink a Kubernetes master down to the edge, and after that CloudHub and EdgeHub are not that important anymore. More importantly, when the master goes down to the edge, if the master is in a different network from the nodes, how do we connect them? In later versions we will support this through EdgeMesh; EdgeMesh has only just launched and EdgeSite has only just launched, and they are not connected yet. So this is a transitional version, and for now we use this mode.

Another problem is that Docker consumes a lot of resources. So in the 1.0 version we added the CRI interface back, and we have done the containerd integration verification; it is now in alpha, so you can have a try. Later we will eliminate the Docker-specific path, everything will be supported through CRI, and we will also use CRI-O, which is lighter in terms of resource consumption.

Another item is our evaluation framework. We haven't released all the data yet; before the 1.0 version we built a framework for performance testing, and the framework is pretty simple. We use two layers of Kubernetes. The lower layer is a Kubernetes cluster in which every pod is treated as an edge node, to simulate all the nodes on the edge. The other layer is an independently deployed Kubernetes cluster running the KubeEdge cloud components. Through these two layers, we can simulate a huge edge node deployment.

From our evaluation of the design of the cloud-edge communication: we use EdgeHub, which at the edge can aggregate the messages from the nodes and synchronize them to the cloud. The difference from vanilla Kubernetes is that the number of connections is reduced. In Kubernetes, the API server's connection count and communication data volume influence its scalability. One way to improve scalability is to keep the same number of connected nodes but reduce the data volume.
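The aggregation argument above can be shown with a small sketch. This is illustrative only: instead of every pod or node holding its own upstream connection, an EdgeHub-like component batches the status updates produced on one node into a single message, so the cloud sees fewer connections and fewer, larger messages. The field names are hypothetical.

```python
def aggregate(updates):
    """Merge a list of per-pod updates into one upstream message per node."""
    batches = {}
    for u in updates:
        batches.setdefault(u["node"], []).append(
            {"pod": u["pod"], "status": u["status"]})
    return [{"node": node, "updates": batch} for node, batch in batches.items()]

updates = [
    {"node": "edge-1", "pod": "cam-a", "status": "Running"},
    {"node": "edge-1", "pod": "cam-b", "status": "Running"},
    {"node": "edge-2", "pod": "gate",  "status": "Pending"},
]
messages = aggregate(updates)
print(len(updates), "updates ->", len(messages), "messages")  # 3 updates -> 2 messages
```

The same total data volume crosses the wire, but the number of upstream connections and message round-trips scales with the number of nodes, not pods, which is what the scalability claim rests on.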
And the second way: if the communication data volume cannot be reduced, you can reduce the number of long-lived connections; that also improves scalability. So we think KubeEdge will have better scalability than native Kubernetes, and this framework lets us validate that hypothesis. Currently, some people from the community have reached 4,000 nodes, but that is only a reference point. In terms of performance of the cloud-edge communication, we are using WebSocket, and we are studying the QUIC protocol. We have a version based on QUIC; its performance is under evaluation, and we expect it to improve things a little.

Next are the application scenarios for KubeEdge. We took the core part of Huawei's cloud edge computing offering and open-sourced it as KubeEdge, and there is a corresponding Huawei Cloud service. One application is a smart campus. The main demand from the customer is facial recognition, so they can identify people by ID at the gate, and they also want to monitor the traffic in the campus. For the cameras, they do not use smart cameras; they use traditional IP cameras: given an IP address, you can read the video stream. The overall application is pretty simple: we deploy the facial recognition and traffic-flow applications down to the edge. As we all know, at the edge the raw video stream has a large data volume, so with an application on the edge, you capture only a small picture of the head from the video frames and use that for comparison. Where the comparison happens depends on your business design: you can match against a big database in the cloud, or do it at the edge. In this initial scenario, the client uses a central big-data model in the cloud, so the facial recognition runs in the cloud; at the edge they capture a head-sized picture from the video and upload it to the cloud, where face detection is done on the head image.

Another case is a CDN customer.
This one does not use EdgeSite. The user's main CDN workload is video on demand. Typically, a video is sent from the central cloud to the edge, and we slice the video: when you watch, you can seek and change the playback pace, so you need to do slicing, and then transcoding, since the encoding may differ from the original video, and you also need to serve the streaming media. The edge nodes connect directly to the cloud, and we can run video transcoding, rendering, and slicing at the edge, managed as Jobs and Deployments.

Let's take a look at the roadmap of the project. We started in November 2018, when we announced the open-sourcing of this project and provided a lightweight agent at the edge. In the past versions, we added support for edge devices, synchronization of state data, a mesh layer for communication between cloud and edge, and CRI and containerd integration in the 1.0 version. We still need to work on CSI, because in the current version we removed a lot of storage type support; at the edge you do not have something like AWS storage, but the clients we met still demand external storage, and they also want cross-edge data sharing. For the post-1.0 versions we planned CSI and storage compatibility, but we do not have enough headcount to work further on it, so it will be done later. We will also add some remote control functions so that you can monitor the edge status. And we will further improve the openness of the community: we have over 100 contributors, but in a previous survey people said there is a barrier to becoming a contributor, so we will improve the guides and documentation so that people can contribute more easily.
Due to the time limit, I will skip the demo I had planned, but you can go to our booth and take a look. The demo is pretty simple: it shows how, through KubeEdge, you can control a device or communicate with it. Here, WeChat is just the entry portal; the logic is mainly on the cloud, where we have a web server. Through the web server you select a song, and the WeChat app writes the song name into the Kubernetes master. KubeEdge then synchronizes the device update down to the edge, which is connected to a Bluetooth speaker; the speaker receives the message from its topic subscription and plays the selected track. The demo simply shows how to define a device and how to update the device status. You can see it at the Huawei booth; I will not show it here.

The last part is the basic information about our community. You can see our website, and we have a Slack channel. Every two weeks we have a meeting, and we upload the meeting video to YouTube. For Chinese friends, you can scan the QR code to join our group, where we can communicate further. So, any questions from the floor?

Q: I want to ask why you chose to build the edge agent yourselves? A: After evaluation, we found that to simplify the kubelet and remove parts of it, we would need to cut a lot of code, so we decided to write it ourselves.

Q: My Chinese is not so good, so I will speak in English. I see your roadmap; there are 1.x versions ahead. How do you do monitoring? A: Right now you can basically see the application status, but this is not enough. We want to support the Metrics API; then the experience will be similar to a vanilla kubelet. Currently, if you use Prometheus, it is broken; we don't have that in KubeEdge. Inside Huawei there is a project for this, but it is not open source; we need to open-source it. Sorry, he is not using a microphone, so the interpreter cannot hear.
I will repeat his question. The first question is about edge node management: how to provision nodes. For the KubeEdge project, we do not cover this fully, because other projects in the edge ecosystem take care of it. Our current starting point is that an edge node has an OS and the KubeEdge release pack. We have an installer that helps you install the components on the edge node. For any node, you can also do it manually: register it on the cloud, create the node object, start edgecore, and fill in the CloudHub information.

For security, we have a very basic certificate-based mechanism. We will have a hierarchical security design; we have a prototype, and there is ongoing discussion in the community. The main idea is that certificates are issued from the cloud. When the cloud is disconnected from the edge, since they are on the same certificate chain, they can still validate certificates; while disconnected you cannot issue new certificates, but at the edge you can still do certificate-based authorization and validation. It extends from the center to the edge, and at the edge you just make the simple decisions.

Q: Would you extend this to controlling external systems? The actual status can be reported back to the status field, but in a race condition, if you change some field values, there may be problems. A: In this project, the components can run disconnected and still sit on the same certificate chain; they can connect on the cloud side, or on the edge side. If there is a group on the cloud, you can use the KubeEdge cloud part, and then connect KubeEdge to the edge. In our exhibition, part of the workload runs on the cloud and part runs on the edge. Thank you. So, time is up; I'd like to finish the session.