Okay, let's start. Today my topic is edge computing and Kubernetes: how we manage all the monitoring devices on the world's longest cross-sea bridge.

First, let me introduce myself. My name is Wei Huan, and I'm from a Chinese company called Harmony Cloud. I'm the chief architect, responsible for edge computing. I contribute to projects like KubeEdge and Kubernetes, and this is my email. If you have any questions about edge computing, or any edge computing scenarios you want to discuss with me, please write to me.

Today my talk covers the following aspects. First, I will give a brief introduction to the edge, including my investigation of the current mainstream cloud-native edge computing frameworks and why we chose KubeEdge. Then I will focus on how we applied KubeEdge to the Hong Kong–Zhuhai–Macao Bridge project, including how to define the devices on the bridge, how to associate each device with its CRD in Kubernetes, and how to manage and operate the applications deployed on the edge nodes. Finally, I will briefly share some of our best practices on edge computing.

This is an overview of KubeEdge. KubeEdge is the first CNCF incubating edge computing project. It started in November 2018, and it now has more than 900 forks and almost 400 stars. Here are some key points about the KubeEdge architecture. First, it is fully compatible with the Kubernetes API. It is very stable, because it has very reliable message delivery from cloud to edge. It can be very lightweight: we can tailor EdgeD, which is itself tailored from the Kubernetes kubelet. It supports node access over wide-area networks and at very large scale. And finally, it provides edge autonomy.

I also investigated some cloud-native open-source edge computing frameworks like K3s, OpenEdge, and SuperEdge. All of these frameworks are popular in China. But why did we choose KubeEdge?
From my point of view, there are some reasons. First, K3s is a solution for managing edge nodes locally; it does not provide a solution for node access over wide-area networks. Both OpenEdge and SuperEdge do not provide a lightweight solution for node access and device management. And both OpenEdge and SuperEdge may have stability risks in the case of cloud–edge network fluctuations; as we all know, this is caused by the relist problem of the Informer.

So what are the facts about the world's longest cross-sea bridge? With a total length of 55 kilometers, the Hong Kong–Zhuhai–Macao Bridge is the longest cross-sea bridge in the world. In order to monitor the safety of the bridge at all times, a large number of sensors are deployed on the bridge, and each sensor generates large amounts of data. All the data needs to be collected and processed in real time, and any abnormal condition on the bridge needs to be alerted immediately. But there are some problems in this context. There is a large amount of monitoring equipment, and the huge volume of data has very low value density; that means there is a lot of invalid and duplicated data that we do not need. The bridge is very long, with long distances and very difficult construction conditions, so to transmit the data and locate problems on the bridge, we obviously need 5G transmission.

So here is our solution: 5G communication, plus BeiDou positioning, plus edge computing, and we use KubeEdge. In this way, all the data collected from the sensors is processed locally at the edge, and any abnormal condition can be found through real-time AI inference; the AI inference programs also run at the edge. All the applications are managed in the cloud: they are distributed by the cloud, and dynamic operation and maintenance at the edge is supported. So here you can see we have many edge boxes deployed along the side of the bridge.
The edge box is an edge node in the KubeEdge framework. We also have the 5G antennas and many sensors, like this one, an integrated environmental sensor. Here we can see all the devices on the bridge. On the left is the edge box, composed of a main board and a base plate. On the right you can see many other sensors, like the rain and snow sensor, the microphone, the integrated environmental sensor, and also the IMU. In total, 14 types of sensor data are measured. And here is a picture of the environmental sensor; it can collect light intensity, air pressure, noise, temperature, and humidity.

But how do we manage this sensor? In KubeEdge, we can define this kind of device with a device model, and then define each device instance using that model. Here is the YAML file of the device model. We can define many properties for the device model: the property name, the property description, the property type, and the access mode of the property; here we can see read-only mode. In this way we can define many different device model properties. We also have a YAML file to define the device instance itself. Here we define the environmental-sensor instance. In the specification, we reference the device model by name and define the protocol. We use Modbus, so we set the slave ID, and also the serial port, baud rate, data bits, and parity bits; these properties are all specific to this sensor. There is also a segment for data: it sets the data fields for third-party data push. This means the data collected from this sensor can be pushed to the cloud, but it can also be pushed to third-party storage such as InfluxDB. And we can also see the status; this segment is used to define the data reported to the cloud.
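The device model and device instance described here can be sketched roughly as below, following the `devices.kubeedge.io/v1alpha2` API; the names, property list, and Modbus settings are illustrative placeholders, not the actual files from the bridge project.

```yaml
# Sketch of a device model: each property carries a type and an access mode.
apiVersion: devices.kubeedge.io/v1alpha2
kind: DeviceModel
metadata:
  name: environment-sensor-model
spec:
  properties:
    - name: temperature
      description: ambient temperature collected by the sensor
      type:
        int:
          accessMode: ReadOnly
    - name: humidity
      description: relative humidity collected by the sensor
      type:
        int:
          accessMode: ReadOnly
---
# Sketch of a device instance referencing the model, with Modbus
# connection details (slave ID, serial port, baud rate, data bits, parity),
# a data section for third-party push, and twins reported to the cloud.
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: environment-sensor-01
spec:
  deviceModelRef:
    name: environment-sensor-model
  protocol:
    modbus:
      slaveID: 1
    common:
      com:
        serialPort: "/dev/ttyS0"
        baudRate: 9600
        dataBits: 8
        parity: none
        stopBits: 1
  data:
    dataTopic: "$ke/events/device/+/data/update"
    dataProperties:
      - propertyName: temperature
status:
  twins:
    - propertyName: temperature
      reported:
        metadata:
          type: int
        value: "25"
```

A real instance would also carry a node selector in its spec to bind the device to the specific edge box that is physically wired to it.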
That is, the device twins. Here we defined many types of twins, such as the light intensity, the noise, the pressure, and the humidity.

We can also deploy and manage different applications at the edge from the cloud. Basically speaking, the applications deployed to the edge can be summarized into three types. One is the business applications, such as caches, etc. Another is the device mapper programs, which read the data from the devices and push the data to the cloud; they are all packaged as containers, and these containers are deployed to the edge. And there is a third type, the AI inference programs, which do the inference work directly at the edge.

This is a picture of the whole architecture, from the cloud to the edge and down to the sensors. In the cloud we have Kubernetes and CloudCore; CloudCore is the key component of KubeEdge on the cloud side. At the edge we have EdgeCore, the key component of KubeEdge on the edge side. Here we can also see the MQTT broker, and we have many components for data collection, data processing, and data conversion. The cloud is responsible for connecting to the bridge health monitoring system and for managing the edge nodes and edge applications, and the edge nodes are responsible for connecting to the various sensors on the bridge and pushing all the data to the cloud. Here you can see we have already connected many sensors at the edge.

Finally, we also have some best practices, first about the data collection frequency. The data collection frequency of a mapper directly affects the CPU and memory resources occupied by the mapper. Generally speaking, the system resource consumption is proportional to the collection frequency of the mapper.
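Since KubeEdge keeps the Kubernetes API, these edge applications can be distributed as ordinary Deployments pinned to edge nodes with a node selector. A minimal sketch for a mapper container, where the node label and image name are assumptions, not the project's actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: modbus-mapper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: modbus-mapper
  template:
    metadata:
      labels:
        app: modbus-mapper
    spec:
      # Label commonly applied to KubeEdge edge nodes; adjust to your cluster.
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      # The mapper typically talks to the MQTT broker on the edge host.
      hostNetwork: true
      containers:
        - name: modbus-mapper
          image: example.com/mappers/modbus-mapper:v1.0  # hypothetical image
```

The same pattern covers all three application types: business services, mappers, and AI inference programs are just containers scheduled onto the edge boxes from the cloud.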
When the edge resources are limited, the collection frequency should be strictly controlled. Generally speaking, there are two ways to push the data collected by a mapper. First, push it to the cloud. As we all know, KubeEdge stores device data in SQLite at the edge and only keeps the latest piece of data for each device; it does not provide storage for large amounts of collected data. Second, we can push the data to a database. That means after the mapper collects the data, it does not publish it to the local MQTT broker, but directly pushes the data to a third-party database like InfluxDB.

I also have some best practices regarding the data reporting frequency. Generally speaking, the data reporting frequency of a mapper is also proportional to the cloud–edge resource usage, so if edge resources are limited, the data reporting frequency should also be controlled.

Next, about the edge cache. When the cloud and the edge node are on a weak network, or only intermittently connected, we hope the business data at the edge node will not be lost. But the storage space of the edge node is also limited, so we hope the edge cache can be designed as a lightweight solution. That means when the network is good, the edge directly pushes data to the cloud; when the network between cloud and edge is disconnected, the edge node caches the data; and when the network is restored, the data is synced to the cloud by something like a scheduled task.

And about AI model warm-up at the edge: I think this is very useful in edge AI scenarios, where we need to update and load an AI model within seconds. But sometimes the image containing the AI model may be very large, and the network of the edge nodes may be very poor. In this case, if we do not adopt any image warm-up function, the AI inference work is likely to be interrupted during image updates, and this is what we do not want to happen.
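The edge cache behavior described here (push directly when the link is up, buffer when it is down, flush when it is restored) can be sketched as follows. All names are illustrative; a real implementation would persist the buffer to disk and keep it within the edge node's limited storage.

```python
import collections

class EdgeCache:
    """Minimal sketch of a lightweight edge cache: data goes straight to the
    cloud while the network is good, is buffered locally when the cloud-edge
    link drops, and is flushed by a scheduled task once the link is back."""

    def __init__(self, push_to_cloud, max_entries=10_000):
        self.push_to_cloud = push_to_cloud  # callable that uploads one record
        # Bounded buffer: the oldest records are dropped first when full,
        # so the cache cannot outgrow the edge node's storage budget.
        self.buffer = collections.deque(maxlen=max_entries)
        self.connected = True

    def report(self, record):
        if self.connected:
            try:
                self.push_to_cloud(record)
                return
            except ConnectionError:
                self.connected = False  # link just went down
        self.buffer.append(record)      # cache while offline

    def on_network_restored(self):
        """Invoked by a scheduled task when the cloud link comes back."""
        self.connected = True
        while self.buffer:
            self.push_to_cloud(self.buffer.popleft())
```

Usage: the mapper calls `report()` for every collected record, and a periodic health check calls `on_network_restored()` when it sees the cloud again.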
We also need to handle massive edge node access, for example when more than 100,000 nodes need to connect. In this scenario of massive node access, we can guess that etcd, the API server, and CloudCore are likely to become the performance bottlenecks. Generally speaking, there are two ways to mitigate this problem. First, by simplifying the node status object and the pod status object, the traffic impact on the cloud caused by large-scale edge node status reporting can be reduced. Second, using mechanisms like dynamic backoff, we can avoid the simultaneous access of a large number of edge nodes; this is also very useful. And finally, in the case of massive node access, we should also adopt a scheme where, as long as an edge node is powered on and connected to the internet, the cloud side automatically discovers the edge node and automatically installs the edge components. I think these are all my lessons learned and best practices from the cloud edge. Thank you, thank you.
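The second mitigation, spreading out reconnections so a large fleet of edge nodes does not hit the cloud at the same instant, is commonly done with randomized ("jittered") exponential backoff. A minimal sketch, with illustrative parameters:

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0):
    """Return a retry delay in seconds for the given failed-attempt count.

    The delay is drawn uniformly from [0, min(cap, base * 2**attempt)], so it
    grows with repeated failures and, because of the random jitter, is spread
    out across nodes instead of synchronizing the whole fleet's retries."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Each edge node would sleep for `backoff_delay(n)` before its (n+1)-th attempt to register with or reconnect to CloudCore, with the cap keeping worst-case delays bounded.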