Hello, everyone. My name is Siqi Luo, and my presentation is titled "From Ground to Space: Cloud-Native Edge Machine Learning Case Studies with KubeEdge-Sedna." Let me give a brief introduction of myself. I work at Huawei as an algorithm engineer, mainly focusing on cloud-native edge computing, including edge-cloud synergy AI, lifelong learning, and edge resource scheduling. I am also a developer of two open-source projects at KubeEdge, Sedna and Ianvs. This year, I serve as a project mentor in the Open Source Promotion Plan and help participants integrate AI algorithms into KubeEdge Ianvs.

There are four parts in my presentation. First is the background, framework, and challenges of distributed collaborative AI. Then come two practical cases: one about smart robotic perception, the other about Earth-satellite synergy farmland counting. At last, I present the takeaways.

With the rapid development of Internet of Things communication technologies and related embedded devices, edge computing is emerging and bringing new trends to cloud technology. It can be seen that the services of major cloud vendors have evolved from the traditional public cloud, which mainly relies on ultra-large-scale data centers interconnected at high speed through optical cables, to the hybrid cloud that connects private clouds and public clouds via private lines, and then to the edge cloud that connects edge nodes to the center via private lines or public networks. In the future, a fully distributed cloud with multi-path interconnection between central data centers and edge data centers may even emerge. Edge computing will provide AI capabilities for tens of billions of terminals, creating an intelligent world where everything is sensing, connected, and intelligent. The cloud plays multiple roles in this trend.
It not only provides computing chips, hardware devices, and network communications, but also actively delivers related system architectures, algorithm capabilities, solutions, products, and services. Confronted with this shift toward edge computing, a cloud-native edge computing platform named KubeEdge was established. It extends the power of Kubernetes to the edge and applies to scenarios such as edge-cloud synergy computing, intelligent edge computing, and lightweight deployment. KubeEdge provides ultra-lightweight edge-cloud collaboration, high-speed edge-cloud communication, and edge application autonomy, and it advances the development of the edge computing industry. Over 1,100 members contribute to KubeEdge, which is also backed by over 30 industry leaders.

After active discussions in the KubeEdge community, SIG AI was established in 2020. SIG AI focuses on technical discussion, API definition, and reference architecture implementation in the edge AI field, enabling AI applications to run better on the edge, including cost saving, performance improvement, and data protection. Its scope and related planning fall into three directions. First, the distributed collaborative AI framework direction defines a distributed collaborative programming framework for AI applications, helping developers quickly build distributed collaborative AI applications and enabling them to run at the edge. SIG AI has incubated the Sedna project, making an impact in the AI field. In addition, the collaborative benchmarking framework Ianvs has been launched; it supports AI developers and end users with efficient development datasets, toolkits, and best-practice discovery.

About Sedna: it is the first distributed collaborative AI framework, and it brings edge-cloud synergy capabilities to existing training and inference scripts. It also supports seamless migration of existing AI applications to the edge. It has realized unified dataset management and model management.
It has achieved a full set of training and inference frameworks, that is, incremental learning, joint inference, federated learning, and lifelong learning. Mainstream AI frameworks such as TensorFlow, PyTorch, MindSpore, etc., are compatible with Sedna. Sedna shows its advantages in reducing deployment costs, improving model performance, and protecting data privacy. However, for AI developers, testing edge-cloud synergy AI model performance is also important before deploying to practical applications, and that is why we need Ianvs. KubeEdge Ianvs is the first distributed collaborative AI benchmark, and it provides datasets, baselines, metrics, and even simulation tools to facilitate more efficient and effective AI development. In the edge-cloud synergy domain, we lack business datasets and algorithms, and it is expensive to test all scenarios. In addition, closed testing environments and the heavy work of customizing test cases bring many barriers to testing edge-cloud synergy AI. Hence, to address the above pain points, Ianvs provides comprehensive benchmark specifications for edge AI, test cases for typical edge AI scenarios, and an end-to-end test bench.

In the wave of edge computing, AI is the most important application in the edge cloud and even in the distributed cloud. With the widespread adoption and performance improvement of edge devices, it has become an inevitable trend to deploy some AI-related tasks on edge devices. The technology of implementing artificial intelligence systems based on edge devices, edge servers, and cloud servers using multi-node distribution or even multi-node collaboration is what we call distributed collaborative AI. The core driving force of distributed collaborative AI is that data is at the edge first, so edge devices are gradually equipped with AI capabilities. This makes us believe that although distributed collaborative AI is still in the early stages of development, it is an inevitable trend and will go much further.
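To make the benchmarking idea concrete, here is a minimal sketch of what a collaborative-AI benchmark harness does, in the spirit of Ianvs but using only illustrative names (this is not the real Ianvs API): run every baseline over the same shared test cases and report a common metric.

```python
# Illustrative benchmark harness sketch (hypothetical names, not the Ianvs API).

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def run_benchmark(baselines, test_samples, test_labels):
    """Evaluate every baseline model on the same shared test set."""
    results = {}
    for name, model in baselines.items():
        predictions = [model(x) for x in test_samples]
        results[name] = accuracy(predictions, test_labels)
    return results

# Toy example: two "models" classifying numbers as even (0) or odd (1).
baselines = {
    "edge-small": lambda x: x % 2,   # correct rule
    "naive":      lambda x: 0,       # always predicts "even"
}
samples = [1, 2, 3, 4, 5, 6]
labels  = [1, 0, 1, 0, 1, 0]
print(run_benchmark(baselines, samples, labels))
# edge-small scores 1.0, naive scores 0.5
```

The real Ianvs additionally standardizes the test environment and test case descriptions, but the core loop is this shared-dataset, shared-metric comparison.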
As edge computing power is gradually strengthened, it is possible to perform inference and even some training at the edge. We are also witnessing the continuous evolution of edge AI: the current mode of training at the cloud and inference at the edge will evolve to edge-cloud collaboration or even distributed collaboration. Typical collaborative technologies include collaborative inference, federated learning, incremental learning, lifelong learning, etc., and this technical value drives the rapid development of distributed collaborative AI.

In addition to technical value, distributed collaborative AI shows huge business value. Compared with public cloud AI, distributed collaborative AI adapts to edge scenarios: it reduces transmission latency and bandwidth and protects data privacy from leaking off the edge. Compared with private cloud AI, distributed collaborative AI can combine with cloud computing power to reduce edge construction and maintenance costs and achieve cross-edge knowledge convergence. Research Dive analysis forecasts that the global edge AI, and even distributed collaborative AI, software market will grow from $400 million in 2019 to over $3 billion in 2023. Moreover, McKinsey predicts that edge AI and distributed collaborative AI will cover at least 20 industries. This promising business value also drives the fast development of distributed collaborative AI.

However, we cannot avoid the challenges of distributed collaborative AI. First is edge resource fragmentation. Due to the high cost of intelligent and heterogeneous software and hardware, computing, network, and storage resources are usually limited, and we need to adapt to a wide variety of equipment and power requirements. Second is edge data silos. Edge data is naturally geographically distributed over different devices due to privacy and network bottlenecks; the data often has security concerns and needs to be strictly protected. Third is edge data deficiency.
Data silos, difficult labeling, and short data collection periods cause insufficient edge device data. Since the data collection time of a new project is short and a cold start is required, a full model trained on the cloud performs poorly. Fourth is edge data heterogeneity. Inconsistent distribution of edge data in time and space, for example non-IID or OOD issues, results in unstable performance of the same model on different edge nodes. Hence, in the next part, I will exhibit how KubeEdge SIG AI solves these challenges with two practical cases.

The first case is smart robotic perception. Nowadays, robots are generally utilized to conduct delivery tasks in restaurants and offices, as well as inspection tasks in industrial scenarios. The main technology for robots to localize and navigate is laser radar. However, laser radar usually fails to detect low obstacles such as ramps and curbs, which finally results in robots falling down. Let me play two videos. As shown in the two videos, the quadruped robot is walking in the garden but gets trapped over the curb. That is because laser radar cannot detect the curb, which is very low for a robot. Therefore, an AI visual detection method is proposed to compensate for the laser radar's deficiency and recognize the environment accurately, as shown in the two figures, where low obstacles like curbs and stairs can be detected. Due to the limited resources on robots, we decided to deploy edge-cloud synergy AI, in which a robot, as an edge node, becomes smart enough to detect low obstacles and make intelligent decisions.

However, data heterogeneity exists in edge-cloud synergy AI for robotic perception. When a robot receives images from a totally unseen spot, or the environment has severe weather, or the environment has a different brightness from its usual level, the robot still falls down because the AI model fails to recognize accurately.
For example, we trained a model on data from the garden and the second floor and tested it on the first floor. As you can see, the inference results on the garden and second-floor images are almost perfect: we can clearly tell curbs and ramps in the results. But the inference result on the first floor is a mess, in which we cannot even recognize the closest curb. So it is not hard to conclude that robots will eventually fall in this spot. This is a typical data heterogeneity example.

The second challenge is edge data deficiency in robotic detection. It is caused by the fact that it is too hard to train an accurate new model quickly for heterogeneous data. First, it requires huge manual labeling costs, especially for image labeling. Second, for a single edge site, few samples are collected in a short period. Hence, third, there is usually a cold start for a new spot or a new project.

Therefore, to tackle the challenges above, we utilize lifelong learning. But what is lifelong learning? Literally, lifelong learning is a continual learning process for AI models: they become smarter from historical knowledge and overcome catastrophic forgetting. In our edge-cloud collaborative lifelong learning, we utilize multi-task lifelong learning, in which the cloud knowledge base stores and learns tasks and data from different edge nodes. When the cloud knowledge base is updated with new models, it synchronizes them to edge nodes on demand. Through this iterative learning and updating of the knowledge base, we achieve edge-cloud collaborative lifelong learning. Most importantly, we solve edge data heterogeneity in robotic vision detection by unseen task detection and online unseen task processing. First of all, unseen task detection works at the inference stage, usually at the edge. It recognizes and collects edge heterogeneous data, which we call unseen tasks, so that we can let robots stop in time and avoid falling down.
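The knowledge-base loop just described can be sketched in a few lines. This is a deliberately simplified illustration, not the Sedna lifelong learning API: the cloud knowledge base accumulates per-task models (here each "model" is just a sample mean) and synchronizes updates back to edge nodes on demand, while the edge buffers samples for tasks it has never seen.

```python
# Illustrative multi-task lifelong learning loop (hypothetical classes,
# not the Sedna API).

class KnowledgeBase:
    """Cloud-side store of per-task models."""
    def __init__(self):
        self.models = {}                 # task name -> model (here: a mean)

    def learn_task(self, task, samples):
        # "Train" a model for the task; here, just store the sample mean.
        self.models[task] = sum(samples) / len(samples)

    def sync_to_edge(self, edge):
        # Push the latest models down to an edge node on demand.
        edge.models = dict(self.models)

class EdgeNode:
    """Edge-side inference with unseen-sample buffering."""
    def __init__(self):
        self.models = {}
        self.unseen_samples = []         # buffered data for cloud training

    def infer(self, task, x):
        if task not in self.models:
            # Unseen task: collect the sample and report "no prediction",
            # which lets the robot stop instead of acting on a bad model.
            self.unseen_samples.append((task, x))
            return None
        return self.models[task]

kb, edge = KnowledgeBase(), EdgeNode()
assert edge.infer("curb", 1.0) is None   # unseen at first; sample buffered
kb.learn_task("curb", [1.0, 3.0])        # cloud trains on uploaded samples
kb.sync_to_edge(edge)
assert edge.infer("curb", 1.0) == 2.0    # task is now "seen" at the edge
```

Iterating this collect-train-sync cycle is what makes the edge models smarter round after round.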
Online unseen task processing guarantees that heterogeneous images will be processed in real time by notifying for manual intervention, rather than waiting for a new model to process them. Second of all, to solve edge data deficiency, we design unseen task training at the cloud. Through multi-task learning and transfer learning methods, which require fewer labeled samples, we can train new models for unseen tasks, turning them into seen tasks. Via this technique, we can keep the AI models deployed on the robot getting smarter and smarter.

Now I will display the demo of smart robotic perception. In this demo, a robot conducts a delivery task: it needs to deliver a drink from building E1 to E3, along which it detects and learns unseen tasks. There are two rounds of demonstration. The first round shows unseen task detection, online unseen task processing, and unseen sample uploading to the cloud. The second round shows the effect of unseen task training, helping the robot pass through low obstacles. There are four screens: the left two show the first-round delivery, while the right two are about the second round. Also, the inference images are displayed in the two bottom screens. Now the robot accepts its task to deliver a drink, and it starts. The inference result is marked at the bottom right, implying an unseen image is detected, which would lead to the robot falling down. At the same time, unseen images are collected, as is shown; now we have eight unseen images. When the robot falls down, we tackle it with the online unseen task processing method, that is, real-time notification for manual intervention. While the robot falls in the first round, after unseen task training it can pass this ramp successfully in the right screen. When the robot comes to a ramp, people on the ramp make it change its scheduled path to cross the curb. But it cannot detect curbs in the first round and falls down again.
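The edge-side behavior described above, detecting unseen tasks at inference time and triggering real-time manual intervention, can be sketched as follows. This is an illustrative confidence-threshold version (the function names and the threshold are assumptions for illustration, not the project's actual detector):

```python
# Illustrative unseen task detection and online processing sketch
# (hypothetical names and threshold, not the Sedna implementation).

def detect_unseen(confidence, threshold=0.6):
    """Treat a low-confidence inference as an unseen task."""
    return confidence < threshold

def process_frame(confidence, unseen_buffer, notify):
    """Handle one inference result at the edge."""
    if detect_unseen(confidence):
        unseen_buffer.append(confidence)  # collect for cloud-side training
        notify()                          # e.g. stop the robot, alert staff
        return "unseen"
    return "seen"

alerts, buffer = [], []
results = [process_frame(c, buffer, lambda: alerts.append("stop"))
           for c in [0.9, 0.3, 0.8, 0.5]]
# results == ["seen", "unseen", "seen", "unseen"]; two alerts raised
```

The key design point is that the unseen path does two things at once: it keeps the robot safe immediately (notification) and feeds the cloud the exact samples it needs for unseen task training later.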
Facing the same problem in the right screen, the robot detects curbs successfully and switches its gait to cross the curb after unseen task training. In the end, the robot delivers only half a cup of the drink in the first round, but a full cup in the second round. As we see, the second delivery completes successfully. It took 19 minutes and 53 seconds, a 28.05% saving in delivery time compared to the first round. We also developed a cloud robotic lifelong learning application, which shows the number of unseen images in each round. Moreover, it shows the number of unseen images for each category after labeling, and we can take an overall view on this page, which presents basic information about the application. That's all for this demo.

In the second case, I will introduce Earth-satellite synergy farmland counting via edge-cloud joint inference. The background is that when remote sensing satellites are used to conduct AI tasks, like estimating the planting area of a whole country, they face many challenges. For example, there are threefold redundant transmission requirements due to bit errors, and the limited downlink bandwidth has to carry huge data transmissions, which is the most power-hungry part. Given that GPUs cannot be deployed on satellites due to space and power constraints, HD images fail to be inferred accurately for farmland counting. Therefore, edge AI deployment can help improve bandwidth utilization and support more applications; edge deployment also helps reduce data transmission and save energy. Finally, we propose edge-cloud joint inference to solve AI tasks in space. We design the edge-cloud joint inference framework as shown at right: we prepare a small model deployed at the edge node, that is, the satellite, and a big model at the cloud, that is, the GPU servers. The small model uses less storage space than the big model and has faster inference speed.
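A minimal sketch of this small-model/big-model split follows. It is an illustration of the joint inference idea, not the Sedna API: the edge model answers samples it is confident about locally, and forwards the rest, the hard samples, to the cloud model over the downlink.

```python
# Illustrative edge-cloud joint inference sketch (hypothetical models and
# threshold, not the Sedna implementation).

def joint_inference(sample, small_model, big_model, threshold=0.8):
    """Route a sample: answer at the edge if confident, else ask the cloud."""
    label, confidence = small_model(sample)
    if confidence >= threshold:
        return label, "edge"           # easy sample: answered on the satellite
    return big_model(sample), "cloud"  # hard sample: sent over the downlink

# Toy models: classify whether a pixel value looks like "farmland" (1).
def small_model(x):
    # Confident only far from the decision boundary at 0.5.
    return (1 if x > 0.5 else 0), abs(x - 0.5) * 2

def big_model(x):
    return 1 if x > 0.5 else 0

assert joint_inference(0.95, small_model, big_model) == (1, "edge")
assert joint_inference(0.55, small_model, big_model) == (1, "cloud")
```

Because only hard samples cross the downlink, this routing is what yields the bandwidth and energy savings discussed above while keeping accuracy close to the big model's.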
When the edge small model detects a hard sample, it sends it to the big model at the cloud for more accurate inference. The status data and application data are transmitted through a ground signal station between the satellites and the cloud. Most importantly, edge-cloud joint inference solves edge resource fragmentation by leveraging and scheduling resources across multiple edge nodes from the cloud. In addition, it solves edge data silos and achieves data sharing among multiple edge nodes through the cloud and certain security mechanisms. The test results show that the average precision and recall scores are up to 99%. This edge-cloud joint inference design presents the following benefits: it improves detection precision, reduces satellite energy consumption, and reduces transmission costs. At the same time, Earth-satellite synergy still faces problems like heterogeneous data processing, inference acceleration, cloud migration ratio reduction, and edge-cloud knowledge sharing. In the future, these problems will be settled by Sedna through edge-cloud synergy lifelong learning, model compression, and dynamic cloud migration decisions.

Finally, let me conclude my presentation. First, I introduced distributed collaborative AI, which suffers from four challenges, and KubeEdge SIG AI leverages Sedna lifelong learning and joint inference to solve those four challenges. We applied Sedna to two practical cases to show the effect: smart robotic perception and Earth-satellite synergy farmland counting. You are all welcome to follow the other SIGs of KubeEdge, such as SIG Robotics, SIG Media, and SIG Networking, as we work together to solve core problems in cloud-native edge computing. Thank you all for listening, and please contact us through the QR codes below if you have any interest.