Buenos días, Valencia. It is a great honor to have the opportunity to keynote at KubeCon Europe this year. We are from the open source cloud and AI team at Huawei. I am a maintainer of the KubeEdge, Volcano, and Karmada projects. In addition, I was involved in the early days of forming the Kubernetes Policy Working Group and the CNCF Security TAG. I want to give a big shout out to Ihor Dvoretskyi, who has been instrumental in helping me kickstart the Policy Working Group effort. I want to say: dude, you are in my prayers each and every day, please stay safe, and hopefully in the near future we can meet face to face for a discussion. The topic I'm going to present today is incremental deep learning for satellites with KubeEdge and MindSpore, a newly open-sourced deep learning framework. So let's have a quick overview of the satellites. LEO, short for low Earth orbit, satellites have been a recent trending topic in the space industry. For example, people are most familiar with the Starlink effort from SpaceX. There are still many problems lying ahead for LEO satellites: the problem of constellation management, the problem of how to minimize the cost of communication between the Earth and the orbit, the problems of monitoring health status, debris, and collision avoidance, and also the problem of creating new application scenarios, for example for climate change, for space mining, for deep space exploration. So last year, with collaboration among BUPT, PKU, CMCC, and Huawei Cloud, the first satellite of the Tiansuan, or Sky Computing, constellation program was launched. With that, we actually did an experiment which is truly pioneering: we combined space technology with cloud native and AI. So in the next two sections, you are going to hear from two extraordinary speakers on how KubeEdge and MindSpore bring cloud native and AI technology into space.
So next up, please welcome Bao Yue from the KubeEdge team to give a deep dive on cloud native for space. Now let me take over the presentation to introduce how KubeEdge works in cloud native satellites. KubeEdge is designed for edge computing and cloud-edge coordination. We launched it in 2018, and it became a CNCF Sandbox project in March 2019. It graduated from Sandbox into a CNCF incubating project in September 2020. And since 2020, we have made some in-depth attempts in a wide range of fields. We now have five SIGs and working groups, including SIG AI, SIG Device/IoT, SIG MEC, SIG Robotics, and the robustness working group. Since the launch until now, KubeEdge has grown a healthy community. We have held lots of conferences and meetups, attracting people from all walks of life to join us. We now have more than 900 contributors, including 260 code committers. All of these people are from more than 70 organizations. Let's take a look at the KubeEdge architecture. Basically, KubeEdge natively extends Kubernetes. We have two parts, cloud and edge. In the cloud, we have CloudCore; it talks to Kubernetes and forwards requests to the edge. On the edge we have EdgeCore, and it talks to the container engine and devices. EdgeCore can run with a memory footprint as small as 70 megabytes, and it also supports OCI-compliant container runtimes. There are some key features of KubeEdge. First, we support the Kubernetes native APIs for developers. We also support seamless cloud-edge coordination, edge autonomy, and low-resource environments. For IoT devices, KubeEdge provides device mappers to simplify the integration of different device protocols. Also, we provide a cloud-side view of global metrics from the edge. Next, Sedna. The Sedna project is an AI subproject on top of KubeEdge. It provides an edge-cloud synergy AI framework to support joint inference, incremental learning, and federated learning.
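Because KubeEdge keeps the Kubernetes native APIs, scheduling a workload onto a satellite edge node looks like ordinary Kubernetes. A minimal sketch, assuming the conventional `node-role.kubernetes.io/edge` label that KubeEdge applies to edge nodes; the workload name and image are placeholders, not from the talk:

```python
# Build a plain Kubernetes Deployment manifest that targets KubeEdge edge
# nodes via a nodeSelector. The label key follows KubeEdge's edge-node
# labeling convention; the image is a hypothetical placeholder.

def edge_inference_deployment(name: str, image: str) -> dict:
    """Return a Deployment manifest pinned to KubeEdge edge nodes."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Schedule onto edge nodes only.
                    "nodeSelector": {"node-role.kubernetes.io/edge": ""},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = edge_inference_deployment("satellite-inference",
                                     "example/detector:latest")
```

The same manifest could then be applied with any standard Kubernetes client; CloudCore relays it to the EdgeCore on the target node.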
In the Sedna architecture, you can see we have a centralized GlobalManager in the cloud to coordinate all the components on the different edge nodes. On each edge node, we have a LocalController to control the edge-cloud AI tasks, and it also manages datasets, models, and status synchronization. Sedna also provides APIs for developers to quickly integrate third-party algorithms through its library. On top of this framework, the worker components run training and inference tasks, both on the edge and in the cloud. KubeEdge and Sedna play an important role in the cloud-native satellite. They enable edge computing on the satellite and joint AI inference between the satellite and the ground stations. First, Sedna is used to build multi-model joint inference between the satellite and the ground, with incremental model learning on the ground. In this way, a small model is used on the satellite and a large model is used on the ground, so the satellite requires few resources to achieve a better AI inference effect. In addition, the device mapper of KubeEdge is used to model and manage the sensors of the satellite in a unified manner, allowing management personnel on the ground to obtain the working status of onboard devices in real time. All of these components communicate with each other through a highly reliable cloud-edge channel established by KubeEdge. Next, let's welcome Xiaoman to give a deep dive on how MindSpore completes the task of incremental learning with KubeEdge. Processing a vast amount of data using artificial intelligence and cloud native is important for end users. So how can it work if we would like to detect farmland areas using this data? There are two stages, as you can see on the slides. First, we train an image detection model with MindSpore on a tremendous amount of data. Then the MindSpore YOLOv3-Tiny model is pre-deployed on the satellite and served with KubeEdge Sedna via TinyMS, with a size of no more than 30 megabytes, which fits within the satellite's memory.
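The small-model/large-model split described above can be sketched as a toy joint-inference loop. Both "models" below are stand-ins for the real on-board YOLOv3-Tiny and ground-side detector, and the confidence threshold is an assumed value for illustration only:

```python
# Toy sketch of Sedna-style joint inference: a small model answers on the
# satellite (edge); samples whose confidence falls below a threshold are
# treated as hard examples and offloaded to a large model on the ground.

HARD_EXAMPLE_THRESHOLD = 0.8  # assumed value, for illustration only

def small_model(sample: dict) -> tuple:
    # Stand-in for the ~30 MB on-board model: echoes a precomputed score.
    return sample["edge_label"], sample["edge_confidence"]

def large_model(sample: dict) -> str:
    # Stand-in for the high-precision ground-station model.
    return sample["true_label"]

def joint_inference(samples: list) -> tuple:
    """Run the edge model; offload low-confidence (hard) samples to the ground."""
    results, hard_examples = [], []
    for s in samples:
        label, conf = small_model(s)
        if conf >= HARD_EXAMPLE_THRESHOLD:
            results.append(label)           # confident: answer on the satellite
        else:
            hard_examples.append(s)         # hard example: send to the ground
            results.append(large_model(s))
    return results, hard_examples

samples = [
    {"edge_label": "farmland", "edge_confidence": 0.95, "true_label": "farmland"},
    {"edge_label": "forest",   "edge_confidence": 0.40, "true_label": "farmland"},
]
labels, hard = joint_inference(samples)
```

The hard-example list is exactly what the LocalController would compress and downlink, which is also the raw material for the incremental retraining step described next.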
The satellite can then detect farmland areas by itself, though some cases cannot be recognized because of this low-precision model. So, when hard examples are discovered, the KubeEdge Sedna LocalController compresses these data sets and sends them to the ground, where a high-precision image detection model is used for inference. These hard examples can also be used to train a new model and improve the precision. So, the KubeEdge Sedna GlobalManager performs incremental learning tasks, using MindSpore for the training. After the model is refreshed and retrained by MindSpore, KubeEdge Sedna pushes a compressed partial model, with a size of about 3 megabytes, which fits the roughly one-megabit-per-second satellite link, up to the satellite. When the satellite receives the compressed partial model, MindSpore decompresses it and updates the model, and then KubeEdge Sedna redeploys the new model, so that in the end the accuracy gets better with the new model. You have heard MindSpore mentioned many times now, so what is it? MindSpore is a newly open-sourced deep learning framework which was launched on March 28, 2020. It is a really user-friendly AI framework that only requires developers to master the basics of tensors, operators, models, and Python programming, without a steep learning curve on many of the underlying complexities. Moreover, with features such as high-order differentiation optimization, automatic parallelization, and graph-operator fusion, MindSpore can achieve very high performance. I will introduce just three core features of MindSpore here. First, MindSpore performs automatic differentiation based on source code transformation using a just-in-time compiler. It supports complex control flow structures, such as while and for loops, and flexible functional programming, such as higher-order functions and closures.
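The partial-model update path can be sketched as follows. This is a toy illustration under stated assumptions: real Sedna/MindSpore exchanges work on trained checkpoints, whereas the "weights" here are plain dictionaries, and the diff-then-compress scheme is a simplified stand-in for how a small update stays within the link budget:

```python
import json
import zlib

# Toy sketch of the incremental-learning update path: the ground retrains
# the model, computes a partial update (only the changed weights),
# compresses it, and the satellite decompresses and merges it into its
# local copy before the new model is redeployed.

def partial_update(old: dict, new: dict) -> bytes:
    """Ground side: keep only the weights that changed, then compress."""
    delta = {k: v for k, v in new.items() if old.get(k) != v}
    return zlib.compress(json.dumps(delta).encode())

def apply_update(old: dict, payload: bytes) -> dict:
    """Satellite side: decompress the partial model and merge it in."""
    delta = json.loads(zlib.decompress(payload).decode())
    merged = dict(old)
    merged.update(delta)
    return merged

on_board  = {"conv1": [0.1, 0.2], "conv2": [0.30, 0.40]}
retrained = {"conv1": [0.1, 0.2], "conv2": [0.35, 0.41]}  # only conv2 changed

payload = partial_update(on_board, retrained)   # small blob for the uplink
updated = apply_update(on_board, payload)       # satellite's refreshed model
```

Only the changed layer crosses the link, which mirrors why a ~3 MB partial model, rather than the whole 30 MB model, is pushed over the ~1 Mbps channel.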
Secondly, MindSpore's automatic parallelism lets you use serial algorithm code to implement distributed parallel training while maintaining high performance. The paradigms of distributed parallel training include data, model, and hybrid parallelism, and MindSpore uses a new type of distributed parallel training that integrates all of these paradigms. As a result, on the given data, MindSpore shortens image classification model training by 23% with ResNet-50 and the duration of Chinese pre-training models by 62% with BERT, fully utilizing the hardware computing power with whole-graph offloading to devices and deep graph optimization. Thirdly, MindSpore deploys one framework across device, edge, and cloud. This strategy of develop once, deploy everywhere boosts development and deployment efficiency. Our team also developed a high-level API toolkit for MindSpore, which is called TinyMS. You can see the architecture at the bottom of this slide; it aims for faster adoption of the deep learning framework's capabilities by non-AI-centric services, especially for beginners. Since its inception, MindSpore has established an open and global community for developers. In merely two years, MindSpore has achieved more than 1.2 million downloads, with more than 20 MindSpore study groups created all around the world. The community adopted open governance with a 14-member technical steering committee and 26 working groups. The community has also pioneered equality and diversity efforts. With various community partners, not-for-profit collaborations, like pre-trained infrared camera models for nature protection, further help AI for good. As I said before, MindSpore can make incremental deep learning simple. The AI workflow in this case has been explained in detail. The model we trained can detect features that are nominal, such as typical farmland shapes, and differentiate them from unusual patterns.
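As a toy illustration of the automatic differentiation idea from the first core feature: the sketch below uses operator-overloading dual numbers, which is *not* MindSpore's source-code-transformation approach, but it shows the same user-facing result, differentiating ordinary code with control flow via a higher-order `grad` function (all names here are invented for the example):

```python
from dataclasses import dataclass

# Forward-mode autodiff with dual numbers: each value carries its derivative.
# MindSpore instead differentiates by transforming the Python source through
# a JIT compiler, which is what lets it handle while/for control flow and
# higher-order functions natively; the dual-number trick only mimics that.

@dataclass
class Dual:
    val: float   # function value
    dot: float   # derivative with respect to the input

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def grad(f):
    """Higher-order function: return a function computing df/dx at x."""
    return lambda x: f(Dual(x, 1.0)).dot

# Differentiate f(x) = x^2 + x, even through a Python loop:
def f(x):
    acc = x * x
    for _ in range(1):
        acc = acc + x
    return acc

df = grad(f)  # df/dx = 2x + 1
```

For example, `df(3.0)` evaluates to `7.0`, and because `grad` is itself a plain function, it can be composed or passed around, which is the flavor of functional programming the talk refers to.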
We can use AI to determine which datasets are important enough to send to the ground segment for processing. This can ease the burden on the constrained space-to-ground network caused by the transmission of large volumes of data. Beyond this, MindSpore has many other amazing capabilities, and we think a lot of the AI technologies we described above will be used in many ways, to be developed over the next couple of years to assist this mission and many more, such as deep space exploration. And in the future, we hope that cloud-native space computing can enable better orbit-ground and orbit-orbit communication, monitoring, and resource management. If you are interested in this project, you can join the community and participate in the experiments. Just scan the QR code and follow our channels, like our websites, Twitter, and other social media. We'll share the latest news with you. Thank you. Bye-bye.