Hi everyone. Thank you for joining the talk. I'm Kevin Wan. Unfortunately, Ying is not able to travel here, so I will deliver the talk. Just a little bit about my background: I was one of the earliest Kubernetes contributors and maintainers from China, back in 2015. I'm currently working on the KubeEdge project as well as two other CNCF projects: Volcano, which is a batch scheduler, and Karmada, for multi-cluster management. I'm also helping promote cloud-native technologies, mainly in China but in other regions as well, and I have been organizing several community events. At Huawei Cloud, I'm the leader of the cloud-native open-source team. You can find me on GitHub or Twitter.

Okay, so, just a little background on edge computing. We know that networks are more and more developed today, and a lot of people are exploring ways to run their applications on the edge to achieve better latency, to deal with offline autonomy, and to improve data privacy. But there are a lot of challenges. For example, the network is totally different from a data center network: traffic goes over the internet between the central cloud and the edge. The underlying hardware architecture is also very different. In the data center we have typical servers of a few well-known models, but on the edge there are more ARM-based edge servers, for example, and many other hardware architectures. There are also challenges in using resources like GPUs on the edge, and edge resources today are still expensive compared to those in the data center. So how do we deal with the requirement that people really do want to run applications on the edge? That's why we built the KubeEdge project.
So, to give a very high-level overview of the motivation: KubeEdge is trying to bring Kubernetes and cloud-native technologies to edge computing scenarios, focusing on the collaboration of applications, resources, data, and services between cloud and edge. We know that in the cloud, in the data center, we have very mature services and a lot of capabilities we can use, but on the edge things are still at an early stage. That's why we open-sourced this project and donated it to the CNCF, to make it a truly joint development effort in the community.

To give an overview of the project's journey: KubeEdge was started in 2018, as one of the earliest open-source projects working on edge computing with cloud-native container technologies. Besides adding more and more features to the project, we kept exploring traditional industries and use cases, helping them embrace cloud-native technology, which has been very interesting. For example, in 2020 we got a very large-scale adoption on the highways in China; at that time it was already managing 100,000 edge servers, across multiple clusters, and the total scale today is even larger. Also during 2020 we added a lot of features and started some subprojects to provide more tailored capabilities for different scenarios. We have a subproject called Sedna that enhances AI workload collaboration between edge and cloud, and we have EdgeMesh to deal with data-plane communication between applications in different edge networks. We also got a very interesting user that runs KubeEdge in their vehicles to manage applications, bringing the smart-car concept to life by enabling software-level OTA updates instead of only firmware-level OTA.
Yeah. We also have a very interesting use case where some academic users run containerized services on their low-orbit satellites. A satellite is a little different from a typical edge node: it moves very fast, and it only has a six-to-ten-minute time window on each pass to stay online, while it's offline the rest of the time. We also keep working with the community on large-scale scalability testing and security auditing. Currently we support 100,000 edge nodes in a single cluster, which extends vanilla Kubernetes scalability, though the usage is a little different: in the edge computing scenario, you don't expect all the nodes to always be online.

So, a brief overview of the architecture and how KubeEdge makes this happen. We know Kubernetes was designed for the data center. It requires at least one gigabyte of memory to run even a very simple Kubernetes cluster, and the network between the control plane and the node needs to be very stable, with very low latency. And every time the kubelet starts up, it basically needs to wait for a connection with the API server. That's why we went down the path of running the node component on the edge with some lightweight optimizations: we reduced the memory consumption down to around 70 to 80 megabytes. We also added node-level data persistence, so the node can keep working when it's offline and can recover without a connection to the control plane. The connection between CloudCore (basically the control plane side) and EdgeCore (the node side) re-implements the Kubernetes watch mechanism over WebSocket, which lets the whole architecture survive very poor network quality.
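The node-level persistence idea can be sketched roughly like this. This is a minimal illustration in Python, not the actual EdgeCore code (which is written in Go with its own schema); the class and table names are made up:

```python
import json
import sqlite3

class PodStore:
    """Sketch of node-level pod persistence: the edge node keeps the
    desired pod specs in SQLite so it can restart their containers
    even while disconnected from the control plane."""

    def __init__(self, path=":memory:"):
        # With a real file path instead of ":memory:", the specs
        # survive a node reboot with no cloud connection.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS pods (name TEXT PRIMARY KEY, spec TEXT)")

    def save(self, name, spec):
        # Called when a pod assignment arrives from the cloud side.
        self.db.execute(
            "INSERT OR REPLACE INTO pods (name, spec) VALUES (?, ?)",
            (name, json.dumps(spec)))
        self.db.commit()

    def recover(self):
        # Called on startup while offline: return all persisted pod
        # specs so the local runtime can start their containers.
        return {name: json.loads(spec)
                for name, spec in self.db.execute("SELECT name, spec FROM pods")}

store = PodStore()
store.save("camera-analytics", {"image": "example/analytics:v1"})
print(store.recover()["camera-analytics"]["image"])
```

The point of the sketch is only the shape of the mechanism: desired state is written locally before containers start, so offline recovery never depends on the API server.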
Also, for IoT devices, we know there are a lot of devices speaking different IoT protocols, but people really want to decouple their application development from those protocols. That's why we added the device mapper layer, which provides a way to expose all the device data as standard messages. Your application can then access the device data through standard messages without integrating the device protocol itself.

A little more detail on how this happens without changing the control plane: people still get the full Kubernetes experience on the cloud side through the API server. What we did in KubeEdge is hook into the step between the scheduler making its decision and the kubelet spinning up containers. When the scheduler makes a decision, CloudCore fetches the pod updates, finds the corresponding node that should run the pod, and sends the pod over the connection to the right EdgeCore. EdgeCore persists the pod definition at the node level in SQLite and then starts the containers. This lets the node run vanilla pods, and even if the connection is broken, EdgeCore can recover the pods from local storage, basically the SQLite database.

Now for some updates on the project from the last year. One thing we have been working on is improving the design and implementation to decouple application development from the IoT devices, the leaf devices. The main idea is to treat the device as a service, so your application can interact with the device just like it interacts with a Kubernetes service.
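The device-as-a-service idea can be illustrated with a toy mapper. All names here are hypothetical, not the real DMI API: the mapper owns the protocol details and publishes readings in one standard message shape, so the application never touches the protocol.

```python
import json

class ModbusThermometer:
    """Stand-in for a protocol-specific device (hypothetical)."""
    def read_register(self, addr):
        # Raw Modbus-style data: temperature in tenths of a degree.
        return 215

class ThermometerMapper:
    """Sketch of a mapper: hides the protocol, emits a standard message."""
    def __init__(self, device, name):
        self.device = device
        self.name = name

    def report(self):
        raw = self.device.read_register(0)
        # The application-facing message has a fixed, protocol-neutral shape.
        return json.dumps({
            "device": self.name,
            "property": "temperature",
            "value": raw / 10.0,
            "unit": "celsius",
        })

mapper = ThermometerMapper(ModbusThermometer(), "thermo-01")
msg = json.loads(mapper.report())
print(msg["value"])  # the app reads 21.5 without knowing Modbus
```

Swapping the thermometer for a BLE or OPC-UA device would change only the mapper, never the application.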
Yeah, and the device management interface (DMI) itself opens up more ways to customize device lifecycle management, for example to expose whether your device supports firmware OTA, and if you want more advanced usage, like virtualizing your device, that's possible too; the implementation falls into the mapper. When your application runs, it just communicates with the mapper to fetch the data. What we are doing this year is providing more automated functionality for the data plane: basically enabling users to define rules that tell the system to forward device data to some backend, for example a time-series database. We already provide some mapper implementations for the new device architecture as examples, and you can implement your own based on these reference implementations.

For the networking, the data plane: last year, EdgeMesh had a single point of failure, the EdgeMesh server. Over the past releases we refactored the whole architecture, so it's now adaptive: it automatically chooses one of the EdgeMesh agents as a helper node to forward the control information. You can also see that there are multiple layers of concepts. At the bottom, if your pod needs to talk to another pod across different private networks, the P2P layer helps build a tunnel between the two networks. On top of that, starting from the cluster service names, the experience is similar to vanilla Kubernetes. And we also have node-level DNS resolution, which lets you keep providing service even when the edge nodes are offline.
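The rule idea can be sketched like this. It is illustrative only; the rule fields and backend names are made up, not KubeEdge's actual rule API:

```python
# Sketch of rule-based forwarding: each rule matches device messages
# on some fields and names a backend; matching messages are routed there.
rules = [
    {"match": {"property": "temperature"}, "backend": "timeseries"},
    {"match": {"property": "alarm"}, "backend": "mqtt"},
]

backends = {"timeseries": [], "mqtt": []}  # stand-ins for real sinks

def forward(message):
    for rule in rules:
        if all(message.get(k) == v for k, v in rule["match"].items()):
            backends[rule["backend"]].append(message)
            return rule["backend"]
    return None  # unmatched messages are dropped in this sketch

dest = forward({"device": "thermo-01", "property": "temperature", "value": 21.5})
print(dest)  # → timeseries
```

The value of pushing this into the platform is that applications and mappers stay unaware of where the data ultimately lands; changing the backend means editing a rule, not code.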
Yeah, and on the AI side, as you know, Sedna is the project that helps simplify AI workload collaboration between cloud and edge. We currently provide four modes of inference and learning. In particular, in the recent release we added multi-edge inference, which enables more advanced inference scenarios. On the learning side, most people know about federated learning, but beyond that, for workloads running on the edge, data collection is sometimes very hard or takes a long time, so incremental learning and lifelong learning with a knowledge base are very important and very useful in those cases. The Sedna project already supports AI frameworks like TensorFlow and PyTorch, among others.

For robotics, there has been a lot of discussion, and the recent update is that there is going to be a new project to simplify remote control of robots. This is currently just a proposal explaining the whole architecture. We want to support things like a customizable slider for sensitivity adjustment, and a WebRTC-based remote view from the robot's perspective; you can go to the pull request address to take a look at more details.

Another aspect I want to bring up is security, which is definitely very important in every open-source project. Last year we finished a third-party security audit with OSTIF and Ada Logics, and you can find the full audit report via the QR code there. KubeEdge is also one of the CNCF projects that has already passed SLSA verification, now at level 4. And we integrated fuzz testing into the project to help automatically discover potential security issues.
Also in the last year, we provided a threat model and a security protection document to help people understand what to take care of when deploying KubeEdge, especially which points could be potential attack surfaces where you need to add extra protection. You can scan the QR code for more details. The community also has its own security vulnerability management process, and we have already dealt with a lot of CVE issues. If you find any potential security issue, you can follow this process to report it and keep track of it until it gets fixed.

Yeah, I also want to introduce some case studies. I think the most interesting thing in the KubeEdge community is that it really brings a lot of industries to cloud native. On the right you can see many industries already using KubeEdge in production, like China's highway electronic tolling system, as well as the Hong Kong-Zhuhai-Macao Bridge, which is a bridge-and-tunnel system. We also have adopters in Europe using KubeEdge to manage public transportation. And I think another very interesting thing worth mentioning is that a lot of these cases come from the community: there are many partners and vendors providing commercial KubeEdge solutions to help their customers build their own platforms. We think that's a very important metric of a healthy community.

Here I just want to dive a little deeper into two of these case studies. One is the low-orbit satellite. We know there are currently a lot of commercial satellites running in very low orbits, and as I said, a satellite moves very fast, which makes it different from a fixed-location edge node, so the network challenge is even more critical. You get about eight to ten passes per day when the satellite is online, and each time window is only six to ten minutes.
With KubeEdge, the academic team manages a small AI model on the satellite to do lightweight inference, for example filtering out pictures that only capture clouds, since those contain nothing useful, while people want to do analysis for agriculture and similar purposes. The meaningful pictures are stored on the satellite, and when the satellite comes back online during a time window, it starts transferring them back, and the ground control center runs a large AI model to do more accurate, heavier AI analysis. This also enables the user to upgrade the applications on the satellite. We know that once a satellite is launched into orbit, you are never able to touch it physically again, so remote update is definitely another very important topic. And with joint inference, it saves a lot of the bandwidth between the satellite and the ground, which saves battery and extends the satellite's service lifetime.

Another use case is using KubeEdge to help adopt cloud-native technology on offshore oil fields. An oil field is a more complicated scenario, both for security and for device management: there are a lot of sensors that need to collect data, and AI models are needed to filter and analyze the data to detect any potential risk on the oil field. With KubeEdge, it's quite easy for people to manage the containerized applications, and easy to deal with the camera streams as well as the data from the sensors.

All right. There are more use cases, but I won't go into much detail. I just want to mention that all of these achievements rely a lot on community effort.
From this you can see that the community keeps growing and getting more contributors involved. Huawei, as one of the founding companies, sent more people to work on this project during the last year, but the community grew even faster, and that's the exciting thing. Today we also have a lot of community partners as well as end users, including academic and research organizations. On the slide you can see hardware designers and IoT companies, IT service providers, telcos, cloud service providers, and research and academic institutions. We think it's very important to have a diverse community.

All right, just a little more about the future of this project. Today, KubeEdge is not limited to a single repo or a single framework. For the core part of the framework, our goal is to make it more mature and more secure, to support larger-scale management between cloud and edge. But on top of that, as you know, there are a lot of scenarios that vary from each other, so we are going to have more dedicated, scenario-based toolkits. For example, Sedna is the one focusing on AI collaboration between cloud and edge. We are going to have an IoT kit on top of the DMI framework to simplify IoT application development, and a robotics kit, which is more advanced and makes use of EdgeMesh, Sedna, and the IoT kit. And we expect to have more, so if any of you have an idea, please feel free to come discuss it with us and let's make it happen in the community.

On the hardware side, from the first day the project started, we have supported multiple architectures, so today x86, ARM32, and ARM64 are already supported.
This year we are going to provide official support for RISC-V hardware. At the operating system level, last year we also provided official support for Android, which is quite interesting, and this year we are going to provide support for Windows. Also, besides managing nodes on the edge as we do today, we are exploring the path of providing cluster management on the edge for use cases that have more resources at the edge. And with a lot of people joining the community, especially those providing applications and their own platform distributions, this year we are going to have conformance testing for multiple layers, to make sure users are able to choose any of the hardware together with a platform distribution and integrate it with upper-layer platforms.

That's all my material. I want to quote the slogan from the CNCF website: let's work together to make cloud native ubiquitous. Thank you. Any questions? I have my colleague in the back with a few t-shirts, so if anyone asks a question, you might get one.

I guess there will be a question in some people's minds: in order to contribute to this project, do we need to have our own specific hardware? Because other projects generally have a more general platform that everyone can use, but KubeEdge could require some distributed devices, even if small, right? Is that required?

That's a very good question. Let me repeat the question for the online audience: does KubeEdge require specific hardware for development? Actually, if you are working on the platform or framework layer of KubeEdge, the answer is no.
We also run tests with cloud servers to mock the environment, and we use some other projects to inject network issues to simulate the network between cloud and edge, so you actually don't need any special hardware for development. The IoT part is a little different: if you do want to develop against certain hardware, for example specific sensors, you might need to have one, but otherwise general hardware is fine.

Cool, thank you. So my question is also hardware related: what kind of IoT hardware is supported? Like a Raspberry Pi?

Yeah, let me go back to the architecture page. On this page you can see there's an edge node, and these are the leaf devices, the IoT devices. For the node, the requirement is roughly this: you need at least 100 megabytes for the operating system, 100 megabytes for KubeEdge, basically EdgeCore, plus the container runtime; if you use containerd or CRI-O, that takes about 30 megabytes. And that's about it. The CPU requirement is quite low, so you are able to run it on a Raspberry Pi.

Thanks for the great talk. Just a question about the satellite use case: how do you manage software upgrades? Let's imagine that the satellite comes into connection with the base station, there is an upgrade, and it goes bad, and it takes all of the six minutes the satellite is visible. Then basically, until the next pass, you're going to have broken satellite software. How is this managed?

Yeah, that's a very good question about application upgrades. It actually takes time, because the bandwidth between the satellite and the ground station is limited. They have two paths: one is for the control signals, like checking whether the satellite is back online or not.
The other channel is for data exchange, especially for sending the pictures back, and they can also send some data up to the satellite. Currently the application upgrade, basically the container image download, goes over that path, and it takes time to get the image fully downloaded. So the practice is that people try to avoid big upgrades of the container image; they can do small upgrades, like tuning parameters and some of the configs. Also, we know that container images are layered, so if your diff is not that big, it doesn't take much time. Okay, thank you.
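The layered-image point can be made concrete with a small sketch (illustrative only, not a real registry client): only the layers the node doesn't already have need to cross the limited link.

```python
# Sketch: a container image is an ordered list of layer digests.
# An upgrade only has to transfer layers the node doesn't already hold,
# so small config/parameter changes stay cheap over a narrow link.
def layers_to_download(new_image_layers, cached_layers):
    cached = set(cached_layers)
    return [layer for layer in new_image_layers if layer not in cached]

v1 = ["base-os", "python-runtime", "model-weights", "app-config-v1"]
v2 = ["base-os", "python-runtime", "model-weights", "app-config-v2"]

# The node already caches every v1 layer; only the changed top layer
# has to be sent during the six-to-ten-minute contact window.
print(layers_to_download(v2, v1))  # → ['app-config-v2']
```

This is why keeping big, stable content (base OS, runtime, model weights) in lower layers and volatile configuration in the top layer keeps satellite-side upgrades small.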