Hello, I'm Kevin Wang, currently working on the KubeEdge project. Today I will introduce KubeEdge, the Kubernetes-native edge computing framework. The talk has four parts: first a brief overview of KubeEdge, then community updates, after that the latest use cases, and finally the future plan.

The KubeEdge project was started in 2018, targeting extending Kubernetes and cloud-native technologies to edge computing cases. The project was donated to CNCF in 2019, and last September we moved to the incubation level. The community now has around 4,000 GitHub stars and 1,000 GitHub forks. Since joining CNCF, we have received contributions from over 600 contributors from all over the world, including around 200 code committers from over 60 organizations. Since we last met, the community has four SIGs and working groups: the AI SIG, the Device/IoT SIG, the MEC SIG, and the wireless working group.

From the list of community contributors and adopters, you can see very good diversity across the industry. We have contributors and adopters from hardware and IoT companies, as well as telecom operators and IT service providers working together in the community. Many cloud service providers offer KubeEdge services in their products, and a number of academic organizations are doing research around KubeEdge for edge computing platforms and applications.

Looking at the overall architecture, KubeEdge targets managing both the nodes in the cloud and the nodes on the edge within the same Kubernetes API cluster. In the cloud we have CloudCore, the component that deals with shadow management for the applications and resources on the edge.
On the edge we have the all-in-one component, EdgeCore, which provides node-level autonomy and is optimized for low-resource hardware. To better integrate with IoT devices and the different protocols used in the industry, KubeEdge provides an extensibility framework, the device mapper, to simplify integration with protocols including MQTT, Modbus, Bluetooth, OPC-UA, and many others. Users can use the KubeEdge CRDs to define their own device protocols and device types, and can easily integrate with a new protocol by developing their own device mapper.

For the data plane, to simplify service communication between applications located in different networks on the edge, KubeEdge provides the EdgeMesh framework. It serves as an underlying, network-agnostic layer that lets applications discover each other and communicate with the same experience as if they were running in a Kubernetes cluster in the cloud.

As for key features, one thing we added in a recent release is that KubeEdge now supports native Kubernetes API access on the edge. With this autonomous API endpoint on the edge, applications, operators, and other plugins that rely on the Kubernetes API, or even on the CRD mechanisms, can now run very well on the edge. Even when a node is disconnected, the API is still accessible from the node, so your application will not be disrupted.

Besides that, KubeEdge provides a seamless cloud-edge coordination mechanism through a bi-directional communication tunnel, which makes it possible to manage nodes even when they are located in a private subnet, or when the network between cloud and edge is very limited, for example with very high latency or very high packet loss. And to implement node autonomy, KubeEdge persists a bit of data on every node.
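To make the device mapper idea concrete, here is a minimal Python sketch of how a mapper could translate raw protocol reads into device-twin property updates. This is an illustration only, not the real KubeEdge mapper SDK; the class names (`FakeModbusClient`, `Mapper`) and register layout are hypothetical.

```python
# Hypothetical sketch of a device mapper: it polls a protocol-specific
# client and converts raw register values into device-twin properties.
# None of these names come from the real KubeEdge mapper SDK.

class FakeModbusClient:
    """Stand-in for a real Modbus client; returns raw register values."""
    def read_register(self, address: int) -> int:
        # A real client would talk to the device over serial or TCP.
        return {0: 215, 1: 1}[address]  # e.g. temperature*10, power state

class Mapper:
    """Translates protocol reads into twin properties the cloud understands."""
    def __init__(self, client):
        self.client = client

    def poll(self) -> dict:
        raw_temp = self.client.read_register(0)
        raw_power = self.client.read_register(1)
        # Report as a twin-style dict: property name -> reported value.
        return {
            "temperature": raw_temp / 10.0,   # device stores tenths of a degree
            "powerState": "on" if raw_power else "off",
        }

twin_update = Mapper(FakeModbusClient()).poll()
print(twin_update)  # {'temperature': 21.5, 'powerState': 'on'}
```

The point is the separation of concerns: the protocol client changes per device family, while the twin-shaped output stays uniform, which is what lets one cloud-side API cover many protocols.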
That means even if a node is disconnected from the cloud for a very long time, it can still recover and get ready very quickly, since no request to, or response from, the master in the cloud is needed. KubeEdge also optimized the all-in-one component on the edge, which takes only around 70 megabytes. It supports OCI-conformant runtimes, including containerd and CRI-O for cases that need less runtime overhead, and also supports Kata Containers or Virtlet architectures for security-sensitive use cases. To simplify industrial IoT device integration, KubeEdge provides the Device API and the underlying device protocol extensibility framework, so users can define their own device protocols and integrate their own device types. Another thing is that KubeEdge treats ARMv7 and ARMv8, as well as x86, as native hardware architectures throughout the overall project development cycle, including code development, building, testing, shipping, and releasing.

So how does it work? For example, when deploying a container to an edge node, KubeEdge hooks into the vanilla pod spin-up process. After the scheduler makes its decision, CloudCore first gets the notification and forwards the pod information to the corresponding node on the edge. Then, after EdgeCore receives the update, the MetaManager persists the pod information on the node and forwards it to edged. The rest of the steps are the same as for a kubelet running in the cloud.

So what is inside CloudCore? Basically, the most important part is CloudHub. It is a symmetric component to its counterpart on the edge, EdgeHub.
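The pod delivery path above, and why local persistence gives node autonomy, can be sketched in a few lines of Python. This is a conceptual simulation, not KubeEdge source; the `MetaManager` and `Edged` classes here are simplified stand-ins, using an in-memory SQLite table for the node-local store.

```python
# Illustrative sketch (not real KubeEdge code) of the pod delivery path:
# CloudCore forwards the scheduled pod to the edge node, MetaManager
# persists it locally, then hands it to edged to start the containers.
import json
import sqlite3

class MetaManager:
    """Persists metadata locally so the node survives cloud disconnection."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT)")

    def persist(self, key: str, obj: dict):
        self.db.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)",
                        (key, json.dumps(obj)))

    def get(self, key: str) -> dict:
        row = self.db.execute("SELECT value FROM meta WHERE key = ?",
                              (key,)).fetchone()
        return json.loads(row[0])

class Edged:
    """Stand-in for edged: 'starts' whatever pods it is handed."""
    def __init__(self):
        self.running = []
    def sync_pod(self, pod):
        self.running.append(pod["name"])

meta, edged = MetaManager(), Edged()

# CloudCore forwards the pod after the scheduler binds it to this node.
pod = {"name": "sensor-agent", "node": "edge-node-1", "image": "agent:v1"}
meta.persist("pod/" + pod["name"], pod)   # persist first: node autonomy
edged.sync_pod(meta.get("pod/" + pod["name"]))

# Even "after a disconnect", the pod spec is still available locally,
# so a restarting node can recover without asking the cloud.
print(edged.running, meta.get("pod/sensor-agent")["image"])
```

Because the spec is written to local storage before edged consumes it, a node reboot while offline can replay the same data instead of waiting for the cloud, which is the fast-recovery behavior described above.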
Basically, these two components provide messaging over WebSocket or over QUIC to handle bi-directional communication across networks of varying quality, and to ensure data synchronization messages are successfully received and processed. The EdgeController provides shadow management for the core Kubernetes APIs on the edge, including nodes, pods, config maps, etc. The Device API and the DeviceController are for IoT and edge device modeling and integration; they also provide shadow management for the devices on the edge. The SyncController is responsible for inconsistency detection and for triggering reconciliation. The KubeEdge CSI driver is a plugin that hooks storage provisioning and related requests over to the edge, making it easy to integrate with existing CSI drivers and CSI backends running on the edge. The admission webhook is responsible for extended API validation and best-practice enforcement, including automatic pod autonomy configuration.

So what is inside EdgeCore? EdgeCore is built from EdgeHub, the MetaManager, edged, DeviceTwin, and EventBus. The MetaManager deals with node-level metadata persistence, and it now also serves the autonomous Kubernetes API endpoint on the edge, to support access requests from the plugins and operators running on the edge. Edged is an optimized, lightweight kubelet, and DeviceTwin deals with device status synchronization between the cloud and the edge, and also gives applications on the edge nodes direct access to the devices.

So what is new in the community? We now have a special interest group focused on making AI workloads run better on the edge, and on researching better patterns for workloads that run both on the edge and in the cloud, for example joint inference, incremental learning, or federated learning.
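The SyncController's job of inconsistency detection can be sketched as a simple diff between what the cloud believes the edge has and what the edge last reported. This is a conceptual sketch under my own simplified model (names and resource versions are invented), not the actual controller logic.

```python
# Minimal sketch of what a sync controller does conceptually: compare the
# resource versions the cloud expects the edge to hold against what the
# edge actually reported, and re-send anything missing or stale.
# Not real KubeEdge code; object names and versions are illustrative.

def find_out_of_sync(desired: dict, reported: dict) -> list:
    """Return object names whose edge copy is missing or stale."""
    stale = []
    for name, version in desired.items():
        if reported.get(name) != version:
            stale.append(name)
    return sorted(stale)

desired = {"pod/a": "v3", "pod/b": "v1", "configmap/c": "v2"}
reported = {"pod/a": "v3", "pod/b": "v0"}  # pod/b stale, configmap/c missing

to_resend = find_out_of_sync(desired, reported)
print(to_resend)  # ['configmap/c', 'pod/b']
```

After a long disconnection, a loop like this is what lets the system converge again: only the objects that drifted get re-delivered, instead of replaying every message.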
In this SIG, we will first of all verify integration with the mainstream AI frameworks to make sure they run properly on different types of edge devices, edge nodes, and edge servers. We will keep researching synergy mechanisms for AI workloads, and we will define benchmarks for the relevant research. The Sedna project is one toolkit focused on this set of work. In Sedna, we are trying to provide an edge-cloud synergy AI framework by providing dataset and model management across cloud and edge, together with edge-cloud synergy training and inference frameworks supporting joint inference, incremental training, and federated learning.

In the architecture, you can see that we have a centralized GlobalManager in the cloud to coordinate all the components on the different edge nodes, and on each edge node we have a LocalController to support the AI workload synergy patterns, including joint inference, incremental learning, and federated learning. Application developers are still able to use TensorFlow, PyTorch, or other AI frameworks; they just need to import a helper library to expose some measurement data to Sedna. The framework will then automatically hook, for example, the inference function, so that another try can be made when the local inference on the edge fails.

The Device/IoT SIG is focused on providing a more extensible framework to easily integrate with different devices and protocols. Its areas of focus include API abstraction and extensibility, the device communication framework, the data management workflow mechanism, and simplifying mapper development.
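The joint-inference pattern described above can be sketched as a confidence-based escalation: run the small edge model first, and send only the "hard" low-confidence samples to the bigger cloud model. The function names and scores below are illustrative, not Sedna's API.

```python
# Hedged sketch of the joint-inference pattern: the edge model answers
# cheap cases locally, and low-confidence (hard) examples are retried in
# the cloud. In Sedna this hooking happens automatically; here it is
# spelled out by hand with made-up models and confidence values.

def edge_model(sample: str):
    # Small model: cheap, but unsure about unusual inputs.
    scores = {"cat": 0.95, "blurry-object": 0.40}
    return sample, scores.get(sample, 0.5)

def cloud_model(sample: str):
    # Big model in the cloud: slower, but confident.
    return sample, 0.99

def joint_inference(sample: str, threshold: float = 0.8):
    label, confidence = edge_model(sample)
    if confidence >= threshold:
        return label, "edge"
    # Hard example: escalate to the cloud model.
    label, _ = cloud_model(sample)
    return label, "cloud"

print(joint_inference("cat"))            # handled on the edge
print(joint_inference("blurry-object"))  # escalated to the cloud
```

The threshold is the knob that trades bandwidth and latency against accuracy: raise it and more traffic goes to the cloud, lower it and more answers stay local.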
For the MEC SIG, we are focusing on research to design and implement a reference architecture in which KubeEdge is used to empower MEC platforms. The work in this SIG includes MEC services, for example service discovery and communication between different networks; the underlying network functionality, with better integration with, and better exposure of, the container network to the higher-level layers; and verification and further research on the MEC infrastructure layer, including various hardware accelerations and multi-MEC management across Kubernetes clusters.

The wireless working group is targeting optimizing KubeEdge to run better in different wireless environments, including low-quality wireless networks, cases where the edge nodes frequently change location, and cases where the edge nodes may randomly go offline very often. We will also think about and research how to reduce the quality requirements KubeEdge places on the underlying network.

Now for the new use cases. The first one is managing the monitoring devices and sensors on the world's longest cross-sea bridge. This is a bridge-tunnel system over the sea, around 30 kilometers long. The network is mainly wireless, with very limited bandwidth and very high latency, and there are a lot of sensors and monitors provided by different vendors using different protocols. KubeEdge is very valuable in this case because it can easily integrate with different device types and device protocols, and it can manage the containers on the edge nodes even when the network between cloud and edge is very limited. Another use case is by China Mobile. They are using the cloud-edge synergy platform to manage over 100 subsidiary corporations from the central cloud.
The network environment is that all the subsidiary corporations need NAT to reach the central cloud. That means the components in the central cloud can have floating IPs, while the components on the edge cannot. KubeEdge is very useful in these cases: it provides the bi-directional tunnel that simplifies remote application monitoring, management, and debugging from the central cloud, and applications located in different networks can easily talk to each other without worrying too much about the underlying network environment.

All right, for the roadmap this year: we basically want to provide data plane communication across different networking environments for applications located in different subnets, whether in the cloud or on the edge. We will also verify integration with existing third-party CNI plugins, to help users integrate more easily with their existing software stack. And we will improve device management extensibility and simplify the customization work when people adopt KubeEdge, especially for IoT, manufacturing, and other industry cases. For the platform itself, we are working to support a multi-active HA mode for CloudCore, to support larger single clusters, and we are also working on management of clusters on the edge.

From the community perspective, we are targeting improving the contributor experience, and we will hold more contributor events to welcome new developers to become contributors and to help existing contributors grow in the community. We are also expecting more cross-community collaboration with other open source projects and organizations, including LF Edge, the Eclipse Foundation, and others.

And here are the useful resources of the KubeEdge community. We basically hold a weekly meeting, alternating between a Pacific-friendly time and a Europe-friendly time.
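Why the edge-initiated tunnel works behind NAT can be shown with a small sketch: the edge dials out once, the cloud remembers that open connection, and all later cloud-to-edge traffic is routed over it, so the edge never needs a reachable (floating) IP. This is a conceptual model, not KubeEdge source; `CloudHub`/`EdgeHub` here are toy in-process stand-ins for the real components.

```python
# Conceptual sketch of the bi-directional tunnel across NAT: since the
# cloud cannot dial in to nodes without public IPs, the edge connects
# outward first, and the cloud reuses that connection for commands.
# Toy classes only; not the real CloudHub/EdgeHub implementation.

class CloudHub:
    def __init__(self):
        self.tunnels = {}  # node name -> callback standing in for the open connection

    def register(self, node: str, deliver):
        # Called when an edge node dials in (over WebSocket/QUIC in reality).
        self.tunnels[node] = deliver

    def send(self, node: str, message: dict) -> bool:
        if node not in self.tunnels:
            return False  # node never connected; cloud cannot reach through NAT
        self.tunnels[node](message)
        return True

class EdgeHub:
    def __init__(self, node: str, hub: CloudHub):
        self.inbox = []
        hub.register(node, self.inbox.append)  # outbound dial from behind NAT

hub = CloudHub()
edge = EdgeHub("edge-node-1", hub)

# Cloud->edge management traffic rides the edge-initiated connection.
ok = hub.send("edge-node-1", {"op": "exec", "cmd": "restart-agent"})
print(ok, edge.inbox)  # True [{'op': 'exec', 'cmd': 'restart-agent'}]
```

This is exactly what makes remote debugging from the central cloud possible in the China Mobile scenario: every subsidiary's node registers outward, and the cloud only ever writes into connections it already holds.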
So if you have any questions or any ideas to discuss, please feel free to find us on GitHub, in Slack, or at the community meeting. Okay, that's all for the talk today. Hope you enjoyed it. See you.