Right. Hello, everyone. Welcome to this introduction to KubeEdge, the Kubernetes native edge computing framework. I'm Kevin Wang, one of the co-founders and maintainers of the KubeEdge project. Hi, everyone. This is Yin Ding. I'm also a co-founder and maintainer of KubeEdge. Today I will join Kevin to give you an introduction to KubeEdge. First, we are going to talk about why we need edge cloud computing. The first reason is low latency: the edge is normally at a remote site, and the round trip back to the cloud has high latency. That's why we want a solution where the application runs on the edge side to reduce that latency. Second, there is bandwidth: a huge amount of raw data is generated on the edge side, and if we migrate all of it back to the cloud, the cost will be really high and it will drain our bandwidth. To save bandwidth, we want the processing to run on the edge. We are also really concerned about privacy and security. Many edge workloads use data generated on the edge side, including personal data and business data. That's why we want to preprocess and mask some of it, especially sensitive data, on the edge, to protect personal privacy and business security. Last but not least, there is local autonomy. The edge naturally runs in remote locations, and the internet connection can be interrupted at any time, so we need the edge side to keep working offline and recover by itself when the connection is restored. We will talk about the details of the architecture in the following slides. As for the project itself, we launched it in 2018, it became a CNCF sandbox project in March 2019, and it graduated from the sandbox into CNCF incubation in September 2020. Since the launch we have gained more than 4,500 stars and 1,300 forks on GitHub, with more than 800 contributors and more than 220 code committers. 
All these people come from more than 60 organizations. Within our community we have already set up several SIGs: SIG AI, SIG Device/IoT, and SIG MEC, which focuses on telecommunications, and we also have a wireless working group. Since last time we have set up a new SIG called SIG Robotics; its charter is under review, and it focuses on robotics. Here is a partial list of our contributors and adopters. You can easily see that these organizations include hardware vendors, software vendors, cloud providers, telecoms, and academic institutions, so we have a greatly diversified community that is growing organically and healthily. Now let's talk about why we picked Kubernetes as the platform to build on, and the challenges we met. First, the benefits. Containerized applications are easy to migrate: build once, run anywhere. That is already recognized by the industry, and Kubernetes is the de facto standard platform for container orchestration. If we build on Kubernetes, developers have the same experience across cloud and edge; they develop their applications in exactly the same way. Kubernetes also has an extensible architecture that lets us easily extend the APIs. Of course, building on Kubernetes comes with challenges. First, the edge usually has limited resources, so the resources on an edge node are constrained; they may not be enough for the standard Kubernetes components, for example the kubelet, kube-proxy, and others. Second, as we already mentioned, the network is unstable: the connection between the cloud and the edge goes over the internet and may not be reliable, which is very different from a network inside a data center. The bandwidth is limited and the latency is high, and that is a hard problem to conquer. We also need autonomy at the edge: because of this unstable network, the connection can be interrupted in the middle, disconnecting the edge from the cloud. 
That's why we need the edge to run autonomously, and we shouldn't let the control plane evict or migrate applications when a disconnection is detected; that is different from the behavior in the cloud. Also, because a lot of IoT devices attach to the edge, we need to manage a heterogeneous and large fleet of devices. Here are the key features of KubeEdge. First, we support the Kubernetes native API: developers use the same API to deploy applications to the cloud and to the edge. We provide seamless cloud-edge coordination and edge autonomy, and KubeEdge supports low-resource environments, that is, low-resource boxes. It provides simplified device communication through the device controller, and it gives a cloud-side view of global metrics, collected on the edge and transferred back to the cloud, so users can monitor both the edge nodes and the devices. Now I hand over to Kevin to talk about the KubeEdge architecture. To resolve the challenges we observed in the previous slides, KubeEdge chooses a remote-node architecture: basically, we put the node components on the edge, because that is much easier to optimize for resource-constrained devices. The component running on the edge node, edgecore, is an all-in-one component; it takes as little as 70 megabytes of memory to run. For the container runtime, the OCI runtimes have already been verified to integrate with KubeEdge, including containerd, Docker, CRI-O, and Kata Containers. Besides the CNI plugins, we also provide our own data plane implementation, EdgeMesh, which is a simplified framework for building connections across private networks, so applications in different edge sites can talk to each other with a standard Kubernetes Service or ClusterIP. 
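Because the API stays Kubernetes native, pinning a workload to the edge is just an ordinary Deployment with a nodeSelector. A minimal sketch in Python, assuming edge nodes carry the `node-role.kubernetes.io/edge` label (common for keadm-joined nodes, but check your own cluster's labels):

```python
import json

def edge_deployment(name: str, image: str, replicas: int = 1) -> dict:
    """Build a plain Kubernetes Deployment manifest pinned to edge nodes.

    Nothing here is KubeEdge-specific except the nodeSelector label,
    which is assumed; the rest is the standard apps/v1 Deployment API.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Assumed edge-node label; verify with `kubectl get nodes --show-labels`.
                    "nodeSelector": {"node-role.kubernetes.io/edge": ""},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = edge_deployment("sensor-analytics", "example.com/sensor-analytics:v1")
print(json.dumps(manifest, indent=2))  # pipe to `kubectl apply -f -`
```

The point is that there is no separate edge API: the same manifests, tooling, and review process used for cloud workloads apply unchanged.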
In the cloud, we just use CloudCore to hook all status updates of the nodes and pods running on the edge. The upstream Kubernetes components have no awareness of the additional components; they think they are managing normal nodes. For the nodes and pods running on the edge, CloudCore forwards the requests, packs them into messages, and relies on a bidirectional communication channel to sync with the components on the edge. The default protocol of the channel is WebSocket. We did a lot of verification and testing: it is very stable and works on very weak networks, including networks with high packet loss rates and high latency. QUIC has also been introduced as an alternative underlying protocol to cover corner cases, and we expect some performance improvements from it in the longer term. To simplify device management on the edge, KubeEdge provides a framework that decouples application development from the device protocols. Device mappers are protocol converters: they translate the real device protocols into the standard message format defined by KubeEdge. Applications running on the edge only need to know KubeEdge's data format and topic format; they don't need to worry about the underlying details of the protocol each device uses. Currently the built-in protocols are Modbus, Bluetooth, OPC-UA, and ONVIF, which is a new protocol introduced in the latest release, and users can rely on the extensible mechanism to integrate their own protocols. Now a little more detail about the internals. Let me first introduce how KubeEdge runs applications on the edge nodes. As we know, in vanilla Kubernetes, the scheduler decides which node a pod should be bound to. 
The kubelet at the corresponding node then spins up the containers of that pod. In KubeEdge, by contrast, CloudCore gets all the watch events for the pods and nodes that should run on the edge and sends them through the cloud-edge channel to the right edge node. The MetaManager on the edge node persists the pod information into a local lightweight database, which is SQLite, and then forwards the information to edged, which is a lightweight kubelet; the rest is the same process as a standard kubelet. So with this mechanism, from the pod lifecycle point of view everything is quite standard compared with the original behavior; each component just works as normal and doesn't need to care about the extra steps added by KubeEdge. Inside CloudCore, one very important module is CloudHub. It deals with message conversion and manages the connections from the edge nodes, which makes it possible for the other components to talk to each edge node even though those nodes sit in private networks. The EdgeController focuses on shadow management for the Kubernetes resources, including the nodes, pods, ConfigMaps, secrets, and so on. The DeviceController is responsible for the lifecycle implementation of the device API provided by KubeEdge and also handles shadow management for all the devices on the edge. The SyncController is responsible for detecting any inconsistency between cloud and edge and triggering additional synchronization to make sure the edge is up to date. It's worth mentioning that KubeEdge uses incremental synchronization between cloud and edge, which avoids the network consumption peaks caused by relists in the list-watch mechanism. 
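The MetaManager persistence step above is what makes edge autonomy work: after a reboot while disconnected, edged reads pod metadata from the local database instead of asking the cloud. A minimal sketch of the idea in Python, with an invented key and table layout (the real MetaManager schema is different):

```python
import json
import sqlite3

# Toy MetaManager: persist pod metadata locally so the edge node can
# restart its workloads while disconnected from the cloud.
db = sqlite3.connect(":memory:")  # KubeEdge uses an on-disk SQLite file
db.execute("CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT)")

def save_pod(namespace: str, name: str, spec: dict) -> None:
    """Called for every pod message received from the cloud."""
    db.execute(
        "INSERT OR REPLACE INTO meta VALUES (?, ?)",
        (f"{namespace}/pod/{name}", json.dumps(spec)),
    )
    db.commit()

def load_pods(namespace: str) -> list:
    """What a lightweight kubelet reads back after an offline reboot."""
    rows = db.execute(
        "SELECT value FROM meta WHERE key LIKE ?", (f"{namespace}/pod/%",)
    )
    return [json.loads(value) for (value,) in rows]

save_pod("default", "sensor-agent", {"image": "example.com/agent:v1"})
pods = load_pods("default")  # works with or without cloud connectivity
```

Because reads are served locally, workloads on the edge keep functioning during a network partition, and the synchronization described above reconciles any drift once the connection comes back.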
This design makes the whole system stable and lets it work well in very limited bandwidth, high latency, or high packet loss environments. The CSI driver plugs in to hook the storage provisioning requests required by the CSI plugins on the edge, and the admission webhook enforces best practices, for example simplifying the configuration of the pod autonomy policy. The modules inside edgecore are as follows. EdgeHub is the symmetric counterpart of CloudHub. The MetaManager is one of the most important parts inside edgecore; it focuses on node-level consistency and also provides a lightweight metadata server for all the applications, especially the operators, running on the edge. That means all the operators and applications on the edge actually get their data from the local database, with much less reliance on the network between cloud and edge. Edged is a lightweight kubelet; we removed some legacy and redundant functionality of the vanilla kubelet. The DeviceTwin deals with device status synchronization between cloud and edge. Overall, edgecore takes as little as 70 megabytes of memory, which is quite lightweight; you are able to run the whole setup on an edge box with as little as 256 megabytes of memory. Okay, now for the new features since we last met. We made very good progress on enabling larger-scale clusters: you are now able to run a multi-active deployment of CloudCore, which basically means multiple CloudCore instances working together at the same time to manage the nodes on the edge. Also, for the device mapper framework, we upgraded the overall architecture to simplify new mapper development. 
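To make the mapper's job concrete before looking at the new tooling: a mapper turns raw protocol reads into twin updates in KubeEdge's message format. A minimal sketch with an invented Modbus-style register layout and an assumed topic shape (the real twin schema and MQTT topics carry more structure):

```python
import json
import time

def modbus_to_twin(device_id: str, registers: list) -> tuple:
    """Convert a raw two-register readout into a twin-update message.

    Register layout (invented for this sketch): registers[0] is the
    temperature in tenths of a degree, registers[1] is a fault flag.
    Topic and payload shapes are illustrative only; a real mapper
    follows the schema published by the KubeEdge device framework.
    """
    temperature_c = registers[0] / 10.0
    status = "ok" if registers[1] == 0 else "fault"
    topic = f"$hw/events/device/{device_id}/twin/update"  # assumed topic shape
    payload = {
        "twin": {
            "temperature": {"actual": {"value": str(temperature_c)}},
            "status": {"actual": {"value": status}},
        },
        "timestamp": int(time.time() * 1000),
    }
    return topic, json.dumps(payload)

topic, payload = modbus_to_twin("bridge-sensor-01", [217, 0])
```

An edge application subscribing to the twin topics sees only this uniform JSON, never Modbus registers, which is exactly the decoupling described above.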
You are now able to use one command to generate a new mapper code skeleton and then add your own code. Also, for the applications running on the edge, we now provide a custom message routing mechanism between cloud and edge. This is very useful when you have applications in the cloud or on the edge that need to access components in the other location, especially when you want to call external services in the cloud from the edge. In this release we support HTTP requests, and in the longer term we are going to support more protocols for applications. Also, for the data plane of KubeEdge, the EdgeMesh project: we decoupled the implementation from the main repo into a separate repo, to make it more focused on providing the data plane network functionality. With the new EdgeMesh, applications in different subnets can talk with each other using the Kubernetes Service mechanism, relying on the ClusterIP; the underlying network automatically uses LibP2P to build connections across different private networks, making sure every pod can find the others. Now for the SIGs and subprojects. SIG AI has recently been focusing on development of the Sedna project; we published a new release since we last met. SIG MEC and the wireless working group are currently focusing on the reference architecture relevant to the telcos, especially weak-network situations where nodes keep changing location. The Device/IoT SIG, as I said, keeps working on the device management framework and on simplifying device mapper development. For the new SIG Robotics, there are ongoing discussions about the SIG charter to clarify what to do and where we are. The subprojects Sedna and EdgeMesh we have already covered. Okay, now some basic information about the Sedna project. 
Sedna is a kind of AI toolkit; its goal is to provide edge-cloud collaboration and edge-cloud synergy mechanisms for AI workloads on top of KubeEdge. The resources on the edge today are still very limited, while we have rich services and rich computing power in the cloud. So when we are not satisfied with the inference quality on the edge, with Sedna we can easily fall back to the cloud to achieve joint inference for day-to-day AI workloads. It also helps simplify incremental learning for day-to-day applications: basically it feeds back the hard examples, triggers training in the cloud, and then updates the models on the edge. In the recent two releases we introduced the lifelong learning feature, to cope with scarce edge data and small samples and to support continuous model training and upgrade iteration, and we enhanced federated learning to support more scenarios. SIG Robotics is still under discussion, but basically we want to focus on the API definitions and the reference architecture, as well as implementations relevant to the robotics ecosystem, to achieve an edge-cloud collaborative, or robot-cloud collaborative, architecture. At the very beginning we may focus on containerizing some of the software, including ROS and Gazebo. You are more than welcome to join the discussion and leave your comments on the SIG charter. Okay, for the use cases, I just want to give an update on one adoption: it is used on the world's longest sea-crossing bridge. It is actually a bridge monitoring system with very low network quality; there are no cables, so all communication has to go over the 3G or 4G network. 
There are a lot of sensors on the bridge to monitor the performance of the bridge itself and the environment, and also to track the traffic, so the system can generate alerts when there is an emergency or a traffic incident, and keep verifying that the bridge is in good condition. With the device framework, the sensor drivers are implemented as device mappers to decouple the underlying protocols from the applications. The edge applications can then just focus on analyzing the collected data, or run AI models to produce inference results and send them to the cloud, where the operators get global monitoring, global metrics, and higher-level analysis based on that data. For the longer term, the data plane is definitely very important, so we will keep enhancing the EdgeMesh project to simplify cross-subnet communication. We will also enhance storage integration to achieve edge-cloud collaborative storage, harden edge protection, and provide decentralized security functionality for applications across the edge. Device management is definitely very important too: we are going to support more protocols as well as simplify the development. For the community, we will host more contributor events and keep improving the contributor experience, and for collaboration with other communities, especially LF Edge, we are planning to work more closely with EdgeX Foundry, Akraino, and the Eclipse Foundation. Alright, that is my last word in the introduction to the project. Nowadays we are actively developing on GitHub, and we rely heavily on Slack to discuss technical topics. You are more than welcome to join our community, and please give the project a try if you have not yet. Alright, that's all for the introduction. 
Thank you for listening. Now let's move on to the Q&A.