Hello, welcome to the KubeEdge introduction session. I'm Kevin Wang, a maintainer of the KubeEdge project. Today I will go through a brief of KubeEdge, including the background and motivation, the major design considerations and architecture, and community updates, including new SIGs and new features. Then I will quickly go through a use case and the future plan, and hopefully we will have time for Q&A. We know that the network is becoming more and more powerful today, and more and more people are starting to build businesses on the edge. We know that moving from the central cloud to the edge is very helpful for improving end-to-end business latency, reducing the bandwidth consumption between cloud and edge, and improving security and privacy. However, the workloads on the cloud and on the edge are very different. The central cloud is very resource-friendly. Taking AI as an example, more than half of the workloads in the central cloud are training, with some inferencing and transcoding, while the edge cloud is a bit different: its computing power is less than the central cloud's, and it is much closer to the end user, so it typically consists more of transcoding workloads plus some AI inferencing. The IoT edge is very, very close to the data source and the end user, so it almost exclusively runs AI inferencing workloads. Basically, going from the cloud to the edge, the closer to the edge, the lower the latency, while the underlying hardware is very different: in the central cloud we have standard physical servers, but on the edge the hardware types vary a lot, such as ARM servers and IoT gateways.
So when trying to build an edge computing platform with Kubernetes, we know that Kubernetes and cloud-native technology are very successful in the ecosystem, especially in defining the containerized application model, which makes it really easy to build once and run everywhere, and the layered container image mechanism makes it very easy to optimize the overall image size. Kubernetes also provides a very good application abstraction and has already become the de facto standard in the central cloud. If we build with the same technology, it is very easy to achieve the same experience across cloud and edge. And Kubernetes itself has a very extensible architecture, including the API machinery and the CRD mechanism, which makes it very easy to add customized components, controllers, and operators. But there are also some challenges. One difference between the edge and the central cloud is that resources are quite limited, especially in IoT and industrial cases, where edge servers can be down to, for example, 512 megabytes of memory, or maybe even less, while almost the smallest vanilla Kubernetes deployment needs one gigabyte; even just the kubelet takes a lot of memory. The network between cloud and edge is also quite different: some edge sites are located behind a firewall or inside a private network without a public IP, the bandwidth is very limited, and the latency is very different compared to the network inside a data center. And if we want applications on the edge to really run with a good experience, we need to resolve autonomy on the edge, because the edge may be disconnected and offline often, or for long periods of time. In that case, we should not evict or migrate the applications, because we have only lost the connection, not the physical servers and the instances running there.
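The eviction concern just described can be expressed with standard Kubernetes tolerations. Below is a minimal sketch (the 24-hour value is an illustrative choice, not a KubeEdge default): a pod on an edge node tolerates the `unreachable` taint for a long period, so a temporary disconnection does not trigger eviction.

```yaml
# Sketch: keep a pod alive on a temporarily disconnected edge node.
apiVersion: v1
kind: Pod
metadata:
  name: edge-app
spec:
  containers:
    - name: app
      image: nginx:alpine
  tolerations:
    # The node controller taints unreachable nodes with this key; a large
    # tolerationSeconds delays eviction while the edge is offline.
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 86400
```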
Another big challenge is that in IoT and industrial cases, there are a lot of edge devices using different device protocols, so how to simplify that for application developers is also a very big challenge. KubeEdge tries to build on top of Kubernetes and extend its powerful functionality from the central cloud to the edge, to achieve the same experience and build a standard application management mechanism across cloud and edge. In particular, KubeEdge provides seamless cloud-edge coordination by implementing a bidirectional communication channel, making communication possible even through a private subnet. I also want to highlight that KubeEdge starts with putting the Kubernetes node onto the edge, because that is much easier for low-resource use cases. KubeEdge also introduces node-level metadata persistency, which reduces the disaster recovery time and makes the node get ready even faster. We also did some optimization to reduce the memory footprint of the edge components, so the agent itself takes around 70 megabytes, and we support OCI-conformant runtimes; for example, integrating with CRI-O, the total memory consumption can be less than 100 megabytes. Another thing is that for IoT and the industrial internet, KubeEdge provides an extensible framework to integrate specialized device protocols into the platform, so application developers can focus more on developing their own applications. This is a very high-level view of the KubeEdge architecture. On the top, you can find what is actually still a vanilla Kubernetes cluster; KubeEdge just adds a few edge nodes to the Kubernetes master. Because we reuse the mechanisms of how Kubernetes works, the master treats the edge nodes the same as the cloud nodes, while the CloudCore actually handles the shadow management of the real edge nodes on the edge.
For the bidirectional communication channel, we use WebSocket by default, and we have also developed QUIC support as an alternative for some different cases. On the edge, as I mentioned, we support OCI-conformant container runtimes, as well as CNI networking and CSI storage. For pod-to-pod communication, because the network environment might differ from the data center or the cloud, we introduced an EdgeMesh layer to resolve all the communication issues. For devices, KubeEdge today relies on MQTT, because MQTT is a bit more popular than the other industrial protocols. Basically, device developers can develop a protocol converter that integrates with a device using its actual device protocol while converting all the message content to MQTT. This makes it much, much easier for application developers to develop their own applications: they just need to deal with communication through MQTT instead of the actual device protocols. For the CloudCore here, I just want to highlight that it handles the shadow management of the nodes and devices on the edge. The lifecycle actions of the node objects and the application objects are reflected to the Kubernetes cluster, so the Kubernetes master just treats everything the same. Inside CloudCore we have the EdgeController, which provides the shadow management for the core APIs, including nodes, pods, ConfigMaps, et cetera. For IoT and industrial device management on the edge, we introduced a set of device APIs, including the device model and the device instance, and we added a DeviceController to reflect lifecycle updates; it is basically the shadow management for the devices on the edge. Another thing is that we added a SyncController to do reconciliation between the cloud and the edge in case any inconsistency is detected.
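To make the device model and device instance APIs mentioned above concrete, here is a hedged sketch in the style of the `v1alpha2` device CRDs; exact field names vary across KubeEdge releases, and the model/node names are placeholders.

```yaml
# Hedged sketch of the KubeEdge device shadow APIs (v1alpha2-style).
# A DeviceModel describes a device type; a Device instance binds it
# to a specific edge node.
apiVersion: devices.kubeedge.io/v1alpha2
kind: DeviceModel
metadata:
  name: temperature-sensor-model
spec:
  properties:
    - name: temperature
      description: Current temperature reading
      type:
        int:
          accessMode: ReadOnly
---
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: temperature-sensor-01
spec:
  deviceModelRef:
    name: temperature-sensor-model
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - edge-node-1   # placeholder edge node name
```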
The KubeEdge CSI driver is actually a plugin that hooks requests, such as storage provisioning requests, through to the edge. With the KubeEdge CSI driver, it is very easy to integrate third-party CSI drivers on the edge: basically, you can install any existing third-party CSI driver implementation with KubeEdge, and you don't need to worry about the communication between cloud and edge. You can just install the whole backend on the edge, and the KubeEdge CSI driver will deal with everything. The admission webhook currently validates the extended APIs such as the device API, and we are also developing a lot of best-practice enforcement for edge computing use cases; for example, if a node goes offline for a while, we may not want to evict the pods after the default grace period. So how does KubeEdge work? This flow may be familiar to you: it is the lifecycle of an application running in a vanilla Kubernetes cluster, and what KubeEdge provides is an equivalent replacement with equivalent behavior. In vanilla Kubernetes, after the scheduler finds the proper node for a pod, it updates the pod spec to fill in the pod's node name; the kubelet then gets the notification and spins up the pod's containers on that node. So in KubeEdge, how do we make it possible to spin up a container on the edge? After the scheduler assigns the pod to a node, the CloudCore gets the notification of the pod update, filters out all the pods that are scheduled to nodes on the edge, and sends the information to the relevant nodes one by one. The corresponding EdgeCore on each node then gets the message; the MetaManager reflects it into the local metadata persistency, basically persisting the pod onto the node, and then tells edged to spin up the pod. So what is inside the EdgeCore? The EdgeHub is the equivalent counterpart of the CloudHub.
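Because this scheduling flow is standard Kubernetes behavior, targeting edge nodes needs no special API. A minimal sketch, assuming the edge nodes carry the `node-role.kubernetes.io/edge` label that KubeEdge commonly applies (verify the label on your cluster):

```yaml
# Sketch: scheduling a workload onto KubeEdge edge nodes with an
# ordinary Kubernetes node selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-hello
  template:
    metadata:
      labels:
        app: edge-hello
    spec:
      # KubeEdge edge nodes are commonly labeled with this node role.
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      containers:
        - name: hello
          image: nginx:alpine
```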
It deals with the communication between cloud and edge, and makes it possible to talk across a firewall, even when the edge is located inside a private network. The MetaManager basically provides the node-level metadata persistency functionality. Edged is a lightweight kubelet implementation: we basically vendor the vanilla kubelet and skip some of the packages that are not really relevant for edge computing use cases. The DeviceTwin handles the IoT and industrial internet cases, syncing the device status between the cloud, the edge node, and the device. The EventBus is an MQTT interface that talks between the DeviceTwin and the device mappers. And EdgeMesh, as mentioned on the previous slides, deals with pod-to-pod communication, service load balancing, and node-level local DNS resolution. For community release development, we release every three months. Feature planning starts at the beginning of each release lifecycle, and we typically freeze the code when only two weeks are left before the final release date, to focus on fixing bugs and stabilizing the release. Another thing to highlight here is that a lot of hardware on the edge is based on the ARM architecture, so the KubeEdge community supports both the x86 and ARM architectures as native architectures throughout the whole release lifecycle: for coding, we just code in the IDE; for building, we have cross-architecture builds; for testing, we have ARM-specific tests; and in our CI, we have full ARM-based CI tests in addition to the x86 verifications. For the KubeEdge community, a very exciting piece of news to share is that we moved to CNCF incubation in September this year. Currently we have more than 3,000 stars and 800 forks on GitHub, more than 500 contributors have joined the community, and a lot of organizations are joining the community development.
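The EdgeCore modules just listed are switched on and configured through a single config file. The shape below is a hedged sketch following the `v1alpha1` EdgeCore config API; field names and defaults differ across releases, and the node name and server address are placeholders. Recent releases can generate a real file with `edgecore --minconfig` or `edgecore --defaultconfig`.

```yaml
# Hedged sketch of an EdgeCore config enabling the modules above.
apiVersion: edgecore.config.kubeedge.io/v1alpha1
kind: EdgeCore
modules:
  edged:
    hostnameOverride: edge-node-1      # placeholder node name
    runtimeType: remote                # e.g. CRI-O via the remote CRI
  edgeHub:
    websocket:
      enable: true
      server: 1.2.3.4:10000            # placeholder CloudCore address
  metaManager:
    enable: true                       # node-level metadata persistency
  deviceTwin:
    enable: true                       # device status sync
  eventBus:
    enable: true                       # MQTT bridge to device mappers
```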
Another thing I want to highlight here is that we are currently moving to a SIG-based governance model. We formed SIG IoT Device, which focuses on the IoT- and industrial-internet-relevant work to improve KubeEdge and provide a better user and developer experience for IoT and the industrial internet. Another one is SIG MEC, which basically leverages KubeEdge in MEC platform development. And another one under discussion is SIG AI, which tries to provide better fundamental functionality for AI workloads coordinating between cloud and edge, such as federated learning and incremental inferencing. Here is a short list of the organizations contributing in the community: we have IoT and hardware organizations including ARM and Samsung, a bunch of telcos, IoT service providers, and also some academics doing research around KubeEdge. For use cases, we currently have more than 20 adopters, and here are some of the production adoptions, including the China highway ETC system; RISCOM, who use KubeEdge in their factories to improve production line safety; and Shanghai IoT, who built a smart campus solution with KubeEdge. A brief word about the MEC special interest group: it focuses on a reference architecture leveraging MEC with KubeEdge, and if any enhancements are discovered, it will also focus on a joint effort to develop them in the community. A brief on its areas of focus: first, MEC-relevant services, basically service discovery and communication between cloud and edge or between different edge networks, and making the services able to work when the edge and cloud are disconnected; second, MEC-relevant networking, which basically tries to expose more telco capabilities to the container network or to the service level, making the
applications able to benefit more from the network information, such as locations, and also the security functionalities, et cetera; and third, the MEC infrastructure, which basically tries to support various hardware accelerations, multi-MEC and multi-Kubernetes-cluster management, and various kinds of workloads on the edge, for example the UPF. This does not mean directly developing the user plane function, which is really part of the MEC business itself, but we will try to improve KubeEdge to make it a better underlying platform for MEC workloads and business. If you would like to know more about it, please visit the link to go through the SIG charter, and we also have a dedicated Slack channel for this SIG. One recent piece of work the MEC SIG has been doing is a reference architecture for building an MEC platform on top of Kubernetes and KubeEdge. We are borrowing a lot of concepts already defined by the GSMA Operator Platform, for example the northbound interface, the user-network interface, the east-west bound interface, and the southbound interface, which basically define how the services and components communicate with each other. The blue boxes are components that will be provided by KubeEdge, and you will find that some of the boxes are new to the KubeEdge architecture. KubeEdge will basically focus on the following features: for example, cross-edge-cloud service discovery, and also nearby access, which basically means making it possible for an application to know its own network location and better serve the end-user terminals, so that a terminal can access a nearby application instance; and when any cross-application container communication happens, it will try to deal with network locality to save global network bandwidth. There is also the SRM: based on 5G network events and the edge service
discovery, it will implement dynamic resource scheduling, elastic scaling, failover, and similar capabilities. EdgeMesh will cover cross-edge and edge-cloud networking from L3 to L7, as well as microservice traffic governance. And there is the northbound interface, because that actually affects the developer experience and the application mechanism; this one is still a work in progress, and we are doing a lot of cross-community discussion with other organizations and other open-source projects. If you are interested in this, please join the MEC Slack channel to join the discussion. SIG IoT Device is responsible for simplifying, developing, and maintaining the extensible device protocol framework. In the KubeEdge community, we will provide some device protocol converter implementations, and also make it very easy for other developers to develop their own device protocol converters to integrate with KubeEdge. Some of the items here are: the common API abstractions, basically the device model and device instance APIs; and the DeviceController, DeviceTwin, and device mapper framework, where we need to keep improving extensibility to make things much easier for IoT cases. First, we want to integrate more and more device protocols. Second, we want to integrate more functionality like data pipelines and data workflows on the edge, so users can specify hardware or software filters to reduce or convert raw data into meaningful data before forwarding it to the business applications. Another thing is data management: we know that devices produce a lot of data, especially time-series data, and since it is very expensive to send all of it to the central cloud, we can persist it on the edge. Also, for mapper development, we will try to simplify and improve the mapper design and provide a reference implementation, SDK, or code skeleton. And lastly, we will
also work on a lot of cross-community cooperation, trying to integrate with other IoT projects and to verify interoperability and compatibility with other projects, devices, et cetera. This SIG also has a dedicated Slack channel. Some of the recent work done by SIG IoT Device is the improvement of the device API: we basically simplified the definition for adding specific customized industrial protocols, and added new fields, collectCycle and reportCycle, for the different property visitors, because in some cases the property collection period and the property report period might differ. We also introduced the data section to help users define what kind of data they want to persist on the edge and integrate with data processing workflows; the full proposal can be found at the link below. Another thing is that we recently simplified the mapper design reference, and in the KubeEdge 1.5 release we refactored the Modbus mapper implementation to reflect the new design reference. Some of the other updates concern setup and maintainability. In the new releases this year, we introduced ComponentConfig, the same concept as the Kubernetes component config: we basically added Kubernetes-style APIs to simplify the component configuration, and added two commands to generate configs with different values. For node setup, we basically automated the registration, TLS enforcement, and certificate rotation. For the installer, keadm, we added support for CentOS, Raspbian, and Debian. This year we also added HA support for CloudCore in an active-standby mode; one active CloudCore instance can support 1,000 edge nodes, and the community is currently designing and discussing how to scale CloudCore out further. We also improved the message delivery reliability between cloud and edge: we basically introduced an
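The collectCycle/reportCycle fields mentioned above attach to a device's property visitors. A hedged sketch follows, using a hypothetical Modbus device; field names follow the device API proposal and the exact schema depends on the release (the millisecond unit is an assumption).

```yaml
# Hedged sketch: different collection and report periods per property.
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: plc-01                  # hypothetical Modbus device
spec:
  deviceModelRef:
    name: plc-model             # hypothetical model name
  propertyVisitors:
    - propertyName: pressure
      collectCycle: 1000        # read from the device every 1s (assumed ms)
      reportCycle: 5000         # report to the cloud every 5s
      modbus:
        register: HoldingRegister
        offset: 2
        limit: 1
```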
application-layer acknowledgement mechanism to make sure no messages are lost when communicating between cloud and edge. On to runtime and observability updates: this year we added support for almost all the mainstream OCI-conformant runtimes, including CRI-O, Kata Containers, and containerd, joining Docker, which the community supported last year; both x86 and ARM architectures, including ARMv7 and ARMv8, are verified. For observability, we added support for using kubectl logs to fetch logs from pods on the edge, and support for using metrics-server to collect metrics from nodes both in the cloud and on the edge. In 1.5, we also added support for using kubectl exec to access a pod on the edge from the API server in the cloud, which is very useful for application developers: they can work on their applications on the edge without going to the physical location of the edge server. For the use cases, I will just share one, the largest. On the China highways, the previous tolling mechanism was manual: the efficiency was very low, the operational expenses were very high, and sometimes toll evasion happened. In the new digital transformation, KubeEdge together with Kubernetes supported the whole cluster deployment. On the edge there are ARM servers and some x86 industrial PCs; the applications currently running on the edge nodes are toll collection and license plate recognition applications, and the devices on the edge are cameras and toll gates. In the end, the benefits were a tenfold improvement in traffic efficiency: for cars going through a toll station, the time was optimized down from 15 seconds to 10, and the time consumption for trucks also improved a lot. The highlight here is that they achieved very low end-to-end latency, because it is a highly distributed system across the toll stations in different cities of China. The whole system currently has 100,000 edge nodes
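Because the edge nodes appear as ordinary nodes to the API server, the observability features above use the standard kubectl verbs; the pod and namespace names below are placeholders, and `kubectl top` assumes metrics-server is installed as described.

```shell
# Fetch logs from a pod running on an edge node (routed via CloudCore).
kubectl logs edge-app -n default

# Open a shell in an edge pod from the cloud side (KubeEdge >= 1.5).
kubectl exec -it edge-app -n default -- sh

# Node metrics for cloud and edge nodes alike, via metrics-server.
kubectl top nodes
```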
and around one million applications. With Kubernetes and KubeEdge, the operating team benefits from a unified management experience and can upgrade or scale out the applications on the edge in a very easy way. On top of the infrastructure, support was added in the application layer to map the hierarchical management to the different departments of traffic tolling and traffic management. For future technical work, we will keep working on cross-subnet communication support, especially for the applications running on the edge, and we will support edge-cloud communication integration, integrating with existing CNI implementations and Envoy to better serve microservices. For devices, we will keep improving the framework extensibility and add more device protocol integrations. We will also provide decentralized security for applications on the edge, so they can do authentication and authorization even when disconnected from the central cloud, and the community is working on a framework to serve edge-cloud AI coordination. On the community side, we will try to provide a better contributor experience, host more contributor events, and do more cross-community collaboration. Alright, these are a few useful community resources: we have a website, and most of the work is actually done on GitHub and Slack. If you have any questions, we welcome you to open an issue on GitHub or just send a message in the Slack channel. We also have the MEC and Device/IoT SIG channel links here; if you are interested in the work they are doing, please feel free to join. We also host the community meeting every week, alternating between a Europe-friendly time and a Pacific-friendly time; you can check the details in the meeting calendar. And we are recently working on the community documentation to support multiple releases and multiple languages; if you are
interested in improving the documentation, please feel free to join. Alright, that's all for the presentation. If you have any questions, please feel free to ask in the Zoom meeting or join the KubeEdge community and ask there. Thank you.