Hello, everyone. Welcome to my talk today. I'm Fisher, from Huawei Cloud, and today I will talk about KubeEdge: extending Kubernetes to the edge, with real-world industry use cases. First, a brief introduction of myself. I'm a member of the technical steering committee of the KubeEdge project, and also a senior software engineer at Huawei Cloud. This is my GitHub ID; you can find me on GitHub.

Next, some background on edge computing. In this diagram, reading from right to left, you can see the cloud services, the regional edge at the city, and the near-site edge. In the cloud we typically run AI training, big data processing, and similar workloads; at the regional edge in the city we run vCDN, AI processing, and so on; on the left side is the near-site edge. More and more data is generated at the edge, so managing these edge devices and processing edge data has become more and more important. That's the background.

Next is the KubeEdge project itself. First, a brief introduction: KubeEdge is open source on GitHub, and it was the first cloud native edge computing open source project. We practice open governance, and we are connecting the cloud native edge computing ecosystem. We have many stars and forks on GitHub, and many contributors from several organizations.

Next, the innovation journey of the KubeEdge project. We open sourced it in 2018, and in the same year we contributed it to the CNCF as a Sandbox project and released the first version. By 2020 we had many use cases, and in that year KubeEdge became a CNCF Incubating project.
In the following years we released many sub-projects, like Sedna for edge AI and EdgeMesh for edge networking. We also gained some larger-scale use cases, like the cloud native vehicle and the cloud native satellite, which I will introduce later. We keep releasing new features, and now we are applying to become a CNCF Graduated project. That's the journey of the KubeEdge project.

Next, some project updates. First, the KubeEdge architecture. KubeEdge is built on top of Kubernetes, and it includes three parts: the cloud, the edge, and the devices. Many people ask me what the difference is between KubeEdge and Kubernetes. On the cloud side we reuse the Kubernetes master, so users can use the Kubernetes API to talk to a KubeEdge cluster. You can deploy this control plane in the public cloud or in your own data center; both are fine. Below that is the edge part. Each edge node can be deployed outside the data center, and edge nodes may be located in many different places. The edge side actually includes two parts: one is what we call the lightweight kubelet, where we did some lightweight cutting based on the kubelet; the other is for IoT device management. Together they form EdgeCore, which runs on the edge node. Another component I need to introduce is CloudCore. You may ask why we need the CloudCore component. It's because the network between the cloud and the edge is often unstable: it may disconnect, reconnect, and have high latency.
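To make the cloud-edge connection problem concrete, here is a minimal Python sketch of the retry-with-backoff behavior a CloudCore/EdgeCore-style tunnel needs over an unreliable link. All names here are illustrative, not KubeEdge's actual code:

```python
def next_backoff(attempt, base=1.0, cap=30.0):
    # Exponential backoff, capped, before re-dialing the cloud-edge tunnel.
    return min(cap, base * (2 ** attempt))

class FlakyChannel:
    # Stands in for the unstable cloud-edge link: fails the first N sends.
    def __init__(self, failures):
        self.failures = failures
        self.delivered = []

    def send(self, msg):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("link down")
        self.delivered.append(msg)

def reliable_send(channel, msg, max_attempts=5):
    # Retry until the message gets through or attempts run out, the way a
    # cloud-edge tunnel keeps retrying when the link flaps.
    for attempt in range(max_attempts):
        try:
            channel.send(msg)
            return True
        except ConnectionError:
            _ = next_backoff(attempt)  # a real tunnel would sleep this long
    return False
```

In the real project this reliability logic lives in the WebSocket/QUIC tunnel between CloudCore and EdgeCore; the sketch only shows the idea that messages are retried until the flaky link recovers.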
Since edge nodes are located in many places outside the data center, we had to make some enhancements between the cloud and the edge for these network issues, so we built the CloudCore component. We use the WebSocket and QUIC protocols between the cloud and the edge. This connection ensures that data can be sent from cloud to edge reliably, and EdgeCore can also report pod status and IoT device status to the cloud over this connection. That's the overview of the KubeEdge architecture. In summary, KubeEdge is built on top of Kubernetes; it's not a Kubernetes distribution. We made a lot of enhancements and key features in KubeEdge, so you can manage remote edge nodes from a central control plane in your data center or in the public cloud.

This page shows the processing flow of deploying a pod from the cloud side to the edge. On the left is the native Kubernetes master; we use the API server and etcd. A user creates a pod through the API server, then the scheduler schedules the pod to an edge node. What we add is CloudCore: the pod is sent from CloudCore to EdgeCore, and EdgeCore runs the pod on the edge node. That's the workflow for deploying a pod to an edge node.

The next part is about IoT device management. From cloud to edge, we built a framework to manage IoT devices. On the right side of the architecture are the IoT devices, which connect to the edge node. We built some APIs based on Kubernetes CRDs, so from the cloud you can use the Kubernetes API to control the edge devices connected to the edge node. We also built some config rules in the cloud; you can use these rules to control where the data generated by the edge devices is published.
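As a rough illustration of the CRD-based device control described above: the cloud declares a desired device state, the edge reports the actual one, and reconciling the two yields the property writes to send to the device. This is a hypothetical sketch, not KubeEdge's real device-twin API:

```python
def reconcile_twin(desired, reported):
    # Compute the property writes needed to move the device's reported
    # state toward the desired state declared through the cloud-side API.
    return {k: v for k, v in desired.items() if reported.get(k) != v}
```

For example, if the cloud declares a heater should be on at setpoint 20 and the device reports it is off at setpoint 20, reconciliation yields a single write turning the power on.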
You can have the edge devices publish their data to an MQTT broker, push it to a database, or push it to other applications. That's how we manage IoT devices. This page shows the detailed architecture of this framework. At the top is the Kubernetes master; below is the edge node. We built a module called the mapper, which is deployed as a container. It acts as a driver between the devices and the edge node: the edge devices use this driver to connect to the edge node. We do a lot of things in the mapper container, like data processing, which I will introduce later. We also introduced an interface called the Device Management Interface in EdgeCore for IoT device management.

This page shows the detailed design inside a mapper container. In the mapper we have the API layer, the control plane layer, the data plane layer, and the device driver. So if you have devices with your own protocols, you can develop a mapper: the mapper application connects your own edge devices to the edge node, and you only need to write some code in the device driver layer. Then your devices can connect to the edge node; the API layer, control plane, and data plane are all provided by KubeEdge.

This is how the mapper processes data. After the data is sent by the driver to the data plane, the data plane provides push capabilities: we can push to a database or to other applications. We also provide some REST APIs, so users can use them to pull data from the edge devices. Note that KubeEdge only collects the data from the devices; it does not do any other processing. If you want more processing, you need to do it in your own application.

The next one is about the edge node architecture.
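The mapper data plane just described, pushing to sinks like a database while keeping the latest reading available for REST-style pulls, can be sketched in a few lines of Python. The class and method names are made up for illustration:

```python
class DataPlane:
    """Toy mapper data plane: fans device samples out to push sinks
    (database, other apps) and keeps the latest value for pull reads."""

    def __init__(self):
        self.sinks = []
        self.latest = {}

    def register_sink(self, sink):
        # A sink could be a database writer or an HTTP forwarder.
        self.sinks.append(sink)

    def on_sample(self, device, value):
        # Called by the device driver layer with each new reading.
        self.latest[device] = value
        for sink in self.sinks:
            sink(device, value)

    def pull(self, device):
        # Backs a REST endpoint that lets users pull the latest reading.
        return self.latest.get(device)
```

The point of the sketch is the split of responsibilities: the driver layer produces samples, and the data plane only routes them, matching the talk's note that KubeEdge collects data but leaves further processing to your own application.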
On the left is the overall architecture of the edge node. At the top is the cloud, which is the control plane. We send pod and other metadata from cloud to edge. On the edge node we have a module called EdgeHub. It receives the metadata, saves it to the edge database, and then passes it to the lightweight kubelet to run the containers on the edge node. You may ask why we save the metadata on the edge node. Because in edge scenarios the network between cloud and edge often disconnects, maybe for a long time. If the edge node goes offline and then restarts, the applications need to recover. So we save the metadata to the database on the edge node: when the edge node is offline and restarts, EdgeCore can load the metadata from the edge database to recover the edge applications on the node. This way we can ensure the applications on the edge node run more stably. That's the architecture of the edge node. The right side shows the lightweight cutting we did based on Kubernetes: we fork the kubelet from the Kubernetes organization, do some lightweight cutting, and replace the upstream kubelet with our own lightweight one. That's the architecture of the node.

Another sub-project I will introduce is EdgeMesh. EdgeMesh is a networking solution for KubeEdge. You may ask why we need the EdgeMesh sub-project. It's because we think components like kube-proxy and some CNI plugins are too heavy for the edge node, so we built a lightweight networking solution. The left side lists the key features of EdgeMesh: a built-in edge local DNS, service discovery with a consistent access experience, support for both L4 and L7 traffic management, and multi-edge, cross-subnet communication.
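Going back to the offline-autonomy design for a moment: the metadata checkpoint and recovery flow can be sketched like this, using sqlite3 as a stand-in for the edge database. The names are illustrative, not EdgeCore's actual schema:

```python
import json
import sqlite3

class EdgeMetaStore:
    # Stand-in for the edge node's local metadata database.
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")

    def checkpoint(self, key, obj):
        # EdgeHub saves each piece of metadata as it arrives from the cloud.
        self.db.execute("REPLACE INTO meta VALUES (?, ?)",
                        (key, json.dumps(obj)))
        self.db.commit()

    def recover(self):
        # After an offline restart, EdgeCore reloads everything from the
        # local database and restarts applications without the cloud.
        return {k: json.loads(v)
                for k, v in self.db.execute("SELECT key, value FROM meta")}
```

With a file-backed path instead of `:memory:`, the checkpointed pod metadata survives a node restart even while the cloud link is down, which is exactly the autonomy property the talk describes.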
Cross-subnet communication is a very interesting topic, because in edge scenarios the edge nodes may be located in different places and cannot reach each other directly. In a data center all the servers can connect to each other, but at the edge that's not the case. Some users want applications running on different edge nodes to access each other, so we did some work in EdgeMesh: traffic can be relayed through the cloud to reach another edge node. That's the work of the EdgeMesh project.

The next one is another sub-project called Sedna. Sedna is focused on edge AI. From the architecture you can see Sedna is built on top of the KubeEdge project. It can do cloud-edge joint training and inference, as well as multi-edge joint inference, and it can integrate with other AI frameworks. If you want to use this framework, Sedna has some plugins: you integrate a plugin into your AI application to do cloud-edge or edge-edge synergy, for things like model transmission and so on. That's the Sedna sub-project.

Next is security. We have done things like a security audit; here is the audit report. KubeEdge is also one of the first CNCF projects to integrate fuzzing tests; this is the fuzzing test report. We have also done threat model and security protection analysis; this is the KubeEdge report. This page shows the CVE process flow of KubeEdge: when a user reports a bug or a CVE, our security team handles it, then releases a patch to our restricted-disclosure vendors, and then discloses it to the public. That's the CVE process.

Another sub-project I want to introduce comes from our SIG Robotics; it's called Kuberob, and it's focused on robotics management. On the left side you can see the overall architecture of the Kuberob project: it can manage robots from the cloud.
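The cloud-edge joint inference that Sedna supports generally follows a hard-example-mining pattern: answer confidently with a small model at the edge, and escalate low-confidence samples to a larger model in the cloud. A minimal sketch with hypothetical model callables, not Sedna's actual API:

```python
def joint_inference(sample, edge_model, cloud_model, threshold=0.9):
    # Run the small edge model first; only "hard examples" (results below
    # the confidence threshold) are escalated to the larger cloud model.
    label, confidence = edge_model(sample)
    if confidence >= threshold:
        return label, "edge"
    label, _ = cloud_model(sample)
    return label, "cloud"
```

Because most samples are answered locally, only the hard ones cross the cloud-edge link, which is also why use cases like the satellite one can cut transmission volume so sharply.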
In the cloud, it can do things like simulation and other robotics management. At the edge, the robot itself is the edge node, and we can run applications on the robot.

Next, I will introduce some representative use cases of the KubeEdge community. We have many use cases across many industries, like intelligent transportation, smart energy, industrial intelligence, smart CDN, cloud native vehicles and satellites, smart campus, finance, smart logistics, chemical plants, and many other industries. Let me go into the details of a few of them.

The first one is the satellite use case. In the architecture, at the top is the satellite in space; the satellite is the edge node. It connects to the ground control center through the ground station, so we can control the edge node in the sky from the ground: deploy applications, do joint AI inference, and so on. It also deploys our sub-project Sedna for cloud-edge joint inference. With KubeEdge, the satellite-to-ground data transmission volume is reduced by 90 percent, along with some other benefits.

Another use case is an offshore oil field. As you can see from the architecture, out at sea there are many edge nodes and edge devices to manage. Our users use KubeEdge to manage all these sensors, such as cameras, and the edge nodes from a central platform. The network between the edge nodes and the central platform, a mix of wireless and optical transmission networks, is often unstable, but with KubeEdge we can ensure control commands and other data are delivered reliably between edge and cloud. The next use case is also very interesting.
This is the roadside architecture in the real world: it has cameras, roadside units, the edge node, and the cloud. In this case, the cameras monitor the intersection and send video and pictures to the edge node; the edge node does some processing and analysis, and then sends the result to vehicles or people through the roadside unit. In this use case, KubeEdge manages the edge nodes from the cloud, and can also manage devices like the roadside units and cameras.

Next, I will introduce the community, because KubeEdge is an open source community. These are the open governance policies of KubeEdge. We have a technical steering committee, along with sub-communities and special interest groups. We handle project-level governance, SIG lifecycle and leadership management, and other project-level policies. We also have SIGs and incubating sub-projects: the SIGs focus on areas like node, device/IoT, networking, scalability, and so on, and the sub-projects include Sedna, EdgeMesh, and Kuberob, which I described.

These are some partners of the KubeEdge community; we have many partners and adopters from all over the world.

The next page shows the whole edge computing platform around KubeEdge, from top to bottom. At the top are the industry scenario-based kits: we built kits for things like AI, IoT, MEC, and robotics, as sub-projects such as Sedna, EdgeMesh, and so on. In the middle is the core framework of KubeEdge, with scheduling, edge node management, and networking. At the bottom are the hardware and OSes KubeEdge supports: KubeEdge can run on Linux, Windows, Android, and some other systems.
That's the overview of the KubeEdge platform. That's all. Welcome to join the KubeEdge community and make cloud native workloads run in more industries. Thank you. Do you have any questions? If you have any questions, you can come to the KubeEdge booth to discuss with me. Thank you.