Hello, everyone. Welcome to this KubeEdge session. In this session, we'll focus on how and why KubeEdge has designed its architecture so that the orchestration capabilities are separated from the computation functionality. So, how many of you have ever worked with or heard about KubeEdge? OK, quite a lot of you. Then I don't need to explain in detail why KubeEdge is good and how it provides a complete solution for edge computing; I'll focus more on the separation and the architecture design than on the definitions. So, let's get started. I am Harshita. I've been working in a cloud-native environment for three years now and have been active in the open-source community. I've given a few talks on various open-source projects like KKP, which is a multi-cluster management system, and OpenEBS for stateful application management. Currently, I'm working as a software developer in the Kubernetes space. That's it about me; let's move forward.

Today we'll start with a brief description of what KubeEdge is. Then we'll move to the architecture, and I'll describe separately how the cloud and the edge components work in KubeEdge. Finally, we'll discuss a bit about how the communication between the two components works.

OK, so KubeEdge is a Kubernetes-native edge computing framework. What that means is that developers can write their containerized application once and deploy it anywhere, like we do with Kubernetes, but keeping in mind the resource constraints of the edge, the network capabilities, and the privacy requirements of edge computing. So it's not about separating the two.
It's also about creating a synergy between cloud and edge, so that we can leverage the Kubernetes experience, its loosely coupled architecture, and its extensibility, but combine all of that with the edge computing part. So it's about extending the edge from the cloud, so that we can utilize the cloud's capabilities, while also designing things so the edge can work autonomously: if the network is down, the edge can keep working without the connection, and as soon as the connection is restored, the metadata is synced again and the edge resumes communicating with the cloud.

KubeEdge is one of the first open-source projects that has both the cloud and the edge components open sourced, so anybody can use it; it's free and open for the community to explore both sides. It's also designed so that millions of devices can be managed on the edge side, and if you have an application that needs a lot of resources and heavy processing, you can run that on the cloud side. As you can see, KubeEdge is spreading more and more nowadays, with a wide range of applications, be it satellites, traffic sensors, CDN platforms, et cetera.

OK, so let me give you a brief review of the initial challenges in taking Kubernetes and extending it to the edge. The first is resource constraints, obviously, because an edge device can be a small box, so it has some restrictions. As we know, Kubernetes sometimes requires at least one GB of memory, and its components also consume quite a bit of memory on top of that. So that was the first challenge: how can we limit the resource requirements on the edge side while utilizing the Kubernetes capabilities on the cloud side? The second is poor network connectivity, because edge nodes sometimes can only work on a private network.
So how do we maintain connectivity between the cloud and the edge? The third challenge was that edge nodes should be able to work independently if there's a network outage or the connection breaks for some time, so that processing still happens on the edge side. And the fourth one was that, before KubeEdge, Kubernetes was not designed specifically for IoT devices and similar hardware, and that had to be kept in mind while designing the architecture of the KubeEdge project.

So let's see how KubeEdge solves, or tries to solve, these problems and challenges. As I said, the project does not treat the edge as something separate; it's rather an extension of the cloud, so that we can utilize the cloud's capabilities while keeping the edge nodes working independently too. And to make them work together in synergy, I think it's necessary to have a bidirectional connection, not just one-directional communication from the edge to the cloud, so that any application update or device update can be synchronized from the edge to the cloud or from the cloud to the edge.

The third part was keeping the edge lightweight. What they did is reorganize the Kubernetes kubelet to make it smaller and more lightweight, so now it requires around 70 MB of memory. Also, besides Docker, a number of other CRIs are supported, so you can build a lightweight container application that can easily run on the edge side. The architecture is also designed so that device management is easy: because we are using the Kubernetes APIs, devices are created as CRDs, and you can view them like any other CRD in a Kubernetes environment. Also, nothing changes when you check your nodes with the usual Kubernetes tooling.
You can see both the cloud nodes and the edge nodes, and both will look similar to you. And you can see all the devices, manage them, and update them just the way you're used to in a Kubernetes environment.

Also, the autonomy part. What they have done specifically to achieve this is maintain a separate database on the edge side, so that every metadata update is cached in that database. In case an outage happens, the edge node can get the metadata from the database and work autonomously. So, to recap: they see the edge as an extension of the cloud, and bidirectional communication is maintained; I'll explain a bit more about how they achieve that. It's loosely coupled, so the edge side can work autonomously. And the fourth point is node autonomy, using the database maintained on the edge side.

So let's go to the architecture part; I'll explain what the components are exactly. At the top you can see there's a clear separation between cloud and edge. On the cloud side, the green box shows the Kubernetes API server; it's the same as in plain Kubernetes, no change is made there. And CloudCore, which is one of the components KubeEdge adds, contains the controllers. The controllers come in two main types: the device controller and the edge controller. The device controller focuses on managing devices and pushing updates, for example add, delete, and create events, to the edge side. The edge controller is more about pushing or syncing add, create, and delete events for any application, ConfigMap, or Secret, and it also makes sure that a particular pod is scheduled on the desired edge node using a node selector.
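Since edge nodes show up as ordinary Kubernetes nodes, pinning a pod to one of them is plain Kubernetes. A minimal sketch; the node name `edge-node` and the image are placeholders, not from the talk:

```shell
# Sketch: schedule a pod onto a specific edge node via a node selector.
# "edge-node" is a placeholder for whatever your edge node is named.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: edge-app
spec:
  nodeSelector:
    kubernetes.io/hostname: edge-node
  containers:
  - name: app
    image: nginx:alpine      # any lightweight image suitable for the edge
EOF

# Confirm which node the pod actually landed on.
kubectl get pod edge-app -o wide
```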
Then come the communication parts. The yellow box just above the separation, CloudHub, is used for the communication between cloud and edge, so that events and metadata can be communicated easily to the edge; it maintains a WebSocket connection to the edge.

When we come to the edge part, the big box you see is EdgeCore, which has multiple modules and components. EdgeHub is a WebSocket client (CloudHub was the WebSocket server), used to maintain the connectivity and to push update or watch events to the cloud side. Then you see MetaManager: it is basically a message processor, and all the messages coming through the WebSocket are processed by it. And as I said before, to achieve the autonomy they have created a local datastore on the edge side, which is the green box you see here, called the node-level datastore. And Edged, which you see here, is a kind of lightweight kubelet, as I mentioned they reorganized the kubelet to make it a bit more lightweight; it takes care of the edge nodes, the pods, the Secrets, whatever Kubernetes resources are created. Down there you can see that a lot of container runtimes are supported: Docker, containerd, CRI-O. And at the bottom right you can see the Mosquitto broker, which is basically used for the connectivity with the connected devices, maybe a speaker or any kind of sensor.

So let's move forward; I'll explain more about each component of CloudCore and EdgeCore. CloudHub, as I said, is a WebSocket server; it watches the cloud side, caches the metadata, and sends it to EdgeHub, which is part of EdgeCore. Edged is a lightweight kubelet of sorts, an agent that takes care of the containerized applications and the nodes. EdgeHub is the WebSocket client on the EdgeCore side. MetaManager is a message processor.
It is also responsible for retrieving and saving the metadata on the edge side, so that the edge nodes can work autonomously. EdgeController takes care of syncing the events and making sure that the pods and resources created on the cloud side are synced to the edge side and created through Edged, the kubelet-like component on the edge. EventBus is used for the communication with the devices. Mosquitto is used for this, and the design is a publish/subscribe model: the broker has topics, and the clients subscribe or publish to a topic. So, for example, a device publishes a message to the broker, and whichever client has subscribed to that topic will get the message. It's designed specifically for environments with low resource consumption, where you need to reduce latency and work in a small footprint. DeviceTwin is the part that takes care of syncing the device status and data.

To summarize: on the cloud side, we have the controllers and CloudHub, which takes care of the communication. On the edge side, we have EdgeHub, which takes care of the communication; MetaManager, which takes care of syncing and fetching the data from the database; a database that saves the metadata for the edge nodes; and the Mosquitto broker, DeviceTwin, and EventBus, to take care of device management and sending and receiving messages. So on the edge side, both applications and devices are managed: DeviceTwin and EventBus focus on devices, and the rest focus on Kubernetes resources like ConfigMaps, applications, et cetera.

OK, so I'll explain a bit more about how the autonomy is achieved. As I mentioned, the application data is distributed from the cloud to the edge, and it's stored in a database on the edge side through the MetaManager.
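If you're curious what's actually cached, you can open the datastore on an edge node with the `sqlite3` CLI. This is a sketch under assumptions: the database path and the `meta` table layout are what I'd expect from common KubeEdge setups, but both may differ between releases, so check your installation:

```shell
# Inspect the local metadata cache on an edge node.
# The path below is an assumption; locate the .db file your edgecore uses.
sqlite3 /var/lib/kubeedge/edgecore.db '.tables'

# List some cached resource keys (pods, configmaps, etc. synced from the cloud).
# Table and column names are assumptions and may vary by release.
sqlite3 /var/lib/kubeedge/edgecore.db 'SELECT key FROM meta LIMIT 10;'
```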
And you can see the database through this link; the database used right now is SQLite.

OK, Edged. You will probably relate to Edged the most, because those of us who have worked with Kubernetes know how the kubelet works, and Edged is a kind of lightweight kubelet. It takes care of the pods and resources that are created and helps deploy the containerized workloads. Multiple CRIs are supported in order to keep the resource consumption of the applications low. I have listed the supported CRIs and CSIs separately, so that you know what is currently handled on the edge side: for CRI we have Docker, containerd, and CRI-O, and for CSI we have mostly all the volume types that are supported in Kubernetes.

Next, the EdgeController. I talked about the EdgeController before: it watches the Kubernetes API server on the cloud, and it pushes or gets events for Kubernetes resources. For example, if you create a pod on the cloud side, the create event is sent to the edge side so that the containerized application can be created there. And suppose you need to watch a resource for it to get reconciled: the watch and update statuses can be synced from the edge side to the cloud side. Two separate controllers are created for this. For the downward direction, by which I mean add, update, and delete operations from the cloud to the edge, there is the downstream controller. And for the upward direction, meaning the update and watch events of the edge devices and applications, there is the upstream controller, which syncs them to the cloud side.

OK, so let's jump to the communication part, how the two components communicate. This is a part of a config, copied from the setup, which is used to set up the cloud side. It's a part of the CloudCore YAML; as I mentioned, CloudHub is a part of CloudCore. We use a WebSocket for the communication, and we can also use QUIC.
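A trimmed sketch of what that CloudHub section of `cloudcore.yaml` can look like. The module and field names follow the layout I'd expect, and 10000/10001 are the commonly used default ports, but the config format has changed over KubeEdge releases, so treat this as illustrative:

```shell
# Write an excerpt-style cloudcore.yaml snippet with both channels enabled.
# Field names and ports are assumptions; verify against your release's docs.
cat <<'EOF' > cloudhub-snippet.yaml
modules:
  cloudHub:
    enable: true
    websocket:
      enable: true
      port: 10000          # edge nodes connect here over WebSocket
    quic:
      enable: true
      port: 10001          # optional QUIC channel as an alternative
EOF
```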
So both are supported, and both can maintain a tunnel or channel to the edge so that the messages are communicated easily to the edge side. And then EdgeHub: this is the part on the edge side that maintains the communication with the cloud, and this is the part of the configuration we need to do to set it up.

Now I'll give you a short description of how to quick-start and easily set up KubeEdge. There is a KubeEdge installer, keadm, which you can use, but it has some limitations: for example, it only runs on a few operating systems right now, such as Ubuntu, and it cannot run on a Raspberry Pi. But it's pretty easy, because it does most of the work for you. It checks whether the prerequisites are already there, and if not, it installs them; it creates the CloudCore service, and for the edge it creates the EdgeCore service and starts the modules for the cloud and the edge. The keadm init part is used to start the cloud components; the join command is used to start the edge components; and if you are done with your resources and want to clean up, you can use the reset command.

So, mainly, for the quick start and the basic setup, on the edge side you need Mosquitto for the device communication and you need Docker, and on the cloud side you need a Kubernetes cluster. The basic steps would be: you create certificates on your cloud side, and those certificates need to be copied to the edge side for the communication. Then you run CloudCore as a binary on your cloud side, and you point it at your Kubernetes cluster, either using a kubeconfig or by providing the kube-apiserver master address. On the edge side, you need to provide the CloudCore IP and port, which you already set up in the first step, and Mosquitto should be running there on the edge side to maintain the communication with the devices.
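Put together, those steps look roughly like this. The IP address, node name, and kubeconfig path are placeholders, and keadm's flag names have changed between versions, so double-check them against your installed release:

```shell
# On the cloud side: start CloudCore (keadm installs missing prerequisites).
keadm init --kube-config=/root/.kube/config

# Still on the cloud side: fetch the token the edge node will join with.
TOKEN=$(keadm gettoken --kube-config=/root/.kube/config)

# On the edge side: join the cluster. 192.0.2.10 stands in for the CloudCore IP,
# and 10000 is the usual WebSocket port from the CloudHub config.
keadm join --cloudcore-ipport=192.0.2.10:10000 \
           --edgenode-name=edge-node \
           --token="$TOKEN"

# Sanity check that the local Mosquitto broker is up for device traffic
# (wait up to 5 s for a single message on KubeEdge's internal topics).
mosquitto_sub -h localhost -t '$hw/events/#' -C 1 -W 5 || echo "no device traffic seen"

# When you are done, clean everything up:
keadm reset
```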
The third part is that you can add the edge nodes on the cloud either manually or automatically. For manual registration, you can apply a node JSON on the Kubernetes cluster. For automatic setup, you just add a configuration in the EdgeCore YAML, which will automatically add the edge nodes to your KubeEdge setup.

So this is the first command I mentioned, keadm init. As you can see, I have provided a value for the kubeconfig explicitly; by default it takes it from root's home directory, but I wanted to provide a separate kubeconfig for my cloud setup. As you can see, it downloads and manages the CloudCore components, the CSI drivers, et cetera. And these are the logs once the CloudCore service has started: you can see the downstream controller started, the upstream controller, the device controllers. All of these are part of managing and syncing the events from the cloud to the edge side. And a WebSocket server is created to maintain the communication with the edge. So the service was running properly.

Let's jump to the EdgeCore side. As you can see, the keadm join command is used to start the edge service and also to add the node to the cloud. I have provided a token, which can be obtained with the keadm gettoken command, or from a secret created in the kubeedge namespace on the cluster whose kubeconfig you already provided. Through either of these ways, you can get the token needed to join your edge node to the cloud. You can provide the name of the node; I have used edge-node. And you need to provide the CloudCore IP and port, obviously, for the communication, which you already set up in the first step.

Then the next part would be registering the node. Manually registering looks something like this: you create a JSON and apply it on your Kubernetes cluster.

Now a little bit about device management. As I said, the devices are created as CRDs on the cluster, and there are two parts: one is the device model and the second is the device instance.
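To make that model/instance split concrete, here is a hedged sketch of the two manifests. The `DeviceModel` and `Device` kinds are KubeEdge's device CRDs, but the API version has changed across releases, and the device name and property fields below are purely illustrative:

```shell
# Hypothetical example: a device model plus an instance referencing it.
# API group/version and spec fields vary across KubeEdge releases.
kubectl apply -f - <<'EOF'
apiVersion: devices.kubeedge.io/v1alpha2
kind: DeviceModel
metadata:
  name: temperature-sensor-model
spec:
  properties:
  - name: temperature          # illustrative property on the model
    type:
      int:
        accessMode: ReadOnly
---
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: temperature-sensor-01
spec:
  deviceModelRef:
    name: temperature-sensor-model   # the instance points back at the model
  nodeSelector:                      # pin this device to a specific edge node
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values: [edge-node]
EOF
```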
The model generally describes what a kind of device should look like, while the device instance CRD represents an actual device; that's where you describe the concrete device. And the device controller on the cloud side manages the devices that are created as CRDs via kubectl.

So this was a brief introduction to how the cloud and edge components work separately, how KubeEdge tries to create the synergy between cloud and edge, and how the communication works. I know this is quite a lot for a short session, but in the future I'll try to set up a few demos so that you can easily understand how to actually make it work, create a device, and see how it behaves on the edge side. So that's it for today. Thank you. Any questions?