Hi, everyone. We are Gloria Ciavarrini and Jakub Dzon, and we work at Red Hat. During this talk, Jakub and I will introduce a project that lets you take full advantage of the power of edge computing. We'd like to give you a way to manage and observe edge workloads and edge devices using your existing Kubernetes environment. This is Project Flotta.

During this presentation, we'll show you the big picture of Project Flotta: how to configure your edge devices, and how to deploy containerized workloads on edge devices without relying on the Kubernetes scheduler. Project Flotta lets us monitor our edge devices and collect data from them. We can also access the hardware that is available on each edge device.

I'd like to give you a quick overview of Project Flotta. On the Kubernetes side, the Flotta project is composed of the Flotta operator and the Flotta Edge API. The Flotta operator is a Kubernetes operator used to manage the workloads of the edge devices via the Kubernetes API. The operator is also responsible for deploying all the Flotta custom resource definitions, such as the EdgeDevice and EdgeWorkload CRs. The Flotta Edge API is an HTTP entry point that handles all the communication between the devices and the cluster. All the devices connect to the Flotta Edge API, through which they can retrieve the necessary configuration or push information back; this way, the edge devices don't talk directly to the Kubernetes cluster. Flotta also lets you manage your edge devices from a single point using Kubernetes CRDs, and an edge device can be almost anything that connects, from a Raspberry Pi to a big server or a drone.

So let's start with the first demo. Before any edge device can be used with a cluster, it needs to be registered.
To facilitate that process, we provide a Makefile target in our flotta-operator project that generates a script containing all the necessary information to make your device visible to our cluster components. Let's have a look at how it works in practice. Here on the left-hand side you can see my checkout of the flotta-operator repository with the Makefile I mentioned. On the right-hand side I will be monitoring all the EdgeDevice CRs registered with our cluster, which correspond to devices registered with the cluster. At the bottom there is the command prompt on the edge device where I will execute the installation script. It is generated with the agent install-scripts Make target, which produces install-agent-dnf.sh. After the file is transferred to the device, it needs to be executed with additional arguments, such as the IP address of the Flotta HTTP API endpoint and the port it listens on. The installation takes quite some time, as we need to install our RPM and dependencies such as node_exporter, Podman, Ansible, and nftables. When the installation of those components is complete, the device restarts automatically, and after a successful boot it registers with the control plane. Here you can see the CR listed; let's have a look at what's inside it. When the Flotta agent registers, it sends some information about the hardware configuration of the device; in this case you can see labels describing the CPU architecture, among others. In the status part of the CR you can also see hardware information. Flotta has a notion of edge device sets, which are meant to logically group devices that serve the same purpose and therefore share the same configuration. That configuration can be defined at the level of a device set, and the device set can be assigned to multiple edge devices.
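To make this concrete, here is a rough sketch of what a registered EdgeDevice CR might look like. The field names and values are illustrative assumptions based on what is described in the talk, not a verified schema; check the Flotta documentation for the exact structure.

```yaml
# Illustrative sketch only -- labels and status fields are assumptions.
apiVersion: management.project-flotta.io/v1alpha1
kind: EdgeDevice
metadata:
  name: edge-device-1
  labels:
    device.cpu-architecture: x86_64   # reported by the agent on registration
status:
  hardware:
    cpu:
      architecture: x86_64
    memory:
      physicalBytes: 8589934592       # 8 GiB, as reported by the agent
```

The labels reported by the agent are what make label-based scheduling and device-set membership possible later on.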
Whenever a configuration is specified at the level of an edge device set, it is applied to all devices belonging to that set, and whatever is configured at the level of the device itself is overridden by the configuration coming from the edge device set. Conversely, whenever a device is removed from a device set, the configuration found in the EdgeDevice CR is applied again. For example, we have an edge device set named demoset which defines a configuration for metrics: in this case we set the system metrics gathering interval to 60 seconds and, at the same time, we disable it. If we want to apply that configuration to an edge device, we need to label the device with the flotta/member-of label, with a value corresponding to the name of the edge device set we want the device to belong to. In this case we set the label's value to demoset, so the device belongs to the demoset edge device set.

Let's have a look at how a common configuration can be applied to a group of devices using a device set. I've got three edge devices connected to my cluster, and at the bottom you can see the respective configurations on the devices. When I apply changes to the configuration, those listings will change. I've got a definition of a device set called demoset. The name is important, because we need to make an association between the device and the device set. Creating the device set alone doesn't change anything, because there is no connection yet between any of the devices and that device set. To make that connection, we need to apply the flotta/member-of label with a value equal to the name of the device set. Once the labels are applied, the configuration changes on the devices. If we remove the label from one of the devices, its configuration reverts to whatever is specified at the level of the EdgeDevice CR in the cluster. If we remove the device set, the configuration is reverted on all of the devices and they go back to their default configuration.
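The device set and the membership label described above might be expressed roughly like this. Resource kinds and field names are my best reconstruction from the talk, not a verified manifest:

```yaml
# Hypothetical sketch -- field names are assumptions, not verified
# against the Flotta API.
apiVersion: management.project-flotta.io/v1alpha1
kind: EdgeDeviceSet
metadata:
  name: demoset
spec:
  metrics:
    system:
      interval: 60    # gather system metrics every 60 seconds...
      disabled: true  # ...but keep collection disabled for now
---
# Joining a device to the set is done with a label on the EdgeDevice:
apiVersion: management.project-flotta.io/v1alpha1
kind: EdgeDevice
metadata:
  name: device-1
  labels:
    flotta/member-of: demoset   # value = name of the EdgeDeviceSet
```

Removing the flotta/member-of label (or deleting the EdgeDeviceSet) would revert the device to the configuration held in its own EdgeDevice CR.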
In Flotta, software you want to run on your edge devices needs to be packaged as a container image, and we use Podman to execute those images on your devices. You describe what you want to run and how it should be configured on the device using a standard Kubernetes pod specification that is embedded in our custom resource describing your workload. And because our workloads are described with YAML, you can use the same CI/CD pipelines you use to deploy your workloads in the cluster to deploy them on the edge. What is important to note is that workloads in Flotta are not scheduled by the Kubernetes scheduler; they are scheduled by the Flotta operator using labels. Based on the label selector on the workload and the labels that are put on devices, we decide where the workload needs to be executed. In this case we have an EdgeWorkload manifest with a device selector that requires the profile label to equal http, and it matches against EdgeDevice CRs. If a device carries the label profile=http, the workload on the left-hand side, an nginx HTTP server, is deployed to that device.

During this demo I will show you how to deploy the nginx HTTP server to devices connected to the cluster. Here you can see those three devices and the manifest for the EdgeWorkload representing the nginx server, in which we expect the workload to be deployed to any device having the profile=http label. In the pod definition we also have the nginx image, and host port 8080 should be opened for that workload. After applying the workload, we would expect the workload to show up in the status pane at the bottom of the screen, but it doesn't, because the devices are not labeled yet. After labeling the devices, the status of the workloads is shown, and they transition from deploying to creating to running after some time. Now all the workloads are running, and we can try to connect to the nginx server on one of them.
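The nginx EdgeWorkload described above might look roughly like this. Field names are assumptions reconstructed from the talk, not a verified Flotta manifest:

```yaml
# Hypothetical sketch of the nginx EdgeWorkload -- field names are
# assumptions based on the talk.
apiVersion: management.project-flotta.io/v1alpha1
kind: EdgeWorkload
metadata:
  name: nginx
spec:
  deviceSelector:
    matchLabels:
      profile: http            # deploy to every device labeled profile=http
  pod:
    spec:
      containers:
        - name: nginx
          image: docker.io/nginx:latest
          ports:
            - containerPort: 80
              hostPort: 8080   # expose nginx on port 8080 of the device
```

Note that the deviceSelector matches labels on EdgeDevice CRs, not Kubernetes nodes: the Flotta operator, not the Kubernetes scheduler, decides which devices receive the workload.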
We will use lynx and port 8080, and we expect the default nginx website to be shown, which it is. We can try another device, and the result should be exactly the same: the same default nginx website, and it works. If we unlabel a device, the workload is removed from that device, and when we try to connect to it we only see an error. We can expect the same behavior when we remove the EdgeWorkload CR: the connection reports an error.

Now we know what an edge device is and how to deploy a workload, but wouldn't it be useful to retrieve some information from the edge device? Flotta gives us a simple and effective way to monitor metrics locally on the device and to export them periodically to a central monitoring system. The collected metrics are saved in a time-series database like Thanos. Each edge device can collect metrics from all the deployed workloads or from the system of the device itself. Metrics are collected by a Prometheus node exporter, so the available metrics are the same as those of the Prometheus node exporter. It is possible to configure metrics collection: it is enabled by default, but we can change the collection frequency or define a subset of metrics to be collected using an allow list. It is also possible to disable metrics collection completely, and in that case all the infrastructure related to metrics acquisition is turned off, including the Prometheus node exporter.

So let's start configuring our metrics collection. This can be done by including a subsection in the EdgeDevice CR: in the green box there is the metrics section, and we set the acquisition interval to 30 seconds; the default is 60. Disabling metrics is also easy to do: we just add the disabled flag inside the metrics section of the EdgeDevice CR.
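A sketch of the metrics subsection of an EdgeDevice spec, combining the options just described. Field names are my assumptions from the talk, not a verified schema:

```yaml
# Hypothetical sketch -- field names are assumptions.
apiVersion: management.project-flotta.io/v1alpha1
kind: EdgeDevice
metadata:
  name: device-1
spec:
  metrics:
    system:
      interval: 30        # collect system metrics every 30 s (default: 60)
      # disabled: true    # uncomment to turn metrics collection off entirely
      allowList:
        name: metrics-allow-list   # ConfigMap naming the metrics to keep
```

Removing the whole metrics subsection restores the default behavior: collection enabled, with a 60-second interval.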
We can also configure the allow list: if we want to monitor just a subset of all the available metrics, we first create a ConfigMap naming only the metrics we are interested in, and then refer to this ConfigMap inside the EdgeDevice CR by including an allowList subsection in the spec. And if we want to restore the default configuration, it's super easy: we just need to remove the metrics subsection from the EdgeDevice CR.

Project Flotta assumes that devices can be of any type and can be found anywhere on earth, so it doesn't assume that the devices are always connected: if they are battery powered, the battery could die, the connection quality may be poor, or maybe there is no connectivity at all. So let's see what happens in the next demo. In this demo we will see what happens when devices have connection problems. We observe our cluster from a Grafana dashboard, where we can monitor the number of Flotta objects, edge devices, and edge workloads, and the devices that have registered. At this time, only one device has registered. I have to warn you: in this demo the clock is ticking faster; in fact, two more devices have already registered.
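As a side note before the connectivity demo, the allow-list ConfigMap described above might look roughly like this. The data key and structure are assumptions; the metric names are standard Prometheus node_exporter metrics:

```yaml
# Illustrative sketch -- the data key expected by Flotta is an assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-allow-list
data:
  metrics_list.yaml: |
    names:
      - node_cpu_seconds_total
      - node_memory_MemAvailable_bytes
      - node_filesystem_avail_bytes
```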
Let's move to another dashboard to understand more. We can see that all the devices are no longer connected to Flotta; let's assume that the connection quality is really poor, so the devices are not sending any more information to Flotta. Device one manages to reconnect; unfortunately, it loses the connection again, but device two manages to connect. In this demo we are lucky: within a few moments, first device two, then device three, and finally device one succeed in reconnecting.

Another feature of Project Flotta is the ability to run Ansible playbooks on the device. First we need to define an EdgeConfig CR. An EdgeConfig CR allows us to specify some configuration, so potentially not only an Ansible playbook, to apply to the device. We also need to assign a label to the EdgeDevice CR to specify on which devices we want to run the Ansible playbook. To be honest, things are a little bit more complicated under the hood: when the Flotta operator receives the EdgeConfig CR, in this case edgeconfig-1, it generates, for each device carrying the corresponding label, a new CR called PlaybookExecution. This CR allows us to link each Ansible playbook execution to a device and to monitor its state. In this example we create an EdgeConfig named edgeconfig-1 and we assign the corresponding label to device one and device three. The Flotta operator then creates two PlaybookExecution CRs, one for device one and one for device three. Now the Ansible playbook execution can begin; each device then updates the status of its execution. In this example, device one successfully completed the execution, while device three had some problem.

But let's see how it works in Project Flotta. During this demo we will execute our first Ansible playbook on a device. First we need to define the Ansible playbook, so let's start with a simple one that just creates a hello.txt file on the device. Then we should define the EdgeConfig CR, so we name
it edgeconfig-demo-1, and now we should set the content. Let's encode the demo playbook and copy and paste the value into the content field; we could also set some other fields, such as the execution timeout in seconds. Now, on the left there is the device, and the Ansible playbook is supposed to create a hello.txt file; I'm checking the time, not cheating. Let's start monitoring the EdgeConfigs available in the cluster, then the PlaybookExecutions, and finally the status of the edge device, especially the part related to the playbook execution. We need to add a label to the edge device to link edge device one with the EdgeConfig. The last thing left is to send the EdgeConfig. In a moment the status of the playbook will change from deploying to executed; in the meanwhile, we can see that a PlaybookExecution has been created to link edge device one to edgeconfig-demo-1. The status has changed to successfully completed, so let's check the file: here it is, and look at the content.

Flotta can also facilitate data transfer from your edge device. As shown on the diagram, there is some workload running on the device, and it creates some files, be it picture files, some statistics, or maybe data models produced from inputs to the workload. The user then wants to have those files available in some central location for further processing. In the case of Flotta, we use the S3 API as the target endpoint for uploading the files; it doesn't have to be an AWS S3 bucket, it can be any service providing an S3 API. Whenever files are created in a specified directory on the edge device, the Flotta agent detects that and uploads them to the remote endpoint. To configure that behavior, the user needs to provide a data section in the EdgeWorkload specification. In this case we have a source directory listed, which corresponds to the in-container /export/source directory, so it is a subdirectory of the /export directory that is visible to all of the workloads run with Flotta; in the case
of that specific configuration, it boils down to /export. The target defines the directory in the remote bucket where the files should be placed, in this case snapshots, so the path to the files uploaded from the device is bucket/snapshots. To instruct the Flotta agent how to access the remote storage, the user needs to provide a storage section in the EdgeDevice manifest, where there is an S3-dedicated configuration referencing a ConfigMap with information about the bucket and a Secret which provides the credentials to access that bucket. Here on the left-hand side you can see the ConfigMap listing the different parameters of the bucket, including the endpoint where you want to upload the files, and on the right-hand side you can see the AWS credentials that should be used for accessing the remote storage.

The other functionality that Flotta provides is access to on-host devices. In the case of this manifest, there is a /dev/video0 device available on the host machine, and we want to make it available to the workload running on that machine. In this case we map the /dev/video0 path to exactly the same path inside the container running the workload; this way, the user will have access to that on-host device, for example to take photos or record videos. To make things simpler, we instruct Flotta to run the container in privileged mode.

For the purpose of this presentation, I have prepared a Python HTTP server listening on port 8080; whenever a GET request is made to that server, a photo is taken using the camera attached to the device running the workload. The photograph is stored in the /export directory on the device. The whole script is wrapped in a fairly simple image, as you can see on the screen. Let me show you how all of that works in practice with a Raspberry Pi deployment. I connected my Raspberry Pi to the cluster; let me show you what its descriptor looks like. You can see that we have a different architecture, arm64, as reported by the agent, and I labeled the device for scheduling with the app=
snapshots label. I also configured S3 storage by referring to a ConfigMap and Secret with the bucket information and credentials. The ConfigMap looks as follows: we have the bucket, host name, port, and region defined, and in the Secret you can see the credentials. Let's see the workload that we're going to use: there's the app=snapshots label for scheduling, and the container in the pod uses the /dev/video0 device to take pictures; that video device file is mapped from the host's /dev/video0. The workload exposes port 8080 on the device to allow users to trigger snapshots, and the snapshots will be uploaded from the target directory to the snapshots subdirectory on the S3 bucket. After the workload is deployed, it should transition into the running state, and it does, so we should be able to call port 8080 and trigger photos being taken. We'll take several photos, and while we do that, their names will gradually be listed in the lower-left pane, where we issue ls on our bucket. There we go, the first two files, and another set of files should pop up shortly. There we have five files, and in total we should have six files right now. Let's download several of those files and see what they look like; they should show the Raspberry Pi that is used for this demo. Let's see the first image: that's the Raspberry Pi I mentioned. The second image should also show the same device, but this time with some visitor from the future. And the third, great Scott, it works!

Today we've shown you how to connect a Linux device to a Kubernetes cluster. This is Project Flotta and the features it has to offer: configuration of a fleet of devices using device sets; workload deployment to a Flotta-managed device with the EdgeWorkload CR; how to get insights into your device and workload performance with metrics; support for Ansible to introduce changes to a Flotta-managed device's system; how to upload data from devices to a central location; and finally, how to use host devices in your Flotta workloads. To learn more about Flotta, visit our GitHub
organization and website. On the website you will find documentation and how-to guides for all the features demonstrated today, and more. Thank you very much!