Thank you very much for attending this session. The title that ChatGPT suggested for this talk is "From Micro to Mighty: A Guide to Customizing Edge Devices Using Red Hat Device Edge". My name is Ricardo Noriega. I'm a Principal Software Engineer working in the Edge Computing team at the Office of the CTO at Red Hat. My name is Miguel Angel Ajo, and I work in the same department; we are teammates. So I'm going to give a brief introduction of what edge computing is for us, and how you can use Red Hat Device Edge to customize your edge devices by building certain types of images. Then Miguel will talk about OSTree, a technology that is very well suited to the edge. And then we will do a live demo, so let's see if we don't fail miserably. So what makes the edge different? We see that more and more devices are connected to the internet, and the trend is to put computing power closer to where the data is generated. For us, that is basically edge computing. We see this trend in many industries, like health care, automotive, defense, and so on. We know that these scenarios are very different from what we are used to in the data center, but we would like to manage them in more or less the same way. So we have the same requirements: ease of management; security, especially in environments where the devices might be located in remote locations without physical barriers; managing these devices at scale; and being energy efficient. And the peripherals that we connect to these devices are very heterogeneous: cameras, sensors, actuators, other human interfaces, and so on. The devices we use in edge computing are usually single-board computers or systems-on-chip that are not expandable the way servers are, and there are usually no remote management interfaces like the ones we find in the data center.
These devices are located in resource-constrained environments where we have to be very thoughtful about power consumption, memory, and so on. Those are the limited resources I'm talking about: storage capacity, and network types that are very different — maybe LoRaWAN interfaces, 5G, or maybe the network connection is unstable, or not present at all in disconnected environments. The hardware is very, very limited in terms of cores and memory. And maybe one of these devices is connected to a battery powered by a solar panel, so we also need to be very energy efficient. Red Hat has been working for years now on adapting OpenShift to different topologies and different environments. We started from the standard cluster — well, before that with external etcd, but the standard cluster — then we created the compact cluster, where the control plane and compute roles share the same three nodes. Another topology was remote worker nodes, where we could place worker nodes somewhere else, away from the control plane, and then single node. And one year ago we introduced MicroShift at the OpenShift Commons event. Last October, we announced Red Hat Device Edge, which is the offering, or the solution, that combines an edge-optimized operating system with MicroShift as the lightweight Kubernetes runtime. What I'm going to show you now is the process you can follow to create and customize Red Hat Device Edge images. For that, we have a tool — not only for that, but we have a tool — called Image Builder that you can download. It's a couple of packages, and there is a plugin for Cockpit that you can use as a graphical user interface. You can build different kinds of images: typical RHEL (Red Hat Enterprise Linux), but also the edge-optimized operating system I'm talking about. The workflow, basically, is as follows: you add sources, or repositories, and then you create a blueprint definition.
A blueprint is basically like a recipe for what your image is going to look like: which packages, customizations, and so on. And then you build the OSTree commit. Miguel is going to talk more about OSTree, but it's basically a version-controlled file system. Once you create the OSTree commit that contains all the packages you need for your edge device, you expose the OSTree repository, and then you can do two things: either use a typical RHEL ISO that points to that OSTree repository, and it will install it; or you can create, with Image Builder, a USB installer image to plug into your devices, or build a raw disk image to dd onto your SD cards and so on. So this is, more or less, a high-level view of the workflow. This is a screenshot of the sources page of Image Builder. By default, it comes with two repositories — two sources — that contain the base RHEL packages. For this purpose, we have added two more: the Fast Datapath repository, which contains Open vSwitch, and the OCP 4.12 repository, which contains MicroShift. It's just a couple of RPMs, basically, with their dependencies, but with these two sources you will be able to install MicroShift in your image. On the left side, you can see what a blueprint definition is. It's written in TOML format. You can set a name, a description, a version, and so on, and then you can choose the packages that you need on your edge device. Once the image is produced, you can plug it into your edge device, it will get installed, and it will have all those packages, dependencies, and so on. There is also a section for customizations: you can add kernel arguments, firewall rules, you can enable systemd services — plenty of stuff. And one of the coolest features coming to Image Builder is that you can inject, or embed, container images into the image itself, which is very suitable for disconnected environments. On the right side, you can see the output types, or image types, that you can build.
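A blueprint like the one described could be sketched as follows. The package names come from the talk; the blueprint name, kernel argument, and firewall settings are illustrative assumptions, not what was shown on the slide:

```toml
name = "microshift-edge"
description = "RHEL for Edge image with MicroShift"
version = "0.0.1"

# Packages pulled from the configured sources
[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "openshift-clients"
version = "*"

# Customizations: kernel arguments, firewall, systemd services
[customizations.kernel]
append = "console=ttyS0,115200"

[customizations.firewall]
ports = ["6443:tcp"]

[customizations.services]
enabled = ["microshift"]
```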
Amazon AMI, QCOW2, and plenty of others. But for us, the most important ones are the RHEL for Edge commit, container, and installer. The commit and the container are basically the same: the commit is just a tarball that contains the OSTree repository, and you are responsible for exposing it to your devices; the container is a tarball that you can load into Docker or Podman, and it will be automatically exposed. And the installer is basically for creating that USB-stick ISO that you can plug into your device so it gets installed. The OSTree settings: as I mentioned before, OSTree is a version-controlled file system, so you can think about it more or less like a Git repository. You can choose where it is exposed; the parent ID — in case you want to do an upgrade, you need the commit ID of the parent so you can create a following version of the OSTree — and then a ref, which is like a branch in Git terms. This is the packages section; as you can see on the right side, it's MicroShift and openshift-clients, but you can add more if you want. And then — go ahead, Miguel. OK, yeah. So as my colleague Ricardo was saying, OSTree is like the Git model applied to system image management. It's already used in OpenShift under the hood — it's been there for a long time — and it allows atomic upgrades of your system: you can download deltas of your file system and upgrade in an atomic way. Also, the file system is read-only, although you still have read-write sections of the system. It's very lightweight for over-the-air updates, which is very convenient for edge devices that may have a very low bandwidth allocation, and it's also able to do deduplication. It can manage the boot settings of your system, so when you switch to a new version, it's going to put the new kernel in the boot partition, and it's going to set up anything in the /etc directory that needs to be updated.
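The "point a RHEL ISO at the OSTree repository" path mentioned earlier is usually done with a kickstart file. A minimal sketch — the URL, osname, and ref here are placeholder assumptions; use wherever your commit is actually exposed:

```
# Kickstart fragment: install from an OSTree commit instead of RPMs
# (example address and ref; adjust to your exposed repository)
ostreesetup --osname=rhel-edge \
  --url=http://192.168.1.10:8080/repo \
  --ref=rhel/9/x86_64/edge \
  --nogpg
```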
There is an image somewhere in the Red Hat documentation — which, I think... oh, you cannot see it perfectly there. But you have a repository where you are serving the OSTree content. Normally that will be an HTTP server, but there is work in progress to make that also possible over a container registry, so you don't need to set up your own infrastructure. And the edge devices will consume that repository. In this example, you can see three devices: one of them already has all the delta updates, another one at some point applied one of the deltas, and the other is still consuming the update. And yeah, with this very short introduction to OSTree, we can go into the demo. OK, we wanted to show you this on the screen — Ricardo? So this is the device that we have here; it's a Jetson Xavier, which is not completely supported, but it's what we had. In RHEL 9.2, I think we will have beta support for the Orin version of these boards, so it's not very far off. And I will show you MicroShift running here. Don't mind the restarts — every time you stop and start the board, the pods have to restart and reconcile. But you can see the minimal set of services that we run together with MicroShift for storage, networking, and so on. And our application, which is running here — I can show you. Yeah, of course, the kernel is talking to me, because I'm connected via serial port to this board. So we have our manifests in the /etc/microshift/manifests directory. You can see a kustomization file in here, with the namespace, the deployment, the service, and an mDNS route. And then a fix-up that we had to make because we are running on top of Fedora at this point — it's like an extreme customization that Ricardo made with Image Builder, which is also possible: you can use Fedora, or switch the kernel, things like that. We had to do it for the demo, until we have RHEL 9.2 running that we can use.
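The manifests directory mentioned above is driven by a kustomization file. A sketch of what it might look like — the namespace and resource file names are assumptions based on what was listed in the demo:

```yaml
# /etc/microshift/manifests/kustomization.yaml (sketch)
# MicroShift applies these manifests automatically at startup.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whisper
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - route.yaml
```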
So for this demo, we have created this ISO installer image that contains a base Fedora 37 operating system, plus the NVIDIA drivers for the GPU of this board, plus the container runtime, plus MicroShift — everything in a USB installer that we plugged in, and then everything is there. Sorry. Yeah, no, thank you. So if we look at the routes, we see that we have exposed a whisper.local route, which is being exposed via mDNS. And if we reload — this was before — if we reload the demo, we can see that we have our web application running there. Just to show you very quickly: we have a small web application with a JavaScript runtime that is waiting for audio, then sending it via POST back to the server. And the server is just running the OpenAI Whisper model — we are not data scientists, but we wanted to put up something nice that you could see here. So it's running, and for every transcription request it gets the audio, transcribes it, and returns it; if there is audio, it will be returned to the browser. We also have liveness and readiness probes, because this is a little bit memory-tight for this model, and sometimes it could crash, so we use that capability of OpenShift to monitor applications and recover. So if we start recording in the web application, we'll see if the demo gods are happy with us. Maybe they are not. Oh, yeah, they are. So the model will start transcribing whatever I say. Maybe it will do a good job; maybe it will not. The model also has a translation flag, so: si hablo en español, lo debería traducir al inglés. Este público es muy simpático y son todos muy guapos. ("If I speak in Spanish, it should translate it into English. This audience is very friendly and you are all very good-looking.") To give you an idea of how well this runs, I will do it very quickly: you can see, side by side, the same application running on my laptop, because this is something nice about working with OpenShift — it's very easy to run your application anywhere. The network of the board just crashed, because it's not supported.
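The liveness and readiness probes mentioned above would sit in the deployment manifest. A sketch — the port, paths, and timings are illustrative assumptions, not the values from the demo:

```yaml
# Probe section of the Whisper deployment's container spec (sketch)
# If the model crashes under memory pressure, the liveness probe
# fails and the kubelet restarts the container.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```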
And they have a bug in this specific version of the board, which is why they are not supporting it. Anyway — so we have 30 seconds for questions. Thank you very much.