My name is Robert Sidor. Joining me today is Andre Patanga. We get to work together on quite a few different kinds of projects, and most recently a lot of that work has been around edge, especially with RHEL.

With regard to topology, jumping right in: a lot of the edge discussions we see tend to be about extensions of the IT shop. The IT organization is trying to figure out how to apply its existing Linux knowledge to what it's doing at the edge, and that often amounts to what's familiar: racks that bring compute closer to the edge, where the data collection actually happens. A lot of times that means OpenShift from Red Hat, or compute that extends what we're doing with a PaaS. Today, though, we're going to talk about something that's further out at the edge: smaller devices, and bringing RHEL and Linux down to those edge devices. Next slide, please.

Yeah, and I would add to Rob: not too small, because there are also edge deployments built on tiny sensors and very specialized hardware. What we're looking at today is a typical use case that's more like edge servers, if I'm right, Rob? Exactly.

A little bit of housekeeping. We're not going to talk about LF Edge today in particular, but if you're watching this webinar, you should probably look into LF Edge, Akraino, and EdgeX Foundry. These are projects that the Linux Foundation and Red Hat are involved with. Think of LF Edge as an umbrella with a number of different organizations involved, creating a common framework for hardware and software standards, while a project like Akraino is looking at a set of open infrastructure and blueprints for the edge. So I really would push you to look into what those projects are and how they're going to impact the industry.

Let me second that. The stuff going on at LF Edge is super exciting. Some of these solutions are reaching a great level of maturity, and they're very comprehensive. The presentation Rob and I are giving today is really about our learning process: we're two individual engineers who were exploring edge and, really, the building blocks of edge. As we're going to show, we'll talk about image building, updating the edge devices, and observability, using open source components that are out there. But if you want to look at the state of the art, the future, and some of the really good stuff happening here, you'd be remiss not to look at what's going on at lfedge.org.

Great, thanks. So let's dive right into what we're going to hone in on today, which is Linux at the edge, and specifically how we're working with RHEL at the edge. That means there are a number of things we have to think about from a management perspective. How are we going to build the Linux image that's actually going to be put out on the edge? Today, we're using a GitOps approach, and we're using Ansible to manage it. We're looking at Ansible from a couple of key vantage points. One is: do we have access on the network, and does the network give us enough comfort to manage the edge devices with Ansible the way we would other Linux instances? Or should we put Ansible down onto the edge device and have it call back out?
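As a minimal sketch of that pull model, using ansible-pull rather than the Receptor-based setup mentioned in a moment, and with a hypothetical repo URL, the edge node itself can periodically fetch and apply its own configuration:

    # On the edge device: clone/update the config repo and apply the playbook locally.
    # Repo URL, branch, and playbook name are illustrative.
    ansible-pull -U https://git.example.com/edge-config.git -C main local.yml

    # Run it on a schedule, e.g. via cron (or a systemd timer):
    # */30 * * * * ansible-pull -U https://git.example.com/edge-config.git -C main local.yml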
So basically it's the difference between a push and a pull model. And because of this, we've been looking at technologies like Ansible Runner, Ansible Builder, and Receptor, which make Ansible highly attractive for edge architectures. And then, of course, the whole point of putting compute closer to the edge is to work with data: maybe reroute data, manipulate data, or bring rules or AI/ML components down to the edge. How are we going to run those? We're going to talk about Podman in that context today. Go ahead to the next slide.

Another piece is configuration management. How are we going to manage these images? We know from the past that image management with VMs, for instance, can lead to image sprawl and other problems. So can we manage these images as code, in a GitOps fashion, so that we manage them with Git? And beyond that, can we manage them with newer approaches like OSBuild and rpm-ostree, which we're also going to talk about today, so that we can reduce network traffic and bandwidth requirements? Many of these edge use cases have limited networks, whether that's bandwidth or latency.

Then there's the question of where we design and build the applications. How are we going to manage the container images that get pulled down and run? How are we going to build the edge OS images themselves? Where are we going to collect telemetry, and how does that telemetry get back up from the edge? In the example we'll talk about today, we do that from our Kubernetes environment. Next slide.

And that brings us to OpenShift. OpenShift is Red Hat's Kubernetes: a fully certified Kubernetes platform that's all open source. We're running Ansible on top of Kubernetes. We're also running the ELK stack on Kubernetes, which collects our telemetry from Prometheus endpoints on those edge devices. Can we use Smart Management? That's Satellite, formerly Spacewalk. If we're using a plain RPM approach rather than rpm-ostree, can we manage the OS and configuration that way? And then we're leveraging Ansible, again with two different approaches, to manage the actual devices.

Yeah, thank you, Rob. We're going to start by looking at building these images. As Rob mentioned, we're trying to define a GitOps pipeline to create the images: how they're put together based on libostree and rpm-ostree, everything you need to create the images and their layers, put them where the edge devices can get them, and update them. Let's take a quick look at that.

The basis of it: if you look at an edge server image, one of the key distinctions from a traditional operating system install is that we want to deliver it as an image, almost like an appliance, for a lot of reasons. The foundation there is libostree (OSTree). For those of you who may not know it, it's essentially a way to define OS filesystem trees, almost like Git, where you can have layers and tag specific versions. The result is close to an appliance deliverable that you can push out, along with its updates.
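To give a feel for that Git-like model, here's a quick sketch of the kinds of commands involved on a deployed system (the ref name is hypothetical, and paths can vary by install):

    rpm-ostree status    # show the booted and any staged deployments
    ostree log rhel/8/x86_64/edge --repo=/ostree/repo    # commit history for a ref, git-style
    rpm-ostree rollback  # make the previous deployment the default for the next boot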
Rather than per-RPM updates, these are full-on atomic updates, as you might call them, and that has a lot of advantages from an edge standpoint, because, as we're going to see, we want to limit downtime and maximize the uptime of those edge devices. We're going to talk a little bit about how we do rollbacks and health checks. It's all or nothing: either the whole update succeeds or it rolls back. So take a look at rpm-ostree and libostree, and start thinking about how these complete filesystem trees can be delivered to the edge device, and how that simplifies the update process.

To take a quick look at how rpm-ostree works on RHEL for Edge, and in general: the idea is that the image is immutable, mostly read-only, although we still maintain state under /var and /etc. The essence is that there are no in-between states between updates. It's not "I'm just adding an RPM here or modifying this or that there." Everything gets defined as code, as text configuration files. The images are created as layers as things get updated, pushed out to the edge device, and staged there. Updates get pushed to the mirrors, but the system only actually gets updated when you reboot. You schedule that reboot at a specific time; when the device reboots, it boots into the new deployment, and it either succeeds or rolls back. We have Greenboot, which is a mechanism that checks whether the boot was successful and rolls back if not.

So you might ask: how do I build these rpm-ostree images? We encourage everybody to check out the open source OSBuild project, specifically Composer (osbuild-composer). Composer is a really great tool. The idea is that you define what goes into your image in a human-readable text file, and Composer has a process that generates the OSTree artifacts for you. It's not only OSTree; it does other image types as well, but for the purposes of this presentation we're going to focus on OSTree. As you can see, it's pretty easy to read and understand: just a text file that gets committed to your GitHub or GitLab, to your SCM repo. For example, you can add specific packages. In our case, as Rob was saying earlier, our applications are going to be delivered as containers, and we're using Podman as the container tool, so we add Podman and the specific version we want. This file is what's called the blueprint.

Once you have your blueprint in Git, it produces an artifact. But how do you actually boot into that artifact, into that filesystem tree? How do you provision the system? To Rob's earlier point about using the traditional skill sets you may already have in the data center, we just use a kickstart file. It's a typical kickstart file; the only difference is that it mounts an OSTree, and it picks that up from an HTTP location.
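As a small illustration (package version, ref name, and URL here are hypothetical), the blueprint is a TOML file, and the kickstart pulls the published OSTree commit with the ostreesetup directive:

    # blueprint.toml -- what goes into the image
    name = "edge-server"
    version = "0.0.1"

    [[packages]]
    name = "podman"
    version = "*"

    # kickstart snippet -- deploy the published OSTree commit over HTTP
    ostreesetup --nogpg --osname=rhel --remote=edge \
        --url=http://images.example.com/repo --ref=rhel/8/x86_64/edge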
So remember, we said these are immutable images, layered and Git-like in their characteristics. I create them by defining a blueprint in Composer, and once they're created, I just put them in an HTTP location. Then my edge server can pick that image up over HTTP and boot it. At this point we have three artifacts: my blueprint for Composer, which is a text file; my kickstart file, which is just a text file; and the actual rpm-ostree filesystem tree. One nice thing about using kickstart is that you can have %pre and %post sections, like you had in traditional kickstarts, so if you want to drop in certain configurations specific to your use case, you can do that as well.

Image Builder is the Red Hat product around this. The cool thing about it is that you can drive it from the command line in a Git-friendly way, as we're describing in this presentation, but if you're a Red Hat customer, you can also get a GUI for it through the Cockpit administration console. And as you can see here, you can even refer to a previous commit, so you're layering that filesystem as changes come in. This is shipped and supported by Red Hat as a commercial product.

So, taking all the elements I've described so far, let's quickly look at this pipeline. As Rob mentioned, we have the southbound side, which is my edge nodes, and the northbound side, which all runs in Kubernetes and is my image build pipeline. Imagine this is a Kubernetes cluster, OpenShift and so on, with Tekton as the CI/CD pipeline behind it. Here's my Git repo, my SCM. What's in it? My text blueprint, very simple, describing how I compose my image, and my kickstart file, which is everything I need to actually deploy that image. As an operator, when I make changes to my code, it kicks off the pipeline. We don't want a static image builder sitting in our infrastructure, so we dynamically launch a VM, a RHEL system running OSBuild, that does the composing I described earlier using our blueprints. As Rob mentioned before, that Composer process is really just creating a filesystem tree from RPM packages; these could be Red Hat RPMs or third-party RPMs your software ships as. Then we download the image artifacts, archive the previous artifacts to Nexus, and publish: the rpm-ostree artifact is a tarball, which we upload to our HTTP location. There's a similar pipeline for the kickstart file: whether or not we changed the kickstart, it templates the kickstart file out for me, archives the previous version, and publishes the kickstart to my HTTP location. And the cool thing is that this doesn't have to be a single location that 2,500 or 10,000 edge nodes all hit. It can be distributed regionally. It's really as simple as an S3 bucket or whatever other HTTP location you have.
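The core of what that pipeline VM runs looks roughly like this (a sketch; the compose type name varies by release, e.g. rhel-edge-commit on RHEL 8.3 and edge-commit on later releases):

    # push the blueprint and start an OSTree compose
    composer-cli blueprints push blueprint.toml
    composer-cli compose start edge-server rhel-edge-commit

    # when it finishes, download the resulting tarball
    # (the compose UUID comes from 'compose status')
    composer-cli compose status
    composer-cli compose image <uuid>
    # then publish the extracted OSTree repo to the HTTP location the kickstart points at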
So it's pretty economical, and it can be geo-distributed fairly easily. That's all you need to get your edge node to boot, and the actual boot process can be done through PXE boot. Imagine I've just brought the device to the edge: I power it on, it's set to PXE boot, it picks up the kickstart file and the image artifact, and it fully builds itself with the latest version.

And to that point, there's the idea of over-the-air updates and the advantages of that. As we know, edge servers and edge locations may have intermittent or disconnected bandwidth, so we want some kind of lightweight way to update: we only transfer the delta layers of the image. Remember I said that as we update the image, we stage it? You stage those images locally on the edge device and then reboot into them, so we don't have to download a full multi-gigabyte image every time there's an update. This graph shows that: here I have a couple of nodes. The blue node already has this version, so it only needs the revision-three layer on top of it, whereas a brand-new build that has nothing gets the full image. Any comments you want to make so far, Rob? I'm going pretty fast.

Yeah. With RHEL 8.3 we have the new image capabilities: admins can now stage their updates so they consume less data and apply the updates on reboot, like you just said. What that also means is that I can choose the best time for a maintenance window and apply updates on my terms, something that maximizes uptime. Some things we've considered are using Performance Co-Pilot (PCP) to check bandwidth on the network for these low-bandwidth or intermittently connected devices, and then, based on that telemetry, choosing when to actually push and stage the updates. This gives us a little more control over how we're managing the edge devices.

It does. And another thing that's cool, and we're going to get into more detail about this in a second, is that the edge nodes can, once every 24 hours or at whatever frequency you want, check the update mirrors to see if there are new layers to the image and initiate a download and pull of that layer. So it's firewall-friendly as well. We're not pushing to the edge device, which would be difficult and error-prone; we're just updating the mirrors using the pipeline I showed earlier, and the edge devices check them every 24 hours and download and stage whatever is necessary. And that's exactly what I was describing here: it's a simple config file you change. Your update policy can be to stage, as I was describing, or to update as soon as there is an update, depending on what you prefer. It's just HTTP traffic. And the cool thing is, as I mentioned, you can host those updates at a local regional center, or even, if you have a rack of edge servers, maybe in that rack. If you have any questions so far, please put them in the Q&A; we're moving fairly fast, and some of this may be new to you all.
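The config file in question is the rpm-ostree daemon's configuration. A minimal sketch of the staged-update policy (per the rpm-ostree documentation):

    # /etc/rpm-ostreed.conf
    [Daemon]
    AutomaticUpdatePolicy=stage

    # then reload the daemon and enable the timer that does the periodic check/stage:
    #   systemctl reload rpm-ostreed
    #   systemctl enable --now rpm-ostreed-automatic.timer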
So just hit us with questions, no problem. And finally, this is the last part of what I was describing earlier. We want this image-based deliverable for our edge devices, we want to layer the updates as images, and, to Rob's point, we want to boot into an update during whatever outage window works for us. And what I don't want is uncertainty and risk, because if an update fails, I can't really go to that edge device, touch it, and fix it manually; that's completely prohibitive. The idea is that because I have these atomic, complete images, I boot into one version of the image, I do the update by rebooting, and I boot into the second version. Then we run a health check: an arbitrary script that can actually test your application, not just the health of the update process itself. After this image was updated, is my DNS server still responding? Whatever workload is running on that edge node, is it still functioning? If it is, then okay, I carry on and keep going. But if not, I can trigger an automatic recovery routine that brings me back to the previous known and trusted image.

So think of it this way: on boot-up, systemd runs some check services that are grouped together under a health target, and whether that target reaches success or failure helps determine what to do. If we rebooted after doing an rpm-ostree update and something failed, we can have it run "rpm-ostree rollback" and reboot on the failure, or retry and see if that fixes it. In either event, the goal is that if there's a failure, we can put the device back into a previously known-good state, and your edge device isn't taken offline just because an update failed. (We'll sketch an example of such a check script in a moment.)

Well said. So think of it as a generic health-check framework for systemd. Yep. And that concludes the image and update section of the presentation, so please post any questions you have so far. Again, the idea is that we're not using a traditional Linux image for this. We're using rpm-ostree and libostree to create what's essentially a fully deployable appliance: a filesystem image that's versioned and layered. We can create these using a GitOps pipeline, we can update them by just committing code to Git, and finally we can distribute them, pull them, update them, health-check them, and so on. The idea is that we're trying to solve the image build, image deployment, and image update process.

And another part of it is that we're also trying to optimize the network for these edge devices, recognizing that many locations have limited bandwidth or are sometimes disconnected. So we're looking at: how can we stage those updates? Can we create convenient maintenance windows to do them? Can we roll back to a prior known image if a failure happens? And one thing I have to add, Andre, is that we're talking specifically about edge here with regard to RHEL, but all of these features also exist for composing images for RHEL generally, not just RHEL for Edge. That's true.
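Here's the health-check sketch promised above, following Greenboot's convention of required check scripts whose failure marks the boot unhealthy (the DNS probe itself is a hypothetical example of an application-level test):

    #!/bin/bash
    # /etc/greenboot/check/required.d/01-dns-responding.sh
    # If this script exits non-zero, Greenboot marks the boot as failed;
    # after repeated failed boots, the system rolls back to the previous deployment.
    dig +short +time=3 @127.0.0.1 example.com > /dev/null || exit 1
    exit 0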
And one last comment: again, LF Edge. This is not the only way to do this; there are complete solutions under the Linux Foundation's initiative umbrella. This is really the learning process Rob and I went through to understand the individual components of an edge solution.

So let's take a quick run through logging, metrics, and observability. Now I've got my edge nodes deployed. How do I know they're up? How do observability and measurability come together here? Again, this has been a learning process. The idea is that on the edge device we need OS metrics: we need to understand how things are going from an OS perspective, and, as Rob was saying, even things like network performance, bandwidth, and how tied up we are for system resources. That's the first type of metric we wanted to get. But just as importantly, since we're deploying these application workloads as containers, we need pretty in-depth knowledge about the applications themselves.

The way we solved this in our demo and test environment was by using Performance Co-Pilot (PCP) for the OS metrics, and we'll explain why we chose that in a second, and Prometheus to get application-level metrics. Basically, we run Performance Co-Pilot as a container and the apps as containers; the apps are instrumented to provide /metrics endpoints, and Performance Co-Pilot also provides /metrics OS data. One of the challenges we had is that this could be a lot of different apps and a lot of different metrics, and we don't want Prometheus to have a ton of endpoints to scrape per edge device; we wanted to present a single endpoint per edge device. And that's where Performance Co-Pilot really worked for us: we were able to aggregate all the app data and all the OS metrics into a single Prometheus endpoint, because PCP can expose the metrics it gathers from all of these sources as an OpenMetrics-format Prometheus /metrics endpoint. So me, as a system administrator, as an operator, I can have a custom Grafana dashboard running on Kubernetes, on OpenShift, consuming that Prometheus time-series data. It can be persisted, so I can do retroactive analysis as well.

A quick run through Performance Co-Pilot; we wanted this talk to be accessible to everyone, not assuming any previous knowledge. Performance Co-Pilot has a ton of different agents, and the cool thing, and why we selected it for this project, is first that, as I mentioned, you can aggregate all of these different endpoints and expose the metrics as Prometheus time-series data, but also that it's lightweight. We've done tests on very light deployments with minimal bandwidth, and these agents are fantastic: they're mostly written in C, and they give great performance for the amount and richness of the data they provide. So basically pmcd, in our example, runs as a container on the edge device, which is here on the left, and it provides the endpoint, in our case, to Prometheus, but as an administrator you can also use some of PCP's other tools to chart, log, and manipulate this data in various ways.
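On the scraping side, each edge device then shows up as just one target. A minimal Prometheus sketch (hostnames are hypothetical; the port assumes PCP's pmproxy default of 44322 serving /metrics):

    # prometheus.yml (fragment)
    scrape_configs:
      - job_name: 'edge-nodes'
        metrics_path: /metrics
        static_configs:
          - targets: ['edge-node-01:44322', 'edge-node-02:44322']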
And here's a little bit more. I just want to give a shout-out to the folks who created the Prometheus/OpenMetrics PMDA, which lets us pick up Prometheus endpoint metrics from anywhere, from all our applications, with very simple configuration. We wanted this to be easy to maintain and understand as newcomers to the edge space, and it offered all of that. We can capture all the OS metrics and application metrics in a single fabric. And Prometheus is highly scalable: depending on how many edge nodes you have, you can run a federated Prometheus setup, but it scales well enough that in our test environment with 2,500 edge nodes, a single Prometheus was sufficient.

So let me add on to that for a second. Sure. Think of it this way: when we push things down to the edge, what we're really looking for is telemetry from the operating system, from the thing that runs the container (we'll talk about Podman in a minute, and we run containers within Podman), and from the actual application itself. All those things would normally have Prometheus endpoints, and they do. We can have a Prometheus exporter in the application running in the container, because even if the container is up and running, is the application running? Is Podman working correctly? Did we reboot; what's going on with the operating system? So we have all these Prometheus endpoints. If we were to expose all of those like you normally would in a Kubernetes environment, and anybody running Kubernetes knows you have multitudes of Prometheus endpoints being scraped for telemetry within your cluster, but from an edge perspective, we don't necessarily want to call in and scrape multiple Prometheus endpoints from multiple deployed applications on that edge device. So knowing we have all these Prometheus endpoints, could we bundle them all up using PCP, expose them to the outside world as a single Prometheus endpoint, scrape all that data, and maybe even filter that data down? In one example Andre and I have, we cut the data down by about 98%.

So what does this mean? A Prometheus endpoint is an HTTP endpoint, and that's an expensive operation if we have multiples of them. What we want is to hit one Prometheus endpoint, scrape the data for that entire stack, and bring back only what's absolutely necessary. Maybe some of that data goes to the NOC; maybe some of it goes back to help us make decisions about when to download and update the application, or have Ansible push the OS update. We're going to use that data for different reasons, so we want to filter it and make it as small as possible, and oftentimes, because of network issues, we want to limit the bandwidth and the amount of data going back and forth. So PCP, and what Andre's talking about with that PMDA helping us expose that single Prometheus endpoint, is an important thing to consider depending on how you're viewing the edge. Is it in the data center? Is it really a remote device someplace? Do I have network bandwidth considerations? You have to take all of that into account when looking at these different approaches.
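Wiring an application's endpoint into that single aggregated view is, per the PCP documentation, mostly a matter of dropping a URL file into the OpenMetrics PMDA's config directory. A sketch (the app endpoint is hypothetical, and the paths can differ by distribution):

    # register the app's /metrics endpoint with the OpenMetrics PMDA
    echo "http://127.0.0.1:8080/metrics" \
      > /var/lib/pcp/pmdas/openmetrics/config.d/myapp.url

    # install/refresh the PMDA so pmcd starts collecting from it
    cd /var/lib/pcp/pmdas/openmetrics && ./Install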
I think what we're showing here is all supportable right out of the box with RHEL today. We didn't implement anything that isn't; it's all open source, but it's also all supportable. Yep, if you're a Red Hat customer and you have RHEL in your environment, you can play with all of the elements we're describing today.

One last thing I want to say before we move on to the next section of the presentation: another reason we chose PCP, and it's a really cool reason, is that it has this awesome PMDA, a collection agent, for BCC, which taps into eBPF. Needless to say, if you're talking observability, and in general getting lightweight, powerful metrics you couldn't get before, eBPF is making a huge impact in the industry. And when we're collecting OS metrics, we can even collect eBPF metrics through BCC into the PCP PMDA. So that's another great reason. And Rob was mentioning the inference engine; this is another thing that's really cool. The Performance Co-Pilot project has a sub-feature called the inference engine, which can make choices and trigger actions based on thresholds or even complex combinations of metrics. So you could potentially trigger a podman run to re-download a container, or an Ansible run, or fire off an alert, things like that.

This summarizes a bit of everything we've talked about: the scalable backend services in Kubernetes, picking up data as a single endpoint from each one of the edge devices. And we chose this stack because it's simple, because it ships in RHEL, it's widely available, and everybody understands it. Again, the idea of skills and keeping your skill sets simple.

So now let's turn our attention to the last part of the presentation. We talked about image management and updates. We talked about observability and measurability. Now let's take a look at application management. With Podman, we really need a way to run the applications we're actually delivering down to the edge. At Red Hat, we use Podman, and I know many people are probably used to consuming Docker. Think of Podman as an OCI-compliant container engine. It's highly compatible, if not fully compatible, with the Docker API, but there are some major differences. Docker runs a client-server architecture, and Podman runs daemonless. When working with Docker, we consume the CLI, which communicates with the backend daemon. Under Docker, the main logic resides in the daemon, which builds images and executes containers, and that requires the daemon to have root privileges. Podman's architecture lets us run containers under the user that started the container, think of a fork/exec model, and that user does not need root privileges, so I can run rootless containers. The other advantage is that no user can see any other user's running containers, because I'm running daemonless and rootless. Since Podman is daemonless, each user can only see and modify their own containers, and there's no common daemon for the CLI tool to talk to. Think of it that way.
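A tiny sketch of that rootless model, run as an ordinary unprivileged user (the UBI-based web server image is just an example):

    # no daemon, no root: the container runs as a child process of this user's session
    podman run -d --name web -p 8080:8080 \
        registry.access.redhat.com/ubi8/httpd-24

    podman ps    # lists only this user's containers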
On top of that, we have additional tooling built around it. Unlike Docker, if I want more fine-grained control over how I layer container images, there's a tool called Buildah. Dan Walsh is the person here at Red Hat who helped create these projects, and he's from Boston, so an "ah" gets tacked onto the end of a lot of these tool names. I have to call out that Dan is really responsible for a lot of this and has done a fantastic job with Podman and helping us create these tools. Skopeo: how could I move an image securely from one registry to another registry? Skopeo is another tool that we use for that.

Go ahead, Andre, if you want to talk about it. No, no, you get started; I'll add some color. So think of it this way: we look for an update label on the image, and we can create a policy that pulls an updated image and runs it within Podman. This is a little different from what you may be used to working with in Docker, but it adds a lot of capability if we want to keep things up to date.

Yeah, it's similar to the idea we had for the OS image itself: we only have these layers, and they get pulled down in an automated fashion rather than pushed to the edge device. Same idea here. It's as autonomous as possible: we have the OS continually updated, but we also have the application workload continually updated. Podman is checking nightly at the local registry and, as Rob mentioned, we have the tools to manage distribution of those images to the regional registries, and it automatically updates itself based on policy as new versions of the workload get deployed. You'll notice we say "as a systemd unit" here: because Podman is daemonless, we can integrate with systemd, and the combination of systemd and Podman can make sure our application is continually up and running.

In a lot of scenarios, we're building the images on a Kubernetes platform and doing all our great development in a PaaS environment. In some cases, we're looking at putting Kubernetes at the edge, but that requires a lot of resources, whereas some of the solutions we're looking at require a lot less. We might have eight gigs of RAM on the high end and a two-core processor, and we have to run a bunch of applications. So what do we really want to do? We want to run the application. We don't need to schedule the application, move it from host to host, or do a bunch of the other things Kubernetes provides, but we still want the advantage of running applications in containers, from the development aspect all the way down to deployment, using the registry and everything else. So Podman fits a really cool niche, because we can run a container in a pod and use some of those cool design patterns. But really, for many edge applications, the application just needs to be up and running, and if it fails, it needs to restart. systemd in combination with Podman is a perfect fit for that. Well said.

And of course, where do these images come from? A lot of times people start off their container development by going to Docker Hub, and I'm not saying anything negative about where anyone stores their images, but what you want is a trusted source for your images.
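A sketch of that label-plus-policy pattern, using Podman's auto-update label and a generated systemd unit (the registry path is hypothetical):

    # run the workload with the auto-update label set
    podman run -d --name sensor-app \
        --label "io.containers.autoupdate=registry" \
        registry.example.com/edge/sensor-app:latest

    # generate a systemd unit so systemd keeps it running across failures and reboots
    podman generate systemd --new --name sensor-app \
        > ~/.config/systemd/user/container-sensor-app.service
    systemctl --user daemon-reload
    systemctl --user enable --now container-sensor-app.service

    # enabling podman-auto-update.timer then re-pulls the image and restarts
    # the unit whenever a newer image appears in the registry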
If you've tried to create your own images, you know that's a complex task, especially creating a base image. So what we provide is a universal base image that you can start from. Think of it as small, medium, and large: does it require systemd? Does it need RPM update capability, like DNF, inside it? How fat do you want that base image? Can we start from something that is securely created and scanned regularly? If there's an update to that base image, can we pull the update and have it flow into all of the images built from it? What Red Hat has created is called the UBI, the Universal Base Image. If you go to catalog.redhat.com or registry.redhat.io and search for UBI, you'll find a number of different kinds of base images to choose from. You can use these whether or not you're a Red Hat customer; you'll get support if you are, obviously.

I would encourage you to be very careful with how you choose your base images. Many of them are insecure or run with root access; if you're running them in Docker, you could get contamination from one container to another, maybe exposing yourself to security issues. Definitely look at your container image pipeline: how are you scanning it, and what are you adding when you layer your applications into those images? Always start with something from a known, trusted source, whether that's Red Hat, some other source, or something you created yourself.

Yep. As we're deploying these applications to Podman at the edge, have a really good, secure, trusted universal base image, so you don't end up with a very heterogeneous environment. A lot of what we're trying to do at the edge is have things look and be the same: we're creating this image-based deployment, so all the devices are at the same OS level, we're running Podman across the board, and now we're deploying these applications. We don't want all of these different versions and releases of packages. So whether or not you're a Red Hat customer: the Universal Base Image runs everywhere; use it. It's a good way to start and to standardize across your edge deployment.

One last thing: if you've used Red Hat images in the past, they used to require a subscription on the underlying node in order for you to use them. The Universal Base Image, the UBI 8 images, don't require that. So I would suggest you give it a try, give it a look. If you already have Dockerfiles pulling an image from someplace and you see there's a UBI equivalent, experiment with it, pull it down. You'll probably see there are different kinds, minimal footprint and larger footprint, depending on the needs you have.
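A minimal Containerfile sketch starting from the UBI minimal image (the application bits layered on top are hypothetical):

    # Containerfile
    FROM registry.access.redhat.com/ubi8/ubi-minimal

    # ubi-minimal ships microdnf rather than the full dnf stack
    RUN microdnf install -y python3 && microdnf clean all

    COPY app.py /opt/app/app.py
    CMD ["python3", "/opt/app/app.py"]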
That brings us essentially to the end. I hope you've been posting questions in the chat; we're going to have a little bit of time for Q&A. But before we go, we wanted to leave you with a couple of things. LF Edge, and I'm going to say it one more time: incredible work is being done by that community, sponsored by the Linux Foundation, under an umbrella of incredibly smart people. Again, our presentation was really our learning process as individuals, looking at the building blocks of an edge solution in terms of image, container, and so on. What these folks have done is bring very complex, mature, and powerful solutions to a number of different dimensions of the edge challenge, so please go there and check it out.

And a few more resources. Rob, do you want to take us through these quickly? Sure. Ben Breard, who's listed there first, these are all existing videos on YouTube. Ben Breard is on the RHEL edge team we work with regularly. In addition, there are really seven great videos here that walk you through specifically how to create images, install them, and then deploy them, doing all the things we talked about today. Each video is probably less than fifteen minutes, and the material is really good. I would suggest starting there; you can do it in a VM on your own machine and get started today. That's true. Whether or not you're a Red Hat customer, you can get a Red Hat developer account, which is free, and essentially build everything we just talked about; some of these videos walk through that process.

I think we have one question in the Q&A. Let me see, three questions. "You said it will boot into PXE and update. What will it boot into? What server configuration is required for this? What edge node preparation is required so that it gets into PXE versus the installed OS?" Great question. It's really, really simple. Imagine this: we talked earlier about edge servers going into a small closet somewhere in a manufacturing facility. Say I have a new factory that I want to enable with this solution, and a couple of devices that are pre-configured for me. All I would do, in the BIOS of that physical device I'm delivering to the factory, is set it to boot not from the hard disk but via PXE. Then, within the network that device plugs into, there's a DHCP server with a boot server record that points it to the PXE boot path. From there, it automatically downloads the kickstart from my HTTP location; that kickstart points to where the image is; the image gets downloaded and laid down as the operating system, and there you go, to all intents and purposes.

One thing to add there, Andre: the initial image may be larger, but the subsequent images, if you're using rpm-ostree, are actually deltas, so they're much smaller than traditional RPM updates. That's right. And in your PXE boot configuration, the first time you boot the server, and this is an important part that I would be remiss not to mention to the person who asked: my DHCP server points at my PXE boot server; the PXE boot server gives me the kickstart and the image; I boot into it, and it gets installed onto the hard disk of my device. Then, in my PXE boot server, once the device has booted once, the next time it tells it to boot from the hard disk, not from PXE. This is something Red Hat supports out of the box in our PXE boot setup. So the next time I boot, I'm booting from my hard disk.
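For the person who asked about server configuration, the DHCP side is just the classic PXE pair of options. An ISC dhcpd sketch (addresses and boot filename are illustrative):

    # /etc/dhcp/dhcpd.conf (fragment)
    subnet 192.168.10.0 netmask 255.255.255.0 {
      range 192.168.10.100 192.168.10.200;
      next-server 192.168.10.5;    # the PXE/TFTP boot server
      filename "pxelinux.0";       # boot loader handed to PXE clients
    }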
And as Rob said, on that next boot I can pick up the new version of the image if there is one and boot into it. So the business value, for the person who asked the question and for everybody else, and this is a really cool part of the solution, is that I really don't have to do anything to these physical devices before I deploy them to the edge site. Literally nothing. When I buy those devices from Dell, from HP, or from wherever, all I ask of my OEM is that the BIOS be configured to boot from the network the first time. Then, on the network at the location where I'm deploying that physical device, I configure DHCP to point to a PXE boot server. That's really all it is; all the configuration happens at the network for that area, and there's nothing special I need to do to the box. I can bring one device or two devices, and, importantly, from a skills standpoint, the person deploying these edge devices doesn't have to know anything about Linux. They don't have to be a deeply skilled engineer; they're literally putting that box there, plugging it into the network, and booting. And every other aspect of what we described works more or less the same way: the OS images are updated in an automatic fashion, and Podman and the container images are updated in an automatic fashion. So from day one, when I first deploy that edge server to my factory in my example, to forever, I never really have to touch it, unless something disastrous goes wrong with my hardware. And in that case, I would just take a new piece of hardware and plug it in where that other one was.

Let me see if we have more questions. Can you post them in the chat window? I just responded there, Andre, that everyone will get a PDF of the presentation, and that has the links in it. Okay, I see some of the questions have been answered already. I think we're doing fine on time. We're good; I think we covered it. To everybody who stayed on to the end, we really appreciate it. Massive respect to all of you for learning, teaching yourselves, and coming with us on our learning journey; this is what Rob and I have been doing for the past few months. Don't forget to go to LF Edge. Rob, any final words from your side? No, I'm just responding to someone who's asking another question here. But thank you very much for joining us today, and we really appreciate any feedback that you have. Yeah. Wonderful.

Okay, thank you so much, Andre and Robert, both of you, for being here and leading us through this discussion today, and thank you to everybody who participated. As a quick reminder, the recording will be on the Linux Foundation YouTube page later today. And unless you have anything else to add, I think we're good. Yeah, feel free to email me and Rob if you want to follow up. If this is something your company might be interested in, as we mentioned, Red Hat can help you with that; or if you just want to talk about the open source components, we're open to that conversation as well. Okay, wonderful. Thank you so much again, everyone.