I'm Ben Breard, on the product management team here at Red Hat. I've spent the past five years operating in between RHEL and OpenShift, covering all of our immutable operating systems, container runtimes, image offerings, and things in that world. And now I'm really focused on edge and what it takes to be successful there. I just hit 10 years at Red Hat, so that's awesome. I have the greatest job in the world: I get to play with all the fun stuff and work with all these super cool use cases from customers. And yes, I at one time had an RHCA; one day in the future I hope to update that, but we'll see how this goes.

OK, so what is edge? Well, for everyone's edification: yeah, that definition is terrible. Anyway, we'll keep moving here. Basically, we now have a situation where we have sources of data all over the place, and there are all kinds of use cases to capitalize on this data more quickly, faster, and more efficiently. In some cases, we've reached the limits of where we can actually ship data to a cloud or data center environment to process it. So we put compute closer to these sources of data. Sometimes those sources are humans, sometimes they're sensors; it can be all types of use cases. But that's typically what we're talking about with edge. And of course, as you can probably imagine, all of the things we take for granted inside a data center or cloud environment are now luxuries on the infrastructure side that may or may not exist at the edge. That's a key thing. Another point I'd make is that depending on what industry you're in, edge is going to look completely different. Sometimes it will look and feel like a remote office, maybe. It's totally different for telcos and content providers and so forth, and in the industrial manufacturing space it looks quite different as well. Every industry has its own take, so it's good for us to look at this from a holistic view, not just from one industry, and realize that it varies quite a bit.

If we step back and look at common challenges here, scale is often the first. In a data center, it's pretty common to look at deployments in the tens of thousands, approaching hundreds of thousands. At the edge, though, these numbers often start in the millions range. You can imagine that when you try to perform a task on a million devices, you start hitting limits of what you can do with protocols and things of that nature, so we have to look at the scale problem very differently in this world. In terms of interoperability, the edge right now is kind of like the Wild West in a lot of ways: what hardware is available, what accelerators are being used, if any. Oftentimes there's an existing legacy footprint, and newer systems have to interact with it. Or maybe they're interacting with hardware like PLCs on a manufacturing line that can't be stopped because of safety protocols. It's a lot of moving pieces that will continue evolving and adapting over the next few years. And sticking with the manufacturing case, at a very low level there's not a lot of commonality between the vendors and the types of communication we see. So it's a complicated matrix of layers of technology that intersect here.
And of course, from a consistency perspective, basically keeping things updated, we often talk about the convergence of operational technology (OT) with IT networks and systems. That sometimes creates organizational tension, but it also creates a huge opportunity for us to solve problems and really work with customers to make sure we can meet requirements and have these systems converge over time.

All right, if we look through an application lens here, you'll see all the traditional cloud-native stuff and a lot of the AI and ML stuff. This slide should really say OpenShift and RHEL on it, because those are the types of applications we see being super relevant in the edge space. There's already a lot of traditional footprint out here; like I said, RHEL specifically has been doing edge computing long before we had that word, or that label, as an industry, so there's already a sizable footprint out there for that. We do see most of the growth happening on the cloud-native side, which could be repackaging of existing applications as well as literally forklifting workloads from other environments and putting them out at the edge. And the AI and ML side is obviously growing really, really fast, whether you're training models or just executing them close to sensors, for example, or doing inferencing on a webcam, these types of things. We basically see the OS, RHEL specifically, as a great fit for a lot of these use cases.

All right, so let me put this in context. We've got a lot of cool deployments of OpenShift in an edge context: we can now do smaller three-node clusters, remote worker nodes landed in 4.6 and much, much more progress has been made there, and the single-node work is on the way. But today, just to put this in context, we're talking about what you can do with just Linux. I've had a couple of people come up with this term: we have K8s, and there's K3s, which is the slimmed-down Kubernetes. Think of a lot of this as "K0," right? What can I do with container runtimes and the OS? It turns out you can actually do a heck of a lot here.

Again, going back to the trends we see happening: a lot of times you'll see edge computing connected somehow with some type of digital-transformation initiative. IT/OT convergence is another huge thing going on, or really people just trying to make better use of analytics on top of data, either to get smarter from a competitive standpoint, improve customer experience, or just increase operational efficiency. All of these tend to be the bigger trends people are going after. I talked a little bit about the verticals already, so I'm not going to rehash that. But what can you do with a standalone OS? At this point it's really just a container host, for cases where the concept of a cluster doesn't actually add value. These independent systems that just keep going once you put them in motion, that's a really, really common use case here that we can solve really well, particularly on smaller-footprint devices, whether that's small edge servers or a gateway that's really just passing packets back and forth. I talked a little bit about computer vision a second ago, where we're doing inferencing on a feed coming into the system, trying to identify what's happening on it, and making decisions from that.
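Just to make that standalone container-host idea concrete, here's a minimal sketch of that kind of workload: a single inference container talking to a local webcam, with no cluster anywhere in sight. The image name and registry are hypothetical, purely for illustration.

```bash
# Run a (hypothetical) computer-vision container against a local webcam.
# --device passes the camera node through to the container; --restart=always
# restarts the process if it crashes. No orchestrator needed: one box,
# one runtime, one workload.
podman run -d --name vision \
  --device /dev/video0 \
  --restart=always \
  registry.example.com/edge/vision-inference:prod
```

For restarts that survive a reboot, you'd wrap this in a systemd unit, which is exactly where this talk is headed next.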
Kiosks are still a huge thing; in the transportation space particularly, we still see huge investment happening there. And of course, we see the classic IoT use case rolling up under edge as well.

All right. With RHEL, in the next update, which will be 8.3 (we're on a time-based release model now, so that's the November release), we have these four things landing, which represent our first step in the journey of adapting RHEL for edge. We're not finished with this release; again, this represents that first step for us. Effectively, we have this tool called Image Builder, and we'll go over it in more detail, but basically we can generate pretty small-footprint operating system images that can be purpose-built for a particular piece of hardware, use case, or workload, or built as a generic container host. And then we get a whole bunch of benefits because we're using rpm-ostree in the background, which makes it super easy to update, super efficient with those updates over the wire, and we have some cool technology that will help us roll back if we need to. We'll take a look at those in more detail.

But before we get into some demos and other things, I want to talk a little bit about running containers with traditional workloads. I mentioned earlier that there's a pretty sizable legacy footprint in a lot of these edge use cases, and basically there's no technical reason why we can't just drop containers in next to traditional daemons running on our Linux system. It works great. Now, if you need to orchestrate and do fancy things with those containers, you'll probably hit the limits of that pretty quickly, and that's obviously where Kubernetes has a massive amount of value. But if this is more of a static-workload case, this is pretty simple to do, and it works really, really well. One thing I do want to point out is that in RHEL 8, we make it easy to give regular daemons running on the system the same kernel primitives that give you container isolation. In talking with people, I find that a lot of them don't know this: we can easily add all kinds of namespaces, seccomp filtering, these types of things. There's really a list of about eight line items you can put into the unit file that starts an app, and it will give you a very, very similar type of isolation; I'll show a sketch of that in a second. That's super cool when you consider how connectivity is increasing and how important security is, and will continue to be, in the future.

OK, so from a container-runtime perspective, we are talking more about the Podman side of the house right now. Hopefully everybody here is familiar with the difference between CRI-O, which is meant to talk to the Kubernetes CRI, and Podman, which uses basically all the same underlying components but is standalone: it has a CLI, and now a Docker-compatible API in the version coming out in 8.3. It's just a super lightweight runtime that works incredibly well. One thing we like about it for this use case is that we have much, much better integration between Podman and systemd than we ever had in the Docker world.
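Here's that sketch: a minimal example of the kind of unit-file hardening being described, using standard systemd directives. The service name and binary path are hypothetical, and the exact set of directives you'd pick depends on the workload; see systemd.exec(5) for each one.

```ini
# /etc/systemd/system/myapp.service -- a hypothetical daemon hardened with
# the same kernel primitives (mount namespaces, seccomp, etc.) that give
# containers their isolation. systemd doesn't allow inline comments, so
# the directives are listed bare.
[Unit]
Description=Example daemon with container-like isolation

[Service]
ExecStart=/usr/local/bin/myapp
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
NoNewPrivileges=yes
SystemCallFilter=@system-service

[Install]
WantedBy=multi-user.target
```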
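And to make the Podman-plus-systemd point concrete, here's a minimal sketch of wiring a container into systemd so the OS treats it like any other service. The image name is hypothetical; `podman generate systemd` is the real mechanism.

```bash
# Create the container once, then let Podman emit a unit file for it.
# --new makes the unit create a fresh container on every start.
podman create --name myapp registry.example.com/edge/myapp:prod
podman generate systemd --new --name myapp \
  > /etc/systemd/system/container-myapp.service

# From here it's a normal service: enabled at boot, restarted on failure.
systemctl daemon-reload
systemctl enable --now container-myapp.service
```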
And so again, going back to that static-workload model, this makes it super easy to have container images that auto-restart; the system knows how to run them just like any other service. Podman is really great for that. So we've got the new API coming in 8.3, and then another thing that makes this whole model really, really nice is auto-update based on the tag on your images. This is something that's technically going to land as tech preview in this release. If you're managing container life cycles at your registry, which you should be doing (everybody should be doing that), and you want a certain tag to land on a certain set of boxes, maybe phasing things in, or having all of them pull the prod application, we can basically have timers on these endpoints that check that tag at whatever interval is appropriate and automatically pull the image as new ones are made available on the registry. Little features like that make life super simple, and easy to scale, because these are all client-initiated actions. That's nice.

OK, so let's talk a little bit about what we're getting in 8.3. I mentioned Image Builder is kind of the front door of this. It's made available via the Cockpit UI; there's also a CLI and an API. But really, you log in and in approximately four clicks you're going to get the default image. If you need to customize it with RPM content, you can. Here I'm going to include crun; I really enjoy crun because I like using cgroups v2, and it's super fast at instantiating containers as well. So here we'll just go ahead and commit this to the blueprint. Again, you don't have to do this: by default we'll give you everything you see here, a small core install with our container tools, as well as some goodies that we'll look at in the next couple of slides. But you see, I just select the image type, RHEL for Edge, and commit, and this is going to generate an rpm-ostree commit that we can then serve out from a central place. Again, this is what gives us that remote update capability. That's it; it kicks off the build right here, and we can see it going. It happens pretty quick: on my junky laptop, this completes in, I don't know, seven or eight minutes. If you're running on good hardware, expect faster results.

All right, so let's talk about mirroring and providing these updates. Again, we create the initial image the way you saw, and we create updates for these images using that same process. One thing you have to understand is that you are now driving these boxes: you drive the update cadence, and you have fine-grained control over everything that happens on these systems. Once these updates are created, we can put them on any type of web server. So you can use just an Apache box, or host them from a container somewhere; if you're going big in a prod environment, please use a CDN of some type, depending on what kind of load and number of nodes you have. And then for these last two points, we've itemized a couple of the configs on your systems that control where they look for this web endpoint. The last one is one of my favorite things: if a new update is published, we will automatically pull it down on the nodes. I'll pick that up in a couple of slides, on how we actually take the update and accept it.
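Going back to the tag-based auto-update for a second, here's a minimal sketch of what opting a container in looks like. It builds on the unit-file sketch from earlier; the image is hypothetical, and note that the label value has varied across Podman versions ("image" in the early tech-preview releases, "registry" in later ones).

```bash
# Recreate the container with the auto-update opt-in label, then
# regenerate its unit file as in the earlier sketch.
podman create --name myapp \
  --label io.containers.autoupdate=image \
  registry.example.com/edge/myapp:prod

# Enable the bundled timer that periodically re-checks the :prod tag at
# the registry and restarts the service when a newer image appears.
systemctl enable --now podman-auto-update.timer
```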
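And while we're at it, the four-click Image Builder flow from the Cockpit UI has a CLI equivalent that looks something like this sketch. The blueprint name is hypothetical; `rhel-edge-commit` is the RHEL for Edge image type in 8.3.

```toml
# edge-node.toml -- a minimal blueprint adding crun on top of the defaults.
name = "edge-node"
description = "RHEL for Edge container host"
version = "0.0.1"

[[packages]]
name = "crun"
version = "*"
```

```bash
# Push the blueprint and kick off a RHEL for Edge commit build:
composer-cli blueprints push edge-node.toml
composer-cli compose start edge-node rhel-edge-commit
```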
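As for those client-side configs, here's roughly what they look like on the device; this is a sketch with a hypothetical mirror URL. The first file points the ostree remote at your update server, and the second tells rpm-ostree to automatically download and stage new commits in the background.

```ini
# /etc/ostree/remotes.d/rhel-edge.conf -- where the box looks for updates
[remote "rhel-edge"]
url=http://updates.example.com/repo
gpg-verify=false
```

```ini
# /etc/rpm-ostreed.conf -- have updates pulled and staged automatically
[Daemon]
AutomaticUpdatePolicy=stage
```

```bash
# Pick up the config change and enable the periodic check:
rpm-ostree reload
systemctl enable --now rpm-ostreed-automatic.timer
```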
But it's super easy to stage these updates locally on the system. OK, so let's take a look. I am using the terminal in the web console, and I wrote a little script so I wouldn't fat-finger things for everybody. If I check the status of that image build, I can see node zero is finished here. `composer-cli compose image` really just downloads the result and gives me an artifact I can work with, and that happens super fast since it's local. So I now have a tar file with that rpm-ostree commit locally, and now I'm going to build a web server container to host my commit. I'll spit out the Containerfile here; you can see it's super simple: give me Apache, extract the tarball that I made in Image Builder, and then go ahead and serve it. No magic at all is needed here. So I just go ahead and build that image; it happens quickly because it was already built on this node. And then I'm just going to bind it to port 8000. I'm running this particular one rootless; again, there's no requirement to do that, but it's a good proof point that you can host these any number of ways you want. And then once that's going, I'm just going to curl the latest ref of the commit. If you've used rpm-ostree or looked at it, you'll know that it's modeled after Git, so a lot of those same ideas and concepts that you're probably familiar with from Git, rpm-ostree basically leverages.

All right, so once you have made an image from Image Builder, that's one easy way to serve it up. Now let's talk about the updates themselves. Day one is pretty simple; it's the ongoing updates where this gets interesting. A lot of edge environments, some of them, have amazing data-center-style networks that are super fast, and efficiency at this tier is nice to have but maybe not a requirement. But in some environments we have just horrible connectivity: microwave links that make old dial-up modems look fast. In retail, we still see fractional T1s and these types of things. What's cool is that even if you have constrained networking, this now makes it possible to update those devices, because it's much, much more efficient: we only send the delta of the update over the wire. If you generate what's called a static delta, you can actually pull it with much, much less TCP overhead, which is just great and increases that efficiency further. But even if you have really great connectivity and bandwidth isn't a scarce resource, you still probably want to be using it for your applications and workloads, not OS updates and these types of things. So having that efficiency really helps regardless of what type of infrastructure you have. And again, this is just a great side effect of using rpm-ostree natively for all of this.

Now, provisioning. If you're familiar with RHCOS, you may be wondering: why is this not Ignition? Well, we're looking at Ignition, and we may include that in the future as an option; we're certainly open to that. But right now, with these devices we see this whole gamut of hardware that has all kinds of weird requirements, and Anaconda works incredibly well for fitting this rpm-ostree commit onto those systems. So Anaconda just makes this really, really easy today.
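By the way, the throwaway mirror container from the demo a moment ago looks roughly like this. It's a sketch with assumptions: Red Hat's ubi8/httpd-24 Apache image (which serves /var/www/html on port 8080), and a downloaded artifact named commit.tar that unpacks to a top-level repo/ directory.

```dockerfile
# Containerfile -- extract the Image Builder tarball into Apache's docroot.
FROM registry.access.redhat.com/ubi8/httpd-24
COPY commit.tar /tmp/
USER 0
RUN tar -xf /tmp/commit.tar -C /var/www/html/ && rm /tmp/commit.tar
USER 1001
```

```bash
# Build it, run it rootless on port 8000, and curl the latest ref
# (the ref path is an assumption based on how ostree repos lay out refs):
podman build -t edge-mirror .
podman run -d --name mirror -p 8000:8080 edge-mirror
curl http://localhost:8000/repo/refs/heads/rhel/8/x86_64/edge
```

OK, back to provisioning.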
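And this little kickstart example really does just have a bare-bones top section. Here's a sketch of what it might look like; the URL, ref, and password are all placeholder values.

```text
# A minimal RHEL for Edge kickstart: ordinary setup up top, then
# ostreesetup in place of the usual %packages section.
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart
rootpw --plaintext changeme
reboot

# Deploy the commit from your mirror; pointing at the production mirror
# means the box knows where to find updates from day one.
ostreesetup --osname=rhel --remote=rhel-edge \
  --url=http://updates.example.com/repo \
  --ref=rhel/8/x86_64/edge --nogpg
```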
And then really, instead of having a %packages section where you would normally list out all the content to install, we just use the ostreesetup command and point it at your mirror. If you point it at your production mirror, then once you deploy the system, it's going to know where to look for updates automatically, so that's probably a good thing to do if you can. And that's all you need to do. You can do any customization in %post if needed; %pre is still there too. But a good rule is to keep these as simple as possible. OK, so that's how we can easily get the commit onto your devices.

Now a little bit about rpm-ostree. This really gives us the best of both worlds. Think of traditional embedded-type firmware, like your router, which may have an A/B partition scheme on it: we blend that A/B update model with the benefits of a package-based distribution, which is nice. One of the key things here is being able to adjust for change that happens in your environment over time, and this model is super easy to adapt over time, which is really powerful. So again, we get the benefits of the A/B model, where we can roll back if necessary, as well as the flexibility of packages, which is great. Really, everything of the operating system that lands under /usr gets swapped out on update; each commit contains the full OS. Once you pull that into your repo and clone it locally, we only send a delta over the wire, but you get the full commit locally, so you can move the device to either state. We do maintain state in /var and /etc, so the whole operating system is not technically immutable in the strictest definition possible. And that's not a bad thing, because true immutability often requires a significant amount of infrastructure to be available, and that's not something we can count on in these environments. So maintaining your configs and container images and these types of things is generally a really healthy and convenient thing to do. And we always get a known state that we're operating in on the system, which is powerful.

I mentioned earlier that we can automatically stage these updates in the background, and that's a great way to approach it. That's probably what I would do, but what you should do depends on your environment. Then, whenever an update is staged, you typically want to align to a maintenance window. Again, a lot of these systems are responsible for critical infrastructure and can't just accept reboots freely, the way we would expect in a cloud environment. It's pretty easy to schedule reboots with a systemd timer (a sketch follows below), or you can use any type of management system. Once you have a scheduled reboot, when the boxes come back up, they'll be on the next update automatically. So that's how that works: updates will cost you a reboot. However, as long as you time it, accepting a reboot is typically less disruption, and less potential unknown disruption, than updating a live running system and making changes on the fly. This is a really good model.
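Here's that sketch: one minimal way to align reboots to a maintenance window with a systemd timer. The window (3 AM Sunday) is purely illustrative, and for extra safety you could have the service check `rpm-ostree status` for a staged deployment before rebooting.

```ini
# /etc/systemd/system/maintenance-reboot.timer
[Unit]
Description=Reboot into any staged update during the maintenance window

[Timer]
OnCalendar=Sun *-*-* 03:00:00

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/maintenance-reboot.service
[Unit]
Description=Reboot to apply staged updates

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reboot
```

Enable it with `systemctl enable --now maintenance-reboot.timer`, and anything rpm-ostree has staged gets picked up at the next window.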
Okay, so this last little screenshot is to give you a look at rpm-ostree on a system, if you haven't played with it before. I'm going to SSH into a bare-metal system I'm running; again, I'm using the web console terminal to do this. I'm running a container that's sucking the four cores of this little box dry, and if I check the status, I'm running a single commit: this box has just been provisioned from Image Builder. Now I'm manually triggering an update, because I don't want to wait for the timer to do it automatically for me. This particular update just pulled in rebuilt container-tools packages. And when I check the status, we can see that I have a new deployment here that is staged and not running on the system. Of course, my workload has not been interrupted at all; it's still going. Now I'm forcing a reboot just to move through this really quickly. You can see how I got impatient and tried to SSH in before the system came back. I check the container runtime, and of course my application is running as expected, because it is being managed by systemd. And we can see here that the asterisk has moved up and I'm on my new update. Now again, this model is familiar if you've been using things like Atomic Host in the past, or maybe Silverblue in Fedora; RHEL CoreOS uses this model as well. So hopefully this makes sense to everybody.

Now, this last thing is what's new and unique to what we have in RHEL. Basically, this technology is greenboot, and it's the first time, that I'm aware of, that we can have custom health checks for applications running on the system tied into OS updates. So let's say my node has three critical things it needs to do. I can basically write scripts for those, and greenboot gives us a framework to run those scripts, integrated with the boot process of the system, with a retry counter; it's as flexible as you need it to be. If an update causes one of the critical roles of that node to fail, we can revert the state of the system and go back to where it was working, right? (A sketch of what a health check looks like follows below.) So this is super, super powerful, and we're really excited to have that linkage between the workloads and the operating-system update level. One of the customers we worked with really closely on this capability told us that the combination of rpm-ostree and greenboot is going to save them millions of dollars across their deployments. Once these systems get provisioned at the edge, the goal is to never go back and revisit them physically, so you can imagine how having this type of safety mechanism in place is a big deal for a lot of people living in this space.

All right. So with that, again, I mentioned 8.3 as the first step in our journey of meeting the challenges of this space. We do see the security story of RHEL being a huge value to edge deployments. In fact, I would say all of the challenges we talked about earlier in this talk live on top of the security concern, right? Because again, in the data center, things tend to be physically protected: we have cages, we have badges. Edge systems may or may not have any of that. So being able to promise that same level of security without physical protection is huge, and RHEL has a huge value proposition there today, one that's going to get even better in the future. The other thing around edge in general is that we see complexity as a huge challenge for really any IT project, and we see RHEL in particular solving a lot of the complexities in this space.
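Here's that greenboot sketch. A health check is just an executable script dropped into /etc/greenboot/check/required.d/; scripts there must succeed for a boot to be declared healthy, and if they keep failing across the retry counter, the system rolls back to the previous deployment. The service name and health endpoint below are hypothetical, carried over from the earlier sketches.

```bash
#!/bin/bash
# /etc/greenboot/check/required.d/01-check-workload.sh
# Exit non-zero if one of this node's critical roles is broken; greenboot
# treats repeated failures as a bad update and triggers the rollback.

# Is the containerized workload (from the earlier sketches) running?
systemctl is-active --quiet container-myapp.service || exit 1

# Does the application answer on its (hypothetical) health endpoint?
curl -fsS http://localhost:8080/healthz > /dev/null || exit 1

exit 0
```

Mark it executable (`chmod +x`) and greenboot picks it up on the next boot.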
So hopefully, with the little example and these features we've brought through in 8.3, you can see that if I need to put relatively static applications on some smaller devices (or big servers, it doesn't really matter) and just maintain them at a steady state and make sure they're updated, this is a super simple way to go meet that need and be successful. And of course, why not do that by leveraging the existing investments in people, skills, and technologies that folks know and love from Red Hat? That's a key value prop in everything here.

And so at this point, I guess everybody's probably going crazy thinking, oh my gosh, I've got to get my hands on this and try it and go conquer edge in my environment. So one: you've come to the right conclusion. Two: it's super easy to go do this and get your hands on it. If you go to the osbuild GitHub repo, we have this whole thing documented out there and you can walk through it. It's super simple; you can just do it in a couple of VMs if you like, really however it makes sense, and it'll take you anywhere from 20 to 40 minutes depending on your setup. And of course, we'd love to get feedback from you and hear what you think. Again, this will GA pretty soon, when 8.3 hits the streets, and we will update this demo to reflect that. And with that, I guess that's our look at how we are adapting Red Hat Enterprise Linux for the edge. I appreciate everybody's time and being here. Ciao.