I'm on the product management team here at Red Hat. I've spent the past five years operating between RHEL and OpenShift, covering all of our immutable operating systems, container runtimes, image offerings, and everything in that world, and I'm now focused on edge and what it takes to be successful there. I just hit ten years at Red Hat, and I have the greatest job in the world: I get to play with all the fun stuff and work with all these super cool customer use cases. And yes, at one time I held an RHCA; one day I hope to update that, but we'll see how this goes. Okay, so what is edge? Basically, we now have sources of data all over the place, and there are all kinds of use cases for capitalizing on that data more quickly and efficiently. In some cases we've reached the limits of how much data we can actually ship to a cloud or data center environment for processing, so we put compute closer to the sources of data. Sometimes those sources are humans, sometimes they're sensors; it spans all kinds of use cases, but that combination is typically what we mean by edge. As you can probably imagine, all of the infrastructure we take for granted inside a data center or cloud environment becomes a luxury that may or may not exist at the edge. That's a key point. Another point: depending on what industry you're in, edge looks completely different. Sometimes it looks and feels like a remote office; it's totally different for telcos and content providers; and in the industrial manufacturing space it looks different again.
Every industry has its own take, so it's good to look at this holistically rather than through the lens of one industry, and to recognize that it varies quite a bit. Stepping back to the common challenges, scale is often the first one. In a data center, it's common to see deployments in the tens of thousands, approaching hundreds of thousands. At the edge, the numbers often start in the millions. Imagine trying to perform a task on a million devices: you start hitting the limits of what protocols can do, so you have to look at the scale problem very differently in this world. In terms of interoperability, the edge is like the Wild West in a lot of ways: what hardware is available, what accelerators (if any) are in use, how newer systems interact with an existing legacy footprint, or with hardware like PLCs on a manufacturing line that can't be stopped because of safety protocols. There are a lot of moving pieces that will keep evolving and adapting over the next few years. Sticking with the manufacturing case, at the lowest levels there's not much commonality between vendors in the protocols and types of communication we see, so it's a complicated matrix of intersecting technology layers. And then from a consistency perspective, meaning keeping things updated, we often talk about the convergence of operational technology (OT) with IT networks and systems.
That convergence sometimes creates organizational tension, but it also creates a huge opportunity to solve problems and work with customers to meet their requirements as these systems converge over time. All right. Now, through an application lens, you'll see all the traditional cloud-native workloads and a lot of AI. This slide should really say OpenShift and RHEL on it, because those are the types of applications we see being most relevant in the edge space. There's already a sizable traditional footprint out there; RHEL specifically has been doing edge computing since long before the industry had that label. We see most of the growth happening on the cloud-native side, which can be repackaging of existing applications as well as literally forklifting workloads from other environments out to the edge. And the AI/ML side is obviously growing really fast, whether you're training models or just executing them close to the sensors, for example doing inferencing on a webcam feed. The OS, RHEL specifically, is a great fit for a lot of these use cases. To put this in context: we have a lot of cool deployment options for OpenShift at the edge. We can now do smaller three-node clusters, remote worker nodes landed in 4.6 and much more progress has been made there, and single-node is on the way. But today we're talking about what you can do with just Linux. I've heard people frame it like this: we have k8s,
and there's K3s, the slimmed-down Kubernetes; think of a lot of this as "K0": what can I do with container runtimes and an OS alone? It turns out you can do a heck of a lot. Going back to the trends we see: edge computing is often connected to some kind of digital transformation initiative. IT/OT convergence is another huge thing going on. Or people are simply trying to make better use of analytics on top of their data, whether to get smarter competitively, improve customer experience, or just increase operational efficiency. Those tend to be the bigger trends people are going after. I've already talked a bit about the verticals, so I won't rehash that. So what can you do with a standalone OS? At this point it's really a container host for cases where the concept of a cluster doesn't add value: independent systems that, once you put them in motion, just keep going. That's a really common use case we can solve well, particularly on smaller-footprint devices, whether edge servers or gateways that are just passing packets back and forth. I talked a little about computer vision a second ago, where we do inferencing on a feed coming into the system, identify what's happening in it, and make decisions from that. Kiosks are still a huge thing, particularly in the transportation space; we still see major investment happening there. And of course, the classic IoT use case rolls up under edge as well. All right. So with the next RHEL update, 8.3 (we're on a time-based model now, so that's the November release),
we have four things landing that represent our first step in the journey of adapting RHEL for edge. We're not finished with this release; again, it's a first step. Effectively, we have a tool called Image Builder, which we'll go over in more detail. With it we can generate fairly small-footprint operating system images that can be purpose-built for a particular piece of hardware, use case, or workload, or serve as a generic container host. We get a whole set of benefits because we're using rpm-ostree in the background, which makes updates easy and very efficient over the wire, and we have some cool technology that helps us roll back if we need to. We'll look at those in more detail. But before we get into demos, I want to talk a little about running containers alongside traditional workloads. I mentioned earlier that there's a sizable legacy footprint in a lot of these edge use cases, and there's no technical reason we can't just drop containers in next to traditional daemons running on a Linux system. It works great. Now, if you need to orchestrate and do fancy things with those containers, you'll hit the limits of that pretty quickly, and that's obviously where Kubernetes has massive value. But for a more static workload, this is pretty simple to do and works really well. One thing I do want to point out is that in RHEL 8 we make it easy for regular daemons running on the system to get the same kernel primitives that give containers their isolation. In talking with people, I find that a lot of them don't know how easily we can add all kinds of namespaces to services installed on the system.
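As a rough illustration (the service name and path here are made up, but the directives are standard systemd sandboxing options), a handful of lines in a unit file gives an ordinary daemon much of that container-style confinement:

```ini
# /etc/systemd/system/myapp.service — hypothetical legacy service
[Unit]
Description=Legacy app with container-style isolation

[Service]
ExecStart=/usr/local/bin/myapp
# Private /tmp and a read-only OS tree: mount-namespace isolation
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
# Private device and network namespaces
PrivateDevices=yes
PrivateNetwork=yes
# Drop privileges and block escalation
NoNewPrivileges=yes
CapabilityBoundingSet=
ProtectKernelTunables=yes
ProtectKernelModules=yes

[Install]
WantedBy=multi-user.target
```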
These things are really easy; there's a list of maybe eight directives you can put into the unit file that starts an app, and it will give you a very similar type of isolation. That's super cool when you consider how connectivity is increasing and how important security is and will continue to be. Okay, on the container runtime side, we're talking more about the Podman side of the house right now. Hopefully everybody here is familiar with the difference between CRI-O, which is meant to talk to the Kubernetes CRI, and Podman, which uses the same underlying components but is standalone: it has a CLI and, in the version coming in 8.3, a Docker-compatible API. It's a super lightweight runtime that works incredibly well. One thing we like about it for this use case is that the integration between Podman and systemd is much, much better than we ever had in the Docker world. Going back to that static-workload model, this makes it super easy to have container images that auto-restart; the system knows how to run them just like any other service. So Podman is really great for that, and the new API is coming in 8.3. Another thing that makes this whole model really nice is auto-update based on the tag on your images, which is landing as a tech preview in this release. If you're managing container lifecycles at your registry, which everybody should be doing, and you want a certain tag to land on a certain set of boxes, you can phase it in, or have all of them pull the prod tag.
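A sketch of how that tag-based auto-update is typically wired up (container and registry names are made up; on older Podman releases the label value was `image` rather than `registry`):

```shell
# Run the workload from a floating tag and opt in to auto-update
podman run -d --name myapp \
    --label "io.containers.autoupdate=registry" \
    registry.example.com/myapp:prod

# Generate a systemd unit so the container behaves like any service
podman generate systemd --new --name myapp \
    > /etc/systemd/system/container-myapp.service
systemctl daemon-reload
systemctl enable --now container-myapp.service

# The packaged timer re-checks the :prod tag and restarts the unit
# when a newer image is published to the registry
systemctl enable --now podman-auto-update.timer
```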
Basically, we can have timers on these nodes that check that tag at whatever interval is appropriate and automatically pull the image as new ones become available in the registry. Little features like that make life super simple, and it's easy to scale because these are all client-initiated actions, which is nice. Okay, so let's talk about what we're getting in 8.3. I mentioned Image Builder is the front door of this tooling. It's available via the Cockpit web UI, and there's also a CLI and an API. You log in and, in approximately four clicks, you get the default image. If you need to customize it with RPM content, you can. Here I'm including crun, which I really enjoy because it supports cgroup v2 and is super fast at instantiating containers. We commit that to the blueprint. You don't have to do this; by default you get everything you see here, a small core install with our container tools, plus some goodies we'll look at in the next couple of slides. Then I select the image type, a "RHEL for Edge commit", and this generates an rpm-ostree commit that we can serve out from a central place; that's what gives us the remote update capability. And that's it; it kicks off the build right there. It happens pretty quickly even on my junky laptop; this will complete in seven or eight minutes, and if you're running on good hardware, expect faster results. All right, let's talk about mirroring and providing these updates. We create the initial image the way you saw, and we create updates for those images using that same process.
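For reference, the CLI flow that mirrors those UI clicks looks roughly like this (blueprint name and file path are made up; the image type string is the one used for RHEL for Edge commits in 8.3):

```shell
# Blueprint adding crun on top of the default edge package set
cat > edge.toml <<'EOF'
name = "edge-container-host"
description = "RHEL for Edge commit with crun added"
version = "0.0.1"

[[packages]]
name = "crun"
version = "*"
EOF

composer-cli blueprints push edge.toml
# "rhel-edge-commit" is the RHEL for Edge image type in 8.3
composer-cli compose start edge-container-host rhel-edge-commit
composer-cli compose status
# When finished, download the commit tarball by its build UUID
composer-cli compose image <UUID>
```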
One thing to understand is that you are now driving these boxes: you drive the update cadence and have fine-grained control over everything that happens on these systems. Once the updates are created, we can put them on any type of web server. An Apache box or a container hosted somewhere is fine; if you're going big in a production environment, please use a CDN of some type, depending on the load and the number of nodes you have. The last two items here are a couple of the configs on your systems that control where they look for this web endpoint. And the last one is one of my favorite things: if a new update is published, we automatically pull it down on the nodes. I'll pick up in a couple of slides how we actually take the update and accept it; it's super easy to stage these updates locally on the system. Okay, let's take a look. I'm using the terminal in the web console, and I wrote a little script so I wouldn't fat-finger things for everybody. If I check the status of that image build, I can see the first build is finished, and "compose image" downloads it and gives me an artifact I can work with. That happens super fast since it's local, and now I have a tar file containing that rpm-ostree commit. Next I build a web server container to host my commit. Looking at the file, you can see it's super simple: give me Apache, extract the tarball I made in Image Builder, and serve it. No magic at all is needed here. I build that image, which happens quickly because the layers are already on this node, and bind it to port 8000; this particular one runs rootless, though, again, there's no requirement to do that.
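A sketch of that serving setup (file names are made up; the httpd base image and the ref path are what Image Builder produced in my 8.3 testing, so treat them as assumptions):

```shell
# Containerfile serving the Image Builder commit tarball over HTTP
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/httpd-24
# ADD auto-extracts a local tar archive into the web root
ADD commit.tar.gz /var/www/html/
EOF

podman build -t edge-repo .
# Rootless: publish the image's unprivileged port 8080 as 8000
podman run -d --name edge-repo -p 8000:8080 edge-repo

# The repo is modeled after git; fetch the latest commit on the ref
curl http://localhost:8000/repo/refs/heads/rhel/8/x86_64/edge
```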
But this is a good proof point that you can host these any number of ways you want. Once that's going, I just curl the latest ref of the commit. If you've used rpm-ostree or looked at it, you'll know it's modeled after Git, and it leverages a lot of the same ideas and concepts you're probably familiar with from Git. All right. So once you've made an image with Image Builder, that's an easy way to serve it up. Now let's talk about the updates themselves. Some edge environments have amazing, data-center-style networks, where efficiency at this tier is nice to have but maybe not a requirement. But in some environments connectivity is just horrible: microwave links that make old dial-up modems look fast, or, in retail, we still see fractional T1s and the like. What's cool is that even with constrained networking, it's now possible to update those devices, because it's much more efficient to send only the delta of the update over the wire. And if you generate what's called a static delta, you can pull it with much less TCP overhead, which further increases that efficiency. Even if you have great connectivity and bandwidth isn't a scarce resource, you probably still want to spend it on your application workloads, not OS updates. So that efficiency helps regardless of what type of infrastructure you have, and it's just a great side effect of using rpm-ostree natively for all of this. Now, provisioning. If you're familiar with RHEL CoreOS, you may be wondering: why is this not Ignition?
Well, we're looking at Ignition and may include it as an option in the future; we're certainly open to that. But right now these devices span a gamut of hardware with all kinds of odd requirements, and Anaconda works incredibly well for getting this rpm-ostree commit onto those systems, so Anaconda makes this really easy today. This little example has a bare-bones top section, and then, instead of a %packages section where you would normally list out all the content to install, we just use the ostreesetup command and point it at your mirror. If you point it at your production mirror, the deployed system will know where to look for updates automatically, so that's probably a good thing to do if you can. That's all you need. You can do customization in %post if needed, and %pre is still there, but a good rule is to keep these as simple as possible. Okay, so that's how we easily get the commit onto your devices. Now a little about rpm-ostree itself. It gives us the best of both worlds. Think of traditional embedded firmware, like a router with an A and a B partition: we blend that A/B update model with the benefits of a package-based distribution, which is nice. If we ever need to change what's on a system, and one of the key things here is being able to adjust for change in your environment over time, this model is super easy to adapt, which is really powerful. So we get the benefits of the A/B model, where we can fail back if necessary, along with the flexibility of packages, which is great.
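Going back to the provisioning step for a moment, the kickstart just described looks roughly like this (all values are example assumptions; `ostreesetup` replaces the usual %packages section):

```
# Minimal kickstart sketch for a RHEL for Edge device
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart --type=plain
rootpw --plaintext edge   # example only; hash or lock this in production

# Deploy the commit from the mirror; the deployed system will also
# look here for updates
ostreesetup --osname=rhel --remote=edge \
    --url=http://mirror.example.com/repo \
    --ref=rhel/8/x86_64/edge --nogpg

reboot
```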
Really, everything in the operating system that lands under /usr gets swapped out; each commit contains the full OS, and when you pull it into your local repo, only a delta is sent over the wire, but you end up with the full commit locally and can move the device to either state. We do maintain state in /var and /etc, so the whole operating system is not technically immutable in the strictest definition possible. That's not a bad thing: true immutability often requires a significant amount of infrastructure to be available, and that's not something we can count on in these environments. Maintaining your configs, container images, and so on locally is generally a healthy and convenient thing to do, and we always have a known state the system is operating in, which is powerful. I mentioned earlier that we can automatically stage these updates in the background. That's a great way to approach it, and probably what I would do, but it depends on your environment. Then, whenever an update is staged, you typically want to align the reboot to a maintenance window. A lot of these systems are responsible for critical infrastructure and can't just accept reboots free-form the way we'd expect in a cloud environment. It's pretty easy to schedule reboots with a timer, or with any type of management system. Once the scheduled reboot happens, the systems come back up on the next update automatically. So updates will cost you a reboot; but as long as you time it well, accepting a reboot is typically less disruption, or less potential unknown disruption, than updating a live running system and making changes on the fly. It's a really good model.
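One way to line the reboot up with a maintenance window is a plain systemd timer/service pair (names and schedule are hypothetical); the reboot activates whatever deployment rpm-ostree has staged:

```ini
# /etc/systemd/system/maintenance-reboot.timer
[Unit]
Description=Reboot during the weekly maintenance window

[Timer]
OnCalendar=Sun *-*-* 03:00:00
Persistent=false

[Install]
WantedBy=timers.target

# /etc/systemd/system/maintenance-reboot.service
[Unit]
Description=Reboot to activate a staged OS update

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reboot
```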
Okay. This last little screenshot gives you a look at rpm-ostree on a system, if you haven't played with it before. I SSH into a bare-metal system, again using the web console terminal. I'm running a container that's sucking all four cores of this little box dry. I check the status: there's a single commit deployed, so this box has just been provisioned from Image Builder; that is the commit shown in the update. I manually trigger an update because I don't want to wait for the timer to do it automatically for me. This particular update pulls in rebuilt container-tools packages. When I check the status again, I have a new deployment that's staged and not running on the system; of course, my workload hasn't been interrupted at all, it's still going. Now I force a reboot just to move through this quickly (you can see where I got impatient and tried to SSH in before the system came back). I check the container runtime, and of course my application is running as expected, because it's managed by systemd, and we can see the asterisk has moved up: I'm on my new update. This model will be familiar if you've used things like Atomic Host in the past, or maybe Fedora Silverblue; RHEL CoreOS uses it as well. So hopefully this makes sense to everybody. Now, this last thing is new and unique to what we have in RHEL. The technology is Greenboot, and it's the first time that I'm aware of where we can have custom health checks for applications running on the system. Let's say my node has three critical things it needs to do. I can write scripts for each, and Greenboot gives us a framework to run those scripts, integrated with the boot process of the system.
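Those scripts can be as simple as this sketch (the service name is made up; `required.d` is the directory Greenboot uses for checks that must pass):

```shell
#!/bin/bash
# Hypothetical health check, dropped into
# /etc/greenboot/check/required.d/01-myapp.sh on the device.
# Greenboot runs every script here at boot; a non-zero exit marks
# the boot unhealthy, and after the allowed number of failed boots
# the system rolls back to the previous rpm-ostree deployment.

if ! systemctl is-active --quiet myapp.service; then
    echo "FAIL: myapp.service is not running" >&2
    exit 1
fi

echo "OK: myapp.service is healthy"
exit 0
```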
And if they fail, a boot counter kicks in; it's as flexible as you need it to be. If an update causes one of the critical roles of that node to fail, we can revert the state of the system and go back to where it was working. Super powerful. We're really excited to have that linkage between the workloads and the operating-system update level. One of the customers we worked with closely on this capability told us that the combination of rpm-ostree and Greenboot is going to save them millions of dollars across their deployments. Once these systems are provisioned at the edge, the goal is to never go back and revisit them physically, so you can imagine why having this type of safety mechanism in place is a big deal for a lot of people moving into this space. All right. As I mentioned, 8.3 is the first step in our journey of meeting the challenges of this space. We see the security story of RHEL as a huge value for edge deployments; in fact, all the challenges we talked about earlier in this talk really sit on top of the security concern. In the data center, things tend to be physically protected, with cages and badge systems; you may or may not have any of that at the edge. Being able to promise that same level of security without physical protection is huge, and RHEL has a strong value proposition there today, one that will get even better in the future. The other thing about edge in general: we see complexity as a huge challenge for really any IT project, and RHEL in particular is solving a lot of the complexity in this space.
So hopefully, with the little example and the features we've brought through in 8.3, you can see that if you need to put relatively static applications on some devices, whether smaller boxes or big servers, it doesn't really matter, and keep them maintained in a steady state and updated, this is a super simple way to meet that need and be successful. And of course, why not do that by leveraging the existing investments, skills, and technologies that people know and love from Red Hat? That's a key value proposition in everything here. At this point, I'm guessing everybody is going crazy thinking, "Oh my gosh, I've got to get my hands on this and go conquer edge in my environment." One: you've come to the right conclusion. Two: it's super easy to go try it. If you go to the osbuild GitHub repo, we have this whole thing documented, and you can walk through it in a couple of VMs if you like, however it makes sense for you; it'll take you anywhere from twenty to forty minutes, depending on your setup. Super simple. We'd love to get your feedback on what you think. This will GA pretty soon when 8.3 hits the streets, and we'll update the demo to reflect that. And with that, that's our look at how we're adapting Red Hat Enterprise Linux for the edge. I appreciate everybody's time and being here. Ciao.