Okay, welcome. Hope everybody had a good lunch. I know I did, a little bit of liquid bread with lunch. I think people keep coming in, maybe because it's right after lunch. My name is Matt Porter. I'm with the Konsulko Group. We're a small embedded Linux consulting and software development company. I've been to ELC Europe a lot of times, and I'm going to talk today about Linux software update technologies. I'll try not to inject opinion by any manner, spoken or body language, but we'll see how that goes. So thanks for coming, and we'll get started.

We're going to do a little overview and a little background. I usually like to talk a little history about where we came from in doing software update; we've been doing this as a community since 1991 in some form. Then we'll talk about today's set of strategies for updating Linux systems, and then we'll do a detailed look at each of the open source projects. I don't cover things that may be proprietary; those don't really matter here. The criteria are: which strategy a project employs technically, any other features it provides, its maturity, a little gauge of the kind of community around it, and then what kind of uptake it has, meaning which downstream projects are using it.

All right, with that, when we look back at history, day one: if anybody was around in '91 or '92, you might have started with H.J. Lu's boot/root floppies. You had, of course, five-and-a-quarter-inch floppies. You'd put the boot floppy in, it would boot up and get to the root prompt, then you'd put the second one in and mount root off that floppy, and you were on your own to build everything else up from there. That was state of the art. Then we got to the forerunners of the modern distributions with MCC, TAMU, and SLS. If you came in during the Slackware generation, you missed the fun here. There were packages and tarballs that might seem familiar from later things, but there were no dependencies: the packages were just tarballs of logically grouped things, with either static or shared libraries, and no notion of dependencies between packages. Slackware came about partially because SLS wasn't very well maintained. Again, the packages were tarballs and there weren't any dependencies, but you could upgrade from release to release with a carefully scripted dance. It would be a bit flaky if you were out of date and tried to go too far ahead.

As we get into the modern world, starting in that '93-ish time frame, we had the Debians and the Red Hat Linuxes and their derivatives (not trying to leave anybody out, but those were the major players). That's what brought in the modern deb and RPM packaging that we still deal with today on most of our desktop and workstation systems. The biggest thing added was dependencies, tracked as part of the package format, and the ability to do updates that pull in other packages and know which versions you depend on. What's key, though, is that each of those package updates is atomic: you're updating one package at a time on the system, and ordering can matter.
And when you do an update to a whole new release, say I've got distribution foo with a set of package versions, you do the release update by designating a set of those packages at some version and going through non-atomically and updating them all. So I'm not telling you anything new, just a little history; it's the same thing we probably do on most of our distributions now when we roll up to the next release. The real key is that when you're updating in that non-atomic sense, it's all driven by a complex set of pre- and post-install scripts. It's a little more complicated than that, as some of you probably realize from packaging things up, but that's the general overview.

So, any time you're looking at Linux software update, you've got to think about requirements, and there are many. The first thing to keep in mind is that there is no one proper software update strategy; it depends on your requirements. Every product is different. You've got different things pulling at you, both technically, in hardware constraints, and in whatever your product is and what its features are. So you're not going to get any exact steps today; you're going to get guidelines only.

The big broad categories to think about: do I need this to be power-fail safe? When you're a desktop person, you're not worried about the power going out; well, you may be, you're probably on a UPS like me. But on an embedded system, which is what we're really here to talk about, you're worried about power failing in some cases; in others maybe you don't worry about it so much. Are you doing frequent updates? Do you have hot patches coming every day, dealing with CVEs in an aggressive manner? Or are they infrequent, once a month, once a year, because you don't care about security like most of the consumer electronics companies?

The other thing is, as you get more advanced in software update, you're thinking: I've got some delivery channel and a whole fleet of devices, and I'm paying for that delivery channel for OTA updates. Now the size of the update comes into play, because the cost is magnified when you're paying for the airtime to reach all those devices. Any of these fleet deployments, from IoT in the broad sense to automotive rollouts, are concerned about the size of the update, so you may need to keep it small. In other cases it may be something where you're inserting a physical device, the update is done there, and maybe size doesn't matter.

Another area is speed of update. Depending on the scheme you take, if your update strategy requires some downtime, you need to worry about how long that downtime is. Or is it something I can do in the background and quickly recycle? Or can I do a partial update without any downtime at all? These are all considerations along the way. And then there's verification and authentication: is what's coming through that channel, checked with a crypto hash, really what my server provided and not some rogue payload?
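To make that last distinction concrete, here's a minimal sketch of what verification versus authentication looks like on the receiving end, using standard coreutils and OpenSSL commands. The file names are illustrative, and real update clients do the equivalent internally:

    # Integrity only: does the payload match the published digest?
    sha256sum -c update.img.sha256

    # Authenticity: was the payload signed by my server's private key?
    # (server_pub.pem and update.img.sig assumed to arrive out of band)
    openssl dgst -sha256 -verify server_pub.pem \
        -signature update.img.sig update.img

Verification catches corruption; only the signature check catches a rogue payload from someone who isn't your server.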
So we'll start with the first class. I go through these in broad classes of update technology because they fit into a pretty simple grouping. First, the traditional method: the non-atomic, package-based approach I already touched on, represented on the back end by deb packages or RPM packages. We've got package-based granularity, and a dependency hierarchy that usually explodes outward with all the shared libraries you need as you pull something in, typically driven on desktop systems by APT or YUM on the front end. One of the things we find with embedded devices is that this approach is unacceptable, and one of the reasons, as we've already touched on, is that with a non-atomic update the reliability just isn't there. Has anybody updated their desktop system and had it left in an unstable or non-working state? Yeah, me too, many times. That's because the system passes through arbitrary intermediate states in a non-atomic update scenario. Sometimes you have to rely on luck for those things to work.

All right, so the next big class is the traditional full image update. This is the way that most systems, whether embedded Linux or other embedded systems, have been updated from the beginning of time. A typical embedded Linux example would be a dual image style: you boot your active image and come up into the system. This is a completely unrealistic and simplified version, but good enough for the 30,000-foot view. You boot an active image, you receive and install an update, and that goes to the secondary partition, your inactive partition. Once that's all been installed, authenticated, and verified, with all those details going on in the background while your first image is running, you communicate with the bootloader, or whatever mechanism you use for boot selection if it's a separate entity like U-Boot, which is typical on an embedded Linux system. You atomically toggle the B partition active and A inactive, and reboot. When you boot up, you boot into the new active image B, image A becomes inactive, and typically you'll have some fallback mechanism, whether it's a watchdog or a heartbeat that tests for a successful boot. There are some features in U-Boot for this; those are implementation details, but that's the general approach.

Then we get to the new kid on the block. A lot of people look at this and say, oh my gosh, this is not a keep-it-simple-stupid type approach. That's incremental atomic updates. Where we first saw this coming into play, like a lot of technology we leverage on the embedded side, is on the server side of the house. They've had a need for incremental atomic upgrades they can roll out very quickly, very small updates to address CVEs on their internet-exposed servers, maintaining uptime, and they also need to be able to roll them back very quickly. Part of that is having a complete history of deployments as well. The way it works is that the actual release of your root filesystem is composed of a set of binary deltas. Rather than working at package granularity, they're working at per-file granularity, with a logical set of binary deltas on top of that.
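As a toy illustration of the per-file binary delta idea, here's what it looks like with the standalone bsdiff and bspatch tools. This is not the machinery any of these projects actually use internally, just the same concept in miniature, with illustrative paths:

    # Produce a delta between two versions of a single file
    bsdiff rootfs-v1/usr/bin/app rootfs-v2/usr/bin/app app.delta

    # The delta is typically a tiny fraction of the full file size
    ls -l app.delta rootfs-v2/usr/bin/app

    # On the target, reconstruct v2 from v1 plus the delta, then check it
    bspatch rootfs-v1/usr/bin/app app.rebuilt app.delta
    sha256sum app.rebuilt rootfs-v2/usr/bin/app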
And one of the benefits of this is that the size of updates is minimized. With package-based granularity, if you need to update a very large package just to change one small delta in a file, say a three-megabyte package, you might only need a 7K delta to modify that file in a binary delta incremental update scheme. So there are some nice advantages there, traded off against the additional complexity.

And one more thing I was actually thinking about today that I passed over and is worth adding in, so you won't see it in the slides that are already published: containers. For the same reasons on the server side, a lot of application work has moved into containers so that applications can be upgraded independently of the base OS. This is another strategy, and it can also be used in a hybrid way. Typically you build on top of a core immutable base OS, and it could be any distribution, a minimal distribution; that's the whole point of CoreOS, right? And then you roll your updates out as container deltas. Now, one thing here is that there's no discussion of how I update my base OS when there's some flaw in it; it's just treated as immutable forever. Not always realistic. But if most of your focus is on the application side and being able to seamlessly update those, that's a strategy. And it's often combined; these things can be combined in a number of different hybrid approaches.

So now we're hitting the part where we get into actual projects that implement software update mechanisms, things you can go grab and play with. I'll go in order of that grouping of technologies. The first is SWUpdate, a very generic name. It's a single or dual image framework, so it can work in either approach. You can find it on GitHub; a developer at DENX Software Engineering maintains it. It's written in C, very easy to follow code, and it's GPLv2 licensed. One of its goals is to be modular with plugins: it has a notion of handlers that you can plug in for different image types, and within those handlers you'll find support for signed images. It also does local and remote updates via a couple of different schemes: there's a built-in web server, and there's RESTful API support, which we'll get into next. It also has very tight coupling with U-Boot. And the author maintains a meta-swupdate layer to make it easy to generate images that include the client and comply with the client's needs.

As far as community: when I first looked at it about a year ago, it didn't have a whole lot of contributors. Now there are quite a few contributors beyond the author, and a few outstanding pull requests and so forth. So it's not a solo project anymore, and it seems to have a reasonable amount of uptake among people whose requirements it meets. Downstream, it's used at least by Siemens. With a lot of these it's hard to see which downstream projects are using a technology if it's in a closed product, but there's a session later today, in fact, about using SWUpdate in Siemens projects.

The way SWUpdate manages these updates is that it defines a specific image type, and it's a very simple cpio archive with a software description header (sw-description) in it.
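Since the .swu update image is just a cpio archive, you can peek inside one with nothing but standard tools; the file name here is illustrative:

    # List the contents; sw-description is the first member by convention
    cpio -itv < update.swu

    # Extract everything for inspection
    cpio -idv < update.swu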
Beyond that header, it's just a series of sub-images, as I'll call them. Each of those images is described in that software description header, and then it's validated with SHA-256. As I mentioned on the first slide on this, the handler plugins implement all the details of how each image type is handled. So when one is identified, it invokes a handler. There's one for U-Boot, so if you're updating the U-Boot image it does that nonvolatile update; it knows UBI partitions, and also NOR and NAND without UBI; and then there's the MMC-style updating. And the notion is, because it's extensible, say you've got an FPGA and you need to update the bitstream in your SPI flash: you can write a custom handler, carry that image in the overall update image, and have the custom handler install the FPGA bitstream by writing to the SPI flash, assuming it's in non-volatile flash.

The software description is also extensible. There's a Lua parser in it, so you can support any kind of arbitrary description in there and just write some Lua code to parse it out. One thing you could do with that is use a different parser to define a whole bunch of different hardware platforms in one image, and have your custom parser sort through those revisions or a family of platforms and so forth. When you actually write this description, the file can be in libconfig-style syntax or in XML, and you can choose either; XML is obviously easier for outside tool parsing for some people. It uses Kbuild for configuration, pretty kernel-developer centric, so that should be very familiar to a lot of people. And as I mentioned, it's got a built-in web server based on Mongoose. One other thing that's been extended recently, and I believe the gentleman from Siemens using this project was involved with this, is a RESTful API used with the hawkBit server for remote update. So it's reasonably flexible to fit into different delivery mechanisms; it tries to focus on just that piece of doing the update and the ugliness of managing that. There are a couple of strange things I saw in it, like a kind of dangling userspace GPIO library, and a few other odds and ends. I know the author mentioned using a GPIO to signal that an update completed; I don't know if it's intended for that, but there may be a little dead code in the project that needs cleaning up, because it didn't seem to be used by anything. Just a little tidbit of what I saw in there.

So, moving on to the next one: Mender.io. If you're here and haven't heard of them, I don't know how, because they've got a big booth and there are a bunch of talks on the agenda. Mender is a very interesting piece of software. It's fundamentally dual image. Let me back up a second and say that with SWUpdate you can actually implement either a single image approach, where you update in place, or a dual image approach; it's configurable for either, and most people will go with dual image if they have the space, out of requirement necessity. Okay, so back to Mender. It's a dual image approach. You can find the project on GitHub, maintained by the team at Mender. It's designed as a client-server system fundamentally. As opposed to SWUpdate, where it's just the client on the system talking to a web server, Mender has both a client and a separate server. It's written in Go.
That's an important distinction I try to point out for each of these where possible, because language can matter. Having to maintain Go or Rust or any of these less mature language toolchains in a project might preclude it from being used in your system, but it might also be an advantage. It's Apache 2.0 licensed. They maintain a meta-mender layer; what's interesting with that is it builds the client into the device, of course, similar to what meta-swupdate does. You can find it there, and they also maintain some reference platforms, so you can start off and try it out very easily. When you look at GitHub and you look at the contributors, like a lot of new projects, the contributors right now are overwhelmingly Mender employees. You can judge that however you will; I'm just trying to reflect how big the community is at this point.

There are two modes to the way Mender works. You can run it in standalone mode and trigger the updates locally. Or if you run it in managed mode, it runs as a daemon and will poll the server for updates, which the server delivers when it's ready. That would be more the continuous-delivery model of updates on the back end. Mender's dual image setup is called A/B in their documentation; there are many names for it. They use a notion of a commit when an update has booted: when the update has actually been successful, it's considered committed, and on failure it toggles back to the inactive partition, the same way we discussed in the overview of how a dual image update occurs. So it's a very classic dual image approach, tried and true, the same stuff that's been used on commercial embedded Linux systems since the early 2000s. The other interesting thing is that they follow the de facto U-Boot standards and use the features built in there; we'll talk about that in a second. They have reference platforms, as I mentioned, on QEMU, so you don't even need hardware. Wonderful, right? But if you do, BeagleBone Black is a reference platform. So you can go get it, try it out, and see how their system works, which makes getting started easy.

One of the differences we see is that every project's scope is different. SWUpdate is being used in some projects, but it doesn't try to be a complete end-to-end solution like Mender is, with the whole delivery end and so forth. In doing a complete, demonstrable solution, Mender has made some assumptions. You need a reasonably recent U-Boot, and a lot of commercial vendor code trees don't have a recent U-Boot; you need one with the boot count limit feature. They assume you can read ext2/3/4 filesystems, that you have the U-Boot environment tools in your Linux root filesystem, and obviously a specific U-Boot configuration to make this all work (the sketch below shows the general idea). So it depends on some things, but they're also giving you a whole turnkey framework on that end. And if you're running in managed mode, it's built exclusively around systemd. Again, that's an assumption; some people choose not to use systemd, and you might have to do a little work to get that assumption out of the way. Obviously there's a set of kernel options required for all this as well.
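As a rough sketch of that bootloader assumption, this is the general shape of a bootcount-based A/B setup in a U-Boot environment. The variable names and boot scripts here are generic illustrations, not Mender's exact ones; bootlimit, altbootcmd, and upgrade_available are part of U-Boot's bootcount feature, and load_kernel/boot_kernel stand in for whatever your board's boot scripts are:

    # U-Boot environment (illustrative)
    bootlimit=3                         # failed boot attempts before fallback
    rootfs_part=2                       # currently active root filesystem
    altbootcmd=setenv rootfs_part 3; saveenv; run bootcmd
    bootcmd=run load_kernel; run boot_kernel   # loads from mmc 0:${rootfs_part}

    # From Linux, after writing the inactive partition, using the
    # U-Boot environment tools mentioned above:
    fw_setenv rootfs_part 3
    fw_setenv upgrade_available 1       # arms the bootcount mechanism

If the new image boots and the application comes up, you clear the count and commit; if it never gets that far, bootcount exceeds bootlimit and altbootcmd flips back to the old partition.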
And then there's the partition layout, the other assumption: there's a very fixed assumption of how the partitions are laid out. You have U-Boot in one partition, a persistent data partition, and then just two A/B partitions that each hold both the root filesystem and the kernel. So if you're managing something different, you're going to have to do a bit more work on top of what the reference platform implements. Right now it's specifically designed around and supports eMMC-style, block I/O filesystems; it doesn't have any explicit support for NOR, NAND, or UBI. That's not something that couldn't be added, but it's not there right now. It's got great documentation on how to actually bring up a new platform, port the client to it, and what all is needed, with a lot more detail about the requirements on the U-Boot side, the partitions, and so forth. Really well done docs. Like I said, there are turnkey platforms you can try out, and they actually have a CI loop running against the code base, which is good, and the testing and QA tools that go with it are all out there. So it's a pretty well run project; it seems to have everything it needs to move forward.

Okay. Now we're getting into the crazy stuff, and the crazy stuff starts with OSTree. OSTree is one of those incremental atomic update mechanisms. You can find the page there, and there's a documentation page off of it that describes a lot. One of the quotes I like is the self-description: "git for operating system binaries." That really is the easiest way to think about it, and if you go in, work through some of the examples, and play with it on your own systems, you'll see what they mean. The filesystem itself is stored in a Git-like object store, and as we said in the overview, it's using binary deltas to track the changes between these objects.

And it depends on an immutable filesystem hierarchy (I can barely say that phrase today). This hierarchy is based on the /usr merge style of root filesystem, and the requirement is that everything persistent lives in /etc. As you get into these more advanced frameworks that actually work, they start making assumptions and forcing those assumptions on you. If you think you can do it the old way: to get these advantages, you need to change your root filesystem. There are a lot of distros out there in embedded, and if you take something like Automotive Grade Linux, it has to be changed to that style of root filesystem if they want this integrated in.

The interesting thing is when you get there, you end up with your binary deltas of the immutable pieces, and you can have multiple deployments of that, where each deployment has its own copy of /etc; that's how they manage the persistent data. So now let's get a little deeper into that. The way it works, you end up with that repository we talked about, the Git-like object store of binary deltas, stored locally on your system in /ostree/repo. And then you can have any number of deployments. A deployment is a checkout of some object hash; it represents the state of some root filesystem at a point in time. Those are then stored under /ostree/deploy, all locally on your target, and you'll see a nice little hierarchy that makes them easy to find: they're organized by OS.
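Roughly, the on-disk layout looks like this; the OS name and hashes are illustrative:

    /ostree/repo/                  # the Git-like object store
    /ostree/deploy/
        distro-foo/                # one subtree per OS name
            var/                   # shared persistent state
            deploy/
                3a17...c2e9.0/     # a deployment: checkout of one commit
                    etc/           # this deployment's private copy of /etc
                77b0...41aa.0/     # an older deployment, kept for fallback
                    etc/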
So if you had this distro foo and another distro, and I think this applies more to the server side where this was originally used, then below that you'd have the checksum for the actual hash, the SHA-256; that's how the Git-style store manages those objects. And as I mentioned, each deployment has its own copy of /etc. So when you deploy, you've got all these different checkouts, multiple root filesystems. It could be just one; it could be two so you have one to fall back on; or it could be N of them. Each of those will have its own persistent copy of /etc, and you have to manage that in between if you're doing an upgrade. Oh, and the last major piece here: when you do a deployment and you want to upgrade to that new root filesystem, it requires a reboot. We'll explain why as we get to the bottom of this.

Okay, so we talked about these multiple deployments. The way you deal with them is the ostree admin tool. You've got a server feeding you, and we said we have this local copy of the repository. If you do an upgrade against a continuous deployment feed, it'll fetch the next set of deltas and just upgrade to those; that's all built into OSTree with its HTTP transport. Once you've upgraded, you can also do a specific deploy: I want to deploy a specific refspec, referred to by its SHA-256 hash, and deploy that. Once you have deployments, they end up with indexes; they look a lot like Git stash IDs, if you're used to Git, and you can go deploy any of those. So you can control which things in your deployment tree are actually checked out and deployed. And with admin status, you can look at the status of all the deployed items and see all their indexes.

All right, so how do we do an atomic update? We've talked about how we can deploy these things, but when we're deploying them we're not actually changing to that root, not chrooting into it or anything; we're just placing another copy of a root filesystem at a different point on the system. The atomic step is swapping the boot symlink to a new directory. Under /ostree you'll have all of these boot directories, and in this case that's how you actually update what you boot into. Whatever you do has to be atomic like that, and what's built in is that it moves the symlink atomically. Then when you reboot, a bind mount is established: you bind mount your preferred deployment, and you can choose from any of them. And you can actually combine this with a fallback mechanism like the dual image approach uses. Say you had two deployments, you rebooted into the latest one and it failed, and you're using the boot count feature: the watchdog resets because it didn't manage to make it all the way to your application, and it can then reboot and bind mount the previous backup deployment, since you had two of them deployed. So it's very, very flexible, and you can combine the same kinds of concepts.
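In terms of actual commands, the day-to-day flow with the ostree admin tool we just walked through looks roughly like this; the OS name and ref are illustrative:

    # Pull the latest commit from the configured feed and deploy it
    ostree admin upgrade

    # Or deploy one specific revision by refspec or checksum
    ostree admin deploy distro-foo:stable/x86_64/release

    # Show all deployments and their indexes; reboot to switch over
    ostree admin status
    systemctl reboot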
So OSTree is kind of interesting among these: it has a lot of identifiable downstream projects, not all embedded stuff. The first place I ever saw it used was GNOME Continuous. Project Atomic might be familiar to people; it uses OSTree with a layer called rpm-ostree, where an RPM feed is used to produce these binary-delta revisioned objects. And the Pulp platform is another front end for producing feeds, because really the hardest thing is producing those release feeds; it's not so much the target end of it, but the management of those releases and binary deltas based on some upstream packaging, whether you're using OE or something like that. So GNOME Continuous has it integrated in; Project Atomic, like I said, uses an RPM feed; and Automotive Grade Linux is implementing OSTree support, I think Leon here, yeah, he's got a piece of that, or some portion of it. It's a big job, because the filesystem needs to change a bit. One of the interesting things they found is that OSTree has this notion that it owns the entire process of downloading the update, deploying it, and doing the atomic update, all in one command; it's not designed to back-end onto some other delivery mechanism right now. So they've had to do some things. There are some separate commands I didn't show where you can manipulate static deltas directly, and the initial cut in AGL is using that. So that's a more embedded project starting to use it. And I know the background of that one is that the automotive industry is very interested in small incremental updates, and, for a lot of well publicized reasons, in addressing CVEs very fast, so that people don't remotely exploit your vehicle.

All right, so there's another major incremental atomic upgrade mechanism: swupd. Everybody loves to have these generic names that are hard to say. It's originally part of the Clear Linux project, but it's actually an independent piece that can be used in other projects. It's also a client-server based system, and the functionality is really very similar to OSTree: same concepts, different names for the most part. They have a notion of a delivery stream; in their terminology, these delivery streams are made of bundles. And it's the same binary filesystem delta approach. And then, like everybody, there's an OE layer, meta-swupd, so you can create an image that embeds the client and a filesystem that conforms to their requirements; you can find that at the Yocto Project site.

Now, one of the key differences that makes it worth looking at is that they went to great pains in the client to come up with a way to not require a reboot. There's some pretty intricate locking going on as a little dance is done to switch and change root live on the system. You can see this is coming from a server-type, long-uptime environment where they want to be able to avoid a reboot, though it could introduce some risk versus a clean reboot after the update. So that's worth looking at if you're going in that direction. And then they have tooling. With OSTree, a lot of the feed creation tools are separate projects; on the server side, swupd has tooling as part of the project to help create those bundles, the versioned sets of binary deltas that reflect a release you're doing. And it also has the stream server that delivers them via HTTP.
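On the client end, the swupd flow looks roughly like this; these subcommand names are from Clear Linux's tooling around the time of this talk, so check the current docs:

    # Is newer OS content available on the delivery stream?
    swupd check-update

    # Apply the binary deltas and switch over, no reboot required
    swupd update

    # Pull in an additional bundle, or verify installed content
    swupd bundle-add editors
    swupd verify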
The interesting thing here is that when you look at the project, you don't see any contributors outside of Intel, like you do with OSTree, which has a lot of contributors from different companies using it. And right now, as far as I could find: obviously Clear Linux uses it, and Ostro OS also uses it. I don't know of any uptake outside of those Intel projects yet. But as I say, it's a very similar approach to what OSTree is using, so if you understand one, you can pretty much go in and understand the other.

Okay, and then again, I felt I needed to add just a little bit more about the container-based solutions. Typically, most of these are not taking the approach of ever needing to update the minimal base OS. So you need to keep in mind that their focus right now is: all my applications are in containers. Resin.io can front-end over a number of different base OSes, there's Debian and Fedora and others, and they do Docker-based deltas. Ubuntu Snappy is essentially the same way, but obviously with the Ubuntu base. And then Project Atomic, which we already touched on: its base OS is actually managed with OSTree, and then it does Docker-based deltas for the application things. So there are a lot of possibilities for hybrid approaches here: keep all your applications in containers and leave the limited base OS updates to another approach, whether it's a dual image or an incremental update. You don't necessarily have to have one or the other. But for all of these projects, the focus, at least at the high end, is the application and middleware update; they don't look at the base OS as much.

And I want to share with you that there are a lot of talks; I wanted to put them all in one place. If you're into software update, these are all the talks here on various aspects of it, plus the one you missed this morning at 10:00 if you weren't there. So do check those out; there are deeper dives into some of these things. I tried to keep opinion out of this and show the high level of all the different free ones I could find, so I encourage you to check out those sessions. I can put that slide back up if you want to look at it.

So, all right. Ready for questions, if I have a couple of minutes? Yes. That's correct, yep. So, what he said: he thinks the problem is that most of these approaches require read and write access to the filesystem, and that's true. All of these assume you can run a client on Linux and access some portion of the filesystem, whether you're updating in place with the incremental schemes and deploying another root filesystem into that filesystem, or, with the dual update, writing to that other partition. You also need the space to do it. So, absolutely.

Which one? Okay. So SWUpdate, for example, to go back to that one: it's a pretty simple case. It has a handler for the standard payload, and the software description carries the SHA-256 checksum for each image coming in, and each of those images is then verified against that. It doesn't do any authentication, just verification. And I believe that's typical: pretty much all of these are doing verification.
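For reference, this is roughly what that looks like in the libconfig flavor of sw-description; the attributes shown are typical, but the exact set depends on the handler and the SWUpdate version, and the values are illustrative:

    software =
    {
        version = "1.0.1";

        images: (
            {
                filename = "rootfs.ext4";
                device = "/dev/mmcblk0p3";   /* the inactive partition */
                sha256 = "f2ca...";          /* checked before install */
            }
        );
    }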
There are other systems doing authentication-style things, but I tried to stay away from the back end and the whole security story, because if we take just that one piece and start talking authentication, now we've got to talk about the whole chain of trust. Now we're talking about IMA, because I can't have my binaries compromised either. But there are things, over in the automotive market, going on with the RVI SOTA project and so forth, where they're working out a chain for authentication as well. So with most of these you'll just see verification: they're worried about the integrity of the payload, but not necessarily the whole chain of trust in delivery. But yeah, that's an important requirement that goes out wide.

Yes. Yeah, good point. That's one thing I didn't explicitly mention: part of your requirements, as I said, is that you've got a platform, and what you do is really going to depend on your hardware as well. If you want true atomic and guaranteed power-fail-safe updates, you can't do a single image in-place update and sit and pray that it's going to work and the power won't go out. That was something the gentleman in the 10:00 session mentioned as well. A common thing is that you have to design your hardware to have that additional space: you've got to have that extra partition. If you want incremental update support, because your market requirement is to roll out these micro-updates very quickly and apply them quickly, then you're going to have to have enough space to check out however many deployments your scheme calls for. So it's kind of the same thing.

Yeah. So, Pantelis is asking which approach uses less space. You can't say, because it depends on the size of the filesystem and which things are persistent; it's more a hardware platform situation and how big your application is. You can implement, with OSTree or swupd, a situation where you only have two deployments, but the thing is, you're also keeping a repository locally, so you might have a little more overhead there. And it's also more complex, so you're weighing some complexity risk against a very simple dual image approach, if you can just do A/B.

Correct. Correct. So that's an excellent point. What he's saying is, if you're relying just on OSTree support, or, well, I won't show any favoritism there, if you're doing the incremental atomic thing and relying on that, you haven't necessarily dealt with the fact that if I have filesystem corruption and I can't root off of it, how do I recover from that? So now you need to cover that requirement separately if you're going to use that for the fast-path updates.

Yes. Right. So, just to repeat that: if you're strapped for space and you have this recovery mechanism, and it's funny, that's one we talked about a couple of months ago in fact, the point he's making is that you can combine this with an initrd that can recover from, say, a tarball that's the factory image, and then you can go back, apply all your updates again, and recover. So there are a lot of ways to do this. One here in the back. Okay, let me make sure I understand.
If you're asking about having a number of different device tree blobs: ideally you would maintain those, and it depends on your hardware platform, but you can lay things out so they're all managed in that boot directory. So you're talking about deploying to different hardware revisions, right? Okay. So, is it a statement or a question? Any of these could manage a full set of device tree blobs without a problem. Yeah, there's nothing specific there, because typically you're going to store that whole set of DTBs, if you have the right kind of hierarchy, in /boot, and then have your bootloader select the correct one based on a platform ID. At the end of the day, you need some way to identify the platform to get the right one out of a single image. So that's another hardware thing.

Well, the crazy stuff is always the most interesting, right? The rest is boring. So yeah, OSTree is the most interesting to me right now because it's kind of pushing the limits, but it's a little bit scary too. Because if you've done a lot of the dual image stuff, you know that mechanism works, and if you need stuff to work in a telecom environment, it's the same mechanism we've been using. It's not a Linux-specific mechanism; it's a generic approach, and it's simple.

Yeah, I think there was mention of that on the OSTree list. And I think the intention is, yes, to hook into the deduplication stuff in the block layer for these mechanisms. One thing is that the OSTree maintainer is trying to keep everything independent of any particular filesystem, but that came up, and at least it was an idea that wasn't rejected outright. So yeah, it might help that one. Yeah, exactly. So, cool. All right. Thank you. We went over.