Good evening, thank you all for coming. Hope you all had a good lunch. I won't make you all jump up and down to try and keep yourselves awake. So I'm going to talk today a little bit about how we added software updates to AGL. Myself, I'm from a company called ATS, Advanced Telematic Systems. We do open source connected mobility work for the automotive industry, and we did this with AGL. So AGL, Automotive Grade Linux, is best summarised as open source Linux for cars. It's a Linux Foundation project, where the members are mostly car companies and their supply chain. To be clear, it's primarily about building an open source Linux base which car companies and their suppliers can then customize and put inside vehicles as they are shipped off a dealer's forecourt. [A stretch of the recording here is unintelligible.] If that shared platform isn't there, suppliers are each left to rework all this stuff themselves, or to purchase something at great expense. I won't talk too much about the AGL project itself. I'm not officially part of AGL anyway, so don't take anything I say about it as gospel. In fact, running in parallel with this is a talk by Walt Miner, who is Mr AGL, and if you want to understand more about the project, I would recommend going to see his talk. [Another stretch of the recording is unintelligible.]
[The unintelligible stretch continues; the thread picks up with the argument that software updates should be in place from the very start] of product development. And this is kind of the thing. You should remember when Google did this with the Chrome browser: apparently pretty much the first piece of the Chrome browser that Google built was the auto-updater. And they got everyone to install it. And then, once you'd installed it, they just kept pushing updates to it through the entire development process, even internally inside Google, before any code was shipped to customers. And this is good because you battle-harden this process. If the first time you have to do a software update for real is when you've already got a fleet in the field, then that's going to be a pretty high-stress situation, to say the least. It's also one of the difficulties with testing software updates: an update is always between two versions, right? So when you've got your final production version, how do you test the updates? What's the next version that you try against?
Whereas if you've had the updates in early, then all of the corner cases get exercised. For example, an easy one would be updating database schemas: if you're using SQLite or something to store configuration data, then as you modify your program, you're going to have to figure out a way of rolling changes forward through versions. Now, if you've already done that 10 times by the time your system is released out into the world, then the chances that the 11th update is going to work are much higher. Whereas if this is the first time you've ever had to do that, you increase the risk of running into a situation where something about the database schema makes it very, very hard to update. And you find out about this when you've already got thousands of these things everywhere. So it's much better to discover that really early on, when you can just tell the 10 people in your office to go blow away their databases and carry on. You get the learning when it doesn't really matter so much. There are also another couple of cases, like test fleets. So beta testing pieces of software — it's quite nice to be able to do that automatically. We use it internally for our sales demos. We have some pieces of sales demo hardware which are scattered around the world. We're actually based in Berlin, Germany, but we've got some offices out in Japan and Taipei, and those guys have hardware units, so it's quite nice to be able to push updates to those things remotely rather than having to post them back across the world. And finally, if you get the development team to use this stuff early on, then by the rule that any developer tool that sucks gets replaced and improved — the beauty of dogfooding — by the time this thing goes live, it absolutely definitely won't suck, because no developer will stand for tools that don't work very well. So that's why you need these updates, and why it's interesting to put them in early on.
So the goals for what we're doing as a project, though, are maybe a little bit different from a normal system. AGL isn't a single product; it's a basis on which you can build products. So it's a basis on which you might want to build, for example, an in-car entertainment system, or maps, or something. And it's also not a single piece of hardware. AGL supports at least half a dozen pieces of hardware commonly, and quite a lot more if you count hardware vendors who've ported AGL to their platforms. So it's actually very wide at both the bottom and the top. This means it's not a matter of building a single once-off software update platform; it's a matter of building a thing which can be used for lots of different boards, for lots of different products. And so we kind of had to meet people where they were, with their existing setups that look like normal desktop Linux, rather than force big changes on the existing stuff that people were doing. And this is sort of another angle on the same thing: software updates have been a required feature for any shipping embedded Linux product pretty much since embedded Linux has existed. And lots of people have done it. It's been done dozens, if not hundreds, of times by various people. But these systems all tend to end up as a point solution which is used by one particular product, and maybe reused by the same team the next time they do the next product. So far, we haven't really seen a community develop where you have lots of people each selfishly building their own thing for their own purposes, but contributing those changes back up to some meaningful upstream. Then you get this effect where everyone's doing it for slightly different reasons, but between ourselves we manage to build a commons of really valuable code. And I think that's something which so far hasn't really happened in the software update world.
And that's actually quite different from just making the code open source. Like, open source code on GitHub is a requirement, but it's not actually sufficient, because it has to be usable by other people for their crazy project, which is nothing like your crazy project. What we want to do is to find a thing on which everyone can build. So this is really about portability. If you've ever read the essay describing UNIX as a virus — no? OK. The basic thesis was that the success of UNIX came because it is very, very easy to take it and use it on a completely different system that no one ever thought about. And that was the author's claim for why UNIX succeeded where various Lisp machines and whatnot didn't. OK, so I'll talk a little bit about update methods. If you've used desktop Linux, then you've probably used RPM or DEB or something like that. And that has the advantage that it's actually very simple to do. Yocto has support for doing this, for generating RPMs, built in. But it's maybe not what you want for a product which is going to get shipped out to lots of people. Simplicity is its advantage. The first disadvantage is that it's unsafe against power-off. If you pull the plug while an update is running on an RPM system, the system will end up in an indeterminate state. Now, caveat: there are some bits of work going on inside Red Hat to make that operation atomic. But it requires support from the file system, and it still doesn't solve the problem of changing between two versions of your system in an atomic fashion. The other issue with RPM is the dependency resolution process: if you have some application which depends on some library, there will be some combination of application versions and library versions that work together. Now, if you've ever run Debian unstable, then you'll have certainly experienced the situation where you type apt-get update, apt-get upgrade.
And you get back this long error message that says, I'm sorry, I can't install this because... and it gives you a very long and complicated explanation of a whole long and complicated set of reasons why it can't install this thing. The fundamental problem is that you can have a set of packages which are mutually compatible over here, and another set of packages over here which are mutually compatible, but it can be the case that there's no way of migrating from one to the other while constantly maintaining package compatibility. Now, if anyone here is a Debian developer, then my incredible thanks go out to you. Getting this to work is an incredible task, and they manage it: by the time Debian goes stable, everything does actually seem to work. I'm super grateful to anyone who is in the process of making this work, because I understand it's not easy. Now, the Haskell guys actually use an automatic theorem prover to try and solve the package dependency upgrade problem. That is a very clever solution. It turns out it still doesn't work in all cases, because you can always construct pathological cases where it's impossible, and it also means the automatic theorem proving may never actually finish. So anyway: good as an experiment, but not great for a real system. Another popular solution to this is to do full file system updates based on a dual-bank process. You take your flash and basically divide it into two partitions, the old and the new, and you run on one partition. While you're running on this partition, you write the update to the other partition, and then you tell the bootloader to please boot from that other partition next time. And the next time it boots, it boots up with the new partition, and it's all fine. Now, this has the really nice advantage that if the power goes off, or you have any problems halfway through, then the update simply aborts, and next time the system boots up with the old version, unchanged.
You don't tell the bootloader to do the new thing until you've completely finished and flushed all the changes to the new partition to disk. And that also gives you advantages where you can do things like trial-booting the new system: if the boot fails for any reason, it can automatically switch back to the old system. So if you send an update which accidentally bricks maybe some particular variant of a device, then you can automatically recover from that without the user either sending the thing back or having to perform some Vulcan death grip to get back into recovery mode. This is good because it's robust. A couple of problems with it, though. The first is that it tends to be quite device-specific in how the update is built, because it depends on some changes inside the bootloader, and it depends on creating this flash partition layout with a bootloader partition, then one operating system partition, another operating system partition, and spare space for users to put their own files on, things like that. And that's a bit in conflict with the requirement for this system to be generally usable by lots of different people. And you still need to make some more changes to build this into a fully-featured system. You need some way of incrementally downloading image updates: you don't want to have to send an entire flash image's worth of data every time you update a system; you want to be able to do that incrementally. So you could use rsync, or using bsdiff and bspatch would be a solution. But that's the sort of thing you have to actually build. And then you also need to build the server-side infrastructure which can say, OK, this user has got version 10 of their system, we're upgrading them to version 12, so we need to either roll them forward one version at a time, or build a difference between version 10 and version 12 and then send that diff down.
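To make the incremental idea concrete before moving on, here's a toy, block-level version of that diffing approach. This is a sketch only — real systems would use bsdiff/bspatch or similar, which also find moved and shifted data — and the function names and block size are invented for illustration:

```python
# Toy block-level delta for firmware images -- an illustration of the
# "send only what changed" idea, NOT bsdiff. Names and the block size
# are invented for this sketch.
BLOCK = 4096

def make_delta(old: bytes, new: bytes, block: int = BLOCK):
    """Return (len(new), patches), where patches lists only changed blocks."""
    patches = []
    for off in range(0, len(new), block):
        chunk = new[off:off + block]
        if old[off:off + block] != chunk:  # identical blocks cost nothing
            patches.append((off, chunk))
    return len(new), patches

def apply_delta(old: bytes, delta) -> bytes:
    """Rebuild the new image from the old image plus the patch list."""
    new_len, patches = delta
    out = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for off, chunk in patches:
        out[off:off + len(chunk)] = chunk
    return bytes(out)
```

In the scheme described above, the server would compute something like `make_delta(v10, v12)` once, cache it, and ship only the patch list; the device reconstructs version 12 locally.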
And that's a calculation which you have to do on the server, and you probably have to cache it because it's quite expensive. It's a bit of a pain. But the thing we selected for AGL was a system based on OSTree. Now, this is interesting. It combines the robustness of the dual-bank approach — where you have a complete file system image which you've built, which you've tested, which you boot — with an incremental update. It looks a bit more like Git in that respect, and I'll talk about that a bit more on the next slide. This is a more modern approach: it doesn't involve treating your flash as an array of bytes, because it's not really an array of bytes anyway. And it has a really nice advantage: it's actually quite easy to make reusable between projects. So the work we've done to get this working on these boards is actually quite portable between boards, and I hope it's quite portable to new boards as they come up. Now, OSTree wasn't developed by me; I mustn't take credit for it, and I won't. Colin Walters is sort of the guy behind it, and it really came out of the GNOME Continuous project. These were guys doing continuous integration builds of the entire GNOME project, and they wanted to be able to send those out to devices for testing. But it's actually found use in a bunch of other places. The Qt company have got an over-the-air update system based on it, and there are a few other projects which are picking it up. Really, the way to think about it is that it's like Git, but for a root file system rather than a set of source code. You could sort of imagine it as taking your root file system image, checking the whole thing into Git source control, and then checking it out at the far end. Now, that doesn't actually work in practice, because Git doesn't store the full set of permissions that you need for a Linux file system.
But the basic idea is that you have a set of files which are hashed and stored indexed by their content, and then it builds up directory trees as basically lists of pointers to all of these objects. Now, this has a nice advantage — I'll describe more about the directory layout in a second — which is that you only need one flash partition for your entire system. So you don't have the problem of the dual-partition approach where, in the factory, when you first manufacture the device, you have to take your flash and chop it up into several pieces. That process where you chop it up can never be changed once the device has left the factory, so you have to decide exactly how big your operating system is ever going to be during its entire production life. You take however big it is now, you make it a bit bigger, and then you have to pay for that overhead twice, because you have two copies of this OS partition. And the bit that's left over is the bit the user gets to use. So it can be expensive, and depending on whether you're selling mass-market or high-value goods, that cost of flash may or may not be important to you. The OSTree system instead works by having one partition and then multiple chroots inside it. So you have what looks like a full user space down in some subdirectory, and another one somewhere else, and the system chooses between those at boot time. One of the nice advantages of doing it this way is that, in the same way that when you type git pull it only pulls file objects that are new to the system, and does that in an incremental fashion, you get the same advantage here. Because internally every file is stored under a file name which is basically the hash of its contents, you can very quickly discover whether or not you already have any particular file, even if you've never heard of its name before.
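A toy sketch of that content-addressed layout, to make it concrete. This is my own illustration, not OSTree's real on-disk format (which also records permissions, extended attributes, and commit metadata): files live under the hash of their contents, the "do we already have this?" test is just a stat(), and checkouts are built from hardlinks so that identical files in two deployments share one copy on disk.

```python
# Toy content-addressed store in the spirit of OSTree's objects directory.
# Illustration only; function names are invented for this sketch.
import hashlib
import os

def store(repo: str, data: bytes) -> str:
    """Write data under the hash of its contents; a no-op if already present."""
    digest = hashlib.sha256(data).hexdigest()
    obj = os.path.join(repo, "objects", digest)
    if not os.path.exists(obj):  # membership test is just a stat()
        os.makedirs(os.path.dirname(obj), exist_ok=True)
        with open(obj, "wb") as f:
            f.write(data)
    return digest

def checkout(repo: str, digest: str, dest: str) -> None:
    """Materialise an object as a hardlink -- no copy, so two deployments
    containing the same file share one block of storage."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    os.link(os.path.join(repo, "objects", digest), dest)
```

Checking the same binary out into deployments v1 and v2 leaves a single inode with three names: the object itself, v1's copy, and v2's.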
And that's used by OSTree when it downloads updates, so that you only have to fetch new objects. There are actually some extended tricks it can do to get compression between objects as well. Unlike Git, it also uses hard links to share copies of files between two different versions of your operating system. So if, for example, you have a very large file in your update which doesn't change between releases — it might be a piece of configuration data, or in the car industry it might be some base map information, outlines of all the countries of the world — that probably doesn't change between updates of your system. And if that file is the same in both the old version and the new version, it uses Unix hard links to share that one copy of the file between the old and the new version of your checkout. This means that, rather than requiring twice as much space as a non-updateable system, it requires basically the space of the system plus the biggest delta you're ever going to deal with. With very large files, you can really get some savings there. Obviously, if you completely rebuild your system and every file changes, you pay for it twice. And while I'm saying it's like Git, it isn't actually Git. The biggest difference is that Git has a very simplified model for the permission bits on files, and this is by design: it's built for storing files in source control, so it's basically just the directory and executable bits. Whereas OSTree has the full set of Unix permissions and also supports all the extended attributes. So if you're using SELinux or SMACK, then all of those labels get bundled in, and it stores those as well. [Audience question, partly inaudible: how many old versions are kept around after an update?] It's optional, but one, normally. By default, it's one. And — sorry, the question was, when you get a new system update, what happens to the old one? Are all the old files kept around?
The answer is that it cleans them up. It will remove the old hardlink tree — that sort of chroot gets deleted — and it also garbage-collects objects which are no longer referenced. So if you're running version two right now and you're updating to version three, then as part of the update to version three, it will delete files that are only needed by version one. [Audience question, partly inaudible: if you're at version three and have to go back to two, does that work without the network?] Yes, you can, and it's arbitrarily configurable; the default is to keep two. So if you boot three and it's duff, then two will still be around. But one won't be, by default: two will be kept, but one won't. You can make that number five if you want. And that might make sense if the network was really expensive: you might want to be able to flick between several versions relatively cheaply as part of some testing process. [A short, partly inaudible exchange with the audience follows.] Cool — good question, thank you. OK, so this is an overview of what the flash partition layout looks like in this OSTree world. This is the Raspberry Pi example, but the others look basically the same. The important thing to distinguish here is partitions versus file systems. There are two partitions here: the bootloader partition, and then nearly all the flash is taken up by a single partition, which could be one ext4 or F2FS file system. And this is what we call a physical sysroot. That is distinct from a deployment sysroot, or root FS, which is the bit that is deployed.
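That cleanup step can be sketched as a mark-and-sweep over an object directory. Again, this is my own illustration, not OSTree's real algorithm (which walks commit metadata to decide what is still reachable): keep every object that some surviving deployment still points at, and delete the rest.

```python
# Toy pruning pass over a directory of content-addressed objects.
# `referenced` is the set of object names still used by kept deployments.
import os

def prune(objects_dir: str, referenced: set) -> list:
    """Remove unreferenced objects; return the sorted names deleted."""
    removed = []
    for name in os.listdir(objects_dir):
        if name not in referenced:
            os.remove(os.path.join(objects_dir, name))
            removed.append(name)
    return sorted(removed)
```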
So as a user space application, that's what you see. And you can have multiple deployments inside a single physical sysroot. When I was talking about the hardlink trees — this is an example of two deployments, both of which have bitwise-identical copies of bash. In that case, there's a /usr/bin/bash inside version one, and that's a hardlink to some object; there's a /usr/bin/bash inside the other chroot, and that's also a hardlink to the same file system object; and there's also a hardlink named after the hash of the contents of the file, and that lives in the objects directory. If you're used to Git, this is your Git objects directory. And of course, hardlinks in Unix are symmetrical: there's no master among these three, it's just three things all pointing at the same file. This is actually a little bit different from how Git works. When you do a Git checkout, by default it actually takes copies of the files, because you might want to open a file in Vim and edit it. Whereas OSTree doesn't want to take a physical copy, because we're about saving space here, and we can trust the system not to overwrite these files. In fact, we enforce it. So that's the layout on disk, on flash. And the boot process is... [audience question, inaudible] ...yeah, that is a problem. The problematic name is actually /sysroot, because that's by default bind-mounted back up to the top so you can have access to it. So, yeah, "probably don't do that" would be my best suggestion. Or it wouldn't be that hard to change it, although it might be easier to change your code that assumes a file called /sysroot. [Further inaudible questions.] Yes, exactly — maybe it's on the next slide.
But basically, anything inside here which is doing this hardlink stuff has a read-only mount over the top of the read-write mount for the whole system. So as a user space application, you're unable to change those files. And even if you changed them in a way that broke the hardlinks, it would be OK — but you wouldn't want to assume that. OK, so we've talked about the file system; I'll now talk a little bit about the boot process and how that works. The basic boot process is: the bootloader is the thing responsible for picking what's called a deployment — that is, whether it's version two or version three that we're going to run right now. That has to be done by the bootloader, because the kernel may change as part of these updates, and so the bootloader is the only thing which can make that decision and make sure the right kernel is loaded. So it does that, it boots the kernel, and then there's an initrd — or initramfs, one of the two — which has to be aware of OSTree; we've got some integration for that. The initrd is basically responsible for doing this chroot operation. By default, the root file system is this physical sysroot, and so the initrd has to do the process of chrooting, or pivot_root-ing, down into some subdirectory, and does some bind mounting, in order to give the system an environment that looks pretty much like a normal, standard Unix-type environment. And then finally, once it's done its thing, it goes and kicks off your normal /sbin/init — so that's going to be either systemd or a SysV init. So those are the actual changes you need on the final deployed thing. Now I want to talk a little bit about the changes we did inside Yocto — sorry, OpenEmbedded — for integrating this. We've added a new image type to OpenEmbedded, and that does some shuffling that makes the root file system updateable.
So you have to move all of the read-write data into /var. OSTree has this assumption that anything inside /usr is part of the updated world — you may not change anything in there, and it will get blown away every time the system updates — and anything that you may change lives inside the /var directory. So we make that change. We also make the usrmove change. This is a thing which is kind of coming from Red Hat: merging /bin and /usr/bin into a single /usr/bin. There are symlinks in place, so as a consumer of this you don't have to worry about it, but it's called usrmove and it's required. The build takes all that and then commits the result into an OSTree repo — this is a bit like a git commit of all these files. And that's logically the same as a tarball or an ext3 file system image, but in this case it's a directory of objects whose internal format you don't have to worry about, because OSTree deals with that. It's just another representation of this file system tree. Then, optionally, we upload any newly created objects to an update server, and this is done incrementally: if you've only changed half a dozen files, it's only going to upload half a dozen objects to the server. And finally, it creates an initial bootable file system image. One of the things that's sort of half-hidden from you when you're not doing software updates is the difference between a root file system image — as in, slash and anything underneath it — and the actual thing you have to write to an SD card to produce a working system. Normally the difference is relatively simple: you've got to figure out where the bootloader goes, and there's a partition that wraps it, and not much else. In the OSTree environment it's a little bit more complicated, because you have to do this thing where you take the root FS that comes out of Yocto.
And then you've got to bundle it all up: basically, put it under this tree and put some objects in here. So it does that, and the integration we've done actually does all of this as part of the normal bitbake process — these are image classes in bitbake — which is really nice: you just type one command and you get this whole thing done for you. There's a little bit of board work that we also had to do, and this is mostly figuring out how the bootloader is going to work and tweaking some things to do with partitions. So right now we have the Renesas R-Car Porter, which is almost the standard platform that AGL uses as its reference. AGL has lots of platforms, but R-Car is one of the important ones. We also support QEMU, and in fact we do that via U-Boot. By default, QEMU has effectively its own bootloader: you just pass it a kernel and a root FS. But if you pass it the kernel directly, you can't do software updates of it. So what we do is get QEMU to run U-Boot as its initial thing — there's an option in QEMU that does that — and from then on it's pretty standard. The MinnowBoard MAX we also support, and we do that via U-Boot too: it turned out to be easier to recompile U-Boot for the MinnowBoard than it was to figure out how to get this stuff working inside UEFI, so we did the former. We support the Raspberry Pi 3 as well, and you might spot the pattern here: we do that by chain-loading via U-Boot, because the Raspberry Pi has its own built-in bootloader, but it's actually very, very simple, and U-Boot is a supported thing. You just build U-Boot for the Raspberry Pi, and when the Pi boots, it boots U-Boot, and then from U-Boot we boot into our stuff. So far we've done everything using U-Boot, because it's been kind of the easiest way to get where we want to go. But there's not actually that much code in there, and so supporting other bootloaders is a relatively straightforward process.
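For flavour, the "one command" flow might look something like the following in a developer's build configuration. This is a hypothetical sketch modelled on the style of OSTree-integration layers for OpenEmbedded; the exact class and variable names are assumptions, not quoted from the talk:

```
# conf/local.conf (hypothetical fragment)
MACHINE = "raspberrypi3"

# Pull in the OSTree-aware image classes, so that a plain `bitbake`
# run commits the rootfs into an OSTree repo, pushes new objects to
# the update server, and emits an initial bootable SD-card image.
INHERIT += " sota"
```

After which an ordinary `bitbake <your-image>` would produce both the OSTree commit and the flashable image in one go, as described above.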
And it's one I would be interested in having a go at, but so far it hasn't been important enough to warrant not just doing the simple thing. OK, so that's the build process and the Yocto integration: basically getting all of those changes I talked about earlier in place, so that when you do a normal Yocto developer flow with bitbake, it does all the right things for you. One of the big changes is that, in order to support updateability, there's a very hard split between read-only files and read-write files. Anything which is updateable — as in, you want to be able to push updates to it — has to be in one of these read-only areas. [A stretch of the recording is unintelligible here.] Yes — I mean, it's pretty much everything except /var, and /etc is a special case as well. So /etc is an interesting case. The problem is basically this: there are some files in /etc which users, as in end users, may want to modify, and there are also some files in /etc which you, as the provider of the system, may want to modify. And the distinction between the two is actually quite hard to make concretely. So, for example, Wi-Fi passwords and usernames and passwords are probably on the user side of the line.
But then there are some borderline cases. For example, you could imagine that which particular modems on the system are being used might be a thing which is pushed down from above, or might be a thing that you've written some configurator for, which the user can change themselves. And this is a problem that exists in desktop Linux as well, right? Debian has this /etc/default thing, which is a bit crazy, trying to separate out the bits of the config which the package maintainers are going to work on from the bits of the config which the end user is likely to change themselves. So what OSTree does is three-way merges. As a provider, you can make changes to files and they get updated. If the user hasn't changed a file, those updates just get applied straightforwardly; if the user has changed that file, then you get a three-way merge between the old and new versions from you, the provider, and their changes. Now, this is crazy dangerous, right? And probably not the thing you're going to use in production. However, it does work today. And one of the problems with doing a read-only root FS is that, in order to make the system boot at all, you have to go through and find all of these cases and do some tricks to avoid them — and that might involve patching upstream packages to change these things. The long-term solution for this is the stateless stuff that the systemd guys are doing. Basically, what the systemd guys say is that packages may not write stuff to /etc. Package configuration belongs in /usr along with the binaries, which makes sense, right? Because configuration that's provided by the package and binaries that are provided by the package are fundamentally the same kind of thing. And then /etc should be empty, the administrator can change it, and the things you modify locally get stored in /var, I think.
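The three-way /etc merge can be sketched per file. This is my own simplification — whole-file decisions only, while the real merge logic is more involved: compare each file in the running /etc against the old shipped default; if the user never touched it, take the provider's new copy, otherwise keep theirs.

```python
# Toy per-file three-way merge for /etc (a simplification, not OSTree's
# actual algorithm). Each dict maps a path to that file's contents.
def merge_etc(base: dict, new: dict, user: dict) -> dict:
    """base: /etc shipped with the old version; new: /etc in the update;
    user: /etc currently on the device. Returns the merged /etc."""
    merged = {}
    for path in set(base) | set(new) | set(user):
        if user.get(path) == base.get(path):
            # User never touched it: take the updated copy (or drop it
            # entirely if the update removed the file).
            if path in new:
                merged[path] = new[path]
        else:
            # User modified it: their version wins over the provider's.
            merged[path] = user[path]
    return merged
```

The danger the talk mentions is visible here: a file the user edited never picks up provider changes, even important ones, unless something smarter reconciles the two.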
So they're working on this, but right now it's not quite ready, and you're very quickly going to find packages that don't work. The three-way merge on /etc means that your system will actually work today, although you can probably break it if you try really hard. And it's a better situation to go from having updates for most of the system, with some corner cases where you can break it, and then eventually fix all these problems and delete all the files from /etc, than to first have to make all of those changes to all the packages and, while that process is going on, not actually be able to do software updates at all. So it gives you a reasonable development balance, which I think makes sense. But yes, it's dangerous if you rely on this. [Audience question, inaudible.] I don't know, I haven't looked deeply into that corner of OSTree. I suspect it wouldn't be that hard to enforce that, or maybe, when the three-way merge fails, just abort and blow away /etc, but that's outside my understanding really. I mean, the right answer is to do the split, right? That's the right thing to do. So, oh yes, writable files: all the user data is in /var. That's the key change. Now, for the AGL application framework, this is interesting. Some background: if you've used Android phones, you've used something very similar to the AGL application framework. The idea here is that you can have a split between two sets of developers for a particular platform. You have the people who develop the hardware and the stuff that enables it, which in the Android case is, say, Motorola who make the phone. And then you have application developers, who develop applications which are portable between multiple target devices. AGL has that support, and historically it was part of the Tizen world.
And so what you end up with is two software update domains: you have the underlying operating system, the kernel and all the supporting pieces, and you have the applications running on top of it, as in the app-framework-managed applications. They can come from different places and be updated on different cycles. Now, obviously the user-installed applications are in /var, specifically in /var/lib/afm on AGL. But the question really is: how do you manage that directory? The simple, pure answer would be to say: well, it's in /var, it doesn't belong as part of our system, thank you, goodbye. And that runs into problems when people want to test. One of the common use cases you're going to have is: OK, I just want to build this thing, flash it onto a drive, and get one of my testers to try it out, or something like that. And if they, as part of that process, then have to go and manually download a whole bunch of applications in order to get a usable system, it's not really a usable platform for people to work on. So in that case you're probably going to end up writing some script called, like, install-some-stuff-on-my-new-device. But now it's a pain, right? You build this image in this beautiful BitBake world, and then run your own random scripts on the end of it in order to make things usable. And that's not pleasant. So that would be the ignore option. The middle option would be to have a concept of applications which are installed by the system and applications which are installed by the user. You could theoretically have a situation where upgrades happen independently in these two worlds: as a platform provider, I might want to provide some things, and as a user, I can manually install updates. But then there's a real question about exactly what the priority rules between those two look like.
And it's also going to be a lot of integration effort, and there are probably some corner cases that are going to be very surprising, like: what happens if a user upgrades a minor version and then you upgrade a major version? Who takes priority? So what we actually did inside AGL was a little bit simpler: we populate this /var application directory just once, the first time, when the system is built, and after that we don't update it. And the reason is that for most of the use cases this sort of makes sense. When the devices are deployed in the field, they're going to get updates for these applications via the application-specific update process, and that's fine. When you're testing, you're probably testing the low-level stuff, as in the kernel and board support level, in which case you probably don't need updates for the user applications; the default set from last week, or whenever you built the image, is probably going to be fine. So it works for that person. And the application developers are probably going to have a direct install process straight into a prebuilt system, because they often won't be building the image themselves. They'll just be getting a binary image from some server that's been built by somebody else, and then using a process like on Android, where you type adb install, to push their updates to it. So they're satisfied by this solution as well, because they don't need to update these things through the OTA route. So, a quick getting started, and then we're done. If you want to try this stuff out, it's pretty straightforward. Get the Charming Chinook release of AGL; a Google search will find it. There's this aglsetup.sh script that basically configures a bunch of layers and boards for you. If you just pass the agl-sota feature to that, it will work. And then basically you build and you're done.
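From memory of the Chinook-era workflow, that getting-started flow looks roughly like this. Treat it as a sketch: the manifest URL, machine name, feature name, and image target are as I recall them from that release and may have changed since:

```
# Fetch the AGL layers (Gerrit repo manifest, Chinook era)
repo init -u https://gerrit.automotivelinux.org/gerrit/AGL/AGL-repo
repo sync

# Configure a build with the SOTA feature enabled
source meta-agl/scripts/aglsetup.sh -m raspberrypi3 agl-demo agl-sota

# Build a software-updatable image
bitbake agl-demo-platform
```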
The code, if you want to look at it, is in meta-agl-extra, in meta-sota. Once you've got the AGL code down, it's going to be easy to find, and I've included a link to the wiki page with the getting-started instructions for that. If you're not using AGL but still want to use this, then we've extracted all the functionality out into a layer called meta-updater, which is basically just that code repackaged as a thing you can pull in anywhere. It should be pretty straightforward to add to any OpenEmbedded project: you pull in this layer, plus a couple of support layers. Right now our update client uses Rust, so you need meta-rust, but that actually works really well right now. And just to prove that it is easy, we have an example project: if you go to our GitHub and look for the garage quickstart for Raspberry Pi, there are instructions for doing that. It's basically: git clone the repository, run the setup, then bitbake, and you get a software-updatable image. I've got one minute, so I will say: a nice use of this is for testing continuous integration builds. For example, host your build output on an HTTP server, and then pull updates down rather than reflashing entire cards in order to do CI builds. That's really nice, because if you do CI builds the normal way, you end up destroying SD cards, because they don't like huge amounts of writes all the time. And this is maybe easier than switching cards or netbooting. OK, so that's me. AGL now supports this out of the box, and if you're not using AGL, then pull in meta-updater and you're done. I think that's me, and that's time. So, yeah, thank you very much. I'll hang around afterwards if you want to ask questions. But yeah, thank you very much for your attention.