Who here has ever had the misfortune of having to develop any bits of software that had to work on IE6? Anyone doing web development in the late 2000s? OK. Anyone remember using IE6 in the late 2000s? OK. Yes, remember how completely terrible IE6 was? And I guess the point of my talk is that a 2020s Google monopoly will not actually be any better; we could end up in the same horrible situation in the world of IoT as there was in the world of desktop software in the late 90s. I just went to the talk on Android Things, and it turns out it really is more terrible than you ever thought, because with Android Things they're going to use signing and signed bootloaders, so that you can never, ever install an open alternative on any of the pieces of hardware that they sell. The thing that I believe is that platforms are not really the answer here: this idea of one vendor providing an all-encompassing solution to whatever particular problem you have, which they completely control, is not really where you want to go in the world of IoT. There are a lot of companies out there who would really like to be the early-2000s equivalent of Microsoft, because having that power is clearly going to make you an awful lot of money, and there's an awful lot of investor money chasing this world as well. Google's a big player, but there are dozens of them, from the smallest few-person startups upwards. They're all basically based on the same principle: we're going to lock you into our platform, with some device end and some server side. One of those is eventually going to win, and then we will end up in this horrible world of the IE6 of IoT, and I kind of want to put an end to that. What I think is a better solution is one based more on a bunch of tools that you can piece together to solve your problem.
And then, when one tool sucks, or one tool vendor starts becoming too arrogant, you can throw their tool away and go and use a different one, and the rest of your system works as it always did. For me, the thing I want that tool to solve is over-the-air updates. This is interesting because basically everything needs this in the end: whatever you're building, if it's running Linux, you're going to have to update the thing in the field. It has been solved before, but it's not what I'd call a solved problem. If you want a C compiler you can just go and get one; there are two, and they're both really good. But for software updates there's a bunch of solutions, and they're all compromises in slightly different ways. There hasn't really been a coming-together where all of these projects converge on a single project that solves the problem more or less completely, for more or less everyone, so that when someone says "OK, I need to do software updates" you can say "OK, we'll do it like this". I'm not going to claim that what I'm talking about is that solution, but I want to push in that direction, and hopefully we can build a thing which is open and lets everyone solve this problem. Now, when talking about software updates, one obvious solution is to use the package manager which is already built into your operating system, for example RPM or DEB. This has the advantage that it's quite easy to do: you're familiar with the tooling already, and Yocto already generates RPMs, so it's just a matter of piecing those bits together with a server to serve the updates down, or maybe you run an apt repository or something. So "easy" is its advantage.
The disadvantage, though, is that it's not safe against power loss. If you're in the middle of installing an update and somebody pulls the power on your device, then it's left in an indeterminate state. That's kind of OK for computers, although people still hate the "I'm sorry, I'm updating your system, you may not use it or touch it or turn it off or do anything with it" experience. But it's not really OK for a thing that people consider to be a device: it's a physical lump of metal, and you expect to be able to do whatever you like with it. So that's one problem. The other problem is that when you're installing RPM packages, you have the issue of dependency resolution. You might have one application that depends on some version of a library, and there will always be some set of versions which have been tested, or at least which you're reasonably sure will work together. If you've ever used the non-stable versions of Debian, then I'm sure you've hit an error that looks a bit like this one here; this was the first result from Stack Overflow when I typed it in. You run apt-get update, apt-get upgrade, and it says: I'm really sorry, I can't continue, for a really complicated reason involving package versions that conflict. This problem is actually really, really hard, and if anyone here is a Debian developer, amazing thanks to you guys who manage to keep this thing working at all. But you have this situation where the old version of the system over here is self-consistent, correct and installable, which is fine; and the latest versions of all your packages are also, as a collection, installable together; but there's no way of migrating from the old world to the new world. That's what this error is saying, somewhere, and actually solving this is really hard.
The Haskell guys actually use an automated theorem prover to try and figure out how to do package updates on their system, which is clever, but maybe not the right solution. An alternative to this, instead of having a system which is fundamentally a long history of composed package installs and removes, is to distribute an entire root file system as a single unit, and keep two copies of it. The way this works is you take your flash and divide it into two partitions, so you have an old version and a new version. You also need a bootloader, so that's a third partition, and you probably also need some space for changes that users might want to make. So you end up with a partition layout that looks a bit like this, maybe with two extra partitions if you need your kernels to live on their own partitions. And this is safe, right? You're running the system from partition A; while you're running that, you install your update onto the B partition; and then, only when it's completely finished and flushed to disk, you make the switch in the bootloader, and next time it boots, it boots from B. There are some disadvantages to this that make it harder to do in practice than it is in theory. The first is that you need to partition the flash during manufacture, and with this method the flash layout is one of the things you can't ever update after the system has left the factory. Exactly where to split those partitions is a very annoying problem, because if you make the partitions too small, you won't be able to install your latest, greatest software. So you make the partitions really big, but then you have all of this unused space at the end of those partitions, which you have to pay for and which the user can't use for whatever they're doing: storing their files, backups or whatever.
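The dual-bank flow described above can be sketched as a tiny state machine. This is a minimal Python sketch of the idea, not any real bootloader's API; all the names are made up for illustration:

```python
# Minimal sketch of a dual-bank (A/B) update flow. The bootloader
# flag is flipped only after the inactive bank has been fully
# written and flushed, so a power cut at any point leaves the
# device with at least one complete, bootable system.

class DualBankDevice:
    def __init__(self):
        self.banks = {"A": "v1", "B": None}  # bank -> installed image
        self.boot_flag = "A"                 # which bank the bootloader picks

    def install_update(self, image):
        inactive = "B" if self.boot_flag == "A" else "A"
        self.banks[inactive] = image         # write the whole rootfs image
        # ... sync / flush to flash would happen here ...
        self.boot_flag = inactive            # the atomic switch, done last

    def boot(self):
        return self.banks[self.boot_flag]

dev = DualBankDevice()
dev.install_update("v2")
print(dev.boot())  # "v2", while "v1" is still intact in the other bank
```

The key property is the ordering: if power fails before the last line of `install_update`, the flag still points at the old, untouched bank.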
Even worse, you have to pay that cost twice: once for the A partition and once for the B partition. So that makes it more expensive than it would otherwise need to be. It's also not trivial to do a read-only root FS in Yocto. If you do a Google search for read-only root FS you'll find long descriptions of how to do it, but if you're not doing it today, it's probably a bit harder than you expect to make it work, because most unmodified Linux applications assume that they can write things more or less wherever they please. Some are reasonably good about keeping their writes in /var, but you end up having to make a very strong split between files which are updated by the vendor and files which are updated by the user. Without making that split, it's very easy to find that, for example, some files in /etc are ones you want the user to modify, while other files are ones you as the vendor want to supply updates to. So it's doable, but it's quite hard. And the fourth point is that the work to do these changes is often quite hard to reuse between projects. I think that's what we see today with these systems: lots of people have done this as part of a production situation, but there isn't really an open source project where everybody upstreams their stuff, so that it becomes a system everybody can just use and you get this benefit of sharing. They tend to be one-shot projects to get a thing out of the door; maybe you modify it and reuse it internally, but it's not a public piece of commons software that we can all use. Even if it's open source, freely licensed and available on GitHub, that's different from actually, in practice, being able to share these updates between people with completely different ideas of what they want to build.
So the technology I want to talk about today is this thing called OSTree. The way to think about OSTree is that it's a bit like Git, but for file systems. One OSTree commit corresponds to a complete root file system image, and you can have multiple of these commits inside a single flash partition on your device. You can then have multiple checkouts of this software on that flash partition, which lets you do atomic switches between versions of your software. Now, this wasn't developed by me, and I don't want to take credit for it: it came out of the GNOME Continuous project, and Colin Walters is the guy behind it. But I'm really happy with it and super keen on it and want to talk about it some more. Because it has this Git-like model of content-addressed objects, things like incremental fetches of changes to your file system are very easy. With the dual-bank system that we talked about earlier, you have problems: the image might be hundreds of megabytes, and you might need to download that over, well, maybe Wi-Fi, but quite possibly a 3G connection. So you're going to need some kind of compression to send that update. You could use rsync, but it's a bit of a pain; you could use bsdiff or bspatch to send those updates; but it's a whole bunch of extra work that you have to do to make the thing usable in a practical production situation. With OSTree, it works very similarly to Git's dumb HTTP transport. You put all of the updates up on a server somewhere and then fetch them down incrementally. Just like a Git pull, with an OSTree pull you can say "go and pull this commit", and it will fetch that commit from the server, and then fetch any referenced objects which are missing locally, until it has built the entire system in a local content-addressed object store. One thing to be aware of is that it's not actually Git underneath.
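The incremental-pull idea is easy to demonstrate in miniature. This is a hedged Python sketch of a content-addressed store, not OSTree's actual wire format: because objects are named by their hash, a pull only has to fetch objects the client doesn't already have.

```python
import hashlib

# Toy content-addressed store illustrating the idea behind an
# incremental pull: the "server" and "client" are plain dicts
# mapping hash -> content.

def oid(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

remote = {}   # server-side object store
local = {}    # client-side object store

def publish(files):
    """Server side: store each file by hash, return the commit (its object list)."""
    commit = []
    for data in files:
        remote[oid(data)] = data
        commit.append(oid(data))
    return commit

def pull(commit):
    """Client side: fetch only the objects missing locally, return how many."""
    fetched = 0
    for h in commit:
        if h not in local:
            local[h] = remote[h]
            fetched += 1
    return fetched

v1 = publish([b"libfoo", b"app-1.0"])
print(pull(v1))   # 2: nothing local yet, both objects fetched
v2 = publish([b"libfoo", b"app-1.1"])
print(pull(v2))   # 1: libfoo is unchanged and already local
```

Pulling the second release only transfers the changed application object, which is the whole point when your rootfs is hundreds of megabytes and the link is 3G.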
Git deliberately has a very, very simple permissions model, where there's basically a directory bit and an executable bit and nothing else. OSTree has a full set of permissions: it contains user and group IDs for files, the normal Unix-style permission bits, and also all of the extended attributes that you need for SELinux and Smack labels and things like that. It's more complete than Git in that respect, because it's designed for a full system image rather than for a bunch of source code. The other change is that it doesn't use SHA-1 hashes to identify objects; it uses SHA-256, or some truncated version of that. If you're following the news today, you know: SHA-1 collisions, blah blah blah. So it's safe against that, although that's probably not actually a security threat in practice. I've described it in relation to Git, so I thought I'd also explain what it looks like on disk, and come at this from a different angle. This is a Raspberry Pi file system layout. We have two partitions here: one small partition, which is basically just the bootloader (we're actually using U-Boot on the Raspberry Pi; it does work), and this is probably only a few tens of megabytes. Then the whole of the rest of the SD card is a single file system, which can be any file system you like: F2FS, or ext4, or your favourite. Inside this file system is what is effectively the root file system image, but it doesn't look like a traditional Unix root file system. The actual root file systems are down in a subdirectory, quite a long way down, and we've got two of them in this particular layout: this is the first root file system, and this is the second. When the system is booted, it will chroot down into this point. Then there's some bootloader magic, and this objects directory, which is the same as your Git objects directory.
This holds the content of files, named after their SHA hashes. The terminology here is "deployment sysroot" and "physical sysroot", to distinguish between these two kinds of root file system. One of the nice things about this way of doing things is that you can use hard links to share files which have the same bitwise content between an old and a new deployment. In the dual-bank case you have to store two complete copies of your entire system, plus a bit. With OSTree, if a file is identical between the pre-update and post-update versions, there is only a single file on disk: there will be multiple directory entries pointing to that file, but if you've got large content that doesn't change between releases, this offers quite a big saving. In any case, it generally means that rather than needing more than double your file system size, you need significantly less. You can imagine how this works: when it's checking out a file, it knows the hash of the file, and it hard-links it from the corresponding entry in this objects directory of files identified by their hashes. Now, this all sounds beautiful and rosy, but when you're doing a checkout, you do still have to create all of the directories, because hard links in Linux are to files only, not to directories. So if you have two directories that happen to have exactly the same content, you still have to have two copies of the directory on disk. That's obviously much smaller than the file content for most practical systems, but it does mean that the checkout takes tens of seconds, where in theory it could take only a few. So that's the layout of the file system on disk, the physical root file system inside the flash. Now I want to talk a little bit about how this looks to a userland application inside this updated environment.
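The hard-link sharing between deployments can be shown directly with standard Python and a temporary directory. This is an illustrative sketch of the mechanism, not OSTree's real on-disk naming scheme (the object name below is abbreviated):

```python
import os
import tempfile

# Two "deployments" both hard-link the same content-addressed
# object, so the bytes exist once on disk with several directory
# entries pointing at them.

root = tempfile.mkdtemp()
objects = os.path.join(root, "objects")
os.makedirs(objects)

# One content-addressed object (name shortened for the sketch).
obj = os.path.join(objects, "abc123.file")
with open(obj, "wb") as f:
    f.write(b"large blob that does not change between releases")

for deployment in ("deploy-0", "deploy-1"):
    d = os.path.join(root, deployment, "usr", "lib")
    os.makedirs(d)                               # directories must be recreated...
    os.link(obj, os.path.join(d, "libbig.so"))   # ...but files are hard-linked

a = os.stat(os.path.join(root, "deploy-0", "usr", "lib", "libbig.so"))
b = os.stat(os.path.join(root, "deploy-1", "usr", "lib", "libbig.so"))
print(a.st_ino == b.st_ino)  # True: same inode, content stored once
print(a.st_nlink)            # 3: the object plus two checkouts
```

This also shows the caveat from the talk: the `os.makedirs` call is the per-checkout directory creation cost that hard links can't eliminate.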
Inside an OSTree deployment environment, things look very, very similar to a standard Unix/Linux file system. There are some changes, and those are all related to the split between things which are read-write and belong to the device itself, and things which belong to the vendor, the person supplying the file system update. OSTree requires the /usr merge; the scripts that we have at the moment do that for you, so you don't need to worry about it. Basically, all of the executables go into /usr/bin, and everything in there is read-only. Even though the file system itself is a read-write file system, because we use the same ext file system to store read-write data as well, it's bind-mounted with the read-only bit set, so even badly behaved applications can't go and corrupt these files. So that's everything in /usr: read-only, managed by the OSTree update process. Everything in /var is read-write, and that's for the storage of your user data. In fact, OSTree moves /home into /var as well, and I think we symlink /home across. So the home directories live there, and things like ConnMan wireless configurations, all that stuff, go in there too. Unlike some ways of doing this, we actually pre-populate /var with whatever it contains when you do the build. This means the system doesn't have to be able to survive /var being completely blown away to an empty directory and then automatically recreate the directories and things it needs on startup. If you have a system that can do that, great; I think the systemd guys are pushing towards this with the systemd "stateless" stuff, where they're planning on having a very, very strict distinction between read-only files which are part of the operating system, and a read-write area where you can store files.
In that world, as an application, you have to assume that at any point someone might rm -rf that whole read-write partition, and you have to be able to recreate it from scratch. This isn't as aggressive: /var is pre-populated with whatever the first build of your software contained. If you have a very, very long update chain, then you do have to be able to create new directory entries and new things in there, but the basic skeleton is provided as part of the build and will be on the flash when you boot for the first time. /etc is a little bit magic: it's actually three-way merged between the running copy of /etc and the old and new update versions, and this is done by OSTree. Now, there's a question here about how safe that is, right? If you're doing three-way merges, then there's clearly a possibility of the merge failing. That's definitely true, but it has a really nice advantage, which is that during the early stage of a project you can be a little bit lax about which files in /etc are managed by the user and which are managed by the software update process. This is actually one of the areas that makes a read-only root FS quite hard to do, because some files in /etc you do actually want the user to be able to change, and for some it's really not clear. For example, settings for, not the Wi-Fi credentials themselves, but more general things like which Wi-Fi devices are enabled: those settings could be in either category. They could be a thing you want to be able to push out as a vendor, or there may be some application which lets the user change them. In the early days of a project, it's a bit of a pain to have to manually sort through all of those cases and say: OK, this file is definitely managed by this party. With this /etc three-way merge, it just does that in most cases automatically for you.
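The per-file decision rule behind such a three-way /etc merge can be sketched in a few lines. This is a simplified illustration of the idea, not OSTree's actual merge code; real merges also deal with permissions and more subtle cases:

```python
# Sketch of a three-way /etc merge: compare the running /etc
# against the old vendor defaults. Untouched files follow the
# vendor's new defaults; locally modified files stay local.
# Configs are modelled as dicts of path -> file content.

def merge_etc(old_default, new_default, local):
    merged = {}
    for path in set(old_default) | set(new_default) | set(local):
        if local.get(path) == old_default.get(path):
            # User never touched it: take the vendor's new version,
            # which also drops files the vendor has removed.
            if path in new_default:
                merged[path] = new_default[path]
        elif path in local:
            # Locally modified: keep the user's copy.
            merged[path] = local[path]
        # else: deleted locally, stays deleted.
    return merged

old = {"wifi.conf": "scan=on", "motd": "v1"}
new = {"wifi.conf": "scan=on\nroam=on", "motd": "v2"}
run = {"wifi.conf": "scan=off", "motd": "v1"}  # user edited wifi.conf

print(merge_etc(old, new, run))
# wifi.conf keeps the user's edit; motd follows the vendor to v2
```

The nice property is exactly the one from the talk: nobody had to declare up front whether `wifi.conf` is "vendor" or "user"; the merge infers it from whether the file was touched.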
Then, towards the end of the project, you can migrate to something that looks more like systemd's stateless model, where all of the configuration files which belong to the system live inside /usr, and all of the things that belong to the local device configuration live in /etc. But that's one of those processes where, right now, some parts of some systems do it well, but it's quite easy to find applications that don't understand it or aren't in that world yet. And finally, as a userland application, you have this bind mount back to the top of this whole tree, and that's used by the OSTree tools to push software updates. So that's what user space looks like to an application. The boot process for this is actually relatively straightforward; it's more or less as you'd expect, given the rest of the description. There's a bootloader, U-Boot. Right now everything we've done uses U-Boot as the first bootloader. It would be pretty straightforward to move to other bootloaders, but so far it's always been easier just to run U-Boot on top of whatever is already on the platform, rather than port the work to a new bootloader; there are only tens of lines of code that need to be integrated at that point. So the bootloader picks a deployment, and this is where the A/B atomic switch happens: in the bootloader. That deployment identifies a kernel, and then there's an initrd that has to be aware of OSTree. That initrd is responsible for doing the chroot process: rather than having the root of our SD card as the top of the file system tree, we chroot down, keyed on this deployment. So it does that, sets up these bind mounts, and then runs your /sbin/init, and the system boots as normal. The update process is quite nice: updates run in the background, so you never have to go into this "I'm updating your Android phone" mode, which is quite nice for users.
The first step of the update process, given a root hash, so your update process starts by saying "please deploy this commit hash onto my system", is that OSTree goes to a server and fetches down any missing objects. For that fetch you can use a dumb HTTP server, as long as you don't need privacy for the actual content of your update, because all these objects are identified by their hash. There's no way that someone who can intercept that communication can, for example, put random files on your system, because as soon as the objects are downloaded, they're checksummed, as they stream in, before they're written to disk. So you fetch all these objects down, and you now have a complete set of objects that represent the file system; that's obviously incremental, so any objects you already have aren't fetched again. That part is a bit like git pull. Then there's a process which is a bit like git checkout, where it builds a chroot environment that hard-links back to all those objects it has just fetched. And finally there's the atomic point where you flip the next boot in the bootloader, and then the system basically has to wait for a reboot. One of the things to note here is that none of those changes are visible inside the Linux user space. This is unlike, for example, an RPM system, where installing a package just changes some files on disk underneath potentially running applications. That mostly works, because if you've got a library open, you keep reading the older version, but it's also possible to hit situations where applications start behaving a bit weirdly because libraries that they depend on have been overwritten underneath them.
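The "dumb server is safe for integrity" argument rests on verify-before-write, which is simple to sketch. A hedged Python illustration (the helper name and the callback are made up for the example):

```python
import hashlib

# Why a dumb HTTP server is enough for integrity: every object is
# requested by its expected hash, and the client verifies the
# downloaded bytes before ever writing them into the object store.

def fetch_object(expected_hash, fetch):
    data = fetch(expected_hash)                  # untrusted transport
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_hash:
        raise ValueError("corrupt or tampered object, refusing to store")
    return data                                  # only now safe to write to disk

good = b"rootfs object contents"
h = hashlib.sha256(good).hexdigest()

print(fetch_object(h, lambda _: good) == good)   # True: hash matches

try:
    fetch_object(h, lambda _: b"malicious payload")
except ValueError as e:
    print(e)                                     # tampered object is rejected
```

Note this gives integrity, not confidentiality: an eavesdropper can still read the update contents, which is why the talk adds HTTPS and client authentication separately.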
We don't have this problem, because we're building a completely new file system somewhere else that none of the user space applications can see or care about, and when the system next boots, we get that as our root file system. The question about when to reboot is kind of out of scope here. If you're, for example, a car or a TV, you probably just wait for the user to naturally turn the device off, and next time they turn it back on, it will come up with the new version. If it's something like a router, which is powered up essentially all of the time, then you probably want some logic that figures out when would be a good time: maybe you wait a couple of days for the user to naturally turn it off, and if they don't, wait until 3 o'clock in the morning and trigger a reboot yourself. And then, obviously, the reboot happens and the system comes up in the new world. So that's the nature of the OSTree part. It's an important part, and a difficult part, but you still need to build it into a whole system which pushes updates from some server down to some update client, which identifies the device to the server and then eventually hands off to OSTree. The problems here are things like: you need to serve the update files themselves from somewhere. That's relatively straightforward, but they do need to live somewhere, and if the actual contents of your updates are considered secret, then it's a little bit harder. You need some way of authenticating devices to the server, and there's some support in OSTree for doing this: you can pass in extra headers which get pushed up to the server, so you can write an HTTPS server that authenticates the clients in whatever way you choose. There's another part which belongs on the server, which is deciding who gets what updates.
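The reboot policy described above (wait for a natural power cycle, then fall back to quiet hours) is easy to express as a small predicate. This is a hypothetical sketch; the grace period, the 3 a.m. window and all names are invented for illustration:

```python
from datetime import datetime, timedelta

# Sketch of a reboot policy for an always-on device: give the user
# a grace period to power-cycle naturally, then reboot ourselves
# during quiet hours (3 a.m.) once the device is idle.

GRACE = timedelta(days=2)

def should_reboot_now(update_ready_at, now, device_idle):
    if now - update_ready_at < GRACE:
        return False                # still hoping for a natural power-off
    return device_idle and now.hour == 3

ready = datetime(2017, 2, 1, 12, 0)
print(should_reboot_now(ready, datetime(2017, 2, 2, 3, 0), True))   # False: inside grace period
print(should_reboot_now(ready, datetime(2017, 2, 4, 3, 0), True))   # True: quiet hours, grace over
print(should_reboot_now(ready, datetime(2017, 2, 4, 15, 0), True))  # False: not quiet hours
```

A real device would also want jitter on the reboot time, so a whole fleet doesn't reboot and re-contact the update server in the same minute.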
Now, this is the point where a thing that looks like a relatively quick Ruby on Rails hack starts turning into something a little bit more complicated, and it's probably going to be a nightmare by the time you've finished. It's always going to be the case that, OK, we want to have a beta-test fleet, or we want to roll out an update incrementally: first we'll send it to 1% of our customers, then to 10%, and if none of those overload our support centre, we'll roll it out to everybody. And there's always going to be the case where one particularly important customer phones up and says it's broken, and your developer says: can they just try this version of the software, on that one device, like, right now, please? And of course your Ruby on Rails app grows another pile of stuff, and you can imagine this turning into a nightmare pretty quickly. And finally, there needs to be some method of provisioning new devices into the system. If you're doing high-volume devices, this will happen somewhere on a production line; if it's a lower-volume industrial IoT type of place, then there's probably going to be some guy who does it. But he's still going to need a process by which you can say: OK, I've got a new device, I need to give it some secrets, and I need to match those secrets up somewhere on the server. And you're probably going to want some kind of database, rather than a Google Docs spreadsheet, that records that this is Mr Brown's device and it's got this particular serial number. So, options for doing all this. If you already have this infrastructure, then the thing to do is to use our OSTree integration for Yocto, and then you just shell out to the OSTree tool, or drive it via its really nice API.
Then you get the underlying atomic mechanics of updates, without using any of our server infrastructure. If you don't have that infrastructure already, then I would highly recommend not trying to write it yourself, but to use an existing one. Remember, of course, that the update server has the ability, by design, to install arbitrary software on an entire fleet of devices, and if some joker decides it would be a funny laugh to push a malicious update to your entire fleet, that's going to be very bad news, crossed with potentially very, very expensive to fix, when you keep getting angry phone calls and have to physically ship the devices back to be reprogrammed. So we actually have one of these, both a server and a client, as a matched pair. The server is a Scala app, and for the client we've got two implementations, one in Rust and one in C++, which is a month or two away from being a production-ready piece of software. These are both open source, MPL-licensed. The original spec for this was developed by GENIVI, an automotive organisation who wanted a software update system for their platform. So the requirements are not something we've randomly made up; they've come from a reasonably serious organisation. It was designed for automotive, but it's actually more generally applicable to other systems, and it was ATS that did the development for this, under contract with GENIVI originally. The server is Scala; it has a Dockerfile that will run it; there's a bunch of microservices that work together; it's pretty straightforward to stand up. If you don't want to do that, then we also host it as a SaaS, and you can pay about a buck per device per month and we'll do the hosting of that component for you. Either way is cool, whichever makes more sense for you.
The first 20 devices on that are free, so if you're just playing with it, that's probably the easiest way to get started: just point it at the ATS Garage and worry about the scaling options later. So, I've described all of the components of this solution; now let's talk a little bit about the integration work we've done. Right now all the integration work is for Yocto; we haven't done the Buildroot thing yet. It's probably not that hard, but we haven't done it. The layer you want is called meta-updater, and it provides the integration with Yocto for building these images. The first part of that is a new image type. It lives alongside "create a tarball of my root FS" and "create an ext4 image of my root FS": we have another one, which is "take my root FS and commit it into an existing OSTree repository". An OSTree repository is just a directory somewhere with a bunch of objects in it, and that's the point where your build becomes an OSTree thing. There's also some code in there for generating the initial image to write to flash. Normally in a Yocto system, the root FS generated by Yocto and the actual thing you write to flash are very, very similar: you generate an ext4 image, and on the flash you have a partition table and you put the image in there somewhere. This is a bit more complicated, because there's now quite a big distinction between the thing that you write to flash and the root FS image as the user space sees it; there's a bbclass in meta-updater that handles that distinction for you. And finally there's a bit of code which will upload changes from your local repository up to the server. OSTree doesn't come with this functionality by default, so we have that piece, and it does this reasonably efficiently, so it will only upload objects which are
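For orientation, dropping meta-updater into an existing Yocto project then looks roughly like the fragment below. This is a hedged sketch based on the layer and distro names mentioned in the talk; the exact paths, machine name and variable values are placeholders and may differ between meta-updater releases:

```conf
# bblayers.conf: add the updater layers next to your existing ones
BBLAYERS += " \
    /path/to/meta-updater \
    /path/to/meta-updater-raspberrypi \
"

# local.conf: pick the OTA-enabled example distro and target machine
DISTRO = "poky-sota-systemd"
MACHINE = "raspberrypi3"
```

The example distro is essentially Poky plus a small include file, so an existing project would typically copy that include into its own distro rather than switch distros wholesale.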
not already on the server, and it does it in a way which is safe against uploads failing halfway through. It does quite a smart thing where it walks the tree and knows that if a given node of the tree is on the server, then all of its dependent nodes are also available on the server, so it can skip pushing whole parts of your software update tree which are already duplicated there. That's quite nice, because you can build on a remote build machine and then push the result up to an update server much, much quicker, since it's only a few tens of megabytes of files, rather than having to ship an entire multi-hundred-megabyte software update image across your network; regardless of how fast your network is, moving only a few tens of megabytes is much better. Consider a setup we've used, where we hosted a build machine in Amazon AWS and had the builds happening there, because you can buy some properly powerful boxes on AWS, then pushed the results up to an update server and incrementally downloaded them to a local development machine. That can actually be quicker than building on my local machine, even though there's lots of bandwidth between my local machine and the flash card; it's actually quite nice to build on a really fast machine and just pull the updates incrementally, and we've used that for development work. So that's meta-updater, which is completely generic across any board you want to target. One of the goals here, remember, was that for this system to be useful, it needs to be very easy to put it onto a new piece of hardware, so as much as possible of the work we've done is agnostic to the underlying hardware. But there is a hardware-specific piece, and that's a set of layers alongside meta-updater which contain the bootloader integration and sometimes some small changes to make things work on specific hardware. We've done that so
So far we've done that for QEMU, the Raspberry Pi, and the Minnowboard Max (slash Turbot), and if you go and dig it out of AGL there's also one for the Renesas R-Car board, but unless you're in the automotive world you're probably not using that. In fact, all of these we've done using U-Boot, including the Minnowboard Max, where it turned out to be actually quite easy to build U-Boot for that thing and just reflash the firmware, the UEFI BIOS, on the Minnowboard Max with U-Boot; it works, and you don't end up with the UEFI. OK, demo time. I cannot type while talking, so I thought I would record this. What I want to show is building a software-updateable image for a Raspberry Pi which can be updated over the air and which uses lots of nice open source components, so you can verify any of this (I'm looking at you, Android). So this is our example image, which pulls together Poky plus meta-updater plus meta-updater-raspberrypi plus meta-raspberrypi, and a couple of support libraries which you need. Now, the idea here is that for your own projects, assuming you have a project that's already working, it should basically be a matter of dropping meta-updater into that project, making a couple of small changes, and it will go; there's very little work to do, by design. I assume you're going to do it that way, rather than take our base platform and then try to bring all of your stuff into it. So this is what we've got. Here are the layers, and the Raspberry Pi example comes with a script which helps you set up your BitBake environment; we provide some template bblayers.conf and local.conf files to make getting started easier. I'll just show you the layers here: so yes, this is the standard set, the support libraries, meta-rust (because we're going to build the Rust client), a couple of libraries from meta-openembedded, and we're done. Again, the local.conf is also very, very stock. I'll just make a couple of changes here to use a standard set of download directories: our build server has /yocto/downloads and /yocto/sstate-cache shared between all the users on the build server, which is really nice, because if someone else has built the code already it's already in the sstate cache. This is actually really nice when you're reviewing pull requests from other people, because you don't have to wait for ages for, say, Qt to rebuild; they've already built it as part of their update cycle and it just pulls straight in. The only other change is this distro here. We provide an example distro that does some of the underlying magic to bring in the OTA process, and I'd expect that for your own system you'd just look at this distro (it's basically just Poky plus a small .inc file), copy that .inc file into your own distro, and it should work out of the box. This one is using systemd; we also support SysV init as well. I'm going to scroll to the end of this to prove that there's nothing magic going on: the image is your standard image, no changes needed there. This is rpi-basic-image, which is basically core-image-minimal plus some extra stuff, and that's provided by meta-raspberrypi; it makes the Wi-Fi work and things like that. So we'll sit and wait the next hour or so for this to build... and now we're at the point where we're building the rootfs. This creates the initial rootfs, and then, as I said, we have this ostree image type, which is similar to the tar or ext4 targets: this is the point where it takes all the files, checksums them, and writes the files, indexed by checksum, into the OSTree repository directory. Then we create the OTA image: this is the transformed rootfs image, the thing we're actually going to write onto the physical file system, the one that potentially includes multiple copies of the OS. And we're done. So now I'm going to write this to an SD card and stick it in the Raspberry Pi; you'll have to imagine the Raspberry Pi part of this, because I didn't record that.
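For reference, the layer and configuration changes described above look roughly like this. The paths, cache directories and distro name are illustrative (check the template files shipped with meta-updater for the real values):

```
# conf/bblayers.conf (excerpt): the layers used in the demo
BBLAYERS ?= " \
    /build/poky/meta \
    /build/poky/meta-poky \
    /build/meta-openembedded/meta-oe \
    /build/meta-rust \
    /build/meta-raspberrypi \
    /build/meta-updater \
    /build/meta-updater-raspberrypi \
"

# conf/local.conf (excerpt): shared caches plus the OTA-enabled distro
DL_DIR = "/yocto/downloads"
SSTATE_DIR = "/yocto/sstate-cache"
DISTRO = "poky-sota-systemd"
```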
So we just dd the image that we created onto the card. Unfortunately, I'm doing this on my laptop, and my laptop has one of those funky new SSDs; the net outcome of that is that the first SD card you plug into the device is called /dev/sda. I still cannot get over typing a command line that looks like sudo dd if=something of=/dev/sda; every time I type that I stop and think before I press the button, unless I'm SSHed in somewhere else. There are a couple more steps I'm doing here, and this is to resize the file system image to fill the entire SD card I have. When it built this image, it built it as small as possible, so the partition is very small; I'm using parted to resize it out to fill the entire SD card, however big the SD card happens to be: resize the partition, and then resize the file system inside it. If you don't do that, you'll hit weird errors when you try to perform updates, because it will say there's no file system space left, since Yocto has compacted the file system down to the smallest size. Now, the next thing I need to do is to link this device, this file system I built on an SD card, up to the software update server. I'm just using our SaaS-hosted update server here, and it does all the things you'd expect. There's a set of devices, so I'll add one here and give it a name for my own purposes, and this is going to generate a config file which I'm going to stick on the device; that's the link between the device and the particular update server. You can set some configuration here, like how often it polls the server, and the output is this little config file which you have to put on the device. Now, the actual process here can go various ways: in a production situation the back end of this is drivable via a REST API, so if you want to script it you can do that; for test and development I just drive it manually. So we also need to copy this file somewhere onto the device. There are various ways of doing that; one relatively straightforward way is to just scp it onto the device, or you can send it over some other protocol, but I'm just going to write it by mounting the file system and copying it in. So this is the layout of the file system on the actual SD card. You can see the root doesn't look like a normal Unix root, but inside this relatively long directory there is the one deployment we already have, and inside there is my original Unix userspace as it will be. So I'll just copy that file in. Now, it's important to copy this into the right place: it wants to be somewhere on the file system which is not overwritten as part of the update process, so I'm going to stick it in the /boot directory. Our default integration assumes that's where you put it, and it's configurable if you want to put it somewhere else. OK, so that's the device configured. I'll just unmount that and stick it into the Raspberry Pi, and this is the bit you can't see happening in the background: the Raspberry Pi boots, and as soon as it boots it gets a network connection, comes online, and talks up to the server to say hello. When it does say hello, we recognise that it now exists as a device, and we can install software on it. Now, one of the other parts of the integration we've done is to use lshw to report all of the hardware on the device, and that's just appeared on the right-hand side here, so you can see what kind of device it is, and also things like its IP address, which is probably one of the most useful parts of this entire functionality: it lets you work out what IP address this device that's sat next to me has right now, which turns out to be super useful. You can also just about see here that we report all of the packages installed on the device; this is derived from the Yocto build information, and that's reported back up to the server as well, and this is quite nice.
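The provisioning steps just described, written out as commands. The image name, device node and config file name are all illustrative, and dd will happily destroy the wrong disk, so check the device name carefully before running any of this:

```shell
# Write the OTA image to the SD card (device node illustrative!)
sudo dd if=rpi-basic-image.otaimg of=/dev/sdX bs=4M status=progress

# Grow the root partition and the filesystem inside it to fill the card
sudo parted /dev/sdX resizepart 2 100%
sudo resize2fs /dev/sdX2

# Mount the rootfs and drop the server-generated config into /boot,
# a location the update process never overwrites
sudo mount /dev/sdX2 /mnt
sudo cp device-credentials.toml /mnt/boot/   # filename illustrative
sudo umount /mnt
```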
For example, it lets you do things like answer, when the latest vulnerability in some package comes in, where that package is deployed right now. So I'm going to do this for SSL, not picking on anyone in particular. We wake up one morning and it's on Hacker News that there's a new attack, and it's called LegBleed. So what I've done is mark that package as being bad, and then you can inspect this database and find out which devices currently have the affected version on them. OK, so that's the end of my demo. What we've shown there is going from a basically completely stock Raspberry Pi image (the thing you would get if you went to the meta-raspberrypi GitHub page and set that up), adding a couple more layers, and building it; now we have a software-updateable Raspberry Pi image, and that will work for whichever particular board you already have. So I should describe a bit about our future plans. More boards out of the box is quite high up on our list of things to do, so if you have a particular board that you would like supported, please drop us an email and we can probably figure it out. If you're a vendor and would really like us to support your board, then maybe just send me one and I'll see what I can do to make that work; and if you're super keen and want to bring up your own board yourself and release it as a community thing, we can grease that process a little bit. The other thing on my list of plans is to speed up the rootfs creation. Right now, when you create the root file system, you basically do an RPM install, more or less, of every single RPM into a blank directory. But most of those RPMs are already installed and OSTree knows about them, so it should be possible (I haven't worked out the exact details) to do this incrementally and safely: we would basically check in the output of every single RPM install and then merge them together as a final step. This would mean the bottleneck of recreating a rootfs, which currently takes tens of seconds even with fast hardware, could be reduced down to a few seconds, which would make it possible to have a compile-edit-run cycle in the tens-of-seconds range; that's completely impossible with any system which involves tarring up or creating an ext4 file system of the entire content. That's not super easy, but it would be amazingly powerful: because we can incrementally push those changes up and incrementally fetch them down, you could change a few lines in some source that's checked out locally, hit build, and have it running on three or four devices inside a minute. And finally, boot watchdog and rollback: this is one of those aspects which is obviously necessary, but it's quite bootloader-specific, and so we've so far not done it in the public version. It's quite straightforward to do for a few boards, but it runs into the problem that doing a watchdog properly often requires interaction with the specific hardware on the board, and that's quite hard to do in a general fashion. So, finally, links, which you can find if you look at the presentation: this is OSTree, the cool thing that we didn't write; this is the thing that we did write; and the demo I showed you was this garage-quickstart-rpi repository. So it's literally just a git clone of that demo with --recursive (please, use git submodules): two commands and you've got software-updateable Raspberry Pi devices. I think they've just added QEMU support into it; I noticed a pull request went in whilst I was out here that extends it so everyone can do QEMU in the same way. OK, that's me; thank you very much. I'm Phil Wise from ATS. Has anybody got any questions? Go ahead, sir. [Audience] Yes, the systems we mostly use do dual partitions, which you didn't mention, as a sort of protection against flash failures, but the real question is this: when you're doing a
dual-bank update you have a known place to store what's coming in, but here, as you're pulling the update in, isn't the storage use unpredictable, because you're not sure how much space you're going to need before you do the deployment? Is the memory allocated for that, and does that become unpredictable? Is that an issue? [Phil] OK, so a summary of the question: how does OSTree behave when the size of the update is large compared to the system resources? What happens if you have a very big update; can we run out of flash or memory or something like that whilst doing the update? And as you said, for the dual-bank system you basically stream in the delta update and apply it against the old version to build the new one. OK, so for OSTree, your underlying flash requirement is fundamentally the size of the old thing plus the size of the new thing, less any files which are shared between the two. The absolute worst case is double: if you change every single file in the system, you need twice the storage. Now, if you're more careful about your updates (and they're going to get tested anyway, so you're going to know how big that overhead is), you can get away with less free space, with the restriction that you can't update every single file in a single software update. The overhead you need for downloading is basically the size of one file as it comes off the network. We're just doing some work right now to bound that securely: right now someone could potentially send you a very large file, and the client will keep downloading it up until the point where it realises it's way too big and then give up, but we can bound that much tighter by knowing what the largest object in the update is. Because it's pulling objects individually, and those objects will generally have a relatively small size, there's no point where it has to store any very, very large thing; the largest thing it has to store is the largest file in the system. Maybe if you have a map database that could be quite big, but I guess in your case maybe not; they're all normal Unix sizes. And then the checkout process is only creating a directory tree, basically, so that's, well, not quite zero, but relatively small. Does that sort of answer the question? [inaudible follow-up] Oh yes: so we pull the entire update first, but the extra files we create will only be the differences between the two, because files that are already used somewhere else will just keep the one copy that's already there; and then once you've got it locally, it's mostly a hard-linking process into the file system. [inaudible question] Yes it does, yes; so again, thank you OSTree for this. By default it keeps two, and objects which are not referenced by either of those two will be deleted. You can change that number; it's not completely obvious to me why you would want to, but maybe you want a very quick way of switching between versions, in which case there's a setting inside OSTree somewhere for that number. But yes, it prunes things out as they're finished with. [inaudible question] Yes, so if you can get the files to somewhere that the target device can see, I'm pretty sure that works. I've never done it with a USB drive; I've definitely done it with just a dumb HTTP server, so if you've got a laptop you can just serve the files off a path on there and send them over. I'm pretty sure the USB case works, as in you just mount it as a USB stick and pull from there; I've never tried it, though, so I won't promise it works, but if it doesn't work today it would be a very, very small change, because you could probably do it just by rsyncing the copy of your existing OSTree repository off the USB stick. Worst case is an HTTP server. Any more questions? OK, cool; well, thank you for your attention. [applause]
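The last answer, serving an existing OSTree repository from a laptop over plain HTTP, can be sketched like this. The repository path, IP address, port and branch name are all made up; `ostree remote add` and `ostree pull` are the standard OSTree client commands:

```shell
# On the laptop: serve the repository directory over dumb HTTP
cd /path/to/ostree-repo
python3 -m http.server 8000 &

# On the device: register the laptop as a remote and pull from it
ostree remote add --no-gpg-verify laptop http://192.168.1.10:8000
ostree pull laptop:my-branch
```

This works because an OSTree repository is just a directory of checksum-named objects, so any static file server (or an rsynced copy on removable media) is a valid transport.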