Yeah, okay, so let's get started. I'm Jan Lübbe from Pengutronix, and I'm one of the two main developers of RAUC. But that shouldn't be the topic here, because I want to talk about update tools for embedded systems in general, and more than talking I want to hear from you: I hope we can start a nice discussion about the topics that have not been solved by the existing tools.

The existing tools are update tools which should be able to update the root file system and the application, whatever that may be on our embedded systems; obviously the kernel, the device tree, maybe an initramfs; possibly the bootloader, because even in the field we might find problems in the bootloader and have to update it (some systems can do that, others can't, and it probably also depends on the hardware); and additional components like firmware for FPGAs, CPLDs, or microcontrollers on our boards.

So what are the basic features all the different tools out there already support? Fail-safe upgrades: if the system crashes or anything goes wrong during the update, the system shouldn't be damaged. Rollback: if you boot into the new system and then decide you like the old one better, you should be able to switch to the old system again. We want to have signatures on our updates, because there's a dangerous internet out there, so we want to make sure that the updates that get installed are actually produced by us. Compatibility checks, to make sure that we don't install the wrong update on a system and break it that way, even if it might be signed by the correct person. Online and offline updates: online basically means the system downloads the update over the internet, but there are also systems out there which have no or very little internet connectivity, where a technician goes there with a USB stick or something like that and installs an update; the existing tools support workflows like that. Then we need some build system integration, whether you have Yocto, Buildroot, PTXdist, or whatever builds an update image; that should be easy and shouldn't be something you need to build yourself. And in the simplest case we want to have an A/B mechanism to switch between two systems to get this fail-safe and rollback behavior.

There are some additional features which not all of these systems have, or not to an extent which I would call finished or complete. For example, encryption of update artifacts at rest, so you can transport a USB stick with an update image to the system, and if it gets lost, nobody has access to the application data which is in the update. Then you have the question of where to store the key to decrypt the update, which is not easy to answer. And also the topic of delta updates: some of these tools support them, others don't; it depends on the use case whether that's interesting.

So what do we have as open source tools to do updates in that sense? There's SWUpdate by Stefano Babic from DENX, which has been around a few years. There is Mender, or mender.io, which also includes an update server, so it's more focused on the online update side of things. There's RAUC, which I started with a colleague of mine and which has also been around a few years, and several others; at the end of these slides I have a list and some comparisons done by other people. But we shouldn't focus on the individual tools, at least in the beginning. And there are also file-based tools, compared to the image-based tools I mentioned before: for example OSTree, which was started, I think, by some of the GNOME people, and which basically treats a complete system like a git repository, so you can check in new versions, transmit them over the net, install them on other systems, and switch between different revisions on the same system. And there's also swupd from Intel, which I think is used by Clear Linux, and which groups different features into bundles that can be installed into the file system.
So it's more like OSTree in that regard. As I said, I don't want to focus on specific tools, because then we wouldn't get to the interesting topics which have not been solved. So for the discussion, please talk about generic topics. And I'd like to focus on image-based updates, or complete-file-system-based updates, not on dpkg-, RPM-, or opkg-based updates, because in our experience doing updates on a package level is very difficult to test: you have so many combinations, so many things that can go wrong. It's easier to test a defined state of the system and then deploy that into the field. So I'd like to focus on full system updates. There are also update tools like Resin.io, for example, which is mainly a mechanism to deploy applications in containers on embedded systems; that doesn't cover how to update the base system itself or how to update the bootloader. In some cases that's interesting, but there are different tools and different topics to solve there.

So basically what I'm looking for are war stories: people who have used those tools, good experiences, bad experiences, missing features where we should spend our efforts next, unsolved problems. Maybe to get a feeling for the audience: how many of you have developed systems with such field update tools and deployed them already? So, maybe half. How many of you are planning to do that and don't have that yet?
Maybe a third. Okay, so at least from half of you I expect some answers. I've prepared an Etherpad; probably most of you know how that works, it's a document you can edit interactively. You have the link here, you can open it on your laptops. Basically what I said already is in that document, and I've prepared some topics which I think might be interesting to discuss here, but I actually don't know, so I want to first collect some additional topics and then do a poll on which we should discuss first, so that most people find some interesting topics. We have two microphones in the front, so please, if you have something to say, come up to the microphone. If you have any additional topics we should put on the list, put up to a vote, just come up to the microphone. I have it under "related software". So, no additional topics? Or is there more interest in discussing the basic stuff I mentioned?

Okay, then let's just note that as a topic: the setup where you have your Asian market, your US market, and so on. Any additional ones? Yeah, just come up to the microphone; both should work.

I know it is complicated to get this right, but I was wondering about the encryption of the update. I know that the key will be complicated to put somewhere, so that would be one question. Yeah. The internet is broken.

So, some people are writing topics in the pad: binary diffs, delta updates for bandwidth-constrained devices, hybrid setups for mixed requirements with a reliable system partition, and something else. Okay. So my question is: in the case where you cannot do A/B updates, where you just don't have the capacity, what can you do to mitigate the risks?
So, yeah, let's go through them, collect how many people are interested in discussing them, and just pick the maybe three or four most interesting topics.

How to target different markets and environments: who's interested in that? Five, six people. Encrypted updates. Alternatives to A/B systems for resource-constrained systems: twenty. How to detect a successful update: twenty again. Migration of user data: twenty-five. Generic discussion about missing features in the existing tools. Automated testing of the update process itself: fifteen. Base system update independent of, or versus, the application update. Signing with hardware security modules or crypto tokens. Benefits versus complexity of multiple signatures, so systems which require many signatures before an update can be installed, like The Update Framework: six. Delta updates: fifteen; let's say that's the same as binary diffs. Updating secondary systems or processors. Peer-to-peer update distribution, if you have a network of many systems which is connected to the internet slowly. Atomic bootloader updates: fifteen. Hybrid setups, reliable system partitions and a partition for the user application: I would say that's the same as base system versus application updates. A common format for images, with a manifest for signature and dependencies: one. And ease of supporting a new bootloader: one as well. So what have we got?
Let's say migration of user data, twenty-five. Let's start with that.

Maybe I can start with what we've been doing for our customers. Basically, they all have A/B systems where the root file system and the application are stored. But the application data, the configuration data like IP addresses used on the network interfaces and so on, cannot be stored in the root file system. So the obvious solution is to use some third partition for this data. But then you have the question: what happens if you need to change the format of that data in an update? There are basically different approaches you can take. One is to just define a format with something like JSON where you can add additional properties, so the old system won't be confused on a rollback; you might lose some data if the configuration is written again by the old system. Another approach is to have two data partitions, where the data is copied on an update to the currently inactive one, and the new system then does some data migration, like a database migration script or something like that. The advantage is that you can do real migrations of the schema, but when you fall back you are running with old data, so you need some way in the application to handle that, or maybe disable booting the old system completely.

So, yeah, I'd be interested in how problems like this have been solved by other people. Good experiences, bad experiences? Nobody? I didn't expect that I would be a solo entertainer here, so please come up to the microphone and relate your experiences.

Actually, I have more of a question: how do you deal with things like general open source tools or libraries which need some config file in /etc? Do you consider /etc to always be data, or is that part of the root file system, or how do you deal with that?
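The additive-JSON approach described above can be sketched like this (the keys and defaults here are hypothetical, not from any specific product): new versions only ever add properties, and the reader keeps unknown keys around so a rollback doesn't silently discard them on a rewrite.

```python
import json

# Hypothetical illustration of the "additive JSON" approach: newer software
# may only ADD properties, never rename or remove them, so an older system
# can still read a newer configuration after a rollback.

DEFAULTS = {"version": 1, "ip_address": "192.168.1.10", "use_dhcp": False}

def load_config(text):
    """Read known keys with defaults, keep unknown (newer) keys so they
    survive if this older version rewrites the file."""
    raw = json.loads(text) if text else {}
    cfg = dict(DEFAULTS)
    cfg.update(raw)          # unknown keys from a newer version are retained
    return cfg

def save_config(cfg):
    return json.dumps(cfg, indent=2, sort_keys=True)

# A newer system wrote an extra property this version doesn't know about:
text = '{"version": 2, "ip_address": "10.0.0.5", "wifi_ssid": "lab"}'
cfg = load_config(text)
print(cfg["ip_address"])     # -> 10.0.0.5 (known key, value from file)
print("wifi_ssid" in cfg)    # -> True (unknown key preserved, not dropped)
```

The design choice is that the old system never gets confused by the new fields, at the cost of never being able to restructure existing fields.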
I think for the user's own application it's quite clear: the user must make sure that on an update his application can still read the old configuration, or the application just migrates it. But for the rest, for the root file system, for things which you don't control, it's a bit harder.

Yeah. From my experience, the configuration of system utilities like SSH or NetworkManager or hostapd is mostly pretty static, and in many cases it's enough to do some templating from data in the configuration partition. So on boot-up you read, say, the IP address or the SSID and password from the data partition, generate the configuration files at runtime in an overlay file system or something like that, and then start the services. The benefit is that if you move to a new SSH version where you need to modify other things, which are not stored in the configuration or not modifiable by the user, then you just change your template. The template is also contained in the root file system, so you can always generate a configuration matching the updated software. You can also keep complete configurations in your data partition and copy them back on start-up.

Okay, but then you have some kind of ABI between the application and those templates, the thing you replace there. But I guess that's fine.

Yeah, and there are some tools which have schemas to generate configuration files from declarative descriptions in JSON or something like that. Augeas lenses, I think; if you Google that, that might be something to try. But in most cases we didn't need that complexity, just templating of static values into configuration files for standard open source services.

So maybe to give some feedback on your question: this is essentially also what we are using for our application. We mainly didn't want to integrate the knowledge of how to write these configuration files into our application, so our application writes its own little database, just a text key-value database, and then we have a small application that bundles that know-how for the specific distribution currently in there and generates the configuration files. And we only do that when we shut down the system, because by design, for our application, it is desired that the user makes changes and they are only activated once the system is rebooted.

Maybe also in regard to the topic you had about user data: we came to the conclusion that, because the configuration of our application is really complex, we made the decision to store the user data in a separate partition. If you install an update which is compatible with that user data, it will continue to use it; if it's not compatible, it intentionally falls back to a default configuration. And on top of that, we still retain the old configurations as long as possible; we have something like three slots for configurations. So if the user installs an update, decides it's maybe not what he wanted, and goes back, he still has the last two configurations he can go back to. That also works if you go back to an old software version. Yes, it's really like caching the old versions and not throwing them away immediately.

How do you check that a configuration is still compatible with the new software? Is that something your update tool does? No, that is really a manual step the user has to do. There's a little external configuration program where you can see what is in the other slots, based on the date and some version information.
Okay, that's where I wanted to go back to as well. This fits: for our application, we have an external configuration tool, and you use that tool for downloading updates to the system as well as for selecting which configuration you want to run with. So it's really a configuration tool.

Is there something you would need from an open source update tool to handle these workflows? Are you using one? Yes, we are using SWUpdate, from Stefano. We're using it in a way where we have a proprietary USB protocol going from a Windows PC to the embedded Linux system, and the data is pushed block by block into SWUpdate, and then everything proceeds normally. So theoretically, yes, we could also do over-the-air updates and all that fancy stuff with SWUpdate, but in our application, for regulatory purposes, that is not allowed anyway: you have to be at the system when you do the update. So this is good for us. Okay, yeah, thank you.

About the topic of user data migrations: in one of our products we use the strategy of moving almost all of the user data to the cloud, so we only store a token and the network settings on the device to connect to the internet. Then, if the device is not sure what the configuration is, it can always go to the cloud and check what the actual configuration is that the user has set up. I understand that this approach is quite limited, depending on the device's purpose, but it works for us. If your device is, let's say, not barely connected but definitely connected to the internet, you can use that, and as a fallback you can always run user data migrations on the cloud backend. So how's that going for you?
Do you have a way to identify the individual devices? Yes, each device has a user token that serves as a cryptographically proven access token to the backend, and it has a per-device ID. We use these IDs to track them in the software updates as well, so we can distinguish device A from device B. So maybe that helps a lot with this solution. The issue, as far as I can see, is that in the case of a backend outage we only have a local cache of the configuration, and the user cannot change the configuration. That's the greatest limitation of this approach. Yeah, thank you.

Yeah, we've got an A/B system with a user partition of the type you described, and in our case, after an upgrade, the first thing that runs when the new version starts is a migration script which updates the user data if there has been a change in the config. But the main issue with that is of course that it effectively prevents rollback, and that's something we'd like to get to in the future. One of the other topics you've got up there is detecting a successful upgrade, and obviously if we were to do that, we'd need a way of rolling back; if you've only got a one-way migration script, that kind of prevents it. Typically the actual content of any migration is very minor modifications to only a very small number of files, so I guess it would be good to have something that would just take a snapshot of the user changes before you apply the migration, so that you could roll them back afterwards; I don't know of any tools to do this. You could basically store metadata with the snapshot saying for which version of the software this configuration applies. I think that's probably the way we want to go; we haven't quite got there yet.
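That snapshot idea can be sketched in a few lines (the paths, metadata layout, and version rule here are all made up for illustration, not from any existing tool): copy the user data aside before migrating, record which software version it belongs to, and restore it only on a rollback to that version.

```python
import json, shutil, tempfile
from pathlib import Path

# Hypothetical sketch: snapshot user data before running a migration,
# tagging the snapshot with the software version it is valid for.

def snapshot(data_dir: Path, snap_dir: Path, sw_version: str):
    shutil.copytree(data_dir, snap_dir / "data")
    (snap_dir / "meta.json").write_text(json.dumps({"valid_for": sw_version}))

def restore_if_matching(snap_dir: Path, data_dir: Path, sw_version: str):
    """On rollback, restore the snapshot only if it matches this version."""
    meta = json.loads((snap_dir / "meta.json").read_text())
    if meta["valid_for"] != sw_version:
        return False
    shutil.rmtree(data_dir)
    shutil.copytree(snap_dir / "data", data_dir)
    return True

# Demo on a throwaway directory tree:
root = Path(tempfile.mkdtemp())
data, snap = root / "data", root / "snap"
data.mkdir(); snap.mkdir()
(data / "settings.conf").write_text("answer=41\n")

snapshot(data, snap, sw_version="1.0")               # before migrating to 2.0
(data / "settings.conf").write_text("answer=42\n")   # the "migration"

restore_if_matching(snap, data, sw_version="1.0")    # rollback to 1.0
print((data / "settings.conf").read_text().strip())  # -> answer=41
```

A real implementation would also have to handle snapshots atomically and garbage-collect old ones, but the version-tagged metadata is the core of the idea mentioned above.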
Have there been actual problems with that in the field? It hasn't happened yet; so far we've not had to roll back. But to make the system more robust, it would be preferable if rollback was available. Yeah.

Okay, maybe just one more comment and then I think we need to go to the next topic. We have an A/B system with a data partition, using a combination of overlayfs for configuration, like network addresses and network configuration, and on the other hand our application updates its database on startup. That's how we do it. And you're not doing rollbacks on that user data? We didn't have to so far, and there's also not that much of it right now, but we'll have to check later whether there's something we need to do there for rollbacks.

Okay, thanks. So the next topic: alternatives to A/B systems. In general I like A/B best, because it keeps the system simple: you don't have a specific role for each slot. But what we've done, and what's supported by RAUC and I think also by SWUpdate, is to have a smaller rescue system, which usually doesn't contain the application; maybe just a kernel with an appended initramfs, containing maybe a web interface and the updater, so it can be as small as a few megabytes. Then, if the main system fails to start, you fall back to the rescue system, and from there the user can install an update to recover the system. The main disadvantage is obviously that to install an update you must first reboot into the rescue system, disrupting the normally running application, and then reboot again to switch back; and rollback is harder, because you don't have the old copy anymore. But that's usually how we do it if we have to, because the storage space is too small. With eMMCs this problem doesn't happen that often anymore, because you have four or eight gigabytes of eMMC and it's easy to have A/B systems. Any more comments on that, or did that cover it?

Another interesting alternative, if you have many systems in a local network and some controlling system in there, might be to have the rescue system boot over the network, if you have enough control over your bootloader. Then you just have an A system on the device itself, and if that fails to boot, you boot over TFTP from the controlling server, which can then recover the system. That's more flexible, I think, than just having A plus rescue, because you can install a new recovery mechanism on the server and recover from any problems you caused on the devices themselves.

I'll throw OSTree into the mix as an alternative. Full disclosure: I work on meta-updater, which is OSTree for Yocto. The advantage there is that rather than having this fixed three-way split between the A, the B, and the user partition, all of that is in one file system, and it uses some chroot magic and hard links to avoid multiple copies of files. So if the reason for not wanting A/B is lack of storage space, or not wanting to do an up-front hard partitioning, then OSTree sort of solves that problem, and it also solves the delta problem, which comes up a bit later. But you need to trust your file system. Yes; in all cases you have to trust the magic that goes on in the eMMC, which does block remapping, and in this case you also have to trust ext4, or your favorite file system. Our suggestion for people who don't want to do that is to go back to your rollback scheme: you end up with a small recovery partition which will reformat the ext4 and grab a new image over the network. Then, assuming you trust that ext4 is hardened against power loss and corruption, you can use that. Yeah, that's a good alternative.

Okay, next one: detecting a successful update. That ties back into recovery, obviously, and into doing migrations, and it's surprisingly difficult to do that.
Well, what we've been doing so far is just to have a systemd service that starts late in the normal boot and basically resets the boot counter. That works reasonably well, but there's a class of problems you can't detect with it. For example, you need a network connection to contact the update server, and the new system for some reason doesn't have working network; maybe the DHCP client is broken or something like that. The system would still mark that as a valid boot, and then you can't do a recovery and you can't connect to the system. Basically, the system is a brick.

There has been some progress on the systemd side there; I think in the last few weeks a mechanism for automatic boot assessment has been merged. Basically they added some boot targets which allow ordering check services into the boot, so you can just add a systemd service which checks a specific aspect of your system, maybe a self-test of the application, a network connectivity check, and so on. You assemble those services in a target, and only if all those checks complete correctly does the system get marked as a successful boot. So maybe the system has no network, the user power-cycles the system three times, and then it switches back. It's still not perfect, but so far I don't have any better ideas, so I'd be interested in anyone who has solved that in a better way. Any ideas what we should try in that direction?

So, I'm not sure what the exact problem is here, because in Fuego we just SSH to the board, and if we can make a connection, it's up. Is the issue that you can't detect whether the newly installed software is actually running, versus the rescue system, or whether the newly installed software actually works correctly, or well enough that we don't need to fall back?
Okay, so many of our customers have very constrained testing resources. If all our software were perfect, we wouldn't need rollback. So we need to decide when to roll back. Obviously a kernel crash during boot is easily handled by not resetting the boot counter: three kernel crashes later, you run the old system again. But if the update system itself doesn't work anymore in the new system, or if you can't access the system even though it boots correctly, then you have a running system which you can't use and can't easily recover. So we need automated ways to detect that; it will probably be project-specific. But you're looking for some kind of standard metric for what constitutes a valid boot? Yeah; the obvious thing is that you have network access and the update service daemon runs, because then, if the next update works, the first one was good.

I don't really have a solution, but I just wanted to tell you what we did. We ended up having a service that waits until systemd is ready, then tries to talk to all services which are interesting (we have a list of interesting and not-interesting services), trying to get their status. Then there's a very bad bit of hardware which we depend on: we have modems in our hardware, so we try to ask the modem: are you working, do we need to update you, are you still working? And we still have no solution for one case: we have no network connectivity, but it's not our fault, and we can't roll back, because we are part of a network which updates, and maybe we are updated but our master is not updated.
So I would be very interested in something implementing a sort of quorum: ten nodes say it's good to them, so it's good enough for us, or something like that. We're experimenting with this, but we've found no good solution yet. I think this is really needed to decide when to roll back.

So, we've had this problem building updaters. I think this is an area which is always going to be project-specific, and my feeling is that this is a point where it would be really good to collaboratively create some API. On one hand there are the updaters, and there's a bunch of those, and they all have this problem, and a bunch of other problems where they need to interact with the rest of the system, but the API is very narrow: it's just "it's okay" or "it's not". If we could make that standard... I think that API is going to have some other stuff in it, because you probably need to signal that you've made a rollback, so user space can recover, and there's a whole bunch of other places where the update system needs to hook into the rest of the system. But that would be one part of a contract between the update system and the rest of the system, and then you can do your checks in systemd or whatever, and we'd have a standard way to poke an updater.

Yeah, I think what the systemd people have implemented there is already 80% of that: you have one target which contains the checks, and another target which confirms the boot as okay or handles a failed boot. So you can just plug into that with systemd services, and the logic, I think, is enough to handle those checks. I haven't looked at the systemd boot assessment business again today, but maybe they've done it already. Yeah, it sounds like it's standardized enough that we could just use that.

So, one thing related specifically to the issue of networking modems being down, and multi-component systems.
I think there's definitely a case for a higher-level orchestration view of this. Each individual system has the ability to detect a successful boot, whether that's the ability to talk to some specific server on the internet or that your clients are up and running; that's one level. But ultimately, in any complex design, you're going to have to have some higher-level orchestration that can detect failures that are simply not detectable from an individual node in the system. We've talked to plenty of users looking at very large, very complex systems, bringing all kinds of different levels of devices together, and ultimately that's what they're looking at. The devices themselves can say "I know there's something wrong with me", but that can only go so far, until you need something completely outside of the system that says "this portion over here is just not there anymore". At that point you might have to send technicians out or whatever, if you're talking about very large transit systems or things like that.

Do we have any standard API for that, or is it project-specific? None that I've heard of; all the discussions I've had about it have been very, very project-specific. Obviously some of the lower-level technologies they use are standardized: JSON, the various transport layers, MQTT, and that kind of thing. But how all the components are put together is again going to be very application-specific. Yeah, thank you.
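To make the "check services plus boot counter" idea from the discussion concrete, here is a hedged sketch (the check list and the limit of three attempts are invented for illustration; a real system would put the counter in the bootloader and the checks in systemd units rather than one process): a boot is only confirmed good, and the attempt counter reset, once every health check passes.

```python
# Illustrative sketch of boot assessment: a boot is only confirmed as
# "good" (counter reset) once every registered health check passes.
# The check names and the limit of 3 attempts are assumptions.

BOOT_LIMIT = 3

def check_network():
    return True    # e.g. reach the update server; stubbed for the sketch

def check_updater():
    return True    # e.g. query the update daemon's status; stubbed

def check_modem():
    return True    # e.g. command round-trip to the modem; stubbed

HEALTH_CHECKS = [check_network, check_updater, check_modem]

def assess_boot(attempts):
    """Return (new_attempts, rolled_back). A fully healthy boot resets
    the counter; an unhealthy one leaves it for the bootloader to count."""
    if all(check() for check in HEALTH_CHECKS):
        return 0, False                 # confirmed good: reset counter
    attempts += 1
    if attempts >= BOOT_LIMIT:
        return attempts, True           # bootloader would switch slots
    return attempts, False              # retry the same slot on next reboot

attempts, rolled_back = assess_boot(attempts=0)
print(attempts, rolled_back)   # all stub checks pass -> 0 False
```

This captures why a plain "the kernel came up" signal is not enough: any one failing check keeps the counter ticking toward a rollback, which is essentially what the systemd boot-assessment targets let you express declaratively.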
I'm not sure if this is connected, but is it possible to use boot integrity? You know the cryptographic hash of what the image should be after a successful update, and then you verify that. At that level, I think all the systems we've mentioned already check during installation, and you can combine that with something like dm-verity, like Android does, to detect corruption of the file system or application data on disk. But in most cases the problems that actually happen are just bugs in the application, not corruption that happens on the device itself. So it's not a failed update, it's a failed payload; the payload is wrong from the beginning. Yeah, because the testing wasn't good enough, so the customer didn't catch the problem, it happens in the field, and we still want some way back to the old system so we can recover the systems in the field.

We often have different kinds of faults, and we can detect them and report them on our devices. I think most of us have a hardware watchdog, and in our case we link our application watchdog to this hardware watchdog. If the software goes faulty, into an environment which is detected as faulty, then we report it.
So in that case we reboot the system. Usually when we do a firmware upgrade, we want to know very soon whether it is okay or not, and by waiting for the watchdog to kick in or not, we might get some help in knowing whether the system is okay, because we already did the work: we have some fail-safe, some sanity checks, in our embedded software, so we can just use them through the watchdog.

That's something we've also done with customers: just wait five minutes before the newly installed system is marked as okay, to see whether any crash or watchdog triggers a reboot. That works well in cases where the applications already have some integration with the watchdog. The interesting part is that the work is already done, because the watchdog is already in place for sanity in our software.

What I've also been thinking about: there are actually two kinds of "successful". One is "the device is doing its work", so it can run and do all the things it should, and the other is "I can install an update". These are somewhat orthogonal, and we're putting them in the same pot here, more or less. It might be worth deciding whether you need both or either: if it's "I can install an update", I don't need to fall back to the old system, because I can install a new update to fix things; but if the device doesn't run right now, that's different. So those are two things that need to be looked at separately, because I shouldn't try to install an update if the updater is broken, but the system might still continue running and doing the necessary things, in terms of downtime, so I can keep running the new system. There might be a need to separate the two and decide what to do for each of them.

Yeah. I think we still have ten minutes left, so I'll choose one: atomic bootloader updates, or delta updates; probably delta updates is more interesting.
So in RAUC we now have, for some months, support for casync, which is another project by Lennart Poettering, to basically do something like the rsync algorithm over whole file systems. It will only download the chunks it doesn't already have in the old system, and can reuse the parts it already has, so it keeps the download size pretty small if you have a network. That's useful for systems which are connected via cellular data or something like that. But what we currently don't have is an explicit binary delta against the old version, which could be installed in an offline way. I think the OSTree people have that basically under control; for the image-based update mechanisms, I don't know of a finished solution. So maybe someone has built something like that, or has experience with it?

Our use case was pretty much what you stated up there: updates over GSM cellular modems. That's like 100,000 devices, each updated over its own cellular modem, with a price tag per megabyte that you send. So the delta update size was the single target we had to minimize. To solve that, we looked at different algorithms, like casync. There's also VCDIFF; I don't know if it's a standard as such, but it's a common format for describing a delta. It's implemented by Google in open-vcdiff, and it's implemented by another tool called xdelta3. I found the latter pretty much impossible to link with, so I chose the Google version. And then there's a different Google project that's embedded in Chrome and Chromium.
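The casync approach described above can be sketched as content-addressed chunking: the client hashes the chunks it already has locally and only fetches the ones missing from that store. Fixed-size chunks are used here for brevity; casync actually uses content-defined chunk boundaries, and the helper names are hypothetical:

```python
import hashlib

CHUNK = 1024

def chunk_ids(data: bytes) -> list[str]:
    """Identify each chunk by the hash of its content."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def plan_download(new_index: list[str], local_store: set[str]) -> list[str]:
    """Only chunks absent from the local store need to go over the modem."""
    return [cid for cid in new_index if cid not in local_store]

old_image = b"A" * CHUNK + b"B" * CHUNK + b"C" * CHUNK
new_image = b"A" * CHUNK + b"X" * CHUNK + b"C" * CHUNK  # one chunk changed

store = set(chunk_ids(old_image))          # what the device already has
missing = plan_download(chunk_ids(new_image), store)

assert len(missing) == 1  # only the changed chunk crosses the network
```

Because the chunk store is keyed by content rather than by version, any old version on the device reduces the download, which is what makes this attractive for metered cellular links.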
It's called Courgette. So those are the three options we found. Courgette also falls back to bsdiff if it doesn't have the opportunity to use its own special executable parser. So it's Courgette, VCDIFF, and casync, and in our case we actually ended up implementing all three of them in the same update tool, so that when we build the update from the old version of our system to the new version, we actually generate all three and then choose the best ones. In the usual case, the kernel image is going to use one format and the root file system another, and then we bundle that into the same update and send it out. And of course we also have the option to use a full update. That's what I do when developing, because I don't want to start figuring out what I have on the target right now (I don't remember), so I just push a full update.

How much do you save with those approaches?

It all depends on the update, but we're working with about 20 megabytes of root file system size, now in a squashfs image, and the deltas range from three kilobytes, or 30 kilobytes, up to several megabytes. And even removing stuff from your image is going to take up space in your delta. And it works fairly well.

Yep, that's interesting. We did our own A/B update system, because we were very short of developer resources and time. We just used rsync, and we have an A/B system: we have an rsync daemon on the server, and we sync the B system using the A image as the comparator, which you can do with one of rsync's options, and it works surprisingly well. It works well enough that we've never had to go and improve it. We have a lot of devices over GSM, and the saving is very efficient.

And that's the rsync batch-file mechanism?
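The "generate all three, pick the best" scheme described above can be sketched as follows. The delta generators here are hypothetical stand-ins; in the real tool they would shell out to open-vcdiff, Courgette, and casync, and the selection would run once per artifact (kernel, rootfs, ...):

```python
def pick_best_delta(old: bytes, new: bytes, generators: dict) -> tuple[str, bytes]:
    """Run every delta generator and keep the smallest result."""
    candidates = {name: gen(old, new) for name, gen in generators.items()}
    best = min(candidates, key=lambda name: len(candidates[name]))
    return best, candidates[best]

# Hypothetical generators: real ones would invoke the external tools.
generators = {
    "vcdiff":    lambda old, new: b"\x01" * 120,
    "courgette": lambda old, new: b"\x02" * 40,
    "full":      lambda old, new: new,  # fallback: ship the whole new image
}

name, delta = pick_best_delta(b"old image", b"new image" * 100, generators)
assert name == "courgette"
assert len(delta) == 40
```

Keeping the full image as one of the candidates gives the developer escape hatch mentioned in the discussion: when you don't know what is on the target, the "delta" is simply the complete new artifact.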
No, not quite. Rsync, I think, has a mechanism where, when it does a copy operation, it can basically write the delta to a batch file, which can then be applied to the same base again on a different system. So you can use rsync even offline, by transferring those batch files.

Yeah, but we've always thought that in our situation we're always online, so we sync the A image to the B image and just use rsync to transfer the differences, and that's really quite efficient.

We also have old boards where the modem is quite slow, so our strategy was also something homemade, but at the file level: the diff is done by finding which files have changed. But one of the main problems there is how to implement the removal of files that have been removed in the update. This is hard to roll back, because if you have a problem and you have removed, for instance, an important file, that may cause some trouble. But we have not an A/B scheme, but a rescue partition, so that's not much of a problem; in the worst case, we just fall back into the rescue system.

Okay, so I think I'm almost out of time. I just wanted to add that when you're running a delta update system, you also have a whole new class of problems: you want to be sure that you're updating from the right old version to the new version. The update path becomes critical: if you're on a two-versions-old file system, then you want to actually do two updates in a row, and maybe you have these big matrices of updates you might want to fetch from the update server. So that adds a whole new class of problems. casync, I think, solves that problem by simply doing it online. Do you have, for the other tools, a mechanism to check that you're updating from the correct version?
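The update-path problem raised above can be sketched as a shortest-path search over the deltas the server offers: given the installed version and the target, find the chain of deltas to apply in a row. The version numbers and the shape of the delta catalogue are hypothetical:

```python
from collections import deque

def update_path(installed: str, target: str,
                deltas: set[tuple[str, str]]) -> list[str]:
    """Find the shortest chain of delta updates from installed to target.

    `deltas` holds (from_version, to_version) pairs the update server can
    provide. Returns the version sequence, or [] if no chain exists.
    """
    queue = deque([[installed]])
    seen = {installed}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for src, dst in deltas:
            if src == path[-1] and dst not in seen:
                seen.add(dst)
                queue.append(path + [dst])
    return []

available = {("1.0", "1.1"), ("1.1", "1.2"), ("1.0", "1.2")}

assert update_path("1.0", "1.2", available) == ["1.0", "1.2"]  # direct delta wins
assert update_path("1.1", "1.2", available) == ["1.1", "1.2"]
assert update_path("1.2", "0.9", available) == []              # no way back
```

The "big matrix" in the discussion is exactly this catalogue: either the server precomputes a delta for every supported from/to pair, or devices chain several updates in a row; a content-addressed scheme like casync sidesteps the matrix entirely, but only while the device is online.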
Oh, well, my update bundle ships a checksum of the old version, and it fails if that's wrong.

And then you have a problem on the production devices: which checksum is actually on the production device? So I also have to store somewhere in the update how big the old version is. So yeah, you hit another class of problems; it gets complex.

Okay, some final comments: was that interesting for you? Yes.
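The base-version check just described can be sketched like this: the bundle carries the expected hash of the base image it was diffed against, and the installer refuses to apply the delta when the device's current image doesn't match. The metadata field names are hypothetical:

```python
import hashlib

def can_apply_delta(current_image: bytes, bundle_meta: dict) -> bool:
    """Refuse the delta unless the installed base matches what it was built against."""
    actual = hashlib.sha256(current_image).hexdigest()
    return actual == bundle_meta["base_sha256"]

base = b"rootfs v1.0 contents"
meta = {
    "base_sha256": hashlib.sha256(base).hexdigest(),  # recorded at build time
    "delta": b"...",                                  # the binary diff payload
}

assert can_apply_delta(base, meta)                  # matching base: apply delta
assert not can_apply_delta(b"rootfs v1.1", meta)    # mismatch: fall back to full update
```

On a mismatch the sensible fallback is the one from the discussion: request a full image instead of the delta, at the cost of the larger download.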