So hello everyone, welcome to my talk: secure boot and over-the-air updates, that's simple, no? This is a slightly unusual experience for me, as I have no feedback channel except what you are chatting, but let's see how this works. First of all, I already tried the wrong button, but in case we haven't met before: my name is Jan Kiszka, I'm working for Siemens Corporate Technology as an in-house embedded Linux consultant and developer. I'm also presenting here on behalf of the collaborative project CIP, the Civil Infrastructure Platform, where I'm involved in the CIP Core development and did some CIP kernel backports. Beyond that, I'm involved in various other open source projects as a maintainer or contributor.

I'm presenting today, but the credits actually go to other people: to my colleague Christian Storm, who did the design of this and some initial implementations, the current ongoing integration, and also the publication of the code that we will see later on; to my colleagues Quirin and Michael; and to our current contractors, Harald from Denx and Marek on the U-Boot side. A lot of people are involved in this, and the thanks go to them.

So, today's agenda: I will start with the motivation for this work, why are we doing this, what is it good for? Then I'll introduce some of the concepts: the dual-copy update pattern, securing embedded boot, and, last but not least, combining both. We will then look a little bit into the implementation aspects: the impact on the bootloaders we are using, on the kernel and the initramfs we are loading, and also on the software update tool we are using here. Last but not least, we have a pre-integration available, and I will quickly introduce that. The first question already showed up: yes, the slides will be available afterwards. I didn't upload them yet, but they will appear online on the schedule.
So where are we coming from? In the good old times, specifically in our domain of embedded systems, software was a kind of firmware: you shipped it and you were done. Many of those devices were unconnected, and we never dared to touch them unless there was really an issue, a functional fix needed. The idea of having security updates on a regular basis is pretty new in this domain, and updates are often still, though less and less, applied manually, through a procedure that isn't really over the air.

But the times have changed, and not just yesterday: connectivity is becoming more and more the standard, and with that, security updates become inevitable. These security updates are nowadays actually mandated by regulatory authorities and by standards; just to mention one, IEC 62443, which requires you to have a security update process, however that looks. Normally it means you have an over-the-air update. With over the air come unattended update requirements: you don't want to have to click a button or insert a medium. And as the update is unattended, it has to be robust, because the worst thing you can have is a device bricked somewhere in the field, somewhere down in Australia where you cannot easily get, even less at this time. So these things have to work under all conditions: atomically, and rollback-capable if something goes wrong in the middle of the update or if you deployed the wrong one.
I mentioned the Civil Infrastructure Platform project before. I don't want to introduce it here in all detail, but just to set the context, since this work is flagged under CIP: what is it about, and where does it come into play in this specific topic? CIP is targeting and enabling open source, and Linux specifically, for industrial-grade usage: making that usage sustainable, long-term available, and, last but not least, secure. Along these paths we are looking into improving open source projects, integrating them, and ensuring they are available long term, and one aspect of that, with many cross references, is the firmware update.

There are many working groups in CIP, and one of them is about software updates. The goal of this working group is to develop best-practice patterns that we can apply to our product developments and that we want to share so you can reuse them, and also to ensure that these patterns fit well with the other components we have in CIP Core, which consists of the core packages we want to use and maintain on the embedded system. That includes the kernel, of course, and it will likely eventually include bootloaders as well, but this is the current scope. We also have a security working group, which is looking into making this combination of work certifiable; I already mentioned the certification standards before. The goal is to work as far as possible on the platform level towards a certification, and a key aspect of such a certification is the security concept, which involves, as I mentioned, an update concept, which will also involve an over-the-air update concept. So naturally we have to align the work of this workgroup with the security workgroup. Of course we also want to provide not just the patterns but also implementations, based on our core layers, so that everything is ready to use and ready to derive from.
The implementation is not only serving the need to use it, but of course also enables the testing and monitoring of the components we are including, regarding their functionality. Testing this whole configuration is in the end also a goal of this workgroup, aligned with the testing workgroup, which provides the infrastructure and organization for tests.

So, dual-copy updates. How do you update your device? If you look at how updates come in on your notebook, they come in the form of packages, and they will probably do a partial update of your system. A dual-copy update is a different principle. It comes from the ultra-conservative, better-safe-than-sorry idea that you always keep a fully working instance of your software stack available. That's the A path, so to say. When you update the system, you keep that path untouched and instead prepare the update in a second path, the B path. That gives you the chance, if something goes wrong with B at any stage of the update, to always roll back to A, because you didn't touch it. That's the principle behind it.

One benefit of such an approach is that it ensures consistent images: rather than applying individual package updates to the running system, you provide a complete, integrated image that is fully tested and known to work in this configuration. It also avoids the single points of failure you can have in the update path with partial updates, maybe in the file system, maybe in the package set combination, as I mentioned. By providing the complete image in the B path, you know this image either works or it doesn't, and the A path is untouched either way. The whole concept is relatively simple, but it comes at a price.
Depending on your requirements, since you now have two full paths available, you obviously need more storage, possibly twice as much, at least for the part you are managing this way. And depending on how you transfer the update image, it also impacts the transfer size. That can be mitigated, and is mitigated today, by doing delta updates: since you know you have version A on the device and want to ship version B, you compute a delta between both and ship the delta rather than the full image, which can easily mean a few megabytes instead of a few hundred megabytes, depending on what you are shipping. I'm seeing some questions coming in that I'd better address at the end, so please give me the time and I will get to them.

So now let's look at how this runs on the system. We have the bootloader, which for the time being we exclude from this pattern. Updating the bootloader is also doable, but far from generic, and currently not in scope; we assume the bootloader is the fixed part. The bootloader selects which boot path to take, A or B. What we install in this pattern is a separate boot partition and root file system partition per path. The reason for that will come later, but this is the pattern we recommend: you can combine both into one partition, but then you lose some features later on. These paths are static, and of course they are exchangeable. Naturally, if you want to keep some information on the device persistent across updates, you need additional storage, an additional container somewhere, a partition where you put device-specific configuration data or other things that need to persist across the update. As I said, the update images are unmodifiable, so you can't merge anything into them.
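To make such a layout concrete, here is a minimal sketch of an A/B disk layout as an sfdisk input script. The partition sizes and names are hypothetical and would be tuned per project; the point is the duplicated boot and rootfs partitions plus the single shared data partition:

```
label: gpt
# two boot partitions (each may hold just a kernel), two root file
# systems, and one shared partition for persistent device data
size=256MiB, type=uefi,  name=boot0
size=256MiB, type=uefi,  name=boot1
size=2GiB,   type=linux, name=rootfs0
size=2GiB,   type=linux, name=rootfs1
size=1GiB,   type=linux, name=data
```

A script like this would be applied once at production time, for example via `sfdisk /dev/sdX < layout.sfdisk`, and then, following the golden rule above, left alone by updates.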
On your running system, in the active path, say the A path, you have an update agent running in the root file system, and this agent is responsible for preparing and triggering the update into the second path. It takes the artifacts that need to be updated, from wherever they come, and writes them to the second path. Obviously it first has to identify the second path; we'll come to this later. It writes the images and then selects, in the bootloader, the option to do the next boot from B, in a try mode, just to keep the rollback possible.

After writing the updates and preparing the trial switchover, the system is rebooted and the bootloader takes the second path, the B path. But in order to avoid getting stuck somewhere in this new path, we have to set up a watchdog, a hardware watchdog, something you normally have on your system anyway, because you can always have some other kind of problem causing a lockup of your system. Here we need this watchdog specifically early, at the point where the new version starts to run. So the watchdog has to be started by the bootloader and has to run at least as long as it takes to get the new version up and running. Once it is up, the system signals back to the watchdog, I'm fine, and the watchdog will not pull the plug and restart the system. If a restart does happen, for whatever reason, while the new path B is still under test, you get an automatic rollback: the bootloader will see, OK, I tried this path already, it didn't work, so let's go back to the A version, let it handle what went wrong, and maybe report back to the agent. That's the basic principle of the dual-copy update pattern. There are some more aspects we try to follow strictly. As mentioned before, there is a golden rule: do not touch the working boot path on updates.
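As a rough illustration of the agent-side sequence just described, here is a shell sketch. The helper commands for writing the image and arming the trial boot are hypothetical placeholders; only the slot-selection logic is spelled out, assuming two fixed, made-up PARTUUIDs:

```shell
#!/bin/sh
# Hypothetical PARTUUIDs of the two root slots
ROOT_A="fedcba98-0001-0001-0001-000000000001"
ROOT_B="fedcba98-0002-0002-0002-000000000002"

# Extract the root= value from a kernel command line string
active_root() {
    for word in $1; do
        case "$word" in root=*) echo "${word#root=}" ;; esac
    done
}

# Given the active root device, pick the inactive slot to update
update_target() {
    if [ "$1" = "PARTUUID=$ROOT_A" ]; then
        echo "PARTUUID=$ROOT_B"
    else
        echo "PARTUUID=$ROOT_A"
    fi
}

# Placeholder steps of the overall flow (command names are illustrative):
#   target=$(update_target "$(active_root "$(cat /proc/cmdline)")")
#   write_image "$UPDATE_IMAGE" "$target"   # deploy into the inactive slot
#   set_trial_boot "$target"                # bootloader: try it once, watchdog armed
#   reboot
```

The same pick-the-other-slot logic reappears later in the SWUpdate integration; here it just shows that the agent never needs to know statically whether it runs on A or B.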
So we try to avoid changing anything in the partition table, anything that could possibly break the system. We also avoid deploying new artifacts onto the same file system where the old, working ones are running. And as you saw on the previous slide, there are two boot partitions, which may each contain just a single file, for example the kernel, but they are still separate, just to be safe.

Another aspect: I keep talking about A and B, but depending on where you are in the boot stage or in the update cycle, you may be on the B partition when you're doing an update. So whatever you deploy has to be agnostic to where it is actually running, whether on A or on B. It's essential that the implementation of the artifacts has this distinction removed; some indications of how will come later. That's important, and it's actually something we can handle generically.

Once you are done with the update, you have to confirm that the new path is working, so you do a kind of test run on the concrete device. What the condition for "working" actually is depends heavily on the application. It may just be that the update service is running again. If the update service works against an online backend and you know you always have to be online with this backend, it's natural to check this connectivity and only confirm the update once you have connectivity again. Maybe you also want to check whether the device functionality is still the same as before. Again, this is specific to the project, to the product you're developing; the hook mechanism, of course, stays the same. At some point you have to tell the update agent: OK, your update went fine, I'm on the new version, please confirm.
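This confirmation step can be pictured as a small boot-time hook like the following sketch. The individual health checks and the confirm/rollback actions are project-specific, so every name here is a hypothetical stand-in:

```shell
#!/bin/sh
# Project-specific health checks; stubs standing in for real probes,
# e.g. "is the update agent up?", "can we reach the backend?"
check_update_service()  { true; }   # placeholder for e.g. a service status query
check_backend()         { true; }   # placeholder for e.g. a connectivity probe

# Placeholder actions: tell the agent/bootloader the new path is good,
# or force a reboot so the watchdog/bootloader rolls back to the old path
mark_update_confirmed() { echo "confirmed"; }   # illustrative only
trigger_rollback()      { echo "rollback"; }    # illustrative only, e.g. reboot -f

confirm_update() {
    if check_update_service && check_backend; then
        mark_update_confirmed
    else
        trigger_rollback
    fi
}
```

The key design point is that the checks run on the freshly booted B path, and anything short of an explicit confirmation leads back to A.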
If you happen to do an update and the format of your persistent data partition is changing, also be careful when you upgrade that format, unless it's backward compatible in both directions. Such a migration should come last and only if needed, otherwise you may ruin your rollback capability at this stage.

So now security comes into play. In an ideal world, security is rather simple. You want to ensure that no tampered firmware is running on your device, that nobody installed some attack, some hack, on your system. You have a single firmware image like in the old days, and this firmware image is signed by the device manufacturer, or whoever the authority is, with a certificate. The ROM loader has a key, verifies the certificate with this key during boot-up, and only boots if that succeeds. That's the simple ideal world, and the base concept behind everything, but obviously the real world looks different.

In the real world today we have multi-stage boot processes, with multiple artifacts involved at multiple points that have to be signed, sometimes with different mechanisms. And there are vendor-specific mechanisms in place, at least for the early stages. You may consolidate the later stages onto a software solution with standard mechanisms, and that's what we are trying, but the early stage remains specific: how many stages you have depends on the hardware, how many artifacts are involved, and things like this. There might also be changes needed during runtime, which breaks the model of "I have my static firmware image here": if you really have to have a file system that is modifiable, the whole concept of a static certificate over it doesn't work. That's one more thing to keep in mind, and it's why we have a data partition. And last but not least, on top of all this now comes the update mechanism.
While traditional secure boot, trusted boot, is designed around a static chain of trust, you now have at least two paths to take, and that has to be kept in mind as well. So here is the basic pattern we apply. Of course, there can be much more complex, possibly more convenient, but also much trickier things to implement, so let's start simple. The basic pattern we want to apply here, for the 80 or 90 percent case, whatever, which works well for us, is to keep it as simple as possible.

First, the bootloader is obviously protected by the hardware; that is specific to your system and not in scope here. Then the bootloader loads and validates the next stage. For us, that's the kernel, the initramfs, and maybe some device description; those are the key artifacts to validate in the next stage. Then we apply the pattern of having a static, read-only root file system, which can be hashed and checked against a static hash. Obviously this check has to happen early, from the initramfs, and this is where we apply a check mechanism such as dm-verity. And then comes the data partition, the variable part. How to handle that? There are two basic patterns. The first is to keep it open and push the problem to the application level. It really depends on what kind of data you are storing there: if you're only writing out log files, you may, for various reasons, not care whether they are tampered with, or you have a mechanism to ensure they aren't, but that's a different topic. If you have application configuration that is trivial to check at the application level, keeping the partition open may also be no problem. If you can't do that, things become a bit more complicated, because if you just encrypt the partition, you have to deal with the key.
And if you keep the key open, obviously you don't have any protection. So what you need is something anchored in the device, a secret that only the device knows. If you have such a mechanism, it's easy to protect the data partition, either by signing it at runtime and maintaining the signature, or by encrypting it.

Now to the challenges of the secure boot pattern. First of all, you have to take a close look at your bootloader: what is it actually doing? Normally it's a universal tool, made to boot everything and to fall back to something else if anything goes wrong. With secure boot, this is obviously no longer desired, so you have to lock things down to the desired production case. You also have to look into what runtime parameters the bootloader may take from whatever source, and disable or freeze them as required. If it offers an interactive session, for recovery, for debugging, for configuration, that may also have to be locked down; generally it has to, just to make life for an attacker even harder. Still, a little bit of dynamic behavior remains: since we now have two boot paths to take, at least this selection has to stay possible.

Other things to look at: beyond the bootloader itself, you are loading artifacts, and you have to decide what you load and how that is configured. The kernel you are booting may take parameters from possibly modifiable sources, so the kernel command line should be locked down; we will see later how this can be done. And last but not least, plan for key updates. If you ship a new version and for whatever reason have to revoke the old key that signed the old version, or have to install new keys, plan for that, so that you are not shipping a device that doesn't support this kind of scenario.
So now let's look into some implementation aspects: the software update agent, at least the one we are currently using, since there are others in the field; bootloaders that fulfill the requirements of this pattern; which kernel containers to use, that is, how to package and sign the kernel; and some initramfs logic that is now needed with this pattern.

Starting with the update agent, the update manager on our device: we chose SWUpdate, an open source project, a versatile tool coming from the hardcore embedded world and covering far more than what we are using here, so it may be worth a look for other reasons as well. For us, its primary role is to write out the artifacts we want to update and to control the bootloader to choose the second boot option. SWUpdate has the advantage of various input modes: besides the over-the-air scenario, a local, offline variant is also possible, so it's up to the project integration how to deal with it. Maybe you have an application that already handles all the remote parts; then you can still use SWUpdate and just invoke and integrate it locally. Or if you'd like a web server you just push your updates to, that's also possible and already integrated. We added the connector to the hawkBit backend, an open source project which manages updates for large fleets of devices. Internally we also have proprietary connectors for other cloud systems, so that's all doable with this tool; it sits at the center of things. It can handle more than what we are using here, which might also be interesting depending on the use case: it can handle peripheral firmware updates as well, for attached devices, FPGA bitstreams and other things. However, what we are looking at here is the A/B boot pattern.
And for that, SWUpdate initially was not completely handy, because the normal pattern is that you hard-code things in the description of the update; I put an example of how this description can look on the right side. You state there: I have an image here and I want to put it there, and "there" might be a file system, or a specific partition on your device. Obviously, with A/B continuously rotating, this would mean shipping two update artifacts, one addressing the case where you target the A partition and one for the B partition. That's not really helpful. The nicer solution we have now implemented uses SWUpdate's built-in Lua scripting: a script identifies on the fly, OK, this device is currently using B, so I will write to A. What A and B are can still be configured there. The script is selected via the type field; on the right you see a type "roundrobin". The script takes the device specification, in this case the target partition UUIDs it may write to, finds out which one is active, and takes the other one. It's that simple, and it works not just with PARTUUIDs but with other formats as well.

So now to the bootloaders. The first path we look at today is the UEFI path. There you have the UEFI firmware already, and it usually involves a UEFI-compatible bootloader, or you boot the kernel directly. But here, as I said, we need some switching logic, and for that purpose we cannot currently fully rely on UEFI's core functionality. Therefore we developed a small bootloader which plugs this gap, with primarily two goals. The first is a robust boot path selection for UEFI targets.
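Schematically, such an update description might look like the fragment below. This is only loosely modeled on SWUpdate's sw-description syntax, and the "roundrobin" type in particular refers to the custom Lua handler mentioned above, so treat the field names as illustrative rather than exact:

```
software =
{
    version = "1.1.0";
    images: (
        {
            filename = "rootfs.ext4.gz";
            type = "roundrobin";   /* custom Lua handler picks the inactive slot */
            compressed = true;
            /* candidate slots, hypothetical PARTUUIDs; the handler selects
               whichever is NOT currently mounted as root */
            device = "fedcba98-0001-0001-0001-000000000001,fedcba98-0002-0002-0002-000000000002";
        }
    );
}
```

The point of the indirection is that one and the same update artifact works regardless of whether the device currently runs from A or from B.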
That's one key element. The other key element is having a watchdog enabled early. Interestingly, UEFI has a watchdog; unfortunately, the watchdog service is usually turned off early by the kernel after booting, so it's not really usable. So this bootloader, EFI Boot Guard, provides its own infrastructure to enable a hardware watchdog, or other available watchdogs, that can monitor the whole boot process. EFI Boot Guard acts as a replacement for the typical bootloader you would have on a UEFI firmware system, like GRUB or systemd-boot, at least as long as those lack the features we need. Maybe eventually we will migrate the features over to a standard bootloader and make them available there as well, but currently this is the solution we plug in. It's also supported by SWUpdate, so the control can be done this way; we contributed that.

The model we have with EFI Boot Guard is that it maintains two FAT partitions; it's the UEFI world, so FAT is what it uses to keep its state. What is that state? It's the configuration of what to load after the bootloader: the UEFI executable to start, which parameters to pass to this executable, the watchdog timeout, and, last but not least, the state of the system, that is, the revision we are currently running with this path and flags like "I'm currently trying this path" or "this path is already stable". This is the state we keep in these FAT partitions.

Now, the challenge with secure boot. Well, actually, first the good side: with UEFI we can fully rely on the firmware services for validating the images. The normal pattern is to install, in the UEFI firmware, the public key paired with the key you signed the images with; how that works depends on the UEFI setup's configuration. You sign, obviously, the artifacts you want to load.
In this case, that's the bootloader itself, EFI Boot Guard, and the executables it starts. And that means the UEFI firmware will automatically validate both the bootloader and the artifacts the bootloader starts, because the bootloader uses UEFI mechanisms to load and start the succeeding artifacts. So, problem solved.

However, as I mentioned before, there is more state to handle: the state that EFI Boot Guard keeps is unprotected. If we signed it, we couldn't change it anymore, because the private key can't reasonably be on the device. So if we keep it unprotected, is that a problem? First of all, what we have to do is eliminate from the state those variables that could be a problem, and that is, for example, the parameters passed to the next stage, the kernel image, the next UEFI executable. We do this by using the so-called unified kernel image: a container that in the end produces one UEFI executable embedding the kernel image and the kernel parameters. The parameters are hard-coded this way, and whatever EFI Boot Guard would still pass is simply ignored. It can also optionally embed an initramfs; maybe you noticed that on the previous slide there was no initramfs in the EFI Boot Guard concept. We didn't use one so far; now that we need one, by combining it into the unified kernel image we have it. Problem solved.

All the other state variables I mentioned before are not critical. The worst an attacker could do by modifying these variables is run a denial-of-service attack on the device. But anyone with physical access can very easily deny the service of the device anyway, either by physically damaging it or, since you have signed artifacts, by simply flipping a bit, after which the artifact is no longer good because the certificate no longer proves it valid. So all the other state variables are uncritical.
They could be meddled with, and the worst case, as I said, is denial of service on the device.

Now the other part, which probably covers the 99 percent case that we at least have to deal with: U-Boot. It's the de facto standard for many embedded devices, specifically non-x86. The good thing about U-Boot for us is that it already covers a large set of the features required for the secure software update. It has a scripted boot process we can use to implement the boot path selection and the failover mechanism. U-Boot also has a watchdog framework, in many cases with drivers included; otherwise they are easy to add, so we have that as well. U-Boot is obviously also frequently used in secure boot scenarios, so the features are there and the knowledge of how to lock things down is there, although this could still be improved. And maybe eventually U-Boot's UEFI implementation will even be an option here; it is already pretty advanced, secure boot included, but I haven't fully tried it out myself, maybe others did. Eventually this might even give us one switching mechanism for both worlds, but currently we are using the native boot path of U-Boot.

Native means we work with U-Boot's normal boot scripting, the boot command, just adding two environment variables and of course a little bit of script logic. These two environment variables manage the state we need to handle here. The ustate variable manages the state of the update: whether we are idle, trying an update, have just booted into an update, or the update failed; you see the state machine on the right, I don't want to go through all the details. The other variable, suselect, basically says: my current working boot path is A, or my current working boot path is B. That's all.
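The ustate handling can be sketched as a small transition function. The state names follow the state machine as described above (idle/OK, installed, testing, failed); the numeric encoding and event names here are illustrative, not the exact ones from the slide:

```shell
#!/bin/sh
# States: 0=OK (idle), 1=INSTALLED (update written, trial armed),
#         2=TESTING (first boot into the new path), 3=FAILED
# next_ustate <current-state> <event>; events: install, boot, confirm
next_ustate() {
    case "$1:$2" in
        0:install) echo 1 ;;      # agent wrote the update, trial boot armed
        1:boot)    echo 2 ;;      # bootloader tries the new path once
        2:confirm) echo 0 ;;      # userspace confirmed: new path is stable
        2:boot)    echo 3 ;;      # rebooted while still testing: mark failed
        *)         echo "$1" ;;   # any other combination: state unchanged
    esac
}
```

The bootloader script drives the `boot` transitions (and flips the path selection back on reaching FAILED), while the update agent drives `install` and `confirm`.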
And with these two variables and a little bit of scripting, we can plug U-Boot together and provide functionality similar to what we have with EFI Boot Guard on the UEFI side.

So, securing this. First of all, you obviously need to sign U-Boot and the other artifacts involved in the early boot stage, according to the SoC you have; unfortunately that's always different, and maybe it will improve in the future, but that's the current situation. Then you need to generate and sign the next stage that U-Boot is loading: FIT images, the equivalent of the unified kernel image we had on the UEFI path. These FIT images should contain the kernel, the initramfs and the device tree, and should be signed. The public key for this must of course be made available to U-Boot, or to whatever key storage you have on the device. Furthermore, you have to lock down the U-Boot configuration for secure boot. There are some guides in U-Boot, and there's also a recently published white paper which describes this at a pretty good level; I put the link here.

And then the same question comes up: how to manage the state variables? First of all, the values of the variables, as I explained before, are not critical for us; that's the good part. The kernel parameters, for example, are already part of the FIT image, namely in the device tree you put there, so they are not part of the problem. But how to deal with the state, and where to put it? The approach we take is to keep the state, the two U-Boot environment variables, in an external environment. That alone opens a problem unless you take additional measures, because normally the external environment holds the complete environment and overshadows the internal, built-in, protected environment you have inside U-Boot.
So what we have to do here is configure the system so that only the two variables we want to read from external storage are really read from there, and everything else possibly written there is ignored. Furthermore, type checking is applied to these variables: they are plain integers and shouldn't be interpreted as anything else. And as I mentioned before, all the other variables are locked down: they are not read from the external environment but only from U-Boot's built-in environment, and are thus protected by the protection of the U-Boot executable itself. Some changes to U-Boot were needed for this. The patches were written for us by Marek Vasut; I actually didn't check whether they have already been posted. They build on the variable flags that already exist in U-Boot and enhance them for this use case.

Now, the last implementation aspect: booting the right root file system. As I mentioned, we cannot really hard-code the boot path. Normally you say on the kernel command line: root is that partition, that PARTUUID. That doesn't work with the A/B pattern, because you never know whether you are on the A or the B path. So what options do we have to decouple this? One would be native file system UUIDs, which are available for some file systems, but not all. Another option would be to update the partition UUIDs on update, basically writing new UUIDs out as you update a path; then you can again use the PARTUUID pattern. But that also doesn't always work, and maybe you also don't want to write to the partitioning in this way, as I mentioned before. So what we chose instead is a kind of self-made file system UUID, based on very simple mechanisms.
So first of all, we generate a UUID when we generate the artifact and write this UUID into a file on the file system. It could be any file; we chose /etc/os-release with a custom variable there. We write out the UUID of this instance, this version, of the file system. And you also embed this UUID in the initramfs that corresponds to this file system. Then we have a small patch, currently for the Debian pattern I will present later on, that enhances the normal boot process: it opens up all the root file system candidates it finds on the system, simply does the match, and in the end changes into the root file system that matched. And then you have this self-made file system UUID, so to say. So, as I mentioned, we have done some pre-integration, initially targeting our software stack here. We are using Debian binary packages integrated with the ISAR build system. There are separate talks on this, so I don't want to go into details. The isar-cip-core layer, think of it as a Yocto layer just for ISAR, already contains a description of how to build an image. Now, and there is an ongoing, pending merge request for this, a specific demo target is being added for QEMU x86 using the UEFI secure boot path. So what will be added there? Basically, integration recipes and configurations for the proper A/B disk image layout, and to generate an update container in the format that SWUpdate can consume. SWUpdate is not yet a directly usable Debian package, that is ongoing work, so we have to compile it specifically for this use case, along with its dependencies; that is also described in this integration layer. EFI Boot Guard is also not yet packaged; this is another package we have to add here. Furthermore, the root file system selection mechanism I described before is also part of this integration. And last but not least, the signing, at least demo signing, of the artifacts.
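The UUID match described above, as done in the initramfs, can be sketched like this; the variable name IMAGE_UUID and the candidate handling are assumptions for illustration, not the actual names used in the integration.

```shell
#!/bin/sh
# Pick the root filesystem whose /etc/os-release carries the same
# UUID that was baked into this initramfs at build time.

# find_root EXPECTED_UUID CANDIDATE_DIR... -> prints matching candidate
find_root() {
    expected=$1; shift
    for root in "$@"; do
        osrel="$root/etc/os-release"
        [ -r "$osrel" ] || continue
        uuid=$(sed -n 's/^IMAGE_UUID=//p' "$osrel")
        if [ "$uuid" = "$expected" ]; then
            echo "$root"     # this is the slot to change root into
            return 0
        fi
    done
    return 1                 # no candidate matched
}

# Demo with two fake root filesystems standing in for the A/B slots:
demo=$(mktemp -d)
for slot in a b; do
    mkdir -p "$demo/$slot/etc"
    echo "IMAGE_UUID=uuid-$slot" > "$demo/$slot/etc/os-release"
done
find_root uuid-b "$demo/a" "$demo/b"   # prints the path of the b slot
```

In the real initramfs, the candidates would be the mounted A/B root partitions, and the matching one is handed over to switch_root.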
You can find this on Quirin's branch, and there are also some try-out instructions. What we are planning to do after merging that, what is currently missing but is internally ongoing work, just not published yet: the root FS validation pattern, no big effort anymore once the parts are there. We definitely want to add the U-Boot pattern as well. We are currently working on a concrete device where this is being implemented, already using part of this published code. That will involve changes to further parts, but it is almost ready. Actually, for us at Siemens, we want to use this publication and integration into CIP as a consolidation point, because there were many projects running over the past years using these patterns in different stages of their evolution. Now we really want to consolidate on one pattern and one source of recipes, scripts, and so on, and this shall be isar-cip-core. Further things to look into in the future: I would like to explore what we can do, on most ARM devices these days, with OP-TEE and secure storage to protect the data partition. It would probably cover a broad range of devices if this works well; I haven't played with it yet. We would also like to provide the meta-data path in CIP, that is more the Yocto/OE-style layer we work on. A good share could be reused, not all the code, but a good part of it, and that would then also address Yocto-like systems. Last but not least, we already demoed a full-stack demo in the past, and that should also be updated to the changes we have done here. I'm not going to go further into this full-stack demo here; there was a presentation by Kazuhiro Suzuki at last year's CIP mini summit, which also includes the remote backend, hawkBit, and you can have a look there at how this works. So that is the complete story, so to say, of an over-the-air update. With that, I'd like to summarize.
So, secure boot and robust software updates: you may say it's no rocket science if you look at the individual pieces, but there are so many pieces that it's also nothing you do in a long afternoon. It really takes some time to integrate them all. Luckily, all the pieces are by now available as open source, but integrating and configuring them in the right way is still the key. It's really no commodity yet. In the CIP workgroup, we want to provide and organize the blueprints, the pre-integration of these pieces, and we also want to ensure that testing and long-term maintenance are done on it. If you have a question on this, follow up directly or on the cip-dev mailing list. Our goal is really to make these features, this secure software update, these over-the-air updates, a commodity, and to make it way easier to use them in your own products. So with this, I would say thank you, and let's walk through your questions. I have a little bit more time, yeah. Okay, so let's start. The first question is: to send delta updates over the air, you need to know which version is on the device. So yes, this is a problem. Normally, you have a database in the backend where you manage the devices and also which version is running where. That's what hawkBit, for example, is doing for you, and with that knowledge you can organize shipping the right delta to the device. Otherwise, you may have to ship more deltas, or you have to ask the device for its version. So unless you are maintaining a lot of versions in the field, deltas work if you have only a few baselines to jump from. But I agree, it can become cumbersome if you have too many versions in the field. Next question. So, oops, I jumped. Damn it. So: the persistent data storage would also need to be dual-copy, and configuration data may not always be backward or forward compatible. Yeah, good point. We don't have this in the pattern.
We currently have no device in the field, to my knowledge, which requires this. It's one of these extensions, as I mentioned: you can always make things more, well, complicated, or more advanced. So this would be a pattern to add in the future if it turns out to be really useful for a number of devices. But currently, we don't have a pre-integration for that. And well, if you have something to share, it would be welcome. So, next question: how do we get A/B updates for the lowest-level firmware as well? I guess this is targeting bootloader updates, firmware updates, and all these things. Yeah, as I said, this is very specific to the device. There is ongoing activity to consolidate this: Linaro is working on these things, and you have standardization in UEFI that provides channels for feeding in such updates. But in the end, the implementation is device-specific. We don't actively work on that, but we follow these activities and would be happy to include them. What we did in some concrete projects was a concrete implementation for, say, i.MX6 or i.MX7 or something like this, exploiting specific features of the device. But this is, as I said, device-specific, and it often also has to accommodate constraints from the specific project. So yeah, it would be nice to have something generic; maybe we have it one day, and then we can address it. But still, the normal case for us remains that the bootloader is the more static part. It may be updatable, but not on every update. Next: the watchdog is needed at boot time, but regardless of the update procedure. Yeah, I'm not sure this is a question, but maybe you can ask again. Yes, the watchdog needs to be used. We usually start the watchdog in the bootloader, and of course Linux has to be aware of taking over the watchdog, as it's already running, specifically if it's a non-stop watchdog.
Sometimes you need to patch something, but generally this works fine. Updating the bootloader itself, I hope I addressed that question already. Ideally, you want to confirm by an external agent? Not sure I get the question. So the question is, in other words, whether persisting the switch should be done by a command executed by an external agent. Well, the agent is running in the root file system we have running, because that is what is currently able to prepare the device for an update. You could also run other patterns; SWUpdate, for example, provides a pattern where a dedicated update system boots up, but this is something we didn't address here, because in our experience this approach is simpler. I hope that addresses the question; otherwise you can follow up later on. How do you set up the process to confirm the update after checking the system state? So, the confirmation is basically handled by SWUpdate. You normally have an input channel there, or a command you can invoke, and SWUpdate then takes care of talking to the bootloader, or doing whatever else is needed. How that channel looks depends on your integration with SWUpdate: is it called locally, is it called via a connector by the backend, and so on. There are various ways to do this, and it would go beyond the scope of this talk; I'm happy to answer this offline as well, about which patterns exist there, or to point you to the SWUpdate documentation. How do you deal with configuration files and the data partition during an update? Yeah, as I said, this is application-specific. We usually do not touch them. You can, of course, add rules and code to your update scripts that are executed during or after the update to make any changes to the data partition.
But we currently consider this out of scope for the update that we are modeling here; it is application-specific. So yeah, I'm not sure if I have further time. Give me a signal, otherwise I will just continue. There's another question: we've started to see devices such as NICs supporting live firmware updates, not needing a reboot. We haven't, however, seen systems, for example servers, supporting live firmware or BIOS updates. Can you offer some thoughts on why server systems have not been able to update the BIOS without needing a reboot? Well, the tricky part is that what you can do in the end is really implementation-specific. It's just like with our system: you want to know that the new version has actually been booted. You can, of course, just flash the new version over the old one and then pray that the next reboot picks up a working version, but normally you want to confirm this, you want to commit the changes, and that usually involves restarting the software that is being updated. And firmware is not just something that runs only during the boot process and is then gone; these days it often contains software that keeps running after boot, at runtime, and you want to update that too, but you don't want to update it while the system is running. It may be sensitive, managing hardware resources. It's all doable, but the engineering effort goes up. That's the reason. Okay, so I have one more minute to go, and there are one or two more questions. What does "key" refer to in this context, the package key? So, key was a general term. The key is basically the public key we are using here, the one used to sign the images; a general term for anything we need to validate that the signature, the certificate of the signed artifact, is valid, so to say.
Can OpenSSL be used in the initramfs to verify the root file system instead of dm-verity, since it can validate the entire root file system, whereas dm-verity verifies only the blocks actually loaded? In theory, you can put anything into the initramfs that fits in there, including OpenSSL; we haven't tried this pattern yet. If it's useful and better than dm-verity, we are happy to take this hint, and maybe even code, to make it available. So this is open. Yeah, thanks for the information. So, I think I managed to run through all the questions. No, actually not, there are more tabs. Sorry folks, I couldn't address them all; you can also reach me on the Slack channel. So with that, thank you, and, well, maybe see you later.