Okay, so let's get started. If you are looking for the talk about surviving in the wilderness — system update and integrity protection — then you are in the right room, because that's what I'm going to talk about in the next 15 minutes. The reason why I want to talk about that is partly because it's a complex problem, partly because I do have some experience in that area, and partly because it's apparently a tradition that every ELC must have a talk about system update. So this year it's my turn. But I'm trying to do it a bit differently compared to, for example, what was said last year, so I hope to give it a different twist and you learn something new in this talk. Let's start with a short introduction of myself. My name is Patrick Ohly. I work for the Open Source Technology Center at Intel, and I've done various things in open source. Most recently, the parts that are relevant for this particular topic are that I worked on security technology, and towards the end of the project also on system update, in Ostro OS, a Yocto-based distribution. The technology at that point was IMA for integrity protection and swupd, one of the many existing solutions for doing system update. Then, for the Yocto project, we started to do a summary. Other people had done comparisons before, but there was no good starting point on the web, no single page that summarizes all the different mechanisms available to someone who is using Yocto or OpenEmbedded. So I helped start that wiki page, I invited other people to contribute to it, and I think we now have a very good comparison that uses a common language to describe the different mechanisms. The most recent work I've done is that I've taken some of that technology — dm-verity and full disk encryption — and integrated it into a new project that is called the IoT Reference OS Kit for Intel Architecture.
That's a mouthful, so I'm just going to abbreviate it as refkit to be a bit more brief; it's part of the Yocto ecosystem. It certainly has a focus on Intel architecture, but that's just what we are using as reference hardware. The technology itself might be Intel-specific, or it might be something more general that simply hasn't been used in the Yocto project so far. The advantage is that we are now going to exercise additional use cases and test them, and I hope this will extend the reach that the Yocto project has had in the past. For example, dm-verity and full disk encryption use components that are not currently in OpenEmbedded-Core but come from, say, meta-openembedded. But enough about me. Why this talk? Why is system update in particular important? Usually in other presentations you will see some horror stories about security gone wrong. I'll spare you that; the fact that you are here might mean that you are familiar with the problem. I just want to point out that the general approach is hardening before shipping. That's important — you definitely need to do that, or your product will be insecure out of the box. But that's where the system update part comes in: you can't count on catching everything in advance. There will certainly be new vulnerabilities and new attack methods, and there must be some way to update your device in the field to address these new problems. The other part that I put into the title of my talk is surviving in the wilderness. By wilderness, I mean that an embedded device might actually be used in an environment that is not as friendly as your PC at home.
It might be deployed somewhere where people have access to it who might want to do not-so-pleasant things to it, like hacking it, like getting their own software up and running on it. One mechanism to cope with that, besides making the hardware tamper-resistant, is integrity protection. If you can get your device up and running so that it only runs software that has been verified to be unmodified, that's already a good start. I'd like to add that integrity protection must also cover the configuration files. That's part of the integrity protection that we will see later, and it is not necessarily part of all mechanisms — some only cover the executables, and then you still have a problem. So that leads to the content of the talk. I'm going to talk about different update mechanisms, but from a high level, because I was interested in the key differences between these mechanisms, how they influence the system architecture, and how they influence the possibility to add integrity protection. These two parts go hand in hand, because they both depend on the architecture of your system. The last part is about the work that I've done myself. Because I reused a lot of open source components, reused knowledge that other people have already gained, and used examples from the web, I'm really just standing on the shoulders of giants here. I was integrating a lot of existing work, and it made me feel like a small dwarf compared to those who came before me. To make sure that you don't forget that, I'm going to wear this dwarf hat. That's my promise to you: you will easily be able to identify the parts where I'm not claiming credit for the work done by others. But I'm allowed to take it off in the last third, because that's about my own work. And I assure you, this is not a Santa Claus hat, no matter what my kids might tell you if you ask them.
It's a genuine dwarf hat from the coal mines of the Black Forest in Germany. Trust me. So let's get started with the talk. Taxonomy — it's a complex word. What I mean is that I want to talk about the key aspects. I'm not going to talk about the individual detailed implementations that exist; there are good talks about those from past LinuxCons and ELCs, and I'm not going to repeat them. I'm just going to use the four mechanisms that we've looked at in more detail as examples of specific solutions going one way or another on these key aspects that I want to talk about. And this is the URL. I encourage other developers who have some open source solution that is available, or might become available, to add information about it to that page, so that it's a really comprehensive and fair summary of what's out there. The table that we have in that wiki uses a lot more criteria; for the talk, I just picked four because they are, in my opinion, the most important ones. Block-based versus file-based is something that really influences performance, reliability, and compatibility with integrity protection, so that's an important one that will come up again later. Then the partition layout: how much space do you need, and what restrictions are there? Then integration with the boot process. And finally — a bit less technical, but relevant for use cases around system update — how does it interact with an update server? So, the first one. As I said, there are two key approaches. One is that your system update mechanism rewrites blocks, which means the file system must not be mounted at that time. It might write just the modified blocks, but in general, current solutions rewrite the whole partition. That's how SWUpdate works — SWUpdate itself is fairly flexible, but what it supports out of the box is rewriting partitions — and Mender.io also made that choice. The alternative is file-based updates.
You have an existing, writable root file system, so you can modify files in that partition. Two prime examples are swupd — and I have to be careful about the pronunciation here, because swupd and SWUpdate are two different things — and OSTree. swupd updates and replaces files, basically directory trees, as part of the update process. OSTree is slightly different: it creates an alternative tree using hardlinks and then, during reboot, switches over to the new root, so it's a bit more atomic. But it still needs a writable root file system to do its work. Some other attributes that follow, besides whether a reboot is required or not, are that the block-based approaches have a fixed partition size. You can't have one update stream that gets applied to different devices with different hard disks; the disk always has to be partitioned exactly the same way. The file-based ones, by contrast, can work such that the OS gets installed using the full disk as available on a device, and the OS still gets updated using the same update stream from a server, which makes it easier to support a variety of devices. And then, of course, I/O is a key criterion. Updating partitions is just slower. You can try to be efficient with delta updates, but it's still always going to be more overhead than just updating a few files — and swupd, for example, tries very hard to do only the minimal amount of work that is necessary to get the system updated. Another criterion is the partition layout. A common setup is to have two partitions. One is live — that's the one currently used by the OS; you can't rewrite it. Then there's a standby partition that you can overwrite, and after rebooting, you switch to that other partition. Mender.io works that way, and SWUpdate also supports it — SWUpdate is fairly flexible, a bit more like a framework, but it has these use cases implemented.
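As a rough illustration of the A/B scheme just described — this is my own toy sketch, not the actual Mender.io or SWUpdate implementation, and the slot names and state file are invented — the bookkeeping boils down to something like this:

```shell
#!/bin/sh
# Toy A/B slot bookkeeping: an update is always written to the
# standby slot, and only after a successful update does the standby
# slot become the new live slot for the next reboot.

STATE=$(mktemp)        # in a real system: bootloader environment variables

other_slot() {
    # Map the live slot to the standby slot.
    case "$1" in
        A) echo B ;;
        B) echo A ;;
        *) echo "unknown slot: $1" >&2; return 1 ;;
    esac
}

live_slot()     { cat "$STATE"; }
commit_update() { other_slot "$(live_slot)" > "$STATE"; }

# Initial state: slot A is live, so an update gets written to slot B.
echo A > "$STATE"
echo "update target: $(other_slot "$(live_slot)")"
commit_update
echo "live after reboot: $(live_slot)"
```

The interesting (and hard) part in real systems is not this mapping but making the commit atomic and letting the bootloader fall back if the new slot fails to boot.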
I can't imagine how to do that with OSTree and swupd — it's not how these tools are currently designed, so that's something that would have to be added. Also, for fail-safe mechanisms, a second partition is nice. If you get file system corruption with swupd or OSTree, you do have a problem, because if there's only one partition, that one is now corrupted. Related to system update is always the question of how additional items on your machine get updated. That could be firmware or, depending on the partition setup and how the system boots, it might also be the kernel — that's not necessarily updated if it's not part of your root FS. So that's also something to keep in mind. Then the integration with the boot process: what mechanism does it rely on for booting? Does it need a particular boot loader, for example? The key question then is, if you have such an A/B setup, how do we choose what to boot into? Do we boot the left one or the right one? Which one is the one that we really want? And is it guaranteed that we choose the right one, or is an attacker, for example, able to do a downgrade attack and force us to boot something that we didn't want to boot? And finally, implementing a fail-safe or rescue mode might also depend on certain features, like a boot loader that is able to execute your code or make choices, or an initramfs that's guaranteed to be around — SWUpdate has such a feature. The fourth point, as I said, is a bit less technical; it's more about the use cases. If you have a large number of devices, you may want to do things like updating only a subset of them, and that relies on being able to tell which device is currently checking for an update. That's not guaranteed. For example, OSTree, and definitely swupd, just work with a server, and the client checks for an update anonymously. In swupd's case, that's an ordinary HTTP server which hosts some files.
The file hierarchy is basically the update: the client pulls some files, gets a new version number, and then knows which additional files it needs to get to that current version. If you want to know whether a swupd-based device is actually up to date, you need additional tooling — telemetry that tells you the status of your device fleet — and that's not part of the mechanism provided by swupd itself. The alternative is to have dedicated update servers with a custom protocol that the client uses to check for updates. Mender.io provides that; it's part of their design that they have a complete solution for the client and for the server. So you can actually do some device management and say this device is supposed to have that version, and you know whether it has updated. That's also important: if you have devices out in the field that never get turned on, you don't get to update them, but you perhaps still want to know that. And SWUpdate has some support for that via hawkBit, another open source project. So there are solutions; it's just that not everyone supports everything. So that's it about the update mechanisms. As I said, there have been more detailed comparisons, but those were the points that I found important. And I'm a bit torn myself about which one I would choose, because they all have different pros and cons, and in some cases they need further work — I don't know who is going to do that work. For the IoT refkit that I mentioned, we currently have support for SWUpdate, but it's not even in the repository yet, so we haven't made a choice. The Yocto project itself certainly doesn't have a preferred update mechanism. It's just that for the refkit, we need to choose something, make it work, and see whether we run into any problems architecturally — in OpenEmbedded, for example — that might need further work.
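The anonymous pull model described above can be mimicked with plain files. In this sketch, the "server" is just a directory, and the version file name is something I made up — swupd's real format (manifests under versioned subdirectories) is considerably more involved:

```shell
#!/bin/sh
# Toy anonymous update check: the client compares its installed
# version against a version file published on a dumb file server.
# No per-device state ever reaches the server side, which is exactly
# why the server cannot tell which devices are up to date.

SERVER=$(mktemp -d)                  # stand-in for an HTTP document root
CLIENT=$(mktemp -d)                  # stand-in for the device

echo 100 > "$SERVER/latest-version"      # what the server publishes
echo 90  > "$CLIENT/installed-version"   # what the device currently runs

check_update() {
    latest=$(cat "$SERVER/latest-version")       # curl/wget in real life
    installed=$(cat "$CLIENT/installed-version")
    if [ "$latest" -gt "$installed" ]; then
        echo "update available: $installed -> $latest"
    else
        echo "up to date"
    fi
}

check_update                                   # device is behind
cat "$SERVER/latest-version" > "$CLIENT/installed-version"
check_update                                   # after applying the update
```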
And I do have a list at the end of my talk of open issues where I see technical challenges that are universal to all of these update mechanisms. So by doing this exercise, we might find further work items that really belong in Yocto and OpenEmbedded, and that's something we can work on. If you have questions, feel free to ask in the middle; otherwise, I'll just continue with integrity protection. I've seen it mentioned by two different system update solutions: SWUpdate was one, and there was a new thing announced at FOSDEM at the beginning of this year whose name escapes me, because I wasn't actually at FOSDEM myself. They also claim that they support hawkBit. So apparently it's used. So, integrity protection. Again, different solutions, different aspects of how they do it. I mentioned at the beginning of the talk that I have extended experience as a user of the IMA solution. That's the Linux Integrity Measurement Architecture, with an extension that is called the Extended Verification Module; commonly it's just called IMA/EVM. That's part of the Linux kernel. Then there's full disk encryption with a per-machine secret key — I'm going to explain why I consider that also an integrity protection mechanism. And the third mechanism is dm-verity, also a Linux kernel feature. All of these are readily available; it's just a matter of integrating them into a system. So let's talk a bit about IMA. I worked with that in the Ostro project to try to make it secure. I was really, again, a dwarf there — remember the hat — learning about it while doing it. And I had a hard time figuring out how to make it secure, to be honest. I quickly learned that it was originally designed as a measurement technology. Part of the Trusted Computing Group design is that a device, while it boots up, records what software it is running, and by using a TPM chip, it can actually prove that to a remote server.
So the use case is that the device boots up, perhaps even with modified software, but then some server that it talks to can ask the device to prove that it runs unmodified software, and if it doesn't, the server can refuse to service that device. That's one possibility. But it's not actually preventing the device from booting — it's not local enforcement of integrity. That was added later with IMA appraisal. IMA covers the content of a file: an extended attribute holds a hash calculated over the content of the particular file that the xattr is attached to. The kernel, when opening a file, reads the entire file, computes the checksum, compares, and basically returns access denied if the hash doesn't match, because that means the file was modified by someone. And because the IMA xattr itself is attached to the file, when we consider a scenario where someone tries to modify a file while the device is offline, we also need to protect the xattr itself against modification. That's what EVM does. EVM also covers the protection bits and the owner of a file — the metadata. The way it's commonly set up, EVM is a cryptographic hash created on the device with a per-machine key, so it actually needs to be set up on the device. There is also a mode, I think, with asymmetric keys, but I'm not entirely sure that EVM can work exclusively in that mode — I think not, based on my experience. The problem, and the reason why I have unfortunately come to the conclusion that this technology is perhaps not yet ready for production, is that the combination of writable files and content hashing makes it very easy to accidentally create unreadable files. We've run into various cases where that happened. For example, SQLite appends to a file but never closes the file.
And because the mechanism that automatically updates the hash only triggers on file close, it basically completely broke SQLite. I discussed that with upstream, and we agreed that updating the hash on fdatasync would at least mitigate the problem a bit, but that's not implemented, and it's still risky, because there might be other files that are just kept open while data gets written. And even if a file is properly closed, file systems are just not designed to flush the metadata together with the data. We ended up with situations where the file was closed, the data was written to disk and committed, the xattr had been updated in memory but then wasn't flushed to disk for an extended period of time — and we still ended up with an unreadable file, although the kernel had done everything it should; it just hadn't flushed in sync. So it's a lot more fragile if you have writable files on the disk that are protected by IMA. The xattr is basically kept in the inode cache — that was my understanding at the time. The problem is the case of a sudden power loss: you have written and flushed the data, then you lose power, and the system comes up with the old xattr, and the hash doesn't match. So it's fine as long as your device keeps running; it was just more fragile in terms of resistance against sudden power loss. And that completely changes the file system semantics — it's not a drop-in replacement; software running on such a system needs to know that it has to be a lot more careful. But the straw that really broke the camel's back for me is that it's not actually secure at the moment. Unfortunately, there are known attacks, because it doesn't protect the integrity of the directory. And that goes back to the part about protecting the configuration: it basically allows an attacker to remove configuration files, or to replace a file with a symlink to something that is not protected.
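To make the per-file model and its directory gap concrete, here is a userspace simulation — sidecar files stand in for the security.ima xattr; the real mechanism lives in the kernel and is managed with evmctl. A content hash catches in-place modification, but nothing stops an attacker from simply deleting the file together with its hash:

```shell
#!/bin/sh
# Simulate IMA-style per-file content hashes with sidecar files.
# Real IMA stores the hash in the security.ima xattr and appraises
# in the kernel on open; the logic sketched here is the same.

DIR=$(mktemp -d)

sign() {   # record the "good" hash, like labeling a file for IMA
    sha256sum "$1" | awk '{print $1}' > "$1.ima"
}

verify() { # kernel-style appraisal: deny access on any mismatch
    [ "$(sha256sum "$1" | awk '{print $1}')" = "$(cat "$1.ima" 2>/dev/null)" ] \
        && echo "access granted" || echo "access denied"
}

echo "secret=1" > "$DIR/app.conf"
sign "$DIR/app.conf"
verify "$DIR/app.conf"            # unmodified file: granted

echo "secret=0" > "$DIR/app.conf"
verify "$DIR/app.conf"            # tampered content: denied

# The directory itself is not protected: removing the file and its
# hash together leaves nothing for appraisal to complain about.
rm "$DIR/app.conf" "$DIR/app.conf.ima"
[ -e "$DIR/app.conf" ] || echo "config silently gone"
```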
So all kinds of attacks are currently possible, and further work is needed to address those gaps. There have been patches, but they are not merged, and it's currently unclear when that will happen. So I presented it for the sake of completeness, because we've done a lot of work on it — and I hate to say this, because I know people have invested a lot of work into this technology and it's dear to them — but it just didn't work for me, and probably also not for others. So, full disk encryption. That's a lot more mature. People have done it for a long time; it's commonly used on laptops, for example. Integrity protection here is basically a side effect of encrypting the entire partition: an attacker, when the device is turned off, just sees garbage in the partition. If he tries to modify the partition, all he can achieve without access to the secret key is that, when the device boots up again, some blocks will be completely garbled — there's no way to do targeted file editing; that just doesn't work. It certainly increases the risk of data loss, though, because with block chaining, if a single block has a bit flip, that will affect more than just one block. The key problem — pun intended — in an embedded device is: where do you get the encryption key from when the device boots up? That is really the key problem here, because normally, on a laptop, you type something in. The user activates the device by providing something that is not on the device itself, and an attacker who gets access to the device doesn't have that additional knowledge and can't use the device. In the embedded space, people have done that with, say, USB sticks that need to be plugged in — something is on the USB stick, you boot up, then you take away the USB stick. That's okay for home devices or toys, but on a grander scale, that just doesn't work.
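One common pattern — and this is only a sketch of the idea, not what LUKS actually does internally — is to keep the volume key on disk only in wrapped form, encrypted with a secret that is unique per machine. Here the per-machine secret is just a throwaway string; on a laptop it would be derived from the user's passphrase, and in my setup later in the talk it comes from the TPM:

```shell
#!/bin/sh
# Sketch: wrap a random volume key with a per-machine secret so that
# only the wrapped blob ever sits on storage. Finding a good source
# for the per-machine secret on an unattended embedded device is
# exactly the hard part discussed in the talk.

WORK=$(mktemp -d)
MACHINE_SECRET="per-machine-secret-from-somewhere"   # placeholder!

# Generate a random 32-byte volume key (this would unlock dm-crypt).
openssl rand -out "$WORK/volume.key" 32

# Wrap it for storage; -pbkdf2 strengthens the passphrase-derived key.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass "pass:$MACHINE_SECRET" \
    -in "$WORK/volume.key" -out "$WORK/volume.key.wrapped"

# At boot: unwrap with the machine secret and hand the key to dm-crypt.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass "pass:$MACHINE_SECRET" \
    -in "$WORK/volume.key.wrapped" -out "$WORK/volume.key.unwrapped"

cmp -s "$WORK/volume.key" "$WORK/volume.key.unwrapped" && echo "round trip ok"
```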
I am going to present one thing that I've done which is a bit more secure than just dumping the key on the hard disk itself, but I don't think there is a perfect solution. What I've done uses the TPM and the NVRAM in the TPM, but the bootstrapping problem still exists: someone getting access to the device still has the key basically in his hands; it's just a question of how hard it is to get to it. If someone has a good solution, I'm all ears — I don't think I have found one. We can look at the solution that I've come up with and see whether it can be improved. The third one, dm-verity, has a different characteristic. Both full disk encryption and IMA allow you to write and modify files in your partition; dm-verity doesn't, and it takes advantage of that by taking hashes of the data blocks in your partition. It's hierarchical: you hash the data blocks, you hash the hashes, and so on, until you arrive at a very short root hash, which is the key to unlocking the protection of the partition. Integrity checking then happens when individual blocks are retrieved from the disk: the kernel goes through the hashing steps again, compares against the root hash, and knows whether that block has been modified or not. One advantage compared to full disk encryption is that you get a really specific read error saying: this block here is unreadable, you have a problem. With full disk encryption, you might have corruption in a data file and not notice it at runtime, because typically file systems don't do checksumming of the data itself. dm-verity is something that needs to be activated explicitly at boot time after verifying the root hash. The root hash really is the crucial piece, the thing that needs to be protected: if someone tricks the kernel into using a different root hash, then he can also rewrite the hashes, modify the data, and there's no protection.
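The hierarchical hashing can be sketched in a few lines of shell. This is a toy with a single hash level over 4 KiB blocks; the real on-disk format is produced by veritysetup and has multiple tree levels, a salt, and a superblock:

```shell
#!/bin/sh
# Toy dm-verity: hash fixed-size blocks, then hash the list of block
# hashes to get a single root hash. Any change to any block changes
# the root hash, so verifying the root verifies the whole partition.

WORK=$(mktemp -d)
dd if=/dev/zero of="$WORK/part.img" bs=4096 count=8 2>/dev/null
echo "hello" | dd of="$WORK/part.img" bs=4096 seek=2 conv=notrunc 2>/dev/null

root_hash() {
    # One hash per 4 KiB block, then one hash over all block hashes.
    for i in 0 1 2 3 4 5 6 7; do
        dd if="$1" bs=4096 skip=$i count=1 2>/dev/null | sha256sum
    done | sha256sum | awk '{print $1}'
}

GOOD=$(root_hash "$WORK/part.img")
echo "root hash: $GOOD"

# Flip one byte in block 5: the recomputed root no longer matches.
printf 'X' | dd of="$WORK/part.img" bs=4096 seek=5 conv=notrunc 2>/dev/null
[ "$(root_hash "$WORK/part.img")" = "$GOOD" ] || echo "tampering detected"
```

With the real thing, the equivalent steps are `veritysetup format` (which prints the root hash at image build time) and `veritysetup open` in the initramfs.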
So protecting and verifying the root hash is something that needs to be done in the boot process. Something that might be interesting, depending on your use cases, is that the partition itself is also usable without dm-verity — it's just a plain data partition. You could boot up without dm-verity, and it would work fine; the partition then becomes read-write. Of course, if you later enable dm-verity again, you don't have the right hashes anymore, but at least it might be useful during development. I think that's one of the open issues. The stateless concept that is being pushed by, for example, the systemd developers is fairly promising, and we are not done with it yet. A lot of traditional Linux components put all kinds of writable files into the /etc directory, and that is a problem. For a subset of these components, this has been solved by the Clear Linux project: they currently carry distro patches that patch, say, libc to not depend on files in /etc. For the Yocto project, I think that's promising, and I would love to make that possible there too, because a lot of update mechanisms will depend on it. It's not just dm-verity: OSTree and Mender.io also basically wipe out the entire root partition, including the /etc directory, so you somehow have to have an additional data partition and, if you have system settings, restore them from there. So it is a general problem, and one of the things that I think would be worthwhile to address in a broader scope. Unfortunately, it comes with these distro patches, or upstream patches that are not yet merged upstream, and maintaining that on an ongoing basis is a challenge. The thing that I wanted to add in this talk is the compatibility between these update mechanisms and integrity protection. Based on the key characteristics that I've described so far, we can easily deduce that IMA and EVM will work with the file-based ones.
I've personally had that working with swupd, so that works, with the limitations that I mentioned for IMA. Encryption, I think, works with all of them. Perhaps not right away, but there's nothing wrong with overwriting, say, the encrypted partition through the device mapper entry, and then you can rewrite the entire partition and it will stay encrypted — so I think that can be done with all four. And dm-verity only works with update mechanisms that update the entire partition, and they also have to update the hash data. That's an extension that, for example, Mender.io would have to support: it's not just one partition — at least in the way I set it up, it's two partitions — but that's feasible. So let me come to the case study. The status at the moment is that my modifications have not quite yet been merged into the IoT Reference OS Kit repository, simply because they depend on other patches that I submitted to OE-core, for example, and meta-security, and those are still pending — or were pending as of the end of last week. But I expect we will merge them, so that this will become a feature of the IoT Reference OS Kit. The general architecture — and that's a part where I still need to wear my hat, because it's something that other people came up with, colleagues who worked on the early boot phase in Ostro. We assume that we are booting on a device with a UEFI firmware. What the firmware loads is a single bootx64.efi application, which contains the systemd-boot EFI stub — formerly known as gummiboot. We only use that very small stub; we don't actually use many of its features. It's just so that this combined application looks like an EFI application to the firmware — the kernel then doesn't even need to support EFI. The other parts of that combo app, as we call it, are the kernel, an initramfs, and some boot parameters, like what our root file system is.
Or whether we boot read-only or read-write — traditional kernel boot parameters. That's all in one file, and we can use Secure Boot signing to sign that entire blob. The verification in a Secure Boot enabled device then goes from the firmware to that chunk of components, and inside those boot components we then activate the integrity protection of the root file system, which is the interesting part when it comes to the root FS. Something that was added recently is support for the runtime machine configuration feature — that's a meta-intel feature in the Yocto world — where we can have some kind of fingerprinting for different PCs, for different machines that otherwise behave identically but have, say, different serial consoles. Then, even as part of the UEFI combo app, we can identify what specific device we run on and modify the boot parameters based on that. So we can have a single image that works on different devices with different serial consoles or other differences in the way they need to be booted. The root FS is currently ext4, but we're not dependent on that — it's not like we are really using specific file system features; it could also be something else. And whether we use full disk encryption or dm-verity depends on the individual setup. Traditionally, it was neither of these; I just added dm-verity and encryption as part of my recent changes. So this is now the part where I can really take off my hat — it's actually getting pretty warm, and it's a bit scratchy, so I'm glad to get rid of it. As for Secure Boot: we've not actually implemented it, but we've done case studies where we've done it manually, and it works. So this approach is feasible; we know that it works. Another open issue is that we would like to have it as part of the build process, so that it's done automatically. But from an architecture perspective, it's fairly easy to add Secure Boot to this.
And the lack of a boot loader is actually a feature here, because it makes the whole boot chain shorter, which means it's easier to verify that it's doing everything correctly — fewer components involved in an architecture is always good from a security perspective. And there's the elephant in the room: currently, I think, the only boot loaders which can actually verify the kernel when loading it are GPLv3, which kind of conflicts with the requirement to support Secure Boot if you really want to lock down your device. But I'm not a lawyer — I'm just pointing out that there might be a problem when trying to do that. In our case, we skip that part, so we can do it, and we believe it's fine from a legal perspective. But we lack this piece of code where we could put, say, a fallback mode. So it has its drawbacks, too, from a reliability perspective: if the system update fails and something goes wrong with the kernel or the initramfs, there's nothing that we can fall back onto. We only have the UEFI firmware left, and that does not necessarily allow us to implement fancy things like boot selection or a fallback mode. That's an unsolved problem in this particular architecture. Okay. The traditional target machines for us were things like the MinnowBoard or the Joule device. Now, I wanted to do experiments with a TPM, and those devices do not necessarily have one — and it's also easier to do it in a virtual machine. So I started looking at ways to add a TPM to QEMU, and it turns out that one of those giants before me had done all the work; it's just not merged into QEMU yet, unfortunately, for reasons that might become obvious when I do my demo. It's not easy to use at the moment, and some further work is needed to make it more approachable and more integrated into QEMU. But the code is there — it worked after patching QEMU.
Some of my own work involved pulling all of this together, of course, but I also made it possible to boot the meta-intel machines with runqemu. If you're familiar with Yocto and OpenEmbedded, there's good integration for QEMU: you do your bitbake, you build your image, you run, say, runqemu with the name of your image, and a virtual machine comes up. That depends on some additional information — there needs to be a configuration file for QEMU — and that was missing for the meta-intel machines; now it's possible. The other problem for us was that we depend on UEFI. The default firmware in QEMU is just a traditional BIOS, so that didn't actually work for us. But again, there are firmware recipes for TianoCore that can build the OVMF firmware, and I've made it possible to integrate that with runqemu. So with the patches that are currently pending for OE-core, it's possible to say `runqemu ovmf`, and then it will use that alternative firmware, and images that depend on UEFI boot under QEMU as well. It's currently stuck because there's a compile problem — so if someone knows OVMF and TianoCore, help is welcome. But we'll solve it; I think it will get into OE-core. Then, from a real-world perspective, the vision that I had in mind is that there is some real device, locked down during manufacturing, that has Secure Boot keys enrolled, and the manufacturer or the OS provider knows how to sign the operating system so that this particular piece of hardware really just boots the signed OS. And then, how to get that OS onto the internal storage is a problem. You might be able to flash it immediately while manufacturing the device, so that it has an OS installed. But perhaps it's a bit more like having a blank device, and then you still need to copy the OS onto it.
For example with EVM: if we wanted to use EVM, we basically would have to boot into Linux as part of the installation process to create an EVM key that is machine-specific and then label the files with it. So in that case it would actually have been technically impossible to create an image that works on the device immediately. Same with full disk encryption: full disk encryption needs a per-machine key to be secure, and that means you need to boot up your device to create that key. And I think one way of doing it is to have something that I call an installer image, which is a normal Yocto-produced image. You could take, say, core-image-minimal and turn it into an installer image by adding this class that I've written. The class makes sure that you have additional image files on that installer image, and it adds the installer script. Then, when you boot that particular image, you can invoke the installer, and it will write to the target disk that's internal to the device. So that's the mode of operation that I had in mind when creating that setup. The advantage compared to what's done with the live image, for example, that exists in Yocto, is that you can have really separate images: a production image with completely different content than your installer image, and you're completely independent in what you put into which. Also, the installation can be done in a full Linux environment; I think the live image, live CD thing does the installation from the initramfs. So you have a bit more flexibility, basically. Currently that is in the ref kit, or my pull request for the ref kit, but I could also imagine, if there's interest, that we put this image installer bbclass into OE-core. Right now it's there; if someone finds it useful, we can discuss further whether that makes sense. The installation process then is fairly straightforward.
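To make the installer flow concrete, here is a minimal sketch of what such an installer script boils down to: write a raw production image onto the internal disk, then verify the copy. Everything here is hypothetical and simplified; to keep the sketch runnable without real hardware, a plain file stands in for the target disk, where a real installer would write to something like /dev/mmcblk0.

```shell
#!/bin/sh
# Sketch of an installer-image style flow (hypothetical paths):
# the installer image carries a production image file and writes it
# to the device's internal disk, then verifies the copy.
set -e

# Stand-ins so the sketch runs without real hardware.
workdir=$(mktemp -d)
IMAGE="$workdir/production-image.wic"   # image shipped on the installer media
TARGET="$workdir/internal-disk"         # stand-in for the internal disk

# Fake a production image (in reality this comes out of bitbake).
dd if=/dev/urandom of="$IMAGE" bs=1024 count=64 2>/dev/null

# "Install": copy the raw image onto the target disk.
dd if="$IMAGE" of="$TARGET" bs=1024 conv=fsync 2>/dev/null

# Verify the copy; this is where installer-side corruption would show up.
src=$(sha256sum "$IMAGE" | cut -d' ' -f1)
dst=$(sha256sum "$TARGET" | cut -d' ' -f1)
if [ "$src" = "$dst" ]; then
    echo "install OK"
else
    echo "install FAILED: checksum mismatch" >&2
    exit 1
fi
```

The real class additionally handles partitioning and the distro-specific copy logic described below; this only shows the write-and-verify core.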
You identify the target disk, you partition it, and you copy files onto it; currently we really use the full target disk in that step. So it's copying files, not partitions, although both are possible. It really depends on the distro-specific part, which is not in the base class but rather in the image recipe, where you define what your input is, how to handle it, and how to get it onto the disk. And I've done that so that it also does the full disk encryption setup using a TPM; we're coming to that. But that's the optional part. I've not done dm-verity on the target device, so currently dm-verity is not possible there; it's just the installer image itself which can use dm-verity. Even if you don't do it for security reasons, I found it interesting that it has the nice side effect that you easily detect if your USB stick was corrupted, which is unfortunately a common problem. If you support an OS, one frequent complaint is that people copy it to a USB stick, flash it to a device, and something doesn't work. And often it turns out that it was a simple bit flip, because USB sticks are just not as reliable as one would hope. And dm-verity will detect that when reading from the dm-verity-protected read-only partition. Okay. Another system component is the initramfs. It's based on an existing framework in OE-core, and I've just added some new modules. It's modular, so you can have additional components that get plugged into the boot process, or into the execution of the initramfs. There's one module for dm-verity that looks at boot parameters and identifies where the hash data is, and another for LUKS, though that could also be just a dm-crypt script. So for the full disk encryption setup, the problem that I tried to address was the key problem, and I don't think it's a full solution. What I've done is that I'm assuming that there is a TPM on the device.
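The bit-flip detection mentioned above comes for free once reads are hash-checked. Real dm-verity verifies a hash tree per block at read time through device-mapper (via veritysetup), which needs root and a kernel with the dm-verity target; this sketch only illustrates the principle with a plain checksum and a simulated one-byte corruption.

```shell
#!/bin/sh
# Sketch of why hash-protected read-only data catches USB bit flips.
# dm-verity does this per block at read time; here we just compare
# a recorded hash against the hash of a corrupted copy.
set -e

workdir=$(mktemp -d)
printf 'root filesystem contents' > "$workdir/partition"

# Record the expected hash, as dm-verity records its hash tree and root hash.
expected=$(sha256sum "$workdir/partition" | cut -d' ' -f1)

# Simulate a single corrupted byte on the "USB stick" copy.
cp "$workdir/partition" "$workdir/flipped"
printf '\001' | dd of="$workdir/flipped" bs=1 count=1 seek=3 conv=notrunc 2>/dev/null

actual=$(sha256sum "$workdir/flipped" | cut -d' ' -f1)
if [ "$expected" != "$actual" ]; then
    echo "corruption detected"
fi
```

With dm-verity the same mismatch surfaces as an I/O error on the affected block, so a corrupted installer stick fails loudly instead of producing a subtly broken installation.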
And when the device first gets installed, the installer takes ownership of that TPM without specific authorization, because there is no way to bootstrap that. So the authorization, in TPM terms, is "well known": anyone who gets their own software to run on that device will have access to the TPM, because there's no additional authorization; there's no way to get that authorization entered into the device. But it's still a bit more secure than just storing the key on the disk, because it is more complicated to get to it. And if you combine that with secure boot, you basically make it very hard for an attacker to just read the key. That's not a perfect solution, but I think it's more secure, and therefore it addresses some kinds of threats. That's always better than having nothing. Another interesting detail is that access to the secret key gets blocked after the initramfs has read it once. So if you then boot into your full OS and some malicious software gets to run via some kind of hack, that software will not be able to read the key from the TPM anymore, because it's locked; that's a feature the TPM supports. As for dm-verity: as I said, we need to store the hashes, which are small compared to the root partition, but still take some space, and we have the root hash. The specific setup that I've used is that there is a second partition, in addition to the root partition, where the hashes are stored, and there's some reserved space at the beginning where I'm putting basically a custom header with the root hash, and I'm signing that root hash using OpenSSL. Then, in an initramfs module, there's the public key, and that verifies that the hash it reads from that first block is valid. If it is, it activates dm-verity; if not, it must fail, which basically must prevent booting.
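The signed-root-hash scheme just described can be sketched with the OpenSSL command line: sign the dm-verity root hash with a private key at build time, ship only the public key in the initramfs, and verify before activating dm-verity. File names, the example hash value, and the key layout here are made up for illustration; the real header format in the ref kit differs.

```shell
#!/bin/sh
# Sketch of signing a dm-verity root hash at build time and
# verifying it at boot time with only the public key.
set -e
workdir=$(mktemp -d); cd "$workdir"

# Build-time: generate a key pair. In reality the private key stays
# with the OS vendor; only the public half goes into the initramfs.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem 2>/dev/null
openssl pkey -in private.pem -pubout -out public.pem

# Pretend this is the dm-verity root hash of the rootfs partition.
echo "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b" > roothash

# Build-time: sign the root hash; signature plus hash go into the
# custom header at the start of the hash partition.
openssl dgst -sha256 -sign private.pem -out roothash.sig roothash

# Boot-time (initramfs): verify before activating dm-verity.
if openssl dgst -sha256 -verify public.pem -signature roothash.sig roothash; then
    echo "root hash trusted, would activate dm-verity"
else
    echo "verification failed, refusing to boot" >&2
    exit 1
fi
```

The design point is that the root hash anchors the whole hash tree, so one signature over a single short value is enough to make the entire read-only partition tamper-evident.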
And then we boot up with a device mapper setup where the root is actually the read-only partition with the dm-verity protection. I do have a demo. All of what I just described is in a Git repository, a fork of the IoT reference OS kit; I have a link at the end of the presentation. We don't have time to go through it. Basically, it shows how to initialize the TPM, and starting the software TPM currently requires root access, because it is written so that it emulates a real TPM device, and it slightly modifies the existing code in QEMU, which works with the TPM as a pass-through. Basically, QEMU makes an existing physical TPM available to the virtual machine, but it expects that there is a /dev/tpm device. In this case, the only way the software TPM can provide that device is via root access and the character-device-in-userspace (CUSE) feature in the Linux kernel. But that makes the whole setup a bit complicated, and I think that's the reason why these patches have not been merged upstream into QEMU: it's too complicated. And I've mentioned some of the open issues already; you mentioned them, I didn't actually have to bring them up, you immediately jumped on the key points. Well, a stateless rootfs, that's one thing where I see possibilities to improve the OE-core infrastructure: in some cases perhaps carrying some of these distro patches, in other cases perhaps just reconfiguring components. Some other things are a bit more specific to the ref kit, or the way we are booting. There's currently no A/B partition setup and, in general, no recovery mode. So with swupd, we are basically without any of the recovery features; if we decide to stay with swupd, we need to do a lot of work on that. Same with OSTree: the way it's currently done might be a bit more secure than swupd, but you still don't have many recovery features.
UEFI signing works if done manually, but it's not integrated, so that would be a nice feature. We do have an extra layer, perhaps, where we can put that. And finally, one small detail: this RMC, the machine-specific configuration database that I mentioned, is currently read from the VFAT EFI system partition, and there's no protection on that. So we do have a gap here if we wanted to enable secure boot, but this one is fixable. So, I think you were a very good audience; you already asked questions while we were going. I still want to give you a chance to ask some more questions before we need to close, and I'm getting a sign that we are at the end of the talk. But if you're interested in the demo, find me at the Yocto booth; I have a laptop there with the whole demo running. Or if you want to try it out yourself, clone that repository and follow the instructions in the image installer HOWTO file under doc; it will show you what commands you need to invoke, how to compile the software TPM, and how to do the whole demo that I had planned, on your own machine, because it doesn't need anything besides a normal BitBake environment. It will give you the whole virtual machine, which is very neat for development. And thank you.