So, hello everybody. I have good news and bad news. The bad news is that I am what's standing between you and lunch. The good news is I only have 27 slides, so bear with me. Before we start, a few words about who I am. My name is Gilad Ben-Yossef, and I work for Arm on cryptography and security in general. Specifically, I'm the maintainer of the Arm TrustZone CryptoCell Linux device driver, and I've been working on and around the Linux kernel for quite some time. Some of you may also know me as the co-author of Building Embedded Linux Systems, which really is just to say that I've been working in the Linux ecosystem for quite a while.

So, this talk is actually based on a true story, something that actually happened to me and triggered me to make this talk. I went on a vacation in Slovenia which, by the way, I'm quite confident is a conspiracy by SD card makers: the whole country is just so beautiful that you have to keep taking more pictures. But unfortunately, vacations end, so I went back home, opened the door to my house after being away, and heard a siren, which is not something you expect to hear in your own home. It turned out the siren was coming from a CCTV device, an in-home camera that was supposed to record what's going on in my home. After some looking into things, it turned out that my specific version of the CCTV had been broken into: somebody had taken control of it. Luckily for me, they chose to use it for a distributed denial-of-service attack against who knows where, rather than trying to take pictures of me going down to eat something in my undies late at night, which is a bothersome thought, not just for me but also for the rest of the world, if you have ever seen me in my undies. But anyway, I digress. This really made me very sad, because I work on security infrastructure in Linux, and I should have mentioned: the specific CCTV box I was using was actually running Linux.
And I was sad because I know that we, as a community, as an ecosystem, can do better. We actually have the tools to do better. They might not be perfect, but we're working on them. The purpose of my talk is to discuss, or explain, how we can do better when we're building these embedded systems, these smart devices.

So the problem definition, basically, is that we use smart devices for everything. "Smart device", for me, is just my umbrella term for anything from a CCTV camera, a cell phone, a DVR, or a set-top box, to our cars. It's a smart device, it's an embedded device, and more often than not it's running Linux. We use these things for everything, and that means we need to trust them. That's not so easy, and it's, sadly or happily, on us to deliver.

So what does it mean to trust? What does it mean to trust a smart device? What do we want? As you see here, what we want is secure devices, and we want that just after we've finished downloading some completely random app from somewhere on the internet, which is maybe a funny way to say that we want to keep our secrets (me in my undies). We want our devices to serve us, not some malicious stranger. I want the CCTV device to take pictures of my cat mangling my kitchen, not to DDoS some random server on the internet. But at the same time, we want the device to be connected. We want a frictionless user experience: we don't want it to ask for a password or whatever. And we want an open-ended device, something we can install software on, a generic device in some way, the app ecosystem and all that.

And of course, these things conflict. Therein lies our problem: we want a secure device in order to do things with it that are inherently dangerous. That means we're going to fail. And that means that in order to get secure devices, devices we can trust, it's not enough to have a first line of defense. We need to assume it's going to fail in some way or form.
And we need to think about what it means to fail safely, what a trusted way of failing is. So, again, what do we want? We want that if somebody gets hold of our device, even temporarily, they don't get access to all the secrets on the device, whether those secrets are pictures we don't want anyone to see, or encryption keys, or what have you. If someone broke into one specific device, we don't want that to translate into the ability to gain access to additional resources. If some unauthorized change occurred, we want to know about it: if somebody changed our kernel under the hood without our knowledge, or any other software, it would be good to be alerted to that. And if somebody broke in, if by mistake we ran some malware, it would be really good if we could just reboot and forget.

So what I'm talking about is the threat model of dealing with a malicious entity that got some sort of temporary hold of our device. It could be some evil government, a tyranny getting hold of our device during customs inspection. Or it could be something like malware that has been running on the device because of some security flaw. What can we do then? That is what I'm going to talk about.

So how do you build such a system? It turns out we actually have the components; we have tons of components, more components than we necessarily need, so there's more than one way to do it. The purpose of this talk is to go over these components and try to explain how to fit them together, because I think that more often than not the problem isn't how to use a specific component but figuring out how it fits into the big picture. And hopefully that will be easier than getting hot water from the hot water faucet at a certain ANA lounge I was at. You can see the instructions for that there. So, let's start. And when we start, we want to start at the beginning: let's start when the system boots.
So there's this concept of secure boot, or verified boot; the names differ, and there are actually several versions of this, several standards. I'm describing one; I'm roughly following the Android-style secure boot, or verified boot, but the general concept is the same. We have a device, it powers on, there's some boot loader, probably some firmware before that, which loads the kernel, and the kernel runs. What we want to achieve is a chain of trust: to have some sort of root of trust in hardware or firmware, something we can rely on, and use that to create a chain whereby each component verifies the next one.

So, for example, we can have firmware running really early in the boot, something that is not easily changed (it's in ROM or EEPROM, something like that), that uses some secret inside the hardware to verify the boot loader. It reads the image of the boot loader, takes a hash, checks a signature, and only if it matches will it let it run. And the boot loader, of course, can then do the same for the kernel image. This is what I mean by a chain of trust. We start with something we have a high level of trust in, some secret in the hardware; it could be a ROM with a signature or a ROM with a public key, and we'll see as we go along how that works. And we use that to verify the next stage, and the next stage, which we now trust, can verify the next stage, and so on.

So, we verify the kernel and load it, and now the question is: OK, we have a kernel, we took hashes and checked signatures so we know it's fine; how do we go on from there? The Android solution for that, or the start of the solution, is something called Android Verified Boot. On the Linux kernel side, we relate to it as dm-verity: a Linux device mapper target named "verity" that provides transparent integrity checking of read-only block devices.
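The verify step that each link in the chain performs can be sketched with openssl. This is a minimal illustration, assuming the earlier stage holds the public key; the file and key names are made up, and real boot ROMs use their own image and signature formats:

```shell
# one link of the chain of trust: the build side signs an image with a private
# key, and the boot side verifies it with the public key it already trusts
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out vendor.key 2>/dev/null
openssl pkey -in vendor.key -pubout -out vendor.pub   # what the ROM would hold

echo "pretend kernel image" > zImage
openssl dgst -sha256 -sign vendor.key -out zImage.sig zImage   # done at build time

# done at boot time: only run the image if this prints "Verified OK"
openssl dgst -sha256 -verify vendor.pub -signature zImage.sig zImage
```

If anyone tampers with zImage after signing, the verify step fails and the chain stops right there.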
Okay, so I have some executable file in a file system, I load it, and I want to know that it was not changed by some malicious entity that may have had access to my device. dm-verity helps prevent persistent rootkits that hold onto root privileges. That means that even if somebody managed to get access to my device, run something, even break into the kernel and what have you, if I can get to a situation where, when they change the persistent storage, I know about it and can stop, then I can get rid of that malware by simply rebooting. This is what dm-verity does. And if you took a device, specifically an Android device, and changed the root file system that holds the applications without, you know, regenerating the signature that dm-verity works with, you'd just get the message: we cannot boot your device, somebody tampered with it, stop.

You might ask yourself: why is this different from what I mentioned with the kernel and the bootloader? Why can't we just say: OK, there's a blob of the file system, take a hash and a signature, and I know the previous stage is trusted because of the chain of trust. What's different? What's different is size and speed. The kernel is maybe bigger than we'd want, but it's not that big. A file system can be much bigger, and we don't necessarily access every part of it at the same time, certainly not at boot. It would really be a shame if we had to wait a long time, or should I say a longer time than we already do, when our device starts, just because we want to hash the entire file system, including files we may never use. So dm-verity does something smart. It uses a construct called a Merkle tree; not this miracle tree, this has nothing to do with it.
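As a toy preview of the construction about to be described, here is a sketch with stock shell tools. The file names are made up, and the tree is collapsed to a single level; real dm-verity builds a full multi-level tree over the block device:

```shell
# toy one-level "Merkle root": hash every 4 KiB block of an image, then hash
# the list of hashes to get a single root value that depends on every bit
dd if=/dev/urandom of=data.img bs=4096 count=8 2>/dev/null
split -b 4096 data.img blk_
for b in blk_*; do sha256sum "$b" | cut -d' ' -f1; done > level0.txt
sha256sum level0.txt | cut -d' ' -f1 > root.txt
echo "root hash: $(cat root.txt)"

# flip one byte anywhere in the image and the root no longer matches
printf '\001' | dd of=data.img bs=1 seek=100 conv=notrunc 2>/dev/null
```

If you recompute the block hashes after that one-byte change, the root comes out different, which is exactly how a trusted root hash lets the kernel detect tampering in any block it later reads.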
A Merkle tree is a construct that says something like this: let's take our storage device, all the blocks in the storage device, and for each block we'll compute a checksum, a hash. We keep the hash of each block, gather blocks of these hashes together and hash those, and do it again and again until we have a single root hash, something that basically incorporates some data from each and every bit in the underlying blocks. The cool thing about that is that if we can get a trusted root hash, if we, for example, sign that root hash with a key whose public part is in the firmware, which cannot be changed, then I know the root hash is valid. I can then read this Merkle tree of other hashes from storage, and I can trust them, because each one of them hashes up to a block of hashes, which hashes up further, until in the end everything hashes up to the root hash, which I know is trusted. So if they match, the hashes are OK; and if the hashes are OK, that means that when I read a block from storage I can compute its hash, and if it matches the hash stored beside it (on some other block device, or on the same one, sorry), I know it has not been changed.

So this is a way to get a system where, even if somebody got hold of your device and managed to run malware on it, if it changes the root file system, then unless it can break the chain of trust and change the stored root hash (hopefully signed with a key anchored in the firmware or the hardware), you know about it. You can stop booting; you can tell the user something is wrong. Okay, so this is the invariant. It's actually very simple to use. You can see here a very simplified example of the commands used to set something like this up, and this example uses loopback devices; of course, it also works with real block devices. So: veritysetup format. Here's my file system.
Here's another block device (or it can be the same block device with an offset, no problem) where I want to store the Merkle tree. veritysetup does its thing and reports back the root hash, that long string of digits there. And that I keep in a safe place: I either burn it into the hardware, or I sign it with a key that is anchored in the hardware, or something like that, so I get a situation where I can trust it. Then, when I create the dm-verity mapping, I can mount the file system, and if something changes, the kernel will catch it, say the hash does not match, refuse to use the block, and I'll get an I/O error. It actually also has support for forward error correction and so on, so one random bit flip will not cause catastrophic results.

Okay, so we have one block of our system, and this model is good for securing a file system which is inherently read-only in a smart device: the one that holds the applications, the binaries. It does not change often; it changes when we do some sort of update, and when I do the update I can also update its signature, and it's fine. So this gives me an integrity check, and it's good for a read-only device. What about the other stuff? What about the pictures of my cats? What about the stuff that changes, the user data?

That actually has more than one solution. The simplest, and I think a very effective one, is called, in Android parlance, full disk encryption. It relies on an underlying kernel mechanism called dm-crypt, and you can see here what it does; you can sort of guess from the name. Basically, dm-crypt provides transparent whole-disk (or partition, or block device) encryption as part of a device mapper mapping. That means the file system does not know anything about the encryption, and the underlying block device need not know anything about it either; we put a layer in the middle that worries about the encryption.
It actually supports a whole bunch of advanced modes of operation (XTS, ESSIV, and so on and so forth), for those familiar with these names of cryptographic algorithms and modes of operation relevant to storage encryption. It is used by Android to implement what it calls full disk encryption. And optionally, if you use the right cryptographic mode (AEAD), it can, in conjunction with another device mapper module called dm-integrity, provide not just encryption but also integrity. So not only will somebody without the key be unable to read a file; you will also know if the file was changed by somebody who doesn't have the key. So if dm-verity gives us a good answer for, let's say, binaries, this one gives a good answer for application data.

And again, a very simplified example of how one uses this thing. There's a utility called cryptsetup. It supports a whole bunch of formats: legacy, other operating systems, what have you; the simplest one is just luksFormat. And again, I'm using a loopback device in this example, but of course any block device will do as well. So I'm just saying: look, I want to use dm-crypt with this file system image. I open the device, I make the connection, and it will actually stop and ask me at this point for the password to encrypt this stuff with. And then I have a block device which is like any other block device, read-write, and I can format it normally, mount it normally, unmount it normally, but I cannot access the data unless I do the second step and provide the right password. It's cryptographically secure.

Okay, interesting. So we have an easy way to check the integrity of read-only application binaries, and we have a way to do the same, plus confidentiality, for read-write data, and we actually have enough to start talking about how the whole system looks. Of course, I'm going to complicate this as we go along, but the basics are all here.
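A hedged sketch of that flow: the cryptsetup commands in the comments are roughly what the slide shows (they need the cryptsetup package, and the mapping step needs root), so as a runnable stand-in the openssl lines below demonstrate the same point, that without the passphrase the bytes on "disk" are just noise:

```shell
# dm-crypt/LUKS flow as described in the talk (illustrative; the map needs root):
#   cryptsetup luksFormat fs.img         # writes a LUKS header, asks for a passphrase
#   cryptsetup luksOpen   fs.img secret  # exposes /dev/mapper/secret to mkfs/mount

# the same idea demonstrated with openssl: encrypt, then decrypt with the right key
echo "pictures of my cat" > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in plain.txt -out cipher.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in cipher.bin -out roundtrip.txt
cmp plain.txt roundtrip.txt && echo "round trip OK"

# the wrong passphrase does not yield the data
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:wrong -in cipher.bin -out bad.txt 2>/dev/null \
  || echo "decryption with wrong key failed, as expected"
```

The point of the layering is that whatever sits above (a file system, an application) sees plain blocks, while whatever sits below sees only ciphertext.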
So we have some sort of boot ROM: something you cannot easily change, a real ROM or EEPROM, something you have some manner of trusting not to change. When it loads, that boot ROM reads a public key that it uses when it reads the next stage of booting. That could be the boot loader directly, or it could be some firmware (we'll discuss why firmware later). And that next stage comes not just as a blob of whatever it is we want to run, but with some sort of cryptographic signature, made with the corresponding private key, so it can be verified with the public key that it was not changed, and so I know the firmware is OK. The firmware does the same for the boot loader, the boot loader does the same for the kernel, and that is how this goes. And then the kernel, which I now know can be trusted, uses dm-crypt to protect the read-write kind of data, the user data, and dm-verity for the read-only application file system data. Each stage verifies the next.

The public key may not actually be in the hardware: one possible implementation is to put a cryptographic hash of the public key in the EEPROM, or what have you, and put the actual public key in the image itself. I know it's the right one, because otherwise the hash wouldn't match, and so on. And there can be more than one public key: there might be a key that the chip maker holds, used to sign the key of the SoC provider, which signs the key of the OEM making the device. So there are a lot of ways to play with this, but this is the basic setup. And of course (I'm not going to get into it here) you don't have to have only one of each of these: if you want to do an A/B kind of safe upgrade, you can have two images of any of these, including, of course, the kernel, and so on.

So we see that with only two components we get a system that has this property, or more of this property, of being able to fail in a more trusted way. Even if somebody takes hold of our device, if this is in effect and the device is not booted, for example, they will not be able to
read our files, and they will not be able to change the binaries. Later, even if they take hold of the device while it's already running, they might be able to read our files, because the device key is in effect, but they will not be able to change the application data for the next boot. So all I have to do to get rid of them is reboot the device. In theory, there are of course attacks on this method as well.

So let's continue. What can we do next? What's still bugging us? Well, encrypting a whole disk with one key, or a whole block device with one key, is good, certainly better than nothing at all, but it's not fine-grained enough for certain situations. For example, if you have multiple users, and we now have multiple users even of our phones, then if I hand over the phone to somebody, I also hand over the ability to see pictures of me in my undies. And there are other interesting problems. For example, if we encrypt all the data on the device with the same key, we must provide that key as the system boots; there's no alternative, because as the system starts you have to punch it in. This can be a bit of a problem. For example, and this really happened, an Android system that relied on what I described before had this silly problem: if you set up your alarm clock to wake you up at a certain hour, and then for whatever reason the system rebooted, maybe there was an update and so on, the system got to the prompt that asks the user to enter his password or press the fingerprint reader, and stopped there, because the user was sleeping. So the alarm did not go off, because the whole disk is encrypted and the software that needs to ring the alarm is on the encrypted disk, but the user is sleeping. That may sound silly, but it's something that actually happens and bothers users, and it makes some vendors not enable this feature, or some users turn it off, and then we're back to a non-secure system. So, to resolve this problem of multiple users and the scenario I described,
one of the things we can use, and it is actually also used by Android today in the latest revisions, is file-system-based encryption: that is, take the encryption out of the layer below the file system and put it inside the file system. This gives us the ability to have more fine-grained control. In fact, fscrypt, file system encryption, allows us to encrypt each directory separately: we choose a directory and assign some key to it, and we can have another directory with a different key. Of course, support for this needs to go into each and every file system; there is some core library common to all, but each file system needs to enable it, and it's currently enabled in ext4, UBIFS, and F2FS. As I said, it allows a more fine-grained approach: for example, giving different users of the system different encryption keys for their directories, or giving different stages of the system different encryption keys. So, for example, the code that is needed for the alarm clock may be encrypted with a key which is just part of the root partition or something like that; it may be less secure, but at least the user will wake up, which is useful.

The idea behind this is that it is designed to protect against occasional temporary offline compromise of the block device contents, where loss of confidentiality of some file metadata, including file sizes and permissions, is tolerable; that's more or less a quote from the spec. What that means is that this is designed for the scenario where you lose control of your device, your phone, for a little while: it was at customs when the evil government agent took hold of it, or you left it in the hotel and some criminal got in and did something, but it's a temporary thing. The idea is that even that temporary loss of control of your device shouldn't turn into the ability to later read everything else that was encrypted before (assuming the device was not booted and the password not entered) or in the future. It does
protect the file names currently, but it does not protect other metadata, for example file sizes, and there are some interesting things you might be able to do with file sizes. So it's not perfect, but again, it's an interesting step.

So again, an example of how this works; it's a little bit more involved, and sorry for the small fonts. I have to generate a random key, which is what this stage is, and then I have to compute a descriptor for this key, basically a short identifier for the key, because it's simply not easy to type in the 512 bits of the key every time, so it's easier to keep the key in a file and have some descriptor. Then I insert it into the kernel. The kernel has a subsystem called the keyring subsystem, which is designed to do exactly that (keep secret key material for uses like this), and fscrypt uses it. So I'm letting the kernel know that I have a new key, asking it to please keep it safely inside, and giving it the key descriptor so we can talk the same language. There's a whole bunch of details about working with the keyrings of the kernel key subsystem; there are different keyrings that I can use, and I'm not going to go into all these details now, but you can see here the output of the command keyctl show, which just shows me which keyrings are active, and I can see that I have a new logon key here, which is used (or can be used) by the ext4 file system.

So I've taught the kernel about the key, but I haven't done anything with the file system yet. Let's make some test directory on a file system that supports encryption, and then I can set the policy that says: this directory is now attached, or associated, with the key whose descriptor is this. Henceforth this directory is basically encrypted with the key I provided, where the contents use this encryption mode, the file names are protected with that encryption mode, and so on and so forth. And that means that we can now write data to this thing and see all the content, so long as the key is in the kernel. And if I log out, or if I time out
from my device, or whatever, you can remove the key, which is what we're doing here, at which point you will not be able to read any of the data. In fact, if you go to this directory, you will see that instead of the file names there's random junk. It's sort of a compromise: you want to let the user know that there are files there, but you don't want them to learn the true file names. And of course, if I try to read such a file, I'll get an error that I don't have the key; and if I try to put in the wrong key, at least the current versions will just output random junk. I think in newer versions they're working on changing that, to actually say no, that's not the right key.

Okay, so this is interesting, but we can do a little bit better if we ask ourselves the question: how could this fail? The problem here is this key thing. At some point I need to take the key from somewhere and put it in the kernel, and now I need to trust the kernel to keep a good hold of it. But my assumption is that this will fail at some point, because this is the game we're playing. Can we do something a little bit better? Can we maybe find a better way to keep the key safe?

There are actually several solutions for that. One of them is a trusted execution environment. The idea of a trusted execution environment is that it is some sort of hardware feature, and such features exist in many processors: on Arm it's called TrustZone, there's something very similar from Intel called SGX, and AMD has SEV. They're not exactly the same, but the general idea is the same, and here I'm showing TrustZone, which is what I work on, but again, the general concept is the same. I have my normal rich OS (for us it's just Linux as we know it), and it's running as usual. But the processor
we're running on has an extra mode, sort of like virtualization if you will, and in that extra mode it has access to memory, and maybe other resources of the hardware, that the hardware enforces cannot be accessed when running in the normal mode. The idea is that I can put there something very minimal; in theory at least, the attack surface is smaller, because it's not a full operating system, it's really a library, if you will, although it's called a trusted OS, and keep my secrets there. So, for example, I can use a call interface into this safe area to ask my trusted operating system (really a library, but don't tell them I told you so) to keep the key for me, and, for example, never deliver it to the user, just deliver it maybe to the kernel. Or (and this is not upstream, but it's possible) never let the kernel have access to the key at all, but let the trusted OS deliver the key directly to a hardware crypto engine that can use it to open my files, without the normal operating system ever having control of it.

Now, this is useful, and it's used in probably all of your phones and many other devices. It's not a silver bullet, because a trusted operating system does not mean an operating system without bugs; it has them, and sometimes these bugs mean that somebody can take over the trusted operating system, and so on and so forth. But it is another layer that we can use to defend ourselves. And you can see here, it's a little bit complicated, so try to follow me, how we can use this to do something safer with the keys I described for fscrypt. We have some master key blob sitting in storage, and I use some hardware key, which can be something actually in the hardware but can also just be a key in memory or on some flash device that is only accessible to the TEE, to encrypt the key we discussed before; at setup time we just encrypt that, and
what happens is that when the system boots, for example, in order to get hold of that key, the user needs to provide something. It can be a password, or it could be a pattern, or it could be a fingerprint. But here is the cool thing: the thing that is reading this password or fingerprint and so on is hardware connected not to the normal operating system but to the TEE, so it cannot be snooped or stolen by the normal OS. That master key is then used to derive two keys; in the case of Android, a device key and a user key. The device key is what we use to encrypt stuff like our alarm clock. It's tied to just something in the hardware, and that means that when the device, for example a phone, wakes up, it already has access to that key in a safe enough manner, so it can decrypt the files needed for the phone. But when the user wants access to his or her secret files, you need something more: a password, or a fingerprint, or a U2F device. And then fscrypt does its thing and decrypts the files.

Okay, I talked about one option for doing this. There's something similar, or related, which does not necessarily have to go together with a trusted execution environment, but it's sort of the same idea: a hardware module. Basically, instead of taking some software and using a hardware facility to run that software in a more contained and ideally isolated way (where it still might have bugs), I can put some hardware module off to the side of the CPU, which is hopefully safe, and let it keep the keys involved. One such example (it's really a standard, not a specific piece of hardware) is called the Trusted Platform Module, which is basically a hardware module that keeps your secrets. It has some persistent memory that allows it to keep encryption keys, and it has the ability to actually execute cryptographic functions on the device itself, which means, for example, that I can check the signature of something, or sign something, without the software having any access to the key. And
again, this isn't a silver bullet either, because, just as we had bugs in TEE software running in TrustZone or whatever, there was one very recent bug in one of the most widely used chips used as TPM modules. So we can also have errors there, but again, the idea is to add one more layer of security. There's one more thing you can do with this which is cool: a TPM has the ability to do attestation and binding. These are terms that mean I can feed it some sort of hash of the state of the hardware and software, and it knows some set state that the system needs to be in; it can use that to satisfy itself, and remote users, that the system is in the right state, that it has not been compromised, and either attest to that to somebody remote, or, only if this is the case, allow me to get access to specific keys in the TPM, or other keys which are (excuse me, not signed) encrypted with a TPM key.

And here you can see an example of a more advanced system setup. Basically, what we've added is that the firmware now also boots the trusted OS, in parallel to the kernel (they're not really running at the same time, okay), and it sets up the TPM, and the kernel can get services from the trusted OS, such as getting the keys for fscrypt for a specific directory when a specific user signs in with his fingerprint or otherwise.

Okay, one more topic, very advanced, maybe not really useful for everyone today, but I think very interesting: the kernel integrity subsystem, the Integrity Measurement Architecture (IMA). This is software in Linux that, working with the TPM, can measure the state of the system. Remember I told you the TPM can take these hashes of the state of the system? Somebody needs to feed it this state, and that is IMA. It can attest to the system's integrity locally: it can take the hash it has of the state, or of a specific resource, and say, OK, this is what I expect; and it can also attest to that remotely. By the way, if you're wondering what that
word means: attestation simply means giving evidence of your reliability, just like this picture here, which I took in Thailand, of a sign that says "genuine fake", attesting to what the shop is actually selling. And of course, the system can also allow certain actions only if we're in a certain state. How is that done? Well, we take the hashes, just like we saw before, and feed them to the TPM. When IMA runs, it basically feeds all the hashes of the state of the system to the TPM: every time I run a file (or, by default, every time a file is opened) the hash of that file is fed to the TPM, so I always have a measurement of the state of the system. And if that state deviates from the state we expect, the TPM, for example, can refuse to let me access certain keys, or will refuse to sign a message telling someone remote that the system is okay. The IMA module can actually be extended with another module called EVM, to allow certain Linux operations to either fail or succeed based on that state. I'm about to run an executable, but if IMA with EVM decides no (the hash of that file is wrong; I have a record of that hash and it's not what it needs to be), then it will fail the operation. As you see, I've run out of time, so I'm not going to tell you how to use this (plus, it's rather complicated), but it's an interesting advanced topic, and I've put here links to where you can find a really good how-to. I hope that by now you're feeling much safer and calmer, like this cat that feels so safe it closes its eyes. I would like to thank you; I hope you took something from this, or at least enjoyed it; and please enjoy lunch. Thank you.
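As a small addendum to the IMA and TPM discussion above: the measurement idea, feeding file hashes into a TPM register, can be sketched in shell. A PCR "extend" replaces the register with the hash of its old value concatenated with the new measurement. This sketch is simplified (it hashes the hex text rather than raw bytes, as a real TPM does, and the file names are made up):

```shell
# simulate a TPM PCR extend: PCR_new = SHA256(PCR_old || measurement)
# (simplified: a real TPM hashes raw bytes; here we hash the hex text)
echo "some binary"  > app1
echo "other binary" > app2
pcr=$(printf '%064d' 0)                  # the PCR starts out zeroed
for f in app1 app2; do
  m=$(sha256sum "$f" | cut -d' ' -f1)    # IMA measures the file
  pcr=$(printf '%s%s' "$pcr" "$m" | sha256sum | cut -d' ' -f1)
done
echo "final PCR value: $pcr"
# both content and order matter: changing any file, or the order in which the
# files are measured, yields a different final value
```

Because the final value depends on every measurement and their order, the TPM can release a key, or sign a remote attestation, only when the register matches the expected known-good state.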