So thanks for coming. We're here to talk about full disk encryption, and why you're not really as secure as you might think you are. Oh, what just happened? I'm missing a slide. Okay, I'll just say what it was then. So how many of you encrypt the hard drives in your computer, just like a show of hands? Oh, wow. Yeah, welcome to DEF CON. So I guess it's like, what, 90% of you at least. So how many of you use open source full disk encryption software, something that you could potentially audit? Okay, not as many of you. And how many of you always fully shut down your computer whenever you're leaving it unattended? Okay, more of you, I'd say about 20%. How many of you have ever left your computer unattended for more than a few hours, either on or off? A lot of hands should be up. I mean, I'd be surprised if they're not, because otherwise you're like zombies that don't sleep or something. Okay, and then the other answer, of course, is anyone who leaves their computer unattended for more than a few minutes. Also pretty much everyone. So why do we encrypt our computers? It's surprisingly hard to find anyone actually talking about this, which is really weird. And I think it's really important to articulate our motivations, why we are doing something, a particular security practice. If we don't do that, we don't have a sensible goal post to see how we're doing. There's plenty of detail in the documentation of full disk encryption software about what they do, what algorithms they use, how they hash passwords and so forth, but almost nobody's talking about why. And I argue that we encrypt our computers because we want some control of our data, some assurances about the confidentiality and integrity of our data, that nobody's stealing our data or modifying our data without us knowing about it. Basically, we want self-determination over our data. We want to be able to control what happens to it.
And there are also situations where you have liabilities for not maintaining the secrecy of your data. Lawyers have attorney-client privilege. Doctors have patient confidentiality. People in finance and accounting have all sorts of regulatory rules that they need to comply with. So if you're leaking data, there are companies which have to notify their customers that, oh, someone left an unencrypted laptop in a van and it got broken into and stolen, so your data might be out there on the Internet. But it's really all about physical access to our computers, that's what we want to protect against, because really, full disk encryption doesn't do anything if someone just owns your machine. But it also gets to a greater point: if we want to build secure networks, if we want to have a secure Internet, we can't do that unless we have endpoints that are secure. You can't build a secure network without the foundation of secure endpoints. Now, by and large, we've figured out the disk encryption theory aspects of this stuff. We know how to generate random numbers reasonably securely on a computer. We know the block cipher modes of operation that we should use for full disk encryption to get these nice security properties. We know how to derive keys from passwords securely. So mission accomplished, right? We can all stand on an aircraft carrier and, you know... the answer is no, that's not the whole story. There's still a hell of a lot of cleanup that you need to do. Even if you have absolutely perfect cryptography, even if you know it can't be broken in any way, you still have to implement it on a real computer, where you don't have the nice black box academic properties of your system. And so you don't attack the crypto when you're trying to break someone's full disk encryption.
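As a concrete illustration of that last point, the password-to-key step is usually a standard key derivation function. Here's a minimal sketch using PBKDF2 from Python's standard library; the iteration count and output size here are illustrative choices, and real tools (LUKS, VeraCrypt, etc.) have their own KDFs and defaults:

```python
import hashlib
import secrets

def derive_disk_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a password into a 256-bit disk key. The salt is stored in the
    (unencrypted) volume header; the iteration count slows down brute force."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = secrets.token_bytes(16)   # generated once, stored alongside the volume
key = derive_disk_key("correct horse battery staple", salt)
print(len(key))                  # 32 bytes = 256 bits
```

The point is that the password itself is never the key; a slow, salted derivation stands between a stolen volume header and a dictionary attack.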
You either attack the computer and trick the user somehow, or you attack the user and convince them to give you the password, or get it from them by some other means, like a keylogger or whatever. And de facto use doesn't really match up with the security models of the full disk encryption software. If you look at full disk encryption software, they're very much focused on the disk-theoretic aspects of full disk encryption. And here's a quote from the TrueCrypt web page, their actual documentation: they do not secure data on your computer if someone has ever manipulated it, or is manipulating it while it's running. Like, I wish I was making this up. Basically, their entire security model is, oh, if it encrypts the disk correctly, and it decrypts the disk correctly, we've done our job. Woot. And I apologize for the text that you probably can't read very well, so I'll read a little bit of it here. This is an exchange between the TrueCrypt developers and a security researcher by the name of Joanna Rutkowska, where she brought up this attack and tried to talk to them and see what their reaction was on feasibility. And this is what they said: we never consider the feasibility of hardware attacks; we have to assume the worst. And she asks, do you carry your laptop with you all the time? They say, how the user ensures physical security is not our problem. And she asks, very correctly, why in the world do I need encryption then? Ignoring the feasibility of an attack is just specious. You can't do that. We live in the real world, where we have these systems that we have to deal with, we have to implement them, we have to use them. And there's no way that you can compare a 10-minute attack that you can conduct with just software, like a flash drive, to something where you need to pull apart the hardware and manipulate the system that way.
And regardless of what they say, physical security and resistance to physical attack is in the scope of full disk encryption. It doesn't matter what you disclaim in your security model. At the very least, if they don't want to claim responsibility for that, they need to be very clear and unequivocal about how easily this stuff can be broken. So this is a diagram, a sort of abstract system diagram of what is mostly in a modern computer, and what the boot process is, just so everyone's on the same page about what actually happens here. As we know, the boot loader gets loaded from the secondary storage on the computer by the BIOS, and it gets copied into main memory. The boot loader then asks the user for some sort of authentication credential, like a password, a key, a smart card, something like that. That password is then transformed by some process into a key, which is then stored in memory for the duration of the computer being active. And then the boot loader of course transfers control over to the operating system, and both the operating system and the key remain in memory for the transparent encryption and decryption of the disk. This is a very idealized view. It assumes that nobody is trying to screw with this process in any way. And I think we can all think of a few different ways this can be broken. So let's enumerate a few of the things that might go wrong if someone is trying to attack you. I break attacks into three fundamental tiers. Non-invasive attacks are something you might be able to execute with just a flash drive, where you don't even need to take the system apart, or with some other hardware component that you can attach to it, like a PCI card, an ExpressCard, or Thunderbolt, the new adapter that gives you basically naked access to the PCI bus.
Secondly, we'll consider attacks where a screwdriver might be required, where you might need to remove some system component temporarily to deal with it in your own little environment. And also soldering iron attacks, which are the most complicated, where you're physically adding or modifying system components, like chips on the system, in order to try to break these things. So one of the first types of attack is a compromised boot loader. This is also sometimes known as an evil maid attack. Since you need to start executing some unencrypted code as part of the system boot process, something which you can bootstrap yourself with, prompt the user for credentials, and then get access to the rest of the data that's encrypted on the hard drive, the boot loader itself is a target. There are a few different ways you could do this. You could physically alter the boot loader on the storage device. You could compromise the BIOS: you could load a malicious BIOS that hooks the keyboard handler or hooks the disk reading routines, and modify things that way, in a way that's resistant to removing the hard drive. In any case, you can modify the system so that when the user enters their password, it gets written to disk unencrypted, or something like that, so that in some way the attacker can get it. You can do something similar at the operating system level. This is especially true if you're not using full disk encryption: if you're using container encryption, there's a whole operating system that someone could manipulate. This could also happen from an attack on the running system, like an exploit. Someone gets root on your box, and now they can read the key out of main memory. That's a perfectly legitimate attack. And then that key could either be stored on the hard drive in plain text for later acquisition by the attacker, or sent over the network to their command and control systems.
Another possibility, of course, is capturing the user input via keylogger, be it software or hardware, or something exotic like a pinhole camera, or maybe a microphone that records the sounds of them typing and tries to figure out what keys they pressed. This is kind of a hard attack to stop, because it potentially includes components that are outside of the system. I also want to talk about data remanence attacks, more colloquially known as cold boot attacks. If you had asked, five years ago, even people who are very security savvy what the security properties of main memory are, they would tell you: when it powers down, you lose the data very, very quickly. And then an excellent paper from Princeton in 2008 discovered that actually, at room temperature, you're looking at several seconds with very, very little data degradation in RAM. And if you cool it down to cryogenic temperatures, say by using an inverted can of compressed air duster, you can get several minutes with very, very little bit degradation in main memory. So if your key is in main memory and someone pulls the memory modules out of your computer, they can attack your key by finding where it sits in main memory in the clear. And there have been some attempts at resolving this in hardware, like, oh, the memory modules need to be scrubbed on boot-up. But that's not going to help you if someone takes the module out and puts it either in another computer or in a dedicated piece of hardware for extracting memory module contents. And finally, there's direct memory access. Any PCI device on your computer has the ability, in ordinary operation, to read and write the contents of any section of main memory. They can basically do anything. This was designed back when computers were much slower, and we didn't want to have the CPU babysitting every transfer between devices and main memory.
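Part of why cold boot attacks work so well is that an in-memory AES key is rarely alone: the expanded key schedule sitting next to it gives it a recognizable fingerprint. Here's a hedged toy sketch of that search idea (my own simplified reimplementation, not the Princeton team's aeskeyfind tool, which additionally tolerates bit errors from decay): scan a memory dump for any 16 bytes whose following 160 bytes are exactly their AES-128 key schedule.

```python
def _aes_sbox():
    # Build the AES S-box from GF(2^8) arithmetic instead of a hardcoded table.
    sbox = [0] * 256
    rot = lambda b, n: ((b << n) | (b >> (8 - n))) & 0xFF
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3 in GF(2^8)
        q ^= (q << 1) & 0xFF                                   # q /= 3 in GF(2^8)
        q ^= (q << 2) & 0xFF
        q ^= (q << 4) & 0xFF
        if q & 0x80:
            q ^= 0x09
        # q is now the multiplicative inverse of p; apply the affine transform.
        sbox[p] = q ^ rot(q, 1) ^ rot(q, 2) ^ rot(q, 3) ^ rot(q, 4) ^ 0x63
        if p == 1:
            break
    sbox[0] = 0x63
    return sbox

SBOX = _aes_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key: bytes) -> bytes:
    # AES-128 key schedule: 16-byte key -> 176 bytes (11 round keys).
    w = list(key)
    for i in range(4, 44):
        t = w[4 * (i - 1):4 * i]
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w += [a ^ b for a, b in zip(w[4 * (i - 4):4 * (i - 3)], t)]
    return bytes(w)

def find_aes_keys(dump: bytes) -> list:
    # Offsets where 176 consecutive bytes form a valid AES-128 key schedule.
    return [off for off in range(len(dump) - 175)
            if dump[off:off + 176] == expand_key(dump[off:off + 16])]
```

The schedule's internal redundancy is what makes a key findable in gigabytes of raw RAM without knowing anything else about the software that put it there.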
So devices gained this direct memory access capability: they can be issued a command by the CPU, and then they just finish it on their own, and the data is in memory whenever you need it. And this is a problem, because PCI devices can be reprogrammed. A lot of these things have writable firmware that you can just reflash to something hostile. And this could compromise the operating system or execute any other form of attack, either modifying the OS or pulling out the key directly. There's forensic capture hardware that is designed to do exactly this in criminal investigations: they plug something into your computer and pull out the contents of memory. You can do this with FireWire. You can do this with ExpressCard. You can do this over Thunderbolt now, the new Apple adapter. These are basically external ports into your internal system bus, which is very, very powerful. So wouldn't it be nice if we could keep our keys somewhere other than RAM? Because we've sort of demonstrated that RAM is not terribly trustworthy from a security perspective. Is there any dedicated key storage or cryptographic hardware? Well, there is. You can find things like cryptographic accelerators; you use them in web servers so you can handle more SSL transactions per second, and they're tamper resistant, or certificate authorities have these things to hold their top secret keys. But they're not really designed for high throughput operations like disk encryption. So are there any other options? Can we use the CPU as a sort of pseudo hardware crypto module? Can we compute something like AES in the CPU using only CPU registers? Intel and AMD have come out with these rather excellent new CPU instructions, AES-NI, which actually take all the hard work of doing AES out of your hands. You can do the block cipher primitive operations with just a single assembly instruction.
The question is then: can we store our key, and can we actually perform this process, without relying on main memory? We have a fairly large register set on x86 processors. I don't know if any of you have actually tried adding up all the bits that you have in registers, but it's something like four kilobytes, almost, on modern CPUs. So some of that we can actually dedicate to key storage and scratch space for our encryption operations. One possibility is using the hardware breakpoint debug registers. There are four of these in your typical Intel CPU, and in 64-bit mode each is going to hold a 64-bit pointer. So that's 256 bits of potential storage space that most people will never actually use. The advantages of using debug registers are, one, they're privileged registers, so only the operating system, ring zero, can access them. And you get other nice benefits, like when the CPU is powered down, either by shutting off the system or putting it in sleep mode, you actually lose all register contents. So you can't cold boot these. And a guy in Germany, Tilo Mueller, actually implemented this as TRESOR for Linux in 2011. He did performance testing on it, and it's actually not any slower than doing your regular AES computation in software. How about instead of storing a single key, though, we store many keys? This gets us into more of the crypto module space. We can store a single master key, which never leaves the CPU after boot-up, and then load and unload wrapped versions of other keys as we need them for additional operations. The problem is, we can have our code and our keys stored outside of main memory, but the CPU is ultimately still going to be executing the contents of memory. So a DMA transfer or some other manipulation could still alter the operating system and get it to dump out the registers, whether they be in main memory or somewhere more exotic like debug registers.
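The master-key idea works because wrapped keys are safe to park anywhere: only the master key, which never leaves the CPU, can unwrap them. Here's an illustrative sketch of that wrap/unwrap pattern using only Python's standard library. The HMAC-SHA256 counter-mode stream and encrypt-then-MAC construction here stand in for what a real implementation would do with AES (ideally via AES-NI); the function names are my own, not from TRESOR:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # HMAC-SHA256 in counter mode as a PRF-based stream cipher (illustration only).
    out, ctr = bytearray(), 0
    while len(out) < n:
        out += hmac.new(key, nonce + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return bytes(out[:n])

def _subkeys(master: bytes):
    # Derive separate encryption and MAC subkeys from the master key.
    enc = hmac.new(master, b"encrypt", hashlib.sha256).digest()
    mac = hmac.new(master, b"authenticate", hashlib.sha256).digest()
    return enc, mac

def wrap_key(master: bytes, task_key: bytes) -> bytes:
    # Encrypt-then-MAC: the wrapped blob can sit in ordinary RAM or on disk.
    enc, mac = _subkeys(master)
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(task_key, _keystream(enc, nonce, len(task_key))))
    tag = hmac.new(mac, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap_key(master: bytes, blob: bytes) -> bytes:
    enc, mac = _subkeys(master)
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("wrong master key or tampered blob")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc, nonce, len(ct))))
```

The wrapped blob reveals nothing without the master key, and the MAC means an attacker who can scribble on memory can't silently swap in a key of their choosing.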
Can we do anything about the DMA attack angle? As it turns out, yes, we can. Recently, as part of new technologies for enhancing server virtualization, for performance reasons, people liked being able to attach, say, a network adapter directly to a virtual server, without it needing to go through the hypervisor. And so IOMMU technology was developed, so you can actually sandbox a PCI device into its own little section of memory, where it can't arbitrarily read and write in the system. So this is perfect. We can set up IOMMU permissions to protect our operating system, or whatever we're using to handle keys, from arbitrary access. And again, our friend from Germany, Tilo Mueller, has implemented a version of TRESOR, called TreVisor, on a thin hypervisor called BitVisor, which basically does this. It lets you run a single operating system, and it transparently does the disk access encryption. The guest doesn't even have to care or know anything about it, which is great. Disk access is totally transparent to the OS. Debug registers cannot be accessed by the OS. And the IOMMU is set up so that the hypervisor itself is secure from manipulation. But as it turns out, there are other things in memory that we might care about besides disk encryption keys. There's the problem that I hinted at earlier: we used to do container encryption, and now we all do full disk encryption for the most part. We do full disk encryption because it's very, very difficult to make sure you don't get accidental writes of your sensitive data to temporary files, or caching, in a container encryption system. Now that we're reevaluating main memory as a not secure, not trustworthy place to store data, we need to treat it in much the same way. We have to encrypt everything we do not want to leak. Things that are really important, like SSH private keys, PGP keys, a password manager, or any top-secret documents that you're working on. So I had a very, very silly notion.
Can we encrypt main memory, or at least most of the main memory where we're likely to keep secrets, so we can at least minimize how much we're going to leak? And once again, surprisingly or perhaps not so surprisingly, the answer is yes. A proof of concept in 2010 by a guy named Peter Peterson, called Cryptkeeper, actually tried implementing a RAM encryption solution. It wouldn't encrypt all RAM. It would basically split main memory into two components: a small fixed-size clear region, which is unencrypted, and then a larger sort of pseudo swap device, the crypt, where all the data is encrypted prior to being kept in main memory. It ended up being, obviously, quite a bit slower in synthetic benchmarks, with read performance more affected than write performance. But you know what? In the real world, when you ran something like a web browser benchmark, it actually did pretty well: 10% slower. I think we can live with that. The problem with this proof of concept implementation was that it stored the key to the crypt in main memory, because where else would you put it? The author considered using things like the TPM for the bulk encryption operations, but those things are even slower than dedicated hardware cryptosystems, so it would be totally unusable. But you know what? If we have the capability to use the CPU as a pseudo hardware crypto module, it's right in the center of things, so it should be fast enough to do this. So maybe we can actually use something like this. Let's say we have this sort of system set up. Our keys are not in main memory. Our code responsible for manipulating the keys is protected from arbitrary access by malicious hardware components. Main memory is encrypted, so most of our secrets are not going to leak even if someone executes a cold boot attack. But how do we actually get a system booted up into this state? Because we need to start from a turned-off system, authenticate ourselves to it, and get the system up and running.
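The clear/crypt split described above can be sketched as a toy page cache: a small plaintext working set, with pages encrypted on eviction into the larger backing region. This is my own minimal illustration of the Cryptkeeper idea, not its actual code; the per-page HMAC keystream stands in for the real cipher, and real page replacement is more sophisticated than plain LRU:

```python
import hashlib
import hmac
from collections import OrderedDict

def _pad_xor(key: bytes, page_id: int, data: bytes) -> bytes:
    # XOR a page with a per-page keystream (illustrative PRF, not production crypto:
    # re-encrypting a modified page with the same pad would reuse the keystream).
    pad, ctr = bytearray(), 0
    while len(pad) < len(data):
        pad += hmac.new(key, page_id.to_bytes(8, "big") + ctr.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, pad))

class SplitMemory:
    """Toy Cryptkeeper-style RAM: a small plaintext working set (the 'clear')
    backed by a larger encrypted region (the 'crypt')."""
    def __init__(self, key: bytes, clear_pages: int = 4):
        self.key, self.clear_pages = key, clear_pages
        self.clear = OrderedDict()   # page_id -> plaintext, in LRU order
        self.crypt = {}              # page_id -> ciphertext

    def write(self, page_id: int, data: bytes):
        self.clear[page_id] = data
        self.clear.move_to_end(page_id)
        while len(self.clear) > self.clear_pages:       # evict LRU page, encrypted
            victim, plain = self.clear.popitem(last=False)
            self.crypt[victim] = _pad_xor(self.key, victim, plain)

    def read(self, page_id: int) -> bytes:
        if page_id not in self.clear and page_id in self.crypt:
            # Page fault: decrypt back into the clear region.
            self.write(page_id, _pad_xor(self.key, page_id, self.crypt.pop(page_id)))
        self.clear.move_to_end(page_id)
        return self.clear[page_id]
```

A cold boot attacker who grabs this layout sees only the few working-set pages in plaintext; everything else is ciphertext, which is exactly the leak-minimization being argued for.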
How do we do this in a trustworthy way? Because, after all, someone could still modify the system software to trick us into thinking we're running this great new system, when in reality we're just not. So one of the very important topics is being able to verify the integrity of our computers. The user needs to be able to verify that their computer has not been tampered with before they authenticate themselves to it. And there's a tool that we can use for this: the Trusted Platform Module. It's kind of got a bad rap, but we'll talk about that later. It has the capability to measure your boot sequence in a couple of different ways, letting you control what data the TPM will reveal to the system only in particular system configuration states. So you can basically seal data to a particular software configuration that you're running on your system. And there are a couple of different implementation approaches to do this, and there are hardware countermeasures to make it really hard to get around. So maybe we can do this. And so what is the TPM, anyway? It was originally hailed as the grand solution to digital rights management by media companies. Media companies would be able to remotely verify that your system was running in some approved configuration before they would let you run the software and unlock the key files. It ended up being really impractical in practice, and so nobody's actually even trying to use it for this purpose anymore. I think a better way to think about it is really just as a smart card that's fixed to your motherboard. It can perform some cryptographic operations (RSA, SHA-1), it has a random number generator, and it has physical attack countermeasures to prevent someone from easily getting access to the data that's stored in it. The one really unique thing it has is this ability to measure the system boot state into platform configuration registers. And it's usually a separate chip on the motherboard, so there are some security implications of that.
There are some kind of fun bits, like monotonic counters: numbers that you can only request the TPM increase, and then you can check what the value is. There's a small non-volatile memory range that you can use for really whatever you want. It's not very big, about a kilobyte, but it can be useful. There's a tick counter, which lets you determine how long the system has been running since last startup. And there are commands that you can issue to the TPM to have it do things on your behalf, which even include things like clearing itself, if you feel the need to. So we want to develop a protocol that a user can run against the computer, where they can verify that the computer has not been tampered with before they authenticate themselves to it and begin using it. So what sorts of things could we try sealing to platform configuration registers that would be useful for this sort of protocol? A couple of suggestions that I have are: seeds for one-time password tokens, either the time-based or the event-based variety. Maybe some sort of unique image or animation, like a photograph of you somewhere. Something unique that someone can't easily find elsewhere. And then, say, disable the video out on your computer when you're in this challenge-response authentication mode. You also want to seal part of the disk key, and there are a couple of reasons you want to do that. It assures, within certain security assumptions, that the system is only going to boot into some approved application that you control as the owner of the computer. And ultimately that means that anyone who wants to attack your system needs to do it either by breaking the TPM, or within the sandbox that you've created for them. And this, of course, is not cryptographically strong or anything like that.
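The measure-then-seal mechanic above can be sketched in a few lines. This is a deliberately simplified model: the hash chaining mirrors how PCR extends actually work, but the `ToyTPM` class just keys a lookup on the expected PCR value, whereas a real TPM encrypts the secret under an internal storage key bound to a PCR policy:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: PCR_new = H(PCR_old || H(component)).
    # Order matters, and there is no operation that "un-extends" a value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = b"\x00" * 32               # PCRs reset to zero on platform reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

class ToyTPM:
    def __init__(self):
        self._sealed = {}
    def seal(self, expected_pcr: bytes, secret: bytes):
        self._sealed[expected_pcr] = secret
    def unseal(self, current_pcr: bytes) -> bytes:
        if current_pcr not in self._sealed:
            raise PermissionError("PCR state does not match sealing policy")
        return self._sealed[current_pcr]

tpm = ToyTPM()
good_boot = [b"BIOS v1.2", b"bootloader", b"kernel"]
tpm.seal(measure_boot(good_boot), b"half of the disk key")
```

If an evil maid swaps in a different bootloader, every subsequent PCR value diverges, and the sealed half of the disk key simply never comes out.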
You're not going to have a protocol which allows a user to securely authenticate the computer to the same level that you get from, say, encryption in your head. It's never going to be perfect. So, I mentioned that there's a self-erase TPM command that you can issue in software. And since the TPM requires the system to be in a particular configuration before it will release secrets, you can actually do something interesting, like self-destruct. You can develop your software and your protocol to limit, say, the number of times the computer can be started up unsuccessfully, have it time out once it's been waiting on the password screen for some period of time, or limit the number of password attempts you can enter. Or the amount of time since the computer was last started up, say, if it's been in cold storage for a week or two. You could also restrict access to the computer for periods of time: say you're traveling in a country and you want to lock down your computer for the duration of the trip, so that when you get to your hotel or wherever on the other end you can unlock it, but not before. You can also do fun things like leave little canaries on the disk which appear to contain the critical values for your policy but are really just trip wires, while you're really using the internal TPM values. You could also create a self-destruct password, or duress code, to effectively issue this reset command. And since the two options an attacker has are break the TPM or run your software, you can kind of make them play by these rules, and you can actually do an effective self-destruct. The TPM is intentionally designed to be very, very hard to copy; basically, you can't clone it easily. So you can use things like monotonic counters to block any disk-restore replay attacks, and once the TPM clear command has been issued, it's game over for an attacker who might want to get access to your data.
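The policy decisions above boil down to a small state machine evaluated before anything gets unsealed. Here's a hedged sketch of that logic; in a real system the counters would live in TPM NVRAM or monotonic counters, the passwords would be compared as salted KDF hashes rather than plaintext, and all the names and thresholds here are illustrative:

```python
def evaluate_boot(state: dict, config: dict, entered_password: str, now: int) -> str:
    """Return one of "UNSEAL", "RETRY", or "CLEAR_TPM" (self-destruct)."""
    if entered_password == config["duress_password"]:
        return "CLEAR_TPM"                          # duress code: wipe immediately
    if state["failed_attempts"] >= config["max_failed_attempts"]:
        return "CLEAR_TPM"                          # too many bad passwords
    if now - state["last_successful_boot"] > config["max_idle_seconds"]:
        return "CLEAR_TPM"                          # sat in cold storage too long
    if entered_password != config["password"]:
        state["failed_attempts"] += 1
        return "RETRY"
    state["failed_attempts"] = 0
    state["last_successful_boot"] = now
    return "UNSEAL"
```

Because the sealed secrets only come out when this exact software is what the TPM measured, an attacker can't just patch these checks out and retry forever; modified code produces different PCR values and nothing unseals.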
There are some similarities to a system that Jacob Appelbaum discussed at the Chaos Communication Congress many years ago, in 2005. He proposed using a remote network server for many of these functions, which is pretty brittle and potentially difficult to use. Since the TPM is an integrated system component, you can get a lot of these advantages by using the TPM instead of a remote server. And a hybrid approach is potentially possible. You could have a system set up, say, by an IT department, where you temporarily lock down a system and it can only become available again once you plug it into the network and check in with your IT administrator's system. I'm kind of hesitant to expose a network stack this early in the boot process, because it increases your attack surface, but it's still a possibility. So, I've sort of qualified all my statements: an attacker can only do this or that, of course, under the assumption that they cannot break the TPM very easily. This is actually an optical microscope scan of a TPM or smart card done by Christopher Tarnovsky, who spoke here at DEF CON last year, and at Black Hat a few years ago, on the security of these TPMs. He's done some really great work in figuring out how hard these things are to break. He's enumerated the countermeasures, figured out what it would take to actually break these things, and has actually gone and done it and tested it. So there are things like light detectors, there are active meshes, there are all sorts of things to try to throw you off the track of what it's actually doing. But if you spend enough time, have enough resources, and are careful enough, you can actually get around most of these. You can decapsulate a chip, put it in an electron microscope workstation, and go wild. Find where the unencrypted data bus is and just glitch it to spill out all of its secret data.
But nonetheless, this sort of attack, even once you've done all the R&D, is going to take hours with an expensive microscope, and you're still going to spend months of R&D figuring out what the countermeasures on the chip are so you can actually break it without frying the one chip that is your attack target. There are also reset attacks. I mentioned earlier that the TPM is a separate chip on the motherboard in almost all cases. It's not in the CPU like it is, say, for DRM enforcement in video game consoles. If you manage to reset it, you're really not going to adversely affect the system that badly. It's usually a chip that hangs off the LPC bus on the computer, which is itself a sort of legacy bus off the south bridge or the platform controller hub. On modern systems, really the only sorts of things you're going to find on there are things like the BIOS and legacy keyboard controllers. We used to have floppy controllers, but I guess not really anymore. And if you find a way to reset, say, the low pin count bus, you'll reset the TPM into a fresh system boot state. You'll lose your PS/2 keyboard, but that's not a really big deal. And you'll be able to play back a trusted boot sequence that the TPM has data sealed to, without actually executing that boot sequence, and then you can use this to extract data. There are a couple of attacks that have tried to exploit this. If you're using an older mode of the TPM, called the static root of trust for measurement, you can do this pretty easily. I have not seen any research on a successful attack against the newer Intel Trusted Execution Technology version of TPM activation. This is an area that probably needs more research. Intercepting the LPC bus, and what it's communicating to the CPU, might be another way you can attack the TPM. Now let's look at a blueprint of what I think we should have for getting the system up from a cold boot state into our trustworthy running configuration.
There are a lot of really vulnerable legacy system components in our PC architecture. You can do all sorts of things in the BIOS, like hooking the interrupt vector table and modifying disk reads and writes, capturing keyboard input, masking out CPU feature registers, or screwing with the system in all sorts of fun different ways. There are plenty of options if you want to mess with people. In my opinion, you really want to get out of BIOS-controlled real mode into protected mode as quickly as possible and just do your measurement stuff there. Once you get into this pre-boot environment, or into your operating system, like a Linux initial RAM disk, then you start executing your protocol and doing these things. Once you're using operating system resources, what someone does at the BIOS level, as far as interrupt tables go, doesn't really affect you anymore; you really don't care. You can do sanity checks on your registers: if you know what CPU you're running on, you know it's going to support things like the no-execute bit and debug registers and other stuff that people might try to mask out in the capability registers. Here's the runtime blueprint, what we actually want the system to look like once we're in the running configuration. There was the previous project, TreVisor, which implemented many of the security aspects of using debug registers and having IOMMU protections on your main memory. The problem is that BitVisor is a very specialty, not very commonly used program. Xen is sort of the canonical open source hypervisor, where there's a lot of security research going on; people are making sure it's not broken. In my opinion, we should use something like Xen as the bare-metal hardware interface, and then use a Linux dom0 administrative domain on it to actually do the hardware initialization. In Xen, all of your paravirtualized domains are actually running in non-privileged mode, in ring three, so they don't have direct access to things like the debug registers.
That's one thing that's already done. Xen exposes hypercalls that give you access to that sort of stuff, but it's something you control in software. And so the approach I'm taking is that master key approach in the debug registers. We'll dedicate the first two debug registers to storing a 128-bit AES key, which is our master key. This thing never leaves a CPU register from the moment it's entered by the process that takes the user credentials. And then we use the second two registers for virtual-machine-specific keys: they can be used either as ordinary debug registers, or, in this case, to encrypt main memory. In this particular setup, we still need to have a few devices that are directly connected to the administrative domain. That includes the graphics processing unit, which is a PCI device, the keyboard, the TPM; all this stuff needs to be directly accessible, so you can't really apply IOMMU protections to it. But things like the network controller, arbitrary devices on the PCI bus, you can set up IOMMU protections on, so they have absolutely zero access to your administrative domain or your hypervisor memory spaces. You can do similar things by putting devices like your network controller into dedicated virtual machines. These virtual machines get the devices mapped in, but have IOMMU protections set up so that the device can only access that virtual machine's memory. You can do the same thing with your storage controller. And then you actually run all of your applications in virtual machines that have absolutely zero direct hardware access. So even if someone owns your web browser or sends you a malicious PDF file, they don't actually get anything that would let them seriously compromise your disk encryption. Now, I can't take credit for that architecture design. It was actually the design basis for a really excellent project called the Qubes OS project.
They basically describe themselves as a pragmatic formulation of Xen, Linux, and a few custom tools that do a lot of the stuff I just talked about: it implements these non-privileged guests and gives you a nice unified system environment, so it feels like you're running one system, but it's actually a bunch of different virtual machines under the hood. So I use this as my code base; all the crypto stuff is stuff that I've added on top of it. And the tool I'm releasing, and this is still really proof-of-concept, experimental code, I call it failings. It is a patch to Xen that implements the disk encryption stuff as I've described: master key in the first two debug registers, second two debug registers left totally unrestricted. For security reasons the XMM registers, which are used as scratch space, are protected along with the key across VM context switches. And I've also implemented a very, very simple version of the Cryptkeeper paper, encrypted memory, using zram. Because hey, it's been mainlined, and it does pretty much everything except the crypto, so adding encryption on top of it is just a really tiny little bit of code, which is great. The most secure code is the code you don't have to write. The nice thing about zram is it gives you a bunch of the bits that you need to securely implement things like AES counter mode, which is really great. Hardware-wise you do have a few system requirements. You need a system that supports the AES-NI instructions; reasonably common, but not every system has them. Chances are if you have an Intel i5 or i7, almost all of them support it, but there are some oddballs, so check Intel ARK to make sure your CPU supports all the features that you need. Ditto hardware virtualization extensions; these have been very, very common since about 2006. An IOMMU is a little bit more complicated to find if you're looking for a computer.
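The reason counter mode is the natural fit for encrypted RAM is worth making concrete. In CTR mode you never run data through the block cipher at all: you encrypt a counter and XOR the result with the data, so encryption and decryption are the same operation, which suits a transparent page store like zram. The sketch below shows only that structure; the block function here is a deliberate stand-in mixer, NOT AES, since in the real design the AES-NI instructions would compute the keystream with the key held in the debug registers.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in keyed permutation (SplitMix64-style mixer).  NOT a real
 * cipher; it only plays the role of E_k(block) in this illustration. */
static uint64_t toy_block_encrypt(uint64_t key, uint64_t block)
{
    uint64_t z = block + key + 0x9e3779b97f4a7c15ULL;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

/* CTR mode: keystream = E_k(nonce || counter), data ^= keystream.
 * Calling this twice with the same key and nonce is a no-op, which is
 * exactly why encrypt and decrypt are the same function.  (A nonce must
 * never be reused with the same key for two different payloads.) */
static void ctr_crypt(uint64_t key, uint64_t nonce,
                      uint8_t *buf, size_t len)
{
    uint64_t ks = 0;
    for (size_t i = 0; i < len; i++) {
        if (i % 8 == 0)
            ks = toy_block_encrypt(key, nonce + i / 8); /* next keystream block */
        buf[i] ^= (uint8_t)(ks >> (8 * (i % 8)));
    }
}
```

For encrypted page storage, the page number can serve as part of the nonce, so each compressed page gets its own keystream.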
It's not listed as a sticker specification; you kind of need to dig for it, and there are a lot of people who should know better but don't about the difference between VT-x and VT-d and so forth. So you might need to hunt for a system that supports this stuff, and you want a system with a TPM, of course, because otherwise you can't implement this measured boot thing at all. Usually you'll want to be looking at business-class machines, where you can verify this sort of stuff exists. If you look for Intel TXT, it'll have almost everything you need. The Qubes team actually keeps a really great hardware compatibility list on their wiki, which has details for a lot of systems that do this sort of stuff. So, security assumptions: in order for the system to be secure, we have a few assumptions about a few system components. The TPM, of course, is a very, very critical component for assuring the integrity of the boot. We need to make sure that there's no back door capable of dumping NVRAM, or manipulating the monotonic counters, or putting the system into a state where it's not actually trusted but we just think it is, by resetting the PCRs. And based on remarks by Tarnovsky, who has reverse engineered these chips, I'm setting a bound of roughly 12 hours of exclusive access to the computer being required if you want to do a TPM attack to pull out its secrets. There are a few assumptions about the CPU, the memory controller, and the IOMMU, mainly that they're not backdoored and are correctly implemented. These might not be very strong assumptions, because Intel could very easily backdoor some of these things and we'd have no way of finding out. And there are some security assumptions about Xen as a piece of software. It actually has a very good security record, but nothing's perfect, and occasionally there are security vulnerabilities. In the case of Xen, given its privileged position in the system, that's actually kind of a big deal; you really want to make sure it's secure.
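Why resetting the PCRs matters comes down to the extend operation. A PCR can never be written directly, only extended: the new value is a hash of the old value concatenated with the new measurement, so the final PCR commits to the entire ordered chain of boot components. The toy below shows just that chaining property, with a stand-in FNV-1a mixer loudly substituting for the SHA-1/SHA-256 a real TPM uses; the function names are mine.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the TPM's hash: 64-bit FNV-1a seeded with the old PCR
 * value.  NOT cryptographically secure; it only models the chaining. */
static uint64_t fnv1a(uint64_t seed, const uint8_t *data, size_t len)
{
    uint64_t h = seed ^ 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* PCR_new = H(PCR_old || measurement).  There is no "set" operation,
 * which is why an attacker who can force a PCR *reset* (e.g. via the
 * LPC bus) defeats the whole scheme: they can then replay a clean
 * measurement chain into a dirty system. */
static uint64_t pcr_extend(uint64_t pcr, const uint8_t *measurement, size_t len)
{
    return fnv1a(pcr, measurement, len);
}
```

Because the hash chains, measuring the same components in a different order yields a different PCR value, and the TPM will only unseal secrets when the value matches the expected chain.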
And so under those security assumptions we can put up a framework for a threat model. We want to do a realistic threat assessment, where we recognize that not every system component is reliable, especially when there are so many legacy components that were designed without any consideration of security. But at the same time, not all theoretical attacks are practical, and you can't lump very, very simple attacks together with difficult, complex hardware attacks. I think a good analogy is safe security. We all know that every safe can eventually be broken; it's a question of how much time it takes. And so I think we need to think about our systems as having physical security defenses measured in hours, rather than the minutes we have right now. And as always, if I've screwed up, if I've made an assumption that you don't think holds, prove it; verify it yourself, and show that I'm right or wrong. So, expected security: this is what you'll actually get. A cold boot attack is not going to recover anything useful, because nothing sensitive sits in RAM in the clear. Hardware-based RAM acquisition is not going to be effective, because those devices are going to be IOMMU-sandboxed down to nothing, so they're not actually going to get your application state or your system state. And even if you manage to extract the secrets out of a TPM, all that's going to do is get you back to where we are right now with ordinary full disk encryption; the attacker doesn't get everything handed to them for zero effort. But if you have a reasonable security policy, say never leaving your computer out of your control for more than 12 hours, you should be okay, as long as you're reasonably vigilant without being excessively vigilant. Now, a couple of attack methods; these are the main ones I would use if I were trying to break into a system that used something like this.
Keyloggers and friends are still going to work. You can do some mitigation by using one-time tokens, but again, it's going to be imperfect. TPM attacks, as mentioned before: either NVRAM extraction, or LPC bus reset-intercept hardware, some way of tricking the TPM into a configuration that it thinks is trustworthy but actually is not. RAM manipulation: if you have something which looks like RAM and acts like RAM but isn't actually RAM, something that pretends to be RAM most of the time, then there's really nothing you can do, because an attacker could manipulate the contents of the system, no problem. You could also try things like transient pulse injection, which is how George Hotz broke the hypervisor security on the PS3. A quick bit of legal notes. I'm not a lawyer, obviously not your lawyer, and this is not legal advice. As far as I know, having a self-destruct is not illegal yet, but there has been no legal test case; that might be interesting to find out, though I'm not sure I'd want to be that test case either. You also need to be aware that TPMs and strong encryption are illegal in certain jurisdictions. You can't use a TPM in, say, China. You can't use a TPM in Russia. And some countries, like the United Kingdom, have mandatory key disclosure: under laws like RIPA, you will go to prison if you do not hand over a key. Future work and improvements: a production, stable version. Right now it is not stable; if you put your computer to sleep, it will eat your data, among a couple of other problems. I'm working on it. And there are some other things that might be fun to do in the future. Things like OpenSSL keys are really important, so an API that lets you swap sensitive contents out of memory very, very quickly, so your exposure time is very low, would be valuable, and maybe upstreaming the patches to Linux so you could all install it. And the goons are getting ready to kick me off the stage. Conclusions, I'm almost done.
I'm almost done. So: the best security in the world goes unused if it's unusable. The model needs to account for realistic use patterns. And it's not just disk encryption; you really need to think about security holistically, from the perspective of the whole system. It's challenging to do this, but it's not an intractable problem. Thank you.