And now, our first talk here today will be about Anti Evil Maid, that is, we will find out how to actually make sure that the piece of software that you type your hard drive password into every morning, for example, is actually the piece of software that you think it is. So please welcome Matthew. Good morning. So first of all, I'd like to say that it's wonderful that so many of you have been able to make it at this completely unreasonable time in the morning. I appreciate that. So quick introduction. My name is Matthew Garrett. I'm a security developer at CoreOS, but I'm also on the board of directors of the Free Software Foundation. So while I'm very interested in security, especially very low-level security, securing the platform before we even get to the point of running applications, I focus as far as possible on ensuring that any security work I do still permits the owner of a computer to be the person who makes the ultimate decision about what they want their security policy to be, what software they want to be able to run, because it's unfortunate when we make security choices that make it impossible for people to have free control of their own systems. So the quick corporate sponsorship thing: we just opened an office in Berlin, so chat to me if you are interested in working on this kind of stuff. What I'm going to be talking about today is the problem of how you can fundamentally know that your computer is trustworthy. In this case, when I say trustworthy, what I mean is that the computer will not betray you. The computer is operating in your best interest. The computer is not choosing to circumvent your decisions and make life easier for people who are not you. So we've seen cases where, for instance with DRM, computers are untrustworthy because they refuse to let you do what you want to do. But from a security perspective, we also want to be sure that a computer is taking your secrets and holding them and treating them in a trustworthy manner.
It's not releasing those secrets to anybody else. It's not holding onto them and then using them to leak data about you. So part of thinking about how we define the trustworthiness of a computer is knowing when we trust a computer. And up until the point where you start putting secrets into a computer, it doesn't matter whether it's trustworthy or not. It doesn't have any way to harm you. Now, secrets are obviously things like passwords and passphrases. Secrets are also things like your browsing history. Secrets are anything that is not information that you would necessarily wish to publish entirely. I'm only going to be talking about the former kind of thing, but in terms of building a fully trustworthy platform, you need to have trustworthiness from the very beginning of the boot process. Before you enter any secrets, your computer must be in a trustworthy state. You need to be able to look at your computer and say, I trust this computer. Because if that's not true, how do you know that the software that you're typing your password into is operating in your best interests? In the free software world, we can examine the source code. We can have a lot of people independently reviewing it. And that means that there is perhaps a higher probability of us being able to look at this software and say, yes, this software is trustworthy. But once that's been compiled, once that's on disk, if someone is able to modify that software on your hard drive, then it may transition from being trusted to untrusted. So if someone is able to get access to your system and then modify the contents of your disk, then you're in an unfortunate position. But it's worse than that, because that software is running on top of an operating system kernel. And how do you know that the kernel itself has not modified that software? A sufficiently modified kernel, a backdoored kernel, is able to arbitrarily modify the behavior of any user space application.
Even if you had some way to verify that your user space application was trustworthy, that the program taking your disk encryption passphrase had not been modified, if the kernel's been modified, it doesn't matter. The kernel can still subvert that. But say you have some mechanism to verify that the kernel is good when it's read off disk. How do you know that the bootloader that read the kernel off disk and then verified the kernel state didn't then modify the kernel, which could then modify the application and then store your passphrase or send it off somewhere unpleasant? The bootloader, okay, say we have a means of verifying the bootloader. How do we ensure that the firmware that launched the bootloader didn't tamper with the bootloader, which launches the kernel, which can then be tampered with to tamper with the software that reads your passphrase? And fine, okay, we solve all of that. And then, well, fundamentally, at some point the CPU itself, which is obviously executing this code, is capable of doing whatever it wants. And if the CPU is not trustworthy, then there is no way to trust the firmware, no way to trust the bootloader, no way to trust the kernel, no way to trust the password prompt. Figuring out how to trust the CPU is a hard problem. I'm going to ignore it. But figuring out the trustworthiness of the rest of the system is more practical, not necessarily to the extent that everybody would like, but to a reasonable extent. Trusted computing was a phrase that started appearing a little over 10 years ago, in the early 2000s. And at that point, it was clear that trusted computing was going to be used as a mechanism to ensure that our computers, the computers we owned, could be trusted by media companies, could be trusted by proprietary software developers.
We felt that the way that trusted computing was inevitably going to be used was for our computers to be out of our control, that people would use this technology to restrict what we could do with our computers. Thankfully, none of that's happened. And there's various technological and political reasons for that, which I'm not going to go into today. But at the heart of trusted computing is a component called a trusted platform module. Most of the time, a trusted platform module is a small, discrete IC that's on your system motherboard. It's typically on the LPC bus. The TPM does not have any ability to access system memory directly. The TPM can only do what it's asked to do. The TPM cannot independently modify system state. The TPM has no way of reaching out and looking at the contents of RAM and figuring out what you're running. The TPM can only do what it's told to do. It can only know what it's explicitly told about. If you never run any code that touches the TPM, the TPM can do nothing. The TPM cannot phone home on its own. The TPM cannot interfere with your ability to run software. For the most part, on free software platforms, we have completely ignored the TPM because most things we can think of to do with it were not particularly beneficial to us. The TPM has one fascinating feature, which is inherent to the trusted computing goal, the idea that we can define whether a computer is in a trusted state. And that's something called measurement. In secure boot, each component verifies a cryptographic signature on the following component and refuses to run it if that signature doesn't match. In trusted computing, your ability to run arbitrary software is in no way restricted. There is no validation of signatures. There are no keys involved in this at this stage. Instead, every component measures the next component and stores that in the TPM. And measurement in this case is just a SHA-1 hash. Just as a brief clarification here. 
The TPM spec version 1.2 is what I'm mostly going to be talking about today. TPM spec 2.0 is now available and devices are beginning to appear. But pretty much every single TPM that's currently available in the wild is a 1.2 device, and they only implement SHA-1. More modern TPM devices have support for basically arbitrary hash algorithms, including some that are actually good. So in a measured boot, the first component looks at the next component to be executed and, before executing it, takes a SHA-1 of that. And then it writes that SHA-1 value into a platform configuration register, or PCR. When we say it's written, extend is the meaningful word here. When a value is written, it does not replace the existing value. It's not possible to arbitrarily overwrite the existing contents of a PCR. Instead, the existing content of the PCR is prepended to the new value. So if you had 20 bytes in the PCR and you wrote a 20-byte SHA-1, you now have a 40-byte value. And that 40-byte value is then again hashed, again with SHA-1, and the PCR is set to that value. So this means that the new value of the PCR depends on the previous value, which means that there are basically three ways to get a specific PCR value. The first is to break SHA-1. If you are able to figure out a way to add 20 bytes of arbitrary data to the existing PCR value such that you control the new SHA-1 value, then you can completely break trusted boot. If any of you know how to break SHA-1, I'm going to be a little bit unhappy, and various other people are going to be very, very unhappy, and we're all going to have bad days. So again, I'm going to ignore this possibility because it's far too terrifying and we can't do anything about it anyway. Another is to trigger execution of arbitrary code through unmeasured data. Measurement typically only measures the code. Now, there are a few exceptions, but basically it measures the code. That code has to operate on data. As a species, we're pretty bad at writing code.
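To make the extend operation concrete, here's a minimal simulation in Python. It assumes SHA-1 and 20-byte PCRs as in the 1.2 spec; nothing here talks to a real TPM, and the component blobs are obviously made up.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """New PCR value = SHA1(old PCR value || measurement)."""
    assert len(pcr) == 20 and len(measurement) == 20
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start out as 20 bytes of zeroes at power-on.
pcr = bytes(20)

# Each boot component measures (hashes) the next one before running it.
for component in [b"firmware blob", b"bootloader blob", b"kernel blob"]:
    pcr = extend(pcr, hashlib.sha1(component).digest())

# Replaying the same sequence of extends always yields the same final
# value; changing any component (or the order) changes it completely.
print(pcr.hex())
```

Because each new value is chained through the old one, you can't reach a given PCR value except by performing the identical sequence of extends.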
We're even worse at writing code that deals with data, and we're astonishingly bad at writing code that deals with arbitrary user-provided data. A break in the trusted computing process was demonstrated by Invisible Things Lab through the simple expedient of noticing that the firmware allowed you to provide a boot splash picture in the firmware, and this meant that people buying the boards from Intel could replace the boot splash with something including their company brand rather than just having a generic Intel logo. The problem was, the format was BMP, the old Microsoft Bitmap format, and that includes run-length encoding support. You can say the next value is repeated 256 times, so it's a lossless compression mechanism. There was nothing in the code to check that the run-length encoding did not result in you overrunning the buffer that the Bitmap was being decompressed into, which let you overwrite arbitrary code in the firmware. And then, since the Bitmap was unmeasured, you had arbitrary code execution, which meant that you could then modify the following stage but write the correct SHA-1 value into the PCR. So everything would look good. If your firmware is not competently written, then your firmware cannot be trustworthy. So the fact that the majority of us are running proprietary firmware with no available source code, and we can't verify that any of it is any good whatsoever, is a minor problem here. And finally, you can perform exactly the same sequence of writes to the PCR, and this is what happens in a normal boot. If nothing's been modified, then the SHA-1 of every component will match. You perform the same sequence of PCR updates and you end up with the same PCR values. So that's great. If the system boots correctly, then the PCR values will always be the same, as long as none of the components have been interfered with. But there's a minor problem here. How do we read those values? And the straightforward answer is very simple.
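The shape of that run-length decoding bug can be sketched like this. This is not the actual EFI BMP decoder; the two-byte (count, value) format and all the names are invented for illustration. Memory is modelled as a decode buffer followed by some adjacent "firmware code", and the vulnerable decoder trusts the attacker-supplied run lengths and writes straight past the end of the buffer.

```python
def rle_decode(data: bytes, memory: bytearray, buf_size: int, checked: bool):
    pos = 0
    for i in range(0, len(data) - 1, 2):
        count, value = data[i], data[i + 1]
        if checked and pos + count > buf_size:
            raise ValueError("run overflows decode buffer")  # the missing check
        for _ in range(count):
            memory[pos] = value  # unchecked: scribbles over adjacent code
            pos += 1

# 8-byte decode buffer immediately followed by "firmware code".
memory = bytearray(b"\x00" * 8 + b"FIRMWARE")
evil = bytes([16, 0x41])  # a single run: sixteen 'A' bytes into an 8-byte buffer
rle_decode(evil, memory, buf_size=8, checked=False)
print(memory)  # the adjacent "code" has been overwritten with 'A's
```

With `checked=True` the same input is rejected before any out-of-bounds write happens, which is the one-line check the original firmware lacked.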
We just ask the TPM what the PCR values are. There's a command you send to it and then it gives you back the PCRs. So that seems like a straightforward answer. But the TPM is hardware. We talk to hardware through the kernel. The kernel can just lie to us. The kernel could give back whatever PCR values the kernel wanted to. And this, again, is kind of a problem. The entire point of this is to determine whether we can trust the kernel, and if the kernel's untrustworthy, we have no means of determining whether the answer is accurate or not. If we get back incorrect values, we know something's wrong, but getting back correct values does not mean that everything's okay. So the standard way of dealing with this was something called remote attestation. And this was the thing that we were most concerned about in terms of trusted computing back in the early 2000s. Remote attestation is a protocol where the TPM has a cryptographic conversation with a remote server. Every TPM has what's called an endorsement key built in. And the endorsement key has a certificate that chains back to the TPM manufacturer. So you can ask the TPM to sign something with its endorsement key. You can ask the TPM for the endorsement certificate. You can verify that the endorsement certificate chains back to the TPM vendor. You can verify that the TPM is able to sign something with that key. And that means that the TPM has control of that specific endorsement key. You know that you're talking to a real TPM at that point. So through a reasonably complicated chain of events, you can then create a secure cryptographic communication channel between the TPM and a remote system. And when I say between the TPM and the remote system, this is all mediated by your operating system. If your operating system doesn't want this to happen, it won't happen. This can't be done against your will if you're running an operating system you control.
The local operating system has no means of interfering with the content of remote attestation. If the remote system gets a set of values from your TPM, it knows that those are the values that the TPM handed over. All the operating system can do is basically perform a denial of service. It can prevent this from happening at all. The remote system can then look at the PCR values and decide whether the system is in a trustworthy state. And that could be based on any kind of arbitrary policy. Different components are measured into different PCRs. So you can look at the system and say, okay, I recognize this firmware version because the PCR that contains the firmware measurement is this value. And I recognize this bootloader, and I don't care what kernel is being booted. You can do something like that. The problem here is that you need to trust the remote machine, and you don't have any particular reason to trust that the remote machine has not been subverted and is now operating outside your decisions about trust. You could have a setup where your computer attests to the remote machine and then the remote machine sends you an SMS back saying your computer is good or your computer is bad. But if someone compromises the remote machine, then it could always just say your computer is good. And obviously, you also need network access. So this means that if you're using remote attestation, there's no way for you as a local user to figure out whether your system is trustworthy. If you're on a plane without Wi-Fi, if you're in the middle of the countryside, or if you just don't feel like connecting to the network, you can't do remote attestation. So it's not ideal. And then, how does the remote machine communicate back to you? An SMS, maybe, but then someone could just interfere with that and send you a fake SMS. We know that the phone networks are not as secure as we would like them to be, especially if your adversary is a government.
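The shape of that exchange can be modelled in a few lines. To be clear about the assumptions: a real TPM signs the PCR values plus a verifier-supplied nonce with an RSA identity key that chains back to the endorsement key; this toy stands in an HMAC key for that signature, purely to show the protocol shape (the nonce prevents replay, and the verifier checks the PCRs it cares about against a known-good policy).

```python
import hashlib, hmac, os

EK = os.urandom(32)  # stand-in for the TPM's key material (really RSA)

def tpm_quote(pcrs: dict, nonce: bytes) -> bytes:
    """The TPM 'signs' its PCR values together with the verifier's nonce."""
    blob = b"".join(pcrs[i] for i in sorted(pcrs)) + nonce
    return hmac.new(EK, blob, hashlib.sha256).digest()

def verifier_check(pcrs: dict, nonce: bytes, quote: bytes, policy: dict) -> bool:
    # 1. the quote must be genuine and fresh (bound to our nonce)
    if not hmac.compare_digest(quote, tpm_quote(pcrs, nonce)):
        return False
    # 2. the PCRs the verifier cares about must match known-good values
    return all(pcrs.get(i) == good for i, good in policy.items())

pcrs = {0: hashlib.sha1(b"firmware").digest(),
        4: hashlib.sha1(b"bootloader").digest()}
nonce = os.urandom(16)
quote = tpm_quote(pcrs, nonce)
policy = {0: hashlib.sha1(b"firmware").digest()}  # "I only care about PCR 0"
print(verifier_check(pcrs, nonce, quote, policy))  # True
```

Note the policy point from the talk: the verifier here only pins PCR 0, so it recognizes the firmware but doesn't care which kernel booted.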
And over some sort of secure connection? Well, if you're running on your local system, then fine, you open the browser, the browser says yes, everything's fine, but the browser could have been compromised by the kernel at an earlier point in this chain, so you can't trust what it's telling you, even though everything looks okay. So remote attestation is a problem because it relies on you having to trust a large number of systems that aren't in your control. And that's not the kind of situation we want. We're asserting that the TPM is trustworthy. Everything I'm talking about is completely pointless if the TPM isn't trustworthy. So if you'd like to treat this as an academic exercise, that's completely fine. I understand your point of view. But if the TPM is trustworthy, then we can take advantage of the other features that the TPM offers to control secrets. And this is good, because one thing that TPMs can do is control data release based on the state of the PCRs. As well as measurements, TPMs are able to encrypt small quantities of data. You can ask a TPM to generate an RSA key, and you can then ask the TPM to encrypt something with that. And the cool thing here is you can ask the TPM to encrypt that data along with information about the PCR values. Now, the key to decrypt this data is kept on the TPM. The private key used for decryption is never in system RAM. It's never on your hard drive. This data is encrypted with a key that is in the TPM. So there's no straightforward way for someone to get hold of the private key that they need to decrypt this. If you've sealed the data to a set of PCR values, then when you hand the data back to the TPM, the TPM decrypts it and then looks at the current PCR values and compares those to the values that were recorded. And if they match, it will return the decrypted data. And if they don't match, it will return an error.
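The seal/unseal behavior can be sketched as a toy model. To be explicit: real TPM 1.2 sealing uses RSA storage keys and the TSS stack, none of which appears here; this simulation only illustrates the policy check, with a key that never leaves the "TPM" object and data bound to the PCR values at sealing time.

```python
import hashlib, os

class ToyTPM:
    """A software model of PCR-bound sealing. Not a real TPM interface."""

    def __init__(self):
        self._srk = os.urandom(32)                  # never leaves the TPM
        self.pcrs = {i: bytes(20) for i in range(24)}

    def _keystream(self, pcr_digest: bytes, n: int) -> bytes:
        return hashlib.sha256(self._srk + pcr_digest).digest()[:n]

    def seal(self, secret: bytes, pcr_indices):
        digest = hashlib.sha1(b"".join(self.pcrs[i] for i in pcr_indices)).digest()
        ks = self._keystream(digest, len(secret))
        blob = bytes(a ^ b for a, b in zip(secret, ks))
        return (tuple(pcr_indices), digest, blob)

    def unseal(self, sealed):
        pcr_indices, digest, blob = sealed
        current = hashlib.sha1(b"".join(self.pcrs[i] for i in pcr_indices)).digest()
        if current != digest:
            raise PermissionError("PCR mismatch: boot state has changed")
        ks = self._keystream(digest, len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks))

tpm = ToyTPM()
tpm.pcrs[4] = hashlib.sha1(b"good bootloader").digest()
blob = tpm.seal(b"disk-key", [4])
print(tpm.unseal(blob))                             # b'disk-key'
tpm.pcrs[4] = hashlib.sha1(b"evil bootloader").digest()
# tpm.unseal(blob) would now raise PermissionError
```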
So this means that it's possible to have a secret that will only be released if the measurements are good, which then should indicate that your system is trustworthy. So you could use this, for instance, as a way of storing your disk decryption key. And that means that if your boot process is interfered with, if the measurements no longer match, the TPM will not release the disk decryption key. And as a result, your boot loader will not be able to decrypt your hard drive and your system will not boot. Now, this doesn't protect against the case where someone just steals your laptop. If that's all you're doing, then all they need to do is turn your laptop on and it'll boot. The TPM is not verifying that you are the person booting the system. All the TPM's doing in this case is verifying that the boot state has not been tampered with. So, okay, we can deal with this. You enter a passphrase as well. So the system boots. The TPM decrypts the key. If the TPM successfully decrypts the key, that's half of the process. We still need you to type in a passphrase in order to decrypt the key through a second stage and actually allow you to access your hard drive. So this seems safe, because if someone steals your machine and turns it on, then they still can't get access to your system without your passphrase. And if the boot process is tampered with, then the first step of this will fail and the system will not prompt you for the passphrase. There's a problem with this line of thinking, which is: fine, you boot your system and you get a password prompt and you think, okay, the system is in a secure state. I can trust the system. I can type in my password. And you type in your password and you hit enter and it looks like the system is booting and then, oh no, Linux is bad. This happens. I'll try turning it off and turning it on again. And you turn it off and you turn it on again and you get your password prompt again and you type in your password again and the system boots.
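One way to picture the two-stage unlock described above is to derive the actual disk key from both halves: the TPM-released secret (only available if the measurements were good) and the user's passphrase, so neither alone is enough. This is a sketch only; the parameters are illustrative and real disk encryption setups (LUKS, for instance) do their key derivation differently.

```python
import hashlib

def derive_disk_key(tpm_secret: bytes, passphrase: str) -> bytes:
    """Combine the TPM-released secret with the passphrase via PBKDF2.

    Both inputs are required: a stolen laptop yields the TPM secret but
    not the passphrase; a tampered boot yields neither.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                               tpm_secret, 100_000)

key = derive_disk_key(b"secret-released-by-tpm", "correct horse battery staple")
```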
And so everything is fine. Software is bad. We're bad at writing software. This is unsurprising. This happens. How many of you have never seen a kernel panic at boot? Awesome. Nobody put their hands up. So great. This seems like a thing that might just happen, right? Now, the problem is, how do you know what's actually printing that? You have no way at this point of knowing that this password prompt is legitimate. Now, you do know that if the boot process has been tampered with, the TPM will not release the secret, so the passphrase on its own is useless. But you still don't know that that's a real password prompt. Someone could have infected your system with some code that modified your boot process and then put up a fake password prompt, waited for you to type in the password, saved that password somewhere, printed a fake kernel panic, and then uninstalled itself. You reboot and now everything works correctly because your boot process is now trustworthy. The problem is, there's now a copy of your passphrase somewhere on your system that an attacker can get hold of, and now they can walk in, steal your laptop, and they have your secret and they have your passphrase. So this is not ideal. The problem here is that you're still typing in a secret before you know that the system is trustworthy. Now, the idea here is that the Evil Maid attack is one where, say, your laptop is in a hotel room, and an adversary enters your hotel room while you're out, tampers with your system, and then leaves. Your system has now transitioned from being trustworthy to not being trustworthy. Anti Evil Maid was a technology developed by Joanna Rutkowska at Invisible Things Lab that was intended to make this kind of attack less practical. Rather than just blindly typing in a secret, you would instead have a mechanism by which the system would prove to you that it was trustworthy in advance.
And this time, rather than just encrypting a disk encryption secret, you encrypt a phrase that's only known to the user. So when you install Anti Evil Maid, you type in a secret phrase that nobody should be able to guess. That's then encrypted with the TPM and again sealed with a specific set of PCR values. So the system boots and then, if the system's in a trustworthy state, that secret can be decrypted and then printed. And if you see that phrase, then you know that your system is in a trustworthy state. If you don't see that phrase, then you should not consider the system trustworthy and you should not type anything into it. There are a couple of problems with this, and the first is that if it's always printed, if you have this as part of your normal boot process, then someone can just walk up, reboot your system, read the phrase off the screen, and then install some malware that prints the same phrase even if your system's not in a trustworthy state. So there's a way around that, which is that rather than having the secret stored on the system, you store the secret on a USB stick. And then when you boot the system with the USB stick, the secret can be decrypted and printed. But that relies on the user carrying around a USB stick. It relies on the user having the discipline to always boot with the USB stick if there's a risk that their system's been tampered with. And so you could potentially end up in a situation where a user forgets to do this and their system has been compromised. So exposing a static secret is a little bit suboptimal. Ideally, we would not have a static secret. We would think of some sort of dynamic exposure, something that changes over time, something where merely having possession of a single instance of the secret being presented does not allow you to present the correct secret in future. We want a shared secret with some sort of dynamic component. And I would really hope that all of you are already using something along these lines.
If you're not, you should feel bad, and then you should go and enable it immediately, because this is basically the design goal of TOTP, the protocol that's used for the majority of two-factor authentication. You have a secret... When you're logging into a website, when you enable 2FA, the website generates the secret and stores it, and the website typically prints a QR code, and you enroll that QR code on your phone, and now your phone also has a copy of the secret. But the authentication that you present is based not just on that secret but also on the current time. So there's an algorithm by which you take the secret and the time, and then you generate a six-digit number. You present that six-digit number to the website. The website then verifies that it's the correct number, because the website also knows the secret and, with luck, the website also knows the time of day. That's pretty much a solved problem. And this is nice because it's very easily available. You can get lots of two-factor authentication apps. You can get free software two-factor authentication apps. You can run free software authentication apps on free software operating systems. You can even run them on free software operating systems that run on top of basically free firmware. So that's nice. Users have been trained to understand this kind of thing fairly well. Lots of people use two-factor authentication and somehow the world hasn't ended. So we can probably assume that users are reasonably familiar with the idea that you have this magic six-digit number, even if they don't understand the technology behind it. So what we can do is seal the TOTP secret to the TPM. Print a QR code that then allows the user to share the secret between their laptop and, say, their phone. And then on boot, we print a six-digit number. And then the user checks their phone and checks whether it's the same six-digit number. And if the two numbers match, then the system is trustworthy.
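The algorithm itself fits in a few lines. This is TOTP with its usual defaults (RFC 6238: HMAC-SHA1, 30-second time steps, six digits); in the scheme described here the shared secret would be the value sealed to the TPM, but it's shown as a plain constant for illustration.

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, now: float, step: int = 30, digits: int = 6) -> str:
    # The moving factor is the number of time steps since the Unix epoch.
    counter = struct.pack(">Q", int(now) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the MAC itself.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"12345678901234567890", time.time()))
```

Both sides compute this independently from the shared secret and the clock, which is why the laptop and the phone show the same six digits at the same moment, and why a code read off the screen is useless thirty seconds later.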
Or, you know, there's a 1 in a million chance that the system is compromised and it just happened to produce the right number anyway. You could use longer numbers if you want. So I'm going to give a quick demo of this. As you can see, I have wonderful source control. So I simply run the sealed TOTP application. In this case, I'm running this as root because this version of it talks to the TPM directly through the kernel interface rather than using the TPM daemon. And then, whoa, there's a QR code. Isn't ANSI wonderful? So if I make that a bit smaller, anybody with a phone in the audience can now steal a copy of this secret. Anybody watching is on... So I'm then going to, you know, not actually use this secret for anything important, because I'm not an idiot. And then if I run this... Whoa! I get a six-digit number. And if I had enrolled that QR code on my phone and looked at it now, it would show the same six-digit number. I'll leave it to the audience to verify whether this is the case afterwards. Just as a reminder, if you're attempting to do this while watching the video later, it won't work, because time has passed. And that's kind of the point. So, back to this. There are still a couple of problems with this. And the first is that the secret exists in RAM. The TPM itself cannot do the TOTP generation, and so we have to allow the TPM to decrypt the secret and then release the secret into RAM. And that means that anybody who's able to access the contents of RAM is able to access the secret. Now, if they've compromised the boot process, this isn't a problem, because the TPM will refuse to release the secret. But there are memory-based attacks that you can perform even if the system's in a trusted state. Like, if you have any externally accessible DMA-capable buses like ExpressCard or Thunderbolt, you can just DMA the secret out of RAM. So, you really want the IOMMU to be enabled.
Unfortunately, many free operating systems still disable the IOMMU by default even if the hardware supports it, because we're bad at software. Also, Intel are bad at GPU design, but that's another story. So, we can think about alternatives. And one option is to use TPM keys themselves. When we generate a TPM key, we can ask the TPM to restrict that key to specific PCR states. That is, the TPM will refuse to perform any encryption, decryption, or signing operations with that key unless the PCR state matches. So, we could generate a key. We can seal that to the TPM. And then we can ask the TPM to sign the current time of day. And the TPM can then provide that signed object to us in some way, like maybe printing another QR code. And then you can verify that the signature matches. So, we may, for instance, have an application where we enroll the public key on your phone. Have the TPM sign the time, print a QR code, you scan that QR code with this application. The application then verifies that the QR code is signed correctly. That requires us to write new software. I really dislike writing software, so it's unfortunate that I have to do that for a living. It's less familiar to users, and one of the big problems with security is making things as easy for the users as possible; asking them to do something new and different and exciting every time they boot their system is not ideal. So, is there anything else we can do? And, well, okay, we could generate a key outside the TPM. We could then import it into the TPM so it's now under the TPM's control. Rather than signing the time of day, we could encrypt the time of day, hash it, and then extract six digits and print them. Now, this is very like TOTP, but it's incompatible, so you'd still need a different application on the phone. But in this case, the behavior would be very similar, and it would probably be easier to train users.
The downside to this is that in order to verify this, because we're doing encryption, the phone also needs a copy of the same private key, which is why we have to generate the key outside the TPM rather than in the TPM. And the problem with generating a key outside the TPM and then importing it into the TPM is that, due to some unfortunate design decisions, any key that's been imported into a TPM can then be exported from the TPM. So anybody could then ask the TPM to just release a copy of this key. So if anybody is able to get hold of your system and then run arbitrary code, they could just extract that private key from the TPM and then they'd be able to fake things at a later point. And maybe we're overcomplicating things a little. Maybe there's benefit in additional simplicity, because apparently simplicity is a virtue, something that is repeatedly stated by operating system authors, and apparently they don't understand either the word simplicity or the word virtue. TPMs have a bunch of GPIO pins. This is nice. You can attach interesting bits of hardware to your TPM. Now, it's not immediately obvious why you'd want to do that, because you can't execute arbitrary code on the TPM. But access to the GPIO pins can be restricted based on the PCR state. You can set things up such that you can only write to the GPIO if the PCR state is correct. So you, for instance, attach a small LED to the GPIO pin, and then at boot, the operating system attempts to write to that LED. If the PCR values are correct, it can do so: the LED turns on and your system's in a good state. So hooray! This seems pretty awesome. This is a very easy modification to make to laptops. We just need an extra LED that's tied to a GPIO line on the motherboard. So cool. Except it turns out that it's quite easy to just attach things to LEDs to make them turn on. So the hardware that you would require in order to attack this is not particularly complicated.
Yeah, so it's kind of a nice idea, but it's not ideal. Another thing is, we've been thinking about using the screen or using an LED for communication. We could instead use some other interface. An example of that is NFC. So you could, for instance, basically perform remote attestation over NFC. Your system boots to a certain point and then some sort of prompt appears. You walk along, you tap a physical object you have in your possession against the machine, and then communication is performed. The TPM produces some sort of signed material and passes that to the device. The device verifies that this is correct. And one of the nice things about this is that you can now also move away from things like having a passphrase. You can have a secret that's embedded in the physical device. And if you don't tap the physical device against the system, that secret is not provided, and so the system refuses to boot. One of the nice features about this approach is that you can't just ignore the prompt. If I get a six-digit number on the screen, then I can still just skip verification and go ahead and type in my secret. In this case, if I always have to tap my phone or something else against the laptop, and if that does not automatically result in the system then booting completely, there's no way for me to forget to do this. If I don't do this, the system doesn't boot. And this means that you can ingrain appropriate behavior into your users. It's much easier to train users if there's no way they can get it wrong. I have previously worked as a sysadmin, so this is unfortunately personal experience. The problem here is, well, you're still limited in terms of hardware support. The majority of laptops don't have any NFC capability. And even the majority of phones don't have any NFC capability. But where you do have this hardware, this is a potentially interesting avenue to explore.
And when I say it's an interesting avenue to explore, I mean, it would be very nice if someone else would write the code to support this. That's my usual approach to trying to get new software. However, even with all of these things, we're still vulnerable to certain kinds of attack. Now, previously I mentioned DMA-capable devices. And if you have no IOMMU, if you have no way to prevent DMA-based attacks, just give up. That's not going to work out well for you. Unfortunately, there is one device that is always on your system and is always capable of performing DMA. And that's the management engine that's built into all recent Intel systems. AMD will be shipping something equivalent at some point, I'm sure. So, you know, in the x86 world, we're basically stuck with this idea that there is a device that can do arbitrary DMA. And if it can do arbitrary DMA, then even if your system appears trustworthy up until that point, someone could have injected arbitrary code into the management engine at some point, and that person who injected code into the management engine may work for a three-letter organization, for instance. And they're going to have the ability to steal all your secrets. Or even if they don't steal your secrets, they can just compromise your operating system after it's booted, after you've thought you're in a completely trustworthy state. There's no good answer to this at the moment. This is a problem in a variety of ways. It's a problem in terms of us having trustworthy firmware. It's a problem in terms of us having a trustworthy operating system. We do not know for sure how tightly controlled access to the management engine is. We do not know how secure the management engine is, how competent the developers were. All we have is Intel's assertion that the management engine is written by top men. Having seen various bits of code that Intel have produced over the years, I would not want to state that Intel are always good at writing software.
And obviously, you can attack the TPM itself. TPMs are not intended to be tamper-resistant. The specification explicitly leaves any kind of tamper stuff up to either the platform specification or for individual vendors to compete over. And, yeah, this is not ideal. A lot of them define tamper evidence. And a TPM is tamper-evident: if someone tampers with it, you can see that. And a lot of vendors interpret this as, well, look at the motherboard. Is the TPM still there? If so, it hasn't been tampered with. Which is not ideal. TPMs are not intended to be particularly resistant to attacks. It's almost certainly the case that if someone decaps a TPM, they're going to be able to read the data out of it. This is something that governments would certainly be able to do. It's something that a sufficiently skilled amateur would probably be able to do for not that much money or time. So that's not really ideal. It's okay, though. We have a good solution to this. Intel have come up with the idea that rather than having a physical TPM on the motherboard, you can have a software TPM that runs on the management engine. Awesome. And, obviously, all of this also relies on you having a TPM. As of July this year, all new systems with Windows 10 certification will be required to ship a TPM. So, that's nice. Thanks, Microsoft. If you've got an Apple, well, that's unfortunate. If any of you are actually doing any kind of confidential work on Apples: stop. For the love of God, please, please stop. The absence of any kind of plausible boot security on Apples is a disaster. There is no secure boot. There is no way to do a measured boot, so you can't verify the trustworthiness of the system later. Even if you're using full disk encryption, the password prompt could be written by anything. It can then man-in-the-middle your disk encryption passphrase, store it, and then allow anybody to have access to your system later.
Don't use Apple laptops if you have anything you want to keep secret in any way whatsoever. Well, obviously, if hardware isn't trustworthy, we have no mechanism to build trust. If the CPU can't be trusted, there's no way we can build a trustworthy laptop. If your hardware has been designed by the manufacturer to be untrustworthy, you can't do anything about that afterwards. The only real improvement we can make there is eventually to have more control over the entire process. And there are still going to be cases where, even if we have fully open-source CPUs, even if we have the ability to verify the state of every transistor in the design and ensure that the design is completely trustworthy, when it was fabbed it could still have had a backdoor introduced. Solving this is a very, very difficult problem, and I'm going to say that I have no idea how to solve it. But incremental improvements in security are still improvements in security. We can't just sit there and say, we can't solve the really hard problems, so therefore we should ignore every other problem. We still need to do what we can to make it easier for people to verify, to the maximum extent possible, that their system is trustworthy. So, there is some code available. For this to work, for you to be able to do trusted boot, you need a bootloader that does full TPM measurements. I have a set of patches to the shim first-stage bootloader that supports doing measurements. I have some patches for grub that also support doing measurements. The combination of these means that you can then have a completely measured boot process up to the point where the kernel starts. So, the initrd is measured, the kernel's measured, grub's measured, shim's measured. Any modules grub loads are measured. I'm even measuring the commands that you type into grub, which is maybe overkill. And then the code that I demonstrated for generating and encrypting the TOTP secrets and then decrypting those is available there as well.
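The overall scheme can be sketched in a few lines of Python. To be clear, this is a toy simulation, not the actual demonstrated code: the hash chain stands in for real PCR extends, the XOR "seal" stands in for the TPM's internal sealing, and the component names are invented. The TOTP function itself, though, is the standard RFC 6238 HMAC-SHA1 construction, checked here against the RFC's published test vector.

```python
import hashlib
import hmac
import struct

# Toy simulation: a TOTP secret sealed to PCR values only unseals when the
# measured boot chain matches, and then yields the code the phone verifies.
# The XOR "seal" stands in for the TPM's internal sealing; component names
# are illustrative, not real measurements.

def extend(pcr: bytes, data: bytes) -> bytes:
    """TPM-style PCR extend: new = SHA-1(old || SHA-1(data))."""
    return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

def measure(components) -> bytes:
    pcr = b"\x00" * 20
    for c in components:
        pcr = extend(pcr, c)
    return pcr

def seal(secret: bytes, pcr: bytes) -> bytes:
    """Toy sealing: wrap the secret with a key derived from the PCR digest."""
    key = hashlib.sha256(b"seal" + pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, key))

unseal = seal  # XOR wrapping is its own inverse when the PCR (hence key) matches

def totp(secret: bytes, unix_time: int, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP over HMAC-SHA1, 30-second steps."""
    mac = hmac.new(secret, struct.pack(">Q", unix_time // 30),
                   hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                # RFC 6238 test secret
good = measure([b"shim", b"grub", b"kernel", b"initrd"])
blob = seal(secret, good)

# Clean boot: the chain matches, the secret unseals, the code displays.
assert unseal(blob, good) == secret
print(totp(secret, 59, digits=8))               # RFC 6238 vector: 94287082

# Tampered boot: the chain differs, so the unsealed bytes are garbage.
evil = measure([b"shim", b"evil grub", b"kernel", b"initrd"])
assert unseal(blob, evil) != secret
```

The property being relied on is that the sealed secret is only recoverable under the exact measured state, so a modified bootloader cannot display a code that matches the one on your phone.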
Ideally, we would integrate these into systems such that it's very straightforward for users to just enable this functionality. There are some additional complexities around this, which I'm happy to talk to people about if they're interested, but it's sufficiently subtle that it's probably not worth it right now. Anyway, we have about 10 minutes left. So, if we could go to some questions. As always, if you have questions, please line up at the mics. There are three mics on each side, and two mics on the top. And also, if you're on the stream, of course, as always, go to the IRC channel, go on Twitter, and people will read out your questions for you. We'll actually start from the Internet, please. Thank you. We had some discussion on the IRC, whether it would be possible to use the NFC approach, but instead replace it with, for example, a QR code, so that the TPM presents some signed material in a QR code, the phone verifies the signatures, produces a six-digit number you have to enter in your computer to proceed with a secure boot. Yeah, I don't see any reason that that wouldn't work. Sure. Yeah, so you would need to ensure that the application didn't just provide a six-digit number, but the application also prints an indication as to whether the certificate verified, but yeah, I don't see any reason that that wouldn't work. Microphone number eight, please. Yes, thanks for the great talk. I would like to know how, you know, if you, like, seal the TPM stuff to the configuration registers, like with your bootloader signature and stuff, how does this, or does this even, handle updates? That was the subtlety that I was hoping to just conveniently elide right now. There are a few ways of doing this. So, for instance, when you perform an update, when you know that the measurement is going to change, you could have a mechanism for passing that information back to the bootloader.
The bootloader could then, rather than loading the kernel and initrd, perform the same measurements, but rather than booting the kernel, it can then unseal the secret, reseal it to the new values, and then reboot, and that would work. That means that you need to have a secure communication channel between the operating system and the bootloader. I think there's a couple of ways you could potentially do that with EFI authenticated variables, but the way that most existing implementations get around this is that when you do an update, the secret is unsealed and then saved to the hard drive. You then boot without verification and then you reseal it, and that's not entirely ideal. I think that maybe ends up encouraging users to behave in a reckless manner, but yeah. The state of the art here is mixed. Even proprietary operating systems don't do a great job of it. I think that that's something where we could come up with something better, but it's going to be some work figuring out exactly how to integrate this with update mechanisms. It's possible, but it's not convenient. Microphone number one, please. Hey, sorry if you already answered this. I was late to the talk, but have you ever looked at the Chrome OS security design and their use of TPMs and secure boot? I mean, they control the hardware platform as well. Right, so Chrome OS is basically in the state where the TPM is the root of trust and you then have secrets for things like hard drive encryption tied to the TPM. This does ensure that if the system is not trustworthy, you do not have access to your secrets, but it's still vulnerable to the case where you have a compromised boot system that presents a fake prompt where you type in your passphrase or something and then reboots itself. So Chrome OS gets a lot of the way there, but it's still vulnerable to certain classes of attack.
The laptop doesn't do a good job of proving to you that it's in a trustworthy state. It merely asserts that it is. Okay, thanks. I'm not entirely sure if that would work, but I'll think about it. Thanks. Microphone number two, please. Okay, I have two small questions. The first one would be, why is the TPM on the motherboard and not just integrated into the CPU, to prevent, well, to just remove it or something? And the second, wouldn't it be a very simple solution to just restart a computer if the TPM decides that it isn't trustworthy, instead of, well, showing a code or an LED light or something? So the first question, the short answer is international geopolitics. The long answer is TPMs fall into the kind of categories of hardware that it's difficult to export to certain countries. If Intel integrated TPMs into their hardware, Intel would not be able to sell those CPUs in certain countries. Also, various people relying on TPMs do not necessarily trust US manufacturers. TPMs are pretty much all pin-compatible. You can just swap them out. If you look at the ACPI tables for Lenovo, you can actually see that Lenovo supports, on the same system, TPMs from four different manufacturers. And some of that's just for convenience in terms of choosing whoever's cheapest for a particular production run. But some of that is because certain customers will want TPMs from a certain manufacturer because they trust that country. Some of them will want TPMs from a certain country because they're legally required to. In China, you're probably going to want a Chinese TPM. So things are a little more complicated than you might imagine. The upside of this is that if your CPU is untrustworthy, someone would still need to compromise your TPM as well.
And if your TPM comes from, say, Infineon, if it's a German TPM, and if your CPU is from Intel, you're going to either need someone to have compromised both manufacturing in Germany and the US, or you're going to have needed to have screwed up badly enough that both the NSA and Germany are really upset at you. The second question, why can't the TPM just reboot the system? The TPM has no mechanism to operate on its own. The TPM just does not have that ability. It's impossible to do that with existing hardware. Signal Angel, please. Thank you. Are there any operating system projects that would be happy to implement this as their standard? I don't see any reason why any free software operating system couldn't have this functionality added to it. It's really just a matter of people putting in the work. I keep meaning to get around to it, but I'm also incredibly lazy. Number one, please. A tool called Evil Abigail was released recently that automates modifying the initrd to insert a backdoor. I was wondering if you had looked at that tool or had any comments on it. So that is basically the kind of attack that this is intended to protect against. If the initrd is modified, then the measurements of the initrd will change, and so the secret won't be released. So Evil Abigail will just not work. Thank you. Number two, please. Thanks. The TOTP implementation that you did, you mentioned that one of the problems was you had to pull the secret out of the TPM in order to actually perform the TOTP operation. But the algorithm, it's like HMAC-SHA-1 or something like that. Is the TPM capable of doing it directly? No. Upsetting, but yeah, it would have been a lot easier in that case. And two again, please. Hello. What prevents the following attack? An attacker steals your laptop, creates an identical copy where your laptop was, and then, when you boot the fake laptop, it shows you a secret.
And the attacker is booting your legitimate laptop at the same time, communicating with the fake laptop and showing that same secret. Okay, so basically, the system is compromised such that rather than talking to the TPM locally, it's talking to a remote TPM. Correct. Ooh. Yeah, that sounds upsetting. Make sure you boot your laptop in a Faraday cage. Thank you. Or use lots of stickers on your laptop, maybe. Microphone number eight, please. Are there any remote attestation or hardware boot measurement features available on non-x86 systems? TPMs are not fundamentally tied to x86. There are certainly ARM systems in existence that have TPM-based functionality. In some cases, a hardware TPM. In some cases, a software TPM that's running in TrustZone, which is, again, utterly terrifying. But in that case, you can do exactly the same operations. In terms of whether there are other implementations, based on completely different technology but basically solving the same problem: not that I know of. I'd say that if this is a problem you're trying to solve, the easiest way to do it is just to drop a TPM onto the system. There's nothing x86-specific about them. You could quite happily have a TPM attached to a MIPS system if you wanted. Thanks. Internet, please. Thank you. What is the difference between your code compared to TrustedGRUB2? My code, as compared to TrustedGRUB2. So TrustedGRUB2, is that the one from the, I can't remember their name, the French organization? Okay, anyway. That's... My recollection is that other implementations only support TPMs on BIOS systems. My implementation also adds support for EFI. So you can combine this with EFI Secure Boot. And you can have verification of the state. One of the nice things about the EFI implementation of this is that your key database is hashed into a PCR as well. So you can verify in the same way that nobody's tampered with the key database. And if you're really...
This allows for some interesting policies, like you can seal against only the firmware and the key database, and then you can just assert that Secure Boot resulted in everything else being trustworthy. And that makes updates simpler, because signed stuff just boots magically without you having to interfere with it. Anyway, yeah. I think that's the main difference. And that's it. Please, once again, thank Matthew.