Thank you very much, everyone, for coming, and thank you to DEF CON for having me. This is a nice big crowd, so I'm really grateful for the opportunity. This presentation is on protecting data with short-lived encryption keys and a hardware root of trust. My name is Dan Griffin. I'm the president of JW Secure, a Seattle-based consulting company that specializes in custom security software development. Today, JW Secure released a tool for experimenting with short-lived encryption keys, along with a white paper discussing secure time and mobile computing. The tool and white paper are linked in blog posts we put out this morning, and the URL of my blog is on the slide. This presentation covers the context of that work: why we need high-assurance data protection, what the foundations for achieving it are, and how it can be undermined.

General Alexander, the head of the National Security Agency, was at DEF CON last year to give a recruiting talk. Did everyone go to that? All right. Did everyone sign up? Are there any feds in the room right now? I actually don't see any hands. I guess General Alexander's message was a little too controversial for DEF CON, which is saying something. In other, more open and accepting venues, the NSA has discussed the need to be proactive about secure mobile computing. Does that strike anybody else as hypocritical? All right. The big takeaway for the rest of us is that the NSA is working on mechanisms for delivering trusted content from the cloud to mobile devices; they've stated as much. So there must be ways for the rest of us to do the same thing.

Let's review the checkered history of mobile devices. When laptop computers first appeared, they were awkward and obviously from the early '80s. Early PDAs were also awkward, although I think the Treo was kind of cute. Now half the people sitting at Starbucks are on a laptop, and everyone has a smartphone.
Now that we can work and play in the digital world anywhere and at any time, we actually do. But as work moves to less secure computing platforms, hackers like Lisbeth here have a rich new world of opportunities. After all, hackers and spies are just like everyone else: they look for the maximum return on the minimum investment. So their focus is now on mobile devices. What can be done about this?

The first thing we want to do is define some characteristics of a trustworthy machine. But if there's one thing we learned from Terminator 1 and 2, it's that machines are sometimes trustworthy and sometimes not. Likewise, people talk about identifying the person in an internet transaction, but it's not a person, it's a machine, and it may or may not be properly expressing the user's intentions. Given all of that, how can we be sure that a remote party is telling the truth?

Desktop PCs and laptops can be configured to be secure, and users can be trained. Some of this applies to phones and tablets, and some of it doesn't. For example, it's hard enough to enforce a complex password policy when users have full-size keyboards; on phones, the keyboard experience is built around autocomplete, and it's not consistent across apps or devices. Likewise, many mobile platforms still lack the system-level extensibility required by good third-party antivirus products. Still, mobile devices have lots of untapped functionality that can be used to increase security, so let's see how to use it.

A relatively low rootkit risk is perhaps the only respect in which the phone is still more secure than the typical PC, at least for now. But application-level attacks have become much more prevalent. When a mobile app or game costs 99 cents or less, you can bet that zero attention has been paid to security in that supply chain. You get what you pay for. And in mobile, there's a whole new range of risks, from geolocation to remote theft of data and remote theft of service, including by the telcos.
In short, it remains difficult to get the device to reliably report on its current state, and that presents a risk both to back-end services and to sensitive data. To get reliable information from the device, we need an authenticated report from a tamper-resistant root of trust. This is the purpose of the Trusted Platform Module, or TPM. The TPM is a crypto processor, typically implemented either as a tamper-resistant chip on the motherboard or as a secure execution environment in system-on-a-chip firmware; the latter is becoming the norm.

Device integrity is the point of the exercise. But how can that help protect the stuff that the user and the custodian of the data care about? First, we want to measure that the device is running the way it's supposed to be running. Second, we want to ensure that sensitive data can be decrypted only on a compliant device, and only while the device remains compliant. By issuing encrypted content bound to a specific TPM, and to a specific state of that device as reflected in the TPM, we can really lock down the lifetime of that content.

So how can we determine whether the device is telling the truth about its state, given the risk posed by installing bad apps, if not rootkits? Remote attestation uses the TPM root of trust to determine whether the firmware, boot loader, and operating system are known good. I talked about remote attestation in detail in my DEF CON presentation last year, so I'm not going to go into as much depth this time; as you probably know, DEF CON takes really high-quality videos of these talks, so they're all available. What this means is that determining the health of a mobile device requires some infrastructure and a little cryptographic dance, but it's achievable. Nevertheless, I'm not suggesting that these techniques be applied to every internet transaction; they're too heavyweight for that.
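To make the shape of that cryptographic dance concrete, here's a minimal Python sketch of the server-side check: verify that the boot log really came from the device, then compare each measurement against a known-good list. The names and the use of HMAC in place of the TPM's asymmetric quote signature are illustrative assumptions to keep the sketch self-contained, not the real protocol.

```python
import hashlib
import hmac

# Hypothetical "known good" measurements the attestation service keeps
# for approved firmware and boot images (golden values).
KNOWN_GOOD = {
    "bios": hashlib.sha256(b"approved-bios-image").hexdigest(),
    "bootloader": hashlib.sha256(b"approved-bootloader").hexdigest(),
}

def verify_attestation(boot_log: dict, signature: bytes, tpm_key: bytes) -> bool:
    """Accept the device only if (1) the log signature checks out and
    (2) every component we care about matches a known-good hash.
    HMAC stands in for the TPM's quote signature in this toy model."""
    message = repr(sorted(boot_log.items())).encode()
    expected = hmac.new(tpm_key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # log was forged or tampered with in transit
    return all(boot_log.get(name) == digest for name, digest in KNOWN_GOOD.items())
```

A tampered log fails the signature check; a correctly signed log from a modified device fails the known-good comparison. Either way, the server refuses to release a key.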
But for securing high-value data on consumer-class devices that are sometimes disconnected, this is currently the best foundation we have. So let's dig in. With measured boot, starting with the BIOS, before each component in the boot chain is loaded, the previous component computes the hash of its on-disk image and extends that hash into the TPM's platform configuration registers, or PCRs. After boot, a boot log can be retrieved from the TPM. The log includes the boot image hashes I just mentioned; it also includes code-signing information, as in who signed each binary, as well as other boot metadata, for example, whether disk encryption was used to unlock the boot volume. Importantly, the TPM can sign that boot log with a special-purpose key that you can, downstream, decide whether or not to trust.

The server can then issue content decryption or authorization keys bound to those measurements. More precisely, the server encrypts the key to the manufacturer's endorsement key, which is unique to each TPM. So when the device state changes, the measurements change, and the TPM will refuse to use that encrypted key thereafter. We have a pretty powerful mechanism here; let's see how to wire it together.

Trust starts with the TPM and the key distributed with it. As I mentioned, this key is set by the manufacturer or the OEM, along with a PKI certificate signed by that manufacturer. And there's a pretty short list, so if you keep that set of issuing certificates, you can determine across a variety of devices whether you trust the chain. Thus, the TPM is as protected as any hardware or firmware can be; in other words, electron microscopes are a problem, as is an insecure supply chain. Importantly, TPM 2.0 includes a secure, monotonically increasing counter. This, at least, is a more secure option than the standard PC real-time clock when it comes to enforcing time-sensitive policies.
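The PCR mechanics are simple to model. A PCR can never be written directly, only extended, where the new value hashes the old value together with the new measurement. This minimal Python sketch (a toy model of a single SHA-256 PCR, not a real TPM interface) shows why the final value commits to the entire boot chain and its order:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old value || measurement).
    Because a PCR can only be extended, never set, the final value
    commits to every measurement and to the order they arrived in."""
    return hashlib.sha256(pcr + measurement).digest()

# Simulate measuring a three-stage boot chain into one SHA-256 PCR.
pcr = bytes(32)  # PCRs reset to all zeros at power-on
for component in (b"firmware image", b"bootloader", b"os kernel"):
    pcr = extend(pcr, hashlib.sha256(component).digest())
```

Swap out any one component, or reorder them, and the final PCR value changes, which is exactly why a key bound to that value becomes unusable after the device is modified.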
For example, a policy might need to start now, end at a certain time, have a limited window, or allow only a single run, in various combinations. This counter on the TPM is the foundation of our work on short-lived keys.

Once the client device has received a measurement-bound key from the remote attestation service, how can we use that key? Well, consider that constant reauthorization is expensive and that users hate it. Kerberos, for example, uses a ticket that is good for, say, eight hours, and has been able to mitigate some of the burden of reauthorization that way. The point is that this validity period is a policy setting that can be ratcheted down for the truly paranoid or increased for a better user experience. Why not use the same technique to protect mobile data?

Again, for the gory details of how to implement a data loss prevention or digital rights management solution using TPM measurement-bound keys, please see the white paper I mentioned at the beginning of this talk. To run your TPM through its paces, check out the timed-key tool I mentioned as well. I'll warn you that the prerequisites for running that tool are currently relatively steep: unless you have a system-on-a-chip-based Windows 8 tablet or laptop from the past couple of months, you probably do not have a TPM 2.0 system, which I admit is a little lame, but bear with me. Obviously, this is a particular concern if you're planning on building solutions based on that capability, like my company is. But we believe it will make sense in the near term, especially for small, high-value deployments. Also, if you're serious about developing custom TPM middleware, you'll want to join the Trusted Computing Group, and specifically the TPM subgroup, because that's how you can download their full TPM simulator reference implementation. That will save you months of engineering.
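The reason the monotonic counter matters for these validity windows is that, unlike a wall clock, it can never be rolled back, so an expired key can never become valid again. Here's a minimal Python sketch of the idea; the policy structure and field names are illustrative assumptions, not TPM 2.0 data types:

```python
from dataclasses import dataclass

@dataclass
class CounterPolicy:
    """Hypothetical policy attached to a key at creation time: the key
    is usable only while the TPM's monotonic counter is in [start, end].
    Since the counter only ever increases, rolling back the system
    clock cannot resurrect an expired key."""
    start: int
    end: int

def key_usable(policy: CounterPolicy, counter: int) -> bool:
    """The device checks the current counter before every key use."""
    return policy.start <= counter <= policy.end
```

A key created when the counter reads 100, with a 60-tick lifetime, gets `CounterPolicy(100, 160)`; once the counter passes 160, every use is refused permanently.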
As an additional warning, downloading that code requires a premium-level membership, because they know the value of what they're giving you, so you probably won't want to do that as an individual unless you're a serious developer. My blog provides a trace of running the timed-key demo using this tool. These traces are sufficiently verbose to let you infer the TPM command sequence.

In summary, you can use the first three commands listed on the slide to do the following sequence. You can do other things as well, as you can see; there are a variety of commands, but let me run through what we consider the default case. First, use the tool to create a 2048-bit RSA key, specifying a policy that limits the lifetime of that key to 60 seconds. Note that another major improvement in TPM 2.0 is that it supports both elliptic curve and symmetric cryptography. That raises some very interesting scenarios around using smaller keys, and in particular around data streaming, the caveat being that TPMs are not fast. But there are some opportunities there. TPM 1.2, by contrast, only supports RSA and does not support time-bound keys, so this tool is not going to run on a 1.2 device. The next step is running the tool to encrypt some sample data. Third, still within that 60-second window, decrypt the data. Finally, after 60 seconds, try the decryption again: you'll get the expected failure saying that your key is out of policy.

With these capabilities in mind, how can we use measurement-bound keys to protect mobile data in the real world? Consider data access by trusted insiders. In this case, we want to ensure that only users with trusted client machines and encrypted disks can download sensitive files from a document repository such as SharePoint or box.com.
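That four-step sequence can be mimicked end to end in a few lines of Python. Everything here is a stand-in: the class and method names are invented for illustration, an XOR keystream replaces RSA, and an integer tick counter replaces the TPM clock so the expiry can be exercised without waiting 60 seconds.

```python
import hashlib

class SimulatedTpm:
    """Toy model of the timed-key demo flow: create a key limited to a
    60-tick window, use it freely inside the window, and get refused
    once the window has passed. Not a real TPM interface."""

    def __init__(self):
        self.ticks = 0   # stand-in for the TPM's monotonic clock
        self.keys = {}   # name -> (secret, expiry tick)

    def create_key(self, name: str, lifetime: int):
        secret = hashlib.sha256(name.encode()).digest()
        self.keys[name] = (secret, self.ticks + lifetime)

    def _use(self, name: str, data: bytes) -> bytes:
        secret, expiry = self.keys[name]
        if self.ticks > expiry:
            raise PermissionError("key is out of policy")
        stream = hashlib.sha256(secret).digest() * (len(data) // 32 + 1)
        return bytes(a ^ b for a, b in zip(data, stream))

    # XOR with a fixed keystream is its own inverse, so one routine
    # serves as both operations in this toy model.
    encrypt = decrypt = _use

tpm = SimulatedTpm()
tpm.create_key("demo", lifetime=60)       # step 1: key with 60-tick policy
ct = tpm.encrypt("demo", b"sensitive")    # step 2: encrypt sample data
pt = tpm.decrypt("demo", ct)              # step 3: decrypt inside the window
tpm.ticks = 61                            # step 4: advance past the window;
                                          # further decryption now raises
```

After the clock passes the policy window, any attempt to use the key fails with the same "out of policy" behavior the real tool reports.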
By using platform attestation to enforce PIN-protected disk encryption, a hardware identity, and a limited key lifetime, we decrease the chance that the data can be recovered from a lost or stolen device, and arguably, we decrease the chance that the data can be stolen even while the device is still in the hands of the user. But before you deploy such a system, you should understand what can go wrong.

The BIOS integrity measurements, or BIM, model was published by the National Institute of Standards and Technology to protect computers from rootkits. BIM basically boils down to measured boot plus TPM remote platform attestation; in other words, BIM is the NIST model for everything we've just been discussing. Based on this, my company implemented a solution called BHT, the BIOS integrity measurements heuristics tool, for DARPA Cyber Fast Track last year. We included this data flow diagram, along with a threat model, as part of that deliverable, and we like it for obvious shock-and-awe purposes. Looking at the diagram, security increases from the left, where you have the user, to the right, where we have the TPM root of trust provisioned by the manufacturer. So: insecure, more secure. Note that it's only the two rings on the right, the TPM and the supply chain, that we're depending upon for measuring the trustworthiness of the device.

Of course, this assumes that your remote attestation solution, your middleware, has been implemented correctly, which can be a big or a small assumption, depending. Given that assumption, two other questions arise. First, can you trust your supply chain? And second, are your adversaries sufficiently well funded to attack the TPM directly? We believe that properly engineered middleware can significantly narrow the window of attack on mobile data. But there's nothing middleware can do if your chipset is owned.
And in order for users to be able to interact with plaintext data within the allowed policy window, there is necessarily an increased risk of attack during that time. In any case, remember that the goal is to prevent the device from lying about its integrity.

TPM 1.2 is widely available on PCs but, to date, not widely used. As a result, the user experience around initialization and provisioning is quite poor, and this can necessitate shortcuts when you're trying to deploy a solution based on these capabilities in a typical heterogeneous environment. Of course, shortcuts are the enemy of security. This problem is essentially solved in the latest hardware, so hopefully adoption of TPM 2.0 will continue to increase.

As I've mentioned a couple of times, we recommend that measured boot be used to enforce disk encryption, if possible, as part of any data loss prevention solution. On Windows, that combination pretty much implies that you're going to be using the BitLocker feature. BitLocker has been the subject of a number of published attacks; nevertheless, when properly configured, we think it's good protection. Note that for other TPM-based encryption solutions, such as those you might implement using this measurement-bound keys capability, many of the same types of attacks may apply. So you need to do your research and do what you can to reduce the exposure of your keys in main memory. This gets back to the streaming opportunity I talked about before: that could be a really compelling way to mitigate the exposure of keys in main memory, the open question being whether you can get the throughput needed to make streaming interesting today. Can you decrypt HD content fast enough on a TPM? I don't know. But when somebody figures it out, that's going to be cool.

Looking forward, from my perspective as a security software integrator, there are three points to consider.
Intel has excellent developer support for the technologies we've been discussing, particularly on the Atom chipset. But will their mobile platform adoption strategy for Atom be successful? ARM has been slower to embrace the TPM, presumably because no consumer cares. But if Intel makes the TPM a differentiator, will ARM respond? I think they would; I hope they would. Finally, can software companies successfully integrate these capabilities in a way that is both secure and usable, that being the usual tradeoff? Either way, as I've demonstrated, short-lived keys are a great tool for mobile data protection, and support for TPM 2.0 is increasing. So learn to trust machines, if only for short periods of time. Thanks. I'd love to take questions if anybody has any.

Yeah, the question was about the prerequisite I mentioned: our current demo tool depends on TPM 2.0 as well as 32-bit Windows, the point being that that's a rare combination these days. Yes, the one test device we've been able to get our hands on is the new Acer Iconia W3, which happens to be 32-bit only. That is not going to be the norm, I can guarantee. If anybody else has questions, you're invited to use the mic, or I can repeat them. Okay, I'll go to the Q&A room, which is down the hall. Thanks, guys.