So, thanks everyone for showing up. My name is in large print behind me, which is a little bit scary. I'm Philip Tricca. I'm currently working for Intel on the Trusted Computing Group's software stack, which drives the second iteration of the TPM, the Trusted Platform Module. The talk today is focused on three areas. We'll start with some background, just to level set. It's probably nothing you don't already know, but I think it needs saying before we get too far in. The second arc is about what the TPM is, what it does, what it's intended for, and really what it's not intended for. And finally we'll talk about the software stack, which is the stuff that's actually interesting: the architecture of the software stack, what it's meant to do, the different components involved. We'll get into how you can pick it up and kick the tires. If you've got a system with a TPM on it, great; even if you don't, there's a simulator out there you can run in software. And then we'll talk about some actual use cases, ways you can use this thing to implement some meaningful security properties in the embedded Linux systems that I'm hoping you're all building. So the first bit of this is a bit of a soapbox, and I'll apologize for that, but I feel it's really important to say. A lot of people, when they see embedded hardware security modules like the TPM, see "Trusted Platform Module" and think: that means it's trusted, so I should believe this thing is going to solve all my problems. But the reality is that if there were a silver bullet in computer security, we would have found it by now, I hope, and it would have solved all our problems already. The fact of the matter is the TPM is an extremely useful piece of technology, but it will not do security for you.
It's a tool in your toolbox just like every other tool. You can use it to great ends, or you can screw it up horribly and end up no better off than when you started. So really this is the soapbox where I say security is a process. It's something you have to do from the very beginning of your project through the very end: everything from architecture, or really design, through implementation, all the way through the maintenance tail of your system. And it's something that takes everyone on your team. Not that everyone on your team has to be a security expert or have some crazy security background, but everyone has to be aware that the decisions they make when implementing the software in your system affect the overall security properties of the platform. And there really is no such thing as a secure system. There are shades across the security spectrum, but you will never build something that's perfect. We're all here talking about open source. We're all here talking about Linux. These are things we didn't write, things we get for free, and we get the bad with the good. The bad is much, much smaller than the good, but there are security vulnerabilities in the software you'll take from other projects and put into your system. You'll have to deal with the fallout from those security problems, and you'll have to be able to recover from them. Really our goal is secure enough: secure enough for the use case, secure enough for the customers buying our platforms. Putting a TPM on these platforms won't instantly take you from zero to sixty on the security spectrum. It gives you a useful tool with a certain set of security properties that we'll talk about, but it's not going to solve all the problems. So that means we still have to do the basics.
We still have to do the things we've been talking about for years, which some people still don't do. That's everything from the basic platform hardening we do when we first pick up a desktop system: we turn off services we're not using, we exclude tools from our build. This is the defense-in-depth stuff we know we should be doing from day one, minimizing the attack surface of the system. We want everything running on it to be just the stuff we need and nothing we don't, keeping the attack surface as small as possible. The good news is that we're talking about embedded Linux systems. General desktops are particularly difficult to reason about because they're meant to serve a very large audience with any number of different goals. But embedded systems are designed to do one specific function, and that means we can start excluding a large portion of the tools we just don't need, which can reduce our attack surface significantly. That lets us do fun stuff like making the root file system read-only. If you do that on a generic desktop Linux system, your users will be up in revolt because they can't do the weird stuff they were expecting to do. But on an embedded Linux system, generally no one is logging into the thing and changing where /bin/bash points, or dropping in a different shell they'd rather have. So I always advocate making the root file system that comes out of your OpenEmbedded or whatever build system read-only, and then having a very small data partition where you actually write the things you need to write. This is about making your system as resilient to attack as possible.
So if someone pops a shell on your system, maybe through the telnet you didn't disable, they won't be able to drop their malware directly into the root file system. It makes their lives harder. Similarly, there are the ideas around the maintenance tail: all of these systems need to be updatable in the field. Automatic updates; even better, signed automatic updates. If your system will accept a payload and basically write it over the root file system as an update without checking a digital signature, you have very significant problems, and ones the TPM will not solve for you. However, if you want to protect the key you use to check that signature and keep it in the TPM, that's something it will help you with. And finally, the last point here is about mandatory access control, and the one about decreasing complexity in embedded systems. I'm a big fan of SELinux. I've worked on it in past lives. On a general desktop system it's kind of difficult to reason over these large security policies, but for embedded systems it's actually significantly easier. If you're building a really stripped-down small system, your SELinux policy can get extremely small as well. So mandatory access control policy, I think, is largely becoming part of the basics that everyone should be doing along the way. Another thing I'm going to beat the drum about is threat modeling. This was actually new to me a couple of years ago. And it's a process, not a technology, right? It's a process a team goes through when designing and building a system, where we sit down and talk about the assets the system needs to protect and the threats to that system. And it's largely a creative process: people sit down, we brainstorm, and we write these big lists of the assets in the system, which are usually fairly tangible.
And then we come up with threats that start out very reasonable and very sane and then get wild really quickly. It's actually a really good creative process, because a lot of the things that seem really wild at first end up being fairly in scope later on. And it's really nice to build out this large collection and document what the threats to the system actually are, because it's a really useful tool when we start thinking about which threats we're going to mitigate. You can look at these things, look at past information, and say with pretty good fidelity which ones are more likely to occur than others. So you can start prioritizing the threats you're going to mitigate, and where you're going to spend your time and your team's energy, remediating the most likely threats to protect the most important assets in the system. And it's really a game of trade-offs, right? You can't fix everything. You'll never have that perfect system, like I said on the previous slides. The important thing is to find the things most likely to happen, remediate those threats, and then keep a list of the things you haven't remediated, the threats you've accepted, because they're either too much work or so unlikely to happen that they're not that big a deal. Or maybe you can do a partial remediation through something like a mandatory access control policy that will confine some kind of exploit. This ends up being really useful information when you want to describe the properties of your system to a customer. I've found it unnervingly common that when you ask people about the security properties of their system, most can't really describe what they are.
And so as you're going through a threat modeling exercise, you build out this big collection of risks and assets, and then you can say which risks you've mitigated, which risks you haven't, and most importantly, why. Which ones are important and which aren't? Someone may disagree with you, and it actually makes for really good conversations and helps illuminate the space you're working in. I was trying to come up with something interesting for the slide about what to do if your team doesn't threat model, but the only thing I could come up with was: please, give it a shot. I'm assuming, and I'm really tempted to ask for a show of hands, but I always hate when people do that, that there's probably some subset of you that do do threat modeling and some subset that don't. All I can say to the people that don't is that it's pretty much free content. Love them or hate them, Microsoft is the group that's produced the most useful information in the threat modeling area, and that's probably because they've sustained the most attacks against their systems, because their systems are so widely deployed. They have a very long legacy tail, and they've kept the technology around for so long that people have attacked it constantly. To their benefit and to their credit, they've built a very interesting and very useful body of knowledge around how to model threats, how to do the prioritization, and how you might keep this information around so your team and future projects can learn from past projects. And they give away a lot of this content on MSDN. It's one hundred percent worth a good look. OWASP has also extended this into the application space, and there are a couple of books that came out of Microsoft.
Adam Shostack's book was really my first introduction to this, so if you're going to do any one thing, if you get anything from this talk, I'd say go out, find a copy of Adam Shostack's book, and talk to your team about actually describing the threats your system faces. My last slide going over the basics is just these classic security concepts. I have a tendency to get wrapped around the axle on this slide, where I talk about the academic background for all these things and waste a pile of the 39 minutes still ticking down in front of me, so let's do this really quick. Confidentiality is the easiest one. It's the one we all know the most about, right? It's about keeping things secret. Generally people think of this as synonymous with encryption. Now, encryption is something we usually use to maintain confidentiality in a system, but I could just as easily build a system that no one can access and maintain the confidentiality of that system. Generally we've all agreed that that's effectively impossible, eventually someone breaks into your system, so a lot of times we throw some encryption on top, so that even if someone can read the data, without access to the key they can't actually decrypt it and make sense of it. And it's generally easier to protect a very small encryption key than a very large body of information on a computer system. Integrity is effectively the dual of confidentiality: where confidentiality is concerned with who's reading information, integrity is concerned with who's writing information.
So if I produce a hard disk image from an embedded build system and deploy it onto my systems, it's a known whole: I can measure it, I can calculate a hash over it, and I can use that to identify it. But if the integrity of my system decreases, if someone can actually write to the system, they can change that root file system or components of it, and the integrity state of the system is degraded. Authentication is what everyone knows from typing in a username and password, so really it's about proving something to be true. When someone shows up and claims to be user X, you challenge them and say, okay, user X knows a secret, do you actually know that secret? If they provide the secret, they're authenticated; they've effectively proven their claim to be true. Authorization is very similar to authentication, and authorization is very heavily ingrained in the TPM, so it's worth discussing. Authorization is less about proving who you are, proving something to be true, and more about proving that you're allowed to do something or use some object. With authorization you're not just claiming to be someone, you're claiming to have some kind of rights. Again, it boils down to either satisfying a policy, which is something the TPM does in a really interesting way, or producing a secret that only someone who's authorized would be able to produce. Non-repudiation is less interesting in this context, but really it's about keeping someone from denying a fact. If something happened on a system built with non-repudiation properties, and someone wanted to claim, oh, that actually didn't happen, I didn't produce that data, you would have some way to refute that claim and say, well, actually you did, and here's why.
Usually this is the realm of digital signatures, and again, the TPM can be used to produce a system with these types of properties. But to get back to the original thesis: the TPM doesn't give you these things. Using the TPM won't get you all these properties. The TPM is a building block you can use to build a system that realizes some of them. All right, so shifting gears, I'm officially stepping down off my soapbox, and we're actually going to get into what a TPM is. And I'll apologize: the slides up on the web differ in order a little bit here; as I've been practicing this, it turned out to be an easier flow to go into this slide next. So the TPM itself is really just a small crypto engine, and I've heard it described as a crypto decelerator, as opposed to a crypto accelerator. It's very small, generally has a very anemic CPU, and it does these things very slowly, and that was actually by design. Internally it has a number of cryptographic functions: hashing functions, and key generation functions that rely on the random number generator, which is something we'll talk about in detail later. The last bullet here is an interesting one, around the integrity measurement and reporting subsystem. This is what we'll see here on the bottom right-hand part of the slide, what we call the PCR banks, the platform configuration registers. And the block diagram on the right is really meant to hammer home the point that these different functions sit around an I/O bus, and the TPM is meant to have one input and one output. Its design largely came about in the early 2000s, when we were realizing that the software separation mechanisms implemented in operating system kernels were significantly weaker than we needed them to be.
And so we were seeing processes on a system attacking other processes and breaking down the separation barriers the operating system enforces between them, largely because of the complexity of the operating system. So the TPM came from the observation that the most sensitive and important information in your system is often the crypto keys. You want to take your crypto keys and your crypto operations and segregate them in a very small but very tightly controlled environment that has one input and one output, receives command buffers, sends responses, and is impervious to software attack. Something else worth discussing is the different flavors of TPM you can have. Typically we're all very comfortable thinking of these things as separate chips on the system: a discrete IP block in a piece of silicon, wired into the system through a well-defined bus like the SPI bus or the LPC bus. We've even seen them hooked up over the I2C bus. But it's also perfectly valid to build a TPM as an IP block on a larger SoC. So you can have something like Intel's PTT, which runs in the CSME, or a firmware TPM running in ARM's TrustZone, which again has one input and one output and prevents software-level attacks from the operating system or applications, but lives within the same package. So a TPM doesn't necessarily have to be a discrete piece of silicon; it can be included in a larger SoC. And for the folks here familiar with virtualization, Xen has a virtual TPM that can run in a separate virtual machine from the system it's servicing, using the IOMMU and the separation properties of the hypervisor to protect it from direct software attack. So there are different flavors of TPM, and they're all valid TPMs.
So kind of reaching back to that threat modeling discussion, the TPM itself has its own threat model, and it's described in a particularly interesting way. It's largely around the fact that the T in TPM, trusted, doesn't necessarily mean that you should trust it; rather, simply by virtue of using it, you must trust it. It's an opaque box that does crypto operations for you, and if you believe it's going to protect your keys and perform these operations for you, you implicitly trust it. It's not open source software; you don't get to interrogate the code directly, but by virtue of using it, you are trusting it. The first bullet underneath the top level there, protected capability, is how the TPM architecture document describes the different functions the TPM provides. Any function of the TPM on the previous slide, where I showed the breakout with those boxes, is a protected capability. By protected capability, it really means a capability the TPM protects from the outside world. Software cannot tamper with these functions, and therefore you must trust these functions, and they must be implemented properly. If they're not, the trust you should place in the system is significantly degraded. Similarly, shielded locations are places in the TPM where sensitive data can reside, where it can only be operated on by these protected capabilities. So protected capabilities are things you must trust, and a shielded location is where sensitive data resides while it's being operated on by a protected capability. The last bullet in this subset is the notion of a protected object. It's very much related to a shielded location, in that protected objects contain sensitive information where the sensitive data is encrypted, and it's only ever decrypted in a shielded location so it can be operated on by a protected capability.
Now this is relevant because the TPM in general has a very small amount of RAM, so the number of secrets the TPM can hold at any one point in time is very small, generally about three RSA keys, maybe four, depending on the implementation. A protected object is effectively an object containing sensitive data, where the sensitive data is encrypted any time it's not in a shielded location. This allows the TPM to take secrets, encrypt them, and export them off the chip so they can be persisted in operating system memory, which is much more plentiful than the TPM's own. This is usually the realm of something like a resource manager, which does this kind of context switching for the TPM, much like an operating system scheduler does for the CPU. And finally, I was talking about this threat model. The TPM is meant to protect against attacks by malicious software on the platform. The physical security properties of the TPM are largely out of scope, other than that there needs to be this one input and output mechanism. The TPM does not claim to give you any kind of physical protection. If someone pops the chip off your system or starts probing the bus, there's nothing in the TPM specification that requires it to protect against those attacks. Now, this allows vendors to differentiate their implementations. You can buy a FIPS-certified TPM that meets a number of physical security requirements; however, that is not a requirement of a TPM. And this is because the TPM was meant to be low cost. The idea was to drive adoption and get these things into as many systems as possible, so that people get their crypto keys out of general RAM and away from the operating system, but without requiring you to spend five or ten thousand dollars for a full, physically secure HSM on your system. So physical protection is largely out of scope.
And there's tons of information out there. There have been folks giving talks at DEF CON and publishing academic papers where they physically attack the TPM: they'll strip metal layers off the die, they'll pull secrets out of the thing. For commodity TPMs that's largely out of scope, but it makes for an interesting exercise in decapping chips, which is kind of fun. And finally, this is actually one of the most interesting and probably most complicated parts of the TPM: the notion of integrity measurement. This is one I struggle with, so bear with me if I lag a bit on this slide. Integrity measurement is really about identifying the software in a system. The behavior of a software system is largely, almost exclusively, defined by the software that's on the platform and executing. Hardware is generally immutable; software can be changed. So if someone messes with the integrity of your system and starts changing it, it will behave differently. The notion of measured boot is the ability to measure the components of the system as they come up, as the system itself boots. With this kind of turtles-all-the-way-down arrangement, there is a first part that gets measured, the bottom of your measurements, and we call this the root of trust for measurement, the RTM. On BIOS-based systems, this was the BIOS boot block. There's a parallel in UEFI, and a parallel in pretty much every system that boots: some instruction the CPU performs when it comes up grabs a piece of code out of a piece of memory, usually on the SPI bus, throws it into memory, and transfers control to it. Well, in a measured boot system, that thing is the root of your measurements, and it effectively measures itself. It's the thing you trust, much like you trust the TPM.
But in a measured boot, that thing, before it transfers control to the next piece of firmware, measures it. And by measure, I mean it calculates a hash over it so you can identify it, right, a one-way function. It stores this hash somewhere on the system, and the TPM is specifically designed to store these hashes, in things we call platform configuration registers, PCRs for short. There are generally about 24 on most general purpose systems, sometimes fewer depending on the platform. So as the system boots, firmware is pulled out of SPI and measured before it receives control. This builds up from firmware to option ROMs to the boot loader to the operating system: each part of the system, before it transfers control to the next, measures it, stores that information in a PCR, and then the system continues to boot. These PCRs are populated with information put there by a TPM function called the extend function. Extend, again, is a protected capability, something you must trust to be implemented properly for the TPM to be a meaningful part of the system. In the end, once the operating system is up, if you look at the PCR values, they should reflect the state of the software running on the system. And so, again, I've kind of stepped on this slide already. A PCR is a shielded location, operated on only by the single protected capability that is the extend operation. It's memory only capable of storing a hash, and in the TPM spec it must be the size of the largest hash produced by an algorithm in the TPM. Typically there are about 24 of these on general purpose computers these days, but there are specs coming out that allow for fewer on mobile devices and embedded systems. And the extend operation itself is defined as a hash that depends on the previous state of the PCR.
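To make that concrete, here's a minimal Python sketch of the extend operation, assuming a SHA-256 PCR bank. The function name and the "components" being measured are made up for illustration; a real TPM performs this step inside the chip.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """One extend step: new PCR value = hash(old PCR value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# At reset, the PCR starts out as all zeros.
pcr = bytes(32)

# Measure two boot components in order. Order matters: swapping the
# two measurements produces a different final PCR value, which is why
# the PCR can identify the boot sequence, not just its contents.
pcr = pcr_extend(pcr, hashlib.sha256(b"firmware image").digest())
pcr = pcr_extend(pcr, hashlib.sha256(b"boot loader").digest())
```

Because each step hashes in the previous value, a verifier who knows the expected measurements can recompute the final PCR, but an attacker can't roll the register back or forge a value without breaking the hash.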
So there's this mathematical equation up here that defines the PCR at time n. I guess I should have used t there, time t would have been nice, but we'll say time n for now: PCR(n) = hash(PCR(n-1) || data). It's defined by its previous state. When the system restarts, the PCR is all zeros, and when you do your first measurement, it takes the value at n minus one, which is all zeros, concatenates that with whatever it is you're hashing, and calculates the hash over both. So the hash value in a PCR always rolls forward, never backwards, and it relies on the one-way computational property of the hash so it can only move in that forward direction. The only way to reset the values in these PCRs is to reset the system. And again, that makes them easy to verify but very difficult to forge, right? That's the whole point of the hashing algorithm. All right, shifting again, we're getting into the TCG software stack now. We understand what we're required to do as far as best practices for building a Linux system, all the minimum-bar stuff. We've talked a little about what the TPM is and some of its functions, and now we'll get into how we actually drive it. That whole measured boot thing is generally something most software developers never interact with; it's instrumented into the firmware and the hardware. But when we do interact with the TPM, we do so through a programming API, just like every other device on the system, and the TCG defines the software stack that defines these APIs. So there are a number of different APIs, and at the bottom here is this access broker and resource manager.
It's generally a system daemon, though on some systems it's implemented in the kernel itself, and it's used to mediate access to the TPM. The TPM is a single device, and you potentially have any number of applications on your system trying to use it, so this is the thing that gives them all the illusion that each is the only user of the TPM. It's much like an operating system scheduler: it takes context, persists it for an application, switches it out, allows another application to use the device, and keeps doing that for as long as the system runs. It's meant to be as simple as possible, and it also has to deal with power management. If the system goes into a sleep state, the resource manager is responsible for taking information out of the TPM and storing it in main memory before the system sleeps, because the TPM's memory is RAM and is mostly lost; it needs to be restored when the system comes back up. On top of this is the thing we call the TPM command transmission interface, or TCTI. This was added in TPM 2.0 and was missing from TPM 1.2, and it's particularly compelling because it's a very slim, basically send-and-receive API meant to abstract the details of the IPC mechanism used to transfer TPM commands and responses from the application to either the TPM or the resource manager. It's a good way to decouple the higher-level API and the application from the lower-level implementation of how commands actually get transmitted. On a Linux system you may see domain sockets, you may see D-Bus, and there should be a different TCTI module for each of those IPC mechanisms. It's something that can be dynamically loaded into an application depending on the needs of the system.
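The real TCTI is a C interface built around transmit and receive calls. As a hedged illustration of the idea, here's a Python sketch: the class names and the loopback transport are made up, standing in for what a real TCTI module for /dev/tpm0, a domain socket, or D-Bus would do.

```python
from abc import ABC, abstractmethod

class Tcti(ABC):
    """Minimal sketch of the TCTI idea: two calls that hide the
    transport from everything above them."""

    @abstractmethod
    def transmit(self, command: bytes) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

class LoopbackTcti(Tcti):
    """Toy transport that just echoes the command buffer back,
    where a real module would write to a device or socket."""
    def __init__(self):
        self._pending = None

    def transmit(self, command: bytes) -> None:
        self._pending = command

    def receive(self) -> bytes:
        response, self._pending = self._pending, None
        return response

# The layers above are written against the Tcti interface only, so
# swapping transports never requires changing application code.
tcti: Tcti = LoopbackTcti()
tcti.transmit(b"\x80\x01...")   # a TPM command buffer would go here
response = tcti.receive()
```

The point of the design is exactly this substitution: the application holds an abstract handle, and which concrete transport gets loaded is a deployment decision, not a compile-time one.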
So this keeps you from having to change your application every time a different transport mechanism lives underneath you. One could implement a TCTI layer that talks over IP sockets, so you could send TPM commands from a local system to a remote system if you want to go that route. Finally, on top of this we build three programming APIs. The one on the left, the system API, is the only one that's actually specified currently. The one in the middle is the enhanced system API, and the one on the far right is what we call the feature API. The system API is the spec that's published, and it's the one Intel has implemented. It's the lowest level of the programming interfaces, and it's not meant to hide any of the details of the TPM from the user. Someone writing to this API will need to know intimately how the TPM works. There's a one-to-one mapping between the TPM commands and the API calls, so you're literally invoking specific TPM commands. In fact, there are actually more functions than commands, because this API, and I forgot to put it on the bullet up here, has asynchronous variants of all the function calls. It's meant to be something that can be integrated into event-driven environments, so you don't require that each application create its own threading environment. A lot of times, if you ask the TPM to perform some operation like generating keys, it can take tens of seconds, and you don't want your application to block, or to require every application to implement its own event loop. So this is something that can be integrated into existing event-driven programming environments very easily. But it's also designed to be very amenable to embedding, so you can implement it in very low-level systems.
It doesn't interact with a file system, it's not meant to have any persistent storage, and it does not perform any crypto operations itself. So if you want to use the higher-level TPM functions like HMAC sessions or encrypted sessions, you need to provide that crypto yourself. It also doesn't do any dynamic memory allocation, so it can exist on heapless systems. This is something that's designed to be integrated easily into, say, a UEFI environment, or maybe into an SGX enclave, or something very deeply embedded. Now, the Enhanced System API — we're in the TCG finalizing the spec for this right now — is meant to bring things up a level: it provides crypto functions for you, so it'll handle all of the HMAC sessions and encrypted sessions if you need that. It still doesn't do any file I/O, but it will do memory allocation for you, so you don't have to worry about malloc'ing memory in your own program. That means it won't be something that can be put into heavily embedded systems that don't have a heap. And finally, the Feature API is still a little bit nebulous to me right now — I've only been working in this space for a short period of time — but it's an API that's meant to be very much abstracted from the TPM itself. It's meant to be usable by typical application developers who don't want to learn the intricacies of the TPM but do know that they need crypto keys and a couple of crypto operations. They shouldn't have to know the TPM commands themselves. All right, so to put a picture to this — I generally live on diagrams — this is a way to put these things together and show you what a normal application invoking a System API call, as we have it implemented today, would look like. The prefix here, TSS2, is the TPM Software Stack 2, for the second iteration.
Sys is the identifier for the System API, and then those X's are meant to be some TPM function. This is what's exposed by the System API. It's either a shared object, or a static object if you're going to link it directly into your application. Underneath that you instantiate a TCTI, and you actually pass it to the System API when you initialize it. That's the thing that catches commands and sends them out over the IPC bus. So really, the System API itself is just a layer that takes C structures and serializes them into TPM command buffers, and it does the reverse when it gets a response back, giving you C structures from that network-byte-order response buffer. The TCTIs live underneath that, catch those byte streams, and send them out over the IPC bus. We've got two implemented now that are in open source: one talks directly to the device driver in Linux, so that's /dev/tpm0; the other is a socket TCTI, where you give it an IP address and a port and it connects to either the resource manager or the simulator for doing testing-type stuff. Again, the whole thing — at least in this picture — is meant to have as few external dependencies as possible so it can be integrated into very heavily embedded environments. One of the items on my wish list is to actually make this usable from UEFI. This next diagram is just to break that out a little bit more and describe how a system would look at runtime. You've obviously got more than one application running at a time, and the resource manager is what mediates access here. The resource manager hosts the back end for this IPC bus, and all the applications use a TCTI to connect to the resource manager over said IPC bus.
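The serialization job described above — turning structures into a big-endian (network byte order) TPM command buffer — can be made concrete with a small sketch. This marshals a TPM2_GetRandom command by hand in Python, assuming the header layout and constants from the TPM 2.0 library specification (tag, total size, command code, then parameters); it's an illustration of the wire format, not the TSS2 marshalling code itself.

```python
import struct

# Constants from the TPM 2.0 spec: a no-sessions command tag and the
# command code for TPM2_GetRandom.
TPM2_ST_NO_SESSIONS = 0x8001
TPM2_CC_GetRandom = 0x0000017B

def build_get_random(bytes_requested: int) -> bytes:
    """Marshal a TPM2_GetRandom command buffer: big-endian header of
    tag (2 bytes), total size (4), command code (4), then parameters."""
    body = struct.pack(">H", bytes_requested)
    size = 2 + 4 + 4 + len(body)  # header fields plus the parameter area
    header = struct.pack(">HII", TPM2_ST_NO_SESSIONS, size, TPM2_CC_GetRandom)
    return header + body

buf = build_get_random(16)
print(buf.hex())  # → 80010000000c0000017b0010
```

A buffer like this is exactly what a TCTI transmits, and why the stack currently assumes little-endian hosts is also visible here: marshalling is all byte-order conversion.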
And then in the resource manager itself you can implement any number of things. At minimum you have to be able to manage transient objects and sessions, but you could implement quotas, you could implement some kind of access control — this is the thing mediating access to the TPM itself. But notice the resource manager has its own TCTI, so it's a very modular design that reuses a lot of components. The implementation itself, the code that we've put out there as Intel, is hosted on GitHub. We've tried to make the process as transparent and participatory as possible. Intel has an organization on GitHub that we call 01.org, and we host two repositories under it: one is the TPM software stack, and the other is a set of command-line tools that you can use to drive the TPM, just to do some prototyping and run things on the command line. It's distributed under a three-clause BSD license, and this is all designed to make things as usable and flexible as possible for people who want to pick it up and build on it. Because this is meant to be very heavily embedded, that license gives you as much flexibility as possible, so you don't have to worry about the intricacies of linking and how that affects the licensing. This is actually something that I've started working on very recently. The majority of the code out there was implemented by my predecessor, who has since left Intel, and I've taken over for him. So personally I don't know a whole lot about the inner workings of the TPM directly, but I've figured out a lot of the machinery and the way the plumbing works in the TSS.
So if you show up on the mailing list — or rather, we don't even have a mailing list at this point — if you show up on the GitHub issues and start asking questions, I will probably not know the answer to the actual thing you're trying to do with the TPM, but I will be able to help you set up the plumbing so you can talk to it very easily. The good news is we've had a whole bunch of people show up, and they're answering questions for us. I don't even know who they are in a lot of cases. We've had folks from Facebook show up, we've got some folks from Alibaba, and we've gotten requests from other folks as well. So I've been pretty impressed by how quickly this community has come together. This last bullet here, take it as a word of warning: there is a lot of churn that's going to be happening in these repos over the next couple of months. My predecessor was a Windows developer and a firmware developer, and so the majority of what I've been doing in this code — this is how I got involved with the whole project — was implementing a sane Autotools build and decoupling a lot of the really close coupling between the things that needed to be turned into libraries. It's not particularly glamorous work, but I think it's particularly useful and important. And finally, this actually predates my work at Intel: I've been integrating this stuff into my own personal work in OpenEmbedded. I've been working with TPM 1.2 for quite a while, and 2.0 is the new stuff that I'm starting to integrate into a meta layer that I call meta-measured. This is a layer that sets up all the TPM infrastructure for you in a build, so you can integrate it into your OpenEmbedded build. I've got live images and initrds that come out of this thing, and I've got some GRUB patches integrated there as well — hopefully they're going to be made obsolete by some work that's going upstream. What this does is instrument GRUB, in the UEFI environment at least, to measure all of the things relevant to the boot process before it hands off to the kernel.
This was a huge gap in Linux systems previously for measured boot: all the firmware would do its measurements, you'd get to the bootloader, and it would all just disappear, because the bootloader would load your initrd, load your kernel, read a bunch of config files, and then launch the kernel without ever measuring any of that stuff. The patches going upstream are intended to fix that problem. I've got a board support package for the MinnowBoard MAX that's out there now; it turns on the TPM through a machine feature, which is kind of nice. I'm also working on getting this stuff working on ARM systems. Infineon actually sells an SPI TPM that you can plug into the Raspberry Pi, and it talks to the Raspberry Pi over the SPI bus. Excuse me. It doesn't have a fully instrumented measured boot, so you're not going to get very many meaningful measurements out of your PCRs, but you can do interesting stuff with the TPM once the system's up and running. Unfortunately, there's still some work to be done in the TSS to support big-endian systems. This is something that we at Intel do quite a bit, unfortunately: we assume the world is little-endian when in fact it's not. So you can't actually build the TSS right now for big-endian systems, but I've got some patches that I'm testing now to fix that problem. So now we're making the jump into use cases, and I was trying to come up with the simplest, most concise use case where you don't have to do much work in order to benefit from having a TPM on your platform. I'm keeping in mind that kind of ARM system where the TPM isn't tightly integrated into the firmware, but when your kernel comes up there's a TPM there and it can make some use of it. It turns out that the quality of the random numbers on your system is really, really important when you're doing cryptographic operations.
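The measurement operation that the bootloader patches add can be sketched precisely, because a PCR has one essential property: it can only be extended, never set. Each extend folds a new measurement into the running hash, so the final value depends on every component measured and on their order. This is a minimal Python illustration assuming SHA-256 PCR banks; the component names are placeholders.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """The measured-boot primitive:
    new_pcr = H(old_pcr || H(measurement)), assuming SHA-256 here."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start at all zeros after a reset
for component in (b"bootloader", b"kernel", b"initrd", b"config"):
    pcr = pcr_extend(pcr, component)
# The final value now commits to every measured component in order —
# which is why a bootloader that skips measuring the kernel and initrd
# (as described above) leaves a hole in the chain.
```

Because there's no way to "unextend," software that has already run can't erase evidence of itself from the PCR — it can only extend further.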
And cryptographic operations can just mean creating an SSL tunnel or setting up a VPN over IPsec. The more entropy you can have in your system, the better the keys that you generate for your sessions will be. The TPM has its own random number generator, but Linux has its own entropy sources internally as well. The rng-tools exist so that you can take other randomness sources, other entropy sources, and plug them into the kernel's entropy pool. You should never trust any one specific party or system unless you have to — and you don't have to in this case. You've got the kernel, which looks at all sorts of timing information and generates entropy from that, but you can also take the TPM's RNG and hook it in as well. You don't have to trust either one of them individually: you take the TPM RNG kernel module, combine it with rng-tools, hook them together, and you've added the entropy source from your TPM into your kernel. You still read /dev/random or /dev/urandom just like you normally would, but you get the benefit of an additional entropy source. So if your embedded Linux system is hooking up to a VPN to talk to some kind of back end, or it's setting up SSL tunnels or doing HTTPS or whatever, you can use the TPM for your asymmetric operations — the signing in your Diffie-Hellman handshake — but you also benefit from having a more varied entropy source to generate the session keys used over those potentially long-lived connections. There's actually a whole handful of how-tos that have been written up over the past three to five years for doing this, and it's the first thing you should set up on these systems.
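The "you don't have to trust either source" point has a simple cryptographic basis: if independent entropy sources are combined through a hash, the result is at least as hard to predict as the best source alone, so adding a source you only partially trust can't make things worse. A toy Python illustration, under the assumption of a preimage-resistant hash (the real kernel pool uses its own mixing construction, and rngd is what actually feeds /dev/hwrng into it):

```python
import hashlib
import os

def mix_entropy(kernel_pool: bytes, tpm_rng: bytes) -> bytes:
    """Toy mixer: hash two independent entropy sources together.
    An attacker who controls one source still can't predict the output
    without also predicting the other."""
    return hashlib.sha256(kernel_pool + tpm_rng).digest()

# Stand-ins for the two sources; on a real system the tpm_rng driver
# exposes the TPM's RNG as /dev/hwrng and rngd credits it to the pool.
seed = mix_entropy(os.urandom(32), os.urandom(32))
```

This is why hooking the TPM's RNG into the kernel pool is close to a free win on these platforms: the combination is deterministic given both inputs, but unpredictable if either input is.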
Now, as I was going back and researching this talk, I realized that I never set this up for my OE builds. So my homework, which I'm going to start as soon as I get back to the bar with my laptop, is to get this set up so that these things just happen out of the box when you integrate meta-measured into your OE build. All right, so similarly, basic crypto operations are the next step you could take, the next level of effort. This is the stuff where you're actually interacting with the TPM using the System API. But you can start and experiment using the TPM tools, the command-line tools that we're distributing as open source. This guy here, Davide, is a Facebook engineer, and he gave a talk at FOSDEM a couple weeks ago that walks through the steps necessary to create a signing key using the TPM tools we have up on GitHub, how to sign something, and how to then verify that signature outside of the TPM. The idea is that you're on a platform that has a TPM, you generate a key pair, you sign some piece of data, and you export the public key. That public key ends up on some other system that may not have a TPM, where you can extract and transform the TPM key into something that OpenSSL understands and then verify the signature. This is where you start to butt up against some of the weirdness of the TPM. The TPM has this notion of the EK, the endorsement key, which is a key that actually identifies the platform. The TPM will not allow this key to be used to sign arbitrary information, because that would tie the signed information directly back to the platform and you'd lose anonymity. So underneath the EK you build hierarchies of keys: you can create what's called an attestation key, which is a temporary key, and you can generate as many as you want and destroy them as you see fit.
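The shape of that sign-on-the-TPM-platform, verify-anywhere flow can be shown with a deliberately tiny textbook-RSA sketch. This is purely illustrative — the parameters are toy-sized and insecure, and the real flow creates the key inside the TPM and hands only the public part to OpenSSL — but the division of labor is the same: the private operation happens on the TPM platform, and verification needs only the public key.

```python
import hashlib

# Toy textbook-RSA keypair (insecure, illustration only). On a real
# system the private exponent would never leave the TPM.
p, q = 61, 53
n, e = p * q, 17   # public key: (n, e)
d = 2753           # private exponent: inverse of e mod lcm(p-1, q-1)

def sign(message: bytes) -> int:
    """Done on the platform with the TPM: sign a digest of the data."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Done anywhere — e.g. with OpenSSL and the exported public key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"firmware-image-v1")
assert verify(b"firmware-image-v1", sig)
```

The point is that the verifying system needs no TPM at all — only the public key and the signed data — which is what makes this useful for attesting to things produced on an embedded device.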
You could use them as one-time-only keys if you want, and they can be used to sign any amount of information. And so then we have these four commands here that I've listed out that describe the hashing process. Davide actually — I think he created this WordPress blog specifically to write this one how-to; it's the only post on his blog. So he took the time to create a blog to write one blog post, and I think it's one blog post worth reading. And finally, as I'm getting up close to the five-minute mark, this last use case is one that's really interesting and compelling if you have a system where the measured launch is very well developed. It uses some of the higher-level functions of the TPM, what we call TPM policies — key usage policies. This is actually what Microsoft BitLocker uses in Windows to gate access to the disk crypto key. As the system comes up, this whole measured launch process happens and you end up with PCRs populated with information that describes the software on the system. You can then create a key and bind the usage of that key to the state of those PCR values, so that if someone changes your firmware or changes some of your low-level system software, that key will no longer be usable, because the PCR values will be different and you won't be able to satisfy the policy. BitLocker uses this as a way to bind the hard disk crypto keys to the state of the platform. Similarly, a project I worked on in a past life, called OpenXT — a virtualization system that uses Xen and a whole bunch of platform security properties — uses the same mechanism for sealing a crypto key for a LUKS volume where all of the configuration information for that system resides.
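The sealing idea just described — bind a secret to expected PCR values, and release it only if the live values match — can be sketched like this. All the names here are hypothetical; a real TPM evaluates PCR policies internally with its own policy-digest construction, and the secret never leaves the chip unsealed unless the policy is satisfied.

```python
import hashlib

def policy_digest(pcr_values: list) -> bytes:
    """Hypothetical stand-in for a PCR policy: a digest over the PCR
    contents the key is bound to at sealing time."""
    h = hashlib.sha256()
    for value in pcr_values:
        h.update(value)
    return h.digest()

def unseal(sealed_key: bytes, bound_policy: bytes, current_pcrs: list) -> bytes:
    # The TPM releases the secret only when the live PCR state satisfies
    # the policy the key was sealed under.
    if policy_digest(current_pcrs) != bound_policy:
        raise PermissionError("PCR state does not satisfy the sealing policy")
    return sealed_key

expected = [b"\x01" * 32, b"\x02" * 32]   # PCR values from a good boot
policy = policy_digest(expected)
disk_key = unseal(b"disk-key", policy, expected)
```

If any measured component changes, the PCRs differ, the policy check fails, and the disk key simply isn't released — which is exactly the BitLocker and OpenXT behavior described above.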
So if someone tampers with the system, it'll boot up, but then it'll say: I can't get access to this key, because the PCRs don't match what the system expected, and therefore the TPM won't release the key to me. This is a particularly powerful tool and something that's extremely useful when you're building these types of systems, if you want really good assurance that the integrity state of the platform is what you expect it to be. So, a couple of shout-outs to some friends who helped me put this together. I've only been working with the Trusted Computing Group for a couple of months, but the folks I'm working with there have been super helpful in bringing me up to speed and getting me to a point where I'm actually contributing to the specs. That's been particularly helpful. And I've got a couple of folks here at the bottom, Bill and Imran, who I've been working with at Intel to build up a team of people supporting this project. I can't thank them enough for the contributions they've made and for how much they make me look like I know what I'm doing. Many thanks to them if they're watching the video. And that's it. We've got five minutes left on the countdown timer here, almost exactly. If anyone has any questions, I'm happy to answer them. You can take them offline if you don't want to ask them here, but I'll be floating around all week if you want to learn more about TPMs, systems where they're integrated, and how you can use them. You, sir. Okay, so the question was: how much of this is specific to Intel platforms, and how much of it will work on ARM platforms? Specifically, a lot of the ARM manufacturers have bought into the GlobalPlatform initiative, I think it's called, which is largely a competitor to the TCG.
So I don't necessarily know that you'll ever see them both residing on the same platform, but the TPM is either a physical chip or something that can be implemented in an SoC, so it can pretty much be realized on either kind of system. The software stack that I was talking about should be portable between the two. Now, I pointed out that it currently isn't, and I'm working to fix that; my goal is to have it fixed pretty quickly. As far as the GlobalPlatform-versus-TCG question goes, they're meant to achieve similar goals. I don't really know enough about the GlobalPlatform initiative to say either way, but from what I understand they're largely competing against each other. So I imagine you could probably implement a system that could use either one, depending on which platform features you're using — things like platform configuration registers. I don't know how much measured boot stuff the GlobalPlatform initiative does, so there may be some parts that don't carry over. But generally you should be able to take a TPM integrated into an ARM system and use it just like you would on an x86 system. There's another hand back there. It's actually a daughter card. The Raspberry Pi has that header block — is it 40 pins or something? — that exposes I2C and the SPI bus, and Infineon provides the TPM on a daughter card that plugs right into the top of it. They haven't gotten to the point where you could just call up Fry's and buy one, or get one off Newegg. This is kind of a supply-and-demand problem: you can't just go order one because, as far as they can tell, no one really wants to order one. So if you show up and start banging on the door saying, hey, I want to buy this stuff, that might actually help.
So if you know an Infineon rep — or really any TPM manufacturer — see if you can convince them that people actually want to do something meaningful with this. As for whether it's on DigiKey: you can actually buy TPMs on DigiKey, but I've only ever been able to find 1.2s, the Atmel ones on the I2C bus. I actually soldered some of those onto some boards to make them work, and you can. I made one work on an Intel Edison board, but, as we say at Intel, it was blue-wired: I soldered the thing and had wires hanging off of it, and if you poked it the wrong way it would short out. But from what I understand — someone reached out to me recently — there are some people making ARM systems that have TPMs on the board itself. So I'm hopeful we'll start seeing more general availability soon. Oh, one right there. All right, so the question there was a note that there's something called Secure Boot in UEFI that has a word in common with Measured Boot. Secure Boot is actually a signature-checking scheme. There's a database of keys in the UEFI, and in a Secure Boot environment the UEFI will refuse to transfer control to a bootloader that hasn't been signed with a key in that database. So this is completely different from Measured Boot, like 100% different. Secure Boot is an enforcement mechanism: it checks a signature and refuses to transfer control if the signature doesn't check out, whereas Measured Boot is 100% passive. The system will boot as it is, but anything that has deviated from what's expected will just be recorded; there's no enforcement mechanism. The two can actually live side by side very easily, since one is so incredibly passive, so you can have these two things exist together. The follow-up was whether these would need to be integrated into a single bootloader.
I'm not entirely sure I know the answer to that exactly. So, yes and no. Ideally you wouldn't have to — ideally it would just be done for you. For the Secure Boot stuff, you don't ever actually write any code to implement Secure Boot; you just make sure that whatever you're launching has the right signature on it, and that the public key corresponding to the private key that did the signing is in your key database. Similarly, if the bootloader is instrumented properly and these patches do go upstream into GRUB, then you'll never actually have to touch it — it'll just work out of the box. So ideally no, but currently yes, you would need to do a bit of work to patch GRUB and to set up your Secure Boot environment. It's the handsome man down front. Yes, if our demo works tomorrow, there will be a demo that shows a measured launch functioning using the technology I was just describing. So if you're hanging around and you go through the demo area, you'll see me there, and hopefully you'll see a functional demo as well. No guarantees — worst-case scenario, we'll talk your ear off about this stuff. Just trying to set expectations low. All right, well, if there's nothing else, thanks for coming out. I really appreciate you filling this very large hall. Thank you.