So, while everyone's filtering back in: earlier, James plugged the Platform Security Summit that we had a couple of months ago. If anyone's interested, the materials are starting to get posted online; you're looking for PlatformSecuritySummit.com. So, just a little plug there. Thanks, everyone, for coming back from your break. I know the cookies and everything back there must be really distracting, but hopefully I'll lure you back to your seat with this talk about the TPM2 software stack, or TSS2. I work for Intel. My name is Philip Tricca; you can see it up on the slide next to my email. If you need to get in touch, don't hesitate; you can find me through a number of different mechanisms, and my GitHub link is up there as well. This is the TSS2 as defined by the Trusted Computing Group. Intel works in the TCG quite a bit, and I have inherited the throne of the software stack working group, where we are hard at work standardizing these APIs and implementing them as well. There are three major thrusts to this talk, and one of them is not background. There's been a lot of talk already about the TPM and the interesting things people can do with it, so I'm assuming a fairly basic knowledge of what it is and what it does. I'll do one slide of background, and it'll be pretty quick. The rest is dedicated to the design of the software stack and a bit about the collaborative process in the TCG: we'll do a component breakdown, with lots of boxes and arrows showing which components talk to which, and a really good look at the plumbing. The second part of the talk covers the open source work we're doing on the stack and how we're doing the development.
I'm sure it's hard to disagree with the statement that in order to design an API, you need to be informed by experimentation: you have to try to use the API to see that it actually does what you expect it to do. That's very much a part of this process, so we've been doing a ton of work on community building, promoting adoption, and trying to make this less painful than interacting with the TPM usually is. The last bit of the talk focuses on use cases and examples. One of the biggest problems we have is that the TPM is very complicated, so writing code to use it is difficult. If we force people into a situation where they have to build all the software, get it installed, and then write however many lines of C code to do something meaningful, all while trying to figure out what "something meaningful" is, that's a really heavy lift, and the learning curve is just too steep. So we have a bunch of projects out there that help people build simple cases where they can do something useful without investing weeks or months to get there; really, it's about instant gratification. We'll also talk a bit about the flexibility that's been built into the stack. I've actually worked on a use case specifically to exercise this, showing that the stack's various components can be used in a bunch of different configurations. And so, background, really quick: if you want to get up to speed on the TPM, you can go out and read the spec itself if you like, but it's pretty heavyweight. I would instead recommend Ariel Segall's most recent book. I'm a big fan of the writing; it's very approachable. That's pretty much all I have to say about where you should start out.
But really, when you look at the evolution of the TPM from 1.2 to 2.0, the use cases are largely unchanged. This thing is still very good at protecting encryption keys and signing keys while they're in use: cryptographic key protection and the notion of a root of trust for storage and reporting. TPM 1.2 and 2.0 didn't really change much as far as those use cases are concerned. However, the actual implementation has changed fairly drastically. There's algorithm agility, which means that new algorithms can be implemented in the TPM and the structure of the command and response buffers is resilient to this: most fields are a size followed by a field value. TPM 2.0 has also added some interesting protection for the communication path between the application and the TPM itself, namely integrity-protected and encrypted sessions, which is particularly interesting. So that's it for background; the rest of this goes straight into the software stack, starting with the design. I have never actually sat down and written an API that was used by a large number of people. This project was left behind by the group that was running it, and I ended up scooping it up and carrying it forward. So the first thing I did was jump on my favorite search engine and type in "how to design an API"; then I went back and added "how to design a good API". I found a talk up on YouTube by Joshua Bloch, a guy from Google who's been designing APIs probably since before I was born. The thing I mostly took out of his talk is that use cases are where this whole thing comes from: if you don't know how you want the API to be used, it will probably never be used for anything useful.
So all of this is driven by use cases, and from those use cases came the notion of a layered design. Working for Intel, we write a lot of firmware, a lot of code that sits very close to the hardware, and sometimes, thinking of UEFI, you don't have the environment available that would support what we think of as a regular user-space process. So we showed up with a requirement that we needed to be able to use this stuff from a firmware environment, while obviously also wanting it usable from a normal ring 3 environment; so we ended up layering these things. One of the more useful things we did was separating the transport layer from the API itself. The really hard part for me working with the 1.2 TSS, TrouSerS, was the very tight coupling between the daemon portion and the user-space application. That made it really hard to do things in early boot code, where you might want to work without starting the daemon, or even earlier, where you might not be able to start it at all. I think of that separation as a really, really big improvement. Also, synchronous and asynchronous: we wanted to support event-driven programming. If you have a synchronous API, you can support an event-driven framework that uses something like a worker-thread model, but there are plenty of other event-driven programming libraries out there that use poll. So we wanted a poll-style interface as well, so that we could support more than just one event-driven programming model. The other thing is that we don't want to abstract away all of the details of the TPM. We want to allow very fine control, but at the same time, when you don't need that fine control and all those details, you get some sane defaults from the stack. Sometimes those defaults are chosen by the TCG; they come out of the working group.
We pick constants of various sizes for structures that get formalized in the implementation. But there's also the notion that, say, a Linux distro may have their TPM infrastructure set up differently and may want a different default transport, so there's a way for the libraries to override some of these defaults at build time. The layered approach we'll jump into on the next slide, but the lowest layers, think of getting lower and closer to the hardware, are intended for use by expert applications or things in constrained environments: microcontrollers, really embedded stuff. That means a minimal set of dependencies; if you're going to statically link a lot of this together, you don't want to have to statically link in tons of things, and it would be particularly problematic if those APIs required access to the file system. The upper layers provide convenience functions, and again, more features means more dependencies. This diagram comes pretty much straight from the spec; it's the little layer cake that we build up from the bottom. The lowest layer, the device driver layer, really doesn't have a standard interface; it depends on the operating system or environment you're in. In UEFI you get access to the TrEE protocol, which I guess has been renamed the TCG2 protocol now that it's been standardized, even though it didn't change at all. On Linux, the device driver exposes a device node, and on Windows you get TBS. I'm sure there are a number of other interfaces that may get written and exposed there. So the device driver has a non-standard interface, and this, again, is why we wanted to separate out the transmission layer.
The device driver may or may not be packaged with the access broker and resource management component. In 1.2 that was a user-space process; on Windows it's built into the kernel driver; and on Linux we're moving closer and closer to having that full set of functionality in the kernel as well. Right now the resource manager implementation in the kernel is very, very primitive, so we still maintain a daemon that does this in user space, and we're slowly migrating features into the kernel as that makes sense. The glue that holds these layers together is the TPM Command Transmission Interface, the TCTI. If your resource manager is in the kernel, you really don't have to worry about much: you just put a TCTI layer on top of it, and it plumbs everything into the upper-layer APIs. If your resource management is done by a user-space process, that process has a TCTI layer underneath it, exposes a front end, and you put another TCTI on top of that. You could really stack these things ad infinitum, though I don't recommend you try. The top layer here is the stuff programmers care about: the APIs that programs generally interact with. Working from left to right, the System API is our lowest layer. It's an extremely thin layer on top of the actual TPM commands. Every command is exposed, there is a synchronous and an asynchronous version of each, and the System API does very little more than turn C structures into a TPM command byte stream, send it out through the TCTI it's been configured to use, get back the response, turn that back into C structures, and hand them to the caller. Extremely simple, but extremely powerful. Moving up and to the right, the Enhanced System API is really what it sounds like.
It enhances the System API with some really nice convenience functions, which mostly automate the crypto operations for HMAC and encrypted sessions. It will also do dynamic loading of TCTI modules, and this is where some of that configuration comes from: if you're on a properly configured system and your distro has set up the packaging right, they can choose the default TCTI that makes sense for their system, and you won't even need to tell ESys which TCTI you want. You can just pass a null pointer and it will pick the right one for you; you can still override that by initializing one yourself and passing it in if you need to. The box here in gray is the one that still isn't done: the Feature API is meant to abstract the TPM even further, but that stuff is still very much under consideration. So now that we've broken down the layer cake, let's look at what these things look like when we put them into an application. Just as I said on the last slide, the System API transforms C types into TPM command buffers; it's a one-to-one mapping with the commands, and it's suitable for highly embedded applications. We don't recommend you use it from general user-space apps, but for quite a while it was all we had, so we were actually doing that, and I can tell you it's very painful in that role. Internally it uses another utility library we've defined, which is really just where we do the type marshaling: for every type in the TPM spec there is a corresponding marshal and unmarshal function in this library, so the System API really just does a little bit of state tracking internally and calls out to the marshaling library. Everything then gets pumped out through the transmission interface at the bottom, sent over whatever IPC mechanism down to a device driver, and you get your response back.
So we have a bunch of these TCTI modules, and we can remix them very dynamically; you don't even need to recompile the application to change them. They let you talk to the device driver; we have one for talking to the Microsoft simulator, so you can run the TPM as a user-space process, which is excellent for debugging, and you end up having to do a lot of that when you're working with this stuff. We also have one that talks to TBS, so you can use the stack on Windows as well. Adding the Enhanced System API layer on top of this brings in some additional dependencies, mostly in the form of crypto libraries. The current implementation supports libgcrypt and OpenSSL, and adding additional libraries isn't an enormous burden; there are well-defined interfaces inside ESys that can be used to add additional crypto backends. This is really what you want to use if you're writing a general user-space C application right now; it's what we're recommending to folks, and we added it fairly recently. It was a really big development for us. Now, nothing strictly says you have to build these this way: you could have a completely separate System API library and reimplement ESys entirely without it, but that doesn't make a lot of sense to me. We use ours internally, and it's a great way to get better test coverage, since you just have fewer code paths. Lastly, before we're done with the architecture side of it: resource management is really interesting and unfortunately complex. TPMs are really small and really inexpensive, with RAM on the order of a few kilobytes, and the TPM itself has no notion of users; it doesn't know where commands are coming from.
And so the resource management component, whether it's a daemon or in your kernel, is really there to make sure that objects you have previously loaded, and are going to use again, get loaded for you. It uses just three TPM commands to do this, and the job really comes down to saving and loading contexts on behalf of the caller. This gets you a certain kind of isolation from other users of the TPM: the resource manager loads your contexts for you, and when you're done it unloads them, so that when it executes another command, your objects are not available to it, because it may not be coming from you. The resource management component is usually the part that understands the client connection, so it knows the difference between where commands are coming from. As I said earlier, we're starting to move this functionality into the kernel device driver, not as quickly as I would like, but there's just a lot of work to go around. So, the software stack from the open source implementation side: this is the community we're trying to build, the adoption we're seeing, and some of our success stories along the way. When I inherited this project, almost three years ago now, I was definitely not the first person working on it. It was a prototype developed by a team that came before me, who all magically decided to leave at the same time, so the whole thing just kind of dropped on the floor. To my boss's credit, no one said, "Hey Phil, go clean that up"; they said, "Are you sure you want to clean that up?" And this stuff seems pretty important to me. I've written systems that use TPM 1.2; it has some really interesting properties, and I think the open source community needs this stuff available, and done right, because it's plumbing, and you just want your plumbing to work.
But when I picked it up, I had to work through stability and reliability: how do I get this thing into a shape where I can actually support it? What happens if someone submits a bug and I don't even know what that part of the system does? So the initial year was just triage: pay down technical debt, identify code that's a liability I know I can't support, and come up with a way to remediate the whole thing. That really comes down to making it debuggable: if someone sends me a bug report for a part of the system I've never worked in, how do I at least get meaningful information out of it to find out where the problem is coming from? A lot of this came down to having the right tools for the task. When I inherited this, the build system was one handwritten makefile, which doesn't make it particularly easy for distros to actually package the stuff, so the first job was finding a real build system. CMake would have done the task just fine; I chose autotools. They both have their problems, and they both have their strengths. Some things I just cut off. My predecessor leading the development effort was very much a Windows developer, even though this was really designed to be used on Linux, so he did all of his work in Visual Studio. The first thing I did when I came in was look at the Visual Studio files, say, "I haven't looked at this in 15 years," and delete them. That didn't make me particularly popular, because it turns out there was a group in Intel that was actually using that stuff; that was how I found out, and that's how we got our remediation sorted out. They weren't even using the latest version anyway, so I got to ask, "When are you going to update? When do I have to have this ready?" And then I went back and learned a lot about Visual Studio, about building DLLs, and about doing continuous integration on Windows.
We just kind of moved it forward, and now our Windows support is actually a lot better than it used to be. Some things had to go entirely, though. The original resource management daemon was an absolute disaster, so I deleted it and started over. It was so bad that the people using it inside Intel, on a high-priority project, had problems with it; they threw me on a plane and sent me to Poland for a week to help them figure it out. After we worked through their issue, I went back, deleted it, and started rewriting. All of this boils down to trying to build a healthy open source project: something that looks healthy from the outside, so that when people look at it they'll say, "Yes, I actually think I should use that. I'm willing to build something that depends on it, and I'm pretty sure it'll still be there when I need it and that it'll get updated." Success metrics are about adoption, and your end user in a lot of these cases really isn't the person sitting at the keyboard; they don't compile source themselves a lot of the time. The distro packager ends up becoming your real target audience: you want the packagers to be happy, and you want packaging this stuff for the distro to be very easy. How you communicate with them about changes matters, so semantic versioning is very important; when you actually break API, make it very clear in your version numbers. Testing was super important too. The test code I inherited was a single 9,000-line C application, where later tests depended on state set up by previous tests. So we yanked all of that out, decomposed it, put it into a test harness built into the build system, and separated unit and integration tests.
There really weren't unit tests to begin with, but I'm now a big fan of cmocka, which is really, really cool. The integration tests actually get their own instance of the TPM, a fresh simulator for each test run, with a harness that sets it up and tears it down and gets you meaningful logging when something breaks; then it's all tied together with a CI loop. For a while this was really a one-person task, because I wasn't just trying to attract users; I was trying to attract contributors, inside Intel and outside, because I couldn't do it alone. So we've got Travis CI and Coveralls, so we're testing and getting metrics for our code coverage; our goal is to have everything above 80%, and I think only one part of the project is below that now. We set up static analysis so that Coverity runs on these things, and we use scan-build as well, so that's two different static analysis suites. Everything is now up on GitHub, and we actually have an organization for the project that's separate from the Intel real estate, because we've gotten some very significant contributions from the outside, and that just makes sense when you've received a large contribution like the ESAPI layer. We've been working with folks inside the TCG, and our friends at Fraunhofer actually had a stack that was built completely separately and was just not open source. When they saw how far the project had come, they looked at it and said, "You're missing ESAPI; that's the only thing you're missing." We worked out an agreement, and we've had some really good chemistry with the developers over there.
And they actually took their ESAPI, lopped off the lower parts, open sourced the higher-level API, and rebased it on top of the lower parts of our stack. That's really the thing that made this as successful as it is, because the System API alone is not sufficient; we need ESAPI, and that's been a big part of this. So we've got a mailing list and a repo with the core libraries, which includes the programming APIs and some of the transport layers. We have a set of command-line tools that I'll talk about later, an OpenSSL engine that's out now, which I learned a lot about in preparing for this talk, and a resource management daemon, as I was saying. We now have maintainers who aren't just from Intel: I've managed to get a bunch of folks from OTC, Intel's open source team, to come on board and help, and we've also gotten folks from Fraunhofer SIT, and we have a maintainer from Red Hat as well. I think we've come a really long way in the last two years. Finally, we've also gotten some really large contributions from the folks at Infineon. Peter Huewe, a previous maintainer of the kernel TPM driver, was the one who helped me get Coverity set up, and he was really the one who put the spurs to me to make sure it happened. Facebook was actually one of our first really big-name users, and they did a pretty significant deployment on top of this stuff. And we've gotten patches from Alibaba, Red Hat, GE, SUSE, and Debian, so we've had some really good community involvement that I'm really proud of. We also have a bunch of new projects in the works: a PKCS#11 module that's getting ready to be open sourced, and I've written a transport driver that can be used in UEFI, which I'll talk about a little later on.
We have a set of patches that Fraunhofer is putting up for cryptsetup integration, and we have, basically, a response code decoder. That was one of the first things I wrote when I came on board, but as folks from OTC came on, Bill Roberts looked at the tool and said it was super useful, but we should probably make it a library, so people don't even have to feed raw binary RCs into a tool; we're working on standardizing that API right now. I've already been dropping names like crazy up here, but I'm going to keep doing it. Packaging for distros is a really big deal for us. I can't tell you how many "I ran make and nothing happened" questions we've gotten from users on the mailing list, and being able to tell them to just download it through their package manager is really, really important. Back in June I did our 2.0 release, which should make it into RHEL 8; unfortunately, we missed the SUSE Enterprise 15 deadline, so we're going to be a little bit behind there. You can't win them all, I guess. 2.0 is our really big API-compliant release. When I took the project over, I had stamped a 1.0 release almost immediately: "This is apparently our API right now." Going through it and figuring out where it lined up with the spec, we found some deviations, and changing your API is not something you do lightly. So we spent about a year working on the 2.0 release, and it's a lot of management when you think about having a release branch around and not being able to release from your master branch for a long time. We're very, very happy to have the 2.0 release out now, and hopefully, unless the spec changes, 2 will be our major version number for a very long time.
Also, Red Hat is doing some integration with their Clevis system. It's not something I'm entirely familiar with, but as I understand it, and there's a link here that should tell you more about it, it's a pretty interesting system that's now using the TPM for some key management and protection. I think that's the direction things will be going: now that we have stable APIs, our next goal is to start integrating this stuff into the core Linux platform, so that we can actually benefit from the TPMs that are on all of our systems. strongSwan has been one of the TPM's most prominent users in the Linux community. They had a 1.2 implementation that they used for protecting client-side keys; they've updated it to use our TSS as well, and they've even kept pace with our 2.0 release. So strongSwan is really one of the early adopters, and they're doing really good stuff. Personally, I have a soft spot for OpenEmbedded, so I maintain a layer with all the recipes for this stuff. I haven't updated it to 2.0 yet, but I get pretty much an email a day from random places on the internet asking when I will, so I think that's probably going to happen sooner rather than later. And really, I think the right way to handle OpenEmbedded going forward is to get rid of the separate layer where those recipes live and push them as far upstream as they can go; that's probably going to be a pet project of mine for the fall. If anyone's interested in that, by the way, I would love some help. So, the changelog for our 2.0 release: we've got compatibility with the TPM 2.0 specification, revision 1.38, plus a couple of extra commands from the draft 1.46 revision, specifically the attached component commands.
This is a really interesting feature that's been added to the TPM spec quite recently, and I think it has a lot of potential, so if you're interested in some new and wild things the TPM might be able to do for you, the AC commands are something you may want to take a look at. We've added a bunch of libraries: the type marshaling library is new, and again, the ESAPI implementation came via Fraunhofer and our collaboration with Infineon, which is another reason I think this project is viable for the future. I've added back all that support for Windows; we have a TCTI that talks to TBS, and no changes were required in any of the other libraries. I learned in the process of doing this that the Visual Studio compiler isn't really C99 compliant, whoops. Once I figured out that that's where all my errors were coming from, I figured out how to get Clang working in Visual Studio, and the world is a much better place for it; if you do stuff in Visual Studio, I highly recommend figuring out how to plug Clang into it. We use AppVeyor for CI, so we do CI builds for Windows there. I have not yet gone through AppVeyor or Visual Studio to figure out how to set up our test infrastructure, so we build for Windows, but we don't have our test harness running on Windows yet. Okay, the final push for this talk is use cases, and I've got about ten minutes left, so that should be sufficient; let's find out. When people show up to the project, there's a path they take: they try to build it, it fails, we explain how autotools work and how to install the dependencies, they get it built and installed, and then they say, "What do I do with this thing now?" They may know what it's good for, but they don't necessarily know how to program it to do something useful. In our efforts to reduce the learning curve, we want people to be able to do something without writing a pile of code. We can communicate what TPMs are good for, and there's a lot of good information out there about it: data protection for keys, and data protection for whatever you want to stick into NV storage; we've seen a couple of talks about that. Attestation is interesting, but, unfortunately, people build this, install it, say, "Now I'm going to write my attestation service," and then usually just disappear, because it's a huge undertaking. So really, we want people to start out doing basic crypto operations: how do I get the TPM to create a key for me, how do I load a key, how do I get it to sign something, how do I verify that signature, and maybe even do that without writing code. This is where the tpm2-tools project comes in. Again, this was part of what I inherited, and it's changed significantly over time; OTC has taken it over. It started out as almost a clone of the TPM 1.2 tools, which unfortunately had a lot of TPM 1.2-isms formalized in them, so we've been going through, stripping those out, and making almost a one-to-one mapping to the TPM2 commands. When you're running the command-line tools you can literally see: it does this thing, and if you dial up the logging you get to see the full command buffer that gets sent out over the TCTI, and the response that comes back, so we can use this as a teaching tool. These are literally the steps: you create a primary key in the storage hierarchy, then create a subkey that you can use to do specific things, then load the subkey we created. We can calculate a hash of a document, or some kind of message, using OpenSSL, so we just
create the hash locally. We then use the TPM to sign that hash with the key we just created. There's a format option you can see here that outputs the signature in DER format, a format that a lot of other tools will recognize and use. We then extract the public portion of the key, also making sure it's in DER format, and use OpenSSL to verify the signature on that hash, making sure it matches the document or message we calculated it from.

When you look at how systems like strongSwan use this, this is pretty much everything that's happening. You've got a key you're protecting in the TPM, and you're going to negotiate a TLS session, or rather, I should say, an IPsec session; as part of that exchange you've got to prove who you are, usually by signing something with a private key and sending it back. That's exactly what this is, just done locally on a command line.

Now, you'll notice there aren't many options in use here, so we're relying on a lot of defaults. I think it creates a SHA1 key, which you may not want to do; SHA256 may make more sense. You'd also want restrictions so that you actually have to authenticate yourself to prove you're authorized to load and use the key. None of that is being done here: there's no password on the key, so anyone who has this blob of the key could load it and sign stuff with it. So this isn't particularly meaningful as far as providing a security guarantee, but mechanically these are the steps you'd go through to do a basic sign operation on the TPM and have it verified using a different tool. That's something I think we've been missing for a long time. And you'll notice the bullet at the top there: this was a demo put together by Davide, a Facebook employee, and he presented it at FOSDEM in 2017.
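The steps described above can be sketched as a shell session. This is a sketch, not a canonical recipe: it assumes tpm2-tools 4.x-style option names (the flags have changed significantly across releases, so check the man pages for your version), a TPM or simulator reachable through the default TCTI, and arbitrary file names. Like the demo, it uses defaults and no auth values, so it illustrates the mechanics rather than a real security posture.

```shell
# Create a primary key in the owner (storage) hierarchy.
tpm2_createprimary -C o -G rsa -g sha256 -c primary.ctx

# Create a signing subkey under it, then load it.
tpm2_create -C primary.ctx -G rsa -g sha256 -u key.pub -r key.priv
tpm2_load   -C primary.ctx -u key.pub -r key.priv -c key.ctx

# Hash the message locally with OpenSSL; no TPM involved here.
echo 'a message worth signing' > msg.txt
openssl dgst -sha256 -binary -out msg.hash msg.txt

# Sign the digest with the TPM-resident key. The plain output format is
# the one other tools can consume; exact flags vary by release.
tpm2_sign -c key.ctx -g sha256 -d -f plain -o msg.sig msg.hash

# Export the public half (PEM here, for OpenSSL's convenience) and
# verify with plain OpenSSL. Prints "Verified OK" on success.
tpm2_readpublic -c key.ctx -f pem -o key.pem
openssl dgst -sha256 -verify key.pem -signature msg.sig msg.txt
```

The point of the exercise is the last line: a signature produced inside the TPM checks out under a completely TPM-unaware tool.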
If you want to see where we've come in pretty much just a year, you can line this up next to what he had to do to make this work, and it's pretty stark. He has some commands where he's literally going through the public key that the TPM dumps out, grabbing specific portions and sticking them together in a way that turns into the DER format. We're trying to pull that stuff internal and make it part of our system. I should also say we're trying to learn from the people using this: this guy at Facebook didn't do it because he was bored, he did it because he needed to and it was something useful, so we're pulling it in and making it easier for him.

An example using the OpenSSL engine is right here, and it's almost the exact same demo as before. We have a separate utility that's used to create the key, because apparently you can't do that through the engine; again, I didn't write this, I learned about it last week. We then use the OpenSSL engine to output the public key in PEM format, so now OpenSSL is doing that for us. We use OpenSSL to hash the document, which doesn't require the engine, then sign the hash using the engine, and verify the signature using straight OpenSSL again. A couple of the TPM commands you saw earlier, creating a primary key, creating a subkey, then loading the key, all get done by the OpenSSL engine for you, so you don't have to deal with that part of it, which is nice; it's nice not to have to even understand or know what that is.

Finally, I wanted to be able to show this, because the use case for the system API is one that's particularly interesting to Intel, but we never really had an implementation of it that we were showing off.
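The engine-based flow can be sketched roughly like this. This is a sketch assuming the tpm2-tss-engine project's tooling (the `tpm2tss` engine and its `tpm2tss-genkey` key-creation helper); exact option spellings may differ between releases, and the engine-backed steps need a TPM or simulator behind them.

```shell
# Create a TPM-resident key. The engine can't create keys itself, so the
# project ships a separate helper utility for that step.
tpm2tss-genkey -a rsa mykey

# Use the engine to emit the public half as ordinary PEM.
openssl rsa -engine tpm2tss -inform engine -in mykey \
            -pubout -outform pem -out mykey.pub

# Hash locally (no engine needed), then sign the digest via the engine.
echo 'a message worth signing' > msg.txt
openssl dgst -sha256 -binary -out msg.hash msg.txt
openssl pkeyutl -engine tpm2tss -keyform engine -inkey mykey \
                -sign -in msg.hash -out msg.sig

# Verify with straight OpenSSL, no engine or TPM involved.
openssl pkeyutl -verify -pubin -inkey mykey.pub \
                -sigfile msg.sig -in msg.hash
```

Compared with the earlier tpm2-tools version, the primary-key creation, subkey creation, and key loading all happen behind the engine, which is exactly the part most applications shouldn't have to care about.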
And to come clean on my earlier pontification about API design: we said this was the intended use case, but really you've got to put your code where your mouth is and get it out there so people can see it. So I've started a new project, which I got approval to open source just last week. All it is is a TCTI layer that sits on top of the TCG2 protocol in UEFI and enables the use of the system API. What's available through the TCG2 protocol alone basically boils down to a bunch of commands to query the protocol itself to figure out its state, some things like PCR bank management that no one writing a UEFI application will ever actually use, and then one command to send a raw buffer that someone, somewhere, has already crafted. So we use the system API to craft that buffer for us. There's an example application in there that builds a simple UEFI executable; you can drop it into a FAT32 partition, load up the EFI shell, and run it, and you can see the difference between what the get-capability command in the TCG2 protocol shows you, which is information about the protocol, and what the GetCapability command sent to the TPM gets you, which is information about the guts of the TPM.

That stuff hasn't been brought into our main project yet, because we're not exactly sure how it will fit in there. A lot of this, for me, was an exercise in learning the ridiculous things you have to do to build an EFI executable, and with the way our libraries' build works right now, we can't build them that way yet. So we're not perfect: we've slipped up a little on our build hygiene, and we're forcing flags on users when they may not want them. That's on me to go clean up, now that I've stumbled across it. I think this could be really interesting.
There were some talks earlier about firmware people using the TPM in firmware. This isn't a model for doing that efficiently, for sure, because everything gets statically compiled into a single executable, and if you build more than one executable you'll end up duplicating code across them. But I'm sure there's some give and take where this might actually be useful, so if anyone thinks this could be something they'd use in their day-to-day work, I'm happy to chat about it. And that is it. I've managed to come in under the 40-minute mark, so I can handle a couple of questions. Oh, also references: some of the references from the talk are up there as well, take a look. Oh, that was easy. Thanks. Oh no, sorry, we've got one. Two, even.

Q: Have you looked at trying to integrate your TSS2 library with commonly used open source crypto programs, like, say, SSH and OpenSSL? James Bottomley has done that with the IBM TSS library, and to be honest that's the only reason I'm using the IBM TSS library. It sounds like I would much rather use this, but the SSH integration is kind of cool.

A: Great, so there's a handful of questions in there. On one of the previous slides: PKCS#11. That gets us a lot of free integration with things like SSH; if you've got a PKCS#11 module, SSH can use it for your authentication. We had someone build a PKCS#11 module in the past, and a demo that did exactly that, but the code wasn't in a state where we were willing to open source or support it, so we're rewriting it now. The second part of that, sorry, you've got to remind me; there was one other part, about OpenSSL integration. We have the engine, and I'd be interested to know what else we're missing. I'm not an OpenSSL expert, and I didn't write the engine; that was another thing Fraunhofer contributed to the project. So find me on the mailing list, or if you want to meet up afterwards, and I'll take down a
pile of notes; we'd love to have it be your TSS of choice.

Q: Two-part question. The first one: are there any good videos you'd recommend on boot integrity and firmware security?

A: Sorry, can you say that one more time?

Q: It's a leading question: on firmware security and boot integrity. And then I have an actual question.

A: Thanks. So, like I was saying, I plugged the Platform Security Summit earlier. There were a lot of great talks there, including some good ones about LinuxBoot; we had Trammell Hudson there, and Vincent Zimmer was there as well, and I think his talk is phenomenal.

Q: TPMs have sometimes had a bad rap in the Linux community. Do you have an opinion on why, say, Google is doing things like Shielded VMs without a TPM, using their own root of trust, while on Windows, System Guard is doing quite a bit of stuff with trusted boot? Do you think the Linux community is going to follow these examples?

A: Well, I was just going to say no, but they are examples: we can choose them as a model and see if they fit. It's nice to have people who have done the work ahead of you, so you can look at them and say: that worked, that didn't work. It's all about the properties you want from the system, and that's really a question I'd love to hear some of the folks in the distro community talking about: how the Red Hats of the world intend to do something maybe BitLocker-ish. I think that's probably the first and easiest step; anything more complicated than that might be useful too, but start small, start simple. All right, well, thank you.