Hey, welcome back from the break. We will now start the tutorial, presented by Monty Wiseman and Avani Dave: complete platform attestation, remotely verifying the authenticity and integrity of your platform's hardware, firmware, and software. That's actually not the longest talk title we've had at this conference; I think there was a longer one once, something on nation-state security. OK, the crowd is shrinking, so let's get started. As you said, my name is Monty Wiseman. I'm from GE Research; many of you know me from my time at Intel, but I moved over to GE to work on industrial controls and try to secure critical infrastructure. With that in mind, one of the things we think is really important is to actually use TPMs for attestation, and for more than just sealing data. So we've been looking at this project, and I want to turn it over to Avani, who was our intern over the summer, working on some of the research into the current state of attestation. We're going to do a survey of what attestation tools are available today. It's not a complete list, but it is a partial list, along with some of the work we're doing in TCG in an effort to standardize some of the data structures, so that we can continue to make forward progress on this attestation program. Thank you, Monty. So, an attestation overview. As the title says, we are trying to demonstrate proof-of-concept work for hardware, software, and firmware attestation with a verifier on one single platform. Just to give some overview of attestation: a TPM supplier signing certificate will be issued by the TPM vendor, which will create an EK certificate on the TPM hardware itself.
Now, the platform that the TPM module ships with will have its own platform supplier signing certificate, which will be bound to the EK certificate of the TPM placed on that platform. This binding between the platform certificate and the EK certificate is a must for platform authenticity, for attestation. That's part one, hardware attestation. Then comes the firmware. Is it OK? OK, yeah. For the firmware part: after we say the platform is attested, the firmware BIOS boot process will take place and generate certain logs. Those are the event logs, in the TPM 2.0 event log structure, which is the CEL, canonical event log, structure. We need a verifier which will verify first the hardware root of trust, then the firmware-based root of trust. Then comes the IMA part: integrity measurement architecture. IMA event logs will be generated; after the boot process finishes, IMA takes the measurements of different modules and creates the IMA measurements that go into the IMA event log. That is the software event log structure. So if we say our platform is attested or verified, that means we need to make sure the hardware, the firmware, and the software, up to the level we verify, are all authenticated. There is a RIM firmware specification, the reference integrity measurement firmware specification, which Monty is working on with NIST, and he will explain that new standard, which is coming up. There is also a new standard for the event log CEL format, canonical event log for TPM 2.0, and there is an IMA CEL format. We are working on getting these three pieces combined together for end-to-end attestation of the platform. The contribution we make here is that we leverage a tool already developed by the NSA called HIRS.
What it does is: it takes the TPM vendor's EK certificate and authenticates it for the verifier. Then it takes the platform certificate and the root certificate, verifies the platform certificate's signature against the root, and marks the hardware as verified. The next part, where our contribution comes in, is that we leverage the patches developed by Matthew Garrett of Google, and on top of them we developed a utility which converts the firmware event logs per the new TCG 2.0 specification. Also, the team at GE that I work with has developed IMA patches which do the CEL conversion, the TLV, tag-length-value, format conversion of the IMA event logs. Monty and David Safford at GE presented the IMA event log patch and utility at LSS 2018. So we are combining the IMA event log utility and the kernel patches for the CEL event log together, to have a unified place for single-platform verification. Now, going into the background: do we need some background on TPM? It's an audience question; if you say yes, I'll hand over to Monty. Do we need some basics? No? OK, then we'll go on. So, some background on the platform boot process, in order to understand how event logs are generated, what we are proposing here for the new standard, and where our utility helps. When the system boots, the BIOS reset vector runs. After that, it goes to the static root of trust for measurement. It takes the hash of firmware component 1, extends it into a PCR value, and creates an entry in the event log structure for that particular PCR extension.
Similarly, the chain of trust is established in the boot firmware by having the next level of firmware's hash measurement extended into a PCR and an event log entry created for it, and so on and so forth. Through the BIOS boot process, it creates a hash for each component, extends it into PCRs 0 to 7, and generates event logs in firmware memory. After that, the bootloader runs: it creates a hash of the shim, stores it into PCR 8 or 9, and creates an event log entry for it. Next is the extended boot process: GRUB 2 loads, and whichever kernel you want to load, it loads that OS and creates an event log entry for that as well. Next comes IMA; this is where the OS part comes in. After the OS boots up, IMA takes the hash of the parts you want to make sure are authenticated, stores the measurements, extends them into PCR 10, and creates an IMA event log of what is measured at that stage. Now, what is currently available in IMA is in one log format, and the event logs for the boot process, the BIOS event logs, are currently in a different format. The goal of this presentation, and our contribution, is to convert both the firmware event log structure and the IMA event log into CEL, the canonical event log format, which is a new standard; TCG has written it and published it for the PC Client specification. Following that standard, we created a utility on top of Matthew Garrett's patch to do the firmware event log conversion, and then, with last year's patch and utility, we convert the IMA event logs into CEL format. With HIRS and these two utilities, we can say that we verify the hardware, then we verify the firmware, and then we verify the software running, for the portion covered by IMA. That is our end goal, and the contribution is to convert everything into the new CEL standard which is coming up. That being said, we'll go to the next part.
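The measure-extend-log loop just described can be sketched in a few lines of Python. This is a minimal illustration, not real firmware code: the component names and PCR assignments below are made up, and only the SHA-256 bank is modeled.

```python
import hashlib

def extend(pcr_value, measurement):
    # TPM extend operation: new PCR = SHA-256(old PCR || measurement digest)
    return hashlib.sha256(pcr_value + measurement).digest()

pcrs = {i: b"\x00" * 32 for i in range(24)}   # PCRs start at all zeros
event_log = []

# Hypothetical boot chain: each stage measures the next, extends a PCR,
# and appends an event-log entry describing what was measured.
for pcr_index, component in [(0, b"firmware-component-1"),
                             (0, b"firmware-component-2"),
                             (8, b"grub2"),
                             (9, b"vmlinuz")]:
    digest = hashlib.sha256(component).digest()
    pcrs[pcr_index] = extend(pcrs[pcr_index], digest)
    event_log.append({"pcr": pcr_index, "digest": digest,
                      "event": component})

# A verifier replays the log and compares against the quoted PCR values.
replayed = b"\x00" * 32
for e in event_log:
    if e["pcr"] == 0:
        replayed = extend(replayed, e["digest"])
assert replayed == pcrs[0]
```

The key property is that the log alone is untrusted; it only gains meaning because replaying it must reproduce the signed PCR values.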
Let's verify the hardware, the first part. How can we say that the hardware we are trying to verify is actually attested, that it's legitimate? Here's the value proposition. We get some hardware with its TPM module from some vendor, and we trust what they provide. It ships through some supply chain, goes through some warehouses, and arrives at the installer's location. By default, today, we trust all the levels in between, and we say that if it has a public key for the AK, the attestation key, then we use that at the installer location and say that it is authenticated. The problem is that if we don't have a root of trust established between the platform supplier and the EK certificate we get from the TPM module, then that opens the door to counterfeiting. With this particular root-of-trust mechanism and HIRS, we can reduce the cost and increase the trust between the platform supplier and the TPM vendor across the supply chain, so that the operator, the plant operator, can verify all of this with HIRS. That's the proof-of-concept work; you can verify it, and this can be beneficial. Next is the EK-to-platform-certificate binding. As I said before, the TPM supplier provides its TPM supplier certificate, which is loaded onto the TPM module itself along with the TPM attributes. Then the platform supplier provides its root of trust measurement and a platform certificate. This platform certificate references, is bound to, the TPM actually loaded onto that platform. So now a vendor has a TPM which is associated with this particular platform; we are making a binding between the TPM vendor's certificate and the platform certificate. That root of trust measurement is required in that case.
So the EK certificate is generated as the root of trust for measurement. Now, the TPM supplier certificate and the platform supplier certificate are transferred over a one-time trusted channel to the platform owner. Once the platform owner gets both certificates through this channel, we validate them: we verify the platform and the TPM module against those certificates, and we say that they are attested. Now, this mechanism can be scaled up; actually, one more slide here. With this same method, we can validate multiple platforms, each with its own EK and platform certificate. Say thousands of devices come into a warehouse the same way; we are enlarging the concept of validating the EK-to-platform binding, confirming that this particular TPM is tied to this particular platform, and in a warehouse there can be thousands of devices coming in, and we validate the platform as well as the EK for each. One thing, going back one slide, to explain how ownership, the platform AK, is established after that. As I said, the platform binding with the EK certificate is done. After that, we do the association of ownership to that particular TPM module. The owner gets the key certificate for an attestation key, which will be used to verify this platform. The flow is: verify the platform certificate signature first, then verify the EK certificate signature, then verify that the EK belongs to that platform. If all of them match, then we can say, OK, now we can generate an AK, an attestation key, for that particular platform. That is the owner-establishment part. After that, OK, so now we go to the demo: first HIRS, and then the demo of the firmware event log.
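The three-step binding check just described (platform certificate signature, EK certificate signature, EK-belongs-to-platform) can be sketched as follows. This is a toy model, not the real X.509/attribute-certificate machinery: the CA keys are made up, and HMAC stands in for a CA signature purely for illustration.

```python
import hashlib
import hmac

# Hypothetical CA keys; real deployments use asymmetric CA signatures.
TPM_CA_KEY = b"tpm-vendor-ca"
PLATFORM_CA_KEY = b"platform-vendor-ca"

def sign(key, payload):
    return hmac.new(key, payload, hashlib.sha256).digest()

# EK certificate: binds the EK public key to the TPM, signed by the TPM vendor.
ek_pub = b"EK-public-key-bytes"
ek_cert = {"ek_pub": ek_pub, "sig": sign(TPM_CA_KEY, ek_pub)}

# Platform (attribute) certificate: references the EK, signed by the
# platform vendor -- this reference is the EK-to-platform binding.
plat_payload = b"platform-model-123|" + hashlib.sha256(ek_pub).digest()
plat_cert = {"payload": plat_payload,
             "sig": sign(PLATFORM_CA_KEY, plat_payload)}

def verify_binding(ek_cert, plat_cert):
    # 1. verify the platform certificate signature
    if not hmac.compare_digest(plat_cert["sig"],
                               sign(PLATFORM_CA_KEY, plat_cert["payload"])):
        return False
    # 2. verify the EK certificate signature
    if not hmac.compare_digest(ek_cert["sig"],
                               sign(TPM_CA_KEY, ek_cert["ek_pub"])):
        return False
    # 3. verify the EK referenced by the platform cert is this TPM's EK
    return plat_cert["payload"].endswith(
        hashlib.sha256(ek_cert["ek_pub"]).digest())

assert verify_binding(ek_cert, plat_cert)   # only then generate an AK
```

Only after all three checks pass would the owner proceed to generate an AK for the platform.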
And after that, the IMA event log. For the HIRS setup, I'm using a VM client, which is a CentOS 7 verifier, and on my local machine I have the HIRS Provisioner installed, so I'll provision a TPM with that. Side by side, I'm showing two things: on my local machine, I'm running the HIRS Provisioner, on this side in the terminal, on a device which has a TPM module. The device being provisioned must have a TPM module; the verifier doesn't need one, but for end-to-end verification, if you want, you can give it a software TPM, IBM's implementation of a software TPM with the TCG stack, IBM's stack, and you can verify quotes on both sides to say that it's attested. For this proof of concept, first: the HIRS Provisioner is started. In this step I'm running the Provisioner now. What it does is: it first goes to your system, which has the TPM module, configures the Provisioner, deletes any existing EK keys stored there, then provisions the particular platform, creating an endorsement key and sending it over to the verifier on this side, which is a VM in this case. And it uses a new nonce each time, so that every connection between the client and verifier is fresh. Now, the second part I want to show: here we have a policy set. I'll show you two cases, without any policies and with policies, where each certificate is evaluated, or verified. So, first, disabling all of them. As you can see, I have already preloaded three basic certificates into the verifier's store for demo purposes. Those are the endorsement credential certificates; this is the root certificate, the root CA certificate, which we get from the platform supplier. So, first, the endorsement certificate.
I have already put that into the verifier store for my platform, which is an Intel NUC with a TPM module, firmware version 0.22. So I have already put the root certificate into the store. Then, using a utility from the NSA called PACCOR, I created the platform credential, the validation certificate, and the signing key, the platform attributes, and I put both keys into the verifier's store. Now, when I run the verifier, the ACA utility, on the terminal, it will verify without any certificate checks at this moment, and I'll show you that in the UI. First without, and then in the UI. The last test we ran showed all three greens, because it was verifying the endorsement key, verifying the platform certificate, and checking the signing key, the platform attributes, as well. Now I'm running it with all three disabled, so you'll see a line which is blank, not checking any of the certificates. Again, if I refresh, yeah, it's not checking any of the certificates. Now one more run with the policy set so all three certificates will be verified. This gives a user, a verifier, a way to verify all the root certificates and the platform binding. So that first step, hardware root of trust verification, is done with the HIRS part. I've enabled all three and, as I said, the certificates are already loaded into the store. If I run it again and check the reports: all three are verified. So if there is any breakage, a counterfeited part, malicious code injected into the certificate creation, or something else along the line, we can say that the hardware is not verified at that point. Now comes the second part, which is firmware validation.
That goes into the CEL format; the next demo is for that. Let's see if there are any questions on what she said. Yeah, this is a good breaking point if you have any questions. This is provided by the NSA's proof-of-concept code, and there's a link to it; that's what she's been playing with. But there's a little more to it if you want to ask any questions about where we are at this point, because we're going to leverage this in the next phase. Yeah, and to extend what he said: the NSA's tool is supported on CentOS 7 only. The problem we had is that CentOS's current mainline kernel version is 3.10, which doesn't even have the TPM event log support. In order to get the TPM 2.0 event log, you need kernel 5.2 or above, 5.3; so we had to either patch the kernel to 5.2 or higher, or port the tool to the Fedora kernel. What we did is port it to Fedora. So right now the NSA tool you saw here is running on Fedora, Fedora 30, and the ACA, the Verifier, is still running on CentOS 7; both are on different flavors, just a little detail on that. Questions? Yeah, as was said, we would like to see this be a little more portable, and one of the things we'd like to work on over the next year is to make it so that it can easily be moved from one environment to another. One of the problems, just as an aside, and this is why it's a little more restrictive, is that platform certificates are in fact attribute certificates, not key certificates. The only library we could find to manage attribute certificates was Bouncy Castle, so they were restricted to writing most of this code in Java. We can't seem to get the open source community to support attribute certificates; if you know anybody who can help us in that area, that would be really nice.
Then we could be a little more flexible and write all of this in C. So, going on to the next part, which is the event log, the event log 2.0 structure. What I'm doing is taking a fresh event log measurement from my system, from /sys/kernel/security/tpm0/binary_bios_measurements, and putting it into a temporary measurements file. What happened? OK, since I'm reading from sysfs, I need to be sudo. OK, so now I have the event logs I just captured in the temporary measurements file. Now I'm running the utility we developed to convert, or say parse, them into the new CEL, canonical event log, format. This is our utility; the source code and everything is available on GitHub, and we'll be sharing the GitHub link for you to try it out and give feedback. So now this is the new CEL event log format. There are some events which are humongous in terms of data size, and we do not know why they have event data this big. Yes, this is firmware. Yes, this is firmware, yeah. And this is the new CEL, TPM 2.0, event log structure. The first event is actually in the 1.2 format, and it gives us the information about which algorithms are supported. Here you see that it supports two algorithms, these being hashing algorithms, by the way: SHA-1 and SHA-256 are on. In the utility I have created an enum supporting, right now, SHA-1, SHA-256, and SHA-384, but if in the future you want to extend it, just add the enum value and it should be flexible enough to support that as well. Most vendors, or at least this particular NUC system which we evaluated, support only SHA-1 and SHA-256.
The BIOS will tell you what it supports, and you can turn the banks on and off and do the analysis the way we did. Here, 0x04 is SHA-1 and 0x0B is SHA-256, so two algorithms are advertised in the event log 1.2 header structure. But as you can see here, not all the events carry both algorithms; that's one of the observations we made when we got the log parsed out. We added a check at parse time on whether both digest types are present for each event. "Number of algorithms supported matches" means the event has both the SHA-1 and SHA-256 digests, because the header structure specifies that it has two types. There are some events where both algorithms are not present; at this moment, that's the observation we are getting. These are the BIOS event logs we are parsing, so OEM vendors need to have consistency across the different SHA supports: if they say the BIOS supports these hashing algorithms, then they should have consistency across all the events we are seeing here. That's one of the observations. There is also one more script provided with this utility to run it over multiple binary blobs. For the demo, I have provided a test-file folder which has a couple of binary blobs for different sets of algorithms, and you can run a report on those saying how many algorithms match or not. That's additional work you can evaluate. So that's the first part. Second is IMA. For IMA, we first have to sign files in order to verify the signatures, or say, get the IMA event log. What this utility does is, first, sign the files. I have it in a different terminal, actually. We have a shell script, already posted on GitHub, which does the signing process, but for the demo, what I'll show you is what I have already, OK.
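The per-event digest check described above follows from the crypto-agile event layout in the TCG PC Client spec: each event carries a count of digests, then (algorithm ID, digest) pairs. Below is a sketch of parsing one such record with Python's `struct`; the synthetic record and the consistency check are illustrative, and error handling is omitted.

```python
import struct

TPM_ALG_SHA1, TPM_ALG_SHA256 = 0x0004, 0x000B   # TCG algorithm IDs
DIGEST_SIZES = {TPM_ALG_SHA1: 20, TPM_ALG_SHA256: 32}

def parse_event2(buf, off=0):
    """Parse one crypto-agile event record: PCR index, event type,
    digest count, (alg-id, digest) pairs, then sized event data."""
    pcr, etype, count = struct.unpack_from("<III", buf, off)
    off += 12
    digests = {}
    for _ in range(count):
        (alg,) = struct.unpack_from("<H", buf, off)
        off += 2
        size = DIGEST_SIZES[alg]
        digests[alg] = buf[off:off + size]
        off += size
    (esize,) = struct.unpack_from("<I", buf, off)
    off += 4
    event = buf[off:off + esize]
    return (pcr, etype, digests, event), off + esize

# Synthetic record: PCR 0, event type 8, ONE digest (SHA-1 only), 4 bytes data.
rec = (struct.pack("<III", 0, 8, 1) + struct.pack("<H", TPM_ALG_SHA1)
       + b"\x11" * 20 + struct.pack("<I", 4) + b"DATA")
(pcr, etype, digests, event), _ = parse_event2(rec)

# The talk's consistency check: does this event carry every algorithm
# the header (SpecID) event announced?
announced = {TPM_ALG_SHA1, TPM_ALG_SHA256}
missing = announced - digests.keys()        # here SHA-256 is missing
```

A parser walking a real log would loop `parse_event2` over the buffer and report every event whose `missing` set is non-empty.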
With the current state of IMA event logs, the log is stored in memory and never gets released, because we don't have a sync-and-serialize process. With the CEL format, we take the IMA event log and then release the memory block after it has been serialized out. That means the memory is freed up; we are clearing that portion, and we have provided a utility for that as well. To demo the IMA part: I have already created the same data block as before, sorry, I was in the wrong folder, and that's why I wasn't finding the utility. So here, the IMA sign.sh script will do the signing first, on the files we'll be using for IMA provisioning, or say, IMA event log creation, eventually. It needs to be sudo. As you can see, it signs the files first. After that, we'll use the utility to run on the event log and parse it into CEL format. This will take a few minutes. Yeah, so while we're waiting, does everybody understand the issue about how IMA retains the log in memory? It's often been described as a memory leak. We had no way of extracting the log and then going back again and putting it together, unless you did a lot of managing of just blobs of data. One of the key advantages of the CEL, and she'll show this on a later slide, is that we're adding sequence numbers to each one of these events. So what she's going to show is: we're going to pull the event logs out, put them into this new format, and with this patch that David Safford has, it will actually free up the memory inside the kernel for the next set of measurements to come down. So you don't lose them; you have to save them in a file someplace, but since they're sequenced you can send them out.
You can store them on the local disk and send them out later for analysis, but because they're sequenced, you can now append them back together again. And, as we skipped over in the tutorial, as hopefully everybody knows, with the extend sequence you have to maintain the order or none of it has any value. We didn't grab the fastest machine for this, by the way, so that was one of our problems. (Inaudible audience exchange.) Yeah, it's just a file, but the integrity of the file is verified by the quote of PCR 10. So it's self-verifying; I mean, when you reboot, it starts at zero. Yeah, with David's patch, at each boot the sequence starts at zero again in the kernel, and then as the system runs it just continues to accumulate events until, let's say, we get to 580, to make up a number, on a really busy system. When you go read this with her utility: previously, when you read the pseudo file, it gave you everything, and if you read it again it was the exact same information, right? Now when you read the pseudo file, it returns only the information since you last read it; in my case, it'll start with sequence number 581. When you reboot, it goes back to zero. Yeah, so now this one is done; the signing is done. I already have tlv_data, which is a binary blob of TLV data. I'm running the tool against the TLV, and this is the sequence number we were just talking about, for the IMA event logs. Previously it was not there; with this new patch and the utility, we get the sequence number for IMA as well, and you can see that we do a check that the calculated PCR 10 matches the original one, the one which was supposed to be there.
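The value of those sequence numbers can be shown with a small sketch: log chunks read out at different times can be stitched back together, and any gap (which would invalidate the extend chain) is detected. The record dicts here are hypothetical stand-ins, not the real CEL layout.

```python
def stitch(chunks):
    """Concatenate CEL log chunks, checking the record-number sequence
    is contiguous; a gap means records were lost and the chain is void."""
    log, expected = [], 0
    for chunk in chunks:
        for rec in chunk:
            if rec["recnum"] != expected:
                raise ValueError(f"sequence gap at record {expected}")
            log.append(rec)
            expected += 1
    return log

# First read of the pseudo file returned records 0-2; a later read,
# after the kernel freed the earlier entries, returned records 3-4.
first = [{"recnum": i, "event": f"e{i}"} for i in range(3)]
later = [{"recnum": i, "event": f"e{i}"} for i in range(3, 5)]
full = stitch([first, later])
```

Without sequence numbers, two reads of the old pseudo file could not be safely combined; with them, the verifier can accumulate chunks over time and still replay the full extend sequence against PCR 10.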
So we validate that as well. And since this has a very large number of events in it, I won't scroll to the top, but yeah, all the events are there. So now, with these two utilities on both sides, IMA and firmware event logs, we have both in CEL format, the new TCG 2.0 event log structure. Here you can see that for the event logs we have the event numbers. I started with event number zero for the 1.2-format event, which says how many algorithms are supported and gives a little information about the platform, and then event number one is actually the first TPM 2.0 event, and so on and so forth; and for the IMA TLV, this is the parser for it. Basically, that's the demo part; going back to the presentation. Maybe, as a kind of summary, go back to the boot sequence. Oh yeah, sure, sure, kind of give a visual, yeah, that one. So, what we've done now: before, coming out of the firmware, the log in memory was basically a blob of C structures, which is how UEFI keeps it, and they're not even sequenced. So you break that blob of data up, and as you can see in the demo here, it was actually quite large, larger than I was expecting it to be. You've got to maintain that, and, I mean, we've obviously got tools that can digest C structures, but it's much better to be able to convey this information in some standardized format. We defined the TLV format, which I'll describe in a minute, but it's also just as important to make sure that the stuff coming out of IMA and the stuff coming out of the firmware are formatted the same, so we can have a common set of verifiers.
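A tag-length-value record of the kind described can be sketched as below. The field tags and the fixed one-byte-tag, four-byte-length framing here are illustrative, not the normative CEL encoding; the point is that a record number, PCR index, digests, and event content are each self-describing fields.

```python
import struct

# Illustrative field tags (not the normative CEL assignments).
TAG_RECNUM, TAG_PCR, TAG_DIGESTS, TAG_CONTENT = 0, 1, 3, 5

def tlv(tag, value):
    # framing sketch: 1-byte tag, 4-byte big-endian length, then value
    return struct.pack(">BI", tag, len(value)) + value

def cel_record(recnum, pcr, digest, content):
    return (tlv(TAG_RECNUM, recnum.to_bytes(8, "big"))
            + tlv(TAG_PCR, pcr.to_bytes(1, "big"))
            + tlv(TAG_DIGESTS, digest)
            + tlv(TAG_CONTENT, content))

def parse_tlvs(buf):
    off, out = 0, []
    while off < len(buf):
        tag, length = struct.unpack_from(">BI", buf, off)
        off += 5
        out.append((tag, buf[off:off + length]))
        off += length
    return out

# One IMA-style record: sequence number 581, PCR 10.
rec = cel_record(581, 10, b"\xab" * 32, b"ima-ng event data")
fields = dict(parse_tlvs(rec))
```

Because every field is length-prefixed, a verifier that doesn't understand a tag can still skip it, which is what lets firmware and IMA events share one parser.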
Yep, we have some screen captures, just as a backup for us in case the demo doesn't work, so we had them in the slides for people who want them. And yeah, there is a negative test scenario also, where the certificate doesn't match; you get a red indication on the HIRS Provisioner saying that the certificate doesn't match, so we covered that as well. And here are the HIRS captures with a bit of detail about the platform we provisioned: it lists the platform components, the red ones flagged, and the others; it gives the NIC card information, the drivers, and what's running on the platform. So if somebody changes a particular component of the platform, the verifier gets a notification of that, and can decide whether to trust it or not, or the root of trust needs to be re-established; in that case we can go and dig into the details for verifying the platform, and the EK and AK generation after that. Now, OK, yeah, so last year Monty and David Safford from the GE group did a presentation at LSS 2018; we have a link for that as well in these slides. There they explained the canonical event log record structure. As you can see, it has a record number and a PCR number, and these are all in TLV, tag-length-value, format. We showed you the first utility; now we're showing you the canonical event log structure, just to be on the same page: first the record number, then the PCR, then the digest (if it only supports SHA-1, that will be the digest), and then the event, which has the event size, event type, and the data; and these go into each CEL record. For TPM 2.0, if I have that open, we can verify it's in the new structure. So this is the structure for the event log, explained in the presentation itself: record number, PCR, digest, and the event content. Here is the link for last year's LSS presentation which we did; there
is a video and the slides available for people who want to review that material as well. We already covered the demo for both of them; now I'll hand over to Monty for reference integrity measurement, the next topic, a new standard coming up from NIST and TCG. Yes, we don't have any tools on this yet. This is at a very early stage of development, and what I wanted to do is at least introduce the concept we are working on for how to produce the reference measurements for the firmware. Right now our focus is entirely on the firmware. As Avani mentioned, there are some upcoming NIST standards that are going to start requiring OEMs to provide the reference measurements for their BIOS as it ships, and this is an effort to help OEMs provide a standardized way of doing that. So, what we're working on right now: we surveyed what we currently have, we being TCG, and there's actually a spec out there for reference measurements, but we looked at it and it's pretty old; it was pretty hard-coded to XML, and we thought hard-coding to XML might not be a 2019 solution. We also found only one vendor currently using it, I think it was Strungsone, and in some very limited cases. So with that, we were enabled, if you will, to start from scratch and provide some new thinking on this. In my question earlier to one of the other presenters, one of the things we started looking at was: let's not reinvent the wheel; let's start from something that's very much in use today and see if we can extend it, rather than creating yet another set of tools that people have to use. One of our problems, though, is that we're providing reference measurements for something that's very different from a file. But regardless, we thought, let's look at this and see if we can make it work. So we
took the notion of SWID tags and thought, well, they're very heavily used, and we think they're going to be used even more going forward. As I pointed out earlier, it's based on an ISO spec, but NIST was nice enough to produce NIST IR 8060. As any of you who have bought an ISO spec know, they're not cheap; you can go read 8060 and probably get about 95% of what you need without buying it, so it's quite a discount, thank you to the US government for doing that. Anyway, the link is there to go get it, and that's pretty much the standard we are working from as our basis. As I also mentioned in a question to a previous presenter, SWID tags are XML, so that alone wouldn't solve the problem; but if we start with an information model, which is essentially what we're doing with the SWID tag in this new format, we can say: this is the information you need to convey, and one of the ways of conveying it is this XML-based ISO standard. And thanks to a colleague of mine working in TCG, Henk, they are actually working on CoSWID, which is a concise binary format instead of the XML format; there's a link to it there. I just saw some recent news this morning; I think it's been promoted to a draft or something like that, so it's moving right along. So you're going to be able to represent this information, provided we continue with the SWID tag, which all indications are we will, as either XML or concise binary format, or any other format somebody wants to come up with; we envision, for example, a JSON format. Again, because we're doing this based on an information model of what you need to convey, it's up to a binding specification, a binding protocol, to say exactly how to map that onto a particular set of data structures. Part of the reason we want to stick with SWID tags is that there are a number of open source tools available, and
Although for OEMs this is kind of a different space, at least they can start from a common set of tools. OK, so as I said, there is a problem with SWID tags: if you look at their attributes, they have attributes for identifying files and where those files are located, but there are no attributes that map to something like a PCR index, which is critical, or a sequence number, if we want to provide a set of golden measurements, reference measurements, for all of the events Avani was showing you in her presentation. So we needed some other way to solve this. We entertained the notion of simply adding a bunch of new attributes; we'd have to either put them in a custom area or go to ISO and try to get a new set of attributes standardized. Neither of those seemed attractive: both looked like too much work, and we thought they would add a bunch of bloat, so we decided not to do that. How we decided to solve it instead: as has been said many times, all problems can be solved with yet another level of indirection. So this is our current proposal. On the right-hand side is what's called the base RIM. The base RIM is a SWID tag, and we haven't added any new attributes to it. What we're doing is using this thing called the payload, which I mentioned earlier; you can have multiple payloads inside a SWID tag. We could have added a new type of element there, a PCR attribute carrying a PCR index and an event, but as you saw on screen, those events can be quite large. What we decided to do instead was use what's already there: use the file attribute to point to something new. That's the level of indirection. So we're going to create a definition for these RIM support files.
The RIM support files will carry information like the PCR index, and again our focus right now is on firmware, so things like PCR index and sequence number, the information you want to convey. In a little more detail, there are two classes of information, shown on the slide: the OEM can provide the raw ending PCR values themselves, or the file can contain the list of events you would see, for example, in the display Avani showed. The base RIM simply points to an array of these support files. I'm only showing two of the attributes here, but these are the only two we really need to make use of: one is the file attribute, which points to the support file, and the other is a hash of that support file. We don't see a need to sign each support file, because the hash of each one is inside the SWID tag, and we're going to mandate that the SWID tag be signed, which is an option SWID tags already support. So the SWID tag is signed, and that one signature provides the integrity of all of the support files. This whole thing is what we're calling an instance; the names may change, but this is the direction we're heading. All right, so the format of the specifications we're going to produce: we'll start, as I mentioned, with an information model, and you won't be able to do anything with that information model except write another spec. That gives us the flexibility, as I indicated earlier, of carrying this information in different formats depending on the use case. The information model will simply describe the information that has to be in the base RIM and the information that has to be in the support file.
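As a rough illustration of the indirection just described, here is a sketch, with invented file names and field names, of a base-RIM-like structure whose payload entries carry only a file reference and a hash, and where a single signature over the tag covers the integrity of all support files. An HMAC stands in for a real X.509 signature here purely to keep the sketch self-contained:

```python
import hashlib
import hmac
import json

# Hypothetical support-file contents; the real format is still being defined.
# Each support file carries a PCR index plus an ending value or event list.
support_files = {
    "pcr0.evts": b'{"pcr_index": 0, "events": ["..."]}',
    "pcr2.evts": b'{"pcr_index": 2, "events": ["..."]}',
}

# Base RIM: an unmodified-SWID-tag-like structure whose payload entries use
# only a file name and a hash of that file. This is the level of indirection.
base_rim = {
    "tag_id": "example.com/rim/bios-1.2.3",
    "payload": [
        {"file": name, "sha256": hashlib.sha256(data).hexdigest()}
        for name, data in sorted(support_files.items())
    ],
}

# Only the base RIM is signed; HMAC is a stand-in for a real signature.
signing_key = b"demo-key-not-for-production"
signature = hmac.new(signing_key,
                     json.dumps(base_rim, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()

def support_file_ok(name: str, data: bytes) -> bool:
    """A support file is trusted iff its hash appears in the signed base RIM."""
    digest = hashlib.sha256(data).hexdigest()
    return any(e["file"] == name and e["sha256"] == digest
               for e in base_rim["payload"])

print(support_file_ok("pcr0.evts", support_files["pcr0.evts"]))   # True
print(support_file_ok("pcr0.evts", b"tampered"))                  # False
```

Any tampering with a support file changes its hash and breaks the chain back to the one signature, which is why the individual files don't need signatures of their own.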
But it won't talk at all about what format it should be in; it might even be a wire format, not a file format at all. Then after that there will be the binding specifications, and the first one to talk about is obviously the binding for how you represent this on a PC client. We call it PC client, but everybody leverages that specification: servers typically use the PC client specs, and the networking equipment work group uses them as well. We're going to define two types of support files, as I mentioned. Starting from the bottom, the simplest format is a snapshot of the individual PCRs, which I'll show in a minute. The second is a long list of all the events, very similar to what Avani showed: here are all the events as the OEM produced them, perhaps the golden measurements taken when the OEM first builds the system. And the final thing this binding specification will cover is where you put this stuff and how you distribute it; I'll show one proposal in a minute, and during Q&A we can debate it. So here's an example of a very simple set of measurements the OEM can provide; for now I'm just calling it aggregate PCRs. In this very simple case it's the OEM, or the IT department, because this stuff doesn't have to come from the OEM. We want it to, but on legacy systems the OEM may never have produced it, and that doesn't mean you can't have it today: somebody can take a system into a lab that they believe is pristine, do PCR reads, and say these are the expected values for PCRs 0-7, which is the range the PC client spec defines, and in some cases that's all you really care about. Maybe it's a single point-of-sale terminal, not something that's supposed to change from boot to boot.
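The simple aggregate-PCR case might look like the following sketch; the digests are toy values, and a real verifier would take the quoted PCRs out of a signed TPM quote rather than a plain dictionary:

```python
import hashlib

def fake_pcr(seed: str) -> str:
    """Toy stand-in for a real PCR value (hex SHA-256 digest)."""
    return hashlib.sha256(seed.encode()).hexdigest()

# Hypothetical aggregate-PCR reference for a pristine platform.
reference = {0: fake_pcr("pcr0"), 2: fake_pcr("pcr2"), 4: fake_pcr("pcr4")}

def verify_aggregate(reference: dict, quoted: dict) -> list:
    """Return the PCR indexes whose quoted value differs from the
    reference; an empty list means the platform matches the golden values."""
    return [idx for idx, want in sorted(reference.items())
            if quoted.get(idx) != want]

# A quote that matches exactly:
good_quote = dict(reference)
print(verify_aggregate(reference, good_quote))        # []

# A quote where PCR 2 changed, e.g. a different option ROM was measured:
bad_quote = dict(reference)
bad_quote[2] = fake_pcr("something-else")
print(verify_aggregate(reference, bad_quote))         # [2]
```

Note the asymmetry: this tells you *which* PCR diverged, but nothing about *why*; for that you need the event-list form of support file.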
And we certainly don't expect people to be updating the firmware, so in this case it may be perfectly fine to distribute the golden PCR measurements, I've depicted just zero, two, and four, but all of PCRs 0-7, for example, in this one RIM firmware instance, and simply distribute that. Now I can hand it to a verifier; the verifier performs the steps Avani talked about earlier, says "give me your quote," and simply does a comparison against these values. The other choice, and this is actually what Avani showed, is when I want more detail: if the PCRs don't match exactly what the OEM said they should be, I want to be able to decompose the log and identify something in the middle. A kind of silly use case, but I've actually seen it: somebody simply swapped the PCI cards on the bus. As the BIOS enumerates the PCI cards it measures the BIOS-visible portion of each option ROM, to use the older term, or the EFI application that might be sitting there, so PCR 2 will end up very different; but if you actually look at the events, the only thing that changed is that the events for the cards in slots two and three are swapped. A more sophisticated verifier, given something like this from the OEM or another trusted source, could ask: PCR 2 is different, but why? A probably better case: you've added a card. PCR 2 will be different, and there's an event in the middle I wasn't expecting, but OK, someone is authorized to open up this machine and put a card in, so that's fine, I'll let this thing on the network, because that particular digest matched what was claimed.
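The decomposition described here relies on replaying the event log: in a SHA-256 PCR bank the TPM extends as new = H(old || event digest), so a verifier can recompute the final PCR from the log and isolate events that don't appear in the reference list. A toy sketch with invented digests:

```python
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    """TPM-style extend for a SHA-256 PCR bank: new = H(old || digest)."""
    return hashlib.sha256(pcr + digest).digest()

def replay(event_digests: list) -> bytes:
    """Recompute a PCR's final value from its event log."""
    pcr = bytes(32)                 # PCRs start at all zeroes
    for d in event_digests:
        pcr = extend(pcr, d)
    return pcr

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Reference event list for PCR 2 as the OEM shipped it (toy digests):
reference_events = [h(b"option-rom-slot-1"), h(b"option-rom-slot-2")]

# Observed log on a machine where a new card was added in between:
observed_events = [h(b"option-rom-slot-1"), h(b"added-card"),
                   h(b"option-rom-slot-2")]

# The final PCR values differ...
print(replay(reference_events) == replay(observed_events))   # False

# ...but the verifier can see every reference event still present, in order,
# and isolate the single unexpected digest (the added card's option ROM):
unexpected = [d for d in observed_events if d not in reference_events]
print(len(unexpected))                                       # 1
```

This is exactly why the card-swap case is detectable: the aggregate value changes beyond recognition, but the per-event digests are individually unchanged, just reordered.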
I can go look that up if I want to. This is a lot more work, but a lot more flexible; we're proposing to provide the option to use either of these formats, or both. So what you end up with is what we're calling, at least for now, a bundle. In the top case you have an array: you're providing the detailed events for the log. This picture is very simple and the real thing would be much bigger: you'd have one of these for each PCR, and then down the vertical column you'd have each of the events in the log associated with that particular PCR, so it would be an awful lot of information to pass. The bottom one is obviously the simpler case, where you have one RIM firmware instance, one SWID tag structure, that simply points to three PCRs in this example, but in reality could cover each of PCRs 0 through 7. Now, how do you distribute this stuff? Obviously one way is to provide a well-known URL to go get it, and I think that's going to be a viable solution. Another solution we would love feedback on: maybe we allocate a new place inside the boot partition, we're just calling it TCG manifest, where the OEM can place this information on the boot drive before it ships. The obvious problem there is what happens when you swap out the boot drive. So my personal belief is that we'll have to support both: having it locally on the machine for convenience, but the notion that that's the only place it lives, and if you swap out your drive you're out of luck, I don't think that's going to fly. There will have to be some way of replacing it, and going back to the OEM and asking, given the model number, is obviously going to have to be an option, I think.
But again, we're just working through this. The point of this discussion was to give you the background, as part of the tutorial on attestation overall. We believe this is pretty much the final piece, because we now have the canonical event log that we can provide to verifiers in a standardized format, hopefully producing a diverse and rich set of verifiers out there. We have a list of the existing ones in the market today, and there were actually more than I was expecting, which is good news. I would like everybody to start using these standardized formats so that vendors and customers don't get locked in to a particular solution or a particular OEM. What we really don't want is OEM 1 producing RIMs in their favorite representation, OEM 2 doing it a second way, OEM 3 a third, and these poor verifiers having to parse every one of them, and then when somebody new comes along, having to figure out how to plug that one in too. The same goes for the series of events coming from the platform, although we have a little more control over that. Anyway, here's a whole page of cool tools we found, including things we didn't even know about; QIIME was a cool project I actually learned about this week, so I'll be doing some reading on that. That's pretty much our presentation; we have about 20 minutes for discussion. Any questions about this effort? Again, the work on the RIM is very active within TCG, and we'd be very interested in getting feedback before we get too far along.

[Audience question, inaudible.] They don't appear to, and Avani's done a little more research than I have, so correct me, but they don't appear to deal with platform certificates. The HIRS project ends at the delivery of an AIK, sorry, old-school AK, the attestation key; for a while we had one architect who didn't like the "I". So HIRS ends at delivering an AK.
Right, and that's it; that's as far as it goes. You could ask, what if we expanded it? But I'm a firm believer in letting each tool do its thing, and I think having HIRS stop at provisioning the system, here's the AK, is right. Going back to what she showed, sorry to put this back into slide mode, one of these displays... yeah, it's this one here, this top key. Yes. This is the AK; I should have labeled it better, my fault, my slide. Up to the point this key is delivered, all the other keys on the system are signed by entities outside the owner: the TPM vendor, the TPM supplier, the platform manufacturer. But once you get to here, this key is signed by the owner, the facilities people or whoever. At that point they're making the claim: this is a valid platform, I've already checked out the rest; everything else on this slide can just go away as far as that owner is concerned, until the system is reprovisioned. So I still believe this is where HIRS should end. It does a great job of getting you there; from this point you have a key, and now you use that key to attest to the firmware and to the software. As you know, Ken's got some tools that do this; Keylime does this. I think that's where Keylime starts, assuming you've got these, although actually I think it starts with an EK, not a platform certificate. The problem with an EK alone, and this is why I like this approach, is that, to me, this is the end game: I have a little daughter board for a Raspberry Pi with an Infineon TPM on it, and that Infineon TPM is just as good as the TPM inside the $20,000 router.
Right, no difference between them; what's different is which platform it's on, and the platform certificate is what says this thing's good. So we can throw all this other stuff away; once I get that key I'm done, and then I can move up the stack, attesting the firmware, the stuff the owner actually cares about, thumbs up or thumbs down, using SWID tags. There are two things I want you to walk away with. First: is SWID the right approach? Because at the end of the day, if this goes forward, you have to speak now or forever hold your peace, at least until the next revision. Yes? OK, everybody likes SWID tags. Second: where to put it. I think I saw a groan, or I might have misinterpreted, maybe somebody was reading an email, who knows, about putting it locally. I do think it's important to carry this stuff locally, and I also believe that shouldn't be the only place it lives, again, to handle disk failures and "gee, I don't like this two-gig drive and I want to put in something bigger." Is this the right approach, creating a new folder in boot? Anybody care? Yes? Ah, good point. Let me repeat the question: what's the point of keeping it local, since the verifier has to go get it anyway? The first use case, to me, is offline distribution. Take my company, for example, if we were to move on something like this: you've got industrial controllers out there, just as an example use case. An industrial controller sits on the other side of the OT boundary, inside OT; it can't call home to whoever made the thing, it can't cross that boundary. So the other choice is to copy all this stuff locally, into a provisioner inside. And since it's signed anyway, I don't have to worry about somebody tampering with it; I'll detect that, because I have the signer's root key, which I would have had to obtain by some out-of-band mechanism anyway.
You know, a guy with a briefcase handcuffed to his wrist, or a phone call reading off a thumbprint, or whatever; I'd have to do that regardless for any of this. But once I have the root key, I can decide for myself whether I trust these things, and we couldn't think of a better place to put them than the boot partition, because in theory that's the thing that doesn't change until you swap out the hard drive. And if you start blowing away your boot partition, well, you've got other things going on; you've done that for another reason. But yes, it is a problem if you don't save this and then need it. Anything else? Everybody likes this stuff? Well, cool, I guess we're done early then. If anybody wants to hang around afterwards, it would be interesting to have a BoF; people who aren't really interested can leave, and since this is the last session, this is the time for BoFs anyway. I'll make myself available, and anybody who wants to join in, that would be great.

Thanks, Avani and Monty.