So, good morning. My name is Matthew Garrett. I'm a security developer at Aurora Technologies. I've been working on Linux security related topics for over a decade now, after I somewhat unexpectedly got roped into working on UEFI Secure Boot because I'd previously been working on ACPI, and ACPI and UEFI have a lot of the same lessons in common. As a result I am now, apparently, nominally an expert on security as related to firmware, as related to hardware: integrating with the stuff that sits below the pure operating system level and trying to turn that into something that forms some sort of coherent whole, a way that we can make use of functionality that exists below the operating system and expose it in ways that benefit the overall security of the platform, be that the kernel or be that individual applications. My day-to-day job these days is primarily working on security for things like phones, laptops, workstations, VMs, and to an extent even cloud services. A large part of that in recent years has involved tying things into hardware-backed identity: the idea that we can cryptographically prove that something is what it claims to be. This is very useful if you're, say, interested in producing some sort of zero-trust solution, a solution where you want to grant access to something based on a device's identity and its state. Being able to tie that identity to the hardware, and being able to tie the state to that identity, means you have a lot more confidence that the device you're communicating with is both a legitimate device and that its state is legitimate, and therefore that it should be granted access to something. Ideally this even lets you get to the point where you can, say, have a computer shipped directly from a vendor to an end user and then have it bootstrap off the network without IT having to be involved anywhere in the process, while still trusting that the device is a legitimate device, that it hasn't been intercepted in transit, that it hasn't been replaced with an untrustworthy device, that sort of thing.

And also we do trucks. I also work on the trucks. We also do hardware identity on the trucks. I want to make it clear that today I am not going to be talking about the trucks. The stuff I'm talking about today is largely a personal project that has kind of spun off from some work I was doing at a previous employer. We have no current plans to use this in the trucks. So, disappointing for anyone who was expecting to find out amazing things about autonomous vehicles; sorry, not going to help you out today.

So let's give an introduction to trusted platform modules. The first revision of the TPM spec has been around for on the order of 20 years now, and they're still very misunderstood chips. They are a lot smaller than the truck. They have significantly less kinetic energy than the truck, and therefore in most ways they are less scary than the truck. So what can a TPM actually do? A lot of people still kind of think of the TPM as a cryptographic coprocessor, and in a sense that's true: it is a processor that is independent from the main CPU, and it does cryptography. But when we talk about cryptographic coprocessors, people often also think of cryptographic accelerators, and a TPM is a cryptographic pessimizer. It is something that will make your cryptographic operations slower than doing them on the CPU under almost all circumstances. Fine, if you stuck a TPM on a 6502, that might be a different story.
But for the most part, TPMs are still clocked on the order of tens of megahertz, maybe a low number of hundreds of megahertz, and what cryptographic cores they have on them are still not optimized for raw speed. A TPM will typically take on the order of a second or so to generate a 2048-bit RSA key pair. Generating ECC keys is much faster, but we're not talking about something that's fast. So why would we deliberately choose to use a chip that is slower? The answer is that the TPM has some functionality that historically we did not have on anything else in the system, and to an extent it still has functionality that we don't have on the rest of the system.

One is that TPMs are guaranteed to have a random number generator. They generate cryptographic key pairs, so they need a source of randomness themselves. This was of much more value before we got to the point where Intel, AMD and so on started introducing random number generators into the CPU directly. They're also able to generate key pairs where the private key never leaves the TPM in an unencrypted form. The public key, sure, you can have the public key, but the private key, no, the private key is going nowhere. Now, TPMs don't have infinite storage, so there's an obvious question about how we keep generating keys. The answer is that the TPM will give you an encrypted blob that represents the key and some additional metadata around the key; we'll be getting on to what some of that metadata is later, and why the fact that this blob exists is a problem under some circumstances. But basically, the private key is encrypted with a key that is on the TPM, that is always kept on the TPM. So if you want to use this private key, you need to pass that blob back to the TPM; there's no way to use it otherwise. There's also support for exportable keys, ways to move keys between the outside of the TPM and the inside of the TPM, with different trade-offs. Obviously, if you've generated a key pair outside the TPM, you have no guarantees that the private key has not gone somewhere else as well. So generating on the TPM is the guaranteed way to ensure that the private key is associated with that particular TPM.

And there's also some amount of ability to store state. TPMs are not stateless objects. They can be reset back to their original configuration, but otherwise they store some amount of state. And apparently I should have checked this better for typos. Anyway, never mind. Some of the state that a TPM stores is associated with the lifetime of the TPM between resets, NVRAM, which we'll get to later on. Some of the state is scoped to the specific boot session, and that state is generally stored in what are called platform configuration registers.

Now, TPMs, obviously, given the time they were introduced and given, honestly, the state of the industry at that time and to an extent since then, there was a strong perception that TPMs were things that could sit in the way of your boot process and refuse to allow you to boot something that wasn't accepted. That's untrue. We call that secure boot; that's something completely different. TPMs only know what they're told. They have no insight into the state of the system overall. A TPM cannot inspect the state inside your CPU. It cannot inspect the contents of RAM. It's not able to speak to any other hardware on the system. The TPM sits there, it does what it's told, and it knows what it's told. So for boot security, the idea is this.
We tell the TPM what's going to happen before it happens. We generate what is called a measurement of something that is relevant to security. And when we say something relevant to security, that might be software: which software we boot is something that's relevant to security. But it can also be configuration: which policy is applied to the software that we're booting is potentially just as security-critical as the software that we're booting. And we can generate cryptographic hashes of these. A measurement is just a unique way of representing the thing, and the easiest way to do that is just to take a SHA-256 or whatever of it, and then pass that to the TPM.

Now, typically TPMs have 24 platform configuration registers, and they're normally used for different purposes, but that's policy as opposed to a technical restriction. And you're probably going to want to measure more than 24 things. This is where this magical formula comes in. Say we want to measure some data. We take a hash of that data, say the SHA-256, and we pass that to the TPM. The TPM then takes the current value of the PCR. So say we're writing to PCR0. When we initially supply power to the TPM, the default value of PCR0 is all zeros. We take all those zeros, we take the other hash, and we stick them together. So if this was SHA-256, the PCR value would be 32 bytes, the new hash would be 32 bytes; we stick together the original PCR value and the new value, and we now have 64 bytes of data. 64 bytes doesn't fit into a 32-byte register, so we take a SHA-256 of those 64 bytes and end up with a 32-byte value, and the PCR gets set to that value.

Now, this has some interesting outcomes. Firstly, fairly obviously, the PCR value does not just depend on the new value you give it; you can't directly set the PCR to an arbitrary value. The value of the PCR depends on both the old value and the new value. Further, that means it depends on the order of the measurements. If I measure A and then B, I will get a different value than if I measure B and then A. And finally, the calculation is deterministic. If I measure the same set of things in the same order, I will always get the same value. And it's a sufficiently simple formula that I can do the calculation myself.

PCRs can be used for a couple of purposes. The first is to constrain access to resources that are on the TPM or associated with the TPM. So for instance, I can generate a policy that says you can only use this private key if these PCRs have these values. An example of where this is used is the Windows BitLocker implementation. By default, when you boot a system, the disk encryption key is encrypted by the TPM. The operating system passes the encrypted key to the TPM and asks the TPM to decrypt it. If the PCRs have the expected values, the TPM will decrypt it, pass that secret back to the operating system, and the operating system can unlock the disk. But if it instead sees a different set of PCR values, which indicates that someone has interfered with the boot process and changed the security properties of the system, it will refuse to unlock the disk and will instead prompt the user for an unlock key. So that way, you can automatically unlock the disk when you know the system is booting securely, and if the system is booting insecurely, you can require additional validation. You can also use it remotely.
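Before getting to the remote case, here's a minimal sketch of that extend calculation in Python, assuming a SHA-256 PCR bank; it just replays the formula described above rather than talking to a real TPM:

```python
import hashlib

PCR_SIZE = 32  # SHA-256 PCR bank: each register holds 32 bytes

def extend(pcr_value: bytes, data: bytes) -> bytes:
    """Return the new PCR value after measuring `data` into it."""
    measurement = hashlib.sha256(data).digest()               # hash of the thing being measured
    return hashlib.sha256(pcr_value + measurement).digest()   # SHA-256(old value || measurement)

pcr = b"\x00" * PCR_SIZE           # PCRs start as all zeros when power is applied
pcr = extend(pcr, b"bootloader")   # each measurement folds into the previous value
pcr = extend(pcr, b"kernel")

# Order matters: measuring the same things in a different order gives a different value.
other = extend(extend(b"\x00" * PCR_SIZE, b"kernel"), b"bootloader")
assert pcr != other

# But the calculation is deterministic: same events, same order, same result.
assert pcr == extend(extend(b"\x00" * PCR_SIZE, b"bootloader"), b"kernel")
```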
For the remote case: each TPM has an endorsement key, which is unique to the TPM and is generally associated with a certificate signed by the TPM manufacturer. The endorsement key can't be used for much directly, but one thing it can be used for is to prove that what we call an attestation key belongs to a specific TPM. That means that if you register all your TPMs, you can then prove that an attestation key is associated with a specific machine. And that gives you the ability to know that anything signed with that attestation key comes from that specific TPM: the attestation key is on the TPM, and the attestation key can only be used to sign specific TPM things. One of the things you can sign with an attestation key is what's called a quote, a representation of the PCR values in the TPM. That means you can ask a machine, please give me a quote of the PCR values. You pass over a nonce along with that, so that you know the reply you get isn't being replayed. It gives you that back, you can look at the PCR values, and you know those are the actual PCR values associated with the machine.

Now, as I said, the value of a PCR depends on every measurement that went into it and the order of those measurements, but the TPM doesn't retain everything that went into it; it only retains the final value. So there are two things you can do here. You can either look at the final value and say, okay, I expect this machine to boot into this final value. Or you can look at the individual events. But how do we look at the individual events, given that the TPM didn't keep track of them? The answer is that the individual events are kept track of elsewhere. They're tracked in the firmware and they're tracked in the operating system. But they're kept in RAM, and that's not strongly protected. An operating system can tamper with the event log. Any component of the boot process can tamper with the event log. It could change those values. So if we had a set of expected individual values, we can't just look at the event log and say, okay, this tracks, because someone could have replaced the values in the event log with the values we expected. But what we can do is take the values in the event log and go through the same calculations that the TPM did. We take the original value and the new value, stick them together, take the hash, and then repeat that for each entry in the event log. And if the answer we calculate is the same as the value in the TPM, that means the individual entries are legitimate, and we can then look at those. That also means we don't need to worry about ordering, because we can look at the events without having to figure out whether the boot is sufficiently deterministic that these events will always occur in the same order. Say there might be a race condition that causes enumeration of two PCI devices to happen in different orders depending on the boot, and that would change the resulting PCR value. By looking at the individual events, we don't even care about that.

So what can we do with this? Firstly, we can tie a cryptographic key to a specific device. As I mentioned before, we can have TLS client certificates that are tied to a device and which then cannot be used on any other computer. And that means that if something presents this TLS client certificate to us, we know it's on that computer. There's no way for a user to copy that private key to another system.
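A minimal sketch of that log replay, again assuming SHA-256; the event digests and the quoted value here are simulated stand-ins, since a real verifier would parse the TCG event log and compare against a signed quote:

```python
import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    # Same calculation the TPM performs: SHA-256(old value || measurement).
    return hashlib.sha256(pcr_value + measurement).digest()

def replay_log(event_digests):
    """Recompute the final PCR value from an ordered list of event digests."""
    pcr = b"\x00" * 32
    for digest in event_digests:
        pcr = extend(pcr, digest)
    return pcr

# Untrusted event log for a single PCR, as recorded in RAM by the firmware/OS.
event_log = [
    hashlib.sha256(b"bootloader").digest(),
    hashlib.sha256(b"kernel").digest(),
    hashlib.sha256(b"kernel command line").digest(),
]

# In reality this value comes from a signed TPM quote; simulated here so the example runs.
quoted_pcr = replay_log(event_log)

if replay_log(event_log) == quoted_pcr:
    # The log is consistent with the TPM's value, so the individual entries
    # can now be evaluated against policy.
    print("event log matches the quoted PCR value")
else:
    print("event log has been tampered with, or events are missing")
```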
We can tie a key to a specific state. We can say this key is only usable if the PCRs have specific values. And we can then do stuff like verify that the device state is within an expected set of criteria before we grant access to a service. So say you have a particularly sensitive resource and you want to make sure that it can only be accessed by systems running the latest patch level of an operating system. You can look at the measurements from the TPM, you can evaluate them, and then you can decide: yes, this fits my security policy, or no, this doesn't fit my policy, access isn't granted; even if the user has valid credentials, the device they're on is not acceptable. I'm not saying this is sensible for the generic use case. I'm not saying that average websites should start doing this; that sounds like an absolutely terrible idea. But in specific circumstances this is extremely helpful. If you want to gate access to extremely sensitive resources, then you want something like this.

But there's a problem. Operating systems run lots of processes. They have lots of users using them. As I said before, the TPM knows nothing about system state other than what it's told. It does not know who owns a secret. But, for instance, you don't want process A, owned by user A, to be able to use a key belonging to process B, owned by user B. If those are completely separate things, there needs to be a way to separate the resources that one process has access to from the resources that another process has access to. The way the spec envisages doing this is that each session you negotiate with a TPM is authenticated. You can basically provide information about that session and then ensure that any resources that are loaded are associated with that session. You can also associate authentication details with individual objects. So you can say this key can be loaded, and the policy could be that the PCRs need to have these values, but the policy could also be that, in addition, the user needs to supply this password. That way individual processes can have individual sessions, with individual objects associated with them, and prove that they were the owner of those objects. But that involves some sort of static knowledge. Those secrets can't just exist on disk, because otherwise, if I have access to the disk, I can extract those secrets and use them to authenticate for those objects. So that's not optimal. We don't really want to end up in a world where all of this is done with, basically, passwords, because that just doesn't work.

So instead, we can fall back to the idea that, okay, one of the kernel's jobs is to arbitrate access to resources. The kernel already makes one piece of hardware look to processes as if they're the unique owner of that hardware. I can have multiple processes accessing a disk simultaneously, and the kernel will arbitrate between them without letting them step on each other's data. I can have multiple processes accessing a GPU without them being able to corrupt each other's state. And the kernel has a built-in TPM resource manager. The resource manager's job is, effectively, every time there's a context switch regarding the TPM, to save the current TPM state and to load the new TPM state. The TPM specification makes this straightforward: there's basically a command you run that saves the current state, like which objects are currently loaded and which session they're associated with.
And if you make that per user, well, what the kernel actually does is make it per reference to the TPM device node. So you open the device node, you get a new TPM session generated. Every time you switch back to using that file descriptor, the state associated with that file descriptor is loaded and the previous state is saved. And that means you can have a bunch of processes all using the TPM separately, and it looks as if they each have unique access to it. Nobody's able to step on the objects and resources belonging to any other user. And this is actually per process, so even if you have two processes owned by the same user, they're still not able to use each other's keys.

Except: if you have two processes owned by the same user, process A could open the key blob belonging to process B, load that into the TPM, and then have access to that key. Even though the reference in a session to an object is unique, you can load the same key into multiple sessions. So if you have access to that encrypted key, you have the ability to load it into the TPM, and if there's no authentication associated with that key, you then have the ability to use that key for whatever purpose you want. So this means that in a lot of cases, we basically fall back to access to TPM resources just being filesystem permissions. And that's not great if we want to enforce strong secret management. Ideally, we would be able to do somewhat better than that. This is very much a case where if you accidentally run chmod in the wrong place, if you make a file world-readable and someone's then able to grab that, you've lost control of that key forever. And that's not the sort of situation you want to be in. Especially if, for instance, you want to be using hardware-backed keys in, say, containers, where management of file resources is a very complex problem, where you're trying to inject things into containers and make sure they're not accessible to any other container. There are various ways to solve this, but we're still largely relying on nothing going wrong here. And we can potentially do better than that.

So, this is going off on a tangent, but this is where I started getting interested in this problem. At the moment, we don't support hibernation, suspend-to-disk, on systems with UEFI secure boot enabled. And the reason we don't do that is that hibernation, resume from hibernation, is basically a way of loading a blob of data from the hard drive into ring zero. Also into ring three, but into ring zero. And if that image isn't signed, an attacker can, even in the presence of secure boot, get you to boot an arbitrary kernel by just writing out a modified suspend image, booting the system, and telling it to resume from disk. It'll then load in this unauthenticated image and completely bypass secure boot. So, obviously, why don't we sign it? Well, what do we sign it with? If the private key is on disk, the attacker just either copies it off disk and uses it, or just writes out a new private key under their control; same if it's a symmetric encryption key. There's no way we can keep the secret on disk alongside the image. So where can we put secrets in a place where an attacker cannot gain arbitrary access to them? The answer is, of course, as I've already written up there and spoiled this, the TPM. So how about we generate a key and we associate it with a specific set of PCR values? That way, that key can only be used if the system has booted into the expected state.
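As a toy illustration of what sealing a key to PCR values buys us here, a small simulation, assuming a single SHA-256 PCR; real sealing goes through TPM policy sessions and wrapped key blobs rather than a Python dictionary, so treat this purely as a sketch of the behaviour just described:

```python
import hashlib, hmac, os

class ToySealingTPM:
    """Toy model: one PCR, plus seal/unseal gated on that PCR's value."""
    def __init__(self):
        self.pcr = b"\x00" * 32
        self._sealed = {}

    def extend(self, data: bytes) -> None:
        measurement = hashlib.sha256(data).digest()
        self.pcr = hashlib.sha256(self.pcr + measurement).digest()

    def seal(self, secret: bytes) -> bytes:
        handle = os.urandom(8)
        # Bind the secret to the PCR value at sealing time.
        self._sealed[handle] = (self.pcr, secret)
        return handle

    def unseal(self, handle: bytes) -> bytes:
        expected_pcr, secret = self._sealed[handle]
        if not hmac.compare_digest(expected_pcr, self.pcr):
            raise PermissionError("PCR does not match sealing policy")
        return secret

tpm = ToySealingTPM()
tpm.extend(b"measured, trusted kernel")
handle = tpm.seal(b"hibernation image signing key")   # usable only in this state
assert tpm.unseal(handle) == b"hibernation image signing key"

tpm.extend(b"something the attacker ran")             # state changes...
try:
    tpm.unseal(handle)                                # ...so the TPM refuses
except PermissionError:
    pass
```

The point is only the last step: once the PCR no longer matches the value the secret was sealed against, the TPM refuses to release it.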
And that means that an attacker cannot use that key themselves unless they're able to get the PCRs into the same state. The TPM will simply refuse. But that means we need some way to control the PCR values in a way that prevents the attacker from being able to simply reprogram them. So how do we do that?

The answer I came up with was to make use of PCR23. PCR23 is a special PCR because, unlike most PCRs, which are only reset when the system is reset, PCR23 can be reset at any time. You can just send a command to the TPM asking it to reset PCR23 back to the known default value. So we could associate a key with PCR23. We could program PCR23 to a known value, decrypt the key, decrypt the disk image, then reset PCR23, and block userland from being able to write anything to PCR23. That way we can guarantee that PCR23 will only be in the state required for this key if it's being managed by the kernel. And we trust the kernel because we have to trust the kernel. This way we can say an attacker is not going to be able to gain access to this key as long as the kernel is trustworthy; if the kernel is not trustworthy, there's no point to any of this anyway. So that's fine.

But the PCR23 restrictions, how do we know they exist? The TPM doesn't know. What stops an attacker from just booting an old kernel that doesn't have this feature, which then gives them access to PCR23? At which point, and obviously this is slightly laborious, they can take the encrypted key from the suspend image, boot an older operating system, program PCR23 to the expected value, pass that secret to the TPM, get the key back, modify the suspend image, reboot, and now they win. So we need some way to be able to prove whether or not this feature exists, and we need the TPM in some way to know that this feature exists. Now, the way we tell the TPM things is we measure something into the TPM. So we could have another PCR where we make a measurement saying the kernel supports this feature, and that way, if we associate the secret with both of those PCRs, the attacker won't be able to just reprogram PCR23, because the other PCR won't have the same value. But then the attacker just extends that other PCR as well.

And this comes back to what I mentioned earlier about the order of events being significant. If we could make sure that the measurement saying the kernel supports this feature happens before the attacker has an opportunity to extend that other PCR themselves, then the attacker would never be able to duplicate that value, because the order would be different. They would do their extension after the event, the kernel would do it before the event, and therefore the values would be different. And the TPM spec envisages this. At each point in the boot process, we do something called capping: we make a specific extension to a PCR indicating that we are transitioning from a state where one component has control to another component having control. Any measurements that occur before that separator were therefore associated with that component. So if the kernel were to do its early boot and then measure a separator into every PCR, we would be able to know these events were associated with the kernel; userland could not have generated them, because userland had not started running yet. Unfortunately, we don't do this.
We could do this, but then we run into the problem of an attacker running an old kernel that doesn't do this and then measuring a separator themselves. Yeah. Sorry. We really should have thought about this about a decade ago when we were implementing the EFI stub, because that would have been a great place to put this. So we have the problem that the firmware does make a bunch of EV separator measurements, but they happen before the kernel runs. No EV separators occur after the kernel has taken control, and we can't just add an EV separator event now, because old kernels could still be used to bypass this.

So what if there were an event that was guaranteed to be measured after the kernel starts running? And there is. The EFI spec contains something called the exit boot services call, which is used at the point where you transition from interacting with EFI as a boot-time environment to interacting with it as a run-time environment, and when you make that call, the firmware will make an additional measurement into PCR5. So if we do a measurement into PCR5 before this, we will then be able to look at the order of events in the boot log and say: great, the kernel made this measurement into PCR5. And then, if we associate the secret with PCR5 as well as PCR23, we're able to say this secret was sealed to the TPM while the PCR23 restriction was imposed, and therefore we know that PCR23 is under the control of the kernel, not under the control of userland. Perfect. This solves all our problems.

Well, it doesn't solve the problem that, while the firmware is specified to make a measurement into PCR5, the firmware does not necessarily actually make that measurement. We can get around that by just recording the value of PCR5 before we call exit boot services, passing that up to the kernel, reading it again after we've called exit boot services, and seeing whether it's the same. If it has changed, then the firmware made the measurement, and we know that this feature is available. If that didn't happen, we disable the feature in the kernel and we just refuse to generate any secrets that make use of this. And yes, userland could fake that, but it doesn't matter, because we're never going to use that secret for anything anyway.
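On the verification side, the check is purely about ordering in the replayed event log: does the kernel's measurement into PCR5 appear before the firmware's exit-boot-services measurement? A rough sketch, where the event descriptions are illustrative placeholders rather than real TCG event structures:

```python
# Each log entry: (pcr_index, description). Real logs carry digests and typed
# events; the descriptions here are placeholders for illustration.
event_log = [
    (0, "firmware measurements"),
    (4, "bootloader"),
    (5, "kernel: PCR23 restrictions active"),   # measured by the kernel's EFI stub
    (5, "Exit Boot Services Invocation"),       # measured by the firmware
    (23, "whatever userland did later"),
]

def kernel_feature_measured_in_time(log) -> bool:
    """True if the kernel's feature measurement into PCR5 precedes exit boot services."""
    for pcr, desc in log:
        if pcr == 5 and desc == "kernel: PCR23 restrictions active":
            return True                          # kernel's event came first
        if pcr == 5 and desc == "Exit Boot Services Invocation":
            return False                         # firmware event came first, or feature absent
    return False

assert kernel_feature_measured_in_time(event_log)
```

This only means anything once the log has been replayed and matched against a quoted PCR5 value, as in the earlier replay sketch.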
So I wrote a set of patches that do all of this back in 2021 and I sent them upstream, and they did not get merged. The main reason they didn't get merged is that it turns out there are people already using PCR23 for their own things from userland, and if we just disable access to PCR23 entirely from userland, they break. We could kind of get around that by providing a command line parameter to the kernel, only doing the PCR5 extension if that parameter is not there, only imposing the PCR23 restrictions if that parameter is not there. That's ugly. I dislike solutions where you have to do some different thing in order to ensure that the thing you were already doing keeps working. I also didn't want there to be a command line parameter to enable the useful behaviour, because that's a great way to make sure nobody ever does that.

So there's another alternative. PCR23 is not the only PCR that can be reset. There are several other PCRs that can be reset, but they can only be reset if you're in a specific TPM locality, and the easiest way to think about TPM localities is that they're kind of like CPU rings: they're different execution privilege levels inside the TPM. Traditionally locality zero is the running operating system and locality three is the CPU, so things like TXT, any other DRTM measurements, are generally done in locality three. Then, by looking at the locality associated with a measurement, you can tell which locality it came from, and that tells you, okay, this was done by the CPU, this was not done by something pretending to be the CPU, because the locality would be different. So that sounds ideal. We could use a different PCR that can be reset from a higher locality, and because the measurements would be associated with that locality, we could even save the existing state and restore it, we could basically replay what went into the PCR before. So we could do this without breaking userland. Wonderful. Simple. Yeah. James Bottomley suggested this, and I'm sorry he's not here to heckle me over it, but: problem solved? Every single time I've tried to implement locality support, it's just not worked. Sometimes the TPM doesn't support it. Sometimes the hardware just seems to drop those accesses on the floor. We only even support it in the kernel for a subset of TPM types; for CRB-type TPMs, the ACPI tables that define the range of registers that you speak to often don't include the additional localities, so there's no way to communicate with them. So this is great, but it's also just something that doesn't tend to work except on workstation-class hardware that relies on localities for things like TXT, and that's kind of the space where you're least likely to want to do hibernation. So this sounded great and ended up being a dead end.

I mentioned earlier that the TPM has some non-volatile RAM, and this is great. You can do things like store encrypted key blobs on the TPM rather than on disk; that way, even if you reinstall the operating system, they're still available to you. There are TPM-associated certificates that are stored in NVRAM. So you can stash things there, brilliant, and make use of them for whatever you want later. That's the most straightforward thing you can use NVRAM for, but it can be used for multiple other things as well. For instance, you can have monotonic counters in NVRAM, a number that can only be increased. And then you can associate authentication policies not just with PCR values, not just with things like passwords; you can associate them with the value of an NVRAM index, and
that means, for instance, you could do something like have a bunch of keys that are associated with a particular monotonic counter, and then if you increment that counter, all those keys become invalidated immediately. So it's a mechanism for being able to do a sort of policy management, or for avoiding rollback. You could invalidate a signing key by doing this, and then in your secure boot implementation it would be impossible for an attacker to supply an old key for verification, because it just wouldn't work.

But one of the other things you can do is what are called extendable indices. You can basically tell the TPM: use this area of NVRAM as if it were a PCR. It then performs the same calculations as occur if we do a PCR extend. If we measure something into an NV index, it's the same: concatenate them, take the hash, store that. But you can also just delete an NVRAM area and recreate it, and that will reset the value back to the original value. So we effectively have, and it's not infinite because there's a limited amount of NVRAM on the chip, but an effectively unlimited number of PCRs that we can reset at will. So this sounds perfect. We can even attach policies to those NV values, because when we attach a policy to an NVRAM value, that side of things is just: okay, I don't care what the access control policy for this NVRAM is, I don't care how the value is changed, I just care that this specific set of bytes is the same. And so if you attach it to an extendable index, you're effectively saying tie it to this specific PCR value, except it's NVRAM. The quote functionality from a TPM does not include NVRAM values, so you can't just do that, but there is another call, NV certify, that's effectively equivalent. You can do a PCR quote to get the PCR5 value, to say yes, this feature is enabled, and then you can look at the NV certify value to say, and this other PCR also has the expected value. Now, an attacker could potentially give you the legitimate value for one, reboot into a system that didn't do that, and then give you the fake value for the other. That can actually be avoided, because there's a boot counter in each quote. It's not just zero, one; it doesn't simply increment monotonically, but it is a value that's guaranteed to be unique for each boot. So you can look at those and make sure that they came from the same boot, in order to avoid that attack. So that gives us confidence that the NV index value is equivalent to the PCR value.

So do we still need to impose restrictions on userland? Maybe. I say probably here, but I've softened on that a little; it kind of depends what problems we're trying to solve. The problem with NV indices is that you can delete them and recreate them. The other problem is that if you don't restrict them in any way, userland can also modify them. The easy way to deal with this would just be, in the kernel, to parse each TPM command before it's executed and filter out anything that is trying to access a specific NV index. Alternatively, we could have auth values: we can authenticate access to it and then say, okay, you can't modify this, you can't extend this unless you have this authentication, and that can live in the kernel. Userland would still be able to delete the index, because we don't want to tie deletion to the auth value: the auth value would go away after a reboot, and if we lose the auth value and we've tied deletion to authentication, we can't delete the index without resetting the entire TPM, and that's not good. So userland is then
able to delete it and recreate it. However, the authentication value is incorporated into something called the name, and the name is something that is incorporated into the NV certify statement. So for remote attestation we can tie secrets to the name as well as the value, and that means it couldn't be reused in that way. So maybe that's good enough. The main problem with that is that you can't meaningfully persist over reboots, because you can't persist the authentication secret; otherwise an attacker could just make use of the authentication secret themselves. There's maybe some stuff we could do based on the PCR5 value, do some magic to assert that we're inside the kernel, maybe; I'm a little reluctant to do that. But that really only matters if we want to be able to persist stuff over reboots. Otherwise we could probably get away with not restricting user space: user space would effectively be able to mount some amount of denial of service, but would not be able to provide a believable fake representation, and would not be able to gain access to secrets.

So this means we can unblock authenticated hibernation, and that's probably the first thing I'm going to be working on. But what if we set that value not just based on whether or not we're hibernating or resuming from hibernation, but based on which process is running? Then we could tie secrets to individual processes, and then even if you had access to the key blob, the TPM would refuse to load it. My current implementation is fairly straightforward, but by doing certification around this, if you know the index value associated with a specific process, you can then prove that a request not only came from a specific TPM, you can prove it came from a specific process. And we can maybe go further than that: not just tie it to processes, but what if we tie it to, say, namespaces? Then we could say this request came from a specific container, and that container could generate its own secrets. We wouldn't need those secrets to be certified by the runtime outside the container, because we'd be able to tie them to the initial index value associated with that namespace. My implementation right now is pretty naive: it just allows a process to make a prctl call, and that will generate a random cookie that will then be measured into this index value. We could potentially make that deterministic, and, well, right now I'm not sure how to do that. It feels like the straightforward way would be to associate that with IMA, but the IMA measurements are associated with files rather than with processes. So this is a case where I would really appreciate feedback from the people in the room.

So, quick summary: basically, we can extend an NV index with a per-process or per-namespace value. We can prove that this functionality exists on a given system based on an event that's logged into PCR5. And then we can let userland use existing functionality: userland can already do PCR quotes, it can already do NV certify. We don't need any new kernel support for this; the kernel just needs to make sure that the TPM state is per process, and then existing userland functionality does the rest.

So we've got like two or three minutes left. Anyone have any questions? Yeah, at the back. Testing, okay. You spoke about replaying PCR measurements to verify whether a set or a sequence of measurements is authentic or not. Then, on the other hand, you said that the only resettable register is PCR23. But replaying also depends on the initial value, so if you want to replay some sequence, you have to set this initial
value, presumably to zeros. How does this fit together, or did I miss something? So in this case, for the magic index value, we can just delete the index and recreate it, and then it'll be back to a known set of values first. So in that case we wouldn't really be logging the events, we'd just be making an assertion that this value is associated with this process. But, you know, we could have whatever is doing that generate a set of events which could then be replayed, and you should be able to do the same thing. So if we wanted to, say, measure multiple bits of data into this NV index, we could do that and synthesize a log event, but that's kind of out of scope for the kernel; that would probably be a userland issue. Okay, anyone else? Okay, well, thank you everyone. I'm going to be leaving this afternoon, but I'll be around until at least lunchtime, so if anybody does want to talk to me about any of this, feel free to just grab me. Thanks.