Okay, welcome everyone. I'm Dov from IBM, together with my colleague Tobin. We're going to talk about, hopefully, unifying confidential attestation. First, the mandatory slide for all the talks at this conference: we're talking about a confidential computing setting. We're focused on VM-based confidential computing, where we don't trust the host, the hypervisor, or anything else outside of the guest, and we're interested mostly in VMs with encrypted memory. We'll use the terms guest, or attester, for the VM, and host for what runs it; the host sits in the untrusted gray zone over there. The guest owner, sometimes also called the relying party, is not in the picture; that's the customer who wants to run their workload securely, say, in the cloud. What we're going to focus on today: first a quick overview of the various attestation mechanisms we find in the architectures today, not all of them, but most of them, and then we'll discuss various approaches to unifying, or at least reducing the duplication between, these mechanisms. Our focus in this talk is on the guest side. There's a lot to be done on verification of the attestation and related things on the owner's side, but that's not what we're going to discuss today. The first architecture we're going to explain is SEV and its bigger brother, SEV-ES. Here we do pre-attestation, which means that before the VM launches we get the signed launch measurement, and since the guest is not yet running, this whole flow is driven by the host. At that point we have a secure channel between the owner and the platform to deliver secrets into the guest. The signed launch measurement includes the platform information, like the version of the firmware API, and the launch digest, which is a hash of the initial memory of the VM, i.e. the firmware. If it's SEV-ES, it also covers the vCPU state.
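To make the launch digest concrete, here's a toy sketch of the idea. This is only an illustration: the real SEV measurement is computed by the AMD Platform Security Processor and is more involved than a plain hash.

```python
import hashlib

def launch_digest(initial_memory: bytes) -> bytes:
    """Toy model of a launch digest: a hash over the initial guest
    memory contents, i.e. the firmware image loaded at launch."""
    return hashlib.sha256(initial_memory).digest()

# The guest owner precomputes the expected digest from the firmware
# build they expect, and compares it against the signed measurement
# reported at launch before releasing any secrets.
firmware_image = b"...OVMF build the owner expects..."
expected = launch_digest(firmware_image)
```

The point is simply that the owner can independently recompute the digest from artifacts they trust, so a host that swaps the firmware is caught before any secret is injected.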
That secure channel is used to inject the secret into the memory of the guest if the measurement matches, and then the guest can start. The next generation is SEV-SNP, and here attestation is driven by the guest: at some point the guest can, via a dedicated mechanism, ask for a signed attestation report. That report again includes the platform information, a bit more extensive than what we had in SEV, and again the launch digest, so the initial memory contents and the vCPU state. Another feature, not directly related to attestation but which we'll mention later, is that SNP VMs can be separated into permission levels called VMPLs. These let parts of the VM run at a higher permission level than other parts, and allow you to deploy things you cannot do with a regular VM; we'll come back to that in one of the approaches to unification. Intel TDX, similar to SNP, is driven by the guest. It's a signed measurement report which includes the platform information, and the measurement is actually composed of a few parts. One part is the initial state, which includes the firmware hash and the vCPU state. Then there is a set of four registers called RTMRs, runtime measurement registers, which can be extended during the runtime of the guest. So unlike SNP, where the measurement is taken once at launch time, here you can issue commands to extend these RTMRs and update the measurement to cover what is being executed or loaded. Actually, getting the attestation report is done in two phases, getting the report and then using another mechanism to sign it, but for our purposes that doesn't matter. Another platform is the s390 Secure Execution platform. Here, again, attestation is driven by the guest. In fact, to get an attestation report you must have an encrypted attestation request, so not just any guest can send one.
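As an aside on the TDX RTMRs mentioned a moment ago: the extend operation is the same hash-chaining idea a TPM PCR uses. A minimal sketch (SHA-384, since the TDX registers are 48 bytes; the exact hardware semantics may differ in detail):

```python
import hashlib

RTMR_SIZE = 48  # TDX measurement registers are SHA-384 sized

def rtmr_extend(rtmr: bytes, event: bytes) -> bytes:
    """Fold a new event into a runtime measurement register:
    new = H(old || H(event)). Extends are one-way and order-sensitive,
    so the final value commits to the whole sequence of events."""
    return hashlib.sha384(rtmr + hashlib.sha384(event).digest()).digest()

# Registers start zeroed at launch; each loaded component extends one.
rtmr = bytes(RTMR_SIZE)
for component in (b"kernel", b"initrd", b"cmdline"):
    rtmr = rtmr_extend(rtmr, component)
```

Because each extend folds the previous value in, a verifier replays the event log and checks that the replayed chain ends at the reported register value.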
You must have a request encrypted and sent over from the owner, and only such requests will be accepted by the underlying hardware to generate the report. The signed report includes, as usual, the platform information, and the entire initial memory state, which in this case covers the kernel, the initrd, and the kernel command line; on this platform they're all loaded together, and they are also sent encrypted to the hardware. So the whole attestation flow is currently optional, because if the guest has started running at all, it means it was decrypted correctly, which means it's safe. Now we're going to discuss some ideas for how to unify these various different mechanisms we've seen so far. Yeah, I don't know how we decided this, but I got to explain the problem, and then I'm stuck with maybe some potential solutions. There have been a bunch of different talks this weekend that have touched on different elements of unifying confidential attestation. It's something that clearly everybody wants to do, and "unifying attestation" is actually a slippery term; there are different ways to think about it, whether from the boot perspective or from the perspective of what the hardware measurement is. The perspective we take in this presentation, which I think is a good one, is that what we're trying to do is measure the entire stack of the guest: the guest application and everything below it. And we want to figure out a way to do that that brings the different platforms together more than it pulls them apart. We're not going to try to use quirks that are only supported in one place to measure the whole stack; it'd be nice if there was an easy path that all platforms could hop onto. Now, hardware measurement is definitely going to be part of the story here.
You need to measure something with the hardware so that some part of the stack is linked to the hardware root of trust. And we're not really optimistic about the hardware measurements suddenly becoming very similar; I think we've accepted that different platforms are going to have somewhat different hardware measurements, because the platforms themselves are somewhat different. So as much as we talk about unification here, there's almost certainly going to be a step somewhere where you need to do something hardware-specific to understand: oh, this is SNP, these are the properties I can look at and validate against some policy. So, like I said, hardware is going to be part of the story, but we're probably also going to use software to measure another part of the stack. The rest of this presentation is really structured around finding the best place to split between those two things, what we want measured by the hardware and what we want measured by the software, and how we want to do that. Maybe by looking at the different possibilities we'll stumble onto one that we actually like. Let me also say, by the way, that we mainly talk about measurement in this presentation, but there's some subtlety about measuring stuff versus signing stuff versus encrypting stuff. We're not really going to get into the details there; maybe at the end somebody can give us a really tricky question about that. We're mainly just talking about measurement today. So the first approach we look at is the firmware-based approach: essentially, the hardware measures the firmware, and then the firmware measures everything else. There are different ways to do this, but one would be having a secure vTPM, basically, in the firmware.
Don't get too attached to the "firmware" language here; we're really talking about whatever forms the initial memory load of the guest. So, a secure vTPM: what would that even mean? I think we can understand why the way we currently do vTPMs isn't really suitable for confidential computing: the vTPM is emulated on the host, and that breaks the trust model completely. To have a secure vTPM, we need something that can't be tampered with by the host, obviously, but also something that can't be tampered with by the guest operating system, because we don't want to have to trust the guest operating system. For instance, a CSP could put a malicious kernel into the guest, and if that kernel were able to overwrite the PCRs later on, it could make itself look like a benign kernel with the correct measurement. So clearly, to be confidential-computing friendly, we don't want the host to be able to access it, but we also don't want the guest OS to be able to access it. If we had something like a TPM, it's somewhat easy to understand how it would be consumed by the higher levels of the stack, right? If we have this TPM, it has been measured into, and we can trust it, then we can use it as we go on to boot, and there's already a lot of infrastructure that uses TPMs to boot in different ways; a lot of thinking has gone into that over decades, really. So something you can picture right now is a CSP that already has an infrastructure built around TPMs. Their customers are already using encrypted disks and sealing keys to TPMs, and you can do that when you launch a bare-metal machine that has a real TPM, or in a traditional VM that has a vTPM. What if you could also just do that in a confidential VM, and what if you could do that for all the different platforms, right?
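For reference, the seal-to-PCR flow those existing deployments rely on boils down to something like this. It's a hashlib-only sketch, not a real TPM API; the function names and the release check are simplified stand-ins.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: the PCR accumulates a hash chain of the boot.
    return hashlib.sha256(pcr + measurement).digest()

# Each boot stage measures the next one before handing over control.
pcr = bytes(32)
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# Sealing binds key release to an expected PCR value: the disk key is
# only released if the boot chain produced exactly the expected state.
expected_pcr = pcr

def unseal(current_pcr: bytes, disk_key: bytes) -> bytes:
    if current_pcr != expected_pcr:
        raise PermissionError("boot state changed; refusing to release key")
    return disk_key
```

A swapped bootloader or kernel changes the chain, the PCR no longer matches, and the sealed key stays sealed; that's the property people already build encrypted-disk workflows on.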
So not only would we be unifying between, say, SEV and SNP, we would also be unifying between these different offerings: bare metal, traditional VMs, and confidential VMs. That's a pretty cool idea, a generic interface that could be applied in a lot of places. It could also hook into something like Keylime, right? Maybe the client already has infrastructure for handling TPM quotes at scale; maybe we could hook into that. So that all sounds pretty good. And on SNP, we're actually getting close to understanding how we can implement this. Tom gave a talk a couple of days ago, possibly yesterday (time works differently at this kind of conference), introducing the SVSM. The SVSM builds on the VMPL feature that Dov mentioned, where we get different privilege levels inside the guest. This gives us one of the properties we need here: something that can't be tampered with by the operating system. If you run something at VMPL0 and the operating system runs at VMPL1, the OS is not going to be able to overwrite those PCRs and mess with them, at least assuming you did things correctly. And of course, we have memory encryption that keeps the host from tampering with it. So we now have this SVSM code out on the internet, just the base SVSM, written in Rust, by the way, and I think there's some work going on to look at how to put a TPM on top of that. We don't have anything to share about that today, but hopefully in the future we'll have more details about how a TPM can actually work with SNP. So, like I said, that all seems pretty good. Unfortunately, the next few slides are going to be a bit of a downer. There are a couple of questions you run into pretty quickly when you start looking at this approach, and I think one of the main ones is: how do you actually provision these TPMs, right?
Take the example where we want to seal a key against a TPM. That means we have to know the TPM has a particular key, the SRK, that we can seal against. How does that get securely into the guest? This becomes a complicated problem of maintaining a bunch of TPM identities across all of your deployments, making sure you validate the state, the measurement, of the guest, and then injecting that TPM identity. And remember, this is something the guest owner has to do, right? So already you might be thinking: hold on a minute, don't clouds already offer persistent TPMs, or vTPMs, today? And yes, some do, but that's the cloud handling all the orchestration and getting the TPM to show up in the right place. If we're doing all this precisely so we don't have to trust them, we're going to need to be a bit smarter about how the identity gets in there, and we need to make sure it's the guest owner who actually does it. Another quirk is that we can't really do much with the TPM until the identity is injected into it. If we're going to use this TPM to validate the boot, and we want it to have the correct EK, we need to provision the EK into the TPM, and we don't want to boot until we've done that. The snag there is that we can't use something like the guest networking stack to do this provisioning, because we can't boot the guest until we have the TPM. So we're probably going to need some other way to get in contact with the host, some additional host support. Like I said, some clouds already have ways to provision these kinds of things and probably do this identity management, but you have to do it in a way that's trusted, right? So that's a bit of a head-scratcher. Now, there is an interesting proposal from our colleague James Bottomley, whom you may know, who suggested that maybe an ephemeral TPM would be easier to manage.
An ephemeral TPM would basically generate a new EK every time you boot it, and from a security perspective this works fine. You generate the new EK and put a hash of it into the attestation report one way or another, so the hardware measurement includes the hash of the EK. Then, when you validate that measurement, you compare it to the EK actually reported by the vTPM and make sure you're on the same page. There's a little snag here, which is that sealing keys to the TPM gets a bit harder if you're randomly regenerating the TPM identity each time. So it's a bit more complicated. You could still do it; you could still have the SRK generated on the fly, but now whoever is sealing the keys can't reuse some generic image matched up to a known SRK. Instead they have to think: oh, this new SRK just came to town, I have to seal a new key against it. We don't need to go down the rabbit hole too much here, but the point is that there are some questions about how to provision this. Another thing I hope is coming through is that what I said on the last slide, about how you could just plug this into Keylime or some existing infrastructure for verifying quotes, isn't quite true, because we have a software measurement and we also have a hardware measurement, and we won't get anywhere if we don't account for both of them. In fact, you can't really trust a quote from this vTPM until you've validated the hardware measurement. Now, there are different things you could do here; if you want to teach Keylime how to understand these measurements, that's one.
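The ephemeral-EK binding just described is simple to picture. In this sketch the report layout and key format are stand-ins, not any platform's actual ABI; the real binding would go through something like a user-supplied report-data field in the signed hardware report.

```python
import hashlib
import secrets

# Guest side (conceptually inside the secure vTPM): generate a fresh EK
# at boot and ask the hardware to bind its hash into the signed report.
ek_public = secrets.token_bytes(64)  # stand-in for a real EK public key
report_data = hashlib.sha256(ek_public).digest()
# ...report_data is embedded in the hardware attestation report...

# Verifier side: only trust quotes from a vTPM whose EK hash matches
# the (already validated) hardware attestation report.
def ek_matches_report(ek_pub: bytes, report_data_field: bytes) -> bool:
    return hashlib.sha256(ek_pub).digest() == report_data_field
```

A fake vTPM stood up by the host would present an EK whose hash does not appear in any genuine hardware report, so its quotes get rejected.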
Or you could have some sort of two-stage approach with the key broker provisioning these identities, or something like that. There are different ways to go about it, but the point is there are complications here. And there are more complications, interestingly. Like I said, we need particular properties for this to work: it can't be tamperable by the host or by the guest OS. With SEV-SNP we get these properties, but with other architectures it's not as obvious what to do. First of all, with SEV and SEV-ES we don't have those options, so it's not clear how best to implement something similar there; it might be that those platforms get left behind if we go to a vTPM-based approach, maybe not. Maybe the bigger elephant in the room is that it's not yet known how we would do something like this with TDX. The thing that makes you a little hesitant is that TDX doesn't give you as many RTMRs as a TPM has PCRs, so there isn't a one-to-one mapping between a TPM and the TDX measurement. Now, it's possible that someone smart, maybe one of you, can figure out a way to map the PCRs onto the RTMRs without losing any information, so that we can still keep the event log straight and everything works out. Another option is that we might be able to implement a vTPM as another enclave, one that's linked to the main one, and use that to implement the vTPM. There are some complexities there too: how you measure these things, how you make sure you have a secure connection between the two. A suggestion I heard the other day that's pretty interesting is that maybe we could piggyback on some features that are likely coming to the confidential computing world in terms of device support.
It seems likely that over the next few years we're going to be doing more with confidential devices, understanding how to use GPUs, things like that. Maybe we could treat a vTPM as another one of those devices, and that might pave the way to having one enclave serve as the vTPM emulation, with the guest's hardware measurement including something about that vTPM. So that's the vTPM-based approach, and honestly it has some big upsides. Standardizing around the vTPM in some ways makes a lot of sense, and if we're talking about unifying attestation, there are surely people who think: hold on a minute, aren't there already ways to unify attestation? A couple of notes here, though. First of all, in this firmware-based approach we don't necessarily have to use a TPM; we could use some other scheme for the software measurement. Some people have talked about DICE recently, which is another standard for doing somewhat similar things, and we could probably implement that in the same place. It's also worth noting that not everybody in the world actually likes TPMs that much, and as we've found trying to explore them, it's kind of hard to find TPM experts, people who deeply understand how to do this. So it's possible that this isn't where we should put all our eggs, but who knows. That's at least a little bit about this vTPM, firmware-based version. Like I said, we're looking into the specifics of how to do this; there are very interesting questions about provisioning and about how to support it on different architectures, but it's at least one very promising approach to unification. Okay, we're going to go up a level now. Another approach is to have the hardware measure the firmware and also the kernel, and then have the kernel measure everything else.
There are probably different ways we could do this. One would be to have a vTPM emulated in the kernel, as a module or something like that. Here we wouldn't need to isolate the vTPM from the kernel anymore, because the kernel is measured as part of the hardware measurement, but we would need to make sure that user space can't tamper with the kernel's emulation of the vTPM. Fortunately, that's kind of how the kernel is already built; the idea is that that shouldn't be possible. But would we have really strong hardware guarantees about it? Not necessarily. This is an approach we haven't looked into much, and hopefully nobody will ask too many questions about this slide, but it is a good one to think about. Maybe this is an option; for other parts of confidential computing there have been some attempts at unification at the kernel level, like UPM, and some of these ideas are not without controversy. It might be complicated to get people to agree there, but it's at least worth considering that if we measure the firmware and the kernel, we could have the kernel take care of things and be what joins everything together. Okay, up another level: the initrd. Hopefully you see the pattern by now: the hardware would measure the firmware, the kernel, and the initrd, and then something in the initrd would handle whatever comes next, unlock an encrypted disk, something like that. There are a couple of ways we could do this on different platforms already. For SEV and SEV-SNP we've been working on something called measured direct boot, which injects hashes into the firmware. Those hashes are in the firmware binary, the whole firmware binary is measured, and the components passed in via fw_cfg on a direct boot have to match them. So we can already use that to measure the initrd.
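The hash-matching in measured direct boot looks roughly like this; function and field names are made up for illustration, and the real check lives in the patched OVMF, not in Python.

```python
import hashlib

def verify_direct_boot(embedded_hashes: dict, provided: dict) -> None:
    """Compare hashes baked into the (measured) firmware against the
    kernel/initrd/cmdline blobs the untrusted host passes in at boot."""
    for name in ("kernel", "initrd", "cmdline"):
        if hashlib.sha256(provided[name]).digest() != embedded_hashes[name]:
            raise RuntimeError(f"measured direct boot: {name} hash mismatch")

# The embedded hashes ride inside the firmware image, so they are
# covered by the hardware launch measurement.
blobs = {"kernel": b"vmlinuz...", "initrd": b"initrd...", "cmdline": b"console=ttyS0"}
embedded = {k: hashlib.sha256(v).digest() for k, v in blobs.items()}
verify_direct_boot(embedded, blobs)  # matching blobs boot; tampered ones raise
```

Since the expected hashes are part of the measured firmware, attesting the firmware transitively attests the kernel, initrd, and command line, without those components being in the initial memory themselves.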
So what if we just put something in the initrd that helps us out here? TDX actually supports this too: with some patches, you can have OVMF extend the RTMRs during boot to make sure those components all get measured. This is the approach we actually take in Confidential Containers; minor plot twist, but we'll talk a little about that cool CNCF sandbox project in this presentation. Kata, the thing it's based on, does a direct boot, and when we do Kata with SEV we use a similar mechanism; Kata also boots straight into an initrd, so it's a great fit for this measured direct boot idea. Note, of course, that this is a very constrained way of booting a machine; it's not very generic or universal. Measured direct boot is good for some things, but not great for others. For instance, you have to separate the different components out of one disk; it's not some unified boot image where the kernel, the initrd, and everything go together. You might have problems if you update or upgrade one of those components, especially from inside the guest; it could be difficult to keep track of all the different pieces and make sure they all match up and fit together. But there are some benefits to doing it this way. So, like I said, the thing we're using in Confidential Containers is this attestation agent. The attestation agent's role in Confidential Containers is to provide secrets to the guest: to do whatever needs to be done inside the guest to initiate attestation, then talk to some external relying party, a key broker service, something like that, get secrets from it, and provide those to the guest.
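Concretely, here's a hypothetical Python rendering of the agent's pluggable interface; the real attestation-agent is written in Rust, and its APIs differ in detail, so treat the names and shapes here as illustrative only.

```python
from abc import ABC, abstractmethod

class KeyBrokerClient(ABC):
    """Per-platform plugin: gathers attestation evidence, talks to a
    key broker service (KBS), and returns released secrets/resources."""

    @abstractmethod
    def get_secret(self, secret_id: str) -> bytes:
        """Attest, then return a released secret (e.g. an image key)."""

    @abstractmethod
    def get_resource(self, path: str) -> bytes:
        """Fetch a policy or other resource after attestation."""

class OfflineKbc(KeyBrokerClient):
    """Trivial stand-in that releases baked-in secrets without a
    network round trip; a real KBC would attest to a remote KBS."""

    def __init__(self, store: dict):
        self._store = store

    def get_secret(self, secret_id: str) -> bytes:
        return self._store[secret_id]

    def get_resource(self, path: str) -> bytes:
        return self._store[path]
```

The idea is that supporting a new architecture means writing one new client behind this small surface, while everything above it (image decryption, disk unlock) stays unchanged.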
This is a user-space process, and it's modular, which we think are two big benefits. We have this modular concept called key broker clients: basically, if you want the attestation agent to support a new architecture, you just write a key broker client that knows how to get the attestation evidence, knows how to hand it to somebody, and fulfills these two APIs, get_secret and get_resource. We use get_secret and get_resource to talk to, in this case, ocicrypt, which decrypts container images and handles their signatures. Well, if you can use that for signatures and for decrypting container images, maybe you could also use it for, say, a VM's root disk, right? Would it also work to use this in the VM case and have something in the initrd that helps you unlock some sort of encrypted disk? There are some upsides to this. For instance, it's nice to be working in user space. Of course, we still need support at other levels of the stack; measured direct boot required patches to the firmware and to QEMU, but it's nice to have most of the work done in user space. It's also nice if you want other KBSes involved: if you want to implement your own thing, or add something, no problem. A big issue here, of course, is that we don't have a standard; this isn't a common thing to find in an initrd, and the attestation agent today doesn't ship with anything besides Confidential Containers. It's possible that if a component like this were picked up by a distro, then maybe this would be a good way to go: everybody using some version of Linux would just have, in their initrd, something that knows how to get measurements, give them to services, get keys back, and hand them off to something that can decrypt a disk. That could be a big step towards unifying things that wouldn't really involve too much messing around in the kernel, wouldn't involve a lot of messing around in firmware, and wouldn't require us to learn how TPMs work. It could be nice.

The final option here is having the hardware measure everything. There are ups and downs to this. If I set out today to do this with QEMU, for instance, I wouldn't really be able to; obviously you could change the way things work, but QEMU, for SEV at least, just measures flash 0, so just the firmware. There are reasons for this, and I'm not saying it's a bad choice by any means, but it doesn't measure the whole stack; that's the point. It's also not clear that you necessarily should extend it to measure the entire stack. You would have to change the way your guest boots, because these confidential computing platforms want to measure what's in memory; if you want to measure the entire stack, the entire application, you basically have to put the whole thing in memory when the guest boots. There are ways to do this, but they're not really standard. It also increases the amount of stuff you have to measure at boot time, which usually slows down boot, so that's not exactly ideal. There are also some downsides to using the hardware measurement for the whole application (such an inspiring subject): hardware measurements are kind of difficult to consume. Not every end user knows exactly what to do when you hand them some hash. You can tell them, well, that's the hash of the kernel, the firmware, the stuff, but how do they actually verify that? It can be pretty tricky. There might be nicer things to give them; is a quote from a TPM nicer? That's up to you to decide, but at least it's a little more widely known. Now, I don't want to completely throw this approach under the bus, because there are projects that use it effectively. libkrun more or less does this: it puts everything, the whole application, into memory, and it means that when you get the attestation report it measures everything. And that's great, because you have an attestation report that tells you exactly what ran in your workload. But like I said, there are trade-offs, and one significant thing is that this is basically non-unified attestation, because all you have is a hardware measurement, and like I said, we're conceding that the hardware measurements are going to be different. Also, it's kind of hard to measure the whole stack like this unless you do some tricks; it's hard, in a standard way, to measure the entire guest workload and the guest stack. So if you think about it, this approach doesn't really meet either of the goals we started out with: measuring the whole stack, and doing it in a unified way. The really not-so-great news is that this is what we do now. This is basically the state of the art: we leave the user to their own devices to deal with the hardware measurement, and that hardware measurement usually, by default, won't even cover the entire stack. So I think we need to do something to take a step away from this situation: make sure we measure everything, but ideally involve some kind of software component as well. Now, we said we were going to focus mainly on the guest, and that's true, but it's worth talking a little about how to verify this stuff; I've kind of alluded to it already. We think the verification mainly follows from what you've implemented in the guest. Like I said, it's most likely going to have two different parts: assuming you have some software measurement going on, you're going to need something that
consumes that software measurement that understands tpm quote something like that but in a in order to trust that measurement in order to build the chain of trust from the hardware up to the software measurement you also need to go through the hardware measurement right and make sure that everything checks out there so there's going to be some sort of two-stage thing and that is going to make things a little bit more complicated also like i said this hardware measurement is going to differ from platform to platform so we should probably set our expectations a little bit when we talk about unifying confidential attestation it's not going to be the case that we're going to have some you know 10 line program that can you know that can take any measurement for many guests and immediately know what to do with it most likely there's going to be some kind of software component and we're also going to need to understand a bunch of different hardware measurements you could split this between multiple services you could have something that handles the hardware measurement and then some other thing that touches more on the software itself okay so we have like for the research stuff where should we go what should we do the first thing is probably obvious we should decide what we should do like i said in the beginning it seems like everybody you talk to wants almost everybody you talk to wants us to standardize this in some way it just seems like a massive headache if we don't in some ways it's early days you know like i said we're probably going to start to understand more about how we're like adding attestations of like i o stuff to confidential computing that's probably going to make the picture more complicated not less complicated so in some ways maybe it's way too early but on the other hand it could be exactly the right time to start coming together a little bit for what we want to do and set ourselves up in a way that's not going to be that complicated keep in mind that 
there's new architectures every day that are adding confidential computing support and maybe if we start to do something now we can influence the way that they go a little bit maybe we can't choose at all but it would at least be good to understand some of the trade-offs and at least be in the ballpark of what we're doing together overall i think we can understand that if everybody ends up with different approaches here it hurts our efficiency as developers and it also hurts security it would be a lot better to have some common things that are well understood that everybody uses that people have used a lot and that everybody has looked at and sort of audited that's probably the more secure approach rather than everybody just kind of going off and doing something random so hopefully with the people here and elsewhere we can take a good step in this direction these are some ideas about how to do it I think now we just take some questions or if anybody wants to rush the stage and destroy the presentation that's fine or if anybody wants to vigorously support one of these approaches that would be cool too but anyone any takers on any of these options we'll give you oh Christof so wait wait hold on we don't want to repeat the question that's too much work so far so good a very good presentation thank you so you mentioned the components that we use in confidential containers today and express the hope that this would go into the platforms the distroes etc doesn't that somewhat imply that you'd go the TPM route well so right now the like the attestation agent is mainly we're not really expecting to use the VTPM there at least today we do kind of see that as an alternative you obviously could use the attestation agent you know you could implement a KBC modular component of that that consumes a TPM whether or not that's a you know secure VTPM coming from an SVSM or any other kind of TPM you could fit those together but right now we have this measured direct boot thing we 
think: okay, great, you have measured direct boot, you have the attestation agent, you're good to go. The Achilles' heel of that is really the drawbacks of measured direct boot. But yeah, we don't necessarily see that having the attestation agent would imply that you're going to have the vTPM.

So the meaning of my question was: if you put that in the distros, and you want to standardize the boot for those distros, that seems to imply, and I think that was more the result of the BoF we had a couple of days ago, that the only option that works on physical hardware is based on the TPM, and if we want to unify on this, we probably want to use the TPM interfaces for the virtual case as well.

With the attestation agent, we would basically add something when we're booting up, like when we're decrypting the disk, that would talk to the attestation agent. For SNP you do measured direct boot, and you get the attestation report that has in its measurement the hashes of the initrd, the kernel, and the kernel command line. You don't need the vTPM anywhere to get the whole thing measured in that case, as long as the attestation agent is inside the initrd, so it's inside the measured part.

But that's only for the virtual case, isn't it? It's only for the virtual machine case, isn't it?

Yeah, yeah.

So my point is that if you want the distros to accept that, they probably want that to be part of the initrd. And as long as we are thinking about an initrd that comes from the distro, built by, for instance, Red Hat, then that initrd has to support at least a large fraction of the physical hardware.

Yeah. Well, I think it's true that, out of the approaches we talked about today, the one that gives you the most interoperability with real VMs, traditional VMs, or whatever, is going to be the vTPM-based approach. When you look at the hardware measurement, if you're getting
the attestation report as part of this, there's no bare-metal equivalent to getting the attestation report. So there's not going to be any way that you can get the attestation report in a VM but also do exactly the same thing on bare metal. It's really just the TPM interface that gives you that kind of option.

Yeah, I'm not trying to force your hand one way, but I'm trying to rehash here the discussion we had at the BoF.

No, I think that's a big benefit of the vTPM, or the TPM-based approach in general: it's an API that's available in tons of different contexts, and that's not true of anything else, as far as I know.

I can second that. In VM environments where you want full freedom over what you're executing, which means it may not even be running Linux at all in the first place, capping off at the firmware level, that basically means the vTPM is the way to go; I can totally relate to that. The interesting problem that you mentioned, which I've been cracking my head on for quite a while as well, is exactly the provisioning, as well as the persistence. Persistence is even harder than provisioning: how do you trust your cloud hypervisor with secure data when you don't actually know where it's stored, ideally encrypted somewhere while the machine is off, and then decrypted again, without having attestation done before that decryption, because you can never talk to the outside?

Yeah, persistence is a big problem, and that's another benefit of this ephemeral approach, which in some ways is kind of weird, but it sidesteps some really big problems. You know, TPMs have counters in them, and how do you propagate that state back to wherever it's going to be stored, and do that securely? That's a big mess. So yeah, definitely keeping those things straight is tricky. And I also agree with the first thing you were saying, which is that if we look at what is going to
require the least amount of work by distros, or the least amount of work by different OSes and different platforms, to adopt this, surely it's got to be the TPM-based approach, because everybody already supports that. That's another huge benefit, I think.

Correct. Also, it gets us multi-platform support immediately. The real big question behind that question is: how do we get to a solution on the TPM side for provisioning?

Well, this is where we sort of lean into the ephemeral thing a little bit, but really it's kind of just pushing the problem somewhere else. Either you have to have this persistent TPM, or you're going to have to do a lot of messing around to change your expectations on the fly: you need to inject the state into the ephemeral TPM immediately after it starts, or something like that. It's funny, we're on the further-research slide. But another thing to think about is: can we get away with maybe not keeping some pieces of the state? Like, would it be enough to just have the EK of the TPM? Because you could inject that at boot, and that's not that hard, and if you don't need to...

Let me reword what I was trying to say. I am 99% confident we're not going to solve it in this session. How do we follow up properly so that we do not build five different solutions across the industry?

That's a good question. I agree. I don't know.

Okay, let's just talk afterwards, after the session. Let's sit down and try to figure out some way to coordinate and work together, what mailing list is best, or something.

Correct, exactly. Let's just make sure we're all going the same way.

We have a question over here.

So, focusing on the initrd strategy, which I have to acknowledge I came to favor: right now in some distributions we have Clevis to unlock the LUKS devices, and Clevis is already pluggable; it already has multiple pins, so you can unlock with the TPM2 pin, or with a passphrase, or even some network protocol. So do you think it would be feasible to extend Clevis to support multiple attestation protocols?

Yeah, I think we learned about this a bit too late, but I think in several senses it's similar to the attestation agent with different KBCs in it, which was designed for Kata Containers and Confidential Containers. But yeah, Clevis, for example, can take a secret from the TPM and so on, so there might be overlap, and maybe we can reuse it.

Thanks. It also has a server-side component, Tang, which is maybe like the KBS.

You want to go? Right over there. Okay, last question, can you go really fast?

Yeah, I guess my question is: it seems like we have very different use cases being presented. You're talking about the confidential container, and we're talking about booting generic Linux distro images. Do you see it as a failure if we use multiple measurement systems but unify somewhere else? Like, if the relying-party code is unified, is that a failure in your eyes, or do you think that's a reasonable way to standardize?

Let me try to answer this really quickly. I probably shouldn't, but okay, whatever. I think the attestation-agent-based approach is a little bit overfit to containers. Not entirely, but it's a little overfit in that it uses direct boot, and that's not ideal; like I said, the Achilles' heel of that thing is that direct boot is not great for generic situations involving VMs. The vTPM thing doesn't have that problem as much, so I actually do think the vTPM is a pretty good candidate for working across a lot of different use cases, and if we do that, then we can have the verification side of things be pretty unified. I wouldn't necessarily say it would be a failure if it's not really unified. I think at the end of the day, it's possible that what will actually happen is that we're just going to have massive verification things
attestation services that just speak a ton of languages. The fewer, the better, but we'll see what ends up happening. Okay, that's it. Thank you, everyone.

Yeah, thanks, guys.
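The two-stage split discussed at the top of this Q&A, a platform-specific hardware-evidence check feeding a common software-measurement policy, could be sketched roughly as below. This is purely illustrative: every name here (`Evidence`, `HW_VERIFIERS`, the reference measurement) is hypothetical and not taken from any real attestation service, and the per-platform verifiers are stubs standing in for real signature and certificate-chain checks.

```python
# Hypothetical sketch of a two-stage attestation verifier:
# stage 1 is platform-specific (SNP, TDX, ...); stage 2 is a common
# software-measurement policy shared across platforms.
from dataclasses import dataclass


@dataclass
class Evidence:
    platform: str          # e.g. "snp" or "tdx"
    raw_report: bytes      # signed hardware report, platform-specific format
    sw_measurement: bytes  # software measurement extracted from the report


def verify_snp(report: bytes) -> bool:
    # Stub: real code would verify the AMD cert chain, the report
    # signature, and the platform info (TCB versions, etc.).
    return len(report) > 0


def verify_tdx(report: bytes) -> bool:
    # Stub: real code would verify the TD quote and the RTMR values.
    return len(report) > 0


# Stage 1: one verifier per hardware platform.
HW_VERIFIERS = {"snp": verify_snp, "tdx": verify_tdx}

# Stage 2: one policy over software measurements, independent of
# which platform produced them (hypothetical reference value).
REFERENCE_MEASUREMENTS = {b"kernel+initrd+cmdline-v1"}


def verify(evidence: Evidence) -> bool:
    """Accept only if both the hardware and software stages pass."""
    hw_ok = HW_VERIFIERS[evidence.platform](evidence.raw_report)
    sw_ok = evidence.sw_measurement in REFERENCE_MEASUREMENTS
    return hw_ok and sw_ok
```

The point of the split is that only `HW_VERIFIERS` has to grow when a new architecture appears, while the stage-2 policy, the part closest to the relying party, stays the same, which matches the idea floated in the talk of handling the hardware measurement in one service and the software side in another.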