Are we good? All right, so let's get started here. Hello, everyone. I'm Nate Reller, and I'm from the Johns Hopkins University Applied Physics Laboratory. For those of you who don't know me, I've been at APL for over 10 years now. During that time, I've researched areas such as assured information sharing, measurement and attestation, and cloud security. I was the technical lead for APL when we joined the OpenStack community back in 2012, and our mission then is the same as it is today: to improve the security of OpenStack. I think we've had some pretty good successes with that mission. Our first big security feature was Cinder volume encryption. Along with that, we added the original key manager interface. I was the author of that, and I put it in Nova and Cinder. That's since been pulled out, and it's now its own library called Castellan, which has been integrated into several services, including Nova, Cinder, Glance, Swift, and Sahara. Our latest security feature was Glance image signing. We got most of that in for the Mitaka release, but we're hoping to finish it up for the Newton release. So as you can see, we have different security features that involve encryption, signatures, and keys, and that's obviously why we're interested in key management. That's what I wanna talk about today. In particular, I wanna talk about the bring your own key key management strategy. This talk is gonna be a little bit different than some of the others you might see. Most talks are focused on things people have done with the existing OpenStack code, like cool features they've built. This one is gonna border a little bit on design, because bring your own key isn't implemented yet in OpenStack. So part of the focus of this talk will be giving you my ideas for how I would implement bring your own key.
That way we can kind of kickstart the design sessions, as well as identify some of the requirements that are needed for it. The other thing I hope you get out of this is being able to identify the three different key management strategies that have been talked about, so you can pick whichever one works best for you. So we talked a little bit about the agenda; here's a couple more details. First, just a couple of quick slides: I wanna give a definition and some of the properties of bring your own key, so we're all on the same page about what that means. Then we're gonna talk about why you might want bring your own key, from a customer's point of view as well as a provider's point of view. And then we'll get into the bulk of the presentation, which is looking at the three different key management strategies. First, there's provider key management. That's where your cloud provider is gonna manage everything for you. They'll have your key manager, and you may or may not even have access to it. Then we'll look at the bring your own key push model, which I've renamed simply bring your own key. I know that might be a little confusing, because we're talking about bring your own key in general, but I decided to rename it because when you look at bring your own key with other cloud providers like Amazon or Google, this is typically what they mean. And the third model is the bring your own key pull model, which I've renamed bring your own key manager, and you'll see why when we get to that particular strategy. So what is bring your own key? For me, it has three basic properties. The first is that the customer provides the key to the cloud provider. Anytime you wanna do some sort of encryption or digital signature and you want the cloud provider to do it, the customer provides the key.
Now, when the key is received by the cloud provider, the second property is that they only maintain it for the duration of that operation. You tell it to do something and give it the key. It performs that operation, and when it's done, it overwrites the memory where that key was, which essentially wipes the key from the provider's namespace. So at that point, after the encryption is done, the provider has no knowledge of that key. So why would you want bring your own key? There's a couple of perspectives here. From a customer's point of view, the biggest reason I hear is regulatory compliance. I put that with a question mark because I've read through a couple of those specs, the NIST and the PCI specs, and I don't see anything in there that definitively says if you're using cloud services, then you should use bring your own key. I've asked around, and I haven't found a really solid requirement that spells that out. So if anyone knows, after the presentation I'd really like to hear which ones they're referring to. From what I gather from the different people I've talked to, they say it's in the spirit of those requirements. This talk is not gonna focus on the requirements in those documents, but I bring them up because that's why people are looking at this model. The second reason I see is the fear of lost or stolen data. If you're putting your information in the cloud, your cloud provider may, say, lose some disk they're using for encrypted volumes. Well, if those volumes have been encrypted and the provider somehow loses them, you still have the encryption keys. If some adversary happens to get those disks, they still need to get your keys. So you have this nice separation of your data from your keys.
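A minimal sketch of that wipe-after-use property, in Python. The XOR "cipher" here is a toy stand-in for a real cipher like AES, and note one caveat: in Python, zeroing a bytearray doesn't guarantee no other copies of the key exist in memory; real implementations do this in C against the actual key buffer.

```python
import secrets

def encrypt_with_transient_key(data, key):
    """Encrypt `data` with a caller-supplied key (a bytearray), keeping the
    key only for the duration of the operation. Toy XOR stands in for AES."""
    try:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    finally:
        # Overwrite the key material before returning control, so the
        # provider side retains no knowledge of the key afterward.
        for i in range(len(key)):
            key[i] = 0

customer_key = bytearray(secrets.token_bytes(32))
ciphertext = encrypt_with_transient_key(b"tenant data", customer_key)
assert all(b == 0 for b in customer_key)  # key wiped after the operation
```

The `try`/`finally` ensures the wipe happens even if the operation fails partway through, which is the behavior the second BYOK property asks for.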
From a provider's point of view, the two big reasons I see are that customers want it, and feature parity: basically, the other cloud providers are doing this, so we should too. And I say that a little bit mockingly, but it's actually a good reason. I mean, if we wanna be the dominant cloud architecture out there, and customers want this and are switching to other providers because they offer it, then I think we should look into providing that feature. So now we'll get into the bulk of the presentation. We're gonna go over three different key management strategies that have been discussed. The first one is provider key management. This is where the cloud provider takes care of everything for you. To set some expectations for how this will flow, I'll first give a diagram that illustrates how the key management strategy works for ephemeral disk encryption, and then we'll go over some of the issues that may arise with that particular strategy. So here's ephemeral storage encryption with provider key management. This is all gonna be kicked off by your customer. For those of you who are new to OpenStack, the first thing you do as a customer is go to Horizon, which is the graphical user interface, and make a request to create a new VM server instance. You pass in the image ID for the image you want used for that particular VM. That'll be like, do you wanna use RHEL or Ubuntu, whatever favorite distribution you may like. Once Horizon receives that request, it sends it down to the compute controller. The compute controller receives it, runs its scheduling algorithm, and determines which physical compute node in its rack of machines should host that particular VM. Then the compute node receives that request.
At that point, it has a request that says: okay, I need to create a VM, and I need to have an encrypted ephemeral disk with it. So the first thing it does is go to Barbican, which is the de facto key manager, create a key, and then retrieve it. The compute node now has a key. It sets up dm-crypt, passes it the encryption key, and at that point it goes to Glance to get the image bytes and copies them through dm-crypt. So now you have an encrypted ephemeral disk. There are a couple of issues with this particular approach. From the customer's point of view, the big thing is that your provider is responsible for managing your keys. So you gotta ask yourself a couple of questions, like: how much do I trust these people who are managing my keys? That's sensitive information; is this someone that I trust? One of the aspects of cloud is that providers are gonna be processing and storing lots of data. There could be intellectual property, legal stuff, tax forms, credit cards, whatever you want. And because there's so much information there, it makes a nice target for adversaries. So one thing I would be concerned about is: are these sysadmins gonna be loyal to me? Are they susceptible to bribes? Have they had background checks? Do they have financial issues that could make them susceptible to those types of attacks? The other thing you gotta worry about is how they're managing your keys. Are they following good practices and procedures for storing those keys? Are they auditing their key managers? Are they monitoring them for suspicious activity? And what are they doing for credentials management? Credentials management is an issue I wanna talk about a little bit more here. For those of you new to OpenStack, I simplified the diagram a little bit.
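As an aside, the create-then-retrieve flow just described might look roughly like this toy sketch. All the class and method names here are hypothetical; the real code path goes through the Castellan interface to Barbican, with Keystone-token authentication rather than project-ID strings:

```python
import secrets

class KeyManager:
    """Toy stand-in for Barbican: creates and stores keys per project."""
    def __init__(self):
        self._store = {}

    def create_key(self, project_id, length=32):
        # Create a new symmetric key owned by the requesting project.
        key_id = secrets.token_hex(8)
        self._store[key_id] = (project_id, secrets.token_bytes(length))
        return key_id

    def get_key(self, project_id, key_id):
        # Only the owning project may retrieve the key.
        owner, key = self._store[key_id]
        if owner != project_id:
            raise PermissionError("key belongs to another project")
        return key

# Compute-node side: create a key for the new ephemeral disk, retrieve it,
# and hand it to dm-crypt (represented here by just holding the bytes).
barbican = KeyManager()
key_id = barbican.create_key("project-a")
disk_key = barbican.get_key("project-a", key_id)
```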
So in addition to the image ID that you pass for your particular VM, you also need to pass in a token. What you do first is actually go to Keystone to authenticate, and that returns a token for you. That's your user token, and you pass it to each of the service requests, so those are done on your behalf. In the diagram shown, you can see that token is passed to Horizon, which passes it down to the compute controller, which goes down to the compute node. At that point, the compute node reuses that user token to talk to Barbican and create those keys on your behalf. So now, if we look at Barbican, we'll have an audit log that says: oh yeah, there are keys created by Nate Reller, and these are for ephemeral encryption. That's a nice feature of using user tokens. The downside of that particular approach is that you're passing that token through all the services. If I can compromise one of the services in the call chain, like Horizon, I can start siphoning tokens and then talk to the key manager to start pulling out keys. The same goes for the compute controller. An alternative would be to use a service token. What this means is that when you provision OpenStack, typically you're gonna put all your services in the same service project, and they have their own credentials they can use to authenticate to Keystone. Basically, what happens is the compute node authenticates to Keystone, receives its token back, which I call a service token, and creates the keys using that token. So now all the keys look like they're owned by the compute node. That seems better, but the downside, with the policy that's set up in Barbican, is that again, if I can compromise one of the other services and steal their credentials, they're all in the same project and they're all admins.
Which means if I can compromise Horizon or some other service, then I can simply retrieve all the keys at that point. So neither one of those approaches seems to have enough security. One of the proposals that I have, and this is where it touches on design, is for wrapped tokens. What I really want is to only release that key on behalf of a user if a particular service is making that call. So I see reorganizing the sequence of events here: the user goes to the key manager directly and creates a new key using their user token, so that key is created on their behalf, and then designates it as an ephemeral disk encryption key, basically saying that only the compute node can receive this key, and only if it's acting on my behalf. Then, when we make the call and the user's token goes down to the compute node, the compute node takes my user token, talks to Keystone to get its service token, and basically has those wrapped together. Now, when I present that wrapped token to Barbican, I have evidence of both the user and the service making the request. This way, if I take over one of the other services, I not only have to have that service token, but I also have to have the user's token as well. So now we'll get into the second model, which is the bring your own key push model, which I simply renamed bring your own key. This model is a little bit different, though it looks similar to what we saw last time. Again, the user kicks off the request and says: I want a new VM server instance, and I want an encrypted ephemeral disk. So now, in addition to passing that image ID, I'm also going to pass the actual encryption key that will be used for dm-crypt.
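Going back to the wrapped-token proposal for a second, a toy sketch of the check the key manager would make. All the token values and policy fields here are made up, and a real design would validate both tokens against Keystone rather than compare strings; the point is only that releasing a key requires evidence of both parties:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WrappedToken:
    user_token: str
    service_token: str

# Hypothetical metadata recorded when the user creates the key: only the
# named service, acting on behalf of the named user, may retrieve it.
KEY_POLICY = {
    "key-123": {"user": "user-tok-nate", "service": "svc-tok-compute"},
}

def release_key(key_id, token):
    # Require evidence of BOTH the user and the service, so a compromised
    # Horizon holding only one of the two tokens gets nothing.
    policy = KEY_POLICY[key_id]
    return (token.user_token == policy["user"]
            and token.service_token == policy["service"])
```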
So the user passes that to Horizon, which makes the same call down to the compute controller; it runs the scheduling algorithm and then forwards it on to the compute node. At the compute node, this is where it gets a little different. It says: okay, I need to create a VM with an encrypted ephemeral disk. I already have the key, so there's no point in going to Barbican. I'll simply set up dm-crypt, pass it the key, and then copy the bytes as normal from Glance. So now I have an encrypted ephemeral disk. There are a few issues with this. The first one, going back a slide, is that the plain text key is passed through each of those services. So now, if I can compromise one of the services in the call chain, I have direct access to that key right away. We're gonna spend several slides on that particular issue. The second one is that your users are now key custodians. I'm picturing an environment like APL, where we have around 4,000 researchers. We want them to be able to create VMs, set up their experiments, and run them. But part of this means they now have to have access to your key manager, and I wanna talk about some of the consequences of that. Another consequence is that service actions must be initiated by the customer. What I mean by that is that most of the actions are invoked by the user, so simply asking them to supply the key is no big deal: things like attaching a volume or creating a new VM, that's all fine. But there are certain actions done by the cloud provider that are not invoked by the user. So imagine your cloud provider has a rack of machines set up, some of them with encryption running, and they get a new rack of machines and need to do a migration. How are they gonna do that migration?
Part of what you have to do on the new rack is set up dm-crypt and pass it that key. Well, from the compute node's point of view, it gave that key to dm-crypt, and there's no way to get it back; it's a one-way flow. So at that point, you can do one of two things. You can deny the service, so no migration. That sounds like a terrible idea, and cloud providers probably won't support it. Or you can cache the key. You basically need a copy of that key in the compute node's namespace so that when you do the migration, you can transfer it to the new machine. I don't like having a second copy of the key, but it's not the worst thing; you could encrypt it and store it with a TPM or something like that. For me, it's just one of those things that, when we start designing this, we're gonna have to be aware of, so that all the services in situations like this can safely store that key in their namespace. The last issue I wanna talk about is that this is more work for the customer. Your customer now has to buy a key manager device. They're gonna have to set it up, secure it, and monitor it. That's a lot of overhead. On top of that, you have these keys protecting your data, and your cloud provider doesn't retain knowledge of them. So if you have a fire or an earthquake, something happens to your key manager and it's destroyed, you can call up your cloud provider and say: hey, I just lost all my keys, can you guys help me out? The cloud provider's gonna say: sorry, dude, you're SOL, you're simply out of luck here. There's nothing we can do at that point. So that's another risk that stuck out for me. So let's talk about that plain text key being exposed.
I didn't really wanna talk about requirements, but I thought I'd bring these up, because if you're following good security practices, you're probably following these already. The first one basically says: if you're gonna transmit a symmetric key or a private key, something that's sensitive, you should wrap that key. And you can make the argument: I have a TLS tunnel, so the key will be wrapped while I'm transmitting it. But once that key is received by services like Horizon and the compute controller, which don't need to see it, the key's gonna be in plain text in their memory space before they forward it on. So you're breaking the second one, which is that you wanna restrict key access to the least number of key custodians. The obvious proposal, the way I see this likely going, would be: let's wrap the keys before we send them. The user gets the public certificate of the cloud provider, or of the recipient, sends the wrapped key as part of the request, and then the recipient decrypts it with its private key and has access to the plain key. The big question here is: who's that recipient? Whose public key is this? For some operations, you don't know who the end recipient is. Ideally, we wanna wrap that key from the customer to the compute node that's actually doing the work. But when we look at ephemeral disk encryption, we don't know who that compute node is. The compute controller is a one-to-many relationship: it determines at runtime which node should host the VM, based on its scheduling algorithm. So you could do a couple of things. You could do a reservation system, if you wanted end-to-end encryption, where you basically say: hey, I wanna create a VM, it's encrypted, and that return call gives you the certificate of the endpoint, and then you wrap the key with that and make another request. I don't see that happening.
That's a lot of overhead. So I think the most likely conclusion is: let's just use a single provider certificate. Basically, instead of doing end-to-end encryption, your cloud provider, Rackspace or whoever, will post their certificate, and that's what you'll use to wrap your key. So in the case of ephemeral disk encryption, we wrap the key with that certificate and send the request across, and now the Horizon dashboard and the compute controller can't see the key that's being wrapped, but the compute node in this particular instance can. That sounds great; it sounds like we've made good progress. Now, the downside is: which services are we gonna trust to have that private key? This is where I think things will get interesting in the community, because there are certain services that we know you must trust. If you're gonna run something in the cloud, you must trust your compute node, because it has your hypervisor and it can see everything. So that's an easy case where we can say: okay, it should have access to the private key, and we have use cases that demonstrate a need for it. We have some other ones, like Keystone and Neutron, which are probably good candidates for trusted services, but I'd have to dig more into those. And then we have this category of untrusted services: services that I think we don't necessarily need to trust, where we can architect the system in a way that mitigates the trust we must place in them. If we look at a service like Glance, this is one I think we can label as untrusted. There's one feature I talked about already, which is Glance image signing. That allows us to sign the image so that if anybody tries to tamper with it or modify it in any way, we can detect that.
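To make the wrapping step concrete, here's a toy sketch. A real deployment would wrap under the provider's public certificate with an asymmetric scheme like RSA-OAEP, so the customer never holds a provider secret; the hash-derived XOR keystream below is purely illustrative, to show that intermediaries like Horizon and the compute controller only ever see an opaque blob:

```python
import hashlib
import secrets

def _keystream(secret, n):
    # Toy keystream derived from the secret; stands in for real asymmetric
    # wrapping (e.g. RSA-OAEP under the provider's public certificate).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def wrap(key, provider_secret):
    return bytes(a ^ b for a, b in zip(key, _keystream(provider_secret, len(key))))

unwrap = wrap  # XOR is its own inverse

# The customer wraps their key; Horizon and the compute controller only
# forward the opaque blob; the compute node, holding the provider's
# private key, unwraps it.
customer_key = secrets.token_bytes(32)
provider_secret = secrets.token_bytes(32)
blob = wrap(customer_key, provider_secret)
assert unwrap(blob, provider_secret) == customer_key
```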
The other property we're looking at implementing is Glance image encryption, so that Glance can't read the contents. If we see this in action: the first thing the user does is create their pristine image, whatever that might be, then sign it and encrypt it locally on their machine. They then give it up to Glance. So now Glance has an image; it's signed, so it can't be tampered with, and it's also encrypted, so if an adversary gets into Glance and starts retrieving images, they can't read the data, and I'm not too worried about that. When we want to use that image, we simply pass it over to the compute node, which has access to the keys to decrypt it and verify the signature. That's an example of where I can see Glance being a service we label as untrusted. That's also one of the reasons why, for Cinder volume encryption, we put the encryption on the compute node side, so that we encrypt all the bytes before they're sent off to Cinder. That way we don't have another component we have to trust, because if we have to trust it, that means more security requirements and more work to make sure it isn't compromised or vulnerable to attack. The last issue I wanna talk about with the bring your own key model is users as key custodians. This becomes an issue in environments like APL, where we want our users to create the virtual machines. It may or may not be an issue in more automated environments. But in an environment like APL, we have our researchers, we want them to create these VMs, and we want the data to be encrypted. The first thing you'll notice is that when you make a request, the user's gonna have this long string of random bytes that they have to copy in. So it's gonna look a little weird.
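The sign-locally-then-encrypt flow for Glance images can be sketched like this. HMAC is a stdlib stand-in for the public-key signature that Glance image signing actually uses, and the hash-derived XOR keystream stands in for a real cipher; the point is the ordering and the tamper check on the compute-node side:

```python
import hashlib
import hmac

def _keystream(key, n):
    # Toy keystream: repeated SHA-256(key); stands in for a real cipher.
    block = hashlib.sha256(key).digest()
    return (block * (n // len(block) + 1))[:n]

def protect_image(image, sign_key, enc_key):
    # Sign first, then encrypt, as in the flow above. HMAC stands in for
    # the public-key signature Glance image signing really uses.
    signature = hmac.new(sign_key, image, hashlib.sha256).digest()
    ciphertext = bytes(a ^ b for a, b in zip(image, _keystream(enc_key, len(image))))
    return ciphertext, signature

def recover_image(ciphertext, signature, sign_key, enc_key):
    # Compute-node side: decrypt, then verify before using the image.
    image = bytes(a ^ b for a, b in zip(ciphertext, _keystream(enc_key, len(ciphertext))))
    expected = hmac.new(sign_key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("image has been tampered with")
    return image
```

An adversary inside Glance sees only the ciphertext, and any modification makes `recover_image` raise instead of booting a tampered image.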
I'm hoping they don't somehow modify that or mess it up. But for me, the interesting part comes when you walk through the sequence of events they'll have to follow to copy that key to Horizon and pass it to the provider. The first thing they're gonna have to do is access their key manager. This means all my APL employees, if APL decided to do something like this, now have credentials to access my key manager. That key manager contains sensitive information; I'm not sure I want 4,000 people having access to it. The other part is: how are they accessing it? Most of these are gonna be set up with some sort of web interface. So now my users are opening their web browsers to access my key manager. How much do you trust that platform, or how much do you believe that platform hasn't been compromised? In this day and age, we operate under the assumption that it more than likely is compromised. And on top of that, you add users who are not the most security conscious. I sit in our cybersecurity operations center; basically, that means I sit on the watch floor, process alerts that come in, and mitigate them. And we discuss what we see with other SOC operators within our community. We'll see things like a user who has an alert that says: oh, you have malware on your machine. You call up the user and ask: what did you do? And they'll say: oh, I downloaded Chrome. Where did you get it? And it's not from Google. And it's like: this is the guy I wanna have access to my key manager? Sounds like a disaster. That's one of the biggest drawbacks I see to this particular approach. You have to worry about their machine being compromised, as well as their browser. You don't want adversaries siphoning their credentials and being able to muck with your key manager. So finally, once you do all that, you can copy the key and send it off.
And then you have the other issues that we talked about already. And I talk a little bit about key wrapping here: on top of everything else, I have to ask my users to please wrap the key before sending it. We may be able to automate that, but it could be another stumbling block. So the last one I wanna talk about is bring your own key manager. This is what I used to call the pull model at previous summits. The flow of events looks similar to what we've seen before: the user says, create me a VM, and passes the image ID to Horizon; it goes down to the compute controller, runs through the scheduling, and gets to the compute node. So now we're at the compute node. It says: okay, I need to create a new VM with an encrypted ephemeral disk. The first thing it does is go to some centralized policy holder and ask: what's the policy for this particular project? Basically, what type of key manager should be used, and where does it live? Those are the two basic properties it needs from that policy. Is it gonna use a Barbican key manager, or a KMIP key manager? And does it live at, like, jhuapl.edu, or someplace else? It gets that policy, retrieves the key manager type and location, and then makes a request to that key manager. So at this point, we've kind of flipped the cloud a little bit. From the cloud provider's point of view, when they make a call to that key manager, they don't know where it lives. It's likely to live on site, on your premises, so you can monitor it, but it could live wherever. So they make that request to create the key, retrieve it, and at that point it's the same as usual: set up dm-crypt, pass it the key, and copy the bytes for the image. Obviously, the biggest downside to this particular approach is that your key manager is likely gonna have internet access so your cloud provider can reach it. This is just a quick slide.
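A sketch of the per-project policy lookup just described, with hypothetical project IDs, endpoints, and field names. The compute node needs exactly the two properties from the policy: the key manager type and its location.

```python
# Hypothetical per-project policy: which key manager type, and where it lives.
KEY_MANAGER_POLICY = {
    "project-a": {"type": "barbican",
                  "endpoint": "https://barbican.provider.example:9311"},
    "project-b": {"type": "kmip",
                  "endpoint": "kmip.customer.example:5696"},
}

def key_manager_for(project_id):
    policy = KEY_MANAGER_POLICY[project_id]
    if policy["type"] == "barbican":
        # Here we would build a Castellan/Barbican client for the endpoint.
        return ("barbican", policy["endpoint"])
    if policy["type"] == "kmip":
        # Here we would build a PyKMIP client over TLS to the endpoint.
        return ("kmip", policy["endpoint"])
    raise ValueError("unknown key manager type: %s" % policy["type"])
```

Note that for "project-b" the endpoint points outside the provider's network, which is exactly the "flipped cloud" situation: the compute node reaches out to the customer's own key manager.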
I know some people like to take pictures. For current key managers, we have Barbican, and I wanna give a shout-out to the Barbican community; I think Doug's out here somewhere. They have one implementation, and we're working on a KMIP implementation that's in progress at the moment. I also wanna give a shout-out to Peter Hamilton, who works on PyKMIP. We now have a PyKMIP server in its latest release, and that gives a reference point for what needs to be in that policy. So the issues: we talked a little bit about the key manager being exposed to the internet; we'll have another quick slide on that. Credentials management, again, for the key manager: how is your cloud provider gonna have the right credentials to be able to create or retrieve keys from your particular key manager? That's an issue you're gonna have to work through. Another consequence is that your provider's services are now dependent on your customer's key manager. Basically, this means that if you wanna do an operation that needs to retrieve a key, like a migration, and your customer's key manager is down because they're doing maintenance, then you're not gonna be able to satisfy that operation. That's something you as a cloud provider should be aware of, so that when you do service level agreements, you can work through it, and you'll probably need some recovery steps in case your customer doesn't have their key manager available. And just like bring your own key, it's more work for your customer: again, you've got to set up a key manager, monitor it, audit it. Okay, so I talked a little bit about the key manager being exposed to the internet.
Obviously, the thing that comes to mind is that my adversaries now have an open path to my key manager. So if you're gonna do this, you're gonna have to monitor it and audit it. I'd be worried about people stealing my keys. Now, granted, if they steal a key, they still need the encrypted data that goes with it to make any sense of it; more than likely they'll just go straight for your VM, but still, you don't want people stealing your keys. I think the more interesting cases are denial of service. If I do a denial of service on your key manager, you can't do operations with your cloud provider. And the really nasty thing would be if I get access to your key manager, say I somehow got credentials, and I just start erasing all your keys. Or I put ransomware on it. I don't promote ransomware, but that would seem like a good use case for it if you're into that kind of thing. Probably shouldn't say that. And then you have credentials management: how are credentials gonna be used by your cloud provider to access your key manager? If you're gonna use Barbican as your key manager, that should plug in pretty nicely, because you can just reuse your Keystone tokens. So I think that should plug in fairly quickly. The other option we mentioned is KMIP as a key manager. Part of KMIP means you have to set up a TLS tunnel between the client and server, so you're gonna need to do some certificate management there: basically, put root certificates in trust stores and all that. The nice part, at least from the provider's point of view, is that they could create one certificate and reuse it for all KMIP clients. We would like Keystone token support in the KMIP spec; it's currently not there. The spec does allow vendor extensions, though, so we could put it in there. That's one of the things we're talking about for PyKMIP: being able to support Keystone tokens in a future release.
And this is another proposal that kind of overlaps both OpenStack and KMIP; I just thought this was interesting. Earlier I touched on the fact that I really want proof of both the service and the user making the request. If we have token support within KMIP and we also have wrapped tokens, then clearly we could just use that, but that would probably take a little bit of time to implement. In the meantime, one thing we could do is still get evidence of both. What happens with KMIP is that the first thing you do is create a TLS tunnel between your client and server. So our compute node sends its certificate across, and that's what it uses to authenticate to your key manager. The default operation is that if you don't pass in additional credentials, that's what's used as your user. But in our case, what we can do is also pass in the Keystone token, if we can get token support, and now we fairly quickly have evidence of both the service and the user making the request. I thought that would be kind of cool; that's one thing I'd like to do with PyKMIP as well. This is a chart that basically outlines the three strategies I talked about. I'm not gonna go over it with you, but I'll give you my personal thoughts after going through this process. When I first went through it, I thought: bring your own key, that sounds awesome; I have the separation of my data and my keys. But there are some downsides. What stuck out to me is that provider key management actually works out pretty well, in my opinion. You don't have that client setup. You can still have separation of your data and your keys, because the keys will be in the provider's key manager versus wherever the data is stored. And you can still do things like separation of duties to make sure the same person isn't adminning both of those. And you have that point-to-point property.
So the service that needs the key makes a direct request, and you don't have some of the risk of passing a plaintext key around. You don't have your key manager exposed to the internet. And from my personal experience, I'll probably actually switch back to provider key management, because I use bring your own key for some data storage that I have. The thing that always concerns me is that if I get hit by a bus or something and I lose the encryption keys, and my wife can't decrypt the pictures of me and my kid, I'd better be dead, because she would kill me anyway. So that's one of the nice features of provider key management.

One last thought on this: if you were going to do bring your own key, it's going to require a community effort, because I'm seeing API changes at least to pass in keys. Getting consensus on how we should do that, and how we should safely put that key in the service's namespace, will be very challenging to do consistently across all the services, just based on personal experience from some of the security features that we've added. At this point, I can open it up to questions. The last thing I have is that I am relocating to the Tampa/Clearwater area, so if anybody has work down there or remote work opportunities, I'd be happy to chat. Sorry, self-promotion.

So can you go back to the table you had for a second? Yeah. So a lot of the discussion around the benefits of BYOK tended to be framed in terms of individual users making use of it; you referred back to the use case at APL. The sort of clouds where I'm looking at BYOK being used are where, for lack of a better term, large automated workloads are going to run. So the Facebooks of the world are going to use it, and they're going to manage their interaction with our cloud and a bunch of other clouds, and that's perhaps why they want to do BYOK.
And I think when you're talking about large orchestrated services making use of BYOK, some of the problems, like client setup and some of the key custodian stuff, actually go away. So I'm not challenging this table as such, but I think it's use-case dependent, and I think we need to capture more of that.

Yeah, I would totally agree. I mean, the risk of managing your own key manager, like you said: those organizations probably already have something set up, as well as the resources to set up automated processes. So I totally agree with that. Cool. Thanks. Yep.

What would be the impact of Keystone V3 and the way AWS works with ephemeral users, where the users don't actually exist inside Keystone? So there is no persistent storage about the user; all the authorizations, the tokens, everything is ephemeral.

In particular, what issues might that cause?

Or fix. I mean, if you can push the keys into the environment in an ephemeral way, like through a SAML assertion, or an encrypted SAML assertion, or an OIDC token of some kind, does that help get rid of some of the need for having your key server exposed over the internet?

I would say I'd be open to looking into that. I saw some of that on the mailing list, proposed as an alternative to passing in the key directly. I haven't seen the specifics on how they would actually implement it. I guess my question would be how the recipient would be able to decrypt whatever key you're passing in through those other means. So yeah, I'd be interested, but I can't really comment on it, since I don't know how it would be implemented.

So I'm curious why you've gone with that model. I mean, I think it's a slightly similar question, but you're unwrapping the passphrase at the Nova compute level.
I'm wondering why you didn't do it at the instance level, in a pre-boot step via dm-crypt, because then it could have been provider-agnostic: you could bring your own key server, and you're not really relying on the actual cloud infrastructure to do it. I mean, you don't have to trust the Nova compute node then, right?

So you're suggesting putting the encryption within the VM instance itself?

Yes, yeah.

You could do that, but you still have to trust your compute node, because the compute node can see everything. The hypervisor can read any memory it wants, so if your key is in there, it can find it.

Okay, so because you have to trust it slightly, you should trust it entirely?

Trust the compute node entirely?

Yeah. I mean, I don't know. I was just really unsure, because I would have liked it within the actual instance, where I don't necessarily have to let the compute node know my secret at all. Because I don't trust that it will just throw the key away. Obviously the code does that, but...

Yeah, so I'd be curious what threat you're trying to address. If you're trying to address a compromised compute node, I don't know what you can do, because again, it can read whatever memory it wants.

Okay, thank you.

I have just a quick question which more or less ties in with that, but from more of an end-user perspective. For example, there are many OpenStack clouds out there that don't support encryption at the storage level right now. Do you have any recommendations for how they could encrypt their instances? Because what most people do, I guess, like we do as well, is to encrypt at the operating-system level, directly in the VM. But you need key management there as well. Do you have any recommendations for that?

So that's one case we haven't looked too much into, just because if you're going to do it within the VM instance itself, that seems like it's more customer-focused.
So however you provision your VMs, you would basically have to give them access to a key manager and the key they need.

Okay, fair enough. Thank you.

Any other questions?

Yeah, I'll just jump in very quickly. So we've got two design sessions on this this week. One is in the security track, at 5:20 tomorrow, Wednesday, in room 408. And on Thursday there's one in the Barbican track at 3:10. So anyone who's interested and wants to ask more questions or get involved, those are good avenues to do that.

I guess that's it. Appreciate your time. Thank you.