All right. Good afternoon, everyone. My name is Joel Coffman, and I'm presenting this session, Encrypted Block Storage: Technical Walkthrough, along with Jarret Raim, so there are going to be a large number of technical details. I'm from the Johns Hopkins University Applied Physics Laboratory, and I led the design for the volume encryption feature that landed in Havana. Jarret is working on the Barbican project at Rackspace. Do you want to say anything in addition, Jarret, related to that? Sure. As I said, I'm the Cloud Security Product Manager at Rackspace, so my job is to build products that customers use to meet security needs in their infrastructure. We've been working on key management for a while, these guys have been looking at transparent encryption, and we've been trying to marry those two systems together to make sure that everything works as we get closer to release here. Yeah, it turns out that volume encryption doesn't work very well if you don't have good keys. So, just an overview of OpenStack data protection and why we're trying to do this work. Most of you probably already know most of this, but it's a useful high-level overview. Data is a valuable commodity for most enterprises. They want to protect their patents and all of the intellectual property they would be getting from their employees, so there's great interest in protecting that data against both accidental loss and unauthorized access. Anything the cloud provider can do to help with this is important, and anything the users themselves can do to ensure it is also critical. The major requirement is this: if we have the compute host at the far left of the image and the storage host at the far right, both circled, you're usually generating and processing all of the data on the compute host side, maybe running Project Savannah or something with Hadoop.
And so that data is then going to be stored elsewhere, in persistent volumes, so you have to secure both the data in transmission and the data at rest, by encrypting it. Typically in OpenStack the control network is assumed to be secure, but in practice this isn't guaranteed, since VLAN isolation isn't perfect. And if you're looking at cloud bursting or some kind of hybrid cloud situation, you may have instances where you don't trust all of your storage equally. So if you can encrypt everything, that's simply a big win from the user's perspective. Encryption also protects in cases where data sanitization may not be perfect, such as a disgruntled employee walking out of the data center with some disks in their back pocket. So why are we looking at encryption for all of this data? Pretty simply, it's good practice: the principle of least privilege. If the storage host doesn't need to be able to interpret the data you're storing on it, then it probably shouldn't be able to, and then you're protected against any kind of attack that comes from the storage host directly. At the Grizzly Summit, Justin Kirkling gave a presentation in which he pointed out that of 480 privacy breaches reported to the Department of Health and Human Services from 2009 through 2012, 72% of those would have been trivially protected against by encrypting data at rest. So encryption protects against a large number of data breaches, and it's required in a large number of cases for regulatory compliance, such as HIPAA, which stipulates that if information could be accessed by a third party, it first be rendered indecipherable. If you have a good encryption scheme and you don't compromise the key, the data will be indecipherable to someone who gets access to the underlying disk itself.
Cyber defense is also a growing concern for a lot of industries, and the SANS Critical Security Controls explicitly call out data loss prevention and using encryption to protect against it. So when we're looking at data protection via encryption, going back to what I started with previously, we want to protect against both accidental loss and unauthorized access: if the data is lost accidentally for any reason, it has already been rendered indecipherable and unusable to whoever gained access to it. We also want to make sure this is managed with very minimal user action, since if the user can screw it up, invariably they will. If we can avoid the user having direct control over setting up all of that encryption or handling the key material themselves, then we eliminate a great host of problems that would otherwise plague efforts at regulatory compliance and the like. The specific threats that our work protects against are lost disks; stolen disks; reused disk sectors, the multi-tenant problem where sectors allocated to one tenant are freed when the volume is removed and then allocated to another tenant, and if you don't first securely delete the data the first tenant left behind, there's a potential for data leakage across tenants; interception of iSCSI commands or anything else on the control network, which would allow an attacker to rebuild the disk, since those iSCSI commands are not encrypted by default; and a compromised storage host, or some kind of insider with direct access to the storage hosts. So, a very high-level overview of what our work does. First of all, here are regular, unencrypted volumes in practice.
We have data on the compute host at the far left-hand side of the slide, and then that same data on the storage host at the far right-hand side of the slide. You can see that the data is unchanged. So what have we added in the Havana release? We've introduced an encryptor, represented by the Enigma machine there in the middle. As the data goes across the network, it passes through the encryptor as it's being written to the volume, and what is actually stored on the storage host side appears to be random noise, since it is fully encrypted; assuming the encryption scheme is good, it will just look like random noise on the other end. The major features we want to highlight: it's a very simple end-user interface. I'll go through a screen capture later that we recorded before I came to the summit to show all of this in action, but it is a relatively simple end-user interface with very little chance for users to screw it up, and it is much simpler to configure than any kind of full-disk encryption scheme. Technically, everything we're presenting was feasible in OpenStack previously, but it would have required a lot of manual user interaction to set up, and of course, if the user had forgotten to do this at some point and accidentally wrote medical health information or something without having that full-disk encryption in place, that would be a regulatory compliance problem. So we're eliminating those classes of errors. We have cryptographically strong volume encryption: we support any cipher and mode that's supported directly by the kernel, some of which are listed on the slide, such as XTS mode or cipher block chaining, and a variety of key sizes.
And if there is any kind of hardware acceleration support, such as the AES-NI extensions, supported by the processor and you've loaded the appropriate kernel modules, then we inherit that directly. Because of the way we designed this, encrypting the data on the compute host itself before it is sent across the wire to the storage host, we are also securing data in transit, which eliminates the need for any separate mechanism to protect the iSCSI transmissions. We also support existing Cinder features such as snapshots and clones, we can boot from encrypted volumes, and we support a choice of key managers to handle the encryption keys. The current default key manager, before I turn it over to Jarret to give a little more detail on Barbican, is a very simple configuration-based key manager. By itself this is insecure, but it is a proof of concept that we have as a stand-in while we're waiting for Barbican to be fully incubated by OpenStack. What we have currently in Havana is support for a single shared key: it is static, it is fixed, and it is used by all volumes across all tenants. So this is not going to provide you the most secure end-to-end solution in its current form, but what we do have is a very flexible key manager API abstraction that you could override to provide support for Barbican or any kind of third-party key management appliance. Jarret can provide additional information about this. So we just wanted to touch base on where Barbican is. We released our 1.0 release for Havana, and it supports a REST API for Barbican in the standard OpenStack style: multi-tenant, REST, JSON. It supports using HSMs on the back end, and obviously you can have as many keys as you want.
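As a rough illustration of the configuration-based scheme just described, a single-fixed-key manager can be sketched like this. The class and method names are approximations for illustration, not the actual Nova code:

```python
# Illustrative sketch only: a configuration-based key manager in the
# spirit of Havana's simple default. One fixed key, read from the config
# file, is returned for every volume of every tenant, which is exactly
# why this is a proof of concept rather than a secure solution.

SHARED_KEY_ID = "00000000-0000-0000-0000-000000000000"

class ConfKeyManager:
    """Returns the same statically configured key for every request."""

    def __init__(self, fixed_key_hex):
        self._fixed_key = bytes.fromhex(fixed_key_hex)

    def create_key(self, context):
        # No new key material is ever generated; every "create" just
        # hands back the ID of the one shared key.
        return SHARED_KEY_ID

    def get_key(self, context, key_id):
        return self._fixed_key

    def delete_key(self, context, key_id):
        # Deleting the shared key would break every volume, so do nothing.
        pass

# Every tenant and every volume ends up with the identical key:
km = ConfKeyManager("00" * 32)  # hypothetical 256-bit fixed key from nova.conf
key_a = km.get_key(None, km.create_key(None))
key_b = km.get_key(None, km.create_key(None))
```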
We do a key wrapping process that allows us to store basically an unlimited number of keys per tenant; each tenant has a key encryption key that's stored inside the HSM, and that allows you to have as many keys as you want. There's no real limitation other than disk space, and thankfully keys aren't particularly large. That's all out on GitHub now, so you can go play with it. The work of integrating these two systems is just beginning; we were all racing to get things done for Havana, so that's next on our list. Barbican provides python-barbicanclient, a pip-installable library that you can pull in to talk to the API without having to write all the REST calls yourself. Next up for this piece is basically to say: right now we have a static key that works for everybody, and now we can switch to having independent keys for each volume and each tenant, all tracked separately and, obviously, access-controlled. In addition, Barbican then takes over all of the ugly pieces of key management, including auditing, logging, and all the other things that need to happen to have a secure key infrastructure, so the encryption piece doesn't have to worry about that, right? For your back ends under the covers, Rackspace happens to be using SafeNet Lunas, but anything that speaks PKCS#11 is fine. We're also looking at supporting KMIP and some other options, so Barbican basically becomes a nice abstraction layer: the encryption happens inside the code these guys have written, it talks down to Barbican, and Barbican talks to your back-end infrastructure. All of that should be out on GitHub; feel free to hop on there and ask questions, and I'll be around if you need me.
So what we've done is implement another interface abstraction layer between Barbican's REST API, via the python-barbicanclient library that Jarret's team has been working on, and our code, and it's relatively simple: if you can provide hooks for the creation, deletion, storage, and retrieval of keys, that is sufficient to interoperate with our volume encryption feature. To give you an idea of how difficult this is, our current implementation integrating with Barbican is about 120 lines of code to implement these four functions using python-barbicanclient. So if you wanted to deploy this in some kind of production environment with a third-party key management system, even beyond what we've discussed here with Barbican, it should be a relatively straightforward process. The next sections of the presentation will first walk through a series of slides that show all of the different steps of volume encryption, then I'll provide more technical detail on those steps, and then, because you may have missed it the first two times we go through the steps, we'll show screen captures of how you would actually use this in practice, end to end. So I'm basically going to show you three different views of the same series of steps, going into additional detail each time. If you want to use an encrypted volume, the first thing you do is create that encrypted volume using what we're terming an encrypted volume type; I'll provide more detail on the following slides about what an encrypted volume type is. The next thing you do, assuming we're going to attach this volume to a running instance, is make a Nova API call: you contact Nova and tell it to perform the attachment.
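The four hooks just mentioned (create, delete, store, retrieve) can be sketched as a minimal interface. This is an illustration of the shape of the abstraction under stated assumptions; the names and signatures are not the exact Nova key manager API:

```python
# Sketch of the key manager abstraction: any back end (Barbican, a
# third-party appliance, ...) only needs to supply these four hooks.
# Names and signatures are illustrative, not the exact Nova interface.
import abc
import os
import uuid

class KeyManager(abc.ABC):
    @abc.abstractmethod
    def create_key(self, context, length=256):
        """Ask the back end to generate a new key; return its ID."""

    @abc.abstractmethod
    def store_key(self, context, key):
        """Store existing key material; return its ID."""

    @abc.abstractmethod
    def get_key(self, context, key_id):
        """Retrieve key material by ID."""

    @abc.abstractmethod
    def delete_key(self, context, key_id):
        """Destroy the key material."""

class InMemoryKeyManager(KeyManager):
    """Toy back end for testing; a real deployment would call Barbican."""

    def __init__(self):
        self._store = {}

    def create_key(self, context, length=256):
        return self.store_key(context, os.urandom(length // 8))

    def store_key(self, context, key):
        key_id = str(uuid.uuid4())
        self._store[key_id] = key
        return key_id

    def get_key(self, context, key_id):
        return self._store[key_id]

    def delete_key(self, context, key_id):
        del self._store[key_id]
```

A Barbican-backed implementation would keep the same four methods and simply translate each one into a python-barbicanclient call.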
Nova then knows to go out and make a Cinder API call to retrieve the connection information for this volume. Once we have that connection information, iscsiadm is called to create a SCSI device within the virtualization host, represented there at the lower left-hand corner, pointed to by the arrow. Nova then goes out to the key manager appliance, such as Barbican, to retrieve the encryption key for the volume. Once we have that encryption key, we pass it to dm-crypt, which handles the transparent encryption for us: everything going to that volume is transparently encrypted and decrypted, and we pass the resulting device handle up to the running virtual machine. More details on encrypted volume types: they are essentially an extension of Cinder's existing volume types. What we're saying is that we need some additional metadata, namely the cipher, the key size, the provider, and the control location, in order to know how to use a volume type with encryption. For the cipher, we're looking for the name of the cipher and the mode of operation; we currently use anything that is supported by cryptsetup, but that's not a hard limitation, since you could supply additional providers (I'll give more details on that momentarily) that allow different definitions. I have two on the slide: AES in cipher block chaining mode with ESSIV (encrypted salt-sector initialization vectors), and AES in XTS mode, which is what NIST recommends for full-disk encryption solutions. The key size is simply the size in bits of the encryption key, and the encryption provider is a class which defines hooks for how to attach and detach an encrypted volume; I'll get into more detail on that momentarily.
And then there's also the control location, which identifies the service responsible for performing the encryption. Our implementation has all of the encryption being done by Nova, on the front-end side. However, there are network-attached storage devices that already have some support for back-end encryption, so although we haven't implemented this yet, there is a hook there if we want to expose that capability down the road. So once we have an encrypted volume type, how do we create an encrypted volume? The process is actually unchanged from creating a regular volume: all you do is specify an encrypted volume type. That's the only change. When you use an encrypted volume type, the encryption key will be automatically generated by the key manager: we simply send off a request to Barbican, for example, saying we need you to create a key, and once you create that key, pass back the key UUID so that we can identify that key in the future. If you look at our block device down there at the bottom left, you can see that it has a sort of transparent key, because it's merely a pointer to the actual key. The actual key is managed exclusively by the key management appliance, so if you manage to attack the Cinder service, you're not going to be able to access the key directly; you'd still have to attack the key management appliance and get past all of its security mechanisms in order to decrypt the volume. Once we've created the volume, the next step is typically to attach that volume to a running instance. The first thing that happens is that you make an API call to Nova, and Nova then asks Cinder for the connection information.
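Putting those four pieces together, the extra metadata attached to an encrypted volume type looks roughly like this. The field names are approximations of what is described in the talk, not necessarily the exact Cinder schema:

```python
# Rough picture of the metadata that turns an ordinary Cinder volume
# type into an encrypted one. Values match the talk's examples; the
# exact field names in Cinder may differ.
encryption_spec = {
    "cipher": "aes-xts-plain64",      # cipher name plus mode of operation
    "key_size": 512,                  # in bits; XTS splits this into two keys
    "provider": "nova.volume.encryptors.luks.LuksEncryptor",
    "control_location": "front-end",  # encryption is done on the compute host
}
```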
The only change we have made to this part of the process is that we've added an encrypted flag, which is true if the volume is encrypted and false otherwise. If it is false, then we add no additional overhead to the existing process of attaching a volume, in particular no additional API calls to Cinder. However, if the volume is encrypted, then Nova knows to make a callback to Cinder to retrieve all of the encryption metadata: the encryption key UUID, the cipher, the key size, the provider, and the control location, everything we mentioned previously. The compute manager then passes all of this information down to the compute driver. All of our hooks are currently within libvirt, so this is supported with libvirt today. We have an abstraction known as a volume encryptor, which specifies how to actually attach the device. As I said previously, these are hooks that run before we attach the device to the running instance and after we detach it, which allows you to do additional setup actions before the device is actually attached and made visible to the user's virtual machine. We have two implementations defined in the existing code. There is a cryptsetup encryptor, which uses raw cryptsetup and inherits everything provided by default by cryptsetup. There are essentially two things to remember about it. First, you have to provide all of the encryption metadata each time you attach the volume, which is why Nova has to be able to retrieve all of that from Cinder each time you attach it.
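The attach path just described can be condensed into a runnable sketch with stub services. Every name below stands in for the real Nova, Cinder, iscsiadm, and key manager calls; none of the signatures are the actual APIs:

```python
# Illustrative attach flow: get connection info, create the SCSI device,
# and only for encrypted volumes make the extra Cinder callback for the
# encryption metadata, fetch the key, and interpose dm-crypt.

def attach_volume(cinder, key_manager, volume_id, log):
    conn = cinder["initialize_connection"](volume_id)        # Cinder API call
    device = "/dev/sdb"                                      # created by iscsiadm
    if conn.get("encrypted"):
        meta = cinder["get_encryption_metadata"](volume_id)  # extra callback
        key = key_manager[meta["encryption_key_id"]]         # e.g. from Barbican
        log.append(("dm-crypt", meta["cipher"], key is not None))
        device = "/dev/mapper/crypt-" + volume_id            # mapped device
    log.append(("attach", device))                           # handed to the guest
    return device

# Stub services standing in for the real ones:
cinder = {
    "initialize_connection": lambda vid: {"encrypted": True},
    "get_encryption_metadata": lambda vid: {
        "encryption_key_id": "key-1",
        "cipher": "aes-xts-plain64",
    },
}
key_manager = {"key-1": b"\x00" * 32}
log = []
device = attach_volume(cinder, key_manager, "vol-1234", log)
```

Note that the unencrypted path skips both the metadata callback and the key fetch, which is exactly the "no extra Cinder API calls" behavior described above.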
Raw cryptsetup also supports only a single key per volume, which is potentially problematic if you're ever interested in rekeying a volume, because rekeying really isn't supported: the only way to do it is to create a separate volume, transfer all of the data over, and then destroy the old volume, which is less than ideal. However, there is the LUKS extension, the Linux Unified Key Setup, which actually writes a standardized header to the front of the disk. It was designed for full-disk encryption, and once that header is written, it contains all of the encryption metadata used to access the volume in the future, so the information stored within Cinder would be entirely redundant and we wouldn't need it when we attach the device after the first time. It also gives us a cryptographic hierarchy so that we can actually change the volume encryption keys. The way this works is that when the volume is first initialized, or first formatted, a random master key is automatically generated for the volume, and then that master key is encrypted with the key provided by the key management appliance. This cryptographic hierarchy gives us a lot of flexibility with regard to changing encryption keys: the master key is completely fixed, but the key used to decrypt that master key can be changed if, for some reason, you believe that the key manager itself or that key has been compromised. You can change the access key, and it's a constant-time operation to change the key used to access the volume. So, when we actually come to attaching an encrypted volume to a running instance, you can see at the top right-hand side of the slide the abstraction of the running instance.
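The LUKS-style key hierarchy just described can be illustrated with a toy sketch. Real LUKS uses PBKDF2 key derivation and an anti-forensic splitter; here the "wrapping" is just an HMAC-derived keystream XOR, purely to show that rekeying rewrites only the wrapped master key, never the bulk data:

```python
import hashlib
import hmac
import os

def wrap(kek, master_key):
    # Toy key wrapping: XOR against a keystream derived from the KEK.
    # Not real cryptographic key wrapping; illustration only.
    stream = hmac.new(kek, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(master_key, stream))

def unwrap(kek, wrapped):
    return wrap(kek, wrapped)  # XOR is its own inverse

master = os.urandom(32)         # fixed for the life of the volume
kek_old = os.urandom(32)        # key held by the key manager
header = wrap(kek_old, master)  # what the on-disk header conceptually stores

# Rekey: constant time with respect to volume size, because only the
# wrapped copy of the master key changes; the encrypted data stays put.
kek_new = os.urandom(32)
header = wrap(kek_new, unwrap(kek_old, header))
```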
The first thing that happens is that we use open-iscsi, via iscsiadm, to create a SCSI device on the compute host: /dev/sdb, for example, shown there at the bottom right-hand part of the slide. The next thing done for us by iscsiadm is creating a symlink to the device. So here we have our symlink, which points to the actual physical device that is using iSCSI to communicate with the storage host itself. The next thing we do, which is what we've added, is create a volume encryptor to actually set up the encryption. Again, we have our Enigma machine, so that everything read from the volume is transparently decrypted and everything written to it is transparently encrypted, and we're using cryptsetup to set up the dm-crypt mapping. Once we have that, we repoint the original symlink to go through the device-mapper infrastructure we've created, and then we pass that original symlink back up to the running instance. You can think of everything I'm describing here as a very clever way to change some symlinks behind the scenes in order to guarantee that the user can't screw this up in any possible fashion. The one thing I want to point out is that, down to the exact file name passed to the running instance, we haven't changed anything; we're just doing some clever symlink changes on the back end, so the running instance would never be aware that the volume underneath it is being transparently encrypted. The next series of slides shows some of this in action, and hopefully it will not be too hard to read from the back of the room. First, creation of a volume type, and I'll be explaining as we go through the slides.
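The symlink shuffle just described can be sketched with plain files in a temp directory instead of /dev nodes; the path names below are made up for illustration:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
raw_dev = os.path.join(workdir, "sdb")        # stands in for /dev/sdb
dm_dev = os.path.join(workdir, "crypt-sdb")   # stands in for /dev/mapper/...
link = os.path.join(workdir, "volume-1234")   # the path handed to the guest

open(raw_dev, "w").close()
open(dm_dev, "w").close()

# 1. iscsiadm leaves a symlink pointing at the raw iSCSI device:
os.symlink(raw_dev, link)
# 2. the volume encryptor sets up a dm-crypt mapping on top of it, then
# 3. repoints the very same symlink through the device-mapper target:
os.remove(link)
os.symlink(dm_dev, link)

# The guest-visible name never changed, but reads and writes now pass
# through the transparent encryption layer.
```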
The first thing we do is log in using the administrative account. This is running in DevStack, and we're going to create a new volume type, which we'll call LUKS, for the Linux Unified Key Setup. Momentarily, on the next slide, we'll make this an encrypted volume type. That was done in Horizon; here, what we have is the command-line client. One thing we'd like to do as part of next steps is make this part of the process accessible in Horizon as well. First we list all of the encryption types, and as you can see, nothing is printed back. Then we make the type we just created an encrypted volume type: cinder encryption-type-create, with the cipher aes-xts-plain64 and a key size of 512 bits. Due to an oddity of the existing implementation of XTS mode in cryptsetup, we actually have to double the key size, so this gives us a 256-bit key. We stipulate that the control location is the front end, say that the volume type name is LUKS, corresponding to the volume type we created just a moment ago, and provide the encryption provider, in this case nova.volume.encryptors.luks.LuksEncryptor, which is predefined and already in the Havana implementation of OpenStack. And then at the very bottom, we simply echo back everything we just created as confirmation that this has been completed. Now that we've created the encrypted volume type, the next thing is to actually create encrypted volumes. This time we log in as a normal user; again, this is being run within DevStack, so we're using the demo user account, logging in and creating a series of volumes. The first one, just for illustrative purposes, is an unencrypted volume.
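The key-size doubling quirk just mentioned can be captured in a tiny helper (illustrative, not a real cryptsetup or Cinder function): XTS uses two keys of equal length, one for the block cipher and one for the tweak, so cryptsetup expects the combined size.

```python
def effective_aes_bits(requested_bits, mode):
    """Illustrative: AES key strength implied by a cryptsetup key size."""
    if mode.startswith("xts"):
        # XTS splits the supplied key into a data key and a tweak key.
        return requested_bits // 2
    return requested_bits

# Asking for 512 bits in aes-xts-plain64 yields AES-256:
assert effective_aes_bits(512, "xts-plain64") == 256
# Cipher-block-chaining modes use the key size as given:
assert effective_aes_bits(256, "cbc-essiv:sha256") == 256
```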
You probably can't see it there at the very top of the slide, but the name of the volume is simply "unencrypted volume." Once we've created that one, we create another volume, this one encrypted; we'll name it the LUKS volume, and it will use the LUKS encrypted volume type we just created a moment ago. The final thing we do after the LUKS volume is created is to create a third one using the raw cryptsetup type, which I didn't show you just a second ago, though I had previously created that encrypted volume type as well. So we go through the process of creating all three of those, and the only difference between creating an unencrypted volume and creating the two encrypted volumes is changing the volume type to LUKS or cryptsetup. After we create all of the volumes, the next phase is actually attaching those volumes. This is completely unchanged from what happened previously with unencrypted volumes; all of it is handled transparently on the back end within Nova itself. One other thing I will point out is that we're attaching these different volumes to /dev/vdb, /dev/vdc, and /dev/vdd, and when we write information to the disks momentarily, you'll be able to see the differences as we go through those. The next thing we want to do is actually write information to those volumes. We had attached those volumes, in the screen capture, to a CirrOS instance we had already stood up within DevStack. So here we log into CirrOS and list all of those volumes; all of the devices are listed there, just to confirm that we had attached all of them correctly.
And then we drop into a root shell so that we can write strings directly to the back-end devices. Now, a normal user probably wouldn't do it this way, but it makes it easier, on the next slide, to show what was written to the back-end volumes if we write a standard string that we control. Here, all we're doing is writing "hello world" followed by the name of the device we're writing to: /dev/vdb, /dev/vdc, which is our LUKS-encrypted volume, and /dev/vdd, which is the cryptsetup-encrypted volume. That's how the user would traditionally interact with these volumes: they attach them, they write to them, they read from them, and all of that is completely transparent. As I said previously, apart from the volume type, the user is unaware that they're writing to an encrypted volume. If you actually want to see what is being written on the back end, though, we do have a short screen capture here, which I'm showing merely to illustrate the differences between the volume types we were using: in effect, what would be visible in the raw data if you were able to compromise a storage host, for example. So I'll start by playing this video. The first thing I do, since this is running in DevStack, is list all of the volumes: there's a cryptsetup volume, an unencrypted volume, and a LUKS volume. And I'm going to inspect each of those so we can actually see the information being written to them. I'll pause the video here. For those in the back, it's probably very difficult to see, but the bottom half of this slide, when we view it in less, is just a bunch of null characters, since we hadn't written to that part of the volume.
And the first line is the string we wrote directly, "hello world /dev/vdb": that is our unencrypted volume and what gets written to it. The next thing we see is the LUKS volume; again, I'm going directly into the backing file so we can see the information that's been written. Oops, sorry that it jumped on me. So, as I said previously, there is a standard LUKS header that gets written to the volume when you use the LUKS extension to cryptsetup. The important thing to take away is that there's a lot of binary data at the top; the things you can see in ASCII text are that it starts with LUKS, then there's aes, xts-plain64, and then a lot more binary data. If we bothered to page through all of this, after a few megabytes of header information you would actually see the encrypted bytes that have been written; but since most of the header information is binary blobs, it's pretty difficult to pick out what that means for the encrypted data itself. So instead, the last thing I'll show is what the raw cryptsetup volume looks like. Here we print out all of the information in the cryptsetup volume, and it looks very similar, in fact, to the unencrypted volume: you can see a bunch of null characters there at the bottom of the slide, but the first line, instead of saying "hello world" followed by the device name, is unprintable ASCII, since it's just random-looking binary data. So if you manage to compromise any of these encrypted volumes, all you see is indecipherable information, as I said previously. Now, a few next steps we're looking at in terms of furthering this work beyond what we managed to get into the Havana release.
We've been unable to land backup support for encrypted Cinder volumes so far. We had a patch, but the Cinder core team is making changes to how they want to store encryption metadata, or any type of metadata, associated with backups, and late in the Havana development cycle they blocked the patch until they could sort out some of the other issues they were working on. We'll probably update our patch and resubmit it in the relatively near future. There are some enhancements that could be made to Horizon, such as creating encrypted volume types there, and adding indicators when you're using these volumes so that it's easy to determine whether a particular volume, or a particular volume type, is in fact encrypted. There's also a lot of configuration information that we may want to store per user or per tenant, so that, for example, all of the volumes that belong to a particular tenant or project are encrypted by default, or a user can say that all of the volumes they create automatically use AES-XTS with 256-bit keys. It would be ideal if we could provide all of that configuration via global configuration files. Another thing we're looking at working on in the near future is ephemeral storage encryption. What I showed in this session was encrypting information within Cinder volumes; however, there's also the ephemeral storage within Nova for running virtual machine instances, which exists for the lifetime of the instance and no longer, and it would be ideal if we could encrypt that information as well, so that if someone manages to break out of a compute host, or the hypervisor running on a compute host, that information would also be indecipherable. One of the tricky things here is that we have to support the wide variety of back ends that libvirt currently supports, such as LVM, raw, and QCOW2 formats.
So we're actively working on this and hope to have something in the Icehouse release. Quick summary: OpenStack previously lacked the ability to encrypt storage, and encryption does reduce vulnerability to many different sorts of attacks. The Havana release of OpenStack now includes support for encryption of Cinder volumes. It's relatively simple to configure, and it's completely transparent to the end user: once they choose an encrypted volume type, they don't have to worry about the keys, and they don't have to worry about any setup beyond that. We did benchmark this, and we saw that it was relatively low overhead, around 10% for most of the benchmarks that we ran. Of course, different benchmarks produced slightly different results. One thing I will point out: if you have ultra-fast disks such as SSDs instead of standard magnetic disks, you may notice overheads higher than 10%, simply because with a magnetic disk, the latency of interacting with the spinning platter hides some of the latency introduced by the encryption itself. We're currently using a very simple configuration-based key management scheme; however, we expect to have further integration with Barbican, or a cloud provider that wanted to deploy this could integrate it with an existing key management solution of their own, which would provide much more robust, enterprise-level capabilities. So with that, I'd be happy to answer any questions. So the question was whether there are any plans to make this work with non-iSCSI backends. Correct me if I'm wrong, but I think iSCSI is always used to communicate between Nova and Cinder. Not true? No. Okay. So we will support anything that uses iSCSI for sure. I think it won't matter in practice, because we essentially treat Cinder as a bucket for raw bits. There could be some complexities there; I haven't looked at it. But I think it should be agnostic.
Yeah, so there'd probably be some additional configuration information that we'd have to tweak for those. Quick question here: what is the lifetime of the key on the host? What happens to the key when the VM migrates? So, the question is how long the key persists on the host and what happens in the case of live migration. What actually happens is that we tear everything down as soon as the volume is detached as part of the live migration process: the dm-crypt managed mapping is removed, and any key material that existed on the host to support it disappears at that point. The next question, if I understood correctly, is about integration of keystone tokens with the encryption key retrieval. What we do currently is pass the keystone token directly to the key manager. So Barbican would receive that, and Barbican has its own authentication mechanism to verify that the token is valid and that we should release the encryption key. Correct, Jarrett? Yeah, so one of the challenges in this particular structure is giving the key manager enough context to decide whether or not the key should be released. For example, if Nova and Cinder are both using service credentials to talk to us, then they can access any tenant's information in that particular case, right? And so if you hit Nova with an attach call and the user's token is getting passed down to us, then we can do a little bit of extra verification; but otherwise it's just service tokens, and so they can kind of talk to each other. So yeah, these guys would just call us, the service token would authenticate for them, we use keystone for that as a backend, obviously, and then we return whatever information they asked for. Does that make sense? So the next question is what we do in case we want to clone a volume.
When cloning a volume, what ends up happening on the backend within Cinder is that we tell the key manager to create a copy of the current encryption key and return that new copy's UUID, and once we have that associated with the clone, that's all we need to do; we just pass it along exactly as we would for the original volume. So it's a unique copy of the original key? Correct. And if you're using a LUKS header, then you can just rotate, right? So if you wanted the volumes to have separate keys, that wouldn't be a particularly difficult thing to do, but only if you're using LUKS; if you're using plain dm-crypt, then you'd have to re-encrypt the whole volume. Correct. I think there's a question in the very back. The approximately 10% performance overhead, I assume that's an I/O performance degradation; if so, do you have any sense of the CPU overhead, with and without hardware assist? It was with I/O benchmarks. I can't remember what the CPU overhead was. I want to say that it was relatively low on the machines we were testing, but I can't remember; we did the tests several months back. Yeah, I mean, generally 3 to 5% is usually what you see. That's hit or miss, obviously, and then if you have AES-NI or any of those types of things, that can dramatically drop the load. Okay, if there are additional questions, I'd be happy to talk offline after the session. So thank you very much.
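To make the clone answer above concrete: copying a key gives the clone its own UUID referring to the same key bytes, so nothing needs to be re-encrypted. A toy in-memory sketch of that flow, in the spirit of the copy_key operation in Nova's key manager interface (this class and its method names are ours for illustration, not Barbican's implementation):

```python
# Toy in-memory key manager illustrating the volume-clone flow:
# copy_key stores the same key bytes under a fresh UUID, so the cloned
# volume gets its own key ID without any re-encryption.
import os
import uuid

class InMemoryKeyManager:
    def __init__(self):
        self._keys = {}

    def create_key(self, length=32):
        key_id = str(uuid.uuid4())
        self._keys[key_id] = os.urandom(length)
        return key_id

    def copy_key(self, key_id):
        """Store a copy of the key under a new UUID and return that UUID."""
        new_id = str(uuid.uuid4())
        self._keys[new_id] = self._keys[key_id]
        return new_id

    def get_key(self, key_id):
        return self._keys[key_id]

mgr = InMemoryKeyManager()
original = mgr.create_key()
clone = mgr.copy_key(original)   # the clone's own ID for the same key bytes
print(clone != original)                             # -> True
print(mgr.get_key(clone) == mgr.get_key(original))   # -> True
```

As the Q&A notes, with LUKS you could later rotate the clone onto a distinct master key via its key slots; with plain dm-crypt, separating the keys means re-encrypting the whole volume.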