Hey guys, just a couple of quick info notes. First of all, I know you've all heard this, but you can vote for the best speaker in the app. You just need to sign in, click on the speaker, and vote for him. Perhaps Nathaniel, I guess. Second, Chris reminded me that I should tell you there is a party tonight, and there are still some tickets left. It starts at 7:30 today, OK? And OK, it's almost time, so I give you Nathaniel and Securing Automated Decryption. Thank you very much. I'm going to do something out of the ordinary here. My seven-year-old son, I just got the text, scored his first basketball goal ever, and he's very excited. So I would like everyone to cheer for him, and I'm going to take a picture and send it to him. Ready? You guys are awesome. Thank you so much. OK, so my name is Nathaniel McCallum. I'm a principal engineer at Red Hat, and my talk is called Securing Automated Decryption. I do apologize ahead of time: all my slides are in the cloud, and apparently the data center that's hosting them has had a catastrophic power failure. All of their systems are down. But one thing that's really great about this data center is transparency. That's a virtue they really cherish. So they actually live stream their people fixing the problems, and we have the live stream right here. Oh, I guess the system is coming back up here, and we're waiting for a password, it looks like, of some kind. Wait a minute, we've got another one here. It looks like we have a huge problem in the data center where all the systems are requiring passwords, so they're running around frantically trying to decrypt all of these hard drives. Obviously, this is not a real data center; this is just my talk. But it's a great illustration of where we are right now. We have three stages, really. Yesterday, we were working on encryption standards. 
We've been working on technological standards like AES, but we've also been working on political standards, things like PCI DSS, which is, of course, mandatory for a whole bunch of people in the United States. So we've been trying to just make basic encryption work. Now we're starting to see it deployed at very wide scale, and we're figuring out that we have to automate this process. That's essentially where we are today: how do we actually automate this process? And we're going to go somewhere else after that, which is, once we've solved the automation problem, we're going to start to realize that we care about more than a simple yes or no. We actually want policy driving our automation. And this is the difference between automation and policy. With automation, you have an on or off switch: can I decrypt my data or not, yes or no? And it's usually limited to one situation. But in reality, once we have automation available everywhere, we're going to want various different levels of security, and we're going to want a dynamic policy that can adjust to the context we're actually in. So let's start with the first question. How do we automate? Anybody know? How do we automate decryption? What's that? You don't usually, right? So there is actually a pretty common technique, or at least it's the one everyone is building right now. And I don't know why this is not advancing. I apologize. Here we go. This remote is useless. OK, so the way we do it is we start off with a secret, shown here as a little dot. We wrap this secret with an encryption key to protect it, and then we wrap that key in a key encryption key. We do this so that if the outer key is compromised, we don't actually have to re-encrypt the data. 
We can just create a new outer key; as long as the inner key is secure, we can go about our business. This is the way most encryption you know works, including LUKS disk encryption. So when we want to decrypt, we use the outer key encryption key. This is the password you typically type at boot: when your system boots up, you type in the password, which gets you the key encryption key. Then internally, which you never see, it decrypts the encryption key and finally decrypts the data itself. So this is the standard password model. We have this secret on the inside, the encryption key, the key encryption key, and a password on the outside, and we share that password around to a bunch of people. This has a lot of limitations, of course. Imagine the person in the middle there decides they want to do something nasty with your data. They can, because there's no isolation between them; they're all sharing the same passwords. One way we can get around this is a sort of escrow model, and this is what everybody's building right now. Notice we've changed things: we no longer have a human-readable password; we're using a cryptographically strong random key. We take that key and store it in a remote system, and when we want to decrypt our data, we fetch the key back from the remote system. So does everybody agree this is a pretty simple model? OK. Well, the problem is we're not done yet. We still have to build a lot more technology to make this work. Because I don't know about you, but I don't particularly want to send my keys over the wire unencrypted, right? So we're not going to use HTTP to send it; we've got to first encrypt that channel. So we're done, right? We're at least safe now. Oh, we have to build more infrastructure. I see people nodding their heads. We have to do more, OK? 
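The wrap-the-key-in-another-key idea can be sketched very simply. This is a toy illustration, not the real LUKS machinery: XOR with a random same-length key stands in for a proper key-wrap cipher like AES-KW, purely to show why rotating the outer key never requires re-encrypting the data.

```python
import secrets

def xor_wrap(key: bytes, kek: bytes) -> bytes:
    # Toy stand-in for a real key-wrap cipher such as AES key wrap.
    return bytes(a ^ b for a, b in zip(key, kek))

data_key = secrets.token_bytes(32)   # inner key that actually encrypts the data
kek = secrets.token_bytes(32)        # outer key encryption key
wrapped = xor_wrap(data_key, kek)    # this wrapped blob is what gets stored

# Rotate the outer key: unwrap with the old KEK, rewrap with a new one.
# The inner data key, and therefore the encrypted data, is never touched.
new_kek = secrets.token_bytes(32)
wrapped = xor_wrap(xor_wrap(wrapped, kek), new_kek)

assert xor_wrap(wrapped, new_kek) == data_key
```

The same shape appears in LUKS: the passphrase-derived key only unwraps the volume master key, so passphrases can change without re-encrypting the disk.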
So yeah, we definitely have to make sure that the client can authenticate the escrow. Because when it creates that key and stores it in the escrow, how do we know we're actually giving the key to somebody we can trust? So at least now we've got encryption and we're authenticating the server. Are we done? Yes? No? No, we're not done. Of course, we also have to authenticate the client, because when the client is trying to fetch the key back, you can't just give every key to every person. You have to make sure that only the systems which created a key have access to that key. So now you're authenticating on both sides, and you typically have some encryption in the middle with TLS or GSS-API. So finally, we're secure, right? Good question. We actually need a third point of trust as well, because if we don't have some other place to trust, how do we know we can even do these authentications? In this case, if we're doing TLS, we're going to need some kind of root of trust, and if we're doing GSS-API, we're going to need a KDC. There are a few other models of this as well, but this should serve as a good example. So at last we've got the whole system built, until we have a data failure and all of those wonderful keys we've been storing in our escrow are now gone because we forgot to do backups. But we can't just do regular backups, because this is highly toxic data. We can't just stick it on a USB key and let somebody carry it around; we have to do backups that are secure. And so now we've designed this huge infrastructure just to build a standard escrow model. But yes, absolutely. That's not represented on the slide here, but if you want any kind of high availability, you have to do replication or redundancy as well. What I've depicted here is the simplest possible setup. 
But finally, once we get all of this built, we are finally secure from every class of attack. Except we have this Heartbleed problem. And Heartbleed resulted in the TLS channel itself being completely insecure. So even though we went to all these great lengths to build a system with a lot of great security properties, it ended up being totally insecure. To be fair, this can happen with any software. What we're going to propose is not necessarily an exception to this, but we will discover that there are some techniques we can apply. So let's look at the lessons we've learned. Presuming that TLS will protect the key transfer is dangerous. There are a lot of reasons this can go wrong, whether it's a vulnerability like Heartbleed, which is less common, or a simple error in generating or validating certificates, which is way more common than we may admit. It is very often the case that we think we have a secure channel to transfer keys across, and we don't. We also learned that complexity increases our attack surface. We had to build all of these different pieces of infrastructure to make the escrow model work, and the difficulty is that now every single thing we've built is a point of attack. This makes escrows difficult to deploy, because they require so many different moving pieces; we really can't automate the deployment of escrows because of all those moving pieces. And you also have a bootstrapping problem: how do you securely store the keys that are keeping everything else secure? So it makes the entire problem difficult. Lastly, X.509 certificates in particular are hard to get right. This is probably one of the most common causes of failure. So when we were attacking this problem, we decided to look at asymmetric cryptography. And now everyone brace yourself: I'm going to put math on the screen. 
This is the representation of an elliptic-curve Diffie-Hellman exchange. What we realized was that binding data to a third party is really not a key transfer problem; it's actually a key exchange problem. So if we look at a standard Diffie-Hellman, on the left side here we have the client, and on the right side we have the server. Two parties. You'll notice that this is a symmetric protocol: both parties do the same thing, mirrored. In the first step, we choose a random key. This is your private key; if you're thinking of public-private key crypto, in the top line we're generating a random private key. In the second step, we generate our public key: we take the private key, multiply it by the generator, and we get the public key. Then we exchange public keys: the client sends its key to the server, and the server sends its key to the client. And the last bit is the wonderful math of Diffie-Hellman. The client takes the server's public key and multiplies it by its own private key. Likewise, the server takes the client's public key and multiplies it by its own private key. The end result is that they both have K. Now notice that K never went across the wire. Nobody shared K; it came about as an artifact of this mathematical computation. So when we realized that the escrow problem is really a key exchange problem, we thought long and hard about how we could make this happen as a key exchange. And so Bob Relyea, also from Red Hat, and myself invented an algorithm called the McCallum-Relyea exchange. The way this works is very similar to the way Diffie-Hellman works, but we now have two halves. On the left is provisioning: this is when we encrypt our data, OK? And on the right side is recovery: this is when we want to do automated decryption. 
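The plain Diffie-Hellman steps above can be sketched numerically. This is a toy over a multiplicative group modulo a small prime, where "multiply by the generator" becomes modular exponentiation; real deployments use elliptic curves with standardized, vetted parameters.

```python
import secrets

# Toy finite-field Diffie-Hellman; parameters far too small for real use.
p = 2**61 - 1      # a small prime field
g = 2              # generator

# Step 1: each party picks a random private key.
client_priv = secrets.randbelow(p - 2) + 1
server_priv = secrets.randbelow(p - 2) + 1

# Step 2: each derives its public key from the generator.
client_pub = pow(g, client_priv, p)
server_pub = pow(g, server_priv, p)

# Step 3: exchange public keys; each side combines the other's public
# key with its own private key.
client_k = pow(server_pub, client_priv, p)
server_k = pow(client_pub, server_priv, p)

# Both sides arrive at the same K, and K itself never crossed the wire.
assert client_k == server_k
```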
So starting on the left, the first thing we do is generate a long-term key pair for the server, both a public and a private key, just like Diffie-Hellman. And the server shares its public key, again just like a normal Diffie-Hellman. Now, the server can share its key at this time, or it can publish it ahead of time. You can send that key by carrier pigeon if you really want to; it doesn't matter how the client gets the key. When the client wants to do its encryption, it has the public key of the server. It generates its own key pair and, just like Diffie-Hellman, performs its half of the exchange and calculates K. Now the client can use K to encrypt its data, but notice that the client has not sent anything to the server at this point. So if the client got S offline somehow, the entire thing can be provisioned offline; you can do the entire encryption without contacting the server during the encryption phase. When it's done with the encryption, we throw away K, the value we calculated here and used as our encryption key, and we throw away our own private key. And this is the magical bit: because the client now has neither K nor its own private key, it has no way to get K back again. The only party in this exchange that can successfully calculate the value of K is the server. That's the principle that makes this secure. We do retain, however, the public keys: the client retains its own public key, which it has not yet sent to the server, and it retains the server's public key. Now, when we go to decrypt, the client generates another key pair. We call this the ephemeral key pair. We take its public key and add it to the client's public key. So the client has two public keys now, the ephemeral one and the original client one. We add them together, and that sum is what we actually send to the server. 
Notice that the ephemeral key is different every single time we do a recovery. Every time we decrypt our data, we generate a new random ephemeral key, add it to the public key, and send that value to the server. The client? Yes, the client is generating it; there's a dividing line right down the middle here, and this side is the client. The client generates the ephemeral key when it wants to do its recovery. It adds the long-term public key C and the ephemeral public key together, and it sends that value to the server. Because E, the ephemeral key, is random on every decryption, what does the server see? Random data. It just sees a random key. There's no identifying information whatsoever; the client is completely anonymous. The server then takes that X value and performs its half of the Diffie-Hellman. The result is essentially the K value, but with another public key still mixed in. And the only party that can take that extra ephemeral key out is the client, because it's the only party that knows the value of E, which changes on every request. So the server gets a public key that's completely random on every request, multiplies in its own private value, doing the Diffie-Hellman, and sends back the result. This is an extremely fast operation. Just to give you an idea, the very first thing TLS does when establishing a secure channel is a Diffie-Hellman, so the very first thing it does is this same multiplication. This protocol is extremely lightweight, because on the server side the entire protocol is essentially just as fast as generating the public key for TLS, before TLS even brings up an encrypted channel. So the end result is that the client gets back Y from the server; it can do a Diffie-Hellman multiplication of the server's public key with its ephemeral private key, subtract the result from Y, and recover K. So the end result is that the client can calculate K again. 
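The whole exchange can be sketched in the same toy group as before. One caveat: the slides use elliptic-curve (additive) notation, so "add the public keys" and "subtract" here become multiplication and division in the multiplicative group. This is an illustrative sketch of the algebra, not the real Tang implementation.

```python
import secrets

# Toy McCallum-Relyea over a multiplicative group mod p (far too small
# for real use). Point addition/subtraction on the slides maps to
# multiplication/division by a modular inverse here.
p = 2**61 - 1
g = 2

def priv():
    return secrets.randbelow(p - 2) + 1

# --- Server: long-term key pair; only the public key S is published. ---
s = priv()
S = pow(g, s, p)

# --- Provisioning (can happen offline, given S) ---
c = priv()
C = pow(g, c, p)
K = pow(S, c, p)          # the encryption key: use it, then throw it away
K_check = K               # kept here only so the demo can verify itself
del c, K                  # client now holds only the public keys C and S

# --- Recovery ---
e = priv()                # fresh ephemeral key on every recovery
E = pow(g, e, p)
X = (C * E) % p           # what the server sees: indistinguishable from random

Y = pow(X, s, p)          # server's half: one stateless multiplication

# Client strips the ephemeral part: Y / S^e == C^s == K
K_recovered = (Y * pow(pow(S, e, p), -1, p)) % p
assert K_recovered == K_check
```

Note how the client recovers K without ever holding its own private key again, and the server never learns K or the client's identity.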
The server cannot, and this can only happen when the client and the server work together as a pair. If the client can't reach the server, the client can't get K. If the client can reach the server, the client can get K. The server never can, and the client is completely anonymous to the server. So, is anyone dead? Have I killed anyone? If we go back to our model, this is what the model using the McCallum-Relyea exchange looks like. We still have our secret in the middle, we still have our encryption key, and we still have our key encryption key, except our key encryption key is the K value derived from the McCallum-Relyea exchange, which goes in the clear over the network, because it's a key exchange that just exchanges public keys with the server, and we don't need any other infrastructure. Thank you. There is one alternate deployment option, which is that the server itself can use crypto hardware, like a TPM, or Atmel makes an elliptic curve chip where you can actually generate the keys in the hardware. They never leave the hardware. So even if an attacker is able to compromise the server, they only get temporary use of the keys while they have access to the server. If you discover the vulnerability, you can kick them out of the server by reformatting or whatever, bring the server back online, and the keys themselves are not compromised; they never leave the hardware. And we can do this with standard hardware, because the server's side of this whole exchange is just a standard Diffie-Hellman multiplication. So all of the hardware that's built to store TLS keys and that kind of thing works with this protocol out of the box. So let's go back now and compare this technique with the standard escrow technique. With an escrow, the server is required to be present during provisioning, because the client generates a key and then pushes that key into the escrow. In the McCallum-Relyea exchange, this is optional. 
You can request the public key at that time if you want to, or you can deliver the public key out of band if you build your infrastructure that way. So it's optional. For both, however, the server's presence is required during recovery. This is what we're actually trying to achieve: the server has to be present in order for you to decrypt your data. An escrow requires the server to have knowledge of the keys; it has to be the master of all domains and know everything. With the McCallum-Relyea exchange, the server knows nothing. It just sees random data. With an escrow, we of course have to transfer the keys back and forth between the hosts. With our system, we don't have to do that, because we're just doing a key exchange; there's no transfer of private keys. The escrow requires client authentication, as we talked about before: since the escrow holds all the keys, we have to make sure that not just anyone who requests a key can get it, so we have to do authentication and authorization. That's required in an escrow. With the McCallum-Relyea exchange, that's optional. You can add authentication if you want additional auditing or policy on top of it, but it is not something required by the protocol. Next, we are required to have transport encryption for an escrow, because we're transferring the keys across the wire. But the McCallum-Relyea exchange can be done in the clear, because it's just an exchange of public keys. Lastly, end-to-end encryption is very much desired in an escrow situation, because you don't know if the next hop is going to drop the ball on its transport encryption. Although this is very desirable, it's also very difficult to deploy. With the McCallum-Relyea exchange, you simply have no need for end-to-end encryption, because you don't need any transport encryption at all. So there's a server that implements this. It's called Tang. 
And it's available at the wonderful GitHub URL there. This implements the server-side daemon, and it is just a very simple C application that does a few REST GETs and POSTs of some JSON data in JOSE format. JOSE stands for JSON Object Signing and Encryption; it's a standard data format, we didn't invent it ourselves. The client just sends the key in JOSE format to the server; the server does its bit and sends the other public key back. It's extremely fast, it's extremely small, it has minimal dependencies, and it's available on Fedora 23 and later. Question? That's a great question. The question was, why would we implement it in an unsafe language like C? And the answer is, I would love to implement it in Rust, but for a lot of reasons that was not an option at the time. Maybe in the future we can do that, and if you'd like to do it, you are free to do so. If you don't know, well, we don't either. Under the covers, this is all OpenSSL. We're not tied to OpenSSL, because we're using a library for the JOSE stuff, and we're not doing our own crypto besides the one multiplication, which we're not hand-coding; we're just using OpenSSL's built-in multiplication function. So here's the incredibly complex process of installing a Tang server: you first install it, then you start it, then you generate two keys. Thank you. On the clients, however, we have to have some software. What's that? No. The question was, is it a typo that we're using a jose command instead of a tang command to generate the keys? The answer is no. We are using a library and CLI utility called José, which was also written by me. It's a fantastic library, if I do say so myself. It implements very handy command line utilities for generating keys, performing encryption, signing, and whatnot. It implements the RFCs, so it's all the standards-body stuff. 
We're not doing anything novel there besides implementing what was designed in the RFCs. So this is an external tool that Tang depends on. The tool will just be installed; it's a requirement of Tang, because we have a shell script internally that does some stuff with it. So if you install Tang, you get José in the bundle, and you just use it to generate the keys. No extra care needed. On the client, we need some software too, and this software is called Clevis, which also has a handy GitHub URL. Clevis is a decryption automation and policy framework, which is a fancy way of saying it does cool stuff. It has very minimal dependencies. It has early boot integration, so you can actually encrypt your root volume, reboot your system, and the system will automatically unlock as it comes back up. That currently only works for the root volume, because there's a bug in systemd; if there are any systemd developers in the room, which I know there are, please fix it for me. Thank you. And we also have GNOME integration, so you can do this with removable storage. You can use Clevis to bind a USB key that's been encrypted with LUKS to a Tang server, and when you insert it, a dialog will pop up saying, please decrypt me; but in the background, if it connects to the server, the dialog disappears and your data magically shows up. So this is available. We had a missing dependency on Fedora 23, but it is available on 24 and later. So here's a basic example of how we do encryption with Clevis using Tang. First we obviously have to install Clevis, and then we echo PT, which is our plaintext, and pipe it into clevis encrypt. The module we're going to use for the encryption is tang, so we specify that as the next argument. And then finally we specify a little JSON blob there, which is the configuration containing the URL of the Tang server. 
The end result is some output in JWE format, which again is that standard JOSE format for JSON Web Encryption, which is what JWE stands for. So the first thing we do is get the server's advertisement. You can, by the way, pass in the public key manually, and then you don't have to deal with trust on first use. But if you want ease of use, we have a model similar to SSH, where it will go out and request the key if you haven't provided it, and then ask you to trust the key. If you trust the key, you answer yes. The end result is a JWE file which contains our ciphertext. To decrypt it, you simply pipe that JWE into the clevis decrypt command; notice we did not specify a password, and the plaintext comes back out. But if we stop Tang and then run the same command again, we get nothing out and a non-zero return status. Yes? I'm not understanding. The question was, would I consider having a human-readable advertisement from the Tang server? It's JSON; do you consider that human readable? I don't. Oh, you mean the prompt? Yes, we can fix that. Patches welcome. Or you can tell me what to write and I'll write the patch. OK, talk to me afterwards and give me some suggestions for improvement. But the basic idea here is that, of course, you encrypt the data, it uses the Tang server; when you try to decrypt it, it uses the Tang server to do it automatically; and if you stop the Tang server, you can't get your data back. Yes? The question is, is it possible to basically tell the system what the advertisement is, and the answer is yes. As I said before, because we're using asymmetric cryptography, all of the provisioning can be done offline, which means that as long as you have the server key in some way, you can pass it into this command using one of the options, which is currently poorly documented, and it will not do this prompt. It will not even talk to the server at all. 
It will say, oh, I already have an advertisement; I will just encrypt using that. Now, if you encrypt to an advertisement that the server no longer has the private key for, you're very much out of luck, so at least be aware of that. The interesting thing here is that Clevis is actually not tied to Tang. As I said before, this is a framework, so we can do the same exact thing we did with Tang with an escrow, and in this case we're using Custodia, which is why there's no HTTPS here. We're pretending we're something like a container running on our system with access to Custodia, and we are requesting a key from Custodia over HTTP. The system first generates a random key, pushes it into Custodia, then encrypts the data using that key and outputs the JWE which you see there. Then we decrypt it: it fetches the key from Custodia again and decrypts the data. So we are not specifically tied to Tang in Clevis, and there's a reason why, which we're going to get to in a moment. Here's an example of how disk binding works. In this case, on the top we have clevis bind luks, and notice that the last two arguments look very similar; they are in fact the same arguments we passed to the clevis encrypt command on the previous slide. The only difference is that we now have a path to a block device as the first argument. In this case, we fetch the advertisement again, we answer yes, we want to trust the advertisement, and then we additionally have to enter the LUKS passphrase for /dev/sda1. When we do that, presuming we entered the correct passphrase, the disk itself will be bound. The way this works is that it first calculates the size of the LUKS master key, so that we don't lose any encryption strength. 
We generate a new cryptographically strong random key of that size and use Clevis to encrypt it; then, when we want to unlock the disk, we simply decrypt the key using Clevis, hand it to LUKS, and do the decryption from there. If this was successful, you see here another one of our utilities. Remember, on the previous slide we used José to generate our keys; here is another utility, called luksmeta. Luksmeta is a very small utility used to store and retrieve metadata from a LUKS header. If we do a show on /dev/sda1, you'll see that we have two active slots. The first one, slot zero, is active but empty; it has no metadata. The reason it's empty is that it's the password we just typed in to get access to the LUKS volume; it didn't have any additional Clevis metadata with it. The second one is the new cryptographically strong key we just generated, and next to it is the UUID which defines the type of the metadata in that slot, and that slot contains simply the JWE we generated as part of the clevis encrypt. The output of the clevis encrypt is just stored in the LUKS header. Now we have two ways to actually unlock this. So far we've done the binding process, which is the same no matter how you want to unlock, but we have multiple different ways to unlock. For instance, if you're doing root volume unlocking at boot, you install clevis-dracut, type dracut -f to regenerate your initramfs, and then you reboot; if you've done everything correctly, it should just come up automatically. On the other hand, if you're doing removable storage and you want to be able to unlock in GNOME, you just have to install clevis-udisks2, and you don't have to do anything else there. Well, you probably have to restart your GNOME session. So the question is, where is it getting the data from, right? 
When you reboot, the initramfs is going to see that systemd is asking for a LUKS password. It knows which device it's asking the password for, so it looks in the metadata to see if we have Clevis metadata. If we find Clevis metadata, then we attempt to use Clevis to decrypt that key. If we are successful, then we pass the key back as the answer to the prompt systemd is showing. So up to this point we've been talking about automation, and we want to move now to talk about policy; we're almost done, and I know we're running short on time. The way we need to define policy is by defining relationships between keys. Typically this has only been done in a hard-coded way in the past, where we say encrypt one thing with another thing and then wrap it with another thing. But we can actually use another algorithm, called Shamir's secret sharing. Shamir, by the way, is the S in RSA, for those of you who don't know. He invented this great algorithm called Shamir's secret sharing, and it's a thresholding algorithm. You can take a key and divide it up into a number of chunks, and you can define a threshold, which is the minimum number of chunks that have to be present in order to unlock. And this can be nested: you can break apart a key into multiple chunks and then break one of those chunks into multiple chunks again. One example of using this would be if you had a simple laptop but you wanted separate admin and user passwords. You want some kind of master recovery password for the administrators, but you don't want the users to have that password when they unlock their disk. This is essentially what LUKS already implements: it's an or relationship between the two, because we have a threshold of one. We've split the key into two, and if either one is provided, we can perform the unlock. We can also now make an automated laptop using this technique. 
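The thresholding idea can be sketched concretely. This is a toy Shamir's secret sharing over a small prime field, just to show the split-and-threshold mechanics; it is not the implementation Clevis uses.

```python
import random

# Toy Shamir's secret sharing over GF(P).
P = 2**127 - 1   # prime field modulus

def split(secret, n, t):
    """Split secret into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(a * pow(x, i, P) for i, a in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=3, t=2)        # "any two of three must be present"
assert combine(shares[:2]) == key    # any two shares unlock
assert combine(shares[1:]) == key
```

With t=1 you get the "or" relationship described above (any single share unlocks), and nesting splits gives you the policy trees that follow.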
So we have the admin password and the user password, but we've now added a Tang binding, and again, since our threshold is one, any of those will successfully unlock. We can also have something like a high-security system. Maybe it has some sort of bank codes, or, well, usually it's casinos that have the best security. In this case you would have multiple passwords, three in this example, and we would require two to unlock. So you would type in one, it would prompt you for another, and you'd have to type in a second one before it would unlock. But of course we can use Shamir to create sophisticated policy. In this case we have a master password encoded in a QR code and stored in a vault somewhere by an administrator, and if they want to pull the disk out of the laptop and recover the data, they can scan the QR code to get the data back and unlock, because it's a threshold of one. The only other way to unlock is if we go to the second branch. The second branch requires both items: there are two sub-keys and the threshold is two, so we require both, and this means we require the TPM in this case. So now this means the disk for the rest of the branch has to be in the laptop. If the disk is in the laptop, we have a set of four options. We can unlock with password, fingerprint, Tang, or Bluetooth, and we have to do two of those. So if you're sitting at your desk, you're on the corporate Wi-Fi, and you're near a Bluetooth beacon that supports the Tang protocol, then you've fulfilled your two, and notice you've done nothing: it was entirely automated. But let's say you move out to a conference room and you leave your laptop there, and hopefully we have the revocation call at this point and your system locks. You come back; well, you're not near the Bluetooth beacon anymore, but you are still on the corporate Wi-Fi, so in this case you can scan your fingerprint, which is very easy, and it lets you in. 
On the other hand, if you take the same laptop and walk down to the coffee shop, at that point you need to type your password and use your fingerprint in order to get in; you have to fulfill two out of those four.

And we have implemented Shamir's secret sharing with Tang. Notice that you can run clevis encrypt and, instead of passing tang or http as the plug-in, pass sss. We've specified two pins, both of which are Tang, pointing at two different servers, with a threshold of one. So this is a high-availability policy we've just created: we've said we're going to bind to both of those Tang servers, and if either one of them is available, we can unlock. In this case we're prompted for two advertisements, because we've just contacted two servers and gotten their public keys, but we still do our decryption. If we bring down server A, notice we still get our plaintext back; but if we also bring down server B, then we fail, because we've not met our policy when both servers are down. The neat thing is that we did not define this policy at compile time or development time. We defined this policy as an expression of our security interests.

Very quickly, we'll talk about the ecosystem. We have the jose library, which I mentioned before. There's a set of command-line utilities for doing signing and encryption; it's very easy to use, and it outputs JSON, so it's fairly readable, and it's web-powered, or whatever the buzzword is. We also have a dependency called luksmeta, which is the utility we use to store metadata in the LUKS header. You can use it for general purposes; it's not specific to Clevis or Tang. In the near future we would like to add a bunch of features. We're probably not going to get to all of them, but patches are definitely welcome. So if one of these features jumps out at you, or even if you have another feature in mind, let us know and we'd be glad to work with you.
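For concreteness, the high-availability binding can be expressed as a JSON config for the sss pin. This is a sketch assuming the config schema as I recall it for Clevis's sss pin (a threshold t plus a map of sub-pins); both server URLs are placeholders, not real hosts.

```python
import json

# Hedged sketch: a 1-of-2 threshold over two tang pins, so either Tang
# server alone is enough to unlock. URLs are made up for illustration.
config = {
    "t": 1,
    "pins": {
        "tang": [
            {"url": "http://tang1.example.com"},
            {"url": "http://tang2.example.com"},
        ]
    },
}

# The JSON would be handed to the sss pin on the command line, roughly:
#   clevis encrypt sss '<json>' < secret.txt > secret.jwe
print("clevis encrypt sss '%s'" % json.dumps(config))
```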
Any questions? This, by the way, is where the names come from. This is a very simple locking primitive: the U-shaped bit on the outside is called the clevis, and the block on the inside is called the tang.

The question was: does the client's key amount to private key material? And the answer is yes; only the client should have knowledge of it. It should be protected by whatever your system policy is. You're always going to have some private key material on the disk; there's no way to do binding without having some private key you must protect. It sits in clear text in the LUKS metadata, which is protected in that only root has access to that data. And the server does no decryption.

The question, if I understood it, was whether trust is required only the first time. Only the first time, yes. There is a mechanism for rotating keys as well, but we haven't implemented it on the client yet. The server will have a chain of advertisements: once you rotate a key, the new advertisement will be signed by the old keys.

The question was: do we have support for negotiation of the algorithm? And the answer is yes. In fact, we have several alternate algorithms in mind should we discover a problem. This is new crypto, obviously, so should we discover a problem, we have several other ways to do it, but we felt this was the best one to attack right now. Yes, we do have a negotiation mechanism.

Dmitry's question was: can I comment on other applications besides just disk encryption? And the answer is: anything that you want to encrypt and later decrypt in an automated way. It doesn't have to be disk encryption. If you want to encrypt a file, one of the ideas we had involves ext4, which, I don't know if you know, has filesystem-based encryption; that could be very useful for, say, encrypting just a single directory of a database.
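The chain-of-advertisements idea in that answer can be sketched as a trust-store update rule. This is an illustrative model only: HMAC stands in for the real JWS signatures, and every name and key below is made up.

```python
import hashlib
import hmac

# Model of rotation: after the server rotates, its new advertisement is
# signed with a key the client already trusts, so trust extends forward
# without repeating trust-on-first-use.
def sign(key, advert):
    """Stand-in signature: HMAC-SHA256 over the advertisement bytes."""
    return hmac.new(key, advert, hashlib.sha256).digest()

def accept_rotation(trusted, advert, signature, new_key):
    """Add new_key to the trust store only if a trusted key signed advert."""
    if any(hmac.compare_digest(sign(k, advert), signature) for k in trusted):
        trusted.add(new_key)
        return True
    return False
```

A client that accepted the original advertisement via trust-on-first-use can thus follow any number of rotations, each one vouched for by the previous keys.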
So that's a good example, but even if you just had a file that you wanted to keep secure, you can pipe it into Clevis to encrypt it and pipe it out to decrypt it. So it's very scriptable.

The question is: could the KDC make use of this technology? And the answer is yes. However, I would want to design a proper way of unlocking. It would be fantastic to have some kind of hook in systemd to say: decrypt this ext4 directory for this daemon before you run it, and do the decryption in a separate namespace so that nobody else has access to it except the actual PID of the process that was spawned. There's a lot of cool stuff like that, but we're still new; we haven't done it yet.

The question is: how old is the algorithm, and have we had peer reviews? The answer is yes, we've had some peer reviews. The algorithm is about a year and a half old, so it's fairly new. After this, I'm giving the same talk at FOSDEM next week, and after that I'm going to the University of Leuven, which I'm sure I've completely mispronounced. I'm going to give a talk there, and my goal is to get through it without the Rijndael authors decrying me for my sins against humanity. We're actively looking, by the way, for people who want to write a formal proof of the algorithm. So if that's your thing, please let me know.

The question is: is there any risk of man-in-the-middle during the initial negotiation? The answer is yes, and this is precisely why we have trust on first use. There is no risk during the recovery phase itself, because a man in the middle has nothing to gain in that exchange: it sees only random data, and without the ephemeral key it can't do anything with that data.
During the provisioning process, however, there is a risk of man-in-the-middle, which is precisely why the advertisement is signed. When we get the advertisement, we show a list of the key IDs of all of the signing keys listed in the advertisement. We verify that the advertisement was in fact signed by all of those keys, and we prompt you to trust them. It's your responsibility, essentially, to look and say: are these the keys I actually expect from this server? We are definitely open to ways of improving this process; we wanted to get closer to the SSH model than to, say, the X.509 root-of-trust model. Once you have accepted an advertisement, however, you've trusted those signing keys, and you can always request a new advertisement from the server signed with the old keys. So from that point on there is no man in the middle: once you have proven, in the first acceptance of the advertisement, that you trust those keys, from then on you can get an advertisement signed with a key you already trust. An admin is going to have to define that policy. If you have specific ideas, let me know.

Any other questions? Thank you all so much for coming today. Thank you.
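As a closing aside, the key-ID prompt described in that last answer could compute its IDs as JWK thumbprints (RFC 7638): a SHA-256 hash over a canonical JSON form of the key. A minimal sketch follows; the key values are dummies, and none of this is actual Tang or Clevis code.

```python
import base64
import hashlib
import json

# RFC 7638 JWK thumbprint: hash the required members of the key, serialized
# with sorted keys and no whitespace, then base64url-encode without padding.
def thumbprint(jwk):
    required = {"EC": ("crv", "kty", "x", "y"), "oct": ("k", "kty")}[jwk["kty"]]
    canonical = json.dumps({k: jwk[k] for k in required},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Dummy key material, purely for illustration of the prompt.
key = {"kty": "EC", "crv": "P-521", "x": "dummy-x", "y": "dummy-y"}
print("Trust key %s from this server? [y/N]" % thumbprint(key))
```

The user compares the printed thumbprint against one obtained out of band, which is the trust-on-first-use step the talk describes.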