So, why would you want to re-encrypt a device at all? Let me point out that in LUKS a passphrase change doesn't affect the volume key, so it's definitely not enough to change your passphrase, because the volume key stays the same. A real key change has to be enforced by a volume key change. It may be enforced by some policy in an organization; it may even be required on some regular basis.

We have provided an offline re-encryption tool in the cryptsetup suite for some time, but it turned out it lacks a few features. Obviously the offline utility wasn't online, but perhaps this wasn't the major issue. Much more important turned out to be the fact that, even judging by feedback from the community, it wasn't considered robust or reliable enough for people to start adopting it. They were always afraid of what happens when the system crashes during the re-encryption, and so on.

So let me clarify my introduction slide once more, because I think it's much more important to say that the new re-encryption will be online, but it will also be much more resilient, so it should be much more reliable. And what do we mean by resilient? Well, cryptsetup provides software full disk encryption, so to perform re-encryption we need to overwrite basically the whole drive. When we perform the re-encryption we divide the device into so-called re-encryption zones. And if a crash occurs while we are writing a re-encryption zone, we may produce torn writes.
So, to be more reliable, we need a means to detect and correct these torn writes inside the re-encryption zone. For that we need additional storage to keep some metadata, and we need a metadata format flexible enough that we can track the progress of the re-encryption properly. And obviously, the LUKS version 2 format provides both of these.

So, how do we ensure that the new re-encryption is much safer? We introduce a few so-called resilience modes.

The first one is the checksum-based resilience mode. What do we do? Imagine we are going to re-encrypt one of the re-encryption zones. We divide this zone into smaller chunks, which should basically be as large as the underlying physical sector size, and we calculate a checksum for each of these sectors. We keep this information in memory. Then we re-encrypt the segment in memory and calculate one more checksum for the new ciphertext. We store all the checksums in the metadata area, and after this is finished, we start to write the re-encryption zone. So that's one way to provide better resilience.

The second one is a classical journal. There's nothing much to explain: it works exactly like a file system journal; we basically write the data twice.

The third type of resilience is the so-called data shift. In this case it works due to the fact that it's not re-encryption in place. Imagine that we have a re-encryption zone and the first half of the zone is old ciphertext (usually it will be plaintext, because we use this mode in online encryption). We just read the data, encrypt it, and write it into the second half of the re-encryption zone. So basically it's a data shift, and we always have the original copy of the data and space for the new ciphertext.

And, well, whether this can be called a resilience mode at all: we also provide a so-called no-op resilience mode, which basically does almost nothing. It performs no data syncs and no commits in the metadata area.
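The checksum-based mode described above can be sketched roughly like this (a toy model, not cryptsetup's implementation; the 512-byte sector size, the use of sha256 as the checksum, and the `reencrypt` callback are all illustrative assumptions):

```python
import hashlib

SECTOR = 512  # assumed physical sector size

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def reencrypt_zone_checksummed(zone: bytes, reencrypt):
    """Toy sketch of the checksum resilience mode for one zone:
    1. checksum every old-ciphertext sector,
    2. re-encrypt the zone in memory,
    3. checksum the new ciphertext as a whole,
    4. commit all checksums to the metadata area,
    5. only then is the zone overwritten on disk.
    """
    old_sums = [checksum(zone[i:i + SECTOR])
                for i in range(0, len(zone), SECTOR)]
    new_zone = reencrypt(zone)
    metadata = {"old": old_sums, "new": checksum(new_zone)}  # commit point
    return new_zone, metadata
```

The key ordering is that the metadata commit happens before any data is overwritten, so a crash can always be resolved against the stored checksums.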
Basically, the only way this mode is safe is if you are sure that the re-encryption finishes successfully or is gracefully interrupted, for example by SIGTERM. We decided to provide this mode because it's really fast, and if you have, for example, a snapshot backup of the data, you may want to perform the re-encryption faster; in case of failure you have a backup point in the snapshot.

Okay, so what happens if we experience a crash during the re-encryption? In the case of checksums, we read the actual content of the re-encryption zone; we don't know which part of the zone is finished and which part is not. So we read the re-encryption zone and calculate the checksums again, and then we compare every sector with the checksums stored in the metadata in the previous step. If the checksums match, it means this sector wasn't re-encrypted yet, so we perform the re-encryption of that sector. If they don't match, it means the sector was already re-encrypted, so we can move on. And in the end we calculate the checksum of the whole re-encryption zone and compare it with the one final checksum we stored beforehand for the new ciphertext. So we are able to detect whether the recovery in the checksum case was successful or whether something really nasty happened.

In the case of the journal, it's just a replay: we take the segment and write it once again from the backup in the metadata area. In the case of data shift, it's just a repeat: we repeat the operation, because we still have the original copy and we just overwrite the target again. If we use no-op, there's nothing to do.

Okay. So that was the various resilience modes for re-encryption in a nutshell. Now we can speak about the online re-encryption layer. It's a really independent layer, because users may use the tool even in offline mode and still benefit from the resilience modes as described.
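The crash-recovery pass for the checksum mode, as just described, might look like this toy sketch (again, sha256, the 512-byte sector, and the `reencrypt_sector` callback are illustrative assumptions, not cryptsetup internals):

```python
import hashlib

SECTOR = 512  # assumed physical sector size

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def recover_zone(disk_zone: bytearray, metadata: dict, reencrypt_sector) -> bool:
    """Replay recovery for the checksum resilience mode after a crash.

    For every sector, compare its current checksum with the stored
    old-ciphertext checksum: a match means the sector was never
    re-encrypted, so re-encrypt it now.  Finally verify the whole zone
    against the stored new-ciphertext checksum.
    """
    for idx, old_sum in enumerate(metadata["old"]):
        off = idx * SECTOR
        sector = bytes(disk_zone[off:off + SECTOR])
        if sha(sector) == old_sum:  # still old ciphertext: redo this sector
            disk_zone[off:off + SECTOR] = reencrypt_sector(sector)
        # otherwise the sector was already re-encrypted before the crash
    # final verification: did recovery reproduce the expected ciphertext?
    return sha(bytes(disk_zone)) == metadata["new"]
```

Note that a genuinely torn sector matches neither checksum; in this sketch it is left alone and the final whole-zone check is what flags that "something really nasty happened".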
File systems and applications access the data through a special device stack, and this stack is controlled by the re-encryption utility, because during the time we are re-encrypting a zone we don't want to interfere with BIOs that are coming from the file system. So we need to block the file system from accessing these zones during the re-encryption.

So how does it work, in a picture? This is how an activated device usually looks: we have a data device with ciphertext and a LUKS2 header embedded at the head of the device. On top of it there's a dm-crypt device, and on top of that we may have a mounted file system. Then we introduce a special private device-mapper device, where we move the original table from the public device with the mounted file system, and then we introduce one more private device, the so-called dm hotzone device, which controls access to the zone being re-encrypted (the re-encryption zone is the purple box below). As you can see, any BIOs going to the old ciphertext can pass through, because there's no risk of conflicting with the re-encryption process. On the other hand, for the re-encryption zone, any I/O coming from the file system must be blocked in the dm-linear hotzone device until we finish re-encrypting the zone.

For each of these re-encryption zones, in case we are using the checksum-based resilience, we first read the old ciphertext and calculate and store the checksums in the LUKS2 metadata; that's the first commit point. Then we start overwriting the re-encryption zone with the new ciphertext, and after we finish this operation successfully, we perform a second metadata commit and move on to the next zone. And it works like this for the whole device. In the end we will have a LUKS2 device with new ciphertext and a mounted file system on top of it as well.

So, how does it perform? I tested it shortly before this talk on two device types: an old spinning drive (5400 RPM), really an old drive, and then a pretty modern NVMe.
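The hotzone behavior just described, deferring file-system I/O that targets the active zone until that zone is finished, can be modeled very roughly as follows (the class, queueing, and method names are purely illustrative; the real mechanism lives inside device-mapper):

```python
class Hotzone:
    """Toy model of the dm-linear "hotzone" device: block I/O aimed at
    the zone currently being re-encrypted is held back, everything
    targeting old or new ciphertext passes straight through."""

    def __init__(self, zone_start: int, zone_end: int):
        self.zone = (zone_start, zone_end)
        self.pending = []    # BIOs deferred until the zone is done
        self.submitted = []  # BIOs passed down to the lower device

    def submit_bio(self, offset: int, data: bytes) -> None:
        start, end = self.zone
        if start <= offset < end:            # hits the active zone: defer
            self.pending.append((offset, data))
        else:                                # outside the zone: pass through
            self.submitted.append((offset, data))

    def zone_finished(self) -> None:
        """Called after the second metadata commit: release deferred BIOs."""
        self.submitted.extend(self.pending)
        self.pending.clear()
```

This is only the blocking logic in isolation; the real stack also remaps reads so the file system sees a consistent view of partially re-encrypted data.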
And, well, it's no secret that the NVMe performs much better than the spindle drive. As you can see in this table, we are comparing various resilience modes and how they perform based on the spare metadata space in the LUKS2 header. If we take the first row, the 95 megabytes of re-encryption zone size comes from the fact that we have almost 4 megabytes of free metadata space in the LUKS2 header: we are able to store enough checksums there to re-encrypt roughly 100 megabytes per zone. As you can see, for a half-terabyte NVMe it performs pretty well even when the device is under load from other processes, which was the case in these tests, for example.

And it's pretty obvious that the journal costs the most of all, because basically what we are doing with journal resilience is reading the device and writing the device twice. So it's no secret that it performs very, very badly. As you can see, the no-op pseudo-resilience mode performs the best, and data shift, the last row, is basically pretty similar to journal mode.

So for NVMe it's pretty okay: with checksums you are able to re-encrypt it in under an hour. With a spindle drive it's not so good, as you can see, because we are basically generating too many seeks on the spindle drive and it doesn't perform very well. But if you use a LUKS2 detached header, that is, when you have the LUKS header stored on a different device than the data device, the performance is pretty good, so maybe for spindle drives that would be the preferred setup.

So, we are getting close to the demo; just a quick summary of features first. We are able to re-encrypt a device with both detached and embedded headers in online mode.
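As a side note, the data-shift mode that appears in the last row of the table reduces to a very small sketch (a toy model only; `half` and the `encrypt` callback are illustrative, and the real implementation tracks the shift offset in LUKS2 metadata):

```python
def datashift_zone(zone: bytearray, half: int, encrypt) -> bytearray:
    """Data-shift resilience for one zone (toy model): the first half of
    the zone holds the original data, and the encrypted copy is written
    into the second half.  The original therefore survives any crash,
    and recovery is simply repeating this step."""
    original = bytes(zone[:half])          # original copy stays intact
    zone[half:2 * half] = encrypt(original)
    return zone
```

Because the source and destination never overlap, there is no window in which a torn write can destroy the only copy of the data, which is why recovery for this mode is "just a repeat".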
Encryption we are also able to perform with an embedded or detached header, but there's a really small window where we need to shut down the file system: we need to add a virtual device underneath the file system, and this can be done on a formatted file system, so we just need to add this device and then we can perform the encryption online. For decryption we support online or offline mode, and we currently use a detached header only for that.

Re-encryption can be interrupted and resumed, and you can switch between resilience modes after an interruption. And as you will hopefully see in the demo, recovery is performed by the usual crypt activate calls, which is basically cryptsetup open. So, as we will see in the demo, we didn't have to patch, for example, systemd to be able to boot with an interrupted re-encryption.

So, I will move to the demo. Maybe... I don't have a terminal, that's bad... I haven't booted it up, I need it... I need this one. Yes. So, it's a regular Fedora 29, and the only package I needed to update was a pre-release cryptsetup for this to work. So, let me show you the device stack first: we have a LUKS device underneath LVM with the root file system. I will start to re-encrypt it. This is just the progress of the re-encryption. If I show you the device-mapper tables, you can see... yes, it's a tool, but you will see that there are devices with two keys; it just doesn't fit on the screen. I will run some write operation, so we have some fun. Okay, it's still running, and now I'll do something really, really ugly: I will just kill the virtual machine. So, we have sort of simulated a crash during the re-encryption. Fingers crossed... and it seems okay, so I will connect once again. Let's see in journalctl, there should be some info, yes, about fsck. So, yes, there were just some orphaned inodes from the interrupted copy operation, but the file system is okay and works. So, how much time do we have?
I think so. I will thank you for your attention, and do you have any questions? ... Yes, I should repeat the questions so they are recorded. So, the first question was whether it's already upstream and available in Fedora, and the second question was whether it helps people to actually encrypt their device on the fly.

I will start with the second question. For online encryption, there is this small window where, if it's a mounted file system, we need to shut down the file system, introduce the device, and then we can remount the file system and start the online encryption process. So the shutdown window is really short, but it is there; we cannot do it fully online. And to the first question: we are going to release cryptsetup 2.1 in the coming weeks, I think in like two weeks, and the next upstream release after that should contain the online re-encryption.

Yes, by default, let me scroll back, by default we provide checksums; this should probably be the default when the user doesn't choose a mode. The reason why we introduced the journal is that if your storage cannot guarantee that it is able to overwrite one sector in an atomic way, there can be cases where checksums will not work for you, and in that case you have to use the journal; that's basically the only safe way. We just wanted to provide one more completely safe option. But, for example, on NVMe or SSD drives it shouldn't be a problem, so the default should be checksum. And data shift is used, for example, any time we need to move the data, and that's for example during encryption, where we need to introduce a LUKS2 header inside the device, as in embedded mode: we need to make space for the LUKS2 header, and in this case we are doing a data shift so that we can store the LUKS2 header.

Yes, so the new key is stored in a regular key slot. LUKS provides key slots for keys, and the process of encryption is
that first you generate a new key and store it in a new key slot. This key slot is a so-called unbound key slot, so at this moment it's basically just a stored key. Then we start the process of re-encryption, and we reference the key slots that are used for the old ciphertext data and the key slots that will be used for the new ciphertext area. And if I hear you correctly, you are asking whether you input just one passphrase: yes, and that's because of the activation problem during systemd boot, because we don't have an API to provide a second passphrase for a single device, and we don't want users to have to retype everything just for this. So during the re-encryption it's necessary to have two key slots with a single passphrase, but you can change it afterwards; it's just a passphrase. Okay.
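The keyslot flow from this answer, reduced to a toy model (the dict layout, slot numbering, and `os.urandom` key generation are illustrative only, not the LUKS2 on-disk format):

```python
import os

def add_unbound_keyslot(keyslots: dict, passphrase: str) -> int:
    """Store a freshly generated volume key in a new, unbound key slot,
    protected by the *same* passphrase as the existing slot (toy model)."""
    new_key = os.urandom(64)                 # future volume key
    slot = max(keyslots, default=-1) + 1
    keyslots[slot] = {"passphrase": passphrase, "key": new_key, "bound": False}
    return slot

def start_reencryption(keyslots: dict, old_slot: int, new_slot: int) -> dict:
    """Re-encryption references both slots: the old key for reading the
    old ciphertext, the new key for writing the new ciphertext."""
    return {"read_key": keyslots[old_slot]["key"],
            "write_key": keyslots[new_slot]["key"]}
```

Once re-encryption completes, the new slot becomes the bound one and the old slot can be removed; the shared passphrase is only a convenience for the single-prompt activation described above.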