For the last talk of the day, I'm going to talk a little bit about CRIU and container checkpoint, restore, and migration. In general, CRIU is a tool for checkpointing, restoring, and migrating Linux processes and containers. Now, a container, seen from the bottom, is a collection of very different objects: processes, and each process in turn has open files, sockets, mount points, etc. Processes usually live in namespaces, and they can also be controlled by cgroups to constrain resource usage, and things like that.

So, here is an example of `criu dump` checkpointing a very simple test application, one that actually checks whether its environment remains correct after the checkpoint and restore operations. You can see that pages-1.img is the memory dump of that single application, and memory dumps are orders of magnitude larger than everything else in a container checkpoint. So I'll talk only about how we deal with the container's memory.

This diagram presents most of what CRIU can do with checkpointing and restoring container memory. First of all, it can transfer all the memory of a running process tree into images, and then, one way or another, it uses the images to restore the processes and their memory contents. And it has several additional techniques to optimize the memory dump and restore, and to reduce the container downtime.

So, how does Linux lay out the virtual address space of a process? You can look at /proc/PID/maps to see what is actually there. For example, here is `cat /proc/self/maps`, edited down to show the memory map of the cat process.
Each address space is divided into virtual memory areas (VMAs), and the virtual memory areas are populated with pages; some of the addresses are not populated, because nobody has actually tried to access the pages at those addresses.

So, when CRIU performs a memory dump, first of all it seizes all the processes it would like to checkpoint and injects parasite code into them. Most of this is done using ptrace calls with different parameters. In the next step, CRIU parses both maps and smaps to determine the virtual address space layout of the processes, and it also detects the relationship between the virtual memory areas and the files that back them. In the next step, the CRIU engine sends a command to the parasite, which splices the memory contents into a pipe, because for now that is the most efficient way to share memory between different processes without actually copying it: using ptrace would be way too slow, and there is no other system call you can use to do the same thing. What happens is that the memory is just spliced between the dumpee process and the pipe, and then the CRIU engine can access the same memory through the same pipe. And the final step is to transfer the pipe contents into the images on disk using the splice system call.

When CRIU performs the restore, it actually does everything the other way around. First of all, it recreates the virtual address space by mapping the images and then moving them to the correct addresses. Then it reads the memory contents from the image files and splices them into the appropriate places in the process address space. Depending on the CRIU version, some of the mapping, pre-fetching, and reading of the data may be done at different stages; in more recent CRIU versions, nearly everything happens before the point where the processes are resumed and allowed to run.
So, to implement live migration with CRIU, you just need to checkpoint on the source, transfer everything from source to destination, and then restore on the destination. This naive approach results in the longest possible downtime, because transferring the memory state for the migration — for instance, 1 GB of memory — is quite time consuming. To optimize this a little bit, one can use what we call the page server, so that instead of waiting until the complete memory dump is finished and then transferring the memory image files, CRIU writes the pages directly to the destination node and saves some latency during the migration.

The next technique is iterative memory migration, which is common for virtual machines and containers. At every iteration we check what memory has changed between the iterations, and we only transfer the deltas from the previous round. Hopefully the process converges, although that is not guaranteed for certain types of applications that have a large memory footprint and a high rate of working set modifications. When pre-dumps are used, the restore node has to choose the right images, select the appropriate data from the correct iteration of the dump, and fill the processes' memory accordingly.

The next optimization is what we call lazy restore, which is on-demand paging for the restored processes. We just start the processes without filling their memory entirely — actually leaving them with empty memory. Then every time a process attempts to access a memory page, we get a notification via the userfaultfd mechanism, and we fill in the appropriate page from the image file, or over the network, depending on the configuration. Currently we do not have any smart statistics to track working set changes and do clever things; we just fill pages as they are requested. But we definitely plan on improving that at some point.
And the last one is actual lazy post-copy migration. The dump on the source doesn't actually dump any of the memory; it just transfers it from the process address space into pipes and keeps it there. On the restore side, the processes are started without their memory, and all the virtual memory of the processes is registered with userfaultfd. As with lazy restore, every time a process tries to access a memory page, we get a userfaultfd notification and request the page from the source node. With that, a workload can resume almost instantly; however, for the first several milliseconds — depending on the size of its working set — it will run much slower than it would with all of its memory already in place.

That's pretty much it. That's it for today. Any questions?

Q: How does it work — I mean, there is a possibility that the migrated process on the new server will not ask for all the pages that were originally on the first server. In that case, which server keeps those other frames? Maybe I'll explain briefly again: we have server A and server B, and we move a process from server A to B. The process now running on server B asks for only a couple of those pages. What happens with the rest of the memory held by the page server — does it eventually all get to server B?

A: We transfer pages on request, but we also have a background transfer for the memory that has not yet been requested, so eventually everything gets to server B.

Q: You could imagine having a bandwidth limit — you could say, I'm willing to spend only that much on the background transfer.

A: Yes, that is still the weakest point of the current implementation: deciding what to transfer and when, because the only signal that comes through is the page faults. [inaudible exchange]
A: We have an additional copy, and we can do different things with it if we want to; on the other hand, some of that state is transferred along the way during the migration. Any more questions?

Q: [inaudible] — which parts of the process state come from where?

A: Part of the process state is collected via the various /proc files, and part is collected by the parasite code that is injected into the process. For instance, take the file descriptor table: part of it can be read from /proc, but some of it is not accessible from the outside, so we need to be in the same process context — that part the parasite collects from inside the process. So things like pipes and sockets work out; what we are not capable of checkpointing is real hardware device state.

Q: Following up on that question — you should be able to dump some of the buffered, unread output as well?

A: You can do some of it, because at some point you can drain those buffers and refill them as needed on restore. But generally, if a process is talking to a real device, you need some support from the kernel to do the same thing. Thanks.