Okay, guys, so let's get started, and please welcome Stanislav Kinsburskiy. Hello. So today we'll talk about live patching, mostly about user-space live patching and how it can be implemented with the technologies we have nowadays. Let's start with a short summary, just to briefly explain what I'll be talking about: a few words about what live patching is and why we might need it, then some introduction into binary code and process state. Then we'll answer some questions: what we can patch, when we can patch, where we can place the new code, and how we can apply it. There will be a few pictures just to illustrate the ideas, and some explanation of why I call all this mess painless. And one question to the audience: how many of you know nothing about live patching? Okay, then I will spend some more time on this. Well, live patching is very similar to simple patching, as an idea. But you have to patch a running process. That means you have to patch binary code by replacing one piece of process logic with another one. And you have to preserve the process state while you're doing it, because otherwise you will get a segmentation fault or something like that very soon. And you have to replace this piece of code in a safe manner, otherwise it will also fail very fast. It's quite a complicated task, consisting of different stages, and there are a lot of questions to solve when you want to do something like this. Well, why might it be needed? It's very similar to today's kernel patching technologies. In most cases, you don't want to shut down some service, you don't want to interrupt its work. But you want to patch it, because it has some critical vulnerability or some real performance degradation due to an issue in the currently running binary, and you want to reduce downtime. And you don't want to restart, you don't want to migrate the program to some other host just to replace the software on the original one.
Sometimes you don't want to restart a service which takes a lot of time to initialize itself and, say, boot up; you want to save that time as well. Here are some examples of existing, used live patching tools. The most common tools nowadays are quite popular, and it's a hot topic. These tools are known as kpatch and kGraft, created by Red Hat and SUSE respectively, and they are used for kernel patching. There is also Ksplice. This is an Oracle tool. It is also able to patch the kernel, and it was basically the first tool allowing you to do this. And, I think starting from 2015, they also announced that they can patch user space as well. And there is one more project I found on the internet, called Panus. It was aimed at patching user space. But as far as I understand, it's not that popular or well known nowadays; it was more like a proof of concept, giving the idea that we can do this. So when we want to patch code, we have to understand what we would like to patch. And to understand this, we have to know something about binary code. Binary code can be roughly divided into two different types. One of them is called static. Statically linked code is most of the utilities you're using on a daily basis, like ls, pwd, cat, whatever. Static linking means that when the linker creates the binary program, all the references to internal functions or objects like variables are just written as addresses within the static binary, as absolute virtual addresses. That, in turn, means that a statically linked binary has to be placed at the specific address specified by the linker in the binary file. On program start, the kernel loads it first, and then the dynamic linker does some stuff with the libraries; but the static code has to be placed at the location chosen at build time.
And there is also dynamically linked binary code. It can be roughly divided into two types as well. One of them can be called load-time relocations, and the other is position-independent code. Both of them allow the kernel to load this piece of binary code anywhere, at any address, but the references to data and functions have to be fixed in place at load time. Load-time relocation is not really widely used nowadays, because, for example, say you're calling some libc function, printf, and you're calling it 100 times, in 100 places. With load-time relocation, you have to patch all 100 call sites within the binary code. That's not really efficient, and it increases the load time of the binary. Position-independent code works a little bit differently: instead of addressing the function directly from the code, it addresses some other place within the binary which contains the address. So on load, the dynamic linker has to find the address of the function and place it into some cell in memory within the binary, and then all the references to this function work automatically. So loading is faster, but you have an additional indirection: the load of the address from this cell. And a few words about process state. When we know what binary code is, we also have to understand what process state is, because we have to preserve it during patching. Process state reminds me of a house of cards: a very fragile structure, everything connected to everything else, and when you remove only one card, everything else falls down. So you have to be very careful with this stuff. The problem with process state is that it changes during runtime, different variables for example. It actually contains three different types of data.
The first is statically allocated data: some object which you define in your code, in C, for example, like a static variable. The address of this variable is placed somewhere in the binary, but during runtime the process can change its contents. Dynamically allocated variables are similar, but allocated from the heap: you just call malloc, you get this dynamic variable, and you work with it. And the third is stack content. By stack content I mostly mean the call stack here, because while the stack itself belongs to the process rather than the binary, the call stack, the sequence of functions which were called in the process during runtime, is also very important to preserve and to analyze. I will elaborate a bit on this later. And then we finally come to the point of actual patching: what we can patch. Those of you who know about kernel live patching probably know that it's a very limited functionality. The same applies to user space: only small changes, and these small changes have to fit into an area which doesn't violate some fundamental limitations. They can also be split into three types. First, static data of a different size. For example, you have a variable 4 bytes in size, an int, and the new binary, the new code, works with the same variable, but the size is different, say a long. The logic of the code also changes, and the processor instructions change too, because the size of the variable has changed. This can be patched without bringing disaster to your process. The same applies to dynamic objects. And you can also have some kind of complex object, like a structure containing several fields. If you're adding new fields to this structure, it breaks binary compatibility. Reordering the fields in the structure, changing their order, is also possible to express in a patch.
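That structure-layout limitation can be sketched like this (a hedged illustration, not from the talk; the struct names and fields are invented). Old compiled code reads each field at a fixed byte offset, so a patched binary that inserted a field in the middle disagrees with the running process about where the later fields live:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical "connection" object, two versions of the same source.
 * The running process was compiled against conn_v1; a naive patch was
 * compiled against conn_v2.  Old code still reads `flags` at the v1
 * offset, which now points into the middle of `timeout`. */
struct conn_v1 {          /* layout the running process was built with */
    int  fd;
    int  flags;
};

struct conn_v2 {          /* patched source added a field in the middle */
    int  fd;
    long timeout;         /* new field shifts everything after it */
    int  flags;
};
```

Compiling both and comparing `offsetof(..., flags)` shows the mismatch directly, which is exactly why such changes can't be live-patched safely.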
But most probably such a reordering will lead to a crash immediately, on the next access to the structure. All this means that you have to review the source code. Live patching can't be done in a completely automated way without introducing a huge syntactic and lexical analyzer, like GCC has, which would parse the source code by itself. You need to see the source code anyway to make sure that the binary patch you want to apply on top doesn't violate these rules. So, how to create a binary patch? Okay, you found a new version of a binary, for example. You know it's a small piece of code which, say, checks a pointer for NULL, and you know you can apply it because it doesn't violate any of the rules mentioned before. How to do this? The common way of doing it today, as kpatch or kGraft does, is function-based: the code is replaced function by function. You have an old function, and you just replace it with a new one. It's reliable, and it's relatively simple. Another approach could be replacing some piece of logic within a function, but it's quite difficult to extract such a piece and inject it without corrupting the assembly code. To build this function-based patch, you need some information. You need the function name, because you have to get the function's address from the binary. The function name, the function size, and the function address within the binary are usually part of the symbol information which, for example, ELF contains; ELF is the format for delivering binary programs in the Linux world. You need all this to get the exact piece of code, the exact function, to patch the old code. And you need the old code as well, because you have to find the address of the old code so you can somehow redirect the execution. And the next question is: okay, you created a patch.
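The per-function information just described can be sketched as a small table plus a lookup (a hedged sketch; in a real tool this comes from the ELF symbol table, and the names and offsets here are invented):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal sketch of what a function-based patch needs to know about
 * each target: the symbol name, its address (offset) in the binary,
 * and its size.  Values below are made up for illustration. */
struct patch_symbol {
    const char *name;   /* symbol name, e.g. "check_ptr"        */
    uint64_t    addr;   /* offset of the old code in the binary  */
    uint64_t    size;   /* length of the old function in bytes   */
};

/* find the old function we want to redirect away from */
static const struct patch_symbol *
find_symbol(const struct patch_symbol *tab, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp(tab[i].name, name) == 0)
            return &tab[i];
    return NULL;  /* not found: refuse to patch */
}
```

If the symbol can't be found (stripped binary, static function), the patch has no reliable anchor and the tool should refuse to proceed.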
You built it somehow, you constructed a new function, placed it in some other file, and you want to apply it to the running process. When can you do this? Well, it's also not a simple task. First of all, you have to stop the process, because you can't just change a wheel on a running car; you have to park it for some time. And what is more important, you need to stop it at the proper moment in time, because there are two limitations applying to this "when" question. If the process is executing the code you want to patch, you can't do it. And if this function is referenced in the process call stack, you also can't do it, because in that case you could have a few threads executing different versions of the function, which can also lead to disaster very soon, especially if the logic changes. So you have to do it safely: you have to catch all the threads of the process outside of the patching area. This means that you have to be able to unwind the stack, get the full call trace of the process once you've stopped it, and analyze it to make sure that you are not going to patch the process while it's running that code. So, where can we place this code? Well, with kernel patching, the kernel itself already has the loadable module technique, which really simplifies this problem for kernel developers: they just build the patch, wrap a module around it, and load it via generic kernel facilities. With a user-space program, it's not that simple. Moreover, it's quite complicated. Some of the projects I mentioned before that do user-space patching had to introduce another system call into the kernel to do this, either as a module or something like that, and it's not merged into the kernel. There is no generic way to insert code into a user-space process. And even if you found one... virtual address space, yes. Sorry. Yes?
No, I'm talking about: you have an application, it's working, and you stopped it. Yes, the question was about what I mean when I'm talking about the address space. So you have an application. It's working in its own virtual address space. You want to add some other code to this virtual address space, and you have to find a place where you can put it. In the general case, you don't have one. There is a technique, I will talk a bit about it later, for writing the new code once you already have the address space. But if you don't have it, you have to allocate it somehow, and there is no generic, common way to do this nowadays. There is, for example, a system call, mmap, which allows you to allocate address space within your own process, but not within another process. The project I mentioned in the beginning, Panus, introduced one more mmap system call, called mmap3 if I'm not mistaken, which creates a mapping within another process, where we can then place the code. So this is a major problem, by the way, with user-space patching nowadays. It was solved by way of a kernel modification, but it's not upstream, so it's not something you may rely on; you have to carry it with you. Panus, for example, carries this module with it; it's part of the project. Another problem is... okay, yes? [Inaudible question about GDB injecting code.] So the question was that GDB already inserts code into a process, and can this be used? Well, yes, it can. Basically, it's not so much about injecting the code itself; it's about creating space for the code. Okay, I can briefly explain right now what technologies exist today that make this easier. There is an interface in the kernel called ptrace, which you can use to access process internals.
And basically, you can take a piece of existing address space in the process and inject a small binary blob which will call the mmap system call for you within that process. You get back the address of a new region allocated within that process, which you can then use to put the code there. Is that okay? Okay, I will elaborate a bit more on this later. So this is quite a significant problem. You can solve it, as I explained, and it's a bit tricky, but you can do it. Another thing, which applies only to user-space patching and not to kernel patching, is that you can choose one of two ways of patching a user-space process. In the kernel, you have to load a small piece of code anyway, wrap it into a module, and then use it to patch. In user space, you can of course use the same way: you can apply the small change you want to the sources, build the binary, and then cut out of that binary, as a binary blob, only the small function you want to patch. But you can also do it differently: you can take a new version of the binary, where the issue is fixed already, simply map the whole binary into the process, and use this new binary. I will also tell you a bit later how you can even replace the old binary with a new one. Well, so, for example, you already know how to place the code into a process, and you decided to put it in as a binary blob, or just map a new file, for example a shared library. Now, what about static code? I put static code here just to make this talk broader, because patching static code is very painful, and actually looks useless.
Because when you put static code into a process address space, you can't place it at the same address where the old version was, since that address is fixed and occupied by the old code, by the whole binary. So you have to place it somewhere else, and then you have to fix all the references in this new code so they point to the variables in the old code, for example, or to the other functions it calls. You have to fix all those addresses. And one of the significant problems here is that when you build the binary, you don't have that information, and with static code it's relatively difficult to get the list of all the points you need to adjust. Position-independent code, on the other hand, the kind you can place anywhere, like shared libraries such as libc, can be loaded at any address in the virtual address space. The code is compiled and linked in such a way that when the dynamic linker places, say, a shared library into a process address space, it already knows from the ELF internals wrapped around the binary all the places it has to fix. The dynamic linker knows everything about where it has to place the correct addresses: it has the offsets within the library, it knows the address where it loaded the library, and it simply calculates the sums and writes them to the addresses specified in the ELF, writing these addresses into the binary code. So it's a more or less simple, routine task. And if you are patching position-independent code, and I have to say the rest of my talk will mostly focus on patching position-independent code, you have to initialize things when you are adding this code: exactly those cells I mentioned, which hold function addresses or variable addresses in the position-independent code. You have to fix them.
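The fixup the dynamic linker performs here, and which a patcher must redo for an injected library, is essentially base-plus-offset arithmetic over the GOT-like table of cells: for each relocation recorded in the ELF, write load_base + symbol_offset into the slot. A hedged sketch with invented offsets, modelling the mapped library as an array of 8-byte slots:

```c
#include <stdint.h>

/* One relocation entry, roughly what the ELF records: where to
 * write, and the offset of the symbol the slot should point at. */
struct reloc {
    uint64_t slot_index;   /* which slot in the table to write   */
    uint64_t sym_offset;   /* offset of the target symbol        */
};

/* mem[] stands in for the library's address table; each element is
 * one 8-byte cell, indexed by slot_index for simplicity. */
static void apply_relocs(uint64_t *mem, uint64_t load_base,
                         const struct reloc *r, int n)
{
    for (int i = 0; i < n; i++)
        mem[r[i].slot_index] = load_base + r[i].sym_offset;
}
```

Once every slot is filled, all call sites that go through the table work automatically, which is exactly why position-independent code is the friendly case for live patching.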
And also, you have to copy data in some cases. For example, if some function changes a variable within your library, the old library and the new one both modify it somehow. When you're placing a patch as a binary blob, you can simply redirect references to the old data. But if you're mapping the library as a whole, you can redirect all the references from the old library to the new one, simply replacing it, and copy the data from the old library to the new one. And now I'm coming to an important point: once you have succeeded in placing the patch into the address space and you have initialized everything, you need to somehow redirect execution from the old code to the new one. Here are some pictures illustrating how this can be done. This is how a process looks when you started it, before patching. You have a process, and there is some function; we don't really care which one. For example, it's a libc function, printf. Your program calls this function several times, and you have these references. The dynamic linker initializes them: it places in your program the address of this function within your address space. So everything works nicely. Then you insert code via the mmap system call, either as a binary blob, just the one function, or by mapping the whole library into the address space. And then you redirect it like this. This is the common way of doing live patching nowadays; kpatch, kGraft, Ksplice, whatever else, they do it like this. They simply place a jump instruction, an assembly jump, which is architecture-dependent, at the beginning of the old function. When you call the old function, you simply jump into the new code and then return back. And it's a nice way, a very simple way, because you don't really need many fixups: once you have initialized the replacement function, you can place the jump, and that's it.
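On x86-64, that jump-based redirection comes down to overwriting the first 5 bytes of the old function with a relative `jmp rel32` (opcode 0xE9) whose displacement is counted from the end of the jump instruction. A hedged sketch of the encoding, with invented addresses:

```c
#include <stdint.h>
#include <string.h>

#define JMP_LEN 5  /* 1 opcode byte + 4-byte displacement */

/* Encode `jmp new_addr` to be written at old_addr.  The rel32
 * displacement is relative to the instruction *after* the jump,
 * i.e. old_addr + JMP_LEN, and is stored little-endian. */
static void encode_jmp(unsigned char out[JMP_LEN],
                       uint64_t old_addr, uint64_t new_addr)
{
    int32_t rel = (int32_t)(new_addr - (old_addr + JMP_LEN));
    out[0] = 0xE9;                      /* jmp rel32 */
    memcpy(out + 1, &rel, sizeof rel);  /* displacement */
}
```

A real tool would write these bytes into the stopped target with ptrace, and only after the stack-safety check has passed; rel32 also limits the new code to within ±2 GiB of the old, which is why patch regions are mapped close to the original mapping.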
But there is also one more way, which applies to user space, and it has some advantages which I will talk about a little later, very soon: patching via replacement. For example, if you have a libc with some critical vulnerability and you want to fix it, you can mmap, or load into the process address space, the whole new library. If, of course, it allows you to; I mean, if it can be patched this way, for example when you have changes only in the function you want. Say Red Hat released a new update for glibc with a CVE fix, and that's the only change, so it's applicable in terms of a binary patch. You can load the whole library into the address space, initialize it in the proper way, copy the data from the old library to the new one, and then, instead of placing a jump inside the old code, you can adjust and fix all the references from the old functions to the new code. And then you can even munmap the old library. So it's more like a swap. It eliminates the growing memory footprint, because you have only one library in the process. And it's more similar to the way source patching looks when you use, for example, the patch utility in Linux: it simply replaces the old code with the new one, and this is something similar. So in user space it is possible to do such things with position-independent code, like libraries. But of course, in this case you have to do some of the work the dynamic linker does: you have to find the original binary and all the other binaries linked into this process that use libc, find them all, and patch all the references to point to the new library. So, why do I call it painless? It's partly a joke, of course, but it can be painless. What is important here is that you don't need any kernel changes, you don't need any additional system calls, and you don't really need to prepare the binary in any special way.
You don't need any instrumentation or anything else, in the case of shared libraries, this position-independent code. And sometimes you don't even need to build anything: if you have an old version of libc or libssl, and you have a new one, and you've had a look into the source code and you know you can patch it, you can take the binaries which arrived with the update and use them to patch. Well, it's more or less obvious that you have to extract some information: you have to parse the ELF structure, which wraps any binary code on Linux distributions. And here are some more details on how it can be done. First of all, tasks like stopping the process, resuming it, and inserting code can be done via libcompel. This is a new library which appeared as a subproject of the CRIU project, the live migration of containers. It's quite powerful. It hides a lot of the architecture- and kernel-specific things, like the ptrace interfaces and the various complex things with loading registers, and gives the user relatively simple ways of stopping and resuming a process and also executing system calls. It allows you to relatively easily load a piece of binary code into the address space, and to easily execute a system call, like mmap. You need to know almost nothing about ptrace or anything else. The sources can be found as part of the CRIU project. It's at an alpha stage, but it's working. And for stack unwinding, libunwind can be used. It's a mature library, and it can be used to get the stack of a foreign process. In the case of a foreign process you need to stop it first, but libcompel gives you exactly that. So the logic is more or less: stop the process, use libunwind to get the stack contents, check that it's okay for patching, use libcompel to insert the code into the process address space, patch and initialize the new code, and then resume the task. Well, that's actually all for the presentation.
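The "check that it's okay for patching" step in that sequence can be sketched as follows (a hedged illustration; the addresses are invented): after stopping the process and unwinding every thread, refuse to patch if any instruction pointer, current or saved in a stack frame, falls inside the function being replaced.

```c
#include <stdint.h>

/* Is address a inside [start, start + size)? */
static int addr_in_range(uint64_t a, uint64_t start, uint64_t size)
{
    return a >= start && a < start + size;
}

/* pcs[] = instruction pointers gathered from all frames of all
 * threads (e.g. via libunwind on the stopped process).  Returns 1
 * only if no thread is executing, or will return into, the old
 * function. */
static int safe_to_patch(const uint64_t *pcs, int n,
                         uint64_t func_start, uint64_t func_size)
{
    for (int i = 0; i < n; i++)
        if (addr_in_range(pcs[i], func_start, func_size))
            return 0;  /* some thread is inside the old code */
    return 1;
}
```

When the check fails, the usual strategy is to resume the process briefly and retry, hoping the threads have left the patched region.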
So if you have any questions? Yes, please? [Partly inaudible question about whether the information you can discover from the binary, for example about stack layout and control-flow changes, is enough to decide whether it's safe to patch.] Okay. So the question was: how simple is it to patch, how simple is it to make it safe, say in production, and do you need some explicit knowledge to do this or not? Well, live patching is very tricky stuff, it has limited functionality, and it's complicated. In the case of the kernel, you of course have to review the code; in the case of user space it's the same. And of course, if you would like to patch some project, some library, for which you don't have enough expertise to review, then it's very risky. My vision in this case, my hope, is this: since kpatch succeeded, and it's becoming, it already is, a hot topic, and I know that people at SUSE, for example, are also looking into user-space live patching, I hope that, in the case of, say, Red Hat, the guys will start to deliver new versions of libraries with CVE fixes one by one, not in a bunch of ten, because nine of them might be applicable as a binary patch while the tenth, say, introduces a new variable; and then the whole new package which arrives on your machine can be applied as a binary patch.
So as for me, I believe that to make it more or less marketable, deliverable to the final user, the maintainers of the source code should try to make the patches for the pieces of code which might be important for live patching safe and small, because they know what they're doing, and they can actually judge whether the change alters the behavior in terms of process state. And the interesting thing here is that if, for example, Red Hat starts to deliver updates for, say, libssl one update per CVE, then a binary patch can basically be built, and it can be marked by the maintainers as safe to apply; and then the binary patch can even be generated on the client's host. The funny thing here is that to generate the patch, we need the old binary and the new binary. The new binary arrives with the package update; but where to get the old binary? The old binary, nowadays, you can actually find in /proc: for each process there is a directory called /proc/<pid>/map_files, and there you can find the old binary, even though it has been replaced on disk by the new version. So you have the sources to create the binary patch on the host, and you have a way to apply it on the host without any kernel modifications, with a simple utility. That's what I hope for. Please? [Inaudible.] Could you please repeat? We've been compiling more and more programs with full RELRO, which forces bind-now; what kind of problems does that present for patching? If I'm not mistaken, let me put it a little differently: you are compiling the programs with immediate binding of the external functions, right? While the common way of linking programs is lazy binding, where a function is bound on first access. Well, yeah, thanks for the question.
I have to note that when you are patching like this, as I mentioned, by replacing the library as a whole, you have to initialize it fully and completely, immediately. It can't be done with lazy binding, because the dynamic linker knows nothing about this library; it doesn't have any structures for it, and an attempt to lazily resolve a symbol would immediately lead to a segmentation fault. So all the addresses internal to the new library have to be initialized immediately, and all the references into this library have to be initialized immediately, before releasing the process. So if I understood you correctly, for this kind of patching immediate binding changes nothing, because we do the same thing anyway. Okay? Yeah, and we can try to re-protect the regions afterwards anyway. Great. So the question was: when I'm patching only one function, say, or two functions out of the whole library, and this new function somehow references another one which I don't patch, and it references it not via the PLT, the address table, but directly, then in some cases the process can end up in the old code and in some cases in the new one. Well, yes. And that's why I was talking so much about library replacement. Live patching only one piece of code sometimes looks very attractive: for example, a new version of libc is released, and it has a lot of changes, but only one function fixes some critical vulnerability, and you want to patch only that. It looks very useful, and I have to admit it, but it's quite complicated to make sure that you have found all these corner cases, like the one you mentioned. So, as for me, since live patching is anyway usually used for small fixes, or small fixes that sometimes dramatically increase performance, I would like to do it this way: simply get the library with the small change and replace it completely.
And it also reduces the memory footprint. For example, if you have containers, and they are distributed, and you have a bunch of them, tens, hundreds maybe, all based on the same software, then you have this issue everywhere, with a lot of processes using this libc. If you only add the patched function, leaving the old code alongside the new one, you're increasing the footprint, and that's not very nice in some cases. In the kernel, you can't unload the old code; but in user space, you can. Yes? [Question: to replace the whole image, don't I need reproducible builds, and might that be a problem?] Well, I don't really see it as a hard requirement for reproducible builds. When you are creating a binary patch, you have to parse the ELF structure anyway, and since you are parsing it anyway, you can do some basic checks, for example on function contents. Say you changed one function, and you created a binary patch out of the old and new binaries, and you see during creation that not only this function changed, but quite a lot more, for whatever reason; then that's a trigger, it should tell you that something is wrong, maybe, and the patch is not applicable. Because live patching, from my point of view, can be used either when you are a developer and you really understand what you are doing, or when you want to distribute it across machines, across clients; and in that case you are distributing packages anyway. And if you are producing this binary diff, this binary patch, between two binaries well known to you, the developer of libc, for example, you can do it on your side, on your host, and check that the binary diff is okay and applicable against the old and new libraries.
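That sanity check can be sketched very simply (a hedged illustration; the byte contents are invented): diff the two binaries function by function, and if more than the one intended function differs, flag the patch as not applicable.

```c
#include <string.h>
#include <stddef.h>

/* The bytes of one function, extracted from a binary using the
 * symbol table (name, address, size). */
struct func_bytes {
    const unsigned char *bytes;
    size_t len;
};

/* Count how many functions differ between the old and new binary.
 * A patch creator expecting a one-function fix would require the
 * result to be exactly 1. */
static int count_changed(const struct func_bytes *oldf,
                         const struct func_bytes *newf, int n)
{
    int changed = 0;
    for (int i = 0; i < n; i++)
        if (oldf[i].len != newf[i].len ||
            memcmp(oldf[i].bytes, newf[i].bytes, oldf[i].len) != 0)
            changed++;
    return changed;
}
```

Real binaries complicate this, since even an unchanged function can shift addresses and therefore bytes, so a serious tool compares after normalizing relocations; this sketch only shows the shape of the check.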
And since you're distributing to the clients exactly the same packages you tested against, I assume you can be sure that the result will be the same on the client's target machine. Yes, please? [Question: what if the process takes the address of a function or a global variable at runtime and stores it somewhere?] Yeah, nothing good. Let me repeat the question. The question is: what if the process, during runtime, takes the address of a global variable, or the address of a function, stores it somewhere in its internals, and then uses it during runtime, and I'm simply removing the old code? Well, yeah, nothing good will happen, especially if you are dropping the old code and placing the new one. Maybe in some cases you could overcome it, it's a tricky task from my point of view, by trying to place the new functions, the new library, at the same addresses; but in most cases it won't really work. Yeah, but when it works... Yes, in this case, yes, it works. So it's also a trade-off. With the jump it's simpler; yeah, it's really simpler, it's more reliable. But you are accumulating state. For example, if you're doing this live patching in iterations over some time, you accumulate state: in this iteration you've patched one function, in another one you've patched the same function again, in the third one you're patching some more. You get all this state, and you have to take it into account each time you are preparing the next patch for the library. While when you are replacing the whole thing, it's actually stateless. It's simpler to reason about the piece of software which does the patching. But yes, you have to review the source code more deeply, and that's why I was also talking about maintainers: it would be very nice, if user-space live patching becomes more popular, for the maintainers to somehow tag releases as safe to apply this way. Yes, please? Well, no.
Not really. Ah, sorry. The question was: have I tried to use, say, IDA features to check the compatibility of the libraries? No, I haven't. Yes, please? What's your recipe for libraries that have non-trivial constructors, ones that execute something non-trivial? And? Let me answer one by one. So, what's my recipe for libraries which have a complex initialization routine, like constructors? Well, frankly speaking, I don't have one. When I was talking about libraries, I was talking about generic stuff: global variables initialized in the source code, the PLT entries, and other pointers, but not about something special that behaves differently based on the environment, for example. But with more than one stack? Yeah, do I have any ideas how to patch processes with several stacks, so-called coroutines? For example, yeah. Well, it's a pain. And I think it depends on how the stack switching is implemented. For example, if it uses the setcontext/makecontext machinery, then maybe, for a nested coroutine which returns to some stack on exit, it's possible to get the context structure and check whether it has a pointer to another one. Because actually this compel library I mentioned allows you to inject binary code as well. It's quite powerful stuff, and you can do this more or less easily. So in this particular case it is possible, but I think it also depends on how it's done, and I haven't investigated this question. Yes, please. Which one? Erlang, yeah? If you really plan on live patching, with so many special cases and so on, why not use a language designed for it? And if you want to go with C, what would you say is missing in C that made this possible in Erlang?
Well, the question was: if I want to live patch and there are so many limitations, maybe I should prefer to write applications in Erlang, which was designed from the start to make this easy; and if I'm talking about C instead, what is missing in C to support this. Well, I wouldn't say anything is missing in C, because it's simply a language. But it would be very nice, from my point of view, if GCC could, for example, give some hints during compilation about symbols, some additional metadata: say, does this function switch stacks by changing the stack pointer? GCC actually knows this, but you just can't get at that information now. Maybe a compiler could help a bit to decide whether we can patch or not, because that's one of the major problems. Yeah, things like that. Yeah, it might be. Because it's actually true: when an address is loaded into a register and you are looking at the binary code, you really can't tell whether it's a function address or just a variable, they look the same, an absolute load that simply places an address. GCC, during compilation, knows exactly whether it's a function address or something else, and that kind of information would be really useful for an automated check of whether we can patch the application or not. Last question. There's an extra level of problems: for example, I have a service running, and I want to patch it as soon as a patch appears, before it has been fully reviewed and everything. So I'm going to use this tool to patch it, which is kind of dangerous. I also have to keep up with the tool itself, so I know that it's up to date and that it doesn't have any bugs. Do you see what I'm getting at, that there is an extra level of problems which can come up? Yeah, of course. But again, it's a kind of trade-off.
If, for example, you have a service and it has just been revealed that it has a critical vulnerability, you need to do something with it. Today, you have to shut it down and restart it. Yes, it should be safe to do this: when you shut it down gently, it has to store all its content, and so on. But sometimes you just don't have enough time for this, or the application takes, say, half an hour to restart and initialize itself, and the service is down for all that time. It may be worth trying to patch a small change in such a case. The other half of the question? Yeah, you can do that, for sure. It's a kind of rolling update; you can do it, and yeah, it's a safer way. But it requires more resources on your side: if you have ten containers running and you need them always running, then you have to start, I don't know, five more alongside with the new version, and then replace them. So it's more overhead for your host, and it also takes more time for you. But from my point of view, yes, it's the same as with the kernel: you don't strictly need to patch it, you can simply migrate a virtual machine or container to another node. Can you? Yes, you can. But again, that's additional load on the network, disk, CPU, whatever. Thanks a lot. I'm out of time. Thank you.