Hi, everybody, and welcome back. The next talk, by Alex, is on the ARM Memory Tagging Extension: fighting memory corruption with hardware. Awesome. So give a warm welcome to Alex.

Yeah, thank you, everyone. Thank you, Ivan, for the introduction. Let me just see. So today my talk is on the Memory Tagging Extension, of course. My name is Alex. I've been a developer for about two years now, working mostly on carrier fuse. But this talk is quite different. It's something I've been working on and studying recently, and I thought it might be interesting for the rest of the KDE community as well. Just one quick thing: I don't think I have control of the presentation, so I don't think I can change the slides at the moment. Let me check. Okay, we're back in. Okay, so onto the next slide.

So before we go into the Memory Tagging Extension, I want to go into the motivation behind it: why would we need this new hardware feature? The reason we need it is because we use C and C++, and they are memory-unsafe languages. That has positives, but it also has negatives. So first of all, what does it mean for a language to be memory unsafe? Well, first and foremost, you have access to pointers and you're able to manipulate them as you wish. You can do arbitrary pointer arithmetic, not just indexing into an array; pretty much anything goes, as long as you do it correctly. And this provides some advantages: you manage memory by yourself. You decide when you allocate from the heap and when you deallocate. That can be quite useful if you don't want a language with automatic memory management — Java has a garbage collector, Python similarly. And in many cases, you do want fine-grained control.
For example, if you're writing a game, you might know beforehand how much memory you need, and dynamically allocating memory, especially in games, can be performance-sensitive. So if you know how much memory you need, you can allocate it in one chunk, ready to go. Once the round is over — for example, you beat the boss — you can deallocate the memory and move on to the next one.

But there's no such thing as a free lunch. What do I mean by this? Managing memory yourself is hard. You have to think about the lifetime of your memory: who owns it, when it should be freed, and who's responsible for freeing it. And a lot of the time we do it correctly. But a lot of the time we don't. And this is where the real stinger comes in. If you do something incorrectly related to memory — for example, a double free; I'll go into it in more detail — you've invoked undefined behavior. But what does that mean? Let's define undefined behavior first. Here's a definition straight from the standard: undefined behavior is "behavior, upon use of a non-portable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements." What does this mean? Whoever implements the language has no requirements: they don't have to do anything in response to this behavior. They could ignore it. They could print something out if they wanted to. No one could really complain; they'd just point to the standard and say, okay, that's not my problem anymore. So a user might commonly see a segmentation fault, or might experience a logical bug. It might be anything. Who knows? So why have we allowed undefined behavior to be a thing? Why don't we define the behavior?
Why don't we do something like Java, where if there's an off-by-one error, we just raise an ArrayIndexOutOfBoundsException, for example? Well, these checks aren't free. They have a runtime cost, which you might not want to pay in certain scenarios. The compiler writer would also like an easier time — C++ is hard enough to implement as it is, and having to think about runtime checks can be annoying. Also, assuming a program doesn't have undefined behavior allows the compiler to do certain optimizations, which, if the program does have undefined behavior, can result in some weirdly generated code.

So, okay, you might get some memory safety bugs. Are they common? Is it even a problem? Can we just shrug it off? Well, they cause countless bugs, from simple crashes to security bugs. All the way back in — actually that slide is wrong, it was 1988, more than 30 years ago — the Morris worm took advantage of a simple buffer overflow and took down large chunks of the internet at the time, which was quite small, but still managed to cause an estimated $10 million in damage. Morris, the one who created the worm, was convicted. So serious damage can be done. More recently, we have Heartbleed, a 2014 OpenSSL bug — I believe also a simple buffer overflow, very simple to catch. It affected pretty much half the internet, half of all HTTPS sites, and basically rendered them useless, because you could read out private keys, so HTTPS wasn't really protecting you until you patched it. And Chromium have done their research and looked at their serious, high-severity security bugs: 70% of them can be attributed to memory safety problems. So the biggest security bugs fundamentally come from memory unsafety.
Now, another thing is that C and C++ programs are very common. You can see all these stats here: Apache and Nginx have quite a large share of the web server market. Google Chrome has 64% of the web browser market. Windows has a large chunk of the desktop market; Android similarly. And some of you might note: well, okay, Google Chrome is 64%, but Firefox, Opera, Vivaldi are also all pretty much written in C and C++ — why are we singling Chrome out? You're correct to note that pretty much all the critical infrastructure we use today, from kernels to browsers, is written in C and C++. What makes it really bad is that a lot of this software has near-monopoly status. So if you're a nation-state attacker, or just somebody who wants to make money from hacking, finding an easily exploitable vulnerability in Google Chrome reaches 64% of web users. The maths checks out: you want to exploit it, you want to look for these bugs. You don't want to just come across them, you want to hunt for them. So this kind of monopoly status also has a big effect.

Well, okay, I've talked about Google Chrome, Apache, Nginx — why should KDE care? Well, a lot of our software runs on top of the Linux kernel, and there's a lot of user-space software that we depend on — Wayland, for example, is also written in C, stuff like this. And obviously we write most of our software in C++, and sometimes in C here and there. Our biggest libraries, like Qt, are again C++. So we're also part of this problem, or at least experiencing this problem. And we developers are bound to deal with crash reports, and a lot of these are probably triaged as very high importance. Look through Nate's blog: quite a few crashes.
So guys, we're not perfect. We've got a lot of work to do. And crashes, security bugs, and weird behavior caused by memory safety bugs turn our users away — and we want people to use our software.

Before I go further, I want to refresh ourselves a bit. This is the virtual address space layout for a single process. The bottom of the address space is at the bottom of the slide, and at the top of the address space we have the stack. The stack starts at the top of the address space and grows downwards, on Linux at least. At the bottom you can see the text segment, which is just the instructions of the program you're running, with some initialized data set up at startup. The heap holds dynamically allocated memory that you get from malloc or new, or make_unique, make_shared, et cetera. The heap grows upwards, starting from lower addresses. And note the shared libraries: Qt, for example, is maybe dynamically linked, so the process will start up and map the shared libraries into the right place so that you can call into them. Knowing that is useful for later, so keep it in mind. The way the stack is used is managed by the compiler, in code generation, so at compile time; at runtime it's just instructions. And the way the heap looks is managed by the memory allocator code — usually glibc, at least on Linux.

Okay, so let's look at our first kind of memory safety bug. We have a simple function here that's vulnerable to a stack overflow. Why? Well, this function takes two arguments that it doesn't use — you'll see on the next slide why I wanted to put them there — and we allocate a buffer of 100 bytes. Then we take the user's input and put it into this buffer.
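The function on the slide is presumably something like this minimal sketch — reconstructed from the description, so the names and the unused parameters are my guesses — shown here alongside a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* Reconstruction of the talk's example: scanf("%s", ...) does no
 * bounds checking, so input longer than 99 characters (plus the
 * terminating NUL) overflows the 100-byte stack buffer. */
void vulnerable(int unused_a, int unused_b) {
    char buffer[100];
    scanf("%s", buffer);   /* undefined behavior on over-long input */
    printf("%s\n", buffer);
}

/* A bounded copy never writes past the end of the buffer:
 * snprintf truncates and always NUL-terminates. */
void bounded(char *buffer, size_t size, const char *input) {
    snprintf(buffer, size, "%s", input);
}
```

The bounded version trades silent truncation for memory safety, which is usually the right trade when the input is untrusted.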
Now, scanf does not do any bounds checking of the input. It just assumes you're not going to give it more than 100 bytes. And that is not an okay assumption, especially if somebody malicious has an opportunity to put something there. So if the buffer is overrun, either on purpose or by accident, you have a stack overflow. What's the problem here? Well, not only can it crash the program, it can also be used as an exploit. So let's get into the mind of an attacker. The attacker wants to insert input, sometimes called shellcode, that will allow them to take control of the program — to take control of the execution flow. What does this mean? Let's take a look at an example stack frame. We have the local variable buffer at the bottom here, at the bottom of the stack frame. When you fill data into the buffer, it climbs upwards. So if you start to overflow the buffer, you're going to overwrite the bookkeeping needed to actually run your program correctly, to set up the stack frames correctly. In this case there might be a saved frame pointer, if frame pointers are generated, and there will be a return address saying which function to go back to once you're done in this function. You also might have some of the function's arguments on the stack — sometimes not; in this case I assume there are. So what the attacker wants to do is put in code — the shellcode is what they want executed — and overwrite the return address such that it points back into the shellcode, into code that they themselves have inputted. The shellcode might, for example, if we're exploiting Apache, open a root shell, because Apache commonly runs as root and is accessible over the internet. Great: you have a root shell on a server, game over. Okay, so you might have noticed something.
How do I know exactly where the return address is? Well, sometimes it's not obvious. If you have access to the source code, maybe you could figure it out: compile the program, play with it locally. Sometimes you might not be so lucky. So the first question is: do I need to guess exactly where the return address is when I'm designing my input? No — just write the address many times as a suffix if you can, and one of the copies will hit. Okay. Well, what should my return address be? It should point into the shellcode somewhere, but if we're off by a bit, we might just crash the program and not execute the shellcode we want. Well, we're in luck: we can use NOP instructions — no operation, it doesn't do anything — and prefix our shellcode with them. Then, as long as the return address lands anywhere in that NOP sled, execution just goes NOP, NOP, NOP until it reaches our real shellcode, and hopefully we're good. Now, obviously you hope the buffer is large enough for you to pull this off. If you've overflowed a four-byte buffer, good luck fitting any decent amount of shellcode; but if it's bigger, like a hundred bytes, you could probably pull something off. And these attacks were quite common, because you have the ability to change the return address.

So this has been around since the inception of C, since the inception of C++. Have we just been sitting there? No. The first mitigation for this type of exploit is write-XOR-execute pages. Originally there was no requirement that a page be only writable or only executable. But ask yourself: why is the stack executable? It shouldn't really be. We shouldn't want to execute code that the user has put in; that doesn't make much sense.
So we establish a requirement and say: you can have a page, and you can either write to it or execute it, but not both. How does this stop our previous attack? Well, the return address leads back into code that we just inputted, and the page it lives on was only given write access. So when we try to execute instructions in that page, we generate a fault, because we're not allowed to execute that shellcode.

Okay, so are we done here? No, we can work around it. If you remember from earlier, there are shared libraries mapped in that contain code: the C standard library, the C++ standard library, maybe Qt, whatever. So the workaround is: why don't we just set the return address to a commonly known function in the C library? For example, we might call system("/bin/sh"). We set up the return address correctly, we put in the argument string that we want, we call it, and we've got ourselves a root shell again. Why does this work? Well, libc is mapped in with execute permission, so we've just worked around the mitigation — this is the return-to-libc attack.

Okay, have we just been sitting there again? No. Another tool is address space layout randomization, ASLR. The idea is to randomize the address space layout, which makes it hard to guess the location of that libc function. Because without ASLR, I can just look at where libc sits on my local laptop, and it will probably map to what's going on on the server, for example. With ASLR, it'll be random each time. Now, on 32-bit it doesn't really work out — there's not enough entropy, you can get around it, there's a paper on it. And it still isn't perfect on 64-bit, but it's still useful to have, so distributions still use it.
And if you're interested in how to break it on 64-bit, the "Hacking Blind" paper will show you, although it's complicated — a talk in itself. So what you're seeing here is a cat-and-mouse game; we haven't really stopped the problem at its core yet. And maybe we will. Okay.

So that's the stack, but we can also have heap overflows. We have two structs here. One just stores a character array, and the other has a function pointer. We're going to allocate these two on the heap, and they might end up basically adjacent to each other. We assign the function pointer to, say, the exit function. But at the same time, we accept a user argument and write it into the name array. Now, what could happen here is that if I overflow the array, I might be lucky enough to find that, because the two allocations are adjacent — we hope — I can overwrite things such that the function pointer is no longer exit but a function of my choice. Again, if I set up the arguments correctly, I could do something similar to a return-to-libc attack, or even point it at my own shellcode. The same mitigations apply, but as we can see, the heap is still exploitable. Despite the fact that the heap doesn't really contain return addresses per se, only internal bookkeeping, we've still managed, in this case, to find a code pointer and change the control flow of the program. Again, if W^X is in effect, jumping to our own injected shellcode doesn't work straight away; what I'm trying to illustrate is that you can do on the heap what you can do on the stack, although it is a bit harder.

Okay, so those two, the heap and stack overflows, are what we call spatial memory safety bugs: we manage memory incorrectly, and things start overwriting each other.
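The two-struct setup just described might look something like this sketch — the struct and field names are my own; the slide's may differ — with both the unsafe copy and a bounded fix:

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Two separate heap allocations can land adjacent to each other,
 * so overflowing `name` may run into the neighbouring allocation
 * and overwrite its function pointer, redirecting control flow
 * when that pointer is later called. */
struct greeting { char name[16]; };
struct handler  { void (*on_done)(int); };

/* Unsafe: strcpy writes however many bytes the input holds. */
void set_name_unsafe(struct greeting *g, const char *input) {
    strcpy(g->name, input);   /* heap overflow if input >= 16 bytes */
}

/* Safe: truncate to the field size, always NUL-terminate. */
void set_name_safe(struct greeting *g, const char *input) {
    snprintf(g->name, sizeof g->name, "%s", input);
}
```

With the bounded copy, the neighbouring allocation's function pointer stays intact no matter how long the user input is.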
What about temporal memory safety bugs? This is when we incorrectly manage the lifetime of the memory we allocate. Here's a use-after-free function: we declare a pointer to an integer, we allocate some memory for it, we assign a value, we free it, and then we assign a value through it again. Now, this example by itself isn't exploitable, but the vulnerability is that we've used something after we freed it: we have no claim to this memory. So this could easily be a crash — although, again, it's undefined behavior, so it might not crash at all.

Okay, so let's imagine a scenario where we could exploit a use-after-free. As in the previous case, we freed the memory, but we still have a dangling pointer to it. Let's assume the program treats the data as containing a function pointer, so the dangling pointer points at what used to be a function pointer. And let's say the attacker is able to place an address of his choice into a new allocation — maybe there's a malloc or new call later into which the attacker can feed input, and the allocator reuses the freed memory. If the program then uses that dangling pointer, and the attacker has successfully put an address of his choice there, again, you've taken over the control flow. A lot of assumptions here — is this likely? Well, yes: half of that 70% of high-severity bugs can be attributed to use-after-free. Quite a lot. So it's definitely something we should look out for. To avoid dangling pointers when programming: please, once you've freed something, set the pointer to null. That will help prevent this particular scenario.

Again, we can do this kind of thing on the stack. Here we have a use-after-return: the func function returns a pointer to an integer. We allocate one on the stack and return a pointer to it, but obviously, after the function has returned, we have no claim to that memory. That's the use-after-return, and we try to use it in the main function.
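The use-after-free pattern, together with the null-the-pointer advice just given, can be sketched like this (a minimal reconstruction — the variable names are mine):

```c
#include <stdlib.h>

/* Sketch of the talk's use-after-free, plus the advice given:
 * null the pointer after free so a stale use can be caught. */
int use_after_free_demo(void) {
    int *p = malloc(sizeof *p);
    if (!p) return -1;
    *p = 42;              /* valid: we own this memory */
    free(p);
    /* *p = 43;              <-- the bug: writing through a freed
     *                           pointer is undefined behavior */
    p = NULL;             /* defensive: kill the dangling pointer */
    if (p != NULL)        /* a guarded use now safely does nothing */
        *p = 43;
    return p == NULL;     /* 1: the dangling pointer was neutralised */
}
```

Nulling the pointer doesn't fix a design-level lifetime bug, but it turns a silent corruption into an immediate, debuggable null-pointer crash if the guard is ever forgotten.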
Again, undefined behavior: we have no claim to that memory anymore. We also have use-after-scope: when a variable goes out of scope, the same idea applies as with use-after-return. An attacker exploits these in the same way as the use-after-free we talked about earlier.

Now, let's look at some tools that help catch these bugs in the first place. We've looked at mitigations already, but let's talk about how we can prevent these bugs from even getting into our programs — then we might not even need the mitigations. Prevention is better than cure. So in this section I'm going to go through some tools, and we'll build up to MTE. Okay, the time's good.

Let's talk about AddressSanitizer (ASan) first. Basically, ASan is compiler instrumentation plus a runtime library, and it catches quite a few of the classes of bugs we talked about earlier. It's easy to use — you can use it now in your programs; please do when you're developing. Luckily, if you use extra-cmake-modules and KDE Frameworks, as you can see with that snippet, you'll get the right compiler and linker flags set up for you. It's more of a debugging tool than a production mitigation: the idea is that, upon detecting one of these bugs, it quits the program and gives you good debugging information so you can fix the issue.

So how does it work? To detect spatial memory bugs, it uses a concept called red zones. Each allocation, on the stack, on the heap, and even for globals, has a red zone on either side. Basically, if you overflow by, say, one byte, you find yourself in the red zone, and finding yourself in the red zone means the program is buggy: you've done something wrong.
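For reference — and as far as I know, this is the extra-cmake-modules mechanism the snippet on the slide refers to — enabling ASan via ECM's ECMEnableSanitizers module looks like this:

```shell
# Configure a KDE Frameworks / ECM-based project with ASan enabled.
# ECMEnableSanitizers adds the needed compiler and linker flags;
# several sanitizers can be combined as a semicolon-separated list.
cmake -DECM_ENABLE_SANITIZERS='address' ..
# e.g. -DECM_ENABLE_SANITIZERS='address;undefined'
```

This only affects your local build, so it's safe to keep in a debug configure script.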
As you can see, there's an overhead there, because the red zones are just empty space designed to catch your mistakes. And to detect temporal memory bugs — freeing memory and then using a dangling pointer — it uses something called quarantine around allocations. Basically, it's a queue: when you free memory, the allocator tries its best not to reallocate that memory soon, and it keeps track of the fact that it was freed recently, so that if you do use the dangling pointer again, you get a report.

How is this done? There's something called shadow memory, which you can see a table of here. It's quite clever: the compiler instrumentation, on every load and store, does a quick check of what kind of memory is being touched. From any memory address, a shift and a simple offset get you to the corresponding shadow memory — very simple maths, one or two instructions, to keep the overhead low. Each shadow byte stores metadata about eight bytes of addressable memory: how much of those eight bytes is addressable, and, for the parts that are unaddressable, what kind they are — a red zone, quarantined memory, et cetera. Using this, it can determine whether you've strayed into a red zone or into quarantined memory. Quite nifty, quite smart.

So ASan was released about 10 years ago, and it reduced overheads considerably compared to Valgrind. If you've ever used Valgrind, the overhead was quite excessive. People still use Valgrind now — it does detect some things ASan doesn't — but ASan has a much better CPU overhead, around a 73% slowdown, roughly ten times less than Valgrind, an order of magnitude. Memory usage increases about three times, so quite a lot, but still more manageable than Valgrind.
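The shift-plus-offset shadow lookup just described can be written down directly. This is a sketch of the published ASan mapping; the offset constant below is the default for Linux x86-64, and other platforms use different values:

```c
#include <stdint.h>

/* ASan's shadow mapping: one shadow byte describes 8 bytes of
 * application memory, so shadow = (addr >> 3) + offset.
 * 0x7fff8000 is the default offset on Linux x86-64. */
#define ASAN_SHADOW_OFFSET 0x7fff8000ULL

uint64_t shadow_address(uint64_t app_addr) {
    return (app_addr >> 3) + ASAN_SHADOW_OFFSET;  /* one shift, one add */
}
```

That single shift and add is why the per-access cost stays low: the instrumented check is just this computation plus a load and compare.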
So you can easily use this for debugging most programs, but for production use, probably not. And Chrome and Mozilla — and I guess us too — have had a good time finding bugs with it: when Chrome used it, they found 300 bugs in the Chromium code base in the first 10 months. But it is a dynamic analysis tool. A static analysis tool just looks at the source code and decides if there are any problems it knows about; a dynamic analysis tool only does its work while the program is running. So your analysis is only as good as the test suite you're using: no tests, and ASan is effectively useless. Recently, though, we've seen the trend of fuzzing. KDE does use fuzzing — I think it's Albert who manages our OSS-Fuzz integration, which uses ASan. Fuzzing is basically throwing random inputs at a program; if one crashes it, the fuzzer tries more inputs like that one. That's a whole field in itself, but ASan plus fuzzing is an amazing combination.

Okay, so what's the next iteration? Now we have Hardware-assisted AddressSanitizer, HWASan. That also consists of compiler instrumentation and a runtime library, similar to ASan, and it detects a similar class of bugs, but it uses a completely different technique. Firstly, it relies on a hardware feature called ARM TBI to store something called tags, and the technique it uses is called memory tagging — I'll talk about that more in a moment. It's pretty much as easy to use as ASan: just pass the correct compiler flags. We don't support it in extra-cmake-modules yet, I believe — correct me if I'm wrong. And it's only supported in Clang, I think, and only on ARM for now. Intel is releasing something called Linear Address Masking, which might mean HWASan will actually work on Intel soon — keep your eyes peeled.

Okay, so I mentioned ARM TBI, but what is it? Registers, at least on AArch64, are 64 bits wide, but virtual addresses are only 48 bits.
So there are only 2^48 bytes of addressable memory, and the other 16 bits normally have to be all zeros or all ones — otherwise, bad times for you. TBI — Top Byte Ignore — is a hardware feature that relaxes this requirement: in the top byte, you can store any value you want. Just that one byte, mind you — the byte under it still has to have the right value, but the top byte can be anything. In this case, HWASan uses it to store a tag associated with the pointer. Now, some of you may be thinking: is this a good idea? Is it always going to be the case that we don't use all 64 bits? Intel is introducing, or may already have introduced, five-level paging, which opens up a 2^57-byte address space. Could it be 64 bits soon? Could this stop working? I'll leave that for you to think about.

So let's talk about the concept of memory tagging. It's a general concept — it's not tied to ARM MTE itself; you can think of it independently of the hardware. Let's first define two key terms before we carry on. First, the tagging granularity: the amount of memory associated with any given tag — 16 bytes for HWASan. Then the tag size: how many possible tag values are there? If the tag is one byte, you can have 256 different tag values. So each granule of memory is associated with a tag, and all pointers to the same location in memory should carry the same tag. Upon each memory access, you compare the tag of the pointer with the tag of the memory it points to, and if they don't match, you have a bug, and you report it as an issue. Okay, so that's quite general. How is it used for memory safety? Well, HWASan uses compiler instrumentation and a runtime library, and it uses this concept to do its magic.
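The tag-in-the-top-byte idea can be modelled with plain bit arithmetic. This is only a toy model of what HWASan's instrumentation (and later MTE's hardware) does; the function names are mine:

```c
#include <stdint.h>

/* Toy model of TBI-style tagging: the tag lives in bits 56-63
 * of a 64-bit pointer value, the address in the bits below. */
uint64_t with_tag(uint64_t ptr, uint8_t tag) {
    return (ptr & 0x00FFFFFFFFFFFFFFULL) | ((uint64_t)tag << 56);
}
uint8_t pointer_tag(uint64_t ptr) {
    return (uint8_t)(ptr >> 56);
}
/* The check performed on every access:
 * pointer tag vs. the tag of the memory granule. */
int access_ok(uint64_t ptr, uint8_t memory_tag) {
    return pointer_tag(ptr) == memory_tag;
}
```

Because the tag rides along in the pointer itself, it survives arithmetic, copies, and being stored in data structures — which is exactly why TBI makes this scheme cheap.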
So what does it do? The memory allocator is modified to do heap tagging. On allocation, you align to the tagging granularity, and you randomly assign a tag to both the memory and the pointer — they should be the same. On deallocation, in free for example, you assign a new random tag to the memory. Why does this work? Well, say the allocator assigned tag X. If you're hunting a use-after-free: on deallocation the memory gets a new tag, and chances are it's not X but some other value Y. So if you use that dangling pointer again, it's a tag mismatch. Similarly, consecutive allocations, because their tags are chosen independently, are likely to have different tags, so if you overflow into a neighboring allocation, the tags are likely to differ. You still have a 1-in-256 probability of the tags matching when they shouldn't, so sometimes a bug is missed, but that's a more than 99% catch rate — pretty good.

The compiler also needs to change. All local variables need to be aligned to the tagging granularity, and again you assign a tag to the memory and the pointer. Instead of on deallocation, you change the tag on function exit, when you tear down the stack frame. For the stack, a single base tag is used, and all other tags derive from it: it's a bit too expensive to generate a random tag for each stack allocation, so you use one base tag and, say, increment on top of it, going in circles — add one for the next allocation, add two for the one after, and so on.

Okay, so is it better than ASan? Well, it has a smaller RAM overhead, because you don't need red zones and you don't need quarantine — you just need the storage for the tags themselves, plus the alignment of stack and heap variables.
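The retag-on-free scheme is easy to simulate. This toy model uses a simple increment where the real allocator picks a random tag, just to show why a stale pointer stops matching:

```c
#include <stdint.h>

/* Toy simulation of retag-on-free: a granule records its current
 * tag; free() assigns a fresh one, so any stale pointer that still
 * carries the old tag fails the access check. (The real scheme
 * picks tags randomly; the increment here is a stand-in.) */
struct granule { uint8_t tag; };

uint8_t sim_alloc(struct granule *g) {
    g->tag = (uint8_t)(g->tag + 1);  /* stand-in for a random tag */
    return g->tag;                   /* the pointer carries this tag */
}
void sim_free(struct granule *g) {
    g->tag = (uint8_t)(g->tag + 1);  /* retag so stale pointers fail */
}
int sim_access_ok(const struct granule *g, uint8_t pointer_tag) {
    return g->tag == pointer_tag;
}
```

The same mechanism catches overflows between neighbouring granules: since each granule's tag is assigned independently, an out-of-bounds pointer usually carries the wrong tag for the granule it strays into.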
It's also better at detecting non-adjacent out-of-bounds accesses — the red zone doesn't really help you if the access isn't adjacent, if it lands far away. And it's better at detecting use-after-free long after the free, because the quarantine is only as good as the size of its queue. But ASan still has some advantages. Bug detection is deterministic for ASan, whereas HWASan can still miss a bug, with that 1-in-256 probability. Also, because of the granularity: if you have, for example, an 8-byte allocation and you access 12 bytes in, you're still inside the same 16-byte granule, so it has the same tag and you won't catch it.

Okay, so let's get to ARM MTE. ARM MTE is a new hardware feature. It's not available in silicon yet, but it's available in QEMU if you want to play around. It's an implementation of the memory tagging concept with the same tagging granularity of 16 bytes, but the tag size is four bits, so smaller than HWASan's. Again, it uses TBI to make space for the tag in the pointer. The mapping of granules to tag values is stored in main memory, reserved at boot — memory you can't use for any other purpose after booting. That's roughly 3% of your memory gone; keep that in mind, that's a cost, an overhead. There are also new instructions to generate, manipulate, and view tags. And there are two modes of usage, assuming you've mapped the memory with MTE enabled — there's a flag for that, though you don't have to think about it as an application developer. In synchronous mode, the tag is checked on each load and store instruction, straight away, and on mismatch a segmentation fault is generated with the faulting address. This is slower, but you get more useful information for debugging. In asynchronous mode, tag checking is delayed until the next context switch, and a segmentation fault is generated without the faulting address.
So async is quicker, which might make it good as a production mitigation for these types of bugs, but it's not as good for debugging.

Okay, so heap tagging: there's an implementation for glibc. A certain tag value is reserved for internal data structures, like bookkeeping and headers. Fortunately, the tagging granularity matches the allocator's existing alignment, so heap tagging doesn't have much of an alignment issue. It's a similar algorithm to HWASan's, but one difference to note is that calloc is no more expensive than malloc, because there's a special instruction that zeroes the memory while setting the tag, so the zeroing doesn't cost anything extra. And realloc always re-tags, whether or not the memory has to move. So that's useful.

Stack tagging is done in Clang; there's an implementation for that. — Sorry for interrupting, Alex. You have three minutes left. — Okay, okay. So again, similarly to HWASan, there's stack tagging, implemented as a simple function pass, but alignment is a much bigger problem now: you need 16-byte alignment, and that can be quite annoying. But stack safety analysis can be used — it's basically an analysis that decides whether there's any chance of a memory safety issue occurring; if there's no chance of it occurring for a given variable, don't bother tagging it, and you'll be fine. That helps reduce the code size overhead and the runtime overhead. Here's an example — I think we're going to have to skip over this due to time, but you can see there are some extra instructions there.

What ARM MTE offers, or should offer, is a smaller CPU overhead, because you don't need compiler instrumentation for each access — that's handled in the hardware for you. Obviously the hardware isn't released yet, so we don't know the real number, but hopefully it's quite low. The RAM overhead is also a tiny bit smaller.
Well, you can't really see it from user space, but there is the tag storage — that 3% already gone — and then there's the alignment you need to deal with as well. The code size should also be smaller, because you don't need most of the compiler instrumentation; from my measurements, it's less than 5%, as claimed. And for heap tagging, a good thing is that you don't need to recompile your programs: as long as you've got a tagging-aware glibc, you're in good shape. HWASan is still better in one respect, though: its tag size is bigger. With only four bits, there's a roughly 1-in-16, about 6%, chance of missing a bug, and that can be a big issue.

Then there are the unconventional use cases — there's only one minute left, so I can't really talk about them, but I think the slides are clear just from reading them. So I'm going to leave half a minute for questions, I think, unfortunately. I wanted to go through these slides, but I can't. All right, any questions?

— Okay, first, thank you for the awesome talk. I would really, really advise you to split it into three talks for the next Akademy, because this is something that we should all learn — and I guess not by heart. A question I would like to ask: most of these tools mitigate or detect already-made bugs. Do you see a world where we could avoid making those bugs in the first place?

— I think the world is always trade-offs, right? In many cases we can avoid quite a lot of them by just using a different language. The real question is: can we do it? Can we have a free lunch? We've seen Rust as an improvement in certain areas — again, not a free lunch, just in some ways a harder language. So I don't think we'll reach that 100% stage, but hopefully the trade-offs will fall more in our favor over time. I think ARM MTE is one of those, for example.

— Cool. I think that's all the time we have. Thank you yet again.
And enjoy all the claps in the chat. People are clapping, clapping, clapping. I'm not going to count all the claps you got. So thank you yet again. — Yeah, if there are any questions, feel free to email me, or just find me around — my nickname is Feverfuse — and if you have any questions, please let me know. Yeah, thank you. Cheers.