We have a lot of host-based intrusion detection systems, and we claim that ours is one that can actually do the following things. We all know that computer attacks exploit software flaws. The most classical example is the buffer overflow attack: attackers come in, compromise some control-sensitive data structure, and execute code assuming the identity of the owner of the victim application. This is a very dangerous kind of attack, because most modern worm attacks use buffer overflow as a basic primitive. Then there are other kinds of software flaws, like attacks that exploit missing input validation, such as SQL injection or directory traversal, and there are classical problems like race conditions and so on. And there are attacks that have nothing to do with software flaws at all: social engineering, password cracking, denial of service, and so on.

What we are going to focus on today is what we call control hijacking attacks. In this kind of attack, the attacker tries to hijack the control of a network application and then run some piece of code, again assuming the identity of the owner of the victim application. The interesting part of control hijacking attacks is that there is a well-defined recipe for mounting them. There is no mystery there, at least at the high level; of course, individual victim applications have different vulnerabilities, but at the high level there is a recipe, and it is not unlike the procedure real hijackers would use to hijack an airplane.

So what actually happens? First, the attacker injects some malicious code or data into the address space of the victim application. This is like a hijacker sneaking a weapon onto the airplane. Then the attacker twists the application into transferring its control to this injected code, so that he can actually do something. This is like the hijacker taking over the airplane. Finally, the attacker executes some system calls. Anything you want to do, short of mounting a denial of service by entering an infinite loop, involves system calls. You want to read somebody's email, read somebody's address book and mail a copy of yourself around, read some sensitive file and send it somewhere, download a compiler: all of these things eventually involve system calls. So once you get control, you want to do certain things, just like a hijacker, once he hijacks the airplane, will probably use it to hit something. That is the flow that typical control hijacking attacks take.

A typical example of this three-step recipe is a very simple stack overflow attack. In this particular case, the attacker tries to overflow a local data structure, user_id, which is just a simple five-element integer array. The left-hand side is the piece of code; the right-hand side is the corresponding stack layout.
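The slide itself isn't reproduced here, but a minimal C sketch of the kind of function involved might look like the following; the surrounding loop and the second local variable are my assumptions, not the original source:

```c
#include <stdio.h>

/* A hedged reconstruction of the vulnerable code on the slide. */
void input(void)
{
    int user_id[5];        /* the five-element local array           */
    int other_local;       /* another local variable above user_id   */

    /* No bounds check: reading more than five values walks past
     * user_id, over other_local, over the saved frame pointer, and
     * finally over the saved return address. */
    int i = 0, v;
    while (scanf("%d", &v) == 1)
        user_id[i++] = v;  /* i is never compared against 5 */
    (void)other_local;
}
```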
As you can see, the attacker tries to overflow this particular integer array, and by overflowing it he can overwrite other things: this local variable, then the frame pointer, then the return address. The return address is what we call a control-sensitive data structure in the victim application. There are other examples; any data structure that affects the control flow of the program is a control-sensitive data structure. The return address is one example, a function pointer is another, and an import function table is yet another.

In this case, the attacker simply overwrites the return address with the value 100, and 100 happens to be the base of the local array user_id. Assuming the attacker also wrote the int 0x80 instruction, the instruction you use to issue a system call, into memory location 108, the attacker gets control after input returns. When the input function returns, it jumps through the contents of the return address, which is now 100. So control goes to 100, then 104, then 108; 108 contains the int 0x80 instruction, and it's game over, right? The attacker has grabbed control and can issue a system call to do something bad. So that illustrates the three-step recipe in action. Nothing new up to this point.

Now, recognizing that there is such a three-step recipe, how can we deal with this general class of control hijacking attacks? That's what we want to focus on today. We have been working on this control hijacking problem for a while, and we take a multi-pronged approach; the umbrella project is called Palladium. Today I'm only going to talk about one particular component of the Palladium project. By the way, our Palladium has nothing to do with Microsoft's Next-Generation Secure Computing Base project; it has no relevance to that one at all. This is completely our own project.

So, recognizing the three-step procedure for mounting a control hijacking attack, what can we do? Remember that in the first step the attacker tries to inject something into the victim application, and usually the way he injects it is through a buffer overflow. The right thing to do should have been to compile your application with an array-bound-checking compiler. If your compiler has an array-bound-checking mechanism built in, you should not have this problem. There are commercially available array-bound-checking compilers, and an open-source one is the bounds-checking variant of GCC, which is derived from GCC. So why don't people use them? Because array bound checking is very expensive: the typical performance overhead we are talking about is between 100% and 300%.
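To give a feel for where that overhead comes from, here is a minimal sketch, in C, of the test an array-bound-checking compiler conceptually inserts around each reference; the helper name is hypothetical, not from any particular compiler:

```c
#include <stdio.h>
#include <stdlib.h>

/* Conceptually inserted by the compiler before each array access. */
static void bounds_check(size_t index, size_t length)
{
    if (index >= length) {       /* every memory reference pays this */
        fprintf(stderr, "array bound violation\n");
        abort();
    }
}

void store(int *user_id, size_t len, size_t i, int v)
{
    bounds_check(i, len);        /* compiler-inserted check          */
    user_id[i] = v;              /* the original store               */
}
```

On top of the comparison itself, such a compiler also has to track which object each pointer belongs to so it knows which bounds apply, and that bookkeeping is where much of the real cost lives.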
The reason is that every memory reference now has to be checked against the bound of its associated data structure, whether that is a field within a structure, an element within an array, or a slot within a dynamically allocated buffer. You have to check whether every memory reference exceeds the corresponding object's bound, and that is very expensive.

In the Palladium project, we use a very interesting idea to solve this problem: we exploit a virtual memory hardware feature of the x86 architecture, the segment limit check. By twisting it in a certain way, we can get the performance overhead under 10%. This is the fastest array-bound-checking compiler you can have. Of course, it is very architecture-specific, mind you; it is not a panacea, in that we cannot automatically apply this technique to other architectures. But since the x86 architecture accounts for more than 95% of the machines out there, it's probably okay. This is one of the interesting aspects of Palladium: very fast array bound checking by exploiting a virtual memory hardware feature of x86. So that is the first line of defense. If you compile your application with the array-bound-checking mechanism enabled, you should be able to prevent attackers from injecting anything, and that would be the cleanest solution.

If you don't like it, because you are not on x86, or you cannot afford to recompile all your applications, or you just get binaries from your vendors and don't have that option, then we go to the second line of defense, which is also a compiler solution. Remember that in the second step the attacker tries to twist the victim application into transferring control to the injected malicious code. So if we can somehow protect the control-sensitive data structures, we should be home free, and that is exactly the idea of the second line of defense of Palladium: we protect the integrity of the control-sensitive data structures. What are they again? Return addresses, function pointers, and import function tables. The idea is very simple. Every time the application legitimately modifies a control-sensitive data structure, we keep a copy of it, and from that point on we know it should not change. Later, when the program uses that data structure to transfer control, we double-check whether the copy we stored away and the value the program actually uses still match. If they differ, somebody has tampered with the control-sensitive data structure and something is wrong. That is the essence of the second line of defense. So even if you don't like the array-bound-checking compiler, fine, we will protect the control-sensitive data structures for you. All of this assumes we have access to the source code, all right?

Now, if for some reason you don't want array bound checking or this integrity check either, we have a third line of defense, which is the topic of this talk: a system call policy, or system call model, check.
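As a minimal illustration of that second line of defense, here is one way the integrity check on return addresses might look. This is a sketch of the shadow-copy idea under my own simplifying assumptions, not Palladium's actual implementation:

```c
#include <stdio.h>
#include <stdlib.h>

/* Shadow copies of return addresses, maintained by compiler-
 * inserted calls at function entry and exit. */
static void *shadow_stack[4096];
static int   shadow_top;

void on_call(void *ret_addr)     /* inserted at function entry      */
{
    shadow_stack[shadow_top++] = ret_addr;
}

void on_return(void *ret_addr)   /* inserted just before returning  */
{
    if (shadow_stack[--shadow_top] != ret_addr) {
        fprintf(stderr, "return address has been tampered with\n");
        abort();                 /* control-sensitive data modified */
    }
}
```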
So the idea is: I can let you come in and inject whatever you want, and even if you can twist my application into transferring control to the injected code, as long as you cannot issue arbitrary system calls, I'm okay. It's fine for you to come in and do your thing. Of course you can enter an infinite loop, but even that is not a problem in a modern operating system; every process has a quantum, so you cannot mount a very serious denial of service that way. Our basic assumption is that if we can protect each application in such a way that it cannot issue any system call it is not supposed to, then we are probably okay. And that is what we are going to focus on. So remember: somehow we have a way to derive a model of an arbitrary network application, and at run time we monitor its run-time system call behavior against this statically derived model. If there is a deviation between the two, we know the application must have been hijacked, because its run-time behavior differs from the statically derived behavior, right? That is the thing we are going to talk about.

Now, when all of these fail, say the attacker is really good, or the user just doesn't want to deploy any of these solutions, then we have the very last line. It's not really a defense; it takes a different viewpoint on the whole security problem. We built a so-called repairable file service. The idea there is to borrow from the fault-tolerant computing community, which recognized long ago that it is very expensive to build highly fault-tolerant systems. There are really two things users want: one is reliability, the other is availability, and at the end of the day what users really want is availability. Reliability typically means the mean time to failure is very long, ideally infinite. Availability means the system is up a high percentage of the time. There is actually a subtle difference between the two, because availability is defined as mean time to failure over mean time to failure plus mean time to repair: mean time to failure in the numerator, mean time to failure plus mean time to repair in the denominator. We want this ratio to be close to one, meaning the system is available essentially 100% of the time.

Now, the fault-tolerant computing community recognized that there are two ways to push this availability metric toward one. You can make the mean time to failure infinite, a highly reliable system that never fails; that's one way to do it. Or you can make the mean time to repair close to zero; that also gets you high availability. So the last element of the Palladium system says we can take the same idea into the security world. We could try to build a system that is never broken into, with no security breach ever, but we know that is getting more and more expensive; all the low-hanging fruit is gone, and of course there are always misconfigurations, human mistakes, this and that.
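Written out, the availability definition above is the following, and the two ways of pushing it toward one fall right out of the formula:

```latex
A \;=\; \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}
\;\longrightarrow\; 1
\qquad \text{as } \mathrm{MTTF}\to\infty \ \text{ or } \ \mathrm{MTTR}\to 0
```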
So maybe the right thing to look at now is how quickly we can recover a compromised system, so that we can get on with our lives. If we can recover a compromised system within sub-seconds, why do I care? Compromise whatever you want. Of course there is a price to this: if somebody steals your intellectual property, there is nothing you can get back. But we are talking about the scenario where people come in and mess up your system, and you would normally have to spend days or a week cleaning it up, time you cannot afford to lose; or somebody messes up your e-commerce site and your business cannot continue, and so on. That is the kind of scenario where this repairable file service can help you.

So the question becomes: if we want to build something into a computer system in such a way that it can repair itself quickly, even after a successful intrusion, what does it take? That is essentially what the repairable file service does, and we have been able to do this for an NFS server and a SQL server just fine. Basically, it can do the following. Imagine somebody corrupts your database and you find out 24 hours later. What is your option? Roll the database back to 24 hours ago. For people on Wall Street, this is probably not an acceptable option, because they don't want all the good transactions from that 24-hour period to be gone. So they need somebody to come in, carefully identify the damage actually caused by the intrusion, and then selectively roll back only the damaged data. That is what this system can do for you; we can do that for an NFS server or for a database management server, all right?

So all of these things are part of Palladium, but today, as I said, I want to focus only on system call model checking. We call this system PAID, for program-semantics-aware intrusion detection. Now, just to give you an idea, most of today's host-based intrusion detection systems use similar ideas: they monitor some sort of system call pattern of the applications. Even the commercial so-called behavior-blocking systems do the same thing. They have a policy for each important application, monitor the application's system call pattern at run time, and see whether it deviates from the model it is supposed to follow, all right? The big deal about the PAID system is that we are able to derive this system call model in a way that is accurate and automated. Most existing host-based intrusion detection systems follow the same monitoring model; the only difference is that the way they derive the model tends to be manual, or to go through some sort of statistical learning phase. So you have to understand that there is nothing new about the idea of monitoring against a system call model; that is what everybody is doing. The new thing here is really how you derive such a model in the first place. I give you PowerPoint: what is the sequence of system calls that PowerPoint is supposed to make? That is the question we want to answer, all right?
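The monitoring half, by itself, is not the hard part. A minimal sketch of it in C, using Linux's ptrace to stop the application at each system call and consult some model, might look like this; check_model() is a hypothetical stand-in for the model we have yet to derive:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

extern int check_model(long syscall_no);   /* the derived model (hypothetical) */

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    pid_t child = fork();
    if (child == 0) {                      /* the monitored application */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);
        _exit(127);
    }
    int status;
    waitpid(child, &status, 0);            /* stopped at the initial exec */
    while (!WIFEXITED(status)) {
        struct user_regs_struct regs;      /* x86-64: syscall no. in orig_rax */
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        if (!check_model((long)regs.orig_rax)) {
            fprintf(stderr, "deviation from model: syscall %ld\n",
                    (long)regs.orig_rax);
            kill(child, SIGKILL);          /* declare an intrusion */
            return 1;
        }
        /* Run to the next syscall stop (each call stops at entry and
         * exit; a real checker would track which is which). */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
    }
    return 0;
}
```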
As I said, the issue with this system call monitoring approach is really how to derive the system call policy, or model, in a way that is accurate and, hopefully, automatic. Existing approaches use things like manual specification: somebody goes in and reverse-engineers, or looks at the patterns, runs the program for a long time and sees what happens, and so on. Obviously this is an error-prone, labor-intensive process, and not very scalable. Or you can use machine learning techniques: run the program a hundred times, try to derive the so-called normal behavior, and if anything deviates from that normal behavior at run time, declare a possible intrusion. All of these lead to false positives, because you can never cover all execution paths of the program. There is no guarantee, right? After all, you are just running it. That is why a lot of these host-based intrusion detection systems have false positives. Sometimes it's even worse than that: they don't even perform such a process for each application, they just have a single policy for all the network applications.

What we want is to do this in a way that is accurate, automatic, and tailored to each individual network application. Our position on this problem is very simple: everything is already there in the program. Assuming you have the source code, you can compile the program and derive its system call pattern at compile time. The information is there; after all, it is the program that makes the system calls, right? And not only are we going to derive the order in which these system calls are made, which is what everybody does, like open, then read, then write; we also derive the sites at which these system calls are allowed to be made. So, for example, say a program makes the open system call at two different places. Then we want to make sure, at run time, that when those two open calls are made, they are made at exactly those addresses, okay? So we check not only the order of the system calls, but also the places from which they are made. This really tightens things up for the attacker. Remember how the attacker operates: once he gets control, he tries to issue a system call, and most likely that call is issued either directly from the injected code or by jumping into libc to make the system call. In either case, we should be able to detect it, because we check the sites as well. Only a system call that follows the order I derived at compile time, and that is made at a place where I believe that system call should be made, can succeed.

So, to emphasize again: the process of deriving the per-application system call policy, or model, in PAID is automatic, and it is very accurate. There is still a small window of vulnerability, which is why the false negative rate is not exactly zero, but it is very close to zero. I'll explain what PAID's window of vulnerability is.
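To make the order-plus-site idea concrete, consider a toy program that opens two different files; the file names and the addresses in the comments are made up for illustration:

```c
#include <fcntl.h>

/* The derived model records these as two distinct events: not just
 * "an open happens", but "open issued from the call instruction at
 * this address". (Addresses below are illustrative only.) */

int load_config(void)
{
    return open("/etc/app.conf", O_RDONLY);    /* site #1, say 0x8048a10 */
}

int open_log(void)
{
    return open("/var/log/app.log", O_WRONLY); /* site #2, say 0x8048b24 */
}
```

At run time, an open issued from injected code on the stack matches neither recorded site, so it gets flagged even if it happens to arrive in the right order.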
In terms of architecture, it's very simple. You take an application, run it through the compiler, and derive a system call graph. It is a graph, not a sequence: it says that if you come here and see an open, what follows should be a read or a write. It can be either, because there is a branch there, and we don't really know at compile time which way it will go; but we know that after the open it should be either a read or a write, all right? That is why it is a system call graph, not a sequence. Then at run time we observe the actual system calls made by the application and compare the two. If there is a deviation, there is an intrusion; and when we declare an intrusion, there is indeed an intrusion. That is why we have zero false positives. Extracting the system call policy, or model, is actually quite mechanical: you take the program, build the system call graph from its function call graph and per-function control flow graphs, and so on, and in the end it works, okay? So I am going to skip the mechanical part and talk about something more sophisticated, where the problems really are; this part is pretty easy to do.

The problem with this approach, if you think about it, is that there is a fundamental flaw in the assumption that at compile time you can derive the system call model, or policy, of a network application. Can you think of why? The problem is that at compile time you sometimes cannot even derive a complete control flow graph. Imagine a program with a function pointer whose value is only available at run time. I cannot even construct the control flow graph, so what are we talking about when we say we can statically derive a system call model? Wouldn't that be pretty unrealistic? That is what we call dynamic branch targets: branch instructions whose target addresses are not known at compile time, so we don't really know where they go. Another example of this is a signal handler. Say your program includes a signal handler. There is no call site in your program that calls the signal handler; by definition, no part of the program will call it. Who calls the signal handler? The kernel does. Your application registers the signal handler with the operating system, and when something happens, the operating system calls back into the handler. So if your program contains a signal handler, you never even see who jumps to it, and there is no way to fold the signal handler into the system call model by static analysis alone. So how do we solve this problem? That is what we are talking about here.

It turns out there is a loophole we can exploit. The basic recognition is that although at compile time we cannot know the target addresses of all branch instructions, we know that we don't know. We know this is a function pointer, so I don't know its value, but we know we don't know. And at the places where we don't know the branch target, we can insert a special system call, called notify. This requires some operating system support.
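A minimal sketch of what that compiler transformation might look like for an indirect call; notify's exact interface here is my assumption:

```c
/* The target of an indirect call is unknown at compile time, so the
 * compiler inserts a notify() that hands the run-time checker the
 * actual target address just before the jump. */
typedef void (*handler_t)(int);

extern long notify(void *branch_target);  /* new OS-supported syscall */

void dispatch(handler_t h, int event)
{
    notify((void *)h);   /* tell the checker where control is going */
    h(event);            /* the original indirect call              */
}
```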
So essentially, we are given a program, and at those places whose target addresses are not known we add an additional system call, notify, whose arguments are the actual values of the target addresses of these dynamic branch instructions. Those values become available at run time, so I can inform the run-time checker of the values of these dynamic branch targets. That is how we solve this particular problem, right?

That is one problem associated with this approach; there are others. Real programs are more complicated than you might think. They are not just a bunch of control flow constructs like if-then-else and loops. There are things like setjmp and longjmp, there are signal handlers, and there are dynamically linked libraries. Say you have a program that, at run time, calls dlopen on some module, and the module name is not known at compile time. At compile time you don't even know that piece of code is part of the program, so there is no way you can analyze it and produce the corresponding system call graph, right? Fortunately, all of these problems can be solved by something similar to notify. These are all well-known, if tricky, control constructs, and we can insert notify system calls appropriately, so that at run time the run-time checker will know what is happening. For example, the way we deal with dlopen is that when the program calls dlopen at run time to load a module, we analyze the module at that point, construct the corresponding system call graph, and continue. Some of the work we would have done at compile time now has to be done at run time, because we have no choice: at compile time I just don't know that much, so I have to defer some of the work to run time. That is the other set of problems we had to solve to make this work.

Now, even if you solve all these problems, there are sophisticated attackers out there. Look at what we have: I have a compiler that can analyze any network application and derive the corresponding system call graph, and at run time I check the application's system call pattern against that graph. Any deviation from the system call graph is flagged as an intrusion. Seems pretty tight, right? Not really. There are people who can do a so-called mimicry attack. The notion of a mimicry attack is that the attacker comes in exploiting the good old buffer overflow vulnerability, and from that point on he, or she, emulates a legitimate system call sequence according to the same system call graph. He tries to fool the underlying run-time checker: I'm a good guy, I'm issuing system calls exactly according to the system call graph. Now, if this particular application contains some system call that the attacker needs in order to really mount the attack, the attacker can follow a legitimate system call sequence until that particular system call comes along, and then issue it to do the damage. For example, say an application contains a system call like exec, the system call that lets you run another program.
Between the buffer overflow point and that exec call there is a sequence of system calls, and the attacker simply follows exactly the sequence specified in the system call graph until he hits that particular exec and issues it. That is the mimicry attack. The idea is that the attacker can use the same tool we use to derive the system call graph and embed that knowledge in the attack code as well. We are assuming a pretty sophisticated attacker now, but we have to assume they are as good as we are. They can use our tool, derive the legitimate system call graph, and mount the attack at exactly the system call they need.

So, to mount a mimicry attack, you have to issue dummy system calls to follow the system call graph, and at the end, when you get to the particular system call you need, you actually use it. For exec, say you now want to start a shell process: at that point you have to change the argument of the system call so you can do what you want. So there are two things the attacker must do: issue each intermediate system call without being detected, and grab control back each time so he can continue, until the last moment when he issues the damaging system call.

How do we solve this problem? Basically, we check the system call arguments whenever possible. Remember, at the last step the guy says, this is the exec I need, and feeds /bin/sh as the argument so he gets a shell process. The other thing is to prevent the attacker from grabbing control back. How does he grab control back? He has to fake all these system calls and then point the return address back into his injected code, right? So if, every time a system call is made, I check the return addresses on the stack and make sure none of them points into the stack itself, we can prevent the attacker from grabbing control back. Those are the two things we need to do: make sure the system call arguments are all legit, and make sure that control never goes back to, for example, the stack. Those are the counter-measures against mimicry attackers.

I'm going to skip some of this and give you an example of what PAID actually provides. PAID checks the following things. Again, it checks the ordering of the system calls: at run time, the system calls have to follow the order we derived at compile time. PAID also checks the site: each system call has to be made from the right place. PAID also checks the stack return addresses: the return addresses on the stack cannot point into a non-text area; they have to point into the text area. This is to prevent the mimicry attacker from grabbing control back. And finally, PAID includes an additional mechanism that inserts random notify system calls at load time; I'll come back to this point with an example.
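Here is a minimal sketch of that stack check; it assumes the classic x86 frame-pointer chain and uses the linker-provided etext symbol to mark the end of the text segment:

```c
#include <stdio.h>
#include <stdlib.h>

extern char etext;        /* linker symbol: first address past text */

struct frame {            /* classic x86 frame layout: saved frame  */
    struct frame *next;   /* pointer, then the return address       */
    void         *ret;
};

/* Called on every system call: reject any return address that
 * points outside the text segment (e.g. into the stack or heap). */
void check_stack(struct frame *fp)
{
    for (; fp != NULL; fp = fp->next) {
        if ((char *)fp->ret >= &etext) {
            fprintf(stderr, "return address outside text segment\n");
            abort();      /* mimicry attempt: control would return
                             to injected code */
        }
    }
}
```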
So let's say you only have the ordering check, which is essentially what everybody has nowadays in host-based intrusion detection systems: they only check the order of the system calls. This picture shows the control flow graph of a program that makes five system calls: setuid, read, open, stat, and write. If the attacker somehow grabs control through a buffer overflow vulnerability, all he has to do is issue the system calls in exactly that order, and eventually he can compromise the system, right? Because it's a mimicry attack: he mimics the exact order the program is supposed to follow, and when he hits the call he needs, he can do some damage to the underlying system.

Now let's say we check both ordering and site. In this case, the attacker cannot issue the system calls from arbitrary places; he has to issue each system call exactly at the location specified in the original program. So that is what happens: he jumps to the legitimate write call site, comes back, goes to the next place, and eventually compromises the system. And the way he gets control back each time is by modifying the return address, all right?

We can prevent that if we do the stack check as well: ordering, site, and stack check together. In this case, the attacker can try the same thing, but once a call is done, control has to go back to the original flow, because he cannot fake the return address; if he understands we are checking, he cannot fake it. [In response to an audience question:] Yes, I understand; I attended your talk. That particular attack would indeed work; you can fake the return address without being detected. But there is a way to deal with it, a completely different mechanism for that particular problem; I'll talk to you afterwards.

Even with the stack check, there is one loophole left. Say this guy only needs exec. Then he doesn't need to follow a long system call sequence, exec is the only call he needs, and he never needs to grab control back. In that case even ordering, site, and stack checks cannot help you: the guy who grabs control can go straight ahead and issue the exec call. That is why we have the last mechanism, notify. The way notify works is that at load time, when a process is started, we randomly modify the binary so as to insert notify calls in random places. This is not something you get from the binary; it is something we insert at load time, right? So in this case, if the attacker tries the same thing, the attack gets detected, because he would also need to issue this particular notify. Remember, notify is part of the system call policy, if you will: when the program starts, we randomly insert notifies, they become part of the system call model, and we check that the run-time behavior follows the same model. Because the attacker doesn't know how we inserted the notifies at load time, there is no way for him to follow them exactly, right? That is how we deal with it.
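A sketch of the load-time side of that idea; every name here is illustrative rather than PAID's real interface, and a real system would use a much stronger source of randomness:

```c
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define N_SITES 8

/* Hypothetical helper: rewrites the binary at a program point so
 * that it issues notify(id) when execution passes through it. */
extern void patch_in_notify(void *point, long id);

/* Shared with the run-time checker, which now requires these
 * notifies, with these ids, as part of the system call model. */
long expected_id[N_SITES];

void randomize_notifies(void *points[], int n_points)
{
    srand((unsigned)(time(NULL) ^ getpid()));   /* per-process seed   */
    for (int i = 0; i < N_SITES; i++) {
        expected_id[i] = rand();                /* unknown to attacker */
        patch_in_notify(points[rand() % n_points], expected_id[i]);
    }
}
```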
So we also do system call argument checking. Basically, we analyze the different kinds of system call arguments and classify them into three categories: static constants, dynamic constants, and dynamic variables. Static constants are hard-coded into the program. Dynamic constants are things you read from a configuration file, from command-line input, or from environment variables: you cannot know them at compile time, but once initialized they never change. Our system can deal with static constants and dynamic constants; we incorporate those two types of system call arguments into the system call policy. So now we are checking not only order, site, and return addresses, and randomly inserting notifies; we also check that the system call arguments are correct, for those we can check. Unfortunately, a lot of arguments fall into the category of dynamic-variable system call arguments. This is something like Apache receiving a URL, concatenating it with Apache's document root to get a full path name, and using that for open, for example. That kind of dynamic variable we cannot check, either statically or at run time, so we just let it through. And this is the only loophole attackers can use: when they attack, any system call they use had better take dynamic-variable arguments, otherwise they will get caught.

So in the end, the window of vulnerability consists of the following two cases. First, the system call the attacker needs in order to cause any damage to the underlying system immediately follows the buffer overflow point. In that case there is nothing for us to check: the very next system call is exactly the one he needs. The other way attackers can sneak through is that they don't need to make any additional system call at all, because the attack takes the form of changing a system call argument. If they can overflow some data structure that eventually affects a system call argument, they can also sneak through, because no additional system call is made and we cannot really do anything. These are the windows of vulnerability, and this is why PAID's false negative rate is not exactly zero.

We have a prototype implementation, and I just want to show you the throughput. Basically, we tried it on a bunch of large programs to see whether all these mechanisms we added incur a lot of performance overhead. It turns out the overhead is quite acceptable in most cases. We tried four configurations: generic PAID; PAID with the return address check; PAID with random insertion of notify; and PAID with both the return address check and random insertion of notify. The throughput overhead is under 11% in all cases, so it should be okay. I hope this gives you some idea of what one can do with host-based intrusion detection today: using compiler technology, you can actually do a lot.
Of course, the downside is that you don't always have access to the source code; that is the one problem. But if you are willing to go this route, the compiler can help you a lot. Compilers used to be just about translating programs; then in the 90s people used compilers to improve the performance of programs. Now we are probably entering an era in which the compiler can help you strengthen the security of applications. We are no longer interested only in translating high-level programs into binary code, or in speeding programs up; we also want to improve the security of these programs. So the big item we want to tackle now is to implement a version of PAID that works on Windows binaries directly. As I said, this would remove the biggest criticism of PAID, that you must have access to the source code in order to use it. We are hoping we can do something with Windows binaries without having access to the source code. If you want more information, you can go to this particular website, and if you are interested in trying PAID, send me an email or contact me afterwards. Thanks.