I popped in and watched Gilles talk about OpenSMTPD, and there was one particular slide that was interesting to me, where he started talking about the security risks and trying to manage the design so that, in case of a failure, that part of OpenSMTPD wouldn't become a big risk factor, and he used the word nightmare. And pledge is just the latest piece of the puzzle for some things which have been worrying me for a long time and keeping me awake. And it basically comes down to two papers. I just want a show of hands: how many of you have read either of these two papers? Okay, actually, let me do it a different way. How many have read the first paper? Okay, a few. And how many have read the second paper? Okay, all of you should read the second paper. This is the scariest paper in computer science in the last 20 years. And I was just stunned when I saw this paper as to the number of additional mitigations we're going to need in OpenBSD. So first of all, to get started, the first paper is about a methodology called ROP, which is used for attacking, and originally people thought this really was only a problem on x86 architectures, with some of their instruction formats and stuff like this. So the guys on the first paper, they actually went and did return-oriented programming on 64-bit SPARC. Since they knew where the gadgets were, they then wrote a C compiler, a mini C compiler, that would generate attack code directly from C code and upload it directly into the application. It was absolutely ridiculous, just to show how much tooling a couple of university graduates can build to have a complete straight-from-C-to-attack approach. Then, Hacking Blind is the paper about BROP, okay? I'm just gonna jump ahead and explain. So return-oriented programming is where you hijack the control flow. Normally you're going up and down the stack, saving return addresses and doing your code all by yourself. But in return-oriented programming, what the attacker does is upload a replacement stack that contains return addresses. But the return addresses return into tiny little fragments of the code, which are called gadgets, a gadget being a number of instructions before a return instruction. And those instructions above the return instruction modify the registers in some way. And by chaining these together with fake return frames, he's sometimes able to get perhaps even all the way to Turing-complete behavior, or perhaps just damage the machine and then return. On the x86 architecture it's even worse, because you don't just have the intended return instructions: since it's a variable-length instruction architecture, you also have polymorphic sequences that occur in the instruction stream. For example, if you have an instruction which loads a value into a register, but the value it's loading contains the byte C3, that byte C3 will occur in the instruction stream. C3 is a return, and therefore the bytes before the load, whatever their meaning may be, they're not real instructions, they weren't intended to be, but they are instructions, and the attacker can actually use those as well. So it's quite a terrifying architecture. The attacker needs to know a few things, and this is a recurring trend you'll notice: the attacker needs to know a few things. There are some other techniques called JOP and SROP which tie into this as well. Blind ROP is based upon the observation that learning about the address space is fundamental to what you need for ROP.
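To make that unintended-instruction point concrete, here is a small illustrative sketch in C (not from the talk; the byte values are invented): on x86 the return opcode is the single byte 0xC3, so scanning raw code bytes for 0xC3 finds candidate gadget endpoints, including ones hiding inside the immediate operand of a longer instruction.

```c
#include <stdio.h>
#include <stddef.h>

int
main(void)
{
	/* mov eax, 0x00C30001 -- the immediate operand contains 0xC3 */
	const unsigned char code[] = { 0xb8, 0x01, 0x00, 0xc3, 0x00 };
	size_t i;

	for (i = 0; i < sizeof(code); i++)
		if (code[i] == 0xc3)	/* 0xC3 is the x86 "ret" opcode */
			printf("candidate gadget ends at offset %zu\n", i);
	return 0;
}
```

Decoding that buffer from offset 3 instead of offset 0 yields a return instruction the compiler never emitted, which is exactly the polymorphism being described.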
What it is, it's an address space oracle, and it works specifically against address spaces which are reused. For example, if you take a process and you create a fork copy of the process, now it has the same memory layout. And if this process dies, and your parent keeps on creating a new child which has exactly the same memory layout, that is when BROP comes into play. Because across a socket, what the attacker will do is first look for a way to make the program fail so that it crashes; that's the easy part. Then he will search with the payloads that he's able to upload and try to find a circumstance where the program spins. Then he can close the socket, but now he has a difference between two behaviors, a stop and a spin. And now he can vary things again, trying to see if he can find a stop that's controlled by a spin. And searching this way (it sounds like a bit of a logical jump; it's described in the paper), soon he can start guessing, byte by byte, what the stack protector cookie is. And then soon after that, he can start scanning the address space for where instruction sequences are which give him tremendous power, such as writing out the entire code segment back over the socket to himself, at which point he knows what the memory layout looks like, and then he can upload a proper ROP sequence because he has all the knowledge. So it's a very, very scary technique. Now, the problem at large is that software will never be perfect. We just are not capable of it with the tools that we have today and the way that we apply those tools. Because it really only takes one or two mistaken if statements to end up in a logic error, which creates some sort of situation where a piece of memory has been damaged or a piece of memory has not been initialized, and yet we're barreling on. And therefore you can end up with all this consequent behavior, and it keeps on cascading. When software goes wrong, it doesn't stop immediately at that fault. I mean, many of us have been in the debugger and seen a fault and had to backtrace it to a condition that was completely unrelated and thousands and thousands of steps beforehand. And the software has simply kept on trying to do its job, unaware that a condition has happened. There's no hard failure. So therefore I work on these things called mitigations. They're small tweaks that are designed to slow down, to hurt these attack methods, so that either you can't exercise the full capability of what the method would normally give you, or it completely stops it; those are more rare. But a mitigation also spots cases in the runtime where an anomalous condition occurs that we shouldn't continue on from. And it tries to turn these into hard failures, so we fail closed, so we can head towards robustness. Now, robustness is a word that I've been struggling with a lot as to the way it's used in computer science. So I went and found a definition for robustness. And it's this crazy definition. It suggests that when a system starts breaking, it should keep on trying to keep on running. And I don't agree with that at all. Like, it says it can recover. I don't see how you can recover from a mistake that you didn't even know about. This definition doesn't seem real to me at all. That's not the way that my experience with the debugger over the last couple of decades has been. We don't have robustness. And the reason is because we don't have a substantial set of fail-closed behaviors that cause software, once it goes wrong, to stop quickly. Instead it just keeps on barreling along.
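For readers who want the BROP idea in code form, here is a heavily simplified sketch of the cookie-guessing oracle. The host, port, and overflow distance are invented; the premise (a forking server that reuses the same layout, crashing on a wrong cookie byte and staying alive on a right one) is the one described above.

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CANARY_LEN	8	/* 64-bit stack protector cookie */
#define OVERFLOW_LEN	512	/* invented distance to the cookie */

/* Send one probe; return 1 if the worker survived, 0 if it crashed. */
static int
probe(const unsigned char *payload, size_t len)
{
	struct sockaddr_in sin;
	char reply;
	int s, alive;

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(8080);			/* invented port */
	sin.sin_addr.s_addr = inet_addr("127.0.0.1");	/* invented host */

	if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
		return 0;
	if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1) {
		close(s);
		return 0;
	}
	(void)write(s, payload, len);
	/* A crashed worker resets the connection; read() sees EOF/error. */
	alive = read(s, &reply, 1) == 1;
	close(s);
	return alive;
}

int
main(void)
{
	unsigned char buf[OVERFLOW_LEN + CANARY_LEN];
	int i, g;

	memset(buf, 'A', OVERFLOW_LEN);		/* fill up to the cookie */
	for (i = 0; i < CANARY_LEN; i++) {
		for (g = 0; g < 256; g++) {
			buf[OVERFLOW_LEN + i] = (unsigned char)g;
			/* Overwrite one more cookie byte per round. */
			if (probe(buf, OVERFLOW_LEN + i + 1)) {
				printf("cookie byte %d = 0x%02x\n", i, g);
				break;	/* byte confirmed; keep it */
			}
		}
	}
	return 0;
}
```

The whole attack works only because every forked child carries the same cookie and the same layout, which is why re-executing instead of just forking (mentioned later) breaks it.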
So mitigations are supposed to try to stop us earlier. Now, what's a good mitigation? If we can diminish the effectiveness of an attack, that's good. It should be low overhead, because nobody wants a 5% slowdown of their machine to gain the safety. Or, for example, in managed languages, some of these types of problems are dealt with by garbage collectors, but it's too slow, right? And then people prefer not to use that type of tooling. The mitigation should be pretty easy to understand, so that people don't get confused as to whether this thing is artificially slowing things down or causing various problems for them. And we shouldn't assume that one mitigation is gonna solve a wide set of problems. It might only solve one problem in one way. But sometimes when we build these mitigations, we can also be optimistic that two or three of these mitigations near each other may combine into a somewhat stronger surface. Okay. And if a mitigation's good, it usually also develops a cult of people who believe that this mitigation is good. And because some mitigations are not just automatic, some of them actually require some interplay with the developer, it's good if you also have people who understand it well enough to believe in it and try to modify their code to move closer towards the behavior of this mitigation. And then, backtracking to how an attack actually works and what people actually play with: the first thing the attacker needs is knowledge. He needs to find a bug, okay? After that, the most powerful tool he has available is the substantial consistency in the address space layout at the moment of the crash. Generally, the attacker on his machine has exactly the same binary that you have running on your machine. So he knows what the instruction layout generated by the compiler was. The instructions are generally in exactly the same place. He knows whether address space randomization is in play. So, for example, the shared libraries may be in a different place, but he has the same shared libraries as you. So the fixed offsets within the executable are the same, and the fixed offsets within the shared libraries are the same. There's also a stack, which could be anywhere, but that stack inside itself also has objects in the same consistent order. So even though the base addresses for these objects are randomized, the relative addresses between these objects are still the same. The attacker finds himself in a circumstance where he has registers available at that attack moment. And some of them are fixed values, because the code as it was running has these variables in these known conditions, but also certain registers have pointers in them. And these pointers may be inside an object which he doesn't know the address for, but he knows what thing in that object it is. So therefore, he may have a pointer to a function in libc, and therefore, at a fixed offset off that thing inside libc, there's another object. So this consistency is a very powerful thing for the attacker to utilize. He also uses these things called gadgets. He uses the constants, like I said, the pointers and the register values. He combines these with the gadgets to try to get towards his goal. So the mechanism he uses with gadgets is that he ends up reusing code that you already have inside your binary. And he tries to reach out towards system calls.
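That consistency is easy to demonstrate. A tiny sketch (mine, not from the talk): under conventional ASLR the libc base moves between runs, but the distance between two libc symbols stays the same, so one leaked pointer gives away every fixed offset.

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* Function pointers printed as data pointers, for illustration. */
	printf("printf at %p\n", (void *)printf);
	printf("strlen at %p\n", (void *)strlen);
	printf("delta  = %ld\n", (long)((char *)strlen - (char *)printf));
	return 0;
}
```

Run it twice: the addresses change, the delta does not. The per-boot libc relinking described later is aimed at exactly this, since after relinking even the delta changes from boot to boot.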
And eventually, of course, he's gonna be operating on the file system space, if possible, or he's gonna be trying to do operations on open file descriptors such as sockets or whatever else. So those are the components. And this is the area where we want to start messing with things. So how do we mess with them? I'm just gonna show a couple of examples. So over 17 years, these are the types of mitigations that we've been squeezing into the system to try to make it harder for things to work in an unauthorized fashion. Many, almost all, of these things do not change any of the normal way that any of you are supposed to deal with software. You're supposed to deal with the API that's provided by the functions. And at some points, you have considerations about the ABI of a function. But almost all of these things operate at a level below that, at things which are not guaranteed by the system to work in a certain way. So we're allowed to change these things. The ones in red are not done yet, but they're work in progress. So these things try to make the aberrant software crash earlier. And of course I'm a heretic, because BSD got this right many, many years ago. Of course, it was perfect. There's no reason to change these underlying mechanisms. But I think that's not true. I think the rules of engagement with the user base have changed: the user base has, first of all, exploded. Inside that user base, criminal elements have shown up. People have gotten clever. They've recognized that our machine architectures aren't so good. And security concerns which didn't exist inside a small community 30, 40 years ago are now here, and we have to deal with them. So we cannot ignore these problems. So I see academic papers all the time coming up with ideas for how to solve these problems. And in the same vein, I'm using OpenBSD to try to see if we can do a better job. So it's research. For me, OpenBSD is all about research in this area. And I want to discover new patterns which will resolve some of these problems, and I have a very, very powerful community and test zone around me: basically the entire base operating system is a test zone, and we have our ports tree, which is something like 8,000 pieces of software. And if we can add a mitigation and it doesn't break 8,000 pieces of software, then we know we're on the right track. And if we find that it actually goes and breaks 20 pieces of software, and it turns out those 20 pieces of software have a preexisting bug, then we know we're on the right track also. So once in a while, a mitigation we work on actually does run into a bit of a wall, and then we backtrack and see if we can find a way out of that. I'm just giving an example, though, that it's not a one-step process. For example, ASLR. ASLR has been in OpenBSD since 2001. I'm just gonna speak about shared libraries and stuff like that. First we randomized the base addresses of all the shared libraries. We recognized pretty soon that, since some of the shared libraries have fairly fixed sizes, the gaps between them still looked a little bit familiar. So two years later, we randomized the order in which we actually mapped the shared libraries. And that was obvious. And then we realized, well, they're packed together. So in 2005, we put little guard zones in between them.
And then in 2017 (I mean, this was just recently) somebody discovered that you could actually reach off the bottom of the stack. And so we added a guard on the bottom of our randomly located stack. And now I'm playing with actually randomizing the internals of shared libraries and the kernel. So, for example, we're relinking libc at boot time. We're relinking libc so that for all the binaries using that libc, the attacker will see: I have a pointer into libc, but I don't know what offset printf is at. I don't know where write is, because the .o files in the library have been randomly relinked on every single boot. So it's just a mitigation strategy. So, to backtrack, what's going on? We are trying to remove knowledge that an attacker can discern from outside. We're trying to remove historical weaknesses of permission models that you're not supposed to rely on. Like, for example, the fact that executable code is always readable. The fact that writable memory is always readable. Static const data used to be in the data segment; we now have it in the rodata segment. We're trying to do things like this. But it's not enough. It's not enough. Doing these low-level changes is not enough. It's great, but it's not enough, because if someone can still bypass all of these low-level mitigations and still succeed at actually executing code, then he can go do system calls. So we've done a pretty good job at getting rid of the low-level knowledge for the attacker, but it's time to start working on more structured control, on the way system calls are used to access the resources. So we're still concerned about that area over there. I've got a slightly different way of looking at the problem. Originally, people used to do stack overflow type attacks by directly putting the code right there. And then we made it so they couldn't execute out of the stack. So those are the mitigations we added in that era. And then when ROP came in, we started privilege-separating programs so the address spaces between the two sides were more different. But then BROP came along and told us we're not allowed to share address spaces, and so we started doing fork and exec. And now we're working on pledge and RETGUARD, and we're gonna try to make the code segment execute-only. And then we're gonna try to see if we can get rid of the polymorphic returns in the text segment as well. Pretty crazy stuff, pretty crazy stuff. Don't know if it's ever gonna finish. But I just wanna talk about this area over there. So privilege separation is a strategy, a design pattern, that we started doing in OpenBSD, and sshd was the earliest. sshd? sshd, Markus Friedl, Niels Provos. 2000? 2000, yeah, yeah. So sshd was basically the first major privilege-separated program which used file descriptor passing. There were two programs beforehand which were privilege-separated. They were Postfix and qmail, okay? qmail did it by file system scanning. Postfix did it by passing objects over pipes. But sshd was the first one which started doing file descriptor passing. So file descriptor passing is the fundamental concept that we use now in many of our daemons. The fundamental idea is pretty simple. It's to take two parts of the program that follow different work domains, move them into separate processes, and have them communicate across an IPC mechanism of some sort. It's really, really simple.
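A minimal sketch of that pattern, assuming the risky work is some untrusted parsing: the parent keeps the privileges and talks to a disposable worker over an IPC channel. Real daemons also chroot, drop privileges, and pass descriptors with SCM_RIGHTS; this only shows the skeleton.

```c
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	int fds[2];
	char buf[128];
	ssize_t n;

	/* One socket for each side of the privilege boundary. */
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
		return 1;

	switch (fork()) {
	case -1:
		return 1;
	case 0:					/* child: untrusted worker */
		close(fds[0]);
		n = read(fds[1], buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			/* the risky string handling would happen here */
			dprintf(fds[1], "parsed: %s", buf);
		}
		_exit(0);
	default:				/* parent: trusted front end */
		close(fds[1]);
		(void)write(fds[0], "hostile input", 13);
		n = read(fds[0], buf, sizeof(buf));
		if (n > 0)
			(void)write(STDOUT_FILENO, buf, n);
	}
	return 0;
}
```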
So if you've got a piece of code that you just don't trust, then you put it off in a separate process and it does the work. But you've got a front end that does the tricky stuff in front. So you separate them out. A really good example: did anybody see that list of 300 tcpdump bugs about a week ago? Yeah? Aren't they hilarious? Aren't they hilarious? I saw it and I ignored it. I don't care about it, because our tcpdump is written this way. And all of those bugs are in the crappy string-handling area of the software, over there. And we've completely locked that down. We used to lock it down using chroot and other things. But even with chroot and everything, the problem with this approach over here is that we have separated the domains, but really it's only in theory. In theory that process could perhaps still do something. It still has access to all the system calls. And I didn't like them having the system calls. So there have been many, many ideas about how to separate system calls and contain them. systrace was one of the original ones. The ideas in it were copied and turned into seccomp in Linux. But I've never liked these, because the abstraction is too low. It forces developers to know what the system call numbers are, what the system calls are which are gonna be used, perhaps by layers and layers of library calls. The abstraction is all the way down. So I wanted to do something which was more conceptual. So that's what pledge is. Originally it was called tame, but I thought the name was kind of tame, and so changed it to pledge. So a pledge request is done by a process at the point after it's done initialization, to lock down, to register the types of functionality it will use in the future. Anything it's not gonna use in the future, it simply omits. The subsets are a little bit hokey-sounding, but they're hokey-sounding because the entire idea is that I'm trying to create a high-level understanding amongst the development community as to what parts of Unix they're actually using. So I had to come up with names for these sub-components: stdio; the ability to read a path or to write a path; file attribute changes; inet, of course, opening and managing sockets; dns, which we've separated out from regular sockets; and proc and exec, which allow you to fork and exec. proc is not called fork because it gives you a couple more things that you usually use when you're spawning a process, things like this. And it's not just done at the actual system call level. There are a few tricky bits that are done in the kernel, because fundamentally we have to make sure that libc is happy. We need to make sure libc is happy. We need to remove some of the warts and weird things that Unix does, so that people don't have to think about them directly. So they're POSIX subsets. This, I think, is very important for people to understand. When you pledge a program, there are no behavior changes, except that the parts you said you wouldn't use are inaccessible to you. There are no error returns from any of these functions which you're not allowed to call. If you use a piece of Unix which you said you weren't gonna use, your process is killed. This stuff fails closed. This is easy for developers to learn. They don't have to go through their code and find places where they didn't check an error return, and then the program keeps on cascading through failure.
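In code, the whole mechanism is one call. A minimal sketch, assuming an OpenBSD system: the promises here are stdio and rpath, so opening a socket afterwards would kill the process rather than return an error.

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* ...any setup that needs broader rights happens up here... */

	if (pledge("stdio rpath", NULL) == -1)
		err(1, "pledge");

	FILE *f = fopen("/etc/hosts", "r");	/* still allowed: rpath */
	if (f != NULL) {
		char line[256];
		if (fgets(line, sizeof(line), f) != NULL)
			fputs(line, stdout);	/* still allowed: stdio */
		fclose(f);
	}

	/* socket(AF_INET, ...) here would kill the process: no "inet". */
	return 0;
}
```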
By not having error returns, we don't create new failure conditions. That's very, very important. So this finally allows us to apply enforcement to the design that we had before. The pledged process over there, the crappy string-handling process with a pledge of stdio, can basically only read or write back up towards its network peer. stdio includes a variety of calls like read and write. It has enough so that you can do mmap. It has the fundamentals that you would need for continued operation of a process, without giving it any ability to allocate new descriptor resources or anything like that. You cannot receive file descriptors. You can only act upon what you already have. See, that's not hard. Anybody can do this. Anybody can do this. And that was what was so strange about pledge coming into OpenBSD. We had 15 new developers show up just out of the blue, sending a pledge for a program. They just showed up. It was easy for them to do, and they were very satisfied with their result. And then they became better and better developers over time. So what it really is, is a second specification of what the program said it would do. That, I think, is a very important point for people to understand. That is what separates it from the other security designs other people are talking about. There are no behavior changes. And, for example, if you define pledge away to zero and compile the entire OpenBSD tree, it'll continue working. The pledges haven't done anything. They're just a specification which will be mandatorily honored. I've inserted a slide about a weird criticism that pledge has received sometimes. Many programs in Unix are shells. Of course the shell is a shell, but there are other programs too. See, the Unix model is built around the shell: you take two programs, you pipe them together, and that's your piping model for your shell. But there are programs which do this spawning internally, themselves. They simply decide that they want to go run some sub-command. So they'll do a fork and an exec. If we ignore that requirement, then those programs cannot be protected with pledge at all. They cannot be protected with Capsicum either. They cannot be protected with seccomp. You're not allowing a process to have a child and continue on from there. So therefore you can't protect the parent, because you have to give the parent the ability to create a child. It's just the reality. Can't take that away. So pledge has exec permissions which permit these operations. An execve turns off the pledge features in the child. It leaves them untouched in the parent, because we anticipate that the new image we're about to execute, since it's OpenBSD and almost everything is pledged, will immediately pledge itself. And during that window, between when it starts up and when it actually pledges, it's not actually doing any real work yet. It's just initializing so that it can get to the point where it calls pledge. So our ksh, for example, with the way it's pledged, if you find a bug in it and you manage to actually start running some ROP code in it, you'll discover that you can't open sockets. You'll discover you can't pass file descriptors. You can't do DNS lookups. That process, that ksh, is not supposed to do those things. And it never has done those things. That type of work is actually done by it going and asking another program to do it. It doesn't do them itself. So these programs are protected, about 20 of them.
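As a sketch of that shell-like pattern, again assuming an OpenBSD system: the parent promises proc and exec so it can spawn children, and the execve resets pledge in the new image, which is expected to pledge itself during its own startup.

```c
#include <err.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
	if (pledge("stdio proc exec", NULL) == -1)
		err(1, "pledge");

	switch (fork()) {			/* permitted by "proc" */
	case -1:
		err(1, "fork");
	case 0: {
		char *argv[] = { "ls", NULL };
		execv("/bin/ls", argv);		/* permitted by "exec" */
		_exit(127);
	}
	default:
		wait(NULL);		/* wait4 is part of the "stdio" set */
	}
	return 0;
}
```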
Also, watching the developers who started playing with pledge, we noticed another development technique show up. And we soon started calling it hoisting. And it was very interesting watching the disappointment-to-glee sequence occur multiple times. A person will take a program, quite a large program, and add pledge to it. And they'll discover soon that the promises they're adding to it are wrapping off the end of the line. The program needs almost all the resources. And so it works, and they're happy, but they're not really happy. But now that they know what the parts of Unix are that the program uses later on, they'll take one of those keywords and, knowing what it does, they'll go through the program and search for initialization code which uses that functionality later on. And they'll try to move it to the top of the program. Hoisting it is the word that we started using. And then, if they're successful, they can remove that promise, and now they're super happy. And then they go and look at the next one they can do the same thing with. So we've had programs that initially were pledged and had something like 20 promises. And after successive refactorings, people have gotten down to four or five. There's an effort going on in our tcpdump right now that's really ridiculously funny. tcpdump's internals are just being rearranged, and when this thing's done, both of the processes will have almost no permissions. It's just incredible. It's just incredible to watch people really get involved in this. I'm also working with two other developers on a file system containment model called pledgepath. Pledgepath calls will be done before pledge. It is a registration mechanism where you register files that you're interested in, files or directories; you register them, and the vnode is grabbed and held. And then later on, when you do your standard open calls or stat calls or whatever else, as permitted by your pledge promises, at that point we'll discover that this vnode was one that you registered that you were interested in, and then it will turn it into a file descriptor or allow you to traverse the directory across it in a safe way. I think it's gonna work out. The semantics are a little bit weird, but it doesn't have any time-of-check versus time-of-use risks. So it fits the metaphor of fail closed. We're not going to encourage people to use it in all scenarios. And it also feels kind of right, because the diff is under 300 lines. That's usually a sign that something's done right. So the strength of pledge is that it's really easy for people to use, and so they will use it. And then when they use it, sometimes they refactor a program to make it better. I just wanna speak about the fact that people will say it can only be used on simple programs. So yeah, we can't use it on Firefox. It is this amorphous piece of junk that has no inherent security in it, let alone privilege separation. It's all basically written in the old style of software, where everything is just a function call away. The layering is just not ready for this type of thing. And so everything must be allowed. In fact, Firefox doesn't even run if you pledge it with all of our pledges, because it fundamentally uses everything. And who knows when it will pull in a shared library to do more? You just don't know. So it can't be done. But in contrast to that, Chrome was designed to be privilege-separated from the start.
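Going back to hoisting for a moment, here is a hypothetical before-and-after in miniature: by moving the fopen above the pledge call, the long-running part of the program no longer needs the rpath promise. The file path and program are invented for the example.

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* Hoisted: open the file during init, before locking down. */
	FILE *cfg = fopen("/etc/example.conf", "r");	/* invented path */

	/* Tightened: the main loop below needs nothing beyond "stdio",
	 * where previously this program would have promised "rpath" too. */
	if (pledge("stdio", NULL) == -1)
		err(1, "pledge");

	char line[256];
	while (cfg != NULL && fgets(line, sizeof(line), cfg) != NULL)
		fputs(line, stdout);	/* reading an open FILE is "stdio" */
	return 0;
}
```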
Chrome was basically the fourth or fifth privilege-separated program in the entire ecosystem: qmail, Postfix, sshd, then something else, and then Chrome. And then we really started our rush with... yeah, but Chrome's outside OpenBSD. Yeah, but Chrome was already before that. Was Chrome before bgpd? Was bgpd before Chrome? Okay, okay, then we're good, yeah, so okay. So by the time Chrome was privilege-separated, we probably had 40 programs privilege-separated. I think we now have about 72 programs privilege-separated. Yeah, yeah. So it was designed to be privilege-separated from the start, and the privilege separation in Chrome was driven by their attempt to use seccomp. seccomp is a system call limiter, but at a very, very low technical granularity, and that existing separation therefore makes it possible for us to determine what high level of abstraction fits into it. And so Robert did the work, and Robert is here. Robert, did it take you about a week? Okay, yeah, about a week, yeah? Yeah, and it's five processes, five potential places where you pledge. Yeah, so some of them are threads, right? But they turn into processes, yeah. Okay, so if large programs designed for this work with it, then I think that's fantastic, okay? So, finally, I'd like to remind you all that the OpenBSD Foundation takes contributions and then helps out the OpenBSD project, really with the hackathons and the resources it gives us. That really is the foundation that allows a community to exist, because I couldn't have done pledge on my own. I was able to do pledge because of this large community, the developers from the ports tree and developers from the base system who lifted it up and applied pledge to all the programs at the same time, and by doing so allowed me to make sure that the semantics worked correctly, and then other kernel developers worked with me to make the semantics correct. So it's a team effort, and it is basically possible because the risk factors of running a group this large and trying to get it together are taken away. So remember to pledge there as well, please. Okay, thank you very much. Are there any questions? As far as I understood pledge, it's sort of a programming pattern. Say what? It's a sort of programming pattern. Yes... no, not entirely. No, pledge is not a programming pattern. Pledge encourages better patterns. Okay, so my question is: is there any paper describing in a formal way how you should use pledge? Well, is there a paper that describes it in a formal way, or documentation that you can refer to in order to be able to apply those pledge rules to your software? There's a manual page, and there are 600 programs that are examples. And those 600 examples in our tree, I think, are probably the best way to approach the problem: you go look for a similar problem and then you follow that pattern. We're not really academics who write papers. I think that's probably the correct answer. There was an abbreviation I didn't know: TOCTOU, if I remember correctly. Time of check versus time of use. Ah, okay, thank you. I actually think that memory-safe programming languages are worth more consideration, because I don't think that ASLR will ever become fully watertight. Okay, so when it comes to memory-safe programming languages, I would like to ask a question. Where's ls, and where's cat, and where's sort, and where's grep, and where's wc, and where's the ksh, and where's the kernel?
I haven't seen any Unix program written in a memory-safe language yet. So where's the ecosystem? If there isn't even an attempt to write the first program in a memory-safe language, how can we even talk about memory-safe languages? I noticed that there's ripgrep, and there's Redox. Okay, so it is not done yet. Okay, so is that grep POSIX-compatible? Is that grep command POSIX-compatible? Does its manual page have exactly all the options that grep has? That's, once again... It is not; it is intended for something else. Exactly, exactly. So that is the problem with that entire ecosystem: they just are not replacing Unix. And so until there is someone who actually goes and replaces Unix with memory safety, we're not there. And memory safety is not the only problem here. Memory-safeness will not solve all of these problems. No, it doesn't solve all problems, but it solves all that blind return-oriented programming and... No, it doesn't. No, no, it doesn't. Because eventually, in that memory-safe thing, they're gonna make a mistake. And then someone's gonna find that mistake and play with it like an oracle against it. And now, since none of the other mitigations will be there, the ROP attacks will be even easier. So, you know, we're human. We make mistakes. And so to go and say that... Both of you say that memory-safe languages are not actually memory safe, which may be true, because there will almost always be some sort of C code or assembler code or a compiler in there that we trust to be memory safe. Yeah, so like the Go ecosystem with cgo: as soon as cgo became popular in Go, they added address space randomization. And yeah, the right way to use Go is not to use cgo, of course. So fundamentally, these mitigations are cheap. They're free. So why not have them? But I agree, address space randomization done minimally is very weak. I agree with that point. You mentioned that some programs don't run with the full pledge set. So what are the things that you can't do with the full pledge set? Some programs start with the... If you do the full pledge set, so every single option, some programs still don't work. You said Firefox doesn't work with that. Yeah, some, yeah, okay. Some of the pledge semantics light up the moment that you actually call pledge. So a pledge with all of the options does not include all the system calls. It's true. I don't have any examples on me right now; I can't think of a clear one. But from the get-go, each of the subsets was designed as a subset that's necessary for its function. And so there was never an effort to ensure that the inclusion of all subsets ends up as the full set. That's the reason; that's basically where we stopped work before that. Oh, and one other thing: in the Capsicum talk, he mentioned that if you have permission to fork, then you can bypass. So when you said sh can't talk to a socket, but it can create a netcat instance, for example. If you fork a process, the child has the same pledge. Exec, so if you have exec rights, sh can create an instance of netcat. Yeah, so sh can fork and exec netcat. Okay, yeah, now what's the problem? That's what it's supposed to do. Yeah, yeah, I'm just saying it can still connect. You can still connect to the internet. By running another program. Yeah, but that's what it's supposed to do. sh is supposed to run programs. So what's... Yes, but the slide said that one of the strengths was that sh cannot talk to the internet.
Yeah, sh, the program, cannot talk to the internet. Technically, yeah, okay. Yeah, it has to run a command, right. But let's say, for example, you take an sh binary; let me create an artificial model, a chroot jail. I've got a chroot jail, and inside the chroot jail is a statically linked sh, and that sh over there is handling some objects. But somehow it's given some data, and in that data someone managed to ROP-attack it and start executing code inside that sh. But that sh is in a jail where there are no other programs it can execute. So how does it open a socket? In our world, that sh cannot open a socket, because it doesn't have a command it can execute that can open a socket. That's the security model we're trying to bring here: the program itself cannot do that. But unfortunately, I'm sorry, sh is supposed to run programs that can do anything. So, like, what do the Capsicum people want us to do? What do they want us to do? What's their proposal? That we should not allow sh to exec, or should we follow their pattern, that you can't Capsicum a shell? Well, that's their answer. You can't Capsicum sh, right? And you can't seccomp bash on Linux. So instead, we've provided a security enhancement which no one else has, and we're receiving criticism for it. It doesn't make any sense to me. It doesn't make any sense. I'm willing to be criticized by people who've done something similar to what we've done, but I'm not willing to be criticized by people who haven't done anything. It's crazy, okay? Thank you. All right, if there are no more questions... one more. Thank you. Very nice talk. So, do I understand correctly that this is entirely a runtime mechanism, or do you somehow statically check that you've pledged the right things? Because, from what you said, you're registering something when you start a process; essentially, at runtime you do something? Yeah. So, I understand that it works really well for operating systems and things that are built in a very close community. But aren't you afraid that, if you develop something at a very high velocity, and you link a library from someone else, and it then suddenly starts to do something that it shouldn't, it suddenly becomes a denial-of-service vulnerability, simply because you're going to be shot down because some dependency of yours now requires something that you haven't pledged? Or do you somehow transitively transfer things that you've pledged if I link some library in? But I guess you don't, because it's per process, right? No, pledge blocks that process and all of its shared libraries. So, the scenario you're talking about, I suppose the answer for that is: you're talking about PAM, to be quite honest, okay? So, shared libraries come in and they do who knows what. But how are we going to protect the process when shared libraries are doing crazy things? If you've got a bug in the program, it's attacked. I'm sorry, I don't want a shared library to do something which I didn't anticipate that program to do. So, PAM is a big problem, yeah? And that's why the BSD authentication system is a much safer system, by separating them, and once again, it's privilege separation. So, it's this combination of privilege separation and pledge which really shows how to build safety designs. And the dlopen model of doing things has no safety; there are no seat belts in that whole model. So, I think, yeah. Yeah, the model's wrong.
Yeah, well, on the operating system level I can see that working very well, but on the application level, I'm not so sure that I would be able to do this for my code, for example. I guess just one more. So, as a specific response to the question: a number of interpreted programming languages do have pledge available within them to call. I believe Lua and Python and several others as well. So those application developers can use pledge in their programs if they desire. All right, thank you very much. Thank you very much.