born some 24 years ago as a fork of NetBSD. And we're currently running OpenBSD 6.6. And the operating system's hallmarks are easy installation, quick installation, a rich set of documentation, and very, very good security features. What about security mitigations? Our next speaker, Steen, is going to have a very close and systematic look at them. Steen. So good morning, everyone. I'm Steen, and today we're going to talk about evaluating OpenBSD mitigations. So I'm going to explain to you why I'm here on stage. And then together we're going to go through an arbitrary subset of OpenBSD mitigations. And at the end, as usual in talks, there is a conclusion. So why am I even here? Because, like every good computer-related story, it starts on IRC. We were discussing with a couple of friends the exploitation of a particular vulnerability. And at some point, someone said, whenever I'm writing a ROP chain, I'm reminded why I run OpenBSD. I didn't know much about OpenBSD. I was like, why? Because OpenBSD is taking security seriously. Wow, that's a short statement. So I did what everybody else would have done. I spent dozens of hours reading OpenBSD source code, mailing lists, design documents, PDFs, and everything. And then I ranted back on IRC. And another friend said, hey, you should do a talk at the CCC about this. It's a good idea. It's never going to be accepted anyway, so why not? So here I am. Yeah. So OpenBSD, it's an operating system, like Windows or Linux, these kinds of things, except it's based on NetBSD. It was forked by Theo de Raadt in October 1995. The goals, taken from the goals.html page of the project's website, are to pay attention to security problems and fix them before anybody else does, and to be the number one most secure operating system. That's really cool. Also, to be as politics-free as possible: solutions should be decided on the basis of technical merit. I really like this. That's nice. 
People have low expectations for my talk, things like misinformation, low-quality talk, false assumptions, international embarrassment, apparently. So we'll try to disappoint these people. More seriously, when the abstract of the talk was published on the 36C3 website, there were a lot of heated responses, like, just look at the innovations web page, just look at the events web page, there are a lot of mitigations. How dare you say that OpenBSD is not secure? That's the point of my talk, actually. OpenBSD has a lot of mitigations, but I want to know if they are working or not. Having a list is not enough. Also, there are almost no exploits for OpenBSD. Well, there are no exploits for TempleOS or Haiku or MenuetOS either, but I'm not sure they are super secure. Also, OpenSSH and OpenSMTPD are really great. I know, I'm using them. But this talk is not about OpenBSD software, it's about OpenBSD security mitigations. Someone else said that all the mitigations are complementary while you're nitpicking on single mitigations. That's just a hand-wavy statement, like, everything is complementary, don't you dare criticise OpenBSD. Someone else said, just read undeadly.org, which is a website dedicated to news about OpenBSD. But on undeadly.org, every time there is a new mitigation, everybody is cheering at it, but I haven't seen anyone criticising a mitigation, saying there is a weakness here, or maybe this, or maybe that. And also, when I was discussing with friends that are writing exploits, they were complaining about people fuzzing OpenBSD, not about new mitigations. Also, someone said that the title is clickbait, but apparently it's not clickbait enough, because the room isn't full. So, security mitigations: how do we measure security? It's really hard. Halvar Flake came up with the mitigation gator, an alligator that is always working to make exploitation harder. Like, yes, it killed this vulnerability, but is it really helping? So how do you measure a good mitigation? 
How do you design good mitigations? Halvar also wrote on Twitter, because apparently he dislikes writing blog posts for some reason, that to have a good mitigation, you should avoid hand-wavy statements like, it makes it harder for an attacker to do that. You should have stuff like: what class of bugs does it kill? Or what CVEs, like, this is killing CVE-1234, for example, or, by how many hours does it delay the publication of a working exploit? Because "it makes it harder for an attacker" is not something that would be acceptable, for example, when designing a cryptographic protocol or a security protocol. Like, yes, I did a for loop here because it makes it harder for an attacker to get the cleartext — that doesn't work so well. Also, for a good mitigation, you should ask your friendly neighbourhood exploit writers about it. Like, hey, I've written this mitigation, it's killing this class of bugs; here is the exploit, can you bypass it, please? Can you try? And also, code review: where is the mitigation coming from? Did you read some papers, or did you come up with the idea yourself? Are other people using it, maybe? What is the code complexity? This doesn't guarantee a good mitigation, but at least I think that good mitigations stem from these good practices. Also, threat modelling — I think this is really important. Someone called Ryan Malone said: threat modelling rule of thumb, if you don't explain exactly what you are securing against and how you are securing against it, the answer can be assumed to be bears, and not very well. So for example, there was a mitigation added to OpenBSD, and the commit said, I quote, "thereby forcing the attacker to deal with the hopefully more complex efforts of" something something. This is not something you want to read in a changelog adding a new mitigation. 
So here we go. In this talk, as I said, I'm going to go with you through a subset, an arbitrary one, of OpenBSD mitigations: where are they coming from, were they invented by OpenBSD, or improved by OpenBSD maybe, what are they defending against, are they effective, are they killing exploits, and how is the outside world doing compared to OpenBSD? For example, Linux has not really been improving, but Windows has been investing a lot of money and effort and time into making itself more secure. Why an arbitrary subset? Because I only got 45 minutes and a bunch of questions, so that's not enough time to go through every single one of them. There are not a lot of sources in my slides because I don't have much space — well, it's a big screen, but still. But there will be a website at the end with all my research material published there. I also put small pufferfishes at the bottom of the slides as a notation mechanism to express my opinions about the mitigations. So yeah, here we go. Attack surface reduction. Ivan Fratric, who works for Google's Project Zero, said in 2019 that empirical evidence suggests that attack surface reduction is one of the most impactful things that can be done for product security. So this is a class of mitigations that should be really effective. Privilege separation and privilege drop: in 1997, a long time ago, Daniel J. Bernstein wrote qmail, and qmail was composed of several processes, with only one running as root. So as an attacker, if you manage to compromise, for example, a process talking to the internet, it's running with low privileges, so it wouldn't automatically yield you a root shell. That's really good. Postfix did the same the same year, and five years later, OpenSSH got privilege separation. So if you have a remote code execution in OpenSSH, maybe it doesn't give you a root shell automatically. That's pretty cool. Almost all OpenBSD-written programs nowadays are using privilege separation and privilege drop. 
Privilege separation is having different processes running with different privileges, and privilege drop is the idea of dropping your privileges as soon as you don't need them anymore. For example, when you're issuing a ping command on Linux — maybe on OpenBSD too, I don't know — the ping binary is setuid, because it needs high privileges to open a specific socket, and then it immediately drops privileges to send its payload. So that's the idea of privilege drop. OpenBSD is using it almost everywhere, as I said; that's why I put five pufferfishes there. It's a really good one, I think. For example, they've got rootless Xorg since 2014. That's really amazing, but they kept it setuid. So this resulted in a trivial local root on OpenBSD. Okay, that's just me being mean. Pledge — I really like this one. In the Linux world, there's something called seccomp that was created in 2002 and merged in 2005. The idea is that a process could enter a mode of secure computation in which only the exit and sigreturn syscalls are allowed, as well as read and write on already-open file descriptors. That's not really convenient for real-world programs. So seccomp-bpf was created in 2012, and with this, you can restrict which syscalls your program can make, for example. OpenBSD created tame, which was renamed to pledge, in 2015. It's a really amazing mitigation, I really like it. It's really simple to use, because contrary to seccomp, it's not based on syscalls, where you have to dig through your operating system's source code saying, hmm, what is this syscall doing? Do I really need it for this Java program? Maybe not, I don't know. Here it's capability-based. So for example, you can say, hey, this program is only allowed to use standard input and standard output, or, this program is allowed to do DNS resolution, and that's it. For example, I think the NTP client of OpenBSD is running different processes with different pledge policies. 
Like, one is allowed to resolve the domain; the other one, more privileged, is allowed to change the time of the system, for example. That's really neat. Also, it's more used than seccomp. seccomp is mostly used in Docker, for example, or Tor, and a couple of other programs, Chrome maybe. But there are 850 calls to pledge in OpenBSD's src, so it's used a lot across OpenBSD's code base, and I think it's impressive engineering work, and it's working very well. Super effective. They've got unveil, which is kind of pledge, but for files. Not really, but the idea is that unveil allows you to restrict the view of the filesystem for a specific program. For example, if you've got a web browser, the web browser only needs to be aware of, say, a folder for the cache and the cookies and another one for downloads, and that's it; the web browser shouldn't have access to your SSH keys, for example. It doesn't abort on violations, so if your program is behaving weirdly, like trying to access your SSH keys, maybe you will get a log message, but the program won't be aborted automatically. It's used by 77 userland programs in OpenBSD. That's kind of a decent number, because OpenBSD with its default install doesn't come with a lot of programs. I think that this one is also really good. It's like AppArmor or SELinux running on Debian and Ubuntu or Android, for example, but I think it's much better, because the policy resides inside the program. So let's say you're using wget to upload some file to the internet. You can make the whole filesystem read-only, because wget only needs to read some file on your disk and then upload it to the internet. Or if you're downloading a picture, the only thing that needs to be writable on your disk is the destination file for the picture you're downloading. You cannot have that with AppArmor, which is more like, yeah, wget can only access this, and that's it. 
So being able to reduce the attack surface depending on what your program is doing, I think it's really cool. Hardware vulnerabilities: we got a lot of them in the last three years. Apparently, you cannot trust your CPU anymore. That's a shame. Here, I think the most interesting thing that I'm going to talk about is the reaction time, because it's usually faster to update your operating system than it is to update your CPU. And for some vulnerabilities, when they were published, researchers managed to write proofs of concept in a matter of hours. So I'm quite sure that serious players are able to have production-grade exploits in a couple of weeks, maybe months. Hyper, hyper, hyper, what? Hyper-threading. So OpenBSD disables hyper-threading support by default, which is a bold move, and a lot of people called them names because of this, like, OpenBSD doesn't care about performance, blah, blah, blah. But they did some benchmarks, and the performance impact is pretty low except for some specific workloads. And this allowed them to dodge a couple of vulnerabilities, for example L1TF in userland, or MDS and its variants like ZombieLoad or RIDL, in userland as well. So maybe this should have been in the attack surface reduction part instead of here. I think it's really cool. It's a really bold move. And I think it's a good indicator that there are some people at OpenBSD that care about security. Spectre v1, v2, v3: the idea of Spectre is that the branch prediction and speculative execution of your CPU have observable side effects. Like, your CPU tries to be smart and infer some things, and an attacker can observe the CPU doing this and extract some data from it. So Spectre v1, which is the first variant of the Spectre attack: the mitigation on Windows is compiler-based; on Linux, it's manually removing some gadgets using a magic macro; and on OpenBSD, there is nothing, so you need to update your CPU if you're worried about this. 
Spectre v2, also compiler-based, with retpolines. Day zero for everyone — that's really impressive — and three months for amd64 on OpenBSD. That's a longish time, all right. KPTI, for Spectre v3, kernel page table isolation: day zero for everyone, one month for amd64. That's pretty fast. Interestingly, and because I'm a mean person: OpenBSD got KPTI after DragonFlyBSD, NetBSD and FreeBSD. That's just me being mean. There are other ones: L1TF, MDS, SWAPGS. Everybody was using the same mitigations, except that OpenBSD was able to dodge a couple of them for userland because they disable hyper-threading — that's really cool. So everybody was pretty much within the first week: day zero, day three, day nine, yeah, that's really good. And for L1TF, interestingly, nine days after the embargo was lifted, Theo de Raadt said that there won't be any mitigations for OpenBSD 6.2 and 6.3, despite them still being supported at the time, which is an interesting statement. He said this on the mailing list; I'm not sure people know about it. Randomization: OpenBSD has a really strong focus on randomizing everything to make the life of an attacker harder. ASLR: the idea of ASLR is to map areas of the address space at random locations. For example, your stack is at a random location every time your program starts, and so are your heap, your libraries and everything. It was invented by the PaX project, which is a patch for Linux, in 2001. And the same year, OpenBSD added a random offset for the stack — that's pretty fast, pretty neat. In 2003, OpenBSD added a random offset for libraries and mmap as well, and it took two more years for Linux to join the bandwagon. Technically, it's ASR and not ASLR on OpenBSD, because the deltas between the different mappings are constant across launches. For example, when you're running your binary the first time, you've got a delta between your library and your stack, and when you launch it again, they are mapped at a different offset, but the delta is still the same. 
It doesn't matter that much; it's still better than per-boot randomization, like Android, iOS and Windows are doing. Also, OpenBSD claims to be the first widely used operating system to have ASLR, but there were Hardened Gentoo and Adamantix before. I don't know if Hardened Gentoo was more popular than OpenBSD, because they didn't publish numbers, but I'm not sure the statement is true for OpenBSD. Position-independent executables: here it's not only the stack, the heap and the libraries that are mapped at a random offset; every time you're running a program, the binary itself will be mapped at a random offset, removing fixed points for the attacker to see where things are, what to overwrite or where to jump, this kind of thing. Also invented by PaX, in 2001. Hardened Gentoo enabled this for the whole userland in 2003, and Fedora and Red Hat Enterprise Linux used it for security- and network-facing binaries, because there were some performance concerns about enabling this mitigation. OpenBSD got support for PIE five years afterwards. 2011: PIE by default on iOS and OS X by Apple. There is a lot of text on this slide, because I think that here the timeline matters. Also 2012: Android — that's really cool. And 2012: PIE enabled by default on OpenBSD — that's pretty nice. Except that on OpenBSD's website, it's written that OpenBSD 5.3 was the first widely used operating system to enable it globally by default, on seven hardware platforms. Android was first for six different architectures, Fedora was first for eight different architectures, and there were also Hardened Gentoo and Adamantix — maybe they had fewer users than OpenBSD at the time — but Apple also enabled it for OS X and iOS, and I'm quite sure those are more mainstream operating systems than OpenBSD. It's still an amazing mitigation. KARL — I really like this one as well. So, July 2017: OpenBSD relinks the kernel objects in a random order at every boot. 
Like, if your kernel was a giant puzzle, the pieces would be shuffled into a different order on every boot. So when you reboot, your kernel looks entirely different. On Ubuntu, by contrast, every time you're rebooting, the kernel is the same. So as an attacker, I only have to have the same version of Ubuntu as yours, write my exploit for the kernel, it works, and it will usually work on your machine too. So it's pretty nice. It kills single pointer leaks and relative overwrites: if I can leak a pointer into your kernel, I know where that pointer is, but since the kernel changes on every boot, it doesn't give me much information. And relative overwrites as well: if I'm able to write whatever I want, but only in a relative manner, I don't know what I'm going to overwrite. Now, it's really only useful against attackers that don't have an arbitrary read or a CPU side channel. So, yeah. Also, the debuggability of this is really horrible, because everybody has a different kernel. So if my OpenBSD crashes and I want to send you the stack trace, I will have a different one than you do. Or maybe I'm not a power user, I don't know much about this, I'm just sending you a screenshot, and there is no way for you to know what my kernel layout looks like. Also, it doesn't work very well with trusted boot, because you've got a different kernel after every reboot and you would have to sign it every time, which would kind of defeat the purpose of trusted boot. But OpenBSD, I think, doesn't really care about trusted boot, so it's all right. This one is interesting as well: they are randomizing the libc and libcrypto at boot time, like they do for the kernel. That's pretty nice — 2016. It also kills single pointer leaks and relative overwrites. If an attacker has an arbitrary read, this mitigation is moot. Also, this one is vulnerable to BROP, blind ROP, but OpenBSD has some measures in place to make blind ROP a bit more difficult. 
It's useful against a remote attacker, but since it's per boot, it's usually useless against local attackers. Library order randomization: here it's not randomization inside of libraries, but the libraries are mapped in a different order every time. 2003. This was also done by Android at some point, by default. It's a smallish improvement over ASLR, but when you've got a single leak into a library that is large enough, for example the libc or libcrypto, there are usually enough gadgets there that you don't need to look into other libraries. Also, the entropy is pretty terrible, because as an attacker, I don't care about figuring out the particular order of all the libraries; I only need my libc to be the first one, for example. So if I've got n libraries mapped, I've got one chance out of n to hit exactly this library on the first try. So it doesn't hurt to have this, but it's not very effective. W^X: the idea of the mitigation is to have memory sections either writable or executable, but never both at the same time. It's a pretty old mitigation. It was, I think, first made public by Casper Dik for Solaris in 1990-something, and Solar Designer wrote a patch for the Linux kernel for this as well. It prevents the introduction of new code, because an attacker cannot just put his code into a writable section and directly jump onto it. This used to be the way in the 90s, but nobody is doing this anymore, because of these mitigations. OpenBSD was pretty late to the party; it took them a couple of years: 2002 for userland, 2015 for kernel land. Amazing mitigation, except that it's lacking things like PaX's MPROTECT — from PaX, and NetBSD nowadays, I think — or ACG on Windows, or the hardware equivalent in the Apple world, which is KTRR. The idea of these is that the operating system will keep track of the memory allocations: for example, if you are allocating a page as writable, it can never, ever be mapped as executable. 
Even if you map it as PROT_NONE, for example, and then map it as executable. And this really prevents the introduction of new arbitrary code, because otherwise, as an attacker, if I've got some ROP gadgets and I've got code execution, what I can do is allocate a section of memory, mark it as writable, put my payload there, map it as PROT_NONE, and then map it as executable and jump onto it. So this is basically the attacker adding new code inside the binary. And when you've got PaX's MPROTECT and these kinds of things, you're not allowed to do that. So you have to write your whole payload in ROP, or use data-only attacks, but you cannot bring your own shellcode anymore. Yeah, the W^X refinement in 2019: so Theo said that he wanted to block direct syscalls from this area, "forcing the attacker to deal with the hopefully more complex efforts" of using a JIT or, probably even harder, discovering the syscalls that are directly inside the randomly relinked libc. What is the point of this? It's a subset of the PaX MPROTECT thing I mentioned previously. They are blocking syscalls from executable memory: if an attacker has executable memory, they cannot issue syscalls from it. And the further refinement was to block syscalls from memory that doesn't have the msyscall flag. So the operating system will mark a particular section of your address space when your binary is running, and you can only issue syscalls from this region. A couple of days ago, Samuel Groß gave a talk about iMessage exploitation, and his exploit would have entirely bypassed this mitigation — which isn't even present on Android — because when, as an attacker, you've got enough control to map an area as writable, put your code there, map it as executable, jump onto it and then do a syscall, you usually have enough control to just ROP your way to the syscalls, wherever they are, because you usually have an arbitrary read anyway. This mitigation is pretty useless. I think this is the juicy part of the talk. 
It's about other memory corruption mitigations. Userland heap management, July 2008: otto malloc, by Otto Moerbeek — sorry for butchering your name — and Damien Miller, is an amazing piece of software. Out-of-band metadata: when your allocator allocates some stuff, the metadata about the allocated data is kept separately. So as an attacker, if you've got an overflow, for example, you're not able to mess with the metadata. Also, read-only structures: as an attacker, even if I've got an arbitrary read and an arbitrary write, there are some structures that I cannot mess with. Quarantine, the delayed free: the idea is that once your program doesn't need some memory anymore, maybe you want to free it, but the free doesn't happen immediately; the section will be put into quarantine, and at some point it will be freed. This helps to mitigate use-after-free, because as an attacker, it's a bit harder to know when the memory will actually be freed. Junking: once memory is allocated, junk data is put there. So as an attacker, if I try to immediately look into this memory, I'm not able to leak some things. Canaries, to detect linear overflows or linear underflows: there is a secret value put after or before the buffer, and when I, as an attacker, overflow it, the program will notice at some point that the canary value has changed. Page alignment: the idea is to align your allocations on pages, so as an attacker, when I've got an overflow, odds are that I will fall into a page that is not mapped. Guard pages: like canaries, but instead of putting secret values, you're putting entire pages before and after, mapped as PROT_NONE, for example, so when the attacker touches them, everything explodes. As usual with OpenBSD, everything is randomized everywhere. It's a really cool piece of software. 
Unfortunately, it's a bit slow compared to, for example, Scudo, which is the Google hardened allocator that they plan to use on Android. Some benchmarks show that otto malloc is 12 times slower, but apparently the OpenBSD people care more about security than about performance, and that's entirely fine. Read-only relocations: this one is trickier to explain, but basically, the idea is that when your program needs, I don't know, a function from another library — like, let's say you want to display some text, so you use printf, and your binary doesn't implement printf — it will ask, hey, where is this printf function again? And the runtime linker will say, oh, let me look; here it is; there you go. And the program will take the function's address and put it in a small cache, so the next time it needs to call printf, it can just look in the cache and call printf. The idea of read-only relocations, which was created by Red Hat, is to make this cache read-only. Because as an attacker, if the cache is not made read-only, what I can do is swap the pointers there. For example, the next time you're going to call printf, since I messed with the cache, you're going to call system instead and give me a shell. So the idea is to have this read-only, but the caveat is that when you're starting your program, you need to resolve everything up front, because you cannot dynamically change the cache anymore. But there is a plot twist: OpenBSD still has lazy bindings, which means they're resolving things at runtime, while still having a read-only zone. So it's a bit weird. The way they are doing this is by adding a new syscall called kbind. And the idea of kbind is that it allows the program to have an arbitrary write into any memory that is mapped in the address space. So even if it's read-only, the program can still write there. So to prevent an attacker from using it, there is a call-site verification. 
Like, the first time it's called, the operating system will remember where the function was called from, and it also uses a magic cookie, to make sure that the caller knows the magic value and is allowed to use it. Unfortunately, you can just ROP your way around the call-site verification, and the cookie value is moot when you've got an arbitrary read. So I think it's a dangerous syscall, and the right way would have been to have immediate bindings instead of still supporting lazy bindings, which are a thing from the past. Trapsleds — this one is hilarious. So Todd Mortimer sent a patch to replace the padding between functions, which used to be NOPs, with traps. And the idea is to remove NOP sleds from programs and libraries and make it harder for an attacker to hit any ROP gadgets or other instructions after a NOP sled. Nobody is using NOP sleds with ROP. NOP sleds were used back in the day, when the stack was executable. People jump precisely to their gadgets nowadays. You can look at every exploit out in the wild: nobody is using NOP sleds. Also, Microsoft Visual Studio has had this feature since the 2010 edition and never branded it a security feature. Also, OpenBSD has an obsession with removing ROP gadgets. They are doing this by changing the register selection algorithm — like, instead of using EAX, for example, they will favour EBX instead, why not. They're also replacing instructions: so instead of moving A to B, they are exchanging A and B and then exchanging them back. They are forcing alignment with trapsleds: there is a jump over a trapsled and the ret instruction, to prevent an attacker from jumping into the middle of an instruction and hitting the ret afterwards. Also, they've got RETGUARD to protect against unaligned ret usage, but I'm going to discuss RETGUARD a bit more later. ROP gadget removal: why would you do this? Because they are using a script called ROPgadget, which was written by Jonathan Salwan. 
It was written as a proof of concept, for fun. Nobody is using it, except maybe during CTFs as a first try, and usually it doesn't work, because the heuristics it's using are pretty simple. And the way they are measuring success is that they are running ROPgadget on a kernel binary to generate a ROP chain doing a userland execve: ROPgadget managed to generate a full chain before their mitigations, and with the mitigations applied, ROPgadget doesn't manage to generate a complete chain anymore. That's a weird metric. Also, apparently, all their attempts to remove gadgets are reducing the number by 11% on amd64. That's not a lot. When you're writing a ROP chain, you usually need, like, I don't know, a dozen gadgets, maybe 20, but not that much. And 11% doesn't make any difference: there are still dozens of hundreds of gadgets lying around. They claim that there are no more ROP gadgets on arm64. That's amazing. So I've run ROPgadget on the kernel binary and removed all the ret-based gadgets, and there are still 12,891 JOP and COP gadgets — jump-oriented and call-oriented ones — everywhere. Also, as I mentioned, it doesn't kill JOP, COP, or all the ret-to-csu, ret-to-libc, return-to-anything techniques. And Theo said that once they address the ret problem, everything else will be easy to address, and also that "in any case, a substantial reduction of gadgets is powerful" — except it's not. There was a paper published in 2019 by Michael Brown and Santosh Pande, called "Is Less Really More?", that explains why removing ROP gadgets doesn't usually improve security and sometimes even worsens it. Amusingly, GCC used to have an option called -mmitigate-rop, but it's now removed because, I quote, "this option is fairly ineffective; nobody seems interested to improve it; deprecate the option so you won't lure developers into the land of false security". 
Stack cookies: so, Crispin Cowan and his friends wrote StackGuard in 1997. The idea, as mentioned previously with the heap canaries, is to have a secret value somewhere, and when the attacker tries to overwrite things in memory, the attacker will also overwrite the cookie value, which allows the program to detect that something is wrong. Here it's usually on the stack: when you're calling a function — because while executing your program, at some point you might want to call a function — you need to know where to come back to. So the idea is that when you're calling a function, usually, at least on amd64, you're putting the return address on the stack; the function does its things, and then it takes the address back and returns to the call site. So as an attacker, if I can overwrite the return address, I can make the control flow of the program point wherever I want. OpenBSD added stack cookies, in userland and in kernel land, in 2003, six years after their invention. And amusingly, they were using a segment filled with random data for this, and they marked it as static const — so the compiler was smart enough to say, huh, this is a static const segment, so it must be zero — and it simplified the cookie comparison away. So the stack cookies in OpenBSD were ineffective between 2016 and 2017. RETGUARD, 2017 edition: the idea was to XOR — do an exclusive OR on — the return address at the top of the stack with the stack pointer value itself. That's an interesting move, except it doesn't protect against partial overwrites: if you can partially overwrite a pointer, you don't care about this. Also, if you've got a read primitive on the heap — because on the heap there are stack pointers, usually — you defeat the cookie and the ASLR. And in kernel land, if you can leak some part of the kernel stack or the kernel text segment, you get the cookie for free. So this was not the smartest move ever, and that's why they improved it with RETGUARD, 2018 edition. So here is some assembly — assembly, singular. 
The idea is to move the RETGUARD cookie — this comes from the segment with random data — and XOR it with RSP at the beginning of the function; at the end of the function there is a verification that the value is still the same, then a big jump over a trap sled, and the return. Nice, that's nice. But R11 is still spilled on the stack when you're calling other functions, so if you've got an arbitrary read you can just leak the cookie values of all the functions above you. There is one cookie per function, which is a small improvement. Also, the cookies are stored in a dedicated segment, so you cannot overwrite them, which you can do in other operating systems. And — I think this is really interesting — the integrity is on the return address itself. It's not just a cookie anymore shielding the return address below it; the integrity is on the return address itself, so even if you've got an arbitrary write, you cannot really mess with this. It's still only a small improvement over — sorry, over regular stack cookies — because when you've got an arbitrary read, it's still game over: you can leak everything. NULL pointer dereference in kernel land. The idea is that when you've got a NULL pointer dereference in kernel land — a pointer pointing to zero, say you forgot to initialize a function pointer. Since address zero is in userland, what the kernel will do is jump to userland, so as an attacker you just have to map your shellcode there, trigger the NULL pointer dereference, the kernel will jump there, and you've got code execution. The PaX project killed this, apparently twice: with KERNEXEC in 2004 and UDEREF in 2006 — the details here don't really matter. Ilja van Sprundel gave a talk at the 23C3 — that's pretty old, 2006 — called "Unusual bugs", where he demonstrated some NULL pointer dereference exploits.
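Going back to RETGUARD for a moment, the XOR scheme can be sketched as arithmetic — a simplification loosely combining the 2017 (XOR with RSP) and 2018 (per-function random cookie) editions, with made-up addresses:

```python
import secrets

# RETGUARD-style sketch: the prologue stores ret ^ cookie (here also mixed
# with the stack pointer, as the 2017 edition did with %rsp); the epilogue
# re-XORs to recover the return address. An attacker who overwrites the
# stored value without knowing the cookie gets an uncontrolled return target.
FUNC_COOKIE = secrets.randbits(64)        # per-function random value

def prologue(ret_addr: int, rsp: int) -> int:
    return ret_addr ^ FUNC_COOKIE ^ rsp   # what actually sits in the frame

def epilogue(stored: int, rsp: int) -> int:
    return stored ^ FUNC_COOKIE ^ rsp     # recover the real return address

rsp = 0x7FFD_0000_1000
ret = 0x4141_4141
stored = prologue(ret, rsp)
assert epilogue(stored, rsp) == ret       # honest round trip succeeds

# Tampering: without FUNC_COOKIE, the recovered address is effectively random.
print(hex(epilogue(0xDEAD_BEEF, rsp)))
```

The flip side, as said above, also falls out of the arithmetic: anyone who can *read* a stored value and knows the corresponding return address recovers `FUNC_COOKIE ^ rsp` by a single XOR, so an arbitrary read still defeats it.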
This was mitigated by Linux the year after with mmap_min_addr: they are preventing userland from mapping things at the very beginning of the address space, to stop an attacker from putting shellcode there. In 2007, everybody was copy-pasting exploits for OpenBSD; it was a really fun time to be alive. In 2008, OpenBSD prevented mapping the first page as well. Theo de Raadt, who is the person leading the project, said he's not super proud of the solution, but that it seemed best faced with a stupid architectural situation, and that "it seems that everyone else is slowly coming around to the same solution." It took them two years to implement this, so I think it's the other way around: they were slowly coming around to the solution. SMEP, SMAP, and all their friends. The idea here is that you can enforce, at the CPU level, that code running in supervisor mode cannot access userland. For example, maybe you don't want your kernel to access things residing in userland, to prevent people from writing their payload in userland, triggering a bug somewhere in the kernel, and then jumping to userland — forcing the attacker to put the payload directly into kernel land instead. PaX UDEREF was kind of an emulation of this, implemented in software. Then Intel, and later AMD, released hardware support: SMEP is about execution, so the kernel cannot execute stuff coming from userland, and SMAP is about access, so the kernel cannot access things coming from userland. It was added in 2012; everybody had support for it. Amazing. And then someone published a cool OpenBSD SMEP/SMAP bypass, because they forgot to clear a magical flag on interrupt and kernel entry — so these protections were effectively void for five years. It was a really fun bug. MAP_STACK: it's present on Linux, with almost no practical use, besides that when you're cat'ing /proc/<pid>/maps you will see "this memory region here is the stack." Windows used to have a mitigation validating that the stack pointer points at the actual stack — because it's Windows. They removed it in 2012 because it was useless.
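The NULL-page mitigation discussed above can be poked at from userland. A minimal sketch, assuming a Linux x86-64 machine (the constants are Linux values; on other systems they differ), which tries to map the page at address zero the way the 2006-era exploits needed to:

```python
import ctypes

# Linux analogue of the mitigation: vm.mmap_min_addr prevents unprivileged
# processes from mapping the first pages of the address space, so a kernel
# NULL-pointer dereference no longer lands on attacker-controlled memory.
PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_FIXED, MAP_ANONYMOUS = 0x02, 0x10, 0x20
MAP_FAILED = ctypes.c_void_p(-1).value

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

def try_map_page_zero() -> str:
    res = libc.mmap(ctypes.c_void_p(0), 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0)
    if res == MAP_FAILED:
        return "denied"   # the usual unprivileged outcome since mmap_min_addr
    return "mapped"       # page zero mapped: a kernel NULL deref hits our data

print(try_map_page_zero())
```

On a default-configured modern system this prints "denied"; before these mitigations, "mapped" was the normal result, which is what made the 2007 copy-paste exploits so easy.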
The idea was to check that the stack pointer was pointing at something mapped as stack, upon every single syscall. There were generic bypasses for this, mostly published by Ivan Fratric. Idea number one was to write a stub that calls mmap with the MAP_STACK flag, put your payload there, and jump to it. Or, before every syscall, you can just make the stack pointer point to the stack, do your syscall, and then make it point back to something else. OpenBSD improved this — they didn't cite the Windows mitigation or any paper — and they're checking the stack pointer upon every syscall, but also on page faults. I think this is a really cool improvement. Except that there are some OpenBSD-specific bypasses, which are left as an exercise to the crowd. Now, other mitigations that didn't fit into the previous categories, but that I think are worth mentioning. SYN cookies: Daniel J. Bernstein again, in 1996, with Eric Schenk. The idea is stateless storage of the SYN handshakes. When you are establishing a TCP connection, you send a SYN, the server replies SYN-ACK, then you reply ACK, and then you can exchange data — it's amazing. But if a client sends SYN, SYN, SYN, SYN, SYN, SYN, the server needs to keep track of everything, and at some point it will just blow up. So the idea of SYN cookies is to encode the half-open connection state into the server's initial sequence number, so it can be kept statelessly. Anyway, it landed in Linux the same year, enabled by default; everything's super great. And OpenBSD implemented it last year. Nowadays it's kind of useless, because everybody on the internet can trivially DDoS you with terabytes of data for just a couple of bucks by renting a botnet. Maybe it's useful on your LAN, but if someone is DDoSing you on your LAN, you usually have bigger problems. MAP_CONCEAL — FreeBSD did it first: the idea is to be able to mark some sections of the memory of your binary as "please never touch the disk."
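The stateless SYN-cookie round trip described above can be sketched like this. The encoding is an assumption of the sketch (real TCP SYN cookies pack MSS bits and a coarser timestamp into 32 bits); `SECRET` stands in for a per-boot server key:

```python
import hashlib, hmac, os, time

# SYN-cookie sketch: the server derives its initial sequence number (ISN)
# from the connection 4-tuple, a slow-moving time counter, and a secret key.
# When the final ACK arrives, it recomputes the cookie instead of looking up
# stored state - so a SYN flood costs the server no memory at all.
SECRET = os.urandom(16)

def syn_cookie(src, sport, dst, dport, t=None) -> int:
    t = int(time.time() // 64) if t is None else t   # 64-second counter
    msg = f"{src}:{sport}-{dst}:{dport}-{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")         # fits a 32-bit ISN

def check_ack(src, sport, dst, dport, ack_seq: int) -> bool:
    t = int(time.time() // 64)
    # The client acknowledges ISN+1; accept current and previous counter
    # values so a tick between SYN-ACK and ACK doesn't drop the connection.
    return any(syn_cookie(src, sport, dst, dport, t - d) == ack_seq - 1
               for d in (0, 1))

isn = syn_cookie("203.0.113.5", 40000, "198.51.100.1", 80)
print(check_ack("203.0.113.5", 40000, "198.51.100.1", 80, isn + 1))  # → True
print(check_ack("203.0.113.5", 40001, "198.51.100.1", 80, isn + 1))  # → False
```

The second check fails because the source port differs, so the recomputed cookie no longer matches — the whole handshake state lives inside the sequence number.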
So for example, when you've got a crash — a crash dump — maybe you want to give the core dump to the developer, but you don't feel comfortable leaking your secret keys, for example. So you map them as NOCORE or DONTDUMP or CONCEAL, and they will never be put on the disk, so you can safely give the core dump away. 2012 for Linux, 2019 for OpenBSD — and they bragged about it. Ted Unangst, who is a core OpenBSD developer, said the name "conceal" was chosen to allow some flexibility, like prohibiting ptrace: the idea is to keep secrets from escaping into other programs. It seems that there is a threat-model issue here, because if you've got ptrace control of a program, you can just rewrite the code of the program to access the things you store in MAP_CONCEAL, or mount some data-only attack, I don't know. Development practices — I think this is important. They don't have any bug tracker; everything is done by email, so you don't know if somebody is assigned to your bug or that kind of thing; you have to go through the mailing lists. There are no public code reviews: when they are pushing code, it says "Theo said OK", or "Bob said OK", or "Ted said OK" — it's literally "ok theo" at the end of the commits. There is no justification, context, or threat model for mitigations. It's like, "hey, I added a mitigation, it makes the life of an attacker harder." There is no paper, there is no threat model, there are just hand-wavy statements. Also, security issues: when they issue a security patch, there is an errata web page with "here are the patches, here are the signatures", so you can verify it's trustworthy and apply the patch. But there is nothing about: is it a remote vulnerability? Are there exploits in the wild? Can I have a write-up? What is the context here? Do I need to reboot? Is it in kernel land? Is it in userland? No — just apply the patch and reboot. This doesn't scale very well: when you've got hundreds or thousands of machines running OpenBSD, you cannot reboot all of them instantly.
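The MAP_CONCEAL idea above has a close Linux analogue that can be sketched directly — a sketch assuming a Linux system, using `madvise(MADV_DONTDUMP)` (the Linux value 16) to exclude a region from core dumps:

```python
import ctypes

# Linux analogue of MAP_CONCEAL / FreeBSD's MAP_NOCORE: after marking an
# anonymous mapping MADV_DONTDUMP, its contents are excluded from core dumps,
# so a secret stored there won't leak into a crash dump handed to a developer.
MADV_DONTDUMP = 16            # Linux-specific value
PROT_RW = 0x1 | 0x2           # PROT_READ | PROT_WRITE
MAP_PRIVATE_ANON = 0x02 | 0x20

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.madvise.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

addr = libc.mmap(None, 4096, PROT_RW, MAP_PRIVATE_ANON, -1, 0)
assert addr not in (None, ctypes.c_void_p(-1).value), "mmap failed"

rc = libc.madvise(ctypes.c_void_p(addr), 4096, MADV_DONTDUMP)
print("concealed" if rc == 0 else "madvise failed")
```

Note that, exactly as the ptrace objection above says, this only protects the *dump on disk*: a debugger attached to the live process can still read the region.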
They've got no continuous integration. They've got stable releases and they've got -current, and -current is broken from time to time: my VM stopped booting at some point every month or two, something like this. Apparently that's accepted there. Also, they're using CVS as a version control system instead of other things, so they have almost no branches. 50% of the commit messages are less than 10 characters long — "hello world" is 11 characters long. Three quarters of the messages are less than 20 characters. So if you write "hello world, hello world" as a commit message, it would be longer than three quarters of the commit messages of OpenBSD. Conclusion — right on time. OpenBSD has invented some really cool stuff. I really like ottomalloc; I really like what Damien Miller is doing with hardening OpenSSH, for example, and other things. They've also got an entropy-gathering syscall that I didn't mention. So yeah, they've got some good ideas. They improved some ideas of others, sometimes without giving credit: for example, they've got tame — pledge. They've got password hashing: they invented bcrypt, that's amazing. They've also got some useless mitigations that either add complexity or are just hilarious: trapsleds, for example, the whole W^X refinement, the weird ROP-gadget-removal ideas, everything. And I think this could likely be improved with systematic security engineering: doing more tests, maybe writing threat models, and everything — because the SMAP bypass, for example, shouldn't have lived that long. Also, nobody would create a cryptographic primitive today the way that OpenBSD is doing security development; this wouldn't be acceptable. Why is it acceptable to develop mitigations this way? Proper mitigations, I think, can stem from proper design and threat modeling, with strong, reality-based statements like "it kills this vulnerability", "it kills this CVE", or "it delays the production of an exploit by one week."
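The commit-message statistic above is easy to reproduce. A sketch, assuming you've extracted first-line messages from a repository mirror (e.g. with `git log --format=%s`) — the sample data here is made up for illustration:

```python
# Sketch of the commit-message length measurement: given a list of commit
# message subjects, compute which fraction falls under a length threshold.
def short_fraction(messages, limit):
    """Fraction of messages strictly shorter than `limit` characters."""
    return sum(len(m) < limit for m in messages) / len(messages)

sample = ["sync", "typo", "fix build", "whitespace",
          "unbreak the i386 build after the last commit",
          "document the new flag in the man page"]
print(round(short_fraction(sample, 10), 2))   # → 0.5
print(round(short_fraction(sample, 20), 2))   # → 0.67
```

Run against the claimed OpenBSD numbers, half the history would clear the 10-character bar and three quarters the 20-character one — which is the comparison with "hello world" being made above.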
And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking. Thank you very much. Also, since I didn't put a lot of sources in there, I made a fancy website with a crazy domain name. It doesn't address the question "is OpenBSD secure or not?" — I didn't address this in my talk either, because I think it's important to empower and help people to answer this question by themselves. Thank you, Stein, for this definitely systematic review. So let's go to the questions and answers. Do we have any questions from the internet? I see a no. So are there any questions here in the hall? I don't see any people at the microphones. Nobody? Some more time to think about some questions. Guys, no questions? Microphone number two is our starter. Thanks. When you showed response times regarding some of the mitigations, do you know if the OpenBSD people had access to the information ahead of time, like the others? Because Linux and Windows, I would assume, would have access to the information to be able to write a mitigation in time, to deliver a zero-day mitigation, whereas OpenBSD, I'm not sure. It's an interesting question. I didn't mention embargo handling on purpose, because apparently it's a sensitive topic. Theo says vehemently that OpenBSD never broke any embargoes, but they are known for not playing nice with embargoes, so nowadays they are usually excluded from embargoes. So they weren't included in the disclosure process; they just had to deal with it in a rush. OK, we got a question from the internet. Yes, thank you very much. There's one question: do you have a response to the statement of OpenBSD developer Brian Steele that MAP_STACK is something very different from similar implementations in Linux and Windows? MAP_STACK — this is the mapping; maybe I can show the slide again. Oh, this is confusing. Yes, MAP_STACK, as I said, on Linux is just used for cosmetic purposes, and on Windows it was removed.
But MAP_STACK is the same idea of verifying the stack pointer upon every syscall; OpenBSD improved it by also doing it on page faults. It's an improvement, but it's still not a tremendous mitigation. OK, we have a person standing at microphone number four. So, how do you compare pledge with the capability system on Linux? Because there is such a thing on Linux — how is it different? On Linux, what do you mean by capabilities? Like, for example, there is this CAP_NET_BIND or something that grants the capability for a binary to create a raw socket or something — I can't remember the exact name. Yeah, I see what you're talking about. They're really confusing: there are a lot of them, and the documentation is scarce. I think that spender from grsecurity wrote a blog post detailing all the capabilities and how much of a mess they are. Maybe it's efficient, I don't know, but since it's not really usable by normal human beings, I think it's not a good mitigation. What I really like about pledge is that you can just say "input/output", and that's it — or "network", and that's it. You don't have to mess around with a lot of documentation everywhere. Do I need this particular type of exotic socket in my Java program? I don't know. Thank you. All right, another question from microphone number five, please. There used to be this developer channel on ICB — is this something which is still active in OpenBSD, or did they switch to IRC now? I don't know much about the OpenBSD ecosystem and everything besides the mitigations, and I didn't interact with the community at all, so no idea. Thanks. Any more questions? So there is one more question from the internet. Sorry. Yes: how does OpenBSD compare to FreeBSD in the context of your talk? I don't know much about FreeBSD — maybe I will do a talk next year. No, more seriously, there is the HardenedBSD project, which is a soft fork of FreeBSD trying to improve the security of FreeBSD, but I don't know much about it.
OK, any more questions in here? We still have time. Internet, no questions anymore? Well, I'm then going to close this session here, and thank Stein again with a nice applause, please.