I'm going to be talking about LibreSSL more than 30 days later. So LibreSSL was officially announced to the world just about exactly five months ago. And Bob gave a pop-up talk at BSDCan about everything that happened in the first 30 days. So for those of you who weren't there, I'm going to review a little bit of that material, but I think it's better to watch it online if you care. Then I'm going to move on to some of the newer updates and what's been happening in the four months after those first 30 days. So let's review. What is OpenSSL? Because LibreSSL is a fork of the popular OpenSSL crypto and TLS library. And TLS is the standard name for the SSL protocol, which is that secure transport protocol that was developed in the 90s. It's used for HTTPS, most notably, but also IMAP, SMTP, and pretty much everything. Probably most client traffic for end users on the internet today is encrypted with TLS. So it's pretty important. And there's a couple of implementations, but really it boils down to two. OpenSSL was the de facto standard for servers and also for a lot of clients. The main alternative client library is NSS, the Network Security Services library, which is used by browsers such as Firefox and Chrome. But if you didn't upgrade NSS last week, you should go do that. And so there's bugs in all TLS libraries. And that's just because cryptography in general is very hard, and TLS in particular is pretty difficult to get right. The TLS protocol has just been hacked on and hacked on and hacked on over the years, where they find bugs and then they try to fix them, and then they find a bug in the workaround, and then they try to fix that. And then they tell you to use CBC mode. No, use RC4. No, don't do that either. Use AES, but not in block mode. Do something else. And so there's all these workarounds to defeat and mitigate many attacks, and it leads to a pretty ugly implementation problem.
Now, as I mentioned, OpenSSL kind of dominates in the server space, so there's pretty much a monoculture there. And that isn't strictly a bad thing. Because hey, let's put all our eggs in one basket, and then we'll watch that basket very carefully. The problem is nobody was watching the basket. And then I think that also led to a mentality where, well, I'll use OpenSSL, and yeah, I'll get burned by some bugs, but everybody else will get burned by bugs too, so I won't be any worse off than anybody else out there. Kind of a bizarre mindset, but it's kind of how people do things. So let's fast forward past 100 other bugs in OpenSSL to Heartbleed, also known as the worst bug in the history of ever, although that title is pretty heavily contested. I hear Bash has announced it is entering the contest for worst bug ever. So what was unusual about Heartbleed? As far as I can tell, it was kind of unique because it was a vulnerability, instead of an exploit, that got a name and a website and a logo. Previously, we'd seen the Internet Worm, Code Red, Blaster, Stuxnet, and really the only difference there is we named the exploits; we didn't name the vulnerabilities. I mean, can anybody give me the names of the vulnerabilities that Stuxnet exploited? But, you know, Heartbleed can't even be considered the worst OpenSSL vulnerability. Previous bugs, of which there have been numerous, have resulted in remote code execution. In fact, about 10 years ago, there was a worm called Slapper, and it exploited a bug which had the sexy title of SSLv2 client master key buffer overflow. And that gave up not just the encrypted data on your Apache server. It also gave up the private key, because remote code execution gives up everything. It gave you a remote shell on the server, and then the worm propagated itself to other servers. So yeah, that was probably worse than Heartbleed. But it didn't get headlines. So yeah, whatever.
Now, the reason I'm kind of going on and on about this is just to reinforce that LibreSSL is not the result of the worst bug ever. It wasn't one bug. And so, you know, I may call you dirty names, but I'm not going to fork your project on the basis of one missing overflow check. OK, so why fork OpenSSL? LibreSSL is here because of a kind of tragic comedy of other errors. And we'll start with the obvious. Why were heartbeats, which are a feature only useful for the DTLS protocol over UDP, built into the TLS protocol that runs over TCP? And why was this entirely useless feature enabled by default? And so we looked into that. And then we asked ourselves some other questions. Then we dug a little deeper. And then we saw this nonsense with some buffer allocator. And then there's some nonsense with freelists and exploit mitigation countermeasures. And we keep on digging, and we keep on finding stuff we don't like. Bob's talk has a lot more detail on that, so I'll just cut to the chase. Why fork? Why not start over from scratch? We've got to start somewhere. And as I said, TLS is a very complicated protocol. It's built on piles and piles of hacks. And so if you start from scratch today, you're going to have a hard time interoperating with other real-world implementations. And also, we on the LibreSSL team have more experience in secure coding than in the TLS protocol itself. I know how to free memory after I'm done using it, but not before I'm done using it. And so I can find those kinds of bugs in a library much more easily than I can analyze the state machine of a protocol as complex as TLS. And so I didn't want to be messing with the hard bits while there were still a lot of comparatively easy fixes to be made. And why not start with some other library? The sad reality is all the other libraries are pretty much equally bad. As you may recall, not so long before Heartbleed, there was the Apple "goto fail" bug, which at the time was also the worst bug ever.
So it's pretty much par for the course that your TLS library is going to have the worst bug ever on a regular basis. So what have we done in LibreSSL? We cut out the junk, and then we rewrote a bunch of functions, and then we added a couple of cool new things. And that's pretty much what I'm going to talk about today. All right, so let's look at some POSIX code here. I could probably talk for a full hour explaining why supporting obsolete broken systems is a terrible idea and detrimental to your overall code quality. But unfortunately, if I told you all the things that I have learned about VMS, it would probably violate your human rights. So instead, I think this one example will suffice. This is obviously a cut-down example of code from OpenSSL. There's an ifdef checking for a define, and if it's there, we've got some great working code. And if the define's not there, we've got some crappy workaround which tries to run on whatever platforms don't have non-blocking I/O. Now in theory, this is a reasonable kind of thing. But there's one small problem. The code is testing for a macro, and that macro is only defined if you include the fcntl.h header. If you forget to include the header, the macro doesn't get defined, and then the crappy workaround code runs. Guess what header the file that this code came from forgot to include. The working code was never executed on any platform, even when it would have worked. Instead, the crappy workaround always ran. So this is a problem where you have workarounds and they get picked up accidentally. Because by permitting bad code to exist, you're pretty much guaranteeing that the bad code is going to continue to exist and actually be executed. So I kind of lied. This picture was just too good, and the socklen_t workaround in OpenSSL was too horrible to skip over. This has been talked about before, but I love the picture too much. So here's the problem.
You want to create a variable, and you want that variable to be the same size as socklen_t. How do you do this? Well, one fairly obvious solution would be to declare a variable with type socklen_t. That's not how OpenSSL does it. Instead, let's create a union of a couple of different ints of different sizes. Then we'll call accept, and then we'll inspect the different fields of the union to see which ones the kernel overwrote. And that will tell us maybe how big the int is. But we have to remember that some platforms are big endian and some platforms are little endian, so you have to check the top word and the bottom word of the union to see which ones were overwritten. And then you do an assertion to make sure that you didn't actually overflow your buffer once you've figured out what the size of the socklen_t was. So it's not just legacy code and workarounds. A lot of the new code in OpenSSL is a Byzantine mess too. And so I'm going to point out just two sample options from OpenSSL here: no-heartbeats and no-buf-freelists. But pretty much every one of them looks like this. And these are defines that you'll pick up in your opensslconf.h header. Now, the naming convention alone reveals that there's this default-on mentality. Everything is on, and you have to pick and choose which options to turn off. This is the opposite of how you want a security library to work. You want to minimize your attack surface, not maximize it. And second, this makes it rather problematic for application developers to write code that tests for such options, because old versions of the library that don't have the feature also don't have the no-feature define. So it's actually very difficult to check in user code whether a feature is present or not. And even more bizarrely, as we discovered when we started stripping these things out of LibreSSL, as OpenSSL adds new features, we are going to have to continue to add new no-feature defines to LibreSSL.
And our current roadmap plans on adding lots more no-features. So that actually brings me to my next slide here. A big part of what we're doing is actually not doing anything. We're slowing things down, and we're trying to present a smaller target, not a larger target. So we've pretty much applied the brakes on new development. And I'm going to pick on the FreeBSD guys here for a bit, but it's not their fault, so don't blame them. So August 6th, an OpenSSL advisory comes out: a bunch of bugs, some crashes, some buffer overflows, the usual. And then on September 9th, FreeBSD issued their patch and security advisory that was equivalent to the OpenSSL advisory. That's over a month later. So it's like, hey, guys, what were you doing? It was a month. But I'll tell you what they were doing, because I did the same thing. They were wading through the 13,000 lines of diff that OpenSSL decided to drop as part of that update. It was a security release update, and the diff from the previous release was 13,000 lines. So projects need to consider how downstream users are actually going to deal with their copious volume of security patches. If you're dropping security patches on a regular basis, they cannot be 13,000 lines. And so this goes back to: how did heartbeats get into the ecosystem? People were applying these patches blindly without reviewing them, auditing them, or even inspecting them casually, because nobody's going to look through all this. Hey, there's a bug. We've got to fix it. Bam, it goes into the next version. Hey, there's another bug. Bam, it goes into the next version. And all of a sudden you end up with this menagerie of features that you never knew existed, and they're all turned on. Now, I don't want to say that LibreSSL development is completely frozen. For instance, we've added support for a few new ciphers, notably ChaCha20-Poly1305. But as we do so, we consider what new failure modes we can introduce.
And so buffer overflows are actually pretty rare in the implementation of a cipher, because the inputs and outputs of a cipher are generally very fixed. And so we felt pretty confident that we could review the code, test the code, and make sure that it worked as intended. Usually ciphers come with great test suites. They encrypt all sorts of funny fake Latin strings and then check that the output is a known value. And so we're pretty comfortable with that. Now, this doesn't exclude the possibility that there's a crypto break which allows you to decrypt all of this traffic, but it's a little harder for an attacker to exploit that. And also in TLS, the server and the client both have to agree on a cipher before they'll use it. So you would have to be connecting to a malicious server in order for it to force your client to use a weak cipher. And at that point, you're talking to a malicious server, so I don't think the encryption is going to be the problem. Now, actually, one more timeline. On May 5th, I removed the SRP code from libssl. And around that time, we removed Kerberos support and some other protocol extensions. The problem with the way this code is integrated is it sprinkles about a dozen ifdefs and this crazy nested if-else chain right into the heart of some of the most critical functions in the TLS protocol, like the key exchange. So auditing these thousand-line functions is impossible when they're just shredded by code that's either on or off and you can't tell what's going on. And they have all these conditions where maybe this field is set and maybe this field is not set. And so we cut it back down to the basics. You get RSA, DSA, ECDH, and so forth. Now, at this time, the SRP code in libcrypto was left alone, because it wasn't in the way and it wasn't interfering with auditing of TLS.
Now, on July 2nd, OpenSSL received notification that there was a crash-causing bug in the TLS SRP code, which is the code that had been removed on May 5th from libssl. As of July 2nd, though, the bug was not yet publicly known. So on July 28th, I deleted the SRP code from libcrypto. And at the time, in my commit message, I mentioned, hey, there's a bug in this code, but the details are secret. Now, this is actually kind of misleading, because the secret bug was not in the code I deleted. The bug was in code that had been removed months earlier. Nevertheless, three days later, on July 31st, two researchers found a remotely exploitable buffer overflow in the libcrypto SRP code. It's like, throw a rock, you're going to hit something. I pointed people in the wrong direction, and they still found a bug. So on August 6th, let me just go down a little bit here, OpenSSL 1.0.1i is released. And that contained fixes for both of the above issues, along with something like 12,800 lines of diff for other sorts of things. And then on August 8th, a user reported that SRP support in the new release was broken. It was the fix for the first issue that broke SRP support. So to be clear, OpenSSL sat on an embargoed bug for over a month, and nobody tested the fix. What's the lesson here? Don't drop jumbo security patches on users. They can't handle them. And also, anybody actually using SRP around this time was in quite a pickle. They couldn't upgrade to get the fix for the buffer overflow, because that broke SRP support entirely. Instead, they had to go through that giant diff and try to pick out the one little gem that they needed. Now, if the patches had been issued separately, such as the TLS fix released at the beginning of July and the libcrypto overflow fix released at the end of July, then there would have been time to get the first diff right, and users could apply that. Then you could issue the second fix, and users could apply that. Now, nobody's perfect.
I've flubbed a few patches myself, and that's exactly why you don't combine security fixes. If they had been separated, the regression could have been discovered and fixed, and then the buffer overflow fix would have just applied like that. Sorry, this is my fancy display technology here. So I still haven't actually talked too much about what we've done in the last update, and that's kind of on purpose, because there hasn't been a lot going on. We've mostly stopped deleting code. The rampage is over. There's still some scary code left, but unfortunately, a lot of it is actually in use, and so we're going to have to rewrite it the slow way. And there was also a bit of a summer lull. There was a hackathon, and after the hackathon people like to decompress, and then OpenBSD 5.6 went into freeze, so we couldn't really work on things too much. But things are picking up again. And the usual way we do this is you go into a directory, vi *.c, and start hitting page down. And you look at all the points where memory is allocated, then you look at all the places where the memory is freed, and you make sure it's freed exactly one time, no more, no less. And sorry to make it sound so tame, but avoiding excitement is really part of the plan now. The first 30 days were all about revolution, but now we're in evolution mode. Because we had some time there, from when the rampage started, to just kind of whack blindly and then fix it later, but 5.6 had to go out the door, and users are going to be starting to run that. And so we're slowing down. And I think also the rampage accomplished its mission. We deleted all the code we needed to delete. So that actually brings me to portable. Due to the quirks of release timing, the first version of LibreSSL released was actually the portable version, not the OpenBSD native version. The first native LibreSSL release will not actually be coming out until November, when 5.6 is released.
And so I personally do not work on the portable version. There are other people who do that. I kind of keep my eye on things, but in some ways the less I know, the better. But I'll tell you what they've been up to. First, you'll be happy to know LibreSSL portable should work on all the BSD platforms. It works on some other OSes, too, I've heard, but whatever. So the good news is most of the extensions that we use extensively, such as strlcpy and so forth, are already on the other BSD platforms. This makes the port very straightforward. You probably don't even need the portable configure build system. If you're going to import LibreSSL into another BSD system, I think it would be simpler and easier to just copy the OpenBSD Makefiles and build it that way, using the BSD make framework, instead of taking the BSD make framework, converting it to the configure/automake nonsense, and then building that on a BSD system. Now, that's the good news. The bad news, for now, is that LibreSSL uses functions that exist on the operating system if it finds them, most notably arc4random, which is the source of all random numbers in LibreSSL. Now, Theo has a talk coming up on the evolution of arc4random, but it's enough for me to state that OpenBSD has changed it quite a bit. It doesn't even use the RC4 cipher anymore; it's ChaCha20-based. And FreeBSD, NetBSD, and DragonFly are all kind of lagging, I think, in that regard. I know there are some patches for FreeBSD to update it to be a lot more similar to the code in OpenBSD, but the patches seem to have stalled. So somebody's going to want to pick that up. OK, cool. So there was an issue earlier where LibreSSL would try to override arc4random regardless, but that led to a whole bunch of crazy linking issues, where you have two symbols with the same name in a program and you don't know which one's going to run. And so the solution is in the configure script: if it detects arc4random, it is not going to build our own.
It's going to use whatever you've got on your operating system, and if the one on your operating system isn't fork-safe or whatever, then, you know, unfortunately, LibreSSL can't really deal with that. That's an OS problem. And we're still targeting POSIX platforms. Windows support isn't out of the question. It just kind of requires somebody to figure out a build system on Windows, because I think a Cygwin port of LibreSSL is probably kind of useless for most people. And I think there were some patches for, like, Debian GNU/kFreeBSD or whatever hybrid weird things. But second tier. So now I want to spend a bit of time talking about what I hope is going to be the future here. And this is more forward-looking. A lot of the initial reactions to the announcement of LibreSSL were like, oh my god, you guys are crazy. The OpenSSL API is so bad that nobody would want to preserve it. You guys should just throw that away right now. And I'm inclined to agree. But we've preserved the API, because that's what the install base that we have is using, and we need to work with the programs that exist today in order to succeed. But that doesn't mean that we're married to this API long-term. And so Joel Sing and I have been working on a replacement API. And we have appropriately named it ressl, R-E-S-S-L, which is "reimagined SSL". And our goals are consistency and simplicity. In particular, we are trying to answer the question: what would a user like to do? And not the questions: what does the TLS protocol allow you to do, or what can we make the user do in order to establish a secure connection? So, what do you want to do? You want to make a secure connection to a server. You can also host a secure server. And you can read and write data over that connection. But there are no OpenSSL types or functions exposed. The ressl API is completely standalone from a programmatic point of view. And in fact, not even any ressl internals are exposed.
We were very careful to expose only opaque types and use functions for getting and setting fields. And the cardinal rule: you should never, ever need to contemplate the existence of X.509 or ASN.1 or anything like that. Those are details that are far beyond the level that most developers are going to care about. And so we just said, no, you don't get that. What you do get is an interface that actually could almost equally well describe transport over SSH tunnels. Like, what do you want? Do you want a secure connection? Hey, we give you a secure connection. The details? Don't worry about them. And we've also tried to keep this easily bindable from other languages. I have a kind of particular interest here, in that languages like Ruby and Python and Lua can pick this up and interface with it very easily without having to try to figure out the calling conventions for some strange API. The implementation of ressl, however, is not tied to LibreSSL. The ressl library works with OpenSSL out of the box. It's very portable, and it will even allow other implementations to be used. And so I think previous efforts at replacing OpenSSL, like GnuTLS or whatever, have usually ended up with these contraptions where they try to emulate the OpenSSL API. And that's terrible, because you have this ridiculous mismatch of types and function definitions, where unless you actually do things in the crazy way that OpenSSL does them, you're going to have to jump through hoops to make it happen. Instead, we've pushed the API up to a much higher level, where you don't worry about what's happening. You just kind of say, hey, send this data over the socket, and if you would encrypt it for me, that would be nice. And then the details of how you encrypt it and so forth are left to the implementation, and the user doesn't get involved in that. But I think the most important thing here is that the API is abstract enough that others are welcome to the party.
Now, clearly, I'm claiming that LibreSSL is one of the best quality TLS stacks that you can get. And that's one of our goals. But going all the way back to the beginning, I think the ecosystem benefits when we break the monoculture. If, or when, LibreSSL is a runaway success and becomes the de facto TLS stack, it's actually going to make me a little sad, because I think it means we didn't learn an important lesson about variability and competition. I do not want to see LibreSSL become the one and only TLS stack that people use. And so a compatible API like ressl, which allows other people to plug in and replace our code while keeping all the important programs that you want to run, gives them the flexibility to pick and choose their implementation. I think that benefits everyone in the long term. But one of the things that we're trying to do right, as I mentioned, is secure by default. And so, public service announcement here: host name verification. If you've ever written any TLS code using OpenSSL and you did not verify the host name, which is actually kind of likely, you have a pretty wide gaping man-in-the-middle hole there. In order to make a secure TLS connection, you have to do two things. You have to validate the certificate and its trust chain; everybody kind of knows this. Then you have to verify that the host name in the certificate is the same as the host name that you connected to. If you connect to paypal.com and you get a certificate that says notpaypal.com, no matter how valid that certificate is, you're not talking to PayPal, and so you don't want to be using that connection. Unfortunately, OpenSSL doesn't give you a lot of assistance in verifying the host name. You have to do it yourself, which requires pulling things out of the X.509 certificate. And you have to know about things like common names and subject alt names and ridiculous things that you probably haven't heard of.
Or if you haven't heard of these things and you've written TLS code, you're in big trouble. And the best thing that we can say about this is that popular bindings for languages like Python and Ruby include a function to verify the host name in their wrapper. But the bad news is, calling this function is entirely optional. And if you go on GitHub or whatever and pick a random Ruby project or Python project that uses TLS, it does not verify the host name. It's not going to call that function. The LuaSec binding for Lua to OpenSSL, for instance, doesn't even include such a function. And then if you go and look at other libraries, like Thrift or something else that does it right, you'll find that since everybody has to write this host name verification function, everybody does it a little bit differently. And you can tell, just by inspection, that in particular in the case of wildcard certificates, and everybody's favorite feature, embedded null bytes in the host name, everybody is going to give you a different answer for whether a host name is verified or not. And so we've pulled this up into our API, and it's on by default. Whenever you make an SSL connection, you have to pass a host name, and it has to verify. You can't accidentally forget to call this function. I think our biggest contribution to date would be an API that does host name verification by default. As far as I know, Go, which is actually what the ressl API is kind of modeled on, is one of the few other implementations that enforces this by default. Now, currently, there are two programs using the ressl API: the OpenBSD FTP client and the OpenBSD httpd. So we have a client and we have a server. And this allows us to test things out and move them forward. And we're, I would say, not nearly ready to call for third-party support. So don't rush out yet and start porting programs to the ressl API.
But if this is something that interests you, I would encourage you to look at what we're doing. An hour ago, sitting in the lobby, I actually committed a diff that changed how the configuration works. So things are clearly still in flux. But I would say by 5.7, so that's another six months from now, I think I'd like to have things locked down. And then that will be that. And I think I ran through a little faster than I was planning, but I am at the end. So I'll open the floor to questions. So, hi. First question. In ressl, do you plan to have anything to make this API more useful in threaded applications? Because using OpenSSL in threaded applications is shit. There's documentation on how to do this which conflicts with itself, so it's hit and miss most of the time how exactly to get it right. For example, we have a piece of code that we know works, and we never touch it. We copy it from project to project, which is a bad idea. Well, I think the hard part there is we are building on OpenSSL for now. And so it's going to be hard for us to avoid some OpenSSL peculiarities with respect to thread support for as long as we're using OpenSSL under the covers. But actually, could you be a little more specific? I'm trying to think of what would be a particular issue. There's a problem with when you initialize OpenSSL: which initialization functions you call in threads, and which you call before you start the threads. And also another problem, I'm not really sure, with, I think, some global structures that OpenSSL uses, if you link OpenSSL twice in some way. For example, you have Apache with OpenSSL, and you have PHP, which links the PostgreSQL client, which links OpenSSL, and it segfaults most of the time. Stuff like that. I'm going to say we are not going to deal with that right now. Thank you. And one more thing. OpenSSL is actually the second worst software I've played with; the first one being FFmpeg.
Thank you for this. Hi. So let me understand things correctly. The ressl API, does it use libcrypto, or libcrypto and libssl? It uses libssl and libcrypto, because you have to call into libcrypto for the... libssl uses libcrypto, basically. Yeah. And is there any ongoing discussion on the API, on the design of the API? Because what I'm worried about is that the API would be nice and clean, but only for some very specific needs. And if, for example, we would need certificate pinning and we don't care about the host name, we care about the fingerprint of the destination certificate, and you make the host name mandatory, that's a problem. So I wonder if there is some discussion going on about the API, so we can actually comment at a very early stage. Yeah. So I think that's actually why I brought this up now: I want to increase awareness of what we're doing and kind of solicit feedback. And we didn't want to do this before we had any code, because I think you need to throw something out there so that people can look at it and see what you're doing in concrete terms. But certainly, I think now you can turn verification off if you need to. And you can also specify a certificate, so you can specify either a root cert or a particular cert that you expect. So with the API that we're evolving, I think we are trying to do it somewhat slowly, where we're going to look at the needs of particular programs and then add functions as necessary to the API to support the functionality needed. What I'm trying to get at is we want to see what people need to do, versus what or how they're doing it today. And so we want to make it easier for people to do things. People say, oh, I want a hammer, and we're like, well, maybe a screwdriver would be better. OK, thank you.