Okay, thanks for coming. My name is Rich Salz. This is Tim Hudson. We're here to talk about OpenSSL after Heartbleed. Lower the volume a little. Okay. Anyone still got their eardrums? Okay. First of all, the OpenSSL team is meeting face-to-face all week here, partially funded by the Linux Foundation's Core Infrastructure Initiative. We have an open session at the end of the day today, one of the last sessions, for anyone who wants to come and talk: porting issues, bugs, or just to introduce themselves. So we'll get started. This is, we promise, the last talk where we'll ever mention Heartbleed. Okay. But we think there are some really interesting and useful lessons for open source development. OpenSSL certainly learned a lot and improved a lot. So, to get right into it: how many people recognize this date? Yeah, this was the rekey-the-Internet date. It's when Heartbleed came out. On a lighter note, it means that ever since April 2014, every defect, every security bug, has had to have a logo and a website. We saw some really great drawings internally for POODLE, Shellshock. There was an OCSP one with a rocket ship crashing to the Earth, and so on. What it means now is that security researchers obviously have to have somebody with some kind of artistic sense on their team. But it did really change the world. Having short, understandable names for things is very useful. You can say, am I susceptible to Heartbleed? And everyone knows what it is. If someone says, am I susceptible to CVE-2014-0160? Gee, I don't know. I don't memorize strings of numbers like that, and I don't think most people do either. Heartbleed was the first real security defect in an internal infrastructure component that caught the mainstream press. The Daily Mirror, an English tabloid. I don't know who Riva's mum is, and I think she's fallen off the radar. But Heartbleed will be known for a very, very long time.
It showed up on the front pages of tabloids. It showed up on the front pages of major websites. Hopefully it'll even outlast the Kardashians. We'll see. Defects get cartoons now, and this one is actually very interesting: it was a very simple bug, and the cartoon explained it in simple, easy-to-understand words. You said, give me "ship" back, and give me 20 bytes, and it just gave you the first 20 bytes it had. So as a result, it was a way for an attacker to read remote memory. What you got back was sometimes interesting, generally boring, but depending on how the memory was used, it could include session data, private key information, things like that. One CDN, not mine, said, oh, we're not susceptible to Heartbleed. We put up a server, try it. And I think within two hours, they had recovered the private key. It goes back to the really old days of the computer hobbyist era, where people would just PEEK and POKE at random memory locations to see what happened. And this was all peeking. So if you're a security researcher looking at issues, you have to find the basic thing and you have to write it up. And that's a good thing: it puts more pressure on projects to fix bugs. Transparency is good. But it's an interesting question: why did this one hit the media so hard? In the intervening timeframe, we've seen millions and millions of accounts stolen, right? Which is sort of the worst thing that could happen with Heartbleed. That doesn't seem to get the same level of coverage. Maybe it's just because it happens so often, it's not news anymore. So one of the things that's really interesting to look at is: how many catastrophic bugs do occur? Is Heartbleed unusual? When is the next Heartbleed going to occur? We've got a summary of a few of the different CVEs up here in terms of major impacts, major issues. So it's not just Heartbleed that's out there. Why was Heartbleed so important, so special?
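The shape of the bug described above can be sketched in a few lines of C. This is a hypothetical illustration of the pattern, not the actual OpenSSL heartbeat code; the struct and function names are invented for the example.

```c
#include <stddef.h>

/* Hypothetical sketch of the Heartbleed pattern. A heartbeat record
 * carries a payload-length field chosen by the peer; the bug was trusting
 * that field without comparing it to the bytes actually received. */
struct heartbeat {
    size_t claimed_len;            /* length field read off the wire */
    size_t actual_len;             /* bytes really present in the record */
    const unsigned char *payload;
};

/* Vulnerable shape: the reply is built by copying claimed_len bytes, so a
 * claim of 65535 against a 4-byte record reads adjacent process memory
 * back to the attacker. */
size_t reply_len_vulnerable(const struct heartbeat *hb) {
    return hb->claimed_len;        /* trusts the peer's length field */
}

/* Fixed shape: reject any record whose claimed length exceeds what was
 * actually received. The real fix was essentially one check like this. */
int heartbeat_len_ok(const struct heartbeat *hb) {
    return hb->claimed_len <= hb->actual_len;
}
```

The check is one comparison, which is why the cartoon could explain the bug, and the fix, in a single panel.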
And frankly, you know, my parents don't pay much attention to technology, but they knew about Heartbleed and they asked about it. Everybody knew, because it had a logo and it was in the newspapers. In Australia it didn't make the front page of the standard press, but it was in the technical news for sure. And there are a lot of bugs out there that are incredibly serious, that have an impact. One of the things we do is look at the nature of the bug: what is its impact? How exploitable is it? What's its overall score, its rating? And when you compare Heartbleed to some of the other, at the time, relatively contemporaneous issues, you can see Heartbleed doesn't actually rate all that high as an overall score. You know, goto fail, or the RSA signature forgery issue, which was highly exploitable. So why did Heartbleed capture so much attention? That's one of the things we, as a project team, had to look at. Why was this one special? Because we don't really want more of that sort of special in the future. So we've had to take a more detailed look at not just the technical issues, but what caused it and what it led into. So what actually happened? It's a pretty simple bug. There was a missing validation check on a variable that contained a length. The code was contributed into the tree, and the bug had been in OpenSSL for three years. Three years is a fairly long time for a bug to be in existence, though it's certainly not the longest. The project team member who checked the code in didn't notice the bug. None of the other team members saw it. External security reviewers sailed right on by. None of the external users noticed it. The security reviews that do occur in major organizations completely bypassed it.
One thing that's really important: of all the existing tools you run for static code analysis, none of them reported Heartbleed. Three days after Heartbleed was announced, they all reported Heartbleed. But at the time, it was completely missed. One of the more interesting aspects of this particular bug is that it not only allowed clients to attack servers to recover memory, it operated in the opposite direction too. A server had the ability to reach back through and look at a client's memory. That's interesting in a whole pile of contexts, because the client is usually operating in a very different environment. You're sitting in your corporate environment with lots of stuff going on, reaching out through however many firewalls, and this is coming backwards to enable a memory read. So there were a lot of aspects of it that were quite interesting as a bug. As a bug, it was pretty small. These are the Git repo statistics: 600 lines added, 20 taken away. A large portion of those lines added were the old OpenSSL copyright. It was done by a graduate student; the intent was to measure the maximum transmission unit for DTLS. Arguably, it was an OpenSSL bug that it was also added to the TLS support. If it had stayed in DTLS, nobody might ever have noticed it, because not many people use DTLS. We're having an ongoing discussion about how impactful that is. But it was a very small amount of code, and the commits were always public. People were looking at things, and many eyes make all bugs shallow, but only if they're open. And so Heartbleed did have the advantage, at least for OpenSSL, of opening everybody's eyes. So why were they closed before? The project was snoozing, to put it politely. It had become moribund. Releases weren't announced. The policies were like: oh, we're putting out a new release, we think this is the best one ever, everybody should upgrade. Oh, we found a vulnerability, everybody should upgrade. There was no ability to plan.
There was no recognition that major organizations that were repackaging OpenSSL, or building products on OpenSSL, needed time. We now look, when we're scheduling releases: well, we could do it Thursday or Friday, but that means the IT staffs of IBM and Red Hat are going to have to come in over the weekend, so maybe let's wait until Monday unless we have a really pressing need. Things weren't pre-announced. The code was complex and arcane. Anybody who looked at the old SSLeay and OpenSSL code knows it made things like procmail look pretty. It was complex. It was arcane. The curly braces were positioned in a way I've never seen before. Everything was done through tables of function pointers. That's okay, perhaps, if you're implementing a state machine, but if you're implementing code to just keep track of errors, going through a table of function pointers is probably not the best way. Just have an error mechanism. Have a thread mechanism that uses the native system capabilities. Don't allow people to switch these things out at runtime. It was hard to maintain because of all those features that had built up beforehand. The code was really hard to read. It was opaque. There were almost no comments. There are still almost no comments, but at least now two people have looked at the code, and someone says, put a comment here. We do have lots of comments that have been added purely because of code review: someone says, I don't understand this, so it gets commented. It was hard to contribute. If you were in the US, it was kind of impossible to contribute because of stupid US regulations. The team was spread across Europe, loosely affiliated. It was hard to get code in. If you created a code patch or a bug report, there was no guarantee as to when anybody would ever get around to looking at it, and no policy as to when anyone would. The main developers were overworked and overcommitted.
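The function-pointer-table complaint above is easier to see in code. This is an illustrative sketch, not the actual SSLeay source; the names `err_fns`, `err_format_indirect`, and `err_format_direct` are invented for the example.

```c
#include <stdio.h>
#include <stddef.h>

/* The old-style indirection the talk describes: even a plain error
 * formatter is reached through a runtime-swappable table of function
 * pointers, which callers could replace at runtime. */
struct err_fns {
    int (*format)(char *buf, size_t buflen, int code, const char *msg);
};

static int default_format(char *buf, size_t buflen, int code, const char *msg) {
    return snprintf(buf, buflen, "error %d: %s", code, msg);
}

static struct err_fns err_table = { default_format };

int err_format_indirect(char *buf, size_t buflen, int code, const char *msg) {
    return err_table.format(buf, buflen, code, msg);  /* hop through the table */
}

/* The plainer alternative the talk argues for: just call the function
 * directly, and lean on native system capabilities for the rest. */
int err_format_direct(char *buf, size_t buflen, int code, const char *msg) {
    return snprintf(buf, buflen, "error %d: %s", code, msg);
}
```

Both produce the same output, but the direct version can be read, reviewed, and stepped through without first tracking down what the table currently contains, which is the maintainability point being made.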
At the time of Heartbleed, there were basically two developers, barely making enough money to live. And they weren't making it on mainline OpenSSL; they had to go outside for other funding that we'll talk about shortly. Donations were minimal. This was an open source project that barely got $2,000 a year in donations. Even if you're living in a nice European state with lots of support systems, it's still hard to hole up somewhere and do OpenSSL while people are pounding on you, throwing patches and code over the wall at you, screaming and yelling because their bugs didn't get fixed. As I said, two guys who were very, very overcommitted, or two and a half depending on how you counted. Looking at the GitHub stats for the two years prior to Heartbleed, the period there in the gray box, these are the two top committers: Steve Henson and Andy, whose name I still can't pronounce very well, so I won't. The next nearest committer was a hundred commits behind. So when I say it was two people running this, barely eking out a living, this is true. This was the platform used for what was called e-commerce at the time; now we just call it the Internet. It was severely underfunded, under-resourced and overcommitted, and two people were doing all of the work. There were a bunch of people knocking at the door trying to help, trying to contribute, trying to commit, but they didn't even have the bandwidth to take on that additional work. So how did we let this happen? There are a number of ways. When I say we, I wasn't part of the team then, but "they let this happen" sounds a little adversarial. Very little time was spent on building the community. The mailing lists were maintained using the old Majordomo system; no one even had the chance to go fix that up. It was hard to search the archives.
People using the source would comment on the mailing lists, and the developers had very little time to respond. It took a really long time to understand the code. It still takes more time than we'd like, but that's a problem when you're writing complex cryptographic code in C; that's just the nature of the beast. The project membership was static. It had not changed. There was a wariness about involving other people. Certainly nobody from the U.S. could be involved. It was hard to get socialized and get to know other people, so it stayed static. There was a strong need to focus on consulting dollars, and the major project that brought in consulting dollars was doing a FIPS certification. FIPS is a U.S. government standard, a checkbox item for selling into major parts of the government, or to other organizations that follow the U.S. FIPS standard. And that had to keep the project alive. It had to keep the people fed. Which is reasonable. All of those pressures made it impossible to announce, or even keep, any plans. We could say, yeah, we'll put out a release next month, and then not do it. Now, at least, we can make plans, we announce them, and we're pretty good at keeping them. We slipped this last release by, I think, three months, but overall that's not too bad. But all of these things, the concerns about involvement, the inability to bring on other people, the inability to look beyond the next day, and frankly the personalities too, added up to a team that was very, very ultra-cautious, scared, if you will, of any kind of change. So it was stuck in this little bubble. Everybody depended on it. Very few people knew about it. And it sat there, isolated. And the easiest way to not break things is to not change them. So if you make changes infrequently, you reduce the problems for yourself.
And it's sort of human nature: if you have a problem and you can see a simple way of reducing it, you tend to take it, especially when you're overworked and, frankly, rather underpaid. So one of the things that happened post-Heartbleed, and this is a sanitized version, is that most of the feedback that came into the project was highly negative. It's like: you guys must be completely, totally incompetent. How could you ever let this happen? How could I trust you with my toaster in future, let alone e-commerce at an Internet level? It's interesting receiving this sort of feedback when, in order to fix the problem, you need to be able to focus on what the actual problem is, and all of this negativity is coming in. A very common question was: how many more Heartbleeds are there going to be? When's the next one coming? Anybody who says their open-source project is defect-free and will never have a security issue is kidding you, because people write code and people make mistakes. There will be more bugs. We're doing what we can to reduce the likelihood, but you can never eliminate the possibility of a security bug. So the questions were: how many more Heartbleeds are there? Why didn't the project notice this? Why was it asleep for three years? We got a lot of feedback as people were realizing: well, I'm impacted by Heartbleed; I've got all of this software that uses OpenSSL that I never knew used OpenSSL; why aren't the people who are making all this money off OpenSSL contributing? How can we trust anybody who makes such a big mistake ever again? What do we do about this? Why don't I just go and find somebody who's not those guys and use them instead? That was a lot of the feedback that came into the project. Some of those discussions got pretty personal, pretty pointed, and pretty difficult for some of the team members to deal with. And they're all good questions to ask.
It's an open environment, it's an open source project. If anybody's got a viewpoint on it, you're free to express it. If you don't like what the project team is doing, it's open source: fork it. Go your own path, follow your own direction, see what you can achieve. That's one of the benefits of it being an open source project that's so widely available. So what happened? Well, Heartbleed was a wake-up call to the industry, and the commercial companies that were effectively getting a free ride on OpenSSL did wake up to the impact, in terms of: we need to do something about this. We can't be relying on a couple of guys who are poorly funded for such a critical piece of infrastructure. So the Linux Foundation set up the Core Infrastructure Initiative and got a group of a dozen or so commercial companies together to offer funding for not only OpenSSL but other critical projects that are under-resourced. Let's reduce the likelihood of people working on a project like this where there's so much work to be done and so little funding. Let's get more infrastructure, more support, more ability to address the issues, so that there are saner processes and more eyes can look at the code. There's less likelihood of, hey, I'm so busy I can't stop to think. You want to actually have that capability, and it's one of the things the Core Infrastructure Initiative has done for OpenSSL. Still me? Okay, so where were we prior to Heartbleed? Prior to April 2014, as Rich has already said, there were effectively two main developers. Now don't get me wrong, the OpenSSL team was bigger than two people, but two people were doing most of the heavy lifting. It was all volunteers. Nobody was funded by a large corporation to work on the project, and in fact, as Rich has already mentioned, most of the funding came through consulting work.
There was a decision-making process, but it wasn't particularly formal. As of December 2014, six months after Heartbleed, there were 15 project team members. Two people are fully funded by the Core Infrastructure Initiative to work on OpenSSL as their day job, and two people are funded to work full time based on the donations that came in from people who were concerned: here's this project, I see it's got a problem, how can I help? I may not be able to contribute code. I may not be a security person. I may not even be a developer. But I'd like this problem to be fixed, and if it's just money that helps, here are some donations. That sort of thing has helped a lot, and with a bigger team, of course, you need more processes in place. We have a very formal decision-making process for the team. It might surprise folks to realise that when you get 15 people together, you don't get 15 people agreeing on everything. You have to have a mechanism for making a decision, and we've got a pretty simple one within the team. One of the things I just want to mention, and still find very interesting, is that one of the initial founders and sponsors of the CII was Microsoft, because they recognised that if the Internet goes down because people can't rely on it and it's not secure, they're going to go down too. So it's not just the usual IBM, Red Hat, Oracle, you know, standard open source vendors. Microsoft has also become more open source friendly, obviously, in the past two years, but it's people who depended on the Internet. The next wave will be people like banks and other organisations, you know, to grow the CII. So one of the things that immediately came out of this, after the team was grown: two years ago in Frankfurt we had our first face-to-face meeting ever. There's a picture. In Dusseldorf? Sorry, Dusseldorf, you're right. There's quite a difference between the two cities. Yeah, I'm an American, it's all just Germany, right? Sorry.
But it was really critical, because it's really important to know your colleagues. If you've worked remotely versus in an office, you can understand some of that. If you see these people every day, you occasionally go out to lunch, you go out for beers at the end of the day. You sit there and work on the POODLE CVE fix during the day and then go out and drink afterwards. It's a great team-building exercise, a lot better than trust walks or campouts and things like that. Socializing really helped us get a good level of comfort with each other. For example, it also makes code reviews better. When someone posts code on the internal team list and someone else reviews it, you know it's not that Andy hates me; he's pointing out that, no, it's kind of a stupid mistake to assume Perl works this way. I don't take it as personally. At that face-to-face we drafted several of the major policies that are still in use. We still follow them for the most part, and we're in the process of refreshing them. A release strategy. A security policy: how we categorize security defects, and what consumers and other downstream users of OpenSSL need to know when we say it's a moderate defect. If it's a low-severity defect, it's just going to show up in the source base; people may or may not care. If it's a high-severity defect: okay, get your IT team ready. One of the things we learned afterwards is that we need something worse than high. We need, you know, severe, or critical: rekey the Internet. For the first couple of releases after Heartbleed, the press, Twitter and all the other social media would panic. Oh my god, it's another OpenSSL release. No, it's okay, it's not that bad. We haven't had one that bad since. So far we've been lucky and good. We may again, but at least people will have the right expectations now.
Oh, it's another release, don't panic, it's just another set of security fixes. And we've actually gotten pretty good marks from folks who appreciate our openness, our over-eagerness at times, to say: yep, that's a vulnerability, here's a CVE assignment for it. And again, the openness of the process and the pre-notification systems we use. All of this is related to transparency. Looking at some of the core problems: before, it was insular and opaque to the outside world. Now it's just the data structures that are opaque. We use GitHub for many things. People make pull requests all the time. People open issues all the time. We are still figuring out how to do it: when is something an issue versus when should it be discussed on the mailing list? It's an educational process for the team and for the community. We have public policies on security fixes, a release schedule, high-level content about what's going in a release. And look, we put out beta releases, alpha releases, so that people can test things. It's worked reasonably well. Some people have tested them; many more are doing it now. We have a code of conduct: don't be an ass, and so on. The email traffic has increased, and it definitely seems to be more useful. There are other members of the community now contributing answers to questions. Members of the team are responding quickly. We seem to be engaged in a more virtuous cycle of feedback. Security fixes. This comes from a blog post by team member Emilia Käsper. It shows the red, or high-severity, security fixes, and how many days it took us to fix each of them. Although some have taken longer than we'd like, because people's bandwidth gets consumed, we haven't had anything slip past 90 days. Severe vulnerability fixes get fixed within two to three weeks, and there's a pre-notification scheme so that all the downstream distros get to see them ahead of time.
So we've been very, very good. We'll probably do an update on this one at the end of this year, hopefully. We set a goal and we were able to meet it. As a security and crypto toolkit, obviously security is really important, and we think our behavior reflects that. What's happened this year? I was preparing some stats for a report for the Core Infrastructure Initiative. 3,200, 3,300 commits, all done through Git; every commit to our internal repository gets immediately pushed out to GitHub. We've had one major release and 15 bug-fix releases. I think that's now 17, because the bug fixes had bugs. Oops. Obviously we still have a ways to go on some of those things. 29 CVEs were addressed. On GitHub activity: 280 people contributed, 122 issues, 63 pull requests added, and that was just in the first nine or ten months of this year; I didn't go look before 2016. We closed 970 issues, almost a thousand, which is mind-boggling to me. 730 pull requests came in from GitHub, and almost all of the ones we closed were merges. Sometimes things were rejected, or submitters closed them, and there was the occasional joke, but most of the PRs were accepted and taken on by the team. I think that shows an amazing amount of community involvement. Continuing on from what Rich is saying, if you have a look at the left-hand side here, this is effectively the lead-up to Heartbleed, what was going on before the team refreshed itself and took a hard look. If you look at the activity on the right-hand side, leading up until the beginning of this year, you can see that the mix of committers has changed. The veterans are moving along fine, but you can see new faces in there, new team members becoming major committers. There's more committing going on by different people, there's more review, there's more interaction. There's more of a sense of community.
So it's not just the project team itself doing more work because there are more people; the community interaction has radically increased. The amount of work going on, the dialogue, the feel of the project team, and the feel of the community of users have radically altered from pre-Heartbleed until now. One of the things that made a difference: we had a huge number of historical issues in the defect tracking system we were using. You see the issues there in blue, the issues closed in red, and the issues remaining open. Effectively, this is the point in time when things started changing: there are new team members, there are people with the ability to start going through those issues. And a substantial portion of the issues sitting there logged against OpenSSL were issues that had actually been fixed much earlier, but nobody had had the time to see that and close the issue. So a lot of clean-up work went through, just going: oh, okay, the status is out of date. Or: this is something that makes no sense, but nobody had any time to look at it and say so. Or: that's a good idea, but it's superseded because the code has moved in a different direction since this issue was logged seven years ago. Those are the types of things we've had to go through as a team, just looking at what's there. We're a lot better now at looking at a defect, analyzing it, and saying whether or not it's something that needs to be dealt with. So the time between a defect going in and a project team member having a look at it is greatly reduced. And that can only be a good thing in an open source project: if you don't look at the reports coming in from your user base, there's no way you're going to know what's going on out there in the community.
And by getting that feedback and paying attention to it, we're less cautious about being willing to make a change. This might break something for somebody, but that's okay, we'll have a dialogue with them. It's all right to improve the code in ways that may have an impact on users when the users are in a dialogue with you and you're communicating. As Tim said, the dialogue is crucial. It also means that if people report bugs and don't get a response, they stop reporting bugs, things like that. We are not, frankly, where we promised we would be. We said everything would get a response or a review within four days. It's probably three times that, but at least it's within a couple of weeks. The supported releases, this is on our website: 1.1.0 is going to be supported until the end of September 2018. We've put out our first long-term support release, which will get bug fixes and security fixes. The version numbering scheme, we'll probably stick with it, but it's a little awkward, right? The initial digit could probably go away and we could just have, you know, one, two, three, four, five. So: 0.9.8 is no longer supported, 1.0.0 is no longer supported, and 1.0.1 support ends at the end of the year; we're only doing security fixes for it. That lets us focus on the things that are in use by most people, and on keeping the project vital and adding new sets of features. One of the things about being responsive is that people then start to look at your stuff more, and that's really good. Kenny Paterson, who, you know, discovered all the problems with RC4, says that for about a year there, you could get tenure if you just wrote a paper about a security defect in OpenSSL. That's good and bad, right? At any rate, people are doing more fuzz testing. The bar has been raised, right?
Just as in crypto you now have to have constant-time implementations, even for network-remote stuff, you now have to do fuzz testing and continuous integration builds. OpenSSL is doing all of that. We're working with a software testing lab in Oregon that has this interesting mutation technology: they modify the compiler so it flips statements around, and if your tests don't catch it, that means you don't have a test covering that particular branch of code. So we're on the cutting edge there. The static code analysis tools, as Tim mentioned, within a week everybody had upgraded them to catch this kind of thing. We use Coverity as part of all of our builds and all of our releases; our defect density is, I don't know, 0.2 is the number. Reported issues are more quickly analyzed. And the biggest change is that everything gets read by at least two team members: the person who wrote it and somebody else. We are not where we want to be on code reviews. We could be better. Many of them are like: I don't know Perl, but I trust you. I don't know System z assembler: it's fine, good. Sometimes it's, oh, you need whitespace here, more about the formatting. But we're trying to really dig in more. We recognize the need to dig in more on code reviews. That also comes with more familiarity and comfort level with each other. But it's a major tool for improving our code and improving ourselves, the people writing the code. The project roadmap: we published it. We're working on all of these things. We're doing a refresh of it, so by next week we'll have a set of updates to the website. We are trying to be honest and forthright: yeah, we didn't do this; we'll try to get better at it. Are we going to document every API? There are 2,400 of them to be documented. We're going to focus on this one set and leave 2,000 of them alone for now. We're going to set expectations appropriately. The increased vitality is its own reward.
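The constant-time requirement mentioned above is worth seeing concretely. This is an illustrative sketch of the idea, not OpenSSL's own code (OpenSSL ships its own version of this as `CRYPTO_memcmp`); the name `ct_equal` is invented for the example.

```c
#include <stddef.h>

/* A constant-time equality check: the loop touches every byte and never
 * exits early, so the running time doesn't leak where two buffers first
 * differ. An early-exit memcmp lets a remote attacker recover a secret
 * (e.g. a MAC) byte by byte from response timing. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences without branching */
    return diff == 0;          /* 1 if identical, 0 otherwise */
}
```

The design point is that the only data-dependent value is the final result, computed after all bytes have been visited.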
We have about 1,100 forks on GitHub. It's too many to display; that's why it says it can't load the network graph. We have people like Daniel Stenberg of curl saying: I found a bug, you fixed it in 15 minutes, and I could keep moving forward with my port. That's a great set of testimonials. Future plans. 15 minutes left, that's good. These are the things we know we want to do. TLS 1.3: if you've been to the IETF, OpenSSL has been notable by its absence, since the timing just didn't work; we were busy with the 1.1.0 release. TLS 1.3 is certainly high on the list of things to put in the next release. We have stated publicly that we want to move the licensing to Apache v2. We've been working on that; people from the Software Freedom Law Center have been helping us. That will happen in due time; I wouldn't put a timetable on it. More testing: we've integrated fuzz testing, and we've gotten donations of system images from Amazon to do more continuous fuzz building. There are probably other things that are needed. And then FIPS, which is very important to a large section of our users. Tim can talk about that. One of the things that historically kept the OpenSSL project alive, and certainly the couple of developers working pre-Heartbleed, was the importance of having FIPS 140 validation for US Government use, or for people selling into the US Government. FIPS, the work associated with it, and the validation process itself are effectively what provided the funding that kept OpenSSL available. Without the funding coming from the FIPS activities, the guys working on OpenSSL would not have been able to make a living at all. So it became pretty important. It's a very time-consuming and, frankly, irritating process to go through. There are a couple of team members here who will quite happily talk to you at length, over a cold beer, about just how irritating it is. There are lots of stories around that.
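The fuzz-testing integration mentioned above typically means writing small harnesses that hand fuzzer-generated inputs to a parser. Here is a minimal sketch in the libFuzzer style; `parse_record` is a hypothetical stand-in for code under test, not an OpenSSL function.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical parser under test: reads a two-byte big-endian length
 * header and rejects records whose claimed length doesn't fit the bytes
 * actually received. */
int parse_record(const uint8_t *data, size_t size) {
    if (size < 2)
        return -1;                              /* need the 2-byte header */
    size_t claimed = ((size_t)data[0] << 8) | data[1];
    if (claimed > size - 2)
        return -1;                              /* claim exceeds the buffer */
    return 0;
}

/* libFuzzer entry point: the fuzzer repeatedly calls this with mutated
 * inputs and watches for crashes or sanitizer reports. Build with
 * clang -fsanitize=fuzzer,address. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```

A harness like this, run continuously, is exactly the kind of check that would have hammered a length-trusting parser with oversized length claims.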
But it's something that for an open-source project is incredibly difficult, when you've got a set of requirements you have to meet over a lengthy period. We're talking a couple of years for a validation, and generally it's sufficiently expensive that you've got to coordinate it between multiple sponsors; then somebody's got to keep the sponsors happy, figure out what the different objectives are, and make sure everybody's happy that the end result, which you hope eventually comes out, meets the overall requirements. So a lot of work has gone into that, and it's something that's incredibly important to a substantial portion of the OpenSSL user base. To give you an idea: a lot of people turn around and think, oh, FIPS validation, that was work you did once and boom, it's all finished, and that work happened three, four years ago, so why are you still talking about it? Well, this is just a summary of all of the updates on the current module. Every one of these lines is a whole pile of work that was done, an update that had to happen. The module is being continuously maintained and updated, change letters are being done against it, requirements are being checked. It's not something that's static, and every one of these activities that you go through here involves a whole pile of time and effort. There's a commercial organization that wants something done; you've got to talk to them, you've got to figure out the terms and conditions, there's testing work that has to occur, you've got to engage a FIPS lab. A whole pile of process has to be gone through just to keep the module alive and available. And this is just for the existing module that's sitting there, not the other modules that used to exist. So, next slide, Rich. We've got the FIPS 2.0 module, and that's for the OpenSSL 1.0.x series. The older module, if you're using it, well, you shouldn't be, because it's no longer useful.
There is a new FIPS validation project underway, and we have a company, SafeLogic, that has agreed to provide the funding for it. It will be a multi-year journey, so it's not going to be, hey, six months from now there will be a FIPS release for the 1.1 series. It's something that's going to take a pile of time, and a lot of the work that we did in making the data structures opaque in 1.1 should enable us to make meeting the FIPS requirements less intrusive in terms of the overall solution. So that's a whole pile of work that's going on. It's a big chunk of work that's separately funded, and we're going to see how it pans out over the next 6, 12, 18 months. But it's one of the things where we know we have to do a better job of not having any requirements that come in from a FIPS validation impact the project team, the module, or the mainline code base in any way that perturbs general use. And I think this next module is going to be a lot better in terms of how it can handle those sorts of things. Flipping off the FIPS topic, back to: what did we actually learn? What has the Heartbleed experience meant for the project team, for OpenSSL as a whole? It doesn't matter how good any one person is. Nobody should be relied on to perform superhuman feats: review code, look after a large user community, work crazy hours, and not make any mistakes. If you're relying on one individual to be the best that is possible, you're ultimately going to be disappointed, because we're all human. We make mistakes. I make mistakes. If you're doing code reviews, you've actually got to make sure you're really looking at the code, and you've got to look at it in detail. When you know what a bug is, finding it is easy; when you don't know what it is, it takes time to sit down and review. And hoping that your user community will review the code for you certainly doesn't work.
And relying on automated tools because you haven't got the humans to spend the time reviewing also doesn't work. You basically have to go back to using experienced people to go through and perform detailed reviews, and that takes time. Okay. So, how to contribute, how to help OpenSSL help you make your stuff more secure. Download the pre-releases, which is now download the release, and build your applications. One of the team members works on Debian; I think we have one third of the 500 packages that use it converted over. The fact is that you can no longer look inside an RSA struct or an SSL struct and play around with those fields. I speak from direct experience of my daytime employer, who said, oh look, here's an SSL context, let me add 17 variables to it to control what I want. By not being able to do that, and being forced to use the existing extension mechanisms, you'll be much happier in the future. When the next release comes out, you can just drop it in, no matter what it is. It's a two-way street. Join the virtuous circle. As the team has become more responsive, people have been contributing more. As people contribute more, others join in, so we get this feedback loop in a positive sense. As a minimum, there are the mailing lists: openssl-dev for development of the project itself, openssl-users for people using the package. Submit, I was going to say, report bugs to RT? Maybe not; we're looking at RT with a jaundiced eye these days. Submit patches on GitHub. Help close bugs. To submit patches, as a reminder, we want a license agreement. As of now, we're not taking anything that doesn't have a license agreement signed, so that we can convert your code over, or we rewrite the code. We don't know as much as we'd like to about our user community. For example, sure, people build command lines, and Apache, Nginx, all the other web servers use it.
Web servers are certainly a major use, but it's not the only use. So if you're using TLS for other things, DNSSEC, DPRIVE, things like that, we'd like to know about it. We'd like to hear from you. Send a note to the openssl-users mailing list, or mail the OpenSSL team, saying, hey, I'm doing this kind of stuff. If you're downstream of OpenSSL and you are a major internet company or a major distro, get in touch so that we can discuss futures and plans, all in an open forum; we're trying to reach out and understand more about how what we do affects people. There's a community page on the website, and we encourage everyone to look at that: contribute, write docs, file bugs, build it on, well, I was going to say build it on other platforms, but we may not care too much about that. Before we go into the questions, we want to make them stand. Most of the OpenSSL development team is here. You guys want to just stand up and wave or something? Come on. Thank you. We'll be here all week, as the joke goes. So feel free to stop anybody who's on the team and ask any questions. One of the reasons we're here is to be able to interact with the user community. And hopefully there are no more Heartbleed questions; hopefully we've now closed the door and moved forward in a better way, a more productive and useful way. So, team members are here: country, affiliation, whether or not they're attending, and so on, and where the funding is from. We have about three, four minutes left, so we're glad to take any questions you might have. Yeah. We can repeat the question. Yeah. Okay, so the question is why are we doing Apache 2 since it's incompatible with the GPL? Apache 2 has patent protection, and unfortunately the crypto world is rife with patents, and the team thinks that's important.
We've also joined the Open Invention Network for defense, particularly with elliptic curves emerging as the soon-to-be-dominant crypto, and we see a lot of people, mobile phone companies in Canada, suing other people. We just think it's really important. It's unfortunate. We'll point to the Apache page where they say we don't think it's incompatible with the GPL; the FSF does, and we have to respect their rights. It's better than it was, but yeah. Any chance of doing dual licensing? No. We've pretty firmly decided it's important. We've had a lot of discussion. We've gone around with the Software Freedom Law Center, run by Eben Moglen, and the advice from there, too, is Apache. We know it doesn't please everybody. Anyone else? Yeah. How did we restart the community? I think everybody within the team realized that something needed to change. It was effectively by gathering a greater pool of people who could work on what was going on and being transparent with the community. We put the dirty laundry out there: this is what happened, this is what went wrong. It actually began a dialogue with the user base. Frankly, it could have led to "OpenSSL sucks." I think it was the original team members at that point in time who talked and said, right, we're going to open up the team, we're going to make it larger, we're going to address these long-standing issues. I think it's led to real positive feedback from the OpenSSL community itself. Yeah, I joined after Heartbleed, and I know internally we had discussions like, oh, please don't make me work with those so-and-sos. But being honest and admitting what was obvious to everybody, saying yeah, okay, we're trying to get better, and just being responsive, that's a big thing. If people see that what they comment on, or the diff they submit, gets responded to, then they're more likely to get engaged.
The recent refreshes of the website and the mailing list infrastructure help, but it's really about seeing that the communication is a two-way street. And occasionally folks will open a pull request on GitHub with one team member saying, hey, I like what you're doing, and another team member saying, I don't agree. Being open in that dialogue: we're not one organism that has a single viewpoint. We're a pile of developers with very different backgrounds and very different views. But we've found a way to work together as a team that improves a pretty critical piece of infrastructure for everybody. Yeah, we could probably have made a sitcom out of the discussions on where we put the curly braces when we were deciding the coding style. I mean, like Tim said, we had a process and there were votes: what about this set of indent flags, and what about that set of indent flags? These were all experienced C developers, and so 10 people had 12 opinions. I think the general opinion on the coding style was, not what's currently there. That was the consensus: anything but that. What's the relationship with people like LibreSSL? So there were two major forks that came out of OpenSSL. LibreSSL is really closer, whereas BoringSSL is heading off in its own direction more. They've been good, kind of surprisingly. They get pre-notifications. They have helped improve patches for security things. They still have to, you know, they say, yeah, sorry, we've got to make fun of you guys because that's how we rile up our base to get money. But we don't take it personally. There have been mistakes made, I was going to say on both sides, but on their side we forgive them. It's good. It hasn't always been good, but there is an active dialogue, and I think that's a good result. And again, if another fork turns up for a community that wants to head in a different direction, then we as a project team just recognize anybody can fork the code.
That's something where you've just got to respect that there is a community that feels it's important to do something different, and you can't ignore them. LibreSSL is part of the open-source community, so yeah, we get along well. So, we actually received external audit advice. An audit was performed on the code base, and it basically said, here are the areas we think you need to think about restructuring. So if you have a look in the 1.1.0 code base, you'll find a lot of the record-structure processing has been completely redesigned, completely re-implemented. There's a pile of areas where we know we need to revisit things, and we're being up front about it. We think we're going to change this; we don't like how this is handled. We're going through more of that. We're looking at some patches to get size_t all the way through the code base. There's a lot of stuff along those lines where, by having more people looking at it, we can identify areas that we think need additional attention. So we're certainly doing that. Part of taking all of the data structures out of the public headers is the ability to re-architect without breaking everybody's code every single release. Yeah, I'm already excited. 1.1.0 is good, but the stuff that's in master, go to that now, man. It's much better. Have we ever considered bug bounties? No. Not that we've considered it and said no; we've just never actively considered it. I don't think we've ever had the intention of spending some of our funding on bug bounties. And frankly, if you find a defect in OpenSSL, the publicity is your credit. There are other people who do bug bounties, and people have reported things to us and said, can I now claim a bounty? That's up to them. But finding a bug in our error page on the web server, that doesn't count, and that's what most of them tend to be. So thank you for your time. Feel free to approach anyone on the team during the rest of the week. Thank you. Thank you.