One round of applause for our speakers. Thank you. My name is Zakir Durumeric. I'm a graduate student at the University of Michigan, where my research broadly focuses on what we can learn about security from large-scale data analysis. Last year you heard about some of my research from Alex Halderman, including work on ZMap and how we scan the entire internet, some work on weak cryptographic keys, and the HTTPS ecosystem. I'm here today to talk about Heartbleed: what happened when Heartbleed actually hit the internet, and what was the aftermath? In April 2014, OpenSSL disclosed a catastrophic bug in its implementation of the TLS Heartbeat extension. The vulnerability affected essentially every piece of software that used OpenSSL to facilitate TLS connections. This included everything from web servers — we heard a lot about Nginx and Apache — but it also affected things like Tor, the Bitcoin client, and a lot of other software. In the end, it allowed attackers to remotely steal almost any piece of information that happened to sit in the memory of the SSL endpoint. That might include logins, the credit card number you had just sent to Amazon, or maybe the cryptographic private key you were using to facilitate HTTPS connections. And it affected a large number of sites — not just a large number of software products, but a huge percentage of the internet. Our estimate is that somewhere between 24% and 55% of all HTTPS websites were initially vulnerable and could have had this information stolen when the bug was first reported. So, the TLS Heartbeat extension. Most of you probably hadn't heard of it prior to the Heartbleed exploit. The extension was released in 2012 and essentially allows either endpoint of a TLS connection to ask its partner, "Are you still there?" and the partner replies, "Yes, I am still around, keep the connection open." For the most part we never used the TLS Heartbeat extension, because we use TLS over TCP, and TCP takes care of this at a lower network layer, making sure the connection is still alive. But TLS is also designed to work with other lower-level protocols such as UDP, where you might not have that session management, and you just want to be able to ask "are you still present?" without renegotiating an entire session. The protocol itself is very simple: either endpoint of the connection can ask its partner "are you there?" by sending some piece of data and some random padding and asking the partner to repeat that data back. If the partner repeats the data back, it knows the other end is still there and it doesn't terminate the connection. Very simple. The bug was that a length field was sent along with some amount of data, and OpenSSL trusted the length field: it would echo back that much memory regardless of how much data had actually been sent. So if I sent you four bytes of data but told you I had sent eight, you'd echo back eight bytes, and if those extra four bytes happened to be my username and password, you would happily echo those back to the client. The fix is very simple: check the length field against the amount of data actually received, and if the length field is larger, throw away the request. But the effect was rather catastrophic.
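To make that missing check concrete, here is a minimal sketch in Go of the bounds check just described. It mirrors the logic of the fix rather than OpenSSL's actual C patch; the message layout (1-byte type, 2-byte claimed length, payload, at least 16 bytes of padding) follows the heartbeat format, and the function and variable names are illustrative.

    // heartbeat_check.go: a minimal sketch (not OpenSSL's actual C code) of the
    // bounds check that the Heartbleed fix added. Names are hypothetical.
    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    // parseHeartbeat reads a heartbeat message: 1-byte type, 2-byte claimed
    // payload length, then the payload and at least 16 bytes of random padding.
    func parseHeartbeat(msg []byte) ([]byte, error) {
        const minPadding = 16
        if len(msg) < 3 {
            return nil, errors.New("heartbeat: message too short")
        }
        claimed := int(binary.BigEndian.Uint16(msg[1:3]))

        // The vulnerable code trusted `claimed` and copied that many bytes
        // from memory. The fix: discard the message if the claimed length
        // cannot fit inside the data that was actually received.
        if 3+claimed+minPadding > len(msg) {
            return nil, errors.New("heartbeat: claimed length exceeds record, dropping")
        }
        return msg[3 : 3+claimed], nil
    }

    func main() {
        // 4 bytes of real payload, but the header claims 8: must be rejected.
        bad := []byte{0x01, 0x00, 0x08, 'p', 'i', 'n', 'g'}
        if _, err := parseHeartbeat(bad); err != nil {
            fmt.Println("rejected:", err)
        }
    }

The vulnerable code simply skipped the comparison against the received length and copied the claimed number of bytes; that one missing check is the whole bug.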
Any piece of information up to 2^16 bytes (64 KB) beyond the data you actually sent could be echoed back, and depending on how memory was laid out, that could be your login credentials, a key, or anything else. And this affected a very large amount of software. A lot of programs end up using OpenSSL to facilitate connections. In the case of Nginx and Apache, it's used to serve HTTPS sites. In the case of the Bitcoin client, if you connected to a malicious server, that server could potentially read back your private key. In the case of Tor, you might be able to point at a relay and read other information currently passing across that relay, or one of its short-term keys. So it affected a large number of different services. We're going to focus on web servers, primarily because that's where we saw the most going on, but it affected everything from mail servers to database servers. So when this vulnerability first came out, we decided to try to track its global effect on the entire internet. We did this by scanning the entire internet for the vulnerability: altering the ZMap scanner and scanning the IPv4 address space on a regular basis, scanning the Alexa Top 1 Million websites, scanning mail servers and database servers, and just looking at who was vulnerable. But we did this in a fairly careful way. We didn't want to take advantage of the actual vulnerability itself, and we didn't want to leak any private data from other people's servers. So instead of actually exploiting the vulnerability, we took advantage of a rather lucky quirk in the specific vulnerable version of OpenSSL. The RFC in which the Heartbeat extension is defined essentially says: if I send you a zero-length request that has no data to be echoed back, you should drop that request — it's not according to spec, so just drop it. But the vulnerable version of OpenSSL, instead of dropping that request, would respond with some random padding. It wouldn't respond with any data, but you would get a response back. Only the vulnerable version of OpenSSL behaved like this; the patched versions fixed it, and other TLS libraries — GnuTLS, for example — all followed the RFC and didn't show this behavior. Essentially, this let us check whether a server was vulnerable without ever exploiting the actual Heartbleed vulnerability. Probably what is of most interest to people are the top 100 websites, and the top 100 websites actually did fairly well at patching. All of the top 500 websites were patched within the first 48 hours, by the time we started our scanning. We can still look back at them, though, because these are very large organizations that are in the press and media spotlight: most of them put out press releases and answered questions from reporters. By aggregating those pieces of information, we found that about 44 of the top 100 websites were initially vulnerable. That's a fairly large percentage, and it's a lower bound — some organizations never responded and didn't put out any information — but those are the ones that confirmed they were vulnerable. And a handful of websites were still vulnerable 24 hours later.
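Here, as a rough sketch, is what that zero-length-payload check can look like on the wire. This is not the team's actual scanner: the record framing is simplified, it assumes the connection is at a point where the server will accept a plaintext heartbeat (the widely circulated test scripts sent it right after the handshake messages), and names are illustrative. The telltale sign is that the vulnerable OpenSSL answers at all, while compliant implementations stay silent.

    // probe.go: a rough sketch of the non-invasive Heartbleed check described
    // above -- send a heartbeat request that declares a zero-length payload and
    // see whether the server answers at all. Vulnerable OpenSSL 1.0.1 builds
    // replied with padding; RFC-compliant stacks silently drop the message.
    package probe

    import (
        "net"
        "time"
    )

    // zeroLengthHeartbeat builds a TLS record of content type 24 (heartbeat)
    // carrying a request (type 1) with payload_length = 0 and 16 bytes of padding.
    func zeroLengthHeartbeat() []byte {
        body := append([]byte{0x01, 0x00, 0x00}, make([]byte, 16)...) // type, len=0, padding
        rec := []byte{0x18, 0x03, 0x03, 0x00, byte(len(body))}        // type 24, TLS 1.2, length
        return append(rec, body...)
    }

    // looksVulnerable reports whether the peer answered the zero-length request.
    func looksVulnerable(conn net.Conn) bool {
        conn.Write(zeroLengthHeartbeat())
        conn.SetReadDeadline(time.Now().Add(5 * time.Second))
        buf := make([]byte, 1)
        _, err := conn.Read(buf)
        return err == nil // any reply at all is the telltale behaviour
    }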
The sites still vulnerable after 24 hours included Yahoo, Imgur, and Stack Overflow, and these were the sites against which we saw active attacks: people were stealing Yahoo credentials for Yahoo Mail accounts in the wild 24 hours after the vulnerability was released. Within 48 hours, all of these had been patched. However, the top 100 websites are fairly special. These are sites that probably have not just a dedicated sysadmin but dedicated teams of sysadmins looking after them. What about the other websites? We performed our first scan approximately two days after the vulnerability was released. The first thing we found is that only about 45% of websites support HTTPS at all, which is a discouraging place to start. Of those, approximately 60% of HTTPS sites supported heartbeat. But that doesn't mean they were all initially vulnerable: other software such as GnuTLS and Microsoft IIS supported the extension but didn't have this bug, so they weren't vulnerable the way OpenSSL was. If we split that out further, approximately 91% of those sites used OpenSSL to facilitate TLS connections — we could tell from the Server header strings reported back by the web servers — and 9% were using other vendors such as Microsoft IIS. So we can estimate that at most approximately 55% of HTTPS websites were likely vulnerable when the Heartbleed vulnerability was first announced. That provides an upper bound. We can estimate a lower bound as well, because two other pieces of functionality were introduced alongside the heartbeat extension in OpenSSL 1.0.1 in 2012: TLS 1.1 and TLS 1.2. While nobody was really thinking about the heartbeat extension — nobody was testing for it before Heartbleed happened — people were checking for TLS 1.1 and 1.2. Those are major features whose deployment people were interested in. Ivan Ristić, who runs SSL Pulse and the Qualys SSL Labs site where you go to check whether your HTTPS site is correctly configured, had been scanning for the deployment of these two protocol versions. What we found is that about 32% of websites supported TLS 1.1 or 1.2 prior to the Heartbleed vulnerability, and that lets us estimate that at a minimum approximately 24% of HTTPS websites were initially vulnerable. So that lets us ballpark it at between 24% and 55%. In the beginning, a lot of the news reports said that somewhere between 60% and 65% were vulnerable, based simply on how many sites used Nginx or Apache. That may have been a little high, but not by much: a very large fraction of the internet was vulnerable. When we go out one more layer and look at the full IPv4 address space, we see yet another story: only approximately 11% of HTTPS hosts supported heartbeat, and 6% of hosts were vulnerable. The main reason is that most HTTPS hosts in the IPv4 address space aren't websites. They're not the Googles or the Yahoos; they're embedded devices. They're home routers, video cameras, SCADA devices we'd hoped would never see the light of day, and everything in between. For the most part these run much slimmed-down HTTPS software and never supported the heartbeat extension in the first place.
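To make the arithmetic behind those bounds explicit before moving on (this is just a restatement of the figures above, not an additional measurement):

    upper bound: 0.60 (HTTPS sites supporting heartbeat) x 0.91 (share of those running OpenSSL) ~ 0.55, i.e. about 55%
    lower bound: the 32% of HTTPS sites already speaking TLS 1.1/1.2 (and therefore already on OpenSSL 1.0.1-era code), restricted to the OpenSSL servers among them, gives roughly 24%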
Still, about 6% of HTTPS hosts were vulnerable, and the sad thing about these embedded devices is that almost all of them remain vulnerable today. We know embedded devices aren't really patched, even when patches are available — and in most cases they aren't, and users don't go out and apply them. These devices range across everything: we really did see SCADA devices that were vulnerable to Heartbleed, and I think the strangest case was a pizza point-of-sale system that we saw repeatedly showing up as vulnerable. When we look at patching behavior, we actually did fairly well on the top 100 and the top 1 million websites. If we expected that between 24% and 55% of sites were vulnerable when the vulnerability was first released, we were down to approximately 11–12% within the first 48 hours. That's fairly impressive; as an operator community we did pretty well at getting our websites patched. What's a little sadder is the IPv4 address space, and the point where the Alexa websites stagnated: around 4% of sites remained vulnerable for quite a while. We're now down to somewhere just below 4%, and unfortunately the number has stayed there. These are big websites — the top 1 million — and approximately 4% are still vulnerable today. One thing jumps out right away: there's a large drop where the number of vulnerable hosts on the internet is nearly halved. What happened there is that a couple of very large ASes, very large shared hosting providers, had every single one of their websites vulnerable. When the vulnerability came out, they patched every one of their virtual machines, and the number drops drastically over a couple of days — you can see three or four specific hosting providers that all patched in that period. But if you subtract that one drop, the IPv4 address space itself is fairly stagnant; we haven't really patched it. So, was this fast enough? We said 24–55% were initially vulnerable, and the Alexa top 1 million looks pretty good. One way to judge whether we were fast enough is to ask when attacks first started to happen, and sadly, attacks started before we began our scans. We saw the first internet-wide scan for the Heartbleed vulnerability approximately 22 hours after the vulnerability was released. So yes, we did well, but not quite well enough. And that's an internet-wide scan; targeted scans were happening earlier. Whoever was attacking Yahoo was doing so before these large indiscriminate scans of the full IPv4 address space. To look at the whole attack picture, we watched a honeypot on Amazon EC2 — essentially an unused IP address logging all incoming traffic — and we also looked at packet traces from the Lawrence Berkeley National Lab and from ICSI in Berkeley, so we could see what attacks were arriving at three disparate networks. We found no evidence of attacks taking place prior to disclosure. That doesn't mean nobody knew about the bug, but it means nobody was doing large indiscriminate internet-wide scans for it. There may have been targeted attacks, but we weren't seeing large-scale exploitation in the wild. The first piece of scan traffic we saw came approximately 22 hours after disclosure and originated from the University of Latvia.
In total we observed approximately 6,000 distinct Heartbleed exploit attempts from almost 700 hosts within the first couple of weeks. Two major outliers jump out right away: Filippo's site, which let you test whether your website was vulnerable to Heartbleed, and SSL Labs, which had a similar test. What we found is that the number of hosts doing large internet-wide scans was actually fairly small. Only 11 hosts scanned both the Amazon EC2 honeypot and our network at Berkeley, and only about six hosts in total scanned more than 100 hosts. So there appeared to be very little internet-wide scanning. Of those six, several were academic institutions — Michigan and TU Berlin — that were looking for the vulnerability. There were, however, four other hosts, two of them in Chinese ASes, that were completely unidentifiable. We really don't know what they were looking for: they could have been academic institutions, they could have been attackers; we have no way of telling. We know they actually exploited the vulnerability, but that's about it. On the other hand, a very large number of hosts targeted our EC2 honeypot. It wasn't a well-known website, but with almost 200 hosts scanning that honeypot, it shows that attackers were going after that other address space — cloud providers, or possibly just more densely populated IP address space. Two weeks after disclosure, approximately 600,000 hosts on the internet remained vulnerable, and at that point we decided to go ahead and notify everyone on the internet who had a vulnerable host. This was one of the first times one of these large internet-wide vulnerability notifications had been done, so we didn't really know how it would go. We decided to run an experiment: split the hosts into two groups and see what the effect of an internet-wide notification actually was. We eventually contacted everyone who had a vulnerable host, but we split them into two groups and contacted them approximately two weeks apart. What we found was quite surprising for the research community. Going into this, we expected that mass notifications would have very little impact on who patched — that's what we had believed for a long time. We found the opposite: by doing these notifications, we could increase patching by almost 50%. That's fairly encouraging for us as researchers. A lot of times when we find vulnerabilities we say there's nothing we can do — we found them, but no one will pay attention. What we found is that people are paying attention. Even if you don't get responses to your emails, even if you don't get a positive reply back, people are actually reading these and actually patching. Even in cases where we received bounce notifications and thought nobody had seen the email, it often turned out to have reached a second person we didn't know about. So even when it looks like wasted effort, it actually increased patch rates — something to keep in mind as you do these large internet-wide scans. But unfortunately, patching wasn't enough. Cryptographic keys were also at risk: Heartbleed let you steal cryptographic keys.
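As a sketch of that experimental setup — the group sizes, addresses, and notification step here are illustrative, not the study's actual tooling — the randomized split looks roughly like this:

    // notify_split.go: a toy sketch of the A/B notification experiment described
    // above -- split the still-vulnerable hosts into two random groups, notify one
    // immediately and the other two weeks later, then re-scan and compare patch rates.
    package main

    import (
        "fmt"
        "math/rand"
    )

    // splitIntoGroups shuffles the vulnerable hosts and cuts them in half.
    func splitIntoGroups(hosts []string) (now, later []string) {
        shuffled := append([]string(nil), hosts...)
        rand.Shuffle(len(shuffled), func(i, j int) {
            shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
        })
        half := len(shuffled) / 2
        return shuffled[:half], shuffled[half:]
    }

    func main() {
        hosts := []string{"192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"}
        now, later := splitIntoGroups(hosts)
        fmt.Println("notify now:", now)
        fmt.Println("notify in two weeks:", later)
        // After re-scanning both groups, comparing the fraction patched in each
        // estimates the effect of notification (the talk reports ~50% more patching).
    }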
Nick is going to talk more about stolen keys in the second half, but essentially, if you were serving HTTPS with Apache or Nginx, someone could potentially pull down your cryptographic key. So not only did you need to patch your server, you needed to change your cryptographic key. And while we did fairly well at patching, the sad story is that the keys didn't go nearly as well. What we found is that only 10% of the sites we found vulnerable actually replaced their certificates. Even more discouraging, 14% of the websites that did replace their certificates reused the same vulnerable private key — so, no protection at all. And only 4% of vulnerable websites revoked their certificates. These are pretty dismal numbers; there's a lot of room for improvement here. So what do we learn out of all of this? I think the first thing that jumps out is that we have these large open source projects that we all depend on. We have OpenSSL, with something like 60% of websites depending on it for HTTPS, and the lack of attention and support for these critical projects is likely what led to this kind of massive vulnerability. These projects deserve more attention and more support from us. The HTTPS ecosystem is incredibly fragile. I wish I could say Heartbleed was the only major vulnerability we had last year; it was one of several. Look at POODLE, look at Schannel — POODLE is a little less severe than Schannel or Heartbleed, but between those vulnerabilities almost every HTTPS site was affected. One was remote code execution; the other let you steal credentials. These are bad, these are severe, and they keep coming — we just saw a second POODLE variant a matter of weeks ago. HTTPS remains incredibly fragile even today. The supporting PKI is not equipped to handle massive revocation. Nick will talk about this more, but essentially what we have in place for certificate revocation does not work right now. Never mind that only 4% of websites even tried to revoke their certificates — the ecosystem barely handled that; arguably it didn't. Web browsers were not aware of most of the revocations that took place. As we discuss, as a community, how to change revocation going forward, we need to consider that mass revocation is something we have to account for. We need to plan for the possibility that we might need to revoke every certificate on the internet, because that's essentially what happened here. We're not talking about a trickle, one person losing a private key here or there; we need to be able to handle large, correlated revocation. The recent advances in scanning let us respond quickly and measure the impact. For one of the first times, we could actually see the impact of one of these large vulnerabilities. On the positive side, it let us not only understand the impact and the patching, but also increase patching across the entire internet by almost 50%, which is incredibly encouraging. We have this fragile ecosystem, but we're able to spur on patching and to help out. At the same time, I think we have to take a step back and say there's still a lot of room to grow. And with that, I'm going to switch over to Nick. Hello, hello. Okay, this seems to be working. Hello everybody, I'm Nick. I'm the security engineering lead at Cloudflare.
And I'm going to talk to you today about Heartbleed, if you aren't sick of it already after this long year. Specifically, I'm going to talk about my personal experience — my experience at Cloudflare — with what happened during Heartbleed when it came out and what happened after the fact. So, has anybody read this article? This is a snippet from The Verge, and it is a kind of semi-fictional account of what happened during the disclosure of Heartbleed. I can paraphrase a little bit how that conversation actually went; it went a little like this. "So, do you use DTLS?" "No, not that I know of. Does anybody use DTLS?" For websites, nobody used Datagram TLS. "How about TLS heartbeats?" My answer was, "Well, what's a TLS heartbeat?" At this point barely anybody knew what this was; it was a very obscure feature. And the answer was, "Oh, it's stupid and there's a bug, you should turn them off." Right, so recompile OpenSSL with OPENSSL_NO_HEARTBEATS, and that was that. Heartbeats were off for Cloudflare. We have a pretty simple architecture, so we deployed it really quickly, and the answer was, okay, looks good; public disclosure should be around April 9th. I guess a lot of people have asked: why tell Cloudflare? Let me quickly describe what Cloudflare does. It's a reverse proxy. If you have a website, Cloudflare can sit in front of it and block malicious traffic — that's the red X up there — as well as serve cached, static content — that's the bright orange. So it keeps malicious traffic from reaching your website and reduces the load on it. For this to work, this cloud has to be closer to the visitor than it is to your website, so we have a global network where our nodes are close to visitors; that's how it works. There are over a million sites on Cloudflare, including banks, government websites, Bitcoin exchanges — almost every Bitcoin exchange is on Cloudflare — the IETF's website, Reddit; I could go on and on. So, lots of sites, but what Cloudflare does is very simple. It's essentially three services, or really two: DNS, and HTTP and HTTPS, with HTTPS powered by OpenSSL and Nginx. The architecture is very simple in that every machine we have can serve every site. Thinking back to Heartbleed, you can see why this would be a really, really bad situation for Heartbleed to hit. Anyway, it happened early. On April 7th at 10:27, OpenSSL published their advisory, and it hit Hacker News really quickly after that — within half an hour it was on the front page — and about an hour later we posted our standard Cloudflare "customer sites are patched, you don't have to worry about it" sort of post. It was a bug, and it was starting to gain some steam, and then about an hour after that there was a tweet from Codenomicon — and I think everyone knows what this is, that's the next slide. Heartbleed itself had been branded, and it went out to the mass media. So this became a really big deal. heartbleed.com had a logo; it hit the mainstream press; the "Heartbleed virus" — I don't know if you remember that, but people were saying there's a Heartbleed virus out there — and I knew it had gotten really bad when my mother called me and asked what was going on. So this was a big deal, and, well, we were finished patching, so we had some time to kill. What were we going to do?
At this point, we decided on three things. One was to help keep the scanner Filippo had written from falling over. The second was to turn our network into a large honeypot to see what kinds of attacks and scans were happening. And the third was to figure out what we were going to do about our certificates: quite a few of the websites that use our service use SSL, we had about 100,000 certificates, and the day after disclosure it wasn't absolutely clear that revoking them was something you had to do. So first, let's talk a little bit about the Heartbleed scanner. Filippo, who is now a Cloudflare engineer, wrote this server in Go: you type in your hostname and it checks whether your server answers malformed heartbeat requests. These are small ones, around 100 bytes, so that nothing beyond the standard frame is leaked; it shouldn't leak any information. We put it on AWS and then behind Cloudflare — and a shout-out to Kyle Isom from our team for helping keep it up. This is what it looked like from Filippo's server: on April 8th, up to 2,000 requests per minute. So this was a very heavily used tool — and that's nothing, because this is the next two weeks: 2,000 is the bottom tick right there. It ran at 10,000 to 20,000 scans per minute for the next two weeks, and it held up through about 200 million tests in the first two weeks. So, with the scanner up and running — and thank you to Filippo, wherever he is, he's somewhere in the room; stand up, thanks for the tool; I don't see him, but he's somewhere. In any case, this is what he found in terms of domains, and it was really bad at first. This is the 9th, two days after Heartbleed was originally announced: up to 30% of the sites he scanned were vulnerable. Luckily this came down — a lot of people used the tool in their automated testing to validate sites — and it got down to a low number pretty quickly. So with that up and running, back at Cloudflare we decided: well, what can we do? We can log every heartbeat we see coming in with a bad size, put that data on a shelf until now, and that's what we did. Here are logs from the 9th: 69% of the requests had a message size of 16384, which you might know as the largest power of two that fits in a signed 16-bit integer, but which you might also recognize as the hard-coded value in the ssltest Python tool that came out on the first day. 20% were 121 bytes, and those were actually from Filippo's site — and maybe the University of Michigan; were you guys scanning on April 9th? In any case, there were also a lot of requests that used the zero-length packet, which is another way to check whether a site is vulnerable. About a week later it looked about the same. So if there were people mass-exploiting this against sites, they were probably just using the basic ssltest Python script, and around 20% were still Filippo's tests. If you do the math, it seems that a lot of people were just scanning with Filippo's test; there wasn't a lot of mass exploitation. And flipping the numbers around, about 1% of Filippo's scans were against sites on Cloudflare. This is what the map looks like of where the attacks were coming from. I know IP maps aren't really that interesting — they don't tell you much — but this is where it is.
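As a sketch of the kind of logging just described — not Cloudflare's actual Nginx patch — the check amounts to comparing the length a heartbeat request claims with the bytes that actually arrived, and bucketing the claimed sizes (the 16384 from the widely shared ssltest script, the ~100-byte probes, the zero-length checks):

    // hb_log.go: a sketch of flagging heartbeat requests whose declared payload
    // length doesn't match the bytes actually sent, and recording the claimed
    // size so requests can be bucketed later. Field names are hypothetical.
    package main

    import (
        "encoding/binary"
        "fmt"
    )

    type hbEvent struct {
        Claimed int  // payload length from the heartbeat header
        Actual  int  // payload bytes really present in the record
        Bad     bool // true when Claimed > Actual (a Heartbleed-style request)
    }

    func classify(record []byte) hbEvent {
        if len(record) < 3 {
            return hbEvent{Bad: true}
        }
        claimed := int(binary.BigEndian.Uint16(record[1:3]))
        actual := len(record) - 3 // ignoring padding for simplicity
        return hbEvent{Claimed: claimed, Actual: actual, Bad: claimed > actual}
    }

    func main() {
        // A 16384-byte claim on a near-empty record, as the ssltest script sent.
        probe := []byte{0x01, 0x40, 0x00}
        ev := classify(probe)
        fmt.Printf("claimed=%d actual=%d suspicious=%v\n", ev.Claimed, ev.Actual, ev.Bad)
    }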
There are some strange spikes in the map, like in Iceland, but don't read too much into it. Now, the question — and this is what we were thinking when it first came out — was: why was Heartbleed really so dangerous? Well, it's a kind of layer-six request that doesn't necessarily get logged; people don't often log parts of handshakes unless they have a specific IDS rule or something like that. And it's really bad in that 64K of server memory can be exfiltrated by one request, and that can hold login info, session cookies, and perhaps TLS private keys — we didn't know. If you look at the diagram here, that's the heap right there. When a new request comes in, it gets put on the heap, and anything that was previously freed from the heap is still sitting there above it. So we knew there were passwords and cookies — people were finding those right away — and the question was: would the key be there? Is the key going to sit above one of these requests? So we looked at the code. And what did the code say? Well, it said this can't happen, at least not in Nginx: the key gets loaded right away and therefore sits at the bottom of the heap, and any allocations for incoming requests are going to be higher up, so they're not going to be able to read the original key. Nginx itself is single-threaded, so a request isn't going to catch another thread halfway through an operation. And OpenSSL has a big-number library that clears memory when it's done, so for a TLS handshake, all of the cryptographic material is going to be cleared by OpenSSL. At least, that's what we thought. We weren't sure — I mean, I just looked at some code, and what do I know? So we launched the Cloudflare Heartbleed Challenge. This was something we did to crowdsource an answer. We set up a standard Nginx, outside of Cloudflare, on a third-party VPS, with the vulnerable version of OpenSSL, and we said to all of you: come and see what you can do — and to show proof, give us a message signed with that private key. What did we find? Well, for the first couple of hours there was trolling. Basically, anything that you post to the page gets put into memory in the Nginx process, so people were posting private keys in there, and posting what looked like — you can see my name there, Nick — what looked like a passwords file. So everyone was getting really confused: "there's a private key in my Heartbleed response!" But nobody was actually getting the key, until we saw this tweet from Fedor. We took a look — this is the Cloudflare office, that's me pointing at a television screen — and yeah, he had solved it. So congrats to Fedor. And he wasn't the only one: that was within the first day; there were 12 people within the first 48 hours, and in the end about 25 people solved it, recovered the real key, and sent proof. So, can you steal private keys? Absolutely yes. It was solved in under 10 hours, and private keys can definitely be exposed. But another thing we did was log where in memory the Heartbleed requests were landing and compare that to where the private key was initially allocated — and they never overlapped. So how did it actually get solved? Well, there was a second bug in OpenSSL.
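A quick aside on how entries to the challenge could be verified: a valid proof is just a message signed with the stolen private key, checked against the public key in the site's certificate. The sketch below generates a throwaway key pair to stand in for both sides; in the real challenge the public key comes from cloudflarechallenge.com's certificate and the private key is whatever the solver extracted. This is illustrative, not Cloudflare's actual verification code.

    // verify_proof.go: verify a "proof" message signed with an RSA private key
    // against the corresponding public key (errors ignored for brevity).
    package main

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Stand-in for the server key pair; the solver would sign with the key
        // they extracted, and the verifier uses the certificate's public key.
        priv, _ := rsa.GenerateKey(rand.Reader, 2048)

        msg := []byte("proof that I hold the challenge private key")
        digest := sha256.Sum256(msg)
        sig, _ := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, digest[:])

        err := rsa.VerifyPKCS1v15(&priv.PublicKey, crypto.SHA256, digest[:], sig)
        fmt.Println("signature valid:", err == nil)
    }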
Who would have guessed, looking through that code? If you dump the memory around the request, all the places in red are where private-key material did exist at one point. It turns out some temporary variables were not wiped. This is the code that cleans up afterwards: there's BN_free versus BN_clear_free, and in certain cases in the Montgomery multiplication the partial pieces weren't cleared. We can do a little bit of math to show how people actually solved it with this. In RSA you have a few different things: a public exponent e, two primes that are multiplied together to make the modulus in the public key, and a private exponent d. If you get any one of p, q, or d, you get the whole private key. So what people did was take every 128-byte block they saw in exfiltrated Heartbleed data and just try to divide it into the modulus. If it divided evenly, there you go — it's factored. This is how nine out of the ten people solved it: it turns out one of the prime factors is just sitting there on the heap, and after tens of thousands of requests you might luck upon it. But one enterprising gentleman, Rubin Xu at the University of Cambridge, used a much cleverer method, Coppersmith's attack. That's a lattice-reduction attack where you only need about 60% of the bits of one of the prime factors to recover it, and it depends on the public exponent being small — for performance reasons the RSA public exponent is kept small so that public-key operations are fast. He solved it in only 50 requests, which was really interesting. So, private keys are gone, right? What does that mean? Revocation time. You know, the internet was built for this, right? The people who designed the PKI said, yeah, people are going to revoke 100,000 certificates in 24 hours; that's how we designed the system. Right. This is what SANS reported — and that's mostly us, actually. That's the whole internet, but it's mostly Cloudflare. The blue line is revoked certificates, and that's April 7th, right after Heartbleed, when everybody was revoking; the green spike is Cloudflare revoking. But once we had revoked all these certificates, we found out that it didn't really mean browsers would stop accepting them. Let me go into that quickly. There are three mechanisms that were built for handling certificate revocation in X.509. The first is the certificate revocation list, which is just a flat file listing revoked certificates. Did us revoking 100,000 certificates break CRLs? Heck yeah, it did. The CRL from GlobalSign grew from 22 kilobytes to almost five megabytes; this basically DDoSed the CRL server. Lucky for GlobalSign, Cloudflare was in front of their CRL server — unlucky for us, but we're used to that kind of traffic. You can see waves every three hours here, which had to do with the cycle on which CRLs were updated and Microsoft Internet Explorer downloading them. Yeah, it was pretty rough. Anyway: CRLs, broken. How about OCSP? OCSP is the Online Certificate Status Protocol, a question-and-answer protocol: is this certificate revoked, yes or no? Well, it's really broken too, and Chrome has known this for a while and stopped checking it.
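The divide-every-block-into-the-modulus trick described a moment ago can be sketched with Go's math/big. The window size, alignment, and byte order here are assumptions for illustration (in practice the window would be the byte length of a prime factor, e.g. 128 bytes for a 2048-bit modulus); the point is simply that if a leaked chunk happens to be one of the primes, a single modular division reveals it.

    // factor_check.go: scan leaked bytes in fixed-size windows and test whether
    // any window, read as an integer, divides the RSA modulus n.
    package main

    import (
        "fmt"
        "math/big"
    )

    // findFactor returns a nontrivial divisor of n if one of the windows over
    // `leak` happens to contain a prime factor, or nil otherwise.
    func findFactor(leak []byte, n *big.Int, window int) *big.Int {
        one := big.NewInt(1)
        for off := 0; off+window <= len(leak); off++ {
            cand := new(big.Int).SetBytes(leak[off : off+window])
            if cand.Cmp(one) <= 0 || cand.Cmp(n) >= 0 {
                continue
            }
            if new(big.Int).Mod(n, cand).Sign() == 0 {
                return cand // found p (or q); the rest of the private key follows
            }
        }
        return nil
    }

    func main() {
        // Toy example: n = p*q with tiny primes, and p embedded in a fake "leak".
        p, q := big.NewInt(61), big.NewInt(53)
        n := new(big.Int).Mul(p, q)
        leak := append([]byte{0xde, 0xad}, p.Bytes()...) // p hiding among leaked bytes
        if f := findFactor(leak, n, 1); f != nil {
            fmt.Println("recovered factor:", f)
        }
    }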
With OCSP, if the browser hard-fails, an unreachable OCSP responder locks you out of sites on certain networks, especially behind captive portals; and if it soft-fails, somebody on the network can just drop the OCSP response and, voilà, the site is no longer revoked. So plain OCSP, requested from the browser, really doesn't work or scale to the degree we'd want. How about CRLSets? This is Google's own method: they collect the CRLs they can from different certificate authorities, put them together, and install them in the browser, so you get these sets of CRLs with browser updates. I shouldn't say all the CRLs — it's specific certificates. And this is what we found out: they essentially only cover EV certs, and only certain ones, and if the browser doesn't get updated, you don't get an updated CRLSet, which is kind of bad. So cloudflarechallenge.com — once it was solved, we revoked its certificate. It was not an EV cert, but Chrome did mark it as revoked, because they added it manually into a JSON file. None of the 100,000 Cloudflare certificates, though, were being marked as revoked by Google Chrome. So we basically made a hack — I don't know if you can read this, but it's the most efficient four lines of C++ revocation ever: it's in Chromium, and it revokes all of our certificates. This is a hack, it's not scalable, you shouldn't do it this way, but it was how it had to get done, because there was no working way of doing revocation. So yeah, revocation is pretty much broken. What can we do? Shorter certificate expiration periods could help; that would at least help with CRLs, because you wouldn't have to keep old certificates on the list for very long, so the sets shrink fairly quickly. OCSP Must-Staple is an extension that requires the server to send the OCSP response in the handshake; that can help too. Certificate Transparency is something else that has been thrown out there to help solve this. But none of these are widespread and none of them are fully deployed, so something has to change for revocation. So, in summary, there are three things that we did after Heartbleed: we kept the scanner from falling over, we turned our network into a honeypot, and we definitively answered that yes, you have to revoke your certificates — there's no excuse. There are a lot of takeaways from Heartbleed, and I don't want to be the one to tell everybody what to learn from this, but open source disclosure is hard, and this really was the first of what turned out to be many open source disclosures this year, and we learned a lot of lessons about how to do it correctly. Other things that were pointed out, which seem obvious but weren't the practice in OpenSSL: features should be disabled by default. Nobody who installed OpenSSL 1.0.1 necessarily wanted heartbeats, so turn features off by default. Another one is: expect the unexpected. That's sort of obvious in computer security — we didn't really learn it from Heartbleed, but it was definitely a shock when it came. Also, of the attacks on real sites that Cloudflare saw — and we saw quite a few, as I mentioned — a lot were just scans from people trying to see whether their own sites were vulnerable, so that's a reassuring sign. And crowdsourcing was effective for the Heartbleed Challenge.
I couldn't find the private keys, and luckily there are very smart people out there who were able to find them. Is anybody here in the auditorium a winner of the Cloudflare Challenge? Anybody? I don't see any hands — I don't have my glasses — so congratulations, wherever you are. And the last thing is that revocation needs a solution. The final conclusion from all of this is really: support OpenSSL. I messed up my microphone there, but thanks. No, really — this is part of critical infrastructure for the world, not only for websites but, as Zakir was saying, for embedded devices, and these folks need support. So please support OpenSSL. I'm done, thank you. Okay, a quick announcement before we start the Q&A. If you are going to leave the room, please get up now and go out quietly without talking so we can do the Q&A nice and quickly; you will only be able to leave the room at this point. If you have any questions, please line up at the microphones. We'll start with microphone number one. Thanks — not really a question, just thanks for the talk. We had been following your progress even before this talk, through the paper you published, and we just want to contribute a missing piece of information. The first "attacks", as you called them, 24 hours after the disclosure — those were us. We weren't really attacking. We started working on a scanner about 20 minutes after the public disclosure, and it was the top scanner in the region, and it used multiple different methods. As we went, we came to the conclusion that there might be firewalls deployed in between that could stop some of the packets, so we used several different packets in every request — so that was also a scanner. Quick interruption: if you're leaving from the ground floor, please only use the front left and front right doors; if you're up there, you can use any door you want, but down here please use only this door and this door to leave the room. Thank you. Does your mic work yet? Hello? Yeah, thanks for the comment. I don't think I saw anything specific to your account, but no, it's great to know. It's one of those things that's really hard to tell from our perspective — whether something is an attack, whether someone is malicious, or whether they're just scanning. A lot of the hosts we see don't carry any information that identifies their intent. Microphone number two, please. Hello? I have a question for Cloudflare: could you stop blocking Tor users, please? You make the internet more centralized every day. All these fancy homepages — come on, the Tor network is so small, you could handle all of its bandwidth with a finger snap, and you're blocking all the exit nodes all the time. We're looking into solutions for Tor. The main problem is that there is a lot of spam coming over Tor toward websites, and filtering the good from the bad is something we're working on; we're focusing on that this year, so look for something coming up this year for Tor users. Thank you. Next up, we have a question from our Signal Angel, asked on IRC. Yeah, thank you. Given that the maximum TLS record length is 16 kilobytes, how is it even possible to get back 64 kilobytes? You can split a heartbeat message over multiple records — I believe, I'm pretty sure that's how it works. Microphone number one, please. What is your opinion on LibreSSL? Go for it.
Um, I'm not sure I'm the most qualified person to answer that, to be honest. I think it's well intentioned; I think the community acknowledges that there are a lot of problems with OpenSSL in terms of its maintainability. It's one solution; I'm not sure it's the correct one. Yeah, I think LibreSSL works for its goal, which is OpenBSD. They have a portable version by now as well, as they did with all their other forks, and it's much cleaner code. OpenSSL still tries to support really old systems like 16-bit Windows and bullshit you really don't need, so the unmaintainability will remain for a while. Yeah, I agree that OpenSSL supports quite a lot of different platforms, and some people need that. But the different forks of OpenSSL that have appeared recently — LibreSSL, BoringSSL — are taking patches from each other, so I think the more people looking at this project, the better. We have time for one more question. Microphone number three, please. Hello. In both your talks you showed the number of vulnerable sites decreasing and then increasing again afterwards. Can you explain that? I think those were small bumps that were just hosts coming and going. I don't think we saw a lot of websites that became vulnerable again; I think a lot of it is just measurement noise, websites that come and go between different scans. I don't think there were any large jumps there. When it comes to the data I was showing, that's from Filippo's scanner, and not everybody was scanning the same domains every day, so that's just normal variance. Thank you very much. So, Zakir and Nick — please give our speakers a warm round of applause.