OK, I will give this talk in English. I'll tell you a bit about TLS. And it's nice that we just had a talk about DNSSEC, because I have a couple of slides where I will tell you that I don't think DNSSEC is a good solution for anything. So maybe we can have a controversial debate about that later, yeah? OK. Last year at Easterhegg I gave a talk on how broken TLS is. This time I'm a bit more optimistic: I'm telling you, things are definitely improving. We still have a lot of problems, but there's work happening that is improving the situation, definitely, and I think TLS is much safer today than it was a few years ago. Why do we care about TLS at all? It's the most important crypto protocol on the internet; it's what we use to secure our banking, our email, whatever. And TLS has been under attack in recent years. We had things like the BEAST attack, the CRIME attack, Lucky Thirteen. Then we had things like Heartbleed and BERserk, which were not problems in TLS itself but in implementations. And recently we had POODLE and FREAK. So there are a lot of attacks happening. One of the biggest issues with TLS has always been this thing with certificates. When we have a web page we want to secure via HTTPS, it usually has some kind of certificate. What does that do? The certificate tells you: this encryption key belongs to this web page, this domain name. So if you go to Google, the certificate tells you this is really Google and not someone else. The way this is done is that there are a number of certificate authorities, I think more than 100, plus sub-CAs, which number several hundred, and each of them can issue certificates for any domain. And this causes problems all the time, just in the last two years. Last year ANSSI, a French government-controlled CA, issued wrong certificates for Google. Then an Indian certificate authority issued wrong certificates for Google and Yahoo.
Then we have this whole Superfish issue, where software pre-installed on Lenovo laptops completely broke certificate validation. And then there was another piece of software called PrivDog, which came from a company founded by the CEO of Comodo and which also broke certificate validation. Then Microsoft recently had the problem that someone registered a certificate for live.fi. The same thing happened with XS4ALL. And then, I think a week ago, Google found some certificates issued by an Egyptian company. And the very latest news, from two days ago and yesterday, is that Google and Mozilla decided to throw CNNIC out of the browser. So, a bit more detail. This really is the big problem: we have a large number of certificate authorities, and they are all on the same level. So it doesn't really matter if your certificate authority is especially good. For example, I saw things like a German email provider advertising: we have a certificate from a company in Switzerland, and that's more trustworthy than a certificate from the United States. That's total rubbish, because the certificate authorities in the United States can still issue a certificate for your domain. So in general, the problem here is that the lowest level of security in the CA system decides, because if you have one certificate authority that does bad things, that creates a problem for the whole system. Yeah, this was the very latest example. CNNIC is a Chinese organization, formally an independent organization, but controlled by the Ministry of Information, so it's more or less government controlled. And they issued a sub-CA certificate, which was itself able to issue further certificates, to an Egyptian telecommunications security company, whatever. And this Egyptian company used it for man-in-the-middle interception of TLS traffic: they created live certificates for every web page someone was viewing. They say they used it only internally.
No one can check if that's true, but that's what they say. And Google noticed this because Chrome has a feature that does pinning for Google's own domains and alerts Google if it sees a wrong certificate for a Google domain. So they got caught doing this, which is clearly a violation of the rules for certificate authorities. And the very latest development is that Google said they will remove this CA, and Mozilla said the same one day later. And they both said CNNIC can reapply to be included in the browsers if they do certain things, if they increase transparency. This really is a very new development, because for a very long time it was this frustrating situation: there were always problems with the certificate authorities and nothing happened. Like in 2011, there were many, many incidents at Comodo, which is now the biggest CA of all. And there was a talk by Moxie Marlinspike, which is on YouTube and which I suggest you watch if you're interested, where he says: all these things happened, but what happened to Comodo? Nothing. It didn't even hurt their business; they could just continue. Then there's this other very recent problem: how do these certificate authorities check that a domain belongs to you? What they often do these days is send you an email to an address like administrator@ or hostmaster@, something like that. So what can you do? If someone offers some kind of free mail service or mail-forwarding service and you can register such an administrator or hostmaster name, then you can register a certificate. Microsoft had the problem that someone was able to create hostmaster@live.fi, so he was the hostmaster of the Finnish Microsoft Live website. Some days later, someone else said: I created hostmaster@live.be, the Belgian Microsoft Live web page, and I did this five years ago, and I told Microsoft, and they never answered.
So he had this email address for five years. And XS4ALL had the same problem, and there are probably a lot of others. So if you offer some kind of email service where people can register a mail address, you must check that you don't allow these names. And these are in a standard: the Baseline Requirements, which are the standard for operating a certificate authority, contain a fixed list of mail aliases that you have to disallow if you offer some kind of email service. Then another problem is revocation. Revocation means: we have a certificate, we know it's been compromised, the private key has been stolen, something like that, or Heartbleed, where we knew that people could extract keys, and we want to somehow mark this certificate as invalid. There are two technologies for that, CRL and OCSP. With a CRL, the certificate authority publishes a long list of revoked certificates. Obviously that doesn't scale, because, for example, after CloudFlare revoked all their certificates, the CRLs grew to several hundred megabytes. OCSP is the more modern standard: the certificate authority has a server where you send a hash or ID from the certificate, and it answers: yes, this certificate is still valid. The problem is the way the browsers implemented this in the past: if there was no answer from the OCSP server, and they are down all the time, the browser just accepted the certificate anyway. So if you had a bad certificate and wanted to prevent the OCSP check from happening, you just had to block that connection. So it didn't really provide any real security. Google then decided: OK, if it doesn't provide security, we can switch it off. What Google and Firefox are doing now is distributing lists of high-profile revoked certificates centrally. But this doesn't scale. So if there's a bad certificate for google.com, they will add it to their list.
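The alias check described above can be sketched in a few lines. This is an illustrative sketch only: the function name is hypothetical, and the alias list reflects the names mentioned in the talk plus the usual Baseline Requirements set; check the current CA/Browser Forum document before relying on it.

```python
# Sketch: a mail provider rejecting mailbox names that certificate
# authorities accept for domain-validation email. The alias list follows
# the CA/Browser Forum Baseline Requirements (verify against the current
# version); the function name is made up for this example.
RESERVED_ALIASES = {"admin", "administrator", "webmaster",
                    "hostmaster", "postmaster"}

def registration_allowed(local_part: str) -> bool:
    """Return False for mailbox names a mail service must not hand out."""
    return local_part.strip().lower() not in RESERVED_ALIASES

print(registration_allowed("alice"))       # True
print(registration_allowed("Hostmaster"))  # False
```

Note the case-insensitive comparison: "Hostmaster" at live.fi would have worked just as well as the lowercase form.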
But if you have a personal web page and you want to revoke your certificate, you have no way to revoke it in a working way. And there's a technology called OCSP stapling, where the idea is that the web server caches the OCSP reply and sends it in the TLS handshake. That makes the whole thing a bit more reliable. The problem is that you need to indicate to the client somehow that you use this technology, and the client has to check that you're using it. Because if you have a soft-fail mode, it's the same: an attacker can just not send the OCSP staple, and it won't help. There's a draft that lets you indicate in the certificate that it must be checked with OCSP stapling, but that's just a draft right now. And one problem I noticed recently is that the implementation of OCSP stapling is really bad. If the OCSP server is not available, Apache will not use the cached OCSP reply if it's older than an hour. So the whole point of OCSP stapling, it doesn't really work in Apache right now. And you cannot configure it either way; we can talk about that later. But yeah. Was there a question? And then we recently had some attention for this issue of man-in-the-middle proxies. Basically, if you have HTTPS, some people have a problem, because they want to censor things, do some youth protection, or do some kind of filtering. And this doesn't work if your connection is end-to-end encrypted. So what they do is install a root certificate in the browser and generate certificates on the fly for each web page. And you can do this in a completely broken way, like including one root key in your software and installing that software on thousands of laptops. This is what Lenovo did. Of course, what you can then do is extract this root key, and then you have a man-in-the-middle attack against all these Lenovo laptops. That was Superfish.
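The Apache behaviour complained about above can be modelled in a few lines. This is a toy model, not Apache's actual code or API: the class, method names, and the fixed one-hour cutoff are assumptions taken from the talk's description.

```python
# Toy model of an OCSP stapling cache, illustrating the behaviour
# described above: when the responder is unreachable and the cached
# reply is older than a cutoff, the server staples nothing at all
# (soft fail), instead of serving the last known-good response.
MAX_AGE = 3600  # seconds; the one-hour cutoff mentioned in the talk

class StaplingCache:
    def __init__(self):
        self.response = None
        self.fetched_at = 0.0

    def staple(self, now, fetch_fresh):
        """Return the OCSP response to staple into the handshake, or None."""
        try:
            self.response = fetch_fresh()
            self.fetched_at = now
        except OSError:
            # Responder down: only reuse the cache while it is "fresh".
            if now - self.fetched_at > MAX_AGE:
                return None  # handshake proceeds without a staple
        return self.response
```

A more robust policy would keep serving the last good response until its signed nextUpdate time expires, since the response itself carries its own validity period.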
And then it was also found out that there's a way to circumvent the Superfish check; it was completely broken. Then there was another piece of software, and I found that one. There was a thread on Hacker News where someone asked: there's this PrivDog, is this like Superfish? And I had a look at it and said: oh no, it's worse than Superfish, because it completely disables verification of certificates. And what was interesting was that this software had some connection to Comodo, which is a big certificate authority. And several antivirus applications do this kind of interception; for example, Avira does it by default. It's not completely broken, so it's not like Superfish or PrivDog, but still, all the antivirus applications I checked decrease TLS security in some way. They don't verify the certificates in a really modern, sane way. But this is not directly a problem of the CAs or TLS. If you install software on your local system that does bad things, there's nothing the protocol can do against that. So if you have some kind of badware, I don't know, malware or whatever on your system, there's nothing you can do in the protocol. And in general, I think not only should you be careful if you implement something like this, I think it's generally a bad idea to intercept TLS traffic in that way. If you want to do filtering, then do it on the client, in the browser, after the decryption. Filtering on the TLS stream is generally a bad idea. So, now it's very easy to say: this whole CA thing is broken. A lot of people like to do that. The problem is: what's our alternative? And I very often have discussions where people say this whole CA thing is bad, it's all corrupt, it's broken, which is true. But then they say: OK, then I don't use HTTPS at all, and they use unencrypted HTTP. I don't know, there's a German hacker organization which does kind of argue like this, yeah.
And one thing to keep in mind: with all these downsides, the CA system has one upside. It's a working system that's very usable. You can tell your users: there's a green lock, and if the green lock is there, it's secure. That's something people can understand. The security this system provides, with all its limitations, is very usable. Compare that to a system like the PGP Web of Trust, which is maybe more secure if everyone knows what they're doing, but it's not usable for normal users. And one of the things that's very often proposed is DANE. We heard about this before: it's based on DNSSEC, so you use the DNS system to distribute the certificates. I have to tell you something: DANE won't provide you any security today, because nobody is checking these records, and it's very uncertain whether it will ever do anything useful. And I'm surprised, because there are other alternatives or improvements to the CA system that are much less popular. For example, I often write articles about TLS issues, and very often when I write something about CAs, the first comment is: we need DANE. So what's the problem with DNSSEC? I think the major problem with DNSSEC is really the complexity of deployment. If you want secure DNSSEC, you need many pieces to come together. You need the root zone signed; OK, that's done. You need your top-level domain signed; that covers, I think, two thirds of them. But if you have one of these fancy new domains, I don't know, your company name or whatever they are, then you may have problems; or if you live in a country where your TLD operator says we're not interested in DNSSEC, you just cannot deploy it without changing your domain name. Then you need a domain registrar that supports it. For example, some time ago, when I still believed DNSSEC was something interesting, I asked my domain provider if I could have DNSSEC, and they always said: no, not yet.
We're not done with that yet. So one of the problems is that you are relying on other people to do something. Depending on who you are, you cannot really deploy it yourself without the help of others. And then you need your DNS server to... yeah, question? Yes, I think there's a workaround, DLV, that's right, where you have... but is that working right now? We can have that discussion later, OK? And then you also need something on the client to verify the signatures, because you can sign as much as you want; if your client doesn't check it, it doesn't make sense. So I'd say working DNSSEC deployment is near zero. You can sign your records, OK, but if the client doesn't check it, then it's not a working deployment: you sign something and nobody checks the signatures. And how exactly should this work on the client? It's not even clear to me what you could do; that's the next slide. What could you do? One option is to say: our provider has a DNS resolver and will check the signatures. Do we want this? We would be trusting our provider. I don't really want that. For example, I'm in a cafe using the wifi; do I want to trust the wifi to give me correct DNS answers? No. Should our operating systems ship validating DNS resolvers? Today they don't. You could say operating systems should change, but right now most operating systems don't have validating resolvers; maybe this will happen in the future, but right now it's not the case. Or, the other way, you could add a DNS resolver to your application. That's what these Firefox plugins do, for example, but that's also not happening right now. Yes, there's a plugin available, but how many people have it installed? It's probably below one percent. So to me it's not even clear how you want to deploy DNSSEC on a client.
Yeah, and there are some more problems. There's this general issue with DNS: reflection attacks. DNS is UDP, and UDP doesn't verify your sender IP. So you can send a packet to a DNS server with a forged sender IP, and it will send the answer to someone else. And if the answer is bigger than the query, you can use this for a DDoS attack. And DNSSEC has very large records, so you get much better DDoS attacks through DNS reflection. This can be fixed: a server can say, if the answer is too big, I will not answer over UDP, I will set the TC flag and answer over TCP. The question is: do people do this in the real world? And I recently had a discussion with someone where I said: DNSSEC has this reflection problem. And he said: but you can fix this by sending large answers only over TCP. And I said: yes, it's nice that you can do that, but your DNS server doesn't do it. Then we have a lot of bad crypto. There are a lot of 1024-bit RSA keys in the DNSSEC system right now. You could use longer keys, or you could use elliptic curves, which also somewhat helps against this reflection problem because the signatures are shorter; but right now you have a lot of crypto in there that is not really up to date. And then, what I also find a very big issue: you're essentially delegating your trust to the top-level domains, which are nation states. For example, if you have a Chinese domain name, then CNNIC, the CA that was just removed from the browsers, is responsible for your domain name. And you cannot revoke trust from a nation state, at least not easily. I will come to that on the next slide. OK. And no, can we have the discussion later, and only questions now if you don't understand something? OK. So, what's DANE?
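The reflection argument is simple arithmetic: the amplification factor is response bytes delivered to the victim per query byte the attacker sends. The sizes below are illustrative placeholders, not measurements from any particular server.

```python
# Back-of-the-envelope amplification factor for a DNS reflection attack.
# A spoofed small query triggers a large answer to the victim; DNSSEC
# signatures make the answer, and thus the factor, much bigger.
# The byte sizes are illustrative, not measured values.
def amplification(query_bytes: int, response_bytes: int) -> float:
    return response_bytes / query_bytes

plain  = amplification(64, 512)    # plain DNS answer
dnssec = amplification(64, 3072)   # answer bloated with DNSSEC records
print(plain, dnssec)  # 8.0 48.0
```

Forcing large answers onto TCP removes the spoofing (the handshake verifies the source address), which is exactly why the TC-flag fix works when servers actually use it.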
The idea of DANE is: if we have DNSSEC, we can build something on top of it. Which might be a good idea if we had DNSSEC, but we don't have DNSSEC in a working state. So DANE is the idea: let's put the certificates in a DNS record. If it's signed, we can verify it. But that would only work if we had a working DNSSEC deployment, and we don't have a working DNSSEC deployment. And that's not only my private opinion. I have some quotes here. One is from Ryan Sleevi, a developer on the Chrome browser. He said: DANE is undeployable in practice; it's in a state far worse than IPv6, and no amount of wishing or activism is going to change that. Then Alex Stamos, security chief of Yahoo, recently said DNSSEC is dead; they don't intend to deploy it. Thomas Ptacek, a very respected crypto expert, wrote a long blog post on why he thinks DNSSEC is bad. So there's also wide resistance against DNSSEC at the high-profile IT companies, which also tells you it's really not in a good position to be deployed widely. So you cannot expect that much will happen there in the near future. OK. And so what else can we do to improve the system? Because I don't deny that this certificate authority system is broken. It has a lot of problems; we need to improve it. But I want alternatives that are doable and that are really deployable at a wide scale. And one thing that's now almost a standard is HTTP Public Key Pinning. The idea there is that a web page served over HTTPS can send a header containing a hash of its current key and of some replacement key. The browser can then save these hashes and check in future connections whether the key matches this pinning. And you also have a time value, so the web page can say: save these two key hashes for, say, three months.
And then each time the browser visits this web page, it checks whether the key still matches. So what you have there is a kind of trust on first use. It's a bit like SSH, where you connect once, save the key, and next time it's checked against that. And that's on top of the protection we already have, the weak but still real protection of the CA system. This is how such a header looks. You have max-age, which is a value in seconds, and then you have two pins. It's a bit tricky to generate these hashes, but I have a script in my GitHub repo that shows how to create them. You can also add a report-uri, a URL the browser will report to if it sees a bad certificate for your page. The problem with that is reporting is not currently implemented, but it's in the standard. HPKP is currently supported by Chrome, Chromium and Firefox. And to compare this to something like DANE: there are only two things you need to change. One is the browser and one is the server, and on the server it's only a configuration issue, so you need no new software support; it's something you can add in your Apache as a configuration option. So only two pieces have to come together, and then you have very strong additional protection for your certificates. One more thing: some large web pages have their certificates preloaded in the browser. You cannot do this for your private web page, but for example Google can do it and Facebook can do it. One drawback is that HPKP is only for HTTPS. It's an HTTP header, so you cannot use it for SMTP or anything else. There was a different proposed standard called TACK, which was very similar, basically the same technology but on the TLS layer; currently there's no development on TACK. But I think for the future we need something like HPKP that's not only for HTTP.
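The pin values in that header are derived mechanically: base64 of the SHA-256 hash of the certificate's DER-encoded SubjectPublicKeyInfo. A sketch of just the hashing step, with placeholder bytes standing in for the real SPKI (extracting the SPKI from a certificate is left out here):

```python
import base64
import hashlib

# How an HPKP pin-sha256 value is derived: base64(SHA-256(SPKI DER)).
# The bytes below are a placeholder, not a real SubjectPublicKeyInfo.
def pin_sha256(spki_der: bytes) -> str:
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

spki = b"placeholder bytes, not a real SPKI structure"
print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin_sha256(spki))
```

In practice one typically extracts the SPKI with OpenSSL, roughly `openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der`, then hashes and base64-encodes the result as above.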
But HTTPS is the most important protocol, so it's already a big step forward. One warning, though: HPKP provides a lot of protection, but it can be dangerous, because if you pin two keys and then lose both of those keys, you have locked out your users. The idea is that you have your current key and a replacement key for when you replace your certificate. But maybe you store both keys on your server, and then your server crashes, your hard disk is broken, you don't have backups. And then it's a bad situation, because you could generate a new certificate, but the browsers of your current users will refuse to connect to your web page. So you need to be careful with that. You need some kind of key management; you should, say, keep your keys on an encrypted external hard disk that you store somewhere, something like that. You need to be careful and you should understand what you're doing. HPKP is a very strong technology, but you should know what you're doing when you deploy it. And the other thing currently proposed to strengthen this CA system is Certificate Transparency. This is a bit like Bitcoin, in that you have an append-only log: a public log that everyone can see, everyone can check what's in it, and through cryptographic hashing it's guaranteed that once something is in, and as long as other people are checking that the log operates correctly, you cannot fake entries or change entries that are already in the log. And the idea is: let's just have a public log of all certificates. So if there are bad certificates, at least you can see them. You can go to the log and check: are there any certificates for my web page that I don't know about? Because then something strange is happening.
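The append-only property can be illustrated with a minimal hash chain, where each published head covers everything before it, so rewriting history changes every later head. Certificate Transparency actually uses a Merkle tree (RFC 6962), which additionally allows compact inclusion and consistency proofs; this sketch only shows the tamper-evidence idea, and all names in it are made up.

```python
import hashlib

# Minimal append-only log: each head hashes the previous head plus the
# new entry. Observers who remember an old head can detect any rewrite
# of earlier entries, because every later head would change.
class AppendOnlyLog:
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32

    def append(self, entry: bytes) -> bytes:
        self.head = hashlib.sha256(self.head + entry).digest()
        self.entries.append(entry)
        return self.head  # publish this value; observers compare heads

    @staticmethod
    def verify(entries, claimed_head) -> bool:
        h = b"\x00" * 32
        for e in entries:
            h = hashlib.sha256(h + e).digest()
        return h == claimed_head
```

With a domain owner monitoring the log, a rogue certificate for their site is visible the moment it is logged, which is exactly the property described above.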
Certificate Transparency will, like OCSP, run in a soft-fail mode, because you have to expect that these logs can be down or unavailable. But the idea is that the browser can later contact the log, and that makes it very hard to hide an attack with rogue certificates if you attempt one. And Google currently plans to require Certificate Transparency for new Extended Validation certificates and, in the long term, for all certificates. The way this works is that you get some kind of proof in your certificate that it was sent to a log before it was issued. So there's a pre-certificate that you send to the log, and then you get a confirmation: yes, you sent it to us, here's a cryptographic proof that you sent it to this log. And right now I'm looking for a certificate authority that will give me a certificate with such a proof. I think there's one, but they are too expensive: I want a wildcard cert and they want 500 dollars or something like that. Then one more development. In the past people often said: it's so expensive to get these certificates. And for a few years now there's been StartSSL, which issues free certificates. Right now they say only for non-commercial use, and for one year. And there was some criticism because StartSSL does not revoke certificates for free. So you can get the certificate for free, but if you want to say, OK, this certificate is now invalid, you have to pay for it. And especially after Heartbleed this was a problem, because people were saying: we should revoke all certificates, because due to Heartbleed people could have extracted our private keys. And they still said: you have to pay for that. There's now a new Chinese certificate authority called WoSign, and they offer free certificates for two years, and I also read that they offer free revocation.
And also maybe interesting: StartSSL always adds your main domain to the certificate. For example, I had a subdomain I wanted a certificate for, but I didn't trust that server that much; I trust my main servers. So I didn't want to put a certificate covering my main domain on this less trusted server. At WoSign you can also get a certificate for just a subdomain; but that's a minor thing, I think. And then there's an initiative, mostly by the EFF and Mozilla: they want to create a new certificate authority that makes it super easy to secure your sites, and it will hopefully start in summer. That's Let's Encrypt. They want to make certificates free, for everyone, with all the features, and as easy as possible. OK. The comment was that Let's Encrypt needs help with nginx support, but yeah. That's kind of the bonus feature, that you get it automatically in all the servers. But we will have a certificate authority that also has strong connections to the community, the free software community and the hacker community, with the EFF, and that issues free certificates. OK. So that's it for certificate authorities. Now let's talk a bit about the crypto in TLS. You may know that with TLS you always have these ciphers: a list of cipher suites that your server supports. And here's a comment from Adam Langley, a Google developer, who said: this seems like a good moment to reiterate that everything less than TLS 1.2 with an AEAD cipher suite is cryptographically broken. So only the very latest standard, and only one algorithm in this very latest standard, can really be considered secure these days. And I will explain why. This is how such a cipher suite looks: ECDHE-RSA-AES128-GCM-SHA256. There's a lot of information in it. The first part, ECDHE, is the key exchange, an elliptic-curve Diffie-Hellman exchange. RSA is the public key algorithm.
Then we have a symmetric algorithm, AES with a 128-bit key, then a block mode, as it's called, GCM, and a hash function, SHA256. And here's also an example of a very bad cipher suite, an export cipher. Export ciphers are from the 1990s, when the United States had a law that strong cryptography was not allowed to be exported to other countries. So they created weak cryptography for the international market. This one uses RC2, CBC, and MD5. Yeah, and in general we have three things: a public key algorithm, a key exchange, and a symmetric algorithm. (To the question: I will say something about that, but I think probably yes.) So, the public key algorithms, which is also what's in your certificate. A certificate contains a public key, and there are three algorithms in the standard: RSA, DSA, and ECDSA. RSA is what's normally used, on, I don't know, more than 90 percent of the internet. DSA is not used by anyone; there was a scan of the whole internet some years ago, and they found 17 DSA keys in total. And it's now removed from the browsers: Mozilla and Chrome don't support it anymore. And ECDSA is an elliptic-curve-based algorithm, so it's faster, has smaller keys, and is the more modern technology. It's currently used by some big players, but it's not widely available: there are no certificate authorities that issue ECDSA certificates by default. But for example CloudFlare is using it, because they host a lot of pages, and Google is using it, for performance reasons. And then you have the key exchange. When you initiate a TLS connection you need some kind of shared key, and you can generate it with different methods. The classic way was the plain RSA exchange: the client just encrypts something with the server's key and sends it over. The problem with that is it doesn't have this property called forward secrecy.
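The cipher suite name discussed above decomposes mechanically into those parts. A sketch, hard-coded to the shape of this one OpenSSL-style name (real suite names vary in field count, so this is not a general parser):

```python
# Splitting an OpenSSL-style cipher suite name into its components.
# Ad hoc for the five-field shape of this one name; many suite names
# (e.g. "DHE-RSA-AES256-SHA") have fewer fields and would need a
# real parser. Field labels follow the talk's explanation.
def describe(suite: str) -> dict:
    kex, auth, enc, mode, mac = suite.split("-")
    return {"key_exchange": kex,      # ECDHE: ephemeral elliptic-curve DH
            "authentication": auth,   # RSA: public key algorithm
            "cipher": enc,            # AES128: symmetric cipher + key size
            "mode": mode,             # GCM: AEAD block mode
            "mac_or_prf": mac}        # SHA256: hash function

print(describe("ECDHE-RSA-AES128-GCM-SHA256"))
```

Reading the suite this way makes Adam Langley's point concrete: the "secure" configuration is precisely the combination of TLS 1.2 with a GCM-style AEAD mode in the fourth field.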
That means: if at some time in the future someone steals this key from the server, he can decrypt all the communication from the past. So what you want instead is to generate a temporary key, with some cryptographic mechanism, that is used only for that session and destroyed afterwards. Then, if someone steals your server key later, he cannot decrypt what he has recorded in the past. And there are two ways to do this in TLS: the classic Diffie-Hellman exchange and the elliptic-curve Diffie-Hellman exchange. The problem with classic Diffie-Hellman is that for a long time it was used with 1024 bits, and that's not really considered secure. It hasn't been broken in public, but it's known that there are algorithms where, if you build special hardware, which costs millions of dollars, and we know there are organizations that have millions of dollars and might do that, then you can break it. And Apache for a very long time only supported this weak kind of Diffie-Hellman key exchange. Only version 2.4.7, which I think was released sometime in the last year or two, supports more, but many people are still using Apache 2.2, and it only supports the small key exchange. And with the elliptic curves, the NIST curves, there are some concerns that these curves were generated by the NSA. That was in 1999, and there are some numbers that were used to generate these curves that are not explained. They look like random numbers, but OK, maybe the NSA could have done something. Right now, what I hear in the crypto community is that most people think this is not a real attack vector. So there's this suspicion that there might be something, but most people don't believe there is, because if the NSA had done something back then, they must have known something in 1999 that we don't know today. And that seems unlikely, because there's been a lot of scientific development in researching elliptic curves since.
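The ephemeral Diffie-Hellman idea behind forward secrecy can be shown with toy numbers. The prime below is tiny and only for illustration; real TLS uses primes of 2048 bits and more (or elliptic curves), and the secret exponents are random, not fixed constants.

```python
# Toy Diffie-Hellman exchange showing why ephemeral keys give forward
# secrecy: the shared secret comes from per-session values both sides
# can delete afterwards. Toy-sized numbers; never use sizes like this.
p, g = 2039, 7          # public parameters (a small prime and generator)

a = 666                 # client's ephemeral secret (random in reality)
b = 1234                # server's ephemeral secret (random in reality)
A = pow(g, a, p)        # sent client -> server
B = pow(g, b, p)        # sent server -> client

client_secret = pow(B, a, p)   # (g^b)^a mod p
server_secret = pow(A, b, p)   # (g^a)^b mod p
assert client_secret == server_secret

# After the session, a and b are destroyed; a later theft of the
# server's long-term RSA key reveals nothing about this shared secret.
```

With the plain RSA exchange, by contrast, the session key travels encrypted under the server's long-term key, so stealing that one key retroactively decrypts every recorded session.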
So yeah, I asked this question to some high-profile cryptographers and they didn't give me a clear answer. So I won't make a statement about which one is better. But in general I think if you use Diffie-Hellman with a reasonable parameter size, or you use elliptic-curve Diffie-Hellman, it's probably secure against all reasonable attackers. That's the best I can tell you right now. OK, and then we have the symmetric ciphers; these are the ciphers actually used to encrypt the data going over the wire. And up until TLS 1.1, all block ciphers used a method called CBC in the order MAC-then-encrypt. What does that mean? A MAC is a message authentication code; that's the part that checks that your message is correct. If you want a secure connection, you want two things: that the message is authentic, so nobody can manipulate it, and that it's encrypted, so nobody can read it. The MAC is the guarantee that nobody can manipulate it. And this CBC mode was also used with what's called an implicit IV, so a value from the previous record was used to start the encryption. And there was an attack against that: the BEAST attack. It's not really practical, you need to generate lots of TLS connections, but it tells you that this is not a very secure technology. And it was already known, and it was fixed in TLS 1.1, but TLS 1.0 was still used a lot when the BEAST attack came out. And then there's another problem: if an attacker can distinguish whether the MAC check fails or the padding check fails. What you do is: you have a message, then the MAC, then padding, and then you encrypt the whole thing. And if the attacker can separate the different kinds of errors that can happen on the server's decoding side, then he can use this for an attack.
The first such attack was the padding oracle attack, because originally TLS implementations would just send different error messages, so you could send manipulated packets and see from the error message what kind of error happened. Then this was changed so there was only ever one error message, but you still had a timing side channel, and that was the Lucky 13 attack a few years ago. You can completely prevent this class of attacks if you change the order: if you first encrypt and then compute the MAC, the MAC always guarantees that the packet is intact. So the first thing you do is check the MAC, and if it's wrong you throw an error — there's nothing left that can go wrong after that. There's an encrypt-then-MAC extension for TLS, but it's not really used right now. And I like to quote this every time I talk about TLS — this is from the TLS 1.2 standard: "This leaves a small timing channel, since MAC performance depends to some extent on the size of the data fragment, but it is not believed to be large enough to be exploitable." So what they're saying here is: yeah, we know there's a problem, but we don't think it's a real problem. A few years later, someone proved them wrong. So the Lucky 13 attack was essentially already described in the standard. Then we also have a stream cipher called RC4, which can be used in TLS. RC4 was developed sometime in the 90s, internally at RSA. Then it was somehow leaked to a newsgroup on the internet, and everyone liked it because it was so simple. In the early 2000s people looked at it and said: okay, there are some problems here — it has some biases. This led to WEP, the wireless LAN encryption standard, being broken due to these RC4 weaknesses. Then many years later, in 2013, the first real attacks were shown against RC4 in TLS. And just in recent weeks there were two new attacks.
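The order really is the whole point. Here is a sketch of the encrypt-then-MAC pattern in Python — with a deliberately toy keystream "cipher" standing in for the real record encryption, since the thing being illustrated is only that the MAC covers the ciphertext and is verified first, in constant time:

```python
import hashlib
import hmac
import os

ENC_KEY = os.urandom(32)
MAC_KEY = os.urandom(32)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy keystream for illustration only -- NOT a real encryption scheme.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_then_mac(plaintext: bytes) -> bytes:
    ciphertext = xor_keystream(ENC_KEY, plaintext)
    # The MAC is computed over the CIPHERTEXT, so it can be checked
    # before any decryption or padding handling takes place.
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def decrypt(record: bytes) -> bytes:
    ciphertext, tag = record[:-32], record[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    # Verify first, in constant time; reject before touching anything else.
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad record MAC")
    return xor_keystream(ENC_KEY, ciphertext)
```

Because a forged or tampered record is rejected before decryption, there is no padding error and no data-dependent timing for an attacker to observe — which is exactly why MAC-then-encrypt, the order TLS historically used, kept producing oracles.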
One of them specifically attacked IMAP and HTTP basic auth, because there the password appears early in the data stream, and that makes the attack easier. There's still research happening on this, but we now have a standard that says RC4 should not be used anymore at all. Okay, so what happened? We have problems with this CBC mode, we have problems with RC4, and there isn't much left. Until a few years ago everyone was using TLS 1.0, which only has these two problematic options. The only alternative is the very latest TLS 1.2, which has the GCM mode — an authenticated encryption mode. That means it does the encryption and the guarantee that nothing was manipulated in one step. My feeling is that nobody really likes GCM. A lot of cryptographers say it doesn't look that nice, it's complicated to implement, and maybe we have timing issues — but it's the only thing left that's not broken. And it's not realistic today to say I will only allow this GCM mode; you need some backwards compatibility, at least if you run a normal web page. For example, Apple's Safari still doesn't support GCM, so you need to support the CBC modes as well. For the CBC modes you can do some mitigations in your TLS implementation to prevent the attacks; the RC4 attacks cannot be mitigated in software. So as a compromise, you still support the CBC modes. And then I'll talk a bit about some specific attacks. We had POODLE at the end of last year, which was an attack against SSL 3. SSL 3 is the very old SSL standard from the 1990s, developed by Netscape — does anyone remember Netscape? It was the first company that had a successful browser, but that's long ago. POODLE was a variant of the padding oracle attack I was talking about earlier. The problem with SSL 3 is that its padding allows arbitrary values in the padding bytes.
Later this was changed in TLS so that the padding bytes have a fixed value. So why do we have a problem with a protocol from the 1990s at all, you may ask? In theory, a server and a client should negotiate the best protocol they both support, and there shouldn't be much software around that can't — we don't have that many computers from the 1990s anymore. The reason it is a problem is that browsers implement a kind of backwards compatibility. At some point people deployed TLS 1.2, and then they found out: okay, there are broken servers. If you tell them "I want to connect with TLS 1.2", what should happen is the server answers "I don't support TLS 1.2, can we try TLS 1.0?" or something like that. But some servers, if you try to connect with a TLS version they don't know, just don't answer at all. So what the browsers did was a fallback: if the connection doesn't succeed with the latest protocol, they retry with an older protocol, and an older one, all the way back to SSL 3. So we have a bad workaround for broken servers that caused a security problem. And it was not the first security problem — there was an attack called virtual host confusion that also relied on this property of browsers falling back to older protocol versions. You could say, okay, let's disable this workaround because it's bad, but the browser vendors are very careful about breaking compatibility. So instead they developed a new signaling mechanism: the client marks a fallback connection as such, so a server that isn't actually broken can say "I'm not such a broken server, you don't have to do this fallback thing" and reject the downgrade. But now you have a workaround for a workaround that was bad. Mozilla chose a different way — they simply removed the bad workaround — which I think is the better solution.
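The fallback dance the browsers implemented looks roughly like this (a sketch; `try_version` stands in for an actual handshake attempt):

```python
# The (insecure) downgrade dance: retry with ever-older protocol versions.
FALLBACK_ORDER = ["TLS 1.2", "TLS 1.1", "TLS 1.0", "SSL 3.0"]

def connect_with_fallback(try_version):
    """try_version(v) returns True if a handshake with protocol v succeeds.

    The danger: an active attacker can simply drop the modern handshakes
    and walk the client all the way down to SSL 3.0 -- which is what
    POODLE exploited.
    """
    for version in FALLBACK_ORDER:
        if try_version(version):
            return version
    raise ConnectionError("no shared protocol version")

# A broken server that only answers SSL 3.0 still gets a connection:
print(connect_with_fallback(lambda v: v == "SSL 3.0"))  # -> SSL 3.0
```

The fallback-signaling fix doesn't remove this loop; it only lets an honest server notice that the client is retrying below its best version and abort. Removing the loop entirely, as Mozilla did, is the cleaner fix.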
And then people said: okay, we have this POODLE attack, SSL 3 is broken, let's just disable it — it's from the 1990s, nobody's using Netscape anymore, no problem. It turned out it was a problem. For example, Microsoft and Nokia produced phones in 2010 — which is not that long ago, and these were high-profile, very expensive phones — with a mail client that only supports SSL 3. And AVM produced FRITZ!Boxes with a mail server feature that had the same problem. So even though SSL 3 is a protocol from the 1990s, up until very recently there were companies shipping hardware that only supports that very old, very broken protocol. And even today we have the same problem: there are hardware vendors shipping products that don't support current TLS. For example, there's this company called Apple — they are distributing these very expensive laptops with a browser that doesn't support GCM. One idea I have is that at some point I'll probably implement on my web page that if you visit it with an old TLS version, it shows a warning: you're using bad crypto, you shouldn't do that. Chrome already does something like that — if you click on the connection icon, it tells you that the connection uses deprecated crypto, which is very nice. They should do it more prominently, but I like it. Then the BERserk attack. I'd say BERserk really didn't get the attention it deserved. It was an attack on NSS, which is the TLS library that Chrome and Firefox are using, and it was released on the same day as Shellshock, which really took away the attention it deserved, I think. What happened there? In 2006, a cryptographer called Daniel Bleichenbacher found an attack which mainly affected PGP — GnuPG — and also some other RSA crypto implementations.
What the old RSA signature standard — PKCS #1 v1.5 — looks like is: you have a zero byte, a one byte, then some padding with FF bytes, and then an ASN.1 encoding of the hash algorithm. So if you use SHA-1, it's the identifier of SHA-1 encoded in something called ASN.1, followed by the hash itself. And this whole block is then signed. The original attack was that he found out some implementations would just check the padding, check the ASN.1, check the hash, and not look at what comes after it. So you could have rubbish after the hash and the implementation would ignore it — and with some math, he could use that to forge signatures. It's sometimes a bit confusing, because Daniel Bleichenbacher created two famous attacks; the other one was against RSA encryption, in 1998. Both attacks are what I would call zombie attacks, because they keep reappearing here and there. Last year in Java, for example, someone found a new variant of the Bleichenbacher encryption attack. BERserk was a new variant of the signature attack: someone found out that this ASN.1 encoding is ambiguous — there are different ways to encode the hash value in the ASN.1 structure — and this could be exploited for an attack. This only works with the old RSA standard. There has been a new RSA standard since 2002, but it's mostly not used; in TLS you must use the old one. I hope this will change with TLS 1.3, but right now we're stuck with it. And the attack only works if you're using a very small public exponent, like three — that's a value used in the RSA algorithm. So the advice is not to use these very small values there. If you use something like 17, you don't have this problem, and the commonly suggested value is 65537. [Audience: Not one?]
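The byte layout and the lax-versus-strict check can be sketched like this (Python; the SHA-256 DigestInfo prefix is the well-known constant from PKCS #1, and the "lax" verifier reproduces the class of bug Bleichenbacher described, not any specific library's code):

```python
import hashlib

# DER-encoded DigestInfo prefix for SHA-256 (a fixed constant from PKCS #1)
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def pkcs1_v15_encode(message: bytes, em_len: int) -> bytes:
    # 00 01 FF..FF 00 <ASN.1 DigestInfo><hash>
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    pad = b"\xff" * (em_len - len(digest_info) - 3)
    return b"\x00\x01" + pad + b"\x00" + digest_info

def strict_verify(em: bytes, message: bytes) -> bool:
    # Correct: rebuild the entire expected block and compare all of it.
    return em == pkcs1_v15_encode(message, len(em))

def lax_verify(em: bytes, message: bytes) -> bool:
    # Broken (Bleichenbacher 2006): parse from the left, ignore the tail.
    if em[:2] != b"\x00\x01":
        return False
    i = 2
    while i < len(em) and em[i] == 0xFF:
        i += 1
    if i >= len(em) or em[i] != 0x00:
        return False
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    return em[i + 1:i + 1 + len(digest_info)] == digest_info  # tail unchecked!

# A forged block: minimal padding, then garbage where the attacker's
# cube-root math needs free room (zeros here just for illustration).
msg = b"hello"
di = SHA256_PREFIX + hashlib.sha256(msg).digest()
forged = b"\x00\x01" + b"\xff" * 8 + b"\x00" + di
forged += b"\x00" * (128 - len(forged))
```

With exponent 3 an attacker can craft a number whose cube starts with exactly this structure, so the lax parser accepts a "signature" that was never produced with the private key.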
No — if you use one, the RSA operation does nothing. But the problem was that browsers still ship CA certificates with this small exponent, and so you could exploit this attack last year. Then some other attacks in short, so you have heard of them. There were the CRIME and BREACH attacks, which attacked compression: due to the compression, you could get some information about the encrypted data. For TLS compression, the general recommendation is to just turn it off, because nobody uses it anyway. But there's also HTTP compression, and that's a bit tricky — if you're using protection against CSRF and things like that, you have to keep this in mind. There are some countermeasures, but it's really an ugly attack because it's hard to defend against. Then there was the CCS injection attack, also against OpenSSL, which came out relatively soon after Heartbleed — an issue in the state machine of OpenSSL. OpenSSL keeps some kind of state of where in the connection it is, and you could confuse that state by sending a message it didn't expect. Then there was the Triple Handshake attack, a very complicated thing, but it only affected client certificates, and those aren't used by many people, so it wasn't a big issue. And then there was this Virtual Host Confusion attack: there are two places where the client indicates to the server which host it wants. If you connect to Google, you get a certificate for google.com, you indicate in your TLS handshake that you want to connect to google.com, and you also indicate in the HTTP request that you want google.com. This attack cleverly exploited situations where these two values differ, and it could be used for some attacks. And Cookie Cutter was the idea that if you can truncate the connection at a specific point, you can change the meaning of a cookie — also an interesting thing.
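The compression leak behind CRIME and BREACH is easy to demonstrate: when attacker-controlled data is compressed in the same stream as a secret, a correct guess compresses better than a wrong one. A toy illustration with zlib (the secret and the request layout here are invented for the demo):

```python
import zlib

SECRET = b"Cookie: session=7f3a9c1b2d"  # what the attacker wants to learn

def compressed_len(guess: bytes) -> int:
    # The attacker controls part of the request; the secret rides along
    # in the same compressed stream (as a cookie does in HTTP).
    body = b"GET /?q=" + guess + b" HTTP/1.1\r\n" + SECRET
    return len(zlib.compress(body, 9))

# Guessing the whole secret lets DEFLATE emit one long back-reference,
# so the output is shorter than for a guess that only matches the prefix.
right = compressed_len(SECRET)
wrong = compressed_len(b"Cookie: session=XXXXXXXXXX")
print(right, wrong)
```

An attacker who can observe only the ciphertext length can still see this size difference and recover the secret byte by byte — which is why the recommendation is to disable TLS compression and be careful with HTTP compression of pages that reflect attacker input.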
And SMACK and FREAK, very recently, were also state machine issues: OpenSSL in particular could be tricked into one of the old export modes — we talked about it earlier, in the 1990s you had deliberately weakened crypto due to US export regulations — and then you could attack that old export cipher. Now to the browsers. Many of the features I told you about, like key pinning or Certificate Transparency, mostly come from Google, and Chrome implements them first. Firefox usually follows sometime later, and Internet Explorer and Safari are very much behind in terms of TLS security. For example, Internet Explorer doesn't even support HSTS, which I think is a very basic feature these days; it has no key pinning and no downgrade protection. Safari doesn't support GCM — the only really non-broken cipher mode right now — and also has no key pinning and no downgrade protection. And what's even worse are the alternative browsers on Linux: KDE has its own browser, Konqueror; GNOME has its own browser, Epiphany; and they basically have no real security story. They lack very basic security features, and they usually use WebKit in versions that have known vulnerabilities. I think the only reason they are not owned all the time is that almost nobody uses them, but I would strongly suggest: if you want a secure browser, don't use any of these alternative Linux browsers, because I'm not aware of any of them that has reasonable security management. Yeah, and implementations. Right now I sometimes get a bit angry when people bash OpenSSL all the time, because yes, there was Heartbleed and there were a lot of issues, but it's really getting better. If you look at the release notes of OpenSSL right now, the issues they find are really rather obscure things, and you can see that a lot of people are looking at it and really fixing things.
For example, in January OpenSSL released a new version and said there were only low-severity vulnerabilities, and then people said: no, that's a high-severity vulnerability, OpenSSL is playing things down. Then in March OpenSSL said, now we have a high-severity vulnerability, and it wasn't that bad, and people said: oh, they called it high and it's not really a thing. So everyone likes to bash OpenSSL. We tend to forget that, like I said, NSS had severe issues, the Windows SSL implementation had a remote code execution issue, Apple had the goto fail issue, and GnuTLS also had a goto-fail-style issue. Basically all major SSL implementations had issues in the past years. Some people are trying a completely new approach and implementing TLS in other programming languages. One is miTLS — they are using F#, which is a somewhat obscure programming language, but what they have is a formally verified TLS implementation. Now you can say, okay, F# is probably not going to be used widely, but what's very interesting is that these people find a lot of issues in other implementations and in the protocol itself. The Triple Handshake issue, the virtual host confusion issue, the BERserk attack — all of these were found by the developers of this new TLS implementation, because their formal verification methods force them to look very carefully at how the protocol flow in TLS works and whether it is mathematically sound. And by doing that, they find issues in it, which is a very interesting approach. Then there's OCaml-TLS, which is part of MirageOS — they are trying to build a new secure operating system from the ground up. What these people are also addressing: a lot of the errors in OpenSSL and others are C programming errors, like memory corruption issues, and these can be prevented by using safer programming languages. None of this is really ready for primetime yet, but it's a very interesting development.
Okay, then lately there's been a lot of push to use HTTPS everywhere — to just make it the default. Cloudflare said they will enable HTTPS for all their domains, and Cloudflare is really a big provider with a big content delivery network; they have a lot of domains. They're using ECDSA certificates because they're faster. And they're using SNI, which is a technology to use more than one certificate on one IP address — in the original SSL this wasn't possible; you always had only one certificate per IP. This extension is not supported by some old browsers. It's no problem with modern browsers, but for example Android 2 doesn't support SNI. And Google has plans to mark plain HTTP as insecure in Chrome — they want some kind of red sign in the toolbar that indicates this is not a secure connection. And they are also pushing it by giving HTTPS web pages a better search engine rank. Now, when I discuss this with people, I often get counterarguments — people say they want to continue using unencrypted HTTP. One of the arguments is performance, and I'd say that's mostly an urban legend. The performance cost of TLS is really, really small. Of course you need to do the crypto — it's math and your CPU has to do it — but it usually just doesn't matter. Here's a quote from Adam Langley: when they switched Gmail to HTTPS, they basically didn't notice. They didn't need any new hardware; it was barely noticeable. So for most people, if they enable HTTPS on their web page, they won't notice it in the load of their servers. Then people say: okay, we don't need this if people only read web pages, because it's public information — I'm going to a blog, I don't need HTTPS. Now, recently I was in Brussels at the train station, and they have free Wi-Fi. And here is a web page that's a dummy HTML file which contains almost nothing.
And then you see this blue button there. What's happening here? The Wi-Fi at the Brussels train station injects this button — injects some JavaScript — into every HTTP web page you're surfing. They also inject a cookie, and they use this to record where you're surfing. And they even say so on their web page. There are also things like a recent thread on Stack Overflow where someone said: I had this idea — I have a cafe, and I could replace the banners on the web pages people surf to with my own banners, so I earn money from them. So things like this happen: people intercept HTTP traffic, inject things, manipulate the content. An important thing to remember here is that HTTPS does not only do encryption — it also guarantees that the data is transmitted intact. It gives you the guarantee that the web page you're seeing is really the web page the server sent, and nothing was changed in between. And I think that's a good thing for just about every web page, because you want to get your information from the internet in a reliable way. Then some people say: okay, I will encrypt my login, but I don't need it for the whole web page. This is completely insecure — Amazon is doing it, eBay is doing it, the popular ones — because then you can do what's called an SSL stripping attack. If you transmit the login form over unencrypted HTTP, even if the login itself is then done securely, the attacker can just change the login form. And for example, there are many banking web pages like this: when I do online banking, I go to the web page of my bank and there's a link to the online banking. The online banking is HTTPS, but the link is HTTP, so an attacker could just change that link. Okay, I can check — I'm now in online banking, I have to look for the green lock — but who does that? I think you could probably trick a lot of people this way.
So I'd say there's really no way to make some kind of mixed HTTP/HTTPS setup secure. I think when people reject HTTPS, it's often a poor understanding of TLS, but one real issue a lot of people have is external content. I'm often writing for a web page called Golem.de, which is a German IT news web page, and they have ads on their web page — that's how they earn their money. With HTTPS, any active content you include — JavaScript, banners, Flash — needs to be HTTPS as well. So if you have ads, your ad network needs to support HTTPS, and most ad networks don't. I asked some of them, and it's not really a topic they're talking about: most of them didn't even answer me, and some said, yeah, it's difficult, no, we don't have any plans. So that's the big reason why news web pages are not HTTPS these days. The New York Times, for example, said they want to move to HTTPS and will try to do so in 2015, but they cannot do it right now because their ad partners don't support it. Yeah, then one more feature: if you want to go HTTPS-only, you should also send an HSTS header. What that does is tell the browser this web page is HTTPS-only. So in the future, if you open it via HTTP, the browser automatically goes to HTTPS. That also prevents these SSL stripping attacks: okay, the very first time you visit the page you might type in the URL without a protocol and get forwarded to HTTPS, but from then on your browser knows this site is HTTPS and goes there via HTTPS every time. And you can even go further and tell Google — Firefox also uses this list — that this is an HTTPS-only web page, and it gets put into the source code of the browser. For my web page, Google Chrome has it built in that it's only available via HTTPS. Yeah, one very nice attack on HSTS was presented last year at Black Hat in Amsterdam.
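Sending the HSTS header is a one-liner in most server stacks. A minimal sketch as a WSGI app (the one-year max-age and includeSubDomains are typical choices, not requirements — and the header should only ever be sent over HTTPS):

```python
def app(environ, start_response):
    # Tell the browser: this site is HTTPS-only for the next year,
    # including all subdomains.
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ]
    start_response("200 OK", headers)
    return [b"<h1>hello over HTTPS</h1>"]
```

After one successful HTTPS visit, the browser rewrites any later plain-HTTP request to this host to HTTPS on its own, which is what defeats the stripping attack for returning visitors.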
Because what HSTS is doing is sending you an amount of time for how long this should be stored — so you're relying on the time as something trustworthy. Now, is your system time trustworthy? It's an interesting question. Actually, usually it's not, because you're using NTP. NTP is a protocol from the 80s and it has no security whatsoever. So you can do a man-in-the-middle attack on NTP, change the time on a victim's system, and thereby circumvent HSTS. I mention this here because I think this attack really didn't get the attention it deserved — it was a really nice attack, and it had a really nice ASCII art logo. So right now, the time-setting protocol we use provides no security. There's a tool called tlsdate that uses a TLS connection to set the time; the problem with that is it's not as accurate as NTP. And there's a very nice idea from OpenBSD: use such a TLS connection only as a rough estimate for the time, set some boundaries, then do NTP and check whether the NTP time roughly fits the TLS time, and then use NTP. I think this is a very nice idea; the problem with it is that it requires LibreSSL. [Audience: NTP provides security.] That's not really true. There is an authentication mechanism for NTP, but it was broken some years ago — it's completely insecure, and there's no working secure NTP protocol. Yeah. And a look into the future: TLS 1.3, the next protocol version of TLS, is currently being developed. Many things are not decided yet; there are many discussions going on on the working group mailing list. If you want to join, it's public — everyone can participate in the working group, which is one of the nice things about the IETF. Improvements: they want to reduce the round trips in a handshake, so you reduce latency issues with TLS. And they have decided to remove the static RSA key exchange.
So all the key exchanges will provide forward secrecy, and they will only use authenticated encryption. All the issues we talked about earlier with the CBC modes and RC4 — these modes will all be removed. And one thing is that we will get a new elliptic curve: Curve25519, developed by Dan Bernstein. It's really a reaction to the concern that some people don't trust the NIST curves, but it's also considered more secure, because it's easier to implement in a timing-safe way, and some theoretical attacks that might apply to the older curves are not possible with this new curve. There was a very long debate, because basically everyone had already said, yes, we want this curve, and then Microsoft came and said: we have now developed our own curves, and we want them to become the standard. So there was a very long debate about which curve is better and whose process was more rigorous, but in the end we now have a decision that Curve25519 will be the new standard and will be part of TLS 1.3. One thing in the maybe more distant future is quantum computers. Quantum computers are a theoretical concept — we don't have quantum computers today, at least not at any relevant size. But if you have a quantum computer, you can break all public key algorithms that are used today. So RSA is broken, elliptic curves are broken, DSA is broken, Diffie-Hellman is broken — everything. There's some development going on for algorithms that are resistant against quantum computers. One is called SPHINCS, which is a signature algorithm. The problem with it is that its signatures are around 40 kilobytes, and if you think about it, when you connect to an HTTPS server there are three or four or five signatures transmitted in the handshake alone — multiply that by 40 kilobytes and it's not really practical.
And there's an algorithm called Ring Learning with Errors, which is a so-called lattice-based algorithm, and there are already OpenSSL patches for it, so you can do a key exchange with this algorithm today. It's not standardized, and it's probably not something you should do, because these things are really current research. A lot of research is going on, and a lot more has to happen: if you talk to people about post-quantum cryptography, it's always, yeah, we have some things, but we don't really know yet whether they are secure, because we don't know them so well — it's a topic of research. But it's interesting that it's happening, because we will probably need it at some point. And if you ask people when we will have quantum computers, the answers vary a lot — some people even say maybe in ten years, and that's kind of frightening if you think that in ten years all the crypto we use today could be broken. Most people say it will take longer, but it's something that will become a topic in the future. Yeah, so some final notes. You should use HTTPS everywhere, I think. You should of course also use TLS for email logins, for Jabber, for whatever protocols you use — always use encryption. You should support TLS 1.2 with the GCM modes. You should disable SSL 2 and 3, the very old Netscape protocols. You should disable RC4, and you should disable TLS compression. You should use HSTS, OCSP stapling and key pinning, and — once it becomes available — Certificate Transparency. And there's the SSL test from Qualys SSL Labs, which you probably all know, which can give you an A+ if you did everything right. You should use it and read what it has to say, and if you get a D or an F there, you should look into what's wrong with your page. Okay, that's it. I have a lot of links and sources, but you can look those up later. Questions, discussion? [Audience: Do you think DNSSEC is still the thing? Two short questions.]
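Those recommendations translate into only a few lines of server configuration. A sketch with Python's standard-library `ssl` module (the cipher string is one reasonable choice, not the only one):

```python
import ssl

# A server context following the talk's recommendations.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Disable the old Netscape-era protocols (usually off by default in
# modern builds, but it doesn't hurt to be explicit).
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
# TLS compression enables the CRIME attack -- turn it off.
ctx.options |= ssl.OP_NO_COMPRESSION
# Drop RC4 and anonymous/null ciphers; keep the strong suites.
ctx.set_ciphers("HIGH:!RC4:!aNULL:!eNULL:!MD5")
```

A real deployment would additionally load a certificate and key with `ctx.load_cert_chain(...)` and send the HSTS header at the HTTP layer; the point here is just how little configuration the "disable SSL 3, RC4 and compression" advice actually requires.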
Can you say something about GCM weak keys? And the second question: is there a faster MAC algorithm in newer OpenSSL or a newer TLS standard? — I don't know what you mean with GCM weak keys. — There was a paper some time ago. — Okay, then I don't know that one. There's also a new cipher called ChaCha20 with the Poly1305 authenticator, which is also by Dan Bernstein and which is very fast in software. If you have no AES acceleration, this ChaCha algorithm is faster. There was a competition on stream ciphers, Salsa20 was among the winners, and ChaCha is a variant of Salsa20. And this can already be used in TLS — it's not standardized yet, but Google is already using it and Cloudflare is using it. For OpenSSL there are patches, but it's not part of the main code. I think the ChaCha algorithm is very popular and will probably be used a lot in the future. — Also two questions. There was a paper released by Ferguson — a Microsoft cryptographer — in '99 about weak keys in GCM, as he said, and I guess the actual OpenSSL implementation doesn't check for that. — Okay, that sounds interesting, I should look at that; we can talk about it later. — And the other thing, with your HPKP: I guess it's also a deployment problem. If every webmaster has to implement it, this will not really happen in the real world. — Yeah, but I can implement it and I can be protected. — Yeah, but if you go to, I don't know, some e-commerce shop, then it's still a problem. — Yeah, okay, it's a problem, but of course it has to be deployed to be used. — And then it's the same problem as with DNSSEC. — But it's easier to deploy than DNSSEC. — But DNSSEC is centralized — other people do it for you; not every website provider has to do it. And if I look at some banking websites, they still supported RC4 for years and years and years. I mean, there's a joke like: I won't buy products from you because you use RC4. — Oh, it's because of mobile.
— We will talk later. Stuff like that is still a problem. — Yeah, I mean, of course you need to deploy the better technologies to use them. But right now my web page has HPKP, and right now browsers can use that to check the certificate, so it already improves my security today. — Okay, but it's still a deployment problem. — Of course, yeah. Okay, but that's another discussion. — Just a small remark, because you sometimes said GCM for AES-GCM: GCM is just an operating mode, like CBC. — Yeah, the cipher is AES in GCM mode, yeah. — But actually you can use GCM with other block ciphers, with 128 or 64 bit blocks. — Not in TLS; in TLS it's only specified for AES. — Yeah, but GCM itself is more flexible, you can replace the block cipher if you want to. — I think there have been privacy issues with OCSP stapling and certificate chains with multiple certificates: if I have intermediate certs, the browser would still query OCSP directly for those, so the OCSP server can see what I'm connecting to. Is this solved in the meantime? — What privacy issues with OCSP stapling? — OCSP is basically a privacy problem, so you have stapling: the server provides the stapled response and the OCSP server doesn't get contacted. But I think for the intermediate certs there was a problem that you couldn't staple them. So is this solved? — Okay, I'm not aware of that. — But do the browsers check the intermediates all the time? — If they use OCSP, they have to. If they use a CRL, they don't have to. — I think the browsers don't use OCSP for the intermediates; they only use the centralized... but I'm not sure about that. — I think the certificate specifies what the revocation point is, whether it's OCSP. — Yeah, but they can ignore it. — Yeah, okay, I think that's what they're doing. — So you don't think... it's not a known problem for you currently, so if I have OCSP stapling it should be privacy safe? — It provides more privacy than standard OCSP, that's for sure.
Maybe there's still an issue, but I would have to look into that — I don't know it. Okay then — we can have the DNSSEC discussion afterwards; I will of course participate. If you have questions, come to me, talk to me, write me an email. Yeah.