All right, now it's my very big pleasure to introduce Hanno Böck to you. He's no stranger to the Chaos crowd: he's been to several Easterheggs and to several other Chaos events. Today he's here to talk about TLS 1.3, what it's all about, how it came to be and what its future is going to look like. Please give a huge applause and welcome Hanno. We have a new version since August: TLS 1.3. Today I'd like to go a bit into the history of why we have this new version, how we got there, and what design decisions went into this new protocol version. The very first version of SSL, which is what it was called back then, was released in 1995 by Netscape, and it was quickly followed up with version 3, which is still very similar to the TLS 1.2 that we mostly use today. Then in 1999 it was taken over from Netscape by the IETF, the Internet standardization organization, and they renamed it to TLS. So that's the history. We had SSL, and I've marked those in red because these two versions are broken by design: you cannot really use them in a secure way these days, because we know of vulnerabilities that are part of the protocol itself. Then in 1999 it was renamed to TLS, and TLS 1.0 is still kind of okay if you do everything right, but that's really tricky. So it's a dangerous protocol, but maybe not totally broken; same with TLS 1.1. TLS 1.2 is what we still mostly use today, and TLS 1.3 is the new one. What you can see here, for example, is that the biggest gap is between 1.2 and 1.3, so there was a very long time where we had no new development. Okay, you probably heard that we had plenty of vulnerabilities in and around TLS. These days a good vulnerability always has a logo and a nice name. I want to go into one class of vulnerabilities that doesn't have a logo, not even one of its variants; I was very surprised when I realized that. That's the so-called padding oracles.
These exist in CBC mode, which is the encryption we used for the actual data encryption, the symmetric data encryption. The thing is, when we encrypt data, what we usually use are so-called block ciphers, and they encrypt one block of data of a specific size, usually 16 bytes. This CBC mode was the common way to encrypt in past TLS versions, and this is roughly how it looks: we have some initialization vector, which should be random (it wasn't always, but that's another story), then we encrypt a block of data, and then we XOR that ciphertext into the next plaintext block and encrypt it again. Now, because these are blocks of data and our data may not always come in 16-byte blocks, it may just be 5 bytes or whatever, we need to fill up that space, so we need some kind of padding. In TLS what was done was: first we have some data, then we add a MAC, which is something that guarantees the correctness of the data, the authentication of the data, and then we pad it up to the block size, and then we encrypt it. This order of things turned out to be very problematic. The padding is a very simple method: if we have 1 byte to fill up, we write a 0; if we have 2 bytes to fill up, we write 1, 1; for 3 bytes, 2, 2, 2; and so on. That's easy to understand, right? Now let's for a moment assume a situation where an attacker can manipulate data and can see whether the server receives a bad padding or whether it receives bad data where the MAC check goes wrong. Here is the decryption with CBC mode, and here is what an attacker can do. The first thing the attacker does is throw away one block at the end; he just blocks the transmission of that block. Then he changes something here. What we assume is that the attacker wants to know this decrypted byte, because it may contain some interesting data. So he can manipulate this byte with a guess, and a byte can only have 256 possible values.
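As a toy illustration, the padding rule just described can be sketched in a few lines of Python (the helper name `tls_pad` is my own, not from any library):

```python
def tls_pad(data: bytes, block_size: int = 16) -> bytes:
    # TLS-style CBC padding: if n bytes are needed to reach a full block,
    # append n bytes that each carry the value n - 1
    # (1 byte to fill -> 00; 2 bytes -> 01 01; 3 bytes -> 02 02 02; ...)
    n = block_size - (len(data) % block_size)
    return data + bytes([n - 1] * n)
```

Note that even a message that already ends on a block boundary gets a full extra block of padding, so the receiver can always read the padding length from the last byte.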
So he can guess enough times and XOR his guess onto this value. If you think about it, this gets XORed with the plaintext here. That means if we end up with a 0 here, then the padding is valid; if we end up with some garbage value, then the padding is probably invalid. So by making enough guesses, the attacker can decrypt a byte here, under the condition that he somehow learns whether the padding is valid or not. So he can decrypt one byte. But he can go on. Let's assume we have learned that one byte, we have decrypted it; then we can continue with the next byte. We XOR this byte on the right with what we already know it is and with a 1, and then we XOR the next byte with our guess and also a 1. If this ends up being 1, 1, then again we have a valid padding, and the attacker learns the next byte. He can do this for all the bytes. This was originally discovered in 2002 by Serge Vaudenay, but it was kind of only theoretical. One thing here is that TLS has these error messages, and there are different kinds of errors. If you read the TLS 1.0 standard: if the padding is wrong, you get this decryption_failed error, and if the MAC is wrong, so the data has been modified, you get this bad_record_mac error. So you could say this would allow the padding oracle attack, because there are these distinct error messages. But the attacker cannot see them, because they are encrypted. So this was only a theoretical attack which didn't really work on a real TLS connection. But then there was a later paper which made this attack practical by measuring the timing difference between these different kinds of errors, and this allowed practical decryption of TLS traffic. In later versions of TLS this was fixed, or kind of fixed. But there's a warning in the standard, and this is right from the standard text: "This leaves a small timing channel, but it is not believed to be large enough to be exploitable."
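The last-byte step of this attack can be simulated end to end. This is a sketch under strong simplifying assumptions: `toy_decrypt_block` stands in for a real block cipher (the attack never looks inside it), all function names are mine, and rare false positives from accidental longer paddings are ignored:

```python
import os

BLOCK = 16
KEY = os.urandom(BLOCK)  # the toy server's secret

def toy_decrypt_block(block: bytes) -> bytes:
    # stand-in for real block cipher decryption; the attacker only ever
    # sees the oracle's valid/invalid answer, never this function
    return bytes(a ^ b for a, b in zip(block, KEY))

def padding_oracle(prev: bytes, last: bytes) -> bool:
    # what the server leaks: decrypt the last block, XOR with the previous
    # ciphertext block, then check TLS-style padding (n+1 bytes of value n)
    plain = bytes(a ^ b for a, b in zip(toy_decrypt_block(last), prev))
    n = plain[-1]
    return n + 1 <= BLOCK and all(b == n for b in plain[-(n + 1):])

def recover_last_byte(prev: bytes, target: bytes) -> int:
    # maul the last byte of the previous block until the oracle accepts;
    # a valid one-byte padding (00) then reveals the plaintext byte
    for guess in range(256):
        mauled = prev[:-1] + bytes([prev[-1] ^ guess])
        if padding_oracle(mauled, target):
            return guess  # forged byte is 00, so plaintext byte == guess
    raise RuntimeError("oracle never accepted")
```

The trick is the CBC decryption equation: plaintext = D(C_i) XOR C_(i-1), so flipping a bit in the previous ciphertext block flips the same bit in the plaintext, without the attacker ever knowing the key.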
If you read something like that, it sounds maybe suspicious, maybe dangerous. And indeed, in 2013 there was the so-called Lucky 13 attack, where a team of researchers actually managed to exploit that small timing side channel that the designers of the standard believed was not large enough to be exploitable. It is in theory possible to implement TLS in a way that is safe from these timing attacks, but it adds a lot of complexity to the code. If you just look at how Lucky 13 was fixed, it made the code much longer and much harder to understand. Then there was another padding oracle, called POODLE, which was in the old SSL version 3. This one was by design: the protocol was built in a way that you could not avoid this padding oracle. Then it turned out that there was also a TLS variation of this POODLE attack. The reason was that one of the only major changes between SSL version 3 and TLS version 1.0 was that the padding was fixed to a specific value, where in the past it could have any value, and it turned out there were TLS implementations that were not checking that, enabling this POODLE attack in TLS as well. Then there was the so-called Lucky Microseconds attack: basically, one of the people who had found the Lucky 13 attack looked at implementations to see if they had fixed Lucky 13 properly. They looked at s2n, which is an SSL library from Amazon, and found that it tried to implement countermeasures against this attack, but these countermeasures didn't really work, and there was still a timing attack they could perform. Then there was a bug in OpenSSL, which was kind of funny, because when OpenSSL tried to fix this Lucky 13 attack, they introduced another padding oracle which was actually much easier to exploit. So yeah, we had plenty of padding oracles. But remember what I said about the very first attack: it didn't really work in practice in TLS, because these errors are encrypted.
But theoretically you could imagine that someone creates an implementation that sends errors that are not encrypted. For example, you could send a TCP error, or just cut the connection, or have any other kind of different behavior, because the whole attack only relies on the fact that you can distinguish these two kinds of errors. And yes, you can find implementations out there doing that. So padding oracles are still an issue. Then I want to look at another attack, the so-called Bleichenbacher attacks. They targeted the RSA encryption, which is the asymmetric encryption we use at the beginning of a connection to establish a shared key. This attack was found in 1998 by Daniel Bleichenbacher. If you look at RSA encryption: before we encrypt something with RSA, we do some preparation of the data, and the way this is done in old TLS versions is the so-called PKCS #1 v1.5 standard. It looks like this: it starts with 00 02, then we have some random data, which again is just padding to fill up space, then a zero byte, which marks the end of the padding, and then a version number, 03 03, which stands for TLS 1.2, totally obvious, right? I'll get to version numbers later. And then we have the secret data. The relevant thing for this attack is mostly the 00 02 at the beginning: we know that every correctly encrypted block, if we decrypt it, starts with 00 02. So we may wonder: if we implement a TLS server and it decrypts some data from the client and it doesn't start with 00 02, what shall it do? The naive thing would be: of course we just send an error message, because something is obviously wrong here. This turns out to be not such a good idea, because if we do this, we tell the attacker something: we tell him that the decrypted data does not start with 00 02. So the attacker learns something about the interval in which the decrypted data lies: either it starts with 00 02 or it doesn't.
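The layout just described can be written down as a naive validity check, here as a sketch assuming a 2048-bit key (the function name is mine). Answering the client differently depending on this result is exactly the oracle Bleichenbacher exploits:

```python
def starts_like_pkcs1(decrypted: bytes, key_bytes: int = 256) -> bool:
    # PKCS #1 v1.5 layout for a TLS premaster secret:
    # 00 02 | at least 8 nonzero random padding bytes | 00 | 48-byte secret,
    # where the secret begins with the client version (03 03 = TLS 1.2)
    if len(decrypted) != key_bytes or decrypted[:2] != b"\x00\x02":
        return False
    sep = decrypted.find(b"\x00", 2)   # zero byte ending the padding
    if sep == -1 or sep < 10:          # need at least 8 padding bytes
        return False
    secret = decrypted[sep + 1:]
    return len(secret) == 48 and secret[:2] == b"\x03\x03"
```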
And it turns out this is enough: if you send enough queries and modify the ciphertext, you can learn enough information to decrypt data. The whole algorithm is a bit more complicated, but not that complicated, it's relatively straightforward, a bit of math, and I didn't want to put up any formulas. Now, as I said, this was discovered in 1998, so TLS 1.0 introduced some countermeasures. The general idea is that if you decrypt something and it is wrong, you're supposed to replace it with a random value, use that random value as your secret, pretend nothing has happened and continue; the handshake will then fail later on, because you don't have the same key. This prevents the attacker from learning whether the data was valid or not. In 2003 a research team figured out that the countermeasures as described in TLS 1.0 were incomplete, and it was also not entirely clear how to implement them, because there's this version number thing, and it was not exactly described how to handle the case where only the version is wrong. So they were able to create an attack that still worked despite these countermeasures. More countermeasures were proposed, and in 2014 there was a paper showing that Java was still vulnerable to Bleichenbacher attacks in a special way, because it used some kind of decoding that raised an exception, and handling that exception took long enough that you could measure the timing difference. There was also still a small issue in OpenSSL, although that one was not practically exploitable. In 2016 there was the so-called DROWN attack, which was a Bleichenbacher attack on SSL version 2. Now you may wonder: SSL version 2, this very, very old version from 1995, is that still a problem? It actually is, because you can take encrypted data from a modern TLS version like TLS 1.2 and decrypt it with a server that still supports SSL version 2.
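The countermeasure just described can be sketched like this. The names are mine, and this simple version is only a sketch: a real implementation must also make all the checks constant-time, which plain Python branches do not guarantee:

```python
import os

PREMASTER_LEN = 48

def unpad_premaster(decrypted: bytes, client_version: bytes) -> bytes:
    # Bleichenbacher countermeasure sketch: prepare a random fallback
    # secret up front, and on *any* formatting error silently use it,
    # so the handshake just fails later with a bad Finished message
    fallback = client_version + os.urandom(PREMASTER_LEN - 2)
    ok = len(decrypted) > 10 and decrypted[:2] == b"\x00\x02"
    sep = decrypted.find(b"\x00", 2)
    ok = ok and sep >= 10 and len(decrypted) - sep - 1 == PREMASTER_LEN
    secret = decrypted[sep + 1:] if ok else fallback
    # a wrong version number is also treated as a failure (the case
    # TLS 1.0 left underspecified), not as a visible error
    return secret if secret[:2] == client_version else fallback
```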
So that was the DROWN attack. And then last year I thought maybe someone should check whether there are still servers vulnerable to these Bleichenbacher attacks. So I wrote a small scan tool and started scanning the Alexa top 1 million. The first hit: Facebook.com was vulnerable. It turned out that of the top 100 pages, roughly a third were vulnerable, and in the end we found something like 15 different implementations that are vulnerable; probably more, but these were the ones we know about. And just, I think, a month ago, there was another paper showing that you can use cache side channels, which is mostly interesting if you have cloud infrastructure where multiple servers are running on the same hardware, to also perform these Bleichenbacher attacks. Now, what I want to show you here, you cannot read this because it's too small, but these are the chapters in the TLS standards that describe the countermeasures against these Bleichenbacher attacks. We knew about them since before TLS 1.0, so there was a small chapter on what you should do to prevent these attacks, and then they figured out, okay, that's not enough, we need more countermeasures, and even more. What you can clearly see is that it's getting more and more complicated to prevent these attacks: with every new TLS version we got more complexity just to prevent these Bleichenbacher attacks. These were just two examples. There were a lot more attacks on TLS 1.2 and earlier that were due to poor design choices. I've named a few here: SLOTH, which was against weak hash functions; FREAK, which attacked issues in the handshake and compatibility with old versions; Sweet32, which attacks block ciphers that have a small block size; and the triple handshake attack, which is a very complicated interaction of different handshakes.
The general trend in TLS 1.2 and earlier was: if there was a security bug, a vulnerability in the cryptography, what people did was say, we need a workaround for the security issue, and if this workaround doesn't work, if it's not sufficient, we need more workarounds. And we also create more secure modes, but we still keep the old ones, and then people can choose. We have this algorithm agility: there's the secure algorithm, there's the less secure algorithm, take whatever you want. In practice this very often meant the insecure modes were still used, because for all of these things there were modes available in TLS 1.2 that didn't have these vulnerabilities, but they were optional. I think the major change that came with TLS 1.3 was a change of mindset: people said, okay, if something has vulnerabilities, if it's insecure, and if we have something better, then we just remove the thing that is vulnerable, that is problematic. So the main change in TLS 1.3 was that a lot of things were deprecated. We no longer have these CBC modes. We no longer have RC4, another cipher that was problematic. We no longer have Triple DES, which has these small block sizes. We still use GCM, but no longer with an explicit nonce, which also turned out to be problematic. We completely removed RSA encryption; we still use RSA, but only for signatures. We removed hash functions that turned out to be insecure. We removed Diffie-Hellman with custom parameters, which turned out to be very problematic, and we removed elliptic curves that didn't look so secure. But there was also something else: some academics looked at TLS with a more scientific view. They tried to formally understand the security properties of this protocol, to analyze it and to see if they could prove some kind of security properties of the protocol.
And many of the vulnerabilities I mentioned earlier were found by these researchers trying to formally analyze the protocol. These analyses also contributed to designing TLS 1.3 to be more robust against attacks. This is, I think, also a big change: there was a much better collaboration between the scientists who were looking at the protocol and the people who were actually writing the protocol. But you may say, all this security is nice, but what we really care about, or what at least some people really care about, is speed. We want our internet to be fast; we want to open our browser and immediately get the page loaded. TLS 1.3 also brings improved speed. I'm showing here the handshake, and this is very simplified; I've only included the things that matter to make this point. If you look on the left, a handshake with an old TLS version starts with the client sending a Client Hello and some information about which versions and which encryption modes it supports. Then the server sends back which encryption modes it wants to use, and its key exchange. Then the client sends its part of the key exchange and the so-called Finished message, then the server sends a Finished message, and then the client can start sending data. In TLS 1.3 this has all been compressed a bit. The client sends its Client Hello and immediately sends its key exchange message. The server answers with its key exchange message and a few more things that I left out for simplicity, but the important thing is that with its second message the client can already send data. And this is the situation for a fresh handshake, where we have not communicated before and I want to make a new connection to a server: it goes one time back and forth and then I can send data, where in the earlier versions it went two times back and forth. So I can send data much faster. We removed one round trip from a fresh handshake.
There are also security improvements to this handshake, so this is nice: we get more security and more speed. In particular we have better security for so-called session resumption, which means reconnecting using a key from a previous session. We also protect more of the handshake data, which prevents some attacks where an attacker may fiddle with the handshake. These were more or less theoretical attacks, but they are also prevented in TLS 1.3. So TLS 1.3 has a more secure and a faster handshake, and if you want more details about this handshake, there was a talk two years ago at this congress which goes into it in much more detail; if this particularly interests you, you should watch that talk. I've put a link here, and I will put the slides online. There's also something called a zero round trip handshake, and this is even faster: we can send data right away. Now, how can we do that? This is kind of cheating, because what we need here is a previous connection: we have a key from a previous connection, we can derive a new key from that and use it to send data right away. So we need a so-called pre-shared key from a previous connection, and then we can send data without any round trips. Even more speed, that's nice, right? But this zero RTT mode does not come for free. There is a problem with so-called replay attacks, which means an attacker could record the data we're sending and then send it again, and the server may think, okay, this request came twice, so I'm doing twice what this request was supposed to do. So there are some caveats with zero RTT, and the standard says you should only use it if it's safe; it says something like, you should only use it if you have a profile for how to use it safely. Now, what does that mean? Let's look at HTTPS, which is the protocol we're usually using. If you look into the HTTP standard, it says that a GET request has to be idempotent, and a POST request does not have to be idempotent.
Now, what does that mean? It more or less means that if you send a request twice, it shouldn't do anything different from sending it just once. So in theory we could say, GET requests are idempotent, which means they are safe for zero round trip connections. The question is: do web developers know that? You can do a little experiment: if you meet someone who is a web developer, ask them if they know what idempotent means, and when they can use idempotent requests and when they cannot. So in an ideal world where web developers do know that, we can use zero RTT safely with TLS 1.3. Zero RTT also does not have as strong forward secrecy as a normal handshake, so there's a trade-off here: this pre-shared key is encrypted with a key on the server, and if that key gets compromised, that may compromise our connection, even if the key only leaks later on. So this looks a bit problematic, and many people speculate that of the future attacks we'll see on TLS 1.3, at least some will focus on the zero RTT mode, because it looks like one of the more fragile parts of the protocol. But it gives us more speed, so people want to have it. The good news is that this is entirely optional: we don't have to use it, and if we think it looks too problematic, we can switch it off. So if it turns out that there are too many attacks involving the zero RTT mode, we could disable it again and use TLS 1.3 without it. It will still be faster, just not as fast as it could be with this. Okay. Deployment. If we have this nice new protocol, we not only have to make sure it's secure and fast and everything, we also have to deploy it, and we have to deploy it on the internet. On the real internet, the one we have out there, not some theoretical internet where there are no bugs and everyone knows how to implement protocols, but the real internet with lots of IoT devices and enterprise firewalls and all these kinds of things. And now I want to get back to this version number.
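A toy illustration of the idempotency difference (plain Python, nothing TLS-specific; the handler names and the account data are invented for this example):

```python
# a replayed 0-RTT request reaches the application twice, so only handlers
# whose effect is the same once or twice ("idempotent") are safe to replay
balance = {"alice": 100}

def handle_get_balance(user: str) -> int:
    # idempotent: replaying it changes nothing
    return balance[user]

def handle_post_transfer(user: str, amount: int) -> int:
    # not idempotent: a replay debits the account a second time
    balance[user] -= amount
    return balance[user]

handle_post_transfer("alice", 30)   # the real request
handle_post_transfer("alice", 30)   # the attacker's recorded replay
```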
This may sound like a trivial thing, but TLS 1.3 has a new version number for the protocol version. Here's a Wireshark dump of a TLS 1.3 handshake. If you try to look for the version number, you will find multiple version numbers, and in case you cannot see it, I have made it a bit larger. At the top you see "Version: TLS 1.0", encoded as 0301. Okay, that looks strange. A few lines later you have "Version: TLS 1.2", 0303. But we thought this was TLS 1.3? I mean, it says so at the top, but somehow there are these other versions. And if you scroll further down, you see an extension called supported_versions, and there it lists TLS 1.3, which is encoded as 0304. So what's going on here? This looks strange. The first thing to understand is why we encode these versions in such a strange way. Why are we not using 1.0 for TLS 1.0? TLS 1.0 came after SSL version 3, which kind of makes it version 3.1, and that's how we encode it: TLS 1.0 is really just SSL version 3.1, TLS 1.1 is SSL version 3.2, and so on. And for TLS 1.3, it's complicated. The very first version you saw in this Wireshark dump was the so-called record layer, which is kind of a protocol inside the TLS protocol that has its own version number, which is totally meaningless, but it's just there. It turned out that for compatibility reasons it's best to just leave this at the version of TLS 1.0, then we have the fewest problems. This record layer protocol is the encoding of the TLS packets. Now, if we have a new TLS version, we cannot just tell everyone, tomorrow we will use TLS 1.3 and everyone has to update, because we know many people won't. So we somehow need to be able to deploy this new version and still be compatible with devices that only speak the old version. So let's assume we have a client that supports TLS 1.2 and a server that only supports TLS 1.0. How does that work? There's an extremely complicated mechanism here.
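In table form, the wire encodings mentioned here look like this (the dictionary and function names are my own):

```python
# each protocol version is two bytes, major.minor, continuing the
# SSL 3.x numbering on the wire
WIRE_VERSION = {
    "SSLv3":   0x0300,
    "TLS 1.0": 0x0301,  # "SSL 3.1"
    "TLS 1.1": 0x0302,  # "SSL 3.2"
    "TLS 1.2": 0x0303,  # "SSL 3.3"
    "TLS 1.3": 0x0304,  # only ever advertised inside supported_versions
}

def encode(version: int) -> bytes:
    # the two-byte big-endian form you see in a packet capture
    return version.to_bytes(2, "big")
```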
The client connects and says, hello, I speak TLS 1.2. The server says, okay, I don't know TLS 1.2, but the highest version I support is TLS 1.0, so it sends that back. And then they can speak TLS 1.0, in case the client still supports that, and we have a connection. This is very simple, I would think. To illustrate how you would program something like that: if the client's maximum version is smaller than the server's maximum version, we use the client's maximum version; otherwise we use the server's maximum version. So you would think there's no way anyone could possibly not get that right, right? I mean, it's very simple. But as I was saying earlier, we're talking about the real internet, and on the real internet we have enterprise products. In case you don't know: an enterprise product is something that's very expensive and buggy. So we have web pages that run behind a firewall from Cisco, or people using the IBM Domino web server, and all these kinds of things. And this is the TLS version negotiation in the enterprise edition: the client says, I want to connect with TLS 1.2, and the server says, oh, I don't support this very new version; it's from 2008, I mean, that's ten years, and in enterprise years that's very long. So the server just sends an error if the client connects with a TLS version that it doesn't know. It doesn't implement this version negotiation correctly. This is called version intolerance, and it has happened every time there was a new TLS version. Every time, we had devices with this problem: if you tried to connect with the new TLS version, they would just fail, they would send an error, or just cut the connection, or time out, or crash.
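The correct rule, next to the intolerant "enterprise" behavior just described, can be sketched like this (the function names are mine, and the broken variant is of course a caricature, not any vendor's actual code):

```python
TLS_1_0, TLS_1_1, TLS_1_2 = 0x0301, 0x0302, 0x0303

def negotiate(client_max: int, server_max: int) -> int:
    # the whole mechanism: settle on the highest version both sides speak
    return min(client_max, server_max)

def enterprise_negotiate(client_max: int, server_max: int) -> int:
    # version intolerance: an unknown (newer) client version triggers an
    # error instead of falling back to the server's own maximum
    if client_max > server_max:
        raise ConnectionError("unsupported protocol version")
    return client_max
```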
So browsers needed to handle this somehow, because the problem is: when a browser introduces a new TLS version and everything breaks, users will blame the browser, and they will say, I will no longer use this browser, I'll switch back to Internet Explorer or something like that. So what the browsers did was: okay, we try with the latest TLS version we support, and if we get an error, we try again with one version lower, and again one version lower, and eventually we may succeed in connecting. So here we have a browser and an enterprise server that supports TLS 1.0, and we will eventually get a connection. Now, do you remember POODLE? I mentioned earlier there was this padding oracle in SSL version 3, discovered in 2014. You may wonder: SSL version 3 is from 1996, that's really old. Who used that in 2014? It had been deprecated for 16 years; I mean, who uses that? Okay, Windows Phone 7 used it, and those Nokia phones also never got updates, but normal browsers and servers at least used TLS 1.0; maybe they didn't use TLS 1.2, but they used TLS 1.0. But we have these browsers that try to reconnect if there's an error. So what an attacker who wants to exploit SSL version 3 can do is just block all connections with the newer TLS versions, and thereby force the client down to SSL version 3, and then he can run this attack that only works on SSL version 3. So browsers said, okay, these downgrades are causing security issues, what do we do now? We could add another workaround: there was a standard called SCSV, which basically gives the server a way to tell the client that it's not broken. It's a kind of special cipher suite which tells the client: hey, if you did these strange downgrades here, please don't do that, I'm a well-behaving server.
So we had a workaround for broken servers, and then we needed another workaround for the security issues caused by that workaround. But at some point even enterprise servers had mostly fixed these version intolerance issues, browsers stopped doing these downgrades, and attacks like POODLE no longer worked. Did I just say they fixed it? No, of course they did not fix it. I mean, they fixed it for TLS 1.2, but of course they did not fix it for future TLS versions, because those were not around yet. So with TLS 1.3 we would get version intolerance again, and breaking servers, and we would have to introduce downgrades again, and all the nice security would not be very helpful. The TLS working group realized that and redesigned the handshake. It was redesigned in a way that the old version field is still set as if we're connecting with TLS 1.2, and then an extension called supported_versions was introduced, which signals support for all the TLS versions we can speak, including TLS 1.3 and possibly future versions. Now at this point you may wonder whether we'll get version intolerance with this new extension once TLS 1.4 comes out, because a server may be implemented so that it sends an error if it sees an unknown version in this new extension. David Benjamin from Google thought about this and said, we have to do something about that, we have to improve compatibility with future TLS versions, and he invented this GREASE mechanism. The idea is: a server should just ignore unknown versions in this extension. It gets a list of TLS versions, and if there's one in there that it doesn't know about, it should just ignore it and connect with one of the versions it does know about.
So we can kind of try to train servers to actually do that, and the idea is that we just send random bogus TLS versions, reserved values that will never be used for a real TLS version, and randomly add them to this extension, in order to make sure that if a server implements this incorrectly, it will hopefully be noticed early, because there will be connection failures with normal browsers. The hope is that if enterprise vendors implement a broken version negotiation, they will notice that before they ship the product, and then it can no longer be updated, because that's how the internet works. Okay, so we have this new version negotiation mechanism, we no longer need these downgrades, and we have this GREASE mechanism to make it future-proof, so now we can ship TLS 1.3, right? Then there was the middlebox issue. In 2017, TLS 1.3 was almost finished, but it took until 2018 until it was actually finished, and the reason was that when browser vendors implemented a draft version of TLS 1.3, they noticed a lot of connection failures. The reason for these connection failures turned out to be devices that were trying to analyze the traffic and trying to be smart, and they thought, okay, this looks very strange, it doesn't look like a TLS packet as we're used to it, so let's just drop it. So yeah: this is a strange TLS packet, I don't know what to do with it, I'll drop it.
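The mechanism just described can be sketched in a few lines; the reserved bogus values are the ones GREASE actually defines (RFC 8701), while the function names and the server's version list are mine:

```python
import random

# the reserved GREASE values for versions: 0x0A0A, 0x1A1A, ..., 0xFAFA
GREASE_VERSIONS = [0x0A0A + 0x1010 * i for i in range(16)]

TLS_1_2, TLS_1_3 = 0x0303, 0x0304

def client_supported_versions() -> list:
    # a GREASEing client slips a random reserved value into the extension
    return [random.choice(GREASE_VERSIONS), TLS_1_3, TLS_1_2]

def server_select(offered: list, known=(TLS_1_3, TLS_1_2)) -> int:
    # the tolerant behavior GREASE tries to enforce: silently skip
    # anything unknown and pick the best version both sides speak
    for version in sorted(offered, reverse=True):
        if version in known:
            return version
    raise ConnectionError("no version in common")
```

A version-intolerant server that errors out on the unknown GREASE value fails against every GREASEing browser immediately, which is exactly the point: the bug surfaces before the product ships.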
These were largely passive middleboxes. We're not talking about man-in-the-middle devices that intercept a TLS connection, but something like a router, where you would expect it to just forward traffic, but it tries to be smart, it tries to do advanced enterprise security or whatever, and these devices were dropping traffic that looked like TLS 1.3. So the browser vendors proposed some changes to TLS 1.3 to make it look more like TLS 1.2. The main thing was that they introduced some bogus messages from TLS 1.2 that are just supposed to be ignored. One such message is the so-called Change Cipher Spec message from TLS 1.2, which originally didn't exist in 1.3 due to the new handshake design. In 1.2, this message signals that everything from now on is encrypted. So the idea was: if we send a bogus Change Cipher Spec message early in the handshake, then maybe this will make those devices think that everything after it is encrypted and they cannot analyze it. And it turned out this worked; it reduced the connection failures a lot. There were a few other things, and eventually the failure rates got low enough that browsers thought, okay, now we can deploy this.
There were a few more issues. This is a PIXMA printer from Canon; these things have an HTTPS server, they have network support. And we have to talk about these people here. If you remember the Snowden revelations, one of the things highlighted there was that there's a random number generator called Dual EC DRBG, and it has a backdoor; basically, everyone these days believes this is a backdoor by the NSA, and they have some secret keys that let them predict what random values this random number generator will output. Also in the Snowden documents was that at some point the NSA offered 10 million dollars to RSA Security so that they would implement this random number generator. And then there was a proposal, a draft for a TLS extension called Extended Random, which adds some extra random numbers to the TLS handshake. Why, was never really clear; it was just a proposal, and anyone can write a proposal for a new extension. It was never finalized, but it was out there. In 2014 a research team looked closer at this Dual EC random number generator and figured out that if you use this Extended Random extension, it becomes much easier to exploit the backdoor in this random number generator. Coincidentally, RSA's TLS library BSAFE also contained support for that extension, but it was switched off, and they didn't find any implementations that actually used it, so it seemed like no big deal, right? But it turns out these Canon printers used this RSA BSAFE library and had this Extended Random extension enabled, even though it was only a draft. And because Extended Random was only a draft, it had no official extension number; a TLS extension has a number, so that the server knows what kind of extension it is, and for this implementation they just used the next available number. It turned out that this number collided with one of the mandatory extensions that TLS 1.3
introduced so so these these cannon printers could not interpret that new extension they thought this is this extended random and it didn't make any sense and so you had connection failures yeah eventually so they the then the in the TLS protocol they just gave this extension a new number and then this no longer happened yeah there were many more such issues and they continue to show up for example recently from Java which is like also very popular in enterprise environments it now ships with the TLS 1.3 support but it doesn't really work so you have connection failures there yeah now with all these deployment issues what about future TLS versions will we have all that again and we have this grease mechanism and it helps a bit like it prevents these tolerance issues but it doesn't prevent these more complicated middle box issues there was a proposal from David Benjamin from Google who said yeah maybe we should just every few months like every two or three months ship a new temporary TLS version which we will use for three months and then we will deprecate it again just constantly change the protocol so that the internet gets used to the fact that new protocols get introduced my prediction here is that these deployment issues are going to get worse I mean we know now that they exist and we kind of have some ideas how to prevent them but if you go to enterprise security conferences you will know that the latest trend in enterprise security is this thing called artificial intelligence we use machine learning and fancy algorithms to detect bad stuff and that worries me and here's a blog post from Cisco where they want to use machine learning to detect bad TLS traffic because they see all this traffic is encrypted and we can no longer analyze it we don't know if malware is in there so let's use some machine learning it will detect bad traffic so what I'm very worried that will happen here is that the next generation of TLS deployment issues will be AI supported TLS 
intolerance issues, and they may be much harder to fix and analyze.

Speaking of enterprise environments: one of the very early changes in TLS 1.3 was that it removed the RSA encryption handshake. One reason was that it doesn't have forward secrecy; the other was all these Bleichenbacher attacks that I talked about earlier. And then there came an email to the TLS working group from the banking industry, and I quote: "I recently learned of a proposed change that would affect many of my organization's member institutions: the deprecation of the RSA key exchange. Deprecation of the RSA key exchange in TLS 1.3 will cause significant problems for financial institutions, almost all of whom are running TLS internally and have significant security-critical investments in out-of-band TLS decryption." What it basically means is: they are using TLS for some connection, and they have some device in the middle that is decrypting the traffic and analyzing it somehow, which, if they do it internally, is okay. But this no longer works with TLS 1.3, because we always negotiate a new key for each connection, so it's no longer possible to do this static decryption. There was an answer from Kenny Paterson, who is a professor from London. He said: "My view concerning your request: no. Rationale: We're trying to build a more secure internet. You're a bit late to the party. We're metaphorically speaking at the stage of emptying the ashtrays and hunting for not quite empty beer cans. More exactly, we are at draft 15, and RSA key transport disappeared from the spec about a dozen drafts ago. I know the banking industry is usually a bit slow off the mark, but this takes the biscuit." There were proposals then to add a "visibility mode" to TLS 1.3, which would in another way allow these connections to be passively observed and decrypted, but they were all rejected, and the general opinion in the TLS working group was that monitoring traffic content is just fundamentally not the goal of TLS. The goal of TLS is to have an encrypted channel that no one else can read. The industry eventually went to ETSI, which is the European telecommunications standards organization, and they recently published something called Enterprise TLS, which modifies TLS 1.3 in a way that allows these decryptions. The IETF protested against that, primarily because of the name: calling it TLS makes it sound like it's some addition to TLS, and apparently ETSI had previously promised them that they would not use the name TLS, and then they named it Enterprise TLS.

Okay, but yeah: TLS 1.3 is finished, you can start using it, you should update your servers so that they use it, and your browser probably already supports it. So, in summary: TLS 1.3 deprecates many insecure constructions, it's faster, and deploying new things on the internet is a mess. So yeah, that's it, and I think we have a few minutes for questions.

All right, as Hanno mentioned, we have like six minutes or so for questions. We have five microphones in the room, so if you want to ask a question, hurry up to one of the microphones, and please make sure to ask a short, concise question so we can get as many in as we possibly can. Maybe you just go ahead over there at mic 2.

The question is if there's a way to prevent the use of that Enterprise TLS. Yes, there is, because the basic idea is that they will use a static Diffie-Hellman key exchange, and if you just connect twice and see that they are using the same key again, then you might reject that. Although the problem is that some servers may also use it for optimization, so there are longer discussions on this question. So I cannot fully answer it, but more or less, there are options.

All right, before we go to the next question, a quick request for all the people leaving the room: please do so as quietly as possible so we can finish this Q&A in peace and don't have all this noise going on. Mic 3, please.

Hi, I was wondering about the replay attacks. Why didn't they implement
something like sequence numbers into the TLS protocol? There is something like that in there. The problem is that you sometimes have a situation where you have multiple TLS termination points, for example if you have a CDN that is internationally distributed, and you may not be able to keep state across all of them.

All right, then let's take a question from our viewers on the internet. The signal angel, please. Binarystrike asks: with regards to TLS 1.3 in the enterprise, shouldn't we move away from perimeter interception devices towards putting the controls on the endpoint, like in a zero trust design? So, in my opinion, yes, but there are many people in the enterprise security industry who think that this is not feasible. But a discussion about network design would be a whole other talk.

All right, then let's take a question from mic 4, please. Yeah, it's also related to the Enterprise TLS: the browser can connect to an Enterprise TLS server without any problems? Yes, it's built so that it's compatible with the existing TLS protocol. Thanks. And whether you can avoid that or not, that's really a more complicated discussion; that would kind of be a whole sub-talk, so I cannot answer this in a minute, but come to me later if you're interested in the details.

All right, then let's take another question from the interwebs. I mean, that's what I said; that's actually what browsers are doing, and I think this is a good idea. I just think that this only covers a small fraction of these deployment issues.

Okay, we still have plenty of time, so let's go to mic 2, please. Yeah, as you said, we still have a lot of dirty work around TLS 1.3 and all the implementations in the browsers and so on. Is there a way to make a requirement for TLS 1.3 or 1.4 compliance, so that you have like a test you can perform, a self-test or something like that, and if you pass it you are allowed to use the TLS 1.3 logo or 1.4 logo? You can do that in theory. The problem is that you don't really want to have a certification regime where people have to ask for a logo to be allowed to implement TLS. That's kind of one of the downsides of the open architecture of the internet, right? We allow everyone to put devices on the internet, so we kind of have to live with that. There's no TLS police, so we have no way of preventing people from using broken TLS implementations. And I mean, people won't care whether they have a logo for it or not, right?

All right, let's go to mic 5, all the way in the back there. Okay, I have a question about Shor's algorithm and TLS 1.3, because quantum computing is getting very popular lately and there is a lot of improvement in the industry. So what's the current situation regarding TLS 1.3 and all those quantum-based algorithms that break the complexity down to polynomial time? Yeah, there's no major change here. With TLS 1.3 you are still using algorithms that can be broken with quantum computers, if you have a quantum computer, which currently you don't, but may have in the future. There is work being done on standardizing future algorithms that are safe from quantum attacks, but that's at an early stage, and there was an experiment by Google to introduce a quantum-safe handshake, but they only ran it for a few months. I think we will see extensions within the next few years that will introduce quantum-safe algorithms, but right now there's no change from TLS 1.2 to 1.3; both can be attacked with quantum computers.

Okay, so I think we are getting to our last or second-to-last question, so let's go to mic 3; I think you've been waiting the longest. Okay, in all the versions of TLS there was a problem for smaller devices, such as IoT and industrial devices. Has there been a change in 1.3 to allow them to participate? I mean, I'm not sure what exactly you mean by the problem. The performance issues of TLS have usually
been overstated, so even on a relatively low-powered device you can implement the crypto. I mean, the whole protocol is relatively complex and you need to implement it somehow, but I don't think that's such a big issue anymore, because even IoT devices have relatively powerful processors these days. Okay.

All right, that concludes our Q&A. Unfortunately we are out of time, so please give a huge round of applause for this great talk.