Basically, the upcoming talk is about deploying TLS 1.3, and it's by Filippo Valsorda and Nick Sullivan, and they're both with Cloudflare. So please, a warm welcome to Nick and Filippo. Hello, hello, everyone. All right, we're here to talk about TLS 1.3. TLS 1.3 is, of course, the latest version of TLS, which stands for Transport Layer Security. Now, you might know it best as, of course, the green lock in the browser, right? Or by its old name, SSL, which we're still trying to kill. Now, TLS is a transparent security protocol that can securely tunnel arbitrary application traffic. It's used by web browsers, of course. It's used by mail servers to communicate with each other to secure SMTP. It's used by Tor nodes to talk to each other. It evolved over 20 years, but at its core, it's about a client and a server that want to communicate securely over the network. To communicate securely over the network, they need to establish some key material, to agree on some key material on the two sides, to use to encrypt the rest of the traffic. Now, how they agree on this key material is a phase that we call the handshake, and the handshake involves some public key cryptography and some data being shoveled from the client to the server, and from the server to the client. Now, this is what the handshake looks like in TLS 1.2. The client starts the dance by sending a Client Hello over, which specifies the parameters it supports. The server receives that and sends a message of its own, which is the Server Hello, that says: sure, let's use this cipher suite over here that you say you support, and here is my key share to be used in this key agreement algorithm. And also, here's a certificate, which is signed by an authority and proves that I am indeed cloudflare.com. And here is a signature from the certificate to prove that this key share is actually the one that I want you to use to establish keys.
The client receives that and generates its own key share, its own half of the Diffie-Hellman key exchange, and sends over the key share and a message to say, all right, this wraps up the handshake, which is called the Finished message. The server receives that, makes a Finished message of its own, and answers with that. So now we can finally send application data. So to recap, we went client-server, server-client, client-server, server-client. We had to do two round trips between the client and the server before we could do anything. We haven't sent a single byte at the application layer until now. Now, of course, on mobile networks, or in certain parts of the world, this can add up to hundreds of milliseconds of latency. And this is what needs to happen every time a new connection is set up: every time, the client and the server have to go back and forth twice to establish the keys before the connection can actually be used. Now, TLS 1.1 and 1.0 were not that different from 1.2. So you might ask: well, then why are we having an entire talk about TLS 1.3, which is probably just another iteration on the same concepts? Well, TLS 1.3 is actually a big redesign. And in particular, the handshake has been restructured. And the most visible result of this is that an entire round trip has been shaved off. So here's what a TLS 1.3 handshake looks like. How does 1.3 remove a round trip? How can it do that? Well, it does that by predicting which key agreement algorithm the server will decide to use and preemptively sending a key share for that algorithm to the server. So with the first flight, we have the Client Hello, the supported parameters, and a key share for the algorithm that the client thinks the server will like. The server receives that, and if everything goes well, it will go: oh, sure, I like this key share. Here is my own key share to run the same algorithm. And here are the other parameters we should use.
It immediately mixes the two key shares to get a shared key, because now it has both key shares, the client's and the server's. And it sends, again, the certificate and a signature from the certificate. And then it immediately sends a Finished message, because it doesn't need anything else from the client. The client receives that, takes the key share, mixes in its own to get the shared key, and sends its own Finished message, and is ready to send whatever application layer data it was waiting to send. For example, your HTTP request. So now we went client-server, server-client, and we are ready to send data at the application layer. So when you are setting up an HTTPS connection, your browser doesn't need to wait four times the latency, four times the ping; it only has to wait two times. And of course, this saves hundreds of milliseconds of latency when setting up fresh connections. Now, this is the happy path. This is what happens when the prediction is correct and the server likes the client's key share. If the server doesn't support the key share that the client sent, it will send a polite request to use a different algorithm that the client said it can support. We call that message HelloRetryRequest. It has a cookie so that it can be stateless, but essentially it falls back to what is effectively a TLS 1.2-like handshake. And it's not that hard to implement, because the client follows up with a new Client Hello, which looks essentially exactly like a fresh one. Now, here I've been lying to you: TLS 1.2 is not always two round trips. Most of the connections we see at the Cloudflare edge, for example, are resumptions. That means that the client has connected to that website before in the past. And we can exploit that to make the handshake faster. That means that the client can remember something about the key material to make the next connection a single round trip, even in 1.2. So here's how it looks.
Here you have your normal TLS 1.2 full two-round-trip connection. And over here, the server sends a new session ticket. A session ticket is nothing else than an encrypted, wrapped blob of key material that the client will hold on to. The session ticket is encrypted and signed with a key that only the server knows. So it's completely opaque to the client, but the client will keep it together with the key material of the connection, so that the next time it makes a connection to that same website, it will send a Client Hello and the session ticket. If the server recognizes the session ticket, it will decrypt it and find the key material inside, and now, after only one round trip, the server will have some shared key material with the client, because the client held on to the key material from last time and the server just decrypted it from the session ticket. Okay, so now the server has some shared keys to use already, and it sends a Finished message, and the client sends its own Finished message and the request. So this is TLS 1.2. This is what is already happening every day with most modern TLS connections. Now, TLS 1.3 resumption is not that different. It still has the concept of a session ticket. We changed the name of what's inside the session ticket to a PSK, but that just means pre-shared key, because that's what it is: some key material that was agreed upon in advance. And it works the same way. The server receives the session ticket, decrypts it, and jumps to the Finished message. Now, a problem with resumption is that if an attacker controls the session ticket key, the key that the server uses to encrypt the session ticket holding the key material, the attacker can passively, or in the future even with just a recording of the connection, decrypt the session ticket from the Client Hello, find the PSK inside it, and use it to decrypt the rest of the connection. This is not good.
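The session-ticket mechanism described above can be sketched in a few lines of Go. This is a minimal illustration, not a real TLS stack: the names sealTicket and openTicket are made up, and AES-GCM stands in for whatever authenticated encryption the server uses with its session ticket encryption key (STEK). The point of the sketch is that the ticket is opaque to the client, but anyone holding the STEK, including an attacker who stole it, can open it.

```go
// Sketch: a session ticket as an opaque, sealed blob of key material.
// sealTicket/openTicket are illustrative names, not a real API.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealTicket encrypts the resumption key material under the session
// ticket encryption key. The client stores the result but cannot read it.
func sealTicket(stek, keyMaterial []byte) ([]byte, error) {
	block, err := aes.NewCipher(stek)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so openTicket can find it.
	return append(nonce, aead.Seal(nil, nonce, keyMaterial, nil)...), nil
}

// openTicket is what the server does on resumption, and also what an
// attacker who obtained the STEK can do with a recorded Client Hello.
func openTicket(stek, ticket []byte) ([]byte, error) {
	block, err := aes.NewCipher(stek)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := ticket[:aead.NonceSize()], ticket[aead.NonceSize():]
	return aead.Open(nil, nonce, ct, nil)
}

func main() {
	stek := make([]byte, 32)
	rand.Read(stek)
	secret := []byte("resumption key material")
	ticket, err := sealTicket(stek, secret)
	if err != nil {
		panic(err)
	}
	recovered, err := openTicket(stek, ticket)
	if err != nil {
		panic(err)
	}
	fmt.Printf("recovered from ticket: %s\n", recovered)
}
```

Note how nothing in the ticket itself stops a third party with the STEK from recovering the key material, which is exactly the compromise scenario described here.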
This means that someone can do passive decryption just by having the session ticket key. The way this is usually addressed is by saying that session ticket keys should be short lived, but still, it would be nice if we didn't have to rely on that, and there are actually nice papers that tell us that implementations don't always do this right. So, instead, what TLS 1.3 allows us to do is use Diffie-Hellman with resumption. In 1.2, there was no way to protect against session ticket key compromise. In 1.3, what you can do is send a key share as part of the Client Hello anyway, and the server will send a key share together with the Server Hello, and they will run Diffie-Hellman. Diffie-Hellman was used in 1.2 to introduce forward secrecy against the compromise of, for example, the certificate private key, and it's used here to provide forward secrecy for resumed connections. Now, you will say: well, but this looks essentially like a normal 1.3 handshake. Why have the PSK at all? Well, there's something missing from this one. There's no certificate, because there's no need to reauthenticate with the certificate: the client and the server spoke in the past, so the client knows that it already checked the certificate of the server, and if the server can decrypt the session ticket, it means that it's actually who it says it is. So the two key shares get mixed together, then mixed with the PSK, to make a key that encrypts the rest of the connection. Now, there's one other feature introduced by TLS 1.3 resumption, and that's the fact that it allows us to make zero round-trip handshakes. Again, handshakes in 1.3 are mostly one round trip. TLS 1.2 resumptions are at a minimum one round trip. TLS 1.3 resumptions can be zero round trips. How does a zero round-trip handshake work? Well, if you think about it, when you start, you have a PSK, a pre-shared key. The client can just use that to encrypt the early data that it wants to send to the server.
So the client opens a connection to a server that it has already connected to in the past and sends a Client Hello, the session ticket, a key share for Diffie-Hellman, and then early data. Early data is a blob of application data, for example an HTTP request, encrypted with the PSK. The server receives this, decrypts the session ticket, finds the PSK, uses the PSK to decrypt the early data, and then proceeds as normal: mixes the two key shares, mixes the PSK in, makes a new key for the rest of the connection, and continues the connection. So what happened here? We were able to send application data immediately upon opening the connection. This means that we completely removed the performance overhead of TLS. Now, 0-RTT handshakes, though, have two caveats that are theoretically impossible to remove. One is that that nice thing we introduced with the PSK-ECDHE mode, the one where we do Diffie-Hellman for resumption in 1.3, does not help with 0-RTT data. We do Diffie-Hellman when we reach the green box in the slide; of course, the early data is only encrypted with the PSK. So let's think about the attacker again. The attacker somehow stole our session ticket encryption keys. It can look at the Client Hello, decrypt the session ticket, get the PSK out, and use the PSK to decrypt the early data, and it can do this even from a recording, if it gets the session ticket key later on. So the early data is not forward secret with respect to the session ticket keys. The Diffie-Hellman result only becomes available once we get the server's answer, so it's of no use for the first flight sent from the client. So, to recap, a lot of things going on here. TLS 1.2 introduced forward secrecy against the compromise of the certificate private keys a long time ago, by using ECDHE modes. So 1.2 connections can always be forward secret against certificate compromise. TLS 1.3 has that always on as well.
There is no mode that is not forward secret against compromise of the certificate. But when we think about what might happen to the session ticket key, TLS 1.2 never provides forward secrecy: with TLS 1.2, compromising the session ticket key always means being able to passively, and in the future, decrypt resumed connections. In 1.3 instead, if we use PSK-ECDHE, only the early data can be decrypted using the session ticket key alone. Now, I said that there were two caveats. The second caveat is that 0-RTT data can be replayed. The scenario is this. You have some data in the early data that is somehow authenticated. It might be an HTTP request with some cookies on it. And that HTTP request is somehow executing a transaction, moving some money, instructing the server to do something. An attacker wants to make that happen multiple times. It can't decrypt it, of course, it's protected with TLS, so it can't read the cookie. And it can't modify it, because, of course, it's protected with TLS. But it can record the encrypted message, and it can then replay it against the server. Now, if you have a single server, this is easy to fix. You just take note of the messages you've seen before and you just say: no, this looks exactly like something I got before. But if, like Cloudflare, you're running multiple data centers around the world, you can't keep consistent state in real time across all machines. So there will be different machines that, if they receive this message, will go: sure, I have the session ticket key, I decrypt the PSK, I use the PSK, I decrypt the early data, I find something inside, I execute what it tells me to do. Now, of course, this is not desirable. One countermeasure that TLS offers is that the client sends a value in that bundle, which is how long ago, in milliseconds, it obtained the session ticket. The server looks at that value, and if it doesn't match its own view of this information, it will reject the message.
That means that if the attacker records the message and then, say, 10 seconds later tries to replay it, the times won't match and the server can drop it. But this is not a full solution, because if the attacker is fast enough, it can still replay messages. So all the server can do is either accept the 0-RTT data or reject it. It can't just take some part of it, or take a peek and then decide, because it's the Server Hello message that says whether it's accepted or rejected, and the client will keep sending early data until it gets the Server Hello. There's a race here. So the server has to go in blind and decide: am I taking 0-RTT data, or am I rejecting it all? If it takes it, and then finds out that it's something it can't process, because, oh God, there's an HTTP POST in here that says to move some money, and I can't do this unless I know it's not replayed, then the server has to get some confirmation. The good news is that the server can wait for the Finished message: the server sends the Server Hello and its Finished, and waits for the client's one. When the client's Finished gets there, it means that the early data was not replayed either, because that Finished message ties together the entire handshake, together with a random value that the server sent. So it's impossible that it was replayed. So this is what a server can do. It can accept the early data, and if it's something that is dangerous if replayed, it can just wait for the confirmation. But that means it has to buffer it. And there's a risk of an attack here, where an attacker just sends an HTTP POST with a giant body just to fill your memory. So what we realized is that we could help with this if we wrote on the session ticket what's the maximum amount of early data that the client can send. If we see someone sending more than that, then it's an attacker, and we close the connection, drop the buffer, free up the memory.
But anyway, whatever countermeasures we deploy, unless we can keep global state across the servers, we have to inform the application that this data might be replayed. The spec knows this. The TLS 1.3 spec explicitly says that protocols MUST NOT use 0-RTT without a profile that defines its use, which means without knowing what they're doing. This means that TLS stack APIs have to do one round trip by default, which is not affected by replays, and then allow the server to call some APIs to either reject early data or wait for the confirmation, and let the client decide what goes into this dangerous, replayable piece of data. So this will change based on the protocol, but what about our favorite protocol? What about HTTP? Now, HTTP should be easy. You go read the HTTP spec, and it says: well, GET requests are idempotent. They must not change anything on the server. Solved! We will just allow GET requests in early data, because even if they're replayed, nothing happens. Yay! Nope. You will definitely find some server on the internet that has something like sendmoney.php?to=filippo&amount=whatever, and it's a GET request. And if an attacker records this, which is early data, and then replays it against a different server in the pool, that will get executed twice, and we can't have that. Now, so what can we do here? We make trade-offs. If you know your application, you can make very specific trade-offs. For example, Google has been running QUIC with 0-RTT for the longest time, for three years, I think. And that means that they know their application very well, and they know that they don't have any sendmoney.php endpoints. But if you're like Cloudflare, which fronts a wide number of applications, you can't make such wide-sweeping assumptions, and you have instead to hope for some middle ground.
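One conservative shape such a middle ground could take is an allowlist over what a front-end is willing to process from unconfirmed early data. This is a purely illustrative sketch; safeInEarlyData is a hypothetical helper, not an API from any real server.

```go
// Sketch: a policy gate deciding which requests are acceptable in
// 0-RTT early data before the anti-replay confirmation arrives.
// safeInEarlyData is a hypothetical name for illustration.
package main

import "fmt"

// safeInEarlyData returns true only for requests we are willing to
// answer even if they might be replayed: here, idempotent GETs to a
// single allowlisted path. Everything else waits for the client's
// Finished message.
func safeInEarlyData(method, path string) bool {
	return method == "GET" && path == "/"
}

func main() {
	fmt.Println(safeInEarlyData("GET", "/"))              // allowed
	fmt.Println(safeInEarlyData("GET", "/sendmoney.php")) // held back
	fmt.Println(safeInEarlyData("POST", "/"))             // held back
}
```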
For example, something we might decide to do is to only allow GET to the root, so GET slash, which might give the most benefit, because maybe most connections start like that, and be the least likely to cause trouble. We are still working on how exactly to bring this to the application, so if you know of an application that would get hurt by something as simple as that, do email us. But actually, if you have an application that is that vulnerable, I have bad news: Thai Duong et al. demonstrated that browsers will, today, without TLS 1.3 or anything, replay HTTPS requests if network errors happen, and they will replay them silently. So it might not actually be worse than the current state. Okay, I can actually see everyone getting uneasy in their seats, thinking: there, the cryptographers started it again. They're making the security protocol that we need more complex than it has to be, to get themselves job security for the next 15 years, right? No. No. I can actually assure you that one of the big changes, in my opinion even bigger than the round trips in 1.3, is that everything is being weighed for its benefit against the complexity that it introduces. And while 0-RTT made the cut, most other things definitely didn't. Right, thanks, Filippo. Is my microphone working? You can hear me? Okay. So in TLS 1.3, as an iteration of TLS, we also went back, we being the people who are working on TLS, and revisited existing TLS 1.2 features that sort of seemed reasonable at the time, and decided whether or not the complexity and the danger added by these features, or these protocols, or these primitives involved in TLS were reasonable to keep. And the big one, which happened early on in the process, is static RSA mode. This is the way that TLS had been working since back in SSL, rather than using Diffie-Hellman to establish a shared key.
How this works is that the client makes its own shared key, encrypts it with the server's certificate public key, which is going to be an RSA key, and then just sends it over the wire to the server. And then the server uses its private key to decrypt that, and a shared key is established. So the client creates all the key material in this case. And one thing that's sort of obvious from this is that if the private key for the certificate is compromised, even after the fact, even years later, someone with a transcript of what happened can go back, decrypt this key material, and then see the entire conversation. So this was removed very early in the process, somewhere around two years ago, in TLS 1.3. So, much to our surprise, and the surprise of everyone reading the TLS mailing list, just very recently, near the end of the standardization process, when TLS 1.3 was almost final, this email landed on the list. It's from Andrew Kennedy, who works at BITS, which basically means he works at banks. So this is what he said. He said: deprecation of the RSA key exchange in TLS 1.3 will cause significant problems for financial institutions, almost all of whom are running TLS internally and have significant security-critical investments in out-of-band TLS decryption. Out-of-band TLS decryption. Hmm. That certainly sounds critical, critical for someone, right? So one of the bright spots was Kenny Paterson's response to this, in which he said: my view concerning your request? No. Rationale: we're trying to build a more secure internet. Emphasis on "more" is mine, but I sure meant it, yeah. So after this, the banking folks came to the IETF and presented this slide to describe how hard it is to actually debug their systems. And this is a very simple one, I guess, with respect to banking. There are different switches, routers, mainframes, middleware, web applications, and everything talks TLS one to the other.
And after this discussion, we came to a compromise, but instead of actually compromising the protocol, Matt Green taught them how to use Diffie-Hellman incorrectly. So they ended up actually being able to do what they wanted to do without us, or anybody in the academic community or the TLS community, adding this insecure piece back into TLS. So if you want to read it, it shows how to do it. But in any case, we didn't add it back. Yeah, don't do this, basically. So we killed static RSA, and what else did we kill? Well, looking back on the trade-offs, there are a number of primitives in use in TLS 1.2 and earlier that just haven't stood the test of time. So RC4, a stream cipher: gone. Triple DES, a block cipher: gone. MD5, SHA-1: all gone. There are even constructions built on basic block ciphers that are gone. AES-CBC: gone. RSA PKCS#1 v1.5, which has been known to be problematic since 1998: also gone. We also removed several features, like compression and renegotiation, which was replaced with a very lightweight key update mechanism. So in TLS 1.3, none of these met the balance of benefit versus complexity. And a lot of the vulnerabilities you might recognize are just impossible with TLS 1.3. So that's good. So the philosophy for TLS 1.3, in a lot of places, is to simplify and make things as robust as possible. And there are a number of little cases in which we did that. Some of the authors of this paper may be in the audience right now, but there was a way in which ciphers were used for the actual record layer that was not as robust as it could be. It's been replaced with a much simpler mechanism. TLS 1.2 also had this really kind of funny catch-22 in it, where the cipher negotiation is protected by the Finished message, which is a message authentication code, but the algorithm for that code was determined in the cipher negotiation.
So it had this kind of loopback effect, and attacks like FREAK, Logjam, and CurveSwap from last year managed to exploit this to actually downgrade connections. And this was something that was happening in the wild. And the reason for this is that the cipher suites in the handshake are not actually digitally signed by the private key. In TLS 1.3, this was changed: everything from the signature up is digitally signed. So this is great. What else did we change? Well, what else did TLS 1.3 change versus TLS 1.2? And that is: fewer, better choices. And in cryptography, better choices always means fewer choices. So there's now a short list of curves and finite field groups that you can use: no arbitrary Diffie-Hellman groups made up by the server, no arbitrary curves. And this shortening of the list of parameters is really what enables one-RTT to work a lot of the time. As Filippo mentioned, the client has to guess which key establishment methods the server supports and send that key share. If there's a short list of only secure options, the guess is right a much larger percentage of the time. So when you're configuring your TLS server, it no longer looks like a complicated takeout menu. It's more like a wedding menu: take one of each. It's a lot more delicious anyway. And if you look at it in Wireshark, it's also very simple. There are the cipher suites, the extensions, the curves, and you can go from there. Now, TLS 1.3 also fixed what I think was one of the biggest actual design mistakes of TLS 1.2. We talked about how forward secrecy works with resumption in 1.2 and 1.3. But TLS 1.2 is even more problematic. TLS 1.2 wraps inside the session ticket the actual master secret of the old connection. So it takes the actual keys that encrypt the traffic of the original connection, encrypts them with the session ticket key, and sends that to the client to be sent back the next time.
We talked about how there's a risk that an attacker will obtain session ticket keys, decrypt the session tickets, break forward secrecy, and decrypt the resumed connections. Well, in TLS 1.2, it's even worse. If they decrypt the session tickets, they can go back and retroactively decrypt the original, non-resumed connection. And this is completely unnecessary. We have hash functions. We have one-way functions, where you put an input in and you get something that you can't go back from. So that's what 1.3 does. 1.3 derives new, fresh keys for the next connection, and wraps those inside the session ticket to become the PSK. So even if you decrypt a 1.3 session ticket, you can then attack the subsequent connection, and we've seen that you might be able to decrypt only the early data or the whole connection, depending on what mode it uses, but you definitely can't decrypt the original, non-resumed connection. So this would be bad enough, but 1.2 makes another decision that entirely puzzles me. The whole using-the-master-secret thing might be just because session tickets were an extension in 1.2, which they're not in 1.3, but 1.2 sends the new session ticket message during the original handshake unencrypted. I mean, encrypted with the session ticket keys, but not with the current session keys. So any server that just supports session tickets will have, at the beginning of all connections, even if resumption never happens, a session ticket, which is nothing else than the ephemeral keys of that connection wrapped with the session ticket keys. Now, if you are a global passive adversary that somehow wants to do passive dragnet surveillance, and you wanted to passively decrypt all the connections, and somehow you were able to obtain session ticket keys, what you would find at the beginning of every TLS 1.2 connection is the session keys encrypted with the session ticket keys.
Now, in 1.3, these kinds of attacks are completely impossible. The only thing that you can passively decrypt, or decrypt after the fact, is the early data, and definitely not non-resumed connections, and definitely not anything that comes after the 0-RTT data. So it's safer, basically. Hopefully. And how do we know that it's safer? Well, the security parameters and security requirements of TLS have been formalized, and as opposed to earlier versions of TLS, the folks in the academic community who do formal verification were involved early. So there have been several papers analyzing the state machine and analyzing the different modes of TLS 1.3, and these have aided a lot in the development of the protocol. So who actually develops TLS 1.3? Well, it's an organization called the IETF, the Internet Engineering Task Force. It's a group of volunteers that meet three times a year, and have mailing lists, and debate these protocols endlessly. They define the protocols that are used on the internet. And originally, the first thing that I ever saw about this, this is a tweet of mine from September 2013, was a wish list for a TLS 1.3. And since then, they came out with a first draft. At the IETF, documents that define protocols are known as RFCs, and the lead-up to something becoming an RFC is an internet draft. So you start with internet draft zero, and then you iterate on this draft until finally it gets accepted or rejected as an RFC. So the first one was almost three years ago, back in April of 2014. And the current draft, draft 18, which is considered to be almost final, it's in what's called last call at the IETF, came out just recently, in October. And just in the security landscape, during that time, you've seen so many different types of attacks on TLS.
So Triple Handshake, POODLE, FREAK, Logjam, DROWN, which there was a talk about earlier today, Lucky Microseconds, SLOTH: all these different acronyms you may or may not have heard of have happened during the development. So TLS 1.3 is a living document. And it's hopefully going to stay small. I mean, TLS 1.2 was 79 pages. It's kind of a rough read, but give it a shot if you like. TLS 1.3, if you shave off a lot of the excess stuff at the end, is actually close to that, and it's a lot nicer to read. It's a lot more precise, even though there are some intricate features like 0-RTT resumption. So practically, how does it get written? Well, it's a GitHub repository and a mailing list. So if you want to send a pull request to the TLS working group, there it is. This is actually how the draft gets defined. And you probably want to send a message to the mailing list to describe what your change is. So I suggest, if anybody wants to be involved: it is pretty late, I mean, it's in last call, but the mailing list is still open. Now, I've been working on this with a bunch of other people, Filippo as well; we're contributors on the draft. We've been working for over a year on this, and you can check the GitHub issues to see how much work has gone into it. But the draft has changed over the years and months. For example, draft nine had this very complicated tree structure for the key schedule. And these labels you can see here all have to do with different keys in the TLS handshake. And this was inspired by QUIC, the Google protocol that Filippo mentioned earlier, as well as a paper called OPTLS. And it had lots of different modes, semi-static Diffie-Hellman, and this tree-based key schedule. And over time, this was whittled down from this complicated diagram to what we have now in TLS 1.3, which is a very simple derivation algorithm.
And this took a lot of work, to get from something big to something small, but it happened. Other things that happened in TLS 1.3 are sort of less substantial cryptographically, and that involves naming. So if anybody's been following along, TLS 1.3 is not necessarily the unanimous choice for the name of this protocol. As Filippo mentioned, 1.0, 1.1, 1.2 are pretty small iterations, even on SSL v3, whereas TLS 1.3 is quite a big change. So there are a lot of options for names. So I guess let's have a show of hands. Who here thinks it should be called 1.3? Thanks, Filippo. Yeah, a pretty good number. How about TLS 2? Anybody? Oh, well, that actually looks like more than the original one. Yeah, remember that SSL v2 is a thing, and it's a terrible thing; you don't want this confused with it. So how about TLS 4? Yeah, still a significant number of people. How about TLS 2017? Yeah, all right. TLS 7, anybody? Okay. TLS Millennium 2019 X? Yes. Yeah, all right. Sold. TLS Vista? Yeah. So, lots of options. But just as a reminder, the rest of the world doesn't really call it TLS. It's still called, and this is Google Trends, interest over time, searching for SSL versus TLS: SSL is really what most of the world calls this protocol. So, SSL has a highest version of version three, and that's kind of the reason why people thought TLS 4 was a good idea: oh, people are confused, three is higher than 1.2, yada, yada, yada. In any case, this poll was not the only poll that was taken; there were some informal Twitter polls. Bacon was a good one: 52% of Ryan Hurst's poll. So, versions are a really sticky thing in TLS. For example, the versions that we have of TLS, if you look at them on the wire, actually don't match up. So, SSL 3 is 3.0, which does match up, but TLS 1.0 is 3.1, TLS 1.2 is 3.3, and originally, I think up to draft 16 of TLS 1.3, it was 3.4, just sort of bumping the minor version of TLS 1.2. Very confusing.
But after doing some internet measurement, it was determined that a lot of servers, if you send them a ClientHello with 3.4, just disconnect. So this is actually really bad: it prevents browsers from being able to safely fall back. What a server is supposed to do if it sees a version higher than 3.3 is respond with 3.3, saying, hey, this is the best I have. But it turns out a lot of them break instead. So 3.3 is in the ClientHello now, and 3.4 is negotiated in an extension. This is messy, right? But we do balance benefits versus complexity, and this is one of the cases where the benefit of not having servers fail outweighed the complexity of adding an additional mechanism. And to prevent this from happening in the future, David Benjamin proposed something called GREASE, wherein in every single piece of TLS negotiation you, as a client, are supposed to add some random values, so that servers get used to seeing things they don't recognize. The reserved values all have the form 0x?A?A, so it's all greased up. It's a real thing, and a really useful thing; it's going to be very useful in the future for preventing this sort of breakage, but it's unfortunate that it had to happen. So, we're running low on time, but we do actually get our hands dirty, and one thing the IETF really loves when developing these standards is running code. So we started at the IETF 95 hackathon, which was in April, and managed by the end of it to get Firefox to load a page from a server hosted by Cloudflare over TLS 1.3, which was a big accomplishment at the time. We used NSS, which is the security library in Firefox, and mint, a new implementation of TLS 1.3 written from scratch in Go. The result: it worked, but it was just a proof of concept. To build something more production-ready, we looked at which TLS library we were most confident modifying, which, unsurprisingly, wasn't OpenSSL.
So we opted to build 1.3 on top of crypto/tls, the TLS library in the Go standard library. The result, which we call tls-tris, is a drop-in replacement for crypto/tls, and it comes with this wonderful warning that says: do not use this, for the sake of everything that's good and just. Now, the warning used to be about everything; it's not really about security anymore, since we got the code audited, but it's still about stability. We are working on upstreaming it, which will solidify the API, and you can follow along with the upstreaming process. The Google folks were kind enough to open a branch for us to do the development in. It will definitely not make the next Go release, 1.8, but we're looking forward to actually upstreaming it. Now, even if you use Go, deploying is hard. The first time we deployed tris, the draft version was 13, and to keep supporting browsers from there, we had to support multiple draft versions at the same time, sometimes switching on obscure details, and sometimes we had to support things that were not even drafts, because browsers started to diverge. Anyway, we had test matrices that would run all our commits against all the different versions of the client libraries, to make sure we were always compatible with the browsers. These days, the clients are much more stable, and indeed you might already be using TLS 1.3 without knowing it. Chrome beta, for example, has it enabled for about 50% of users as an experiment on the Google side. And this is how our graphs looked when we first launched, when Firefox Nightly enabled it by default, and when Chrome Canary enabled it by default. These days, we're stable at around 700 requests per second carried over TLS 1.3, and on our side, we've enabled it for millions of websites on Cloudflare. And, as we said, the spec is a living document, and it's open; you can see it on GitHub.
The tris implementation is there too, even if it has the scary warning, and the blog is where we'll probably publish the follow-up research and results. Thank you very much, and if you have any questions, please come forward. I think we have some minutes. Thank you. We have plenty of time for questions. First question goes to the internet. The very first question is people asking whether the decision of handing 0-RTT off to the application developers is a wise one. Well, fair. As we said, this is definitely breaking an abstraction, so it's not on by default. If you just update and get TLS 1.3, you won't get any 0-RTT, because indeed it requires collaboration from the application. Unless an application knows what to do with it, it just cannot use it, and it still gets all the security benefits and the one-round-trip full handshake anyway. Okay, next question from microphone one. With your early testing of the protocol, have you been able to capture any hard numbers on what those performance improvements look like? One round trip. It depends how long a round trip is. Yeah, exactly. I can't give you a single number, because of course if you live in San Francisco with fast fiber it's, I don't know, three milliseconds, six; if you live in some country where EDGE is the only type of connection you get, it's probably around one second. I think our average is somewhere between 100 and 200 milliseconds, but we haven't formally collected these numbers. Okay, next question from microphone three. One remark I wanted to make is that another improvement in TLS 1.3 is that client certificates are now encrypted. The client certificates are transmitted encrypted, which is important if you think about it: a client moves around, and a dragnet surveillance entity could otherwise track clients with them.
And another remark, or a question? Yes, questions end with a question mark, so can you please keep it a little bit short? Yeah, I'll try. It might be stupid. So, the fixed Diffie-Hellman groups: wasn't that the problem with the Logjam attack? Does this help with Logjam-style attacks? Are you referencing the proposal from the banks? No, no, just in general, that you can pre-compute. Yes. So in Logjam, the problem was that one Diffie-Hellman group, the Apache default, which was 1024 bits, was shared by a lot of different servers. In TLS 1.3, the protocol is restricted to pre-defined Diffie-Hellman groups, the smallest of which is over 2,000 bits. And even with all the pre-computation in the world, against a 2,000-bit Diffie-Hellman group it's not feasible to pre-compute enough to mount any kind of attack. But yeah, that's a very good point. And since the groups are fixed, there's no way to force the protocol to use anything else that would not be as strong. Okay, thanks. Next question from microphone four. Thanks for your talk. In the abstract, you mention that another feature that had to be killed was encrypted SNI, along with 0-RTT, but that there are ways to still implement it. Can you elaborate a bit? Yep. We gave this talk internally twice, and this question came up both times. So, SNI is a small parameter that the client sends to the server to say which website it's trying to connect to. For example, Cloudflare has a lot of websites behind our machines, so you have to tell us: I actually want to connect to blog.filippo.io. Now, this is of course a privacy concern, because someone just looking at the bytes on the wire will know which specific website you want to connect to. The unfortunate thing is that it has the same problem as getting forward secrecy for the early data: you send SNI in the ClientHello, and at that time you haven't negotiated any keys yet, so you don't have anything to encrypt it with.
But if you don't send SNI in the first flight, then the server doesn't know which certificate to send, so it can't send the signature in the first flight, so you don't have keys. You would have to do two round trips, and then we'd be back at TLS 1.2. So, alas, that doesn't work with one-round-trip handshakes. That said, there are proposals in the HTTP/2 spec to allow multiplexing, and this is ongoing work. It could be possible to establish one connection to a domain and then establish another connection within the existing one, and that could potentially protect your SNI. So someone looking at the wire would think that you're going to blog.filippo.io, but then, once you open the connection, you'd be able to ask HTTP/2 to also serve you the other website over it. Thanks. Okay, next question, microphone seven. No, actually five, sorry. You mentioned that there was formal verification of TLS 1.3. What software was used to do the formal verification? So, there were several software implementations and tools. Let's see if I can go back. Tamarin is a piece of software developed by Cas Cremers and others at Oxford and Royal Holloway. miTLS is in F#, I believe; that's by INRIA. And nqsb-TLS is in OCaml. So several different languages were used to develop these, and I believe the authors of nqsb are here. Okay, next question, microphone eight. Hi, thank you for your informative presentation. SSL and TLS history is riddled with what-could-possibly-go-wrong ideas and moments that eventually bit us in the ass. So I guess my question is, taking into account that there are a lot of smaller organizations and smaller hosting companies that will probably get this 0-RTT thing wrong: your gut feeling, how large is the chance that this will indeed bite us in the ass soon? Thank you. Okay. So, as I said, I'm actually vaguely skeptical about the impact on HTTP, because browsers can be made to replay requests already.
And we've seen a paper slash blog post about it, but no one actually went out and proved that it broke a huge percentage of the internet. To be honest, I don't know how to answer how badly we will be bitten by it. But remember that on the other side of the balance is how many people still say they won't deploy TLS because it's slow. Now, no: there's 0-RTT, TLS is fast, go out and encrypt everything. Those are the two concerns you have to balance. Again, my personal opinion is worth pretty little; this was a decision made by the entire community on the mailing list, and I can assure you that everyone has been really conservative about everything, thinking even about whether the name would mislead people. I can't predict the future; I can only say that I hope we made the best choices to make as much of the web as secure as we can. Next question is from the internet. Signal angel, do we have another question from the internet? Yes, we do. What are the major implementation incompatibilities that were found, now that the actual spec is fairly close? I'll repeat the question: what were the major implementation incompatibility issues found during the draft period? Some of the ones that had version intolerance were mostly, I think, middleboxes and firewalls. There were also some very large sites; I think PayPal was one of them. Although during the process we had incompatibilities for all kinds of reasons, including one of the two developers mistyping a version number. So during the drafts, compatibility sometimes broke, but there was a lot of collaboration between the client implementations and the server implementations on our side. So I'm pretty happy to say that the actual 1.3 implementations had a lot of interoperability testing, and all the issues were pretty quickly killed.
Okay, next question from microphone number one. I have two quick questions concerning session resumption. First, if you store some data on a server from a session, wouldn't that be some kind of super-cookie? Isn't that a little dangerous for privacy? And the second question: what about DNS load balancers, or other setups with huge numbers of servers, where your request goes to a different server every time? Okay, so these are both about deploying session tickets effectively. TLS 1.3 does think about the privacy concerns of session tickets, and indeed it allows the server to send multiple session tickets. The server will still know which client is connecting if it wants to, but anyone looking at the connection will not be able to link it back to the original connection, since the tickets are sent encrypted, unlike in 1.2, and there can be many of them. That's the best you can do, because if the server and the client have to reuse some shared knowledge, the server has to learn who it was. But session tickets in 1.3 can't be tracked by a passive observer, by a third party. As for load balancing, there's an interesting paper about deploying session tickets, but the gist is that you probably want to figure out how clients roam between your servers and strike a balance between sharing the session ticket keys, which makes resumption more effective, and not sharing them, which makes it harder for an attacker to acquire them all. You might want to share them, I don't know, per geographic location or within a single rack; it's really up to the deployment. Okay, the final question goes to microphone three. Yes, I have a question regarding the GREASE mechanism implemented on the client side. If I understood it correctly, you insert random version numbers of non-existing TLS or SSL versions, and that way train the servers to conform to the specification. What is the result of the real-world tests?
How many servers are actually broken by this? You would expect none, because after all, they're all implementing 1.3 now, so all the clients they'd see would already be doing GREASE. Instead, just as Google enabled GREASE, it broke, I'm not sure, so I won't say which specific server implementation, but one of the minor server implementations was immediately detected as intolerant. I think it was the Haskell one; I don't remember the name, and I can't read Haskell, so I don't know exactly what they were doing, but they were terminating connections because of GREASE. And just as a note, GREASE is also used in cipher suite negotiation and anything else that is a negotiation in TLS 1.3. So this did break a subset of servers, but a small enough subset that people were happy with it. Okay, thanks. Two percent is too high, yeah. Thank you very much. Thank you.