All right, good afternoon. Today I'll be talking about the standardisation process for the Transport Layer Security protocol, or TLS. Specifically, I'll be comparing the standardisation process for TLS 1.2 and below with the standardisation process for TLS 1.3. This is joint work with Kenny Paterson, and it was born out of conversations we'd been having about the nature of the development process for TLS 1.3. We wanted to examine why the Internet Engineering Task Force, the IETF, has moved away from a rather reactive standardisation process towards a very proactive one for TLS 1.3. Now, TLS is the de facto means for securing communications on the web. It's used by millions, if not billions, of users on a daily basis. Weaknesses identified in TLS 1.2 and below, coupled with pressure to improve the protocol's efficiency, led the IETF to start developing the next version of the protocol, TLS 1.3. In developing 1.3, the IETF has adopted an analysis-prior-to-deployment design philosophy. This is in sharp contrast to what happened with TLS 1.2 and below. Previously, a standard would be published, an attack would be announced, and the TLS working group would respond by either updating the next version of the protocol or by releasing a patch, which in the language of the IETF is an extension. And we wanted to ask why. What has changed in the TLS design landscape to enable this shift in the standardisation process? We wanted to think about whether this newer process has been effective, how it relates to the broader realm of standardisation, and what standardisation model actually best fits critical protocols such as TLS. Now, TLS started life as the Secure Sockets Layer protocol, SSL, developed by Netscape Communications. SSL version 1 was never publicly released. And in 1999, the IETF standardised a version of SSL as the Transport Layer Security protocol, TLS.
And work on TLS 1.3 started in the spring of 2014, basically because this happened. And red means attack. So TLS was definitely due an overhaul. Now, I'm not going to talk through the differences between TLS 1.2 and TLS 1.3, but basically 1.3 looks like a rather different protocol, with mechanisms to address the previously known weaknesses as well as to reduce latency. Now, the TLS standards are worked on by the TLS working group and are published as Request for Comments documents, RFCs. These are publicly available and free of charge. Input to these standards comes from the TLS mailing list, which is indeed very, very active, and also from face-to-face meetings held at IETF gatherings throughout the year. Now, the IETF subscribes to the open model of standards development. There's no barrier to entry: anyone can contribute to the standard. There's also no financial barrier to adoption because, as I've mentioned, the standards are free of charge. Now, the process for TLS 1.2 and below was very reactive. Following the announcement of an attack, the TLS working group would respond by either updating the next version of the protocol or by releasing an extension. We call this the design-release-break-patch cycle of standards development. And we'll look at some evidence of this being the case by examining some high-profile attacks against TLS 1.2 and below. Now, in 1998, Daniel Bleichenbacher released an attack against RSA encryption when the PKCS #1 v1.5 encoding scheme was used. This allowed an attacker to exploit a padding oracle and uncover the pre-master secret for TLS, and hence the connection keys. This vulnerability was briefly addressed in the TLS 1.0 standard: there's a two-paragraph note that includes a mechanism to help remove the padding oracle. This fix was repeated in TLS 1.1 and 1.2. But unfortunately, this attack has been re-enabled in a number of forms in subsequent years.
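The padding-oracle attack rests on a textbook property of RSA: ciphertexts are malleable, so an attacker can submit related ciphertexts and learn something about the plaintext from whether the server's padding check passes. A minimal sketch of that malleability, using toy textbook-RSA numbers (the values here are purely illustrative, not from any TLS specification):

```python
# Toy textbook RSA key -- far too small for real use, demo only.
p, q = 61, 53
n = p * q                      # modulus, 3233
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

m = 42                         # stand-in for a padded pre-master secret
c = pow(m, e, n)               # ciphertext

# Malleability: multiplying the ciphertext by s^e mod n multiplies the
# underlying plaintext by s. Bleichenbacher's attack submits many such
# modified ciphertexts and watches whether the padding check passes,
# narrowing down m step by step.
s = 7
c_mod = (c * pow(s, e, n)) % n
m_mod = pow(c_mod, d, n)

assert m_mod == (m * s) % n
print("Dec(c * s^e) == m * s mod n:", m_mod == (m * s) % n)
```

The attack itself then only needs the server to reveal, for each modified ciphertext, whether the resulting plaintext is validly padded; the TLS 1.0 countermeasure works by making that signal invisible rather than by removing the malleability.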
So the question to ask is, would it not have been more prudent to switch to PKCS #1 v2.1, which was available in 2002, before the release of TLS 1.1 and 1.2? As cited in the 1.1 and 1.2 specs, this was not done in order to maintain backwards compatibility with TLS 1.0. Now, in 2011 we have the BEAST attack, and we see the TLS attack floodgates open. BEAST took a known vulnerability and made it practical. The chained-IV vulnerability had been known for a while, and the BEAST authors injected some clever JavaScript into a user's browser and were able to recover secure cookies. What's interesting in this case is that TLS 1.1 removes this vulnerability. But in this vast, crazy TLS ecosystem, implementations are very slow to update, and TLS 1.0 is still very, very popular today. And then we have the RC4 attacks. RC4 was recommended as a countermeasure to BEAST, but the RC4 keystream has long been known to be biased, and researchers started to mount increasingly practical attacks against RC4 when used in TLS. RC4 was eventually deprecated for use in TLS in 2015, but could it not have been phased out sooner, especially given the existence of viable alternatives such as AES-GCM? Now, I will say that it wasn't all bad for TLS during these seemingly dark years. There were some positive results for the protocol, but I've focused on the attacks in order to highlight the working group's response to them. If you haven't been listening to me, that's okay: I have summarised some of the information in this table. If you look at the middle two columns, you can really see evidence of the working group either updating the next version of the protocol or releasing an extension in order to address an attack. Now, why was the process like this? Maybe it had to be like this for TLS 1.2 and below, and there are a number of reasons. As we've mentioned, backwards compatibility was very important.
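The keystream biases behind the RC4 attacks are easy to observe directly. As a sketch, the snippet below implements plain RC4 and measures the well-known Mantin-Shamir bias: the second keystream byte is zero with probability roughly 2/256 rather than the uniform 1/256. The trial count is an arbitrary choice for the demo:

```python
import os

def rc4_keystream(key, nbytes):
    # Standard RC4: key-scheduling algorithm (KSA) ...
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # ... followed by the pseudo-random generation algorithm (PRGA).
    i = j = 0
    out = []
    for _ in range(nbytes):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Mantin-Shamir bias: Pr[second keystream byte == 0] is about 2/256,
# twice what a uniform keystream would give.
trials = 50000
hits = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(f"Pr[Z2 = 0] ~ {hits / trials:.4f} (uniform would be {1 / 256:.4f})")
assert hits / trials > 1.5 / 256   # clearly above the uniform rate
```

The TLS attacks on RC4 exploit exactly this kind of single-byte and multi-byte bias, statistically recovering repeated plaintext (such as cookies) from many encryptions.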
And the wide deployment of TLS, and the time lags in adopting new versions and upgrading implementations, can hinder making meaningful change. Also, protocol analysis tools weren't really mature prior to the release of TLS 1.2 in 2008, so they couldn't really be used in an ongoing standardisation process. And, arguably, there was a lack of interaction between the TLS working group and the academic community, and reward for the academic community came from finding and publishing attacks on already deployed standards. Unfortunately, this incentive model leaves users vulnerable to attack and imposes a patch action on the TLS working group. Now, the process for 1.3 has been vastly different. It's been very collaborative and very proactive. Working closely with the academic community, the TLS working group has released multiple drafts and has welcomed analyses of those drafts prior to the protocol's official release. We call this the design-break-fix-release cycle of standards development. Now, in the first few drafts, we see fixes incorporated to remove the known weaknesses. Around draft five, we see some academic analyses of TLS 1.3, and these really start to help guide, and confirm, the working group's design decisions for TLS 1.3. From draft seven onwards, we see a radical shift away from TLS 1.2, and the protocol becomes highly influenced by the OPTLS proposal of Krawczyk and Wee. We see the removal of static Diffie-Hellman and static RSA and a reliance on ephemeral Diffie-Hellman, and we see the use of secure, well-known, well-analysed primitives. Krawczyk and Wee provided a full analysis of their protocol, and in that work they also try to address and cover the many use cases and the many modes needed for TLS 1.3.
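To illustrate the design shift, here is a minimal sketch of an ephemeral Diffie-Hellman exchange of the kind TLS 1.3 relies on. The group parameters are toy values chosen for the demo (real deployments use standardised finite-field groups or elliptic curves); the point is that each side's exponent is fresh per connection and discarded afterwards, so a later key compromise does not expose past traffic, which static RSA key transport cannot offer:

```python
import secrets

# Toy group: p = 2**127 - 1 is a Mersenne prime, fine for a demo but far
# too small for real use; g is an arbitrary generator choice.
p = 2**127 - 1
g = 3

# Each side picks a fresh ("ephemeral") secret exponent per connection.
a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret

A = pow(g, a, p)   # client sends A to server
B = pow(g, b, p)   # server sends B to client

# Both sides derive the same shared secret; a and b are then thrown away,
# which is what provides forward secrecy.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
print("shared secret established:", client_secret == server_secret)
```

In TLS 1.3 itself the shared secret then feeds a key schedule (HKDF) rather than being used directly, but the ephemeral-exchange core is as above.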
And so the working group draws inspiration from these secure designs and acknowledges the research community's needs, particularly when it comes to analysing the protocol. Now, by draft 10, a lot of the cryptographic mechanisms have arguably started to stabilise. Cremers et al. performed a symbolic analysis of TLS 1.3, and they found a potential attack against the newly proposed post-handshake client authentication mechanism. I was fortunate enough to be involved in this work, and when we posted the attack to the mailing list, we got some really lovely and positive responses. This is one of them: "Thanks for posting this. It's great to see people doing real formal analysis of the TLS 1.3 draft. This is really helpful in guiding the design." So we see the academic community's efforts really starting to be recognised prior to the release of the protocol. Now, the IETF also hosted a TLS Ready or Not workshop, the TRON workshop. The aim of this workshop was to bring those who implement TLS face-to-face with those who analyse TLS, and it showcased some great work in the areas of provable security, formal methods, and program verification. But what fell out of this workshop was a call for contributions, from implementers and analysts alike, towards a full set of requirements for TLS 1.3. Now, it's quite strange that this happened so late in the game, but nevertheless the workshop showcases the huge amount of back and forth that is now happening between the TLS working group and the research community. So what's changed? What's enabled this highly collaborative, more proactive process? Well, protocol analysis tools have definitely matured, in the areas of primitives, modelling, and automated tools. And because these tools are now mature, they can be used in an ongoing design process.
In the area of impact, involvement, and incentives, we see the working group relying on secure, well-analysed primitives and responding to the research community's needs. We also see the research community really starting to appreciate the complexity of the many use cases for TLS 1.3. So I think implementers and researchers are really starting to understand each other a bit better. Now, I think this process has been successful, but we asked, could we do better? Because, as we know, many cooks in the kitchen will bring conflict. Also, the TLS 1.3 standard has been a rapidly moving target, and analyses have become very easily and very quickly outdated. And, as we've mentioned, a full set of requirements was missing at the outset. So maybe an alternative cycle for standards development is requirements analysis, a few iterations of design-improve, and then release. Perhaps a cycle like this is naive, maybe a bit unrealistic, because for complicated protocols like TLS, your design requirements may only emerge during the design phase, so it's not that easy to pin them down at the beginning of the process. Now, as I said at the beginning, we wanted to relate this to the broader realm of standardisation and compare the process to what is done by other standardisation bodies, such as ISO and NIST. And when I say NIST here, I mean the competition model for primitives such as AES, SHA-3, and the more recent post-quantum competition. We also asked, is this newer, collaborative process unique to TLS, because of its star power, maybe? That's actually quite a difficult question to answer. But we also wanted to think about what standardisation route best fits critical protocols such as TLS. Now, I'm not going to run through this whole table, but what you might notice is an overlap between the TLS 1.3 process and the NIST competition model. What's really nice in the NIST competition model is that requirements are defined upfront.
However, all of the work is done by small teams submitting proposals, and TLS is a big, complicated protocol. So is it realistic for a small number of individuals to design the whole protocol? The NIST competitions also focus largely on primitives, and TLS, as I've just said, is a big, complicated protocol. But that's not to say that a competition of some sort wouldn't have worked for TLS 1.3. Now, just to close: I think the newer process has been successful. I think it's allowed for preemptive decision making and will hopefully produce a stronger protocol for users, one that requires less patching. I think it's been enabled by the availability of better analysis tools and a greater amount of interaction between the TLS working group and the research community. Could the process have been improved? Possibly. Maybe starting with a full set of requirements would have been good, so maybe running it as a competition could have worked for TLS. But I think the main takeaway is that the greater degree of collaboration between the many different branches of the crypto community is helping, and will continue to help, to build stronger protocols. Thank you. We have time for a question. So I have a question about the historical pattern of TLS development. I think there's a question which is ignored, which is: how did awareness of these attacks lead to fixes? Because certainly if you look back, there's a text file about the CBC attacks in the OpenSSL directory from the 2005 era which lays out very clearly what the attacks are on the mode that TLS was using. And then in IPsec there had also been a competition, and there were emails from Phil Rogaway explaining how certain modes need to be combined in certain ways. And I think the question is, what can we say about that history? Is it really a lack of engagement from academics?
Or is it the way in which the standards process ignored the input, ignored the body of knowledge that was there, and it's just a matter of diffusion that today means more people are aware of these issues? I think it's probably a bit of a combination of both. I think maybe there was resistance within the academic community to getting involved early, I don't know. I also think that maybe it was easier to try to keep the TLS code base as similar to previous versions as possible, so maybe that also contributed to the historical pattern. Hi, Thyla. The original mantra of the IETF was rough consensus and running code, and the new TLS approach is almost diametrically opposite to that. So two questions, I guess, that raises. The first one is: how much resistance has there been from the rank-and-file IETF community to doing a requirements-based, pre-release iterative model versus what they're used to, which is get the code out there and break it? I mean, that really is what the model used to be. So have you seen a lot of resistance? Well, I think the fact that this newer process has been deployed shows that there hasn't been that much resistance to trying a new way of doing things. But I also think that within the TLS community the feeling about getting requirements early on is mixed, or split at least, because some people feel that it's very difficult to tease out the requirements before you start the design process, and other people would really like to start with what we have to do and then design from there. Yeah, I think that's correct. The side effect of the philosophy in the past is that SSL 2 and 3 and TLS 1.0, 1.1, and 1.2 rolled out relatively quickly, and it seems to me that TLS 1.3 is rolling out very slowly. I mean, for one thing, it hasn't stabilised yet. But when is the end?
When are we going to see an actual standard called TLS 1.3 that we can start implementing to? So my answer to that question for the past year has been: soon. Christmas. I think there's a big focus within the working group to really get it right. And there are many different people contributing to TLS 1.3, and that brings a lot of different aspects that need to be handled and a lot of different opinions that need to be consolidated. Thank you. Hi, very nice talk. Your cycle doesn't seem to allow the possibility of break-and-fix after release. Don't you think the standard should already anticipate that more bugs and more vulnerabilities will be found, even in this setting, and already plan how to react to them? That's a good point. And yes, I think if you do the cycle where you break and fix before deployment, hopefully you'll have to break and fix less afterwards. But yes, you are going to have to run through that process, because things may be missed. Absolutely. So let's thank Thyla again.