is giving the presentation. Hi, oh, yes, now it works. Okay, hi everyone. Thank you very much for the introduction. I'm happy to tell you today about our work on TLS 1.3 PSK. So as most of you probably already know, Transport Layer Security is one of the most important security mechanisms on the internet, protecting billions of HTTPS connections every day. And its newest version, TLS 1.3, was released in 2018 and already sees widespread adoption; namely, according to F5 Labs, it was the most preferred version among the top one million servers at the end of last year. So structurally, TLS is composed of two protocols. First there's the handshake protocol, which is basically the authenticated key exchange of TLS, negotiating a session key that is then used in the record layer to actually protect the application data using an authenticated encryption scheme such as AES-GCM. In this talk, I will solely focus on the handshake protocol. So the handshake of TLS comes in basically three variants. First there is the full handshake, which uses public key certificates for authentication, and this is the variant that probably most of you associate with the TLS handshake. Then there is the PSK mode that is mainly used for session resumption; it allows for a more efficient, abbreviated handshake through the client and server sharing a symmetric key beforehand, and it even allows the client to send early, zero-RTT application data to the server. In addition to PSK-only, there is a PSK-ECDHE variant of the handshake that adds an additional Diffie-Hellman key exchange to provide forward secrecy for the session keys. We will only focus on the PSK handshakes today. So now if we want to prove a cryptosystem secure, what we usually do is reduce the security of the cryptosystem to some computational hardness assumption, for example DDH. And classically this reduction is considered asymptotically.
This means that the security proof then gives us the guarantee that there exist sufficiently large parameters, for example the size of the Diffie-Hellman group, such that our scheme is secure in some well-defined security model. But if we now look at real-world cryptosystems, where we usually rely on standardized parameters, these asymptotic results do not really give meaningful guarantees for a specific standardized instance of that cryptosystem. And here the concrete security approach comes into play, making the bounds for running time and success probability explicit. This then results in bounds of the form we see here on the slide: namely, the advantage of some fixed adversary A against our cryptosystem is bounded by the advantage of our reduction against the computational hardness assumption times some loss function L, which might or might not depend on the adversary. This relation then allows us either to choose parameters for our cryptosystem for a desired security level that are backed up by the security proof, or even to check whether certain parameters achieve the desired security level. Now, if we have such a relationship, then we say that the security proof, or the reduction, is tight if the loss function L is independent of the adversary; that means, for example, a small constant. The question now is: how tightly secure is TLS? TLS 1.3 was the first version that was developed in close collaboration between academia and industry, so there were already many analyses during the standardization process. I would like to focus today on the result by Dowling et al., as it's the most complete computational analysis that we have at the moment. And since we are focusing on the PSK modes, let's have a look at their result on PSK-ECDHE. So generally, their security proof reduces the security of TLS PSK to its building blocks.
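Written out, the bound on the slide has the following shape (generic notation sketching the form of the statement, not the paper's exact theorem):

```latex
\mathrm{Adv}_{\mathrm{TLS}}(\mathcal{A})
  \;\le\;
  L(\mathcal{A}) \cdot \mathrm{Adv}_{\mathrm{assumption}}(\mathcal{B}_{\mathcal{A}})
```

Here $\mathcal{B}_{\mathcal{A}}$ is the reduction built from the adversary $\mathcal{A}$, and the reduction is called tight when $L$ does not depend on $\mathcal{A}$, e.g. $L = c$ for a small constant $c$.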
And these building blocks consist of a Diffie-Hellman group G, with a couple of finite field and elliptic curve options, and a hash function H, which is either SHA-256 or SHA-384. Different combinations of these then result in different levels of security. Here you see, for example, a 128-bit security configuration or a 192-bit security configuration. And if we now look at the bound that is given by Dowling et al., in a very simplified way, you see that first, they reduce the security of TLS PSK to its building blocks. And secondly, and most importantly, you see that this bound is highly non-tight due to the quadratic factor that the reduction to the security of the group loses in the number of sessions that the adversary interacts with. So the question now is whether this actually is a problem in practice. Let's have a look at some concrete numbers. Here you see four different adversaries using different amounts of resources, three of which are against P-256, meaning we aim for 128-bit security, and one against P-384, meaning we aim for 192-bit security. And as the target security, we require the advantage of the adversary to be at most its running time divided by 2 to the desired bit-security level, which you see here in blue on the slide. If we now compute the concrete values for the bound by Dowling et al., we get the following: the yellow bars represent the advantage bound for each of these respective adversaries. And since every one of these bars crosses the blue line, none of the configurations actually achieves the target security. Another thing I would like to highlight: if you look at the line at the top that is highlighted in blue now, whenever the yellow bar crosses this line, the bound on the probability of the adversary breaking TLS is actually one or more. That means, from a concrete security perspective, the bound doesn't give any meaningful guarantee in this case.
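The effect of the quadratic loss can be made concrete with a small back-of-the-envelope calculation (the session count, running time, and constant loss factor are hypothetical round numbers for illustration, not the exact figures from the slide):

```python
# Hypothetical adversary resources (illustrative, not the slide's figures).
time_steps = 2.0 ** 40    # adversary running time t
sessions   = 2.0 ** 30    # number of protocol sessions it interacts with

# Advantage of the underlying DDH/strong-DH solver for a group of order
# ~2^256 such as P-256 (generic attacks need ~2^128 work).
group_adv = 2.0 ** -128

# Target for "128-bit security": advantage at most t / 2^128.
target = time_steps / 2.0 ** 128

loose_bound = sessions ** 2 * group_adv   # quadratic loss in #sessions
tight_bound = 4 * group_adv               # small constant loss (constant is illustrative)

print(loose_bound <= target)  # the non-tight bound misses the target
print(tight_bound <= target)  # the tight bound meets it comfortably
```

With these numbers the non-tight bound gives at most 2^-68 while the target is 2^-88, so the proof fails to certify the configuration even though no actual attack is known.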
So the question now is whether the parameters for TLS are not chosen correctly, or whether the bound is simply too loose and does not draw the right picture. And this is what I would like to talk about next. So we, the authors of this paper, were independently able to give tighter bounds for the full handshake in prior work. However, both of our works have a couple of limitations. The first one being that both of us make assumptions about the key schedule, the key derivation procedure of TLS, which I will come to in the remainder of this talk. And secondly, the signatures that are used in the full handshake for authentication have to be multi-user secure under adaptive corruptions. Unfortunately, none of the standardized signature schemes for TLS satisfies this in a tight way. That means we always have an implicit linear loss that seems, at the moment, unavoidable. For TLS PSK, we are now able to give fully tight bounds in this work, mostly because we no longer have the bottleneck of the signatures. The only exception is that we are not able to give a bound for PSK-only with SHA-384, which I will come to briefly at the end of this talk. So let me briefly compare the bound by Dowling et al. with our bound. Our bound is tight, with a constant loss, and doing the math, we get the values for our bound that you see here in purple. The first thing you should notice is that for each of these configurations, our bounds easily achieve the target security. And comparing the yellow and the purple bars, we even see that there is a difference of up to 128 bits. One thing I would like to highlight: one might argue that we only show favorable numbers here, but actually the majority of the configurations we looked at draw a similar picture. So if you're interested in more details on the numbers, I would be happy if you consider reading our paper.
So consequently, we have seen that the parameters chosen for TLS are actually justified, but the prior proofs were not able to draw the right picture here. So the question now is: why is this the case? The full handshake and the PSK-ECDHE handshake are both, at their core, basically a plain Diffie-Hellman key exchange. And for security, that means secrecy of our session key, we want that our adversary, who only sees the key shares g^x and g^y, does not learn anything about our session key. This is usually captured by indistinguishability from random. To prove this, one would reduce the key secrecy of the protocol to the DDH assumption by embedding the DDH challenge into our handshake. And this works as follows: we basically take g^a as the key share of our client and g^b as the key share of our server, and then take our Diffie-Hellman challenge in place of the Diffie-Hellman key. So if we now compute the session key, then we either get the real key or a random key. However, in reality, the problem is that there are many, many sessions in parallel. So the question is where we actually embed our DDH challenge, because we only have one. The simplest and most obvious solution is to simply guess a client session and a server session and do exactly what I just told you. However, this induces the quadratic loss in the number of sessions that we have already seen. So what do we do? Fortunately, Cohn-Gordon et al., at Crypto 2019, proposed a technique to prove simple DH-like protocols more tightly secure. And here the reduction works as follows. First we embed a rerandomization of g^a in every one of our client sessions and a rerandomization of g^b in all of our server sessions. And then we model the key derivation of the session key as a random oracle.
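This rerandomized embedding can be sketched in a toy group (deliberately tiny parameters for illustration; real TLS uses elliptic curves such as P-256, and this is a sketch of the idea, not the paper's reduction):

```python
import secrets

# Toy group: the order-q subgroup of Z_p^* with generator g.
p, q = 1019, 509          # p prime, q = (p - 1) / 2 prime
g = 4                     # generator of the order-q subgroup

# The reduction receives ONE challenge (A, B) = (g^a, g^b); a and b
# are unknown to it.
a, b = secrets.randbelow(q), secrets.randbelow(q)
A, B = pow(g, a, p), pow(g, b, p)

# Embedding: every client session uses a fresh rerandomization of A,
# every server session a fresh rerandomization of B. The randomizers
# r, s are chosen by (and thus known to) the reduction.
r, s = secrets.randbelow(q), secrets.randbelow(q)
A_i = (A * pow(g, r, p)) % p      # = g^(a + r), one client session's share
B_j = (B * pow(g, s, p)) % p      # = g^(b + s), one server session's share

# If the adversary queries the random oracle on this session's true
# Diffie-Hellman key Z = g^((a+r)(b+s)), the reduction strips off the
# randomization, since (a+r)(b+s) = ab + as + br + rs:
Z = pow(g, ((a + r) * (b + s)) % q, p)               # the queried value
mask = (pow(A, s, p) * pow(B, r, p) * pow(g, (r * s) % q, p)) % p
g_ab = (Z * pow(mask, -1, p)) % p                    # recovers g^(ab)

assert g_ab == pow(g, (a * b) % q, p)
```

Because every session carries a rerandomized copy of the same challenge, the reduction never has to guess in advance which session the adversary will attack.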
And here it is crucial for this technique to work that the key derivation function not only gets the Diffie-Hellman key g^xy, but also the context g^x, g^y that was used to derive the Diffie-Hellman key. Because then we can switch from a reduction to DDH to a reduction to strong Diffie-Hellman, which is basically just computational Diffie-Hellman with a DDH oracle. The interesting thing to notice here is that the adversary can only learn something about the session key if it makes the correct random oracle query. And we can use our DDH oracle by simply observing all of the random oracle queries the adversary makes and checking whether there is a correct query among them. If this is the case, we can use it to solve the strong Diffie-Hellman challenge. And this allows us to simulate the whole protocol in the reduction without committing to one session to embed our challenge. So a natural question that arises now is whether this reduction idea actually is a template for a tight proof for TLS 1.3. Let's have a look at this. Here you see the TLS 1.3 key schedule, which is basically the key derivation procedure of TLS, and which is quite complex. It uses a number of HKDF Extract and Expand calls to derive a number of keys, but the details are not important right now. The important thing I would like to highlight is that the Diffie-Hellman key g^xy enters here above as DHE, and the context g^x and g^y enters here in these function calls. So the important thing to observe is that they don't enter in the very same call. This means that the Cohn-Gordon et al. technique is not directly applicable here. But I already told you that there were tighter analyses for the full handshake. So what did they do? The first solution, by Davis and Günther, is more or less the natural one, because they just took the subroutines HKDF Extract and Expand and modeled them as independent random oracles.
To overcome the fact that the context is separated, as I told you before, they use careful bookkeeping to keep track of the separation. However, there's a problem: because HKDF Extract and Expand both rely on HMAC using the very same hash function, they clearly are not independent. The second solution, by Tibor Jager and myself, is that we made the assumption that every major key derivation can be modeled as a random oracle. This has the advantage that the proof becomes more direct, because we can directly apply the Cohn-Gordon et al. technique. However, we only assumed that this actually is true and did not formally justify it. And for a similar reason, namely that all of these subroutines use the same hash function, it is also not inherently clear that this actually is true. So the bottom line is that neither of these solutions captures that, actually, inside all of these boxes there is the same hash function. So modeling these as independent random oracles is a bit fishy. And this isn't even the complete picture, because TLS also uses the hash function to hash transcripts and to compute MACs during the handshake. So it's even worse than it already looked. So let me briefly show you how we address this in this work. We use a modularization based on the indifferentiability framework by Maurer et al. And intuitively, with this framework we were able to show that each of the key derivations of the TLS key schedule behaves like a random oracle under the assumption that TLS's hash function is a random oracle. This gives us the tool to both capture that TLS uses only one hash function, and also to apply the Cohn-Gordon et al. technique directly in our proof. To then prove that TLS PSK is secure, we basically split up the proof into two parts. First we show that TLS PSK is secure when we assume that every key derivation behaves like a random oracle and TLS's hash function is a random oracle.
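The point that Extract and Expand are both just HMAC over the same hash function can be seen directly from their definitions in RFC 5869; a minimal sketch using Python's standard library:

```python
import hmac

def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha256") -> bytes:
    # HKDF-Extract(salt, IKM) = HMAC-Hash(salt, IKM)          (RFC 5869)
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int,
                hash_name: str = "sha256") -> bytes:
    # HKDF-Expand(PRK, info, L): T(i) = HMAC-Hash(PRK, T(i-1) || info || i)
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([i]), hash_name).digest()
        out += block
        i += 1
    return out[:length]

# Both calls bottom out in HMAC over the SAME hash function, so modeling
# Extract and Expand as *independent* random oracles is not obviously sound.
prk = hkdf_extract(b"\x00" * 32, b"input key material")
okm = hkdf_expand(prk, b"example context", 32)
print(okm.hex())
```

Every box in the key schedule ultimately makes calls of exactly this shape, which is why the independence assumption needs a formal justification.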
And secondly, we show that this abstraction of the key schedule as independent random oracles is actually indifferentiable from the actual key schedule that is defined in the standard. So let me briefly show you how we abstract the key schedule. We start with the hash function H, so either SHA-256 or SHA-384, and we assume that this is a random oracle. From this, we split the random oracle representing the hash function up into two random oracles, one for each purpose of the hash function: hashing transcripts, and being used as a subroutine in HMAC or in a component like HKDF. From this, we can rely on a result by Dodis et al., which basically says that if HMAC is instantiated with a random oracle, then it behaves like a random oracle itself. Okay, so now we have a transcript random oracle and an HMAC random oracle, and having HMAC abstracted as a random oracle itself, we were able to argue that each of the key derivations that happens in the TLS key schedule behaves like an independent random oracle. We therefore use a similar bookkeeping technique as Davis and Günther already used in their proof in order to apply the Cohn-Gordon et al. technique. Unfortunately, one step, namely from one random oracle to two random oracles, does not work in general. And this is what I would like to talk about in the last part of the talk. So here you see our abstraction, which is somewhat simplified, but I hope it conveys the idea: namely, we introduce a function for every key that is derived in the key schedule, which ultimately will be a random oracle. But why is this actually possible? The step from the HMAC random oracle to the 11 session key random oracles relies on the fact that the TLS standard uses explicit domain separation using labels, which ultimately allows us to separate each of these key derivations, in combination with the bookkeeping technique by Davis and Günther, to basically keep track of each of these branches.
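The explicit label-based domain separation comes from TLS 1.3's HKDF-Expand-Label construction (RFC 8446, Section 7.1); a sketch of it, restricted to single-block outputs, which suffices for all TLS 1.3 secrets:

```python
import hmac

def hkdf_expand_label(secret: bytes, label: str,
                      context: bytes, length: int) -> bytes:
    # HkdfLabel structure from RFC 8446, Section 7.1:
    #   uint16 length; opaque label<7..255> = "tls13 " + Label;
    #   opaque context<0..255>
    full_label = b"tls13 " + label.encode("ascii")
    hkdf_label = (
        length.to_bytes(2, "big")
        + bytes([len(full_label)]) + full_label
        + bytes([len(context)]) + context
    )
    # Single-block HKDF-Expand (valid for length <= hash output size).
    return hmac.new(secret, hkdf_label + b"\x01", "sha256").digest()[:length]

secret = b"\x01" * 32
transcript_hash = b"\x02" * 32

# Distinct labels make the HMAC inputs of different derivations distinct,
# e.g. the client and server handshake traffic secrets:
c_hs = hkdf_expand_label(secret, "c hs traffic", transcript_hash, 32)
s_hs = hkdf_expand_label(secret, "s hs traffic", transcript_hash, 32)
assert c_hs != s_hs  # different labels -> separated derivations
```

Because every key derivation in the schedule carries its own label, one can tell from the HMAC input alone which branch of the key schedule is being computed.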
Unfortunately, this does not work for the step from one random oracle to two random oracles, because we don't have explicit domain separation in the TLS standard for the uses of the hash function. This means that here we need to rely on the structure of the inputs to distinguish whether an input is a transcript or whether it is an HMAC call, or an HKDF call, which is basically also just an HMAC call. TLS transcripts, for example, just consist of TLS messages, which have a prescribed structure. And this prescribed structure we could leverage, and we were kind of lucky, in almost every configuration of TLS, to separate transcripts from HMAC calls. However, it didn't work for PSK-only configured with SHA-384, which means that, unfortunately, we don't have a result for this particular configuration. To summarize, we give tight bounds for TLS 1.3 PSK and show that the parameters in practice are actually justified. We give a new abstraction of the TLS 1.3 key schedule as used in the PSK modes that allows for a less complex and more modular proof in the random oracle model. And we identified a lack of domain separation in the TLS 1.3 PSK-only handshake with SHA-384. Thank you very much for listening, and I'm happy to take any questions. Any questions for Denis? Yes. Can you give a little bit more information about why the SHA-384 case is not covered? Yeah. But before that: there is a paper by Bhargavan and others showing HKDF to be indifferentiable from a random oracle up to some corner cases. Doesn't that apply to these analyses? I'm sorry, I'm not aware of this result, but I can briefly explain what the problem is. So when we have an HMAC call, we have keys that are 32 bytes long, right? And they will be padded with zeros to the block length of the hash function, resulting in 64 bytes. So the key, and then zeros for the last 32 bytes. So after XORing with the padding constant, the bytes at the end of this block will be either 0x36 or 0x5C.
And the ClientHello starts with a version number, a random 32-byte nonce, and then the legacy session ID, which starts with a length field. And this length field can only take values from 0x00 to 0x20. So we can just check this very byte, whether it's 0x36 or 0x5C, and if that's the case, we know that it is not a transcript. But we are not able to do that for the SHA-384 case, because there we are basically in a region where there could be arbitrary bytes at this position. I'm not sure at the moment, sorry. Thank you very much. Yes, Nigel? Yeah, early on in the talk, in the non-PSK mode, the standard mode for TLS, you were saying that you couldn't get tight security because the signature scheme was not multi-user secure, or not multi-user secure with a tight enough bound. So is there any signature scheme you could drop in as a replacement? Would, like, Schnorr work better? Yeah, so there are two signature schemes that are multi-user secure under adaptive corruptions at the moment. One is by Kristian Gjøsteen and Tibor Jager, Crypto 2018. And the other one is, I think, by Tibor Jager, Kai Gellert, Lin Lyu, and myself, PKC 2021. And probably, I'm not sure, I think there was another one, but yeah, there are a couple of options out there, but none of them is, of course, standardized. Any other questions? Yeah. So just another question: is it an issue that there are no schemes with proofs? So is it clear that other schemes around fail to be multi-user secure with the...? I'm not aware of that, but there are no tight proofs for them, yeah. Exactly. And as a follow-up question: would you recommend changing the TLS standard in order for the proof to go through? Yeah, this is quite a tricky question, because we would introduce a couple more labels, and especially we would need to change HMAC in some cases. So, yeah, this sounds a bit dangerous.
So yeah, we have a solution in our paper, a proposition, a proposal, sorry, to fix this, but this is also more of a hotfix than a long-term solution. So probably... I understand. It has to be. Yeah, all right. I'm not sure, sorry. Sure. Yeah, this is it. I'm sorry. Yeah, just a follow-up again: so you're bypassing the need for multi-user security in this new proof? Yeah, so there are no signatures in TLS PSK, because authentication is done using the symmetric pre-shared key. Right, and was there a particular reason why it needed to be secure with adaptive corruptions in the previous proof? Yeah, so, I mean, otherwise you always need to guess the user for which the adversary, in the proof, basically needs to forge a signature. And this can only be circumvented if we use multi-user security, by simply being prepared in every session, right? Yeah, or for every user, sorry. All right. We're running a little bit late. Yeah. Let's thank Denis and all the speakers of this session. Thank you.
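For reference, the byte-position check discussed in the questions can be sketched as follows. The offsets are a simplified reading of the speaker's description (ignoring, e.g., the handshake message header), not the paper's exact analysis:

```python
import secrets

IPAD, OPAD = 0x36, 0x5C  # HMAC inner/outer padding constants (RFC 2104)

def hmac_first_block(key: bytes, block_len: int, pad: int) -> bytes:
    """First block fed to the hash inside HMAC: (key || zeros) XOR pad."""
    padded = key + b"\x00" * (block_len - len(key))
    return bytes(b ^ pad for b in padded)

# SHA-256: 32-byte keys, 64-byte blocks -> bytes 32..63 of the HMAC
# block are always 0x36 (or 0x5C), regardless of the key.
block256 = hmac_first_block(secrets.token_bytes(32), 64, IPAD)
assert all(b == IPAD for b in block256[32:])

# A ClientHello starts with: version (2 bytes) || random (32 bytes) ||
# session-id length (1 byte, value 0x00..0x20). So byte 34 of such a
# transcript is <= 0x20, while byte 34 of an HMAC block is 0x36/0x5C.
def looks_like_hmac_block(data: bytes) -> bool:
    return data[34] in (IPAD, OPAD)

client_hello_prefix = b"\x03\x03" + secrets.token_bytes(32) + b"\x20"
assert looks_like_hmac_block(block256)
assert not looks_like_hmac_block(client_hello_prefix)

# SHA-384: 48-byte keys, 128-byte blocks -> the constant region only
# starts at byte 48; byte 34 is key-dependent, so this position no
# longer separates HMAC inputs from transcripts.
block384 = hmac_first_block(secrets.token_bytes(48), 128, IPAD)
# block384[34] may take any value -- no separation here.
```

This is why the implicit separation works for the SHA-256 configurations but breaks down for PSK-only with SHA-384.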