Okay, thank you for the introduction. So what are session resumption protocols? Imagine you have a client and a server that want to communicate with each other and have already communicated in the past, meaning that they have, for example, established a TLS 1.3 connection, and in this connection they derived some symmetric secret alongside some identifier value. After some time the session is terminated, and later client and server want to resume it. The client can send the identifier over to the server, the server can use the identifier to retrieve the secret on its side, and they can resume their connection with the secret they derived before. One of the major innovations of TLS 1.3 is that the client is also allowed to send encrypted early data on the first flight of messages; this is also called zero-RTT (0-RTT) data. So the client does not have to wait for the server's reply but can already send encrypted payload on the first flight of messages.

There are different ways to implement this in practice, but before we discuss them, I would like to highlight one security property: forward security. The objective of forward security is to render large-scale collection of encrypted data useless. Imagine a server that serves multiple connections with several clients over time. At a certain point, an attacker might compromise the server and obtain its secret key. All future connections might then be compromised, but forward security ensures that previous sessions remain secure: an attacker who obtains the secret key should not be able to decrypt previously recorded sessions and the data that was sent over the channel. We want to look at forward security in session resumption protocols. Forward security is a standard security goal of modern protocols; it is heavily advertised by companies such as Facebook and Google, and we really would like to have it in practice.

One simple approach to implementing session resumption protocols is so-called session caches. In this case, the server just keeps a table containing identifier-secret entries for several clients. If a client resumes its connection, it sends over the identifier, and the server retrieves the secret from the table and can then decrypt and process the early data. After the session has been terminated, the server deletes the entry from the table, and with that deletion we already achieve forward security and replay protection for the early data. By replay protection, I mean that if an adversary eavesdrops on and stores all of the messages, and then replays the request to the server after the session has been terminated, the server will not process the message again, as it has already deleted the entry from the table. But unfortunately, this is not what's done in practice: if you imagine that the server serves maybe thousands of connections per second, it would have to maintain a huge state.
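To make the session-cache idea concrete, here is a minimal Python sketch (not from the paper; the class and method names are illustrative). The important point is that redeeming deletes the entry, which is exactly what gives forward security and replay protection:

```python
import os

class SessionCache:
    """Server-side table mapping identifiers to session secrets."""

    def __init__(self):
        self.table = {}  # identifier -> session secret

    def store(self, secret: bytes) -> bytes:
        identifier = os.urandom(16)    # fresh handle, sent to the client
        self.table[identifier] = secret
        return identifier

    def redeem(self, identifier: bytes):
        # pop() deletes the entry on first use: a replayed identifier
        # finds nothing (replay protection), and a later server
        # compromise reveals nothing about already-deleted sessions
        # (forward security).
        return self.table.pop(identifier, None)
```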
So in practice, a different approach is followed, called session tickets, and that's what's actually done in TLS 1.3. Again, imagine a client and a server. This time the server is in possession of a symmetric key, and note that only the server holds this key. The idea is that the server uses this key to shift the storage of the secrets from the server to the clients, by encrypting them into so-called tickets; a ticket is just a fancy name for a ciphertext in this case. The server encrypts the secret with its symmetric key and sends the ticket, or ciphertext, to the client. The client stores both the secret and the ticket, and the server deletes the secret on its side. When the client later wants to resume the session, it redeems the ticket by sending it back to the server, and it may again send encrypted early data along. The server is then able to decrypt the ticket, retrieve the secret, and then decrypt and process the early data.

So now we have solved the problem we had with session caches: the server only stores this one symmetric key and uses it for all clients, so it no longer needs a lot of memory. But unfortunately, we no longer have forward security and replay protection for the early data. If the server gets compromised and the symmetric key is leaked, an attacker can decrypt all of the tickets that went over the channel and then use the contained secrets to decrypt the early data. We can also mount replay attacks: if we replay the ticket and the encrypted early data to the server, the server has no way to recognize whether it has already processed this data. I want to remark that in practice, an additional Diffie-Hellman key exchange is performed after the session has been resumed. This re-establishes forward security and replay protection for the rest of the connection, since we then communicate with a fresh key, but it does not achieve forward security and replay protection for the early data, as we already sent payload data before waiting for a fresh answer from the server. And this seems difficult to fix at first sight, since we have a non-interactive protocol and we are already sending payload data.

This has been the main question of our paper: can we achieve forward security and replay resistance for early data when using session tickets as done in TLS 1.3? Fortunately, the answer is yes, we can, and we will utilize a primitive called puncturable pseudorandom functions (PPRFs) for that. So what are puncturable pseudorandom functions? We all know pseudorandom functions, where we have a setup algorithm that provides us with an evaluation key, and an evaluation algorithm that uses this key to evaluate the function on certain inputs and gives us an output. A puncturable pseudorandom function has one additional algorithm, the puncture algorithm, which revokes evaluation capabilities of the key. If we puncture our key at position x, it computes a new key k' that does not allow evaluation on x anymore; if you try to evaluate on x, the evaluation fails. The great thing about this primitive is that we can do this stepwise: we can puncture our key multiple times, always replacing the current key with the new key k', and thus stepwise revoke evaluation capabilities of the key.
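As a reference point for the rest of the talk, here is a minimal sketch of this three-algorithm interface in Python; the method names are my own shorthand, not the paper's formal syntax, and key generation (the setup algorithm) would live in the constructor of a concrete instantiation:

```python
from abc import ABC, abstractmethod
from typing import Optional

class PPRF(ABC):
    """A puncturable PRF: a PRF whose key can stepwise lose the
    ability to evaluate on chosen inputs."""

    @abstractmethod
    def evaluate(self, x: int) -> Optional[bytes]:
        """Return F(k, x), or None if the key has been punctured at x."""

    @abstractmethod
    def puncture(self, x: int) -> None:
        """Replace the current key k by k', which evaluates correctly
        everywhere except at x. Punctures accumulate over time."""
```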
Security is also pretty similar to the security of ordinary pseudorandom functions: you should not be able to distinguish an actual evaluation of the puncturable PRF from randomness, and additionally, I will even give you a key that is punctured on the value you have to distinguish.

Okay, so how will we use that to achieve forward security for the protocol? The main idea is that we will use the PPRF to generate ticket encryption keys. We evaluate the PPRF to compute a key and use this key to encrypt the secret within the ticket. Then, after the ticket has been redeemed, we revoke decryption capabilities by puncturing the PPRF, meaning that we cannot decrypt the ticket twice. So how would this look? What follows is a simplified version of our protocol; the full version is a little different. Again, we have a client and a server that already share a secret, and the server is in possession of a PPRF key. The server evaluates the PPRF on some identifier value, which could be a counter in practice, and obtains a key. It then uses this key to encapsulate the secret in the ticket, and sends the ticket and the identifier over to the client. The client stores the ticket, the identifier, and the secret, and the server deletes the secret. When the client redeems the ticket, it relays all of this data back, so the identifier and the ticket, and it may send encrypted early data. The server then evaluates the PPRF on the identifier again, recomputing the key. It uses the key to decrypt the ticket, retrieving the secret, and then uses the secret to decrypt and process the early data. Finally, it punctures the PPRF key on the identifier value, obtaining a new key, and replaces its key with the new one. After we have punctured the key, we are not able to re-evaluate on the identifier value, and so we achieve forward security and replay protection for our early data: even if the key leaks, an attacker is not able to recompute the ticket key.
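A rough server-side sketch of this simplified flow, assuming the PPRF interface sketched earlier and AES-GCM from the `cryptography` package (an assumed dependency; the paper's full protocol uses authenticated encryption, but this concrete choice is mine). The zero nonce is acceptable here only because each ticket key encrypts exactly one message:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ZERO_NONCE = b"\x00" * 12  # safe only because each key is used once

class TicketServer:
    def __init__(self, pprf):
        self.pprf = pprf
        self.counter = 0  # identifier source; a simple counter suffices

    def issue(self, secret: bytes):
        identifier = self.counter
        self.counter += 1
        ticket_key = self.pprf.evaluate(identifier)  # 32-byte key assumed
        ticket = AESGCM(ticket_key).encrypt(ZERO_NONCE, secret, None)
        # The client stores (identifier, ticket, secret);
        # the server deletes the secret.
        return identifier, ticket

    def redeem(self, identifier: int, ticket: bytes, early_data: bytes):
        ticket_key = self.pprf.evaluate(identifier)
        if ticket_key is None:
            return None                   # already punctured: replay rejected
        secret = AESGCM(ticket_key).decrypt(ZERO_NONCE, ticket, None)
        self.pprf.puncture(identifier)    # revoke decryption capability
        # Toy convention: the client encrypted the early data under the
        # secret with the same one-time-nonce scheme.
        return AESGCM(secret).decrypt(ZERO_NONCE, early_data, None)
```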
So now we need to discuss instantiations of this protocol. It is a generic protocol built from PPRFs, so let's discuss some PPRFs; I will present two ways of instantiating it. The first PPRF you might use is based on the Goldreich-Goldwasser-Micali (GGM) construction, which in practice only requires hash evaluations and is suitable for high-traffic scenarios where you serve thousands of tickets per second. So how does this PPRF work? Imagine a binary tree, where initially you have some random seed value, indicated by the k in the root node. The input domain is, more or less, the leaves at the bottom: input one is the leftmost leaf, input two is its neighbor, and so on. If we would like to evaluate on the third leaf, we consider the third leaf and get a distinct path from the root to that leaf, and this path tells us exactly how to compute the result. We do this by applying two pseudorandom generators, and the path indicates in which order we have to compose the generators to obtain the value. Okay, and how would puncturing work? When puncturing, we need to delete all of the values that could be used to recompute the punctured value. So we need to delete the root node in this case, and we need to compute some intermediary node values so that we can still evaluate every other leaf in the tree except the one we punctured; a sketch of evaluation and puncturing follows after this discussion.

So the key of this PPRF might grow over time, and as you can see, the growth depends on where you puncture in the tree. How will we utilize this for our protocol? The idea is that we use the ticket keys from left to right: the first ticket is issued with the key computed from the leftmost leaf, the second ticket uses the second leaf, and so on. But if we want to get a feeling for how this behaves in practice, we need to know something about the redeeming behavior of the clients, because we only puncture after a ticket has been redeemed, and the memory consumption depends heavily on that behavior. This is difficult to estimate; to the best of our knowledge, we don't really have an idea how exactly clients behave. But of course we can discuss it. The best case would be that clients return strictly from left to right: the first client redeems its ticket first, the second client redeems its ticket second, and so on. This would be quite beneficial for our construction, as we would exhaust this binary tree strictly from left to right, and if we were to serve n tickets in total, we would only need to store at most log n nodes. But this might not be how clients behave. The worst case is that only every second client actually returns, which would force us to precompute every other leaf at the bottom of the tree, and we would essentially end up with a session cache where we have already precomputed all of the keys we need. But again, this is unlikely; in practice, it doesn't seem plausible that only every second client returns. So what happens in practice? Since we didn't have reliable data on that, we asked Cloudflare for advice, and they advised us that it seems reasonable to assume that clients return roughly in order from left to right. What they meant by that is that a client who was issued a ticket some time ago is more likely to return than a client who was just issued a ticket. This seems to make sense, and it would also benefit our construction a lot, as all of the computations and punctures in the tree would only take place within one subtree. If you look closely at the tree, you can see that it doesn't get that bad.
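For intuition, here is a toy sketch of GGM-style evaluation and a single puncture, assuming SHA-256 with distinct prefixes as the two pseudorandom generators (an illustrative choice, not necessarily the paper's instantiation). Puncturing keeps exactly the siblings along the path (the co-path) and deletes the root and the path itself; bookkeeping for repeated punctures is omitted:

```python
import hashlib

def prg_left(seed: bytes) -> bytes:
    return hashlib.sha256(b"L" + seed).digest()

def prg_right(seed: bytes) -> bytes:
    return hashlib.sha256(b"R" + seed).digest()

def ggm_eval(root: bytes, leaf: int, depth: int) -> bytes:
    """Walk from the root to the leaf; each bit of the leaf index
    selects which generator to apply next."""
    node = root
    for level in reversed(range(depth)):
        bit = (leaf >> level) & 1
        node = prg_right(node) if bit else prg_left(node)
    return node

def ggm_puncture(root: bytes, leaf: int, depth: int) -> dict:
    """Puncture a fresh key at one leaf: return the sibling seed at
    every level of the path. These siblings allow evaluating every
    other leaf, while the root and the path nodes are deleted."""
    copath = {}
    node = root
    for level in reversed(range(depth)):
        bit = (leaf >> level) & 1
        left, right = prg_left(node), prg_right(node)
        copath[level] = left if bit else right  # keep the sibling
        node = right if bit else left           # descend the path
    return copath
```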
Okay, so this is the first construction, and we also have a second one. This is actually a new PPRF we have developed for this paper, inspired by the RSA accumulator of Camenisch and Lysyanskaya. This PPRF has some nice properties: the key size is independent of the number of punctures, so the key will not grow; the key is short in practice; and we have a tight security proof in the random oracle model. On the other hand, we have a bounded domain, and the domain size determines the efficiency, as the computations depend on it.

I will give you a brief intuition of how this works. Imagine you have an RSA modulus N: you generate this modulus and discard the primes. Then we know that it is easy to compute k to the power of some exponent mod N, where k will be our key later on, but it is hard to invert this computation without knowledge of the factorization. We will use this as leverage to realize puncturing. Additionally, we need to define some prime numbers: let p_1 to p_n be the first n odd prime numbers, so 3, 5, 7, 11, and so on. Do note that these prime numbers have nothing in common with the prime factors of the modulus; in practice, they will be much smaller. Maybe think of them as the first 250 prime numbers. So how does evaluation work? Our key k is just some random element of Z_N, and to evaluate on input l we raise k to the product of all of those prime numbers, but leaving out the l-th prime number. If, for example, we evaluate on input l = 100, we consider the product of all the prime numbers except the 100th odd prime number. And of course, we can do that for every input from 1 to n. Additionally, we need to hash this expression, since otherwise there would be an attack; we need to hide the algebraic structure of the expression. How would puncturing work? When puncturing at position l, we replace our current key k with k to the power of p_l, so we raise k to exactly the one odd prime number that evaluation at l leaves out. The idea is that, since we do not know the factorization of N, we cannot revert this computation. And as you can see, if we then wanted to evaluate on input l, we would need to raise our key to a product of primes that does not contain p_l in the exponent, and after puncturing we are no longer able to compute that. That's the idea of puncturing.

But as you will have noticed, we only have a polynomially bounded domain size, and this doesn't feel good. What you can do about this is use multiple instances of this PPRF sharing the same modulus: you can extend your domain on the fly by just sampling a new k from Z_N and using that, without regenerating the modulus every time, which saves computation time. So how does this perform in practice? We did some calculations: if you want your average evaluation to cost at most one full exponentiation, and you consider a 2048-bit modulus, you can serve n = 232 tickets per instance. This number comes from the fact that the product of the first 232 odd prime numbers is roughly of size 2048 bits. And if you apply this to a session cache for comparison, you could reduce a one-gigabyte session cache to only 51 megabytes of storage. So this is our second construction; a toy sketch of its evaluation and puncturing follows below.
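Here is a toy Python sketch of this idea. The class name and the bookkeeping of punctured indices are my reconstruction of the intuition given in the talk, and the parameters are insecure and purely for illustration; a real instantiation would use a properly generated 2048-bit RSA modulus whose factorization has been discarded:

```python
import hashlib

def first_odd_primes(n: int) -> list:
    """3, 5, 7, 11, ... by trial division (fine for a toy)."""
    primes, c = [], 3
    while len(primes) < n:
        if all(c % d for d in range(3, int(c ** 0.5) + 1, 2)):
            primes.append(c)
        c += 2
    return primes

class RSAPuncturablePRF:
    def __init__(self, N: int, k: int, n: int):
        self.N = N                      # RSA modulus, factors discarded
        self.k = k                      # random element of Z_N
        self.primes = first_odd_primes(n)
        self.punctured = set()          # indices punctured so far

    def evaluate(self, l: int):
        if l in self.punctured:
            return None                 # capability revoked
        # Raise k to the product of all odd primes that are neither
        # equal to p_l nor already absorbed into k by puncturing.
        e = 1
        for i, p in enumerate(self.primes):
            if i != l and i not in self.punctured:
                e *= p
        y = pow(self.k, e, self.N)
        # Hash to hide the algebraic structure of the group element.
        return hashlib.sha256(
            y.to_bytes((self.N.bit_length() + 7) // 8, "big")).digest()

    def puncture(self, l: int) -> None:
        # k <- k^(p_l) mod N: without the factorization of N this step
        # cannot be inverted, so input l can never be evaluated again.
        self.k = pow(self.k, self.primes[l], self.N)
        self.punctured.add(l)

# Toy usage (insecure parameters for illustration only):
# pprf = RSAPuncturablePRF(N=1009 * 2003, k=42, n=10)
```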
Okay, let's conclude what we achieved. We developed a new protocol that, in a sense, combines the approaches of session tickets and session caches. It is based on session tickets, but it uses significantly less secure memory than session caches, while achieving the same forward security and replay protection as session caches. This holds in particular for the early data, for which session tickets as currently used provide neither forward security nor replay protection. Our protocol is immediately usable with TLS 1.3 without changes to the protocol itself, and all changes are on the server side only, so you would not need to change anything in client implementations, as the client only stores random-looking bit strings. Additionally, in the paper we have formally modeled what session resumption protocols are, defined the adversarial model, and provided a security analysis of our protocol. If you want to know more about it, you can of course talk to me or have a look at the full version of the paper, which is available on ePrint. Thank you very much.

Session chair: Thank you, there is time for questions.

Question: How do the two PPRF constructions compare in terms of computation time? Which is more efficient?

Answer: The tree construction is way more efficient, because in practice you would just have a huge tree, and each computation along a path to the next node is only a hash evaluation, and hash evaluations are very cheap. In contrast, in the strong-RSA-based PPRF we need to perform exponentiations, and they are way more costly.

Question: Considering that the tree construction is better after all: are there other ways of arranging users in the tree? You said you arrange them according to time, but depending on the application, you might have a different model of user behavior. Are there different policies for placing users in the tree?

Answer: Yeah, it really depends on what you want. What we want, of course, is to reduce the key size as far as possible, and if you have a good idea of how clients behave, you might utilize that. It really depends on the scenario and on understanding the behavior of this construction in the real world. If it is likely that only every second client returns, then this construction might not be suitable, but maybe you could use something else.

Question: Underlying this construction is the assumption that session resumption happens only once. But shouldn't it happen once, and then the same user comes back a second time? It looks like you are saying: I will resume once and then forget it.

Answer: Those are multiple users. This is the server side, and the server has served multiple tickets to different clients.

Question: What I mean is that the same client will come back. It looks like you are saying: he came back, so I puncture the PRF. But the same session can resume another time; it might just hang on and then resume once again.

Answer: Maybe let's discuss this offline later; I'm not sure I really understood what you asked.

Question: For some applications, another simple idea comes to mind: maybe forward security is less important, since hopefully servers don't get corrupted often, but if you just want replay protection, you could implement some kind of efficient dictionary of which IDs have already been used. You could keep it encrypted on the server side, so a priori it would be hard to find collisions, even though the dictionary itself could be non-cryptographic. Have you considered this idea? Is it maybe a good compromise in between?

Answer: I think something like this was actually once deployed in Google's QUIC, where they stored, for example, all of the identifier values in a Bloom filter. But they said it was too hard to maintain when they serve too many connections, because they had to somehow synchronize it.
So they are actually not using this anymore. Instead, they either knowingly forgo forward security, arguing, as you did, that maybe the server does not get compromised, or they say that higher application layers have to deal with recognizing such replayed requests. So yes, they thought about it, but they are not deploying it anymore.

Question: I have a question. Doesn't this approach open you up to a denial-of-service attack, where clients deliberately redeem tickets in a pattern that forces the worst case, say every other session?

Answer: So, as I mentioned, what I showed is a simplified protocol. What we are actually doing in the full protocol is not simply encrypting the secret; we use authenticated encryption, so that it is not as easy to forge an identifier for some ticket. So this shouldn't be possible anymore.