Okay, thank you, and welcome to the last talk of the last session of Track A. I realized there's been a bit of a competition between Track A and Track B, and I did notice that according to the schedule, we're finishing 25 minutes before they are. So you're officially all on the winning team; I just wanted you to know that. So, starting out, we have good news here. We've done it. Forward secrecy, full forward secrecy, for zero-RTT key exchange. If there's one thing you take away from this talk, I want you to remember that it is possible. A lot of people said that zero RTT could not achieve full forward secrecy, but it is possible. Now, if you're sitting there thinking, what is she talking about, you are in the right place. I'm going to talk about the motivation behind this for the majority of the talk, and then give a brief view of what the protocol actually looks like in our construction. First of all, if we're going to talk about zero-RTT key exchange, we might as well say what RTT is in the first place. RTT stands for round trip time: when we send from a client to a server, or between any two parties, and get a response back, that is one full round trip. And the question is, particularly for key exchange, how long is it until we can actually send encrypted data? We can say one round trip, or two round trips. If you're talking about a key exchange with, say, six round trips, we might as well give up, because Track B wins and they get done first. A round trip takes a while, so we want to minimize that if possible. Looking at TLS as an example: we send a ClientHello, with probably some other stuff; the server responds with a Finished message; and now the client can compute its own Finished message and can actually start encrypting data.
We can compute the session key, which is this green line here, and send encrypted data down there. Now, that's one full round trip; if we add in TCP, because that's what it takes, we have two round trips, and it begins to add up. Even if we do TLS over UDP, it still takes a while. So if we're talking about efficiency, the question becomes: is it possible to send data immediately? Is it possible to actually encrypt on the very first flow of a key exchange? That sounds a bit contradictory: we're talking about sending encrypted data before you've even negotiated a key. If you're thinking that there could be security problems with this, you are thinking correctly. It's problematic. However, no one can argue with the latency here. That's fast: encrypted data sent immediately, no waiting. So this is something a lot of industry has been getting interested in, for obvious reasons. It's quick. QUIC, no pun intended, is by Google, and it was one of the first forerunners in zero-RTT key exchange; it runs over UDP. What does QUIC look like? Well, essentially, there is a medium-lived key up here, this sk, which is part of a server configuration file. During some prior session, this is sent from the server to the client. Then later, when the client wants to reconnect, it uses this to compute a temporary key, a zero-RTT key. How it does this is basically a Diffie-Hellman with its own choice of ephemeral value. Later, the server can choose its own new ephemeral value, and we eventually end up with a solid session key down here. But immediate encrypted data can be sent in these flows. Okay, back to the issue that this could be problematic. Let's take a look at one of those problems: replays. Replays are a big deal when we're talking about zero-RTT data.
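The QUIC-style flow described above can be sketched in a few lines. This is a toy illustration only, under assumed simplifications: real QUIC uses elliptic-curve groups and a proper key-derivation function, and the function names here are invented for the sketch. The point is the structure: the client combines its ephemeral with the server's medium-lived config key to get a zero-RTT key immediately, and the server's fresh ephemeral later yields the final session key.

```python
# Toy QUIC-style 0-RTT key derivation. The group and all names are
# illustrative; a real deployment would use elliptic curves and HKDF.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; far too small for real security
G = 3

def keygen():
    """Generate a (secret, public) Diffie-Hellman pair."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def derive_key(my_secret, their_public):
    """Hash the shared Diffie-Hellman value down to a symmetric key."""
    shared = pow(their_public, my_secret, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

# Server's medium-lived config key, given to the client in a prior session.
s_sk, s_pk = keygen()

# 0-RTT flow: client picks an ephemeral and can encrypt under k0 at once.
c_esk, c_epk = keygen()
k0_client = derive_key(c_esk, s_pk)
k0_server = derive_key(s_sk, c_epk)   # server recomputes the same k0
assert k0_client == k0_server

# Later, the server contributes a fresh ephemeral for the final session key.
s_esk, s_epk = keygen()
k1 = derive_key(c_esk, s_epk)
```

Note that k0 depends only on the client ephemeral and the server's medium-lived key, which is exactly why compromising that key retroactively exposes all zero-RTT data, as discussed next.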
It can either be the data itself or the key exchange that's replayed, and there are other issues that can come up with replay, but here's just one of them. An adversary can replay this ephemeral value and replay the data itself. So if the client is purchasing something, for example, then it's been billed twice, essentially. Maybe we replay this several times, and the server thinks, okay, you just want to keep buying the same item. That's not going to make for a lot of happy clients. And an alternative: what happens if this medium-lived key gets compromised? Well, if that's compromised, then the adversary can compute the temporary key and read the payload itself. This is a version of what's known as forward secrecy. So let's take a closer look at the forward secrecy landscape, since it's in the title of the talk and therefore we should address it, right? The forward secrecy picture says we have some initial sessions, and then some time later, after those session keys have been used and the sessions completed, an adversary compromises the long-term key. Now, in terms of forward secrecy, we're not too concerned about what happens after that; we assume it's gone downhill. What we're really concerned about is what happens before that: are those session keys still safe? Forward secrecy says that if those session keys remain safe even when the long-term key is compromised, then we have forward secrecy. If all the communication that happened in the past is still okay, even after compromise, then we're happy. In terms of QUIC, this becomes problematic. Now we're not talking about a long-term key; we're actually talking about a medium-lived key, because this is part of the server configuration file. However, if we compromise it, the zero-RTT key and all the associated data can now be read, and the adversary has it; it's gone. The eventual session key, the capital K here, that's still okay.
But all those sessions, and this is any zero-RTT session the client had made to the server, are compromised. And a lot of people said, oh well, that's all we can do; that's the life of zero RTT. And we said, well, are you sure? Can we take this and make it better? Can we actually get full forward secrecy on that initial key exchange? As I said at the beginning, the answer is yes. Okay, so, our construction. Full forward secrecy: check. Replay protection: check. How do we do this? Well, it's based on hierarchical identity-based KEMs with selective security, so it's actually not that strong a security demand, plus one-time signatures. It's also flexible, which is cool: you can instantiate it with any KEM that's sufficient, for example a post-quantum one, or a pairing-based one, et cetera. It's a generic construction. So, the core idea behind this. You might say, well, everyone said it wasn't possible; how are you actually doing it then? We update the secret key. It's a very simple idea: update the secret key. If the secret key can only be used to decrypt a ciphertext once, then even if that secret key is compromised later, it can't decrypt that same ciphertext again. A quick overview of what the protocol looks like; this is the core idea, and I'm leaving out a lot of details, obviously. At the core, it's a key encapsulation mechanism. That's it. We have a key encapsulation happening on the client; the server decapsulates it; and then it updates the key. It punctures it. We've mentioned puncturing a few times in this session alone. We puncture the secret key, and then we can no longer decrypt that ciphertext again. Some of the hidden details here: we actually have an ID buried in this. The ID is composed of both the one-time signature verification key and a time interval before that.
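The encapsulate/decapsulate/puncture interface can be sketched with a deliberately naive stand-in. Everything here is illustrative, not the paper's actual HIBKEM-based scheme: a real puncturable KEM encapsulates under a public key alone and supports exponentially many IDs without storing per-ID state, whereas this toy just tracks used identities. What it does capture is the behavior: one successful decapsulation per ciphertext, then puncture.

```python
# Toy puncturable KEM: each identity (time interval + one-time-signature
# verification key) can be decapsulated at most once. Illustrative only.
import hashlib
import os

class ToyPuncturableKEM:
    def __init__(self):
        self.master = os.urandom(32)   # stands in for the HIBKEM root secret
        self.punctured = set()

    def encaps(self, identity: bytes):
        """Produce (ciphertext, session key) bound to `identity`.
        (A real KEM would use only the public key here.)"""
        r = os.urandom(16)
        k = hashlib.sha256(self.master + identity + r).digest()
        return r, k   # r plays the role of the ciphertext

    def decaps(self, ct: bytes, identity: bytes):
        """Recover the key, then puncture so the same ciphertext can
        never be opened again, even if the secret key leaks later."""
        if identity in self.punctured:
            return None  # replay: reject
        k = hashlib.sha256(self.master + identity + ct).digest()
        self.punctured.add(identity)  # real scheme deletes key material
        return k

kem = ToyPuncturableKEM()
ident = b"interval-00|vk-abc"
ct, k_client = kem.encaps(ident)
k_server = kem.decaps(ct, ident)     # first use succeeds
replayed = kem.decaps(ct, ident)     # replay is rejected
assert k_server == k_client and replayed is None
```

Replay protection falls out of the same mechanism as forward secrecy: once punctured, the ciphertext is dead for everyone, including an adversary who later steals the key.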
So this is how we're going to synchronize and track which keys have been used, at a very high level. Let's take a closer look at the key itself. How are we updating it? As I said, we have a time interval at the top level, and below that we have the verification keys of the one-time signatures, and essentially this forms a tree that we can traverse. We start out with a secret key, shown here in blue. Everything else is just in your imagination; it's just a blue key, right? Then at some point we get a ciphertext, and we need to decrypt it. So what do we do? Well, we generate all the keys on the path that gets us there, and in addition we generate the sibling keys along that path. Then we puncture the path that got us to this decryption in the first place. So we can no longer decrypt that ciphertext, and we can no longer derive that key by any means; everything that led us to this key is gone. The new secret key is everything shown in blue here. So the secret key grows. That's a downside to this protocol: the secret key does grow. But we only puncture once per session, because what we're extracting is the session key. So how bad this is depends a lot on your use case. If you're talking about, say, a messaging protocol, where you establish a session with a friend and set up your messaging, then you're probably only going to have an initial session, and you'll keep talking for however long, until your friend drops their phone in a pond and you have to start a new session when they buy a new phone. So it isn't necessarily that bad, but it depends a lot on the use case. It's something to keep in mind, though. However, we can improve this memory issue. I mentioned before that we have some sort of time-slot synchronization, and this is the top part of our tree.
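The path-and-siblings puncturing just described can be sketched over a binary tree of bit-string labels. This is an illustrative data structure, not the paper's HIBKEM key-delegation algorithm: here the secret key is simply the set of node labels from which every unpunctured leaf remains derivable, and puncturing a leaf replaces its covering node with the siblings along the path.

```python
# Sketch of path puncturing in a binary key tree (illustrative labels:
# node "01" covers leaves "010" and "011"; the root is the empty string).
def puncture(secret_key: set, leaf: str) -> set:
    """Remove the ability to derive `leaf`, keeping all other leaves."""
    new_key = set(secret_key)
    # Find the node in the current key that still covers this leaf.
    covering = next((n for n in new_key if leaf.startswith(n)), None)
    if covering is None:
        return new_key          # already punctured: nothing derives leaf
    new_key.remove(covering)
    # Walk from the covering node down to the leaf, keeping each sibling
    # so the rest of the tree stays derivable.
    path = covering
    for bit in leaf[len(covering):]:
        new_key.add(path + ("1" if bit == "0" else "0"))
        path += bit
    return new_key

# Depth-3 tree: the root alone initially derives every leaf.
sk = {""}
sk = puncture(sk, "010")
# Leaf "010" is gone for good, but its neighbor "011" is still derivable.
assert not any("010".startswith(n) for n in sk)
assert any("011".startswith(n) for n in sk)
```

Each puncture removes one node and adds up to one sibling per tree level, which is exactly why the secret key grows with use.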
So here, we're going to have four time slots, just as an example, and we can erase these time slots once they're used. So when the secret key has grown a lot because of use over here, for example, we can get rid of that and reduce it back down. Now, this requires only very loose synchronization between the client and the server; it's not strict time synchronization. But it can improve efficiency, or improve memory usage, we'll say. Other evaluations. This doesn't actually show up in the paper, in case you're wondering or have read it or want to read it, but here is a rough evaluation we've done of a very basic implementation, to give you an idea of what you're looking at. Essentially, encapsulation is taking... well, seconds, that's good. Decapsulation is taking seconds, and it gets a bit worse for puncturing, where we're talking seconds to minutes. However, as I said, this is only a very basic implementation, definitely not a detailed one. And we need only selective security on the KEM. So this could be improved, and we expect it to be, not to mention the implementation itself. Other comparisons. About two years ago at S&P, Green and Miers discussed puncturable encryption. So if we compare what ours means versus theirs: ours is generic; as I said, it's a generic construction from a hierarchical identity-based KEM, so you can instantiate it as you will, whereas theirs is specific to bilinear groups. So we have a lot more flexibility; it's an improvement in that respect. And we're in the standard model versus their random oracle model, so this is pretty good, actually. If you think back to Nigel's invited talk earlier in the week, about making things more efficient and moving from theory to practice, well, this is now at a step where someone here can take it from theory to practice.
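The time-slot cleanup can be sketched in the same bit-string picture. This is again illustrative: the top bits of a node label name the loosely synchronized interval, and once both sides agree an interval is over, every remaining key node under that prefix can simply be deleted. (The sketch assumes puncturing has already replaced any strict ancestor of the interval, so deleting by prefix cannot orphan later slots.)

```python
# Sketch of erasing a used time slot: with four slots, the interval labels
# are "00", "01", "10", "11", sitting at the top of the tree.
def erase_interval(secret_key: set, interval: str) -> set:
    """Delete all key material under a finished time-slot prefix."""
    return {node for node in secret_key if not node.startswith(interval)}

# After heavy puncturing inside interval "00", the key holds many nodes
# there; erasing the slot shrinks it back down, leaving later slots intact.
sk = {"000", "0010", "0011", "01", "1"}
sk = erase_interval(sk, "00")
assert sk == {"01", "1"}
```

Because the two parties only have to agree that a named interval is finished, rough wall-clock agreement is enough; no tight clock synchronization is needed.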
There's room for improvement, but we expect it is possible. So what do we have now? What is the outcome of all this? Once again, we have forward-secure zero-RTT key exchange. We have a security model for it, a generic construction and security proof, and a whole bunch of other amazing details which you can, of course, read in the paper. What do we have for the future? We can optimize the KEM key delegation; this is another area for improvement. A lot of the work done on these schemes so far has optimized, say, decapsulation, and not so much the key delegation step itself. For our construction, that's the most expensive part, so if we optimize that, the whole scheme becomes optimized. And, of course, make it practical. So there's room for future work, and, once again, I just want to say: yes, it is possible, and yes, once again, we're one up on Track B. So thank you very much.

Q: So we have lots of time, so we have lots of questions. Can we present the paper in zero-RTT time with respect to reviewing?

A: Ah, that would be lovely, wouldn't it?

Q: You mentioned that the key discard scheme required some loose synchronization. Could you say something about how that's done, or what you mean by loose?

A: Yeah, so if we go back to this picture here: we're only talking about a set time interval. We're not talking about seconds, for example, or milliseconds that are set precisely. It's more a very rough naming of time intervals. For example, and again, this is just an example: zero-zero will be our first time interval, then there'll be zero-one, one-zero, one-one, et cetera. So that takes us up to there as a time interval, and then to build the rest of the tree we use the verification keys of the one-time signatures.
So if, say, the client and server are in agreement, and we'll just say this is time zero, which corresponds to puncturing here, then they can both agree we're puncturing time zero, and this part of the subtree is gone. We've purged the keys there.

Q: You mentioned TLS at the beginning of your talk, but this sort of expanding key seems like it would not be very appropriate for TLS itself.

A: As it is now, no, in its current form. But again, this is, as I said, the start, so to speak. We've finally got forward secrecy in zero RTT. There's room for improvement both in this idea itself and in our actual construction, and then, if you expand the idea, it could relatively easily get to the point of usability for TLS.

Q: So, TLS 1.4, then.

A: Unless it becomes 2.3, right?

Q: Have you thought about the suitability of applying this to, say, distributed systems? Like with CDNs, you often have multiple servers and you want to use the same key with all of them. Would you be able to apply this, or would that synchronization just not be possible?

A: If you want to have the same key on all of the servers? Well, obviously, this depends on your security and adversarial model. If you're using the same key on all the servers, there can be complications in that by itself. Certainly, there are variations of this that you could apply, but that sort of adversarial model is quite specific.

Q: Okay. It's just because that's kind of what QUIC is faced with there, with strike registers, right? You can replay a message to a different server and it still goes through.

A: Right, yeah. With this, a ciphertext should only be decryptable once at all, regardless of where you send it.

Q: Did you implement it with BN curves for 128-bit security?

A: Yeah, so for our basic implementation, we have a number of variations of it.
I didn't want to present any of them in detail because, for one, they're basic, and also still at the trial phase. But yes, we have...

Q: So, I think using BN curves at 128-bit security, you should be careful.

A: Yeah, we have a 256-bit variant as well. That's why I didn't bother putting up the actual numbers: we have the whole variety, from obviously insecure key sizes to probably perfectly fine key sizes, and it's all around the same order of seconds, milliseconds, and minutes.