Indeed, it's not. Thank you for the introduction. This is joint work with my colleagues Britta Hale and Sebastian Lauer, who are both here, as well as Tibor Jager. And it's about key exchange: this neat principle that allows two remote parties to establish a shared symmetric cryptographic key over an insecure connection. Now in practice, it turns out that this key exchange step can be quite a bottleneck. Assume one message needs to be sent back and forth between the client, Alice, and the server, Bob. This means that Alice has to wait for one round trip time before she can actually send data. So if you're on a high-latency network, for example a mobile network, this incurs quite a delay of potentially many hundred milliseconds. A solution to this is to employ a zero round trip time key exchange protocol. A key exchange protocol is zero round trip time if it requires only a single message to be sent from the client to the server in order to establish a key. This means that Alice, along with this key exchange message, can already use that key to encrypt data that travels along with that message, so she doesn't have to wait for a response from the server. Now, zero round trip time key exchange is theoretically not a new concept, but it has gained quite some practical attention over the last years for being an explicit goal in Google's QUIC protocol, as well as in the upcoming TLS version 1.3. There are, however, two main security drawbacks with zero round trip time key exchange, both stemming from the fact that the server cannot actively contribute to the exchange. The first one is replay attacks: the adversary may simply copy the key exchange message and the accompanying data being sent and replay them to the server. Now if the server doesn't take extra care, it will just decrypt this data again, and what you have is, in a sense, replayed data.
Now it turns out that in some settings this is essentially unavoidable, as has been discussed, for example, by Eric Rescorla and Adam Langley at last year's Real World Crypto. The second big issue with zero round trip time key exchange is forward secrecy, or rather the lack of it. So the server uses some kind of secret key to authenticate and derive the session key for that key exchange. Now if the adversary later compromises that key, then, because there is no ephemeral contribution from the server, it has everything it needs to decrypt the data: it can simply derive the same session key again. As you know, forward secrecy is considered a crucial security goal, in particular to prevent mass surveillance. This means the lack of it here is particularly unfortunate, but it is commonly considered an inherent limitation of zero round trip time key exchange. In this work, we show that this common belief is actually false: we build a zero round trip time key exchange protocol which achieves full forward secrecy. To understand how this works, let's first have a look at a somewhat related scenario, that of asynchronous messaging. Here, Alice wants to send some message over to Bob without any preceding communication. This can of course be solved by plain public key encryption: you just encrypt the message under Bob's public key. But this leaves you with the same problem of lacking forward secrecy. Now, already back in 2003, Canetti, Halevi, and Katz showed how to achieve a coarse notion of forward secrecy by employing a hierarchical identity-based encryption (HIBE) scheme in this setting. For this, you divide time into coarse intervals, and a secret key corresponds to each of these intervals. Now, when an adversary comes and compromises some secret key, it cannot decrypt messages from earlier intervals. Still, within the same interval, it can decrypt earlier messages that were sent.
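To illustrate the interval idea (this is a minimal symmetric-key analogue for intuition, not Canetti-Halevi-Katz's actual HIBE construction): if each interval's key is derived from the previous one through a one-way function and the old key is erased when the interval ends, a later compromise reveals nothing about earlier intervals, while everything within the current interval remains exposed.

```python
import hashlib
import os

def next_interval_key(k: bytes) -> bytes:
    """One-way key update: the previous key cannot be recomputed from the new one."""
    return hashlib.sha256(b"interval-update" + k).digest()

k0 = os.urandom(32)           # key for interval 0
k1 = next_interval_key(k0)    # when interval 0 ends: store k1, securely erase k0

# An adversary who compromises k1 cannot recover k0 (SHA-256 is one-way),
# so interval-0 messages stay secret -- but every message sent during
# interval 1 before the compromise used k1 itself and is exposed.
```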
So it only gives you a coarse-grained notion of forward secrecy. This is why, in 2015, Green and Miers came up with a more fine-grained approach to forward secrecy, which they called puncturable forward-secret encryption. For this, they fused the HIBE approach with an additional attribute-based encryption scheme. This allows you to take the secret key of some interval and puncture out the capability to decrypt specific ciphertexts you've already seen; by removing this capability, you get forward secrecy for these ciphertexts. And this is where the idea for our work starts. Our core observation is that this type of puncturable forward-secret encryption relatively directly yields forward-secret zero round trip time key exchange. Our technical contribution is then two-fold. First, we establish puncturable forward-secret key encapsulation as the core building block for this protocol. Here, encapsulation simply means that instead of encrypting a message, we encrypt a symmetric key. We then show how to build such PFS-KEMs in a generic way from any hierarchical identity-based KEM, such that we achieve strong CCA security without relying on the random oracle, thereby improving over previous designs. In the second step, we show, again generically, how to build from any such PFS-KEM a forward-secret zero round trip time key exchange protocol. As part of this, we formalize what key exchange security means for forward-secret zero round trip time key exchange and prove that our protocol achieves the security you want to see there. Now, alas, I don't have time to go into any real details, but let me give you a glimpse of how our protocol works. We have Alice and Bob, Alice holding the public key of Bob and Bob the corresponding secret key. In order to run a session, Alice will simply encapsulate a symmetric key for Bob and send over the encapsulating ciphertext.
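The control flow can be sketched as follows. This is a hypothetical toy stand-in, not the actual PFS-KEM: it collapses both parties into one object and fakes encapsulation with a table of one-time keys, purely to show the encapsulate / decapsulate-then-puncture / replay-fails pattern.

```python
import os

class ToyPFSKEM:
    """Toy illustration only: each 'ciphertext' is a random tag mapped to
    a fresh key; real PFS-KEMs derive the key from public-key material."""

    def __init__(self):
        self.usable = {}  # stands in for the server's puncturable secret key

    def encaps(self):
        ct, key = os.urandom(8), os.urandom(16)
        self.usable[ct] = key  # in a real KEM, the server derives this itself
        return ct, key

    def decaps_and_puncture(self, ct):
        # Puncturing: remove the capability to derive this key ever again.
        return self.usable.pop(ct, None)

kem = ToyPFSKEM()
ct, k_alice = kem.encaps()            # Alice: encapsulate, send ct (+ data)
k_bob = kem.decaps_and_puncture(ct)   # Bob: decapsulate, then puncture
assert k_alice == k_bob
assert kem.decaps_and_puncture(ct) is None  # a replayed ct no longer works
```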
At this point, she can immediately use this key to start encrypting data. On the other side, Bob will do two things. First, he will decapsulate the key, and at this point he is able to decrypt the data that might be arriving from Alice. Second, he will take his secret key and puncture it, so that the capability to derive this key again is removed. And this is what gives you immediate forward secrecy. So how is this puncturing functionality enabled in this design? Essentially, the secret key Bob holds is part of a hierarchical key structure. In the beginning, Bob holds the root node of this tree. Now, when a ciphertext arrives, say 01, Bob uses the root node to derive the secret key corresponding to that ciphertext in order to decapsulate. He then punctures this key, meaning he removes all the nodes on the path from the root node to this leaf node, so that he is no longer able to derive this key again. Instead, as the new secret key, he stores the siblings on the path from the root to the leaf, which still allows him to derive all the other keys in the tree. This process is repeated: when the next ciphertext arrives, Bob first decapsulates with the corresponding secret key and then punctures out this capability. Now, if Bob just proceeded like this, the secret key size would grow rather quickly, namely linearly in the number of sessions Bob is running. So we need to do some more here. What we do is add another layer on top of this tree, which divides it into coarse time intervals that the two parties agree on; these can be, for example, days. When a time interval is over, Bob can simply remove the corresponding subtree and thereby reduce the size of the secret key back to a logarithmic amount of data to be stored. Okay, so what are the properties of this protocol?
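Before turning to the properties, the tree bookkeeping just described can be sketched as follows. This tracks node labels only, with no actual HIBE key material; it is an illustration of the "store the siblings, delete the path" idea, with 3-bit leaf labels chosen for the example.

```python
def puncture(sk_nodes, leaf):
    """Remove the ability to derive `leaf`, keeping the siblings along its path.

    `sk_nodes` is a set of node labels (bit strings); a leaf is derivable
    iff some stored node label is a prefix of it."""
    # Find the stored ancestor of `leaf` (a prefix of its label).
    anc = next(n for n in sk_nodes if leaf.startswith(n))
    sk_nodes.remove(anc)
    # Walk from the ancestor down to the leaf, storing each sibling.
    node = anc
    for bit in leaf[len(anc):]:
        sk_nodes.add(node + ("1" if bit == "0" else "0"))  # the sibling node
        node += bit
    # The leaf itself is NOT stored: its key can never be derived again.

sk = {""}             # initially, the root node covers all leaves
puncture(sk, "010")   # decapsulate ciphertext "010", then puncture it out
print(sorted(sk))     # ['00', '011', '1'] -- O(depth) nodes remain
assert not any("010".startswith(n) for n in sk)  # "010" is now unreachable
assert any("011".startswith(n) for n in sk)      # all other leaves still derivable
```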
Well, first of all, it achieves full forward secrecy through this puncturing approach. As soon as the key exchange is over, compromising Bob's secret key will not affect the security of the session anymore. Puncturing also gives you replay protection, at least on the key exchange level, because you simply can't decapsulate the same ciphertext twice. For performance, we did a rough implementation based on a HIBE scheme by Blazy, Kiltz, and Pan, published at Crypto 2014. As it turned out, this was not the optimal choice; I'll come to that in a second, but we still want to share this initial evaluation with you here. First of all, on the client side, encapsulation is very efficient, taking only a few milliseconds, on the order of 10 milliseconds on a regular laptop. On the server side, the pure decapsulation of the key, in order to be able to decrypt, takes the same amount of time, but deriving the corresponding secret key in the tree may, depending on what the tree looks like, be relatively expensive and can take up to roughly a second. This key derivation step is particularly expensive in the BKP scheme, which is why it was not an optimal choice, and it means that puncturing in particular is very costly, because there are potentially many intermediate nodes to be derived in order to come up with the next punctured secret key. Still, while this is not yet a practical instantiation, there is hope, for two reasons. First, there are other schemes with more efficient key delegation; efficient delegation has not been a particular focus for HIBE schemes, but there are schemes, for example by Gentry and Silverberg, where this step is more efficient, and this needs to be looked at. Second, the BKP scheme provides the strong notion of adaptive security, whereas our construction only requires a weaker notion, namely selective security.
So that means there's quite some space for improvement, both on the implementation side and in research on better-suited HIBE schemes that can then just be plugged into this generic construction. Okay, so to summarize: first of all, fully forward-secret zero round trip time key exchange exists. We show this by providing a simple protocol which is provably secure, and where the core building block of puncturable forward-secret key encapsulation can be built from any hierarchical identity-based KEM scheme. The big open question is how to make this practical, both through optimized schemes and optimized implementations of those schemes. Right, this brings me to the end of my talk. Thank you very much for your attention.

So we have a couple of minutes for questions. Go right ahead.

Hi, I'm interested in your thoughts on denial-of-service attacks against the scheme. I was thinking: if the initial message Alice sends Bob is comparatively small, Mallory could, for example, just send enough of her own messages to puncture out a large portion of that tree; or, if it's large enough that that's not feasible, just send enough to hopefully overwhelm memory on Bob's side, which has to keep track of this tree before it can expire it.

Yeah, definitely. Because there's quite some heavy crypto involved on the server side, DoS resilience is something to be looked at. In particular, if you're under attack, you potentially want to add some cookie mechanism to be sure that the clients that want to talk to you are legitimate and really do want to talk to you, and this kind of stuff, but it's beyond what we looked into here. Thank you. You're welcome.

Hello. Can you talk about how your scheme works with a fleet of servers that don't share real-time state? Because that's what session tickets solve for us in TLS, and if we hadn't had shared state, we would probably just use essentially session IDs. So can you talk about that, please?
Sure. So in this tree, basically, one thing is that you split it across time, and then within a time interval the tree is formed along the ciphertexts. So what you could do for some kind of load balancing is to say: I split the range, or the space, of these ciphertexts among the servers that I have, and then each server has to keep consistent state within its share of the ciphertext space.

I was thinking more about a geographically distributed network, like Cloudflare's anycast. We have one PoP in Europe, one in the US, and we can't just load-balance between the two based on ciphertext.

Okay, we would need to look into this more closely. I don't have a... Thank you. Let's chat in a minute, thanks.

The execution of the puncturing step seems to rely on the message actually arriving. Could the adversary attack the forward secrecy properties of the system simply by dropping messages?

Sorry, can you repeat the last...

Could the adversary attack the forward secrecy properties of the system simply by dropping messages?

So the point is, you do the puncturing as soon as the message arrives. So in particular, if the message is held in flight and you compromise the server's key before it sees the message and can puncture, then you don't get forward secrecy guarantees, yes. You need to see the message, you need to process the message, in order to say: okay, now I can't do this again.

Yeah, but dropping a message doesn't have to be something deliberately performed by an adversary; the network will do that on its own.

Sure, sure, but if the message is dropped and at the same time the key is compromised, then yeah, you don't get forward secrecy there.

Yeah, I was just going to ask, this is kind of a really basic thing about implementing this.
This protocol seems to require the ability to forget data, and if you're doing this on a mobile device with standard flash memory, that's not so easy. Have you thought at all about what you need to do to make sure that you can actually forget the data, or is that just something that's outside the scope of what you're working on?

No, for now we didn't look into this, but for any type of forward secrecy where you need to forget a key at some point in time in order to not reveal it anymore, you're in this trouble of how to securely erase data. Yes, that's non-trivial.

Thank you. So ultimately, TLS 1.3 decided not to do zero-RTT key agreement, but instead zero-RTT resumption with a shared symmetric secret, where you send a ticket along with it. I'm wondering, it seems like you could adapt this method for forward secrecy to apply to the key servers used to decrypt the tickets: when you encrypt a ticket, you don't use a single key for all the tickets you're encrypting, but you walk down this tree from the root, pick a key, encrypt with it, and then when you receive the ticket again at a later time, you erase the symmetric keys you used along the path.

So you mean fusing this together, in the sense of using the leaf node as your secret key, like the pre-shared key in the TLS-style design?

It's just an implementation detail on the server, where you have keys encrypting tickets, and the concern is that the ticket encryption keys have to be kept for long periods of time if you want zero-RTT to work over those periods, and this idea of going down the tree and erasing might help ameliorate that.

Maybe, let's talk a bit about that, maybe.

Any more questions? Okay, we're out of time anyway, so let's thank Felix. Thank you very much.

Okay.