Hi and welcome to my talk about tightly secure authenticated key exchange revisited. My name is Doreen Riepel and this is joint work with Tibor Jager, Eike Kiltz and Sven Schäge. An authenticated key exchange protocol, or AKE for short, is used to establish a shared session key between two parties. This key can then be used with a symmetric encryption scheme to build a secure channel and encrypt further traffic. AKE is an important cryptographic building block and is widely used, for example in the TLS protocol. In this talk I will first define the security model for AKE, and as we study tightly secure AKE, I will define what tightness means. Then I will give a comparison to previous work. Coming to our contributions, I will show how to build AKE from simpler building blocks, in particular from key encapsulation mechanisms, or KEMs for short. While this construction has been studied before, our result here is to find the right security requirements for the KEM to actually achieve tightness. I will also show an example instantiation of the KEM at the end of my presentation. Let me now explain the AKE syntax and security model in more detail. We consider two users, Alice and Bob, and we give them both long-term keys consisting of a public key and a secret key, where the public keys are assumed to be known to everyone in the network. Now Alice wants to exchange a key with Bob. Therefore she computes some message MA which she sends over to Bob. At the same time she holds some additional secret information which is different from the long-term secrets and which we call the state of Alice. The state contains all ephemeral information that is necessary to derive the session key later. In our model we are focusing on two-message protocols where each party sends and receives only one message. That means when Bob receives Alice's message he can directly compute the shared session key. He also computes some response MB which he sends to Alice.
Note here that Bob does not need to store any ephemeral information and thus he does not have a state. When Alice receives Bob's message she can use her state and long-term key to compute the same session key. Correctness of the protocol tells us that those keys are really the same, as long as both users receive the messages that the other user has sent. But now, when looking at the security of an AKE protocol, we also consider an adversary Eve, and we put Eve in between the interaction. I will explain the capabilities of Eve one after another. First, she controls the message flows. That means she can drop, modify or inject messages. For example, instead of forwarding Bob's message, Eve sends another message MB'. As a result, Alice will either reject the message, in case she finds out that there is something wrong with it, or at least she will compute a different session key. In the best case we still want some security properties for the key, which I will explain later. We also want to give Eve access to some secret information: as long as Alice has not yet received Bob's response, Eve can reveal Alice's secret state. But we do not only want to give her access to ephemeral secrets but also to the long-term secrets, and in this example Eve corrupts Bob's long-term key, which means she can impersonate Bob and send messages on his behalf. She can also see how other users react to that. And finally, Eve can reveal session keys of her choice, so she can decide to obtain the key that Bob computed and may try to retrieve some other information about Bob. Zooming out of this picture and modeling the real world, we do not only have two users but many users and many sessions. As an example, we add Carol and Dave to the network. Carol has already established a key with Alice and one with Dave, and Dave also holds the same key with Carol. In this experiment we model security with a so-called challenge oracle.
Eve can make multiple queries to this challenge oracle on session keys of her choice. Let's assume she first challenges the session between Alice and Bob. That means she will either get the real session key or a random key. Then she challenges Alice and Carol's key and again will either receive the real session key or a random key. The same happens for the session between Carol and Dave. For answering these challenge queries there are two different ways to model the experiment. In the first scenario we have multiple challenge bits, one for each challenge. For example, the first bit is zero, so Eve receives the real key. The second bit is one, so she receives a random key, and the third bit is also one. In order to win, Eve must now find the bit for one particular challenge. She can choose, for example, the second challenge, and she wins if she guesses b2 correctly. Now consider a different scenario where we only have one challenge bit. That means if the challenge bit is zero, Eve will always get the real session keys when she makes a challenge query. And if the bit is one, she will always receive random keys. Of course, the goal is now to guess the bit b correctly. Both models have been used in the literature before, but I will tell you now why we believe that the single-bit model is more useful. First, this notion is actually well established when it comes to multi-challenge security definitions, for example in standard public-key encryption. The reason for that is that it is tightly equivalent to so-called real-or-random security definitions, which are mostly used in composition theorems. We do not want to use AKE as a standalone primitive, but we want to compose it with a symmetric encryption scheme. And when doing so, we want all challenge keys to be random at the same time and not only one. This is exactly what the single-challenge-bit model gives us. Apart from that, our security model captures all properties we want of an AKE protocol.
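To make the single-bit model concrete, here is a minimal sketch of such a challenge oracle. The class and method names are my own and only illustrate the idea: one bit decides whether every challenge query returns the real key or an independent random key.

```python
import secrets


class SingleBitChallenger:
    """Challenge oracle in the single-bit model: one bit governs ALL challenges."""

    def __init__(self):
        self.b = secrets.randbelow(2)  # the single challenge bit
        self.answered = {}             # remember the answer given per session

    def challenge(self, session_id, real_key):
        # First query for this session: decide the answer based on the single bit b.
        if session_id not in self.answered:
            if self.b == 0:
                self.answered[session_id] = real_key                       # real key
            else:
                self.answered[session_id] = secrets.token_bytes(len(real_key))  # random key
        # Repeated queries are answered consistently.
        return self.answered[session_id]
```

In the multi-bit variant, each `session_id` would instead get its own independent bit, and the adversary would only have to guess one of them.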
The properties we require are also formalized in the CK+ model. In particular, we capture forward secrecy, which says that an adversary cannot distinguish the session key of a challenge even if both long-term keys are corrupted after the session key is computed. Resistance to key compromise impersonation attacks, or KCI for short, means that given a long-term secret, the adversary cannot impersonate some other honest user in order to fool the owner of the corrupted secret key. The third property is resistance to maximal exposure attacks, and it says that the adversary cannot distinguish the challenge key even when given any combination of secret information that does not trivially leak the session key. An example is that we allow the adversary to reveal the state of the initiator and the long-term key of the responder. I already mentioned the word tightness a few times, which brings me to some definitions about provable security. The AKE security experiment I just explained is played as a game between a challenger and an adversary. We prove security by contradiction using a security reduction. That is, we assume that there exists an adversary A against our cryptographic scheme, in this case the AKE protocol, and we use A to build an adversary B against a computationally hard problem. However, we believe that this problem is hard to solve, so such an adversary A against our scheme cannot exist. We now call a security reduction tight if A and B have about the same advantage and running time. What I mean by about the same is that they only differ by a constant factor. In particular, for AKE this factor should not depend on the number of users or the number of sessions. This tells us how to choose system parameters correctly. In fact, only a tight proof allows us to implement the protocol in a theoretically sound way. Given a non-tight proof, one should think about increasing the system parameters, for example the size of elliptic-curve groups.
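Written out in standard advantage notation (a sketch of the usual convention, not a formula taken verbatim from the paper), a reduction B built from adversary A is tight if

```latex
\mathrm{Adv}_{\mathsf{AKE}}(\mathcal{A}) \;\le\; c \cdot \mathrm{Adv}_{\mathsf{prob}}(\mathcal{B})
\qquad \text{and} \qquad
t(\mathcal{B}) \;\le\; c \cdot t(\mathcal{A})
```

for a small constant $c$ that is independent of the number of users $\mu$ and the number of sessions $S$. A non-tight proof instead loses a factor such as $\mu$ or $\mu \cdot S$, which forces larger parameters for the same security level.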
Before coming to the details of our results, I want to give a short comparison to previous work. The first tightly secure AKE was proposed by Bader et al. at TCC 2015. They focused on the standard model but did not consider state reveals. They introduced and proved security in the multi-bit challenge model and not the single-bit one. At Crypto 2018, Gjøsteen and Jager proposed a variant of the signed Diffie-Hellman protocol in the random oracle model, which allows them to get a tight proof. In their work they also do not consider state reveals, and they also use the multi-bit challenge model. One year later at Crypto, Cohn-Gordon et al. provided very practical and efficient Diffie-Hellman protocols, also in the random oracle model. They are the first to use the single-bit challenge model; however, their proof is not tight. It loses a factor which is linear in the number of users. They also show that this loss is inherent to many protocols, and in fact our work bypasses their impossibility result. There was another work in between: Liu et al. presented a tightly secure AKE in the standard model at Asiacrypt 2020. However, they did not capture state reveals, and they also focused again on the multi-bit challenge model. This brings us to our work. We propose tight AKE protocols in the random oracle model, focusing on a stronger security model capturing state reveals. In particular, we also prove all our results in the more useful single-bit challenge model. So let's dive a bit more into the details. Our protocol is a generic construction from basic cryptographic building blocks. Our first protocol is built solely from KEMs. It is widely known that this yields an AKE protocol with implicit authentication. However, tightness of this construction was never considered before. We show how to instantiate such a KEM with hash proof systems.
We also look at a second construction which uses an additional signature scheme and a MAC, and this yields an AKE protocol with explicit authentication. For the rest of my talk I will focus on the first construction, and in particular I want to study the question of what security properties we actually need from the KEM in order to get a tight security proof. Before giving an answer, let's look at the protocol itself. In total, the protocol uses three KEM instances: a long-term KEM for each user to ensure authentication, and an ephemeral KEM to ensure forward secrecy. But one after another; let us start with the ephemeral KEM, which we mark with a tilde. Alice, the initiator of the protocol, draws a fresh key pair and sends the ephemeral public key to Bob. Bob runs the Encaps algorithm to encapsulate a key k tilde. He sends the ciphertext back to Alice so that she can compute the same key. In order to do so, she needs to store the ephemeral secret key until she receives Bob's message. That is what is stored inside the state. Now we add another KEM instance to authenticate Alice. Alice keeps the public and secret key as a long-term key pair. Bob, knowing Alice's public key, can now encapsulate a key kB and send the ciphertext to Alice, and Alice can compute the same key using her secret key. We do the same on Bob's side. He gets a long-term key pair, Alice encapsulates a key kA under Bob's public key and sends the ciphertext over. In order to derive the final session key, Alice must also store kA inside her state. Now we have three intermediate KEM keys kA, kB and k tilde, which are all hashed together along with public session information to derive the actual session key. What we then did was to extract the exact security properties we need from the KEM. Without tightness in mind, one can prove security by simply guessing the challenge session, and then one can simulate the other sessions quite easily.
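To recap the message flow just described, here is a toy sketch of the two-message protocol. The KEM here is only a hash-based placeholder with the right interface (it is NOT secure, since anyone knowing the public key can decapsulate); all function names are mine. What it shows is how the three intermediate keys kA, kB and k tilde are hashed together with the public session information, and what Alice keeps in her state.

```python
import hashlib
import secrets


# Placeholder KEM with the right (gen, encaps, decaps) interface only -- NOT secure.
def kem_gen():
    sk = secrets.token_bytes(16)
    return hashlib.sha256(b"pk" + sk).digest(), sk


def _mask(pk):
    return hashlib.sha256(b"mask" + pk).digest()


def kem_encaps(pk):
    k = secrets.token_bytes(32)
    return bytes(a ^ b for a, b in zip(k, _mask(pk))), k   # (ciphertext, key)


def kem_decaps(sk, ct):
    pk = hashlib.sha256(b"pk" + sk).digest()
    return bytes(a ^ b for a, b in zip(ct, _mask(pk)))


# Long-term keys for Alice and Bob
pkA, skA = kem_gen()
pkB, skB = kem_gen()

# Alice -> Bob: fresh ephemeral public key plus an encapsulation under pkB
epk, esk = kem_gen()
cA, kA = kem_encaps(pkB)
state = (esk, kA)                          # Alice's state until Bob's response arrives

# Bob: encapsulate under pkA and under the ephemeral key, then derive the session key
cB, kB = kem_encaps(pkA)
ct_eph, k_tilde = kem_encaps(epk)
ctx = (pkA, pkB, epk, cA, cB, ct_eph)      # public session information
K_bob = hashlib.sha256(repr(ctx).encode() + kem_decaps(skB, cA) + kB + k_tilde).digest()

# Alice: recover all three intermediate keys from her state and derive the same key
esk, kA = state
K_alice = hashlib.sha256(
    repr(ctx).encode() + kA + kem_decaps(skA, cB) + kem_decaps(esk, ct_eph)
).digest()
assert K_alice == K_bob
```

Note that Bob needs no state: he derives the session key immediately after receiving Alice's message.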
Such a guessing strategy, however, normally results in a quadratic security loss, so we want to avoid it. Now remember that in the AKE security experiment the adversary can adaptively corrupt users. Since we do not know which users will be challenged, we need to be able to output the long-term secret keys for all users. The long-term keys in our protocol are the KEM secret keys skA and skB. Also, as all of this happens adaptively, we do not know whether the secret key that decrypts a ciphertext will be leaked at some point. So in the end, all ciphertexts must decrypt correctly. Related to this, a ciphertext can either be part of a session whose key is revealed later or one whose key is challenged, but we do not know which of the two will happen. So we need a special property here to embed challenges in all sessions but also explain revealed keys afterwards. Another point concerning reveal queries is that we consider an active adversary. The adversary can come up with a ciphertext computed by itself and then reveal the resulting session key, which it already knows. So we need to be able to simulate this key correctly, which means that we need some decryption capability as in a CCA security definition. Similar to previous work, we also add another dimension of reveal queries, namely the state reveals. This makes things even more complicated, but it turns out that the properties are quite similar to those of the long-term KEM. On a state reveal query, the ephemeral secret key is leaked. This is the key I called sk tilde in the protocol. So we need some corruption capability here again. And we also want key indistinguishability even when the state is compromised. Taking all these properties together, it turns out that we need a quite strong security notion.
In fact, we came up with the definition of non-committing key encapsulation, where the term non-committing reflects the ability to first send out a challenge ciphertext but then still use it to explain a revealed session key in case it is not part of a challenge. I will now show you an instantiation of the KEM, and this should hopefully make it clearer. As I mentioned before, we generically build such a KEM from hash proof systems. But here I want to keep it simple and will directly show the instantiation from the decisional Diffie-Hellman (DDH) assumption, which is then similar to the Cramer-Shoup encryption scheme. Therefore, we fix a group G of prime order p and a generator g. We also compute a second group element h, which is g to the power of some random exponent omega. We call this the public parameter. Now the key generation algorithm picks two exponents x0 and x1 randomly as the secret key and computes the public key as g to the x0 multiplied by h to the x1. The encapsulation algorithm takes as input a public key. It chooses a random exponent r, and the ciphertext consists of two elements, where c0 is g to the r and c1 is h to the r. The key is derived using an additional hash function which inputs the public values, like the public key and the ciphertext, to bind them to the key, and the actual secret value, which is the public key to the power of r. Note that the ciphertext is now a DDH tuple, or in other words, it is an element of the DDH language. We call r the witness, and knowing the witness, the sender can compute the key. Decapsulation does not need to know the witness, but it uses the secret key to derive the same key by simply taking the first ciphertext element to the power of x0 multiplied by the second ciphertext element to the power of x1. So far this looks like a normal KEM, but what we need now is to ensure the non-committing property of ciphertexts, and this is where the magic happens.
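The DDH-based scheme just described can be sketched as follows. This is a toy sketch only: the group is a tiny Schnorr subgroup (q = 11 inside Z_23*) chosen so the code runs, whereas a real instantiation uses a large group, and SHA-256 stands in for the random-oracle hash. The `sim_encaps` function previews the simulated encapsulation algorithm discussed next; all names are mine.

```python
import hashlib
import secrets

# Toy Schnorr group for illustration only: q = 11 divides p - 1 = 22, and g = 2 has order q.
p, q, g = 23, 11, 2
omega = 1 + secrets.randbelow(q - 1)
h = pow(g, omega, p)                      # public parameter h = g^omega


def H(*vals):
    # SHA-256 standing in for the random-oracle hash that derives the key
    return hashlib.sha256(repr(vals).encode()).hexdigest()


def gen():
    x0, x1 = secrets.randbelow(q), secrets.randbelow(q)
    pk = (pow(g, x0, p) * pow(h, x1, p)) % p          # pk = g^x0 * h^x1
    return pk, (x0, x1)


def encaps(pk):
    r = secrets.randbelow(q)                          # the witness
    c0, c1 = pow(g, r, p), pow(h, r, p)               # (g, h, c0, c1) is a DDH tuple
    return (c0, c1), H(pk, c0, c1, pow(pk, r, p))     # bind key to the public values


def decaps(sk, pk, ct):
    x0, x1 = sk
    c0, c1 = ct
    # c0^x0 * c1^x1 equals pk^r for an honestly generated ciphertext
    return H(pk, c0, c1, (pow(c0, x0, p) * pow(c1, x1, p)) % p)


def sim_encaps(sk, pk):
    # Simulated encapsulation (the non-committing trick): a second independent
    # exponent s != r pushes (c0, c1) outside the DDH language, and the key is
    # derived as in decaps, using the secret key instead of a witness.
    r = secrets.randbelow(q)
    s = (r + 1 + secrets.randbelow(q - 1)) % q        # guarantees s != r
    c0, c1 = pow(g, r, p), pow(h, s, p)
    return (c0, c1), decaps(sk, pk, (c0, c1))
```

Correctness holds because pk^r = (g^x0 · h^x1)^r = c0^x0 · c1^x1, and even the simulated ciphertexts decrypt to the key that was handed out, which is exactly the "all ciphertexts must decrypt correctly" requirement from the security analysis.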
We define an additional algorithm that computes simulated ciphertexts and keys. We call this algorithm SimEncaps, and it takes the secret key as input. As opposed to the normal Encaps algorithm, the simulator now picks a second random exponent s and computes the second part of the ciphertext as h to the power of s. The key is then computed as in the decapsulation algorithm, using x0 and x1. Note that we do not have a witness anymore, as the ciphertext is no longer a DDH tuple. What we can show now is that the outputs of these two algorithms, Encaps and SimEncaps, are computationally indistinguishable, and this is based on the DDH assumption. The nice thing here is that this holds even if the secret key is corrupted. So in the simulation of our AKE experiment we can just always use simulated ciphertexts, and we do not need to care about whether the secret key is corrupted afterwards. Looking at the KEM key now, we can show that the secret value c0 to the x0 times c1 to the x1 is information-theoretically hidden from the adversary as long as it does not know the secret key. So keys are statistically indistinguishable from random keys, and here the hash function, which is modeled as a random oracle, ensures that this holds for many challenge keys even if some of the other keys are revealed. So this is the main building block of our KEM-based AKE. Let me now conclude the talk and sum up our contributions. We introduced the security definition of non-committing key encapsulation, which is tailored to the security of our AKE protocols. We show how to generically build such KEMs from hash proof systems in the random oracle model, and one example we have just seen is based on the decisional Diffie-Hellman assumption. Using that notion, we prove tight security of two different AKE protocols, where our first protocol is built only from KEMs and our second protocol uses an additional signature scheme.
We are the first to consider tightness in a stronger security model with state reveals, and we also revisit the definition of the multi-challenge security notion. In particular, we use the variant with a single challenge bit, which is useful as it allows to tightly compose the AKE protocol with a symmetric primitive. As our work relies on the properties of the random oracle, the natural question is whether similar techniques are also possible in the standard model. At this point I want to mention our follow-up work, where we join forces with some more researchers to construct tightly secure AKE and signatures in the standard model. If you are interested, that paper was already presented at Crypto this year. For more details on this work, please check out our paper on ePrint. That's it, and thank you for listening.