Welcome everyone to my talk about generic authenticated key exchange in the quantum random oracle model. This was joint work with Eike Kiltz, Sven Schäge and Dominique Unruh.

The context of this work is the so-called NIST competition, which aims to find standards for quantum-secure public-key encryption, key exchange, and signatures. So first I'll say a few words about public-key encryption. On the level of security, NIST is aiming for active security, which we also know as CCA security. But what is way easier to achieve is passive security, like one-wayness or indistinguishability under chosen-plaintext attacks, and what we usually do is to construct a scheme that is passively secure and then turn it into something that is actively secure by applying the well-known Fujisaki-Okamoto transformation and its modern KEM variants. The main technical problem in applying the Fujisaki-Okamoto transformation is the probability of decryption failure. Most lattice-based schemes come with a small probability of decryption failure, meaning that with some small probability, encrypting and then decrypting does not result in the original message. This has to be taken care of, and I'll say a bit more about it later in the talk.

Moving on to key exchange. Pre-quantum, this was kind of easy: we could use the Diffie-Hellman key exchange and authenticate, for example, via signatures or MACs. Post-quantum, the situation is a bit different. Diffie-Hellman is broken, and our known constructions for post-quantum signatures are still quite costly, so we would like to do without them. Just to avoid confusion: of course, any public-key primitive requires a working public-key infrastructure, and the latter of course requires signatures to certify the public keys. But a certificate only has to be verified once and for all, meaning that using signatures to set up the infrastructure only adds negligible overhead.
So this is not an issue, but during protocol execution we want to do without signatures. There already has been work on authenticated key exchange without signatures: in 2008, Boyd et al. gave KEM-to-AKE constructions. In fact, they gave two constructions achieving different levels of security. The first protocol was relatively weak, because exposed randomness breaks its security, and to strengthen this protocol to be resilient against this kind of attack, a session-specific Diffie-Hellman layer was added, meaning that this construction is not suitable for post-quantum security. There was also work by Fujioka et al. in 2012. They extended the framework of Boyd et al. by also adding a session-specific layer, but instead of working with a Diffie-Hellman layer, they just used any passively secure KEM. So this was more generic, and by additionally working with a twisted-PRF trick, they were able to achieve resistance against exposure of secret data. However, they assumed the underlying scheme to be perfectly correct, and I already mentioned that this might turn out to be an issue; I will discuss it later. Another minor issue is that the Fujisaki-Okamoto transform already involves hashing to derive the key, and the construction by Fujioka et al. adds more, potentially redundant hashing of the already-hashed session.

In conclusion, there were no known KEM-to-AKE constructions coming with guaranteed post-quantum security in the presence of potential decryption failures. The goal of our work was to give a simplified transformation that is secure against quantum adversaries even if the underlying scheme is not perfectly correct, and that also gets rid of unnecessary hashing steps. We achieved this goal by lifting the Fujisaki-Okamoto transformation to the AKE setting, meaning that we can now turn any passively secure PKE scheme into post-quantum secure authenticated key exchange. An example of how to apply this work can be seen in the Kyber key exchange.
Given that I already hinted at the fact that our AKE result can be seen as an extension of our Fujisaki-Okamoto KEM result, and since the authenticated key exchange protocol draws so heavily on recent research on its KEM counterparts, I will provide you with some background on Fujisaki-Okamoto-like KEMs first. Afterwards, I provide some intuition on the security level we are aiming for with our key exchange protocol, and finally I present our protocol and some open questions that I find quite interesting.

As a warm-up, I'll do a quick recap of the Fujisaki-Okamoto transformation. The original work was found to have some limitations when it comes to post-quantum security, and those limitations inspired a lot of follow-up research. The first limitation I want to mention is that the original work needed the underlying scheme to be perfectly correct; that is, first encrypting and then decrypting always results in the original message. But what I would like to point out is that many lattice-based encryption schemes that were proposed for the NIST competition actually come with a probability of decryption failure; known examples are Kyber, Frodo and NewHope. And what we found out in 2017 is that even a negligible probability of decryption failure might affect the security level, so let me say a bit more about this issue. Intuitively, one might think that a negligible probability should be a negligible issue. But an active attacker can query the decapsulation oracle on any ciphertext it wishes. That means that neither the message nor the encryption randomness has to be drawn uniformly at random, which was how the failure probability used to be defined at first. In particular, this means that an attacker can try to deliberately trigger decryption failures. And if failure depends on the secret key that is used, then the fact that a failure occurred already leaks secret information on its own.
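As a side note on how this failure probability is formalized: the conservative, worst-case correctness notion used in this line of work, which I will come back to at the end of the talk, can be sketched as follows, maximizing over messages before taking the expectation over key pairs (a sketch of the standard definition, with δ denoting the failure probability):

```latex
\[
  \delta \;=\; \mathbb{E}_{(pk,\,sk)\leftarrow \mathrm{KeyGen}}
  \Big[\; \max_{m \in \mathcal{M}} \;
  \Pr\big[\mathrm{Dec}(sk, \mathrm{Enc}(pk, m)) \neq m\big] \Big]
\]
```

The maximum over m captures exactly the point above: an active attacker need not choose its messages uniformly at random.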
To point out how this indeed seems to be an issue, I want to refer to the attack described in the work of D'Anvers et al. in 2018. What they do is that they first obtain a list of failing ciphertexts and then estimate the secret based on these ciphertexts. Luckily, the NIST proposals pick their parameters conservatively enough to render the attack impractical. But nonetheless, I wanted to stress that even such a negligible probability of decryption failure can indeed affect the security.

Okay, so how do we cope with this situation? One possible solution would be to only build schemes with perfect correctness. But first of all, this is quite costly: lattice-based encryption schemes can be made perfectly correct by putting a limit on the noise and setting the modulus large enough, but increasing the size of the modulus makes the underlying problem easier to solve in practice. So the dimension of the problem needs to be increased in order to obtain the same security level, which in turn leads to greater public-key and ciphertext lengths. And also, many NIST submissions deliberately made the design choice of having imperfect correctness, and those would not be covered by any analysis that does not deal with non-perfect correctness. So from my point of view, the better solution is to give proofs that deal with non-perfect correctness.

Okay, now that we've talked about decryption failure, there was another limitation: the original proof was in the random oracle model, which does not account for adversaries with quantum capabilities. I'll now sketch a bit what changes if our attacker actually does come with quantum capabilities. The random oracle model is a proof heuristic in which we replace the hash function with a perfectly random function H.
The common proof strategy we use in our FO security proofs is to claim that if A can distinguish a particular hash value from random, the reduction must learn the pre-image x*, and this x* can be used to solve the underlying problem. Our FO example would be that learning the message m* would imply that we are able to invert a ciphertext.

What changes if A is quantum? The scenario we are considering here is that a quantum adversary is still interacting with a non-quantum network. This means that online primitives like decryption or signing stay classical, but offline primitives like hash functions, which the adversary can compute on its own, are now computable in superposition. So what's new? The adversary can evaluate the hash function on a superposition, and if you are not sure what that means, just imagine it to be a linear combination of all possible inputs. Furthermore, the possibility of A pulling quantum tricks leads to more complicated proofs. An example is how to extract a particular pre-image from the oracle queries, because now that pre-image is hidden somewhere within this superposition, within this linear combination.

Here we see our typical "random until queried" argument: we state that the probability of A distinguishing the hash value from random is upper bounded by the probability of A querying the random oracle on x*. This is still the classical setting. What changes if our attacker is quantum is the following: we get a factor of q that goes into the upper bound, where q is the number of queries to the random oracle, and we also get a square root over the probability. And the probability is no longer the probability of A querying the random oracle on the pre-image; instead, it is the probability that measuring a random query gives us the pre-image.
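Schematically, and suppressing constants, the two bounds just described can be written as follows, where u is a uniformly random string and q is the number of random oracle queries; this is only meant to illustrate the shape of the bounds:

```latex
\[
  \text{classical ROM:}\quad
  \bigl|\Pr[A^{H}(H(x^\ast))=1]-\Pr[A^{H}(u)=1]\bigr|
  \;\le\; \Pr\big[\,A \text{ queries } H \text{ on } x^\ast\,\big]
\]
\[
  \text{QROM:}\quad
  \bigl|\Pr[A^{H}(H(x^\ast))=1]-\Pr[A^{H}(u)=1]\bigr|
  \;\lesssim\; q \cdot \sqrt{\Pr\big[\,\text{measuring a random query of } A \text{ yields } x^\ast\,\big]}
\]
```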
So we can already see that this upper bound is quite far from the one we had in the classical setting, and while there have been some recent improvements, they all come with additional technical restrictions, so they are not just drop-in replacements for what we had before. Now that I've hopefully convinced you that correctness errors and quantum attackers are indeed an issue, it comes as no surprise that a lot of research has been invested in the FO transformation in the quantum random oracle model within the last years, and I'll try to gather up some common ground that all the recent results share.

What all results have in common is that they dissect the FO transformation into two simpler transformations and then give security statements for those two steps. The first step is usually called transformation T, and it is used to de-randomize the underlying scheme; I'll show you in a second how this works. The second step is a hashing step that turns the intermediate PKE scheme obtained by de-randomization into a KEM that is actively secure.

To give a quick reminder of what the T-transform looks like: it is essentially the encrypt-with-hash construction, meaning that we make the encryption scheme deterministic by using the hash of the message as the encryption randomness. For the second transformation, we take the intermediate encryption scheme, and encapsulation looks as follows: we choose a uniformly random plaintext m, we use the underlying encryption scheme's encryption algorithm to encrypt it to a ciphertext c, and we derive the key by computing the hash of the message and the ciphertext. In the decapsulation algorithm, we first use the underlying decryption algorithm to get back the plaintext m (or possibly some m' ≠ m if a decryption failure occurs). If the decryption algorithm rejects, we also reject, and otherwise we compute the key as above.
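To make the two steps concrete, here is a minimal Python sketch of the T-transform and the U-transform with explicit rejection. SHA-256 with domain separation stands in for the random oracles, and the underlying PKE is a toy placeholder (its secret key equals its public key, so it is of course not secure); all function names are illustrative, not from the paper:

```python
import hashlib
import os

def H(*parts):
    """SHA-256 with length framing, standing in for the random oracles G and H."""
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

# Toy placeholder PKE (NOT secure: the secret key equals the public key).
def keygen():
    k = os.urandom(32)
    return k, k  # (pk, sk)

def enc(pk, m, r):
    # "ciphertext" = the randomness plus the message masked by a keystream
    return r + bytes(a ^ b for a, b in zip(m, H(pk, r)))

def dec(sk, c):
    r, body = c[:32], c[32:]
    return bytes(a ^ b for a, b in zip(body, H(sk, r)))

# Transformation T: de-randomize via encrypt-with-hash (randomness := G(m)).
def enc_T(pk, m):
    return enc(pk, m, H(b"G", m))

# Transformation U: build an actively secure KEM from the de-randomized PKE.
def encaps(pk):
    m = os.urandom(32)            # uniformly random plaintext
    c = enc_T(pk, m)
    K = H(b"KDF", m, c)           # key := hash of message and ciphertext
    return c, K

def decaps(sk, pk, c):
    m = dec(sk, c)
    # Re-encrypt and compare: with a deterministic scheme, this check
    # rejects any ciphertext that was not honestly generated.
    if m is None or enc_T(pk, m) != c:
        return None               # explicit reject
    return H(b"KDF", m, c)
```

The re-encryption check in decapsulation is how the standard FO-style decapsulation detects mauled ciphertexts once the scheme has been de-randomized.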
In a sense I lied a bit to you, because actually there are many different variants of the U-transform that have been considered within the last years. For example, you might want to derive the key by only hashing the message, or you could go for implicit rejection, meaning that instead of returning the failure symbol if decryption did not work, you return a value that looks random, derived deterministically from the ciphertext and a secret seed. What all recent results have in common is that at least one of the two proofs runs into the quantum extraction problem I mentioned before.

Now I'll discuss a bit why there has been ongoing work and why we are still trying to improve the CCA bounds. Our goal would be to tightly relate the FO KEM's security to that of the underlying scheme. By tightness I mean that the derived scheme can be proven as secure as the underlying building block, and by less tight results I mean that the derived scheme's security can only be related to that of the weaker building block in a less immediate manner; we'll see an example in a moment. If the relation is too loose, the security statement for our derived scheme could turn out to be meaningless, meaning that we would have to make the underlying problem harder. But that would mean scaling up the underlying scheme's parameters, which would make it less efficient. So whenever possible, we want to give a proof that is as tight as possible.

First, here is some wishful thinking: what we would like is a CCA bound that is simply the CPA advantage of the underlying scheme, because this is what is achieved in the random oracle model. So if we don't have to assume quantum attackers, we are very, very tight. In this work, we achieve an upper bound that still comes with a square root and also a factor of q.
Here, q is the number of quantum random oracle queries, and intuitively, the bound is less tight because we need to use a quantum pre-image extraction strategy, and we have already seen how those come with a factor of q and a square root. Still, I should mention that while the bound we see here might not be great, we already made some progress compared to the bounds from 2017, in which we had nested square roots. Last year we made progress in two regards: we could reduce the underlying security notion to one-wayness instead of CPA security, and for deterministic schemes we could show how to get rid of the factor q. Why does this work? Because if the underlying encryption scheme is already deterministic, we don't have to apply the T-transform; it is sufficient to apply the second step, the U-transform, and this work also gave a new bound for our quantum extraction problem. This year at Eurocrypt, a really nice new result was presented that got rid of the square root altogether. So what we see here is that for deterministic schemes, only the factor of q disturbs our tightness, and for CPA security we have a quadratic factor of q. These tighter bounds are achieved by a new quantum extraction technique called Measure-Rewind-Measure. What I should mention is that this comparison comes with a huge caveat: those results are for different variants (as you have seen on the U-transform slide, you can derive your key in different ways, and so on), and they might also need some additional requirements; way more details are given in another talk that can be found under this link.

To proceed to authenticated key exchange, I'll first explain our security model a bit.
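Very roughly, and only to summarize the progression just described (suppressing constants, additive terms, and the per-variant caveats mentioned above), with ε the advantage against the underlying scheme and q the number of quantum random oracle queries:

```latex
\begin{align*}
  \text{ROM (classical attackers):}          \quad & \mathrm{Adv}^{\mathrm{CCA}} \lesssim \varepsilon \\
  \text{QROM, this work:}                    \quad & \mathrm{Adv}^{\mathrm{CCA}} \lesssim q\sqrt{\varepsilon} \\
  \text{QROM, deterministic PKE:}            \quad & \mathrm{Adv}^{\mathrm{CCA}} \lesssim \sqrt{\varepsilon} \\
  \text{QROM, Measure-Rewind-Measure:}       \quad & \mathrm{Adv}^{\mathrm{CCA}} \lesssim q^2\,\varepsilon \;(\text{CPA}),
                                                     \quad q\,\varepsilon \;(\text{deterministic})
\end{align*}
```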
We are in the lucky setting that we are only considering two-move protocols, meaning that only two messages are exchanged between Alice and Bob, which makes everything a bit easier. The goals we are aiming for are, first of all, correctness, meaning that both parties derive the same key with overwhelming probability, and secondly key indistinguishability: from the outside, it should look as if the key were completely random. In practice, there exist many ways to attack key indistinguishability. For example, one could try to learn session keys of sessions that have already been established; one could try to corrupt a user and thereby learn the secret key of the user that was attacked, or even the secret keys of both involved parties; one could try to learn the session state or the randomness that was used; and one could also try to actively interfere with the protocol by modifying the exchanged messages. There exist many different security models that come with subtle differences in how these attacks are modeled, but going into the details would be way beyond the scope of this talk.
For this talk, I'll only point out what kind of security we are aiming for. As I already said, we are in the lucky situation that we only consider two-move protocols, meaning that it is easy to dissect the whole protocol into three simple steps, and in particular, it turned out to be relatively easy to define our security model in pseudo-code. We have already seen that the attack surface for key exchange is pretty broad, and going into details here would go beyond the scope of this work, but for viewers that know and care about AKE, I want to include a list of attacks that our model covers.

For our security proof, we use a slightly weaker variant of the model above. What we disallow is to do a state reveal on the test session if it is an initiator session and the adversary actively interferes with the protocol by changing an exchanged message. This only affects initiator sessions, because for two-move protocols, a responder holds no state anyway, and it only affects the time interval between sending and receiving the message. While an active attacker may increase this interval by withholding messages and delaying their arrival, in real-world implementations there is some restriction on this interval, because messages cannot be delayed too heavily: otherwise the initiator will abort, assuming that the receiver cannot be reached. Our slightly weaker variant is essentially the notion of the work I discussed when I mentioned the KEM-to-AKE constructions that have been given before.

In the last technical part of the talk, I will now show you how we can extend the Fujisaki-Okamoto strategy to the key exchange setting. Recall that we are aiming for both authentication and key indistinguishability, and our strategy will be to use a multi-user variant of Fujisaki-Okamoto. What I mean by that is that we exchange FO-like ciphertexts; by this I mean ciphertexts that are built according to the T-transform. So both partners pick some random messages and use the T-transform to encrypt it
using the partner's public key. Key computation will be a multi-user variant of the U-transform, but since we are now in the multi-user setting, we have to hash the whole transcript, which also means that we have to include the public keys of the communicating partners in the hash. But that's not really the whole story, because we are also aiming for weak perfect forward secrecy, and to include some freshness in every session, what we do is to add some session-specific, or ephemeral, FO communication. By that I mean that Alice has to pick a new key pair for each session, according to the underlying encryption scheme, and she sends over the ephemeral public key. Bob will pick yet another random message, an ephemeral message for this session, T-encrypt it with the ephemeral public key, and send the resulting ciphertext over. Of course, we have to include this ephemeral transcript in the hash as well, so we include the ephemeral public key and the ephemeral message and ciphertext.

To sketch security rather quickly, I'll first make a statement that you will have to believe; the details can be found in the paper. My claim is that with any non-trivial strategy the attacker could come up with, they will only obtain two out of the three messages that are used to derive the key. Given that this is the case, our AKE proof is basically a multi-user version of our KEM proof, with one exception, namely the aforementioned state reveal attack. The problem is caused as follows: Alice's state, namely the message she picks and the key pair she picks, is independent of her secret key, and Bob's response, those two ciphertexts and the messages he picks, is also independent of his secret key. So what the attacker can now do is to let Alice initialize the session that is to be attacked, reveal her state to learn the message she picked, and then just pretend to be Bob by picking the remaining messages (the static one and the ephemeral one) itself, to control the whole key. But to succeed with this strategy, the attacker has to reveal
Alice's session state before she refuses to communicate with Bob any longer; that is, they have to get to the session state before she times out.

Lastly, I want to discuss some open questions that I find particularly interesting. The first question concerns our correctness definition. You already saw, in the discussion of the original Fujisaki-Okamoto transformation, how active attackers can look for particularly bad ciphertexts to extract information about the secret key, which is why we cannot simply loosen this very conservative correctness notion, in which we look at the worst possible message. But maybe there is a way in which we can generically transition from average-case correctness to worst-case correctness; I think this would be quite neat. The second question is whether there might exist passive-to-active transformations that already start from KEMs. I want to stress here that such a transformation is not even known from KEM to KEM, and it would be really great to have one, because it would be applicable to authenticated key exchange and also when defining hybrid modes. By the latter I mean combining classically secure and post-quantum secure schemes to hedge our bets when we want to transition towards post-quantum security. The last question I want to mention concerns the tightness of our results. Recall that on the comparison slide for the known CCA bounds, I already mentioned the really nice new quantum query extraction technique that resulted in way tighter bounds, because it basically got rid of the square root that used to be inherent to all our extraction techniques. What we might want to look into is whether this Measure-Rewind-Measure technique can also be applied to our proof structure: it comes with some restrictions, and we still have to make sure that plugging our structure and the technique together indeed works out. But if that were the case, we would achieve tighter
bounds for both our KEM construction and our authenticated key exchange construction, leading to greater efficiency. With those questions I want to conclude my talk. Thanks everyone for watching, I hope you enjoyed my talk, and hopefully see you somewhere. Bye!
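For completeness, here is a toy Python sketch of the two-move protocol described in the talk, using a placeholder PKE (secret key equals public key, not secure) and SHA-256 as a random-oracle stand-in. Re-encryption checks, decryption-failure handling, and the exact hash-input ordering of the paper are omitted, and all function names are illustrative:

```python
import hashlib
import os

def H(*parts):
    """SHA-256 with length framing, standing in for the random oracle."""
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

# Toy placeholder PKE (NOT secure: the secret key equals the public key),
# already de-randomized via the T-transform (randomness := G(message)).
def keygen():
    k = os.urandom(32)
    return k, k  # (pk, sk)

def enc_T(pk, m):
    r = H(b"G", m)                      # encrypt-with-hash: r = G(m)
    return r + bytes(a ^ b for a, b in zip(m, H(pk, r)))

def dec(sk, c):
    r, body = c[:32], c[32:]
    return bytes(a ^ b for a, b in zip(body, H(sk, r)))

# --- two-move AKE: FO lifted to the multi-user setting ---
def initiator_move(pk_B):
    m_A = os.urandom(32)
    c_A = enc_T(pk_B, m_A)              # static FO ciphertext for Bob
    epk, esk = keygen()                 # fresh ephemeral key pair for this session
    return (epk, c_A), (m_A, esk)       # (message to Bob, session state)

def responder_move(sk_B, pk_A, pk_B, received):
    epk, c_A = received
    m_A = dec(sk_B, c_A)
    m_B = os.urandom(32)
    c_B = enc_T(pk_A, m_B)              # static FO ciphertext for Alice
    m_e = os.urandom(32)
    c_e = enc_T(epk, m_e)               # ephemeral FO ciphertext
    # Hash the whole transcript, including both static public keys and the
    # ephemeral public key, together with all three exchanged messages.
    K = H(pk_A, pk_B, epk, c_A, c_B, c_e, m_A, m_B, m_e)
    return (c_B, c_e), K

def initiator_finish(sk_A, pk_A, pk_B, state, sent, reply):
    m_A, esk = state
    epk, c_A = sent
    c_B, c_e = reply
    m_B = dec(sk_A, c_B)                # recover Bob's static message
    m_e = dec(esk, c_e)                 # recover Bob's ephemeral message
    return H(pk_A, pk_B, epk, c_A, c_B, c_e, m_A, m_B, m_e)
```

Note how an attacker who reveals Alice's state (m_A, esk) and then plays Bob's role knows or chooses all three messages, which is exactly the state-reveal caveat discussed in the talk.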