Hello and welcome to my presentation on the post-quantum verification of Fujisaki-Okamoto. Let me first start with a few words about post-quantum secure encryption. Quantum computers will break existing public-key cryptosystems, and what we need to be secure in the future are new cryptosystems that are secure against quantum computers. This is not a new observation: for several years now there has been the NIST competition, which leads the search for the next-generation public-key encryption standard that is supposed to be post-quantum secure. And we have already reached the stage where we have several promising candidates. In the current third round we are left with Classic McEliece, CRYSTALS-Kyber, NTRU, and SABER. If we look at these cryptosystems, we see that they all have one thing in common: they use the random oracle to transform a weak, possibly not chosen-ciphertext-secure encryption scheme into a strong cryptosystem in the random oracle model, using some variant of the Fujisaki-Okamoto transform. This makes the Fujisaki-Okamoto transform, in all its different variants, a very important transformation, and we need to be really sure that this transformation is secure, and post-quantum secure. One approach to be really sure is formal verification.

The standard approach, when we ask ourselves what the security of a cryptosystem is, is manual verification. A human analyzes the cryptosystem and writes a security proof, and then other humans read the security proof. The first human is probably someone publishing a paper, and the other humans are the ones reading the paper. The problem with this approach is that it is very error-prone. It is very easy to introduce a mistake into a cryptographic proof, especially in the quantum case, where a lot of non-intuitive things happen, because quantum mechanics simply does not match our human intuition. An additional problem is that not only the person who writes the proof needs to be an expert; the reader also needs to be an expert, and needs to check each individual step of the proof to get full confidence in the result.

On the other hand, we have formal verification. Here a human still writes the proof, in many cases, at least when we are talking about cryptography, but now it is the computer that reads the proof. This has the big advantage that the experts do not need to check every step of the proof. The experts now need to verify that the specification, that is, the security definitions and their formalization, is correct, but they do not need to verify every single step anymore. So: write once, and it is done. Formal verification, if we can have it, is clearly a great advantage.

So what do we already have in terms of existing infrastructure for this purpose? There are actually quite a number of tools, of varying popularity and maturity. Probably the most well-known is EasyCrypt. There is also CryptHOL, which is a similar approach inside Isabelle/HOL; FCF in Coq; CryptoVerif, a tool that does automated or guided rewriting; Verypto, another formalization in Isabelle; and I am sure there are more that I have not thought of at this point. The approach these tools follow is that of game-based proofs, which are probably known to most cryptographers from their own pen-and-paper work. Here we have several games, symbolized by these program fragments.
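To give a feel for what such neighboring program fragments can look like, here is a minimal illustrative sketch in Python; the scheme and all names are made up for this illustration and are not taken from our formalization. The two games differ in exactly one line: a hash-derived key is replaced by a uniformly random one.

```python
import hashlib
import os

def game0() -> bytes:
    """Game 0: the session key is derived from a secret seed via a
    hash function (standing in for a random oracle)."""
    seed = os.urandom(32)
    return hashlib.sha3_256(seed).digest()  # key := H(seed)

def game1() -> bytes:
    """Game 1: identical to game 0 except for one line; the key is
    now sampled uniformly at random, independently of the seed."""
    seed = os.urandom(32)                   # unchanged line
    return os.urandom(32)                   # the single changed line
```

A game-based proof then argues, hop by hop, that no adversary can notice each such small change except with small probability.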
From one game to the next, we have a minimal difference in the behavior of the game. The interesting question, which does not arise that much in pen-and-paper proofs but becomes very relevant in the formal verification case, is: how do we prove the relationship between two consecutive games? This point is where the different frameworks and approaches differ, in subtle or even big ways. In the case of the EasyCrypt tool, for example, EasyCrypt uses a logic called pRHL, probabilistic relational Hoare logic, which allows us to give a very fine-grained analysis of the relationship of two programs; the games in the cryptographic setting are formulated as programs. This is also the logic on which the quantum case will later build.

The problem with all these existing approaches, however, is that they are not quantum sound. The soundness of these approaches is proven in the classical model: adversaries are classical, and so on. A priori, there is no guarantee that anything proven with these tools has any impact on the quantum setting. In fact, in prior work we showed that for EasyCrypt we could even give an explicit example, a multi-prover-like situation, where EasyCrypt would claim security but there is no security in the quantum setting.

So what can we use in the quantum setting? Here comes our prior work on quantum relational Hoare logic (qRHL). qRHL is a logic similar in spirit to the probabilistic relational Hoare logic from EasyCrypt, but it has support for quantum programs. While pRHL in EasyCrypt has the same spirit, it can only reason about classical programs, classical adversaries, and so on. Even for post-quantum security, where the protocols are classical, we still need to reason about quantum adversaries, so pRHL does not work even there. Therefore we need another logic, and qRHL is a candidate for that. There is also a theorem prover for qRHL, called the qrhl-tool. This theorem prover is designed for quantum cryptographic security proofs, but so far, that is, before the current work, only toy examples had been analyzed. It was shown that, yes, you can express security properties and do security proofs, but it was only done for extremely simple examples. So what we embarked on in the present work is to apply that paradigm to a real-life, complex security proof.

What is the security proof that we analyzed and formalized? We based our work on a result by Hövelmanns, Kiltz, Schäge, and Unruh from PKC 2020. They showed the security of a key encapsulation mechanism built using a variant of the Fujisaki-Okamoto transform. One crucial aspect is that their proof also works in the presence of decryption errors, which is very important when we are talking about lattice-based encryption, because lattice-based encryption usually has a negligible probability of failure when decrypting. While this is practically irrelevant if the failure probability is very small, it sometimes makes a big difference in the security proof: even a negligible failure probability can, and did, break some of the prior security proofs in the post-quantum setting.
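To make "decryption errors" precise: the correctness notion used in this line of work is, roughly, δ-correctness; I state it here in simplified form from memory. A scheme is δ-correct if the decryption failure probability, maximized over messages and averaged over key pairs, is at most δ:

$$\mathbb{E}_{(pk,sk)\leftarrow \mathsf{Gen}}\Bigl[\,\max_{m \in \mathcal{M}}\ \Pr\bigl[\mathsf{Dec}(sk, \mathsf{Enc}(pk, m)) \neq m\bigr]\Bigr] \;\le\; \delta.$$

For lattice-based schemes, δ is negligible but not zero, and that nonzero δ is exactly what the proof has to deal with.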
So how does the HKSU proof analyze this transformation? We start with an IND-CPA-secure encryption scheme, and then we apply a sequence of transformations. First Punc, which just drops one element from the message space. This gives us an encryption scheme with a stronger security property, so-called disjoint simulatability (DS), which means you cannot distinguish a valid ciphertext from an invalid ciphertext that is chosen in a way that is information-theoretically disjoint from any valid ciphertext. Then we want to make this scheme deterministic while keeping the DS property. This is done in a totally standard way: we use a hash function to hash the message, and this gives us the randomness. The next step is to get a key encapsulation mechanism from this: when decrypting, or rather decapsulating, we re-encrypt the decrypted message with the randomness derived from it and check whether this would actually lead to the ciphertext we received. This trick has the effect that fake ciphertexts can be caught, and this is the core idea behind the IND-CCA security of the resulting KEM; a small code sketch of this re-encryption trick follows below. Of course, in a practical situation, you would use this KEM combined with a symmetric encryption scheme in hybrid encryption, but this is not in the scope of this result, and it is actually also the simplest part of the whole scenario.

So what is our contribution? First, we formalized the HKSU security proof. This is, to our knowledge, the first formalized non-trivial post-quantum security proof. And it is certainly not the most trivial candidate to take, because it involves the quantum random oracle model. Many post-quantum security proofs are essentially classical, in the sense that they follow exactly the same steps as the classical proof, except you need to say "the adversary is quantum polynomial time" all the time, but all the reasoning steps are the same. This is not true in the quantum random oracle model, because you need to take into account that the adversary can make queries to the random oracle in superposition, and this makes everything, even a classical protocol, inherently quantum, and this needs to be taken into account in the analysis. Our work, besides showing that the Fujisaki-Okamoto transform is secure and giving the first formal verification of this, also shows the viability of the qRHL approach. Before, we only knew that it probably works for post-quantum crypto; now we know that at least in this particular case it works. So it is a viable approach, and we can look for the next protocol to analyze.

As part of our work, we also needed to extend the qRHL framework, or rather the qrhl-tool. We added support for the O2H theorem, the one-way to hiding theorem, which is a very commonly used theorem for situations in the random oracle model where you want to show that two games are indistinguishable as long as a certain input to the random oracle cannot be guessed. We added a corresponding tactic to the tool, and now you can use the one-way to hiding theorem in security proofs in the qrhl-tool.
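Before I come to the next extension, let me make the re-encryption trick from the construction above concrete. The following is a minimal sketch in Python, assuming we are given an encryption function enc(pk, m, r) made deterministic by passing the randomness explicitly, and a decryption function dec(sk, c); all names here are placeholders of mine, not notation from the paper.

```python
import hashlib
import os

def oracle(label: bytes, data: bytes) -> bytes:
    """Stand-in for the random oracles: G derives the encryption
    randomness, H derives the session key."""
    return hashlib.sha3_256(label + data).digest()

def encaps(pk, enc):
    """Encapsulate: derandomize encryption with r = G(m)
    (the 'make it deterministic' step), derive the key as H(m)."""
    m = os.urandom(32)                    # random message
    c = enc(pk, m, oracle(b"G", m))       # deterministic ciphertext
    k = oracle(b"H", m)                   # session key
    return c, k

def decaps(sk, pk, c, enc, dec):
    """Decapsulate with the re-encryption check: reject unless the
    decrypted message re-encrypts to exactly the received ciphertext."""
    m = dec(sk, c)
    if m is None or enc(pk, m, oracle(b"G", m)) != c:
        return None                       # fake ciphertext caught
    return oracle(b"H", m)
```

Any ciphertext not of the form enc(pk, m, G(m)) for some message m is rejected, which is exactly how fake ciphertexts are caught.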
Furthermore, we discovered that the logic is not as easy as was thought. Originally, the qrhl-tool only had support for global variables, because you could just say: when you want a local variable, just don't use it elsewhere. That is, when you want a variable used only in one game, you just make sure that no other game uses that global variable which is supposed to be local. But it turns out that while this reasoning makes sense in the classical setting, in the quantum setting there are all kinds of subtleties. So this led to an extension of the qrhl-tool where local variables are supported, and also to an extension of the theory.

Now, what does this qRHL look like that I keep talking about? In qRHL we have judgments; judgments are just facts that allow us to compare two games, to state their relationship. In the example here we see two games, quantum game 1 and quantum game 2, and a statement that if the quantum variables are the same before the execution, this is the x = y at the beginning of the judgment, and we run game 1 or game 2, then afterwards they are still the same. Here x is a variable of quantum game 1 and y of quantum game 2. This would be a claim about the indistinguishability of those games: if they start with the same input, they end with the same output. Of course, we can have much more complex claims about the relationship; it does not have to be an equality. The crucial point is that these pre- and postconditions can actually talk about quantum states. But we can also talk about classical variables directly, which is very important in the setting of post-quantum crypto, where most of the variables we care about are classical, and just a few variables, like the adversary state or the query registers for the hash functions, are quantum.

Now that we have seen what a qRHL judgment looks like, I will show you a very tiny example of what the security proof actually looks like in the qrhl-tool. Of course, since our whole proof covers thousands of lines and dozens of files, I cannot really give you a true insight into all the proof steps. So I am taking just one part of the proof, a very simple one, and going through the steps there, just to give you an impression. The proof we will look at is the equivalence of two games, game0_FO and game1_FO, which are the first two games in the proof of the transformation U that goes from disjoint simulatability to IND-CCA security. You see these two games; they differ in a few lines. The most crucial difference is that here the random oracle H is a uniformly randomly chosen function, while here H is a function computed from a few other auxiliary functions. The specific purpose depends on the other proof steps, but basically H is not a uniformly random function; it depends on other functions in specific ways. The second important difference is that here the adversary, this is the adversary of that game, calls an oracle dcq0, while in the second game it calls dcq1. We want to show that these two games are equivalent. So we have this lemma here: it says that the probability that the bit b is 1 in game 0 equals the probability that it is 1 in game 1. We show this with backwards reasoning: in order to show this goal, we transform it into a different goal that is sufficient. The first tactic says: to show this, we need to show a qRHL judgment. This judgment says that if this precondition is satisfied and we run game 0 or game 1, then this postcondition is satisfied. The precondition basically says that all the variables on the left and on the right are the same, and the postcondition says that the bit b is the same in both games; so it is an equivalence. The next step is to inline the games, game 0 and game 1; they are shown here on the right in the proof goal. And now we work our way backwards through this proof.
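Schematically, and in my own simplified notation rather than the literal tool output, that first tactic replaces the probability claim

$$\Pr[b = 1 : \mathsf{game0\_FO}] \;=\; \Pr[b = 1 : \mathsf{game1\_FO}]$$

by a judgment of roughly the shape

$$\{\mathrm{Cla}[x_1 = x_2] \sqcap (Q_1 \equiv_{\mathfrak q} Q_2)\}\ \ \mathsf{game0\_FO} \sim \mathsf{game1\_FO}\ \ \{\mathrm{Cla}[b_1 = b_2]\},$$

where x abbreviates the list of classical variables, Q the quantum registers, ≡_q is the quantum equality, and the subscripts 1 and 2 refer to the left and right program.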
So the first step is to refine this postcondition here, to replace it by another one that implies it. We see now that we require that all the variables are the same, except that the random oracle H on the right side is chosen in this specific way. Now we can start with the actual proof. The first step is to get rid of the adversary call. For this we have a tactic called equal, which says that we can remove a call without changing the two programs when they are almost the same, except for some differences; in this case, the oracles dcq0 and dcq1. For these differences, we need to show that the two games still preserve the invariant. So here we have that querying the G oracle preserves this invariant. It looks complicated, but it basically also just says that everything is the same on both sides, except for the statement about the random oracle H. I am not going into the details of how this is proven. The second subgoal is to show the same thing for queries to the function H. And finally, we need to show that the two decapsulation oracles, dcq0 and dcq1, behave equivalently. This is more non-trivial because they are not the same, but we can do it with a subproof from a different file; it uses this lemma here that is shown somewhere else, and I am not going to go into that.

Now we see that we have the two games again, but the last statement has been removed. We need to show that when we run the games up to this point, this postcondition is satisfied; it is the same postcondition as before. And now we work our way through these games step by step, removing one line after the other. We remove these two lines in both games; they are the same, so it is an easy step. We remove another line; next comes this k* line, then the c* line gets removed, and it continues that way. Here, things are a bit different: we need to remove the sampling of the random oracle, but it is not the same in both games. Fortunately, our postcondition here takes into account that this random oracle is defined in exactly this way, so we have some simple steps here. And now we have here basically the same code as here, except that here it is undefined and here it is H, but H turns out not to be used by this function. We continue in the same way; I will not go through the details. In the very end, and this is typical of how these proofs go, we have no programs left, and we need to show that with this precondition, meaning all the variables are the same, and doing nothing, this postcondition holds, meaning at least these few mentioned variables have to be the same. This is trivially true, because everything being the same implies a few things being the same. So we can do that; the last subgoal is gone, the proof is done, and we have shown the lemma that we started out with. And this we do for all the pairs of games, until we have shown the final security result.

So what are the lessons we learned from this formalization? Well, the good news is that for the largest part, the proof is basically the same as it would be for a classical protocol. We perform game rewriting steps where we change one classical computation and replace it by another classical computation, and the only quantum aspect is that we drag along the invariant that the quantum variables of the adversary in one game and in the other game are the same. However, a few of the steps are really quantum and need quantum reasoning. This is when we apply the one-way to hiding theorem. There, though, the quantum nature is hidden from the user of the qrhl-tool, because it is inside the tactic.
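Since the one-way to hiding theorem carries essentially all of the quantum reasoning here, let me state it roughly; this is the flavor of the statement from memory, with the fine print omitted. If two oracles H and G agree on all inputs outside a set S, then for any adversary A making at most q oracle queries,

$$\bigl|\Pr[1 \leftarrow A^{H}] - \Pr[1 \leftarrow A^{G}]\bigr| \;\le\; 2q\sqrt{P_{\mathrm{guess}}},$$

where P_guess is the probability of obtaining an element of S when measuring the input register of a randomly chosen query of A^H. In other words, the two games are indistinguishable as long as the adversary cannot guess an input on which the oracles differ, which is exactly the informal statement from earlier.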
Then there are some situations where we need to do rewriting of quantum circuits. And even though it is quite simple rewriting, things like evaluating the composition of two classical functions in superposition and breaking this down into individual gates, explicitly reasoning about what this turns out to be is quite tedious. This is the one point where we see that we really need some extra tool support, so that we do not have to do this by hand.

What are some interesting points for future work? We would like to verify the NIST candidates as they are. Currently we have analyzed a transformation that is like the transformations used for making the NIST candidates secure, but what we really want is to verify them exactly as they are specified, because there could be subtleties in how things do not quite match up, et cetera. I think it is a valuable goal to take each of the finalists, take the best known post-quantum security proof, and specify exactly that finalist. Future work would also include coming up with better ways to automate parts of such proofs, especially the quantum parts. This will make things more usable, make it faster to prove the security of new schemes, et cetera. And of course we would like to analyze fully quantum protocols: not post-quantum crypto, but protocols where the honest participants are actually also sending quantum messages. I see no reason why there should be a problem, but it is an additional challenge that we have not yet tackled, and it is probably very interesting.

So that is it for my talk; I thank you for your attention. And as a last thing: if you are interested in this kind of work, feel free to have a look at the job offers in my group. We are looking for anyone who does quantum crypto, who is interested in quantum logic, in theorem proving, and so on.