In this talk we are going to analyze the quantum security of Fiat-Shamir non-interactive zero-knowledge proofs and Fiat-Shamir signatures. These objects are constructed in two steps, which we will analyze separately: you take a sigma protocol and apply the Fiat-Shamir transformation to it. In the first part of this talk I will discuss the security of the Fiat-Shamir transformation, where I can already give away that the main issue is how to deal with superposition access to the random oracle, about which we have seen two talks already. Then in the second part, Qipeng will talk about how to prove quantum security for the underlying sigma protocols. This separation also allows me to give a quick comparison between the two papers. In the first part, we both have a reduction; the only difference lies in the tightness of the reduction and, interestingly, also in the methods that we used. In the second part, both of our papers formulate a new property, and we demonstrate that this property is sufficient for quantum security. On top of that, Liu and Zhandry develop some extra technical tools that allow one to prove that certain schemes actually satisfy this property. Right, so first about the Fiat-Shamir transform. In a nutshell, it takes away the interaction from a sigma protocol. Given a sigma protocol, which is an interactive scheme, we apply the transform to obtain the non-interactive objects that I mentioned before. Of this transformation, it has long been known that classically it is secure in the random oracle model, in the sense that for any security notion, ranging from computational soundness to statistical proof of knowledge, name your favorite one, if the sigma protocol satisfies the property, then we can prove that the non-interactive Fiat-Shamir version does so as well. Up until recently, however, the quantum setting looked quite different.
We had only one positive result here, by Dominique Unruh in 2017. But this security proof applied only to the case where the underlying sigma protocol is statistically sound. So only one of the possible properties, and actually one that is not typically found in cryptographic applications. For the other properties, the computational ones for example, it was unknown whether they carry over. Proof of knowledge, which happens to be the one that is crucial when you want to prove unforgeability for Fiat-Shamir signatures, was also unknown. To make matters worse, there were even some negative claims as to the possibility of a security proof for these properties. However, the main result of both of our papers is that the Fiat-Shamir transformation is indeed also secure in the quantum setting, now in the quantum random oracle model. Exactly as in the classical case, any security notion that you can think of is inherited, with the only difference being that we now get a Q-squared security loss, or even a Q-to-the-ninth loss in the paper by Liu and Zhandry, compared to the linear loss that we have in the classical case. Nevertheless, this result allows us to conclude that if the underlying sigma protocol is quantum secure, then we have unforgeable Fiat-Shamir signatures. That condition in itself is already non-trivial; Qipeng will have more to say about it in the second part. Right, so I will first tell you a bit more about the technical details of our results, for which I will have to explain a bit about sigma protocols and how the Fiat-Shamir transformation works. I'll also say a bit about how to extend this paradigm to obtain signature schemes. Then I'll give you an intuition for the classical proof and the quantum issues that we find in the QROM, and finally the technical results that we have to overcome these issues. Okay, so a sigma protocol is an interactive proof of a special form that does not reveal anything beyond the truth of the statement.
This latter property, as you probably know, is what we call zero knowledge, the famous example being where somebody wants to prove that two graphs are isomorphic without revealing the isomorphism. A sigma protocol is a particular kind of zero-knowledge proof, namely one that works as follows. If some prover wants to prove the truth of a statement x using his knowledge of a witness w, he can do so by computing something we call the commitment, which he sends to the verifier, who then replies with a uniformly random challenge. It always has to be uniformly random. The prover in turn uses the challenge to compute a response, which he sends over to the verifier. And if this response satisfies a certain relation with respect to the commitment and the challenge, then the verifier accepts. A technical detail that I should mention: by P1 and P2 I of course mean two phases of the same algorithm, so implicitly they are allowed to share state, which is something that will become important in Qipeng's part of the talk. But for us, this suffices. Now, the Fiat-Shamir transform, as promised, takes away the interaction by replacing the challenge from the verifier in the second round with the hash of x and the commitment, which the prover can compute by himself using some publicly known hash function, of which you can find the code on Wikipedia or wherever. The main point is that it has to be one and the same hash function that is known to all parties. The prover then again uses this challenge to compute a response, and now sends over the commitment and the response in one single step. These two objects together make up the non-interactive proof. And to verify it, the verifier has to do one extra thing, namely recompute the challenge himself, which he does by looking up the same hash function, and again checks if the proper relation holds.
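To make this concrete, here is a minimal sketch of the Fiat-Shamir transform applied to a toy Schnorr-style sigma protocol over a tiny group. The parameters are illustrative only and completely insecure, and the function names are mine, not the papers':

```python
import hashlib

# Toy parameters (illustration only, NOT secure): g = 2 has order q = 11 mod p = 23
p, q, g = 23, 11, 2

def H(*parts):
    """Publicly known hash function standing in for the random oracle."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def fiat_shamir_prove(w, X, r):
    """Non-interactive proof: the verifier's random challenge is replaced
    by the hash of the statement X and the commitment."""
    com = pow(g, r, p)         # first message (commitment)
    ch = H(X, com)             # challenge computed by hashing -- no interaction
    resp = (r + ch * w) % q    # response
    return com, resp

def fiat_shamir_verify(X, com, resp):
    ch = H(X, com)             # verifier recomputes the same challenge himself
    return pow(g, resp, p) == (com * pow(X, ch, p)) % p

w = 7                          # witness (a discrete logarithm)
X = pow(g, w, p)               # statement
com, resp = fiat_shamir_prove(w, X, r=5)
assert fiat_shamir_verify(X, com, resp)
```

The commitment and response together form the non-interactive proof, exactly as in the picture above.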
Right, so to get signatures, a very simple extension of this idea: include the message that you want to sign in the hash. And to make the picture complete, we then have a public key and a secret key consisting of some x and some w from a hard relation, meaning w must be hard to obtain from just x. The challenge is now defined as the hash of the message that you want to sign, x, and the commitment, and the signature simply consists of the non-interactive proof that we have. As a small motivational note, I'd like to point out that we currently have three signature schemes in the second round of the NIST Post-Quantum Standardization Project that actually make use of this framework. Okay, now we get to the proof in the classical case. What do we want to do? We want to reduce the security of the non-interactive scheme to the security of the underlying sigma protocol. How do we do that? By assuming that we actually have some adversary who, making Q (a polynomial number of) queries to the random oracle, is able to break the non-interactive Fiat-Shamir scheme. Then we are going to play the sigma protocol, not using the honest prover but using some reduction algorithm that uses black-box access to this adversary in order to break the security of the sigma protocol. And here breaking the security can mean anything. It can mean that the reduction can convince the verifier without knowing a witness. It can mean that it can convince the verifier on some statement x that is not in the language. It really doesn't matter: any trick that the non-interactive adversary can do, we will show that the reduction can do as well in the sigma protocol. So how does that work?
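The signature extension can be sketched the same way: the only change from the plain non-interactive proof is that the message is folded into the hash. Again this is a toy, insecure parameter set, and the names are mine:

```python
import hashlib

# Toy Schnorr-style parameters (illustration only): g = 2 has order q = 11 mod p = 23
p, q, g = 23, 11, 2

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(w):
    """(X, w) from a hard relation: X = g^w, and recovering w from X
    is the discrete logarithm problem."""
    return pow(g, w, p), w     # public key X, secret key w

def sign(w, X, msg, r):
    com = pow(g, r, p)
    ch = H(msg, X, com)        # the signed message is included in the hash
    return com, (r + ch * w) % q

def verify(X, msg, sig):
    com, resp = sig
    ch = H(msg, X, com)        # verifier recomputes the challenge
    return pow(g, resp, p) == (com * pow(X, ch, p)) % p

X, w = keygen(7)
sig = sign(w, X, "hello", r=3)
assert verify(X, "hello", sig)
```

The signature is exactly the non-interactive proof (commitment, response), now bound to the message through the hash.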
Well, using its black-box access to the adversary, the reduction is going to run A while implementing the random oracle itself, which just means that it executes the adversary and, for each of these Q queries, replies with some arbitrary random challenge, up until a certain point, a randomly chosen query, where the reduction says: okay, now I'm going to carefully look at what the adversary is querying. It will be some x and some commitment. This commitment I'm going to pass on to the verifier that I want to fool. Then, of course, the reduction gets a random challenge from the verifier, which it passes on to the adversary again. Technically, we say that the reduction reprograms the random oracle on this particular input to the particular challenge that it obtained from the verifier. Then all it has to do is simply continue the run of the adversary, now answering every query with the new, reprogrammed oracle, and just wait until it finishes its computation and pops out some non-interactive proof, com-prime and some response. It then takes this response and presents it to the verifier, who will accept if the commitment in the final output happens to be the same one that the reduction found halfway when looking at this random query. Because by assumption the adversary can produce a valid non-interactive proof on some commitment, and if it's the same one, then the verifier will accept, which will happen with probability one over Q, hence the linear loss in this reduction. Right. In the quantum setting, however, we require a proof in the quantum random oracle model, where H is still a classical, uniformly random function, but now we allow the adversary superposition query access to H.
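The classical reduction just described can be sketched in a few lines. This is a toy illustration, not the exact algorithm from the papers: the "break" is simplified to outputting a pair (x, z) with z equal to the oracle value at x, and all names are mine:

```python
import random

def run_reduction(adversary, Q, verifier_challenge):
    """Toy classical reduction: simulate the random oracle lazily, intercept
    one uniformly chosen query, and reprogram it to the external verifier's
    challenge. A 'break' here means outputting (x, z) with z = H(x)."""
    oracle = {}                      # lazily sampled random oracle table
    i_star = random.randrange(Q)     # index of the query to intercept
    state = {"x_star": None}         # input captured at query i_star

    def H(x):
        # At the i_star-th fresh query, capture the input and reprogram
        # H(x_star) to the verifier's challenge.
        if state["x_star"] is None and len(oracle) == i_star and x not in oracle:
            state["x_star"] = x
            oracle[x] = verifier_challenge
        if x not in oracle:
            oracle[x] = random.randrange(1000)   # fresh random answer
        return oracle[x]

    x_out, z_out = adversary(H)      # continue the run to completion
    # The reduction wins if the final output reuses the intercepted input and
    # the response is valid with respect to the reprogrammed value.
    return x_out == state["x_star"] and z_out == verifier_challenge

# A dummy adversary making a single query and 'breaking' the scheme on it:
def adversary(H):
    x = "com"
    return x, H(x)

assert run_reduction(adversary, Q=1, verifier_challenge=42)
```

Against an adversary that makes Q distinct queries and reuses one of them in its output, the reduction succeeds with probability 1/Q, which is exactly the linear loss mentioned above.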
And I want to emphasize, maybe also for the people that just switched tracks during the break, that it's really important in post-quantum applications to look not just at the underlying hardness assumption, which of course has to be post-quantum secure; we also have to consider an adversary that makes these superposition queries. The motivation for this model comes from the fact that, as I explained, H is a publicly known hash function whose code you can just look up somewhere, which means that in real life the adversary is not interacting with some magical oracle. It actually has a circuit on its quantum computer to evaluate the hash function, which means that it can also evaluate this function on a superposition of inputs. Okay, but what does that do to our picture in the reduction? If we now want to reduce the quantum security, we again assume that we have an adversary that breaks the non-interactive scheme, this time making Q quantum queries to H. And suppose we try what we did before, having the reduction run the adversary and try to read off one of its queries. Well, then we run into what Mark just called the observer effect: trying to look at the quantum query can potentially destroy, or at least disturb, the query state, and with it also disturb the internal state of the adversary. Right, so if we were to do such a measurement, then we can no longer in general predict what the adversary is going to output, because its internal state might have been disturbed too much.
Now, the contribution of our papers is a technical result which in full generality says the following: take any Q-query quantum algorithm which produces some pair x and z, where x and z satisfy some particular relation, wait, I'm sorry, I said particular, I meant just any arbitrary relation with respect to the hash of x. If we measure one of the queries at random, just like in the classical case (a small caveat in Liu and Zhandry's analysis: you should measure either one or three of the queries, but that's just an artifact of the proof), so if we do such a measurement, find an input to the oracle x, and then reprogram, again as in the classical case, reprogram our oracle at this input to some fresh, arbitrary but uniformly random value Theta, then we can still continue the run of that adversary and at the end find some output x-prime, z, where x-prime actually equals the x that we found in the measurement, and furthermore this x-prime and z now satisfy the relation with respect to the newly programmed random value Theta. Well, that is, put a little bit more abstractly, exactly the thing that we need here: we wanted to measure one of the queries of the adversary, find some x and commitment, continue the run, and hope that we find some non-interactive proof where the commitment com-prime actually equals the one that we found halfway in the measurement. So, applying the result to this situation, we find that the response in the final output still does a good job, now with probability one over Q squared. Okay, so it's interesting to note that both of our papers obtained a very similar result using quite different methods. In our paper, we used only elementary techniques, very basic quantum formalism and some simple mathematical tools, and we obtained this Q-squared security loss, which we believe is optimal.
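A bit more formally, the measure-and-reprogram result just described can be stated roughly as follows (the notation here is my paraphrase of the talk, not verbatim from the papers, and constants are suppressed). Let A be any Q-query oracle algorithm outputting a pair (x, z). Run A, measure a uniformly random one of its queries to obtain an input x-hat, reprogram H(x-hat) := Theta for a fresh uniformly random Theta, and let (x', z) be A's final output. Then

```latex
\[
\Pr\bigl[\, x' = \hat{x} \;\wedge\; R(x', \Theta, z) \,\bigr]
\;\geq\;
\frac{1}{O(Q^2)} \cdot \Pr\bigl[\, R(x, H(x), z) \,:\, (x, z) \leftarrow \mathcal{A}^{H} \,\bigr],
\]
```

i.e., the reprogrammed run still succeeds, at the cost of the Q-squared loss mentioned above (versus the 1/Q factor of the classical argument).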
The other paper, by Liu and Zhandry, used the compressed oracle framework which Mark just talked about, which is a more general and stronger tool, but in this particular example it made for a less tight reduction. Right, so to recap, the take-home message from the first part of this talk is that we have now managed to reduce, even in the quantum setting, any security notion that you can think of from the non-interactive Fiat-Shamir scheme to the underlying sigma protocol, which poses the natural follow-up question: how can we prove a sigma protocol quantum secure? That is what Qipeng is going to talk about. Okay, okay. Thanks a lot for the first part. I'm going to talk about the second part, which is about quantum rewinding. Quantum rewinding is a technique which allows you to prove that special soundness implies soundness, or in other words, it helps you prove that a certain sigma protocol is a proof of knowledge. First, we give a definition called collapsing, which is a property of a sigma protocol: as long as the sigma protocol is collapsing, we can do quantum rewinding. In other words, it is a proof of knowledge, or an argument of knowledge. We also give two classical sufficient conditions which imply that a sigma protocol is collapsing. And we prove that SIS is collapsing, which means the Lyubashevsky signature is post-quantum secure, and this even holds for polynomial Q, which means we don't need a large parameter; we just need to make the parameter slightly bigger. Okay, so let me talk about our definition of soundness. For a sigma protocol, it's always easy to generate a transcript consisting of commitment, challenge, and response, because it usually satisfies honest-verifier zero-knowledge. So by soundness here, we actually mean that no algorithm without the witness can produce a valid response after the commitment is made, where the random challenge is also selected after the commitment.
And special soundness means that no algorithm can come up with two valid pairs under the same commitment but different challenges. So here's what we do for classical rewinding. Assume we have an algorithm which breaks soundness. The algorithm has two parts: P1 produces a commitment, and P2 is the algorithm for producing a valid response. After outputting the commitment, let's say the internal state of the algorithm is st. Given a challenge, it runs the algorithm P2 on the statement x, the commitment, and the random challenge, and let's assume that with 100% probability it produces a valid response. So that's the algorithm for breaking soundness. Now, if two random challenges are given, we simply copy the state st and run P2 twice. That is, we run P2 on the challenge ch and get a valid response for ch; then we rewind by just restoring st, and we run P2 on the other random challenge ch-prime. In this way, we get two valid transcripts under the same commitment, which breaks special soundness. This is how we do classical rewinding. However, for a quantum algorithm, this state is actually a quantum superposition, so we cannot follow this cloning technique, because we know it is hard to clone a quantum superposition. So, inspired by Unruh's work on collapse-binding commitment schemes, we show that the collapsing definition is enough for quantum rewinding. We also give two classical sufficient conditions. So I will introduce the definition of a collapsing sigma protocol, and I will mainly focus on the first classical sufficient condition, which is called a lossy function. Okay. So here is the definition of a collapsing sigma protocol: no adversary A can succeed in the following game with probability noticeably more than one half. Here is the game. We have algorithm A, which outputs a commitment first and gets a random challenge from the verifier.
Then it prepares a superposition over valid responses; by valid, I mean valid with respect to the commitment and the challenge. And it sends the superposition to the verifier. Then the verifier flips a coin. If the coin is one, it measures the superposition; otherwise it does nothing. The adversary succeeds if and only if it can guess the coin. In other words, a collapsing sigma protocol means there is no adversary that can tell whether the superposition was measured or not. With a collapsing sigma protocol, we can actually do quantum rewinding. Here's how we do that. Assume we have a quantum adversary A, also with two stages P1 and P2, and assume it breaks soundness. After outputting the commitment, let's say the algorithm has its own internal state, which is a superposition st. Given the first challenge, it simply runs the algorithm on the statement x, the commitment, and the challenge. For simplicity, let's just assume it's a unitary: the algorithm selects a unitary based on x, com, and ch, and it applies the unitary to the internal state st. And to make it even simpler, let's assume that after that we get a superposition over valid responses. The next thing to do is just measure the superposition, and we get a valid response. Because we know the sigma protocol is collapsing, A has no way to tell whether the superposition was measured or not; in other words, the measured superposition is basically indistinguishable from the original superposition. So we can easily get back to the original internal state just by applying the inverse of the unitary to the state. And because of the indistinguishability guaranteed by collapsing, we can go back to the internal state st, and given another challenge ch-prime, we can do the same: since we have already gotten back to the internal state st, we apply the unitary again, but under a different challenge.
And by assumption, that gives a superposition over valid responses which are valid under the commitment and the new challenge. So in this case, the procedure produces two valid pairs under the same commitment, which breaks special soundness. This is how we do quantum rewinding, okay? Now, here is one of the classical sufficient conditions for having a collapsing sigma protocol, which is called a lossy function; it is also a generalization of Unruh's work. The basic idea is very simple. Assume there is a collection of functions defined over the valid responses with respect to a fixed commitment and challenge. In injective mode, the function is an injective function over the domain. In constant mode, it is just a constant function. And no post-quantum adversary can distinguish which function is picked; in other words, it cannot tell whether the function is from the injective mode or the constant mode. Here is some intuition about why a lossy function implies collapsing. Assume we have a superposition over valid responses which is not measured. This is exactly the same as applying a constant function to the response and measuring the function register, because the second register is constant, so measuring it does not affect the first register. So these are exactly the same. And from the lossy function property, we know that the modes are indistinguishable, so we can apply the injective mode instead of the constant mode. Finally, because the function is injective, measuring the second register is exactly equal to measuring the first register. So the measured superposition is indistinguishable from the superposition that is not measured, okay. So here's how we rewind a sigma protocol quantumly. We can either prove the sigma protocol is collapsing directly, following the definition, which requires some knowledge about quantum information, or we can prove the sigma protocol has compatible lossy functions or separable functions, which is a purely classical condition.
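The lossy-function argument above can be summarized as a short hybrid chain. This is my paraphrase of the intuition from the talk, writing the superposition over valid responses as a sum over r, with a perfect-equivalence symbol for identical views and an approximation symbol for computational indistinguishability:

```latex
\begin{align*}
\text{no measurement on } \textstyle\sum_r \alpha_r \lvert r \rangle
&\;\equiv\; \text{append } f_{\mathrm{const}}(r),\ \text{measure the image register}
&& \text{(a constant value reveals nothing)} \\
&\;\approx\; \text{append } f_{\mathrm{inj}}(r),\ \text{measure the image register}
&& \text{(the two modes are indistinguishable)} \\
&\;\equiv\; \text{measure the response register itself}
&& \text{(injectivity: } f_{\mathrm{inj}}(r) \text{ determines } r\text{).}
\end{align*}
```

Reading the chain end to end: a measured superposition of valid responses is indistinguishable from an unmeasured one, which is exactly the collapsing property.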
For example, for the Lyubashevsky signature, the valid responses are SIS solutions, and we prove SIS is collapsing by showing there are lossy functions and separable functions. Okay, so in conclusion, we show that a sigma protocol with special soundness plus the collapsing property gives you a proof of knowledge. And an honest-verifier zero-knowledge proof of knowledge plus Fiat-Shamir unconditionally gives you a non-interactive zero-knowledge proof of knowledge and an unforgeable signature. As an example, we show the Lyubashevsky signature is post-quantum secure. Thanks. And we have time for a couple of questions. Just a quick question: as part of the motivation, you mentioned Picnic, Dilithium, and MQDSS. MQDSS is, yeah, Fiat-Shamir-based, but it's a five-round identification scheme, not a three-round one. Does that still work there, with some kind of loss in parameters? We are actually currently working on extending the proof that we have for sigma protocols, that is, three-round schemes, to multi-round schemes. Okay, so there's more that's going on that you have to handle, I guess? Yeah, it's a non-trivial extension, but we suspect that it should be possible. Okay, great, thanks. So you showed that a large class of sigma protocols, those that are collapsing, are quantumly sound. Is it known whether there are any sigma protocols that are not quantumly sound, that are, like, standard, classically sound but not quantumly sound? Are there any such examples? I'm sorry? Are there any examples of sigma protocols that are not collapsing and are also not quantumly sound? You mean, is there any example which is not collapsing, but also not sound quantumly? Oh, yes, there is an example shown in previous work. Okay, last call for questions. All right, let's thank Jelle, Qipeng, and all the speakers in this session. Thank you. Thank you.