Welcome to this talk about the measure-and-reprogram technique 2.0, multi-round Fiat-Shamir and more. My name is Jelle Don, and this is joint work with Serge Fehr and Christian Majenz. Our work is about proving Fiat-Shamir digital signatures, and in general Fiat-Shamir zero-knowledge proof systems, secure against quantum attackers, where by secure we mean secure in the quantum random oracle model. What we do is extend an existing Q-ROM technique to a larger class of applications, notably multi-round Fiat-Shamir signatures, of which MQDSS is an example, Bulletproofs, and sequential OR proofs. And finally, we also show that the reductions we get from applying these techniques are essentially tight. I'll dive right in by explaining the quantum random oracle model. Suppose we have some protocol that makes use of a public hash function; proving security in the Q-ROM then means that we model the public hash function as an external random oracle, to which all parties in the protocol have quantum query access. This means that the function cannot be computed locally, but all parties can query it on a superposition of inputs. This is a natural assumption: since the hash function is public, anyone can build a quantum circuit to evaluate it, and once you have a quantum circuit that evaluates the function, it is easy to evaluate it on a superposition of inputs. The problem is that in many classical random oracle model proofs, we want to observe the queries that the adversary makes. But as you probably know, observing a quantum state can cause it to collapse. And if the query state of the adversary collapses, this may well collapse the internal state of the adversary, and then we can no longer predict anything about the adversary's output. In general, that is, because we present a theorem that deals with so-called multi-input reprogrammability of the Q-ROM.
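To make "quantum query access" concrete, here is a minimal toy sketch (my own illustration, not from the talk). A query to a public function f is modeled as a unitary; in the phase-oracle convention, O_f|x> = (-1)^{f(x)} |x>, so a single query can act on a superposition of all inputs at once.

```python
# Toy illustration (mine, not from the talk): a quantum query to a public
# function f is a unitary, so it can act on a superposition of inputs.
# Phase-oracle convention: O_f |x> = (-1)^{f(x)} |x>.
f = {0: 0, 1: 1, 2: 1, 3: 0}            # some fixed public 2-bit function

def query(state):
    # apply O_f to a state given as a list of amplitudes, one per input x
    return [(-1) ** f[x] * a for x, a in enumerate(state)]

psi = [0.5, 0.5, 0.5, 0.5]              # uniform superposition over all x
out = query(psi)                        # ONE query "touches" every input
print(out)                              # [0.5, -0.5, -0.5, 0.5]
```

Measuring `out` to learn a query input would destroy this superposition, which is exactly the difficulty the talk is pointing at.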
It says that if we are in the situation where we have an adversary making q quantum queries to some random oracle H, and it then outputs an array (I use bold font here for arrays of input values to the oracle) together with some additional output z, such that we know something about the probability that x, the hash values of x, and z jointly satisfy some arbitrary predicate V, then there exists a simulator that can sort of creep in between the adversary and the oracle, choose n of the adversary's queries at random, measure them, and, on the inputs that it finds (in this example n is 2 and the simulator finds x_i and x_j), reprogram the oracle to fresh random values theta_i and theta_j, and continue the run of the adversary such that its final output x, z will satisfy the same predicate V, but now with respect to these freshly programmed random values theta. As you can see, we can even compare the probabilities for specific choices of this array x_0, but by summing the inequality we can also make more general statements. Of course, as you can also see, this all comes at a multiplicative loss of order q to the power 2n, but the two quantities are still polynomially related for constant or logarithmic n. Another observation we can make is that, instead of talking to a real random oracle, the simulator can also use a quantum-secure pseudorandom function, as long as A is at least computationally bounded, so that it will not notice the difference. Right, so that's the main technical result. It may seem a bit abstract at this point, but we can actually use it and apply it to prove the security of multi-round Fiat-Shamir. Very quickly: multi-round Fiat-Shamir takes some public-coin interactive proof system Pi and turns it into a non-interactive scheme.
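As a purely classical toy of the measure-and-reprogram idea (my own sketch, not the quantum argument itself): the oracle is lazily sampled, the simulator intercepts the adversary's query, reprograms it to a fresh value theta, and the adversary's final output then satisfies the predicate with respect to theta.

```python
import secrets

# Classical toy of measure-and-reprogram (illustrative only). The oracle
# is lazily sampled; the simulator intercepts the adversary's query,
# reprograms it to a fresh theta, and the final output satisfies the
# predicate with respect to theta instead of the original hash value.
table = {}
def H(x):
    if x not in table:
        table[x] = secrets.randbits(32)   # lazy sampling of the oracle
    return table[x]

def adversary(oracle):
    # outputs (x, z) such that the predicate "z == H(x)" holds
    x = b"some input"
    return x, oracle(x)

x, z = adversary(H)
assert z == H(x)                          # predicate w.r.t. the real oracle

theta = secrets.randbits(32)
def reprogrammed(x):
    table[x] = theta                      # reprogram H at the measured input
    return theta

x, z = adversary(reprogrammed)
assert z == theta == H(x)                 # predicate now holds w.r.t. theta
```

In the quantum setting the simulator cannot simply read every query; it must pick n of the q queries at random and measure only those, which is where the q^{2n} loss comes from.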
And we show that the advantage of the best adversary against the non-interactive scheme is at most of order q to the power 2n times the advantage of the best adversary against the interactive scheme. And indeed this is tight, because in our paper we provide an attack. First we provide an attack for typical three-round schemes, so-called sigma protocols, which indeed boosts the success probability of the best interactive adversary by a factor q squared, showing that for n equals 1 this loss is optimal. Then we extend the attack to a somewhat artificial multi-round scheme and get almost the same boost, except for a factor n to the power minus 2n. But since we usually consider n to be constant, that means that asymptotically q to the power 2n is optimal as well. Okay, for the rest of the talk I'll give you a bit more detail about how the Fiat-Shamir transformation works and how the original result is applied to prove it secure. Then we'll come to the motivation for this new work, namely multi-round Fiat-Shamir, and I'll discuss what we need to prove that secure. Then I'll give you the proof idea for our main result, and I'll finish off by talking about yet another application, namely sequential OR proofs. Right, so suppose that we have a three-round identification scheme where some prover can prove its identity by sending a commitment to a verifier and then, upon receiving a uniformly random challenge, uses its knowledge of some secret key to compute a response. The verifier, who of course knows the public key of this prover, then computes some predicate on the messages to verify that the prover must indeed have known the secret key.
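In symbols, and in my own notation (with epsilon_NI and epsilon_I the success probabilities of the best q-query adversaries against the non-interactive and interactive schemes), the two results just stated read roughly as follows:

```latex
\[
\varepsilon_{\mathrm{NI}}(q) \;\le\; O\!\left(q^{2n}\right)\cdot \varepsilon_{\mathrm{I}},
\qquad\text{while the attacks achieve}\qquad
\varepsilon_{\mathrm{NI}}(q) \;\ge\; \Omega\!\left(\frac{q^{2n}}{n^{2n}}\right)\cdot \varepsilon_{\mathrm{I}},
\]
```

with the n = 1 (sigma protocol) case of the lower bound being the clean factor q squared.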
Now, since this is an interactive protocol, we could ask whether we can make it non-interactive, and the Fiat-Shamir transformation says that we may do so by introducing a public hash function H and then letting the prover, instead of waiting for a random challenge, compute the challenge himself, simply by hashing the public key and the commitment. The prover then only needs to send the commitment and the response, since the verifier on his side also knows how to evaluate H, so he can recompute the challenge and verify again that the response is correct with respect to this particular commitment and this particular challenge. Additionally, the prover can send a message along with the commitment and response, and include it in the hash. Including it in the hash ensures the integrity of the message, which, combined with the proof of identity, makes for a digital signature. So this is the idea of Fiat-Shamir signatures. To prove this construction secure in the Q-ROM, Don, Fehr, Majenz and Schaffner in 2019 presented the measure-and-reprogram technique, which is basically the same as the multi-input version that I already presented to you, except that now the output of the adversary is just a single hash input x along with z, and the simulator chooses just one of the adversary's queries at random, measures it, and reprograms the oracle on this input to a fresh random value theta. Then we again get the guarantee that some predicate V holds, now with respect to theta, up to a q squared loss. Here we also have an additive error term, but the theorem comes with the promise that even when summing over all possible instances of x_0, the error term remains negligible, so we don't have to worry about it. Okay, now I'll finally show you how to apply this tool in a reduction for the plain Fiat-Shamir transformation.
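As a concrete toy instance of the transformation, here is a hedged sketch of a Schnorr-style Fiat-Shamir signature over a deliberately tiny, insecure group; all parameter choices and identifiers are mine, purely for illustration.

```python
import hashlib
import secrets

# Toy Schnorr-style Fiat-Shamir signature. The group is tiny and
# INSECURE; names and parameters are illustrative only.
p, q, g = 23, 11, 2                      # g = 2 has order 11 mod 23

def H(*parts):                           # the public hash, mapped into Z_q
    digest = hashlib.sha256(str(parts).encode()).digest()
    return int.from_bytes(digest, "big") % q

sk = secrets.randbelow(q)
pk = pow(g, sk, p)

def sign(msg):
    r = secrets.randbelow(q)
    com = pow(g, r, p)                   # commitment
    c = H(pk, com, msg)                  # challenge = hash, no interaction
    return com, (r + c * sk) % q         # response

def verify(msg, sig):
    com, resp = sig
    c = H(pk, com, msg)                  # verifier recomputes the challenge
    return pow(g, resp, p) == (com * pow(pk, c, p)) % p

assert verify(b"hello", sign(b"hello"))
```

Note how the message enters the hash alongside the public key and commitment, which is exactly what turns the identification scheme into a signature.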
What we want to do in the reduction is turn a prover against the non-interactive Fiat-Shamir scheme into a prover for the interactive scheme, to show that both are approximately equally hard to break. The pattern can be matched as follows. We know that the adversary will output a public key, a commitment and a response. The public key and the commitment take the role of x, and the response takes the role of z. This means that we can start up an interactive protocol by running the adversary as a subroutine and just wait until the particular query comes up that we have randomly chosen to measure. Once we've measured it, we have a public key and a commitment, so we can send them to the interactive verifier. The challenge that we get in return we program into the oracle and feed back to the adversary. Now we just have to wait for the adversary to finish, and we get a response, which we then forward to the verifier. Of course, with the inequality at hand, we know that the probability that this response is correct with respect to this challenge, which is represented here by theta, is approximately the same, up to the q squared loss, as the probability that the original adversary breaks the non-interactive Fiat-Shamir scheme. So that's the reduction. But as I said, this is only plain Fiat-Shamir for three-round schemes. The main topic of the current work is multi-round Fiat-Shamir. And indeed there exist many (2n+1)-round public-coin interactive proof systems, for constant but also logarithmic n, where we can again remove the interaction in a Fiat-Shamir-like way. It looks like this. We have an interactive scheme consisting of many rounds, and just as before, all the uniformly random challenges can be replaced by outputs of some public hash function, which the verifier can compute on its own, so that the prover only needs to send the n different commitments and a final response.
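The plumbing of this reduction can be sketched classically as follows (a toy of my own, with the "forger" implemented honestly using the secret key just so the run succeeds): the simulator answers the intercepted hash query with the interactive verifier's challenge, and the forger's response is then valid against that challenge.

```python
import secrets

# Toy reduction sketch (my own framing): turn a forger against the
# non-interactive scheme into an interactive prover by intercepting the
# forger's hash query and injecting the verifier's challenge.
p, q, g = 23, 11, 2                      # tiny INSECURE toy group
sk = secrets.randbelow(q)
pk = pow(g, sk, p)

def forger(oracle):
    # stands in for the adversary; "forges" by honestly signing with sk
    r = secrets.randbelow(q)
    com = pow(g, r, p)
    c = oracle((pk, com))                # the query we intercept ("measure")
    return com, (r + c * sk) % q

challenge = secrets.randbelow(q)         # obtained from the interactive
def simulator_oracle(x):                 # verifier after sending (pk, com)
    return challenge                     # programmed in as the oracle answer

com, resp = forger(simulator_oracle)
# the interactive verifier accepts iff g^resp == com * pk^challenge (mod p)
assert pow(g, resp, p) == (com * pow(pk, challenge, p)) % p
```

In the real quantum reduction the intercepted query is of course a measurement of one of q superposition queries chosen at random, which is where the q squared loss enters.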
And that's enough for the verifier to check the verification predicate. But now, of course, in the reduction we need to extract all of these commitments, all of these hash inputs, in order to reprogram the oracle values to the challenges that we get from the interactive verifier. To do that, obviously what we need is a measure-and-reprogram technique that can handle multiple measurements. And of course that is exactly the theorem that I already presented to you at the beginning of this talk. But let's for a moment assume that we don't yet know how to relate the left-hand-side to the right-hand-side scenario. How would we go about proving this inequality starting just from the single-input measure-and-reprogram tool? Well, what we do is start from this multi-input adversary and simply rewrite its output, the array consisting of x's. We rewrite it as just x1 together with some z prime, where we sort of put all the other x values, x2 up to xn, inside z prime, which also contains the original additional output z. So we really didn't do anything except a formal rewriting trick. But now the output has exactly the form to which we can apply the single-input result. Namely, we get the existence of a simulator that picks one of A's queries at random, measures it, finds x1, and reprograms the oracle at x1 to theta1, such that the predicate now holds with respect to theta1 instead of H(x1). Note that here we made the choice to let S talk to the real random oracle instead of letting it use a pseudorandom function. The reason is that we can now consider A and S together as yet another algorithm, which has this particular output x1, z prime, which we can again rewrite such that now x2 is the prominent hash input and z double prime contains all the other x values and the original z.
So, as you can guess, we can apply the single-input result again to now measure x2, and continuing for n times we will eventually get an algorithm with outputs x1 up to xn and z, such that the predicate V checks out with respect to theta1 up to theta n. But what is the loss factor that we get from this inductive application? Well, from the first application we get a q squared loss as well as an additive error term, and after the second application we're up to q to the fourth and have two error terms. Continuing the pattern, we indeed get the promised q to the power 2n multiplicative loss, and a sum of n error terms. And if you remember, we had the promise that for each i, summing over all possible instances of x_{i,0}, the error term remains negligible. So surely, summing over n of them, and keeping in mind that in applications n is constant or logarithmic, the whole sum will remain negligible. But the problem is that this is not what we need. If we have a careful look at the inequality down here, we see that it is for one particular choice of the array x_0, so for one choice of the combination x1 up to xn, and if we sum over all possible arrays, we sum over many more values than just each of the x_i summed independently. So we can no longer control the size of this combined error. This proved to be a substantial barrier to completing the proof. So what we had to do was go back to the original single-input result, and you'll now see why we call it the measure-and-reprogram technique 2.0: we improved the original single-input result. We gave a different proof that does not need this negligible error term, which makes the statement cleaner. For single-input applications this gives only a negligible quantitative improvement, but most importantly, we lose the error terms in the multi-input case, and we get the result that we wanted.
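Written out (notation mine, with epsilon^(i) the success probability after the i-th rewriting step and delta_i the additive error from the i-th application of the single-input result), the accumulation looks like:

```latex
\[
\varepsilon \;\le\; O(q^{2})\,\varepsilon^{(1)} + \delta_{1}
\;\le\; O(q^{4})\,\varepsilon^{(2)} + \delta_{2} + \delta_{1}
\;\le\;\cdots\;\le\; O(q^{2n})\,\varepsilon^{(n)} + \sum_{i=1}^{n}\delta_{i},
\]
```

and it is the sum of the delta_i terms, once summed over all arrays x_0, that the original theorem cannot control.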
Now, one small issue that I have to mention: if we want to apply this multi-input reprogramming result to multi-round Fiat-Shamir, there's one more thing we need to take care of. As you can see, the inductive application of the single-input result does not give us any guarantee on the order in which the hash inputs are extracted. In this example, x2 is extracted before x3, but also before x1. And in the reduction that we want to perform, the interactive verifier expects the commitments in a particular order; we cannot just send it com2 first and com1 only at some later point. So somehow we need to ensure that we extract the commitments in the right order. The way to do that is simply to include the previous challenge in the hash for the next challenge, which in this example means that before the adversary can even query x2, it needs to know the value of theta1, since theta1 is part of x2. That forces the adversary to query all the hash inputs in the correct order. This is not our contribution; it is folklore knowledge, also for classical multi-round Fiat-Shamir, but I wanted to mention it to complete the analysis. Okay, so finally a few words about another application of our new technique, namely sequential OR proofs. This is something introduced by Liu, Wei and Wong in 2004, and it's Fiat-Shamir with a twist, which allows a prover to prove the truth of at least one of two statements x1, x2 without revealing which one. So it's really a proof of the disjunction. And how does it work? We start from a simple three-round sigma protocol and use a hash function to turn it into a non-interactive scheme, but now the challenges are computed cross-wise. The prover has to provide com1, res1, com2 and res2, and it uses H(com2) to compute the response that will be used to prove the truth of statement x1, and vice versa it uses H(com1) to compute the response that will be verified to determine the truth of statement x2.
Now if we look carefully, we see that only for the statement corresponding to the commitment that is queried to H first does the prover actually need to know a witness in order to compute a valid response. Why is this so? Well, let's have a look. Suppose for now that com1 is the commitment that is queried first. Then, before having done anything towards com2, the prover already knows H(com1); in other words, it knows the challenge that it will need to compute a response to. But by the honest-verifier zero-knowledge property, knowing the challenge before fixing the commitment allows one to compute a fake proof, that is, to find some com2 and response res2 such that the predicate is satisfied for x2, even without knowing a witness for x2. And once com2 has been determined by this fake proof, the prover can query it to H to find the challenge for x1, to which, now using a witness for x1, it can compute a valid response, so that the other predicate is satisfied as well.
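The prover strategy just described can be sketched as follows (a Schnorr-style toy of my own with tiny, insecure parameters; Python 3.8+ for the negative modular exponent). Note how the cross-wise chaining c2 = H(com1), then c1 = H(com2), also fixes the order in which the hash queries can be made.

```python
import hashlib
import secrets

# Toy sequential-OR prover (Schnorr-style, tiny INSECURE parameters),
# following the steps just described; all identifiers are mine.
p, q, g = 23, 11, 2
sk1 = secrets.randbelow(q)               # we know a witness for x1 only
pk1 = pow(g, sk1, p)
pk2 = pow(g, secrets.randbelow(q), p)    # statement x2: witness unknown to us

def H(x):                                # hash to a challenge in Z_q
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

# 1. honest commitment for the statement we hold a witness for
r1 = secrets.randbelow(q)
com1 = pow(g, r1, p)
c2 = H(com1)                             # challenge for x2 is known early...
# 2. ...so x2 can be faked (HVZK): pick resp2 first, solve for com2
resp2 = secrets.randbelow(q)
com2 = (pow(g, resp2, p) * pow(pk2, -c2, p)) % p
# 3. only now is the challenge for x1 fixed; answer it with the witness
c1 = H(com2)
resp1 = (r1 + c1 * sk1) % q

# the verifier checks both predicates cross-wise
assert pow(g, resp1, p) == (com1 * pow(pk1, c1, p)) % p
assert pow(g, resp2, p) == (com2 * pow(pk2, c2, p)) % p
```

Both checks pass even though only a witness for x1 was used, which is exactly the disjunction property: the transcript does not reveal which statement is the true one.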
So the security reduction works similarly: we want to use a non-interactive adversary to convince the interactive verifier on one of the two statements. But note that in order to do so, we have to extract both com1 and com2 from the non-interactive adversary. We have to extract com1 because this is the commitment that we are going to submit to the verifier; remember, we're assuming that x1 is the statement that is actually true, so com1 is the one that we submit. But we also need to extract com2, because com2 is the input to the random oracle on which we need to reprogram in order to inject the challenge that we get from the verifier, so that the response computed by the non-interactive adversary will be valid with respect to com1 and the challenge that we reprogrammed to. So indeed, using the multi-input measure-and-reprogram technique, we can for the first time provide a Q-ROM reduction for sequential OR proofs. That concludes the talk. Thank you for listening.