Hi, I'm Lior, and I will talk about how to derive tighter security bounds for Schnorr identification and signature schemes based on a new high-moment forking lemma. This is joint work with Gil Segev. Schnorr's identification and signature schemes were proposed by Schnorr back in '91. They are simple and efficient, but at the same time allow for many generalizations extending their functionality to primitives such as multi-signatures, threshold signatures and ring signatures. This makes them an appealing choice of signature scheme, and indeed they have been widely adopted since the patent on them expired and are currently in wide use in a variety of applications, from cryptographic protocols to messaging applications to blockchains and cryptocurrencies. The security of Schnorr's identification and signature schemes is typically proven via a reduction to the discrete logarithm problem. Namely, the reduction assumes the existence of some successful impersonator against Schnorr's ID scheme, or a successful forger against Schnorr's signatures, and converts them into an algorithm for computing discrete logarithms which performs better than it should according to some underlying assumption. This seems natural, since the best known attacks against these schemes rely on discrete log computation, but unfortunately the reductions that we have to the discrete log problem are non-tight. This is not just a theoretical issue. When setting the parameters of the group, one has to choose between a larger group size, which means degraded efficiency, and a smaller group size, which means degraded provable security. So let's see exactly what the loss in the currently known reductions to the D-log problem is, and where it comes from. Essentially all reductions from the Schnorr ID and signature schemes to the D-log problem use some variant of the forking lemma, introduced by Pointcheval and Stern and later generalized by Bellare and Neven. 
This is a rewinding-based reduction technique that incurs a square-root loss, seemingly inherently. For example, this reduction technique can be used to convert an impersonator against Schnorr's ID scheme that runs in time t and has advantage epsilon into a D-log algorithm that runs in time roughly t as well, but has success probability which is roughly epsilon squared. So what does this tell us about the security of the Schnorr ID scheme? Shoup's bound on the hardness of the discrete logarithm problem in the generic group model tells us that epsilon prime is at most t prime squared over p, where p is the size of the group. So if we work in a group in which this bound is assumed to hold, such as some elliptic curve groups, the forking lemma implies that the advantage epsilon of the impersonator is at most the square root of t squared over p. This is a potentially much greater bound than the advantage of the best known attack on the Schnorr ID scheme, which is by discrete log computation and succeeds with probability roughly t squared over p. The situation is similar when considering Schnorr signatures. There, the forking lemma allows us to convert a forger that runs in time t and has advantage epsilon into a D-log algorithm that runs in time roughly t and has success probability roughly epsilon squared over q_h, where q_h is the number of random oracle queries issued by the forger. If we work in a group in which the D-log problem is assumed to be as hard as in the generic group model, this implies that epsilon is at most the square root of q_h times t squared over p. Again, this can be much greater than the best known attack via discrete log computation. These gaps between the provable security of Schnorr's schemes and the best known attacks against them can be significant. 
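Written out symbolically (suppressing constant factors), the calculation just described is the following: the forking lemma turns an impersonator with running time t and advantage ε into a D-log algorithm with running time t′ ≈ t and advantage ε′ ≈ ε², so Shoup's generic-group bound gives

```latex
\varepsilon^2 \approx \varepsilon' \;\le\; \frac{(t')^2}{p} \;\approx\; \frac{t^2}{p}
\quad\Longrightarrow\quad
\varepsilon \;\le\; \left(\frac{t^2}{p}\right)^{1/2},
```

and similarly for signatures, where ε′ ≈ ε²/q_h yields ε ≤ (q_h · t²/p)^{1/2}.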
These are just a few examples of different choices of parameters and the concrete gap that they induce between the provable security of Schnorr's ID scheme and the best known attack against it. And these are a few examples of the concrete gap in the case of Schnorr's signatures. As these numbers exemplify, closing the gap between the best possible security of Schnorr's ID and signature schemes and the square-root bound is not only of theoretical interest, but can also have significant practical implications. It should be mentioned that not all proofs of security for Schnorr's schemes go through a standard-model reduction to the D-log problem. One option that has been explored is to prove the security of these schemes in idealized models. In the generic group model, Shoup proved a tight information-theoretic bound on the advantage of any generic attacker. And in the algebraic group model, Fuchsbauer, Plouviez and Seurin recently proved a tight reduction to the D-log problem. Though tight, the drawback of these two results is that both the generic group model and the algebraic group model consider highly restricted classes of adversaries. In the standard model, Bellare and Dai recently presented a tight reduction from the security of the Schnorr ID scheme not to the D-log problem, but to a new problem which they introduced, called the multi-base discrete logarithm problem. The drawback of this approach, of course, is that it assumes the hardness of a newly introduced interactive problem rather than the standard D-log problem. Thus, a pressing question is: how inherent is the square-root loss when trying to reduce the security of Schnorr's schemes to the hardness of the well-studied D-log problem in the standard model? Or, in other words, can the square-root barrier be circumvented based on the hardness of the D-log problem? 
In this work, we answered this question in the affirmative by proving tighter bounds for Schnorr's ID and signature schemes based on the hardness of the D-log problem. First, we refined the assumed hardness of the D-log problem in the standard model by revisiting its generic-group hardness and distilling a key aspect of it. Then, we introduced a new proof technique which generalizes the forking lemma to a high-moment variant of it, and used this new technique and our refined assumption in order to derive tighter bounds for Schnorr's schemes. For Schnorr's ID scheme, we proved that any impersonator that runs in time t breaks the security of the scheme with probability at most t squared over p raised to the power of two-thirds, which improves upon the previously known bound of t squared over p raised to the power of one-half. Similarly, for Schnorr's signatures, we proved that any forger that runs in time t breaks the security of the scheme with probability at most q_h times t squared over p raised to the power of two-thirds, where again q_h is the number of random oracle queries issued by the attacker. This improves upon the previously known bound of q_h times t squared over p raised to the power of one-half. Finally, though we will not cover this in this talk, our approach can be generalized to apply to any ID or signature scheme which is obtained from a Sigma protocol with special soundness, such as the Okamoto ID and signature schemes. The remainder of the talk will be arranged as follows. We'll start by reviewing Schnorr's ID scheme and the existing reductions to the D-log problem. Then, we will present our refined assumption and our results in more detail, followed by the new high-moment forking lemma, and finally we'll conclude with some closing remarks. So, let's begin by recalling Schnorr's ID scheme; from this point on we'll focus mainly on this scheme, and you can check the paper for the extensions to Schnorr's signatures. 
So, the setting is that we have a prover P who wants to convince a verifier V that she knows the discrete log of some publicly known group element. We assume some publicly known and fixed group of order p, which is generated by some fixed generator g. V gets as input a public key, which is a group element, and the prover P gets as input the secret key x, which is the discrete log of this group element with respect to the generator g. Schnorr's protocol proceeds in three rounds. In the first round, known as the commitment, P chooses a uniformly random exponent r from Z_p and sends alpha, which is g raised to the power r, to V. In the second round, known as the challenge, V sends a uniformly random element beta in Z_p to P. Finally, in the third round, known as the response, P computes gamma, which is beta times x plus r, and sends it to the verifier, who accepts if and only if g raised to the power gamma is equal to pk raised to the power beta times alpha. Roughly speaking, the most basic security guarantee that the protocol provides is that no efficient attacker can impersonate P without knowing, so to speak, the secret exponent x, and we will formalize this guarantee later on. Actually, the protocol also provides honest-verifier zero knowledge, which means that security is retained even against impersonators that can passively eavesdrop on many honest executions of the protocol. Schnorr signatures are obtained from this protocol via the Fiat-Shamir transform, and again, for more detail, you can see the paper. Towards formalizing the security of the protocol, observe that the protocol satisfies the special soundness property. That is, one can extract the secret exponent x from two different accepting transcripts. Concretely, given two accepting transcripts that share the first message alpha but have different challenges beta and beta prime, x is obtained by computing gamma minus gamma prime over beta minus beta prime. 
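To make the protocol and the special-soundness extraction concrete, here is a minimal sketch in Python over a toy group (an order-11 subgroup of Z_23^*; real deployments use roughly 256-bit elliptic-curve groups, and all function names here are illustrative, not from the paper):

```python
import random

# Toy parameters, for illustration only: the subgroup of order p = 11
# inside Z_23^*, with generator g = 4 (4 = 2^2 has order 11 mod 23).
MOD, p, g = 23, 11, 4

def keygen():
    x = random.randrange(1, p)          # secret exponent
    return x, pow(g, x, MOD)            # (sk = x, pk = g^x)

def prover_commit():
    r = random.randrange(p)
    return r, pow(g, r, MOD)            # (state r, commitment alpha = g^r)

def prover_respond(x, r, beta):
    return (beta * x + r) % p           # response gamma = beta*x + r (mod p)

def verify(pk, alpha, beta, gamma):
    # Accept iff g^gamma == pk^beta * alpha.
    return pow(g, gamma, MOD) == (pow(pk, beta, MOD) * alpha) % MOD

def extract(beta, gamma, beta2, gamma2):
    # Special soundness: two accepting transcripts sharing alpha but with
    # distinct challenges reveal x = (gamma - gamma') / (beta - beta') mod p.
    inv = pow((beta - beta2) % p, -1, p)
    return ((gamma - gamma2) * inv) % p
```

Answering two different challenges from the same commitment recovers x via `extract`, which is exactly why an honest prover must use a fresh r in every execution.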
The special soundness property can be used to prove that the Schnorr protocol is a proof of knowledge: given a successful impersonator P star, we can extract from it the secret exponent x. The way this is proven is by employing a proof technique known as the forking lemma. There are several variants of this technique, but in all of them the general idea is to rewind P star in order to get two accepting transcripts, and then to apply special soundness in order to extract the secret exponent x. Let's take a look at two variants of the forking lemma and how they encounter the square-root barrier that we mentioned earlier. The first is due to Bellare and Neven, and the idea here is to rewind the impersonator P star exactly once. So the extractor first honestly interacts with P star in order to obtain a first transcript alpha, beta, gamma. Then it rewinds P star to the point right after the first message alpha, samples a fresh challenge beta prime, sends it to P star, and obtains a second transcript alpha, beta prime, gamma prime. First, note that the extractor runs in roughly the same time as P star. Second, Bellare and Neven proved that if P star breaks the security of the scheme with probability epsilon, then the extractor succeeds with probability which is about epsilon squared. The GGM bound for the D-log problem implies that the advantage epsilon of P star is bounded by the square root of t squared over p, so we encounter the square-root loss compared to the best known attacks. Another variant of the forking lemma was recently proposed by Bootle et al. In this variant, the extractor first runs P star once, and if P star is successful, then the extractor rewinds it repeatedly until it is successful again under a different challenge. If P star was successful in its first invocation, then the extractor will eventually succeed, so the success probability of the extractor is the same as that of P star, which is epsilon. 
The running time of the extractor is now unbounded, but it is not hard to see that in expectation it will terminate in time which is roughly t. Now we cannot use Shoup's bound for the discrete log problem in the GGM, since it doesn't apply to expected-time algorithms. Fortunately, Jaeger and Tessaro recently provided such a bound. Concretely, they proved that in the GGM, an algorithm that runs in expected time t prime solves the discrete log problem with probability at most the square root of t prime squared over p. Applying this bound to the rewind-until-success extractor, we again encounter the same square-root barrier. Other variants of the forking lemma that we will not cover in this talk encounter the square-root barrier as well. We are now ready to present our refined assumption on the hardness of the D-log problem. Consider an algorithm A for computing discrete logarithms in a cyclic group of order p generated by g. Our assumption is that for any such algorithm A, the probability that it succeeds in computing the discrete log of a uniformly random group element with respect to g is bounded by the expectation of T_A squared over p, where T_A is a random variable corresponding to the running time of A. We call this assumption the second-moment hardness of the D-log problem. A major upside of this assumption is that it only considers the very well-studied D-log problem and doesn't introduce any new problems. Additionally, this assumption holds in the GGM, as implied by Shoup's original proof and observed by Jaeger and Tessaro. In the paper, we also consider extensions of this assumption to arbitrary relations beyond just the D-log relation, and also to higher moments of the adversary's running time, potentially beyond the second moment. With this assumption, we can now state our results more accurately. So assume that we work in a group in which the D-log problem is second-moment hard. 
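Written out, the second-moment hardness assumption just stated reads as follows, with T_A denoting A's running time as a random variable over A's coins and the choice of the instance:

```latex
\Pr_{x \leftarrow \mathbb{Z}_p}\!\left[\, A(g, g^{x}) = x \,\right]
\;\le\;
\frac{\mathbb{E}\!\left[\, T_A^{2} \,\right]}{p}.
```

For a strict-time algorithm with T_A = t always, this collapses back to the familiar t²/p bound, so the refinement only bites for algorithms whose running time varies.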
Then, for any impersonator attacking the Schnorr identification scheme that runs in time t, the advantage in breaking the security of the scheme is at most t squared over p raised to the power of two-thirds, where, as before, p is the order of the group. For Schnorr signatures, we obtain a similar bound, but with a multiplicative factor of q_h raised to the power of two-thirds, where, as before, q_h is the number of random oracle queries issued by the attacker. Just to give you some sense of how much our result improves upon the square-root bound, you can find some concrete examples on the slides for the case of Schnorr's ID scheme. So, for example, if the group is of size 256 bits and the attacker runs in time 2 to the 64, our bound is better by a multiplicative factor of roughly 2 to the 21. Here you can find some concrete examples of the improvement we make for Schnorr signatures. So, for example, if the attacker also makes 2 to the 50 random oracle queries, our bound is better by a factor of roughly 2 to the 13. We can now present our high-moment forking lemma. So, the lemma does the following. It takes an impersonator P star that runs in time t and breaks Schnorr's ID scheme with probability epsilon, and it converts it into an extractor whose goal is to output two valid transcripts with distinct challenges. Moreover, we show that this extractor succeeds with probability roughly epsilon raised to the power of three-halves, and that the second moment of its running time is roughly t squared. Looking ahead, we can then use this extractor together with the special soundness property of Schnorr's protocol to get a discrete log algorithm with similar parameters. Then, our bound for the security of Schnorr's ID scheme will follow from our assumption on the second-moment hardness of the D-log problem. 
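As a sanity check on the concrete improvement factors quoted a moment ago, a few lines of arithmetic in log base 2, plugging the slide's parameters into the quoted bounds (nothing beyond those formulas), reproduce the 2^21 and 2^13 factors:

```python
# Example parameters from the talk: 256-bit group, t = 2^64, q_h = 2^50.
log_p, log_t, log_qh = 256.0, 64.0, 50.0

log_x = 2 * log_t - log_p            # log2 of t^2 / p  (= -128)

# Schnorr ID scheme: old bound (t^2/p)^(1/2) vs. new bound (t^2/p)^(2/3).
id_old = 0.5 * log_x                 # log2 of old bound (= -64)
id_new = (2 / 3) * log_x             # log2 of new bound (~ -85.3)

# Schnorr signatures: (q_h * t^2/p)^(1/2) vs. (q_h * t^2/p)^(2/3).
sig_old = 0.5 * (log_qh + log_x)     # log2 of old bound (= -39)
sig_new = (2 / 3) * (log_qh + log_x) # log2 of new bound (= -52)

id_factor = round(id_old - id_new)   # improvement exponent for the ID scheme
sig_factor = round(sig_old - sig_new)  # improvement exponent for signatures
```

Here `id_factor` comes out to 21 and `sig_factor` to 13, matching the factors of roughly 2^21 and 2^13 on the slides.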
Before we describe our extractor, note that the existing extraction procedures that we described earlier still encounter the square-root barrier, even with our second-moment hardness assumption. The single-rewind approach results in an extractor that has success probability epsilon squared and whose running time's second moment is t squared; the second-moment hardness of the discrete log then yields the bound of the square root of t squared over p. On the other hand, the rewind-until-success approach yields an extractor that has success probability epsilon and whose running time's second moment is t squared over epsilon, so the second-moment hardness of the discrete log again yields the square-root bound. The key idea behind our extractor is to carefully choose the number of rewinds so that it optimizes the trade-off between its success probability and the second moment of its running time. So we rewind the impersonator P star exactly B times for some predetermined parameter B, and the question that remains is how to choose B. What we'll do in the remainder of this talk is to analyze the reduction with B as a parameter, and then we'll see what choice of B is warranted by the analysis. First, let us define our extractor more explicitly. The extractor first honestly interacts with P star to obtain a transcript alpha, beta, gamma. It then checks if this transcript is accepting, and if not, it aborts. If the transcript is accepting, our extractor rewinds P star to the point just after the first message alpha B times over, and in each rewind it independently samples a fresh challenge beta i to obtain a transcript alpha, beta i, gamma i. The hope is that one of these rewinds will yield an accepting transcript with a challenge which is different from the first challenge beta. We'll now turn to analyze our extractor, starting from the second moment of its running time. The first interaction of our extractor with P star results in an accepting transcript with probability epsilon. 
So with probability 1 minus epsilon it does not, and the extractor aborts after invoking P star just once, so its running time is roughly the same as that of P star, which is t. If the first interaction with P star does result in an accepting transcript, which happens with probability epsilon, our extractor runs P star B additional times, so its running time in this case is roughly B times t. Overall, we obtain that the second moment of our extractor's running time is roughly t squared plus epsilon times B squared times t squared. An immediate observation is that setting B to less than 1 over the square root of epsilon doesn't really make sense. This is because for all values of B up to 1 over the square root of epsilon, the second moment of the extractor is dominated by the additive term of t squared, and obviously the larger we set B, the greater the success probability of the extractor will be. So from now on we will assume that B is at least 1 over the square root of epsilon, and the second moment of our extractor's running time in this case simplifies to roughly epsilon times B squared times t squared. As for the extractor's success probability, we prove a claim that states that if B is at least 1 over the square root of epsilon, then this probability is at least epsilon squared times B. Proving this claim is the most technical part of the proof. The main technical difficulty that arises is that, conditioned on the randomness of P star, the success probability of the extractor is a non-convex function of P star's advantage. This means that standard techniques that are often used in order to prove variants of the forking lemma, and which use Jensen's inequality, do not apply. Our solution to this difficulty is to bound this conditional probability from below by a carefully chosen function of epsilon, to argue that this function is convex, and only then to apply Jensen's inequality. 
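The trade-off just derived can be sketched numerically. The two formulas below are the constant-suppressed bounds from the analysis above, and the sample values of eps and t are arbitrary, chosen only to make the arithmetic exact:

```python
# Second moment of the B-rewind extractor's running time: the extractor
# always pays one run (~t), and with probability eps pays B more runs (~B*t).
def second_moment(eps, B, t):
    return t**2 + eps * (B * t) ** 2

# Lower bound on the extractor's success probability, valid for B >= 1/sqrt(eps).
def success_lower_bound(eps, B):
    return eps**2 * B

eps, t = 2.0**-20, 2.0**40
B = 2.0**10  # the choice B = 1/sqrt(eps) made in the talk

# At B = 1/sqrt(eps), the rewinding term eps*B^2*t^2 exactly matches the
# additive t^2 term, so a smaller B cannot push the second moment below ~t^2...
assert second_moment(eps, B, t) == 2 * t**2
# ...while the extractor still succeeds with probability eps^(3/2) = 2^-30.
assert success_lower_bound(eps, B) == 2.0**-30
```

Increasing B beyond 1/sqrt(eps) raises the success probability only linearly while the second moment grows quadratically in B, which is why the analysis picks the smallest admissible B.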
I just want to emphasize that this is a highly informal overview of our approach, and you can find the full proof in the paper. So now we can put everything together. Using our extractor and the special soundness property of Schnorr's protocol, we obtain a discrete log algorithm with parameters that are essentially the same as those of the extractor. We then use our assumption on the second-moment hardness of the D-log problem to argue that epsilon is bounded by B times t squared over p, where, recall, epsilon is the success probability of the underlying impersonator P star, t is P star's running time, and p is the size of the group. Since we wish to minimize the bound on epsilon, we can choose B to be as small as possible while still satisfying the condition that B is at least one over the square root of epsilon. Indeed, setting B to be exactly one over the square root of epsilon gives us the promised bound of t squared over p raised to the power of two-thirds. Let's conclude with some closing remarks. To put our results in perspective, note that ours is the first result that breaks the square-root barrier based on the hardness of the very well-studied discrete logarithm problem in the standard model. Previous results either encountered the square-root barrier, relied on highly restrictive idealized models, or introduced a new interactive problem that goes well beyond the standard D-log problem. So to recap, we presented tighter bounds for Schnorr's ID and signature schemes based on the hardness of the D-log problem. We did so by considering a refined assumption regarding the second-moment hardness of D-log. We then presented a new extractor that utilizes this assumption, and derived concrete security bounds for Schnorr's schemes that break the square-root barrier. 
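The final calculation reads as follows, with constants suppressed: combining the extractor's success probability (at least ε²B) with the second moment of its running time (roughly εB²t²) and the second-moment hardness assumption gives

```latex
\varepsilon^{2} B
\;\le\;
\frac{\mathbb{E}\!\left[\, T^{2} \,\right]}{p}
\;=\;
\frac{\varepsilon B^{2} t^{2}}{p}
\quad\Longrightarrow\quad
\varepsilon \;\le\; \frac{B\, t^{2}}{p},
```

and substituting the smallest admissible choice B = 1/√ε yields ε^{3/2} ≤ t²/p, that is, ε ≤ (t²/p)^{2/3}.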
We mentioned a couple of extensions that you can find in the paper: to other identification and signature schemes, such as those of Okamoto, and also to relations other than discrete log and to moments higher than the second. The main problem that remains open is, of course, proving optimal security bounds for Schnorr's schemes based on the hardness of the D-log problem. So that's it, and thank you for your attention.