Hi, I'm Lior, and this is a talk about simple and efficient batch verification techniques for verifiable delay functions. So let's start by defining delay functions, and then talk about verifiability. Roughly speaking, a delay function is a function which is efficiently computable, but only in a manner which is inherently sequential. One way to capture this is by introducing a delay parameter t. The function should then be computable on every input in time polynomial in t, for example t or t squared, but on the other hand, it should not be possible to evaluate the function on a randomly chosen input in time less than t, even with pre-processing and many parallel processors.

A verifiable delay function, or VDF for short, is a delay function which allows for fast verification. When computing the output f(x) of the function on input x, it should be possible to produce alongside the output a short proof pi asserting the validity of this output. Then, given x, f(x) and pi, one can verify that f(x) is indeed the output of the function on input x, and this verification should be much quicker than computing the function anew. Though this is a recent notion, VDFs have proven to be extremely useful for a wide array of applications. Just to name a few: time-based proofs of replication, verifiable randomness beacons, computational time-stamping, resource-efficient blockchains and much more.

The main candidates that we currently have for VDFs are based on the repeated squaring delay function put forth by Rivest, Shamir and Wagner back in '96. This function is defined with respect to a cryptographic group G, and on input x, the output is x raised to the power of 2 to the t. The fact that this is indeed a delay function is based on the assumption that there are groups in which this computation cannot be significantly sped up, even with pre-processing and a polynomial number of parallel processors. It's not hard to see that for this assumption to be plausible in some group, it should be hard to compute the group's order from its representation. We currently have two main families of candidates for such groups. The first is the family of RSA groups, and specific subgroups of RSA groups. For this family of groups, a recent result of Gilad Segev and myself showed an equivalence in the generic ring model between the sequentiality of repeated squaring and the factoring assumption. The second candidate family of groups is the family of class groups of imaginary quadratic number fields. Indeed, several works, starting with the works of Pietrzak and of Wesolowski, constructed VDFs based on the repeated squaring delay function. The basic idea in all of them is to augment this function with efficient proofs of correct exponentiation, on which we will focus later in this talk.

Now, in some scenarios, one might need to verify not just one, but many VDF outputs at the same time. As an example, consider verifying, using a VDF-based proof of replication, that some storage service maintains many copies of the same file, or verifying the randomness generated by some VDF-based randomness beacon over many epochs. The naive solution is to simply use the per-instance verification procedure of the VDF and to verify the outputs one by one. The downside of this approach, of course, is that it incurs a blow-up in both the proof size and the verification time which is linear in the number of outputs to be verified. The question is, can we do better than that?
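Before moving on, here is the repeated squaring function in code form, as a minimal Python sketch; the modulus N is a hypothetical RSA modulus whose factorization, and hence the group order, must be unknown to everyone:

```python
# A minimal sketch of the repeated squaring delay function of Rivest,
# Shamir and Wagner: f(x) = x^(2^t) mod N, computed by t sequential
# squarings. N is a hypothetical RSA modulus.

def repeated_squaring(x: int, t: int, N: int) -> int:
    """Computes x^(2^t) mod N by t sequential squarings."""
    y = x % N
    for _ in range(t):
        y = (y * y) % N
    return y

# Anyone who knows the factorization N = p*q can shortcut the t squarings
# by reducing the exponent modulo the group order, which is exactly why
# the order must be hard to compute:
#   y = pow(x, pow(2, t, (p - 1) * (q - 1)), N)
```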
Before going back to verification of VDF outputs, I want to take a little detour and talk about related work from over 20 years ago by Bellare, Garay and Rabin that deals with batch verification of exponentiations, but in cyclic groups of prime order. The setting is the following: we have n exponents, x1 to xn, and n group elements, h1 to hn, and we wish to verify that hi is equal to g raised to the power of xi for each index i, for a publicly known generator g of the group. Bellare, Garay and Rabin presented several batching techniques to solve this problem, and some of their ideas are actually implicit in some of the recent works even on single-instance VDF verification. We explicitly observe this connection and build upon it for the sake of batch verification of VDFs.

But in order to do so, we need to overcome two issues. The first is that they consider a setting with no additional proofs: all you have are the xi's and the hi's, and you want to verify that they indeed satisfy a certain relation. In the VDF setting, there is an external prover proving that the shared input satisfies some relation, so we also need to account for the proof length, or for the communication complexity in the interactive setting. The second issue is that the more efficient procedures of BGR rely on the group being of prime order, which is not the case for the candidates that we currently have for groups of unknown order.

Going back to VDFs, Wesolowski, in an update to his original work, already presented a batch version of his proof of correct exponentiation. This batch protocol implicitly relies on ideas from BGR, and instead of producing a separate proof for each output, it produces a single, shorter proof for all outputs, which is also quicker to verify. Unfortunately, this batch protocol was presented not as a modular compiler that works with any proof of correct exponentiation, but rather as a direct generalization of Wesolowski's single-instance protocol. The more serious issue is that this protocol is proven secure based on the adaptive root assumption. This is not an issue when compared to Wesolowski's single-instance protocol, since it already relies on this assumption. However, the adaptive root assumption is stronger than the assumptions needed to prove the security of other proofs of correct exponentiation. The soundness of Pietrzak's protocol is based on the low-order assumption, which is seemingly weaker than the adaptive root assumption. Moreover, a very recent work by Block et al. modifies Pietrzak's protocol and obtains an information-theoretically sound proof of correct exponentiation. So this raises the question of whether we can apply the techniques from Wesolowski's batch protocol to these protocols as well, and must we introduce additional assumptions in order to do so?

What we do in this work is present general and modular batch verification techniques for VDFs that are based on a proof of correct exponentiation, or a PoCE for short. First of all, we define soundness notions for batch PoCEs, and then we present two compilers that take any single-instance PoCE and compile it into a batch PoCE. The first compiler doesn't make any group-specific assumptions and can be applied in any group, and the second compiler has better parameters, but it relies on the low-order assumption that we will describe later in the talk. Both of these compilers extend the ideas of BGR to the VDF setting and improve on the naive approach for batch verification that we described before.
Additionally, we also have two specific protocols in RSA groups where the modulus is the product of two safe primes. First, as we will discuss, the low-order assumption doesn't hold in these groups, so we show how to extend the second compiler to these groups as well. And secondly, to complete the picture, we present a single-instance PoCE which is information-theoretically sound in these groups.

This is a table summarizing the parameters achieved by our compilers. Let's say that we want to verify n instances and we start with a single-instance PoCE with proof size l_pi and verification time t_pi. In this case, the naive solution gives you a proof of size n times l_pi and verification time n times t_pi. Our compiler in general groups gives you a proof of size lambda times l_pi and verification time lambda times n times t_G plus t_pi, where lambda is the security parameter and t_G is the time it takes to compute the group operation in the underlying group. Our improved compiler, which is based on the low-order assumption, doesn't add anything to the proof size, and its verification time is n times lambda times t_G plus t_pi. Note that in both of our compilers, the proof size is completely independent of the number n of instances to be verified. When it comes to verification time, there is still a linear dependence on n, but it is decoupled from the dependence on the verification time t_pi of the underlying PoCE. So even if the PoCE is very expensive in terms of verification time, we still only pay for it essentially once, and not once per VDF output.

Now, it should be mentioned that VDF proofs for repeated squaring are typically obtained by applying the Fiat-Shamir heuristic to an interactive protocol. If one doesn't want to assume the soundness of Fiat-Shamir, our compilers can be instantiated as an additional message in the underlying interactive protocol. In this case, the communication complexity and verification time are essentially those listed on the slide, and we will see this in more detail later on.

So the talk outline from now on is the following: we'll start by defining batch PoCEs, then we'll present our two compilers, then we'll move on to talk about our protocols in safe-prime RSA groups, and then we'll conclude.

So let's start by defining batch PoCEs. First, we'll define single-instance PoCEs, and the setting is this. We have some underlying group G, two group elements X and Y, and an exponent E. The prover wishes to convince the verifier that Y is equal to X raised to the power of E. A PoCE is a possibly interactive protocol between the two parties, and the completeness requirement is that if indeed Y is equal to X to the E, then the verifier accepts with probability one. Security is captured by the delta-soundness property, which states that if Y is not equal to X to the E, then the probability of acceptance is bounded by delta plus some negligible function of the security parameter lambda. In the paper, we actually consider an adaptive notion of soundness, allowing the malicious prover to choose X, Y and E, but for ease of presentation, we'll forget about adaptivity in this talk. Now, obviously, these completeness and soundness properties are trivial to satisfy on their own by simply having the verifier compute X to the E. So the non-triviality requirement for PoCEs is that the running time of the verifier needs to be much shorter than the time required for this trivial verification.
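To see what this non-triviality means for repeated squaring, where E equals 2 to the t, here is the trivial verification spelled out, as a minimal sketch with names of our own choosing; any interesting PoCE verifier must run much faster than this:

```python
# Trivial verification of the claim Y = X^(2^t) mod N: simply redo the
# t sequential squarings. This takes exactly as long as evaluating the
# delay function itself, which is what a PoCE is designed to avoid.

def trivial_verify(x: int, y: int, t: int, N: int) -> bool:
    z = x % N
    for _ in range(t):
        z = (z * z) % N
    return z == y % N
```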
As mentioned before, a non-interactive PoCE can be obtained from an interactive PoCE using the Fiat-Shamir transform, and this is of course assuming that the PoCE is indeed compatible with Fiat-Shamir, as is the case for existing PoCEs. In a batch PoCE, we have n pairs of group elements, and the prover tries to convince the verifier that Yi is equal to Xi raised to the power of E for every i. As before, if this is indeed the case, the verifier should accept with probability one, and the delta-soundness requirement is that if this is not the case, then the verifier accepts with probability at most delta plus some negligible function of the security parameter lambda. Now, in the batch setting, we want to achieve verification time which is non-trivial not only with respect to verifying the statement by direct computation, but also with respect to the naive approach that simply applies a separate PoCE to each pair Xi, Yi. We also want the overall communication to be non-trivial in the same respect. As before, a non-interactive batch PoCE can be obtained from an interactive one using the Fiat-Shamir transform, and as we will see, all of our compilers preserve compatibility with Fiat-Shamir.

We can now move on to consider our compilers, and as a warm-up, consider the following simple compiler for transforming a PoCE pi with delta-soundness into a batch PoCE. First, the verifier sends to the prover a uniformly random subset I of the indices from 1 to n. Both parties then locally compute X', which is the product of all the Xi's for little i in the subset capital I, and Y', which is defined similarly. Then, the parties simply execute the PoCE pi on the joint input X', Y' and E. This is the entire compiler. The main observation underlying the soundness guarantee of this compiler is that if Yi is not equal to Xi to the E for some i, then the probability that Y' is equal to X' to the E is at most one half. This immediately implies that the compiler yields a batch PoCE which is (one half plus delta)-sound. The communication complexity is the n bits needed to represent the subset I, plus the communication required for a single execution of pi. And the verification time is roughly n times t_G plus t_pi, where t_G is the time required for a single multiplication in the group, and t_pi is the verification time of pi.

Obviously, we're not happy with a soundness error of one half, and we wish to reduce it. Naturally, this is done by parallel repetition. So now, the verifier chooses m independent random subsets of indices I1 to Im, where m is a parameter of our compiler, and sends these subsets to the prover. The two parties compute the corresponding m pairs of group elements, X'_j and Y'_j for each j from 1 to m, and then run m parallel executions of pi, one for each computed pair of elements. By the independence of the subsets, we can show that if Yi is not equal to Xi to the E for some i, then the probability that Y'_j is equal to X'_j to the E for every j is at most one half raised to the power of m. This implies that the resulting batch protocol is (2 to the minus m plus delta)-sound. For example, setting m to be equal to the security parameter lambda, or even anything super-logarithmic in lambda, we get a protocol which is delta-sound, which is the best we can hope for. The problem that remains is that the communication added by the compiler is linear in n.
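Here is a minimal sketch of this random subsets compiler, with the interaction collapsed into one procedure for readability; run_poce is a hypothetical stand-in for a full execution of the underlying single-instance PoCE pi:

```python
import secrets

# A minimal sketch of the random subsets compiler. run_poce(x, y, e)
# stands for an execution of the underlying PoCE pi on the claim y = x^e.

def batch_verify_random_subsets(xs, ys, e, N, m, run_poce):
    """Batch-checks ys[i] == xs[i]^e mod N for all i.

    Each of the m random subsets catches any fixed cheating pair with
    probability at least 1/2, so a false batch survives all m checks
    with probability at most 2^-m (plus pi's own soundness error delta).
    """
    n = len(xs)
    for _ in range(m):
        # The verifier's challenge: a uniformly random subset of indices.
        subset = [i for i in range(n) if secrets.randbits(1)]
        x_prime, y_prime = 1, 1
        for i in subset:
            x_prime = (x_prime * xs[i]) % N
            y_prime = (y_prime * ys[i]) % N
        # One execution of pi on the combined pair.
        if not run_poce(x_prime, y_prime, e):
            return False
    return True
```

With m set to the security parameter lambda, this matches the lambda times l_pi proof size from the table above.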
Now, this communication which is linear in n is not a problem if the Fiat-Shamir heuristic is applied, since all of this added communication is from the verifier to the prover, so it can be computed locally by the parties. If one wishes not to rely on the Fiat-Shamir heuristic and to retain an interactive protocol, then this first message from the verifier to the prover can be de-randomized and made much shorter. For example, this can be done by having the verifier send a seed to a pseudo-random generator. This results in communication complexity which is m times the communication complexity of pi, since we run m executions of pi, plus the seed length of the PRG. The verification time is now m times n times t_G plus t_pi, plus whatever time it takes to evaluate the PRG. It should be mentioned that one can also de-randomize the verifier's message using combinatorial, non-cryptographic tools like epsilon-biased sets, but this would incur a slightly larger communication overhead.

Okay, so let's move on to consider our second compiler, and to do that, we first need to recall the low-order assumption. This assumption is parameterized by some integer s and is captured by the following experiment. First, we sample a group G according to some underlying distribution and give the description of G to the adversary A. The adversary then needs to output a group element x and an integer omega, such that x is not the identity element, omega is smaller than the parameter s, and omega is a multiple of the order of x in G. We say that the s-low-order assumption holds if the probability that A succeeds is negligible in the security parameter. Candidates in which this assumption might hold for super-polynomial values of s are the group QR_N of quadratic residues modulo N and its isomorphic group QR_N^+ of signed quadratic residues, the quotient group Z_N^*/{plus-minus 1}, and class groups of imaginary quadratic number fields for certain parameter choices. In some of these cases, the s-low-order assumption even holds unconditionally for s which is exponential in lambda, and you can see the paper for a more exhaustive discussion.

The second compiler is defined as follows. The verifier first samples n independent integers, a1 to an, from the set 1 to s, where s is a parameter of the compiler. You can already guess that it's not a coincidence that we use the same letter we used for the parameter of the low-order assumption. The verifier then sends these ai's to the prover, and the two parties compute X' as the product over i of Xi raised to the power of ai, and Y', which is defined similarly. The parties then execute pi on the input X', Y' and E. The soundness of the resulting batch protocol is based on the following lemma, which states, at least informally, that if Yi is not equal to Xi to the E for some i, and the probability that Y' is equal to X' to the E is at least one over s plus epsilon, then we can break the s-low-order assumption in the group with probability at least epsilon squared. At a high level, the proof of the lemma is by a rewinding reduction. Observe that if we have integers a1 to an such that the induced Y' is equal to the induced X' raised to the power of E, and the same holds when we change ai to some other ai', then we can divide the two equalities and conclude that Yi over Xi to the E is a group element whose order divides ai minus ai'.
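Before finishing the rewinding argument, here is a minimal sketch of the compiler itself, again with the interaction collapsed and with run_poce as a hypothetical stand-in for the underlying PoCE:

```python
import secrets

# A minimal sketch of the random exponents compiler. Soundness relies
# on the s-low-order assumption in the underlying group.

def batch_verify_random_exponents(xs, ys, e, N, s, run_poce):
    """Batch-checks ys[i] == xs[i]^e mod N for all i.

    The verifier's random exponents a_i are drawn from {1, ..., s}.
    A cheating prover that passes with probability noticeably above
    1/s would yield, by rewinding, a non-identity element of order
    below s, breaking the s-low-order assumption.
    """
    n = len(xs)
    a = [1 + secrets.randbelow(s) for _ in range(n)]
    x_prime, y_prime = 1, 1
    for xi, yi, ai in zip(xs, ys, a):
        x_prime = (x_prime * pow(xi, ai, N)) % N
        y_prime = (y_prime * pow(yi, ai, N)) % N
    # A single execution of the underlying PoCE on the combined claim.
    return run_poce(x_prime, y_prime, e)
```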
Returning to the rewinding argument: since Yi is not equal to Xi to the E, this group element is not the identity element, and since ai and ai' are both between 1 and s, their difference is smaller than s, and so we break the s-low-order assumption. So this lemma immediately implies that the resulting batch protocol is (one over s plus delta)-sound. Note that the reduction is non-tight, as we move from epsilon to epsilon squared, so it's worth mentioning that we do have tight reductions for specific groups, and you can see the paper for more details on that. Also recall that, as we mentioned, we do have candidates for groups in which we believe that the s-low-order assumption holds for super-polynomial values of s. When instantiated in these groups, our compiler yields a delta-sound batch protocol, which is the best we can hope for. The communication complexity of the resulting protocol is n times log s for the first message of the verifier, plus the communication complexity of pi, and the verification time is roughly n times log s times t_G, since each exponentiation takes roughly log s group operations, plus the verification time of pi. As before, we can get rid of the added message of the verifier using Fiat-Shamir, or we can de-randomize it, for example using a PRG.

Now let's move on to discuss our protocols in safe-prime RSA groups. Just so we're on the same page, we're now considering the RSA group Z_N^* of integers modulo N, where the modulus N is the product of two primes p and q. Note that the s-low-order assumption cannot hold in this group for any non-trivial value of s, since minus one modulo N is always an element of order two in this group. This is not just a problem in the proofs: indeed, the protocols of Pietrzak and Wesolowski can only make sure that y is equal to plus-or-minus x to the e, and the same problem arises in the random exponents compiler that we just saw. A possible solution that has been suggested is to quotient out plus-minus one from the group. But in some cases, one might want to stick with the group Z_N^*, for example due to implementation issues or compatibility with other cryptographic schemes. In this case, one can consider settling for the above-mentioned weaker security guarantee, as it might be sufficient for some applications. Another possibility, which we will now see, is to have the prover prove that we are not settling for this weaker security, or in other words, that yi is not equal to minus xi to the e for every i. And we do that for the special case in which p and q are safe primes.

As a first step, let's see how to prove that y is not equal to minus x to the e for a single pair of elements x and y, and we'll do this by generalizing an idea of Di Crescenzo et al. The prover computes (e+1) over 2, rounds it up, raises x to the power of the result, and sends the resulting group element u to the verifier. The verifier then computes z as y times x^(1 + ((e+1) mod 2)), and accepts if and only if z is equal to u squared. The main observation is that if y is equal to minus x to the e, then z must be a quadratic non-residue modulo N. This is true since in this case we can write z as minus one times x^(e + 1 + ((e+1) mod 2)). When p and q are safe primes, minus one is always a quadratic non-residue. On the other hand, e + 1 + ((e+1) mod 2) is always an even integer, so x raised to this power is a quadratic residue. Hence z is the product of a quadratic non-residue and a quadratic residue, which means that z is indeed a quadratic non-residue.
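Here is this single-pair check as a minimal sketch, assuming a safe-prime modulus N; all function names are ours:

```python
# A minimal sketch of the single-pair check that y != -x^e mod N for a
# safe-prime modulus N (generalizing the idea we attribute above to
# Di Crescenzo et al.).

def prove_not_negated(x: int, e: int, N: int) -> int:
    """Prover: u = x^ceil((e+1)/2) mod N."""
    return pow(x, (e + 2) // 2, N)  # ceil((e+1)/2) == (e+2)//2

def verify_not_negated(x: int, y: int, e: int, u: int, N: int) -> bool:
    """Verifier: z = y * x^(1 + ((e+1) % 2)) mod N; accept iff z == u^2.

    If y == x^e, then z = x^(e+1+((e+1)%2)) = u^2, so the check passes.
    If y == -x^e, then z is -1 times an even power of x; for safe primes
    -1 is a quadratic non-residue, so z is a non-residue and cannot be
    a square, whatever element u the prover sends.
    """
    z = (y * pow(x, 1 + ((e + 1) % 2), N)) % N
    return z == pow(u, 2, N)
```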
So z cannot be equal to u squared, regardless of what element u the prover sends. Now we want to generalize the same approach to prove that yi is not equal to minus xi to the e for n pairs xi and yi. The way Di Crescenzo et al. did this is by repeating the protocol from before n times over, which results in a proof size which is linear in n, and this is exactly what we're trying to avoid. So the solution will be to apply the same ideas as in the random subsets compiler. For each i, the prover computes ui from xi, and the verifier computes zi from xi and yi, as before. But instead of having the prover send all of these ui's to the verifier, the verifier now samples m subsets I1 to Im and sends them to the prover. For each j, the prover computes wj as the product of all the ui's for little i in Ij, and sends the wj's to the verifier. Finally, the verifier computes for every j an element tj as the product of all the zi's for little i in Ij, and accepts if and only if tj is equal to wj squared for every j. Incorporating the ideas from before with the ideas of the random subsets compiler, one can show that the soundness error of this protocol is at most 2 to the minus m. As before, we can get rid of the verifier's first message using Fiat-Shamir, or de-randomize it to make it shorter.

The last protocol that we will see in this talk is an information-theoretically sound PoCE in RSA groups that are defined by two safe primes. The only PoCE that we know of that is secure in Z_N^* is the recent protocol of Block et al., extending the protocol of Pietrzak. The upside of their protocol is that it works in any group, but the downside is that it incurs a blow-up of a factor of lambda in communication when compared to the protocol of Pietrzak. What we will see now is a protocol that incurs a blow-up of a factor of only two. It can be proven information-theoretically sound in RSA groups that are defined by two safe primes, but it can also be assumed sound for other reasonable choices of moduli.

We start by describing the protocol of Pietrzak, and for this presentation we focus on the case where the exponent is 2 to the t and the prover wishes to prove that y is equal to x raised to the power of 2 to the t modulo N. Pietrzak's protocol proceeds in iterations, where in each iteration the prover computes z as x raised to the power of 2 to the t over 2, and sends z to the verifier, and the verifier replies with a random integer r between 1 and 2 to the lambda. The input to the next iteration is then x', which is x raised to the power of r, times z; y', which is z raised to the power of r, times y; and t', which is t over 2. After log t iterations we get to t equals 1, and then the verifier can verify that y is equal to x squared on their own. As we mentioned before, this protocol is insecure in RSA groups, since in each iteration it might be the case that z is equal to minus x raised to the power of 2 to the t over 2. To make sure that this is not the case, we ask the prover to additionally compute in each iteration the element u as x raised to the power of 2^(t/2 - 1) plus 1, and send u to the verifier, and the verifier then rejects if x squared times z is not equal to u squared. We will not see the proof of soundness of this protocol in this talk, but it relies on similar ideas to those we saw before, and on the fact that all elements in Z_N^*, when N is the product of two safe primes, are either of order at most two or of order at least 2 to the lambda.
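Here is one halving iteration of this modified protocol as a minimal sketch, with the interaction again collapsed and all names ours; the sign check follows the single-pair idea from before:

```python
import secrets

# A minimal sketch of one halving iteration of Pietrzak's protocol for
# the claim y = x^(2^t) mod N, augmented with the extra element u that
# rules out the sign ambiguity in Z_N^* for a safe-prime modulus N.

def pow2exp(x: int, k: int, N: int) -> int:
    """x^(2^k) mod N via k sequential squarings."""
    for _ in range(k):
        x = (x * x) % N
    return x

def halving_round(x, y, t, N, lam=128):
    """Reduces the claim y = x^(2^t) to a half-size claim, or rejects."""
    assert t % 2 == 0
    half = t // 2
    # Prover: the midpoint z = x^(2^(t/2)) and u = x^(2^(t/2 - 1) + 1).
    z = pow2exp(x, half, N)
    u = (pow2exp(x, half - 1, N) * x) % N
    # Verifier's sign check: reject unless x^2 * z == u^2. If the prover
    # sent z = -x^(2^(t/2)), then x^2 * z is a quadratic non-residue
    # for safe primes and cannot be a square, so no u can pass.
    if (pow(x, 2, N) * z) % N != pow(u, 2, N):
        return None  # reject
    # Verifier: a random challenge r in {1, ..., 2^lam}.
    r = 1 + secrets.randbelow(2**lam)
    # Both parties: fold the two half-claims into one.
    x_next = (pow(x, r, N) * z) % N   # x' = x^r * z
    y_next = (pow(z, r, N) * y) % N   # y' = z^r * y
    return x_next, y_next, half       # new claim: y' = x'^(2^(t/2))
```

Running log t such rounds and checking y = x squared at the end gives the full protocol; the per-round overhead compared to Pietrzak's protocol is just the single extra element u, which is the factor-two blow-up mentioned above.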
Okay, so let's conclude. What we saw in this talk are simple and efficient batch verification techniques for VDFs, focusing on proofs of correct exponentiation. Concretely, we saw two compilers that take any PoCE and turn it into a batch PoCE, improving upon the naive solution of simply repeating the PoCE n times over. We saw the random subsets compiler, which works in any group, and we saw the random exponents compiler, which assumes the low-order assumption in the group. We also saw specifically how to port the random exponents compiler to RSA groups where the modulus N is the product of two safe primes, and we saw an information-theoretically sound PoCE in these groups as well. An interesting open question is to devise batch verification techniques for other VDFs that do not rely on repeated squaring and batch proofs of correct exponentiation, like isogeny-based constructions or constructions in prime fields. Okay, so that's the end of the talk, and thank you for listening.