Hi everyone, I'm Gili, and I'm going to talk about batch verification in generic-group accumulators. This is joint work with Gil Segev. Let's start by quickly reviewing the notion of a cryptographic accumulator. An accumulator is a fundamental cryptographic primitive that produces a short commitment to a set of elements. The set may be static, in the sense that all elements are known ahead of time, or dynamic, in the sense that elements may be added to or removed from the accumulator at any point in time. In addition, the property that distinguishes accumulators from standard short commitments is their support for publicly verifiable proofs of membership with respect to the accumulated set, and sometimes even proofs of non-membership. Over the years, accumulators have been found very useful for authenticating remotely stored large datasets, enabling the retrieval of individual elements with significant savings in terms of both communication and computation. Known constructions of accumulators can be roughly classified into two categories: hash-based constructions and group-based constructions. Hash-based constructions generate a short commitment via a Merkle tree. The length of the resulting commitment is independent of the number of accumulated elements, and the length of membership proofs and the verification time are both logarithmic in the number of accumulated elements. Group-based constructions, exploiting the structure provided by their underlying groups, lead to accumulators in which the length of the commitment, the length of membership proofs, and the verification time are all independent of the number of accumulated elements. Most notably, such accumulators have been constructed in RSA groups and in bilinear groups, and in both cases the constructions do not exploit any particular property of the representation of the underlying groups, and so they are generic-group constructions.
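To make the hash-based approach concrete, here is a minimal Python sketch of a Merkle-tree commitment (my own illustration, not code from the talk): the commitment is a single root hash, while a membership proof consists of one sibling hash per tree level, so both proof length and verification time are logarithmic in the number of accumulated elements.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Bottom level: hashes of the accumulated elements.
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                       # duplicate the last node if odd
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                               # levels[-1][0] is the short commitment

def prove(levels, index):
    # Membership proof: one (position, sibling hash) pair per level -> O(log n) length.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))
        index //= 2
    return proof

def verify(root, element, proof):
    # Recompute the path to the root; O(log n) hash evaluations.
    node = h(element)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

For example, accumulating five elements yields a tree of depth three, and each proof carries just three sibling hashes regardless of the element proved.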
Motivated by recent applications of accumulators to stateless blockchains and interactive oracle proofs, Boneh, Bünz, and Fisch observed that a useful feature of accumulators is the ability to aggregate proofs for several elements and then to batch verify them. Hash-based accumulators seem somewhat less suitable for offering such features with practical efficiency guarantees, and therefore Boneh, Bünz, and Fisch relied on the structure provided by RSA groups and, more generally, by unknown-order groups such as the class group of an imaginary quadratic number field. Specifically, they showed that membership proofs and non-membership proofs for any subset of T elements can be aggregated into a single proof whose length is independent of T. Then, their aggregated proofs are verified via an interactive protocol where the number of group operations performed by the verifier is again independent of T. Specifically, the verifier in their protocol performs T modular multiplications and only lambda over log lambda group operations, where lambda is the security parameter, instead of T times lambda over log lambda group operations as in the verification of T individual proofs. Then, by applying the Fiat-Shamir transform with a hash function that produces random primes, Boneh, Bünz, and Fisch showed that their interactive verification protocol yields a non-interactive, publicly verifiable verification procedure. Analogous results were subsequently obtained in bilinear groups with essentially the same parameters. Other than applying the Fiat-Shamir transform for obtaining a non-interactive verification procedure, these constructions are generic-group constructions. Given the key importance of non-interactive verification in most applications that involve accumulators, this leads us in this work to the fundamental question of whether non-trivial batch verification in generic-group accumulators indeed requires interaction.
We prove a tight lower bound on the number of group operations performed during batch verification by any generic-group accumulator. Stating our result somewhat informally, we prove that any generic-group accumulator that stores less than a trivial amount of information must perform T times lambda over log lambda group operations for the non-interactive batch verification of any subset of T elements. In particular, this rules out non-trivial non-interactive batch verification. Our result holds both for known-order groups, even multilinear ones, and for unknown-order groups, where it matches the asymptotic performance of the known bilinear and RSA accumulators. It should be noted that whereas the generic group model captures generic computations in known-order groups quite accurately, its variant that considers unknown-order groups is somewhat limited in capturing generic computations in such groups; please see the paper for more details. In order to state our theorem a bit more formally, we need some additional notation. We denote by n_acc the number of group elements and by ell the number of additional explicit bits that are generated when accumulating K elements, and we denote by Q the number of group-operation queries that are issued when verifying a batch membership proof for a subset of T elements out of the K accumulated elements. In order to easily understand our lower bound, we first need to focus on a few terms. The first is log of (the size of X_lambda choose K), which is now highlighted in yellow. This is the expected number of bits required for an exact representation of K elements that are taken from the domain X_lambda. The second is n_acc times log of (n_acc plus 1), plus ell, which is now highlighted in green. This turns out to be the amount of information that is actually stored, from the verification algorithm's point of view. Finally, the difference between the yellow term and the green term, divided by K, is the average information loss per accumulated element.
Now our theorem states that the number Q of queries that are issued when batch verifying a proof for a subset of size T is at least T times the average information loss per accumulated element, times 1 over log lambda. In particular, let's consider the case where the size of the domain is exponential in the security parameter, which is indeed the case for the known accumulators in RSA and bilinear groups. Then either the amount of information stored by the accumulator is almost trivial, or the number of queries that are issued when batch verifying T elements is at least T times lambda over log lambda. In other words, if the amount of information stored by an accumulator is bounded away from the information-theoretic bound, then non-trivial batch verification is impossible. Finally, I would also like to note that our result extends somewhat beyond the generic group model. Specifically, it holds in an augmented model that captures a bounded amount of non-generic information; please see our paper for more details. The rest of this talk will follow this outline. We will start by defining the generic group model, within which we prove our result. Next, we'll define in more detail accumulators within this model. Then we will present a simplified version of our proof, and finally, we will conclude with some closing remarks and open problems. So let's begin by discussing the generic group model. A generic-group algorithm is an algorithm that does not exploit the representation of the underlying group in any way. This is captured via an oracle which manages the access of all algorithms to the group. In this work, we use the generic group model suggested by Maurer. In this model, group elements do not have an explicit representation, and instead algorithms specify their queries by pointing to previously seen group elements.
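Putting the yellow and green terms together, the bound just described can be written, roughly and with illustrative notation (up to constants; see the paper for the precise statement), as:

```latex
Q \;\geq\; \frac{T}{\log \lambda} \cdot
\underbrace{\frac{1}{K}\left( \log \binom{|\mathcal{X}_\lambda|}{K}
  - \bigl( n_{\mathsf{acc}} \cdot \log(n_{\mathsf{acc}} + 1) + \ell \bigr) \right)}_{\text{average information loss per accumulated element}}
```

When the domain size is exponential in lambda and the stored information is bounded away from the information-theoretic term, the average loss per element is on the order of lambda, which yields Q at least on the order of T times lambda over log lambda.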
For example, an algorithm may ask the oracle to apply the group operation to the fourth element and the seventh element that have appeared in the computation so far. This model is a bit different from the generic group model suggested by Shoup. In Shoup's model, each group element has an explicit random representation. These two models have been shown to be polynomially equivalent for certain problems, such as the discrete logarithm problem, but generally the two models are incomparable. Looking ahead, a natural open problem that I would like to already point out is whether our result can be proved within Shoup's model. One should note that our result can be circumvented by applying the Fiat-Shamir transform, and that the random injective mapping used in Shoup's model for explicitly representing group elements might potentially be exploited towards this goal. Let's define the model more formally. Any cyclic group of order p is isomorphic to the additive group Z_p, and therefore group elements in this model are identified with the elements of Z_p. A generic computation is associated with a table B of Z_p elements managed by the oracle. This table is initialized with the Z_p elements corresponding to the input elements that are provided to an algorithm, and the table B is always initialized with the generator 1 at its first entry. A generic algorithm can then issue two types of queries to the oracle: group-operation queries and equality queries. To issue a query, an algorithm specifies the indices in the table of the two elements that it wishes to compare, or to which it wants to apply the group operation, and also the type of operation, plus or minus. In response to a group-operation query, the oracle adds or subtracts the two corresponding elements in the table and places the result in the next vacant entry. In response to an equality query, the oracle compares the two Z_p elements from the table and answers accordingly.
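In Python, Maurer's oracle can be sketched as follows (an illustration of the model, not code from the paper): the table B holds Z_p elements, and algorithms only ever see indices into it, never the elements themselves.

```python
class GenericGroupOracle:
    """A sketch of Maurer's generic group oracle for the additive group Z_p."""

    def __init__(self, p, inputs=()):
        self.p = p
        # Table B: entry 0 always holds the generator 1, followed by the inputs.
        self.B = [1] + [x % p for x in inputs]
        self.queries = 0                      # number of group-operation queries

    def group_op(self, i, j, sign=+1):
        # Group-operation query: add or subtract entries i and j,
        # place the result in the next vacant entry, return its index.
        self.queries += 1
        self.B.append((self.B[i] + sign * self.B[j]) % self.p)
        return len(self.B) - 1

    def equal(self, i, j):
        # Equality query: compare the Z_p elements at entries i and j.
        return self.B[i] == self.B[j]
```

Note that an algorithm interacting with this oracle never sees the underlying Z_p values; it learns only table indices and equality answers, which is exactly what "not exploiting the representation" means.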
When a generic algorithm wants to output a group element, it outputs the index of the corresponding entry in the table B. In addition to group elements, algorithms may receive as input, or include in their output, explicit bit strings as well. This model also naturally captures interactive computations, allowing one generic algorithm to pass group elements to another generic algorithm by pointing to elements in the table B. We consider two flavors of generic groups: groups of known order and groups of unknown order. In the case of known-order groups, which is the more standard one in the context of the generic group model, all algorithms receive the order of the underlying group as an explicit input. In the case of unknown-order groups, which is somewhat less common in the context of the generic group model, the order of the underlying group is not included as an explicit input to the algorithms. Still, however, the corresponding order-generation algorithm is always publicly known. In this talk, for simplicity, we will consider the case where the order p is known to all algorithms. After discussing the generic group model, we can now discuss accumulators in this model. A generic-group accumulator is a triplet of generic algorithms defined as follows. The algorithm setup receives as input a set X and outputs an accumulator, denoted ACC, and a state. The algorithm prove receives as input an accumulator ACC, a state, and a set S, and outputs a proof pi. Finally, the algorithm verify receives as input an accumulator ACC, a set S, and a proof pi, and outputs accept or reject. The correctness requirement for this most basic form of accumulators is quite natural: for any set X of accumulated elements, if we generate a membership proof via the algorithm prove for any subset S of X, then the algorithm verify should output accept.
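To fix the syntax of the triplet, here is a deliberately trivial Python instantiation (my own illustration, not a real construction): it satisfies correctness by simply storing the entire set inside the accumulator, which is exactly the kind of non-succinct solution a real accumulator must avoid.

```python
class TrivialAccumulator:
    """The (setup, prove, verify) triplet, instantiated in the most naive way:
    the 'accumulator' is the whole set, so correctness is immediate."""

    def setup(self, X):
        # Accumulate the set X; return the accumulator ACC and a state.
        return frozenset(X), None

    def prove(self, acc, state, S):
        # Produce a (vacuous) batch membership proof pi for a subset S.
        return "pi"

    def verify(self, acc, S, pi):
        # Accept iff S is a subset of the accumulated set.
        return frozenset(S) <= acc
```

The whole point of real accumulators, of course, is that ACC must be short, and our lower bound concerns exactly the regime where ACC stores far less information than the set itself.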
The notion of security is also quite natural, and considers generic adversaries that can issue a polynomial number of queries but are unbounded in terms of their internal computation. The notion of security asks that any adversary wins the following experiment with only a negligible probability. First, the adversary specifies a set of elements X and receives an accumulator ACC that is honestly generated for X. Then the adversary can ask for honestly generated membership proofs for subsets S of X. The goal of the adversary is to output a pair (S*, pi*) that is accepted by the verification algorithm with respect to the accumulator ACC, although S* is not a subset of X. Given this notion of security, note that we prove our lower bound for the most basic form of accumulators, that is, static membership-only accumulators. Therefore, our result holds in particular for accumulators with more advanced features, such as dynamic accumulators that support both membership and non-membership queries. We can now move on to present an overview of the main ideas underlying our proof. Let me just note that for the purpose of this talk, we present a highly simplified variant of the proof, with several simplifying assumptions. We show that if an accumulator's verification algorithm issues an insufficient number of group-operation queries while batch verifying a subset of T elements, then there exists an attacker A that breaks its security. Our attacker issues a polynomial number of group-operation queries while being computationally unbounded in terms of internal computation. This suffices for ruling out generic-group constructions, since in the generic group model, problems such as discrete log are hard even for computationally unbounded algorithms, as long as they are bounded in their number of group-operation queries. Our attack can be divided into two steps.
In the first step, the attacker chooses a random set, partitions the set into disjoint subsets, and records the view of the verification algorithm when batch verifying proofs for these disjoint subsets. In the second step, the attacker exploits this information for generating a false batch membership proof. Let's describe each step in more detail. In the first step, the attacker samples a random set X of size K and asks to accumulate this set. The attacker then partitions X into K over T subsets, a number which we denote by V, each of size T. The attacker asks for batch membership proofs for each subset. Then the attacker executes the verification algorithm for each subset, forwarding the algorithm's queries to the oracle O. Now let's look into the view of the verification algorithm in these executions, which the attacker needs to record. The attacker records the equality pattern among the group elements of ACC, and for this we need n_acc times log of (n_acc plus 1) bits. A simplifying assumption for this talk is that ACC, and also the membership proofs, include only group elements and not explicit bit strings. Then, for each subset S_i, the attacker records the verifier's view in the corresponding execution of the verification algorithm. The attacker records the equality pattern among the group elements of the proof pi_i, and whether they are equal to any of the group elements of ACC. If each proof consists of n_pi group elements, this requires an additional n_pi times log of (n_acc plus n_pi plus 1) bits. The attacker also records the Q queries issued by the verification algorithm, where each query essentially consists of the indices of two entries of the table B and the type of group operation, plus or minus. This requires about 2Q times log of (n_acc plus n_pi plus Q plus 1) bits. Finally, the attacker records the equality pattern induced by the responses to these queries. This requires an additional Q times log of (n_acc plus n_pi plus Q plus 1) bits.
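As a sanity check on this bookkeeping, here is a small Python helper (my own, with indicative constants rather than the paper's exact ones) that adds up the bit counts listed above for the recorded view.

```python
from math import ceil, log2

def recorded_view_bits(n_acc, n_pi, Q, K, T):
    """Rough upper bound, in bits, on what the attacker records in step one."""
    V = K // T                                    # number of disjoint subsets
    # Equality pattern among the group elements of ACC.
    acc_pattern = n_acc * ceil(log2(n_acc + 1))
    per_subset = (
        # Equality pattern among the proof's group elements (and against ACC).
        n_pi * ceil(log2(n_acc + n_pi + 1))
        # The Q queries: two table indices each, plus a +/- type bit.
        + 2 * Q * ceil(log2(n_acc + n_pi + Q + 1)) + Q
        # Equality pattern induced by the responses to the Q queries.
        + Q * ceil(log2(n_acc + n_pi + Q + 1))
    )
    return acc_pattern + V * per_subset
```

The point is that the total grows like n_acc log(n_acc + 1) plus roughly Q times V times log lambda, which is what the next step of the argument compares against the information needed to represent the set X.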
Another simplifying assumption for this talk is that n_pi is linear in Q, and therefore the total number of bits for recording the entire view is n_acc times log of (n_acc plus 1), plus Q times K over T times log lambda bits. After the attacker records the entire view of the verification algorithm, we move to the second step of the attack: generating a false batch membership proof that is consistent with this view. The attacker internally emulates the three algorithms setup, prove, and verify, simulating a fresh oracle O' instead of the actual oracle O. The attacker tries to find a set X', an accumulator ACC', and proofs pi'_1 to pi'_V that satisfy the following requirements. First, the set X' must be different from the set X that was chosen as part of the experiment. Second, when partitioning X' into disjoint subsets S'_1 to S'_V, the following two requirements must be satisfied: the verification algorithm with access to the oracle O' accepts pi'_i as a batch membership proof for the subset S'_i with respect to the accumulator ACC', and this exact execution of the verification algorithm produces the same view as the execution of the verification algorithm with access to the actual oracle O on input the accumulator ACC, the subset S_i, and the proof pi_i, all of which A obtained within the experiment. If the attacker finds such values, then it outputs one of the subsets, S'_{i*}, which is not a subset of X, together with the proof pi_{i*} that belongs to the corresponding subset S_{i*} of X. Okay, so that's our attacker. Let's just note that in the first step of the attack, the attacker issues a polynomial number of group-operation queries and runs in polynomial time in terms of internal computation. This should be compared to the second step of the attack, where the attacker does not issue any group-operation queries but may run in exponential time in terms of internal computation.
The analysis of the attacker's success probability consists of two claims. The first claim states that if the attacker indeed finds such a set X', then the verification algorithm accepts, and thus the attacker wins. The intuition for the proof of this claim is as follows. Based on the definition of the attacker, we know that the yellow computation accepts, but note that this computation is not executed with respect to the actual oracle O. However, we also know that the yellow and green computations produce the exact same view, and the green computation is executed with respect to the actual oracle O. The main technical effort is now in showing that the yellow computation produces the same view as a hybrid yellow-green computation, which therefore also accepts. The second claim states that if the number of bits required to represent a set of size K is larger than the number of bits recorded in step one, then the required set X' exists with probability at least one half over the choice of X. This is a simple argument that considers the view of the verification algorithm as a function defined over all sets of size K. As long as this function loses at least one bit of information, there must be many collisions. Okay, so that was our result, and I would like to conclude with two quick directions for future research. The first, which I have already mentioned, is examining whether our result can be extended to Shoup's incomparable generic group model. The second, which I have mentioned somewhat implicitly, is obtaining a better understanding of batch verification in unknown-order groups such as RSA groups. A natural starting point is looking into our approach in the context of the generic ring model, which captures such groups more realistically. So that's all, and thank you for listening.
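The counting behind the second claim is just the pigeonhole principle. As a toy Python illustration (my own, not from the talk): if the recorded view, seen as a function on sets, has a range at most half the size of its domain, then at least half of the inputs share their view with some other input, and any such colliding partner plays the role of X'.

```python
from collections import Counter

def collision_fraction(domain, f):
    """Fraction of inputs x whose image f(x) is shared with some x' != x."""
    counts = Counter(f(x) for x in domain)
    return sum(1 for x in domain if counts[f(x)] > 1) / len(domain)

# Toy instance: a function that compresses 16 inputs into at most 8 outputs.
# Since at most 8 inputs can have a unique image, at least half must collide.
compress = lambda x: x // 2
assert collision_fraction(range(16), compress) >= 0.5
```

In the actual proof, the domain is the collection of all sets of size K, the function is the recorded view, and losing even one bit of information already forces many collisions.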