Hey, I am Pratik Soni and I will be presenting our paper, which builds time- and space-efficient zero-knowledge arguments from groups of unknown order. This is joint work with Justin Holmgren, Alon Rosen, Ron Rothblum, and Alex Block, who will take over for the second part of the talk. Let's begin. Zero-knowledge protocols are a cornerstone of modern cryptography: they enable a verifier to check the truth of an arbitrary mathematical statement without revealing any information. Zero-knowledge enables a number of exciting applications, including delegating computations to the cloud and cryptocurrencies with interesting properties like succinctness and privacy. A bit more formally, a zero-knowledge argument for a language L is an interactive protocol between a prover and a verifier where the prover is given as input an instance from the language along with some witness, and wants to convince the verifier that the instance is in the language. Our focus going forward will be on languages accepted by random access machines, or RAMs for short. Such arguments are required to satisfy standard properties like completeness and soundness. Additionally, we need zero-knowledge, which informally captures that for true statements, curious verifiers learn nothing more than the fact that the statement is true. Another important property is succinctness, which requires the proof size, that is, the transcript of this interaction, to be significantly smaller than the running time of the underlying RAM program. A key challenge in scaling general-purpose zero-knowledge is the high computational complexity of generating such proofs, and this has been the focus of a recent line of research. In terms of the time complexity of the prover, we are in great shape: in theory, we now know of constructions where the prover is asymptotically almost optimal, and in some settings one can even achieve linear-time provers.
The situation is quite similar in practice, and a number of systems with concretely efficient provers are being deployed, as evidenced by this graph, where the y-axis is the running time of the prover in deployed systems and the x-axis is the logarithm of the size of the statements being proved. The light blue line here is native execution, which is the baseline, and quite clearly recent deployed systems are approaching this line. So are we done? Actually, no. Although time-wise we are good, memory overheads of the prover remain a major bottleneck in current systems. In fact, Setty in his recent paper observes that the prover of the Fractal system, the orange line, runs out of memory at statements of size 2^18, explaining why the orange line abruptly stops midway in the graph. The state of affairs for the rest of the systems is not significantly better, as the prover is already at the limits of memory usage for statements of size 2^20 and unlikely to scale to the moderate sizes of 2^30 necessary to handle any interesting real-world application. Said another way, in currently implemented solutions the prover ends up requiring space proportional to the running time of the underlying RAM program rather than its space. As a step towards addressing this key challenge, in this work we focus on constructing zero-knowledge arguments where the prover's running time and space are as close as possible to the running time and space of the underlying RAM program, allowing overheads polylogarithmic in T. I want to emphasize that time and space of the prover are both very important, and it would be unfair to trade one for the other. But time overheads are easier to manage: you can just let the prover run for as much time as needed. Comparatively, memory overheads are challenging: expanding memory is non-trivial due to the hierarchical nature of modern computer systems.
And often the overall running time depends on parameters like cache efficiency, which are very hard to tame. Hopefully, by this point you're convinced that the space of the prover is an important resource to consider. And with that, let's talk about prior work on getting time- and space-efficient provers. There are some constructions known, but all of them have undesirable caveats. For example, Valiant and Bitansky et al. give constructions from recursive composition, which require knowledge assumptions that are poorly understood. Bitansky and Chiesa, and Holmgren and Rothblum, give constructions with a designated verifier, which means that the verifier needs to keep a secret state; this is undesirable for applications. In a prior work with the same set of co-authors, we overcame these two limitations, but the verifier runs in linear time, again undesirable for applications. In this work, we remove all these caveats at once. We need the hidden-order assumption, which is comparatively much simpler and better studied. Our verifier is public-coin and runs in sublinear time. In fact, such a result was not known even without the zero-knowledge requirement. Before I can state our main result more formally, let me describe the hidden-order assumption. The hidden-order assumption holds in a group if it's hard to find any multiple of the order of a random group element. The classical candidates for such groups are RSA groups. Class groups of imaginary quadratic orders are another candidate, and in fact have received a lot of interest lately from the blockchain space. The key feature of class groups is that the description of the group, which is the discriminant delta, can be generated using public randomness. Hence, it's plausible to assume that the hidden-order assumption continues to hold even when the randomness used to sample the group is made public.
In the context of applications, this provides an avenue for removing trust from the parameter-generation phase, resulting in a transparent setup of parameters. On the contrary, we cannot expect something like this from RSA groups, where the description of the group is a product of two primes, and there is seemingly no obvious way to sample the product without knowing the factors. So with RSA groups we would need a trusted setup, but there the hidden-order assumption can be based on factoring. In summary, the hidden-order assumption is well studied, and simpler and weaker in comparison with the assumptions used in the prior works we build on. Now to state our main result: given such hidden-order groups, we construct public-coin zero-knowledge arguments for languages L accepted by time-T, space-S RAM programs, where the prover time and space and the verifier time are optimal up to polylog(T) factors. Our protocol is interactive and has log(T) rounds and polylog(T) communication. Instantiating the hidden-order group with an RSA group results in an argument from factoring, but this requires a trusted setup. Instantiating with hidden-order class groups results in an argument with a transparent setup. And finally, we can make our public-coin protocol non-interactive by applying the Fiat-Shamir heuristic, which results in time- and space-efficient zero-knowledge SNARKs. At a very high level, our approach is to combine polynomial IOPs with polynomial commitments. Polynomial IOPs are information-theoretic proof systems where the prover sends an oracle in the first round, which embeds a polynomial, and then interacts with the verifier, at the end of which the verifier asks for evaluations of this polynomial and accordingly accepts or rejects the proof.
Polynomial commitments, on the other hand, are cryptographic tools that allow a committer to commit to a polynomial and later reveal evaluations of this polynomial on verifier-chosen points, along with a proof that it has correctly evaluated the polynomial. We combine these two tools in the most natural way: we ask the IOP prover to send a commitment rather than the oracle, highlighted in orange, and we replace the IOP verifier's queries to the oracle with the evaluation protocol of the polynomial commitment scheme, highlighted in blue. This approach is not new to us and is a common denominator of a number of prior schemes. In fact, a time-optimal prover combined with a time-optimal committer indeed gives a time-optimal argument prover. Since we are also interested in space efficiency, it is natural to ask whether the same transformation preserves space. It turns out that this question is a bit more nuanced. To get a space-optimal prover for the argument, we would need a committer that runs in space sublinear in the size of the polynomial, which is its input. This is clearly impossible for arbitrary polynomials. But in a prior work with the same set of co-authors, we observed that the polynomial in question has a rather space-friendly structure. Specifically, this polynomial encodes the transcript of the underlying RAM program, and hence its description can be generated as a stream in small space. We refer to polynomial commitments where the committer requires small space when given streaming access to the polynomial as streamable polynomial commitments, which is what we construct in this work. More specifically, we build on a recent polynomial commitment scheme due to Bünz, Fisch, and Szepieniec. In fact, we found a significant bug in their scheme; the authors informed us that they also found this bug independently.
Although we don't know how to fix this bug directly, we give a non-trivial variant of their protocol where, to prove security, we leverage ideas from the theory of integer lattices. An added benefit of our protocol is that it's based on significantly weaker assumptions than the works of Bünz et al. Additionally, we show how to implement the committer of our scheme in small space given streaming access to the polynomial. And finally, we develop a new proof-of-exponentiation protocol that is essential for getting the polylogarithmic verifier in our argument scheme. I want to emphasize that our proof-of-exponentiation protocol is statistically sound and works for arbitrary groups, whereas previous work only achieved computational soundness under cryptographic assumptions for arbitrary groups. Going back to the transformation: we take our streamable polynomial commitments, combine them in the natural way with streamable polynomial IOPs from the literature, and get time- and space-efficient public-coin zero-knowledge arguments. Now, there are many moving parts here, and although space optimality is the killer consequence of our work and most relevant to practice, it is unfortunately not the most technically interesting bit. So we won't be talking about the polynomial commitment's time- and space-efficient implementation or zero-knowledge in the rest of the talk. Rather, we want to focus on the main ideas, and for this, Alex will discuss two sub-protocols. First is a proof of knowledge of exponent with small digits, which will highlight the bug in the Bünz et al. paper and how we fix it. And second is a proof-of-exponentiation protocol that leverages ideas from the proof-of-knowledge protocol. Alex, over to you. Hello everybody, I'm Alex Block, and I will begin by describing a protocol which is a succinct proof of knowledge of exponent with small digits; as said before, this protocol is the core of our polynomial commitment scheme.
To begin, let's fix a cyclic group G and two integers B and Q. In our proof of knowledge of exponent with small digits, the public statement is some group element Y, and the prover is given an integer witness X. The protocol certifies that X is a witness to Y, meaning that G to the X equals Y, and that the base-Q representation of X has small digits, bounded by B. Let's consider examples in our familiar base 10 with the bound B equal to 5. If X is 12, the verifier should accept, because the digits 1 and 2 are at most 5. The verifier should reject X equal to 18, because 8 is bigger than 5. And the verifier should accept 252. For the sake of this discussion, we're going to define succinctness as requiring the proof to be of size roughly N over 2 digits; this just means we cannot simply send X back. As an additional simplification, we're going to assume unit cost to send a single base-Q digit. Now, as a warm-up, we're going to give a divide-and-conquer protocol to solve this problem. But this divide-and-conquer protocol is buggy, and is identical to what the polynomial commitment scheme of Bünz et al. does. So we're going to examine the buggy protocol first, see where the security breaks, and then describe our fix. Okay, let's get to it. Like I said before, there's a very natural but buggy divide-and-conquer approach to obtain N over 2 communication. Simply put, we split the statement Y into two statements YL and YM, where XL and XM are the witnesses for these statements. We perform the split such that if X has N digits in base Q, then the witnesses XL and XM have N over 2 digits base Q. Then we somehow recombine the statements YL and YM into a statement Y prime, and combine XL and XM into X prime. The hope is that X prime is a witness for Y prime, and that X prime has N over 2 small digits base Q. If this is the case, then the prover can simply send X prime and we are done. So what does this protocol look like?
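To make the digit condition concrete, here is a tiny sketch of the verifier's acceptance check on the digits, in plain Python. The helper names are ours, not the paper's notation:

```python
def base_q_digits(x: int, q: int) -> list[int]:
    """Base-q digits of a non-negative integer, least significant first."""
    digits = []
    while x > 0:
        digits.append(x % q)
        x //= q
    return digits or [0]

def has_small_digits(x: int, q: int, b: int) -> bool:
    """Accept iff every base-q digit of x is at most b."""
    return all(d <= b for d in base_q_digits(x, q))

# The base-10 examples from the talk, with bound B = 5:
assert has_small_digits(12, 10, 5)        # digits 1, 2 -> accept
assert not has_small_digits(18, 10, 5)    # digit 8 > 5 -> reject
assert has_small_digits(252, 10, 5)       # digits 2, 5, 2 -> accept
```

Of course, in the actual protocol the verifier never sees X; the whole point is to certify this condition with only about N/2 digits of communication.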
We're going to define XL as the integer given by the N over 2 least significant base-Q digits of X, and XM as the integer given by the N over 2 most significant digits, and we will define YL and YM appropriately. Note that XL plus Q to the N over 2 times XM equals X, and this is the check the verifier performs under the hood as well. Now the verifier samples a random lambda-bit integer a, and then the prover and verifier compute the random linear combination Y prime equals YL to the a times YM as the recombination step. Similarly, the prover computes X prime equal to a times XL plus XM. Now, the big questions here are: first, does X prime have N over 2 digits base Q? Are these digits small? And is X prime a witness to Y prime? Well, I hope you'll trust me that at least X prime is indeed a witness to Y prime. What about the other two properties? As long as there's no overflow in the digits of a times XL plus XM, we can ensure that X prime has N over 2 digits and that X prime has bounded digits. And when can we ensure no overflow happens? As long as Q is sufficiently large, as shown here. So given a sufficiently large Q, X prime has N over 2 digits, and each of them is bounded appropriately, as shown here. At this point we're done: we just send X prime, and the verifier computes its final checks and accepts or rejects. This protocol has N over 2 digits plus two group elements of communication, which I will call succinct. Great. So this protocol is both succinct and complete, but what about soundness, that is, extraction? And this is where the bug I mentioned before shows up. For extraction, we're given a cheating prover that convinces the verifier that a statement Y is true with non-negligible probability, and we want to extract a small-digit witness X for Y from this prover. So we're given an initial interaction where we send a challenge a and are given a message X prime such that X prime has small digits and is a witness to the relation here, YL to the a times YM.
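One round of the split-and-recombine step described above can be sketched as follows. Note that Z_N^* with a publicly known factorization is NOT a hidden-order group; it is only a stand-in here to exercise the algebra, and all concrete choices (N, g, Q, n, the challenge size) are toy values of ours:

```python
import secrets

# Toy stand-in group Z_N^*: factors of N are public, so this is NOT a
# hidden-order group -- it only illustrates the algebra of one round.
N = 1000003 * 1000033
g = 5          # fixed base
Q = 2 ** 16    # digit base
n = 8          # number of base-Q digits of the witness

# Prover's witness with small base-Q digits, and the public statement.
x = sum(3 * Q ** i for i in range(n))   # every digit is 3
y = pow(g, x, N)

# Split: x_lo holds the n/2 least significant digits, x_hi the rest.
x_lo, x_hi = x % Q ** (n // 2), x // Q ** (n // 2)
y_lo, y_hi = pow(g, x_lo, N), pow(g, x_hi, N)

# Verifier's consistency check on the split: y_lo * y_hi^(Q^(n/2)) = y.
assert y_lo * pow(y_hi, Q ** (n // 2), N) % N == y

# Recombination with a random challenge a.
a = secrets.randbelow(2 ** 10) + 1
x_new = a * x_lo + x_hi                  # x' = a*x_lo + x_hi
y_new = pow(y_lo, a, N) * y_hi % N       # y' = y_lo^a * y_hi

# Completeness of the round: x' is a witness for y', and because a and the
# digits are small relative to Q there is no carry, so x' keeps n/2 digits.
assert pow(g, x_new, N) == y_new
assert x_new < Q ** (n // 2)
```

Completeness holds round by round like this; the trouble, as we are about to see, is in the extraction.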
We're going to rewind the prover, send a distinct challenge a1, and receive X1 prime, and we note that X1 prime has small digits and satisfies the relation here. Now I claim it is sufficient to extract integers XL and XM such that XL is a witness to YL, XM is a witness to YM, and they have small digits base Q. Why? Well, given these values we can explicitly compute X via the linear combination here. Okay, great. So we have a system of linear equations, which I'm going to rewrite as this matrix-vector equation. Now, X prime has small digits and X1 prime has small digits, as said before. Fix this matrix to be A. Let's just take the inverse. A very natural approach, but very clearly there's an issue here: A inverse has rational entries, and so it's not clear whether XL and XM are integers anymore. Bünz et al. encounter the same problem, and here's how they handle it. First, they argue that XL and XM have to be integers; otherwise the computational assumption they make on their group is broken. This computational assumption is called the fractional root assumption, and it says that it's hard to compute G to the X over a for a random challenge a. Okay, this is a slightly funky assumption, but let's go with it. So we have that XL and XM are integers. Now, do they have small digits? That's the big question. The argument in Bünz et al. is that, okay, X prime and X1 prime have small digits, and a minus a1 is small, therefore XL and XM must be small. And this turns out to be false. As a counterexample, consider this implicit claim made by Bünz et al.: if X has small digits base Q and a is a small integer dividing X, then X divided by a has small digits base Q, where by small, just think much less than Q. There's a very easy counterexample. Take an odd Q, take X equal to 1 plus Q, and a equal to 2. Then very clearly X in base Q has small digits.
They're 1 and 1. But X divided by a is equal to 1 plus Q over 2, and X divided by a in base Q is that same value, which is a single large digit. Okay, so can we fix the soundness proof? Recall that we wanted XL and XM to be integers with small digits base Q, and that we're given X prime and X1 prime with small digits, and this matrix A with small integer entries. The issue was that A inverse had rational entries. So the question is: can we sample A from a different distribution such that A inverse only has small integer entries? If we have this, then we are able to extract integers XL and XM, and furthermore they will have small digits, and we'll be done. Okay, so now we focus on answering this question. How do we do this? Our approach is divide-and-conquer with random subset products. The key idea is to fix a statistical security parameter lambda, so our protocol will now be statistically secure, and to divide-and-conquer lambda different statements into 2-lambda different statements and then recombine them into lambda different statements using binary challenges. What does this look like? Fix lambda statements Y1 through Y-lambda and lambda witnesses X1 through X-lambda, and split them exactly as before in the base protocol. Now, for the recombination step, we're going to sample bits and perform a random subset product. As I said, these are uniformly random bits, and each bit is resampled for each Yi prime, leading to 2-lambda-squared bits of randomness. At the end, the prover just sends X1 prime through X-lambda prime. Okay, so let's look at the modifications to our protocol. First off, we have, again, a statistical security parameter lambda, so our protocol is statistically secure. We have lambda different witnesses and lambda different statements, each satisfying the same constraint. Then the prover is just going to do exactly as we described in the previous step.
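Backing up for a moment, the counterexample is small enough to run. Here it is as a quick check, using our own digit helper; any odd base works:

```python
def base_q_digits(x: int, q: int) -> list[int]:
    """Base-q digits of x >= 0, least significant first."""
    digits = []
    while x > 0:
        digits.append(x % q)
        x //= q
    return digits or [0]

# The counterexample: odd base Q, X = 1 + Q, a = 2.
Q = 101                    # any odd base
X = 1 + Q                  # base-Q digits: [1, 1] -- small
a = 2                      # a divides X, since X = Q + 1 is even

assert base_q_digits(X, Q) == [1, 1]
assert X % a == 0
# X / a = (1 + Q) / 2 is a single digit of size (Q + 1) / 2 -- NOT small.
assert base_q_digits(X // a, Q) == [(Q + 1) // 2]
```

So dividing by a small integer can turn two tiny digits into one digit of size about Q/2, which is exactly what breaks the "XL and XM stay small" argument.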
We split into 2-lambda different statements, as shown here, and the verifier computes lambda different checks, shown here. Looking ahead, this check is offloaded to the prover, as it is too expensive for the verifier; we offload it via our new proof-of-exponentiation protocol, which I'll talk about in a bit. Then the verifier samples 2-lambda-squared bits of randomness via this matrix A, and recombination is done exactly as before via random subset products. At the end, the prover just sends X1 prime through X-lambda prime. So what about soundness of our protocol? Again, it is sufficient to extract these 2-lambda integers XiL and XiM, which are witnesses to YiL and YiM, and which all have small digits base Q. We take our malicious prover, rewind it some constant number C of times, and obtain a new system of linear equations that looks like this. Each of these capital Xi primes has lambda entries, and each of these entries has small digits. Now, fix this big block matrix to be A. We want this matrix to have an integer left inverse, specifically one with small entries; once this happens, we're done. This gives our key lemma. Fix the distribution D to be exactly what the extractor does: namely, it samples some constant number C of lambda-by-2-lambda binary matrices, stacks them on top of each other, and outputs this A. If we sample A in this manner, then except with probability 2 to the minus lambda, A has an integer left inverse whose entries are bounded by 2 to the poly(lambda). This 2 to the poly(lambda) is okay by us because it is independent of the value Q in the protocol. And furthermore, as necessary for the extraction, this left inverse can be computed in poly(lambda) time.
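As a sanity check on the recombination step, here is a completeness-only sketch of one subset-product round, again over a toy Z_N^* stand-in. All parameter choices are ours, and lambda is unrealistically small; this shows only that subset sums of witnesses match subset products of statements, not any of the soundness machinery:

```python
import math
import secrets

N = 1000003 * 1000033     # toy modulus; NOT a real hidden-order group
g = 5
lam = 4                   # statistical security parameter (toy size)

# 2*lambda "split" witnesses with small values, and their statements.
witnesses = [secrets.randbelow(1000) for _ in range(2 * lam)]
statements = [pow(g, w, N) for w in witnesses]

# Verifier samples 2*lambda^2 uniform bits: one fresh row of 2*lambda bits
# per recombined statement.
bits = [[secrets.randbelow(2) for _ in range(2 * lam)] for _ in range(lam)]

# Recombination: each new witness is a subset sum of the old witnesses,
# and each new statement is the matching subset product.
new_witnesses = [sum(b * w for b, w in zip(row, witnesses)) for row in bits]
new_statements = [
    math.prod(pow(s, b, N) for b, s in zip(row, statements)) % N
    for row in bits
]

# Completeness: every recombined witness matches its recombined statement,
# and digit growth is bounded by the subset size 2*lambda.
for w, s in zip(new_witnesses, new_statements):
    assert pow(g, w, N) == s
```

Since the challenges are bits rather than lambda-bit integers, the witnesses grow additively by a factor of at most 2-lambda per round, which is what keeps the digits small.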
I'm not going to get into the proof in this talk, but it uses ideas from the theory of integer lattices, and we foresee it being a very useful tool in cryptography over the integers, such as our work here. Just to recap: we gave a proof of knowledge of exponent with small digits base Q; we highlighted a buggy protocol and described our fix, which uses ideas from the theory of integer lattices and random subset products. As a result, we obtain a protocol which is statistically sound. I want to emphasize that this proof-of-knowledge protocol captures the main technical ideas of our polynomial commitment scheme and can be modified into the full version of our streamable polynomial commitment scheme with some tweaks. Furthermore, if we want logarithmic communication, we can just recurse log N times on this lambda-to-2-lambda-to-lambda protocol. And finally, I want to mention that there's a gap between completeness and soundness: if we want to extract B-bounded digits in the extraction, then we need an honest prover to start with B-prime-bounded digits, where B prime is much smaller than B. Okay. With the proof of knowledge of exponent with small digits complete, we move on to our next awesome thing: the proof of exponentiation, or PoE, for arbitrary groups. The PoE problem specifies a group G, two elements X and Y in the group, and an integer T; all this information is public. We want an interactive, or non-interactive, protocol that produces a proof pi attesting that X to the 2 to the T equals Y. PoE is actually a very important component of VDFs and succinct commitment schemes. Of course, there's a naive solution: the verifier just computes X to the 2 to the T itself. This works for any group and requires no communication, but the verifier runs in time T. So can we speed up the verifier?
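The naive verifier just mentioned is worth writing out, since the halving protocol exists precisely to avoid it. A minimal sketch over the same toy Z_N^* stand-in (parameters ours):

```python
# Naive PoE verification: check x^(2^T) = y by T sequential squarings.
# Works in any group, needs no proof at all, but costs the verifier time T.
N = 1000003 * 1000033
x, T = 7, 20
y = pow(x, 2 ** T, N)     # honest statement

acc = x
for _ in range(T):        # after i iterations, acc = x^(2^i)
    acc = acc * acc % N
assert acc == y           # verifier accepts
```

The T squarings are inherently sequential, which is why simply asking the verifier to redo them does not scale.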
There are a couple of prior works on PoE with sub-linear verification. Pietrzak gives a statistically sound protocol in RSA groups, where the verifier runs in logarithmic time; but it is only statistically sound in RSA groups. Wesolowski gives a computationally sound protocol in, for example, class groups, assuming something called the adaptive root assumption, and there the verifier time is constant. Then Boneh et al. observed that Pietrzak's protocol is also computationally secure under other group assumptions, mainly class groups with the low order assumption. Okay, so our main question here is: can we get a statistically sound PoE over class groups, or arbitrary groups for that matter? And in fact, yes. In this work, we give a statistically sound PoE that works over any group with logarithmic verification. What does this look like? Our main theorem: there is a public-coin, statistically sound PoE over any group where the verifier time is logarithmic, the proof size and number of rounds are logarithmic, and the prover time is linear. What do we do? We generalize Pietrzak's protocol to work over any group, and this can be made non-interactive via the Fiat-Shamir heuristic. Very briefly, we're going to go over Pietrzak's halving protocol, which is the core of Pietrzak's PoE, and then we're going to give our modification. The halving protocol splits the computation of X to the 2 to the T in half by first computing this mu, which is X to the 2 to the T over 2, and having the prover send it. Now, if mu is computed correctly, as shown here, then Y is in fact equal to mu to the 2 to the T over 2, so the power is split in half. Then the verifier samples a random linear combination, and X prime and Y prime are computed via this random linear combination. And indeed, if mu, X prime, and Y prime are computed correctly, then Y prime is in fact X prime to the 2 to the T over 2.
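One halving round of Pietrzak's protocol can be sketched like this, again over the toy Z_N^* stand-in (variable names and parameters are ours; this checks completeness only):

```python
import secrets

N = 1000003 * 1000033          # toy modulus, illustration only
x, t = 7, 16                   # claim: x^(2^t) = y, with t even
y = pow(x, 2 ** t, N)

# Prover sends the midpoint mu = x^(2^(t/2)).
mu = pow(x, 2 ** (t // 2), N)

# Verifier samples a random challenge r, and both sides fold the size-t
# claim into a size-(t/2) claim via a random linear combination.
r = secrets.randbelow(2 ** 32)
x_new = pow(x, r, N) * mu % N      # x' = x^r * mu
y_new = pow(mu, r, N) * y % N      # y' = mu^r * y

# Completeness: if mu was honest, then
# x'^(2^(t/2)) = x^(r*2^(t/2)) * mu^(2^(t/2)) = mu^r * x^(2^t) = mu^r * y = y'.
assert pow(x_new, 2 ** (t // 2), N) == y_new
```

Recursing on the folded claim log t times, as described next, leaves the verifier with a constant-size exponentiation to check itself.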
So we go from an exponent of size T to an exponent of size T over 2. For the full protocol, you recurse log T times, and the verifier checks the final claim itself. Okay, so what about our PoE? Our PoE again uses the divide-and-conquer via random subset products that we saw in the proof of knowledge of exponent. We now again have a statistical security parameter lambda, we have lambda different PoE instances, and we do the same halving for each. But now our recombination is again via random subset products; again, these ai's are uniformly random bits. I'm not going to go into the full details, but I hope you'll believe me when I say that this modification is complete, is statistically sound, and works over any group G. Okay, so that completes the section on the proof of exponentiation for arbitrary groups, and in fact brings us to the end of the talk. I want to end by flashing our main theorem and restating it. Assuming hidden-order groups, we obtain public-coin zero-knowledge arguments for languages accepted by a time-T, space-S non-deterministic RAM, where the prover time is nearly optimal, the prover space has polylogarithmic overhead with respect to the RAM, the verifier time is polylogarithmic, the communication is polylogarithmic, and the number of rounds is logarithmic. Furthermore, assuming hidden-order class groups, we obtain a transparent setup, and we can make this non-interactive via Fiat-Shamir to obtain time- and space-efficient succinct non-interactive zero-knowledge arguments. And with that, thank you very much.