Hi, my name is Nick Spooner and I'll be presenting the paper Proof-Carrying Data from Accumulation Schemes. This is joint work with Benedikt Bünz, Alessandro Chiesa, and Pratyush Mishra.

So let's start with some motivation. The problem that we're trying to solve is the following. You have a t-step non-deterministic computation, which is given by a circuit F, the transition function, an initial input z0, and a target output zt, and you want to check that there exist intermediate states z1 up to z_{t-1} and witnesses w0 up to w_{t-1} such that for every time step i, F applied to (zi, wi) takes you to the next state z_{i+1}. So this represents essentially a non-deterministic machine computation.

One way to do this, which is fairly well understood, is to simply prove the entire computation at once; this is a monolithic proof. There are many ways to do this, but the issue with these schemes is that they typically require a large amount of prover memory: the prover has to hold the entire transcript of the computation in memory at the same time, so you need something like s times t memory, where s is the space needed to compute the function F, whereas the computation itself only needs s memory. Also, proving the (t+1)-st step (so you've proved t steps and now you want to prove t+1 steps) requires recomputing the entire proof from the beginning.

To avoid these issues, Valiant suggested in 2008 a notion called incrementally verifiable computation. It looks like this: you take as input z0 and w0 into the prover.
The prover produces a proof pi1 along with the next state z1, which you then feed again into the prover along with the input w1, and you get a new state and proof z2, pi2, and so on. You do this over and over again until you end up with the final state zt and a final proof pit that attests to the correctness of the entire computation. So this is what incrementally verifiable computation looks like.

Then there is a generalization of that, called proof-carrying data (PCD), which takes this path computation (IVC is a normal path computation) and generalizes it to any directed acyclic graph. For the purposes of this talk, I will talk about IVC and PCD interchangeably, so you can think about either.

Some applications of IVC and PCD include succinct blockchains, SNARKs with low space complexity, and verifiable delay functions. In the latter case you can build a VDF by taking F to be some hash function, and this will give you a fairly efficient VDF.

How do we build IVC and PCD? The state-of-the-art construction is from the [BCCT 2013] paper, where you build IVC/PCD out of a SNARK with succinct, so polylogarithmic, verification. A SNARK here is a succinct non-interactive argument of knowledge. It was shown more recently that this can be relaxed somewhat to SNARKs with sublinear verification, and also that this holds in the post-quantum setting. Unfortunately, this sublinear verification requirement is pretty strong, and it restricts the class of SNARKs that we're able to use to construct this primitive. So the question that we ask in this work is: is sublinear verification required for IVC and PCD? Recent work suggests that maybe this is not the case.
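To make the shape of this IVC loop concrete, here is a minimal Python sketch. It is only an illustration of the data flow: the transition function F and the "proof" (here just a list of states) are stand-ins I made up, not anything from the paper.

```python
# Toy sketch of the IVC loop: the prover is fed its own output.
# The "proof" here is a growing list of states, purely for illustration;
# a real IVC proof would be a fixed-size SNARK proof.

def F(z, w):
    # Illustrative transition function: fold the witness into the state.
    return z + w

def ivc_prove(z, w, pi):
    """One IVC step: produce the next state and an updated proof."""
    z_next = F(z, w)
    pi_next = pi + [z_next]
    return z_next, pi_next

def ivc_run(z0, witnesses):
    z, pi = z0, []
    for w in witnesses:   # z0,w0 -> (z1,pi1); z1,w1 -> (z2,pi2); ...
        z, pi = ivc_prove(z, w, pi)
    return z, pi          # (z_t, pi_t) attests to the whole run

z_t, pi_t = ivc_run(0, [1, 2, 3])
```

In a real IVC scheme, pi would stay a fixed size no matter how many steps are run; that is exactly the efficiency property we will come back to.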
This is the work of Bowe, Grigg, and Hopwood from last year. They outlined a novel approach to obtain IVC from a specific SNARK which actually has a linear verifier. They give details about some important practical aspects, things like the elliptic curve cycles that you need to implement the scheme, but they don't give a detailed construction or a proof of security. That's what we do in this work, along with some other things.

Essentially, we take this idea from [BGH19] and formalize it using a new cryptographic notion called an accumulation scheme, and then we start to develop the theory of accumulation schemes. We show that a SNARK with an accumulation scheme implies IVC or PCD, even if the SNARK verifier itself is not sublinear. Secondly, we can obtain SNARKs with accumulation schemes by combining a SNARK whose verification is sublinear relative to some primitive with an accumulation scheme for that primitive; we'll see examples later on of where this is important. Finally, for a particular choice of primitive that we're interested in, we show that two popular polynomial commitment schemes have accumulation schemes.

One thing to note about these results is that they don't quite all fit together. The top two, Theorem 1 and Theorem 2, hold in the standard model; Theorem 1 does not hold in the random oracle model. Theorem 3 holds in the random oracle model but is not known to hold in the standard model. Theorem 2 is black-box, and so it holds in both. To go from bottom to top, one needs to make a heuristic assumption, which you'll see in more detail in a second. We should also note that there is no relation between accumulation schemes and cryptographic accumulators; they just have similar names.

So, a summary of the results in pictorial form: SNARKs with accumulation schemes imply IVC/PCD. And how do you obtain a SNARK with an accumulation scheme?
Well, you start with a SNARK that is efficient relative to some predicate, and then you add an accumulation scheme for that predicate. Moreover, this allows you to turn SNARKs with accumulation schemes into SNARKs which actually have succinct verifiers, using this result of [BCCT 2013].

Unfortunately, while this picture is very nice, we don't actually know how to instantiate these things in the standard model, and so we turn to the random oracle model. We have SNARKs which are efficient relative to polynomial commitments in the random oracle model, and, building on previous work, we show how to build accumulation schemes for some polynomial commitments in the random oracle model. We can then apply Theorem 2 in the random oracle model to obtain a SNARK with an accumulation scheme in the random oracle model. Then we have to apply the random oracle heuristic, a commonly applied heuristic: we assume that there is a choice of concrete hash function which maintains the security of the scheme. This means that we end up with a SNARK with an accumulation scheme in the standard model, and then we apply Theorem 1. So Theorem 1 is a theorem, but the condition of the theorem only holds heuristically, and so we only heuristically obtain these new PCD and IVC constructions with nice properties.

So let's start with a little bit of background about previous IVC/PCD constructions and the method of recursive composition that is used to build them. A quick definition of IVC: the prover takes in the previous state of the computation z and the previous proof pi, along with potentially a witness, which I will ignore in the remainder of this talk, and outputs a new state z' and a new proof pi'. The verifier takes in the current state and the current proof, whatever they are, and outputs zero or one depending on whether it thinks the entire computation so far has been performed correctly. Notice that
the prover is able to be looped back into itself. This is formalized by the completeness condition, which says that if the verifier accepts a state-proof pair, then when you apply the prover for one step, the verifier also accepts the new pair (z', pi'). The soundness property, or proof of knowledge property, is the following: for every adversary that produces a final state-proof pair which is accepted by the verifier, we can extract from that adversary a complete transcript of the computation so far, going all the way back to z0. So this is IVC.

The final property that we need from IVC, which makes it interesting compared to just arguments, is this efficiency property: we want the size of the proof pi' to be the same as the size of the proof pi, so that when you apply a single step of the prover, the proof does not grow in size. This applies also to verification: because it runs on a fixed-size object, the verification time does not grow either. So this is what we want; how do we build it?

One of the building blocks that we require is a SNARK with preprocessing. What is this? We have a relation R, which is the relation of circuits with inputs that evaluate to one, and a SNARK is an argument or proof system for this relation. You start with the setup phase, which takes in just the circuit and outputs a proving key and a verification key corresponding to that circuit. The prover holds x and w and produces a proof pi that C(x, w) = 1, and the verifier, which holds x, is able to check that proof and then knows that there exists some w such that C(x, w) = 1. We have the standard completeness and adaptive proof of knowledge properties.
I won't go into those properties here. One important thing about a SNARK is that the proof size is sublinear; this is what we mean by the "succinct" in SNARK: the size of the proof is much smaller than the size of the circuit. Optionally, we can also ask for sublinear verification: here we want the running time of the verifier to be much smaller than the size of the circuit. This is optional: there are SNARKs that fulfill it, and there are interesting SNARKs that don't. Later on we will be interested in SNARKs that don't fulfill this condition, but for now we're going to look at SNARKs that do, and see how you can use them, via this technique of recursive composition, to build IVC.

The way you do it is you start with the IVC prover. The IVC prover is going to take in z_t, pi_t and output z_{t+1}, pi_{t+1}. This is just the syntax of the prover, and now we're going to fill it in based on what we need to go from t to t+1. Obviously, to go from z_t to z_{t+1} you need to apply the transition function F; that's by definition. Then, with the proof pi_t, what you do is verify it: the prover is going to make sure that the proof is correct. Then what it does is encapsulate both of these things into a circuit R, which does the transition and also verifies the previous proof.
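Here is a sketch of what such a recursive circuit R might look like, with the SNARK verifier stubbed out. All the names are illustrative, not from the paper.

```python
# Sketch of the recursive circuit R used in [BCCT13]-style composition.
# R checks one transition step AND that the previous proof verified.
# `snark_verify` stands in for the SNARK verifier, keyed with R's own
# verification key vk_R (this self-reference is the recursive part).

def make_R(F, snark_verify, vk_R):
    def R(x, w):
        z_next, t = x              # public input: claimed state, step count
        z, w_step, pi_prev = w     # witness: prev state, step witness, proof
        if F(z, w_step) != z_next:
            return 0               # transition check
        if t > 0 and not snark_verify(vk_R, (z, t - 1), pi_prev):
            return 0               # recursive check (base case t == 0 skips it)
        return 1
    return R

# Exercise it with a trivial transition and a stubbed verifier:
F = lambda z, w: z + w
R = make_R(F, lambda vk, x, pi: True, vk_R=None)
assert R((5, 1), (3, 2, None)) == 1   # 3 + 2 == 5, previous proof accepted
assert R((6, 1), (3, 2, None)) == 0   # wrong transition
```

The key point is that R contains a call to the SNARK verifier keyed for R itself, which is exactly what makes the composition recursive.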
I glossed over this, but the SNARK verifier needs a verification key for a circuit, and in this case we choose the circuit R itself; this is the recursive part of the construction. The SNARK prover then proves that this circuit R accepts, so that pi_{t+1}, the output of this prover, is the proof that z_{t+1} is the correct next step. The IVC verifier is then just the SNARK verifier, keyed with the verification key for the circuit R.

Completeness basically follows directly from SNARK completeness: the statement will just be true. For soundness, essentially what you do is use the proof of knowledge property of the SNARK to recursively extract from the IVC prover; you can go back in time by repeatedly applying the proof of knowledge property. Note that this construction, and in particular the soundness argument, does not hold in the random oracle model, because here we are using the verifier in a non-black-box way, proving things about the verifier, and this just doesn't work when there is a random oracle.

In terms of efficiency, this is good because the size of the proof pi is the same as the size of a SNARK proof for this circuit R. And what is the size of that? Well, this takes some thinking, and this is actually where the sublinear verification part comes in. The size of this circuit R depends on the size of the SNARK verifier; in particular, it is at least the size of the SNARK verifier. So if the SNARK verifier were linear, then this circuit R would actually increase in size every time, because you would be proving something about the circuit R from the last step, and so it would get slightly bigger at every step. With sublinear verification,
you avoid this blow-up by just having the verifier for R be smaller than R. This is why sublinear verification is necessary: it's because you prove things about the SNARK verifier. So the question we ask is: what about SNARKs that don't have sublinear verification? What can you do?

To discuss that, I'm going to need to introduce a new cryptographic tool: accumulation schemes. An accumulation scheme is best expressed in the following way. Think about a stream of inputs q1 up to qt going on forever, and a predicate Phi which takes an input qi and outputs a zero-one value. The quantity that we're interested in is the conjunction of the predicate Phi applied to all of the qi: we want to know whether the AND of Phi(qi) over all i is one. One way we can do this is, every time a q comes in, apply Phi to it and then just remember the conjunction, which is a single bit. But if Phi is expensive, this might take a long time, and so an accumulation scheme allows you to enlist an untrusted prover to help you with this. The cryptographic object is an accumulator, and what an accumulator does is accumulate the truth value of this statement. We have an accumulation prover P which takes in the old accumulator and the current query and outputs the next accumulator.
We do this over and over again. Then we also have a verification algorithm V which checks the prover's work at each step: it takes in the old accumulator, the next accumulator, and the current query, and it outputs one if the prover did its job correctly. We run this once for every step. Finally, we have an algorithm called the decider D, which runs just once, at time t. It takes in the t-th accumulator and checks, essentially, whether this accumulator is correct in some sense. This has the property that if all of these checks passed, so if every verifier run accepted and the decider accepted, then the conjunction is one.

Note that we save in this case if the cost of V is much smaller than the cost of the predicate Phi, because then, even if the decider is quite expensive, its cost is amortized over all of this time t. The other thing that we would like, and that is sort of necessary, is that the size of the accumulator does not grow with the time t. This also means that the running time of the decider does not grow with the time t, and this avoids certain trivial constructions.

So what can we do with this? Well, we can actually build IVC and PCD from SNARKs with accumulation schemes.
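Before moving on, here is a toy instance of this prover/verifier/decider interface, just to illustrate the syntax. I'm using an artificial predicate (claims of the form y = 7x mod p) folded by a hash-derived random linear combination, so the decider does a single check at the end. This is my own illustrative example, not a scheme from the paper, and it is not meant to be secure.

```python
import hashlib

# Toy accumulation scheme for the predicate Phi((x, y)): y == 7*x (mod p).
# Claims are folded into one running claim by a random linear combination,
# so the decider performs just one Phi-style check at the end.

p = 2**61 - 1

def Phi(q):
    x, y = q
    return y % p == (7 * x) % p

def challenge(acc, q):
    # Fiat-Shamir-style challenge binding the old accumulator and the query.
    h = hashlib.sha256(repr((acc, q)).encode()).digest()
    return int.from_bytes(h, "big") % p

def P(acc, q):
    """Accumulation prover: fold the query q into the accumulator."""
    r = challenge(acc, q)
    (X, Y), (x, y) = acc, q
    return ((X + r * x) % p, (Y + r * y) % p)

def V(acc_old, q, acc_new):
    """Accumulation verifier: cheap check that the fold was done right."""
    return acc_new == P(acc_old, q)

def D(acc):
    """Decider: one final check standing in for all accumulated Phi checks."""
    return Phi(acc)

acc = (0, 0)
for q in [(1, 7), (2, 14), (5, 35)]:
    assert Phi(q)              # the checks we are outsourcing
    new = P(acc, q)
    assert V(acc, q, new)      # run at every step
    acc = new
assert D(acc)                  # run once, at the end
```

Note the two properties from above: the accumulator stays a fixed size (one pair), and V is a single cheap fold check, while the decider's one Phi check is amortized over all the steps.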
If you have a SNARK and you add to it an accumulation scheme for the predicate corresponding to the SNARK verifier, then you get an IVC or PCD scheme. Importantly, the verifier for the SNARK does not need to be sublinear; however, the verifier for the accumulation scheme does need to be sublinear. But this is okay, because it is likely to be a simpler object. One thing to notice is that if the SNARK verifier is in fact sublinear, then you can trivially do this by setting the accumulation verifier to be equal to the SNARK verifier and having the decider do nothing; but as I said, there are more interesting constructions than that. Moreover, this construction preserves both zero knowledge and post-quantum security. Finally, this construction shares the drawback of the previous constructions of IVC and PCD, which is that it makes non-black-box use of the accumulation scheme verifier, which means that it doesn't hold in the random oracle model.

So let's briefly go over the construction. We're going to do the same thing as before: look at just the inputs and outputs and try to go from one to the other. The IVC proof now consists of two parts: a SNARK proof pi_i and an accumulator a_i. The first thing that we do, obviously, is compute the transition function. Then we want to accumulate the previous inputs to Phi into a new accumulator a_{i+1}, and we do that using the accumulation prover. The next thing that we have to do is check that this accumulation was done correctly; we need at some point to do this verification.
Otherwise, we don't keep track of whether the accumulator has been correctly updated. So we do this check, and then, like before, we encapsulate the transition function and the verifier for the accumulation scheme in the circuit R. That is the description of this circuit R, and we then apply the SNARK prover to it. The IVC verifier takes in again this z and the IVC proof (pi, a). The first thing it does is run the decider to check that the accumulator was correctly formed, and then it runs the actual SNARK verifier on this final proof. The reason that we don't need the SNARK verifier to be sublinear here is that the recursive circuit doesn't contain it; we only ever actually run it once, right at the end. The recursive circuit only contains the accumulation verifier, which means that we only need the accumulation verifier to be sublinear.

So that's the theorem, and then the question of course is: how do we obtain SNARKs with accumulation schemes? Where do they come from? Very quickly, an easy way to get them is to start with a SNARK where the verifier is succinct except for some sort of expensive predicate Phi.
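Before going on, here is roughly the shape of one step of the accumulation-based construction we just went over, as a sketch with every cryptographic component stubbed out; the function names are mine, not the paper's.

```python
# Sketch of one step of the accumulation-based IVC construction.
# The IVC proof is a pair (pi, acc). The recursive circuit R (hidden
# inside snark_prove here) contains the *accumulation* verifier, never
# the SNARK verifier.

def ivc_step(F, acc_prover, snark_prove, z, w, pi, acc):
    z_next = F(z, w)                 # 1. apply the transition function
    acc_next = acc_prover(acc, pi)   # 2. fold the previous proof into acc
    # 3. prove, in circuit R, both the transition and the accumulation
    #    verifier's check V(acc, pi, acc_next).
    pi_next = snark_prove(z_next, acc_next)
    return z_next, pi_next, acc_next

def ivc_verify(decider, snark_verify, z, pi, acc):
    # Run the decider once, plus a single SNARK verification at the end.
    return decider(acc) and snark_verify(z, pi)

# Stubbed components, just to exercise the data flow:
F = lambda z, w: z + w
acc_prover = lambda acc, q: acc + [q]          # stand-in accumulation prover
snark_prove = lambda z, a: ("pi", z)           # stand-in SNARK prover
snark_verify = lambda z, pi: pi == ("pi", z)   # stand-in SNARK verifier
decider = lambda acc: True                     # stand-in decider

z, pi, acc = ivc_step(F, acc_prover, snark_prove, 0, 5, None, [])
assert ivc_verify(decider, snark_verify, z, pi, acc)
```

Note that snark_verify appears only in ivc_verify, run once at the very end; inside the loop, only the accumulation prover (and, inside R, the accumulation verifier) is used.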
Then you say: okay, Phi has an accumulation scheme, so I design an accumulation scheme for Phi, and this implies that the SNARK overall has an accumulation scheme, essentially because I have removed the expensive part of the SNARK verifier. Then the question naturally becomes: what predicates are actually useful to build accumulation schemes for? The answer comes from a very popular methodology for building SNARKs. You start with an information-theoretic proof system, called a polynomial IOP or AHP, you add to that a polynomial commitment scheme, and you combine them using a compiler; this gives you a SNARK. Examples of this methodology are Sonic, PLONK, and Marlin, and examples of polynomial commitment schemes are KZG, BBB (bulletproofs), and Halo. This framework is used to construct many popular, widely used SNARKs.

So how do you then accumulate such a SNARK? Well, we notice that whenever you build a SNARK with this framework, what you end up with is actually a SNARK that is predicate-efficient with respect to the verification predicate of the polynomial commitment scheme. This means that what you need to do is just design an accumulation scheme for the verification predicate of the PC scheme. You combine these things together using Theorem 2, and you obtain an accumulation scheme for the SNARK as a whole. And this is what we do: we design accumulation schemes for two widely used polynomial commitments with different properties, and these yield, in a heuristic sense, two interesting new PCD constructions.
For the first one, you start with a predicate-efficient SNARK with respect to the KZG polynomial commitment, whatever your favorite such SNARK might be, and then you add an accumulation scheme for the KZG commitment. Apply this chain of theorems and heuristics, and what you get is PCD and IVC from bilinear groups. Here you have a trusted setup, but in return you get very small proofs. Moreover, an advantage of this approach compared to previous ones is that you don't do any pairings in the recursive circuit, which means that in principle this will be a lot more efficient.

For the other construction, again you start with some predicate-efficient SNARK with respect to a polynomial commitment, and you add in an accumulation scheme for a different polynomial commitment scheme, a discrete-log-based scheme built on the bulletproofs inner product argument. Apply these theorems, and what you get is PCD and IVC from standard groups, so not necessarily with a pairing. This has a transparent setup: there is no secret parameter that is generated. Secondly, the proofs are pretty small, not as small as the KZG proofs, but much smaller than those of previous PCD and IVC schemes with transparent setup.

And that is all. Thank you.