Hello everyone. I'm Nalin. And I'm Ying Tong. Today we're going to talk about recursive ZK-SNARKs, the kinds of applications they unlock, and how we can implement them. So first we should talk about what recursive proofs even are. In the context of SNARKs, recursion usually means the ability to verify a SNARK proof inside another SNARK proof. So this is the ability to say, inside a SNARK proof, something like: I know a SNARK proof such that, when I run the SNARK verification algorithm on it, it returns true. And the key point here is that verification of a recursive ZK-SNARK is usually not significantly slower than ordinary ZK-SNARK verification. So now that we have this primitive in mind, the natural question is: why would you want to make recursive SNARK proofs? Typically we think of regular ZK-SNARKs as having two properties, succinctness and zero-knowledge. Recursion in fact unlocks more powerful versions of both of these properties, in the form of compression, which is a stronger version of succinctness, and multi-party composability, which comes from zero-knowledge. So first let's talk about compression. We think of compression as supercharging succinctness. The applications of compression tend to share a common pattern, and it looks like this: say we have a prover who wants to show a verifier some n pieces of knowledge. How do they do this? They make a proof showing two things. First, they show one piece of knowledge directly. And then they additionally show the n minus one other pieces of knowledge, but instead of showing each of those pieces individually, they show that they know another SNARK proof of those n minus one pieces. So the next question is: how do you make the SNARK proof for the n minus one pieces of knowledge? And in fact, we use the same strategy again.
For the n minus one pieces of knowledge, we show one piece directly, and additionally we show that we know another proof of the n minus two remaining pieces. You just cascade this strategy down, and you end up in a situation where you're verifying one SNARK proof, and that automatically verifies all n items of knowledge. Another instance of compression that's particularly helpful to point out is the setting where you want to compose between different proof systems or arithmetizations, and at the same time pick the good features of each of them. For instance, you can have a setting with two different proving schemes: one with a fast prover but, unfortunately, a slow verifier, and another where the prover is slower but you get the trade-off that the verifier is fast. Using recursion, you can compose from the first proving system into the second, and get a tiny proof output as well as a fast prover, getting the best of both worlds. A concrete instantiation of this interoperability is the setting with STARKs and Groth16. In particular, Groth16 is very cheap to verify, whereas STARKs are very easy to prove. So if you verify a STARK proof inside a Groth16 SNARK, you end up with a proof of the original statement with a fast prover as well as a fast verifier. Generally, compression has a very mechanical flavor: usually you're rolling up some giant list of computations or items of knowledge, incrementally, into a single proof. So just to give a flavor of the applications, let's look at things that are interesting to roll up. First is signatures. Over the summer, we built an application using this primitive of recursive compression of signatures, called Isokratia, where we end up with a low-trust, low-cost roll-up of off-chain votes, securing governance.
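To make the cascade concrete, here is a minimal runnable sketch of the pattern. The "SNARK" is a stand-in: `mock_prove` and `mock_verify` are not a real proof system, they just mirror the shape of the recursion, and all the names here are illustrative.

```python
def mock_prove(statement):
    """Stand-in for SNARK proving: returns an opaque 'proof' of the statement."""
    return ("proof", statement)

def mock_verify(proof, statement):
    """Stand-in for SNARK verification."""
    return proof == ("proof", statement)

def claim_for(items):
    """The statement 'I know all of these items'."""
    return ("knows", tuple(items))

def prove_all(items):
    """Prove knowledge of n items by proving one item directly plus
    a proof of the remaining n - 1 items (the cascade from the talk)."""
    if not items:
        return mock_prove(claim_for(items))
    # Step 1: recursively prove the n - 1 remaining items.
    inner = prove_all(items[1:])
    # Step 2: inside the real circuit we would (a) show knowledge of
    # items[0] and (b) run the SNARK verifier on `inner`; we model the
    # in-circuit verification with an assert.
    assert mock_verify(inner, claim_for(items[1:]))
    return mock_prove(claim_for(items))

# The verifier checks ONE proof, which transitively covers all n items.
proof = prove_all(["sig1", "sig2", "sig3"])
assert mock_verify(proof, claim_for(["sig1", "sig2", "sig3"]))
```

The point of the sketch is only the shape: each level verifies one proof and adds one item, so the outermost verification implicitly covers the whole list.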
More related to blockchain land, you can do the same kind of trick with light-client proofs. In fact, Plumo, which is Celo's light client, is based on this idea. There's also another group named Axiom, which is exploring even cooler use cases for this, aggregating and proving historical data through the use of recursive SNARKs. And then finally, the hot topic in blockchain land these days is making roll-ups of transactions. Two particular ones that are interesting to point out are Mina, which uses recursion as a consensus-layer primitive, and Polygon Hermez, which uses exactly the strategy of using STARKs in the fast-prover setting and Groth16-style SNARKs in the fast-verification setting. Next, let's talk about composability, my personal favorite property unlocked by recursive SNARKs. To give some context for this, let's take a step back and think about the setting of a normal ZK proof. We usually think of ZK proofs in a context where a prover is showing knowledge to a verifier without revealing the underlying facts. With recursion, you can in fact unlock something more powerful: a prover can show knowledge to a verifier without fully knowing the underlying facts themselves. This is a bit hard to model abstractly, so I'll just lead with an example. Over the summer, we built this application called ETHdos, which is Erdos numbers on social graphs. These social graphs are graphs of relationships, of people attesting "I am your friend," those kinds of things. For instance, here we have this graph: Vitalik is friends with someone, who is friends with someone, who is friends with someone, and finally I end up with a degree-four path to Vitalik. Now I can prove that I have a degree-four path to Vitalik without revealing any of the intermediate parties in this path. How do I do this? I say: I am a friend of Adhyyan, and Adhyyan has a ZK proof that he is three degrees from Vitalik.
So in this process, I do not know the three degrees that precede Adhyyan, and yet I have convinced any external party, as well as myself, that there is a valid degree-four path between me and Vitalik. Of course, this is not the only application this enables. In general, we think composability is particularly interesting in settings that look like incomplete-information games. The very typical example, the starting point we thought of, was games like telephone, or Chinese whispers, where you pass a word around and make edits to it incrementally. Then you can make more complex applications, like party games such as mafia, those kinds of things. And more relevant to blockchain land, you can build private state channels and rollups of that sort. So now that we have some intuition for the high-level properties of recursive proofs, and now that we've seen some classes of applications enabled by them, you might be feeling that recursive proofs are kind of unreasonable: we get unlimited compression and composability. So I guess the natural question now is: how do we implement and construct these systems? Right now there are broadly three classes of recursive proofs in production, and as we descend the hierarchy, we relax the requirements on the proof systems that are eligible for recursive schemes. At the very top of the hierarchy, we have the most stringent requirements: we need proof systems with verifiers that are sublinear, that is, succinct, in the size of the statement being proven. This enables us to do full recursion at every recursive step, and it has been implemented for the Groth16 and FRI-based proof systems. If we relax our requirements a little, we can say: even if we don't have a succinct verifier, maybe we're happy enough with just a succinct accumulator.
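The ETHdos-style composition above can be sketched with the same mock-SNARK trick. Each person extends their friend's degree-k proof into a degree-(k+1) proof whose statement no longer mentions the friend, so the intermediate hops stay hidden. All names and the statement encoding are illustrative, not the real ETHdos circuit.

```python
def mock_prove(statement):
    return ("proof", statement)

def mock_verify(proof, statement):
    return proof == ("proof", statement)

def extend(friend_proof, friend_degree, friend, me):
    """Given a friend's proof that they are `friend_degree` hops from the
    root, produce a proof that `me` is friend_degree + 1 hops away."""
    # Inside the real circuit: verify the friend's recursive proof...
    assert mock_verify(friend_proof, ("degree", friend_degree, "endpoint", friend))
    # ...and a signed friendship attestation between `friend` and `me`
    # (elided here).  Crucially, the OUTPUT statement names only `me`:
    # earlier hops are not revealed, and `me` never learns them either.
    return mock_prove(("degree", friend_degree + 1, "endpoint", me))

# Vitalik is degree 0 from himself; each hop extends the previous proof.
p0 = mock_prove(("degree", 0, "endpoint", "vitalik"))
p1 = extend(p0, 0, "vitalik", "alice")
p2 = extend(p1, 1, "alice", "bob")
p3 = extend(p2, 2, "bob", "carol")
p4 = extend(p3, 3, "carol", "me")
assert mock_verify(p4, ("degree", 4, "endpoint", "me"))
```

Notice that verifying `p4` requires knowing only the final endpoint and the degree, which is exactly the "proving without fully knowing" property described in the talk.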
Intuitively, what this means is that we want a verifier with this shape: it has a succinct, cheap check, and then separately it has an expensive check. Here, we can instantiate an atomic accumulator: at each recursive step, we only perform the succinct checks, and we accumulate the expensive check, delaying it until the end of a long batch of proofs. Doing this amortizes the expensive check. And finally, if we relax even more, we don't even require a succinct accumulator; now we're just happy with a succinct public accumulator. The idea behind this split accumulation is simply that you split your accumulator into a public part and a private part. The public part is short, and this is what we accumulate at each recursive step, whereas we delay the verification of the private part of the accumulator until the very end. So let's go through these constructions at a high level, one by one, just so you can see the shape of them. Probably the cleanest shape is full recursion. Here, we start with our application circuit, which computes F(w_i, z_i) = z_{i+1}; it's a normal relation. Now we bundle that together with a recursive verifier, and this recursive verifier takes in a proof and instance that were produced by the previous instance of the recursive circuit. Looking forward, we need to generate a proof of the whole recursive circuit in order to input it to the next recursive instance. In this way, we're chaining recursive circuits, and at each step we fully verify the previous instance. When we get to the final verifier, we no longer need to bundle it with the application circuit; here, we can just perform a final verification of the proof outside the circuit. So this is full recursion. It's the cleanest API.
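The full-recursion chain just described can be sketched as follows, again with a mock SNARK standing in for a real proof system, and an arbitrary toy step function F.

```python
def F(w, z):
    """Illustrative application step: any function of witness and state."""
    return z + w

def mock_prove(statement):
    return ("proof", statement)

def mock_verify(proof, statement):
    return proof == ("proof", statement)

def step(prev_proof, prev_statement, w, z):
    """One recursive step: fully verify the previous proof in-circuit,
    then advance the state.  Returns (new_proof, new_statement, new_state)."""
    if prev_proof is not None:              # the base step has nothing to verify
        assert mock_verify(prev_proof, prev_statement)
    z_next = F(w, z)
    statement = ("state-after-step", z_next)
    return mock_prove(statement), statement, z_next

proof, statement, z = None, None, 0
for w in [3, 1, 4, 1, 5]:
    proof, statement, z = step(proof, statement, w, z)

# Final verification happens ONCE, outside the circuit, and transitively
# covers the entire chain of steps.
assert z == 14
assert mock_verify(proof, ("state-after-step", 14))
```

Each `step` bundles the application relation with verification of the previous step, which is exactly the chaining structure of full recursion.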
Now we're going to get slightly messier and relax our requirements to get an atomic accumulation scheme. Recall that in an atomic accumulation scheme, our verifier is not sublinear. So the verifier inside the recursive circuit is just the accumulation verifier, and it only concerns itself with the succinct checks; the expensive check is accumulated and deferred at each step. Here we are chaining instances of proofs and accumulators, and at each step we're procrastinating on performing the expensive check, until finally, at the end of a long chain of proofs, we perform the decider sub-protocol, which bites the bullet and does the linear-time check. But at that point we can do it outside the circuit, and the cost of the linear-time check is amortized across a big batch of proofs. From here to split accumulation is just a small step: you split your accumulator and your proof instance into a public part and a private part. Now the recursive verifier gets even smaller: it concerns itself only with accumulating the public parts of the instances. It does not have access to the private parts of the accumulator; rather, it relies on the prover to provide commitments to the private parts, on which it then performs the accumulator checks. A lot of the constructions I've described are really cutting-edge, and they come from a feature of our modern proof systems: they are very modular in design. As we get a better understanding of the building blocks and components of our proof systems, this lets us customize our proving stacks at much finer granularity. And recursion, as we saw, also allows us to compose proof systems and get the best of both worlds in many cases. So here's an example of a modular conception of a proof system. I work on the proof system Halo 2.
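Here is a toy picture of the deferral-and-amortization idea behind accumulation. Each claim comes with a linear-time "expensive check"; instead of running it per step, we fold all the claims into one accumulator using random coefficients, and the decider runs a single linear-time check at the end. This illustrates only the amortization principle, not any particular accumulation scheme.

```python
import random

N = 10_000                       # "expensive" = linear in N
b = [i % 7 for i in range(N)]    # a shared long vector

def make_claim(seed):
    """An honest prover's claim: c is the inner product <a, b>."""
    rng = random.Random(seed)
    a = [rng.randrange(100) for _ in range(N)]
    c = sum(x * y for x, y in zip(a, b))
    return a, c

def expensive_check(a, c):
    """The linear-time check we want to run only once."""
    return sum(x * y for x, y in zip(a, b)) == c

# Accumulate: fold each claim (a_i, c_i) into one running claim (A, C)
# with a random coefficient r_i.  By linearity of the inner product,
# a single check on (A, C) covers all folded claims (w.h.p. over r_i).
A, C = [0] * N, 0
for seed in range(20):
    a, c = make_claim(seed)
    r = random.randrange(1, 2**64)
    A = [x + r * y for x, y in zip(A, a)]
    C += r * c

# Decider: one expensive check for the whole batch of 20 claims.
assert expensive_check(A, C)
```

The per-step work is just the folding, which is the cheap part; the one expensive check at the end is what the decider sub-protocol corresponds to in the talk.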
And I think of Halo 2 in terms of four components. At the very front end is where you interface with your business logic: you take a relation and you arithmetize it. In Halo 2's case, we encode our values in the Lagrange basis, and we encode constraints on these values as polynomial identities. We then input these polynomials into an information-theoretic polynomial IOP, which checks the correctness and consistency of the polynomials encoding our circuit. We then realize this polynomial IOP using a cryptographic compiler; in this case, it's the inner product argument, together with the Fiat-Shamir transform, which lets the prover derive the verifier's challenges non-interactively. And at the very end, Halo 2 is a recursive proof system, and we can in fact instantiate an atomic accumulator over the inner product argument. So these are the four modular pieces of modern proof systems that I think of. Now, we can do pretty fun things composing these pieces. I've thought of three cases. The first is information-theoretic compilers, and an example of this is MPC-in-the-head. What this does is convert one information-theoretic protocol into another. In this case, the prover is pretending to run a multi-party computation in her head: she's simulating the multiple views in an MPC and committing to those views. Now the verifier is only checking the outer protocol, the MPC, instead of checking the inner protocol, which is the ZK proof. So this is one very interesting way to compose proof systems. Another way, which we've seen before in this presentation, is more or less just implementing verifier X inside prover Y, and you would do this also for efficiency gains. And the last class of composition that I came up with was really just thinking up better cryptographic compilers. There's a recent paper by a group at ConsenSys.
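The arithmetization step can be made concrete with a tiny example. In the Lagrange basis, a polynomial is represented by its values at the points of an evaluation domain, so a circuit column is just a list of values, and a polynomial identity becomes a per-row check. This is a toy over a small prime field, purely for illustration; real systems use large cryptographic fields.

```python
P = 97                      # toy prime field F_97 (real systems use ~255-bit fields)
domain = [1, 22, 96, 75]    # powers of 22, a 4th root of unity in F_97

# Circuit columns, given as evaluations over the domain: this IS the
# Lagrange-basis representation of the column polynomials a(X), b(X), c(X).
a = [3, 5, 7, 2]
b = [4, 6, 2, 9]
c = [(x * y) % P for x, y in zip(a, b)]   # witness column filled by the prover

# The constraint a(X) * b(X) - c(X) = 0 is a polynomial identity that
# must vanish on the whole domain; in the Lagrange basis, that reduces
# to an independent check at each row.
for ai, bi, ci in zip(a, b, c):
    assert (ai * bi - ci) % P == 0
```

In the real protocol, the prover doesn't hand these columns over directly; it commits to them, and the polynomial IOP plus the cryptographic compiler are what let the verifier check the vanishing identity succinctly.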
They took this protocol GKR, which is highly optimized for repetitive computations like hash functions. But the problem with GKR is that it has a slow verifier, and it also uses the Fiat-Shamir hash as part of its cryptographic compiler, which makes it very inefficient in the context of recursion. So what the team at ConsenSys did was come up with their own cryptographic compiler, custom-made for their target proof system, which here is a Groth16-style R1CS proof system. So really, that's going into the middle and making changes, customizing proof systems at a very low level for efficiency. And I think this leads me to ask whether we can systematize this process, whether we can explore this optimization space in a well-defined way. You can think of this as a kind of call to action. I think there are certain nice-to-haves, certain to-dos, that all proof system implementers would really like. The first is good, fair benchmarks of heavily used primitives like hash functions and big-integer arithmetic, across different proving stacks. The second, which is kind of a meta requirement, is to think about what metrics we're interested in, for example prover efficiency or proof size, and how to optimize towards them: on what basis are we comparing different compositions and configurations? And the last call to action would be to think carefully about how security properties compose: when we mix and match proof systems, is the composition as secure as the least secure proof system, the one with the fewest bits of security, or is it some weirder function of the parts? All of these are questions it would be great to have everyone's input on. Even this taxonomy that I came up with, I'm not sure it captures all the features of proof systems in the best way.
So Nalin and I actually help out at a group called 0xPARC. Oh, we didn't put the name on the slide. It's 0xPARC, zero-x-PARC. And there we're setting up a task force to look into this area of recursion, aggregation, and composition. Yeah, and I might as well take this opportunity to shout out 0xPARC. They supported a lot of our work, a lot of those fun apps, and a lot of these community efforts. I think that is all I have. So thank you. Awesome, well, thank you so much. We have time for one or two questions. If you have any questions, please raise your hand. We've got a question here on this side. Hi. I didn't think this would work, for years and years and years, and I'm still not sure that it does work; it does seem way too good to be true. So can you prove, and how do you prove, that it's secure to do recursive proofs like this, in a cycle? Where is the security proof? I don't know how to convince myself that this actually works. Yeah, that is a good question. It does seem very unreasonable that you can prove an arbitrarily long history of computation with a constant-size proof. So I will say that security proofs exist; they're in the papers. But do we have some intuition as to why the security should hold? Yeah, for sure. One physics-based view that we've seen before is: you have 3D space and you're collapsing it onto a 2D surface. I forget the exact name of this... the holographic principle, there you go. So there's some meta-justification you can come up with to convince yourself, if that's your sort of thing. But there are real proofs that are actually substantial. It's a good question. Any last-minute questions? We have one at the back, all the way over there. So, recursive proofs are real fun.
But if you do them at scale, then scaling problems come snapping at your heels because of the proof-carrying data, right? If you want to build on recursive proofs and things don't happen at the same time, say you do a proof, then a recursive proof, and then you have to wait for a state change, for something else that might happen much later, you need to keep the proof-carrying data around. And as most of you here know, that is typically a very large amount of data. So if you're doing it at scale, you're really running into issues of data provisioning, and all the fun can get really expensive really quickly, not even talking about the computational overhead that you have. Yeah, for sure. I don't think recursive proofs are a silver bullet, and I think they're better suited for some shapes of applications. Recursive proofs are very commonly used to reduce prover space complexity, breaking up a large circuit into many smaller, parallelizable circuits. But I agree that in the context of applications with more complicated and timely data flows, we have to put some care into how and where we insert this proof-carrying data. Right. And also, just to add, I think there's an interesting, clean separation here between the rules and the data-availability sort of problems. If you just look at data availability as a black-box problem, there are lots of blockchain solutions for it; for instance, Isokratia explores one such solution, and maybe you can check out the blog post on that. Yeah, and to add on to that, I really like Nalin's point about composability, the fact that you can play incomplete-information games. So I think fun applications like that, which are unique to PCD, are also very interesting to explore. Amazing.
That's all the time we have for today. Please give Ying Tong and Nalin a big round of applause.