I'm Uma, and this is John, and we're going to be talking about succinct verification of consensus. We're part of Succinct Labs. Okay, so let's start with a super high-level overview of the multi-chain and cross-chain landscape. In the past few years, a bunch of different L1s, L2s, and app chains have come online, and they all lie on very different points of the trade-off curve in terms of decentralization, security, transaction cost, and throughput. As the number of applications on these different L1s and L2s has increased, it's become important that users are able to interact across these different applications. In this multi-chain future, bridges have become critical infrastructure for making blockchains interoperable. It's really important that users' funds and assets are not siloed in one ecosystem, and that users can interact with all the applications they want in a seamless manner for the best user experience. So bridges are really important, and let's talk a little bit about what bridges look like today. Today, most L1 bridges are built with multi-sigs or off-chain oracles. The high-level design is that you have a multi-sig run by a centralized entity that watches for deposits on one chain and then signs off on withdrawals on another chain. These multi-sig designs are generally pretty bad for a lot of reasons: they're censorable, they're not permissionless, they're very centralized, and they've empirically been very insecure, with many such bridge hacks. And I would argue that these bridge hacks are not only bad for the users who've lost billions of dollars of funds, they're broadly bad for the entire space: they reduce the credibility of the whole ecosystem and lead to severe downstream consequences like regulation. Okay, so that's an overview of the current problem, so let's start talking about solutions.
So what does a maximally secure and trust-minimized bridge between L1s look like? Well, we already have a mechanism for a decentralized group of people to come to agreement on the state of a chain: the consensus protocol. And so the key idea here, very simply, is that bridge security should be based on the same mechanism that validators already use to agree on state, which is verification of consensus. If you're able to verify the consensus of a source chain in the execution layer of a target chain, then in a trust-minimized way you can know the state of the source chain without a centralized intermediary like a multi-sig that has to sign off on what the state of the source chain looks like. So now that Uma has painted a picture of why these proof-of-consensus-based bridges make a lot of sense, let's think at a high level about what implementing this system end-to-end might look like. We have some blockchain, like Ethereum, and over the course of its lifetime it's naturally producing information about its consensus, such as block headers, validator signatures, attestations, and other important metadata. Normally the peer-to-peer network of Ethereum would broadcast and gossip this information to all the other honest validators, and those validators would verify the consensus algorithm. But what if, instead of just broadcasting this information to other honest validators, we broadcast it to a smart contract on the execution layer of another chain, which re-implements the honest validator's logic? The reason this is so powerful is that if we can verify the consensus algorithm in a smart contract, we can essentially run a light client on chain. And this means we can trustlessly access any state from the source chain by simply providing a Merkle proof that proves the inclusion of some piece of data.
And inside these block headers we have access to commitments to the entire state of the source chain, such as how much ETH I have in my wallet, what transactions were sent in the past, and what events were emitted in contracts. We can access all of this state with just a simple Merkle proof if we have this light client running on chain. So the question is, why hasn't anyone done this before? The big problem with these proof-of-stake blockchains is that verifying the consensus algorithm is really expensive. In particular with Ethereum, the challenge is that the validator set is so large: there are over 400,000 validators. To run this verification of consensus, you have to keep track of all 400,000 validators, their public keys, how much they've staked, whether new validators have come in, and whether old validators have unbonded their stake. That's quite difficult to implement in a contract in a gas-efficient way. The second problem is that the signature scheme used by Ethereum's ETH2 beacon chain is BLS, and unfortunately the elliptic curve needed to verify these signatures is not currently supported by the execution layer of many EVM blockchains. Even outside the context of a smart contract, running a light client for a proof-of-stake blockchain like Ethereum is just ridiculously expensive, even on consumer hardware like iPhones or laptops. Which is why the Ethereum consensus folks designed a specific light client protocol, known as the sync committee. Essentially, instead of verifying consensus against 400,000 validators, the sync committee reduces the problem down to listening to the signatures of 512 validators, which are randomly chosen every 27 hours. And it works exactly as you'd imagine: these validators sign every block.
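To make the on-chain light client idea concrete, here is a minimal Python sketch of the core primitive it unlocks: given only a trusted root from a block header, a contract can check a Merkle inclusion proof for any piece of state. This is a hypothetical illustration, with plain SHA-256 pair hashing standing in for the SSZ hash-tree-root scheme Ethereum actually uses, and the function names are our own.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_merkle_proof(leaf: bytes, proof: list[bytes], index: int, root: bytes) -> bool:
    """Walk from the leaf up to the root. At each level, the low bit of
    the index tells us whether the sibling hash goes on the right or left."""
    node = leaf
    for sibling in proof:
        if index % 2 == 0:
            node = sha256(node + sibling)
        else:
            node = sha256(sibling + node)
        index //= 2
    return node == root

# Tiny 4-leaf tree built by hand to exercise the check.
leaves = [sha256(bytes([i])) for i in range(4)]
l01 = sha256(leaves[0] + leaves[1])
l23 = sha256(leaves[2] + leaves[3])
root = sha256(l01 + l23)

# Prove leaf 2 is included: its proof is its sibling leaf plus the other subtree.
assert verify_merkle_proof(leaves[2], [leaves[3], l01], 2, root)
```

The key point is that the proof is logarithmic in the size of the state: once the light client trusts a header's state root, any single account balance or event can be checked with a handful of hashes.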
And if enough validators sign off on the block, the block is justified, and after some finalization rules the block is also considered finalized. Obviously, at the cost of being much cheaper to verify, the sync committee provides much weaker security guarantees, and it requires a two-thirds honesty assumption. We originally explored the possibility of verifying this light client protocol on chain, and what we found is that even in this scenario, where there are only 512 validators, verifying this consensus on chain is still too expensive. As for concrete reasons why: you still have to store 512 public keys on chain, you have to rotate them every 27 hours, and storing state on chain is very expensive. Furthermore, like I said before, the current EVM doesn't have a precompile for the specific elliptic curve needed to verify these signatures, which means we'd have to implement the elliptic curve natively in Solidity, and in terms of gas costs this is also prohibitively expensive. So our key insight is this: implementing the honest validator's logic on chain is obviously a computationally expensive task, but what we have available to us is the power of zero-knowledge proofs, which have this magical property of succinctness. That means that for any arbitrarily long computation, we can generate a succinct proof which can be cheaply verified on chain. Essentially, the Solidity code is going to look something like this: we have some function that validates a new block. There's some pre-processing step, then a step to verify that the current validator set is valid and that the sync committee has been rotated correctly, and then finally, of course, we have to verify these BLS signatures.
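As a tiny illustration of the honest-validator rule that has to be re-implemented, here is the two-thirds participation check in Python. This is a hedged sketch of just the counting step; in the real protocol this check runs on the participation bits of the sync aggregate, alongside the BLS signature verification itself.

```python
def has_supermajority(participation_bits: list[bool]) -> bool:
    """A sync-committee header update only counts if at least 2/3 of
    the committee members signed it (integer math avoids rounding)."""
    signed = sum(participation_bits)
    return 3 * signed >= 2 * len(participation_bits)

# With a 512-member committee, 342 signers is just over the 2/3 line...
assert has_supermajority([True] * 342 + [False] * 170)
# ...while 341 signers falls just short.
assert not has_supermajority([True] * 341 + [False] * 171)
```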
And as I mentioned in the previous slides, verifying the validator set and verifying these BLS signatures is quite expensive. But what we can do is compute a ZK proof that does these two expensive operations off chain. On chain, we implement the same function, but we swap out these two expensive parts with just the verification of this SNARK, which verifiably proves that we computed those things correctly. And I think this is really exciting because this framework generalizes beyond the Ethereum sync committee: it can be applied to other blockchains, making it easy to verify the consensus algorithm and the state of another blockchain in any execution layer that supports verification of ZK proofs. In the same way that ZK is being used to scale the throughput of blockchains, this shows that we can also scale the verification of consensus algorithms. And for these reasons, we're trying to coin this term "proof of consensus": we're building bridges which use ZK SNARKs to generate a validity proof of the state of some blockchain. And we believe these succinct light clients will be the endgame for cross-chain interoperability between many different ecosystems. Okay, so John gave a really great overview of how we're going to use SNARKs to make these succinct light clients. Now I'll talk more about the details of how we did this for the Ethereum sync committee. As John mentioned, the sync committee does two things. One, for every single block header, the 512 validators will sign that header and produce an aggregate BLS signature of that particular block header. And the other thing they'll do is sign off on the new sync committee that gets rotated every 27 hours.
And it's really important that validators in the sync committee get rotated every so often, for security reasons and to make sure the set stays decentralized. So we actually have two different SNARKs. One SNARK verifies an aggregate BLS signature of a particular block header and makes sure the signature is coming from the set of validators in the sync committee. So for every block header that we want to make accessible in the execution layer of the other chain, we need to generate a proof that this BLS signature was actually verified for that header, and we have to send that to the light client contract on the other chain. And then once every 27 hours, we have to generate another proof that updates the sync committee validators and sets the new validator set that we're going to verify against. So we have two different SNARKs. Without going into too much detail, to cover some of the primitives we had to build to produce those SNARKs: we used the proving system Groth16, and we used the programming language Circom, invented by Jordi and his team at iden3. Our suite of circuits was pretty complicated and resulted in over 70 million constraints, which is one of the largest circuits that we've heard of being used, at least in Circom. Some of the primitives we had to build were public key addition and aggregate verification: you basically have to add up all the validator public keys to produce an aggregate public key. We also had to implement verification of the block header signature, which involved implementing a pairing in a SNARK and using the pairing to check the BLS signature, which our collaborators at 0xPARC, Yi, Jonathan, and Vincent, worked on. And then we also had to implement serialization methods to check that these public keys are actually the correct public keys.
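Putting the two proofs together, the on-chain light client described above can be sketched roughly like this, with Python standing in for the Solidity contract. This is our own illustrative model, not the actual contract API: `verify_snark` is a stub for the Groth16 pairing check, and the contract state is reduced to just a committee commitment and the latest finalized header.

```python
class SuccinctLightClient:
    """Toy model of the on-chain light client: it stores only a short
    commitment to the 512 committee keys plus the latest finalized
    header, and accepts updates gated on SNARK verification."""

    def __init__(self, genesis_commitment, verify_snark):
        self.committee_commitment = genesis_commitment
        self.finalized_header = None
        self.verify_snark = verify_snark  # stand-in for Groth16 verification

    def update_header(self, header, proof):
        # SNARK #1: attests that >2/3 of the current committee
        # BLS-signed `header` (the expensive work happens off chain).
        if not self.verify_snark(proof, (header, self.committee_commitment)):
            raise ValueError("invalid header proof")
        self.finalized_header = header

    def rotate_committee(self, new_commitment, proof):
        # SNARK #2 (run every ~27 hours): attests that the current
        # committee signed off on the next committee's key commitment.
        if not self.verify_snark(proof, (new_commitment, self.committee_commitment)):
            raise ValueError("invalid rotation proof")
        self.committee_commitment = new_commitment
```

With a real verifier in place of the stub, anyone can call these two methods, which is where the permissionless-operator property comes from: the contract accepts any valid proof regardless of who submits it.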
So we had to implement the SSZ serialization that ETH2 uses, which is based on the SHA-256 hash function, and we also had to implement Poseidon commitments to a vector of public keys, Poseidon being a SNARK-friendly hash function. And there's one trick that we use that helped us significantly save on the gas costs of storing public keys, so I'll go into that a little bit, because I think it's pretty interesting. At a high level, SNARKs have public and private inputs. The public inputs are shown in red because they're bad: if we have to verify a proof on chain, public inputs mean we also have to put that data on chain, which is expensive. Private inputs are fine: we don't need to put them on chain when we're verifying the proof. So if we were to implement the verification SNARK naively, we would have all the public keys as a public input, because we need to make sure the public keys correspond to the correct validator set. Unfortunately, storing public keys on chain is really expensive. So our idea was to store the public keys as a private input in the SNARK instead; as you can see here, it turns green. The public input then becomes a commitment, which is basically a hash of the public keys, and is much shorter. But then the question is, how do we update the commitment when the sync committee rotates? One idea is that when the current sync committee signs the rotation of the new sync committee, what they sign is an SSZ serialization, the ETH2 serialization, of the new public keys. So we could just use this SSZ serialization of the new public keys as the commitment. But the problem is that SSZ is really SNARK-unfriendly, because it's a bunch of SHA hashes, so it's really expensive to compute in every header verification SNARK. So our idea was that we could map this SNARK-unfriendly SSZ commitment to a SNARK-friendly commitment.
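The mapping trick can be sketched in Python like this, with tagged SHA-256 chains standing in for both the real SSZ hash-tree-root and the real Poseidon hash (which we can't reproduce here); every name is illustrative. The point is just the shape of the statement the rotation circuit proves: both commitments open to the same private list of keys, so every later proof can use the cheap one.

```python
import hashlib

def ssz_style_commitment(pubkeys: list[bytes]) -> bytes:
    """Stand-in for the SSZ (SHA-256-based) commitment that the
    sync committee actually signs during rotation."""
    acc = b"\x00" * 32
    for pk in pubkeys:
        acc = hashlib.sha256(acc + pk).digest()
    return acc

def poseidon_style_commitment(pubkeys: list[bytes]) -> bytes:
    """Stand-in for the SNARK-friendly Poseidon commitment, tagged
    differently so the two commitments differ, as in the real system."""
    acc = b"\x01" * 32
    for pk in pubkeys:
        acc = hashlib.sha256(b"poseidon" + acc + pk).digest()
    return acc

def rotation_statement_holds(pubkeys, ssz_root, poseidon_root) -> bool:
    """What the rotation SNARK proves: both public commitments open
    to the SAME private list of committee public keys."""
    return (ssz_style_commitment(pubkeys) == ssz_root
            and poseidon_style_commitment(pubkeys) == poseidon_root)
```

Because the keys themselves are a private input, the chain only ever stores the short roots, and the expensive SHA-based commitment is evaluated once per rotation instead of once per header.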
And so our second SNARK, the one I mentioned below for the sync committee rotation, takes in the public keys, produces the SSZ commitment and also a SNARK-friendly commitment, a Poseidon commitment, and then asserts that those two commit to the same keys. And the bottom line is that we were able to save around 70 million constraints in our header verification SNARK, which we run for every update we want, which is a huge savings in terms of proving time. Just for some benchmarks: our sync committee rotation SNARK includes the SSZ computation, which is why it has 68 million constraints, which is quite a lot. Thankfully the proving time is not that bad, it's four minutes, and that's the SNARK we only run every 27 hours. And then the SNARK verifying the signed header has around 20 million constraints, and its proving time is also around four minutes. Everything Uma and I have described here, we have working prototypes of. We actually built a two-way light client bridge between Goerli and Gnosis Chain, which is another L1 that implements Ethereum consensus exactly, so we were able to reuse many of our circuits. We have a demo at demo.succinct.xyz, and fair warning, it's in beta and it works much better on your laptop. Here we have some screenshots of what it can do. Basically, you choose your networks (right now we only have the one pair, obviously), you choose your currency, and you can just bridge the tokens: you send a deposit transaction. And what's going on behind the scenes here is quite interesting. When you make your deposit transaction, it stores some data in a contract on Goerli that indicates you've made a deposit.
And what you have to wait for is for the light client on the target chain, Gnosis in this case, to be updated with a block header that reflects your new deposit. So you're going to wait for the finalization of an Ethereum block, which is around 12 minutes, and on top of that you're going to wait for our proving time, which right now is around three minutes. After that period, we can send a transaction to the light client to update it to the latest block header. And then, to initiate your withdrawal transaction, you have to provide a Merkle proof proving that you made a deposit on Goerli, and this Merkle proof unlocks the funds on the target chain. And you can see that after some time, the emoji at the top turns into a green check mark. So, to zoom out a bit, I want to take some time to compare the tradeoffs between this proof-based bridge and the bridges that exist today, because I think they exist at very different tradeoff points and have different use cases. With these proof-based bridges, the really big benefit is much higher security guarantees: assuming the ZK SNARK's security holds, you're able to borrow security directly from the source chain, which I think is something really powerful that many of the existing approaches don't have at all. Another property is that because we don't have any additional trust assumptions beyond trusting the L1's validators, this protocol can be very permissionless and censorship-resistant: theoretically, anyone can run the operator that generates the proofs, and anyone can update the light clients. And for these reasons, it's much more permissionless and decentralized.
Now, the cons, as you might imagine, are that verifying a ZK SNARK proof is much more expensive than verifying, say, a threshold signature scheme that a multi-sig might implement. But thankfully, with Groth16 and similar systems, we expect the verification cost to be around 300k gas. Obviously, another con is that you're going to have to wait for the proof generation time. In our case, we didn't spend that much time optimizing our circuits, and right now it's around three minutes. In the context of Ethereum this is okay, because the finalization period is already so long, but for other consensus protocols you might want a much faster proving time, and that's something you could probably fix by using newer-generation proving systems and many of the techniques that the ZK VM people are exploring. Furthermore, another challenge with this approach compared to the multi-sig approach is that for every new chain or L1 you want to onboard, or even when the consensus algorithm on Ethereum changes, you have to rebuild the circuits: you have to hand-design these SNARKs to verify the consensus algorithm. And from a developer's perspective, this can make onboarding new chains quite difficult. In terms of our roadmap, we're trying to take what we've built for Ethereum to production. Our end goal is essentially to be a trust-minimized interoperability layer for Ethereum and other decentralized platforms, all powered by proof of consensus and embodying these values of decentralization and permissionlessness. And in the future, the two things we're really excited about are building these succinct light clients for other blockchain ecosystems, and exploring newer proving systems to decrease the proving time significantly.
And finally, we wanted to give a big shout-out to GnosisDAO, who originally funded this work and were super helpful, and also to 0xPARC, where we worked for the summer and whose community provided us with a lot of support and help.