Our talk was originally ZK application design patterns, but Yi and I got together and started talking, and we realized that a lot of what we really wanted to talk about was the landscape of ZK applications. ZK, or ZKP, or SNARK, is quickly becoming a big-data-esque buzzword: very jargony, starting to mean a lot of different things. So in this talk we hope to walk through some of the different application classes we see emerging today, and in the process sketch what we think the next six months to a year of new ideas will look like. If the core thesis were stated in a single sentence, it's this: perhaps unsurprisingly to people who work in ZK, succinctness and privacy are the two features of ZK that are interesting, and understanding what succinct apps require and what private apps require, independently of one another, helps us understand how these two application classes exist. So, as Lakshman mentioned, we really think of ZK as a matrix: your proof can be either succinct or not succinct, and it can be either private or not private. Each of these quadrants is useful in a different type of application. In the top left, if you have a private proof that's not succinct, it's going to be hard to verify on chain, but you can still use it in off-chain applications. In the bottom left, if you have a succinct proof that's not private, that's perfect for on-chain infrastructure applications, but not so good if you're trying to hide information. And of course you can have the best of both worlds in the top right, where you have both succinctness and privacy, but as we'll discuss later in the talk, that's extremely challenging to achieve. So we think almost all applications right now fit into one of the left two quadrants.
So, just to talk through some of them: in the top left, we have more social explorations of what you can do when you can hide information about which groups you belong to, and what statements you want to make with partial revelation of information. In the bottom left, there are a lot of projects around making blockchains more scalable, or giving additional capabilities to decentralized applications, using the succinctness property; everything there is public already, so we don't care too much about privacy. And finally, in the top right, there are a very small number of applications that have managed to achieve both succinctness and privacy, and probably everyone is familiar with the ones shown. Okay, so we're now going to give an overview of where we are today on each of these axes, succinctness and privacy. In the bottom left corner, if we ask for succinctness but not privacy, the general theme is that ZKPs are used to scale trustless off-chain compute. The principle behind this is that if you run code on Ethereum today, every Ethereum full node has to re-run that code to validate that the execution of your transaction was correct. That's roughly 100,000x overhead, with very high duplication. The new capability that succinct proofs give us is that only one party needs to execute the transaction and generate a validity proof of that execution; everyone else simply needs to verify that proof. This removes the duplication but incurs a very high overhead on the prover. Examples of applications that use this pattern today are all of the ZK rollups, as well as app-specific rollups like dYdX or Loopring. Another capability we see succinct ZK proofs bringing is cryptographic interoperability. ZK allows you to take anything from the grab bag of cryptographic primitives on the left and wrap it in a uniform format, namely a ZK-SNARK.
While each of these primitives is useful for a different thing, they are designed in a somewhat single-purpose way, and that makes it very difficult to aggregate or compose them. ZK provides an interoperability layer that turns all of these into a SNARK proof that is arbitrarily composable using recursion. And while that SNARK proof adds some overhead, it makes this open interoperability possible. Okay, so let's talk about what's necessary today to make this happen. First, because we're only asking for a succinct proof, not a private one, we can really scale the prover. In particular, we don't have to prove in the browser; we can prove on the user's bare-metal CPU, which gives a roughly five to 10x improvement. Second, we can outsource the prover to the cloud: the user can send a request for a SNARK proof of something that's already going to be public, and a cloud-based server can run a large AWS instance, a GPU, an FPGA, or eventually an ASIC, which can give another five to 10x speedup in proving time. Finally, I want to talk about the last constraint when you use succinctness on chain, and this is a pretty heavy one: you have to verify all of your zero-knowledge proofs on chain. In this case, the gas cost of verification really differs from the CPU cost. It depends on the proving system you used, the choice of curve for that proving system, and some proving-system-specific implementation choices. The most restrictive of these is the choice of curve. On Ethereum, there are precompiles for the BN254 elliptic curve, so those operations are much cheaper. Unfortunately, other curves are somewhat prohibitive to implement directly in the EVM, hence the need for a precompile in the first place. One technique commonly used right now to get around this, for example in the zkEVM, is the operation known as aggregation.
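As a mental model only, here's a toy Python sketch of that aggregation pattern. The hashing stands in for real proving and verifying, and all names (`prove`, `verify`, `aggregate`, the system labels) are illustrative assumptions, not any real proving library's API; the point is just the shape of the transformation from a fast-to-prove system into a cheap-to-verify one.

```python
import hashlib
from dataclasses import dataclass

def _h(*parts: str) -> str:
    # Toy stand-in for actual proof generation/verification math.
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

@dataclass
class Proof:
    system: str      # e.g. a fast-prover system vs. one with a cheap EVM verifier
    statement: str   # the public statement being proven
    body: str        # stand-in for the actual proof data

def prove(system: str, statement: str) -> Proof:
    return Proof(system, statement, _h(system, statement))

def verify(p: Proof) -> bool:
    return p.body == _h(p.system, p.statement)

def aggregate(inner: Proof, outer_system: str) -> Proof:
    # Recursive wrap: the outer statement is "I verified the inner proof."
    assert verify(inner)
    return prove(outer_system, f"verified:{inner.system}:{inner.statement}")

# A large, fast-to-generate proof that is expensive to verify on chain...
a = prove("stark", "tx batch #1 executed correctly")
# ...wrapped into a proof from a system with a cheap on-chain verifier.
b = aggregate(a, "groth16")
assert verify(b)
```

The on-chain contract would only ever see the small outer proof `b`; the cost of producing `a` and the recursive wrap is paid entirely off chain.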
The strategy here is that if you want to use a very big SNARK that's not natively compatible with EVM verification, you first generate your SNARK A, and then you produce a recursive SNARK B which proves the claim that you know a valid SNARK A proving the statement you claimed. You can choose B to come from a different proving system than your original SNARK, one that is very cheap to verify on Ethereum. In this way, you can transmute a very large SNARK from an arbitrary proving system into an on-chain-verifiable SNARK from a system that's compatible with Ethereum. The only caveat is that you, of course, incur the overhead of this recursive proof. Cool, thank you. Yeah, now I'll talk about the other interesting cluster we're seeing, in the top left: an application cluster that seems to require privacy or pseudonymity of some sort, but doesn't necessarily require succinctness. This is a more nascent ecosystem. Many of these applications don't require chains, so relative to a lot of the things Yi talked about, they're perhaps less familiar to the average Ethereum conference attendee, but what we're seeing is that they cluster around social applications. That doesn't mean this is all there will be in the future, but I want to talk about what we're seeing today. In thinking about this quadrant, I spend a lot of time thinking about a spectrum. Think about where zero-knowledge proofs are being made and consumed: on one end, an end user's computer; on the other, Ethereum, a decentralized state machine with a lowish (relative to congestion) block gas limit. The bottom two icons are how proofs are being consumed, and the top two icons are how proofs are being produced. It feels like there's a spectrum of applications based on how proofs are consumed. Is the proof being consumed by a human?
Or is it being consumed by a state machine, a decentralized state machine? If it's being consumed by a decentralized state machine like Ethereum, with on-chain verification, you need things to be considerably more succinct. If it's being interpreted by a human, maybe you don't: maybe it can be a much larger piece of information that is human-interpretable or has some interesting graphical representation. And this spectrum leaves out a lot of things in the middle, which might be things like proofs consumed by an app chain, a layer two, a lower-congestion layer one, or a short-lived chain. But at these two ends of the spectrum, I think we all understand the chain side of applications quite well: things that use zero-knowledge proofs and require succinctness tend to benefit from being very composable and from creating things that are canonical. On the other hand, if we're talking about humans interpreting proofs, it feels like those applications want to be, or can benefit from being, higher velocity, because you don't need a chain verifying every proof. Things are more ephemeral. Humans don't necessarily have persistent memory themselves, right? We see a lot of things on Twitter and form a vague understanding of what's going on. Another mental model I have for ZKPs on the private-but-not-succinct side is ZKPs as a new content type: all these places where people create content online today, what if that content were now enriched with a ZKP of some sort? It means something a little bit different, which is quite cool. I realize I probably should have had a slide here presenting some applications, but let me just go back to this slide and give some examples.
There's Semaphore, a project from PSE, which does private group membership: you can prove that you are one of the people in a group, and you can use this to build a private, pseudonymous message board or something like that. There are things like ZK email: it turns out many mail servers sign the email body, so you can make a zero-knowledge proof that you received an email with certain properties, which is quite cool. And in either of these cases, it's not obvious that the zero-knowledge proof needs to be interpreted by a chain. So the requirements in this quadrant are quite different from the quadrant Yi was talking about, in that verification complexity doesn't matter as much. Verification complexity matters for succinctness because it matters when you're verifying on chain: you don't want the chain to have to spend a lot of gas to verify your proof. But if a human is verifying it, it can be done somewhere else, off chain; even consumer hardware is more powerful than a chain. Proving complexity tends to matter a lot more, because we are usually proving on a consumer device. Consumer-device proving friendliness is something we think about a lot: how do we make things work in a web browser? How do we make things work on a mobile device? Another piece, given that we care about privacy in this quadrant, is respect for sensitive user information. If anyone was around for Aayush's talk earlier today, where he presented the new deterministic nullifier scheme he's been developing, that came from realizing that we just couldn't do a lot of things with a private key on a client device, because we would need to pass this very sensitive piece of information around different parts of application memory.
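To make the Semaphore-style pattern concrete, here's a minimal Python sketch of the relations such a circuit checks: membership of an identity commitment in a Merkle tree, plus a nullifier derived from the member's secret. This is everything-in-the-clear pedagogy, with made-up names and a tiny four-leaf tree; in the real protocol these checks run inside a SNARK so the leaf, path, and secret stay hidden, and the hash would be a circuit-friendly one rather than SHA-256.

```python
import hashlib

def h(*xs: str) -> str:
    return hashlib.sha256("|".join(xs).encode()).hexdigest()

# Group = Merkle tree over identity commitments (4 leaves for brevity).
secrets = ["alice-secret", "bob-secret", "carol-secret", "dave-secret"]
leaves = [h("commit", s) for s in secrets]
l01, l23 = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(l01, l23)  # the only thing published about the group

def membership_proof(i: int):
    # Sibling path for leaf i; the flag says whether the sibling is on the left.
    sib0 = leaves[i ^ 1]
    sib1 = l23 if i < 2 else l01
    return [(sib0, i & 1), (sib1, (i >> 1) & 1)]

def verify_membership(leaf: str, path, expected_root: str) -> bool:
    node = leaf
    for sibling, sibling_on_left in path:
        node = h(sibling, node) if sibling_on_left else h(node, sibling)
    return node == expected_root

# Bob proves he is *some* group member, and emits a nullifier that is
# stable per topic (so double-signaling is detectable) but, without his
# secret, unlinkable to his particular leaf.
bob = 1
assert verify_membership(leaves[bob], membership_proof(bob), root)
nullifier = h("nullify", secrets[bob], "poll-42")
```

The pseudonymous-message-board use is then: the chain (or server) stores only `root` and the set of seen nullifiers, and accepts a message if the membership relation verifies and the nullifier is fresh.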
So we have to think of new mechanisms for not having to do that. Oh, this is still me? Okay, cool. All right, so here are some of the technical challenges we see over the next six months. As I mentioned earlier, privacy-but-not-succinct applications need to be very friendly to resource-constrained environments like mobile devices and web browsers. A lot of the ZK space is assuming that most ZK proofs will be produced on very large servers, or with FPGAs or GPUs, and there's some debate around which of those two wins, of course. But if humans are making proofs for other humans, we perhaps can't make that assumption, so we spend a lot of time thinking about performance in resource-constrained environments. Then, as in the specific example I gave earlier with Aayush, we need to think a lot about non-private-key nullifiers, non-private-key uniqueness of identity, because we don't want to have to deal with private keys. We don't want to create an environment or a platform where we assume the private key is passed from app to app. And we also spend a lot of time thinking about representing more cryptosystems where identity matters inside SNARKs. We all interact with Ethereum, obviously, and there are a lot of interesting things you can say about Ethereum, but there are other networks in the world where cryptographic signatures are made, like the email mail servers I mentioned earlier. So we're constantly looking around to understand where new networks are forming and how to put those operations inside SNARKs. So, Lakshman spoke a bit about some of the challenges ahead for privacy, and a lot of that was dominated by finding new ways to improve the user experience and finding applications.
I think on the succinctness side, the split is the other way: what you want to build is somewhat clearer, but you need good enough performance to actually build it. In that sense, I view the challenge ahead as really optimizing the performance of our proving systems and architectures. One direction is to really weaponize the aggregation and recursion I mentioned earlier. The reason is that if you want a very rich infrastructure application, you need a proof for a much larger circuit than is even possible today. And there are a couple of ideas here that are only beginning to be exploited by projects right now. One is to maximize what's called the prover-verifier trade-off. There's a fundamental trade-off between proving time and the complexity of verifying a proof: if you feed in more compute, you can get an easier-to-verify proof. So we can do things like starting with a very fast-to-prove system and then wrapping it in an aggregation layer which transforms it into a cheaper-to-verify system. One form of that is already in use with basic aggregation, but we can push it much further using multiple aggregation layers. The things to really optimize there are non-native arithmetic and elliptic curve operations for these SNARK-based systems. A second direction is that even if we really optimize aggregation, it's likely that the most interesting statements will require multiple circuits to prove. In this schema, you divide a large computation into multiple pieces, verify each piece in a SNARK, and then verify those SNARKs recursively. With the zkEVM and other ZK rollup VMs, we're seeing the beginnings of virtual machines built this way, and I think that VMs for transaction execution are just the beginning of this trend.
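Here's a toy Python sketch of that divide-and-aggregate schema, purely as an illustration (the "proofs" are just hashes, and `prove_chunk`/`aggregate` are hypothetical names): a long computation is split into chunks, each chunk gets a proof binding its input state to its output state, and the per-chunk proofs are then folded into one proof that would be the only thing verified on chain.

```python
import hashlib

def h(*xs) -> str:
    return hashlib.sha256("|".join(map(str, xs)).encode()).hexdigest()

def prove_chunk(state_in: int, instructions):
    # Toy per-chunk "proof": execute the chunk, then bind (state_in, state_out).
    state_out = state_in
    for op, arg in instructions:
        state_out = state_out + arg if op == "add" else state_out * arg
    return state_out, h("chunk", state_in, state_out)

def aggregate(chunk_proofs):
    # Stand-in for recursive aggregation: one proof covering all chunk proofs.
    return h("agg", *chunk_proofs)

# A long computation split into three chunks, proved piecewise, then folded.
program = [[("add", 3), ("mul", 2)], [("add", 1)], [("mul", 5)]]
state, proofs = 0, []
for chunk in program:
    state, p = prove_chunk(state, chunk)
    proofs.append(p)
final_proof = aggregate(proofs)
assert state == 35  # ((0 + 3) * 2 + 1) * 5
```

Because each chunk proof commits to its input and output states, the chunks chain together, which is the same structural idea a ZK rollup VM uses to prove a long execution trace piece by piece.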
In a separate direction: over the last five years, a lot of the progress in ZK has been driven by the emergence of many new proof systems that really pushed the envelope on what we can do inside a ZK circuit. As recently as 2016, the most viable ZK proof system we had access to was Groth16. Since then, we've added capabilities like custom gates and lookup arguments, and we've been able to remove some aspects of the trusted setup in newer systems like PLONK, Halo 2, and STARKs. In the landscape today, there are a number of new primitives emerging which may be equally or more exciting. Systems like Nova allow for much more efficient accumulation. Sumcheck-based systems like GKR or HyperPlonk allow for faster but larger proofs, which could then be recursively aggregated in proof systems we already understand very well. And finally, there's the potential for much more efficient lookups in new systems such as Caulk. Each of these represents a pretty big shift in the way we can design ZK circuits and in what types of programs we can express efficiently in them. It'll be exciting to see how fast these things come to production and what new applications they enable. Oh yeah, and finally, to talk a little bit about where we see these two things converging, in the third quadrant. This is probably what the future will look like: the left side is the privacy side, and the right side is the succinct proving side. The idea is that hopefully we reduce the surface area of the private-to-public proof to something quite small, which can then be recursively included in a larger proof that lives in succinct-but-not-private technological-requirement land, and that larger proof is then interpreted on chain.
In this way, we get all of the nice benefits of privacy and succinctness and on-chain applications, with pseudonymity and so on. Yeah, that's it. Sorry, maybe a silly question from the audience, but can you explain a bit more what succinct actually means, in layman's terms? Yes: succinctness is a property of zero-knowledge proofs which says that the size of the proof can be asymptotically smaller than the size of the computation being proven. Depending on the proof system, the size of the proof can be constant in the size of the computation, or logarithmic. Either way, the point is that you can verify the proof with much less compute than doing the actual computation itself. And perhaps one other slight thing: most of these proof systems do have constant proof sizes, but the lower the constant, the better for something that is interpreted on chain, obviously. But yeah, that answer was better than mine would have been; I was just going to say "small." Hey, so when we recursively prove stuff (so over here, sorry), when we recursively prove, there is a proof, and then you are proving the proof, right? So there are multiple layers of integrity checks; that is the assurance you get. And these circuits are super complex; there could be a million constraints in there. So what about the security of these proofs? Because if you have a single bug in a circuit, that is going to propagate down to this very small proof that you post on chain. So what are your thoughts about that? What is the security landscape around verifying ZK systems? I'd be glad to hear your thoughts. Yeah, definitely. I would say on the succinctness side, recursion and aggregation definitely make it substantially more difficult to verify that your circuits are correct. So we obviously have traditional unit testing.
We have randomized fuzzing, and we have some emerging formal verification approaches for ensuring circuits are correct. Where recursion makes these more complicated is that you may need to encode assumptions slightly external to your circuit itself in order to check things. I'm sure Lakshman has some views on how that affects privacy as well. Oh, I mean, I think this is actually something we need to think about very seriously. If any of you are familiar with supply chain attacks in the Node.js ecosystem: I don't want to get too pessimistic, but that is a very bad worst case here, because it's very hard for the recursing proof to make strong assumptions about the soundness of the proof it is recursing on. I don't have any optimistic takes, unfortunately, but it's something we need to think about. There are lots of great people, especially in the 0xPARC ecosystem, thinking a lot about formal verification and things like this. And maybe formal verification of dependency-graph-type things needs to start happening, I don't know. Hello, what measures can we take to make our proofs quantum resistant for the long term? I think that of the different proof systems, most of the ones based on elliptic curve cryptography are not going to be quantum resistant, since they require at least a discrete log assumption. But we do already know proof systems which are quantum resistant, like STARKs, which only require a random oracle hash assumption. So if you want something to really stand the test of time, you probably want to use one of those quantum-resistant systems. I just want to add, on the succinctness part, that there's a difference between space succinctness and time succinctness; usually succinctness means both. And a question on Caulk: what do you mean by Caulk being more efficient than existing lookups?
It seems to me that it could actually be applied to lookup arguments in circuits, or in the Merkle setting where you want to have zero-knowledge membership proofs, because there's a linear overhead when maintaining the preprocessing of proofs. Yeah, definitely. I would say we're being a bit sloppy with the use of succinctness here; in most of the systems deployed today, you get both verifier and size succinctness. With regard to Caulk, I'm more trying to gesture at the idea that there are people working on making lookup arguments more efficient, and that that would really transform how we write these circuits. I totally agree that Caulk is not, today, going to enable much more efficient lookups. Well, the music is coming; I guess we're coming to an end. Thank you so much. A big round of applause for Lakshman and Yi.