Thank you all very much. It's really fun to see so many people interested in zero-knowledge proof technology. I was honestly expecting around 20 to 25 people to care about zk-SNARKs, so this was meant to be more of a brainstorming, open-ended talk. It follows from what Sean was talking about in the talk right before lunch, so I'll give a little recap of that before going into this. First a little bit about myself, and then I'll say a bit about the Zcash Foundation. I'm Andrew, I'm a professor at the University of Illinois at Urbana-Champaign. I do research on cryptocurrencies, I'm part of the Initiative for Cryptocurrencies and Contracts (IC3), and I'm also a technical advisor for a bunch of other projects you probably care about. Relevant for this talk, I'm also one of the directors of the Zcash Foundation. What I want to tell you a bit about today is how I think the Zcash Foundation should view its contribution to parameter setup for zero-knowledge SNARKs. A little more background on the Zcash Foundation, because I don't get that many opportunities to present on its behalf so far: we are the independent nonprofit organization associated with Zcash. We're a separate organization from the Zcash Company, with a separate board of directors. We recently got our 501(c)(3) status, which I think is really exciting. Peter Van Valkenburgh, one of our board members, has a really nice blog post about the significance of this. I won't go into all of that, but I think we're the first 501(c)(3) public charity that is a cryptocurrency foundation with some of the goals of other cryptocurrency foundations. We get our endowment from pledged donations out of the Zcash Founders' Reward, adding up to about 1% of all of the mined coins, and we have a basically complementary mission to the Zcash Company. Something that's a little different about the Zcash Foundation, which you'll see is relevant to how I approach this question of parameter setup: we're not the core developers. The Zcash Company has the team of engineers that's working really hard, with all the vigor, efficiency, and enthusiasm of a startup. We're aiming to play a complementary role, especially for now, while the Zcash Company has explicit incentives to keep upgrading the software. We're focused on things that are for the public good, because as a 501(c)(3) that's what our mission has to be. In a little more detail, our mission has three pillars: building a diverse community, so that the Zcash Company isn't the only effective organization in the Zcash community; providing leadership in improving and maintaining the protocol, especially when needed; and doing things like education and research. Again, I'm just giving background on the Zcash Foundation because I assume not too many of you are familiar with what we do. A couple of our recent activities that I think are really interesting: we're just about to conclude a grant program giving about $130,000 of funding to community-proposed projects, and we're starting to plan for what will be Zcon0 sometime next year. Let me then steer this back towards the question of ZK parameter setup. So this is a recap of the same motivation from Sean's talk. The Zcash parameter setup for Zcash 1.0, the Sprout system, was the first of its kind in many ways.
It was the first use of a multi-party parameter generation ceremony to put zero-knowledge SNARKs into practice, the first ceremony of its kind, and I think it was really cool. I was one of the six participants, so I have a real fond association with it. It was the largest-scale multi-party computation of any kind at the time. Previous ones had been like three parties; there's a famous Danish sugar beet auction, if you follow the MPC literature in cryptography, that everyone points to as MPC being useful in practice, and that was like three parties, and it was in the semi-honest setting. Cryptographers are way more interested in malicious-case security, and this was the first one of those done over the internet. And it had this nice quality that if even one of the participants successfully deleted all the toxic waste from their computer at the end of the ceremony, then you'd be able to rely on the safety of the parameters. It was so exciting: it had me at my mom's house in Orlando going and getting a random computer from a Walmart; it had Peter Todd in a desert bus somewhere out of communication in the vast expanse of Canada, something like that. But it's the mother of all one-off ceremonies; it's not something that's easily replicable. It was a one-off design and it's limited in scale. There were six of us doing this, and we basically had to stand vigilant over our computers for a 24-hour period, making sure no one snuck in and took a copy of our RAM while we weren't looking, and that burden would only have gotten worse with more than six participants. Even worse, it's brittle, in the sense that if Peter Todd had crashed his desert bus and couldn't do his second round, we'd have had to throw it all out and start over some other way, or rely on some backup parameters we had generated earlier. All right, so this isn't the right way to be setting up zero-knowledge SNARK parameters into the future, for all of the interesting ZK apps that I think you all want to build, especially if you want to take advantage of some kind of general platform, like the SNARK support in Byzantium. So we really want something more sustainable. This is the same motivation as Sean's talk: he called it democratizing; I'm saying sustainable. It's really the same idea, I'm riffing on it; this is why it's a brainstorm session. Even though we already have these Zcash parameters, there are at least a couple of reasons to want a better system for setting them up, not least of which is that the Sapling upgrade, the planned upgrade for Zcash coming sometime next year, will require doing the ceremony again, so we want the ceremony to be even better designed if we can. And I'm really excited about this kind of cryptographic technology, and I can see by the standing room only that you all are too. I'm anticipating an upcoming boom of zk-SNARK-enabled apps. I don't know whether these are called Zapps, Zaps, or Zcaps; if you have a suggestion, just shout it out, it's a brainstorming talk. Zaps? You like Zaps? Okay, we'll tentatively go with Zaps; I'm not gonna hold votes on all of those. Okay, so every Zap is gonna need its own SNARK. Well, this isn't necessarily true: there's a project called TinyRAM, and there are a couple of variations of it.
The idea there is that maybe we could do one SNARK circuit to rule them all, because it has inside it a really powerful general-purpose virtual machine. You can't put a Turing machine inside a SNARK, exactly, at least not one that runs for unbounded time, but you can come kind of close by putting an instruction set inside the SNARK. The downside is a performance penalty: the prover for this generic thing is going to be a lot slower, meaning the time it takes to make a transaction is a lot longer than with a customized solution. So I'm expecting that the boom of upcoming Zaps will rely on custom circuits, each of which will need its own trusted setup to be performed. So where does the Zcash Foundation come in? Well, the Zcash Foundation's remit, its goals and mission, are broader than just Zcash, so if we can, we'd like any work we do supporting this to end up being useful for other people as well. Again, this is still overlapping motivation with Sean's talk, right? In the protocol, which I'll recap shortly, part of the work, phase one out of the two phases, can be reused across many different applications. All right, so what role would a foundation play? Well, Sean said at the end of his talk that we want to host it. I want to dig into that term a little bit, but I think we should play a role in encouraging lots of people to jointly participate in some of this work, especially the parts that can be reused across many different applications. Because we care about transparency, we'd like to come up with a process that has us publishing as much information as we can as the process goes along, so that it's as visible as possible to the rest of the world. That's the thing with these trusted setups: you need confidence that they were done correctly, and as much transparency as possible helps with that. What else can we do? Well, we have an endowment and a lot of technical people associated with us, so something we could do is encourage more independent implementations. So far, Sean has built his Rust implementation of this; we'd like other people to go and build alternate implementations of the same protocol, which should help assuage fears that an implementation-level bug makes the parameters bad. We also need more independent review. This is a new protocol, and I think it will have to undergo a whole lot more cryptanalysis and scrutiny from the cryptography community before we end up hard-forking to use it. We could maybe start doing some of the development in preparation, or even generate parameters ahead of time, but there's going to have to be a lot of cryptanalytic review before all of this confidence is earned. These are all things that I think the Zcash Foundation should play a role in. Now, a recap of how this new protocol works and why it has these opportunities for amortization. Again, this is my interpretation of Sean's work; I got to see the paper only several days ago, on the order of a week, so if I say something wrong, I'm going to rely on Sean to shout out and we can work through it together. Instead of the SNARK setup ceremony being one big process with some fixed number of people, like it was in Zcash Sprout, it's now split into two phases, and in each phase the parties that participate don't even have to be determined ahead of time, all right?
And if one of the parties bows out or fails, they only have one step to do, so you can just replace them with someone else; it's not that big of a problem, it's not brittle. What's even more exciting is that phase one, the Powers of Tau part, doesn't depend at all on the specific circuit. It looks like this: you have several parties, each taking a step in turn, and at each step they hold this cryptographic object, these powers of tau. When a new party comes along, they build on the work of the previous person by re-randomizing these values and providing a kind of checksum showing that they did it correctly and didn't introduce an inconsistency (there's a toy sketch of what one of these steps looks like below). And you can keep going like this. It scales nicely, because the amount of work that each new party has to do is exactly the same; it doesn't get quadratically worse over time, it's the same kind of computation each time. The size of this computation, and the amount of time it takes, depends only on a bound on the size of the circuit you're going to build, not on its details. So for example, what Sean has in mind is that we should support a circuit with 2^21 gates. The significance of that number is that it's roughly the size at which the data for each step still fits on one DVD, so if you wanted to participate with an air-gapped compute node, like we did in the Sprout ceremony, it would be DVD-sized. 2^21 is also the size of the original Zcash circuit; the new circuit, the Sapling circuit, is expected to be down to around 2^17 gates. The point is that if you run this phase-one Powers of Tau with the 2^21 parameter, anyone who wants to build a SNARK using a circuit of that size or smaller can use the final output. The trust model is that at least one of the participants who re-randomizes has to actually re-randomize and then forget their randomness; that's disposing of the toxic waste. No matter how many participants you have, your trust relies on at least one of them doing that. It can be anyone, and you don't have to know which one it is, but at least one of them has to actually delete their randomness. Every other part of the protocol has to be verified, but that can be done in a publicly verifiable way, and that verification is just one of the same kind of computation per party. So each check doesn't get worse the more parties you have, but the more parties you have, the more of these checks you have to do, so there will be some trade-off in how many we should have. Should we have a hundred, or a thousand? What's the stopping point? Just for the sake of a concrete number, and I don't think you should hold Sean to this, I feel like I dredged it out of him: if 2^21 is the circuit size we want, again big enough to hold the current Zcash circuit, let alone the next one, then the compute time might be on the order of 13 minutes for one contributor to add their step, and verifying one step might take around 30 minutes; these are just ballpark estimates. Each step also adds about an extra gigabyte of data. So if you have 100 contributors, checking it all would be 30 minutes times 100, about 50 hours.
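To make the shape of that phase-one chain concrete, here's a toy sketch in Python. This is purely illustrative and not the real construction: the real protocol works over a pairing-friendly elliptic curve and each contribution ships a proof of knowledge plus pairing-based consistency checks, whereas this stand-in uses plain modular arithmetic, a made-up modulus, and a tiny degree instead of 2^21.

```python
# Toy sketch of a phase-one "powers of tau" accumulator. Illustration only:
# the real protocol lives on a pairing-friendly curve and each contribution
# also publishes a correctness proof; here we just show the data flow.
import secrets

P = 2**127 - 1   # toy prime modulus (a Mersenne prime; NOT a secure choice)
G = 3            # toy generator
DEGREE = 8       # stands in for the ~2^21 powers discussed in the talk

def fresh_accumulator():
    # Before anyone contributes, tau = 1, so every power g^(tau^i) is just g.
    return [G] * (DEGREE + 1)

def contribute(acc):
    # A participant samples a secret s and raises the i-th element to s^i,
    # turning g^(tau^i) into g^((tau*s)^i): the re-randomization step.
    s = secrets.randbelow(P - 2) + 1
    updated = [pow(a, pow(s, i, P - 1), P) for i, a in enumerate(acc)]
    # In the real protocol the participant also publishes a checksum-like
    # proof that this update is consistent, which anyone can verify later.
    del s  # symbolically "disposing of the toxic waste"; at least one party must truly do this
    return updated

acc = fresh_accumulator()
for _ in range(3):   # three contributors taking steps one after another
    acc = contribute(acc)
```

You can also see from this why verification scales linearly: a verifier has to replay one well-formedness check per `contribute` call in the chain.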
So you'd have to run your verifier for about two days to check it all from scratch; I don't know, that's kind of comparable to syncing a full Bitcoin node, and it scales linearly with the number of contributors. All right, after this phase one, you then have to do the circuit-specific step, and this is the part that has to be done again for every new Zap that wants a different SNARK. There are still parts of it that are publicly verifiable. What you basically do is take a snapshot of whatever is the latest powers of tau you got to; when you're ready to build your circuit-specific SNARK, you take the Git HEAD, if you will, of the current version, and re-randomize it once more with public randomness. You want to make sure the snapshot can't have depended on that randomness, so you'd want the snapshot to be timestamped first, and then you'd want the random beacon to come from blocks mined much later than that timestamp (there's a sketch below of what such a beacon could look like). That step is still publicly verifiable. Then, to do the second phase efficiently, you have to do this really beastly fast Fourier transform over that last piece of data. This is a slow step; I don't know how slow. What would you guess that would be for the one gigabyte, for the FFT? Hours? Okay, so maybe an hour if you have a huge multi-core EC2 instance or something. So it's not like everyone is going to want to rerun that entirely. Even then, there are options we could try. We might imagine publishing that one-gigabyte file, the latest powers of tau we got to, along with the output of the FFT, which is also about a gigabyte, and using one of a handful of techniques; I think there are a lot of open-ended possibilities here. There may be a randomized verification option, where if this is all public, anyone can verify a portion of it, and if they find something wrong they can produce a really short fraud proof. Or you could use an interactive verification technique; that's a general-purpose solution that's suitable for this, the same technique used in Truebit. I know there's going to be a presentation on that later, if there hasn't been already. So you could imagine applying any one of these verification techniques to that big beastly FFT. And then there's the final step of doing an MPC, and that does depend on the details of your circuit. The main thing is that this part is now even faster, even less than the 13 minutes of compute per person, but it's the only part that depends on the details of your circuit. All right, so that was my rehashing of what this new protocol is going to look like.
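As an aside on that "randomness from later blocks" point, here's a minimal sketch of one way such a public beacon can be derived. The Sprout ceremony did something in this spirit, iterating a hash over a Bitcoin block hash, but the particular block hash and iteration count below are placeholders I made up for illustration.

```python
# Minimal sketch of deriving public re-randomization from a later block hash.
# Illustrative only; the block hash and iteration count are placeholders.
import hashlib

def beacon(block_hash_hex: str, iterations: int = 2**10) -> bytes:
    # Iterated hashing stretches the beacon out in time: a miner who wanted
    # to grind for a favorable beacon would have to pay a long *sequential*
    # hash chain for every candidate block they try.
    h = bytes.fromhex(block_hash_hex)
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Usage: publish/timestamp the powers-of-tau snapshot first, then point at a
# block mined well after that timestamp, so the snapshot cannot have been
# chosen as a function of the beacon output.
randomness = beacon("00" * 32)  # placeholder block hash
```

This connects to the proof-of-sequential-work idea that comes up again in the Q&A at the end.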
So there are basically two main ideas here where I think we can get this kind of benefit, and it's going to be sustainable because it's a process that's useful for a lot of different people. The first one is that we want to make a really good powers of tau that everyone wants to use, because everyone who wants to build Zaps in the next year has contributed to it. And here is where I'm trying to work out how the foundation would approach this, because the reason I gave all that background about the foundation is that the foundation isn't in charge of anything at all. The foundation is not officially responsible for Zcash. Really, no cryptocurrency foundation is responsible for all of the nodes that are running its coin; they only get to direct hard forks to the extent that the community of nodes and miners follows along. And that's even more the case for the Zcash Foundation, because we don't even own the codebase and we're not the core developers; we're off to the side. All we have is that endowment to direct. So the most obvious thing we could do is to say: okay, at some point this year, like on New Year's, we're going to publish by decree the official list of who the foundation deems the participants in the setup. And there isn't really anything wrong with that. It could still be a broad list, obviously a large set of participants that you'd think wouldn't collude. But I think we could do better somehow. I'm still mulling over whether it's worth the extra process, but I like the idea of sitting back and letting consensus emerge on its own, in some public space, about who wants to contribute. Because the amount of effort for each new person is small, only about 13 minutes of compute time and a gigabyte of data, maybe we could just let whoever shows up participate. We set up a Zaps Working Group mailing list for this; it's a ghost town right now, I just have a test post there, but it's on the Zcash Foundation domain. So maybe the right thing is just to allow whoever shows up; if you're in the audience, go there right now and maybe you can claim your spot to be first. And if everyone agrees with that, then the foundation should acknowledge the consensus that exists rather than trying to make consensus by decree. Another alternative that maybe makes sense, maybe not, we haven't thought it through yet, brainstorming, is to let miners contribute: if you mine a bunch of blocks, then maybe you get to contribute. If we do that, then we should call it a Tower of PoW, or would it be a Tau of Proof of Work? I like all forms of power towers. I don't know; if you like Zaps, maybe you'll like that too. Like I said, I like this idea of reaching consensus in some way about how this process should be done, or about what the order of participants should be. There are a bunch of questions. If this is going to be a public resource that many different projects all benefit from, maybe we should try to reach some kind of public consensus, with input from you all, or from everyone who wants to do Zaps in the medium-term future. Like: how many participants should we have? Should we build one with a hundred, or one with a thousand? The more participants you have, the better the chances that one of them successfully deleted their randomness, their toxic waste; but then verification from genesis, from the beginning, gets more expensive, and I'm not sure where that trade-off sits (some back-of-the-envelope numbers below). There's a lot of flexibility in this protocol, so we could, for example, go up to a hundred and cap it there for Sapling, and if anyone else wants to go further for whatever reason, they can keep building on top of the first hundred that we used for Sapling.
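To put rough numbers on that trade-off, here's a back-of-the-envelope sketch; the per-participant honesty probability is a made-up modeling parameter, and the 30-minutes-per-step figure is just the ballpark estimate from earlier.

```python
# Back-of-the-envelope for the "hundred or a thousand participants" question.
# The honesty probability is a made-up modeling assumption, not a measurement.
def compromise_probability(n_participants: int, p_honest: float) -> float:
    # The setup is only compromised if EVERY participant keeps their secret.
    return (1 - p_honest) ** n_participants

for n in (6, 100, 1000):
    print(n, compromise_probability(n, p_honest=0.5))

# The flip side: verification replays one ~30-minute check per participant,
# so 100 parties cost about 50 hours and 1000 parties about three weeks.
for n in (100, 1000):
    print(n, n * 0.5 / 24, "days to verify from genesis")
```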
What else? I mentioned that there's the circuit size parameter. I think Sean's motivation for recommending 2^21 is good, but we've talked with some folks who say they're planning on doing Zaps and might need a much larger circuit, so I actually have no idea what the appropriate circuit size is; it's an optimization challenge. It's hard to make a really small circuit that does what you want, especially if it's doing something like binding digital signatures to the plaintext underneath some encryption. Circuits get bigger and bigger and it's hard to optimize them, so if not everyone has time to optimize their circuits really well, that's a reason to make an even bigger one; but again, that feeds into the verification time. Maybe we should do multiple ones: one for 2^21 and also one for a larger size. As we mentioned, Sapling is going to be on this better curve, BLS12-381, but BN128 is the one that's in Byzantium right now. This Powers of Tau protocol, I think, works over both, but you have to pick one or the other for a given run. So maybe what would make sense is a process where we do both at the same time, a parallel track of the Powers of Tau, so that we still provide some benefit to the folks who want to build on the Byzantium one. The main benefit of the BLS12-381 one is that it has better security parameters and is much faster. Yeah, Nathan? Yes, that's right, exactly: the verification time for applications seems like it's going to be constant. As for what size you'd need for a general-purpose VM circuit, I don't have the slightest idea. Eli, I saw you around here; it depends, even TinyRAM, I think, has to embed a bound on how many steps it runs, so I have no idea what would be appropriate for that. That's a good question. Something cool about this is that the cost of your phase two only depends on what you use: if we build the 2^21 version and you only need a 2^15-size circuit, like you want a small SNARK, then your setup time can be proportionally smaller. Oh, sure, right. Yeah, so if you have a project and you say, oh, I got it down to 2^27, that's too big, it doesn't fit here: there's almost always a way you can break it down into multiple smaller circuits that you link together with commitments (there's a sketch of that trick below). You can ask me about that if you want clarification on what that means, but it's a standard trick. Oh, you're saying there's something even better than the general approach? Okay, that sounds awesome, I didn't know about that paper. Ariel, by the way, is an actual cryptographer here, so he definitely knows these things. Okay, what else? I mentioned this a little already, so this will almost be repeating some points. The fact that you still have to do an independent second part per circuit, this phase two, means that for Sapling maybe we'll just come up with something, and there will be some set of people who do the phase two. But even there, there's an opportunity for batching. We could imagine having all the people who participate in phase one because they want to build Zaps also publish the circuits they want compiled in this process, and when we go set up a phase two for Sapling, we might include some or all of those participants in the phase two as well and grind out all of the circuits we want to use at the same time. You don't get an amortized benefit in terms of speed; you still have to do the separate work for each separate circuit; but you might as well amortize the coordination effort in some way. I think that makes sense.
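Here's that commitment-linking trick sketched as toy Python. No real SNARK is involved, and all the names here are hypothetical; the comments just describe what each of the two smaller circuits would have to prove about the shared commitment.

```python
# Toy illustration of splitting an oversized computation f = f2(f1(x)) into
# two smaller circuits linked by a commitment. Hypothetical sketch only; the
# comments say what each SNARK circuit would attest, no proofs are generated.
import hashlib, secrets

def f1(x):   # first half of the computation
    return x * x + 1

def f2(y):   # second half
    return y * 3

def commit(value, blinder):
    return hashlib.sha256(f"{value}|{blinder.hex()}".encode()).hexdigest()

x = 7
r = secrets.token_bytes(16)
y = f1(x)
c = commit(y, r)   # public commitment linking the two proofs

# Circuit A would prove: "I know x, r such that c == commit(f1(x), r)."
# Circuit B would prove: "I know y, r such that c == commit(y, r) and the
#                         public output z == f2(y)."
z = f2(y)
# A verifier who accepts both proofs against the same public c is convinced
# that z == f2(f1(x)) for some x the prover knows, without either circuit
# having to contain the whole computation.
```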
All right, and I think what we should look towards in the future is something that doesn't just end at Sapling, but is like a sustainable SNARK-setup-as-a-service, where there's some process so that if you come along with a new Zap you want built, maybe you don't have to work out how to do all of this yourself. Maybe there's something we would run; I could imagine setting up something that takes place at universities, or some kind of coordination that occurs, say, three times a year, and if you get your circuit included in the batch, we go through this process and grind out those circuits. I like that idea; I don't have much more detail on how that would go, it's just a thought at this point. And that's it, I'm out; those were all of my slides. So yeah, that was the end of the brainstorm dump. More questions? The clock says we have a minute. What do you all think? Yeah, Nathan. Like the trust model? So what happens is that in each of these phases there's some number of parties, and it's still the same thing, where at least one of them has to successfully delete their randomness in each phase. Yeah. Would you raise your hand if you have a circuit in mind, or you're a member of a project that knows it wants to use a SNARK in something? Like, raise your hand if you're a Zap person. Okay, that feels like 15-ish. That's amazing, okay. Oh well, so it gracefully degrades even if you can't predict. Got it. Joseph Bonneau, yeah. Bonneau. Yeah. Joe has a pretty impressive line of work on how to squeeze high-entropy randomness out of a source like blockchain hashes; this is like a proof of sequential work, or a proof of time elapsing, that kind of thing. Okay, I think I'm out of time, so I should let it go. I just want to say: if you raised your hand and are interested in Zaps, please find the Zcash Foundation or the Zaps Working Group mailing list. I'd love it if you sign up, or just try to contact me, because I'm trying to build a list of everyone who might want to contribute in this way. So yeah, thank you all.