We're going to have a talk on guaranteed output in square root of n rounds for round-robin sampling protocols. This is by Ran Cohen, Jack Doerner, Yashvanth Kondi, and abhi shelat, and Jack will give the talk. Please. It's a switch. Oh, there we go. Thank you, and good morning. Hi, I'm Jack. My co-authors are Ran, Yash, and Abhi. So suppose that you have an elliptic curve group, and consider the distribution that's generated if you sample a random value tau from the field of integers modulo the curve order and then compute the powers of tau in the curve group. This is the structured reference string used by the polynomial commitment scheme of Kate, Zaverucha, and Goldberg, and its form actually derives from the challenge issued in the strong Diffie-Hellman game introduced by Boneh and Boyen. Structured reference strings of this form are also used in Groth's SNARK and a number of its derivatives, including Sonic, PLONK, AuroraLight, and Marlin. And via Groth's SNARK, these are actually deployed in the world today: at the very least, Filecoin, Ethereum, and Zcash use SRSs of this form, and there are probably some others as well. Now, these SRSs have an unfortunate property, which is that you can't sample them in a public-coin fashion. In particular, given an elliptic curve point for which you don't know the discrete logarithm, computing the point whose discrete logarithm is the square of that unknown value implies breaking the computational Diffie-Hellman assumption. So in order to sample the powers of tau, you really have to sample tau first. On the other hand, if the adversary were to learn tau, the consequences for security would be catastrophic: the KZG10 polynomial commitment scheme is no longer binding if the adversary learns tau, and all of the SNARKs you saw a moment ago lose soundness if the adversary learns tau. These two facts are in a natural tension, which leads to a question: whom will we trust to sample tau?
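To make the shape of this distribution concrete, here is a minimal sketch in Python. It uses the multiplicative group modulo a small prime as a stand-in for the elliptic curve group; the parameters and the function name `powers_of_tau` are my own, and real deployments use pairing-friendly curves with much larger parameters.

```python
import secrets

# Toy stand-in for an elliptic curve group: the multiplicative group mod a
# prime p, with a generator g of prime order q. Illustrative only.
p, q, g = 23, 11, 2          # 2 has order 11 modulo 23
d = 4                        # SRS degree

def powers_of_tau(tau, d):
    # SRS = (g^(tau^0), g^(tau^1), ..., g^(tau^d)) in the group.
    # Exponents live in the scalar field, i.e. are reduced mod q.
    return [pow(g, pow(tau, i, q), p) for i in range(d + 1)]

tau = secrets.randbelow(q - 1) + 1   # uniform nonzero scalar
srs = powers_of_tau(tau, d)          # d+1 group elements
```

Note that the sampler needs tau itself in the clear, which is exactly the public-coin obstruction just described.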
And the answer that the community has come up with is that we will distribute the sampling of the powers of tau among absolutely as many people as we possibly can, in such a way that so long as at least one of them is honest, tau will remain hidden. The first protocol for doing something like this was proposed by Ben-Sasson et al. in 2015, and there have been a number of follow-ups since then, but they all follow the same basic layout, which I'm going to show you in a somewhat simplified form. The protocol begins with a single party, Alice, who samples a multiplicative share of tau and then computes what I'm going to call a partial SRS. That is, she computes the powers of tau with respect to just her multiplicative share. When she has this, she posts it to a public bulletin board of some kind, where the whole world can see it. And at this point, Alice is actually done. She doesn't need to interact with the protocol anymore, and she can fade away into the background as another party comes along. This is Bob, and he also samples a multiplicative share. Then he reads Alice's contribution off the bulletin board and uses his multiplicative share to update it. When I say update, here's what I mean: he takes every element in the vector that is Alice's output and multiplies it by the corresponding power of his multiplicative share of tau. And of course, you can see that Alice actually just computed a slightly degenerate version of this function, where she used a vector of copies of the curve generator as her starting point. Regardless, once Bob has his partial SRS, he also posts it to the public bulletin board. The rest of the world will want to be certain that he used Alice's output as his input, so he'd better post some kind of proof to that effect as well. And now Bob is also done. He never needs to interact in the protocol again.
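Bob's update step can be sketched in the same kind of toy setting, a multiplicative group mod a small prime standing in for the curve (the name `update` and all parameters are my own; no proofs or bulletin board are modeled here). Raising the i-th element to the i-th power of his share multiplies tau by that share inside every exponent.

```python
# Toy stand-in for the curve: multiplicative group mod p, generator g of prime order q.
p, q, g = 23, 11, 2

def update(partial_srs, share):
    # Element i is g^(t^i); raising it to share^i yields g^((t*share)^i),
    # so the whole vector becomes the powers of t*share.
    return [pow(elt, pow(share, i, q), p) for i, elt in enumerate(partial_srs)]

# Alice's step is the degenerate case: she updates a vector of generators.
alice_out = update([g] * 5, 3)
# Bob reads Alice's output off the bulletin board and updates it with his share.
bob_out = update(alice_out, 4)
# Since 3 * 4 = 12 ≡ 1 (mod 11), bob_out encodes the powers of 1 here.
```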
And the protocol can continue in this way, using the same steps for as many parties as you'd like, but for the sake of simplicity, I'm just going to show you three. When all the parties have gone, the output of the final party is the final SRS, and as you can see, it comprises the powers of the product of all of the shares sampled by the individual parties. This protocol has a couple of useful properties that I'd like to point out. The first property is that all of the secrets used in this protocol are uniform: the environment never gives a specific input to anybody. The second property is that it has a round-robin structure, which means that everybody speaks exactly once, using only the public bulletin board, and the order in which they speak doesn't impact the security of the protocol in any way. In fact, the order can even be determined by the adversary. Of course, this round-robin structure immediately implies that the protocol has exactly n broadcast rounds, where n is the number of parties. Finally, I'd like to observe that the protocol has guaranteed output delivery against a dishonest majority, and this means that no coalition of n minus one corrupt parties can prevent the protocol from delivering a secure SRS for public consumption. For the rest of the talk, I'm going to refer to any protocol that adheres to the top two attributes as a strongly player-replaceable round-robin protocol, or an SPR3 protocol for short. The main result of this work is a protocol compiler that takes an SPR3 protocol with guaranteed output delivery as input and produces a compiled protocol with only O of square root of n broadcast rounds, which nevertheless achieves guaranteed output delivery against a dishonest majority, just like the input protocol did. Finally, the output protocol is UC-secure, given non-interactive zero knowledge and any protocol that realizes the OT functionality.
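Putting the pieces together, the honest round-robin execution can be simulated in a few lines, again with a multiplicative group mod a small prime as a toy stand-in for the curve (names `update` and `round_robin` are my own, and no proofs are modeled): the final output equals the powers of the product of all the parties' shares.

```python
import secrets

# Toy group: multiplicative group mod p, generator g of prime order q.
p, q, g = 23, 11, 2
d = 4  # SRS degree

def update(partial_srs, share):
    # Raise element i to share^i, folding `share` into tau.
    return [pow(elt, pow(share, i, q), p) for i, elt in enumerate(partial_srs)]

def round_robin(num_parties):
    srs = [g] * (d + 1)               # starting point: vector of generators
    product = 1
    for _ in range(num_parties):      # each party speaks exactly once
        share = secrets.randbelow(q - 1) + 1
        srs = update(srs, share)
        product = (product * share) % q
    return srs, product

srs, tau = round_robin(3)
# The final SRS comprises the powers of the product of all shares:
assert srs == [pow(g, pow(tau, i, q), p) for i in range(d + 1)]
```

The closing assertion is the observation from the talk: as long as the product of the shares is hidden, so is tau, which is why a single honest contributor suffices.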
So a generalization is pretty useless if it only generalizes one example, which leads us to the question: what other SPR3 protocols exist? We looked at the literature, and we were able to find one other interesting protocol that falls into this model, which is verifiable mix nets. So as a corollary, we also give the first robust mix net with a number of rounds that is sublinear in the number of parties who do the mixing. For the rest of this talk, I'm going to give you a little bit of context about guaranteed output delivery so you can see how this result fits in. Then I'm going to tell you how our compiler works. Then I'm going to tell you a little bit about the bias that this compiler allows the adversary to inject into the protocol, and I'm going to argue that this bias is essentially harmless in the context that we care about. And finally, I'm going to leave you with a couple of open questions. So then, let's talk about GOD, guaranteed output delivery. One of the earliest and most important results in this area is Cleve's 1986 proof that in the dishonest-majority setting, some functionalities can't be computed with guaranteed output delivery at all. In particular, any coin-tossing protocol with R rounds must have bias at least proportional to one over R. This means that computing an unbiased coin flip cannot be achieved in any finite number of rounds. The good news, though, is that sometimes bias is tolerable, and this is particularly true when you're sampling cryptographic objects. In 2003, Gennaro et al. proved that threshold Schnorr signatures are still secure when the public key is biased, and just a few years ago, Groth et al. proved that some flavors of Groth's SNARKs are still secure when the SRS is biased, where security is against an algebraic adversary. As a side result in this paper, we prove that biasing the challenge gives the adversary no advantage at all when it's playing the strong Diffie-Hellman game.
And this immediately implies that the KZG10 polynomial commitment scheme is secure when the SRS is biased. Now, when it's okay to have a little bit of bias in whatever it is you're trying to sample, this opens the door to a really classic technique for achieving guaranteed output delivery in a generic way: the player-elimination framework of GMW. This framework is very simple. First, you compute whatever function you'd like via multi-party computation with security against a dishonest majority. Then you ask everybody to prove, using zero knowledge on a broadcast channel, that they acted honestly. And finally, if anybody cheats, you eliminate them and start over. This last step is the important one, because if you're in the dishonest-majority setting, it implies that you might need n rounds, since there could be n minus one restarts. Furthermore, it implies that the adversary gets n minus one opportunities to reject an output that it doesn't like; this is how the adversary gets to inject bias into a protocol generically. Now, in spite of the fact that this framework is over 30 years old, it's really the best we know for a lot of tasks when you want to achieve guaranteed output delivery. There are really very few things that we can do in fewer than n broadcast rounds. And this paper identifies an entire class of distributions that can be sampled with guaranteed output delivery in only O of square root of n broadcast rounds. So then, how do we do that? How does our compiler work? Well, first let's recall the SPR3 protocol from earlier, in which every party speaks one at a time, posting exactly one message to some public bulletin board. And let's focus just on Alice for a second, without loss of generality.
Because Alice only speaks once, she has a single next-message function, and this next-message function takes as input the state of the bulletin board up to the moment she speaks and produces as output the message for her to post to the bulletin board. We're going to take the code of Alice's next-message function and embed it into a functionality, which I'll call a player-emulation functionality. When it's called by some other parties, it produces an output exactly as Alice would. Now I'm going to enhance this functionality in a couple of useful ways. First, I'm going to give it identifiable abort. This means that if the functionality is invoked by some corrupt party who causes it to abort, then the functionality identifies that corrupt party to all of the other participants in a consistent way, so that they can eject that party in, say, the next invocation. Now, although the identification of this party to all the other participants is consistent, unfortunately the functionality isn't going to be fair. This means that the corrupt party gets to see the tentative output for Alice before it decides whether or not to cause an abort, and this is how the adversary gets its rejection-sampling power in whatever outer protocol is going to use this functionality. The second enhancement is public verifiability. This means, roughly, that the functionality writes its output directly to the bulletin board, just as Alice would. And finally, we're going to give the functionality security against full corruption. This means that all of the properties you've just seen still hold even when everybody invoking the functionality is corrupt. We only require that one corrupt party be identified, but they'd better be identified consistently to all of the other people who might read the bulletin board later.
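The interface of this player-emulation functionality might be sketched as follows. This is only my own illustrative pseudocode in Python, not the UC functionality from the paper, and all the names (`emulate_player`, `adversary_decides`) are invented: it either posts the emulated party's message publicly, or consistently names one cheater, and the adversary sees the tentative output before deciding.

```python
def emulate_player(next_message, bulletin_board, committee, adversary_decides):
    # Compute the message the emulated party would post, from the public state.
    tentative = next_message(bulletin_board)
    # Unfairness: the adversary sees the tentative output first, and a corrupt
    # invoker may then cause an abort (this is its rejection-sampling power).
    cheater = adversary_decides(tentative, committee)
    if cheater is not None:
        # Identifiable abort: one corrupt party is named consistently to everyone,
        # even if every member of the committee is corrupt.
        return ("abort", cheater)
    # Public verifiability: the output goes straight to the bulletin board.
    bulletin_board.append(tentative)
    return ("ok", tentative)
```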
And the good news is that, due to a sort of classic folkloric combination of the GMW compiler and the BMR protocol, we can actually realize this functionality for any next-message function that Alice may happen to have, in only a constant number of rounds. So now that we can emulate a single player, the natural thing to do would be to try to emulate an entire protocol. We might try to cut all of our parties into committees and then have every committee emulate a party in the original protocol, like this. If your committees are of size square root of n, then of course you would achieve a protocol with square root of n rounds overall, if nobody cheated. Unfortunately, however, the functionalities can only identify one cheater at a time, and they're required to go in sequence here. So if you have n minus one cheaters, you still might need n minus one rounds to identify them all, which means we haven't really achieved anything yet. In order to do better than this, we had better be sure that we can identify a super-constant number of cheaters any time no progress can be made in the overall protocol. Our solution for guaranteeing this is to force all of the committees to compete to emulate every party in the original protocol. So now our compiler looks a bit more like this. Again, we cut all of our parties into committees, square root of n committees, each of size square root of n, and now they all invoke the same player-emulation functionality at the same time. When they do this, we call it a virtual round. And in any virtual round, there are three potential outcomes for every single committee, which I'm going to go through with you one at a time. The first outcome is that somebody cheats and causes an abort. In this case, the functionality identifies that cheating party to all of the others, not only in the committee but in the entire protocol, so that that party can be eliminated. And I'll introduce a little bit of visual shorthand.
In this case, we're going to color the functionality red to signify this outcome. The second outcome is that the committee succeeds in sampling an output for our virtualized Alice, and furthermore, the committee has the lowest index among all of the committees that succeeded. In this case, the committee has won the competition, and its output will be considered the definitive output for virtualized Alice in this round. Not only that, but just like the real Alice, this committee is now done with the protocol; they can retire and never interact again. In this case, I'll label the functionality as green, before moving on to the final outcome, which is that the committee succeeds in sampling an output for Alice, but another committee also succeeded with a lower index. In this case, the committee has lost the competition. Their work goes to waste, their output won't be used anywhere, and they'll have to return in a later round and try to emulate another party. In this case, I'm going to label the functionality as yellow. Once all of the outcomes have been decided and the fates of all the parties have been settled, the parties who were eliminated or have retired can leave, and the remaining parties go on to another virtual round, potentially emulating another party in the original protocol. Now that you understand what all the outcomes are, let's talk about how many times each one can happen. I claim that a committee can be red at most square root of n times, because every time it's red, one party from that committee is eliminated, and the committee only starts with square root of n parties. I claim that every committee can be green at most one time, because after this event happens, the entire committee retires and doesn't interact again.
And I claim that a committee can be yellow at most square root of n minus one times, because this event only occurs when another committee is green, there are only square root of n minus one other committees, and they can each only be green once. So if you add up all the outcomes, you arrive at the conclusion that the maximum number of virtual rounds is two times the square root of n. And since we know it takes only a constant number of real broadcast rounds to realize a virtual round, we know that the total number of rounds in the protocol is O of square root of n. Let's now walk through a simple example so that you can see how everything hangs together. On the right-hand side, I'm going to show you the outcomes of all of the committees in every virtual round, and how it maps onto the original SPR3 protocol and its rounds. On the left-hand side, I'm going to observe some things for you about the protocol as it progresses. The first thing to observe is that not every virtual round corresponds exactly to a round in the SPR3 protocol. This is because it's possible that every committee aborts, and therefore nobody successfully emulates Alice; in that case, they actually have to try to emulate Alice again in the second virtual round. But once somebody succeeds in emulating Alice, they can move on to the second party; here it's Bob. At this point, you can see the first committee actually only has one party left; it has eliminated three out of its four parties. In this case, of course, the one remaining party doesn't actually have to invoke any of the MPC machinery. It can just evaluate Carol's next-message function, as Carol would, for the next round. So finally, we have only one committee left, and it turns out this one only had cheaters in it, which means that it never managed to produce an emulated output for the final party in the original SPR3 protocol.
This turns out to be fine, though, because we know that the original SPR3 protocol had guaranteed output delivery against a dishonest majority, which means that it could still deliver an output even if the last party never spoke. Consequently, we get an output in this case as well. So now you've seen the compiler and you understand how it achieves quadratic round compression. Let's talk a little bit about the bias that the compiler allows the adversary to inject. Here I'm going to abandon my generalization and talk again just about the powers-of-tau context, and I'm going to stop talking about protocols and talk instead about the functionalities that the protocols realize. First, let's discuss the functionality for the original SPR3 powers-of-tau sampling protocol. This functionality is quite simple. It begins by sampling a multiplicative share of tau and computing a partial SRS from that share, which is then handed to the adversary. The adversary replies with its own multiplicative share of tau, and the functionality updates the partial SRS with the adversary's share to produce an output that can be delivered to everybody. Now, during the invocation of this functionality, two different SRSs enter the adversary's view. The first is the unbiased SRS, which comprises the powers of only the functionality's share of tau. The second is the biased output SRS, which comprises the powers of the product of the adversary's and the functionality's shares. Now, when you take the protocol and put it through our compiler, the functionality that it realizes in the end is actually relatively similar. In fact, it starts out pretty much the same, but at the end, the adversary gets an additional option: instead of sending a multiplicative share of tau, it can send a special symbol that tells the functionality that it rejects the partial SRS that the functionality has proposed.
And in this case, the functionality has to start over: it samples a new share of tau, computes a new partial SRS, hands it to the adversary, and does the whole thing again. So when this functionality is invoked, the adversary sees a lot more SRSs, and it gets to select one of them. In particular, it sees as many uniform SRSs as it likes from the space of all valid SRSs, and at the end, whichever one it selects, it sees that SRS again, updated with its own share of tau. Now, I claim that this rejection-sampling mechanism that I've just given the adversary actually gives the adversary no additional power. Specifically, this means that F-compiled-tau perfectly realizes F-powers-of-tau. Of course, this implies that there must exist some simulator such that for all adversaries, the experiment involving F-powers-of-tau and the simulator has an output distribution exactly equal to that of the experiment involving F-compiled-tau and the adversary. I'm not going to show it to you now, but in the paper, we do in fact construct such a simulator. Not only that, but the simulator is general among all distributions that have what we call perfectly re-randomizable sampling functions. I won't describe this exactly; the important thing to know is that mix nets also give such a distribution. Consequently, our compiler can be used on a verifiable mix net without any degradation in security. So finally, a couple of open questions. First, we'd like to know: are there any other interesting SPR3 protocols? We looked through the literature and found only two, but if you happen to know another one, or develop one, you get an automatic round-compression result for it from this work. Second, we'd like to know whether these techniques can be applied with concrete efficiency.
In this talk, I suggested to you that maybe you ought to take a next-message function with d elliptic curve scalar operations, render it as a Boolean circuit, put that circuit into the BMR protocol, put the BMR protocol into the GMW compiler, do all of that on the order of n times, and make everyone in the world verify all of those instances. This is clearly not concretely efficient, right? But we think that this isn't actually an inherent problem; we think it's probably just an effect of the fact that the compiler has to be generic. In service of showing that, we have an additional construction in the paper which approaches the powers-of-tau sampling problem directly and avoids generic MPC and generic zero knowledge entirely, except for a little bit of generic arithmetic MPC for computing a couple of products in Z_q. In addition, we give a couple of techniques for verifying instances far more efficiently than they can initially be computed. So, everyone, thank you for listening. Our paper can be found online in its full version as ePrint 2022/257, and I'll take any questions. [Question] Thank you for your great presentation. Maybe you mentioned this and I missed it, but my question is: do you still have the possibility that parties can join in the future? You don't need to have all of them at the beginning? [Answer] Yes, so this is something that's been done for a couple of SNARK constructions in the past, where I think they call it the continuous powers of tau: people can show up at any point. That remains absolutely possible under this compiler. In fact, if you have a group of parties at once, you can apply the compiler just to those, and then some more can show up later and you can apply the compiler to those, and so on. We don't show anything about this in the paper, but we did think it through, and there should be no problem making it work.
Okay, thank you. [Question] Thank you for the interesting talk. Why can't you use the results of the yellow groups? Why can't you use them as they are? [Answer] So, this is something I'm not familiar with; I don't have an answer for you right now, so I'll come talk to you afterwards. Maybe somebody else knows the answer to that one. Oh, why don't I use the yellow committees? I thought you were asking about some kind of mathematical construction. So, why don't we use the yellow groups' outputs? The answer is that once you have two parallel strings containing two partial powers-of-tau constructions, there's no good way to combine them that preserves guaranteed output delivery. Basically, any way you would want to combine them would allow somebody to cause an abort that denies the output, or at least that's what we concluded after thinking about it for a while. In fact, we did at one point have another construction that did something like what you're asking: it said that if you have two committees that succeeded at the same time, you recombine their outputs afterwards. But it turned out that that construction had the same quadratic bound as this one, essentially by coincidence; it didn't really share any features, so we decided to present this one, since it's much simpler. [Question] I also have a question. If the number of malicious parties isn't n minus one but, say, n over two plus t, what happens to your protocol in that case? How many rounds do you need? [Answer] Let me think about that for a moment. So, basically, the number of rounds that you require stretches between square root of n and two times the square root of n. The best case is that you emulate square root of n parties over the course of the protocol; unfortunately, the way it's constructed, you can't really do better than that. The worst case is two times the square root of n. [Question] Yeah, okay. Thanks a lot.
So we're done for this session and it's time for a break.