Hi, my name is Janno Siim, and this is a talk for the Asiacrypt 2021 presentation titled Snarky Ceremonies. This is joint work with Markulf Kohlweiss, Mary Maller, and Mikhail Volkhov. Let me start by reminding you what zk-SNARKs are. The name stands for zero-knowledge succinct non-interactive argument of knowledge. The idea is that we want to prove that some statement x, which is public, and some witness w, which is private, satisfy a certain relation. Security-wise, we want to satisfy knowledge soundness. This means that a prover who is able to convince the verifier must actually know a witness w. We also want zero-knowledge, which essentially means that the only information we leak is that x and w are in the relation; we do not reveal the witness itself. And finally, we want succinctness, which essentially means that the proof size is independent of the size of the statement and the size of the witness. The smallest SNARK so far, or at least one of the smallest, was proposed by Jens Groth in 2016, and it unfortunately requires a complicated trusted setup, which has made it quite difficult to use in practice. So what do I mean by this trusted setup? Essentially, we have a prover, a verifier, and some trusted central party. The prover knows the statement and the witness, and the verifier knows only the statement. At the beginning of the protocol, the trusted party generates what we call a common reference string. This is a bit string from a very specific distribution, and the trusted party gives it to both the prover and the verifier. The prover can then use it to construct an argument, and the verifier can use it to either accept the argument, meaning he believes it, or reject it. There are essentially two flavors of common reference string. There are uniformly random strings, which are completely uniformly random bit strings, and there are structured reference strings, which are anything besides a uniformly random string.
A uniformly random string is sometimes called a transparent setup, because it is relatively easy to obtain a uniformly random string from some natural source, which means that we essentially don't have to trust the setup anymore. We can obtain it by, say, picking certain digits of pi or using some other natural source of randomness. Unfortunately, protocols based on a URS are less efficient. Then there are protocols with a structured reference string. Here the setup is much more complicated, and we don't have as good ways of generating the structured reference string, but the protocols are much more efficient. In the case of a structured reference string, the trusted party usually does something like this: he first generates a trapdoor, then applies some function F to the trapdoor, and this produces the SRS. Unfortunately, anyone who knows the trapdoor can completely break the protocol. It is actually by design that this is possible: the trapdoor is used to prove that the zero-knowledge property holds, but it can also break the soundness property if someone actually knows it. This essentially means that in most applications, a single party should not be able to generate the SRS alone, because otherwise we have centralized trust in the system, which, for example in the blockchain setting, is not acceptable at all. There have been several proposals for solving this. One of the earlier proposals was to use multi-party computation. The idea is very simple: we already have a bunch of multi-party computation protocols, and the trapdoor can be shared between different parties. Each party generates its local trapdoor, they run some multi-party computation protocol, and the final output is an SRS where F is applied to a master trapdoor, the master trapdoor being some combination of the individual trapdoors of the parties.
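The MPC idea above can be sketched in a few lines. This is only a toy model with illustrative parameters: F(tau) is taken to be g^tau in a small multiplicative group, and the "protocol" is just sequential exponentiation, so that the combined result equals F applied to the product of the local trapdoors.

```python
# Toy sketch (not a real MPC protocol): each party holds a local
# trapdoor tau_i, and the net effect is as if F(tau) = g^tau had been
# applied to the master trapdoor tau = tau_1 * tau_2 * tau_3.
import random

p = 2**127 - 1            # toy prime modulus (a Mersenne prime)
g = 3                     # toy generator

local = [random.randrange(1, p - 1) for _ in range(3)]   # local trapdoors

# sequential "protocol": each party exponentiates the running value
srs = g
for tau in local:
    srs = pow(srs, tau, p)

# same result as applying F to the master trapdoor directly
master = 1
for tau in local:
    master = (master * tau) % (p - 1)   # exponents live mod the group order
assert srs == pow(g, master, p)
```

No single party ever sees the master trapdoor; each only knows its own factor, which is the point of sharing the trapdoor.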
Typically you get the property that if some fraction of the parties is honest, then the final trapdoor does not leak and the SRS has the correct structure. Unfortunately, this is quite inconvenient to run with a large number of parties, because they all have to be available throughout the whole protocol, which might take quite a while. Also, multi-party computation protocols are not that efficient with a huge number of parties, which is what you would perhaps have in a blockchain setting. In response to that, some specialized multi-party computation protocols have been proposed which are a bit more efficient, but they are still quite cumbersome to use in practice. Another possibility, which is a little bit better, is known as updatability. These are essentially specialized SNARKs where the multi-party computation protocol has only one round. What do I mean by this? Each party generates its own trapdoor; the first party produces some SRS together with an update proof, the second party updates the SRS and generates its own update proof, and so on. The update proof is something that lets us verify that the update was done correctly. In these protocols we rely on only one honest party, and in order to verify the final SRS, you just need the very last SRS and the intermediate update proofs, which are typically short in these protocols. There are now many works in this model, like Sonic, Marlin, and PLONK, but in general they are a little bit less efficient than the non-updatable SNARKs. So it would still be interesting to somehow make those non-updatable SNARKs updatable as well. And finally, there's a solution by Bowe, Gabizon, and Miers, which is a player-exchangeable MPC.
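The one-round update chain just described can be sketched as follows. This is a symbolic toy model: real schemes work over pairing groups where exponents are hidden, while here we cheat and track exponents directly mod a prime q, with `pairing(a, b)` standing in for e(g^a, h^b). Each update proof carries the new accumulated SRS value, so the whole chain can be checked from the proofs alone.

```python
# Toy sketch of one-round updatability: party i raises the running SRS
# element to its local trapdoor and publishes an update proof.  All
# names and the exponent-tracking "group" are illustrative only.
import random

q = 2**127 - 1                    # toy group order

def pairing(a, b):
    # stand-in for the bilinear map e(g^a, h^b) = e(g, h)^(a*b)
    return (a * b) % q

def update(srs, tau):
    # returns the new SRS element and an update proof:
    # (new accumulated value, tau "in G1", tau "in G2")
    new = (srs * tau) % q
    return new, (new, tau, tau)

def verify_chain(proofs):
    # each proof carries the new accumulated value, so consecutive
    # proofs can be linked; only the proofs (and final SRS) are needed
    prev = 1                      # initial default SRS element
    for new, tau_g, tau_h in proofs:
        if pairing(new, 1) != pairing(prev, tau_h):
            return False
        prev = new
    return True

proofs = []
srs = 1
for _ in range(3):                # three updaters; one honest one suffices
    srs, rho = update(srs, random.randrange(1, q))
    proofs.append(rho)
assert verify_chain(proofs)
```

Tampering with any intermediate value breaks the corresponding pairing check, which is exactly what the update proofs are for.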
The basic idea is that in this type of protocol, the trapdoor doesn't have to be committed to at the beginning of the protocol; instead they use a random beacon, which is like an extra party in the protocol. In each round of the protocol this random beacon also participates, and we simply assume that the random beacon behaves correctly and produces random numbers for us. The output of the protocol is essentially the same as in the usual MPC protocols, but what is nice here is that the parties can be different in different rounds of the protocol. So we are no longer relying on the same party being available in both the first phase and the second phase of the protocol. This is a lot more convenient if you have a lot of parties in a multi-party computation protocol, especially in the blockchain setting. This protocol of Bowe, Gabizon, and Miers has been used quite extensively, for example with the Zcash protocol, the Aztec protocol, Filecoin, Semaphore, Loopring, and so on, with quite some success. So let's look a bit more closely at what this random beacon is. The idea of the random beacon is that after certain time intervals it produces a new random number, and the reason it is useful in the previous protocol is that we can use this previously unknown random number to randomize the current state of the SRS. Unfortunately, it is quite complicated to construct secure random beacons. There's a heuristic approach which has been used in the blockchain world, which is to just take a hash of a relatively recent blockchain block and declare that to be the new random output of the beacon. But this is only a heuristic; if you actually want to do it securely, it is a very challenging problem. So ideally, we would not want to use a random beacon at all. So what is our contribution here?
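The heuristic beacon mentioned above amounts to hashing a recent block. A minimal sketch, with a placeholder byte string standing in for a real block header:

```python
# Sketch of the heuristic "hash a recent block" beacon.  The header
# below is a placeholder; and, as stressed in the talk, this approach
# is only a heuristic, not a provably secure random beacon.
import hashlib

recent_block_header = b"placeholder for a recent blockchain block header"
beacon_output = hashlib.sha256(recent_block_header).digest()
assert len(beacon_output) == 32   # 256 bits of heuristic "randomness"
```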
Firstly, we propose a security framework for non-interactive arguments which have a ceremony protocol, that is, a protocol for setting up the SRS. You can see this as a generalization of the notion of updatability, if you are familiar with that one. Then we propose a simplified version of the Groth16 ceremony, essentially a simplified version of the Bowe-Gabizon-Miers protocol, which does not use a random beacon; we completely eliminate this assumption. And finally, we give a proof of security in a mixed model of the algebraic group model and the random oracle model. This is a technically quite advanced security proof, which we will very briefly go over as well. So let's take a look at the security framework that we propose. Let's start with the ceremony. Firstly, we have our prover algorithm and the verifier algorithm as usual. But the SRS is now split into independently updatable parts: SRS_u, which we call the universal SRS, and SRS_s, which is the specialized SRS. The idea is that SRS_u is independent of the concrete relation that we are trying to prove, while SRS_s already depends on the relation. Then we have an update algorithm, which is used to update the SRS. It takes in the previous SRS, a set of update proofs, and an index, either u or s, depending on which part of the SRS we want to update, and it produces a new version of the SRS and an update proof for that SRS. Then we have an SRS verification algorithm, which takes as input an SRS and a set of update proofs, and either accepts or rejects. So what does the model actually look like? Basically, we do the updating in two phases. First, we update the universal SRS, the part without a specific relation, and this goes as in the usual updatability model: the first party updates and produces a proof, and so on. Then we finish the universal updating phase and fix the universal SRS. And now we have, let's say, some relation that we want to use.
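The two-part SRS and the update/verify interface above can be sketched like this. All names are hypothetical, and the "group elements" are again plain exponents mod q (a symbolic toy model); in particular, a real update proof hides the trapdoor share tau behind group elements rather than exposing it as below.

```python
# Interface sketch of the ceremony framework (hypothetical names).
# The SRS splits into a universal part srs_u and a specialized part
# srs_s; Update takes the phase index "u" or "s".
import random

q = 2**127 - 1

def update(srs, index, tau):
    srs_u, srs_s = srs
    if index == "u":                      # phase 1: universal part
        srs_u = (srs_u * tau) % q
    else:                                 # phase 2: specialized part
        srs_s = (srs_s * tau) % q
    rho = (index, (srs_u, srs_s), tau)    # toy update proof (exposes tau!)
    return (srs_u, srs_s), rho

def verify_srs(srs, proofs):
    # replay the chain of update proofs from the default SRS
    cur = (1, 1)
    for index, claimed, tau in proofs:
        cur, _ = update(cur, index, tau)
        if cur != claimed:
            return False
    return cur == srs

srs, proofs = (1, 1), []
for index in ["u", "u", "s"]:             # two universal updates, then one specialized
    srs, rho = update(srs, index, random.randrange(1, q))
    proofs.append(rho)
assert verify_srs(srs, proofs)
```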
We take the universal SRS and generate the initial specialized SRS, the SRS which depends on the relation, and we start updating this. As you see here, the parties in the first phase and the second phase don't have to be the same. From the updatability perspective, we argue that there is actually not much difference between working in this type of updatability model, where you fix the universal SRS at some point and then go on with the specialized SRS, and the usual notion of updatability. The final SRS is just the combination of the final universal SRS and the final specialized SRS. Security-wise, we want completeness in two flavors, actually. We want update completeness, which essentially says that if you have some SRS which verifies and we make an update, then the updated SRS will also verify. And we want prover completeness, which is essentially the usual notion of completeness in non-interactive zero-knowledge protocols: if the SRS verifies and the honest prover generates a proof, then the verifier will accept the proof. These are just properties which guarantee that the protocol even makes sense. Then we require update knowledge soundness. What this says is that for every efficient adversary A there exists an efficient extractor such that if the adversary outputs a statement x and some proof pi, and the proof verifies, then the extractor is able to extract the witness from the view of A, that is, from all the internal knowledge that the adversary has. The SRS here is generated by interacting with the challenger. Essentially, what we allow is that in the first phase of updating, the adversary can request the challenger to update his SRS, and by the end of the universal phase, the adversary submits some SRS_u together with some update proofs such that at least one of those update proofs was generated by the challenger.
So essentially there has to be one honest update. The second phase works basically the same way: now some SRS_u is already fixed, the adversary can request updates for the specialized SRS, and by the end of the phase he submits some specialized SRS_s and update proofs such that at least one of the updates was done honestly by the challenger. Then we borrow the notion of subversion zero-knowledge, which was introduced by Bellare, Fuchsbauer, and Scafuro. The idea there is that the adversary is allowed to pick any SRS and set of update proofs such that they verify, and in that case there must exist an efficient extractor that can extract a trapdoor from the SRS. This trapdoor can then be used to simulate proofs that are indistinguishable from honest proofs. So this is our security model. Now let's look at what the ceremony protocol for Groth16 roughly looks like. Groth16, as I mentioned before, is the SNARK with one of the shortest proofs so far. We can split its SRS into a universal part and a specialized part. Here G is a generator of one of the pairing groups, H is a generator of the other pairing group, and x, alpha, and beta are integers which form the trapdoor. The specialized SRS looks somewhat similar, but what we can notice is that the universal SRS has essentially only monomials in the exponent, whereas the specialized SRS does not necessarily have monomials: there we see some much more complicated polynomials. How do we update this? In the universal SRS, say we want to update an element of the form G to the power x to the i; an updater would pick some x', raise this element to the power x' to the i, and the updated element then carries the new trapdoor x times x'. If you want to update the second phase, then x, alpha, and beta are already fixed and we are only updating delta.
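The monomial update just described can be sketched concretely. As in the earlier sketches, this toy model tracks exponents directly mod q, so the list entry at position i stands for the group element G^(x^i); raising that element to (x')^i yields the honest SRS for the combined trapdoor x·x'.

```python
# Toy sketch of updating the monomial part of the universal SRS: the
# element carrying x^i is raised to (x')^i, so the new trapdoor is x*x'.
# Parameters are illustrative only.
import random

q = 2**127 - 1
n = 4                                             # toy SRS length

x = random.randrange(1, q)                        # current trapdoor
srs_u = [pow(x, i, q) for i in range(n)]          # stands for G^(x^i)

x_prime = random.randrange(1, q)                  # updater's fresh share
updated = [(e * pow(x_prime, i, q)) % q for i, e in enumerate(srs_u)]

# the updated SRS is exactly the honest SRS for the trapdoor x*x'
x_new = (x * x_prime) % q
assert updated == [pow(x_new, i, q) for i in range(n)]
```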
The updater would pick some delta' and again exponentiate, say, this element by 1/delta', which means that the delta here gets updated. Beyond that, we also need to include an update proof, which allows us to verify that the update was done correctly. What this proof contains, in the case of the trapdoor element x for example, is G to the power x times x', G to the power x', and H to the power x', where x' is the new trapdoor share and x is the previous trapdoor. And then we also have a proof of knowledge for x'; this is something we need for the security proof. The verification looks something like this, where e is the pairing operation: we can verify that G to the x times x' was computed correctly with a pairing equation like this one, and then we also verify the proof of knowledge. This proof of knowledge is a very specialized protocol, and I won't go into much detail here. So far, the protocol I have introduced is essentially the same as the Bowe-Gabizon-Miers protocol, but here we already deviate a bit: we require stronger security properties from the proof-of-knowledge part. What we essentially show in the paper is that this proof of knowledge has to satisfy zero-knowledge and a very specific flavor of soundness, namely straight-line simulation extractability. This is a very strong notion of soundness. In addition, the soundness adversary is allowed to ask the challenger for a group element, as you can see here, where x_1 to x_n are some secret values and f is some polynomial chosen by the adversary. Additionally, we allow the adversary to query the random oracle. We prove this in a model which combines the random oracle model and the algebraic group model, which I will introduce now. So let's look briefly at our security model and the security proof.
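The pairing checks on the update proof can be sketched in the same toy exponent model as before, with `pairing(a, b)` standing in for e(g^a, h^b). For the proof rho = (G^(x·x'), G^(x'), H^(x')), the new element is checked against the previous SRS element, and the two copies of x' are checked against each other; the separate proof of knowledge for x' is not modeled here.

```python
# Toy sketch of the pairing verification of an update proof
# rho = (g^(x*x'), g^(x'), h^(x')).  Exponents are tracked directly
# mod q (symbolic model); names and parameters are illustrative.
import random

q = 2**127 - 1

def pairing(a, b):
    # stand-in for e(g^a, h^b) = e(g, h)^(a*b)
    return (a * b) % q

x = random.randrange(1, q)                    # previous trapdoor
x_prime = random.randrange(1, q)              # updater's fresh share

prev = x                                      # g^x in the old SRS
rho = ((x * x_prime) % q, x_prime, x_prime)   # (g^(x x'), g^(x'), h^(x'))
new, a_g, a_h = rho

# e(g^(x x'), h) == e(g^x, h^(x'))  -- new value consistent with old one
assert pairing(new, 1) == pairing(prev, a_h)
# e(g^(x'), h) == e(g, h^(x'))      -- the two copies of x' agree
assert pairing(a_g, 1) == pairing(1, a_h)
```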
I will give a very brief overview of our main security proof. But before that, what is the algebraic group model? Here we have an adversary and a challenger. In the algebraic group model, the challenger can give the adversary some group elements, and whenever the adversary outputs a group element A, he also has to output integers which are a linear representation of A in terms of those group elements. What I mean by this is that A has to satisfy a relation of this type with respect to the elements he has received so far. The challenger can then send some new set of elements, and again, when the adversary outputs some group element B, he also has to provide integers d_1 to d_m satisfying the same type of linear relation, but notice that these can now depend both on the elements sent in the first round and on the ones sent in the second round. This is what we assume about the adversary. Essentially, this is a restricted model of computation where the adversary is forced to output something that he would usually not be forced to output, but it has become very widely used in the pairing-based setting. Additionally, we assume a random oracle where the adversary can input some bit string x and in return gets a uniformly random group element y. The security of our main result depends on the (q_1, q_2)-discrete logarithm assumption, where basically the challenger picks a random exponent z and sends the group elements that you see on the screen, and the adversary is supposed to find the integer z in the exponent. So it is the usual discrete logarithm assumption, except that the adversary gets more information than usual. The statement of our main theorem is that if the (2n-1, 2n-2)-discrete logarithm assumption holds and we work in this combined model of the algebraic group model and the random oracle model, then we achieve update knowledge soundness for the ceremonial Groth16 SNARK, that is, Groth16 together with this updating protocol.
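The algebraic-adversary requirement can be sketched as a simple consistency check. In the symbolic toy model below, "group elements" are exponents mod q, and the linear representation in the exponent corresponds to writing the output element as a product of received elements raised to the claimed coefficients.

```python
# Toy sketch of an algebraic adversary: whenever it outputs a group
# element, it must also output coefficients expressing that element as
# a combination of the elements received so far.  Exponents are
# tracked directly mod q (symbolic model); names are illustrative.
import random

q = 2**127 - 1

received = [random.randrange(q) for _ in range(3)]   # challenger's elements

# an algebraic adversary's output: coefficients plus the element they define
coeffs = [random.randrange(q) for _ in received]
element = sum(c * z for c, z in zip(coeffs, received)) % q

def check_representation(element, coeffs, received):
    # does the claimed representation actually produce the element?
    return element == sum(c * z for c, z in zip(coeffs, received)) % q

assert check_representation(element, coeffs, received)
```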
Notice that here we do not require a random beacon, which is one of the main achievements of this paper. The n here stands for the number of multiplication gates in the circuit of the relation. I will very briefly go over the high-level strategy of our main security proof, and this will end the talk. Essentially the idea is that we pick some algebraic adversary A, and then we construct an extractor. This extractor outputs specific coefficients, the algebraic representation coefficients that I showed before, which we claim will correspond to the witness. The rest of the proof argues that this output of the extractor is indeed the witness. We use several games. Game zero is the original update knowledge soundness game with the adversary A and the extractor that I just described. Within a given phase of the setup, we call the critical query the last honest update, that is, the last update made by the challenger. In game one, we try to guess the index of the critical query before the beginning of the protocol; call this guess i. If we guess correctly, then on the i-th query the challenger will not update the SRS proposed by the adversary, but will instead generate a completely fresh SRS, which is unbiased, and simulate the update proof. What this guarantees for us is that the SRS from this point on has not been biased at all by the adversary, because we generated it freshly. We use the zero-knowledge and straight-line simulation extractability properties of the proof-of-knowledge subprotocol to show that these two games are indistinguishable. The algebraic group model part of the proof then looks something like this.
We define a polynomial Q in the trapdoor variables, corresponding to x, alpha, beta, and delta, which, when you plug in the trapdoors at the guessed index, evaluates to zero exactly when the verifier accepts. This Q is a very complicated polynomial, which depends on the verification equation, all the intermediate SRS states, the random oracle responses given to the adversary, and the algebraic coefficients of the adversary. But it has the property that it is zero exactly when we accept the proof. Now the proof branches into two parts. In the first part, we suppose that this polynomial Q is not zero, and we show that we can then do a reduction to this variant of the discrete logarithm assumption. And if it is zero, then we show, by arguing about the structure of this polynomial, that whatever the extractor outputs is actually the witness. This is a very technical part of the proof, but it indeed works out. So much about the proof; it was by far the hardest part of this paper, and I think it took us a couple of months to get it correct. The main takeaway of this paper is that even non-updatable SNARKs can be made updatable if we allow multiple phases, and in practice there seems to be very little difference between one-phase and two-phase updating. However, what is actually important is that one-phase updatable SNARKs are also typically universal, and by universal I mean that the SRS does not depend on the relation. This might be much more difficult to achieve. But there is a protocol called MIRAGE where they modify Groth16 to also obtain universality, although this is already quite a big modification, I would say. If you apply this paper together with MIRAGE, then you can get a version of Groth16 which is both universal and updatable. So, thank you for listening.