Okay, next, Matthias is going to present Stacking Sigmas, a framework to compose Sigma protocols for disjunctions. It is joint work with Aarushi Goel, Matthew Green, Matthias himself, and Gabriel Kaptchuk. Yes, so I'm going to present Stacking Sigmas. First I'm going to do a bit of an introduction: the notation for Sigma protocols, some prior work, and where everything fits in. Then I'm going to introduce stackable Sigma protocols, which is the property that we need in order to be able to prove space-saving disjunctions. Then I'm going to give a primitive called partially binding commitments, and with that I'll be able to present our compiler. Then I'm going to talk about how we get logarithmic communication by recursively applying the compiler over and over again, and then finally some wrapping up. So let's get started. A Sigma protocol is a three-round protocol; it consists of a triple of algorithms (A, Z, φ), such that the prover starts by generating the first-round message, the commitment, using the statement, the witness, and some random tape. It sends this to the verifier, the verifier samples a uniformly random challenge and sends it to the prover, and then the prover finishes the transcript by generating the last-round message using the witness again and the challenge, and the verifier can check that the transcript accepts using φ. So what's the goal of this talk? The goal of this talk is to have zero-knowledge proofs for disjunctive statements, where you may want to prove that the first statement is in the first language, or the second statement is in the second language, and so on, up to the L-th statement and the L-th language.
And what we would like is that if proving that a statement is in some language using a protocol Π_i requires some communication cc(Π_i), then we can derive a protocol for the disjunctive relation, where the communication of the disjunctive protocol is significantly less than the concatenation of all the transcripts. What are some applications of this? Applications include ring signatures, anonymous communication, and witness indistinguishability from honest-verifier zero-knowledge. Okay, so there's been a bunch of prior work on doing disjunctions of Sigma protocols, on space-saving disjunctions of such protocols, and generally it falls into two categories: either you have some generic compiler for Sigma protocols, and then the communication is just the concatenation of all the transcripts; or you can get some communication saving, but only for a particularly constructed protocol, so you deliberately design a protocol to get this space-saving. This work falls somewhere in between the two. What we do is get space-saving disjunctions for a large class of Sigma protocols, not all Sigma protocols, where the communication is logarithmic in the number of clauses. Furthermore, our compiler conserves concrete efficiency, so it's real-world, you can use it. Note that we can't do this for all Sigma protocols; we can't use the definition of a Sigma protocol black-box. However, it applies to a very, very wide class of Sigma protocols; here are just a small handful of them. If it's of the form of proving preimages of a homomorphism using the classic protocols, or if it's MPC-in-the-head, then you can probably do it. It also includes the classical protocols, like the one for graph Hamiltonicity and others, which rely on no algebraic structure of the language. So which Sigma protocols does it apply to?
So in order to build some intuition, let's start with the Sigma protocol for proving preimages of a one-way homomorphism. In this protocol, of which Schnorr is an instance, the prover starts by sampling a random element of the domain. He then applies the homomorphism and sends the image to the verifier, and the verifier samples a challenge. The prover computes a linear combination on the preimage side and sends this random preimage to the verifier, which then applies the homomorphism to verify that it's a valid preimage. Okay, so how do you simulate such a protocol? If you gave this to a student and asked him to write a simulator, it works like this: you sample the third-round message independently of the statement x, just a random element from the domain, and then you compute the accepting commitment for this last-round message given the challenge. And the observation that we make in this paper is that in many Sigma protocols, simulation works similarly. Namely, you sample some third-round message from a distribution which may depend on the challenge, but doesn't depend on the statement, or perhaps depends on some notion of size for the statement, but not the particular statement. And then, for a particular statement, you can finish the transcript: you can compute the commitment that makes it accepting for c. So, for instance, in many MPC-in-the-head protocols, the view of the opened parties, which is the final z, can often just be seen as a string of uniformly random ring elements: it's the communication from the other parties, and it's usually just uniformly random. Okay, so let's formalize this slightly. We say that a Sigma protocol has recyclable third-round messages
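To make the simulation idea just described concrete, here is a minimal sketch of the preimage protocol and its simulator for the homomorphism x ↦ g^x, i.e. Schnorr, over a toy subgroup of Z_p*. The group parameters (p = 1019, q = 509, g = 4) are illustrative only and far too small for real use.

```python
import secrets

p, q, g = 1019, 509, 4    # toy group: g generates the order-q subgroup of Z_p*

w = secrets.randbelow(q)  # witness: the preimage
y = pow(g, w, p)          # statement: the image y = g^w

# Honest prover / verifier interaction.
r = secrets.randbelow(q)
a = pow(g, r, p)                  # first-round message: image of a random domain element
c = secrets.randbelow(q)          # verifier's uniformly random challenge
z = (r + c * w) % q               # last-round message: linear combination on the preimage side
assert pow(g, z, p) == (a * pow(y, c, p)) % p   # verifier applies the homomorphism

# Simulator: z is sampled independently of the statement (the "recyclable"
# property), and the accepting first-round message is then computed
# deterministically from (y, c, z).
z_sim = secrets.randbelow(q)
a_sim = (pow(g, z_sim, p) * pow(y, -c, p)) % p  # a = g^z * y^(-c)
assert pow(g, z_sim, p) == (a_sim * pow(y, c, p)) % p
```

Note how the simulated transcript passes the same verification check, and how a_sim is a deterministic function of the statement, the challenge, and the random last-round message; this is exactly the shape of simulator the next definition captures.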
if the distribution of the last-round message does not depend on the statement; it can depend on the challenge, but not the statement. For the second property, we say that a Sigma protocol is extended honest-verifier zero-knowledge if, when I give you a statement, a challenge c, and a random last-round message, you can deterministically compute a first-round message that makes the transcript accepting; this is where it deviates from the normal definition of special honest-verifier zero-knowledge. If both are satisfied, we call the protocol stackable, which means that our techniques apply. Okay. So here's the overall idea, which doesn't quite work yet; there's a piece missing. The idea is that when you have a protocol of this form, suppose that we have a prover with a witness who wants to prove that either the witness and the first statement are in the relation, or the witness and the second statement are in the relation. He proves the satisfied clause using his witness, so he obtains a transcript for the satisfied clause. And then he applies the simulator for the other clause using the same last-round message. So he has the same challenge, the same last-round message, and two different first-round messages. Unfortunately, there's a chicken-and-egg problem here that means this particular example does not quite work. Namely, the prover cannot generate the first-round message for a_2, because he doesn't have a witness; he needs to see c to simulate. At the same time, he cannot send a_1, because that would reveal which clause is satisfied. And obviously the verifier cannot send the challenge before seeing a_1, because otherwise the prover could just simulate both transcripts.
In order to get around this chicken-and-egg problem, we introduce the notion of partially binding commitments. This is a commitment scheme in which a committer, which is going to be the prover, can commit to a tuple of two values and some index i, in such a way that he can later equivocate at the position that is not i, but not at the i-th position. So for instance, if i is one, he can equivocate on v_2, but not on v_1; you can equivocate on exactly one of the two positions. The crucial observation here is that when the committer eventually opens this commitment, the verifier does not learn which position was binding. Intuitively, this will enable the prover to commit to just one of the two first-round messages, and the verifier will never know which one. And now I would like to give you a very, very simple construction, just from Pedersen commitments. In this case, the setup is two generators, G and H, where you don't know the discrete log between them. I'm going to generate a commitment key and an equivocation key: the commitment key is just whatever you're going to use to commit, and the equivocation key will let you equivocate in exactly one of the two positions. You generate this by selecting the equivocation key as just a random trapdoor, a scalar. And you form the H for the other position: if i is one, you set H_2 to G raised to the trapdoor. And then you pick H_1 such that when I multiply H_1 and H_2, I get H. The commitment key is then just H_1; in order to commit, I simply recompute H_2 from H, which was given in the setup, and H_1.
And then I'm just going to commit using two individual Pedersen commitments: I form the final commitment as G to some randomness times H_1 to v_1, and G to some other randomness times H_2 to v_2. How do you equivocate? Well, you equivocate like you normally would for a Pedersen commitment, so you can freely change the value in the equivocable position. It is also not hard to see that if you could equivocate in both positions, then, via the standard reduction to the binding of Pedersen commitments, you would get the discrete logs of both H_1 and H_2 with respect to G, and then, by the relation between the two, you would get the discrete log of H. Okay. So now, with this very simple tool, the new idea is this: the prover generates an equivocation key that is binding in the position of the clause for which he actually has a witness. He sends the commitment key together with a commitment to the first-round message for that clause and some garbage in the other position. Then the verifier sends the challenge, the prover finishes the transcript for the clause for which he actually has a witness, and then he simulates the other transcript using the challenge. And finally, he equivocates such that he opens the commitment to the two first-round messages. Okay, in slightly more detail: the prover generates a commitment key and equivocation key, for i equal to one in this case, this being the clause for which he has a witness. Then he generates the first-round message and sends over the commitment. He gets a challenge. He runs Z to get the final message of the first transcript, he simulates the second transcript, and he equivocates using the randomness of the original commitment and his newly simulated first-round message.
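As a sanity check on the construction just described, here is a minimal sketch of the 1-out-of-2 partially binding commitment from Pedersen commitments, over the same toy subgroup as before (p = 1019, q = 509, g = 4, all illustrative). Committed values are treated as exponents mod q, h stands for the setup element H whose discrete log must be unknown (here it is merely sampled, which a real setup would not do), and the function names are my own.

```python
import secrets

p, q, g = 1019, 509, 4               # toy group: g generates the order-q subgroup of Z_p*
h = pow(g, secrets.randbelow(q), p)  # setup element H; its discrete log is assumed unknown

def gen(i):
    """Key binding at position i (1 or 2), equivocable at the other position."""
    t = secrets.randbelow(q)                 # trapdoor: discrete log of the equivocable H_j
    h_eq = pow(g, t, p)                      # H for the equivocable position
    h_bind = (h * pow(h_eq, -1, p)) % p      # chosen so that H_1 * H_2 = H
    h1 = h_bind if i == 1 else h_eq
    return h1, (i, t)                        # commitment key H_1; equivocation key (i, t)

def commit(h1, v1, v2):
    """Two plain Pedersen commitments; H_2 is recomputed from the setup."""
    h2 = (h * pow(h1, -1, p)) % p
    r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
    c = (pow(g, r1, p) * pow(h1, v1, p) % p,
         pow(g, r2, p) * pow(h2, v2, p) % p)
    return c, (r1, r2)

def equivocate(ek, rs, vs_old, vs_new):
    """Reopen the non-binding slot to a new value by shifting its randomness."""
    i, t = ek
    j = 2 - i                                # 0-based index of the equivocable slot
    r = list(rs)
    r[j] = (r[j] + t * (vs_old[j] - vs_new[j])) % q
    return tuple(r)

def verify(h1, c, rs, vs):
    h2 = (h * pow(h1, -1, p)) % p
    return c == (pow(g, rs[0], p) * pow(h1, vs[0], p) % p,
                 pow(g, rs[1], p) * pow(h2, vs[1], p) % p)
```

A quick run of the intended flow: gen(1), commit to (a_1, garbage), then equivocate the second slot to the simulated value; verify accepts the new opening, and the verifier only ever sees H_1, which reveals nothing about which slot held the trapdoor.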
And he sends over the last-round messages and the randomness after equivocation. The verifier can regenerate the two first-round messages and check that they match the opening of the commitment. Okay. So at this point, we have a not-super-exciting compiler: it takes a Sigma protocol and gives you a two-clause disjunction with one last-round message instead of the two you would have with, say, CDS. But what's the trick now? The trick is that this compiler takes a stackable Sigma protocol for a language with some communication complexity cc(Π), and it gives you a stackable Sigma protocol for the language of disjunctions, so x_1 is in L or x_2 is in L, where the communication overhead is essentially just that of the partially binding commitment scheme. Now the trick is to say: if I have a disjunction of four clauses, I can view it as a disjunction of disjunctions of two clauses. And I already have a Sigma protocol for two-clause disjunctions. So I can simply take the protocol that I was given, Π prime, and stack it again: I feed it into the compiler again. The second time it comes out of the compiler, I'm going to have cc of the original Π plus two times the overhead of the partially binding commitment scheme. So in general I end up with log_2 of L times some small constant times the security parameter, plus the communication complexity of the original protocol. Lastly, before I wrap up, I would like to describe a slight generalization. It turns out that you can generalize this to distinct protocols, as long as they share the same distribution over last-round messages, or the distributions are indistinguishable.
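The recursion just described gives a simple closed form: stacking log_2(L) times costs the one base transcript plus log_2(L) times the commitment overhead, versus L transcripts for naive concatenation. A quick back-of-the-envelope check, with purely hypothetical sizes:

```python
import math

def stacked_cc(base_cc, overhead, clauses):
    """Communication after recursively applying the 2-way compiler log2(clauses) times."""
    return base_cc + int(math.log2(clauses)) * overhead

# Hypothetical sizes in bytes: one base-protocol transcript, and the
# per-level cost of one partially binding commitment plus its opening.
base, ovh = 1000, 64
for ell in (2, 4, 64, 1024):
    print(ell, stacked_cc(base, ovh, ell), ell * base)  # clauses, stacked, naive
```

For 1024 clauses this gives 1000 + 10 * 64 = 1640 bytes stacked, against 1,024,000 bytes for concatenation, which is the logarithmic-versus-linear gap the talk claims.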
Or in the case where you can trivially convert one into the other by packing or padding the last-round message. How does this work? Informally, you simply finish the transcript for the protocol execution that you do have a witness for, you obtain the transcript as before, and then you simulate the other using the same z, just as in the case of a single protocol. So for instance, if you have an instance of KKW over Boolean circuits and an instance of KKW over a larger ring, where the size of the ring is some power of two, then in both cases you can view the z as just being a string of random bits, because you can pack the uniformly random ring elements as uniformly random bits. In this case you can do a disjunction between two parallel executions of the protocol. Finally, I would encourage you to see the full paper, and I would like to say thank you for your attention. Okay, do we have any questions? So it's very cool that you get this result for protocols like KKW. I'm curious, though: can you say anything about composition, in the sense that some parts of the circuit will be stacked and others will not be stacked? Oh, we have thought about that, but it's not super easy. I mean, no, not really; we've thought about it, but we don't have an elegant solution. Okay, thank you. Any other questions? Okay then, let's thank the speaker again.