In the next talk, we're going to hear about an efficiency-preserving transformation from honest-verifier statistical zero-knowledge to statistical zero-knowledge. This is by Pavel Hubáček, Alon Rosen, and Margarita Vald, and Rita will be giving the talk. Thank you, Niel. This is joint work with Pavel Hubáček and Alon Rosen. So what we do in this work is construct statistical zero-knowledge proofs that are as efficient as the best honest-verifier statistical zero-knowledge proofs. And we do that via a tool that we define called instance-dependent statistical zero-knowledge, which is a zero-knowledge analog of instance-dependent commitments. So let's first define what statistical zero-knowledge is. Statistical zero-knowledge is the class of promise problems that have a statistical zero-knowledge proof. What is a promise problem? It consists of two sets, the yes instances and the no instances. The sets are disjoint but not necessarily complements of each other. In this setting, we have a prover and a verifier that are given an instance x, and the prover is trying to convince the verifier that x is a yes instance. They run the protocol, and eventually the verifier outputs accept or reject. The guarantees of statistical zero-knowledge proofs are the following. The first guarantee is completeness: for any yes instance, if the prover and the verifier follow the protocol, the verifier accepts. The second guarantee, called soundness, means that for any no instance and any unbounded cheating prover, the verifier should not accept the proof. And the third property, called statistical zero-knowledge, is captured by having an efficient simulator that manages to simulate the transcript distribution in a way that is statistically close to the real distribution of transcripts. And this holds against any malicious, computationally bounded verifier.
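To make the "statistically close" requirement concrete, here is a minimal sketch of statistical (total variation) distance between two finite distributions; the function name and the toy transcript distributions are illustrative additions, not part of the talk.

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions,
    given as {outcome: probability} dictionaries."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# A simulator is "statistically close" to the real interaction if the
# distance between the two transcript distributions is negligible.
real = {"t1": 0.5, "t2": 0.5}
simulated = {"t1": 0.49, "t2": 0.51}
print(statistical_distance(real, simulated))  # ~0.01
```

The zero-knowledge condition of the talk asks that this distance, taken over the real and simulated transcript distributions, be a negligible function of the instance size.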
This is actually a very strong requirement, because here the simulation is a statistical, information-theoretic requirement. Notice that the guarantees of the proof are only with respect to instances in pi; if we're given some string that is not in pi, there is no guarantee whatsoever. For the rest of the talk, when I say statistical zero-knowledge, I will mean that zero-knowledge holds against any malicious verifier that is computationally bounded. And if I say honest-verifier SZK, it will mean that zero-knowledge holds only against the single honest verifier. Okay, good. So our goal is to construct efficient statistical zero-knowledge proofs, and by efficiency we mean minimizing the interaction, but also we would like the prover overhead to be minimal. In that sense, we would like the complexity of proving the statement to be as close as possible to the complexity of solving the instance, okay? Okay, good. So the classical approach, going back to the 90s, is to transform honest-verifier statistical zero-knowledge to general statistical zero-knowledge. It started with transformations under computational assumptions, by Bellare, Micali, and Ostrovsky and others. Later, a second type of transformation went via public coin, but the drawback of this transformation is that it does not preserve the round complexity of the honest-verifier protocol. And the third type is a transformation by Ong and Vadhan. It gives us a constant-round statistical zero-knowledge protocol, and this transformation goes through the Arthur-Merlin class, okay? Both the second and the third transformations are unconditional, not under computational assumptions, but the problem with these transformations is that they introduce a blow-up in prover complexity. And actually, it was shown that this blow-up in prover complexity is unavoidable.
Any transformation that goes through public coin will not preserve the prover efficiency of the honest-verifier protocol, okay? So a natural question to ask at this point is: can we have a transformation that is unconditional and fully preserves the efficiency of the honest-verifier statistical zero-knowledge proof? So what we do in this work is answer this question: we show a transformation that, given any honest-verifier statistical zero-knowledge proof, converts it into a full-fledged statistical zero-knowledge proof while maintaining the efficiency of the original protocol. What do we mean by efficiency? The prover efficiency and the verifier efficiency are preserved, the round complexity is preserved, and the transformation is unconditional. What this gives us, basically, is that we can focus on constructing efficient honest-verifier statistical zero-knowledge proofs and just plug them into the transformation, which preserves their efficiency, okay? And constructing honest-verifier statistical zero-knowledge proofs is potentially a much simpler task: we need to prove zero-knowledge against a single verifier, the honest one. So this is one of our results. Another result: we show a concrete protocol for a statistical zero-knowledge complete problem. More formally, we show a constant-round statistical zero-knowledge proof for a problem called Statistical Difference, and this problem is SZK-complete. So this basically gives us a constant-round statistical zero-knowledge proof for any problem in SZK, okay? And this is unconditional as well. In this context, the previously known result was also a constant-round statistical zero-knowledge proof for the same problem, but the zero-knowledge was only against the honest verifier, okay?
So in the rest of the talk, I will explain how to construct these statistical zero-knowledge proofs and how to do it efficiently, okay? The high-level approach is the following. We have a promise problem pi that belongs to honest-verifier statistical zero-knowledge, so we are guaranteed an honest-verifier SZK protocol for pi. What we want is to immunize this honest-verifier protocol against malicious verifiers; that's the goal, okay? The natural approach would be, instead of letting the verifier pick the coins for the honest-verifier proof, to run a coin-tossing phase between the prover and the verifier, and the resulting coins will be used by the verifier in the honest-verifier protocol. If the honest-verifier protocol were public coin, this would be enough, because the prover, while executing the protocol, could verify that the verifier is using the coin-tossing result. So this is good, but actually most protocols are not public coin, and this is the problem. So we add another component: a proof of correct behavior. Here the verifier will need to play the role of the prover and prove that he is following the honest-verifier protocol, okay? So let's see what properties we need from these two components in order to achieve what we want, zero-knowledge against malicious verifiers. When the verifier is malicious, we want the coin-tossing result to be random and also binding, in the sense that it should bind the verifier to use these coins in the honest-verifier protocol later. Another property is that the proofs should be sound: if a proof is convincing, the statement should actually be true. On the other hand, when the prover is malicious, we want the coins to be statistically hidden from the prover, and we need statistical hiding because the prover is unbounded in our setting, okay?
And the second requirement is that we need the proof to be statistical zero-knowledge, and not only statistical zero-knowledge, but a stronger property: we need it to be zero-knowledge against an unbounded verifier, because the verifier in that proof is the unbounded prover in our setting. So let's first focus on the first component, the coin-tossing phase, and see how we can implement it. The natural way to do the coin tossing is just to use commitments: let the verifier commit to some random string, the prover picks his own random string and sends it in the clear, and then the verifier opens the commitment and uses the result in the honest-verifier protocol. And now, instead of requirements on the coin tossing, we have requirements on the commitment scheme: we need it to be binding and statistically hiding. And now the question is whether we can implement this unconditionally, because we want the transformation to be unconditional. The problem is that a commitment scheme cannot be unconditional, because commitments imply one-way functions. So this cannot be done. But if we take a closer look at the requirements, we notice that we actually need the statistical hiding property to hold only when the prover is malicious, and this is the case where we're dealing with no instances. And on the other hand, we need the commitment scheme to be binding only when the verifier is malicious, and in this case we're dealing with yes instances. So this actually gives us a relaxation of what we want from the commitment scheme: we don't need both properties to hold simultaneously. And we can ask again whether we can implement this relaxed commitment scheme. Here the answer is yes, and this is exactly what is called an instance-dependent commitment scheme.
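A minimal sketch of the commit-and-open coin tossing just described; the hash-based commitment here is a stand-in for illustration only (the transformation itself uses instance-dependent commitments, and nothing in this toy is unconditional).

```python
import hashlib
import secrets

def toy_commit(value: bytes, opening: bytes) -> bytes:
    # Stand-in commitment for illustration; NOT the instance-dependent
    # scheme the transformation actually relies on.
    return hashlib.sha256(opening + value).digest()

def coin_tossing(n_bytes: int = 16) -> bytes:
    # Verifier: commit to a random string r1.
    r1 = secrets.token_bytes(n_bytes)
    opening = secrets.token_bytes(16)
    c = toy_commit(r1, opening)
    # Prover: having seen only the commitment c, send r2 in the clear.
    r2 = secrets.token_bytes(n_bytes)
    # Verifier opens r1; the prover checks the opening against c.
    assert toy_commit(r1, opening) == c
    # The shared coins are the XOR of the two strings.
    return bytes(a ^ b for a, b in zip(r1, r2))

print(len(coin_tossing()))  # 16
```

If the commitment is hiding, the prover's r2 cannot depend on r1, and if it is binding, the verifier cannot change r1 after seeing r2, so the XOR is random whenever at least one party is honest.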
This was introduced by Bellare, Micali, and Ostrovsky. Formally, an instance-dependent commitment scheme for a promise problem pi is a family of commitment schemes, where each commitment scheme in the family is parameterized by an instance of pi, okay? The guarantees for this family are the following. If x is a no instance, then the corresponding commitment scheme is statistically binding, okay? And if x is a yes instance, then the corresponding commitment scheme is statistically hiding, okay? So basically, for no instances the commitment scheme is statistically binding, and for yes instances it is statistically hiding. But there is no guarantee that there exists an instance for which both properties hold, okay? Either this or this; actually, both cannot hold at once, okay, but great. So how can we construct instance-dependent commitments? It started with specific constructions for specific problems in honest-verifier SZK. Later, it was extended to all promise problems in honest-verifier SZK, but at the cost of making the committer inefficient. Later on, another instance-dependent commitment scheme was proposed, where the committer and the receiver are efficient, but the binding was relaxed; it doesn't matter exactly what it gave. But the last construction, by Ong and Vadhan, gave an instance-dependent commitment scheme that has an efficient committer, an efficient receiver, standard statistical binding, as we said, and is constant round and public coin, okay? The nice thing about it also is that statistical zero-knowledge is closed under complement, so we can actually reverse the properties of hiding and binding with respect to the instances, and have them the other way around. Okay, good. So this shows that we can actually implement the first component, and now we can focus on the second component.
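To illustrate the two modes of the definition, here is a toy sketch. Crucially, a real instance-dependent scheme (such as Ong and Vadhan's) derives the mode from the instance itself, whereas this toy switches on an explicit flag, which no real scheme can do; the helper names are ours.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def id_commit(message: bytes, instance_is_yes: bool):
    """Toy illustration of the instance-dependent commitment interface.
    Returns (commitment, opening)."""
    if instance_is_yes:
        # YES-instance mode: statistically hiding. A one-time pad hides
        # the message perfectly, but the committer could later claim a
        # different pad, so this mode is not binding.
        key = secrets.token_bytes(len(message))
        return xor(message, key), key
    else:
        # NO-instance mode: statistically binding. Sending the message
        # in the clear fixes it completely, but hides nothing.
        return message, b""

com, _ = id_commit(b"heads", instance_is_yes=False)
print(com == b"heads")  # True: binding mode fixes (and reveals) the message
com, key = id_commit(b"heads", instance_is_yes=True)
print(xor(com, key) == b"heads")  # True: hiding mode opens via the pad
```

The point of the real definition is exactly this trade-off: each mode gives up the property that is not needed on that side of the promise.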
So here, we need the proof to be sound and to be statistical zero-knowledge against an unbounded verifier, and we ask the same question: can we do it unconditionally? Again, statistical zero-knowledge cannot be achieved unconditionally, but as before, we only need it to hold when we're dealing with no instances, and the soundness to hold when we're dealing with yes instances. And now, can we achieve this relaxed zero-knowledge property? The answer is yes, and to do that, we define a new notion called instance-dependent statistical zero-knowledge. Actually, I want to say that this primitive we define was implicitly used by Ong and Vadhan in their transformation to Arthur-Merlin, I think. So what is instance-dependent statistical zero-knowledge? It is also a family of protocols. Instance-dependent statistical zero-knowledge for a language L with respect to a promise problem pi, so we have two components here, a language and a promise problem, is a family of protocols parameterized by instances of the promise problem, okay? And the guarantees for this family of protocols are the following. For any instance in pi, no matter whether it is a yes instance or a no instance, the corresponding protocol guarantees completeness for L, okay? For any yes instance, the corresponding protocol is sound for the language L, and for any no instance, the corresponding protocol is statistical zero-knowledge for L. Next, we show that we can also construct this instance-dependent statistical zero-knowledge. Formally, we show that for any language in NP and any promise problem in honest-verifier statistical zero-knowledge, there exists an instance-dependent statistical zero-knowledge proof for the language with respect to the promise problem. And actually, we show that it has additional nice properties: it is constant round, and it is not only sound, it is also a proof of knowledge.
This means that there exists an efficient extractor that manages to extract the witness from a convincing prover. And it is statistical zero-knowledge even if the verifier is unbounded, okay? How do we show this? Basically, we take Blum's Hamiltonicity protocol, we take a few copies of it in parallel, and we replace the challenge of the verifier with the coin-tossing component that we just saw before, okay? So we use instance-dependent commitments there, okay? Okay, so now that we have the two components, let's put things together and see how the actual protocol looks. We have a promise problem pi that is in honest-verifier SZK and an instance x. The protocol starts with the verifier picking a random string r1. He commits to this string using the instance-dependent commitment scheme parameterized by the instance x, and he proves, using an instance-dependent statistical zero-knowledge proof of knowledge, that he knows r1. If the proof is convincing, the prover picks his own random string r2 and sends it in the clear to the verifier. At this stage, the verifier XORs the two strings and uses the result as the coins for the honest-verifier protocol that is guaranteed because pi belongs to the honest-verifier SZK class. Okay, they run that protocol together on the XORed coins and the instance x, and in each round of this execution, the verifier also proves that he is following the protocol: he gives a proof that the message he just sent is according to the next-message function, given the history and the coins. Okay, so what we just need to show is that this protocol does not destroy the soundness of the honest-verifier protocol we're using, and that it actually achieves statistical zero-knowledge.
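The steps just described can be sketched end to end as follows. The instance-dependent commitment and both kinds of instance-dependent SZK proofs are stubbed out as comments, and `hv_protocol(x, coins)` is a hypothetical stand-in for the guaranteed honest-verifier protocol run with verifier coins `coins`; none of these names come from the talk.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def compiled_protocol(x, hv_protocol, n_bytes: int = 16) -> bool:
    # 1. Verifier picks r1 and commits to it with the instance-dependent
    #    commitment parameterized by x (stubbed out here), then proves
    #    knowledge of r1 via instance-dependent SZK (also stubbed).
    r1 = secrets.token_bytes(n_bytes)
    # 2. Prover, if convinced by the proof, sends r2 in the clear.
    r2 = secrets.token_bytes(n_bytes)
    # 3. Verifier opens r1; the coins for the honest-verifier protocol
    #    are r1 XOR r2.  In each round the verifier would additionally
    #    prove that his message follows the next-message function given
    #    the history and these coins (stubbed).
    coins = xor_bytes(r1, r2)
    return hv_protocol(x, coins)

# Toy stand-in honest-verifier protocol: accept iff the coins have the
# expected length (purely illustrative).
print(compiled_protocol("x", lambda x, coins: len(coins) == 16))  # True
```

The structure makes the division of labor visible: the coin tossing removes the verifier's freedom to choose coins, and the per-round proofs remove his freedom to deviate from the next-message function.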
So at a high level, the reason we have soundness here is that when the prover is malicious, the commitments are statistically hiding and all the proofs are statistical zero-knowledge. Informally, this means that the transcript distribution is statistically close to a transcript distribution where the proofs in both phases are simulated without using the witness, and where the verifier does not actually use the coins from the coin tossing but picks his own random coins. And since, in the distribution where the proofs are simulated and the verifier uses his own coins, the new components are independent of the secrets of the verifier, this basically reduces to the soundness of the original protocol. Okay, what about statistical zero-knowledge? In this case, the commitment is binding and the proofs are sound. So here, at a very high level, the simulator we construct forces an honest transcript on the verifier. It samples an honest transcript from the simulator of the honest-verifier protocol, it uses the proof of knowledge to extract the coins r1, and then, using r1 and the transcript it sampled, it forces the coins of the honest-verifier transcript on the malicious verifier. And because we have the proof of correct behavior, the statistical zero-knowledge proofs here, it can actually verify in each round that the malicious verifier is following the protocol. Okay, so I just want to conclude. What we show is a transformation from honest-verifier statistical zero-knowledge to general statistical zero-knowledge, and basically, as I said, it allows us to focus on constructing efficient honest-verifier statistical zero-knowledge proofs.
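The forcing step of the simulator can be sketched like this; `hv_simulator` and `extract_r1` are hypothetical stand-ins for the honest-verifier simulator and the proof-of-knowledge extractor, and the toy check at the end is ours.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def force_coins(x, hv_simulator, extract_r1):
    # Sample an honest-looking transcript together with the coins it uses.
    coins, transcript = hv_simulator(x)
    # Run the proof-of-knowledge extractor on the verifier's first proof
    # to learn the committed string r1.
    r1 = extract_r1()
    # Choose r2 so that r1 XOR r2 equals the sampled coins, forcing the
    # honest-verifier transcript on the malicious verifier.
    r2 = xor_bytes(coins, r1)
    return r2, transcript

# Toy check: sampled coins 0x0f, committed r1 = 0x03, so r2 must be 0x0c.
r2, _ = force_coins("x", lambda x: (b"\x0f", "transcript"), lambda: b"\x03")
print(r2 == b"\x0c")  # True
```

The binding of the commitment is what makes this work: once r1 is extracted, the verifier cannot open to anything else, so the simulator's choice of r2 pins the coins to the sampled honest transcript.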
I want to quickly pose two open questions that we leave. We define this notion of instance-dependent statistical zero-knowledge and use it to construct general statistical zero-knowledge, but it would be nice to see whether there are additional applications of this new primitive. And also, can we improve the concrete efficiency of the transformation? Our transformation is asymptotically efficient, but the bottleneck there is the instance-dependent commitments of Ong and Vadhan, and the reason we use them is that our simulation technique requires statistical binding. So maybe an alternative simulation technique would require just standard binding, not statistical, and then allow using more efficient instance-dependent commitments. Questions for Rita? Okay, so let's thank Rita. Oh, there's a question. You said at some point that if the protocol were public coin, you wouldn't need the second component, the proofs. Yes. So is there an alternative way to make the protocol public coin? No, it's just that if the honest-verifier protocol was originally public coin, then while running it, the prover could verify that the verifier is using the right coins from the coin-tossing phase. Because public coin means that the verifier only sends random strings, right? And that random string is exactly the coins from the coin-tossing phase, so the prover could actually see that he's doing it properly. So let's thank Rita again, and lunch then. Thank you.