Hello everybody. Welcome to my talk about statistical ZAPs from group-based assumptions. My name is Elahe, and this is joint work with Geoffroy Couteau, Shuichi Katsumata, and Bogdan Ursu. I want to start my talk with an overview of some definitions, beginning with zero-knowledge protocols, which you probably already know. A zero-knowledge protocol is a protocol between a prover and a verifier in which the prover wants to prove, after some interaction, that some statement holds, for instance that X is in some language L. Zero-knowledge protocols have three main properties. First is completeness, which assures us that the protocol works correctly. Second is soundness, which prevents the prover from cheating; in other words, if X is not in the language, the prover should not be able to produce an accepting proof. And the last property is called the zero-knowledge property, or ZK for short, which prevents the proof from leaking any information beyond the fact that X is in the language. So in zero-knowledge protocols the prover and verifier interact with each other, which is costly. Removing this interaction takes us to the definition of non-interactive zero-knowledge protocols, or NIZKs for short, which are similar to zero-knowledge protocols except that the prover and verifier do not interact: the prover proves that X is in the language by sending just one message to the verifier. Unfortunately, it has been proven that in the plain model we can only have NIZKs for simple enough languages. So in order to have NIZKs for NP languages, we need to relax our model a little bit, and a common choice is the CRS model, the common random (or reference) string model. In the CRS model, there is a trusted party who generates a long string, which is either random or comes from some certain distribution.
The prover and verifier both have access to this string, and this makes constructions of NIZKs for NP languages possible in this model. Now the question is: can we eliminate the trusted party? In the CRS model, NIZKs need this trusted party to generate the CRS for them. If we eliminate that trusted party, and consequently the CRS, we face an impossibility result which says that we cannot have the zero-knowledge property for protocols with fewer than three messages in the plain model. So in order to eliminate the trusted party and get around that impossibility result, we need to relax one of the properties of zero-knowledge protocols, and the common choice is to relax the zero-knowledge property. This takes us to the definition of ZAP protocols. ZAPs are two-message protocols in the plain model which have the same soundness property, together with a relaxation of the zero-knowledge property called witness indistinguishability, or WI for short. Witness indistinguishability basically says that if there exist two witnesses W0 and W1 for the same statement X, the verifier cannot distinguish between two worlds: in one of them the prover uses W0 to produce the proof, and in the other the prover uses W1. We have two types of ZAP protocols. The first is computational ZAP proofs, which have statistical soundness and computational witness indistinguishability, meaning that security in these ZAPs holds only against probabilistic polynomial-time adversaries; by security here, I mean the WI property. The other type is statistical ZAP arguments, which have computational soundness and statistical witness indistinguishability, meaning that we have everlasting security.
The reason we care about statistical ZAP arguments is that, between the two properties of soundness and witness indistinguishability, soundness is basically an online security notion: the adversary wants to break soundness while the proof is being generated, and after the proof is completed, we don't care about breaking soundness anymore. But witness indistinguishability is the property we want to be everlasting. The reason is that witness indistinguishability is the property that prevents the proof from leaking information, so it is the property the adversary may want to break even after the proof has been completed. So between these two properties, we want witness indistinguishability to be statistical. Now I want to talk about previous work in this area and about our contribution. There have been constructions of computational ZAP proofs from different kinds of assumptions, such as trapdoor permutations or the decision linear assumption in bilinear maps, or even from assumptions such as indistinguishability obfuscation or learning with errors. And in very recent works, there have been constructions of statistical ZAP arguments from assumptions like quasi-polynomial hardness of LWE, and, from a recent breakthrough, sub-exponential hardness of DDH in pairing-free groups. So the question here is: can we construct statistical ZAP arguments from other assumptions, especially the assumptions from which we already have constructions of computational ZAP proofs, such as factoring- or pairing-based assumptions? In our work, we introduce a publicly verifiable framework for constructing statistical ZAPs for NP, and we give two instantiations of statistical ZAPs. One is in pairing groups, based on explicit hardness of DDH and the Kernel Diffie-Hellman assumption, and the other is in pairing-free groups, based on explicit hardness of DDH plus OW-KDM security of ElGamal encryption.
To clarify, by explicit hardness I mean that we require some explicit upper bound on the advantage of any PPT adversary against the assumption. To give you a technical overview of our work, let me start with the main idea. The idea is to construct a statistical ZAP for NP from an unconditional NIZK in the hidden-bits model. To do this, we need some construction of a hidden-bits generator, and to construct this hidden-bits generator, we need a statistical ZAP for a simple language; I will define all of these later on. In other words, we start with a statistical ZAP for some simple language, and we bootstrap it to get a statistical ZAP for NP. Our starting point is the construction of a very recent work, which we call the LPWW construction. In their construction, there is a trusted party that generates the CRS and generates a secret key for the verifier. The prover uses a dual-mode hidden-bits generator to produce a hidden string, and she is able to compute the proof and send openings of this hidden string to the verifier, and the verifier checks the openings with his secret key. It is worth mentioning that their construction relies on the DDH assumption. Our challenges in using their construction were, first, how to eliminate the trusted party while maintaining the dual-mode property of the hidden-bits generator, since in the ZAP setting we do not have a trusted party to generate the CRS for us. Our second challenge was how to open the bits in a publicly verifiable way, because in their construction the verifier holds a secret key, and we do not want the verifier to have any secret state in our construction.
To give you the proof overview, I want to start with some technical definitions, then give you the ideas for getting around these two challenges, and then go a little more into the details of our proof. First, the hidden-bits model. The hidden-bits model is an abstract model that makes it possible to have unconditional NIZKs. In this model, there is a hidden string that only the prover has access to, and the prover is able to open some positions of this string to the verifier. Of course, she cannot lie about the content of this hidden string; in other words, she cannot cheat. And one property of the hidden-bits model is that the opened positions reveal nothing about the unopened positions. A powerful primitive here is the hidden-bits generator, which makes it possible to transform a NIZK in the hidden-bits model into a NIZK in the CRS model. It is a very powerful primitive that generates a hidden string which looks random, and it has two main properties. The first is statistical binding, which prevents the prover from cheating; in other words, the prover can open the bits, but she cannot lie about the contents of the hidden string. The second property is computational hiding, which prevents the verifier from cheating; in other words, the opened positions reveal nothing about the unopened positions. In our construction, we use a specific construction of a hidden-bits generator, the dual-mode hidden-bits generator from the work of LPWW, which is a hidden-bits generator in which the CRS can be in one of two modes. One is the binding mode, which allows us to have a statistically sound proof, and the other is the hiding mode, which allows us to have a statistically zero-knowledge proof.
One crucial property of a dual-mode hidden-bits generator is that the hiding mode and the binding mode are indistinguishable from each other. To recall the challenges in our work when using the LPWW construction: our first challenge was how to eliminate the trusted party while at the same time maintaining the dual-mode property of the hidden-bits generator, and our second challenge was how to open the bits in a publicly verifiable way, so that the verifier does not hold any secret state. To get around the first challenge, one common approach is to let the verifier sample the CRS for us. But this raises another problem: when we let the verifier sample the CRS, how do we prevent the verifier from cheating this way? In other words, how do we protect the prover from a malicious verifier? The idea here is to let the prover tweak the CRS in such a way that, after the tweak, the CRS has two properties. First, we want the tweaked CRS to be in hiding mode with overwhelming probability, which allows us to have a statistically witness-indistinguishable proof. Second, we want the tweaked CRS to be in binding mode with some negligible probability, which is what allows us to argue soundness. To recall the LPWW construction: a trusted party generates the CRS and a secret key for the verifier; the prover uses the dual-mode hidden-bits generator to generate a hidden string; after she computes the proof, she opens some positions of this string to the verifier, and the verifier uses his secret key to check the correctness of these openings. So back to our second challenge: we want to open the bits in a publicly verifiable way. To do so, we assume that there exists a statistical ZAP for the language of correct openings. We call this the LPWW language.
If we assume that such a ZAP exists, the prover can use it to prove the correctness of the openings, and we are able to open the bits in a publicly verifiable way. To go into a little more detail, let me first explain the construction of LPWW, starting with some notation. In their construction, they work in groups. For a vector v, we write v in brackets, [v], and by this we simply mean (g^{v_1}, ..., g^{v_m}). Similarly, for a matrix A with columns a_1 to a_m, by A in brackets, [A], we mean each of the columns in brackets, viewing each column as a vector. In the construction of LPWW, they gave a dual-mode instantiation based on the DDH assumption. There, they view the CRS as a matrix in the exponent, which we simply write as [A], where A has m+1 columns, and each vector y defines a hidden-bits string. The commitment of the prover is c = [y^T a_0], each bit r_i is defined as the hash of [y^T a_i], and the opening o_i for each bit is simply [y^T a_i]. In their construction, whenever the CRS, i.e. the matrix A, is full rank, we are in hiding mode, and whenever it is low rank, we are in binding mode. The verifier holds a secret key in their setting, so the prover uses a (designated-verifier) NIZK to prove the existence of a vector y such that o_i = [y^T a_i] and the commitment c = [y^T a_0], and the verifier can use his secret key to check this proof. We call this the LPWW language. Now, we want to protect the prover from a malicious verifier, that is, maintain the witness indistinguishability property, and at the same time protect the verifier from a cheating prover, that is, maintain the soundness property.
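To make the hidden-bits generation concrete, here is a toy Python sketch. The real LPWW construction works "in the exponent" of a DDH-hard group; for illustration this sketch manipulates the exponents directly modulo a prime, and all parameter choices and function names below are my own, not the paper's.

```python
import hashlib
import secrets

# Illustrative toy parameters (NOT the paper's): a prime standing in for the
# group order, a secret vector y of length N, and a CRS matrix A with M+1
# columns. In the real scheme all of this lives in the exponent of a group.
P = 2**61 - 1
N, M = 4, 8

def sample_crs():
    # A uniformly random N x (M+1) matrix; full rank (hiding mode) w.h.p.
    return [[secrets.randbelow(P) for _ in range(M + 1)] for _ in range(N)]

def inner(y, col):
    # y^T a_i over Z_P, standing in for the group element [y^T a_i].
    return sum(yi * ci for yi, ci in zip(y, col)) % P

def hidden_bits(A, y):
    """Commitment c = y^T a_0; opening o_i = y^T a_i; bit r_i = hash(o_i)."""
    cols = list(zip(*A))
    c = inner(y, cols[0])
    openings = [inner(y, cols[i]) for i in range(1, M + 1)]
    bits = [hashlib.sha256(str(o).encode()).digest()[0] & 1 for o in openings]
    return c, bits, openings

def check_opening(bit, opening):
    # The verifier recomputes the hash from the opening. Consistency of the
    # opening with the commitment c is what the NIZK (or, in our setting,
    # the simple statistical ZAP) proves.
    return bit == (hashlib.sha256(str(opening).encode()).digest()[0] & 1)

A = sample_crs()
y = [secrets.randbelow(P) for _ in range(N)]
c, bits, openings = hidden_bits(A, y)
assert all(check_opening(b, o) for b, o in zip(bits, openings))
```

The point of the sketch is only the data flow: one commitment, one pseudorandom-looking bit per column, and one opening per revealed position, with unopened positions hidden because their openings are never sent.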
The question is how to achieve both properties when we let the verifier sample the CRS for us. From the LPWW construction, we know that the hiding mode protects the prover from a malicious verifier, and the binding mode protects the verifier from a cheating prover. So the question when using this construction is how to achieve both. Back to our generic construction. The first step is to start with the LPWW construction and let the verifier sample the CRS for us: the verifier samples some matrix A and sends it to the prover. The second step is to let the prover tweak the CRS. The tweak is as follows: the prover samples a small random alpha and computes A' as A minus alpha times the identity matrix. We claim that this way, the matrix A' is in hiding mode, or in other words full rank, with probability 1 minus negligible, and A' is in binding mode, or in other words low rank, with some negligible probability, or to be more specific, with some inverse superpolynomial probability. To give a high-level intuition of the proof of this claim: for the first part, if A' is not full rank, there exists some nonzero vector u in the kernel of A', which implies that alpha is an eigenvalue of the matrix A. Since alpha is a small random value, this happens with negligible probability, so we can conclude that A' is full rank with probability 1 minus this negligible quantity. For the second part, if A' is low rank, it implies that the verifier was able to engineer A' to be low rank, which in turn means that the verifier was able to guess the value alpha in advance.
Indeed, if the verifier could guess alpha correctly in advance, it could simply compute A as M plus alpha times the identity matrix for a low-rank matrix M. So we can conclude that A' is low rank with some negligible probability, namely the probability of guessing the value alpha correctly. In conclusion, after the second step, that is, after the tweak, the tweaked CRS A' is full rank with high probability and low rank with small probability. In the next step, the prover does everything else exactly as in the LPWW construction, using A' as the CRS. And in the last step, we assume that there exists a statistical ZAP for the language of correct openings, the LPWW language, and the prover uses this ZAP to prove the correctness of the openings, which lets her open the bits. As a security argument, with complexity leveraging we were able to prove that if breaking DDH is hard even with inverse superpolynomial advantage, our generic construction works. I strongly encourage you to read our paper for more details. To summarize our framework: we start with the LPWW construction; we let the verifier sample the CRS; we let the prover tweak the CRS in order to protect herself from the verifier, in such a way that the tweaked CRS is in hiding mode with high probability and in binding mode with negligible probability; the prover then does everything as in the LPWW construction with this tweaked CRS; and finally, assuming there exists a statistical ZAP for a simple language, the prover uses this statistical ZAP to open the hidden bits.
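The rank argument behind the tweak can be checked numerically. Here is a small Python sketch over a prime field, using a square matrix and illustrative parameters of my own choosing: even when a malicious verifier sends a deliberately low-rank matrix A, the tweaked matrix A' = A - alpha*I is full rank unless alpha happens to hit an eigenvalue of A, which a random alpha avoids except with small probability.

```python
import random

P = 101  # small prime, purely for illustration

def rank_mod_p(mat, p):
    """Rank over F_p via Gaussian elimination."""
    mat = [row[:] for row in mat]
    rows, cols = len(mat), len(mat[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if mat[i][c] % p != 0), None)
        if pivot is None:
            continue
        mat[r], mat[pivot] = mat[pivot], mat[r]
        inv = pow(mat[r][c], p - 2, p)  # modular inverse via Fermat
        mat[r] = [(x * inv) % p for x in mat[r]]
        for i in range(rows):
            if i != r and mat[i][c] % p != 0:
                f = mat[i][c]
                mat[i] = [(x - f * v) % p for x, v in zip(mat[i], mat[r])]
        r += 1
        if r == rows:
            break
    return r

n = 4
# Worst case: a malicious verifier sends the all-zero (rank 0) matrix A.
A = [[0] * n for _ in range(n)]
alpha = random.randrange(1, P)  # the prover's random tweak
A_prime = [[(A[i][j] - (alpha if i == j else 0)) % P for j in range(n)]
           for i in range(n)]
# A' = A - alpha*I is singular only if alpha is an eigenvalue of A; here
# the only eigenvalue of A is 0, so any nonzero alpha restores full rank.
assert rank_mod_p(A, P) == 0
assert rank_mod_p(A_prime, P) == n
```

This mirrors the claim from the talk: the verifier would need to guess alpha in advance (e.g. send A = M + alpha*I for low-rank M) to force A' into binding mode, which happens only with the probability of guessing alpha.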
So in conclusion, we can say that assuming superpolynomially secure DDH and the existence of a simple statistical ZAP, we get a statistical ZAP for all of NP. Now for our two instantiations. We gave two instantiations of a statistical ZAP for the LPWW language from some recent works, which directly implies a statistical ZAP for NP by plugging them into our framework. The first one is in pairing-free groups under the assumption of OW-KDM hardness of ElGamal; it comes from the recent work of Couteau, Katsumata, and Ursu, and we improved their construction along the way. The second one is in pairing groups under the DDH and Kernel Diffie-Hellman assumptions; it comes directly from the recent work of Couteau and Hartmann. I want to finish my talk with some open problems arising from our work. In our two instantiations, we used search assumptions along with DDH, which is a decisional assumption. So one possible question is: is it possible to have a statistical ZAP entirely from search assumptions? For instance, can we relax DDH to CDH? Another question that comes to mind is: can we have ZAPs from assumptions that are not known to imply public-key encryption? Thank you very much.