Hi, I'm Marshall Ball, and I'm excited to tell you about some new techniques for zero-knowledge, developed jointly with Dana Dachman-Soled and Mukul Kulkarni. To begin, just a brief reminder of what a zero-knowledge proof is. We have a prover and a verifier, and the prover is trying to convince the efficient verifier that a statement is true: x is in L. So they interact, and we really need three properties. We need completeness: if the statement is true, then the verifier accepts with high probability. We need soundness: if the statement is not true, then the verifier rejects with high probability, and this should hold even if the prover is not behaving as specified, behaving in fact completely arbitrarily. And we want zero-knowledge, which says roughly that the verifier learns only the truth of the statement from interacting with the prover, and nothing else. This is captured by the simulation paradigm: zero-knowledge holds if there exists a simulator, an efficient procedure for generating a transcript that is indistinguishable from the real one, so the verifier could have produced its entire view on its own, without ever talking to the prover.
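For reference, here is one standard way to write down the three properties I just described. The 2/3 and 1/3 thresholds and the computational indistinguishability in the last line are common conventions rather than anything specific to this talk:

```latex
\begin{align*}
\textbf{Completeness:}   \quad & x \in L \;\Rightarrow\; \Pr[\langle P, V\rangle(x) \text{ accepts}] \ge 2/3,\\
\textbf{Soundness:}      \quad & x \notin L \;\Rightarrow\; \forall P^*:\ \Pr[\langle P^*, V\rangle(x) \text{ accepts}] \le 1/3,\\
\textbf{Zero-knowledge:} \quad & \forall\, \text{PPT } V^*\ \exists\, \text{PPT } \mathrm{Sim}:\
    \mathrm{Sim}(x) \;\approx_c\; \mathrm{view}_{V^*}\big[\langle P, V^*\rangle(x)\big].
\end{align*}
```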
Since zero-knowledge's introduction, we've learned a fair amount about its complexity. If one-way functions exist, we have zero-knowledge proofs for all of NP, and if they don't, we basically only have zero-knowledge proofs for languages that are trivial on average. The culmination of this line of work is the result of Ong and Vadhan, which says that a language has a zero-knowledge proof if and only if it admits instance-dependent commitments.

In this work, we're interested in understanding minimal assumptions for zero-knowledge, and in particular in settings with very limited interaction and, actually, trust. Non-interactive zero-knowledge is a zero-knowledge proof with as little interaction as possible: the prover just sends a single message to the verifier, and the verifier doesn't speak at all. This isn't achievable in the standalone model, so what's typically done is that the prover and verifier are given access to some public randomness. Two settings are typically considered. One is the common reference string model, or CRS model, where the randomness in the CRS can be correlated and need not be uniform bits. The other model typically considered in the literature is the uniform random string model, or URS model, where the bits are uniform. The URS model is generally understood to require less trust, whereas the correlations in the CRS model might need to be generated using multi-party computation or something like that; anyway, that's out of the scope of this work. What do we know about these two models? From one-way functions, we can construct a NIZK in the CRS model for NP, and this can actually be extended to AM. In the uniform random string model, we know how to construct NIZKs for NP from one-way permutations, and this can also be extended.

We're also interested in relaxations of zero-knowledge with limited interaction. One such notion that has found many applications is a beautiful idea of Dwork and Naor called a ZAP. In a ZAP, we have a prover and a verifier, and the protocol follows a very specific format: it's public coin, so the verifier begins by sending a uniformly random message, the prover responds, and that's the end of the interaction. The difference from what I was describing earlier is that the privacy guarantee is weaker than zero-knowledge: we just require witness indistinguishability. What is witness indistinguishability? We want that for any two witnesses, w1 and w2, if I run the prover with w1 and let it interact with any efficient verifier, the transcript is indistinguishable from the one produced when the prover holds w2. Two things to note. First, if the language has unique witnesses, then this property is trivially satisfiable: the prover can just send the witness, and the transcripts will be indistinguishable. Second, if we don't constrain the prover at all, the prover can just find, say, the lexicographically first witness for the statement x and send that.

So what did Dwork and Naor show about ZAPs? They showed that ZAPs are equivalent to NIZKs in the uniform random string model. Now, you might look at this and ask: we just saw that NIZKs in the URS model follow from the comparatively weaker assumption of one-way permutations, so why were ZAPs only known from trapdoor permutations? The reason is that this transformation critically applies only when the prover is efficient, and in the URS-model construction I showed you on the previous slide, due to Feige, Lapidot, and Shamir, the prover is inverting a one-way permutation, so it is inherently inefficient. What we're interested in in this work is: can we show a similar transformation from NIZKs to ZAPs with an inefficient prover? That is indeed what we do. As a consequence, we get ZAPs from one-way permutations, with provers that are inefficient but still non-trivial. And we get a variety of applications, all zero-knowledge-type proofs with very limited interaction: non-interactive witness indistinguishability, one-message zero-knowledge, and a new notion we call fine-grained zero-knowledge; I'll say more about that at the end of this talk. For now, I just want to focus on this arrow across the middle, the transformation from NIZK to ZAP, and in particular on the following theorem: if a language L has a NIZK in the uniform random string model with a t-time prover, then L has a ZAP with a poly(n) * t-time prover. As an immediate corollary, using the Feige-Lapidot-Shamir protocol, we get ZAPs for NP with sub-exponential-time provers.

So we want to show the main theorem at the top. Let's recall how Dwork and Naor showed the transformation for efficient provers. We start with the NIZK, where P and V denote the prover and verifier respectively, and we want to construct a ZAP. The first message needs to be random, so we'll do just that: the verifier samples random strings R_1, ..., R_m and sends them over to the prover. The prover samples a random string of his own, S, and XORs S with each R_i to generate a series of URSs. With respect to each URS, he generates a proof, and he sends all of them back to the verifier. She accepts if the NIZK verifier accepts all of the proofs, after reconstructing the URSs, of course. As you can see, the first message is uniform, so the protocol is public coin.
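To make the message flow concrete, here is a minimal Python sketch of this transformation. It is a sketch under stated assumptions: `nizk_prove` and `nizk_verify` are hypothetical stand-ins for the underlying NIZK prover and verifier, and the lengths are illustrative rather than the talk's actual parameters.

```python
import secrets

URS_LEN = 16  # bytes per URS (128 bits); illustrative only
M = 256       # repetitions; the soundness analysis needs M > bit-length of S

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def verifier_round1():
    """Round 1: the verifier's single public-coin message R_1, ..., R_m."""
    return [secrets.token_bytes(URS_LEN) for _ in range(M)]

def prover_round2(rs, x, w, nizk_prove):
    """Round 2: one random shift S turns each R_i into a URS,
    and the prover runs the NIZK prover once per URS."""
    s = secrets.token_bytes(URS_LEN)
    proofs = [nizk_prove(xor(s, r), x, w) for r in rs]
    return s, proofs

def verifier_decide(rs, s, proofs, x, nizk_verify):
    """Reconstruct each URS and accept iff every NIZK proof verifies."""
    return all(nizk_verify(xor(s, r), x, pi) for r, pi in zip(rs, proofs))
```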
Each URS, because we're XORing the random R_i's with a random string, is uniform for each proof, so it should be fairly clear by inspection that completeness holds, based on the completeness of the NIZK proof. To see soundness, fix some x that's not in the language. We invoke the statistical soundness of the NIZK to argue that very few random strings are bad, where a URS is bad if there exists a pi that makes the verifier accept. Next, observe that for any fixed S, over the randomness of the R_i's, the resulting URSs are uniform and independent, so the probability that all of them are bad can be bounded by something exponentially small in m. Finally, by taking a union bound over all of the S's, we can bound the probability that there exists any S that would allow the prover to trick the verifier by 2^(-m+n). So as long as m is greater than n, we're good, and we have statistical soundness.
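Spelled out, with n the bit-length of S (and of each URS), and assuming for simplicity that at most half of all URSs are bad for our fixed false statement (statistical soundness gives a far smaller bad fraction, which only helps):

```latex
\Pr_{R_1,\ldots,R_m}\Big[\exists\, S \in \{0,1\}^n\ \forall i:\ S \oplus R_i \text{ is bad}\Big]
\;\le\; \sum_{S \in \{0,1\}^n} \prod_{i=1}^{m} \Pr_{R_i}\big[S \oplus R_i \text{ is bad}\big]
\;\le\; 2^{n} \cdot 2^{-m} \;=\; 2^{-(m-n)}.
```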
The final property we need is witness indistinguishability. Before I continue, note that nothing in those completeness and soundness arguments would change if the prover were inefficient; the problem is going to come up here, in witness indistinguishability. Witness indistinguishability is proven using a hybrid argument: we want to switch the proofs pi_1 through pi_m, one by one, from being generated using witness w1 to witness w2. In the i-th hybrid, we'll have already switched the first i-1 proofs over to w2, and we'll try to switch the i-th. The way we do this is by reducing to the zero-knowledge property of the NIZK proof system: we swap the i-th proof for a simulated proof. The reduction simply generates all of the surrounding proofs itself and runs the distinguisher for the hybrids, thereby breaking the underlying NIZK's zero-knowledge. The argument for moving from the simulated proof to a proof generated using w2 is identical.

One thing to note here: because soundness is statistical, if we want this transformation to hold for NP without causing some sort of complexity collapse, it's critical that the indistinguishability argument is computational. So if we're going to follow this reduction framework, it's very important that the reduction is efficient. And the problem with applying it to an inefficient prover is that generating the surrounding proofs to feed to the reduction is simply too expensive: we don't have time to sample the proofs ourselves.

So how can we get around this? The first thing to think of might be to sample these proofs in some preprocessing phase and hard-code them into our distinguisher for the hybrids. Given that this is the first attempt, what goes wrong? Let's look at the reduction. As the first step, we invoke the hybrid distinguisher to get R_1 through R_m; you can think of these as a worst-case choice. Then, playing the prover, we set S to be R_i XORed with the URS, where the URS is the uniform random string given to us by the NIZK zero-knowledge security game; this way, the i-th URS is exactly the one from the game. Next, we set the j-th URS to be S XORed with R_j, and finally we generate the surrounding proofs. So what's the issue? If we wanted to hard-code these proofs to begin with, the problem is that the way we sample the URS_j's depends very critically on the URS we're given from the security game, so we don't know ahead of time how to choose them. And as soon as you fix R_i, they are all very, very correlated.

So how can we get around this? Well, we dig up an old idea from the 80s for breaking exactly this sort of correlation. Nisan and Wigderson encountered a similar problem while trying to build pseudorandom generators, and the way they solved it was with combinatorial designs. What is a design? You have m subsets T_1, ..., T_m of the integers from 1 to l, such that each subset has a fixed size n, and any two distinct subsets have a very small intersection, of size at most c; think of c as three for our purposes. So, as you see in the picture, we have two subsets that are large but have a very small intersection. Nisan and Wigderson showed how to construct these objects for basically any constant c with l on the order of n squared. If you want a hint as to how to do this, consider the low-degree polynomials at the bottom of the slide.
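If you'd like to see that hint carried out, here is a small, self-contained Python construction of such a design from low-degree polynomials; the parameters q = 11, c = 3, m = 50 are purely illustrative.

```python
from itertools import product

def nw_design(q: int, c: int, m: int):
    """m sets of size q in a universe of size q*q (so l = n^2 with n = q),
    with pairwise intersections of size at most c.

    Each set is the graph {(a, f(a)) : a in F_q} of a polynomial f of
    degree <= c over F_q (q prime). Two distinct such polynomials agree
    on at most c points, which bounds every pairwise intersection."""
    assert m <= q ** (c + 1), "not enough degree-<=c polynomials"
    sets = []
    for coeffs in product(range(q), repeat=c + 1):
        if len(sets) == m:
            break
        f = lambda a: sum(co * pow(a, i, q) for i, co in enumerate(coeffs)) % q
        # Encode the point (a, f(a)) as the integer a*q + f(a) in [0, q^2).
        sets.append(frozenset(a * q + f(a) for a in range(q)))
    return sets

# Sanity check: all pairwise intersections have size at most c = 3.
design = nw_design(q=11, c=3, m=50)
assert all(len(s & t) <= 3 for i, s in enumerate(design) for t in design[:i])
```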
So how do we use this Nisan-Wigderson design? Recall our construction from before. We're going to use the design to break the correlations among the URSs. Instead of using S directly to generate the i-th uniform random string, we use only the bits of S corresponding to T_i, the i-th set in the design. So S is going to be slightly longer than before, and we take the bits of S indexed by T_i and XOR them with the i-th random string that the verifier sent us; everything else proceeds identically. The point is that now, if we look at URS_j versus URS_i, there's very little dependence between them: say you fix R_i and R_j, so all the freedom is over S; then the two URSs share only a few bits of S, at most c of them, so three. Completeness is clearly preserved, and critically, soundness doesn't change by much: S is slightly longer, so we have a slightly larger union bound, but as long as c is a constant, say three, and we take enough repetitions m, we can still bound the cheating probability by something exponentially small.

Okay, so let's check witness indistinguishability. At a high level, our reduction is ultimately going to be a distribution over circuits, and for the sake of clarity, we'll view sampling from this distribution as a preprocessing step. Here is what we do. We get R_1 through R_m from the distinguisher. We sample, uniformly, all the bits of S that don't correspond to T_i, the i-th subset from the design. Then, for each j not equal to i, and for every setting of the bits we didn't already set, that is, the bits in T_i: each T_j shares at most c of these bits, so there are at most 2^c settings, and for each such setting we get a uniform random string and sample a proof for the statement under it. And we hard-code this whole thing as a lookup table into our reduction. The lookup table has size on the order of 2^c entries per index, so the reduction is not significantly bigger if c is a constant. So what do we do online? On input (URS, pi) from the NIZK security game, we simply set the i-th URS as before, but using the design, so that it equals the URS from the game; we evaluate the lookup tables to get correctly distributed surrounding proofs; and we continue running the rest of the distinguisher. And that completes the proof.
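Putting the pieces together, here is how the design changes the prover in the earlier Python sketch; as before, `nizk_prove` is a hypothetical stand-in, and `design` is a list of index sets like the ones constructed above, with the R_i's now handled as bit lists.

```python
import secrets

def prover_round2_with_design(rs, design, x, w, nizk_prove):
    """ZAP prover, round 2, with a Nisan-Wigderson design plugged in.

    rs:     the verifier's strings R_1..R_m, each a list of n bits
    design: sets T_1..T_m over {0,...,l-1}, pairwise intersections <= c
    """
    # S is now slightly longer: one bit per element of the design's universe.
    l = 1 + max(max(t) for t in design)
    s = [secrets.randbelow(2) for _ in range(l)]
    proofs = []
    for r, t in zip(rs, design):
        # URS_i uses only the bits of S indexed by T_i, so for any fixed
        # R_i and R_j, URS_i and URS_j share at most c bits of S.
        urs = [s[idx] ^ bit for idx, bit in zip(sorted(t), r)]
        proofs.append(nizk_prove(urs, x, w))
    return s, proofs
```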
So that's what we saw: a simple transformation using this old idea from derandomization. It gives us a tight way of preserving the prover's running time in the Dwork-Naor transformation, and it immediately gives us ZAPs from one-way permutations.

It may be, though, that you find this idea of the prover and verifier agreeing on a uniform random string already too much to assume. So can we get away without any setup? If we want zero-knowledge, we know that this is not possible, but there are relaxations for which you don't need public randomness and which are achievable in the plain model. One relaxation is one-message zero-knowledge, from Barak and Pass. Here the prover sends a message, the verifier doesn't speak, and there's no public randomness; but instead of achieving soundness against arbitrary cheating prover strategies, we just guarantee uniform soundness, which says that if the statement isn't true, then the verifier will reject when interacting with any uniform sub-exponential-time prover. Another relaxation is non-interactive witness indistinguishability, or NIWI, due to Barak, Ong, and Vadhan: this is witness indistinguishability exactly like a ZAP, except now the verifier doesn't send a message to begin with. By modifying the Barak-Ong-Vadhan framework, tweaking it a little to work in our setting, we get two applications of our technique. From one-way permutations and hitting-set generators for co-nondeterministic circuits (I'm not going to describe what those are here, but you can look at our paper for details), we get NIWI for NP with sub-exponential-time provers. And if we also add sub-exponentially secure uniform collision-resistant hash functions, we get one-message NIZKs where the simulator is efficient after a preprocessing step that is independent of the statement. It's not quite the standard setting, but it's meaningful.

Briefly, in the one minute remaining, I'd like to tell you about our notion of fine-grained zero-knowledge and our results there. In the classic formalization of the verifier learning nothing, if you start with any PPT verifier, then the transcript generated by the prover interacting with this verifier should be simulatable in polynomial time. For our notion of fine-grained zero-knowledge, we just replace every occurrence of PPT with some complexity class C: a verifier of complexity C should be simulatable in this complexity class C, where simulatable means the transcript is indistinguishable to distinguishers in C. In symbols, for every verifier V* in C there is a simulator in C whose output is C-indistinguishable from the real view. We extend this to witness indistinguishability as well. The running example to keep in mind is C equal to NC1, log-depth circuits over a standard basis, which we allow to be randomized. As for results: from the worst-case assumption that Parity-L/poly is not contained in NC1, we get NC1-fine-grained NIZKs, in the sense I just described, in the uniform random string model for NP, and ZAPs for NP in the same NC1-fine-grained sense. If we add hitting-set generators for co-nondeterministic circuits, we get fine-grained non-interactive witness indistinguishability. And finally, if we also add uniform collision-resistant hash functions, we get one-message NC1-fine-grained NIZKs for NP. The key thing here is that inefficient relative to NC1 is different from inefficient relative to PPT: all of our provers are actually polynomial time, they're just not as efficient as the verifiers. That's all. Thank you.