Hey, welcome to my talk on deterministic-prover zero knowledge. This is joint work with Nir Bitansky. So let's jump right in with zero knowledge. This is a wonderful notion introduced in the work of Goldwasser, Micali and Rackoff back in '85. The prover and the verifier have some common statement x, and the prover is trying to prove that the statement belongs to some language L. There's some interaction between the prover and the verifier, at the end of which the verifier either chooses to accept or reject. This protocol has three properties. The first is completeness, which says that for every statement in the language, the verifier actually accepts. The second is computational soundness, which says that for every statement not in the language, no computationally bounded prover can make the verifier accept. The third, and the most important property for this talk, is zero knowledge, which says that for every verifier there exists a simulator that can simulate the view of the verifier. What do I mean? The view of the verifier in an actual protocol with the honest prover consists of the messages that the prover sends and the verifier's own random coins, and the zero-knowledge property says that the simulator, on input x, can sample a distribution that is computationally indistinguishable from the actual view of the verifier, thereby hiding any additional information that the prover might have. Across time, there have been many flavors of zero knowledge. The one we've just seen is on the left, and we call this the GMR zero-knowledge definition. The next is auxiliary-input zero knowledge. It turns out that when zero-knowledge protocols are used within other protocols as subprotocols, or when you want to compose zero-knowledge protocols, the verifier might actually use information that it has learned from other parts of the protocol or from previous executions of the protocol.
To capture this, we give the verifier some auxiliary input, denoted here by z, and we require the same to hold: the view of the verifier can be simulated by a simulator, as long as the simulator gets both the input x and the auxiliary input z. The third notion is black-box zero knowledge, which asks for a universal simulator that can simulate the view of any verifier, and additionally the simulator uses the verifier only in a black-box manner, with oracle access to the verifier. So these are the three flavors of zero knowledge that are important for this talk. Now, zero knowledge is built on two foundational resources: interaction and randomness. And in this talk, we ask: if we have a prover that's completely deterministic, can we still have some meaningful notion of zero knowledge? The first answer with respect to deterministic provers was provided in the seminal work of Goldreich and Oren, which showed that deterministic provers satisfying auxiliary-input zero knowledge or black-box zero knowledge are impossible for non-trivial languages. In the positive direction, there has been some work. A recent work by Faonio, Nielsen and Venturi showed that if a language has a witness encryption scheme, you can get a slightly weakened notion of deterministic-prover zero knowledge (DPZK) for the same language, namely honest-verifier DPZK, which is to say that zero knowledge is only required to hold for honest verifiers and not necessarily for malicious ones. If you have a slightly stronger primitive for the language L, namely a hash proof system, then you get stronger properties in the corresponding DPZK; for instance, you get soundness against computationally unbounded cheating provers. And very recently, Dahari and Lindell improved the assumption from witness encryption to doubly injective one-way functions, but at the cost of having the honest prover be inefficient.
And for malicious verifiers, they construct DPZK for languages whose witnesses satisfy certain entropic guarantees. So given this understanding, there are lots of gaps in characterizing when deterministic-prover zero knowledge is possible, and our result aims to fill this gap. We split auxiliary-input ZK into two parts. The one on the left, which is the new one, is B-bounded auxiliary-input ZK, which says that there is some polynomial bound B on the amount of auxiliary input fed to the verifier. And we show that you can have zero knowledge satisfying either the GMR ZK definition or B-bounded auxiliary-input ZK with a deterministic prover. Specifically, in terms of assumptions, we require slightly strong ones: assuming non-interactive witness-indistinguishable proofs, sub-exponentially secure indistinguishability obfuscation and one-way functions, there is a two-message DPZK for NP intersect coNP. And if you want to go to all of NP, you additionally require keyless collision-resistant hash functions that are sub-exponentially secure. Given these strong assumptions, we show that some of this seems to be inherent, in the sense that DPZK for any language L actually implies witness encryption for the same language. And witness encryption, as far as we know today, is only implied by strong assumptions. So let's go to our construction of the two-message DPZK. The starting point of our construction is the honest-verifier DPZK of Faonio, Nielsen and Venturi. Since they construct it from witness encryption, and witness encryption is central to our work, let's see what witness encryption is. Witness encryption for a language L comes with an encryption algorithm and a decryption algorithm. The encryption algorithm takes in the statement x and the message m that someone wants to encrypt, and outputs a ciphertext.
And the decryption algorithm, which importantly is deterministic — we'll see momentarily why — takes in the ciphertext and a string w, and outputs either the message or ⊥. The correctness requirement says that if w is actually a witness for the statement in the given language, then the decryption algorithm returns the message m that was encrypted. And security says that if the statement is not in the language, then the ciphertext hides the underlying message. So given this definition of witness encryption, how do we use it to construct honest-verifier DPZK? The verifier here is quite simple. It samples a random string u of, say, length n, and uses the witness encryption scheme with the statement x to encrypt this string u. Then it sends the ciphertext across to the deterministic prover, who simply decrypts the ciphertext. Since decryption is a deterministic process, the prover can do this deterministically, get some ũ, and send ũ across to the verifier. The verifier just checks whether u equals ũ. Completeness follows from the correctness of the witness encryption scheme: as long as the deterministic prover has the witness, it can decrypt and get back the value u. Soundness follows from the fact that when the statement is not in the language, the ciphertext completely hides the message, so we can switch it to an encryption of zero, and a cheating prover wouldn't be able to tell — even a randomized cheating prover. And lastly, for honest-verifier zero knowledge: because the verifier behaves honestly, when the simulator initializes the verifier with random coins, it essentially knows which u the verifier is going to pick. So simulation is quite simple for honest-verifier DPZK. So how do we get from here to malicious-verifier DPZK? We take an intermediate step of explainable-verifier DPZK. So what are explainable verifiers?
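Before turning to explainable verifiers, here is a minimal Python sketch of the honest-verifier protocol above. Both ingredients are my illustrative assumptions: the toy relation "x is a perfect square" stands in for an arbitrary NP relation, and the "witness encryption" is an insecure functionality-only mock (it does not hide the message at all) — it only captures the interface that decryption succeeds exactly on a valid witness.

```python
import os

# Toy relation standing in for an arbitrary NP relation:
# x is in L iff x = w * w for some integer witness w.
def relation(x, w):
    return w * w == x

# Insecure mock of witness encryption: captures only the *functionality*
# (decryption succeeds iff given a valid witness), not the security.
def we_enc(x, msg):
    return {"stmt": x, "payload": msg}   # a real scheme would hide msg

def we_dec(ct, w):
    # Deterministic decryption: returns the message iff w witnesses the statement.
    return ct["payload"] if relation(ct["stmt"], w) else None

# Honest-verifier DPZK template: verifier encrypts a random challenge u
# under the statement x; the deterministic prover decrypts it back.
def run_protocol(x, w):
    u = os.urandom(16)                   # verifier's random challenge
    ct = we_enc(x, u)                    # verifier -> prover
    u_tilde = we_dec(ct, w)              # prover is fully deterministic
    return u == u_tilde                  # verifier accepts iff they match

print(run_protocol(49, 7))   # True: 7 is a witness for 49
print(run_protocol(50, 7))   # False: 50 is not in the toy language
```

Note how the prover side has no randomness at all: everything it sends is a deterministic function of its witness and the incoming ciphertext.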
These are verifiers that can behave maliciously, but all of the messages that the verifier sends can be explained as honest verifier messages under some choice of random coins. Unlike related notions such as semi-malicious adversaries, these coins might actually be hard to find. One important consequence of this definition is that the simulator no longer knows the message that an explainable verifier encrypts, as it did in the honest-verifier DPZK case, so our previous simulation strategy won't work. In fact, if you look at the impossibility proof of Goldreich and Oren, it already implies that auxiliary-input DPZK for explainable verifiers is impossible. So making progress with explainable verifiers is an important step towards getting DPZK against malicious verifiers as well. As with most ideas for going from honest-verifier ZK to malicious-verifier ZK, we use an additional trapdoor statement that only the simulator can use, but a cheating prover cannot. So what is this in our setting? As we already said, explainable-verifier DPZK is ruled out for unbounded auxiliary-input zero knowledge, so we have to bound the amount of auxiliary input. It's actually simpler to think of the size of the verifier plus the auxiliary input as bounded by some value B. So what the verifier does now, in addition to the honest-verifier protocol we've already seen, is sample a really large string R of some length N, where N is much larger than B. It uses this R as the statement for a separate witness encryption scheme — we'll see momentarily for which language — and encrypts the same message u that it sampled before, generating a second ciphertext. So now there are two ciphertexts that encrypt u: one using the statement x, and one using this new statement R.
So what is the language that this new witness encryption is used for? The language is given here at the bottom: for any string R, we say (R, M) belongs to the relation if M is a Turing machine that outputs R and, more importantly, the description of the Turing machine is small — specifically, of size at most B + λ. The prover itself doesn't do anything else: it behaves exactly as in the honest-verifier protocol, and the verification also remains identical. The verifier, in addition to whatever it sent before, also sends across the new ciphertext, and, since the deterministic prover doesn't know the statement for the second ciphertext, the verifier also has to send across R. Completeness follows exactly as in the honest-verifier setting, because the deterministic prover doesn't do anything else and the verifier's check is identical. For soundness, we need to argue that, given this new ciphertext, a cheating prover isn't suddenly able to decrypt u and thereby cheat. This follows from the description of the relation: R is a really long string, the witness must be a really short machine that outputs R, and for a random R, with high probability no such machine exists. So a cheating prover is unlikely to succeed here. And what about zero knowledge? For zero knowledge, we require a machine that outputs R. Do we have such a machine? Of course — the verifier itself: as you can see, the first message the verifier sends across includes the string R. The simulator can therefore just use the verifier's code as the witness for the second ciphertext, get back ũ, and send ũ across, thus completing the simulation. A point of note: the verifier's randomness is generated by a PRG, so the overall description of the verifier can still be small.
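As a toy illustration of this trapdoor relation, the sketch below treats a "Turing machine" as a short piece of Python source defining a function `out()`, and checks the two conditions: the description has size at most B + λ, and running it outputs R. The concrete encoding, the bounds, and the names here are all illustrative assumptions, not the paper's formalism.

```python
# Toy check of the trapdoor relation: (R, M) is in the relation iff M is
# a machine of description size at most B + lambda whose output is R.
# A "machine" here is just short Python source defining out().
B, LAM = 64, 16

def in_relation(R, machine_src):
    if len(machine_src) > B + LAM:
        return False                  # witness must be a *short* machine
    scope = {}
    exec(machine_src, scope)          # toy stand-in for running the TM
    return scope["out"]() == R

# Zero knowledge: the verifier's own short (PRG-derandomized) code is a
# valid witness for the R it sent, so the simulator can use it.
verifier_code = "def out():\n    return 'R' * 100\n"
R = "R" * 100                         # the string this 'verifier' sends

print(in_relation(R, verifier_code))          # True: short machine outputs R
print(in_relation("S" * 100, verifier_code))  # False: wrong output

# Soundness intuition: a uniformly random R much longer than B + LAM has
# no short program outputting it, so a cheating prover has no witness.
```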
I sort of brushed this under the rug, but if you look at the relation carefully, it's actually not an NP relation, because we do not a priori bound the running time of the Turing machine. We bound the size of the machine by B + λ, but it can run in arbitrary polynomial time. So to actually have an efficient witness encryption scheme for this relation, we require the strong assumption of indistinguishability obfuscation for Turing machines. Given that we have explainable-verifier DPZK, how do we get to the malicious setting? The idea here is quite simple: the verifier simply proves honest behavior. As long as the verifier is able to prove that it generated both ciphertexts correctly, you can essentially use the same simulator that you had for explainable-verifier DPZK. So how does the verifier construct such a proof? For the case of languages in the intersection of NP and coNP, the verifier proves, using a non-interactive witness-indistinguishable proof, that either it behaved honestly or the statement x is not in the language. Because L is also in coNP, "x not in the language" actually has a witness. And for zero knowledge, the second statement is going to be false — we only consider zero knowledge for x in L — and therefore an accepting proof must indicate that the verifier behaved honestly. So that's great. What about the case where L is an arbitrary NP language? This is a little more complex. The verifier still proves that it behaved honestly, but the "or" statement now says that it has committed to a collision of a keyless hash function that is collision resistant. Essentially, the second statement is something that's hard for a B-bounded auxiliary-input verifier to compute.
So, in the zero-knowledge setting, as long as the second statement is unlikely to hold true, it must be the case that the first statement is true, and therefore we have a verifier that behaves honestly. So we have now gone from honest-verifier DPZK all the way to malicious-verifier DPZK — admittedly under strong assumptions — and that gives us a perfect jumping-off point for the next part, where we talk about the necessity of these strong assumptions. Specifically, we show that if a language has a DPZK, it also has a witness encryption scheme. To do so, we use the notion of a predictable argument, which was also introduced in the work of Faonio, Nielsen and Venturi. Here again, the prover and the verifier interact in a protocol with the statement x as common input, and the prover is trying to prove that x belongs to some language. Predictable arguments additionally impose a structure on this protocol: the verifier actually knows, in advance, exactly what the prover's messages should be. So what the verifier does is run an algorithm that generates both its message V1 and the predicted prover response P1; it sends across V1, gets in response some P̃1, and only proceeds if P1 equals P̃1. This is the "predictable" part: the verifier already knows the prover's message. It does the same for the next round, generating V2 and P2, and so on until the end of the protocol. And clearly, since the prover is not providing any new information that the verifier doesn't already know, the verifier can actually run this entire algorithm at the start and just send its messages one after the other. There are lots of common and natural examples of predictable arguments. The most common example is graph non-isomorphism, where the prover and verifier are given a pair of graphs and the prover wants to prove to the verifier that these graphs are not isomorphic.
In this protocol, the verifier picks a random bit b, randomly permutes the graph G_b, and sends it across to the prover, and the prover is supposed to figure out which bit b the verifier picked. Essentially, since the verifier is doing the picking, it knows what the prover's response should be for it to accept. So this is a predictable argument for graph non-isomorphism. Now, what do predictable arguments have to do with DPZK, or with witness encryption for that matter? Faonio, Nielsen and Venturi actually show that a predictable argument for any language implies witness encryption for that language. So all that we have to show is that DPZK for a language implies a predictable argument. Let's look at that transformation; the idea is quite simple. What does the verifier do? The verifier runs the simulator on the statement x, getting a simulated transcript along with the random coins of the verifier. As a first step, the verifier simply rejects if the simulated transcript is not accepting — and this check can be done efficiently given all the information that the simulator outputs. It then sends across the message V1, and the prover in this predictable argument behaves identically to the DPZK prover. The prover doesn't change at all; it's just that the verifier has simulated the transcript and uses the simulated messages as its own verifier messages. Okay, so the idea seems simple enough. Now we need to argue that this is actually a complete and sound protocol. Completeness simply follows from the zero-knowledge property: the simulated prover messages and the real deterministic prover's messages have to be the same, since otherwise we could build a distinguisher that looks at the simulated messages and the prover's messages and tells them apart. So when we run the protocol with an honest prover and this verifier, the verifier actually accepts. And what about the next case, which is soundness?
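Before getting to soundness, here's a toy sketch of this verifier-from-simulator transformation. The underlying "DPZK" is a functionality-only mock under my own illustrative assumptions (the toy relation "x is a perfect square", and no actual hiding), and the simulator here trivially knows the verifier's coins because it picks them itself — the point is only to show how the simulator's predicted prover message becomes the predictable-argument check.

```python
import os

# Toy stand-in for an NP relation: x is in L iff x = w * w.
def relation(x, w):
    return w * w == x

def simulator(x):
    # The simulator chooses the verifier's coins itself, so it knows the
    # challenge u and hence the deterministic prover's reply in advance.
    u = os.urandom(8)
    v1 = {"stmt": x, "payload": u}   # simulated verifier message
    p1 = u                           # predicted prover message
    return v1, p1

def dpzk_prover(v1, w):
    # Unchanged deterministic DPZK prover: replies iff it has a witness.
    return v1["payload"] if relation(v1["stmt"], w) else None

def pa_verifier_run(x, prover):
    # Predictable-argument verifier: run the simulator, send its verifier
    # message as your own, and accept iff the prover's reply matches the
    # predicted one. (The real transformation also rejects upfront if the
    # simulated transcript is non-accepting; this toy simulator always is.)
    v1, p1 = simulator(x)
    return prover(v1) == p1

print(pa_verifier_run(49, lambda v1: dpzk_prover(v1, 7)))   # True
print(pa_verifier_run(50, lambda v1: dpzk_prover(v1, 7)))   # False
```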
Here, the first thing to note is that the verifier only starts interacting with the prover if the simulator output an accepting transcript for a statement that's not in the language — that is, when the simulator was run on some x not in L, it still produced an accepting transcript. And given that the cheating prover is able to make the verifier accept, it means that it sent messages P̃1, P̃2 that matched the simulated transcript. Therefore, the cheating prover is able to come up with a transcript that makes a verifier accept, and as long as the underlying protocol is sound, a cheating prover shouldn't be able to do this. For this argument, we actually made the simplifying assumption that the simulated transcript contains pseudorandom verifier coins. While this is not exactly true, in the paper we use a more subtle argument to show that soundness still holds even if the simulator doesn't produce pseudorandom coins for the verifier. When we talked about our positive construction, we showed that there exists a two-message DPZK, and this somehow seems to be inherent, because we also show in our paper that any DPZK argument can be collapsed into two messages. So even if you have an arbitrary-round DPZK protocol, it can be collapsed into two messages, and it can also be made extremely laconic in the prover's message size: for soundness error roughly one half, it suffices for the prover to send a single bit. These actually follow from transformations on predictable arguments in the work of Faonio, Nielsen and Venturi. Since they don't care about zero knowledge, what we show is that their transformations also preserve zero knowledge, as long as you start with a zero-knowledge protocol. So that essentially brings us to the end of the talk. To sum up: first, assuming strong primitives, we construct a two-message DPZK for all of NP against bounded auxiliary-input verifiers.
And second, we show that some of these strong assumptions are inherent, by showing that DPZK for any language implies witness encryption for the same language. So thanks a lot. If you have any questions, feel free to reach out by email. The paper is also on ePrint — here's the link. And you can of course also ask me questions during the live talk.