Hello. I'm Ben, and I will be talking about some progress on the problem of achieving non-interactive witness-hiding proofs in the standard model. This is joint work with Mark Zhandry. Our goal in this work was to construct proof systems for arbitrary NP languages with three properties. First, the proofs should be witness-hiding: after seeing a proof, the verifier should not be able to construct a witness that is accepted by the original NP verifier. Second, the proof system should be non-interactive: it should consist of a single message from the prover to the verifier. Third, it should not rely on a common reference string or any other shared resource; the proof should be in the so-called standard model. We present four different proof systems that make progress towards this goal. Each scheme relies on different assumptions and has some serious qualifications, but each of them represents progress towards non-interactive witness hiding. As a whole, we hope they clarify the landscape of witness-hiding proofs. Before we present the schemes, let's go over some definitions and motivation. First, we're dealing with NP languages throughout. If we use L to denote a language, we will use V_L to denote its poly-time verifier. When we need it, we use R_L to denote the witness relation, that is, the set of pairs (x, w) of instances and witnesses. All the proofs we present are two-party protocols between a prover and a verifier. As usual, both parties get the instance x as input, and the prover additionally gets a witness w. At the end of the protocol, the honest verifier either accepts or rejects. A dishonest verifier might output something else. Like any proof system, we want ours to be complete, sound, and efficient. These are standard definitions. More interestingly, we want to look at privacy notions for proofs. If we were not interested in privacy, the prover could simply output the whole witness w, and the verifier could run the NP verifier to validate it.
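As an illustrative aside (my own toy example, not from the talk): here is what an NP verifier V_L looks like concretely, for the subset-sum relation. The instance x is a list of numbers together with a target, and a witness w is a set of indices whose elements sum to the target.

```python
def verify_subset_sum(instance, witness):
    """Toy NP verifier V_L for subset-sum.

    instance x = (nums, target); witness w = list of distinct indices
    into nums whose elements sum to target.
    """
    nums, target = instance
    # Reject repeated or out-of-range indices.
    if len(set(witness)) != len(witness):
        return False
    if any(i < 0 or i >= len(nums) for i in witness):
        return False
    return sum(nums[i] for i in witness) == target

x = ([3, 7, 12, 5], 15)   # instance
w = [0, 3, 1]             # 3 + 5 + 7 = 15, so V_L accepts
```

Outputting w directly, as described above, convinces the verifier but reveals everything; the privacy notions below ask for less revealing proofs.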
But maybe the prover doesn't want to reveal everything it knows. We will define three notions of privacy: zero knowledge, which is a very strong and well-studied property, as well as two weaker ones, witness hiding and witness indistinguishability. Zero knowledge hides everything about the witness. It means that anything a malicious verifier could learn by interacting with the prover, it could have learned by itself. Formally, we define this using a simulator-based indistinguishability game. What do we like about zero knowledge? To start, it is a very strong privacy notion; it easily implies both witness hiding and witness indistinguishability. Additionally, it can be achieved without interaction if you have a common reference string. These NIZKs, as they're called, are known from a variety of reasonable assumptions. Unfortunately, in the standard model we have lower bounds: you need at least three messages to build a zero-knowledge proof system for NP. With this limitation in mind, to further reduce the number of messages, we should look at weaker privacy notions. Witness indistinguishability is an interesting privacy notion that considers what happens when you run the proof using distinct witnesses. Formally, for any malicious verifier V*, the transcript after running the protocol with some witness w is indistinguishable from the transcript if the protocol were run with another witness w'. Unlike zero knowledge, witness indistinguishability can be achieved non-interactively in the standard model: by combining a CRS NIZK with a hitting set generator, we get a NIWI, a non-interactive witness-indistinguishable proof. These NIWIs are very useful in developing more complex protocols and proof systems, as we demonstrate in the body of the paper. But on their own, witness-indistinguishable proofs are not always interesting.
For example, if the language L has unique witnesses, then every proof system is trivially witness indistinguishable. With that counterexample in mind, we move to the definition of witness hiding. Witness hiding means that after seeing the proof, the verifier cannot recover a valid witness for the instance. Formally, for any malicious verifier, the probability that it outputs some w* accepted by the NP verifier is negligible. Now, witness hiding is an average-case notion: we must define a distribution D over the witness relation. Further, witness hiding is only possible if it's hard to find witnesses for D in the first place, that is, if the search problem over the distribution is hard. But if this is the case, then witness hiding is always a meaningful and intuitive privacy notion, regardless of the language. Further, in the CRS model, it's easy to achieve: every NIZK is also witness hiding. However, it is currently unknown whether witness-hiding proofs exist in the standard model, bringing us to our results. We present four separate constructions that almost achieve the desired notion of non-interactive witness hiding. All the constructions assume we have NIWIs, then consider what it takes to boost their security to witness hiding. As a starting point, we consider a two-message protocol of Rafael Pass. The original security analysis of this protocol utilizes complexity leveraging to achieve security. To do this, it assumes quasi-polynomially secure one-way functions. More concerningly, it only guarantees witness hiding when the search problem is quasi-polynomially hard. We address this limitation by considering the delayed-input model, where the verifier only learns the instance x at the very end of the protocol. In this model, we manage to prove witness hiding whenever the search problem is hard in the standard sense against non-uniform adversaries.
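The witness-hiding condition just described can be written out as follows (a standard formalization; the symbols here are my choice of notation):

```latex
% Witness hiding over a distribution D on the relation R_L:
% for every PPT malicious verifier V*, receiving an honest proof pi,
\Pr_{(x,w) \leftarrow D}\bigl[\, w^* \leftarrow V^*(x, \pi) \;:\; V_L(x, w^*) = 1 \,\bigr] \le \mathrm{negl}(\lambda)
% This is only meaningful when the search problem for D is hard,
% i.e. no efficient algorithm given x alone finds an accepting w*.
```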
Moving to single-message protocols, the next scheme we give requires an advice string given to both prover and verifier. Making a non-standard but plausible complexity assumption, we show there exists some choice of advice such that the protocol is witness hiding. But unfortunately, such an existential result is not useful in practice: it's unclear at best how to choose the advice string uniformly. The third protocol we give is explicit. It is in fact a universal non-interactive proof. That is to say, our proof system Pi_U is witness hiding as long as some non-interactive proof system Pi' is witness hiding and provably sound in a sense we define later. Further, even if Pi' has an inefficient prover or requires non-uniform advice, Pi_U will still be efficient and uniform. But unfortunately, the previous scheme we gave seems to lack provable soundness in the desired sense. Thus, while the universal scheme is certainly concrete, we do not have a scheme Pi' to demonstrate that it is witness hiding. Our last result is in a slightly different vein. First, it is limited to the case of languages with unique witnesses. As we discussed before, this is still an interesting setting for witness hiding. However, we would like it to be more general. We will construct a proof system that is either witness hiding or yields a form of witness encryption. If we simply assume witness encryption does not exist, then our scheme is certainly witness hiding. But perhaps more realistically, we should just interpret the either-or result as heuristic evidence that our scheme is secure. We have some time now to look at some technical details, but for more, definitely check out the paper. We start with a very basic idea: to prove x, instead output a NIWI of the statement "x or y". This NIWI can be constructed using the witness w for x. Now there are two cases. If y is false, then by disjunctive syllogism, we know the proof is sound.
On the other hand, if y is true, then we can show the proof is witness hiding. This proof is pretty simple. Let z be a witness for y. Using non-uniformity, we can hard-code z into the code of the adversary. Then, instead of using the proof on the left that uses w, we can generate the proof on the right using z. Feeding the right-hand proof to the original adversary directly solves the D-search problem. Now, we want a proof system that is both sound and witness hiding. However, y cannot be both true and false. To resolve this, in Pass's protocol the verifier will sample a statement y that is true but where finding a witness is hard. Concretely, we use a one-way function and let y be the statement that some value b is in the range. In the full protocol, we need to add a commitment. First, the verifier samples a random preimage r, applies the one-way function, and sends the result. Second, the prover commits to a witness for the statement "x or y" and sends a NIWI proof that it has done so. Finally, the verifier checks the NIWI. The completeness of the protocol is easy to show, which leaves us to prove soundness and witness hiding. For soundness, consider an adversary who opens the commitment by brute force. Since x is false, the commitment must contain a witness for y, that is, a preimage of the one-way function. Thus, we have broken one-way function security. For witness hiding, we instead invert the one-way function, again using brute force, and use the preimage r to generate the NIWI in the second step. By NIWI security, the malicious verifier still outputs a witness given this proof, thus breaking the D-search problem. Now, both of these adversaries are inefficient. Thus, witness hiding requires the one-way function and the commitment scheme to have carefully chosen concrete security parameters, and the distribution D must be secure against quasi-polynomial adversaries as well. We would much prefer to allow standard hardness of D.
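As a rough sketch of the two-message flow (toy stand-ins of my own: SHA-256 for the one-way function and a hash-based commitment; the NIWI itself is left as a placeholder, so this shows structure only, not a secure instantiation):

```python
import hashlib
import os

def f(r: bytes) -> bytes:
    # Toy one-way function (assumption for illustration): SHA-256.
    return hashlib.sha256(r).digest()

def commit(value: bytes, rand: bytes) -> bytes:
    # Toy hash-based commitment; hiding/binding assumed heuristically.
    return hashlib.sha256(value + rand).digest()

# Message 1 (verifier -> prover): sample a random preimage r, send b = f(r).
# The statement y is "b is in the range of f".
r = os.urandom(16)
b = f(r)

# Message 2 (prover -> verifier): commit to a witness for "x or y"
# and send a NIWI that the commitment is well-formed. A real NIWI
# would prove this without revealing which disjunct the committed
# witness is for; here it is only indicated as a comment.
w = b"witness-for-x"
rand = os.urandom(16)
c = commit(w, rand)
# pi = NIWI_prove(statement=(x, b, c), witness=(w, rand))  # hypothetical interface
```

The soundness and witness-hiding arguments above both open one of these toy primitives by brute force, which is why the concrete security levels have to be tuned against quasi-polynomial adversaries.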
To achieve this, we give an alternate proof in the delayed-input model, but we delay that discussion to the paper. Instead, I want to briefly discuss how we might try to remove the first round of the protocol. Note that if f is a permutation, then we can sample the image b directly. Thus, we have a straightforward heuristic to remove interaction: simply replace b with the output of a hash function on the instance x. This can be shown secure in the random oracle model, but it is not clear if we can do any better outside of idealized models. This motivates us to consider fixing y non-uniformly. This next construction is again a NIWI of "x or y", but now we fix y non-uniformly, taking it as an advice string for both prover and verifier. For soundness, we only consider false y. Now, to argue witness hiding, let's assume the contrary. Let A_y be an adversary against witness hiding relative to advice y. Since the protocol would be witness hiding if y were true, the success of A_y is itself a proof that y must be false. But if you believe that coNP is not in NP, we shouldn't have such proofs of falseness for NP-complete languages. More formally, we can define a verifier as follows: on input (y, A), sample many tuples from D and see whether A succeeds in extracting a witness. If it does so with sufficiently large probability, accept. Now, this is not quite an NP verifier. To start, it's randomized, so it's more of an MA-type proof system. But more importantly, we have some issues with asymptotics. Because adversaries can have arbitrary polynomial runtime, our verifier must be allowed slightly super-polynomial runtime as well as witness length. For similar reasons, we also need the NIWI and the search problem to be super-polynomially secure and hard, respectively. Despite these technical conditions, we still end up with a plausible complexity assumption in the spirit of coNP not in MA. Taking this assumption to hold, we get the desired result.
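The MA-type verifier just described can be sketched as follows (toy code; the interfaces, names, and the trivial example language in the usage are all my own illustrations, not the construction from the talk):

```python
import random

def ma_verifier(y, A, D_sampler, V_L, trials=200, threshold=0.5):
    """Toy MA-style verifier: accept the claim "y is false" if the
    witness-hiding adversary A extracts valid witnesses from samples
    of D with noticeably large probability."""
    successes = 0
    for _ in range(trials):
        x, proof = D_sampler(y)   # sample an instance plus a proof relative to advice y
        w_star = A(x, proof)      # run the adversary on the proof
        if V_L(x, w_star):
            successes += 1
    return successes / trials >= threshold

# Usage on a trivial example language ("x is twice its witness"):
V = lambda x, w: w * 2 == x
samp = lambda y: (random.randrange(100) * 2, None)
good_adversary = lambda x, proof: x // 2
bad_adversary = lambda x, proof: -1
```

Repeating the experiment many times is what lets a successful adversary serve as an MA-style certificate that y is false, which is exactly what the coNP-not-in-MA-style assumption rules out.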
Moving on from the existential result, we want a concrete scheme. Thus, we will build a universal non-interactive witness-hiding proof system. To do so, we rely on a formal proof system S for statements about Turing machines. So far, we have only discussed proof systems for membership in NP languages. Formal proof systems are a little different: they let us write deductions for logical statements. For concreteness, you can take S to be Peano arithmetic, but there are plenty of other deductive systems that would work. At a high level, our universal proof system will prove that there exists some other proof system that accepts x. Let S_x be the statement with two clauses: first, that there exists a Turing machine D and witness z such that D accepts the tuple (x, z); second, that D is sound, where pi is a logical proof of its soundness. In our protocol, the prover outputs a NIWI of the statement S_x. As witness, it uses the original NP verifier V_L, its witness w, and the trivial proof tau that V_L is sound. The soundness of this universal system follows by direct implication. Because the NIWI is sound, we know S_x is true. The second clause ensures that D is sound. Then the soundness of D means that x is in L, as desired. For witness hiding, we have to use the existence of some other non-interactive witness-hiding scheme; let's call it Pi'. Let pi be the logical proof that V' is sound. Then we switch from the left-hand proof generated by the prover to the right-hand proof using the prover and verifier from the prime scheme. Thus, NIWI security lets us reduce from an attacker against the universal proof system to an attacker against the prime scheme. A further thing to note is that P' and V' never occur in the actual construction, just in the proof. Thus, P' can be inefficient, and the whole scheme can even be non-uniform. However, the proof of correctness must prove soundness for a particular choice of advice.
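Schematically, the two-clause statement S_x can be written as follows (the notation is my own rendering of the clauses described above):

```latex
% Clause 1: some machine D accepts (x, z) for some witness z.
% Clause 2: pi is an S-proof that D is sound, i.e. that acceptance implies membership.
S_x \;:\; \exists\, D, z, \pi \;\;
  \bigl[\, D(x, z) = 1 \,\bigr]
  \;\wedge\;
  \bigl[\, \pi \text{ is an } S\text{-proof of } \forall x', z' :\, D(x', z') = 1 \Rightarrow x' \in L \,\bigr]
```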
Since our non-uniform construction from the previous section does not appear to have this sort of provable soundness, it does not suffice to instantiate the universal scheme. One final thing to note is that our analysis didn't use anything special about witness hiding, other than the fact that it is a falsifiable security notion. Thus, the security of the universal scheme is a lot more general. Since it achieves any achievable falsifiable security property, we claim that this universal proof is in fact the best possible non-interactive proof system; this is described formally in the paper. Let's move on briefly to our final result, yielding either witness hiding or witness encryption. First, we need a brief overview of witness encryption. Witness encryption is a form of public-key encryption where the public key is an instance x and any witness w can serve as a private key. Formally, we require correctness, that is, that an honestly encrypted ciphertext decrypts, and a property called soundness security, which dictates that if x is not in the language, then an encryption reveals nothing about the message. We weaken the notion of correctness to an average-case notion. Even this weakened form of witness encryption seems to be a very strong cryptographic primitive, only known from tools like iO. Thus, it seems unlikely that our simple scheme could achieve even weak witness encryption, suggesting, heuristically at least, that it should be witness hiding. Let's look at the construction now. The scheme is much as before, but now we prove "x or not y", taking y to be a true statement in a language in NP intersect coNP, and we append a witness z for y to the proof. By checking both the witness for y and the NIWI, a verifier can then be sure that x is true. Towards witness hiding, we let A be an adversary against witness hiding. To encrypt a randomly chosen w under public key y, we compute the NIWI pi that constitutes half of the proof from the proof system above.
Then, since the decryptor gets z, it has the other half of the proof, so it can run the adversary, obtaining some w'. Since we require that the language has unique witnesses, we get that w equals w', as desired. To encrypt a chosen message instead of a random one, we can use the Goldreich-Levin hardcore predicate, giving a full witness encryption scheme. To conclude, each of these four schemes almost achieves our goal of non-interactive witness hiding for arbitrary NP languages. We hope that tools such as these allow us to finally achieve such a proof system. Thank you.
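The Goldreich-Levin step works because the inner product <r, w> mod 2 is a hardcore bit of w. A minimal sketch of masking one chosen message bit with it (toy code of my own; it only illustrates the masking mechanics, assuming the decryptor can recover w by running the adversary as in the reduction above):

```python
import secrets

def gl_bit(r_bits, w_bits):
    """Goldreich-Levin hardcore predicate: inner product <r, w> mod 2."""
    return sum(a & b for a, b in zip(r_bits, w_bits)) % 2

def encrypt_bit(m, w_bits):
    """Mask message bit m with the hardcore bit of w for a fresh r.
    (In the real scheme, w stays hidden inside the NIWI; the
    decryptor recovers it via the witness-hiding adversary.)"""
    r = [secrets.randbelow(2) for _ in w_bits]
    return r, m ^ gl_bit(r, w_bits)

def decrypt_bit(ciphertext, w_bits):
    r, c = ciphertext
    return c ^ gl_bit(r, w_bits)

w = [1, 0, 1, 1, 0, 1]   # the (unique) witness, as a bit vector
```

Encrypting a random w directly gives the weaker notion; masking chosen bits this way is what upgrades it to encrypting arbitrary messages.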