Hi everyone, my name is Dakshita Khurana and this is the Eurocrypt talk about the paper Non-Interactive Distributional Indistinguishability and Non-Malleable Commitments. Let me begin by telling you about non-interactive distributional indistinguishability, which is a special type of proof system. A proof is a way for one party, that we will call the prover, to convince another party, called the verifier, of the truth of a statement. Traditionally, the way we think about proofs is that the prover writes out an entire sequence of steps that he then sends to the verifier, who checks these steps one by one and accepts the prover's claim if all of them check out. Some of the really consequential early results in cryptography and complexity followed the realization that proofs could be interactive: a verifier could send randomized queries that the prover would have to answer, and only once the prover had successfully answered all of these queries would the verifier accept the prover's claim. As usual, it was important for these proofs to be sound, which means that if the prover's claim was false, then with overwhelming probability the verifier would catch the prover in a lie and reject the proof. Now, one of the most beautiful consequences of allowing interaction was that it enabled zero knowledge. This concept was introduced in a seminal work of Goldwasser, Micali and Rackoff, and allowed for the construction of interactive proofs that revealed no secrets to the verifier. The only information the verifier learned from such a proof was whether or not the prover's claim was true; no associated secrets that the prover held would be leaked to the verifier. Zero-knowledge proofs have become the basis for most privacy-preserving proof systems that we use today.
And while interaction enables zero knowledge, interaction is also prohibitively expensive in situations like ledgers and blockchains, where there are many participants verifying the proofs of many others. So one of the really important goals of modern cryptography is to build privacy-preserving proof systems that do not require interaction. Now, one could begin by asking whether ZK arguments are achievable without interaction. While this is possible via heuristic constructions that can be proven secure in idealized models, or if we assume that players have access to a trusted common reference string, they actually turn out to be impossible to realize in the plain model, where players don't have access to a trusted third party. So an important question is: can we achieve privacy-preserving proof systems that are not zero-knowledge but satisfy somewhat weaker privacy properties, and that still suffice for applications, in the plain model and without access to trusted setup? There have been some positive results in this direction. Starting with the work of Barak, Ong and Vadhan, who themselves built on a work of Dwork and Naor, as well as subsequent works of Groth, Ostrovsky and Sahai, and of Bitansky and Paneth, we have notions of non-interactive witness indistinguishability, where essentially the guarantee is that the verifier cannot tell which of two witnesses was being used by the prover. There have also been constructions of witness-hiding arguments, most recently by Kuykendall and Zhandry, but these rely on non-standard assumptions, and not all of them are explicit constructions. As such, there are major gaps in our understanding of what it is that we can achieve non-interactively, and what sorts of privacy guarantees are achievable from standard assumptions. Let's try to better understand what this gap is with the help of an example. Suppose I want to finalize a transaction, say on a blockchain, but I would like to keep the contents of the transaction hidden.
Let's say I only have 10 units of currency in my account, so the transaction should only go through if I transferred less than 10 units to someone else; clearly, I cannot transfer more than I have. Now, if I want to keep transactions encrypted, I'd also like to be able to guarantee that I'm not cheating. So I can compute an encrypted transaction and then, without revealing exactly how much money I'm transferring, prove that the amount I transferred is less than the total currency that I have in my account. As a simplification of this, let's consider a prover and a verifier. The prover encrypts a message M and would like to prove that the encrypted message is less than 10, but without revealing what this message is. This is called a commit-and-prove argument. And for the reasons I already discussed, we would like to be able to do this non-interactively. I mentioned the word commit because this encryption I was talking about is not really the right primitive here; the functionality that we need is a commitment. A commitment allows a committer who has some secret input M to put this message M inside a box, lock the box, and pass it on to a receiver. Later, the committer can send a key using which the receiver can open this box and recover the message. Note that once the box is sent, the committer cannot change the contents of the box; she can only later send a key to open it. Now, what actually happens is that in a commitment, the committer and the receiver run a commit phase, at the end of which they obtain a transcript. This transcript commits the committer to a message without revealing to the receiver what this message is. The committer cannot later change her mind about the message she committed to; the most she can do is decommit in the decommit phase.
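To make the commit and decommit phases concrete, here is a minimal toy sketch of a hash-based commitment, purely for illustration; it treats SHA-256 as a random-oracle-style hash and is not the scheme used in the paper.

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit phase: lock the message in a 'box' (the digest).
    The randomness r plays the role of the key to the box."""
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + message).digest()
    return c, r  # send c to the receiver now; keep r secret until later

def decommit(c: bytes, r: bytes, message: bytes) -> bool:
    """Decommit phase: the receiver checks that the box opens to `message`."""
    return hashlib.sha256(r + message).digest() == c

c, r = commit(b"transfer 7 units")
assert decommit(c, r, b"transfer 7 units")      # honest opening is accepted
assert not decommit(c, r, b"transfer 9 units")  # committer cannot change her mind
```

The second assertion illustrates the binding intuition: once c is sent, the committer cannot open it to a different message.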
In a non-interactive commit-and-prove argument, a prover with input a secret message M and a public predicate Phi generates a commitment to the message M and additionally proves that the message M satisfies the predicate Phi. The soundness guarantee is that if the verifier accepts the prover's proof, then the commitment C is indeed a commitment to some message M that satisfies the predicate Phi. The privacy guarantee is that the commitment and proof together do not reveal the message M. What this means is that for every predicate and every pair of messages M1 and M2 that satisfy this predicate, the output of the prover given the first message is indistinguishable from the output of the prover given the second message. Well, this seems like a really fundamental and necessary privacy guarantee. Unfortunately, existing non-interactive proof systems like NIWIs and witness-hiding arguments do not offer this guarantee. In particular, NIWIs do not suffice because they do not offer any privacy guarantees in settings where there is a unique witness, which is the case here. And on the other hand, witness hiding offers a much weaker hiding guarantee than the indistinguishability-based hiding that we desire here. Moreover, witness hiding is not known at the moment from well-studied assumptions. And so in this work, we take the first step towards rectifying this situation, which brings me to our results. We introduce NIDI, which is a new type of privacy-preserving non-interactive proof system. We construct NIDI based on indistinguishability obfuscation and variants of one-way functions. And finally, we show that NIDI can be used to obtain commit-and-prove arguments, as well as non-malleable commitments, that are, in some sense, non-interactive. So what do I mean by "in some sense"? Well, the interaction pattern of the resulting commit-and-prove argument is as follows. The prover, on input a message M and a public predicate Phi, generates a sampler and sends it to the verifier.
The verifier, given the sampler, runs a local randomized algorithm on the sampler and obtains a commitment string C. We have the usual completeness property, in that the commitment string C that the verifier obtains is indeed a commitment to the message that the prover intended to commit to, with some randomness R. The soundness property is that either the verifier aborts, or, when it does not abort, the verifier obtains a commitment string C that is indeed a commitment to some message M that satisfies the predicate Phi. Of course, this guarantee is randomized and holds with overwhelming probability over the randomness of the verifier. The privacy guarantee is that for all pairs of messages M1 and M2 that satisfy the given predicate, the sampler for M1 is computationally indistinguishable from the sampler for M2. In other words, there is a CPA-style hiding of the message M. Now, let me tell you a little bit about the techniques that go into constructing a non-interactive commit-and-prove argument. Indeed, a NIDI will be nothing but an abstraction of these techniques into a clean primitive. Recall that the prover in a non-interactive commit-and-prove will send the verifier a sampler that encodes the prover's secret message M and outputs non-interactive commitments to the message when the verifier interacts with the sampler. The sampler will actually take the form of a circuit. The circuit will have hardwired a key K for an appropriate pseudorandom function, and will also have hardwired the prover's secret message M. On input a string x, the circuit will compute the PRF and output a commitment to the message. In order to hide the message, this circuit will be obfuscated via an indistinguishability obfuscation scheme. As such, this will satisfy completeness and privacy, but there is no soundness yet.
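The logic of the sampler circuit can be sketched as follows. This is a toy stand-in only: HMAC-SHA256 plays the role of the PRF, the same hash-based commitment as before is used, and the crucial indistinguishability-obfuscation wrapping of the circuit is omitted entirely (without it, the hardwired message is of course not hidden).

```python
import hashlib
import hmac
import secrets

SECRET_MESSAGE = b"m"               # prover's hardwired secret message M
PRF_KEY = secrets.token_bytes(32)   # hardwired PRF key K

def commit(message: bytes, randomness: bytes) -> bytes:
    # toy hash-based commitment, as in the earlier sketch
    return hashlib.sha256(randomness + message).digest()

def sampler_circuit(x: bytes) -> bytes:
    """The circuit the prover would obfuscate: on input x, derive the
    commitment randomness as PRF_K(x) and output Commit(M; PRF_K(x))."""
    r = hmac.new(PRF_KEY, x, hashlib.sha256).digest()  # PRF stand-in
    return commit(SECRET_MESSAGE, r)

# the verifier feeds in a random input and obtains a commitment string C
C = sampler_circuit(secrets.token_bytes(16))
assert sampler_circuit(b"x") == sampler_circuit(b"x")  # the circuit is deterministic
```

Deriving the randomness via a PRF is what lets a single fixed circuit answer every verifier input with a fresh-looking but consistent commitment.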
This is because a malicious prover could send an arbitrary circuit that outputs commitments to arbitrary messages that do not necessarily satisfy the predicate. Therefore, in order to achieve soundness, the circuit will actually also attach proofs to the prover's commitments. The specific proof that we rely on is a two-message proof, of which the first message will be supplied as input to the obfuscated program C, and the second message will be computed by the program itself. The specific construction that we rely on is one due to Pass, where in the first message the verifier generates the output y of an appropriately chosen one-way function on a random input, and sends it to the prover. Then the prover outputs a commitment to zero and, in addition, proves via a NIWI that either the statement they were originally setting out to prove is true, which in this case is that the message M satisfies Phi, or that they inverted the one-way function and managed to commit to an inverse of the value y that the verifier sent in the first message. So, zooming out and coming back to our setting of commit-and-prove: the obfuscated circuit that the prover sends will obtain as input a pair (x, y), and then, in addition to computing a commitment to the message, will output the second message of this two-message proof system with respect to the input string y. So the next question is: why does this proof system hide the prover's secret message M, and why does this provide any privacy at all? To understand why, let's consider a hybrid experiment where the prover sends a slightly different circuit that, instead of committing to the message M, has an arbitrary index i hardwired in it, and, if the input y is less than the index i, outputs a commitment to M1, and otherwise outputs a commitment to M2.
Two such circuits, with indices i and i+1 hardwired respectively, will only differ on a single input, namely y = i. And so it turns out that by relying on a puncturable PRF and indistinguishability obfuscation, and some standard techniques developed in the context of using indistinguishability obfuscation, one can prove that these two circuits are indistinguishable from each other. And if one carries out sufficiently many hybrid experiments, in particular as many as the number of possible inputs to these circuits, then one can show that a circuit that always outputs commitments to M1 is indeed indistinguishable from a circuit that always outputs commitments to M2. This helps establish privacy, as long as the obfuscation, commitment and PRF are at least 2-to-the-n secure, where n is the size of inputs to the program. So in other words, we need that if the size of inputs to this obfuscated circuit is n bits, then the PRF, the proof system, the commitment, and the obfuscation scheme are all 2-to-the-n secure. This presents some challenges when proving soundness. One would like to say that, by soundness of the NIWI, if the message M of the prover does not satisfy the predicate Phi, then the proof that the circuit provides will implicitly contain an inverse of the one-way function F. And one may hope to use complexity leveraging to extract this inverse from the proof pi and derive a contradiction. However, recall that we needed the proof pi to be 2-to-the-n secure, where n was also the size of outputs of the one-way function. This in particular prohibits the use of complexity leveraging, just because it's going to take much longer to extract the inverse of the one-way function from the proof, and it requires us to come up with a new technique. Our main idea is to rely on a different axis of hardness.
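The shape of this hybrid argument can be illustrated with a toy functional version of the hybrid circuits (no obfuscation, a hash-derived PRF stand-in, and a tiny 4-bit input space). With the "less than i" convention used here, adjacent hybrids agree on all but a single input, and sweeping the index across the whole input space walks from "always commit to M2" to "always commit to M1".

```python
import hashlib

def commit(message: bytes, randomness: bytes) -> bytes:
    # toy hash-based commitment
    return hashlib.sha256(randomness + message).digest()

def prf(y: int) -> bytes:
    # toy PRF stand-in with a fixed hardwired key
    return hashlib.sha256(b"K" + y.to_bytes(8, "big")).digest()

def hybrid_circuit(i: int, y: int, m1: bytes, m2: bytes) -> bytes:
    """Hybrid H_i: commit to m1 on inputs y < i, and to m2 otherwise."""
    msg = m1 if y < i else m2
    return commit(msg, prf(y))

N = 2 ** 4          # toy input space of size 2^n with n = 4
m1, m2 = b"m1", b"m2"

# adjacent hybrids H_i and H_{i+1} differ on exactly one input
i = 7
diffs = [y for y in range(N)
         if hybrid_circuit(i, y, m1, m2) != hybrid_circuit(i + 1, y, m1, m2)]
assert diffs == [i]

# H_0 always commits to m2; H_N always commits to m1
assert hybrid_circuit(0, 3, m1, m2) == commit(m2, prf(3))
assert hybrid_circuit(N, 3, m1, m2) == commit(m1, prf(3))
```

Since the full argument needs one hybrid per input, there are 2^n steps, which is exactly why every underlying primitive must be 2-to-the-n secure for the losses to remain meaningful.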
In particular, we develop a new technique where we rely on non-uniform hardness, and at the same time achieve non-uniform security. So this is all from standard assumptions: iO and non-uniform security of one-way functions. And like I mentioned previously, a NIDI is simply a generalization of these techniques to encapsulate what you can do more generally. A prover that has as input a language L and a distribution D that samples instances and witnesses satisfying the relation corresponding to the language can generate a sampler and send it to the verifier, in such a way that the verifier can interact with the sampler and obtain samples from the language, that is, obtain instances in the language, and be convinced that, if the verifier did not output bottom, then they indeed did output an instance in the language L. Moreover, the privacy guarantee states that for all pairs of distributions that sample indistinguishable instances, the sampler for the first distribution is indistinguishable from the sampler for the second. We are also able to use the same ideas to build a non-malleable, or CCA, commitment, where the interaction pattern is the same as before: the committer, on input M, sends a sampler to the receiver, and then the receiver runs a randomized algorithm on input the sampler to obtain a commitment string C. In a nutshell, the non-malleability property guarantees that a man-in-the-middle that obtains a commitment from an honest committer is not going to be able to generate a sampler that will produce valid commitments to a related message. In particular, the guarantee is that the commitment C-prime generated by the man-in-the-middle is going to be a commitment to some X that is independent of the input M of the honest committer.
The reason that NIDIs help in this setting is that existing non-interactive CCA commitments, or non-malleable commitments, have an important tag amplification component that requires a commit-and-prove mechanism, where the committer commits to the same message many times and must prove that all commitments are to the same message. This is exactly where NIDI turns out to be helpful, as it allows the committer to run this process of tag amplification non-interactively. In summary, in this work we build a NIDI, which is a new non-interactive privacy-preserving proof system that is applicable in settings where statements being proven have unique witnesses. Very roughly, it guarantees that when two statements are indistinguishable, so are the statement-plus-proof combinations. This privacy guarantee is morally quite similar to what strong witness indistinguishability, which is a notion different from regular witness indistinguishability, gives us. However, the completeness properties of NIDI are different, in that the prover cannot really control the exact sample, or, in our example, the exact commitment string, that the verifier would end up with; all the prover can do is send a sampler that outputs a randomized commitment. And we show in this work that these techniques have applications to commit-and-prove arguments as well as to CCA commitments. And we believe that this notion of a NIDI may find other applications in settings where one needs to prove something non-interactively while still giving strong privacy guarantees. That concludes my talk. Thank you for listening.