Thanks, Sanjay. Glad to be here. I'm going to talk about our work on the power of secure two-party computation. This is joint work with Carmit Hazay from Bar-Ilan.

So this talk is going to be about zero-knowledge proofs, and it's always nice to begin with the definition of zero knowledge. It's an interactive protocol between a prover and a verifier that satisfies three properties. First, completeness: a prover can convince a verifier of a true statement. Second, soundness, which says that no cheating prover, even if computationally unbounded, can convince a verifier of a false statement. And third, zero knowledge, which is central to this definition: no efficient verifier can learn anything more than the validity of the statement. To be a little more precise, the zero-knowledge property says that for every probabilistic polynomial-time verifier V*, there is a PPT simulator S that can produce views indistinguishable from what the verifier sees in a real interaction. Pictorially, what the verifier sees in the real interaction is the transcript of the messages together with its random coins, and the simulator must be able to generate something indistinguishable from these.

Okay, so for what languages do we know how to construct zero knowledge? Actually, all of IP, but in this talk I am more interested in the class NP. It was shown early on that, assuming one-way functions, there exist zero-knowledge proofs for all of NP. (I should say the first paper didn't base it on one-way functions, but I'm just going to say one-way functions.) A little bit of a biased history here; I'm not going to list all the works, just what's relevant to my talk. Initially, zero-knowledge proofs for general NP statements were constructed via specific NP-complete languages, like graph three-coloring or Hamiltonicity. There's also a work that does it for satisfiability, but it was not based on general assumptions.
It was based on quadratic residuosity. Along this line of work, a major breakthrough was achieved by Ishai, Kushilevitz, Ostrovsky, and Sahai (IKOS), who showed that you can construct a zero-knowledge proof for any NP relation starting from an honest-majority MPC protocol for a related function f. Honest majority is just one of their results; there are many more of this flavor in their paper. But the core of their approach was to introduce this idea of MPC in the head.

Besides introducing this powerful technique, their work actually constructed one of the first and simplest zero-knowledge proofs for general NP statements without expensive Karp reductions, by instantiating the framework with a simple MPC protocol. In the same work, using very clever choices of the underlying MPC protocol, they even constructed asymptotically communication-efficient, in some sense even optimal, zero-knowledge proofs. More recently, Giacomelli et al. extended this framework and also gave practical implementations of these proofs.

Okay, so our work is similar in spirit to this line of work. Our main result, informally stated, shows that starting from any two-party protocol in the OT hybrid, we can construct a zero-knowledge proof for any NP relation. Now, two-party computation is supposed to be a special case of MPC, so something interesting is going on here. First, this works for any two-party computation in the oblivious-transfer hybrid, which means the parties have access to an OT functionality. And I'll also argue, a couple of slides down, why the IKOS approach does not work, at least not directly, for two-party protocols.

Now, I'm going to list some corollaries. They're not quite corollaries; they need extra work. But just to illustrate what this technique additionally gives us that IKOS does not already give: one, we show a very simple zero-knowledge proof using garbled circuits.
Garbled circuits can be seen as an instantiation of a two-party computation protocol, and previously we only knew how to get zero-knowledge arguments from garbled circuits.

Second, we show that we can strengthen the definition of a zero-knowledge proof and get a property called input-delayedness. I'll talk about this a little later, but very informally, it says that the statement and witness are made available to the prover only in the last round, so the prover's input is delayed. Previously, actually even quite recently, such protocols were constructed only for very specific sigma protocols. If you want it for general NP statements, the work we know of traces back to the 90s and requires expensive Karp reductions. Ours is black-box, and I'll tell you what this means; in a sense, you can read it as saying it does not require Karp reductions.

And third, technically the most involved, which I probably won't have time for today: we also construct adaptive zero-knowledge proofs. Adaptive here means that the simulator not only needs to simulate the view of the verifier, but at the end, if the prover is corrupted, it should also produce a view for the prover consistent with the transcript generated. That's adaptive zero knowledge, and we show it starting from 2PC protocols. Again, I'm simplifying things here; there are additional properties that the two-party computation needs to satisfy for these results to work. But essentially, we get all of them using our compilation technique. One more thing: previously, constructing adaptive zero knowledge again required Karp reductions, and was done by Lindell and Zarosim.

Okay, so before we get into our approach, I'm going to start with the IKOS approach. Here, basically, you start off with an MPC protocol.
Given an NP relation R, you start off with an MPC protocol for a related function f. This function f has the statement x hard-coded and computes the relation on the XOR of the inputs of all n parties. So what happens here? The prover, in her head, emulates an instance of this MPC protocol, giving inputs to the parties and generating views according to the protocol. In the first round, the prover commits to the view of each of these parties. In the second round, the verifier challenges the prover on two of these parties, and the prover needs to open, that is decommit, their views. The verifier checks that these views are consistent and that the output computed in the MPC protocol is 1, namely that the relation holds.

Now, if you instantiate this with the information-theoretic MPC protocols we know from the literature, we need at least three parties because of honest majority. They also have an instantiation based on GMW, and as written in their work, this also requires three parties: GMW is in the OT hybrid, and to ensure consistency of how the OT channels are used by the parties, two views need to be opened. So they need at least three parties for privacy. I said "as it was written there" because in the very next talk, you're going to see how this can be extended to also work for two parties in the OT hybrid.

However, the remaining two results that we have, input-delayedness and adaptive zero knowledge, cannot be cast in this framework. Roughly speaking, the intuition is that when the prover commits to all the views, it binds everything: the statement, the witness, everything. If you use statistically binding commitments, everything is literally bound to the first message, and you can't get any delayed input. So this is, in a sense, the limit of IKOS.
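To make the IKOS skeleton concrete, here is a minimal Python sketch of the commit–challenge–open flow. Everything in it is illustrative: the relation R, all the names, and especially the inner "MPC", which simply broadcasts the XOR shares and is not private at all; it only stands in for the view-generation step. The hash-based commitment is likewise just a placeholder.

```python
import hashlib, os, pickle, random

def commit(data: bytes):
    r = os.urandom(16)
    return hashlib.sha256(r + data).digest(), r

def open_ok(com, r, data: bytes):
    return hashlib.sha256(r + data).digest() == com

# Hypothetical NP relation for illustration: R(x, w) holds iff SHA-256(w) = x.
def R(x: bytes, w: bytes) -> bool:
    return hashlib.sha256(w).digest() == x

N = 3  # the information-theoretic honest-majority instantiations need >= 3 parties

def xor(*chunks):
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(u ^ v for u, v in zip(out, c))
    return out

def prover_round1(x, w):
    # XOR-share the witness among the N virtual parties.
    shares = [os.urandom(len(w)) for _ in range(N - 1)]
    shares.append(xor(w, *shares))
    # Stand-in "MPC": every party broadcasts its share and computes
    # R(x, XOR of all shares).  This is of course NOT a private MPC; it only
    # plays the role of generating the views the prover commits to.
    out = R(x, xor(*shares))
    views = [(shares[i], shares, out) for i in range(N)]
    coms, opens = zip(*(commit(pickle.dumps(v)) for v in views))
    return list(coms), list(opens), views

def verifier_challenge():
    return random.sample(range(N), 2)          # ask to open two of the N views

def prover_round3(opens, views, i, j):
    return [(opens[i], views[i]), (opens[j], views[j])]

def verifier_check(x, coms, chal, resp):
    i, j = chal
    for idx, (r, view) in zip(chal, resp):
        if not open_ok(coms[idx], r, pickle.dumps(view)):
            return False
    (si, bi, oi), (sj, bj, oj) = resp[0][1], resp[1][1]
    # The two opened views must agree on the broadcasts, each party's own
    # share must match what the other view recorded, and the output must be 1.
    return (bi == bj and bi[i] == si and bj[j] == sj
            and oi is True and oj is True and R(x, xor(*bi)))
```

A run of the three messages then looks like: the prover sends `coms`, the verifier sends two indices, and the prover decommits exactly those two views.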
And now I'm going to give our construction of zero knowledge based on garbled circuits. First, let me briefly define garbled circuits, something we all love and care about. Garbling can be thought of as a set of four algorithms. There is a garbling algorithm that takes the circuit and outputs the garbled circuit, a translation table, and key labels. I'm going to call dk the decryption key, or translation table, and sk the secret key, which you can think of as the randomness used to garble the circuit. Then there is an encoding procedure that shows how to encode any input into a garbled input. And there is an evaluation procedure that takes a garbled circuit and a garbled input and outputs a garbled output, from which the decoding procedure, using the translation table, gives the final output.

The two properties that we need here are correctness, namely that y must be equal to C(x), where C is the original circuit, and security, which says that there is a simulator that can produce the garbled circuit, consistent garbled inputs, and the translation table just from the circuit and the output, that is, without the input. So this is the formulation of garbled-circuit security I'll use.

Now let's construct a zero-knowledge proof from this. The circuit we are going to garble is as before: it has x hard-coded and evaluates the relation on the witness w. What does the prover do? The prover garbles the circuit and also garbles the witness she knows as input. In the first message, the prover sends the garbled circuit and the translation table, and commits to sk, the randomness used to garble. The verifier challenges with a bit b. If b is 0, the prover decommits and gives the randomness used to garble the circuit.
And if b is 1, she just gives the garbled input corresponding to the witness. What does the verifier do? If b is 0, he checks that the circuit was garbled correctly. If b is 1, he evaluates the circuit and checks that the output of the relation is 1 according to this computation.

Now, why is this protocol sound? Basically by the correctness of the garbling: if you can give valid randomness for the garbling, together with a garbled input on which the evaluation outputs 1, then the statement must be true. In fact, it satisfies what is known as special soundness, which I'll get to in a minute. And zero knowledge essentially follows from the simulation of the garbled circuit: the simulator guesses whether b is 0 or 1; if 0, it just proceeds honestly, and if 1, it uses the garbled-circuit simulator. So this is a very simple zero-knowledge proof from garbled circuits.

Next, I'm going to modify this basic zero-knowledge proof so that it also gives the input-delayed property. But first, let me quickly go over the definition of input-delayedness. First, what is special soundness? It is defined for three-round protocols, which is the case for us: given a first message and two accepting transcripts with different second and third messages, you can extract a witness. Now, in a delayed-input zero-knowledge proof, the prover does not have the input x and w at the beginning of the protocol; it is revealed only after the second message. So we need to change the definitions accordingly. First, we amend the special soundness guarantee, because the two transcripts might not even talk about the same statement.
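Before moving on to the delayed-input variant, the basic protocol above can be sketched end to end. This is an illustration-only toy, not the construction from the paper: the NAND-based garbling scheme, the hash commitments, and the example relation (equality of w with a hard-coded x) are all assumptions made for the sketch, with no security parameters chosen seriously.

```python
import hashlib, os

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()

def commit(data: bytes):
    r = os.urandom(16)
    return H(b"com", r, data), r

# A circuit is (n_inputs, gates); gate g = (a, b) creates wire n_inputs + g
# carrying NAND(a, b).  Everything is derived deterministically from the seed
# sk, so "check the garbling" below is just "re-garble and compare".
def label(sk, wire, bit):
    return H(b"lbl", sk, bytes([wire, bit]))

def garble(sk, circuit):
    n_in, gates = circuit
    gc = []
    for g, (a, b) in enumerate(gates):
        rows = []
        for va in (0, 1):
            for vb in (0, 1):
                ka, kb = label(sk, a, va), label(sk, b, vb)
                ko = label(sk, n_in + g, 1 - (va & vb))       # NAND truth table
                pad = ko + bytes(16)                          # 16-byte tag
                ks = H(b"r1", ka, kb, bytes([g])) + H(b"r2", ka, kb, bytes([g]))
                rows.append(bytes(u ^ v for u, v in zip(ks, pad)))
        rows.sort(key=lambda ct: H(b"perm", sk, ct))          # hide row order
        gc.append(rows)
    out = n_in + len(gates) - 1
    dk = [H(b"out", label(sk, out, 0)), H(b"out", label(sk, out, 1))]
    return gc, dk

def evaluate(circuit, gc, in_labels):
    n_in, gates = circuit
    lab = dict(enumerate(in_labels))
    for g, (a, b) in enumerate(gates):
        ks = H(b"r1", lab[a], lab[b], bytes([g])) + H(b"r2", lab[a], lab[b], bytes([g]))
        for ct in gc[g]:
            pt = bytes(u ^ v for u, v in zip(ks, ct))
            if pt[32:] == bytes(16):                          # tag matched
                lab[n_in + g] = pt[:32]
                break
    return lab[n_in + len(gates) - 1]

# Circuit for the hypothetical relation "w equals the hard-coded x", from NANDs.
def eq_circuit(xbits):
    n, gates = len(xbits), []
    new = lambda a, b: (gates.append((a, b)), n + len(gates) - 1)[1]
    lits = [i if xb else new(i, i) for i, xb in enumerate(xbits)]  # NOT if x_i=0
    acc = lits[0]
    for lit in lits[1:]:
        t = new(acc, lit)
        acc = new(t, t)                                       # AND = NOT(NAND)
    return (n, gates)

# The three-message zero-knowledge protocol from the talk.
def prover_msg1(sk, circuit):
    gc, dk = garble(sk, circuit)
    com, opening = commit(sk)
    return (gc, dk, com), opening

def prover_msg3(sk, opening, wbits, chal):
    if chal == 0:
        return (sk, opening)                      # open the garbling randomness
    return [label(sk, i, v) for i, v in enumerate(wbits)]     # garbled witness

def verifier_check(circuit, msg1, chal, msg3):
    gc, dk, com = msg1
    if chal == 0:                                 # was the circuit garbled right?
        sk, opening = msg3
        return H(b"com", opening, sk) == com and garble(sk, circuit) == (gc, dk)
    return dk.index(H(b"out", evaluate(circuit, gc, msg3))) == 1
```

For x = 1011 and w = x, both challenges verify; a wrong witness makes the evaluated output decode to 0 and the b = 1 check fails, matching the soundness argument above.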
To enhance the special-soundness definition, we take the approach of Ciampi et al., where basically we say that given two accepting transcripts, you need to output witnesses for the statements corresponding to both of them. And simulation, I'm not going to talk about, but essentially the simulator should work even if the statement is revealed only after the second message.

So how am I going to modify the protocol to get input-delayedness? First, we can't hard-code x into the circuit before we garble it. How do we fix that? We're just going to make x an input to the circuit. In addition, we also make x part of the output, because in one of the cases the verifier should know what statement is being proved; otherwise, the prover could prove any true statement. So x has to be part of the output to pin down which statement the prover is proving. We modify the circuit to take x as input, and instead of garbling once, we garble twice: two independent garblings of the same circuit, and the prover sends both.

The verifier still asks just a single bit, 0 or 1, and the statement and witness are revealed after the second round. If b is 0, the prover shows that the first garbling was done correctly and gives the input key labels for the statement and witness in the second garbling. If b is 1, she does it the other way around. What does the verifier do? The same as before: if b is 0, he verifies that the first instance was constructed correctly and that the second instance evaluates to (x, 1); if b is 1, the other way around.

Now, why is this sound? If the prover can convince the verifier in two different transcripts with the same first message, it means you can obtain a valid garbling together with a valid garbled input that evaluates to (x, 1).
So you get this adaptive special soundness property just from this simple modification of the circuit, and simulation is essentially the same as before. This technique, because of the way we use garbled circuits, easily extends to something input-delayed. In our paper, we also show how to get negligible soundness; for now, you can be satisfied with soundness one-half. One more point, for people familiar with the garbled-circuit literature: if you choose the witness, or the inputs to the garbled circuit, after the circuit has been revealed, you need a stronger property, namely adaptively secure garbling. You're going to see at this CRYPTO one such construction, which we use in our work as well.

All right, so I have three minutes. I promised to construct a zero-knowledge proof starting from any two-party computation in the OT hybrid, but I went on about getting everything from garbled circuits. Garbled circuits themselves can be seen as an instance of a two-party computation in the OT hybrid, as well as a randomized encoding; these are two interpretations of garbling. We show in our work how to construct zero knowledge from both of them using just one-way functions. In fact, we show a transformation from two-party computation in the OT hybrid to randomized encodings, and then to zero knowledge. What I'm going to do in the next couple of minutes is show the direct construction from 2PC in the OT hybrid to zero knowledge.

So this is the main theorem, stated more formally. For any NP relation R, consider a two-party computation protocol in the OT hybrid for a function f such that these two properties are satisfied: it is perfectly correct, and it admits UC security against honest-but-curious adversaries. Then, assuming one-way functions, there is a zero-knowledge proof for R that makes black-box use of the protocol for f.
Again, I'm not going to define black-box formally; stay tuned for the next talk for how this is defined. So, static zero knowledge from two-party computation. The function we use is analogous to the one in the IKOS approach: it evaluates the relation on the XOR of the inputs of the two parties. I'm not going to do the input-delayed part here, just the basic zero-knowledge proof from two-party computation.

What does the prover do? The prover, in her head, emulates this two-party computation, and in the first round, instead of committing to the views, actually sends the transcript. You can't do this with an information-theoretic MPC protocol, because there the transcript binds everything; but in a two-party computation, the transcript does not reveal the inputs, and that's what the prover gives in the first round. Then the verifier challenges, say, to open either party one's view, which is basically its input and randomness, or party two's view.

Now, why is this zero-knowledge proof sound? Because if you can open consistent views for both parties, then by perfect correctness there is a witness for the statement. And for simulation, because I assumed UC security against corruptions of either party, the simulator just guesses the bit b and simulates accordingly.

Now, I've cheated a little bit here. I said 2PC in the OT hybrid, but the way I've written it, you cannot do it based on one-way functions, because for the transcript I would have to assume some instantiation of the oblivious transfer. We show that we don't need to do that: we can encode the calls made to the oblivious transfer in a different way. Let me tell you how. Say there is an instance of the oblivious transfer where P1's input is (s0, s1) and P2's input is a choice bit t. How is the prover going to incorporate this in the emulated transcript? Basically, the prover commits to s0 and s1.
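As a concrete sketch of this encoding, including the opening rule for the verifier's challenge that is spelled out next: all names here are illustrative, and the hash-based commitment is just a stand-in for any one-way-function-based commitment.

```python
import hashlib, os

def commit(data: bytes):
    r = os.urandom(16)
    return hashlib.sha256(r + data).digest(), r

def open_ok(com, r, data: bytes):
    return hashlib.sha256(r + data).digest() == com

# One OT call inside the emulated 2PC: P1 is the sender with inputs (s0, s1),
# P2 the receiver with choice bit t; in the OT hybrid, P2 learns only s_t.
def encode_ot(s0: bytes, s1: bytes):
    (c0, r0), (c1, r1) = commit(s0), commit(s1)
    entry = (c0, c1)                 # goes into the shared first-round transcript
    openings = ((r0, s0), (r1, s1))  # kept by the prover
    return entry, openings

def answer(openings, t, which_view):
    if which_view == 1:              # P1's (the sender's) view is opened:
        return openings              # decommit BOTH sender inputs
    return (t, openings[t])          # P2's view: only the value it received

def check(entry, resp, which_view):
    if which_view == 1:
        return all(open_ok(entry[i], *resp[i]) for i in (0, 1))
    t, (r, s) = resp
    return open_ok(entry[t], r, s)
```

Since only commitments are used, the whole encoding needs nothing beyond one-way functions, which is the point of this step.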
For every oblivious-transfer call, she commits to both of these inputs. Remember that the verifier challenges to open either P1's view or P2's view. If he asks for P1's view, she decommits both s0 and s1. If he asks for P2's view, then since the prover has emulated the actions of P2, she knows exactly which of the two inputs P2 receives, and she decommits only that particular oblivious-transfer input. Commitments just require one-way functions, so if you encode the oblivious transfer this way, you don't need anything more than one-way functions.

I know I'm out of time. I also had a slide on how to get adaptive zero knowledge, but I'm going to skip that. (I heard that the first session finished early, so I thought I'd have extra time, but that was not the case.) All right, quickly, one minute about what we do there. We not only construct adaptive zero knowledge, we also show a construction with very good communication complexity, and this is the technically most challenging part. We need to use maliciously secure two-party computation, and we need an adaptive version of interactive hashing.

And now to my final slide, a general perspective on what we did here. Our work is in the spirit of how IKOS constructed zero-knowledge proofs, except we started from two-party computation. We were able to get simple proofs based on garbled circuits, plus the additional properties of input-delayedness and adaptive security. Another way to see it: with MPC-in-the-head techniques, you get static versions of zero knowledge; if you go to 2PC in the head, you can also get adaptive zero-knowledge proofs without any additional assumptions. One more point I want to make is that this, in a sense, reconciles the cut-and-choose approach to garbled circuits of Lindell and Pinkas.
They showed how to get malicious security without zero knowledge, and what this work essentially says is that their cut-and-choose does in fact give a zero-knowledge proof; that's our basic construction from garbled circuits. And finally, some future work: MPC in the head was instrumental in the compiler of Ishai, Prabhakaran, and Sahai. You can ask the same question for 2PC in the head; come and ask me after the talk and I'll tell you what it is. Thank you.