So this is joint work with my co-authors, Prabhanjan and Abhishek. So what is the question we're interested in? Namely: what is the exact round complexity of MPC? Why do we even care about round complexity? It just so happens that round complexity is a fundamental measure of efficiency for multi-party computation, and its importance extends to general interactive protocols as well. Now that we have the question out of the way, let's fix the setting we're going to work in. We're interested in computational security, we allow a malicious adversary to corrupt an arbitrary subset of the parties, and we work in the plain model. Specifically, if we assume a trusted setup like a CRS, then this problem has already been resolved: we know of two-round protocols, and in fact a bunch of them. I'm now going to briefly take you through a history of the works that address the round complexity of MPC. It starts with the work of Goldreich, Micali, and Wigderson, which gives a polynomial-round protocol. Obviously we want to do better, and the study of constant-round protocols was initiated by Beaver, Micali, and Rogaway, followed by a sequence of works, not limited to the ones on the slide, which used various assumptions to construct constant-round protocols. Of course, we want to resolve the exact round complexity of MPC, so we want to do better than just "constant". So what about the exact round complexity? Do we even know lower bounds? This was established by an important result of Katz and Ostrovsky, who showed that four rounds are impossible with respect to black-box simulation. There is a caveat, which is that the communication channel is the unidirectional message model, and this is specific to the two-party setting.
That model is not going to be used in the multi-party setting, and I'll shortly describe the different communication channel there. So what about multi-party computation, what about MPC? This was addressed by a very recent result of Garg, Mukherjee, Pandey, and Polychroniadou, who showed that three rounds are impossible with respect to black-box simulation. This is in the simultaneous message model, which is the more general model in MPC, and it's the model we're going to work in. Okay, so now we have lower bounds. Do we have constructions that come close? The same paper also gave a five-round protocol based on iO and some other assumptions. Given this state of the art, the natural questions that follow are: does there exist a five-round protocol based on standard assumptions, since the only one we know is from iO? And is the upper bound tight: does there exist a four-round MPC protocol? The results in this talk show that, assuming DDH, we construct a five-round protocol, and assuming one-way permutations and sub-exponential DDH, we construct a four-round protocol. Of course, given the lower bound, four rounds is optimal with respect to black-box simulation. We should point out that there is a concurrent work by Brakerski, Halevi, and Polychroniadou that also constructs a four-round protocol. They use slightly different techniques, and their assumptions are different as well: they assume adaptive commitments and sub-exponential LWE. I'm now going to briefly touch upon the template of the state-of-the-art work, and show you why there's an inherent limitation in extending it to get the results that we do. The basis of their work is a two-round protocol in the CRS model, and we know of these from iO and from LWE.
Specifically, they construct the five-round protocol from iO, so that's going to be the focus here. You can think of it as simple enough: I have a CRS, I need to get rid of the CRS, so what am I going to do? I'm going to replace it with a coin-tossing protocol. They construct a four-round coin-tossing protocol, and it just so happens that one round of the MPC can be parallelized with one round of the coin tossing, which gives a total of five rounds. So what are the limitations of this approach? It seems almost inherent that one round of the MPC, namely the second round, depends on the CRS, so we're unsure how you would parallelize both rounds of the coin-tossing protocol with this MPC. And irrespective of whether we can solve that, this approach still limits our construction to the assumptions of the underlying MPC, which for the five-round case we know only from iO. There's another methodology for constructing protocols, and that's the GMW compiler. What does it do? It takes a protocol secure against semi-honest adversaries and converts it into a protocol secure against malicious adversaries. How does it do that? I start with a phase I'll call coin tossing, and then for each round of the underlying semi-honest protocol, I prove honest behavior: I send a round of the underlying protocol and attach a zero-knowledge proof saying, look, I behaved honestly in this round. I repeat this for each round, and so on. In this work, we're going to consider semi-honest protocols that remain secure even under bad randomness, so I'm going to just ignore the coin-tossing phase; I'll come back to this point later on.
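To make the coin-tossing phase concrete, here is a minimal commit-then-reveal sketch in Python. This is only an illustration of the idea of replacing a CRS by jointly tossed coins, not the actual four-round protocol from the talk, and the hash-based commitment is a stand-in for a real commitment scheme:

```python
import secrets
import hashlib

def commit(value: bytes):
    """Toy hash-based commitment: returns (digest, opening nonce)."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + value).digest(), nonce

def coin_toss(n_parties: int, n_bytes: int = 16) -> bytes:
    """Commit-then-reveal coin toss producing shared random bytes."""
    # Round 1: each party commits to a random contribution.
    contribs = [secrets.token_bytes(n_bytes) for _ in range(n_parties)]
    commitments = [commit(c) for c in contribs]
    # Round 2: everyone opens; openings are checked against the commitments.
    for (digest, nonce), c in zip(commitments, contribs):
        assert hashlib.sha256(nonce + c).digest() == digest, "bad opening"
    # The shared coins are the XOR of all contributions: as long as one
    # party is honest, the result is uniformly random.
    result = bytes(n_bytes)
    for c in contribs:
        result = bytes(a ^ b for a, b in zip(result, c))
    return result

shared_coins = coin_toss(3)
print(len(shared_coins))  # 16
```

The point of the XOR combination is that a single honest contribution already randomizes the output, which is what lets tossed coins stand in for a trusted CRS.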
Now that I've gotten rid of that, no matter how round-efficient the underlying protocol is, the main overhead is still the number of proofs, which are of course multi-round. The idea, then, is: can we reduce the number of proofs? That's what we're going to tackle. Our strategy is to start with a semi-honest protocol of a specific type: a protocol split into a computation phase and an output phase. The computation phase is where each party computes a share of the output, and it can span multiple rounds, as denoted by these dotted lines. The output phase is typically a single round that just exchanges the shares, after which the parties compute the output locally. This structure is in fact satisfied by most MPC protocols that we know of. This leads to the idea: can we prove honest behavior just once, for the entire computation phase? If so, the protocol would look something like this: I finish my computation phase, I send a proof, and so on. Well, this sounds bizarre, right? It might be too late: allowing an adversary to deviate from the protocol in even one round might break privacy completely. So of course we're going to require some additional properties from the underlying protocol. Specifically, we're going to call this property robustness, and it pertains to the computation phase: the computation phase maintains privacy against malicious adversaries, but it does not guarantee correctness of the computation. Why is this point relevant, and why can't I push the output phase further up? Note that because correctness of the computation is not guaranteed, the adversary could potentially change the function being computed to the identity function.
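The computation-phase/output-phase split above can be sketched with additive secret sharing, which is one standard way the "each party holds a share of the output" structure is realized. This is a toy sketch under that assumption, with the multi-round computation phase stubbed out and only the single-round output phase shown:

```python
import secrets

P = 2**61 - 1  # prime modulus for additive sharing (illustrative choice)

def share(value: int, n_parties: int):
    """Stand-in for the computation phase: each party ends up holding
    one additive share of the output, value = sum(shares) mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def output_phase(shares):
    """Output phase: a single round in which parties exchange their
    shares, then everyone reconstructs the output locally."""
    return sum(shares) % P

# Pretend the computation phase produced shares of f(x_1, ..., x_n) = 42.
shares = share(42, 5)
print(output_phase(shares))  # 42
```

Each individual share is uniformly random, which is exactly why withholding the output phase keeps the output hidden, and why the proof must complete before the shares are exchanged.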
Of course, privacy is still guaranteed until the end of the computation phase, but if I don't complete my zero-knowledge proof prior to revealing the output phase, then the adversary learns the honest parties' outputs. So the proof in between acts as a shield between the computation phase and the output phase. We call this underlying protocol a robust MPC, and the technique is called delayed verification; it was introduced in a work of Chandran, Goyal, Ostrovsky, and Sahai, in a completely different context. And I'll point out, as I've already said, that this protocol is already secure against bad randomness, so we can ignore the coin-tossing phase in our compiler. In the rest of the talk, I'm going to describe a compiler from a four-round robust MPC to a five-round protocol, and similarly to a four-round protocol. For the construction of the four-round robust MPC itself, I'll just briefly mention some of the key points; I won't have time to really go into it. For the five-round protocol, the template is simple enough. If you remember, this is what we had, and if you count the rounds, the computation phase takes three and the output phase one, so three plus one: four. The idea is, of course, that if I want to reduce the number of rounds, I have to parallelize the proof somehow. And if I want to parallelize the proof, it's important to understand what the statement is: the statement is the entire transcript. So if I want to parallelize the proof, it has to be input-delayed. Next, there are impossibility results for three-round zero knowledge with black-box simulation, so these proofs become four rounds, and if I push them up, it would look something like this. In fact, I can do even better: I can move the second zero-knowledge proof up and just leave its last round in parallel with the output phase.
One would hope that something like this would just work, but now that we've parallelized things, issues crop up. Mainly, non-malleability becomes a big concern for us. Why? Because when we parallelize the proofs, standard soundness does not suffice. What do I mean by this? In general I have a prover and a verifier, and soundness just guarantees that an adversarial prover can't prove a false statement. But now that I'm squishing these proofs together, the adversary acts both as a prover and as a verifier, so it looks something like this. When I'm proving security via simulation, the adversary is the recipient of a simulated proof. And in the presence of a simulated proof, can I still claim that the proof on the right-hand side is sound? This is what's called simulation soundness, and it's inherently linked to non-malleability. So instead of these normal zero-knowledge proofs, I'm going to switch them out and replace them with non-malleable zero-knowledge proofs. Fortunately, we know of input-delayed NMZKs, and these can be constructed from collision-resistant hash functions; you'll hear about this later at CRYPTO in one of the talks. So this is a general overview of the construction; of course, I've glossed over the details. What about the four-round protocol? We saw the five-round case, so what are the main challenges in getting down to four? I explained why we can't let the output phase be revealed before the end of the proof, so that seems to be our bottleneck, and it would have been nice if we could reduce the proof to three rounds. But as I already pointed out, three-round zero knowledge with black-box simulation is impossible, so this seems tricky. It almost seems like we don't know how to go below five rounds.
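To see why malleability is dangerous in the first place, here is a toy example, unrelated to the talk's actual primitives: a discrete-log commitment C = g^v · h^r is homomorphic, so a man-in-the-middle can shift a committed value without ever learning it. Non-malleable commitments and proofs rule out exactly this kind of attack. The tiny group parameters below are purely illustrative:

```python
import secrets

p = 101      # tiny prime, for illustration only
g, h = 2, 3  # generators mod p; a real scheme needs a proper group setup

def commit(v: int, r: int) -> int:
    """Pedersen-style commitment C = g^v * h^r mod p."""
    return (pow(g, v, p) * pow(h, r, p)) % p

v, r = 7, secrets.randbelow(p - 1)
c = commit(v, r)

# A man-in-the-middle mauls c into a commitment to v + 1, using the
# same opening randomness r, without ever learning v:
c_mauled = (c * g) % p
assert c_mauled == commit(v + 1, r)
print("mauled into a commitment to v + 1")
```

In the parallelized-proof setting the adversary plays this kind of game across sessions, which is why we need the simulation-soundness and non-malleability guarantees discussed above.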
So let's look at a specific property of this robust MPC, and the nice property is that the simulator needs to cheat only in the final round; for the computation phase, it can just run the protocol honestly with a random input. In some sense, the simulator uses the output phase to fix the computation it has done in the computation phase. So if I'm simulating the output phase, I can go from a computation phase run with the real input to a computation phase run with a random input. Why is this relevant? Remember, I had this, but now I don't need to simulate the NMZK, because I'm behaving honestly in the computation phase, just with a different input; the behavior is honest with respect to that input. So I can hope to use a primitive weaker than zero knowledge, and the primitive I'm going to use is called strong witness indistinguishability, or strong WI. I'll come back shortly to what strong WI is and why it suffices for us, but say I hypothetically had a three-round strong WI: then I could replace the proof with it, push everything up, and hopefully that would resolve the problem. So what are these strong WI proof systems? Say I have distributions D1 and D2 that are computationally indistinguishable, and from them I sample a statement and a witness. Strong WI guarantees that the proofs associated with these statements and witnesses are computationally indistinguishable. Recall that in our setting the statement is the transcript of the protocol and the witness is the input that generated the transcript. When we switch from the real input to the random input, the witness of course changes, but note that the transcript changes as well. So the standard notion of witness indistinguishability does not apply, which is why we need this stronger notion of strong WI. But what about known constructions of strong WI?
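The distinction between the two notions can be written out as follows; the notation here is mine, not from the slides. Standard WI fixes one statement $x$ with two witnesses, whereas strong WI lets the statement itself vary across two indistinguishable distributions over statement-witness pairs:

```latex
% Standard WI: one statement x with two witnesses w_1, w_2:
\mathsf{View}_{V^*}\langle P(x, w_1), V^*(x)\rangle
\;\approx_c\;
\mathsf{View}_{V^*}\langle P(x, w_2), V^*(x)\rangle.

% Strong WI: distributions \mathcal{D}_1, \mathcal{D}_2 over pairs (x, w).
% If the statements alone are computationally indistinguishable,
\{(x, w) \leftarrow \mathcal{D}_1 : x\} \;\approx_c\; \{(x, w) \leftarrow \mathcal{D}_2 : x\},
% then so are the corresponding proofs:
\{(x, w) \leftarrow \mathcal{D}_1 : \mathsf{View}_{V^*}\langle P(x, w), V^*(x)\rangle\}
\;\approx_c\;
\{(x, w) \leftarrow \mathcal{D}_2 : \mathsf{View}_{V^*}\langle P(x, w), V^*(x)\rangle\}.
```

In our setting $x$ is the protocol transcript and $w$ the input generating it, so switching from the real input to a random input changes both, which is exactly the case strong WI covers and standard WI does not.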
In a recent work by Jain, Kalai, Khurana, and Rothblum, they construct a three-round strong WI from DDH, but the caveat is that it's in a limited setting. Why is it limited? It works only if the prover reveals the statement only in the last round of the proof. But in our case, since the statement is the entire transcript, and the transcript is available to everyone before the last round, their result is not applicable to our setting. So what do we do? We construct a three-round strong WI; I've put a star to denote some additional properties that we need. We construct it based on one-way permutations and sub-exponentially secure DDH. And since I've alluded to the fact that we require non-malleability, it's also going to have the requisite non-malleability properties. An important point to note: even though we assume sub-exponential security, our final simulator still runs in polynomial time; the sub-exponential security is only used and argued in the hybrids. Now, some remarks on the security proofs of both constructions; I'll focus on the four-round case. There are several challenges, and one of them I've already alluded to: non-malleability, and the specific constructions of the constituent primitives are modified accordingly. Another point is that for the simulator to argue security, it needs to extract the inputs of the adversary. For that, we add a three-round extractable commitment, where the parties commit to the input they're going to use in the robust MPC. The proofs then also state that the value committed in the commitment is consistent with the rest of the protocol. These three-round extractable commitments, which are extractable by rewinding, are known from injective one-way functions.
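Extraction by rewinding can be illustrated with a deliberately simplified commitment, not the one from the talk: the committer splits its value into two additive shares and commits to each; an honest receiver asks to open only one share, which alone reveals nothing, while a simulator that can rewind the committer asks both challenges from the same first message and recovers the value:

```python
import secrets
import hashlib

P = 2**31 - 1  # modulus for the additive split (illustrative)

class Committer:
    """Toy committer: value is split as value = s0 + s1 mod P,
    and each share is hash-committed in the first message."""

    def __init__(self, value: int):
        self.s0 = secrets.randbelow(P)
        self.s1 = (value - self.s0) % P
        self.nonces = [secrets.token_bytes(16), secrets.token_bytes(16)]

    def first_message(self):
        return [hashlib.sha256(n + s.to_bytes(8, "big")).digest()
                for n, s in zip(self.nonces, (self.s0, self.s1))]

    def respond(self, challenge: int):
        """Open the requested share. One share alone is uniform."""
        share = (self.s0, self.s1)[challenge]
        return share, self.nonces[challenge]

def extract(committer: Committer) -> int:
    """'Rewind' the committer: replay the same first message with both
    challenges, then combine the two openings to recover the value."""
    committer.first_message()
    s0, _ = committer.respond(0)
    s1, _ = committer.respond(1)
    return (s0 + s1) % P

print(extract(Committer(1234)))  # 1234
```

An honest receiver issues a single challenge and learns nothing; only the rewinding extractor, which sees both answers to the same first message, can reconstruct the committed input.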
So now we have primitives that require rewinding, and because of this we also need the other primitives in the protocol to be secure against rewinding. Specifically, think for now of just the strong WI and the extractable commitment: when I rewind to extract from the extractable commitment, I require that the strong WI property still holds, and there's no a priori guarantee of that. So we need to incorporate this into our protocol and primitives, and we do this either by using primitives that are rewinding-secure, or by bypassing the issue entirely using complexity leveraging. This is another place where we use sub-exponential security, and again, it's used entirely in the hybrids; it doesn't show up in the final simulator. Finally, I'll just mention a few key points about the robust MPC. From the GMW construction, we know that the round complexity is proportional to the depth of the computation, and a common idea to reduce the depth is to use randomized encodings. Applebaum, Ishai, and Kushilevitz showed that computing a general functionality can be reduced to computing degree-three randomized polynomials. If I compute these degree-three randomized polynomials GMW-style, I get a six-round semi-honest protocol using semi-honest OT. If instead I use two-round OT that is maliciously secure with indistinguishability-based security, I get a six-round robust MPC. So we already have a robust MPC, and the main contribution is, of course, to bring this down to four rounds. Unfortunately, I'm not going to have time to go over this, but this gives us a four-round robust MPC, and it's important to note that we can construct it from DDH, so the assumption for the robust MPC is just DDH. So this brings me to the end of the talk, and I'm happy to take any questions.
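To give a flavor of the randomized-encodings idea, here is a toy of the correctness mechanism behind matrix-based constructions: a matrix M(x) with det(M(x)) = f(x) is multiplied by random determinant-one matrices, which randomizes its entries while preserving the determinant, and the entries of the encoding are degree three in (x, randomness). This only demonstrates correctness; the actual Applebaum-Ishai-Kushilevitz construction uses a specific branching-program matrix structure to also obtain privacy:

```python
import secrets

P = 2**31 - 1  # prime modulus (illustrative)

def f(x1, x2):
    return (x1 * x2) % P          # the function being encoded

def M(x1, x2):
    return [[x1, 0], [0, x2]]     # det(M) = x1 * x2 = f(x1, x2)

def rand_unitriangular(lower: bool):
    """Random unitriangular 2x2 matrix: determinant is always 1."""
    r = secrets.randbelow(P)
    return [[1, 0], [r, 1]] if lower else [[1, r], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % P

def encode(x1, x2):
    """Encoding R1 * M(x) * R2: entries are degree <= 3 in (x, r)."""
    R1 = rand_unitriangular(lower=True)
    R2 = rand_unitriangular(lower=False)
    return matmul(matmul(R1, M(x1, x2)), R2)

def decode(E):
    """det(R1) = det(R2) = 1, so det(E) = det(M(x)) = f(x)."""
    return det(E)

print(decode(encode(6, 7)))  # 42
```

Note that, for example, the bottom-right entry of the encoding works out to r·x1·s + x2, a degree-three monomial in the inputs and randomness, which is what makes the encoding computable by a low-round GMW-style protocol.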