Hey, welcome to my talk today on efficiency preserving round compression compilers for MPC, or alternatively, do fewer rounds mean more computation? This is joint work with Prabhanjan, Anushree and Abhishek. So let's jump right in with multi-party computation, or MPC. It's a mechanism that allows parties to compute a joint function over their private inputs by executing a protocol. They do so by exchanging messages, and at the end of this protocol, everybody has the output. Since we're going to talk about the round complexity of these MPC protocols a lot, what constitutes a round? A round simply consists of every party sending a message. And for efficiency purposes, it's best to minimize the number of rounds of interaction. Obviously, without any sort of security property, this can be trivially done in a single round by parties simply sending their inputs to one another. So what is the security property that we need? Intuitively, we want to say that even misbehaving participants shouldn't learn anything beyond the output of the function. And MPC, it turns out, comes in many flavors. So what's the setting that we are considering? We want an honest majority of participants, and we are working in the computational security setting, meaning that any protocol that's going to be talked about in this talk is going to assume, at the least, the existence of one-way functions. So in this honest majority setting with computational security, what do we know about the round complexity? It's actually well studied and well understood. This follows from a really cool recent line of work, which shows that in the honest majority setting, there exist two round protocols, and this is just from the minimal assumption of one-way functions. And all of these works actually rely on a round compression technique to achieve these two round protocols. What does that mean?
So to start off, you have a multi-round protocol computing this function F that you want to compute. Let's just for simplicity say it's three rounds; it can be any arbitrary polynomial number of rounds. And to compress this into two rounds, what you're going to do in the first round is exchange commitments to the inputs. So each party commits to the inputs that go into this functionality and broadcasts the commitment to all the other parties. And in the second round, parties are going to garble a circuit and broadcast the garbled circuit to everyone else. So what purpose does the garbled circuit serve? It's going to act as a proxy for the party in this multi-round protocol. What does that mean? It's best to look at what actually happens at the end of the second round. At the end of the second round, all the parties aggregate all the garbled circuits that have been broadcast, along with the commitments and so on. And they then treat these circuits as if they were the parties themselves and locally execute the multi-round protocol, sending messages between the circuits. So these garbled circuits act as proxies for the parties themselves. And even though this local exchange of messages can take multiple rounds, it is entirely local, and all the interaction has been done in advance. I'm skipping over some of the details here, because you might ask, how do you actually get the labels to evaluate the garbled circuits? To do so, you run a two round protocol in parallel with the two rounds that I've described, which gives you the appropriate keys, or labels, for the garbled circuits, and then you can actually evaluate them. This is a really cool technique that actually gives us two round protocols, but it comes at a cost.
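To make the "garbled circuits as proxies" flow concrete, here is a minimal Python sketch, with plain closures standing in for garbled circuits (so nothing is actually hidden here); `make_proxy`, `local_execution`, and the toy next-message rule are all my own invented illustrations of the local round-by-round evaluation, not the real compiler:

```python
def make_proxy(secret):
    # Stand-in for a party's garbled circuit: given the messages received
    # so far, it produces that party's next-round message. A real garbled
    # circuit would keep 'secret' hidden; a closure only shows the flow.
    def next_message(received):
        # Toy rule: respond with the secret plus everything heard this round.
        return secret + sum(received)
    return next_message

def local_execution(proxies, rounds):
    # After the second (and last) round of the compiled protocol, every
    # party holds all proxies and can run the multi-round protocol
    # entirely locally: no further interaction is needed.
    msgs = [0] * len(proxies)
    for _ in range(rounds):
        msgs = [proxy([m for j, m in enumerate(msgs) if j != i])
                for i, proxy in enumerate(proxies)]
    return msgs
```

The point is that `local_execution` replays as many "rounds" as the original protocol had, yet involves no communication at all.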
Specifically, suppose we start with this multi-round protocol that we're trying to compress, and it has total computational work W, where W is a function of the number of parties and the size of the circuit representing the function. Then all of these compilers give you a multiplicative overhead of n squared when it comes to the total communication or per-party computation in the resultant two round protocol. And the O tilde here hides polylog factors in the number of parties and polynomial factors in the security parameter. So what does this n squared overhead mean if we start off with the best, most efficient protocols? If you start off, in the semi-honest setting, with a protocol where the total computational work is only O tilde of C plus an additive term, then these existing compilers give you total communication or per-party computation of n squared times C, in O tilde notation of course. So the question we ask in this work is: is this multiplicative overhead actually inherent? Can we construct efficiency preserving round compression compilers? Here, by efficiency we mean total communication or per-party computation. And we actually show positive results in this direction. In the semi-honest setting, we remove this multiplicative overhead and instead move it over to an additive overhead. And here, as you can see, this W no longer depends multiplicatively on n, but only polylogarithmically. In the malicious setting, we get a three round protocol with similar complexity for total communication and per-party computation. The additive overhead is larger, and of course it's a three round protocol and not a two round protocol like prior compilers.
For this talk, we're actually going to focus only on the semi-honest setting, but before getting into the details, let's see what our results imply when instantiated with the most efficient protocols we've seen before. As I've already shown you, the prior work, even in the semi-honest setting, gives you an n squared overhead; our work gets rid of the n squared, and as you can see, the additive overhead isn't much larger than the additive overhead in prior works either. In the malicious setting, again, we have an n to the sixth additive overhead, which is sort of comparable to n to the fourth. But one point to note, obviously, is that the prior malicious protocols are two rounds, while in our work it's a three round protocol. I've said total communication and per-party computation; you may ask, what about total computation? In fact, if you're willing to sacrifice an additional round of communication, you can actually get the total communication and total computation cost to be the same. But for this talk, I'm going to focus on the total communication and per-party computation. So how does one actually achieve these results? A natural approach, which has been a tried and tested strategy in the past, is the delegation of computation approach, where all the parties that want to compute a function elect a subset of themselves via some committee election procedure and denote them the servers. These servers are the ones that are going to do the heavy computation, while the clients just sit back and receive the outputs. So let's look at it in a little more detail. Say here that these three parties, the ones in the circles, have been selected by some committee election procedure to be part of the server committee. They need to run a protocol computing some function F prime that's going to be related to F, and we'll see shortly what that is. But obviously it needs to incorporate the inputs of all the other parties.
For input privacy, parties can't directly just send their inputs. So what they're going to do is additively secret share their inputs and give each of the servers only a share of their input. I've shown an example where X, Y is additively shared among the three servers. So the servers now have, as input into this new functionality F prime, not just their own input, but the shares that all the other clients have sent to them. So what does F prime do? Given all the inputs of the servers, F prime reconstructs all the client inputs and then computes F on the reconstructed inputs. So it's clear that the servers seem to be doing substantially more work than the clients. And we actually show that this kind of imbalance is inherent. Specifically, we show that if you want a protocol where the total computation cost is O tilde of C, and note that our three round semi-honest protocol actually achieves this, not in two rounds but in three, then there are some functions for which there does not exist a constant round balanced protocol. We show this in our paper, using MPC-in-the-head kinds of techniques for these lower bounds. So this delegation of computation is not just natural, but seems inherent in some sense. But this is a tried and tested strategy, like I've said before; so what's new in this setting? The main challenge is that we need to do all of this delegation of computation and somehow manage to complete it in two rounds. There are several challenges, and I'm going to mainly focus on two high level ones. One is that in a two round protocol, to achieve any sort of meaningful security and to avoid residual attacks, the servers that are computing the protocol must commit to their entire input in the first round.
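As a quick sketch of this client-to-server step, here is additive secret sharing and the reconstruct-then-evaluate shape of F prime in Python; the modulus and the names `additive_share`, `reconstruct`, `f_prime` are my own illustrative choices, not anything fixed by the protocol:

```python
import secrets

Q = 2**61 - 1  # public modulus for the shares (an arbitrary choice here)

def additive_share(x, n):
    # Split x into n random shares that sum to x mod Q; any n-1 of them
    # reveal nothing about x, so each server learns nothing on its own.
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def f_prime(per_client_shares, f):
    # F': first reconstruct every client input from the servers' shares,
    # then evaluate F on the recovered inputs.
    inputs = [reconstruct(s) for s in per_client_shares]
    return f(*inputs)
```

For example, two clients could share inputs 5 and 7 among three servers, and `f_prime` applied to those share vectors with addition as F recovers 12.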
But in the first round, the servers are not in possession of the complete input, because the clients have the shares of their inputs. So somehow the committee election and the input sharing must both happen in the first round, which seems challenging. The second challenge is an artifact of existing compilers: they require private communication between the servers. And in the first round, the servers don't know who the other servers are, so how can they privately communicate with one another? Our main idea is to develop a round efficient approach to this delegation of computation. As a starting point, we're going to consider two round MPC protocols that have some special properties. What are these special properties? The first is what we're going to call decomposability of the first round messages: the first round messages of all parties can be split into light messages and heavy messages. For light messages, as you might guess from the name, the computational complexity is independent of W; but more importantly, these are the messages that depend on the input. So light messages depend on the input and have low cost, while heavy messages are independent of the input and their computational complexity can depend on W. That's the first property that we require from our special two round MPC. The second is that the private channel messages between parties are independent of the input. Good. So these are the two properties. Can you actually achieve them? In our work, we show that you can take existing compilers and suitably modify them to achieve these properties. Okay, so now you have a special two round MPC. How are you actually going to make it work? Here's the high level strategy. In the first step, before the start of the first round, parties are going to toss appropriately parametrized coins and self-select into committees. And this is totally fine in the semi-honest setting. Parties are going to toss a coin.
If it turns out heads, they say, oh, I'm going to be a server. Of course, nobody else knows who the other servers are; they haven't yet had a chance to announce to the others that they are a server, but each party locally knows whether it's a client or a server. Now the servers want to run a special MPC protocol computing F prime. If you recall, F prime was the function which reconstructed all the client inputs from the shares and computed F on the reconstructed inputs. And one of the challenges, as we said, is that the servers need to commit to the input in the first round, and they don't have the inputs. How is this going to proceed? To deal with this, we're going to have the clients help the servers compute the first round messages. Specifically, as you can see, decomposability splits the messages into heavy messages that are completely independent of the input and light messages which actually depend on the input. So all the clients are going to come together and help compute these light messages, because they have parts of the input. And the nice thing is that because the computational complexity of the light messages is low, the overhead that this additional helping adds to the overall protocol is also going to be low. So that's nice. Then what about the second challenge, about the private messages? To deal with this, you're going to have the servers broadcast encrypted versions of their private channel messages. And they can do this in the first round because of the independence property, which says that the private channel messages between parties are independent of the input. So they don't really need to wait for input shares from the clients before they send out these private messages. They will be broadcast and encrypted, and we'll see shortly how to deal with that.
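The committee self-selection step can be sketched in a few lines; the name `self_select` and the probability parameter are illustrative (in the actual protocol the coin is appropriately parametrized so the committee has the right size with overwhelming probability):

```python
import random

def self_select(n, expected_servers, rng=None):
    # Each party independently tosses a biased coin and locally learns
    # its own role; no interaction is needed, so this fits before round
    # one. This is fine in the semi-honest setting, where parties can be
    # trusted to toss their coins honestly.
    rng = rng or random.Random()
    return ["server" if rng.random() < expected_servers / n else "client"
            for _ in range(n)]
```

With n = 1000 parties and an expected committee of 50, each party becomes a server with probability 1/20, and the realized committee size concentrates tightly around 50.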
For this talk, I'm going to mainly focus on the third point, and we'll see that the ideas for point four are very similar; at the end of the talk I'll show how you can use similar ideas to resolve point four. So what about the helper computation by the clients? Before we get to that, let's see what the clients even have to help the servers compute. What do the servers compute? In the first round, as we've said multiple times, the servers compute light messages that depend on the input and heavy messages that are independent of the input. And in the second round, each server computes the second round of the special MPC, which depends on all the light and heavy messages of all parties. Now, we can ignore the heavy messages for now, because once a party knows that it's a server, it can compute these heavy messages itself; it doesn't need to wait for inputs from the clients. So let's ignore those. We still have the other two issues to resolve, and we're going to resolve both of them together. To do that, every server, say Alice, is going to delegate its computation of the second round of the special MPC to a circuit. It knows that it doesn't have all the information, for instance the first round light messages that depend on the input, and by the time it gets them in the second round, it's going to be too late. So what it does is compute a circuit C_i that takes in all the light messages from round one and then computes the second round message of the special MPC, and it sends this out in the second round. Of course, to hide any inner workings of the circuit C_i that might inadvertently leak Alice's private inputs, Alice actually garbles the circuit before sending it out. So at the end of the second round, everybody has the garbled circuits, and they want to evaluate them to get the second round of the special MPC.
To do that, there needs to be a mechanism to deliver the labels that let you actually evaluate the circuit, and the labels have to correspond to the light messages from round one. This is exactly where the clients step in and help. So all the parties run a two round helper protocol. What does this helper protocol compute? Both the clients and the servers have inputs to this protocol. From the previous slide, we saw that each server actually computes a couple of circuits, so the server inputs are pretty obvious: they're going to be the labels of these circuits. But what are the client inputs? The client inputs are also obvious: a client with input X_i contributes the shares of X_i. And what is the protocol output? The protocol takes all these shares, computes the light messages, and outputs the labels corresponding to these light messages. As you can see, the protocol has all the information it needs to compute the labels corresponding to the light messages. And this helper protocol actually has some really nice properties. Two of them are, first, that it doesn't require knowledge of who the servers are, because all parties run the protocol. It suffices for parties to individually know whether they're a server or a client, to determine what kind of input they have into this protocol; beyond that, given that all parties participate, they don't really need to know whether another party is a server or a client. Great. And second, because the computation is only of light messages, the overhead generated by running this helper protocol is low. This is great. So we seem to have solved point three. Let's come back to point four, where servers broadcast encrypted messages: how do the other servers then decrypt these messages? For this, again, we're going to run a helper protocol.
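The label-delivery step can be sketched as follows; `fresh_labels` and `select_labels` are hypothetical names, and this only shows the selection logic in the clear (the whole point of the helper protocol is that this selection happens under MPC, so the unchosen labels stay hidden from everyone):

```python
import secrets

def fresh_labels(bit_len):
    # The garbler samples a pair of labels per input-wire bit of its
    # garbled circuit: labels[i][b] is the key that should be released
    # if and only if bit i of the light messages equals b.
    return [(secrets.token_bytes(16), secrets.token_bytes(16))
            for _ in range(bit_len)]

def select_labels(labels, light_message_bits):
    # What the helper protocol outputs: exactly one label per bit,
    # selected by the light message reconstructed from the shares.
    return [labels[i][b] for i, b in enumerate(light_message_bits)]
```

Given these selected labels, anyone can evaluate the garbled circuit on the light messages, without learning the labels for any other input.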
This is going to be the same idea as point three. We're going to run a helper protocol for servers to obtain the appropriate keys to decrypt. But again, these protocols only finish in the second round, so similar to before, you're going to have some sort of garbled circuit, and the garbled circuit is internally going to decrypt the broadcast message given the labels for the keys. Again, this is very similar to the approach we've just discussed, so I'm not really going to go into the details. The last thing I want to talk about is malicious security. So far I've spent the entire time talking about semi-honest security, but we said we have a maliciously secure protocol as well. Some overarching ideas are the same in the semi-honest and malicious protocols, but obviously there are several challenges, because participants can deviate arbitrarily from the protocol. So, for instance, we require the special MPC to be secure against malicious parties. And again, we need the committee election process to be robust to malicious behavior. For instance, if some of the parties are malicious and we follow the same strategy as in the semi-honest setting, all the malicious parties are going to claim to be in the committee. And then, if clients send their shares to the committee, input privacy is completely gone. There are two ways to mitigate this that we take in our work. One is to have an additional round for the committee election process; the other is to have stronger setup assumptions, specifically verifiable random functions and so on, which let us get a maliciously secure two round protocol, but under these setup assumptions. So that's all I have time to talk about today. The paper is on ePrint, feel free to look at it. In case you have any questions, feel free to reach out to me; that's my email address. Thanks so much for listening.