So in this talk, we start by looking at the ideal framework for secure multi-party computation. It's very simple: parties just submit their inputs to a trusted party and get the outputs back. But when we move from the ideal world to the real world, which is where we use secure multi-party computation, we typically lose the minimal structure of the ideal-model protocol. In particular, we need more rounds. As a starting point, broadcast alone typically requires at least two rounds, depending on the number of corruptions. And even if you assume a broadcast channel, we know that three rounds are necessary when the number of corruptions is at least two. So in this talk, we revisit the question of MPC in two rounds. Specifically, we'll be looking at the settings where n equals 3 and n equals 4, tolerating exactly one malicious corruption. And we want our protocols to work in a minimal setting: no broadcast channel, no setup, nothing. So you can ask, why two rounds? Because here you're trying to replicate the minimal structure of the ideal-model protocol: you just have an input contribution step and an output delivery step. And we also know that MPC in general is impossible in one round. The trickier question is why n equals 3 and n equals 4 with t equals 1. Well, from a theoretical and academic point of view, the remaining cases have already been solved. For n equals 2, we saw from the previous talk that there is a five-round lower bound. For n greater than or equal to 5, we already know two-round protocols which guarantee output delivery in the minimal setting, with no broadcast and no setup. And as I said before, for t greater than 1 there is a three-round lower bound. But from a more practical point of view, you typically expect MPC to be executed in a setting with a small number of parties.
You can place them in different geographic locations, so you can even try to minimize the number of corruptions; more than one corruption is typically unlikely. And if you look at the real-world deployments of MPC that we know, like Sharemind or the Danish sugar beet auction, these were all done in the three-party setting assuming one corruption, and they usually implemented BGW, which requires a lot of rounds. Here, we are trying to do it in two rounds. Another selling point over plain 2PC is that with three or four parties you have more redundancy, so you can recover from certain kinds of crash failures. For example, in a two-party setting, if you're secret sharing data between two servers and both of them crash, you lose your data, whereas in the three-party setting you have some extra redundancy. So let's look at the prior work for both the three-party and the four-party settings. What we know is that broadcast itself is impossible for t equals 1 in the information-theoretic setting, and you have to settle for computational assumptions. If you're willing to assume computational assumptions and additional setup, then depending on the power of those assumptions, you can actually get two-round MPC for all t, and as long as you're in the honest majority setting, you can even get fairness. For the four-party setting, we know that two-round perfectly secure MPC is impossible. And if you relax it to selective abort security, which is something I'll explain on a future slide, you can actually get a two-round protocol. But even fundamental questions like statistical VSS in a total of two rounds remained open: we know one-round sharing with two-round reconstruction, or two-round sharing with one-round reconstruction.
We don't yet know one-round sharing with one-round reconstruction. And if you look at the pre-processing model, we know of two-round general SFE with statistical security for simple functions like NC1, but there the correlated randomness size grows exponentially in the input size. So this is the state of affairs prior to our work, and we will try to improve all of the results mentioned here. Our results in the three-party setting: again, we're looking to construct two-round protocols, we don't want to assume a broadcast channel or any setup, and we want to tolerate exactly one malicious corruption. Here we can get general secure function evaluation protocols with selective abort security. What I mean by selective abort security is that the adversary can selectively deny output to individual honest parties; in particular, you don't have guaranteed output delivery or fairness. We can achieve this general SFE with statistical security for NC1, and for arbitrary polynomial functions we can get computational security with black-box use of a PRG. A remarkable point about our three-party result is that its concrete efficiency is comparable to that of semi-honest Yao. Note that we are tolerating one malicious corruption, yet we still get efficiency comparable to semi-honest Yao. You have to contrast this with 2PC, where if you want security against one malicious corruption, you need an overhead proportional to the statistical security parameter; here we just avoid that. In the four-party setting, we get a variety of results. In particular, we get statistical VSS with one-round sharing and one-round reconstruction, which in particular implies two-round coin tossing and two-round simultaneous broadcast. And we can upgrade this to statistical linear function evaluation with guaranteed output delivery.
We complement this positive result with a negative result for statistical general SFE, and this negative result holds even assuming a broadcast channel and even with a non-rushing adversary. Then a natural question to ask is: what if you're willing to assume computational assumptions? Then we can actually show general SFE in two rounds with guaranteed output delivery, and our assumption is very minimal in the sense that we only need injective one-way functions; we don't even need trapdoor permutations. Finally, in the pre-processing model, we can get general SFE with guaranteed output delivery and statistical security for NC1, and here the key point is that the correlated randomness we need is just equal to the length of the inputs, as opposed to prior work, which needed correlated randomness exponential in the size of the inputs. One thing I want to note is that all of the positive results obtained in this work were previously not known to hold even in settings where you assume a broadcast channel. In the remainder of the talk, I'll introduce the main tools that we use in all of our constructions and try to go through all of the results. While presenting the results, I will typically assume a broadcast channel; we can get rid of this broadcast channel, and I advise you to look through the paper for the details. Our main tool is private simultaneous messages (PSM). In this setting, there are multiple clients which share randomness, and each sends a single message to a referee. The referee will be able to evaluate the function on the inputs and learn f(x, y), but nothing else. And that's it; this is the entire PSM protocol. You can actually instantiate this PSM protocol by adapting garbled circuits: for example, the shared randomness alone can define the entire garbled circuit along with the input keys.
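Before the garbled-circuit instantiation, the PSM model itself can be illustrated with a minimal toy sketch for the sum function f(x, y) = x + y mod p. This is purely illustrative; the function choice and all names are hypothetical, not from the talk:

```python
import secrets

P = 2**61 - 1  # a prime modulus (2^61 - 1 is a Mersenne prime)

def psm_messages(x, y, r):
    """Each client masks its input with the shared randomness r:
    client 1 sends x + r, client 2 sends y - r (mod P)."""
    return (x + r) % P, (y - r) % P

def referee(m1, m2):
    """The referee learns f(x, y) = x + y and nothing else:
    each message on its own is a uniformly random field element."""
    return (m1 + m2) % P

r = secrets.randbelow(P)          # shared randomness, hidden from the referee
m1, m2 = psm_messages(7, 35, r)
assert referee(m1, m2) == 42
```

This captures the structure described above: one message per client, and the referee learns only the function output.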
Then each client's single message will just contain the garbled circuit along with the input keys for its specific input, and the referee can use the garbled circuit together with the input keys to evaluate it and get the final output. So that's basic PSM. The next main tool is basic secret sharing. One is additive secret sharing, which we all know. The other is 1-private three-party CNF secret sharing. Here the secret is split into three parts, and each share consists of two out of the three parts. Note that each share misses one part, so each share is still 1-private. The other thing to notice is that pairs of shares have common values, and this is a key component that we'll use in our protocols. Another key property is what is known as efficient extendability: given the secret and just one share, it is possible to compute all of the shares. This is true for all linear secret sharing schemes, and in particular for both additive and CNF secret sharing; it is something we rely on heavily. Let's start by looking at the protocols. The first is the two-round three-party protocol with selective abort security. Here, in the first round, the parties are going to set up the shared randomness for the PSM; they just exchange random pads between themselves. In parallel, they are also going to additively secret share their inputs. Then in the second round, they are going to send the PSM messages which evaluate the function. The way the PSM protocol proceeds, it first reconstructs the inputs from the additive shares. Note that this is possible because the joint views of the parties define all the inputs, since the inputs have already been shared.
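As a hedged sketch of the CNF scheme and the extendability property just described, here is 1-private three-party CNF (replicated) sharing over a prime field; all function names are my own, not from the paper:

```python
import secrets

P = 2**61 - 1  # prime modulus

def cnf_share(s):
    """1-private 3-party CNF sharing: split s into three additive parts;
    party i's share is every part except part i."""
    p1, p2 = secrets.randbelow(P), secrets.randbelow(P)
    parts = [p1, p2, (s - p1 - p2) % P]
    return [{j: parts[j] for j in range(3) if j != i} for i in range(3)]

def cnf_reconstruct(share_a, share_b):
    """Any two shares together contain all three parts."""
    parts = {**share_a, **share_b}
    return sum(parts.values()) % P

def cnf_extend(s, i, share_i):
    """Efficient extendability: from the secret and one share, recover
    the missing part and hence every party's share."""
    missing = (s - sum(share_i.values())) % P
    parts = dict(share_i)
    parts[i] = missing
    return [{j: parts[j] for j in range(3) if j != k} for k in range(3)]

shares = cnf_share(42)
assert cnf_reconstruct(shares[0], shares[1]) == 42
# shares 0 and 1 overlap on part 2 -- the common value used in the protocols
assert shares[0][2] == shares[1][2]
assert cnf_extend(42, 0, shares[0]) == shares
```

The overlap check makes the "pairs of shares have common values" property explicit; it is exactly what the inconsistency graphs later rely on.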
Then, after reconstructing the inputs, the PSM will evaluate the function on them, and that's it; everybody gets their output. This is secure in the semi-honest setting, but it's totally insecure against a malicious adversary. There are two attacks. One is a selective abort attack, which I'll talk about first. Here, in the second round, the party on the right can simply not send its PSM message. In such a setting, the top party is missing one of the PSM messages, so it cannot compute the function output. This effectively corresponds to the adversary denying output to the top party, and this is what we call a selective abort attack. It is something we are okay with, because we are settling for a three-party selective abort secure protocol. A more serious attack is the inconsistent inputs attack. Here, in the first round, a malicious party can do an additive sharing of an input x, but in the second round, it can send PSM messages that correspond to an input y which is not equal to x. What happens is that the PSM messages being delivered to the malicious party will evaluate on input x, because this is what was shared with the other two parties, whereas the other two PSMs, which deliver output to the honest parties, will evaluate the function on the value y. So the honest parties are accepting a wrong output, and this is something that is not simulatable in the ideal world. There are a variety of techniques you can use to handle this type of attack, but we actually propose a very simple and elegant trick which we call the view reconstruction trick. Here we are going to exploit the efficient extendability property of CNF secret sharing schemes.
The way we implement it is that we ask the PSM not only to evaluate the function f, but also to compute the secret share of each PSM client's input that ought to be held by the PSM referee. Given the secret and one of the shares from the clients, you can compute which share the PSM referee should actually be holding, and the PSM referee will accept the output only if the reconstructed share matches the share it was originally given. Because of this step, you avoid the attack: the honest parties will not accept a wrong output, they will just output abort, and this is okay because we are willing to settle for selective abort security; it is equivalent to the adversary aborting the honest party, which is fine. So that is the result for the three-party setting. Because of the PSM, we get statistical security for NC1, and for general functions you can get computational security with black-box use of a PRG, using Yao's garbled circuits. Next we move to four-party statistical VSS. Verifiable secret sharing schemes are basically analogs of commitments in the multi-party setting. You have a commit phase and an open phase, also known as the sharing phase and the reconstruction phase. In the sharing phase, the dealer is going to commit to his input by sending a message. What we want is the standard thing from commitments: privacy, in the sense that at the end of the commit phase the input of the dealer is private from the malicious parties. In the reconstruction, we want commitment, meaning the unique secret defined at the end of the commit phase must be the one that is reconstructed. We also want correctness, in the sense that when the dealer is honest, the reconstructed secret should actually equal the dealer's input. Let me start by looking at a very naive VSS protocol.
Here, in the sharing phase, the dealer simply does a 1-private three-party CNF sharing. Note that the parties will hold common values because of the CNF sharing. Then, in the reconstruction phase, the parties simply broadcast their shares, and that's it. Now, because the CNF shares have common parts, this naturally defines an inconsistency graph: depending on whether two parties broadcast CNF shares whose common parts match or are distinct, you can label the edge between them as green or red. In the analysis of this naive VSS scheme, we are left with four cases, depending on the inconsistency graph. In the case where there is no edge, all parties are holding consistent shares, and you can just pick any pair of shares and reconstruct the dealer's secret; this is fine. In the case of three edges, everybody is holding inconsistent shares and something is terribly wrong; this can only be explained by a malicious dealer, so it's okay to reconstruct a default value. In the case of two edges, you can pinpoint that either the dealer is malicious or the party on the right, who is part of both inconsistent edges, is malicious. So you can identify the other two parties as honest and reconstruct from their joint view at the end of the sharing phase. It turns out that the trickiest case is the one with just a single edge: any of the three parties could be corrupt, so you cannot identify a pair of honest parties to help in the reconstruction. This is where our contribution comes in. We use an idea which is a combination of cut-and-choose and MACs.
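A minimal sketch of how the inconsistency graph might be computed from broadcast CNF shares; this is an illustrative reconstruction, not the paper's code:

```python
def inconsistency_edges(broadcast_shares):
    """broadcast_shares[i] maps part index -> value for party i's CNF share.
    Two parties overlap on every part index they both hold; the edge (i, j)
    is red (inconsistent) if any common part differs, green otherwise."""
    red = []
    n = len(broadcast_shares)
    for i in range(n):
        for j in range(i + 1, n):
            common = broadcast_shares[i].keys() & broadcast_shares[j].keys()
            if any(broadcast_shares[i][k] != broadcast_shares[j][k] for k in common):
                red.append((i, j))
    return red

# consistent sharing of parts [10, 20, 12]: no red edges
good = [{1: 20, 2: 12}, {0: 10, 2: 12}, {0: 10, 1: 20}]
assert inconsistency_edges(good) == []

# party 2 broadcasts a wrong part 0: the tricky single-edge case from above
bad = [{1: 20, 2: 12}, {0: 10, 2: 12}, {0: 99, 1: 20}]
assert inconsistency_edges(bad) == [(1, 2)]
```

The case analysis in the protocol then branches on how many red edges this function returns.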
At a high level, the dealer not only sends the CNF sharing, but also sends MACs on these shares to the parties, along with MAC keys to other parties to check during the reconstruction phase. The undisputed party, who is not part of any inconsistency edge, gets k MAC keys from the dealer; in the reconstruction phase, it sends a random subset of k/2 of them to one of the disputed parties and all k keys to the other. This lets the parties, at the end of the reconstruction, make a voting decision on the disputed edge, to see who is supported by the MACs. In the voting procedure, a party makes a decision for himself by checking that all the MACs he can verify pass, and for the other disputed party by checking that the k/2 common MACs pass, plus at least one of the remaining MACs, whose keys the other party does not know, also passes. This is how the voting works: if both disputed parties get a vote, or neither does, you can just discard the dealer; but if exactly one disputed party gets a vote, you use the winning share to reconstruct. In the analysis, we have two cases. Either the dealer is honest and one of the parties is bad; in this case, the analysis shows that the dishonest party will not get a vote, because that would be equivalent to forging a MAC without knowing the keys. Note that these are all information-theoretic MACs. In the other case, where the dealer is bad, all we need to show is that all parties unanimously agree on the votes. It doesn't really matter which exact value they reconstruct, because the dealer is malicious; you just need unanimous agreement, and by a simple analysis you can show that this holds except with negligible probability.
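The information-theoretic MACs mentioned here can be instantiated, for example, with the textbook one-time construction tag = a·m + b over a prime field; this is a standard sketch, not necessarily the paper's exact scheme:

```python
import secrets

P = 2**61 - 1  # prime modulus for a one-time information-theoretic MAC

def mac_keygen():
    """Key is a uniformly random pair (a, b) in the field."""
    return (secrets.randbelow(P), secrets.randbelow(P))

def mac(key, m):
    """tag = a*m + b mod P; unforgeable for one message even against
    an unbounded adversary, which is why forging a vote fails."""
    a, b = key
    return (a * m + b) % P

def verify(key, m, tag):
    return mac(key, m) == tag

key = mac_keygen()
t = mac(key, 42)
assert verify(key, 42, t)
assert not verify(key, 42, (t + 1) % P)   # a wrong tag never verifies
```

Without the key, a forged tag on a new message verifies with probability only 1/P, which matches the negligible failure probability claimed in the analysis.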
So what we have shown is a four-party statistical VSS in a total of two rounds. Next, we are going to extend this to four-party linear function evaluation. I'm going to take a slightly different approach to presenting the protocol: I'll just present the first round, which is basically the same as the VSS, a CNF sharing plus a MAC distribution. The second round is very complicated, so to help the presentation, we instead look at the simulator's extraction procedure: how the simulator extracts the input of the malicious party. Note that it's only a two-round protocol, and the adversary might not participate in the second round, yet you still need guaranteed output delivery; so the simulator has to extract the malicious input at the end of the first round. What it can do is look at the inconsistency graph induced by the messages the malicious party sends. Again, there are four cases. The no-edge case is the same as before, and the same holds for the two-edge case, where you can always identify a pair of honest parties. In the three-edge case, we can actually design the protocol to make sure that the final outputs are consistent with the input zero, so the simulator just extracts zero. As before, the tricky case is the single-edge case, and here it is either resolvable or not, depending on whether exactly one disputed party got the vote or whether both or neither did. In the case when exactly one disputed party got the vote, you just reconstruct using the winning share. But when both parties get the vote, you do a slightly unusual thing, where you absorb both losing shares as well as the other common parts of the shares.
This is a strange step in the linear function evaluation protocol, but it actually helps in designing the protocol, precisely because we can exploit the linearity of the function we are evaluating to force the outputs to be consistent with the extracted inputs. There are many complicated details in the protocol, but at a very high level, we are going to use the PSM sub-protocol as well as the view reconstruction trick. These also help us build the inconsistency graphs: previously, in the case of VSS, you could just let the parties broadcast their shares, but here we are trying to do a private evaluation of linear functions, so you cannot just broadcast the shares; you need the view reconstruction procedures inside the PSM even to obtain the inconsistency graphs. The other challenge is that different parties might hold different versions of the inconsistency graph, so it's a bit complicated to make sure that all of them agree on the same input. Next, we look at the impossibility of statistical SFE. Our impossibility result is based on a very simple function: the top party has y0 and y1, the bottom party has a bit b, and at the end of the protocol we want the bottom party to get y_b, so it's like a variant of multi-party OT. The attack proceeds in the following way. The bottom party sends three messages such that the top and left messages are consistent with b equals 0, and the top and right messages are consistent with b equals 1. Why is this possible? Precisely because the protocol guarantees privacy: the top message alone does not reveal any information about the bit b. Plus, the adversary is computationally unbounded, so he can sample, say, these two messages first, then fix the top message and still sample the other message; because he is computationally unbounded, he can always do this pairwise-consistent message sampling.
What happens as a result is that in the second step, when the malicious party gets the messages from the parties, it can just ignore one particular message and try to reconstruct the output from the other two. The claim is that this results in the output y0, precisely because this is a protocol with guaranteed output delivery: the protocol should work even if the ignored party were malicious, so the output must be reconstructable from just those two messages, and because the views of these three parties are consistent with b equals 0, the output will be y0. Similarly, on the other side, you discard the other party, reconstruct from the remaining one, and get y1. So this attack obtains both outputs, and it's obviously unsimulatable in the ideal world. The main reason is that the protocol guarantees both privacy and guaranteed output delivery, and these two properties come into conflict with each other. But this is just the impossibility for statistical security of nonlinear function evaluation, so next we look at computational SFE and see how we can bypass this result.