Thank you for the introduction. Good morning. This talk is about the exact round complexity of secure three-party computation, and it is joint work with my advisor, Arpita Patra. Let me begin with the exact question that our paper addresses. We consider the specific setting of three parties with an honest majority, and we investigate its exact round complexity over a range of security notions, namely guaranteed output delivery, fairness, security with unanimous abort, and finally security with selective abort. Some of these results were partially known, and our goal was to fill in the gaps and complete the picture for two popular network settings. The first network model is the minimal model of point-to-point private and secure channels; the second additionally assumes a broadcast channel. We complete this picture by means of two lower bounds and three upper bounds. While our upper bounds are specific to the case of three parties, our lower bounds are generic and extend to the honest-majority setting. Like we saw in the previous talk, let us quickly recap what the problem of multi-party computation, or MPC, is. We have n mutually distrusting parties, out of which t may be corrupt, and the goal is for them to compute some combined function of their private inputs. MPC gives them a means to do so with the guarantee that nothing beyond the output of the function will be revealed. In a nutshell, MPC can be thought of as emulating the effect of having a trusted third party to whom all the parties simply submit their inputs and get the output in return. MPC has been studied for a range of security notions, which are classified based on the degree of robustness of the protocol. The strongest one is guaranteed output delivery, or GOD for short. This is the ideal case in which, no matter what the adversary does, he cannot prevent the honest parties from getting the output.
So here, even if some parties are corrupt, at the end everyone is guaranteed to get the output. A slightly weaker notion is fairness, in which not everyone is guaranteed to get the output, but it is fair in the sense that if the adversary gets the output, then all the parties get the output. Basically, this is an all-or-nothing situation: either everyone gets the output or nobody gets the output. Next, an even weaker notion is security with unanimous abort. These protocols may be unfair, that is, it is possible that the adversary gets the output but the honest parties don't. Still, the adversary cannot keep the honest parties on different pages. What I mean is that the adversary may get the output, but there are only two possible cases: either all the honest parties get the output or all of them abort, so there is a sense of agreement or unanimity amongst the honest parties. And finally, we come to the weakest security notion, security with selective abort. In this case, the adversary can selectively deprive some of the honest parties of the output. For example, the adversary, who gets the output, may decide that only, say, the first two honest parties should get the output, while the others don't. So now the honest parties are not in agreement amongst themselves. The notions are listed from strongest to weakest, and therefore the implications follow top to bottom: guaranteed output delivery implies fairness, which implies unanimous abort, which finally implies selective abort. We consider these notions for our special setting of three parties with one malicious corruption. There are a number of reasons why this particular setting of honest majority is interesting to study, and I'll list a few of them now.
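As an aside, the implication chain just described can be sanity-checked with a small sketch. The modeling below is my own simplification, not part of the talk: an outcome records which of the three parties (one corrupt, two honest, matching our setting) learns the output, and each notion is the set of outcomes it permits.

```python
from itertools import product

# Model an outcome as (adv_gets, h1_gets, h2_gets): does the
# adversary / each honest party learn the output? One corrupt
# party and two honest parties, as in our three-party setting.
outcomes = list(product([False, True], repeat=3))

# GOD: the honest parties always get the output.
god = {o for o in outcomes if o[1] and o[2]}
# Fairness: honest parties agree, and if the adversary learns
# the output then the honest parties do too.
fair = {o for o in outcomes if o[1] == o[2] and (not o[0] or o[1])}
# Unanimous abort: honest parties agree (all get it or all abort),
# but the adversary may get it while they abort.
unanimous = {o for o in outcomes if o[1] == o[2]}
# Selective abort: any outcome is permitted.
selective = set(outcomes)

# Stronger notions permit fewer outcomes, so the implications
# GOD => fairness => unanimous abort => selective abort hold.
assert god <= fair <= unanimous <= selective
```

The subset chain is exactly the top-to-bottom implication from the slide: any protocol whose outcomes fit a stronger notion automatically fits every weaker one.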
The first and foremost reason is that the most likely scenarios for MPC in practice seem to involve only a small number of parties. For instance, even the first large-scale deployment of MPC, the Danish sugar beet auction, was designed for the three-party setting. Again, recent technologies such as Sharemind, and a few papers on secure machine learning, all involve protocols with a small number of parties. The next reason is that having an honest majority allows us to study the strong security goals of guaranteed output delivery and fairness, which are otherwise known to be impossible, as shown by the famous result of Cleve. The third reason is that there is a lot of evidence in the literature that a single corruption can be exploited to circumvent known lower bounds. For example, there is a famous lower bound of three rounds for fair MPC with t greater than one, and this was circumvented by the two-round four-party protocol of Ishai et al., which achieves guaranteed output delivery by leveraging the fact that there is only a single corrupt party. Also, in the context of verifiable secret sharing, it is known to be possible in one round only in the case of a single corruption, but impossible for general t. Another reason is that this setting allows us to construct protocols from weak assumptions: there are three-party protocols based on garbled circuits which are built just from one-way functions or one-way permutations, completely shunning public-key primitives such as oblivious transfer, and this would not be possible, say, in the two-party case or in the dishonest-majority case. These kinds of constructions also turn out to use only lightweight operations; for instance, some of the garbled-circuit constructions are able to avoid the cut-and-choose technique.
And finally, having an honest majority promises a better round guarantee, as substantiated by the lower bound in the plain model, which is four rounds for the case of dishonest majority but only two for the case of honest majority. So these are some of the reasons why this particular setting is interesting, and with this background, I now move on to present our results in more detail. As I mentioned, we have two sets of results, one for the setting without broadcast and the other assuming the presence of broadcast. In the setting without broadcast, the following was already known. First, two rounds is known to be the lower bound for any meaningful notion of MPC security, which includes security with selective abort, and for this we already had a tight upper bound, the protocol of Ishai et al. For the other extreme of guaranteed output delivery, it is known to be impossible in the absence of broadcast no matter how many rounds are given, so the question of round complexity for GOD without broadcast is not relevant. Our goal was to complete the remaining picture, and we do so by means of one lower bound and one upper bound. Our first lower bound shows that three rounds are necessary to get unanimous abort without broadcast. What this means is that in two rounds, the best notion of security you can get is security with selective abort, which implies the optimality of the protocol of Ishai et al. in terms of the security it achieves in two rounds. Next, we have an upper bound, which is a three-round fair protocol without broadcast. Since fairness is a stronger notion than unanimous abort, a lower bound for unanimous abort translates to a lower bound for fairness, and this makes our three-round fair protocol tight.
Again, for the same reason, since fairness is a stronger notion, an upper bound for fairness also serves as an upper bound for unanimous abort. With this, the picture on the left side is complete, and we move on to the setting with broadcast. Here, the known results are simply the ones for security with selective abort carried over from the without-broadcast setting. In the case of unanimous abort with broadcast, we had a surprising and interesting result: it turns out that two rounds are actually sufficient to get unanimous abort with broadcast. What this means is that broadcast helps to improve the round complexity of unanimous abort protocols from three to two. However, this is not the case for fair protocols, because our second lower bound proves that three rounds are still necessary to get fairness even if you are given a broadcast channel. So broadcast does not help to improve the round complexity of fair protocols. This lower bound of ours nicely complements the known lower bound of three rounds for fair protocols with t greater than 1 and general n: our lower bound shows that for the case of t equal to 1 and n equal to 3 as well, three rounds are necessary to get fairness. Finally, we have a third upper bound, a three-round protocol achieving guaranteed output delivery in the presence of broadcast. Again, since guaranteed output delivery is a stronger notion than fairness, the lower bound for fairness translates to a lower bound for guaranteed output delivery, making our upper bound for GOD tight. And to fill in the last piece of the puzzle, the upper bound for a fair protocol with broadcast is in fact implied by two of our existing constructions.
Both our construction of the fair protocol without broadcast and our construction of the GOD protocol with broadcast can be considered upper bounds for the fair protocol with broadcast. So this completely settles the question of exact round complexity in this particular setting. As I mentioned, the good news is that these lower bounds are generic and can be extended to any n where n lies between 2t and 3t. Our upper bounds, though specific to the three-party case, are on the bright side built only from one-way functions or one-way permutations, and technique-wise, garbled circuits underlie all our upper bound constructions. Now let me give you a high-level overview of the approach we use in our lower bound results. As I mentioned, we have two lower bounds showing that three rounds are necessary, first for unanimous abort without broadcast, and next for fairness with broadcast. The general approach we use in both lower bounds is as follows. First, we pick a special three-party function, and the proof is by contradiction: we assume that a two-round protocol exists that is secure with unanimous abort or fair, respectively. The next step is to define a sequence of hybrids involving different adversarial strategies. The idea is that within a hybrid, we use the assumption that the protocol is correct and secure to derive some inferences, and across the hybrids, we compare the views of the parties to build up these inferences and finally arrive at a contradiction. In our lower bounds, the contradiction is that any such two-round secure protocol must in fact violate privacy, showing that three rounds are necessary. Following this template, let me explain the overview of the lower bound for fairness with broadcast.
The special three-party function we pick is the following. The inputs of the parties are single bits, and the output is the logical AND of p2's and p3's inputs. The proof is again by contradiction: we assume that a two-round fair protocol exists with broadcast. Now we move on to the strategies. Consider one such two-round execution, call it sigma1, in which party p1 is corrupt. His strategy is very simple to describe: he behaves honestly in the first round, and he simply remains silent in the second round. With this strategy, since he behaved honestly in the first round, he receives all the desired communication from both p2 and p3 in both round one and round two, and by correctness of the protocol, this means he is able to compute the output. Now recall that our assumption is that the protocol is fair, and since a corrupt p1 is able to compute the output, it must be the case that an honest p2 is also able to compute the output. So the inference we derive from here is that an honest p2 is able to compute the output even without the round-two message from p1. Building on this, we move on to the next strategy, sigma2, in which p2 is corrupt. The intuition is that since p2 does not need the round-two message from p1 to compute the output, as per what we inferred before, he can afford to misbehave towards p1 right from the first round. So p2's strategy in sigma2 is to behave honestly only towards p3 in the first round, to not communicate with p1 at all, and to simply remain silent in the second round.
Now, the claim we make is that the view of a corrupt p2 in sigma2 actually subsumes the view of an honest p2 in sigma1. To give an intuition of why this holds, let us analyze the view of p2 in sigma1. It comprises the round-one messages from p1 and p3, and the round-two message only from p3. This is something which even a corrupt p2 gets in sigma2: he also gets the round-one messages from p1 and p3, and the round-two message from p3. Note that we can rule out the possibility that this round-two message from p3 to p2 is influenced by p2's misbehavior in the first round. The reason is that even if p1 were to report this first-round misbehavior to p3, he could only do so during the second round, by which time the honest p3 would already have sent his round-two message at the beginning of the second round. All in all, we can conclude that whatever an honest p2 sees in sigma1, a corrupt p2 also sees in sigma2, and by the conclusion we made earlier, it must be true that p2 is able to compute the output. Again by fairness, since a corrupt p2 is able to compute the output, it must be the case that an honest p3 is also able to compute the output. Now we make a crucial claim: it is not only the case that p3 is able to compute the output at the end of the protocol, but in fact he should be able to compute the output at the end of round one itself. Let me give just an idea of how this works; the formal argument is in the paper. If you analyze the nature of this function, you notice that it depends only on the inputs of p2 and p3. As per the strategy of p2 in sigma2, the only things he has communicated are his round-one message to p3 and perhaps his broadcast communication in round one, and both of these are available to p3 at the end of the first round itself.
The only additional thing that p3 gets in the second round is communication from p1, but that communication could not possibly hold any information about p2's input x2, because p2 has effectively never interacted with p1 throughout the protocol. So whatever information p3 needs related to p2's input is available to him at the end of round one itself, enabling him to compute the output. With this, we move on to the final strategy, in which p3 is corrupt. In general, in any MPC protocol, if a party is able to compute the output at the end of round one itself, then the protocol is susceptible to something called the residual function attack, in which the party can locally plug in inputs of his choice and get multiple evaluations of the function while the inputs of the honest parties remain fixed. In this particular example, we can imagine that p3 participates with his input being zero; as per the ideal functionality of AND, when x3 is zero, he should learn nothing about x2. But since, as we concluded earlier, he is able to compute the output at the end of round one itself, there is nothing stopping a corrupt p3 from locally plugging in his input as one, allowing him to learn x2. So finally, this is a violation of privacy, because p3 is able to learn x2 even when his input is zero. Before concluding, I would like to give an overview of some of the challenges we face in our upper bounds and the techniques we use to handle them. In our first upper bound, the three-round fair protocol without broadcast, the main challenge is that since we don't have a broadcast channel, the adversary can behave in many ways to leave the honest parties in a state of conflict and confusion at the end of the second round. In particular, we deal with scenarios of the following kind, in which a party p1 is supposed to distribute some common information to both the remaining parties.
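The residual function attack just described can be made concrete with a toy sketch; this is my own illustration of the attack on our AND function, not material from the talk itself:

```python
# Toy illustration of the residual function attack on the special
# three-party function f(x1, x2, x3) = x2 AND x3 (single-bit inputs).

def f(x1, x2, x3):
    return x2 & x3

# Ideal world: with x3 = 0 the output is always 0, so p3 learns
# nothing about the honest p2's input x2.
assert f(0, 0, 0) == 0 and f(0, 1, 0) == 0

# If p3 can already compute the output at the end of round one,
# the honest inputs are fixed inside p3's view, so p3 can locally
# re-evaluate the "residual" function on any x3 of its choice.
for x2_honest in (0, 1):
    def residual(x3, _x2=x2_honest):
        return f(0, _x2, x3)  # honest inputs fixed, only x3 is free
    # Plugging in x3 = 1 reveals x2 exactly, a privacy violation,
    # since p3's actual input in the execution was 0.
    assert residual(1) == x2_honest
```

The loop is the heart of the contradiction: whatever the honest x2 is, one extra local evaluation with x3 = 1 recovers it, even though p3 ran the protocol on input 0.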
To do so in the absence of broadcast, we make both the remaining parties, p2 and p3, exchange this information in the second round to cross-verify. Now suppose an honest party finds an inconsistency during this cross-verification; then he simply does not know whom to blame. For example, if p2 is honest and finds during the cross-check that the messages don't match, he does not know whether p3 is bad or whether p1 has not distributed the information consistently. For this, we introduce a new mechanism which rewards the honesty of parties with something we call a certificate. In this context, p1 would be given a certificate for behaving honestly and distributing the information consistently to the remaining two parties. The certificate serves a dual purpose: first, it is used to unlock the output, and second, it can be used as a proof by p1 to convince the other honest party, p2, that he is not the reason for the confusion and that he has indeed distributed the information consistently. We formalized this reward mechanism as a primitive we call authenticated conditional disclosure of secrets, and we realized it using privacy-free garbled circuits. Next, for our two-round protocol with unanimous abort with broadcast, we identified that the round-two private communication is a soft spot for the adversary to attack and disrupt the unanimity amongst the honest parties. This is quite natural, because if a corrupt party misbehaves towards only one of the honest parties in the second round, then there is simply no time left to inform the other honest party and bring him or her to the same page. So we break down the round-two private communication into two parts. The first part is private communication in the first round, because if a party misbehaves in this step, there is scope for him to be detected early, and this can be reported to the other parties in round two.
And the second part is broadcast communication in the second round. Here, if a corrupt party misbehaves, he will be publicly detected by all the honest parties, and they will continue to be in agreement. In particular, we introduced this two-part release mechanism to transfer the encoded inputs of the garbled circuit in our protocol with unanimous abort. Finally, for our upper bound with guaranteed output delivery, our main concern was that since we want robustness, we could not afford any kind of conflict or confusion at the end of the second round. For this, we designed a two-round building block with a property of strong identifiability, which means that an honest party either gets the output or identifies the corrupt party at the end of the second round, and this helps everyone get the output no matter what. Finally, there is a common challenge we face in all our upper bounds. Our upper bounds deal with multiple executions, and whenever you have a protocol with multiple executions, a common concern is input consistency: you need to make sure that the corrupt party uses the same input in all the executions. In fact, we had to deal with two kinds of input consistency issues. One was that within an execution, we needed to make sure that a party uses the same input across the garbled circuits of that execution. For this, we used a variant of an existing proof-of-cheating mechanism, which is very popular in the garbled circuits literature. The other kind was to enforce input consistency across executions. For this, we introduced a new trick, special to the three-party case, which forces the corrupt party to use the same input with absolutely no additional overhead over our protocol. With this, I conclude my presentation, and thank you for listening.