Okay, so hello everyone. Today I'm going to talk about the power of an honest majority in three-party computation without broadcast. This is joint work with Ran Cohen, Eran Omri, and Tom Suad. Before actually talking about three-party computation, let me start with an example in the two-party setting, specifically coin tossing. In this example, we have two parties A and B that wish to toss a coin, meaning that at the end of the execution they both agree on a common uniform random bit. This should hold even if one of the parties is corrupted; namely, if A is corrupted, then B outputs a uniform bit regardless of what A does. However, a famous result by Cleve showed that this is actually impossible to achieve: there is no fair coin tossing in the two-party setting. One way to try and overcome this issue is to consider what we call server-aided computation. Instead of two parties, we now have three, where the third party, C, is a server that tries to assist A and B in tossing the coin. At the end, C does not receive the output; only A and B receive the common random bit. However, what happens if the server is colluding, meaning that it might help one of the parties? Cleve's result can be generalized to apply to this case as well, showing that no protocol can compute coin tossing in this setting. If, on the other hand, the server is non-colluding and at most one party may be corrupted at a time, then a result of Rabin and Ben-Or shows that this is computable: we can compute coin tossing with full security in this setting. However, their result relies heavily on the use of broadcast, meaning that if one party sends a message, then the other two parties receive exactly the same message regardless of who is malicious. So a natural question is whether we can achieve a secure protocol without broadcast. Namely, we only assume an honest majority, where at most one party may be corrupted at a time.
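As an aside, here is a toy illustration (not Cleve's actual proof, and the protocol and attacker are hypothetical simplifications) of why fair coin tossing is hard: in a naive XOR-based coin toss with no commitments, the party that speaks last completely controls the outcome.

```python
import random

def honest_coin_toss():
    # Naive protocol: A sends a random bit, then B sends a random bit,
    # and both output the XOR. With two honest parties the result is
    # a uniform bit.
    a = random.randrange(2)
    b = random.randrange(2)
    return a ^ b

def coin_toss_with_corrupt_b(target):
    # A rushing corrupt B sees A's bit first and picks its own bit so
    # that the XOR equals `target` -- a total bias. This illustrates why,
    # without extra mechanisms (commitments, many rounds), the last
    # mover controls the coin; Cleve's theorem shows some nontrivial
    # bias survives no matter what mechanism is used.
    a = random.randrange(2)
    b = a ^ target  # chosen after seeing a
    return a ^ b

outcomes = {coin_toss_with_corrupt_b(0) for _ in range(100)}
print(outcomes)  # -> {0}: the corrupt B forces the outcome every time
```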
And now there is no broadcast. Can we compute coin tossing in this setting as well? Until this work, this question was left unanswered. More generally, we can ask: what is the power of having only an honest majority? Meaning that we do not give the parties broadcast, and in particular they have no setup assumptions that imply broadcast, like a PKI or proofs of work. The only thing we assume is that in this three-party setting, at most one party may be corrupted at a time. What can be computed in this setting? This is the main question of our paper. Let me give you a brief overview of the known results. First of all, broadcast is impossible to compute without broadcast; this is due to Lamport, Shostak, and Pease. Furthermore, Cohen, Haitner, Omri, and Rotem in 2016 gave a characterization of all symmetric functionalities, where a symmetric functionality means that all parties have the same output. Their upper bounds can actually be generalized to the asymmetric case, where the parties do not necessarily have the same output. However, their lower bounds cannot be generalized to the asymmetric case, because they use a generalization of the hexagon argument due to Fischer, Lynch, and Merritt, and so they rely heavily on the agreement property. In particular, they cannot handle the server-aided setting; their lower bounds do not apply there. Finally, there is a result by Fitzi, Garay, Maurer, and Ostrovsky showing that two example functionalities, called oblivious cast and converge cast, imply broadcast. Therefore, by the result from the previous slide, these two functionalities are not computable in this setting. So finally, let me give you a brief overview of our results. We consider the three-party setting and show both necessary and sufficient conditions for full security.
An interesting use case of our result is that we actually characterize server-aided computation, meaning that we can say exactly which functionalities can be computed in this setting and which cannot. Moreover, what the proof actually shows is that given any server-aided three-party protocol, we can transform it into a two-party protocol computing exactly the same functionality. This means that the additional server did not actually provide any additional power to the two parties: if the parties can compute the function with it, then they can compute it without it. In particular, if we consider the coin-tossing functionality that we started this talk with and combine this result with Cleve's result, it follows that coin tossing cannot be computed in the server-aided setting, even though we have an honest majority. Now I want to talk about the main argument that we use to prove all of our lower bounds. This technique is called the split-brain argument. It is taken from distributed computing, where it was used to rule out agreement, that is, to show that functionalities like broadcast cannot be computed in certain situations, in asynchronous and partially synchronous networks. In this work, however, we do not care about agreement; we care about MPC. We care about privacy, fairness, and guaranteed output delivery, meaning that the honest parties must always receive an output according to some predetermined functionality. And again, the outputs of the parties do not necessarily have to be the same value. So we adapt their argument, which was used to rule out agreement, to the MPC setting, where the parties do not necessarily agree. Before showing you how we use this argument in our setting, let me give a brief explanation of how to use it to show that broadcast is impossible to compute.
So let's say that we have a party C that wishes to broadcast a value z to both A and B. Security means, first, that if C is corrupted, then A and B still agree on some common output. This output does not have to be z, but it has to be the same for both A and B. Second, if one of the other parties is corrupted, say A, then the other party that is supposed to receive the output will always receive the input of C, so it will always output z. This is how we define security for broadcast in this setting. So how does the split-brain argument work? I am going to show you three different scenarios, three different attackers, and then we will analyze them. The first attacker is a corrupt C that does the following: it splits its brain and talks to A like an honest C would on input zero, as if it never received any messages from B. So this C_A, as we will denote it, pretends that it never received any messages from B and talks like an honest C would in this situation on input zero. Similarly, C_B talks only to B on input one, as if it never received any messages from A. This is the first attacker. To show how we construct the other attackers, let us first put all four of these parties on a line: on the left we have C_A, which talks to A, and on the right we have C_B, which talks to B. By the security property mentioned earlier, in this scenario A and B must output the same value, whatever it may be; we don't know what it is, but it must be the same value. Now let us consider the second scenario. This time we consider a corrupt A, and this attacker does not talk to C. Additionally, in this scenario we assume that the honest C has input one. What the attacker does is imagine talking to the C_A from the previous attack: it imagines talking to some honest C that never received any messages from B and that has input zero.
By the security of broadcast, the output of B must be the input of the real honest C, which is one in this case. Now let us compare these two scenarios. Clearly, B cannot distinguish which party is corrupted, because it has exactly the same transcript, and hence the same view, in both of them. So it also has the same output, and because it outputs one in the top scenario, where A is corrupted, it also outputs one in the bottom scenario, where C is corrupted. Similarly, we can consider a corrupt B that imagines talking to some C_B that has input one, in a scenario where the real honest C has input zero. In that scenario, by definition, A outputs zero. And again, A cannot distinguish the bottom scenario, where B is corrupted, from the middle scenario, where C is corrupted, so A must also output zero in the middle scenario. However, now we have a contradiction, since we said earlier that A and B must output the same value if C is corrupted, which is clearly not the case. This is how the split-brain argument is used in the broadcast case. Now I am going to show you how we adapt this argument to the general MPC case. Again, we start with the three scenarios, the three corrupted parties, and fix some inputs z1, x, y, and z2 for the four parties. By the same argument as before, B cannot distinguish the top scenario from the middle scenario, and A cannot distinguish the middle scenario from the bottom scenario, so the outputs must be the same in those scenarios. But now, unlike in the broadcast setting, the parties A and B do not necessarily have to agree, so we cannot claim that we already have a contradiction. Instead, we will use the security definition of MPC to derive a necessary condition that the functionality must satisfy.
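Before moving to the MPC adaptation, the broadcast contradiction just described can be written as a short chain of deductions (the notation out and View is improvised here; superscripts index the three scenarios):

```latex
\begin{align*}
\text{Scenario 1 (corrupt } C \text{ playing } C_A(0), C_B(1)\text{):}\quad
  & \mathrm{out}_A^{(1)} = \mathrm{out}_B^{(1)} && \text{(agreement)}\\
\text{Scenario 2 (corrupt } A\text{, honest } C \text{ on } z=1\text{):}\quad
  & \mathrm{out}_B^{(2)} = 1 && \text{(validity)}\\
\text{Scenario 3 (corrupt } B\text{, honest } C \text{ on } z=0\text{):}\quad
  & \mathrm{out}_A^{(3)} = 0 && \text{(validity)}\\
\mathrm{View}_B^{(1)} = \mathrm{View}_B^{(2)}
  \;\Rightarrow\; & \mathrm{out}_B^{(1)} = 1\\
\mathrm{View}_A^{(1)} = \mathrm{View}_A^{(3)}
  \;\Rightarrow\; & \mathrm{out}_A^{(1)} = 0
\end{align*}
% Together: 0 = out_A^{(1)} = out_B^{(1)} = 1, a contradiction.
```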
Specifically, we define security using the real-versus-ideal paradigm. We imagine some idealized execution that computes the functionality. In this idealized execution there is an additional trusted party that cannot be corrupted; it is always honest. The parties send it their inputs, and it then sends them their outputs. We say that the protocol is secure if no one can distinguish the real execution from the ideal execution. Notice, however, that in the ideal execution at the bottom, the honest parties always send their original inputs; in this case, B and C send y and z, respectively. The corrupt A can send whatever it wants. For the specific attack defined above, because A imagines talking the way an honest A and an honest C_A would in a certain situation, this means that the x* it sends is sampled from some distribution that depends only on the real input x and the imaginary input z1. After receiving these three values, the trusted party computes the functionality f and distributes the outputs accordingly. In this case we only care about the output of the honest party B, and the security definition states in particular that the output of B in the real world must be indistinguishable from its output in the ideal world, denoted w_B here. So this is what we know about the first scenario. Now let us consider the second scenario, where C is corrupted. We know that B cannot distinguish these two cases, and therefore it has exactly the same output in both of them. But now the ideal world is different, because this time A is honest and C is the one that is malicious and can change its input. In particular, A sends its original input x, and C samples some z* from a distribution that depends on z1 and z2.
However, by the argument stated earlier, the output of B in this ideal world is indistinguishable from its output in the real world, which is exactly the same as in the previous scenario. This means that if we compare these two ideal worlds, where on the left A is corrupted and on the right C is corrupted, the output of B in the two scenarios must be indistinguishable. Similarly, we can say the same thing about the output of A when B and C are corrupted, and together we call this property C-split-brain simulatability, since we only split the brain of C. We can analogously split the brain of A and the brain of B, and get additional conditions that f must satisfy. In particular, we have the following theorem, which states that if a functionality can be securely computed, then it must be C-split-brain simulatable, and by a similar argument it must also be A- and B-split-brain simulatable. Now I am going to show you some of the implications of this argument, both negative and positive results, and I am going to show them pictorially. First, we have the result I mentioned earlier about the server-aided setting, where the additional third party C has neither an input nor an output. What we show is that we can take any three-party protocol in the server-aided setting and use it to construct a two-party protocol computing exactly the same functionality, meaning that the additional help of the server gave nothing to A and B. Another result is about solitary-output functionalities, where only one party, in this case A, receives the output. Here we show that the two examples shown on the left cannot be computed, even assuming an honest majority and even though only one party receives the output. Interestingly, here we also use the fact that the protocol must satisfy the privacy property.
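As an aside, the C-split-brain simulatability condition just described can be written roughly as follows (the notation is improvised here: f_B is B's output of f, and P, Q are the input-sampling distributions of the two hypothetical simulators):

```latex
% There must exist families of distributions P_{x,z_1} and Q_{z_1,z_2}
% such that for all inputs x, y, z_1, z_2:
%   - corrupt-A ideal world: x^* \sim P_{x,z_1}, B gets f_B(x^*, y, z_2);
%   - corrupt-C ideal world: z^* \sim Q_{z_1,z_2}, B gets f_B(x, y, z^*);
% and the two output distributions are indistinguishable:
\exists \{P_{x,z_1}\}, \{Q_{z_1,z_2}\} \ \forall x, y, z_1, z_2:
\quad
\bigl\{ f_B(x^*, y, z_2) : x^* \sim P_{x,z_1} \bigr\}
\;\approx\;
\bigl\{ f_B(x, y, z^*) : z^* \sim Q_{z_1,z_2} \bigr\}
```

The analogous conditions obtained by splitting the brain of A or of B constrain the outputs of the other honest parties in the same way.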
We can show that if the split-brain argument does not work, that is, if the protocol is actually secure against those attackers, then A can somehow learn one of the inputs of the other parties. Finally, we also show some positive results. Specifically, for these positive results we consider two-output functionalities, where only two parties, in this case A and B, receive an output and C does not. Unlike in the server-aided setting, here C might have an input that can affect the output of the function. We show an interesting class of functionalities for which such a protocol actually exists. Moreover, we show that this class strictly contains the class of fair two-party functionalities, viewed as three-party functionalities where C has no input. It also includes the generalization of the results of Cohen, Haitner, Omri, and Rotem, which were stated for symmetric functionalities; their upper bound can easily be generalized to the asymmetric case, and our protocol includes those generalizations as well. With the time I have left, I want to talk about server-aided computation and prove that result to you. Here we fix some two-party functionality f and assume that we have a three-party protocol that securely computes it; our goal is to construct a two-party protocol that also computes it. We start by actually constructing a four-party protocol. This four-party protocol is the protocol resulting from the split-brain argument, where we put the four parties on a line: C_A talks only to A, like an honest C would, as if it never received any messages from B, and C_B talks only to B, as if it never received any messages from A. This four-party protocol has a lot of interesting properties. One way to see one of them is to consider an adversary that corrupts both C_A and C_B. By the argument stated earlier, it can easily be translated into an adversary in the three-party protocol above corrupting only C.
But in that three-party protocol, we have the guarantee that, because C has no input and the protocol is secure, the outputs of A and B must be indistinguishable from the real output f(x, y). This means that the outputs of A and B in the bottom four-party protocol must also be the same, because they cannot distinguish the two scenarios; they do not know whether they are acting in the four-party protocol or in the three-party protocol. Additionally, this also holds if we take the adversary in the bottom four-party protocol to be semi-honest. I want to stress this: even though the adversary below is semi-honest, the adversary above is malicious. What this gives us is that, because the adversary at the bottom is semi-honest, the views of A and B are exactly the same as in an honest execution. So overall we get that the output, even in an honest execution, is also f(x, y), meaning that the four-party protocol is correct: it computes f correctly. Now let us consider different adversaries. What happens if we have an adversary that corrupts both A and C_A, on the left? By the split-brain argument, this directly translates to an adversary corrupting A in the original three-party protocol. So by the security of the three-party protocol, the bottom four-party protocol is secure against an adversary corrupting A and C_A. Notice, furthermore, that this argument works even if we consider malicious adversaries in the bottom four-party protocol. The same argument applies to an adversary corrupting B and C_B, which can likewise be translated to an adversary corrupting B in the original three-party protocol.
So overall, we have constructed a four-party protocol that computes f correctly and has these somewhat unusual security guarantees: it is secure against A and C_A if they are colluding, and secure against B and C_B if they are colluding. But I promised you a two-party protocol. To finish the argument, notice that this four-party protocol is actually a two-party protocol in disguise: we can take A and C_A to be one party and B and C_B to be the second party, and we obtain a secure two-party protocol. The reason it is secure is that any adversary corrupting either of those combined parties directly translates to an adversary in the original three-party protocol corrupting either A or B. I want to stress that this argument only works if C does not have an input; if it does, there is a subtlety where the argument fails, and I refer interested listeners to the paper. With the time I have left, I now want to go back to the coin-tossing functionality that I mentioned earlier. Again, if two parties wish to toss a coin, Cleve showed that this cannot be done fairly: one of them can always bias the output of the other party. Combined with the previous result, this means that coin tossing is impossible to compute even in the server-aided setting. But for the special case of coin tossing, we can actually say more, because Cleve showed more than that: he showed that any R-round two-party protocol can be biased by at least on the order of 1/R, one over the number of rounds. And if we combine this with the result of Moran, Naor, and Segev, who constructed a protocol whose bias is at most O(1/R), so that no adversary can bias it by more than that, this means that the optimal bias of an R-round two-party protocol is on the order of 1/R.
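The two bounds just cited combine as follows (R is the round count; constants are suppressed):

```latex
% Cleve 1986: every R-round two-party coin-tossing protocol admits an
% efficient adversary that biases the honest party's output by \Omega(1/R).
% Moran-Naor-Segev 2009: there is an R-round two-party protocol with
% bias at most O(1/R) against any adversary.
\text{optimal bias of an } R\text{-round two-party coin toss}
\;=\; \Theta\!\left(\tfrac{1}{R}\right)
```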
An interesting question to ask is whether the same holds in the server-aided setting, and notice that the previous argument I showed you does not actually answer this question. To see why, first consider the translation from the three-party protocol to the two-party protocol. In the two-party protocol, because we said that C cannot force A and B to output different values, it cannot attack at all; from this we got that the common output of A' and B' on the right must be statistically close to a uniform bit. The reason this does not answer the question about the optimal bias in the three-party server-aided setting is that there might be some strange protocol in which we allow C to bias the output by much less than 1/R. Notice that in the previous argument we did not allow it to bias the output at all. Now, because we have to give C this freedom, this actually restricts both A and B: neither of them can bias by on the order of 1/R anymore; they can bias by a little less, but not by 1/R. So this could, in principle, result in a better protocol than what can be achieved in the two-party setting, and our previous argument does not rule this out. However, I claim that such a protocol does not actually exist. We can show that a bias on the order of 1/R is optimal in the server-aided setting as well, and in particular, having A and B ignore C and execute the protocol of Moran, Naor, and Segev is optimal. How does the proof go? It is essentially a refinement of the previous argument in the server-aided setting. Again, we start with a three-party protocol computing the coin-tossing functionality, but this time we allow the adversary some bias: in an honest execution, the output of A and B will be a uniform random bit.
However, if one of the parties is corrupted, then we only have the guarantee that the output is still common, but it may be biased by at most 1/(cR), where c is some sufficiently large constant. Just like before, we first translate the protocol into a four-party protocol, using exactly the same argument: if we take an adversary corrupting C_A and C_B, this directly translates to an adversary corrupting C in the three-party protocol. Therefore, by the security guarantee that we have, the output of A and B in the bottom four-party protocol is a common bit, biased by at most 1/(cR). So again we have this four-party protocol, and by the same trick as before, it is actually a two-party protocol in disguise. To finish the argument and get a contradiction, the only thing we need to show is the existence of an adversary corrupting one of those parties that can bias by more than we allow, by more than 1/(cR). To achieve this, we can still use Cleve's argument. To make this formal, we use the result of Agrawal and Prabhakaran, which generalizes Cleve's result to general sampling functionalities that sample correlated values for the two parties. Applying their result to this biased coin-tossing setting, we get that in the two-party protocol, one of the parties can bias the output by more than 1/(cR), assuming that c is sufficiently large. So one of the parties, if corrupted, can bias by more than we allow, and this directly translates to an adversary corrupting either A or B in the original three-party protocol that can bias by more than we allow. This finishes the proof. To summarize, we consider general three-party functionalities; we assume an honest majority, and we assume that the parties cannot broadcast.
And we show both positive and negative results about which functionalities can be computed with full security in this setting and which cannot. In addition, we provide a characterization of the server-aided model, where one of the parties has no input and no output. All of these results can actually be extended to more parties using standard techniques like the player-partitioning argument. However, we do not provide a characterization of all functionalities, so we have many interesting open problems. One example is the following: does there exist a three-party functionality that is not securely computable, but such that if any one of the parties does not receive an output, the functionality becomes computable? That is, if C does not receive an output and only A and B do, the function can be computed; if B does not receive an output and only A and C do, the function can again be securely computed; and the same if A does not receive an output. We do not know how to answer this question. And that's it. Thank you for listening.