Hello. OK, so the next talk is by Sanjam Garg and Amit Sahai. It's called Adaptively Secure Multi-Party Computation with Dishonest Majority, and the talk will be given by Sanjam.

Thanks for the introduction. So as mentioned, I'm going to talk about adaptively secure multi-party computation with dishonest majority. This is joint work with Amit Sahai from UCLA.

OK, so let me start by giving the basic outline of what multi-party computation is. Multi-party computation allows a set of mutually distrustful parties to compute a function on their private inputs. This notion was introduced by Yao in the setting of two parties and then extended to the multi-party setting by GMW.

We're going to consider the setting of adaptive adversaries, where the adversary has the ability to corrupt parties on the fly. It can look at what is happening in the protocol and, based on what it has seen, choose which parties to corrupt; subsequent corruptions are based on its view so far. So it learns something from the views of the parties corrupted so far and uses that to choose future corruptions. This notion was introduced and formalized by CFGN in 1996, and it has been very interesting and has had a lot of implications in different areas of crypto.

So let me define things more formally. In multi-party computation, security is defined by comparing a real-world execution of the protocol with an ideal-world scenario. We require that for every real-world adversary that adaptively corrupts parties of its choice, there exists an ideal-world adversary that corrupts the same parties and essentially achieves the same effect. What I mean by that is that the distribution of the outputs generated by all the parties in the real world is indistinguishable from the distribution of the outputs of the parties in the ideal world.
More formally, no probabilistic polynomial-time distinguisher can differentiate between the joint input-output distribution of the honest parties and the adversary in the ideal world and the one in the real world, except with negligible probability.

OK, so before going into details, I want to give a motivating example. This was in CFGN itself. Consider a setting where we have one dealer who holds some secret SK, and there are n parties. We require that this dealer secret-shares his secret SK among some √n randomly chosen parties. So what he does is he chooses √n random strings such that they all XOR to the secret value SK, and he distributes these shares among √n parties chosen randomly from the set of all parties. He also publishes the set of parties who received shares.

Now consider an adversary that corrupts on the order of n, say roughly n/2, of the parties. A non-adaptive, or static, adversary has to choose all the parties it corrupts before anything happens, so it is highly unlikely to hit all √n parties which actually hold shares, and hence it will be unable to reconstruct the secret. On the other hand, an adaptive adversary can simply corrupt exactly the √n parties which hold the shares and use them to recover the secret.

So what are the previous results? As I already mentioned, this notion was introduced by CFGN, and they got an adaptively secure MPC protocol in the standalone setting with honest majority. There has been a lot of work trying to improve this and do things without assuming honest majority. The first results were for the setting of zero knowledge and OT by Beaver, which were later extended to general two-party computation by Katz and Ostrovsky.
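The dealer's scheme from the motivating example is easy to sketch. The following Python is my own illustration, not anything from the paper; the function names and the use of byte strings are my own choices.

```python
import math
import secrets

def xor_all(chunks, length):
    """XOR together a list of equal-length byte strings."""
    acc = bytes(length)
    for c in chunks:
        acc = bytes(a ^ b for a, b in zip(acc, c))
    return acc

def deal(sk: bytes, n: int):
    """Split sk into sqrt(n) XOR-shares and hand them to sqrt(n)
    randomly chosen parties out of n; the holder set is published."""
    k = math.isqrt(n)
    # k - 1 random strings, plus one correction share so all k XOR to sk.
    shares = [secrets.token_bytes(len(sk)) for _ in range(k - 1)]
    shares.append(bytes(a ^ b for a, b in zip(sk, xor_all(shares, len(sk)))))
    holders = secrets.SystemRandom().sample(range(n), k)  # public
    return dict(zip(holders, shares))

def reconstruct(dealt: dict, length: int) -> bytes:
    # An adaptive adversary corrupts exactly the published holders
    # and XORs their shares back together.
    return xor_all(list(dealt.values()), length)
```

A static adversary that fixes n/2 corruptions in advance covers all √n random holders with probability roughly 2^(-√n), which is negligible; the adaptive adversary just reads the published holder set and corrupts precisely those parties.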
And the first result which got some form of adaptively secure multi-party computation without assuming honest majority was by CLOS, but they got it in the common random string model. In that work they achieved security under a stronger notion called UC security, but my focus is going to be on the standalone setting, so no composition in that sense.

So the question we ask is: can we do adaptively secure MPC without honest majority and without assuming any trusted setup like a common random string?

Let me start with a very simple approach and see what goes wrong. Say you had a trusted party that was willing to do ideal commitments for you — there is a guy in the sky who is happy to do ideal commitments. Given this trusted party, we can do everything as in CLOS. Can we securely realize this trusted party? The answer to that is also yes. So can we compose the two? We have a trusted party that, if willing to do commitments, lets us do everything, and we have a protocol that securely realizes this commitment. Can we put things together and get a protocol directly? Well, we do have a composition theorem by Canetti which should allow us to do this. But surprisingly, a direct application of these results fails. I want to stress here that, with the correct formalization, all the individual results are correct, and still the composition doesn't go through. This is because of a subtle issue that was overlooked and kind of thought of as obvious.

So let's see why. This is the punchline I want to convey. Suppose you have a two-party protocol that is adaptively secure — pick your favorite adaptively secure two-party protocol — and you want to execute it in a setting where multiple other parties are present in the system.
So I have a protocol that was secure, but now I want to execute it in a setting where there are other parties in the system who don't even talk. They don't talk, but they have secret inputs — they have some secret state. Then this protocol that you started with, which was adaptively secure, will fail to be adaptively secure in the setting where these other parties are present.

To see why, look at your favorite two-party protocol with a black-box simulator. At some point it is going to rely on rewinding: we are not in the trusted-setup world, so it has to rewind at some point. And since we are in this n-party setting, the adversary now has the ability to corrupt these quiet parties. They do have secret states, so it learns something when it corrupts them. This case was never handled in the proof for the two-party case. And as we'll see, this is not just a small issue that was missed in the proof — it's a fundamental problem, and there is an impossibility result showing that you cannot argue security of such a protocol.

So to summarize our results. The first result is that you cannot construct an adaptively secure protocol that is round-efficient: in particular, for an n-party functionality, you cannot have an adaptively secure protocol with o(n / log n) rounds and black-box simulation. This result holds even if erasures are allowed, as long as there are no erasures on the inputs of parties — you cannot erase your input, but you can erase anything else. And these results are not only for the setting of super-polynomial simulation; they hold in the most basic setting. There is also a positive possibility result, which is round-inefficient. So there are no round-efficient protocols with black-box simulation, but non-black-box techniques are not inherently round-inefficient.
So the protocol is computationally inefficient, but the round efficiency is better, so it's OK. And the round efficiency here is as good as the semi-honest setting. In particular, that means if you're willing to assume that at least one party remains honest, or that honest parties can erase, then you can get constant rounds. But if you don't want to make either of those assumptions, then you can do something which is linear in the depth of the circuit being evaluated. Long story short, the key point is that you can do as well as you could in the semi-honest setting.

OK. So I want to give some intuition behind the impossibility result. The point is that I will demonstrate an actual real-world adversary in the setting I described. We'll have two parties which actually talk, and a bunch of other parties which never talk but do have secret states. The first party has input x1, the second has input x2, and so on — all parties have inputs, but only these two parties will ever speak in the conversation. I'll demonstrate a real-world adversary for which you cannot construct a simulator. This adversary obtains the secret states of the quiet parties as, let's say, auxiliary input in some way.

So let's consider the real-world execution. I'm going to have an adversary that, after receiving each message, corrupts some super-logarithmic number of the quiet parties. Say, without loss of generality, the first party sends the first message. Then our adversary decides to corrupt some ω(log n) parties — say party 4, party 6, and party n-1. At this point, the inputs, or secret states, of these parties are handed over to the adversary.
It checks whether the values that have been handed over are consistent with what it obtained as auxiliary information. If they're not, it aborts; otherwise it continues, behaves honestly, and sends its next message. It does this after each step: after receiving the second message from the first party, it again corrupts some ω(log n) parties, obtains their secret states, and so on.

Now, it's not hard to see that because we have o(n / log n) rounds and ω(log n) parties corrupted per round, if you set the parameters appropriately, at most n/2 parties are ever going to be corrupted in this execution.

So I have described this adversary; this is a real-world attack. I'm now going to demonstrate that you cannot construct a black-box simulator for this adversary. Let's see how. What's going to happen in the ideal world? We have the simulator: on one hand he talks to the ideal functionality, and on the other hand he talks to this real-world adversary. He sends the first message to the adversary. The adversary at this point is going to, just like in the real world, corrupt some super-logarithmic number of parties — again, say, party 4. It is handed over those values, checks whether everything is consistent, and goes on. This process is repeated just like in the real world.

So what happens if our simulator tries to rewind? Recall that a black-box simulator has to rewind at some point. So suppose it does try to rewind: at some point it changes a message that it sent in what I'll call the main execution — it replaces a message from the main execution with something else. Now our adversary, consistent with its behavior, is again going to corrupt some ω(log n) parties.
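The parameter counting above can be checked concretely. Here is a small sketch of the adversary's corruption schedule with constants of my own choosing — not code from the paper: with r = o(n / log n) rounds, one can pick a per-round corruption count k = ω(log n) so that r·k ≤ n/2.

```python
import random

def corruption_schedule(n: int, rounds: int, seed: int = 0):
    """After each of `rounds` messages, corrupt k fresh random quiet
    parties, where k is super-logarithmic yet rounds * k <= n / 2."""
    k = n // (2 * rounds)   # rounds = o(n/log n)  =>  k = omega(log n)
    rng = random.Random(seed)
    corrupted = set()
    for _ in range(rounds):
        fresh = rng.sample(range(n), k)  # their states go to the adversary
        corrupted.update(fresh)
    assert len(corrupted) <= n // 2      # never more than half corrupted
    return corrupted, k

corrupted, k = corruption_schedule(n=10_000, rounds=50)
# rounds * k = 50 * 100 = 5000 = n/2, while k = 100 >> log2(10000) ~ 13
```

The point of the assertion is exactly the claim in the talk: even though the adversary corrupts a super-logarithmic batch after every message, the total never exceeds n/2, so at least half the parties stay uncorrupted in the main execution.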
When it corrupts these parties, say party 5 is one of the parties which was never corrupted in the main execution. Recall that at most n/2 parties in total get corrupted in the main execution, so there are at least n/2 parties which are never corrupted there. And since the ω(log n) parties corrupted when a rewind happens are randomly chosen, with overwhelming probability at least one of them is a party that was not corrupted in the main execution. What that means is that our simulator has to provide the internal state, the input, of a party that was never corrupted in the main execution — and our simulator is stuck. It cannot rewind at this point. So the simulator cannot rewind at any point, and that allows us to conclude that you cannot have black-box simulation in this setting.

So what I just argued is that you cannot have a black-box simulator for a round-efficient adaptively secure MPC protocol. You can circumvent this problem by having a very large number of rounds. Think of a protocol that has more than n rounds. Now, just by the pigeonhole principle — there are only n parties — there is going to be at least one round where no party is corrupted: either all parties are corrupted before that point, or there is some specific round with no corruption. At that point our simulator can focus all its efforts and rewind there. So the crucial point is that there exists a place where it can actually rewind. I'm not going to go into the details of how this works; you can look at the paper for that. There are also issues of non-malleability here, but again, they can be handled using known techniques.

I'm going to spend just one slide on how you can get a constant-round protocol using non-black-box simulation.
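Both counting arguments just made are elementary and can be checked numerically. This sketch is my own illustration, not from the paper: the first function bounds the chance that a rewind's fresh corruptions all avoid the uncorrupted half, and the second finds an uncorrupted round by pigeonhole.

```python
def rewind_escape_prob(k: int) -> float:
    """Probability that k fresh uniformly random corruptions all land
    inside a set of at most n/2 already-corrupted parties: <= (1/2)^k.
    For k = omega(log n) this is negligible, so some rewind corruption
    hits a party never corrupted in the main execution."""
    return 0.5 ** k

def quiet_round(num_rounds: int, corruptions_per_round: dict) -> int:
    """With more than n rounds and at most n corruptible parties, at
    most n rounds can see a new corruption, so pigeonhole guarantees
    a round where nobody is corrupted -- a safe place to rewind."""
    for r in range(num_rounds):
        if not corruptions_per_round.get(r):
            return r
    raise AssertionError("impossible once num_rounds exceeds n")

n = 10_000
schedule = {r: [r] for r in range(n)}   # worst case: one corruption per round
safe = quiet_round(n + 1, schedule)     # round n sees no corruption
```

This is exactly why the positive black-box result needs many rounds: with more than n rounds the simulator is guaranteed a corruption-free round at which rewinding cannot force it to invent the state of an uncorrupted party.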
And I'm going to leave the details out. The starting point is that we cannot rewind the adversary, so the only alternative, in some sense, is a simulator that does not rewind and has a straight-line simulation strategy. The only technique we have at hand for that is the non-black-box simulation technique of Barak. The problem is that Barak's protocol is not adaptively secure. So how do we get it to work? Well, you can look at the paper for that.

So let me conclude. CFGN constructed the first adaptively secure MPC protocol in the setting of honest majority. They left open the question of doing it in the setting where you don't have honest majority. We resolve this question and show that, in fact, non-black-box simulation is essential for achieving this.