Hello everyone, and welcome to this presentation. Today I'm going to talk about the work "Adaptive Security of Multi-Party Protocols, Revisited". This is joint work with Martin Hirt and Ueli Maurer. In the problem of multi-party computation, there is a set of parties who would like to jointly perform a computation. One can think of the parties as sitting in front of their own computing devices, which can be laptops, mobile phones, whatever devices they have. Each device is connected to some communication network, which for example can be the internet. Each party can give inputs to and obtain outputs from their respective device. The goal of the protocol is to construct an ideal computer that performs some computation, and that is completely resilient and trustworthy even when some of the devices are hacked. The parties whose computers were not hacked are typically referred to as honest parties, while those that had their computer hacked are referred to as corrupted parties. Typically, although it does not need to be this way, we care about guarantees for the honest parties, those that didn't have their computer hacked: we would like the honest parties to obtain the correct output of the computation, and moreover we usually also want their inputs to the computation to remain as private as possible. In this work we consider the setting of adaptive corruption. This is a very challenging setting where the parties' computers can be hacked, or corrupted, during the execution of the protocol. And not only that, the corruption can be based on any information gathered so far: this can be information that the adversary observed from the network, or information that is leaked from the internal state of a computer. The question is: what can we guarantee in this challenging setting? Very intuitively, the security guarantee should depend on the set of parties that are corrupted.
So the more parties are corrupted, the less is guaranteed, and typically all guarantees are lost when the number of corruptions exceeds a certain amount. Let us first review the current standard notion of adaptive security. I'm going to represent the computers that execute the protocol with round objects and the network as a square object. In the standard notion, one first specifies an ideal system that I call the system MPC, which for concreteness we can think of as simply computing a function: it gets an input at each interface, computes a function over these inputs, and gives back an output at each interface. We would like to construct such an ideal system, but in fact it is easy to see that we don't quite get this ideal system MPC. On the left, the adversary learns some information: he learns the internal states of the parties when corrupting them, and maybe he learns something from the network. On the right, there is no network and there are no internal states of computers. So what we get is an ideal system that also involves a system sigma, which is usually called the simulator. The simulator is in charge of reproducing any interaction with the adversary: it will reproduce any adversarial information while having access only to the data, the inputs and outputs, of the corrupted parties at that point in time. For example, if parties 3 and 4 are corrupted, the simulator must reproduce whatever the adversary learns from the information that has been leaked on the wires of these corrupted parties, for example their inputs and maybe their outputs, if they got any. Therefore the honest parties benefit from the security guarantee that any information leaked to the adversary can be deduced solely from the data of corrupted parties. Let's go into it in a bit more detail.
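To make the two systems concrete, here is a minimal Python sketch of the ideal world. All names here are hypothetical illustrations, not from the talk: an ideal system MPC that computes one function over all inputs, and a simulator interface that only ever sees the inputs and outputs of the currently corrupted parties.

```python
# Toy model of the ideal world: the system MPC computes a function over
# all parties' inputs, while the simulator can only read the data
# (inputs/outputs) of parties that are corrupted at that point in time.

def ideal_mpc(inputs, f):
    """Ideal system: one function evaluation over all inputs,
    returning one output per party (here the same output for all)."""
    y = f(inputs)
    return {party: y for party in inputs}

def simulator_view(inputs, outputs, corrupted):
    """What the simulator may read: only the wires of corrupted parties."""
    return {p: (inputs[p], outputs.get(p)) for p in sorted(corrupted)}

inputs = {1: 3, 2: 5, 3: 7, 4: 2}
outputs = ideal_mpc(inputs, lambda ins: sum(ins.values()))

# If parties 3 and 4 are corrupted, the simulator sees only their data.
view = simulator_view(inputs, outputs, corrupted={3, 4})
print(view)  # {3: (7, 17), 4: (2, 17)}
```

Anything the adversary learns in the real execution must be reproducible from `view` alone; that is the content of the standard simulation-based guarantee.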
So in the beginning, honest parties define their inputs, and during this input stage the adversary can corrupt, for example, party 3. The simulator then gets access to P3's input and has to explain its internal state. The simulator also has the capability to substitute P3's input and decide that it is whatever he wants. Based on the learned information, the adversary maybe decides to corrupt P4, and the same happens with the simulator. And of course, if any information is leaked from the network, the simulator also has to produce those messages. At some point all the inputs are defined, both from honest and corrupted parties, and at this point the MPC system computes the function over the defined inputs and gives back the outputs. This stage takes some time, and typically in the left system the adversary learns the outputs of the corrupted parties first; based on that, he can corrupt further parties, and in the right system the simulator has to output all messages that are leaked to the adversary, and so on. The logic is that anything the adversary can provoke in the left system, he can also achieve in the right system. While this standard notion of adaptive security is intuitive, it turns out to be technically too strong, or apparently too strong. One of the main obstacles here is the so-called commitment problem. Let me explain it with a very simple example. Consider a setting with two parties connected via an authenticated channel. Let's say that party 1 inputs x1 to the protocol and commits to its input using some perfectly hiding commitment scheme, some encryption or something. Since the channel is authenticated, the adversary sees the commitment b, and therefore the simulator has to come up with a fake commitment b'. The point here is that b' has to be generated without knowledge of the input x1, because at this point the party is honest.
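Here is a toy sketch of the simulator's position in this example. I use a hypothetical hash-based commitment, which is binding and only computationally hiding, unlike the perfectly hiding scheme in the example, but the simulator's dilemma it illustrates is analogous.

```python
import hashlib

def commit(value, randomness):
    """Toy commitment: hash of value and randomness. Binding and
    computationally hiding; a stand-in for the scheme in the example."""
    return hashlib.sha256(f"{value}|{randomness}".encode()).hexdigest()

# Real execution: party 1 commits to its actual input x1.
x1, r = 42, "real-randomness"
b = commit(x1, r)

# Simulation: party 1 is still honest, so the simulator does not know x1
# and must produce a fake commitment to some dummy value instead.
r_fake = "simulator-randomness"
b_fake = commit(0, r_fake)

# If party 1 is later corrupted, the simulator must exhibit randomness
# that opens b_fake to x1. Its own randomness does not do this, and
# binding means no such randomness can be found efficiently.
assert commit(x1, r_fake) != b_fake
```

The last assertion is exactly the inconsistency: the state the simulator would have to hand over does not explain b_fake as a commitment to x1.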
But now the problem is that the adversary can adaptively corrupt party 1 and learn its internal state, and in particular learn the way the commitment was created. The problem is that the simulator cannot consistently explain the fake commitment b' with respect to the input x1. Note that this protocol is technically, or apparently, insecure under the standard notion, no matter what happens next: even if we execute an extremely secure protocol afterwards, the mere fact that the party publishes a commitment to its private input makes the protocol insecure. And this is the case even if the commitment is perfectly hiding. This should not be the case, because at least intuitively, publishing b cannot harm the protocol in any way. This technical issue arises in many scenarios where some form of encryption or commitment is used. As a result, current typical protocols use additional techniques that are designed specifically to overcome this technical challenge. For example, they use secure erasable memory, which incurs an additional assumption, or they use non-committing encryption or equivocal tools, which typically incur an important efficiency loss. In this work we ask whether one can modify the definition and find a natural definition of security against an adaptive adversary that is not subject to the commitment problem. The answer is yes, and here is how we do it. When one thinks about adaptive security, a very natural idea is to specify a guarantee for each possible set of honest parties. At any point in the protocol, the guarantees that we have depend on the current set of honest parties. Here is a picture where we consider three parties, P1, P2 and P3. At the beginning, all parties are honest, so we benefit from all these guarantees. Then at some point maybe P1 becomes corrupted, and therefore we lose some of the guarantees.
But all those guarantees corresponding to sets that are still honest remain, and the same for P2. At this point only P3 is honest, and therefore there are only these two guarantees. Maybe in the end everyone is corrupted, so we are left with a single guarantee. At a high level, it seems clear that this is what adaptive security is about: somehow we give guarantees, and the more parties are corrupted, the less is guaranteed. We will now explain how, with this view, one can overcome the commitment problem. Intuitively, we give a guarantee for each set X that provides privacy to the parties in X. The guarantee for X says: the view of the adversary depends only on the inputs of the parties that are not in X, regardless of whether they are corrupted or not. Technically, this means that the simulator can read the inputs of parties not in X, as I said, regardless of whether they are corrupted. One can think about many different variants, but this seems to be one of the strongest guarantees to consider that at the same time allows us to overcome the type of commitment problem that I explained before. So why do we overcome the commitment problem? First, observe that the guarantee for X is dropped as soon as any party in X gets corrupted. Technically, what we are saying is that the simulator stops at that point, and the state of the party doesn't need to be explained, so there is no commitment problem here. Moreover, if a party not in X gets corrupted, the simulator can explain its internal state, because it knew its input beforehand. Let's go back to the slide with the guarantees. We have that the guarantee for X intuitively says: any information leaked to the adversary so far depends only on the inputs of parties not in X. So if we pick the largest such set X, what we are saying is that all information that the adversary has learned so far depends only on the data of the corrupted set at that point.
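The bookkeeping behind this picture can be sketched in a few lines of Python (the names are hypothetical): there is one guarantee per subset X of parties, and the guarantee for X survives exactly as long as every party in X is still honest, so at any time the largest surviving set X is the complement of the corrupted set.

```python
from itertools import combinations

def all_subsets(parties):
    """One candidate guarantee per subset X of the parties."""
    return [frozenset(c) for n in range(len(parties) + 1)
            for c in combinations(sorted(parties), n)]

def surviving_guarantees(parties, corrupted):
    """The guarantee for X is dropped as soon as any party in X is
    corrupted; all guarantees for still-honest sets X remain."""
    honest = set(parties) - set(corrupted)
    return [X for X in all_subsets(parties) if X <= honest]

parties = {1, 2, 3}
# All parties honest: all 8 guarantees hold.
assert len(surviving_guarantees(parties, corrupted=set())) == 8
# Only P3 honest: two guarantees remain, for {} and for {3}.
assert len(surviving_guarantees(parties, corrupted={1, 2})) == 2
# Everyone corrupted: a single guarantee, for the empty set, is left.
assert len(surviving_guarantees(parties, corrupted={1, 2, 3})) == 1
```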
And this holds for any point in time. For example, if we fix some time T, we say that any information leaked up to time T can be derived from the data of the parties corrupted at time T. Of course, this is in contrast to the standard definition: the point is that the different statements here do not need to be consistent with each other, and this is exactly what allows us to overcome the commitment problem. Now the question is how we can express this in such a way that it composes, for example. In our work we choose the constructive cryptography framework to express such guarantees, instantiated to the multi-party setting. Constructive cryptography is a theory of resources, with clear, well-defined algebraic rules. The resources are the elements that we care about: those that we explicitly assume, like the network, and those that we would like to construct, like the ideal computer. These are simply systems with interfaces to the parties considered in the setting. Converters are the round objects; they are the protocol engines, the recipes that we use to transform resources into resources. We can then combine resources and converters to construct further resources. In fact, we will understand these as sets of resources, or resource specifications. Considering sets is natural, I think, because it allows us to talk in terms of guarantees: maybe we want to construct some sort of program that computes some function within a given accuracy and time limit; maybe the network that we assume has some delay between, say, 5 and 10 seconds; and so on. We don't necessarily want to talk about fully defined concrete systems; we want to talk about guarantees. The construction notion says: the protocol pi, which is all these protocol engines, applied to any resource in specification R, yields a resource in specification S. Specification R includes any resource that we would be willing to assume.
And S includes any resource that we would be happy to construct. So how can we say this in the framework? We say that for each set X, the protocol attached to any resource in R satisfies the guarantee for X. And we can elegantly capture all these guarantees at the same time within one single construction statement, simply by specifying the ideal specification as the intersection of the specifications. Here are a few lemmas about our definition. First, our definition lies strictly in between the current notions of adaptive and static security. It overcomes at least the type of commitment problem that I explained before, and is therefore strictly weaker than the standard adaptive security definition. But it is also nice that many of the protocols that are believed to be secure in practice, yet are not secure under standard adaptive security, are secure under our definition. This is the case for the famous CDN protocol and for a static version of the CLOS protocol. Second, our guarantees are apparently strong, in the sense that many of the typical examples that separate static from standard adaptive security actually also separate static security from our new notion. In conclusion, we opened a new space of security definitions with the idea of giving a guarantee for every set of so-far-honest parties. We showed that this viewpoint allows us to give a concrete instantiation that overcomes a particular commitment problem, while at the same time achieving strong adaptive security guarantees. Here is the eprint version, and thank you very much for listening.