Thank you, Alessandra. Let's start with the motivation. Say we have a protocol that takes an expected constant number of rounds, and think of the geometric distribution. What's the expected running time of n parallel executions of that protocol? It turns out it's Theta(log n). Just to get a feeling for why, think of coin tossing. Say we want to toss a coin until it lands on heads. We expect that two coin tosses will be sufficient, because the probability of getting heads is one half. What happens if we toss n coins in parallel? Then in the first round, half of the coins will come up heads and the other half will have to be tossed again, and so on, and we get a logarithmic blow-up. Why do we care about protocols that take an expected constant number of rounds? Well, most secure protocols use broadcast, and fast protocols for broadcast take an expected constant number of rounds. That means that when we have a broadcast round where all parties talk at the same time, we can't simply run these protocols in parallel, otherwise the round complexity will blow up. And these protocols have a probabilistic-termination property: we don't know when they are going to terminate. In addition, there is no simultaneous termination; parties may terminate the protocol at different rounds. And this is kind of a nightmare for composition. We want the broadcast protocol to be secure regardless of the protocol that uses it, regardless of the environment in which it runs. So we want composition, and known solutions, for parallel broadcast for example, don't give us composition guarantees. The main obstacle is how to simulate this property of probabilistic termination, and that's what we do in this work: we study universal composability of general protocols with probabilistic termination.
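Just to make that intuition concrete, here is a small simulation sketch I'm adding for illustration (the function names are mine, not from the talk): the running time of n parallel executions is the maximum of n geometric random variables, which grows like log n.

```python
import random

def rounds_until_heads(p=0.5):
    """Toss a coin until it lands heads; return the number of tosses
    (a geometric random variable with success probability p)."""
    r = 1
    while random.random() >= p:
        r += 1
    return r

def parallel_rounds(n, p=0.5):
    """Rounds until ALL n parallel executions have finished: the maximum
    of n independent geometric random variables."""
    return max(rounds_until_heads(p) for _ in range(n))

def avg_parallel_rounds(n, trials=2000):
    """Empirical average of the parallel running time over many trials."""
    return sum(parallel_rounds(n) for _ in range(trials)) / trials
```

With p = 1/2, a single execution averages about 2 rounds, while 256 parallel executions average close to log2(256) = 8 plus a small constant, which is exactly the logarithmic blow-up.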
And we give a framework that allows you to design very simple protocols, in the modular-composition style you would like, and compile them into protocols that handle all the technical issues involving probabilistic termination. We give a few applications: we show how to get perfect security and adaptive security for Byzantine agreement and parallel broadcast with protocols that run in an expected constant number of rounds, and also for secure function evaluation with protocols whose running time does not depend on the number of parties. And just to emphasize: we give examples with perfect security, but our results also apply to statistical or computational security. Now that we've talked about the end, we can go back to the beginning. We are talking about multi-party computation. We have a set of parties who want to compute a task, say a joint function, and we think of the dream scenario where we have an ideal functionality, a trusted party that does the computation. We want the protocol to emulate this ideal functionality, in the sense that no external distinguisher, or environment, that gives inputs to the parties and gets the outputs can tell whether they ran the protocol or talked to the trusted party. In terms of security, we say that every attack on the protocol can also be simulated in this ideal model of computation. We have several communication models in traditional MPC. In the point-to-point setting, parties are connected in a full communication graph and we consider private communication; this is the secure message transmission functionality in UC. In the broadcast model, parties can talk over a bulletin board: I can post a message, and everybody can see the message. In both models we consider synchronous communication, which means that the protocol proceeds in rounds. This can be achieved in UC if we have bounded delay on the channels and also a global clock that synchronizes the parties.
What do we know about feasibility of computation if we have a broadcast channel? Well, we have very good protocols: the classic results from the 80s give us perfect security and adaptive security if sufficiently many parties are honest, and they provide concurrent composition, so you can compose them in any arbitrary way and security is preserved. The round complexity only depends on the depth of the circuit that we compute; it doesn't depend on the number of parties. There have been many follow-up works; we can even get the same properties using only one round in which we use broadcast. So we ask whether we can get the same security and the same efficiency if we only communicate over point-to-point channels, where parties don't have access to a broadcast channel. What does a protocol in the broadcast model look like? We have certain rounds where parties use the point-to-point channels, and everybody talks over point-to-point at the same time; this is parallel secure message transmission. And we have some rounds where parties talk over the broadcast channel together; this is parallel broadcast. We want to instantiate this parallel broadcast in a secure manner to get a protocol that runs only over point-to-point channels. A standard methodology for getting a broadcast protocol, if we have an honest majority, is to use Byzantine agreement. Each party has an input, and we require two properties from the Byzantine agreement protocol: agreement says that all parties output the same value, and validity says that if all parties start the protocol with the same input, this will be the output. To get broadcast from Byzantine agreement, the sender simply sends its input to all the parties and they run Byzantine agreement on these values; if we have an honest majority, that is a broadcast protocol. We also know how to instantiate Byzantine agreement and broadcast using deterministic protocols.
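The sender-then-agree construction can be sketched in a few lines (an idealized model I'm adding for illustration; the n - t rule and the adversarial default below are placeholders, not the actual protocol):

```python
def idealized_ba(values, n, t):
    """Idealized Byzantine agreement on the parties' received values:
    if at least n - t parties hold the same value, that's the output
    (validity); otherwise some adversarially chosen value wins."""
    for v in set(values):
        if values.count(v) >= n - t:
            return v
    return min(values)  # placeholder for an adversary-chosen output

def broadcast_from_ba(values_sent, n, t):
    """Broadcast from BA: the sender distributes one value per party
    (a corrupt sender may send different values); parties then run BA
    on what they received, so everyone outputs the same value."""
    return idealized_ba(values_sent, n, t)
```

An honest sender sends the same value to everyone, so agreement plus validity gives exactly the broadcast guarantee.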
So these are protocols that have deterministic termination: we know in which round they are going to finish, and all honest parties finish the protocol together in the same round. We know how to get them with perfect security and adaptive security, and also with UC composition. The drawback is that the round complexity depends on the number of parties: the number of rounds is at least the number of corrupted parties we can tolerate, and this is inherent; we can't get faster protocols using this technique. So one round of parallel broadcast turns into many, many rounds over the point-to-point channels. Rabin and Ben-Or showed how to get around this impossibility by using randomization, giving randomized Byzantine agreement. Feldman and Micali showed in the 80s a protocol that computes binary randomized Byzantine agreement from scratch, without any setup assumptions. The idea is that the protocol proceeds in phases. In each phase each party holds a bit; initially it is its input bit. At a high level, each phase looks like this. At the beginning the parties vote: they check whether sufficiently many parties have the same bit, and if so they terminate; the protocol completes at that phase. Otherwise they toss an oblivious coin. Oblivious means that with constant probability the coin toss succeeds and all parties get the same bit; in that case they will complete that phase. But with the complementary probability we have no guarantees, and the parties don't know whether the coin toss succeeded or not; this is the oblivious part. So with probability 1 - p the adversary can basically decide what happens in that phase. Either the parties do not reach agreement and move on to another phase, that's one option, or the adversary can make some honest parties think that they have reached agreement, in which case only a subset of the parties terminates at that phase.
In this case it is guaranteed, however, that in the next phase the remaining honest parties will terminate with the same value. So, focusing on that again: the adversary can make some parties finish fast, in one iteration, and all the remaining parties will finish in another iteration. This is the non-simultaneous termination. Looking at it again, we have probabilistic termination, we don't know when the protocol will end, and we don't have simultaneous termination. On the good side, the expected round complexity is constant, because each phase succeeds with constant probability, and we have a constant window of difference between the termination rounds of honest parties. So this is good. We also have many improvements of this protocol: we can make the agreement multi-valued, we can get perfect security, and we can also get a parallel variant of this protocol, by Ben-Or and El-Yaniv. So it looks like we have everything we want. What's missing? The problem is that all of these broadcast protocols are proven secure using game-based definitions, and we want composition. Composition follows from simulation-based definitions. Katz et al. recently showed a framework in which we can prove security of synchronous protocols in UC, but only if they have deterministic termination, and many subtle issues with probabilistic termination are not captured in that framework. That's what we do here: we extend this framework to deal with synchronous protocols that have probabilistic termination. In the remainder of the talk we'll go over this framework. We'll start with the first part, defining what probabilistic termination means in UC; there we get protocols that are secure if all parties start at the same time. In the next part of the talk we'll talk about non-simultaneous start and how to deal with that problem. And finally we'll briefly mention some applications.
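As a rough illustration of the phase structure just described, here is a simplified sketch (my own idealization: honest parties only, an n - t voting threshold, and the coin supplied from outside; not the actual Feldman-Micali protocol):

```python
def fm_phase(bits, n, t, coin):
    """One idealized phase over the honest parties' current bits: vote
    first; if at least n - t parties hold the same bit, adopt it and
    decide this phase; otherwise everyone falls back to the common coin."""
    for b in (0, 1):
        if bits.count(b) >= n - t:
            return [b] * n, True            # decided in this phase
    return [coin] * n, False                # keep going with the coin

def run_agreement(bits, n, t, coins):
    """Iterate phases until a decision; with a coin that succeeds with
    constant probability, the expected number of phases is constant."""
    for phase_no, coin in enumerate(coins, start=1):
        bits, decided = fm_phase(bits, n, t, coin)
        if decided:
            return bits[0], phase_no
    raise RuntimeError("ran out of coins")
```

Note how validity falls out: a unanimous input already clears the voting threshold, so the protocol decides in the very first phase without using the coin.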
So the main building block in our framework is an object called a canonical synchronous functionality, or CSF. A CSF is a very simple ideal functionality. The main idea is to separate the function that we want to compute from the round complexity it takes to compute it. A CSF has only two rounds: an input round and an output round. In the input round all parties send their inputs, and in the output round they get the output. Each canonical functionality is parameterized by two functions: the function that we want to compute, which takes inputs from all the parties and also from the adversary, and a leakage function that reveals information about the inputs to the adversary. So at the beginning the parties send their inputs, the adversary gets the leakage and sends its own input, and in the output phase the parties simply request the output and get it. It's a very simple functionality: parties send inputs and later ask for the output. It's simple, but it's very strong; we can model many tasks using a canonical synchronous functionality. If you want secure message transmission, where party i sends a message to party j, the function is simply the projection from the i-th coordinate to the j-th coordinate. If you want party i to broadcast its message to all parties, the function is the projection from the i-th coordinate to all coordinates. And secure function evaluation simply ignores the input from the adversary and computes the function that we want. But using a CSF we can also model tasks that are not functions, like Byzantine agreement: if n minus t parties have the same input, this will be the output; otherwise we let the adversary decide what the output will be. That's very useful for simulating many broadcast protocols. The leakage in Byzantine agreement, for example, is the entire input vector, because we don't require privacy from Byzantine agreement.
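To make the parameterization concrete, here are hypothetical encodings of the function f for these examples, with the party inputs as a list xs and the adversary's input a (the signatures are my illustration, not the paper's notation; None marks an empty output):

```python
def f_smt(xs, a, i, j):
    """Secure message transmission from party i to party j: project the
    i-th coordinate onto the j-th; all other parties get nothing."""
    n = len(xs)
    return [xs[i] if k == j else None for k in range(n)]

def f_broadcast(xs, a, i):
    """Broadcast from party i: project the i-th coordinate onto all
    coordinates."""
    return [xs[i]] * len(xs)

def f_ba(xs, a, t):
    """Byzantine agreement: if at least n - t parties share an input,
    that's everyone's output; otherwise the adversary's input a decides."""
    n = len(xs)
    for v in set(xs):
        if xs.count(v) >= n - t:
            return [v] * n
    return [a] * n
```

Secure function evaluation would be the same shape with a ignored, and the leakage function for Byzantine agreement would simply return xs itself.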
In the paper we have many examples of instantiations of canonical synchronous functionalities. Once we have that object, we can define synchronous normal form protocols, or SNF protocols. These are protocols as we would like them to be: all parties are synchronized throughout the execution, all hybrids are CSFs, and in each round all parties call the same hybrid. So these are protocols like the ones we know and love, and we know how to deal with such nice protocols. As an example, think of the randomized Byzantine agreement protocol of Feldman and Micali. The SNF form of that protocol looks like this; this is the skeleton of the hybrids that are invoked during the protocol. At the beginning we use secure message transmission to distribute the inputs of all parties, and then we have the phases: we toss an oblivious coin and do the voting, phase after phase, until we terminate. The problem is that most functionalities can't be instantiated using a two-round protocol, and a canonical functionality takes exactly two rounds. So we define wrappers around a CSF that simply extend the rounds. Each wrapper has a distribution over the termination round: it samples the round at which the computation will complete, simply forwards all input messages to the CSF, and ignores output requests until the computation gets to the termination round. Again, this is a simple wrapper without much logic inside; it simply stretches the rounds of the ideal computation, and it is very similar to the framework of Katz et al. for functionalities with deterministic termination. When we deal with probabilistic termination it's a bit more delicate. We still have a distribution that samples the termination round, but now this is only an upper bound on the rounds of the computation, and in addition we need to give the adversary the ability to decide which parties terminate before others, to model non-simultaneous termination.
So again, the wrapper samples the termination round and sends it to the adversary, forwards the input messages to the canonical functionality, and then starts getting output requests from the parties. The adversary can say: I want party i to get its output now. In this case the wrapper gives party i the output, but not the other parties, so we have this non-simultaneous property in the wrapper. A party gets the output either when the adversary says "give it to this party now" or when the computation reaches the termination round, so we also have guaranteed termination. Where do we stand with all these definitions? We can now show, without too much effort, that the randomized Byzantine agreement protocol of Feldman and Micali implements the wrapped Byzantine agreement functionality, where D is the geometric distribution over the phases; the protocol works in the parallel secure message transmission and oblivious coin hybrid model, and it is secure assuming all parties start the protocol in the same round. Pictorially, this is how it looks. There is this issue that we assume the parties start the protocol in the same round, and if we want composition this is a bit tricky. That's the second part of the framework: what happens when we have fast parties and slow parties? If we have two calls to broadcast, one after the other, there is a window in between where the two executions can overlap. This is very bad for security, because fast parties will give out information about the next execution before the slow parties have started it. We have solutions for dealing with this problem, by Lindell et al., by Ben-Or and El-Yaniv, and by Katz and Koo, but all of these solutions focus on broadcast and Byzantine agreement, and all of them use game-based definitions of security. We show a generic compiler that takes any synchronous normal form protocol and compiles it into a protocol that doesn't need a simultaneous start.
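A minimal sketch of such a probabilistic-termination wrapper, under my own simplified interface (the real UC functionality is more careful about activations and leakage), might look like this:

```python
class ProbTermWrapper:
    """Wraps a CSF output function: samples a termination round from a
    given distribution (an upper bound, leaked to the adversary) and lets
    the adversary release individual parties' outputs early."""

    def __init__(self, n, sample_round, compute_output):
        self.n = n
        self.term_round = sample_round()     # leaked to the adversary
        self.compute = compute_output        # the wrapped CSF function
        self.inputs = {}
        self.released = set()                # parties released early
        self.round = 0

    def give_input(self, i, x):
        self.inputs[i] = x                   # forwarded to the CSF

    def adv_release(self, i):
        """Adversary: 'party i gets its output now' (non-simultaneous)."""
        self.released.add(i)

    def advance_round(self):
        self.round += 1

    def fetch_output(self, i):
        """Output request: granted if released early, or once the sampled
        termination round is reached (guaranteed termination)."""
        if i in self.released or self.round >= self.term_round:
            return self.compute(self.inputs)[i]
        return None                          # ignored for now
```

Passing a constant distribution recovers the deterministic-termination wrapper as a special case.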
So the compiler is parameterized by a slack parameter c, and we assume that parties start within c + 1 rounds of each other. Parties are no longer synchronized throughout the execution of the protocol, so the hybrids are now called concurrently, and we deal with that as well. The compiler retains the round complexity of the original protocol, which is a good feature. The main idea is to make the overlap between the executions meaningless, so we expand each round to 3c + 1 rounds. These ideas were also used in the previous solutions, but we had to adjust them slightly to make the simulation go through. In the first 2c + 1 rounds the party listens and receives messages from the other parties; in round c + 1 it sends its own message; and then it waits for another c rounds without even listening, it simply waits. To get a feeling for why that works, think of the example where c equals 1, so all parties start within two rounds of each other. We have a fast party on the left and a slow party on the right. In the first round the fast party listens. In the second round the slow party starts by listening, and the fast party sends its message, so it arrives at its destination. In the third round the fast party listens and the slow party sends its message, and then the fast party does nothing for one round, after which it has completed the first communication round of the original protocol. Next, the slow party does nothing while the fast party has already started the next round; this is the concurrency issue. And we can see that messages are delivered as they should be. Each party works in a locally sequential manner, so we have local sequential composition of the protocols, but globally we have concurrent composition. That works well as long as the protocol doesn't introduce additional slack, but what happens with probabilistic-termination protocols?
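The c = 1 walkthrough generalizes; here is a small check I'm adding (slot numbering and function names are mine), verifying that with the listen/send/idle schedule every message lands inside the other party's listening window whenever the start offset is at most c:

```python
def expanded_schedule(c):
    """One original round expands to 3c + 1 slots: listen during the
    first 2c + 1 slots, send in slot c + 1, then stay idle for c slots."""
    schedule = []
    for s in range(1, 3 * c + 2):
        if s == c + 1:
            schedule.append("send")          # sends (while listening)
        elif s <= 2 * c + 1:
            schedule.append("listen")
        else:
            schedule.append("idle")
    return schedule

def both_delivered(c, offset):
    """Two parties starting `offset` <= c slots apart: check that each
    party's send slot falls inside the other's listening window."""
    fast_listen = range(1, 2 * c + 2)                 # global slot numbers
    slow_listen = range(1 + offset, 2 * c + 2 + offset)
    return (c + 1 in slow_listen) and (offset + c + 1 in fast_listen)
```

The trailing idle slots are what make the overlap harmless: by the time a party moves on to the next original round, every peer has already heard its message for the current one.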
Each protocol can bring additional slack of c rounds, and with the solution from the previous slide that can blow up the round complexity exponentially. So we use a technique that goes back to Bracha to reduce the slack between the parties back to one round after each probabilistic-termination hybrid, and we introduce another wrapper that deals with non-simultaneous start; in the ideal model it simply extends the rounds appropriately. Now we can state the composition theorem: if we have an SNF protocol that implements some wrapped functionality when all parties start in the same round, then the compiled protocol, which doubly wraps all the hybrids appropriately, implements a doubly wrapped canonical functionality and tolerates non-simultaneous starts of up to c + 1 rounds. Moreover, the compiled protocol has the same expected round complexity as the original protocol. A bit about applications. If you take a broadcast protocol and run it n times in parallel, we know the round complexity will blow up. We have solutions for parallel broadcast by Ben-Or and El-Yaniv, by Fitzi and Garay, and also by Katz and Koo, but they actually implement a weaker notion of broadcast: unfair parallel broadcast. That means the adversary can corrupt senders and change their messages based on the messages they intended to send. We show how to get parallel broadcast from unfair parallel broadcast using only secret sharing, whereas the previous solution of Hirt and Zikas used VSS. Using that, we can show that a doubly wrapped version of the parallel broadcast functionality can be realized over secure point-to-point channels in an expected constant number of rounds.
We take that parallel broadcast instantiation and plug it into the BGW protocol, which is already essentially in synchronous normal form, and we get a secure function evaluation protocol that realizes the doubly wrapped SFE functionality with an expected round complexity that only depends on the depth of the circuit. To summarize, we show how to deal with protocols that have probabilistic termination. We can design and analyze protocols while completely ignoring these technical issues, and compile them into protocols that behave as we want. We give applications with perfect security, and we can also extend the results to statistical and computational security. That's it. Thank you.