Hi, I'm Aarushi. I'm going to talk about this joint work with Arka Rai Choudhuri, Matthew Green, Abhishek Jain and Gabriel Kaptchuk on secure multi-party computation with dynamic participants. To begin with, MPC is an interactive protocol that allows a group of mutually distrusting parties to compute on their private inputs. The security guaranteed by an MPC protocol is that an adversary who corrupts a subset of the parties should not be able to learn anything beyond the output of the function that the parties were computing. MPC was introduced in the 80s, and since then there has been extensive research on bringing MPC closer to practice. As the efficiency of MPC increases, the hope is that it can be used to compute large, complex functionalities, such as training machine learning models or simulating large RAM programs on massive distributed datasets. However, since these are massive functionalities, irrespective of how efficient the MPC protocol is, computing them could take up to several hours or even days. The problem with most existing literature on MPC is that it considers static participants, meaning that the participating parties are fixed at the beginning of the protocol. In such protocols, if the computation is long drawn, some of the parties might leave or drop out in the middle of the computation. This could be because they have some other commitment, or because they ran out of resources. In such cases, the remaining parties usually have to simply abort the computation, and all of the work done so far goes to waste. Indeed, requiring all of the parties to stay online throughout the entire computation is an unrealistic expectation. Therefore, the main question that we address in this work is whether the static nature of MPC protocols is really inherent, or whether there is a way to design protocols that allow dynamic participants.
To elaborate on what I mean by dynamic participants, let's consider a group of parties that begin executing an MPC protocol. After some time, let's say two parties need to leave, but at the same time, another party is willing to join the computation. What we want is that this should not disrupt the computation; instead, the previous group of parties should be able to securely share the computation done by them with the new group, so that the new group can continue the rest of the computation without having to redo the work done by the previous group. The protocol can then proceed in a similar way, with parties leaving and joining in this fashion, without causing any disruptions. This approach clearly reduces the burden of computation on individual parties. Parties that have low computational resources, for example the blue party here, can participate for a small amount of time, and parties that have enough dedicated resources can continue to participate for longer, for example the yellow party here. Such a protocol would result in a flexible privacy preserving distributed computing system, which could also be used as a paid MPC-as-a-service framework, where clients pay to delegate computational tasks and anyone, irrespective of their time availability or computing resources, can volunteer to participate and get paid accordingly. Notions related to dynamic participants have been considered in some prior works. For instance, the notion of player replaceability, where the parties are replaced after every round, was studied by Chen and Micali in the context of Byzantine agreement. Player replaceability was also used in the design of Algorand by Gilad et al., where it keeps the identity of the parties hidden until they speak; this helps prevent targeted attacks on chosen participants after their identity is revealed.
While the idea of dynamic participants is relatively new, the notion of dynamism itself dates back to the 90s, when Ostrovsky and Yung introduced the notion of a mobile adversary that can keep changing its set of corrupted parties while the participants remain static. The main idea behind their modelling choice was to capture a slow-moving adversary, such as the spread of a computer virus. More recently, Goyal et al. and Benhamouda et al. have also studied secret sharing with dynamic participants. Their notion of dynamism is similar to ours, but they only focus on secret sharing, while we focus on the broader goal of designing MPC in this setting. Moving on to our contributions, which are twofold. Our first and main contribution is in formalising a model for MPC with dynamic participants. We refer to this model as fluid MPC, and the name comes from the fact that it allows a smooth flow of information between the different groups of parties. We also design both semi-honest and maliciously secure protocols in this fluid MPC model. I will now talk about the fluid MPC model in more detail. We model computation in the well-studied client-server model, where the clients can delegate computational tasks to a set of volunteer servers, and the computation proceeds in three main stages. The first stage is the input stage, where the clients share their inputs with the servers. Given these inputs, a set of dynamic servers participate in the execution stage, where the main computation is done. Finally, during the output stage, the servers share the output of the computation with the clients, who can then reconstruct and learn the output. Since we assume that the clients are static, dynamism only shows up in the execution stage, so I am only going to focus on the execution stage from now on. The execution stage can be viewed as proceeding in discrete steps called epochs, where each epoch could potentially consist of many rounds.
Each epoch is further divided into two main sub-phases, called the computation phase and the handoff phase, and each epoch has a designated set of parties, which we refer to as the assigned committee for that epoch. The parties within the assigned committee of an epoch interact with each other during the computation phase, and then in the handoff phase they interact with the parties in the next committee to share information about the computation done by them so far. This is basically how the computation proceeds. For our protocols, we consider an honest majority of clients and an honest majority of servers in each committee, but of course one could consider a dishonest majority setting as well. Based on the discussion so far, it's clear that there are two main components in a fluid MPC protocol. The first is deciding how the committees for each epoch will be selected; the second is, given these committees, how the protocol, in particular the execution stage of the protocol, will proceed. I will now elaborate on the different properties required from both these components. The two main properties required from the execution stage of our protocol are division of work and fluidity. Let me explain each of them in more detail. The first requirement, division of work, essentially means that each committee should only be required to compute a small part of the circuit, and ideally this computation should be independent of the depth of the circuit. Otherwise, if each committee is required to do a lot of work, that would defeat the whole purpose of dynamism in some sense. Our second requirement, fluidity, is that the protocol should support a high churn rate. In other words, the commitment that each party needs to make in order to participate should be relatively small, and we measure this commitment in the number of rounds in an epoch.
In fact, ideally what we want is that a protocol should have maximal fluidity, meaning that each party should only be required to communicate in one round. This can be achieved only if the computation phase is completely silent, with no messages exchanged during this phase, while the handoff phase consists of a single round of unidirectional messages from the old committee to the committee of the next epoch. Designing such protocols, where each committee is only required to send a single message, is in my opinion both theoretically and practically interesting. Looking ahead, we design our protocols keeping these two properties in mind. While we view committee selection as an external process and assume that its outcome is an input to our main protocol, the properties of the selection process nevertheless dictate the design of the rest of the protocol. The main properties of the committee selection process that we want to consider here are how and when the committees are formed, and when the adversary gets to corrupt parties in these committees. I will discuss each of these in more detail now, and along the way also mention how a committee selection process with the desired properties can be instantiated using prior works. Starting with when the committees are formed: one could consider a weaker variant where the committees are decided at the beginning of the protocol, but that is clearly too restrictive. Therefore, we consider an on-the-fly committee formation model, where the committee for each epoch is known at the start of the handoff phase of the previous epoch. For instance, in this example, the committee for the first epoch is decided during the input stage; similarly, the committee for the second epoch is determined at the beginning of the handoff phase of the first epoch, and so on. The next question to consider is how these committees are decided.
Again, one could consider a completely volunteer-based system, where anyone is allowed to sign up at any time and anyone who signs up is included in the computation. But clearly this kind of system is prone to Sybil attacks, in which a single party pretends to be many parties, and enforcing a strict corruption threshold in each committee in this setting would be difficult. The other, more realistic option is one that is based on an election mechanism. Here, every party who wants to participate can nominate itself, and an election process decides which of these nominated parties actually get to participate. This is clearly a more realistic model, and in fact the recent works by Benhamouda et al. and Goyal et al. implement such an election process, which enforces a corruption threshold in each committee, using proof-of-stake blockchains. As I mentioned earlier, one could choose which of these options to use depending on the need. From the perspective of our main protocol, we keep these choices separate from the main protocol; as long as the next committee is determined at the beginning of the handoff phase, that's all we require. The next question is about committee corruption. Here again, one could consider static corruptions, but we consider a stronger adaptive corruption model, where the adversary is allowed to corrupt parties within a committee throughout its activity period. For instance, it could corrupt a party in a given committee during the handoff phase, at any time during the computation phase, or during the next handoff phase. We also need to account for the effect that corrupting a server has on the prior epochs in which it participated. For instance, if the adversary corrupts the yellow party at some later epoch, then it inevitably learns its private state from the previous epoch. Therefore, if there is an overlap, we assume that a server can only be corrupted if doing so doesn't violate the corruption threshold of the prior epochs.
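Putting the model together, the structure of the execution stage can be sketched as a simple skeleton. This is purely illustrative, not the paper's interface: the committee callback, the layer functions, and the `handoff` placeholder are all my own names, and plaintext values stand in for shares.

```python
# Illustrative skeleton of the fluid MPC execution stage: one epoch per
# circuit layer, a silent computation phase, and a one-round handoff.
# Committee selection is abstracted as an external callback, as in the model.

def run_execution_stage(initial_shares, layers, select_committee):
    committee = select_committee(epoch=0)       # known before epoch 0 starts
    shares = initial_shares
    for epoch, layer in enumerate(layers):
        # Computation phase: purely local work, no messages exchanged.
        shares = [layer(s) for s in shares]
        # On-the-fly formation: the next committee is determined only at
        # the start of this epoch's handoff phase.
        next_committee = select_committee(epoch=epoch + 1)
        # Handoff phase: a single round of unidirectional messages.
        shares = handoff(committee, next_committee, shares)
        committee = next_committee
    return shares                               # sent to clients at output

def handoff(old_committee, new_committee, messages):
    # Placeholder: in the actual protocol this carries resharings of the
    # current state from the old committee to the new one.
    return list(messages)

# Toy run: three servers per epoch, two layers.
committees = lambda epoch: [f"server_{epoch}_{i}" for i in range(3)]
result = run_execution_stage([1, 2, 3],
                             [lambda x: x + 1, lambda x: 2 * x],
                             committees)
```

Note how maximal fluidity shows up structurally: each committee speaks only in the single handoff round.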
Now let me move on to the actual protocols that we design in this setting, assuming that we have a committee selection process satisfying all of the properties I mentioned on the previous slides. We observe that an optimized version of the classical BGW protocol, due to Gennaro et al., can actually be naturally adapted to the fluid MPC setting, and in fact this protocol achieves maximal fluidity. I'm going to give a quick recap of this optimized semi-honest BGW protocol first. The protocol evaluates the circuit in a gate-by-gate manner on secret shared inputs. In more detail, the parties start by secret sharing their inputs using a threshold secret sharing scheme. For the addition gates, the parties simply locally add their shares. For the multiplication gates, they first locally multiply their shares, then reshare these locally evaluated shares using a lower degree polynomial, exchange these shares of shares with all of the parties, and finally reconstruct the shares for the outgoing wire of the multiplication gate. For the final output gates, the parties reveal their local shares for the output wires and then reconstruct the output. Our main observation here is that this protocol is quite amenable to the fluid MPC setting. In particular, consider an input stage where the clients secret share their inputs with the first committee. The execution stage then proceeds over the layers of the circuit, where committee i is responsible for evaluating layer i. More specifically, after the input stage, the parties in the first committee locally compute on their respective shares, depending on the gates in the layer. They then compute shares of these locally evaluated shares, for both the addition gates and the multiplication gates, and exchange these shares of shares with the next committee in the handoff phase.
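The gate evaluation just recapped can be made concrete. Below is a minimal Python sketch of Shamir sharing with local addition and the degree-reduction step for multiplication; the helper names and the toy parameters (5 parties, threshold t = 2) are my own choices for illustration, not from the talk.

```python
import random

P = 2**61 - 1           # prime field modulus
N, T = 5, 2             # 5 parties, degree-t polynomials with t = 2, N = 2t+1

def share(secret, degree=T):
    """Shamir-share `secret` via a random polynomial of the given degree."""
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, N + 1)]

def lagrange_at_zero(xs):
    """Lagrange coefficients for evaluating the interpolant at 0."""
    out = []
    for i, xi in enumerate(xs):
        num = den = 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        out.append(num * pow(den, -1, P) % P)
    return out

def reconstruct(shares, xs):
    return sum(l * s for l, s in zip(lagrange_at_zero(xs), shares)) % P

a, b = 12, 34
sa, sb = share(a), share(b)

# Addition gate: purely local.
s_add = [(x + y) % P for x, y in zip(sa, sb)]

# Multiplication gate: the local products lie on a degree-2t polynomial, so
# each party reshares its product with a fresh degree-t polynomial ("shares
# of shares"), and the new shares are recombined with Lagrange coefficients.
prods = [(x * y) % P for x, y in zip(sa, sb)]
reshared = [share(p) for p in prods]
lam = lagrange_at_zero(list(range(1, N + 1)))
s_mul = [sum(l * reshared[i][k] for i, l in enumerate(lam)) % P
         for k in range(N)]                     # degree-t shares of a*b
```

The resharing step is exactly what becomes the one-round handoff in the fluid version: the shares of shares are simply sent to the next committee instead of back to the same parties.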
The next committee starts by extracting the relevant shares from the shares of shares received from the previous committee in the handoff phase, and then computes on them to evaluate the next layer, and so on. Finally, for the output stage, the last committee simply sends the shares of the output wires to the clients, who can then reconstruct the output. Note that in the computation phase we do not require the parties to communicate at all. Moreover, the handoff phase is only one round long. Also, each committee here is only doing work proportional to the width of the circuit, independent of the depth. Therefore, it's easy to see that this protocol satisfies both division of work and maximal fluidity. Next, for our maliciously secure protocol, we present a compiler that transforms certain semi-honest protocols, including the fluid BGW protocol that I just described, into a maliciously secure protocol. The resulting maliciously secure protocol achieves security with abort and has the same level of fluidity as the underlying semi-honest protocol. We also provide an implementation of our maliciously secure protocol based on semi-honest fluid BGW. I will now discuss this compiler in more detail. We start by examining the approach used in modern efficient MPC protocols, and we notice that most of these protocols rely on an observation made by Genkin et al., who showed that most secret sharing based semi-honest protocols are actually secure against malicious adversaries up to additive attacks. This means that the attack strategy of a malicious adversary in these protocols is restricted to injecting arbitrary additive errors on intermediate wire values, and these additive errors are independent of the actual wire values. Most recent efficient MPC protocols exploit this idea by running two parallel executions of such semi-honest protocols.
One of the executions is on the actual inputs and the other is on randomized inputs, as shown on the slide. At the end of the protocol, before the outputs are revealed to the parties, a correctness check is performed by comparing a random linear combination of the intermediate values in the two executions. What we now want to explore is whether we can use a similar strategy in the fluid MPC setting to get a maliciously secure protocol by transforming fluid BGW. We observe that this paradigm does indeed extend in a natural way to the fluid MPC setting, and in fact most secret sharing based semi-honest fluid MPC protocols are likely to be secure against malicious adversaries up to additive attacks. Now let us see whether known techniques in this paradigm work in the fluid MPC setting as is. The first case we consider is computing the linear combination at the very end. But in this case, all of the intermediate wire values would have to be passed along across all of the committees until the very end, and therefore we would not get division of work. On the other hand, if we try to compute the linear combination in an incremental way, that is, by combining all of the values of a given layer and only passing along this partially computed value, then we would have to find a way to generate the alpha values used in the linear combination on the fly. Generating these alpha values could take more than one round, which could hamper maximal fluidity. We resolve this problem using a simple idea. Let's say this is the circuit that we want to compute. At the time of secret sharing their inputs with the first committee, we require the clients to also secret share random values beta and alpha_1 through alpha_w, where w is the width of the circuit. When the first committee computes the first layer in the dual executions, they also additionally multiply beta with itself and with each of the alpha values.
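To preview where this is going, here is a plain-arithmetic sketch of the resulting incremental check. The secret sharing is abstracted away, the layer function and depth are my own toy choices, and the dual execution is modeled simply by maintaining r times every wire value.

```python
import random

# Plain-arithmetic sketch: fold alpha_i * beta^k into a running linear
# combination at each layer k, for both the actual and the randomized
# (dual) execution, and compare the two at the end.
P = 2**61 - 1
W, DEPTH = 4, 3                                   # circuit width and depth

r = random.randrange(1, P)                        # dual-execution randomizer
beta = random.randrange(1, P)
alphas = [random.randrange(1, P) for _ in range(W)]
inputs = [random.randrange(1, P) for _ in range(W)]

def evaluate_layer(values):
    # Stand-in for one layer of the circuit (here: squaring each wire).
    return [v * v % P for v in values]

def run_with_check(inject=0):
    wires, cur = list(inputs), list(alphas)
    u = v = 0                                     # partial linear combinations
    for layer in range(DEPTH):
        cur = [a * beta % P for a in cur]         # fold beta in on the fly
        wires = evaluate_layer(wires)
        rwires = [x * r % P for x in wires]       # maintained by dual execution
        if layer == 1 and inject:                 # model an additive attack
            rwires[0] = (rwires[0] + inject) % P
        u = (u + sum(a * x for a, x in zip(cur, wires))) % P
        v = (v + sum(a * x for a, x in zip(cur, rwires))) % P
    return (v - r * u) % P                        # compared before output

honest = run_with_check()                         # 0 when nobody cheats
attacked = run_with_check(inject=7)               # nonzero: attack detected
```

Note that each committee only touches the current layer's values plus two accumulated field elements, which is what preserves division of work.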
Similarly, the next committee will additionally multiply beta with all of the previously computed random values. These values are then used to compute the linear combination in an incremental manner. In particular, the first committee simply sets the partial linear combination to zero, since no values have been computed so far. The next committee computes its partial linear combination by multiplying alpha_1 beta with the first value, alpha_2 beta with the second value, and so on, and adds these to the previously computed partial linear combination. Similarly, the parties compute another linear combination using the same alpha beta values and the values induced on the circuit by the other parallel execution, the one run on randomized inputs. Finally, the clients can compare the two linear combinations before reconstructing the actual output. Note that each of these additional computations is done in parallel with the dual executions, and hence if the underlying semi-honest MPC protocol has maximal fluidity, so will the resulting maliciously secure protocol. Moreover, the parties in this solution still only do work proportional to the width of the circuit, independent of the depth, and hence this protocol also supports division of work. Finally, to summarize, we present the first formal model for MPC with dynamic participants. We also present constructions for semi-honest and maliciously secure protocols in this model, and we implement our maliciously secure protocol. However, these are only baseline solutions, and there are many interesting open problems in this area. Thank you.