Hello everyone, my name is Ariel Nof and I will present the paper "Sublinear GMW-Style Compiler for MPC with Preprocessing". This is joint work with Elette Boyle, Niv Gilboa and Yuval Ishai. In this work, we consider multi-party computation in the preprocessing model. In this model, the execution is divided into two phases: an offline phase, known also as the preprocessing phase, and an online phase. The goal of the offline phase is to produce correlated randomness, and it can be executed even before the inputs are known. In the online execution, the parties use the correlated randomness to compute the desired functionality over their private inputs. Another way to look at this model is to view the execution as an execution with a trusted dealer who gives the parties correlated randomness, and then later design a secure protocol to distribute the dealer. Now, every MPC protocol can be modeled in this way, but this model is particularly useful in the dishonest majority setting, which we consider in this work. This is a challenging setting where no one trusts anyone, and achieving secure multi-party computation requires using expensive tools, tools that either have a lot of communication or that are computationally expensive. The hope is that in this model we can move all the expensive machinery to the offline phase and obtain an online execution which is fast, cheap and information-theoretic. Now, in this talk we will mainly focus on the online execution and its efficiency, and we will look at two main metrics: the online communication cost and the amount of correlated randomness that the dealer needs to produce. The standard approach for MPC in the preprocessing model is to use Beaver triples. Here the dealer gives the parties shares of a random multiplication triple, and these are used to multiply shared values in the online execution.
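As a concrete illustration of the Beaver-triple approach just described, here is a minimal Python sketch over an illustrative prime field; the modulus, function names and the choice of a single designated party adding the public term are my own conventions, not details from the talk:

```python
import random

P = 2**61 - 1  # illustrative prime modulus, not specified in the talk

def share(x, n):
    """Additively share x mod P among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(xs, ys, triple_shares):
    """One Beaver multiplication: given shares of x, y and of a random
    triple (a, b, c) with c = a*b, the parties open d = x - a and
    e = y - b (the only online communication) and locally derive
    shares of x*y = c + d*b + e*a + d*e."""
    as_, bs, cs = triple_shares
    d = reconstruct([(x - a) % P for x, a in zip(xs, as_)])  # opened value
    e = reconstruct([(y - b) % P for y, b in zip(ys, bs)])   # opened value
    zs = [(c + d * b + e * a) % P for a, b, c in zip(as_, bs, cs)]
    zs[0] = (zs[0] + d * e) % P  # public d*e term added by one designated party
    return zs
```

Since d and e are uniformly masked openings, they reveal nothing about x and y, which is why only the triple itself needs to be preprocessed.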
Now, in this table we give the exact communication cost and the exact amount of correlated randomness per multiplication with this approach, and as can be seen, there are two variants: one with circuit-independent preprocessing and one with circuit-dependent preprocessing. When allowing circuit-dependent preprocessing, the dealer knows the structure of the circuit, and this can be used to reduce the cost slightly. Furthermore, if we allow PRG-based compression, then we can give each party a seed from which it derives all its shares, but even in this case we still need to give one element per gate to one of the parties, because one share of a·b is fixed and is not random. So these are the costs for semi-honest security. To achieve malicious security, the most popular approach is the SPDZ approach, where the dealer also gives the parties an authenticated version of each value in each triple, obtained by multiplying each value with a global random secret key. The main advantage of this approach is that the online communication cost with malicious security is the same as the cost with semi-honest security. However, the amount of correlated randomness grows: specifically, for large fields it grows by a factor of two, but for small fields or for rings it grows by a factor of kappa, where kappa is the statistical security parameter, because the authenticated triples need to be generated over an extension field whose size depends on the statistical parameter. This is necessary to achieve a cheating probability that is sufficiently small. A different approach was first introduced in the MiniMAC protocol for small fields. This approach achieves a constant correlated randomness overhead that does not depend on the statistical parameter, and this is achieved by authenticating multiple triples together, but it comes at the expense of increasing the communication cost.
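The SPDZ-style authentication idea can be sketched as follows; this is my own simplified illustration over a single prime field (the real protocol batches the checks and works over shares of everything), with illustrative names:

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def share(v, n):
    """Additively share v mod P among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

def authenticate(x, delta, n):
    """SPDZ-style authentication: alongside shares of x, the parties
    receive shares of the MAC delta * x, where delta is a global
    random key that no single party knows in full."""
    return share(x, n), share(delta * x % P, n)

def mac_check(opened, mac_shares, delta_shares):
    """After x is opened, each party locally computes its share of
    MAC(x) - delta * opened; these must sum to zero. An additive error
    injected by the adversary passes only with probability 1/P."""
    diffs = [(m - d * opened) % P for m, d in zip(mac_shares, delta_shares)]
    return sum(diffs) % P == 0
```

This also shows why small fields are a problem: the cheating probability is one over the field size, forcing the MACs into an extension field of size 2^kappa.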
So now the communication cost with malicious security is higher than the cost with semi-honest security. As can be seen from this slide, there is a trade-off between the communication overhead and the correlated randomness overhead for malicious security. This raises the following question: can we achieve malicious security where both the amortized online communication cost and the amortized amount of correlated randomness are the same as with semi-honest security, and of course without introducing any new assumption? This work gives a positive answer to this question, and our main result is the following. Given an arithmetic circuit C which is defined over a finite field or over the ring of integers modulo 2^k, we can take every natural semi-honest MPC protocol which computes C (I will explain what natural means later) and compile it into a maliciously secure protocol, where both the additional amount of correlated randomness and the additional amount of communication are logarithmic in the size of the circuit times some statistical parameter kappa. This implies that, amortized over the circuit, both the communication cost and the amount of correlated randomness per multiplication gate remain the same as for semi-honest security. The high-level framework of our solution works as follows. First, the parties secret-share their inputs; then they run a semi-honest protocol to compute the circuit. Now, addition gates can be computed locally because the secret sharing scheme is linear, but the parties need to interact to compute multiplication gates, and therefore they need to verify that all multiplications were computed correctly. Our main contribution is a new verification protocol to verify correctness of all multiplications with a logarithmic amount of communication and a logarithmic amount of correlated randomness. If this step ends successfully, then the parties proceed to reveal their outputs; otherwise, the parties abort.
Now, what are the requirements from our semi-honest protocol? We have two requirements: first, it needs to be additively secure, meaning that the adversary can only add errors to the wires, and second, star-sharing compliance, which I will explain later. But the main point is that many secret-sharing-based semi-honest protocols, including Beaver-style semi-honest protocols, satisfy these properties, and therefore they can be used as the underlying semi-honest protocol in our framework. So from now on, let's focus on our verification protocol. The main building block that we use is zero-knowledge fully linear proof systems, a notion that was introduced by Boneh et al. at Crypto 2019. Here we have a prover and a verifier; the prover holds a secret input x and wants to prove some statement over x. The prover and the verifier interact in multiple rounds: in each round, first the prover outputs a proof, then public coins are chosen, and then the verifier can query both the input and the proof, and based on the answers to the queries it decides whether to accept or reject. The main property here is that the verifier is only allowed to make linear queries on the proof and the input. This is why these proof systems are called fully linear, and of course we can define completeness, soundness and zero-knowledge in the standard way. Now, from this abstract building block we can derive a very useful tool called distributed zero-knowledge proofs. Here we have multiple verifiers, and the input x is distributed across the verifiers, or in our case, x is secret-shared across the verifiers. So now we will ask the prover to take the proof that is generated in the fully linear proof system and also secret-share it across the verifiers. Now the verifiers hold shares of both the input and the proof.
Now, if the secret sharing scheme is linear, then since the queries are also linear, the verifiers can simply query their shares of the input and the proof independently and obtain a secret sharing of the answers to the queries; then they can simply exchange the shares of the answers and obtain the answers themselves. What Boneh et al. have shown is that if x is robustly shared across the parties, meaning that the shares held by the honest parties are enough to reconstruct the secret, and if the statement to be proven is a degree-two polynomial over the input x, then there exists a distributed zero-knowledge proof where both the communication and the number of rounds are logarithmic in the size of the input, and soundness holds even if a subset of the verifiers collude with the prover. Now, this tool is very useful for achieving malicious security in MPC, because in order to achieve malicious security we need to prove that all multiplications were computed correctly. Multiplications are degree-two computations, and after the parties have computed the circuit, they hold a secret sharing of the inputs and the outputs of each multiplication gate, so this is exactly what we need in order to apply the distributed zero-knowledge proofs machinery. Indeed, this tool was used in the honest majority setting in previous works to achieve malicious security with very low cost, relying on the fact that in the honest majority setting the secret sharing is inherently robust, because the shares held by the honest parties are enough to reconstruct all the secrets.
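The key observation here, that linear queries commute with linear secret sharing, can be sketched in a few lines (the field modulus and function names are illustrative choices of mine):

```python
import random

P = 2**31 - 1  # illustrative prime modulus

def additive_share_vector(vec, n):
    """Additively share each coordinate of vec among n verifiers."""
    shares = [[random.randrange(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(col)) % P for v, col in zip(vec, zip(*shares))]
    return shares + [last]

def local_answer(query, my_share):
    """Each verifier applies the public linear query to its own share.
    Because both the query and the sharing are linear, these local
    answers form an additive sharing of <query, x>."""
    return sum(q * s for q, s in zip(query, my_share)) % P
```

Exchanging the local answers (a constant number of field elements) then reveals the query answers without revealing anything else about x.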
Now, when we move to the dishonest majority setting, this raises the question of how to achieve the same robustness without increasing the amount of correlated randomness per multiplication. To solve this challenge, we have two main technical ideas: first, we define a robust secret sharing scheme using the dealer, which we call star secret sharing, and then we show how to maintain this scheme, and the robustness that it brings, throughout the verification protocol, where the idea is that we use the dealer as one of the verifiers in the distributed zero-knowledge proof. So now let's look into the details. What is this mysterious star secret sharing scheme? The idea is very simple: for each secret x, each party will hold the masked secret and an additive share of the mask, and the dealer will hold the shares of the mask and therefore will know the mask. Now, this secret sharing scheme is robust, because the shares held by any single party together with the shares held by the dealer are enough to reconstruct the secret. In particular, an honest party and the dealer can reconstruct the secret. Of course, this secret sharing scheme is not new, and it is in fact used in many semi-honest protocols, including Beaver-style protocols, so the challenge that remains is how to maintain this invariant, and the robustness, also in the verification protocol. So how does the verification protocol work? The goal of the parties is to verify, for each multiplication gate with inputs x and y and output z, that z minus x times y equals zero. Now, we can rewrite each secret in terms of the masked secret, which is known to the parties, and the mask, which is known to the dealer, take a random linear combination of all these equations, where each equation corresponds to one multiplication gate, and obtain one expression that needs to be checked by the parties.
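A minimal sketch of this star sharing, under my own convention that the masked value is x + r (the talk does not fix a sign convention, and the names are illustrative):

```python
import random

P = 2**31 - 1  # illustrative prime modulus

def star_share(x, n):
    """Star sharing of x: every party holds the masked value x + r and
    one additive share of the mask r; the dealer holds all the shares
    of r (and hence knows r itself)."""
    mask_shares = [random.randrange(P) for _ in range(n)]
    r = sum(mask_shares) % P
    masked = (x + r) % P
    party_views = [(masked, mask_shares[i]) for i in range(n)]
    dealer_view = list(mask_shares)
    return party_views, dealer_view

def reconstruct_with_dealer(party_view, dealer_view):
    """Robustness: one honest party plus the dealer suffice to recover
    the secret, regardless of what the other parties do."""
    masked, _ = party_view
    return (masked - sum(dealer_view)) % P
```

Note that without the dealer this is exactly the masked-value representation used by Beaver-style semi-honest protocols, which is why they are star-sharing compliant.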
Here the alpha_k's are random coefficients that are chosen by the parties at the beginning of the verification protocol and become public. Now, if we look at this expression that the parties want to check for equality to zero, open it and do the algebra, we can split it into three parts. The first part contains only masked values, and therefore each party can compute it locally. The second part contains only masks, which are known to the dealer, and therefore can be computed locally by the dealer. The third part is basically a sum of products between values that are known to the parties and values that are known to the dealer but are also additively secret-shared among the parties, and therefore the parties can locally compute an additive sharing of this value. Let's denote the first part by lambda, the second part by omega and the last part by gamma. This implies that the parties wish to verify that lambda plus omega plus gamma equals zero. Indeed, in the first step of our verification protocol, the parties do the local computation: each party computes lambda and its share of gamma, which we denote by gamma_i, and the dealer computes omega. In the second step of the verification protocol, we ask each party to secret-share gamma_i using our star secret sharing scheme. This means that each party will broadcast the masked gamma_i, where the dealer knows the mask. Now, of course, a malicious party P_i might secret-share an incorrect value, so in the third step we ask each party to prove that it shared the correct gamma_i, and we will go into the details of this step in a minute. If this step passes successfully and all the proofs are accepted, then the parties proceed to the next step, where the dealer sends the parties the sum of all the masks that were used, and then in the last step the parties can check equality of the final value to zero.
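To make the split concrete, here is a small self-check of the decomposition under my own sign convention (masked value m_w = w + r_w, so w = m_w - r_w; the field size and gate count are illustrative):

```python
import random

P = 2**31 - 1  # illustrative prime modulus
random.seed(0)
K = 5  # number of multiplication gates to verify

# Honest execution: z = x * y; each wire w is held as a masked value
# m_w = w + r_w (known to the parties) and a mask r_w (known to the dealer).
gates = []
for _ in range(K):
    x, y = random.randrange(P), random.randrange(P)
    z = x * y % P
    rx, ry, rz = (random.randrange(P) for _ in range(3))
    gates.append(((x + rx) % P, (y + ry) % P, (z + rz) % P, rx, ry, rz))

alphas = [random.randrange(P) for _ in range(K)]  # public random coefficients

# Split sum_k alpha_k * (z_k - x_k * y_k) into the three parts:
# lambda: masked values only, computable by every party locally
lam = sum(a * (mz - mx * my) for a, (mx, my, mz, *_) in zip(alphas, gates)) % P
# omega: masks only, computable by the dealer locally
omega = sum(a * (-rz - rx * ry) for a, (_, _, _, rx, ry, rz) in zip(alphas, gates)) % P
# gamma: cross terms of one masked value and one mask; the parties hold
# additive shares of the masks, so they can share this part additively
gamma = sum(a * (mx * ry + my * rx) for a, (mx, my, _, rx, ry, _) in zip(alphas, gates)) % P

# For an honest execution the three parts sum to zero.
assert (lam + omega + gamma) % P == 0
```

Any additive error on some z_k survives the random linear combination except with probability about 1/P, which is what the final zero-check exploits.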
If the equality holds, they know with high probability that all multiplication gates were computed correctly; otherwise, they know that cheating took place and they can abort the protocol. Now let's go into the details of the third step, where each party proves that the gamma_i it shared is the correct gamma_i. Basically, each party P_i needs to prove that the following equation holds. The idea is that we take the masked gamma_i, add the mask, and subtract from it the gamma_i that should have been computed, so if party P_i acted honestly, then the result should be zero. Now, if we look at this expression, without even understanding it, we observe two things: first, that this is a degree-two polynomial over the inputs (remember that the alpha_k's are public constants at this stage), and second, that each input to this expression is known either to all the parties or to the dealer. Specifically, the values that are marked in blue are known to all parties, and the values that are marked in yellow are known to the dealer, and we can use this fact in the following way. We will run the distributed zero-knowledge proof and use the dealer as one of the verifiers. The parties will define a vector of inputs, where they take all the inputs and replace all the inputs that are unknown to them by zero. The dealer will do the same thing: he will define a vector of all inputs and replace all the inputs that are unknown to him by zero. Since each input is known either to the parties or to the dealer, this implies that now each party and the dealer hold a two-out-of-two additive sharing of the inputs. Then we will ask the prover to share the proof in the same way, meaning that he will send the masked proof to the parties while the dealer holds the mask. So what we get is that the input and the proof are identically shared between each of the parties and the dealer.
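The zero-filling trick for obtaining a two-out-of-two additive sharing can be sketched as follows (the example values and the "known" flags standing in for the slide's blue/yellow coloring are illustrative):

```python
P = 2**31 - 1  # illustrative prime modulus

def zero_fill(full_input, known_to_holder):
    """Replace every coordinate the holder does not know by zero."""
    return [v if known else 0 for v, known in zip(full_input, known_to_holder)]

# Example: coordinates 0 and 2 are known to all parties ('blue'),
# coordinates 1 and 3 are the dealer's masks ('yellow').
full = [10, 20, 30, 40]
known_to_parties = [True, False, True, False]
party_vec = zero_fill(full, known_to_parties)
dealer_vec = zero_fill(full, [not k for k in known_to_parties])

# Because every coordinate is known to exactly one side, the two vectors
# form a two-out-of-two additive sharing of the full input vector.
assert [(p + d) % P for p, d in zip(party_vec, dealer_vec)] == full
```

No communication is needed to set up this sharing, since each side fills in only values it already holds.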
So now we can ask each of the parties and the dealer to run the linear queries on their shares of the proof and the input, and this guarantees that the parties and the dealer will obtain a star secret sharing of the answers, meaning that the answers are shared between each party and the dealer. Now, for an honest party to receive the correct answers, it only needs the information held by the dealer. This means that even if all the other parties collude with the prover, this one honest party will still receive the correct answers to the queries, and this is what eventually leads to the soundness that we require. So let's sum up what we get from this process and from using the dealer as a verifier. Since each piece of information is known by an honest participant, which is either an honest party or the dealer, we obtain robustness throughout the process, and this is what leads to soundness even if all the other parties collude with the prover. Since the statement to be proven is a degree-two polynomial, we can run the distributed zero-knowledge proof with an amount of communication that is logarithmic in the number of multiplications to verify. Since the communication is logarithmic, this implies that the communication from the side of the dealer, who acts as a verifier, is also logarithmic in the number of multiplications to verify. Now, since the dealer's messages depend only on random data, he can preprocess his computations, and then all the messages that he needs to send as a verifier can be given to the parties as correlated randomness of logarithmic size. This is what eventually leads to our solution. So if we go back to our verification protocol and estimate its cost: the first step is simply local computation. In the second step, each party needs to secret-share one single value, and therefore the communication cost is constant.
In the third step, we have n proofs, and the cost of each proof is logarithmic in the number of multiplications to verify. In the fourth step, the dealer needs to send a constant amount of data, and therefore the communication cost is again constant, and the last step is local computation. So overall, the communication cost of the verification protocol is logarithmic in the number of multiplications to verify times the number of parties. Now, in the paper we also give concrete instantiations of the building blocks used in our protocol. In particular, we show how to implement the distributed zero-knowledge proof in the dishonest majority setting with high efficiency, as can be seen from the numbers we show here. And remember that if the computation is carried out over Boolean circuits or over rings, then the verification protocol needs to be executed over an extension field whose size depends on the security parameter kappa. However, since kappa is independent of the size of the circuit, the overall cost of the verification protocol remains sublinear. In addition, we can add the cost of our verification protocol to the cost of the Beaver-style semi-honest multiplication protocols that we saw at the beginning of the talk and obtain the overall costs to compute a circuit C with malicious security, shown in this table. It is worth mentioning that we can also use the recent results on efficient PCG-based compression to compress the correlated randomness for the semi-honest execution, and then, combining it with the sublinear correlated randomness of our compiler, we can get overall sublinear correlated randomness to compute a circuit C with malicious security. Finally, a few words about distributing the dealer. In the paper we do not design a protocol to distribute the dealer; however, we estimate the cost of using a generic MPC protocol to distribute the dealer.
The idea here is that we represent the dealer as a circuit and then use a generic MPC protocol to compute the dealer's circuit. The cost of this approach depends on the number of multiplication gates in the dealer's circuit. As we can see here, the number of multiplication gates depends on the number of parties, but for a small number of parties it is almost equivalent to the size of the original circuit, and this implies that even when using generic MPC to compute the dealer's circuit, the costs are still reasonable. We leave the question of optimizing the dealer's work by designing a specific protocol to distribute the dealer for future work. So with this I will end my talk. Thank you very much for watching and listening.