Hi everyone, thanks for tuning in. My name is Arpita, and in this talk I'm going to present a joint work with Benny Applebaum and Eliran Kachlon entitled "The Resiliency of MPC with Low Interaction: The Benefit of Making Errors".

Interaction is a valuable and expensive resource in cryptography and distributed computation. Consequently, a huge amount of research has been devoted to characterizing the amount of interaction needed for various cryptographic tasks. The amount of interaction is measured in round complexity, and in this work we focus on the round complexity of two central cryptographic primitives: general multi-party computation, or MPC, and a special yet well-known primitive called verifiable secret sharing, or VSS.

Over the years, MPC has been studied in various settings. In this work, we consider the classical setting of MPC as introduced in BGW and CCD in 1988. That is, we assume every pair of parties is connected by a secure and authenticated point-to-point channel, and apart from these, there is a broadcast channel available using which a value can be sent to all the parties identically. Next, we consider an adversary that is computationally unbounded, active, rushing, and can corrupt t parties out of the n parties. By tolerating a computationally unbounded adversary, we obtain information-theoretic security, and lastly, we aim for full security, or guaranteed output delivery.

A bit of history of round complexity in our setting: there is a huge body of work and I will not be able to mention everything, so let me touch upon the milestones. In 1988, BGW and CCD proved a landmark result that every function can be computed perfectly with less than one-third corruption. Subsequently, RB89 showed that one-half corruption is enough to attain statistically secure MPC, demonstrating the benefit of making errors. Both these bounds are optimal.
The complexity of these protocols relates to the size and the depth of the circuit of the function, where the latter is tied to the degree of the function. Specifically, the computational complexity of these protocols is polynomial in the circuit size of the function, and the number of rounds needed is the round complexity of VSS in the respective setting times log of the degree of the function, plus one.

Next, I would like to mention the works of Ishai and Kushilevitz, who showed that every function has a constant-round realization. These works rely on the randomized encoding paradigm, which allows one to show degree-3 completeness. These protocols, however, are efficient only for log-space computations. Here I would like to point out a very important point: inefficient protocols are still meaningful when tolerating computationally unbounded adversaries, but this is not true in the cryptographic setting.

Lastly, I want to mention two very recent works that provide round-optimal perfect protocols, ABT19 and AKP20. The first work shows that every function can be realized perfectly in three rounds with less than one-fourth corruption, and the second shows that every function can be realized perfectly in four rounds with less than one-third corruption. Both these protocols are optimal owing to a lower bound proven in AKP20, and both rely on the multi-party randomized encoding paradigm, which allows one to show degree-2 completeness over the binary field as well as over large fields. Like the IK constructions, these protocols are also efficient only for log-space computations. An interesting question left open by these works is: can we do better by relaxing to statistical security? In this paper, we answer this question in the affirmative, and that is the starting point of our work.

We are now ready to discuss our results. GIKR02 shows that fairness, let alone full security, cannot be achieved in two rounds with more than one corruption.
That leaves little to be done in the two-round space, and the gap is filled by IKKP15, which provides a protocol with one corruption and four or more parties. Next, ABT19 provides a perfect protocol in three rounds with less than one-fourth corruption, and AKP20 constructs a perfect protocol in four rounds with less than one-third corruption. Both these protocols are optimal owing to the lower bound shown in AKP20, which says that perfect full security in three rounds is impossible with one-fourth corruption or more.

In this work, we show that we can get the best of these two protocols, that is, a round complexity of three and a resiliency of one-third, if we are happy to make errors. The round complexity of our protocol cannot be improved further due to the GIKR02 lower bound, and the resiliency of our protocol cannot be improved further because, in this work, we prove that with more than one-third corruption, statistical full security in three rounds is impossible. Unfortunately, our construction requires computation exponential in n. By downgrading security to the computational setting, we present a protocol which is efficient for all functions, not just for log-space functions. Our protocol relies on non-interactive commitment schemes, and by plugging in statistical non-interactive commitment schemes in the CRS setting, we obtain everlasting security. Our protocol is to be compared with the existing protocols that rely on public-key primitives.

Our next contribution is a new two-round model of computation. A direct implication of the GIKR02 result is that no two-round fully secure protocol in the plain model can tolerate linear resiliency. So we asked the question: can we introduce a new, natural, meaningful two-round model that would allow protocols with linear resiliency? In this paper, we answer this question in the affirmative, and we introduce the single-input first-round, or SIFR, hybrid model.
In this model, in both rounds, the parties are allowed to communicate with each other over point-to-point secure and authenticated channels and over broadcast. Apart from that, in the first round, every party is allowed to make ideal calls to single-input functions. By a single-input function, I mean a function whose output depends on the input of a single party. Such functions abstract authenticated private-channel communication and broadcast communication, as well as, interestingly, the polynomial-based VSS functionality, which is what we will use later to come up with our upper bounds. In the second round, however, no calls to single-input functions are allowed.

We observe that in a SIFR protocol, the first-round messages depend on the individual parties' inputs, and the second round is kept for mixing the inputs. Therefore, two rounds are indeed necessary for computing non-trivial functions securely. The second observation is that when we downgrade the security of a SIFR protocol to the semi-honest setting, the first round simply reduces to a round of the plain model.

In the SIFR model, we prove a very interesting impossibility result: perfect MPC in the SIFR model is impossible with more than one-fourth corruption. This result is complemented with a perfect SIFR protocol with less than one-fourth corruption. In fact, this protocol can be obtained from ABT19 coupled with the completeness result of AKP20. This protocol is in the F_VSS-hybrid model, and when we plug in a two-round VSS protocol, we get a three-round perfect protocol with the same resiliency. Now, in this paper, we show that the impossibility can be circumvented, and we can obtain a SIFR protocol with less than one-third corruption if we are happy to make errors. This protocol is also obtained in the F_VSS-hybrid model, and when we plug in a two-round VSS
realization, which is also a contribution of this paper, we get a three-round statistical protocol with less than one-third resiliency.

Next, I would like to draw your attention to a very interesting implication of our impossibility result. In the perfect setting with less than one-third corruption, our impossibility implies that we cannot obtain an (R+1)-round MPC from an R-round VSS in a black-box way. However, AKP20 provides a four-round MPC protocol starting from a three-round VSS protocol, exploiting the features of the three-round VSS. Our impossibility shows that such non-black-box use, such opening of the box of the VSS, is inherent.

We have now reached the last leg of our contributions, where we design VSS protocols in a two-round budget. First, I would like to mention that our VSS functionality is tailored for polynomial-based VSS protocols. Here, the dealer picks a symmetric bivariate polynomial F and gives it to the functionality, which checks its degree-t-ness, and then sends to the ith party f_i(x) = F(x, i), the evaluation of the bivariate polynomial at y = i. GIKR01 shows that one-round perfect VSS is impossible with more than one corruption, and two-round perfect VSS is impossible with more than one-fourth corruption. These lower bounds are complemented with a perfect two-round protocol with less than one-fourth corruption and a perfect three-round protocol with less than one-third corruption. In this work, we show that we can get the best of both these protocols, that is, a round complexity of two and a resiliency of one-third, if we are happy to make errors. The round complexity of our protocol cannot be improved further because of the GIKR01 lower bound, and the resiliency of our protocol cannot be improved further because, in this paper, we show that with more than one-third corruption, two-round statistical VSS is impossible. Here, I would like to draw your attention to an important point.
While our upper bound introduces error only in correctness, the secrecy remains perfect. Our lower bound is strong in the sense that error is allowed both in correctness and in secrecy, and still the lower bound holds. I would also like to mention that PCRR09 also gives a two-round VSS with the same resiliency; however, it does not realize a VSS functionality. Lastly, we show a qualitative difference between VSS protocols in the one-third regime and the one-half regime. For the latter protocols, we show that reconstruction cannot simply be broadcast of views from the sharing phase: secret state needs to be kept, and this explains why all the existing VSS protocols in the one-half regime indeed keep secret state during the reconstruction phase.

In summary, I would like to say that it is good to make errors. Yes, it is good to make errors if we want to talk less, that is, if we would like a smaller amount of interaction, and if we would like modularity.

In the remaining part of the talk, I am going to elaborate a little on our three-round statistical MPC protocol. It consists of two building blocks: a two-round MPC in the F_VSS-SIFR model, and a two-round VSS realizing the F_VSS functionality. Put together, these give our three-round statistical MPC. So, for the rest of the talk, we are going to talk about the two-round MPC in the SIFR model. We use AKP20's result of degree-2 completeness over large fields of characteristic 2, and therefore our simplified goal turns out to be computing a simple quadratic function x·y, where x comes from one party and y from another, in two rounds in the F_VSS-SIFR model. For achieving our goal, we introduce a new three-party MPC abstraction called Secure Computation with a Guard, or SCG. Interestingly, this primitive has a single input-dependent round. So, let me introduce this new primitive of Secure Computation with a Guard, or SCG.
So, we have three parties: Alice, Bob, and Carol. They would like to compute a function f(a, b). Alice holds both a and b, and Bob holds b. While Alice holds both the inputs and can compute the function herself, the point of having Bob in the computation is to have him as a guard, to ensure that the computation indeed uses the copy of b that is possessed by Bob. When Bob is corrupt, we want that either f(a, b) or nothing is released. When Carol is corrupt, we want privacy of b; we do not care about the privacy of a, and in fact, in our protocol, a will be disclosed. When Alice is corrupt, we want that either f(a′, b) for some a′, or nothing, is released.

Interestingly, as I said, SCG has a single-input-dependent-round realization given correlated randomness, and the correlated randomness is input-independent and can be set up by Bob in a prior round. Therefore, SCG has a 1+1-round realization in the plain model. We rely on PSM (private simultaneous messages) and MACs, which have statistical error, for realizing our SCG, and this protocol is efficient for logarithmic-depth circuits. However, in our protocol, we will only run it for constant-depth linear functions. I would like to emphasize that error is inherent in the realization of SCG; that is, we do not have a perfect 1+1-round realization.

Next, I am going to demonstrate how, using SCG, we can save a round while evaluating a linear function. So, assume that Alice distributes two secrets b and c using degree-n/3 polynomials in an offline phase amongst the parties. Now, in an online phase, Alice gets a, and her goal is to release the value ab + c, or rather the degree-n/3 polynomial carrying ab + c. For doing this, Alice can broadcast a in the first round, and in the second round, the parties can broadcast the same linear combination of their shares. From these broadcast shares, the value ab + c, as well as the polynomial carrying ab + c, can be reconstructed.
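As a toy illustration of this two-round baseline, here is a sketch in Python over a small prime field. The modulus, n = 7, the degree n/3, and helper names like `share` are my own illustrative assumptions, not the paper's notation. The point is that Shamir sharing is linear, so the values a·b_i + c_i broadcast by the parties lie on a degree-n/3 polynomial whose constant term is ab + c.

```python
import random

P = 97      # prime field modulus (illustrative)
N = 7       # number of parties
D = N // 3  # degree of the sharing polynomials, n/3

def share(secret, d):
    """Shamir-share `secret` with a random degree-d polynomial over GF(P)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(d)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)}

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P); points is a list of (x, y)."""
    s = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

b_shares = share(5, D)    # offline phase: Alice shares b = 5 and c = 11
c_shares = share(11, D)
a = 3                     # online input, broadcast by Alice
# each party broadcasts the same linear combination of its shares
combined = [(i, (a * b_shares[i] + c_shares[i]) % P) for i in range(1, N + 1)]
# any D + 1 broadcast values interpolate back to ab + c
assert reconstruct(combined[:D + 1]) == (a * 5 + 11) % P  # = 26
```

The same `reconstruct` call on any other subset of D + 1 honest values gives the same answer, which is exactly why broadcast of share-combinations suffices here.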
However, the number of rounds taken by this solution is two, and both rounds depend on a, the online input. Next, I will show you how we can save a round using SCG. We simply employ n SCGs, and in the ith SCG, Alice and the ith party participate with a and the ith shares of b and c, where a is exclusively held by Alice, and the shares of b and c at i are held by both Alice and the ith party. The effect is the same, but since SCG has a one-round input-dependent realization, we can evaluate the linear function in one round. The correlated-randomness setup for the SCGs can be run in the offline phase.

In some sense, we can view this as Alice releasing the value ab + c and getting it approved by the guards. Consequently, when Alice is honest, we are guaranteed to obtain ab + c at the end of the computation, because there are at most one-third bad guards who may not approve the computation, but the remaining two-thirds-plus-one honest guards will approve it, and therefore the value ab + c can be obtained. On the other hand, when Alice is corrupt, either we will obtain ab + c or ⊥, assuming that there is a mechanism to ensure that a correct a is input by Alice, and we indeed have such a mechanism in our protocol. Here, the key is that Alice must get approval from at least one-third-plus-one honest guards, and since the degree of the polynomial carrying ab + c is n/3, this makes sure that either the correct ab + c or ⊥ will be released.

Now, we are going to use this trick to construct a two-round degree-2 computation in the F_VSS-SIFR model. So, let us say these are the two parties who provide the inputs, the first party holding x and the second party holding y. Both of them pick uniform symmetric bivariate polynomials, hide their secrets in the constant terms of those polynomials, and then invoke an instance of F_VSS to distribute the shares of the bivariate polynomials.
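The polynomial-based sharing just invoked can be sketched in a few lines. The prime field, n = 7, t = 2, and the function names are illustrative assumptions, not the paper's notation: the functionality checks the symmetry and degree-t-ness of the dealer's bivariate polynomial F, then hands party i the row polynomial f_i(x) = F(x, i).

```python
import random

P = 97           # prime field modulus (illustrative)
N, T = 7, 2      # n parties and threshold t, with t < n/3

def rand_symmetric_bivariate(secret, t):
    """Random symmetric F(x, y), degree <= t in each variable, F(0, 0) = secret."""
    c = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            c[i][j] = c[j][i] = random.randrange(P)
    c[0][0] = secret % P
    return c

def eval_biv(c, x, y):
    return sum(c[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(len(c)) for j in range(len(c))) % P

def f_vss(c):
    """Check symmetry/degree-t-ness, give party i its row f_i(x) = F(x, i),
    represented here by its evaluations at x = 0..n."""
    t = len(c) - 1
    assert all(c[i][j] == c[j][i] for i in range(t + 1) for j in range(t + 1))
    return {i: [eval_biv(c, x, i) for x in range(N + 1)] for i in range(1, N + 1)}

def interp_at_zero(points):
    """Lagrange interpolation at 0 over GF(P); points is a list of (x, y)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rows = f_vss(rand_symmetric_bivariate(42, T))
# pairwise consistency: party i's evaluation at j equals party j's at i
assert all(rows[i][j] == rows[j][i] for i in rows for j in rows)
# the values f_i(0) = F(0, i) lie on the degree-t polynomial F(0, y),
# so any t + 1 of them recover the secret
assert interp_at_zero([(i, rows[i][0]) for i in range(1, T + 2)]) == 42
```

The symmetry of F is what gives the pairwise consistency checked above, and it is exactly this structure that the next part of the talk spreads out into a matrix.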
Now, let us focus on one of these sharings, and let us see what this distribution entails. Spreading out the bivariate polynomial into an n × n matrix, we see that it is a symmetric matrix. The ith party holds the ith row, and this being a symmetric matrix, obtaining the ith row entails obtaining the ith column. Now, evaluating the row polynomials at 0, we obtain the 0th column polynomial, which defines the first-level secret sharing of the secret. Next, the ith first-level share is further secret-shared using the ith column polynomial; these column polynomials are called the second-level sharings. So, in effect, the ith party receives the ith first-level share of the secret as well as the ith shares of all the first-level shares. The degree of both the first-level and the second-level sharings is n/3.

Next, moving on to the product of the secrets: the parties can locally multiply their univariate polynomials to obtain a secret sharing of x·y, but here both the first-level and the second-level sharings are with respect to degree-2n/3 polynomials. We call the ith party the ith column (or row) leader. The goal of every column leader is to publish a degree-reduced polynomial whose constant term matches that of the respective product polynomial. These constant terms define the first-level secret sharing of the product x·y. There are well-known degree-reduction functions, and we can turn these degree-reduction functions into linear functions via Beaver's trick and triple sharing. Then, the goal of publishing the degree-reduced polynomial reduces to running n SCGs for every leader.
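Two facts used here can be checked with a small script: local multiplication doubles the degree of the sharing, and a degree-2n/3 sharing among n parties still determines its constant term after up to n/3 erasures. All parameters and helper names below are illustrative assumptions.

```python
import random

P = 97      # prime field modulus (illustrative)
N = 7       # number of parties
D = N // 3  # degree of each input sharing; pointwise products have degree 2D

def share(secret, d):
    """Shamir-share `secret` with a random degree-d polynomial over GF(P)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(d)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)}

def interp_at_zero(points):
    """Lagrange interpolation at x = 0 over GF(P); points is a list of (x, y)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

x_sh, y_sh = share(6, D), share(9, D)
# local multiplication: a degree-2D sharing of x*y
prod = {i: x_sh[i] * y_sh[i] % P for i in x_sh}

# any 2D + 1 of the n points suffice, so n - (2D + 1) = D erasures are tolerated
surviving = sorted(prod)[:2 * D + 1]   # pretend the remaining D points were erased
assert interp_at_zero([(i, prod[i]) for i in surviving]) == (6 * 9) % P  # = 54
```

This is the erasure-resilience property that lets the protocol shrug off leaders whose degree-reduced polynomials never get released.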
For an honest leader, the corresponding degree-reduced polynomial will be correctly released, whereas for a corrupt leader, it can happen that we do not get the corresponding degree-reduced polynomial, but that is fine: here we use the observation of AKP20 that the first-level product sharing, which is of degree 2n/3, is n/3-erasure resilient. So, putting everything together, here is how our two-round protocol works. In the first round, apart from running the ideal VSS instances, the correlated-randomness setup for all the SCGs is run. In the second round, we run the SCGs and obtain the degree-reduced polynomials whose constant terms match those of the product polynomials. Then we take the constant terms and interpolate back the product x·y.

That brings me to the end of the talk. Thank you for listening, and bye.