Hi, everyone. This is joint work with Muthu Venkitasubramaniam and Mor Weiss. It's about the price of active security in cryptographic protocols.

Secure multi-party computation is a tool that allows a set of parties to interact directly with one another while obtaining the same security guarantees as when given access to a trusted party that computes the function for them. Specifically, any adversary that corrupts a subset of the parties cannot learn anything beyond the output of the computation. Security is proven in the presence of two central types of attackers. A passive adversary is a benign attacker that follows the protocol's instructions while trying to violate privacy, whereas the strongest and most realistic attack scenario is an active adversary that can follow an arbitrary attack strategy. Another aspect of defining security is the number of corrupted parties. In the honest majority setting, the adversary may corrupt fewer than half of the parties, whereas in the dishonest majority setting, all parties but one may be corrupted. Working in the latter model has the advantage that a party does not need to trust anyone but itself. In this talk, we will be focusing on the most challenging setting, namely the active, dishonest majority setting.

By now, we know that MPC is very useful and has many important applications, such as private electronic voting and auctions, privacy-preserving data mining, protecting cryptographic keys, and more. A common paradigm in designing MPC protocols is to first design a protocol with passive security and then amplify its security to active security. The focus of this talk is understanding the overhead of amplifying passive to active security. More specifically, we care about the cost in communication complexity. The theoretical goal is to design compilers that introduce essentially no cost for the active setting, keeping the same asymptotic efficiency as the underlying passive protocol. In practice, this goal translates into a very small constant overhead, as small as two.

To frame a clean theoretical question, we focus on designing modular protocols in which the computationally expensive cryptographic component is separated from the rest of the protocol and abstracted as an ideal functionality. Specifically, the cryptographic abstraction we consider in this work is a constant-round protocol for computing distributed multiplication. This abstraction allows us to capture many settings simultaneously. More concretely, we abstract distributed multiplication as an F_MULT functionality that is parameterized by a secret sharing scheme S over some field F: it takes shares of two secret inputs and produces shares of their product. For simplicity, I'm going to use additive secret sharing notation for the rest of the talk.

Given the previous discussion, I can now phrase our motivating question as follows: can actively secure protocols over an arbitrary field match the complexity of passively secure implementations, given only black-box access to F_MULT, in the dishonest and honest majority settings, and with an arbitrary number of parties?
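To make the abstraction concrete, here is a minimal Python sketch of how one can think of the ideal F_MULT functionality over additive shares. The prime modulus, the function names, and the two-party example are illustrative assumptions for this sketch only; the abstraction itself is parameterized by an arbitrary secret sharing scheme over an arbitrary field.

```python
import random

# Illustrative prime field; the abstraction works over an arbitrary field F.
P = 2**61 - 1

def additive_share(x, n):
    """Split x into n additive shares: x = sum of the shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

def f_mult(x_shares, y_shares):
    """Ideal F_MULT: takes additive shares of x and y and returns fresh
    additive shares of the product x*y. A real protocol realizes this
    interactively; the trusted functionality simply recomputes it."""
    x, y = reconstruct(x_shares), reconstruct(y_shares)
    return additive_share((x * y) % P, len(x_shares))

# Example: two parties hold shares of 3 and 5; F_MULT hands back shares of 15.
xs, ys = additive_share(3, 2), additive_share(5, 2)
zs = f_mult(xs, ys)
assert reconstruct(zs) == 15
```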
Let's overview the state-of-the-art solutions in this domain. Recall that we measure the communication overhead of achieving active over passive security, calculated as the number of calls to F_MULT per multiplication gate. In the two-party setting, this has already been achieved theoretically: in a recent work joint with Yuval, Antonio, and Muthu, we show how to reduce this constant to two over large fields using MPC-in-the-head techniques. In the Boolean category with an arbitrary number of parties, the current best multiplicative overhead over GMW is polylogarithmic in the circuit size and the statistical parameter s, while the overhead over passive Yao is on the order of s over log|C| due to the use of cut-and-choose. In the arithmetic category with an arbitrary number of parties, current techniques achieve constant communication overhead over GMW in the OLE hybrid for sufficiently large fields.

Given this state of the art, our main result is a compiler that makes a constant number of calls to any passive implementation of F_MULT. In practice, this constant can be brought down to two for large fields.

I will now explain our techniques. It's easier to start with the details of MPC-in-the-head and the IPS compiler. This is a powerful technique that was introduced more than a decade ago, where the idea is that active MPC can be obtained by combining two simpler and much weaker components: passive MPC with dishonest majority, also called the inner protocol, and active MPC with honest majority, also called the outer, or virtual, protocol. This approach broke barriers when it was introduced, achieving the best asymptotic complexity in this setting, but only for a small number of parties. Moreover, while it achieves good asymptotic efficiency, its practicality has not been well understood. Extending this technique directly would not work, as I'll show next, and pushing past the current bottlenecks requires a new approach.

At a high level, the IPS compiler works as follows. Consider a two-party scenario. The two parties P1 and P2 emulate an imaginary protocol with honest-majority security while running a passive protocol. This honest-majority protocol is carried out by a set of servers with no inputs to the computation, which means the parties must jointly compute the inner computations performed by the servers. Taking a closer look, this implies that the parties maintain additive shares of the state of each server throughout the protocol emulation. Therefore, additive computations are performed locally, while multiplicative operations require communication and are performed by calling F_MULT. The main question is how to ensure that the parties emulated the servers' computations correctly, meaning, how do we lift the security of this passive inner protocol to the active setting? For that, IPS introduced the watchlist mechanism, a beautiful idea in which the parties constantly watch each other to enforce honest behavior. This is implemented using oblivious transfer, where each party chooses a subset of servers for which it learns the additive shares of the other party, as well as the randomness used to run F_MULT. It can then verify that the computations with respect to this set of servers were performed correctly.

One of the main bottlenecks in making IPS practical is related to the number of virtual servers and the watchlist parameters, which raises the question of whether we want to use this approach at all. Let me illustrate this more precisely. There is tension between the privacy and the correctness properties with respect to the number of servers used for the outer protocol. Specifically, to ensure a negligible soundness error in the statistical parameter s, each party must watch on the order of s servers. On the other hand, the overall number of watched servers must be bounded by half the total number of servers; otherwise, the privacy of the honest-majority protocol will be violated. Given these restrictions, constant overhead is only possible for a constant number of parties.
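To get a feel for the numbers, here is a small back-of-the-envelope sketch of the watchlist constraint just described. The concrete choices (each party watching about s servers, the one-half privacy threshold) follow the discussion above; treat this as an illustration of the tension, not as the exact parameters of the IPS compiler.

```python
def min_virtual_servers(n_parties, s):
    """Each of the n parties watches on the order of s servers, and the union
    of all watchlists must stay below half of the servers to preserve the
    privacy of the honest-majority outer protocol."""
    total_watched = n_parties * s       # worst case: disjoint watchlists
    return 2 * total_watched + 1        # need total_watched < N / 2

# The required number of virtual servers grows linearly with the number of
# parties, which is why constant overhead is only possible for constantly
# many parties with this approach.
for n in (2, 8, 64, 1024):
    print(f"{n:>5} parties, s = 40  ->  at least {min_virtual_servers(n, 40)} servers")
```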
Therefore, instead of checking the parties' actions throughout the protocol execution, we suggest a new correctness mechanism that pushes this check towards the end of the execution, right before the parties reveal their output shares. Meaning, we will have a single set of servers to be chosen and watched by all parties using a coin-tossing protocol. However, there are several serious challenges when applying this approach, as an active adversary can easily break the passive security of the inner protocol and only get caught at the end; by that time, it will already have violated the privacy of the honest parties and stolen all their information. This is true even if we enhance the security of the inner protocol, since the adversary can deviate when emulating the actions of the servers in the outer protocol. In light of these challenges, how do we choose the inner protocol? The next observation is that the amount of work per party must be small. Keep in mind that prior instantiations of the outer protocol, such as the Damgård-Ishai protocol, will not work here, since they use global operations, such as degree reduction, that are expensive when emulated by the inner protocol. In this work we take a dual approach to IPS, where our outer protocol is tailor-made, while using different instantiations of the inner protocol with different security guarantees and efficiency analyses.

Towards introducing some technical details, I'll give a quick recap of Shamir's secret sharing. To share a secret s, choose a random polynomial p of degree t whose constant term equals s, and distribute its evaluations at the points 1 through n; these are the shares. We then have two useful security properties: privacy, where up to t evaluations do not reveal anything about s, and robustness, where modifying up to (n - t)/2 values does not affect the correctness of the secret reconstruction. One of the most influential ideas in scalable MPC is packed secret sharing, where instead of sharing a single secret, we share a block. This allows for constant amortized overhead, where many multiplications can be performed in parallel, similarly to SIMD operations in fully homomorphic encryption. On the other hand, we slightly lose on the parameters, since the degree of the polynomials is increased by the block size.

Okay, so let's start with a warm-up protocol that is based on the classic BGW protocol. Recall that this protocol has the parties secret-share their inputs using Shamir's scheme. Then, following the gate-by-gate paradigm, addition gates are computed locally due to the linearity of the Shamir scheme, whereas multiplication gates can be computed locally but require communication to reduce the polynomial degree. In contrast, we will be working with two layers of shares, where the first layer is Shamir's, as in BGW, and the second layer is additive secret sharing. Consequently, multiplication operations require communication, whereas operations such as degree reduction can be performed locally due to their linearity. Specifically, working with two layers of shares implies that the parties hold shares of a global view of the protocol execution.
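Here is a rough Python sketch of this two-layer sharing, under illustrative assumptions (a small prime field, two real parties, and Lagrange reconstruction standing in for an arbitrary public linear map). It demonstrates the point just made: once each party holds additive shares of every server's Shamir share, any public linear operation on the global view, such as additions or degree reduction, can be applied locally.

```python
import random

P = 97  # small illustrative prime field

def shamir_share(secret, t, n):
    """Degree-t polynomial with constant term `secret`, evaluated at 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def additive_share(x, m):
    """Second layer: split one Shamir share into m additive shares."""
    parts = [random.randrange(P) for _ in range(m - 1)]
    return parts + [(x - sum(parts)) % P]

def lagrange_at_zero(points):
    """Public reconstruction coefficients for evaluation points `points`."""
    coeffs = []
    for i in points:
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs.append(num * pow(den, -1, P) % P)
    return coeffs

# First layer: Shamir-share a secret among n virtual servers.
n, t, m = 5, 2, 2            # 5 servers, degree 2, 2 real parties
secret = 42
shamir = shamir_share(secret, t, n)

# Second layer: each real party holds an additive share of every Shamir share,
# i.e. the parties jointly hold a shared global view of all server states.
layers = [additive_share(s, m) for s in shamir]
party_view = [[layers[i][p] for i in range(n)] for p in range(m)]

# A public linear map (here: Lagrange reconstruction) is applied by each party
# locally to its additive shares of the global view; no communication needed.
lam = lagrange_at_zero(list(range(1, n + 1)))
local_results = [sum(l * v for l, v in zip(lam, view)) % P for view in party_view]
assert sum(local_results) % P == secret   # the local results recombine correctly
```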
I will now describe a high-level overview of our protocol. In the first phase, the parties secret-share their inputs using two layers of sharing: we start with Shamir secret sharing and then additively share each Shamir share. The brackets notation refers to additive sharing. Next, the parties evaluate the circuit gate by gate. Addition gates are computed locally; note that the parties obtain additive shares of the Shamir shares that correspond to the output value of each addition gate. Multiplication gates require communication: to compute the product of two Shamir shares, the parties communicate using F_MULT, which allows them to compute the cross products of the shares coming from the two input wires. Upon completing this multiplication phase, the parties hold a global view of the Shamir shares of all the servers, and they can now run the degree-reduction computation locally.

The description I gave didn't take care of malicious attacks, where at every phase of the computation a corrupted party may disturb the computation. To combat these attacks, we use tests that enforce correct behavior. These tests perform batched checks, and their overhead is independent of the circuit size.

So far this was a warm-up that corresponds to a variant of BGW. However, it does not give the complexity we want; in fact, the complexity is inflated by the statistical parameter s, just as in prior work. Specifically, the complexity is s times the passive overhead, because we need to replicate every gate at least s times to get a small statistical error. Similarly to prior analyses, to achieve constant communication overhead we use packed secret sharing: the secrets are arranged in blocks and then shared (I'll show a small sketch of packing at the end of this part). As I mentioned before, this technique played a central role in reducing the amortized communication complexity per party in large-scale MPC. When using packing, the secrets must be rearranged between layers according to the structure of the computed circuit. Using packing lets us eliminate the factor of s in the communication complexity, because the overhead is now amortized over a block of multiplications.

As a final point, let me explain the security required for realizing F_MULT. Obviously, we want to weaken its security requirement as much as possible, as this affects efficiency. We first observe that a defensibly private protocol is sufficient for our protocol: in the security proof, the simulator can extract the adversary's input based on its defense, where a defense is a so-called proof of correct behavior that includes the input and the randomness of the adversary. The second observation is that defensible privacy can be achieved by compiling a semi-honest protocol with a coin-tossing protocol and forcing the parties to use that randomness; this was already shown in previous work. What's new here is the observation that we can have a separate consistency check for the randomness used for generating random instances of F_MULT, such as random triples. In contrast to prior work, where all such instances must be correct, here we can tolerate up to K such errors, which will be caught with high probability by the consistency check in the online phase.
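As a companion to the packing discussion above, here is a hedged sketch of packed secret sharing; the small prime field, the choice of secret positions, and the parameters are illustrative only. One polynomial now hides a whole block of secrets, at the cost of raising the degree by the block size, which is what lets the cost be amortized over a block of multiplications.

```python
import random

P = 97  # illustrative prime field

def interpolate(points, x):
    """Evaluate at x the unique lowest-degree polynomial through `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def packed_share(block, t, n):
    """Hide a block of k secrets at positions -1, -2, ..., -k (mod P) in a
    polynomial of degree t + k - 1, and hand out its evaluations at 1..n.
    Compared to plain Shamir, the degree grows by the block size."""
    k = len(block)
    fixed = [(-(i + 1) % P, s) for i, s in enumerate(block)]
    fixed += [(n + 1 + j, random.randrange(P)) for j in range(t)]
    return [interpolate(fixed, x) for x in range(1, n + 1)]

def packed_reconstruct(shares, k):
    points = list(zip(range(1, len(shares) + 1), shares))
    return [interpolate(points, -(i + 1) % P) for i in range(k)]

# One sharing packs a whole block, so the per-secret cost is amortized.
block = [7, 13, 21, 2]
shares = packed_share(block, t=3, n=12)
assert packed_reconstruct(shares, len(block)) == block
```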
Let me wrap up with several open problems. First and foremost, understanding the concrete constant for small fields and AG codes; this is the major open problem for constant communication overhead. Second, can we push this approach to achieve better adaptively secure protocols by using an adaptively secure F_MULT, or maybe other features of F_MULT will give us other features in the overall compiler? Third, can we get constant rounds with constant overhead in the F_OT hybrid? Our protocol achieves that only in the F_OLE hybrid. And finally, we need more compilers with different features. With that, I'll conclude. Thank you very much.