Hello everyone! Welcome! Today I'm going to talk about information-theoretic two-round MPC without round collapsing, with adaptive security and more. Or, as I would prefer to call it, simple two-round MPC. This is joint work with Huijia Lin and Hoeteck Wee.

We all know MPC. Parties have private inputs; they talk and jointly compute a function. If some parties are corrupted by an adversary, they should learn nothing beyond the function output, and their joint view can be simulated. MPC has many different settings. In this work we consider a semi-honest adversary, we allow the adversary to adaptively choose the next party to corrupt, and we consider information-theoretic security for computing NC1 functions. The function can be either Boolean or arithmetic. The two are equally powerful in some sense, but emulating arithmetic computation by Boolean gates can be expensive. Our construction natively supports arithmetic computation and only needs black-box field access.

As for the model, we present two constructions in different models. Our construction in the plain model tolerates corruption of fewer than half the parties. The other construction lives in a correlated-randomness model, which I'll introduce later; it tolerates any number of corruptions.

The round complexity of semi-honest MPC has been studied in many prior works. By 2000, Ishai and Kushilevitz had already shown a three-round information-theoretic MPC. In recent years, people managed to achieve the optimal round complexity, which is two rounds: first by Garg et al. assuming iO, then by Mukherjee and Wichs assuming multi-key FHE. Then comes 2018, which was a fruitful year: Benhamouda-Lin and Garg-Srinivasan independently weakened the assumption to two-round OT, and there are many follow-up works here. In the honest-majority setting, also in 2018, Ananth et al. constructed two-round MPC assuming one-way functions. Then, in the same year, ABT and Garg et al. made it information-theoretic.
Our work develops techniques from IK02 and ABT18. We construct information-theoretic two-round MPC in two settings. One tolerates corruption of fewer than half the parties in the plain model. The other tolerates any number of corruptions in the OLE correlated-randomness model. This is a model where every pair of parties jointly holds an OLE correlation: they hold random field elements r_a and r_b respectively, and jointly hold an additive sharing of r_a times r_b. This is the arithmetic analog of an OT correlation.

The key contribution of our work is simplicity; you are going to see how simple our protocol is. Besides simplicity, we also achieve adaptive security with an explicit simulator (the star on the slide marks prior works that claim adaptive security without proof). Our construction supports arithmetic computation with black-box field access, and this partially explains why our construction is so simple; it is therefore also more efficient. For P/poly, there is a standard extension with black-box use of a PRG.

Our key technique is a direct construction without round collapsing. So, a natural question: what is round collapsing? Round collapsing is a technique used by previous two-round MPC. Say the function has degree 3, which we know is complete. It can be computed by a constant-round MPC such as BGW, but that takes more than two rounds, and we'd like to collapse the rounds. The first step is to write down the whole MPC as a single Boolean circuit. Then consider the garbling of this Boolean circuit. ABT18 made this brilliant observation: the garbled circuit is effectively a degree-2 function if the inputs and the randomness are locally preprocessed. I'll explain this in the next slides. And if it is a degree-2 function, it can be computed by two-round BGW. So that's the construction. The construction is gorgeous, but it's also quite complicated. ABT also provide a simpler high-level abstraction. So let's forget that this is a garbled circuit. Forget it; it's just an encoder.
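The OLE correlated randomness just described can be made concrete in a few lines. This is my own illustrative sketch (a dealer-style sampler over a small prime field of my choosing, not the paper's protocol): parties A and B receive random field elements r_a, r_b, together with additive shares of r_a times r_b.

```python
import random

P = 101  # a small prime field, chosen only for illustration

def sample_ole():
    # Dealer-style sampling of one OLE correlation between parties A and B.
    r_a = random.randrange(P)            # A's random field element
    r_b = random.randrange(P)            # B's random field element
    share_a = random.randrange(P)        # A's additive share of r_a * r_b
    share_b = (r_a * r_b - share_a) % P  # B's share completes the product
    return (r_a, share_a), (r_b, share_b)

(r_a, share_a), (r_b, share_b) = sample_ole()
# The two shares add up to the product of the two random elements:
assert (share_a + share_b) % P == (r_a * r_b) % P
```

Neither share on its own reveals anything about the other party's random element, which is what makes the correlation useful.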
An encoder takes the parties' private inputs and local randomness and outputs an encoding. A decoder maps the encoding to the function output. Multi-party randomized encoding, or MPRE for short, is the combination of such an encoder and decoder. An MPRE is correct if the decoded value always matches the function output. It is private if the encoding can be simulated from the function output. As you might recognize, so far this is just the definition of randomized encoding. For multi-party randomized encoding, the local randomness can also be simulated: give the simulator the private inputs of up to t parties, where t is the security threshold, and the simulator can simulate the local randomness of the corresponding parties. In this work we also consider adaptive security: give the simulator a private input, and it simulates the local randomness of the corresponding party; give the simulator the function output, and it simulates the encoding; and you can repeat this, asking for up to t parties.

We want to construct round-optimal MPC, so we hope the encoder has low degree. Unfortunately, Ishai and Kushilevitz proved that the degree of the encoder has to be at least three. But this is not the end of the story. Applebaum, Brakerski, and Tsabary come along and say: look, you can divide the encoder into local parts and a global part, and we construct an MPRE whose global part has degree 2. Once you have such a degree-2 MPRE, you are almost done. A theorem says that combining a degree-2 MPRE for NC1 with a two-round MPC computing degree-2 functions gives you a two-round MPC computing NC1. And here's the proof: parties individually sample the MPRE local randomness, then individually compute the local parts of the MPRE encoding. The global part of the MPRE encoding is computed by a two-round MPC protocol, and this is the only interaction. Finally, parties locally decode the output. That's it. Now come our results. With this theorem in mind, we just fill in the blanks.
In the honest-majority setting, we construct a degree-2 MPRE that tolerates corruption of fewer than half the parties. We combine it with a two-round MPC computing degree-2 functions, which we know is BGW. In the honest-minority setting, we construct a degree-2 MPRE that tolerates any number of corruptions using OLE correlations, and we construct a two-round MPC computing degree-2 functions, tolerating any number of corruptions, in the OLE correlated-randomness model; this is basically the arithmetic analog of GMW.

For the rest of the talk, I'll proceed as follows. First, I'll briefly review IK randomized encoding. Then I'll present our MPRE in the plain model and in the OLE correlated-randomness model.

IK randomized encoding works as follows. Any NC1 function can be evaluated as the determinant of a matrix in a canonical form. For example, xyz + s equals the determinant of this dimension-3 matrix. This is due to the connection between NC1 and branching programs. In IK randomized encoding, they multiply this matrix by a random matrix on the left and a random matrix on the right. The resulting matrix is the encoding. It is correct because the random matrices have determinant 1, so the determinant is preserved. It is private. It is also arithmetic: as you can see, it only uses black-box field operations. And, very importantly, the randomized encoding is a degree-3 function of the input and the randomness. As a corollary, it suffices to construct MPRE for degree-3 functions.

So here is a complete function: three parties hold x, y, z respectively, and the function outputs x times y times z plus some additive term. As mentioned, I will first consider the honest-majority setting. When I think of honest majority, the first thing that comes to mind is Shamir's secret sharing. So let the party holding x sample a random polynomial p whose constant term is x; this is a Shamir secret sharing of x. Similarly, let the party holding y sample a random polynomial q, a Shamir secret sharing of y.
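The determinant claim can be sanity-checked concretely. In this sketch (my own, over a small prime field of my choosing) xyz + s is written as the determinant of one valid canonical matrix, and multiplying by determinant-1 matrices preserves that value. Note this only demonstrates correctness; IK's privacy argument needs carefully chosen groups of randomizing matrices, not the arbitrary unit-triangular ones used here.

```python
import random

P = 101  # a small prime field F_101, for illustration only

def det3(m):
    # 3x3 determinant over F_P by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % P

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % P
             for j in range(3)] for i in range(3)]

def unit_upper():
    # unit upper-triangular matrix: determinant 1
    return [[1, random.randrange(P), random.randrange(P)],
            [0, 1, random.randrange(P)],
            [0, 0, 1]]

def unit_lower():
    # unit lower-triangular matrix: determinant 1
    return [[1, 0, 0],
            [random.randrange(P), 1, 0],
            [random.randrange(P), random.randrange(P), 1]]

x, y, z, s = 5, 7, 11, 3
M = [[x, P - 1, 0],   # P - 1 represents -1 in F_P
     [0, y, P - 1],
     [s, 0, z]]
assert det3(M) == (x * y * z + s) % P  # the canonical form computes xyz + s

enc = matmul(unit_lower(), matmul(M, unit_upper()))  # randomized encoding
assert det3(enc) == (x * y * z + s) % P  # decoding: take the determinant
```

Every entry of `enc` is a polynomial of low degree in the inputs and the random entries, which is the property the construction exploits.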
By standard analysis, the product of x and y can be linearly recovered from the products of the shares. Therefore the function output can be computed by the following formula. Shamir sharing also tells us that, since fewer than half of the parties may be corrupted, it's safe to let the i-th party learn p(i) and q(i). So let them learn these shares. Imagine that, magically, the i-th party gets p(i), q(i). Now each party can locally compute p(i) times q(i). After this local computation, the target function becomes a degree-2 polynomial over local information. It seems we are done; the problem is that parties won't magically get p(i), q(i). Though this doesn't work directly, we are making progress here. Consider all the monomials in the formula: we can compute them separately. So this reduces to a new complete function, and it suffices to construct an MPRE for p(i) times q(i) times z plus some linear term.

So let's take a closer look. Here p(i), q(i), and z are held by different parties. This looks the same as the initial complete function, but the i-th party gets the leakage p(i), q(i). What does the leakage mean? We formalize it as MPRE with leakage in our paper. Intuitively, it means we don't have to hide p(i), q(i) from the i-th party: we can give the adversary p(i), q(i) for free if the adversary corrupts the i-th party.

Let's see how the leakage can help us. Write down the new complete function as the determinant of this matrix, then apply IK randomized encoding and expand all the terms in the encoding matrix. Among them, observe that there is only one cubic term. So, the first question: how should we handle this degree-3 term? Let me delay the answer for a bit. Here's another question: who samples the randomness? Remember, an MPRE only has local randomness, so who should sample the random r1 up to r5? The naive way is to jointly sample the randomness, but that won't work. We observe a smarter way: let the i-th party sample r1 and r5 by himself. Why is this secure? Observe that, in the encoding, r1 and r5 are used to one-time-pad p(i) and q(i).
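The "linearly recovered from the products of the shares" step can be checked in code. A minimal sketch, with parameters of my choosing (5 parties, threshold 2, a small prime field): p times q has degree 2t, which is at most n - 1, so the n share-products p(i)q(i) determine p(0)q(0) = xy by Lagrange interpolation at zero.

```python
import random

P = 101     # small prime field, for illustration
n, t = 5, 2  # 5 parties, threshold t < n/2

def share(secret):
    # Shamir sharing: random degree-t polynomial with the secret at 0,
    # evaluated at points 1..n
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def lagrange_at_zero(points):
    # Lagrange coefficients for evaluating at 0 from the given x-coordinates
    coeffs = []
    for i in points:
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs.append(num * pow(den, P - 2, P) % P)  # den^-1 via Fermat
    return coeffs

x, y = 17, 23
px, qy = share(x), share(y)
lam = lagrange_at_zero(list(range(1, n + 1)))
# p*q has degree 2t = 4 <= n - 1, so the share-products recombine linearly:
assert sum(l * a * b for l, a, b in zip(lam, px, qy)) % P == (x * y) % P
```

This is exactly why, once party i holds p(i)q(i) locally, the target function is only a linear combination of locally computed values plus the remaining terms.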
Since p(i) and q(i) can be leaked to the i-th party, it's fine to let him sample r1 and r5. More formally, the only concern with letting the i-th party sample r1, r5 is that when he is corrupted, the adversary will learn r1, r5. But this is actually not a concern: if the adversary corrupts the i-th party, it gets p(i), q(i) for free as the leakage, and it also learns p(i) minus r1 and q(i) minus r5 from the encoding, so it can compute r1, r5 by itself.

If you buy this, we are ready to answer the first question. Since we can let the i-th party sample r1 and r5, he can locally compute r1 times r5. After this local computation, the only degree-3 term in the matrix becomes effectively degree 2. So we are done. Putting it all together, this is the randomized encoding for the complete function, where p and q are polynomials sampled by the parties holding x and y, and the i-th party also samples the corresponding r1, r5, then locally multiplies them to reduce the degree of the global computation. The rest of the randomness can be jointly sampled. That's it: the MPRE in one slide.

Next, I will present our MPRE using OLE correlated randomness. If you didn't get the last one, follow me from the next slides; I'll start from scratch, and this one is even simpler. We would like to construct an MPRE for the complete function xyz plus some linear term. The function output equals the determinant of this matrix. Apply IK randomized encoding by multiplying by random matrices on both sides, and expand the encoding matrix. Observe that there's only one degree-3 term, and we need to somehow handle it. Before that, let's first consider how to sample the randomness r1 up to r5. Here's the answer: let the party holding x sample r1, because the randomized encoding includes x minus r1, so the party holding x would learn r1 anyway. Isn't it then safe to just let her sample r1? Similarly, let the party holding y sample r5. Now back to the first question: how do we handle the degree-3 term?
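The "not a concern" argument is itself a tiny simulation, sketched below with illustrative variables of my own: whatever an adversary learns by corrupting party i (the leakage p(i), q(i) plus the one-time-padded encoding entries), it can already reconstruct r1 and r5, so letting party i sample them reveals nothing new.

```python
import random

P = 101  # small prime field, for illustration

p_i, q_i = random.randrange(P), random.randrange(P)  # shares leaked to party i
r1, r5 = random.randrange(P), random.randrange(P)    # sampled locally by party i

# Encoding entries in which r1, r5 act as one-time pads:
entry1 = (p_i - r1) % P
entry2 = (q_i - r5) % P

# An adversary corrupting party i sees the leakage (p_i, q_i) and the
# encoding entries, so it can reconstruct r1 and r5 on its own:
assert (p_i - entry1) % P == r1
assert (q_i - entry2) % P == r5
```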
The answer is: use OLE correlated randomness. The OLE correlation provides random r1, random r5, and an additive sharing of r1 times r5. By replacing r1 times r5 with its additive shares, the encoding matrix no longer has any degree-3 term. So that's it. In short, the MPRE outputs the following matrix, where the randomness r1 and r5 is sampled from the OLE correlation between the two parties, and the rest of the randomness can be jointly sampled.

Let me finish the talk by recapping our results. We construct information-theoretic two-round MPC in two settings. Both can be extended to P/poly with black-box use of a PRG. As you just saw, our constructions are arithmetic and only use black-box field operations. I didn't show the adaptive security in this talk; in our paper, there is an explicit, efficient, modular adaptive simulator that is also black-box in the field arithmetic. As for techniques, we present a new direct construction of degree-2 MPRE without round collapsing, and this formula explains most of it. So that's my talk. Thank you for listening.
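The degree-reduction step just described can be sketched schematically (my own toy, not the paper's exact encoding matrix): a cubic monomial of the form r1 * r5 * z splits, via the OLE shares s_a + s_b = r1 * r5, into s_a * z + s_b * z, where each summand has degree at most 2 and so lies within reach of a degree-2 global computation.

```python
import random

P = 101  # small prime field, for illustration

r1 = random.randrange(P)  # sampled by the party holding x, via OLE
r5 = random.randrange(P)  # sampled by the party holding y, via OLE
z = random.randrange(P)   # the third party's input

# OLE correlated randomness: additive shares of r1 * r5
s_a = random.randrange(P)
s_b = (r1 * r5 - s_a) % P

cubic = r1 * r5 * z % P            # the lone degree-3 term in the encoding
deg2sum = (s_a * z + s_b * z) % P  # the same value as two degree-2 terms
assert cubic == deg2sum
```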