Hey everyone, I'm Yinuo Zhang and I will be presenting our work, Reusable Two-Round MPC from LPN. This is joint work with James Bartusek, Sanjam Garg, and Akshayaram Srinivasan. So let's recall the setting of secure multi-party computation. In this setting, there are n different parties P1 to Pn with their respective inputs x1 to xn, and they want to jointly compute some circuit C. We assume that every party has access to a broadcast channel, and the adversary may corrupt up to n-1 parties; it could be either semi-honest or malicious. So why do we study the problem of two-round MPC? Well, ideally we want to minimize interaction, so we want as few rounds as possible. We also know that MPC in one round is impossible, so we focus on two-round MPC in our work. We are especially interested in the problem of reusable two-round MPC. In reusable two-round MPC, the interaction pattern is minimal: in particular, the first-round messages can be reused across an unbounded number of second-round executions. So let's look at an example. This is an example of two-round MPC. In the first round, the parties release all their first-round messages, and after seeing all the first-round messages, they are given some circuit C to compute. Then each party sends a single second-round message, and using all the second-round messages, they can learn the output of the circuit. Now let's say they want to compute a different circuit C2. We want them to reuse their previously sent first-round messages, so that each party only has to release a single new second-round message. Using these new second-round messages together with the previous first-round messages, they can learn the output of this new circuit C2. So there is some prior work on reusable two-round MPC, falling into two main frameworks. The first framework is based on multi-key FHE. It is known either in the CRS model or in the plain model.
We are more interested in the second framework, which is based on a round-compressing compiler. What that means is that we take a multi-round MPC protocol and collapse it into just two rounds. Reusable two-round MPC in this framework is known under obfuscation, bilinear maps, or the DDH assumption, and in this work we add the LPN assumption to this category. So here is our main result. Assuming learning parity with noise (LPN) with inverse polynomial noise rate, there exists a semi-honest reusable two-round MPC protocol in the plain model, and a maliciously secure reusable two-round MPC protocol in the CRS model. Here is our roadmap. In the first step, we build what we call a multi-party silent non-interactive secure computation protocol, or simply multi-party silent NISC, from LPN. This protocol is essentially a two-round MPC in a dealer pre-processing model, and it supports bounded polynomial-size circuits. It has the additional property that the first-round message size is independent of the circuit size. In the second step, we show how to take a multi-party silent NISC protocol and transform it into a bounded first-message-succinct (FMS) MPC protocol. The difference here is that we remove the trusted dealer from the protocol. Then we show how to go from a bounded FMS MPC protocol to a full FMS MPC protocol. The difference here, obviously, is that we now support unbounded polynomial-size circuits. Finally, we show that we can transform any FMS MPC protocol into reusable two-round MPC. This transformation was already done in previous works. So let's quickly recall the round-collapsing approach to two-round MPC. Ideally, we want to build our multi-party silent NISC protocol on top of this framework, so let's look at this framework in detail. Here is the template. In the first round, each pair of parties exchanges a set of OT1 messages.
Subsequently, they release a sequence of garbled circuits in the second round. Recall that in our multi-party silent NISC protocol, we want the first-round messages to be succinct. But here is a problem: in this two-round MPC approach, the first-round messages are not succinct. This is because we need enough OT correlations to compute the actual circuit. Therefore, the number of OT1 messages exchanged in the first round grows with the size of the circuit, making it not succinct. So here is the question: can we get a large number of OT correlations with small first-round communication? Before diving into this question, let's look closely at the OT correlations that we want. What is required in this two-round MPC protocol is the following OT correlations. Between each pair of parties, we want to set up correlations where the receiver holds a secret random vector v, and the sender holds a secret, which is a pair of garbled-circuit labels. Recall that many garbled circuits are sent in the second round, so these labels are secret. Every OT correlation is specified by a set of public parameters. The receiver's choice bit b is determined by a known function over both the public parameters and its secret random vector v. The sender's messages are always a pair of labels associated with this set of public parameters. In order to generate a large number of OT correlations with small first-round communication, we rely on a tool called a pseudorandom correlation generator, or PCG for short. We use a PCG for the correlated OT correlation. In this setting, there is a trusted dealer which distributes two seeds to the two parties. It gives the first seed s0 to the first party P0 and the second seed s1 to the second party P1. Then each party can locally expand its seed. In particular, the first party gets two vectors, v and x0.
The second party gets a constant Δ and a vector x1. For simplicity, let's assume all the vectors have dimension one, so every party essentially gets two elements. Now these four elements satisfy the linear equation x0 = x1 + v·Δ. So why is this useful? Well, we argue that you can view this linear relation as an OT correlation. What we do is ask the second party P1 to define two messages: the first message is the element x1, and the second message is the element x1 shifted by the constant Δ. Notice that the first party can simply use its element v as the choice bit. This is because when v is 0, x0 is exactly x1, so P0 is indeed holding the first message; when v is 1, x0 equals x1 + Δ, so P0 holds the second message. Now, previous work showed how to build pseudorandom correlation generators from the LPN assumption with inverse polynomial noise rate. Under this assumption, they can expand seeds of size lambda into a fixed polynomial number of correlations. Now we want to use this pseudorandom correlation generator to generate the OT correlations that are used in this two-round MPC framework. The sender wants to send a pair of labels. Meanwhile, the sender holds a pair of correlated OT strings after the PCG protocol: in particular, a message x1, and x1 plus the shift Δ. So how does the sender transmit its labels using these correlated strings? The standard approach is to first apply a correlation-robust hash function to break the correlation, and then use the hashed values to mask the labels and send them to the receiver. On the other hand, the receiver in the PCG protocol gets a vector v corresponding to its choice bits. But in the actual two-round MPC setting, the receiver needs a choice bit that is a function over the elements of this vector and some public parameters.
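As a small illustration of the step just described, here is a sketch of how a single correlated-OT string x0 = x1 ⊕ (v·Δ) can be turned into a chosen-message OT on two labels via a correlation-robust hash. This is a toy sketch, not the paper's construction: SHA-256 stands in for the correlation-robust hash, and all names (`H`, `xor`, `label0`, `label1`) are illustrative choices.

```python
import hashlib
import secrets

def H(x: bytes) -> bytes:
    """Stand-in for a correlation-robust hash (SHA-256, for illustration only)."""
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    # XOR truncated to the shorter input's length.
    return bytes(x ^ y for x, y in zip(a, b))

n = 16                                # byte length of each correlated string
delta = secrets.token_bytes(n)        # sender's global shift Δ
x1 = secrets.token_bytes(n)           # sender's random string
v = 1                                 # receiver's choice bit
x0 = xor(x1, delta) if v else x1      # receiver's string: x0 = x1 XOR v·Δ

# Sender: mask the two garbled-circuit labels with hashes of its two strings.
label0, label1 = secrets.token_bytes(n), secrets.token_bytes(n)
c0 = xor(label0, H(x1))
c1 = xor(label1, H(xor(x1, delta)))

# Receiver: since x0 equals x1 (if v=0) or x1 XOR Δ (if v=1), hashing x0
# unmasks exactly the label selected by v, and only that one.
recovered = xor(c1 if v else c0, H(x0))
assert recovered == (label1 if v else label0)
```

The correlation-robust hash is what makes the unchosen ciphertext look random to the receiver, since the receiver would need x1 ⊕ Δ (which it does not have) to open it.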
Our way to tackle this problem is to rewrite this choice bit b as a degree-two function over the vector v. Then we slightly tweak the PCG protocol so that it can generate correlations whose choice bits are degree-two functions over v. In particular, we generate the different parts of this degree-two function separately, and we simply compose them by adding them up. So here is our first step. We build a multi-party silent NISC protocol using two ingredients: the first is two-round MPC, and the second is a pseudorandom correlation generator. In order to use the PCG, we add a pre-processing phase where a trusted dealer interacts with the parties. In particular, the trusted dealer sets up OT correlations between every pair of parties. This is done by distributing PCG seeds between every two parties. When it comes to sending the first round of messages, every party no longer needs to include all of its OT1 messages in the first round. Instead, the parties obtain all the OT correlations by silently expanding their PCG seeds. Then the parties complete the protocol by sending the second round of messages of the two-round MPC. Let's now see why all the communication in the first round is very small. First, notice that all the seed sizes depend only on lambda. Second, the size of the first-round messages depends only on the input size, because we are now excluding all the OT1 messages. So far, all the communication and computation are indeed very small. Using this PCG approach, assuming LPN with inverse polynomial noise rate, we get a number of correlations that supports bounded polynomial-size computation. Now let's look at the security of this protocol. Intuitively, the only difference between this protocol and a two-round MPC is that we use a PCG to generate all the OT correlations. So ideally, we preserve the security of the underlying two-round MPC protocol.
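To make the "compose the parts by adding them up" idea concrete, here is a toy evaluation of a degree-two choice bit over GF(2): the constant, linear, and quadratic monomials are computed separately and composed by XOR. The coefficient names (`lin`, `quad`, `const`) are hypothetical stand-ins for the public parameters, not notation from the paper.

```python
def degree2_choice_bit(v, lin, quad, const):
    """Evaluate b = const + sum_i lin[i]*v[i] + sum_{i<j} quad[(i,j)]*v[i]*v[j] (mod 2).
    Each monomial is a separate 'part' of the function; composing the parts
    is just addition mod 2, i.e. XOR."""
    b = const
    for i, c in enumerate(lin):          # linear parts
        b ^= c & v[i]
    for (i, j), c in quad.items():       # quadratic parts
        b ^= c & v[i] & v[j]
    return b

v = [1, 0, 1]               # receiver's secret random vector
lin = [1, 1, 0]             # hypothetical public linear coefficients
quad = {(0, 2): 1}          # hypothetical public quadratic coefficient
b = degree2_choice_bit(v, lin, quad, const=0)
assert b == (v[0] ^ v[1] ^ (v[0] & v[2]))
```

Because the parts compose additively, each part can be generated by its own (tweaked) PCG instance and the shares simply summed, which is what makes this decomposition compatible with the silent-expansion approach.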
Intuitively, we could just replace all the pseudorandom OT correlations with truly random OT correlations; formally, we need to argue security via the reverse sampleability of the PCG. So here comes our second step. We build a bounded first-message-succinct MPC. The difference here is that we remove the pre-processing phase and the trusted dealer. We do this by taking a multi-party silent NISC protocol and cutting it in half. We implement the first half using a two-round MPC, which performs both the dealer pre-processing and the output of all the first-round messages. The parties then send the remaining second-round messages in the third round. So first, let's see why the first-round messages are very small. Recall that we previously argued that all the communication and computation up to the cutting point are very small. Therefore, intuitively, the two-round MPC is implementing a very small circuit, and so its first-round messages should be very small. This leads to a three-round MPC protocol. However, we don't want a three-round protocol. So what we do is apply the round-collapsing compiler again to this three-round protocol and squish all the third-round messages, bringing it back to just two rounds. It turns out that even after applying this round-collapsing compiler, the first-round messages remain succinct. So let's summarize what we have so far. We have a bounded first-message-succinct MPC protocol for bounded polynomial-size computation, and the first-round messages are independent of the circuit size. In our next step, we will show how to enable unbounded polynomial-size computation. So now we are going to build a first-message-succinct MPC protocol for unbounded polynomial-size circuits, and here is the high-level overview.
We can think of the bounded first-message-succinct MPC as analogous to a PRG with some fixed stretch. On the other hand, FMS MPC is like a PRG with arbitrary polynomial stretch. So we can use the idea of GGM, which builds a PRF from a PRG, to transform a PRG with fixed stretch into one with arbitrary polynomial stretch. And here is how it looks. Let Pi be the bounded FMS MPC protocol and let Pi(C) be this protocol computing the circuit C. Now, this protocol first takes the inputs x1, x2, and x3, and it outputs the first-round messages. So we can think of the first-round messages as the seed of a PRG. When it comes to evaluating the circuit, we can think of this process as expanding the PRG, so that the second-round messages are the output of this PRG. Now, how do we use the GGM approach in this framework? Well, we define an expansion circuit N which takes the same inputs and outputs two fresh copies of the first-round messages on the same inputs. This is equivalent to saying that we treat the second-round messages as another two fresh seeds of the PRG. But first we need to argue that our bounded FMS MPC can indeed compute this expansion circuit N. This holds because, on closer inspection, the circuit's size is linear in lambda, so it is indeed supported by our bounded FMS MPC. So this is good news. Now we can build a tree that consists of an arbitrary polynomial number of leaves, where each leaf is a bounded FMS MPC instance that supports a bounded polynomial-size circuit. So how do we actually evaluate some unbounded polynomial-size circuit once we have this tree? Here comes our next idea. We break down this large circuit into smaller chunks using randomized encodings. In particular, we break this unbounded polynomial-size circuit into an arbitrary polynomial number of smaller chunks, each of a fixed size.
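The GGM-style expansion above can be sketched in miniature: a fixed-stretch (length-doubling) PRG is applied level by level to grow one root seed into arbitrarily many pseudorandom leaves, just as the expansion circuit N turns one set of first-round messages into two fresh ones at every node. This is a toy analogy only; SHA-256 stands in for the PRG, and the names are illustrative.

```python
import hashlib

SEED_LEN = 32  # lambda, in bytes

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in (SHA-256 based, for illustration only)."""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_leaves(root_seed: bytes, num_leaves: int) -> list[bytes]:
    """Expand a single seed into num_leaves pseudorandom leaves by repeatedly
    doubling each node, GGM-style: fixed stretch per node, arbitrary
    polynomial stretch overall."""
    level = [root_seed]
    while len(level) < num_leaves:
        level = [half for seed in level for half in prg(seed)]
    return level[:num_leaves]

leaves = ggm_leaves(b"\x00" * SEED_LEN, 5)
assert len(leaves) == 5
```

In the actual construction, each "leaf seed" is a set of first-round messages for one bounded FMS MPC instance, and each leaf instance then evaluates one small randomized-encoding chunk of the large circuit.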
This means we can simply use the instances at the leaf level of this tree to compute every small chunk. And since every leaf supports bounded polynomial-size computation, we can indeed evaluate every small randomized encoding. So here is how it looks. This naturally leads to a multi-round protocol for computing the circuit C. Notice that the first round outputs the first-round messages at the root level. This is followed by outputting the second-round messages at the second level, then the second-round messages at the third level, and so on and so forth. Okay, so now we have a multi-round MPC protocol, but recall that we only want two rounds. So we will again apply the round-collapsing compiler and squish all the later-round messages. And it turns out that after applying this round-collapsing compiler, our first-round messages still remain succinct. So now we have achieved a first-message-succinct MPC for unbounded polynomial-size circuits. Our final step is to show that we can indeed go from FMS MPC to reusable two-round MPC, but such a transformation was already suggested in previous works. Somewhat interestingly, this transformation also involves building a tree. But the difference is that the tree this transformation builds is of exponential size. Another difference is that they go down only one root-to-leaf path in the tree, and that root-to-leaf path leads to a particular circuit being evaluated. So they essentially evaluate just one leaf in the tree, whereas in our previous transformation, we evaluate all the leaves of a polynomial-size tree. Okay, so to conclude, our main takeaway is that we can get reusable two-round MPC from the LPN assumption with inverse polynomial noise rate. To achieve this goal, we use pseudorandom correlation generators and any two-round MPC, both of which are already known from LPN.
Our techniques include garbled protocols, randomized encodings, and the GGM tree. Okay, that's it. Thanks everyone.