Okay, I'm going to discuss our work on reusable two-round multi-party computation from the DDH assumption. This is joint work with Sanjam Garg, Daniel Masny, and Pratyay Mukherjee. So first, some background on two-round MPC. We're interested in the setting where we have N parties, each with their own private input x_i, who wish to jointly and securely compute some circuit C over their inputs. We'll assume they have access to a broadcast channel. As for the adversary attacking the system, we're in the setting of static corruptions: the adversary picks which parties to corrupt before the protocol begins. And we're in the dishonest majority setting, allowing the adversary to corrupt up to all but one of the parties. For this reason we'll need to rely on computational assumptions, so we restrict the adversary to polynomial time. We'll consider the standard notions of semi-honest and malicious simulation-based security. So why study two-round MPC? Interaction, along with communication complexity and computational complexity, is one of the things you might want to optimize in MPC, and two-round MPC represents the best you can do in terms of minimizing interaction, since it's been known for a while that one-round MPC is impossible. Reusable two-round MPC takes this minimization of interaction a little further: here, the first-round message computed by each party may be reused across a potentially unbounded number of second-round executions, computing different circuits over the same inputs. In pictures, this looks like the following: in the first round, the parties just broadcast messages to each other, committing themselves to their particular inputs.
Then later, if they want to compute a circuit C, each party just has to release one second-round message. And if they then want to compute a second circuit C' over the same inputs, all they have to do is send one more message each, reusing their first-round messages, and they can keep doing this. In particular, if they wanted to compute M different circuits over their inputs, they only need M + 1 messages per party in total, rather than the 2M messages they'd need with regular two-round MPC. So in this way, interaction is minimized even further. Now, there are a number of prior works on two-round MPC, and also on reusable two-round MPC; I'll highlight the reusable ones in orange. Two paradigms exist for building two-round MPC. The first is based on fully homomorphic encryption: the first instantiation was given by Mukherjee and Wichs, and it was later improved by AJJM, who removed the need for a CRS against semi-honest adversaries. The second paradigm is based on a round-compressing compiler approach, first shown in the work of GGHR, who instantiated it from indistinguishability obfuscation. Since then, the assumptions needed to instantiate this approach have been improved, first to bilinear maps, and later to the minimal assumption of two-message oblivious transfer. A concurrent and independent work to ours, by Benhamouda and Lin (also in orange), shows how to make the bilinear-map-based approach reusable. Note that before their work, within this round-compressing compiler approach, it was only known how to maintain reusability using the strong primitive of iO. Our work also fits into this second approach, and we show how to weaken the assumption even further.
In particular, we show how to maintain reusability based just on the DDH assumption. In more detail: assuming DDH, there exists a semi-honest reusable two-round MPC in the plain model, and a maliciously secure reusable two-round MPC in the CRS model. Our starting point for the construction is the GS18 approach to two-round MPC, which roughly follows this template. In the first round, each pair of parties exchanges a set of OT1 messages between themselves. Then, in the second round, each party releases a sequence of garbled circuits, and these garbled circuits are used to communicate among the parties, in effect computing a multi-round MPC protocol; but the details of that step are not too important for us. What's more important is understanding what's going on in the first round. So we can ask: why is the GS18 protocol not already reusable? The issue is that the number of OT1 messages that have to be exchanged in the first round grows with the size of the circuit to be computed in the second round. For each gate in the circuit the parties want to compute, they need to exchange some number of OT1 messages, and these OT1 messages can't be reused, in particular because the garbled circuits actually release randomness used to generate them, so it would not be secure to reuse the same OT1 messages multiple times. Really, what's going on is that these OT1 messages are being used to set up useful correlations that the parties then take advantage of in the second round.
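As an aside on why random OT correlations are useful: the standard derandomization trick (not specific to this paper) turns one random OT correlation into one chosen-input OT with a single message in each direction. Here's a minimal Python sketch, with both roles folded into one function for readability; all names are mine, not from the paper:

```python
import secrets

def derandomize_ot(m0: bytes, m1: bytes, c: int, r0: bytes, r1: bytes, b: int) -> bytes:
    """Consume one random OT correlation ((r0, r1) for the sender,
    (b, r_b) for the receiver) to transfer the sender's chosen message
    m_c to a receiver with choice bit c."""
    rb = r1 if b else r0
    # Receiver -> sender: mask the real choice bit with the random one.
    d = c ^ b
    # Sender -> receiver: one-time-pad each message so that the receiver
    # can only remove the pad on the message it chose (y_i = m_i XOR r_{i XOR d}).
    y0 = bytes(x ^ y for x, y in zip(m0, r1 if d else r0))
    y1 = bytes(x ^ y for x, y in zip(m1, r0 if d else r1))
    # Receiver: unmask the chosen message with r_b (since c XOR d = b).
    yc = y1 if c else y0
    return bytes(x ^ y for x, y in zip(yc, rb))
```

The point is that the online phase is cheap and information-theoretic; all the cryptographic work went into setting up the random correlation beforehand.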
So intuitively, what we need to get some notion of reusability is the ability for the parties to communicate in the first round and then set up a potentially unbounded number of OT correlations for use in the second round, rather than a fixed number that depends on the size of their communication. That's our entry point: how can the parties, in just one round and without communicating too much, generate a huge number of OT correlations? Our main tool for doing this is the primitive of homomorphic secret sharing (HSS), introduced by Boyle, Gilboa, and Ishai (BGI) in 2016. It's a fairly simple primitive. For the next few minutes we'll be in a dealer model, where a trusted dealer starts the protocol by giving each party some parameters. For the specific case of HSS, the dealer has a secret s, and it secret-shares s to two parties, P0 and P1. Here's where the homomorphism comes in: say the parties want to compute some circuit C over the secret. There's an evaluation algorithm that each of them applies locally to their share, producing output shares s0' and s1'. The correctness, or homomorphism, property says that these output shares simply XOR to the value C(s). So neither party knows what s is, since the shares hide s, and yet they can homomorphically compute a function of s on their individual shares. BGI show how to do this from the DDH assumption as long as the circuit C to be computed is low depth, i.e., in NC1. One caveat of their work is that this correctness requirement, the reconstruction, actually fails with some inverse polynomial probability.
Ideally you'd want a negligible probability of failure, but they only achieve inverse polynomial, and this usually comes up and has to be dealt with in applications of HSS. Now, recall that what we want is to allow the parties to generate a huge number of OT correlations, and we're going to show how to do this from HSS. Another name for what we're doing is constructing a pseudorandom correlation generator, as introduced in the work of Boyle et al. First, what exactly is a random OT correlation? It's a correlation between two parties: the sender gets two random strings r0, r1, and the receiver gets a random bit b along with one of the two strings, r_b. To generate many of these from a small dealer message, here is what we do. The dealer chooses two PRF keys, one associated with the sender and one with the receiver. The sender's PRF has lambda-bit outputs, whereas the receiver's has one-bit outputs. The dealer uses HSS to share these two keys, and in addition it gives the sender's key to the sender and the receiver's key to the receiver. Now say the parties want to compute the i-th correlation, where i can range over some very large set. They use HSS evaluation to compute the following function under the hood of HSS: evaluate the sender's PRF on input i, and output either that string or the all-zero string, depending on the receiver's PRF evaluated at i. So if the receiver's PRF evaluated at i equals one, the result of the computation is just the sender's PRF value.
Otherwise, the result of the computation is the all-zero string. They evaluate this homomorphically to produce their output shares, and I claim these shares can be used to produce a random OT correlation. On the sender's side, one of the sender's strings is simply its output share, and the other is its output share XORed with its own PRF evaluated at i, which it can compute because it holds its own key. On the receiver's side, its string is just its output share, and its choice bit is its PRF value. To quickly check correctness, assuming for now that the HSS is perfectly correct: say b is zero, which means the parties really just computed the all-zero string under the hood of HSS. That means z0 is equal to z1, so in particular r_b equals r0, which is what we wanted. If b equals one, then z0 XOR z1 is the sender's PRF value, so in that case r_b equals r1, as desired. So that was a quick correctness check. The point is that for any i, in a potentially exponential-size range, the parties can do this HSS evaluation and derive a random OT correlation. Out of a very small dealer message, consisting just of shares of PRF keys, they can produce potentially exponentially many random OT correlations. Before moving on, a couple of small things to note. Recall that this HSS is only known for circuits in NC1, so this requires a PRF in NC1 to be computed under the HSS, and such a PRF is known from DDH by Naor and Reingold.
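The derivation above can be sketched in Python. This is a toy model, not the actual construction: SHA-256 stands in for the Naor-Reingold PRFs, and since implementing BGI's DDH-based HSS is out of scope, `toy_hss_eval` cheats by looking at both keys to produce XOR output shares, whereas real HSS evaluation is local to each party given only its own share. All function names are mine:

```python
import hashlib
import secrets

LAM = 16  # PRF output length in bytes ("lambda-bit strings")

def prf_s(key: bytes, i: int) -> bytes:
    # Sender's PRF, with lambda-bit outputs (SHA-256 as a stand-in PRF).
    return hashlib.sha256(b"S" + key + i.to_bytes(8, "big")).digest()[:LAM]

def prf_r(key: bytes, i: int) -> int:
    # Receiver's PRF, with one-bit outputs.
    return hashlib.sha256(b"R" + key + i.to_bytes(8, "big")).digest()[0] & 1

def toy_hss_eval(k_s: bytes, k_r: bytes, i: int):
    """Stand-in for HSS evaluation of the function from the talk:
    output PRF_S(k_s, i) if PRF_R(k_r, i) == 1, else the all-zero string.
    Returns XOR shares z0, z1 of that value.  (Real BGI HSS computes each
    share locally; this mock needs both keys, so it is illustration only.)"""
    f = prf_s(k_s, i) if prf_r(k_r, i) == 1 else bytes(LAM)
    z0 = secrets.token_bytes(LAM)
    z1 = bytes(a ^ b for a, b in zip(z0, f))
    return z0, z1

def ith_ot_correlation(k_s: bytes, k_r: bytes, i: int):
    """Derive the i-th random OT correlation from the output shares."""
    z0, z1 = toy_hss_eval(k_s, k_r, i)
    # Sender side: knows k_s and its output share z0.
    r0 = z0
    r1 = bytes(a ^ b for a, b in zip(z0, prf_s(k_s, i)))
    # Receiver side: knows k_r and its output share z1.
    b = prf_r(k_r, i)
    rb = z1
    return (r0, r1), (b, rb)
```

Tracing the correctness argument from the talk: if b = 0 then z0 = z1, so rb = r0; if b = 1 then z0 XOR z1 = PRF_S(k_s, i), so rb = r1.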
Another thing: to actually instantiate GS18, which I'm not getting into the details of, you need a little more structure than simply a bunch of random OT correlations; in particular, you need some correlations between the different receiver choice bits. I won't get into how to do that, but it just requires a slightly more complicated evaluation procedure here. So, given what we just discussed, which is a way for a dealer to distribute small seeds from which two parties can generate a large number of OT correlations, our eventual construction of two-round MPC proceeds in three steps. The first step is just plugging this HSS primitive directly into the GS18 template, and the primitive that comes out is what we call sharing-compact multi-party HSS. In this primitive we're still in the dealer model, but now with multiple parties: the dealer shares a secret x among, say, N parties, and if the parties decide to compute a circuit, each can release a single second message m_i, which can then be used to reconstruct the output. We instantiate this template by having the dealer perform, between each pair of parties, the HSS sharing of PRF keys presented on the last slide; that's what the d_i's consist of. The second-round messages m_i are just the GS18 second-round messages, where the OT correlations generated from the dealer messages are used to instantiate them. The key property of this primitive is that the dealer messages d_1 through d_N are small: in particular, independent of the size of the circuit the parties can compute in the second round.
You might think we even get something stronger, in the sense that the dealer messages are already reusable and can be used to compute many circuits in the second round. It turns out this is not true, and I won't go into the details of why, but the problem stems from the fact that the HSS reconstruction is not negligibly correct: there's an inverse polynomial error. That error translates into a security issue, which we have to fix, but the fix results in a primitive that only allows simulation for a single second-round computation. So what we get is the sharing-compact primitive, where the dealer shares are small and the parties can compute a single large circuit, but security is only guaranteed for one execution of the second round. Okay, that's the first step. Now there are really just two more challenges to solve to get our final result of reusable two-round MPC. First, of course, we want to remove the dealer, because we don't have a dealer in the usual two-round MPC setting. Then we also want to make the second round reusable. To remove the dealer, our second step, we take the sharing-compact HSS and turn it into a two-round MPC protocol that's not quite reusable, but satisfies a property we call first message succinctness. This is the same template as the last slide, only now there's a different input associated with each party, and the dealer shares the concatenation of all these inputs; so not really any difference. To turn this into an actual MPC, we first assume what I'm calling here a vanilla two-round MPC.
That is, a two-round MPC that doesn't satisfy any special properties such as first message succinctness or reusability; GS18, for example, gives such a two-round MPC. A very natural approach to removing the dealer is to simply use this MPC, call it Pi, to compute the dealer's functionality. The parties distributively compute the dealer functionality in the first two rounds, and then in a third round they broadcast their messages m_i. But we want two-round MPC, so we compress the third round of this approach into the second round using garbled circuits. At a very high level, this is what happens. The parties use Pi to compute not just the dealer's functionality: Pi computes the dealer messages d_1 through d_N, and then releases labels corresponding to those dealer messages. In parallel in the second round, each party outputs its own garbled circuit, which maps its dealer message d_i to its output message m_i. Now consider someone who wants to reconstruct the output. At the end of the second round, they have a bunch of labels and a bunch of garbled circuits, which they can combine to produce the output messages m_i, and then combine those to learn the output of the circuit. Again, this is a high-level overview, and I encourage you to see the paper for more details on this transformation. What we end up with is a two-round MPC with the key property that its first round, which you'll notice is just the first round of the protocol Pi, is small: all the communication and computation needed to compute that first round is independent of the size of the circuit the parties will compute in the second round. This is due to the fact that the dealer functionality is quite small.
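To illustrate the label mechanics at work here, a toy stand-in for a garbled circuit: labels for one specific input d let an evaluator learn only f(d). Real garbling works gate by gate with linear-size tables; this sketch instead builds an exponential-size table over all inputs (fine only for tiny inputs), keyed by hashes of input labels. The names and the hash-based encryption are my own simplifications, not the construction from the paper:

```python
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    # Hash helper used both as a key-derivation and as a one-time pad source.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def garble(f, n: int):
    """Toy 'garbled circuit' for a function f on n-bit inputs (f returns bytes).
    Returns a table plus, for each input position, a pair of labels."""
    labels = [(secrets.token_bytes(16), secrets.token_bytes(16)) for _ in range(n)]
    table = {}
    for x in range(2 ** n):
        bits = [(x >> j) & 1 for j in range(n)]
        key = H(*(labels[j][bits[j]] for j in range(n)))
        out = f(bits)
        pad = H(b"pad", key)[: len(out)]
        # Store the padded output under a tag derived from the same key.
        table[H(b"tag", key)] = bytes(a ^ b for a, b in zip(out, pad))
    return table, labels

def evaluate(table, input_labels):
    """Given one label per input bit, recover f on that input and nothing else."""
    key = H(*input_labels)
    ct = table[H(b"tag", key)]
    pad = H(b"pad", key)[: len(ct)]
    return bytes(a ^ b for a, b in zip(ct, pad))
```

In the transformation's terms: party i would garble the map from d_i to m_i, and the inner MPC Pi releases only the labels for the actual d_i, so evaluators can compute m_i without learning anything about other dealer messages.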
All it does is sample and distribute PRF keys and shares of PRF keys. So what we have is a first message succinct (FMS) MPC. In our final step, we show a generic transformation from FMS two-round MPC to reusable two-round MPC. Again at a very high level, here is the intuition for this step. One can think of an FMS MPC as a somewhat expanding object, similar to a PRG: some small amount of communication goes on in the first round, and the resulting correlations from that communication can be expanded to compute some large circuit in the second round. Reusable MPC can be seen as a strengthening of this, in the sense that those first-round correlations can be expanded to compute an unbounded number of second-round circuits, potentially exponentially many; so it can be seen as an exponentially expanding object, like a PRF. Given this, what we do is basically adapt the famous GGM tree-based construction of a PRF (a fully expanding object) from a PRG (a mildly expanding object). We adapt it to the MPC setting and show a tree-based approach to taking any FMS MPC and producing a reusable MPC. Again, I won't go into any more details here; you can see the paper for that. But it's essentially the GGM approach where, instead of having a PRG at each node, you essentially have an FMS MPC at each node, and we again collapse the whole tree into two rounds using garbled circuits. Good. So that completes the high-level overview of our construction, and I'll just quickly conclude.
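For reference, the GGM PRG-to-PRF tree that this step adapts looks like the following. SHA-256 stands in for a length-doubling PRG; in the paper's actual transformation each node is, roughly, an FMS MPC instance rather than a PRG call, and the whole tree is collapsed into two rounds with garbled circuits, so this sketch shows only the analogy:

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG stand-in: one 16-byte seed -> two 16-byte seeds.
    out = hashlib.sha256(b"prg" + seed).digest()
    return out[:16], out[16:]

def ggm_prf(key: bytes, x: int, depth: int) -> bytes:
    """GGM PRF: walk a binary tree of PRG expansions along the bits of x,
    most significant bit first.  The leaf seed is the PRF output, so a
    small key yields exponentially many pseudorandom values."""
    seed = key
    for level in range(depth - 1, -1, -1):
        left, right = prg(seed)
        seed = right if (x >> level) & 1 else left
    return seed
```

The design point mirrored in the paper: a single mildly expanding object at each node suffices, because the tree structure amplifies it into exponential expansion.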
So I guess one of the main takeaways is that our work shows one can achieve this really minimal interaction pattern of reusable two-round MPC without the heavy hammers of obfuscation or FHE, which prior to our work were the only tools known to imply it. Just to recap the tools we needed from prior work: we used HSS, a PRF in NC1, and vanilla two-round MPC, and all of these were previously constructed from DDH. In terms of techniques, this is another work in a pretty long line of works investigating garbled protocols, or "garbled circuits that talk," so it's another work using those ideas. It also combines that line with the recently introduced pseudorandom correlation generator primitive, and with what I'm calling a garbled tree: taking a tree-based approach and collapsing it via garbled circuits, an idea that has appeared in a few recent works. Then again, just to mention the concurrent independent works. The work of Benhamouda and Lin, as I said, achieves reusable MPC from pairings, which is a strictly stronger assumption than ours, but they get a somewhat stronger functionality in that the parties' first-round messages don't even depend on the number of parties in the system, whereas we assume the number of parties N is known beforehand. The other thing to note is the concurrent work of AJJM focusing on reusable two-round MPC from LWE; they use a very similar FMS-MPC-to-reusable-MPC transformation to ours, which is our step three. So that's all I wanted to say, and thanks for watching.