Hi everyone, thanks for tuning in. I'm going to talk about our work on going beyond dual execution: multi-party computation for functions with efficient verification. This is joint work with Carmit Hazay and abhi shelat. As you all know, secure multi-party computation allows a group of mutually distrustful parties to jointly compute some function F over their private inputs in such a way that they learn nothing beyond the output of the function. There are several settings in which multi-party computation, or MPC, has been explored. Two of the categories relevant to this work are the kind of corruption and the number of parties corrupted. The kind of corruption is the familiar distinction of semi-honest versus malicious, or passive versus active. In terms of the number of parties corrupted, there are two distinct categories: honest majority, where an adversary can corrupt only a minority of the participants, and the stronger setting of dishonest majority. Some of our results apply to honest majority, but for the most part our results here are for the strongest setting, which allows an active adversary to corrupt all but one of the parties. There are many applications of multi-party computation, and more recently applications of MPC to secure machine learning have been on the rise. A standard blueprint for designing secure multi-party computation with active security is to first design a protocol that is secure only against a passive adversary and then amplify it. This talk is about understanding the overhead of achieving active security over passive, or in other words, the overhead of step two in this blueprint. I'm going to start with the classic garbled circuit protocol of Yao. This is a two-party computation protocol for Boolean computations. It proceeds as follows. There is a garbler, Bob, and an evaluator, Alice. Bob garbles the function with his input encoded.
Alice and Bob engage in oblivious transfer, where Alice gets a garbled form of her input. Bob then transmits the garbled version of the function, and using the garbled input and the garbled circuit, Alice is able to evaluate the function on her input X. As such, this protocol already has some features against active corruption. What I've described is the passive version of the protocol, but it already achieves some security against active corruption. In fact, an adversary corrupting the evaluator cannot do anything beyond choosing the input it feeds to the oblivious transfer. On the other hand, if Bob is malicious, he can launch an attack: since he garbles the function, he can garble a different function (not quite arbitrary, but something other than the function intended for the computation) and make Alice evaluate it. So this protocol is secure against an active corruption of Alice, the evaluator, but is not secure against an active corruption of Bob, the garbler. Now, there have been a lot of works in the past decade trying to get concretely efficient versions of this compilation from passive Yao to active security. The best, at least asymptotically, is the state-of-the-art work of Hazay et al., which matches the passive protocol on several parameters: the number of rounds, the communication complexity, the number of calls to the underlying pseudorandom generator to build the garbled circuit, and the number of oblivious transfer calls. So the overhead from passive to active, when you look at communication, is constant, and this is nice. However, the computational overhead is still exorbitant, not just in this work but in all previous work: the computational overhead of getting active security is an order of magnitude higher than passive. So in this work, we are going to try to reduce this overhead.
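To make the garbling step concrete, here is a minimal toy sketch of garbling and evaluating a single AND gate. This is not Yao's actual construction (there is no point-and-permute, no optimization, and the hash-based encryption with a zero-byte validity tag is an illustrative choice), but it shows the structure: two random labels per wire, and a shuffled table in which each row encrypts the correct output label under one pair of input labels.

```python
import hashlib
import os
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garbler (Bob): pick two random labels per wire; for each input
    combination, encrypt the matching output-wire label under the pair
    of input labels that selects it."""
    labels = {w: (os.urandom(16), os.urandom(16)) for w in ('a', 'b', 'out')}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            pad = hashlib.sha256(labels['a'][va] + labels['b'][vb]).digest()
            # A trailing block of zero bytes lets the evaluator recognize
            # the one row her labels decrypt (a toy validity tag).
            plaintext = labels['out'][va & vb] + b'\x00' * 16
            table.append(xor_bytes(plaintext, pad))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, la, lb):
    """Evaluator (Alice): holds exactly one label per input wire; exactly
    one row decrypts to a plaintext with a valid zero tag."""
    pad = hashlib.sha256(la + lb).digest()
    for row in table:
        pt = xor_bytes(row, pad)
        if pt[16:] == b'\x00' * 16:
            return pt[:16]  # the output-wire label encoding a AND b
    raise ValueError("no row decrypted")
```

In the actual protocol, Alice obtains the label for her own input bit via oblivious transfer, and a full circuit chains many such gates through their wire labels.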
And the first step we are going to take is to concede to some leakage. The rationale is practicality: we want to apply multi-party computation to large computations, to big data, we want to do machine learning computations, and we want active security in many of these scenarios, but the overhead is still too high for this to be practical. So the goal here is to get active security with constant overhead, in both communication and computation, over passive security, where we concede some amount of leakage. In the context of leakage, one of the very first works toward an actively secure version of the Yao protocol is the beautiful work of Mohassel and Franklin, where they introduced the dual execution paradigm. They take the Yao protocol and come up with a new protocol which gives some form of security against active corruptions of both the garbler and the evaluator. Roughly, the idea is as follows: instead of running the protocol once, you run it twice, with the roles reversed in the two executions. Bob is the garbler in the first, Alice is the garbler in the second. As we just saw, this protocol is actively secure against the evaluator, which means that in at least one of these executions the garbler is honest and the result of the computation is correct. So if we run it twice, we know that at least one of the two executions must have produced the right answer. To fix the protocol, essentially what they do is reveal a masked form of the answer and then do an equality check. And we need to do this equality check with full security, meaning it should withstand active corruption of either participant.
We just check whether the results of the two executions were the same, and if so, we reveal the answer. As I said, the idea is that one of the answers is right, and the equality check ensures that the answer is revealed only when it is right. Why is this one bit of leakage? Well, the other computation, the one where the garbler is corrupted, could result in a bad evaluation, but the adversary gets one bit of information: whether the bad computation's result equals the actual answer. That is the one bit of information the adversary gets, and that is what is leaked. And as you can see, the overhead of this protocol is not just constant, it is two: you run the garbled circuit protocol twice. The equality check is only on the output, and this is a significantly simpler computation even though we need it with a higher security requirement. However, this approach of Mohassel and Franklin works only for the Yao protocol, which is in the two-party setting and, as we know, only for Boolean computations. So what is our goal? We want to extend this idea and do more. In fact, we start with a more stringent requirement: we want constant overhead, in fact overhead one, while keeping the acknowledged leakage at one bit. So what are our results? We have two main results. The first extends the basic Mohassel-Franklin dual execution paradigm: we show that if the function F satisfies an additional property that we call being G-efficiently verifiable, then the complexity of our protocol is a passive Yao execution of the function F plus a passive Yao execution of the function G.
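The dual-execution logic can be sketched as follows, with the two garbled-circuit executions abstracted into plain function calls and the secure equality check sketched with MACs (all names here are illustrative; in the real protocol the comparison is itself a fully secure two-party equality test on masked outputs, and the executions are actual garbled-circuit runs):

```python
import hashlib
import hmac
import os

def dual_execution(f, x, y, cheat=None):
    """Sketch of Mohassel-Franklin dual execution. `cheat`, if set, models a
    malicious garbler substituting a different function in one execution."""
    # Execution 1: Bob garbles, Alice evaluates. If Bob is honest, z1 = f(x, y).
    z1 = (cheat or f)(x, y)
    # Execution 2: roles reversed. If Alice is honest as garbler, z2 = f(x, y).
    z2 = f(x, y)
    # Secure equality check on masked outputs, sketched here via MAC tags;
    # the real check withstands active corruption of either party.
    k = os.urandom(32)
    t1 = hmac.new(k, bytes([z1]), hashlib.sha256).digest()
    t2 = hmac.new(k, bytes([z2]), hashlib.sha256).digest()
    if hmac.compare_digest(t1, t2):  # the adversary learns exactly this one bit
        return z1                    # outputs agree, so the revealed answer is right
    return None                      # abort without revealing anything
```

A cheating garbler learns only whether its substituted computation happened to agree with the honest one, which is exactly the one acknowledged bit of leakage.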
The second theorem is not just for Boolean functions F but for arbitrary functions with the same property of being G-efficiently verifiable, which I'll define in a minute. The complexity of our protocol is a weakly private execution of the function F plus an actively secure protocol for computing G. In both cases, what you should think of is that the protocol for F is the passive protocol and the protocol for G is the active protocol, so the overhead is essentially the secure computation we do for this function G. For the second theorem, we show that many classical passive protocols that we already know satisfy this weakly private property: the classic GMW protocol, BGW, CCD, Damgård-Ishai, all of these satisfy the weakly private notion. In fact, we choose a specific kind of weakly private notion that was introduced by Genkin et al. under the name of AMD circuits: passive protocols that are secure against active adversaries up to additive attacks. All these protocols fall under this category, and since all of them extend even to the multi-party setting, theorem two applies to Boolean and arithmetic computations, two-party and multi-party. Finally, we also extend this idea of weak privacy: we show that the passive implementation of the distributed garbling protocol of Beaver et al. in the OT-hybrid model is weakly private. What does this give us? We can now instantiate the weakly private protocol for F using the distributed garbling protocol for Boolean computations. The added benefit is that this is constant round, as opposed to using GMW in the multi-party setting, where the round complexity is the depth of the circuit.
Among all our results, this theorem three is in fact the most technically intensive result of our work. I won't talk much about it, but please look into the paper. All right, so these are our results, but I haven't yet said what G-efficiently verifiable means. We say a function F is efficiently verifiable by another function G if the complexity of G is significantly smaller than that of F, and G is a predicate that evaluates to one on inputs X, Y, and Z if and only if Z is the result of computing F on X, Y. So G is a checker for the result of the computation of F. There are many families of algorithms that are efficiently verifiable in this sense. Matrix multiplication can be checked using Freivalds' algorithm. Max-flow can be checked by looking at the min-cut. Linear programming computations can be checked via the complementary slackness conditions. A convex hull can be checked by verifying the convexity requirement at each point of the hull. In all of these, checking is an order of magnitude smaller, in terms of the input size or even the computation size, than computing the function itself. So let's talk about the first result. Here we extend the result of Mohassel and Franklin: we show that for a Boolean computation we can get a complexity of a garbled circuit for the function F plus a garbled circuit for the function G. And this is how we change the protocol. The first execution stays the same: Alice and Bob exchange a garbled circuit for the function F and compute it. The second computation, however, is not on the function F; instead it is on the function G. Having computed a masked form of the output in the first execution, we use G to check whether that computation is right.
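As an illustration of such a checker, Freivalds' algorithm verifies a claimed matrix product in quadratic time per repetition. This is a plain pure-Python sketch over the integers (a real instantiation would typically work over a finite field):

```python
import random

def freivalds_check(A, B, C, reps=20):
    """Probabilistically verify C == A*B with O(n^2) work per repetition,
    versus recomputing the product from scratch, which costs O(n^omega)."""
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 vector
        # Compare A*(B*r) with C*r: three matrix-vector products, O(n^2) each.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # C is definitely not A*B
    return True           # correct with probability >= 1 - 2**-reps
```

Each repetition catches a wrong product with probability at least one half, so a handful of repetitions drives the error probability down exponentially.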
And if the predicate returns true, then Alice returns the answer to Bob. Just as before, we know that the garbled circuit protocol is already secure against active corruption of the evaluator, so in one of these two computations the result must be right. If the first computation's result is right, then it doesn't matter what the second computation is: whatever is revealed at the end will be the right answer. Bob can still get one bit of leakage based on the second computation, and that is the best possible, because the check's result is just one bit of information. On the other hand, if the second computation is right, then no matter what happens in the first computation, the second computation returns the right verdict, and G returns one only if the first computation was correct. So this gives us a two-party computation which withstands active corruption of either party, with one bit of leakage. And as I said, G is smaller than F, so the overall complexity is that of the garbled circuit protocol for the function F plus the garbled circuit protocol for the function G. Now, moving on to theorem two: here we said that we can extend this if we have a weakly private protocol for the function F, and we consider an arbitrary function. I'm still showing the diagram for two parties, but it works even for multi-party. Let's instantiate it with one of the protocols before talking about our generalization. Assume the parties are executing the classic GMW protocol in the OT-hybrid model. They put in their inputs and learn shares of the result. Now, with shares of the result, instead of revealing the answer as is done in the passive protocol, before revealing the answer they check whether the result is right using an actively secure protocol for the function G.
This just extends what I said for dual execution in the previous slide with the Yao protocol. We first run the passive GMW protocol to learn shares of the answer, and then we check that the shared answer is right using an active protocol, and here we need a fully actively secure protocol for the function G. The idea is that no matter what happens before, this maliciously secure implementation of the function G will let the answer be revealed only if it is right. As before, this leads to one bit of leakage. And the complexity of the protocol is that of running the passive GMW protocol plus an actively secure protocol for the function G. This does not work just for GMW; all we need is for the first protocol to be weakly private. Intuitively, weakly private means that the protocol must already guarantee privacy against active adversaries; it does not have to guarantee correctness, only privacy. The concrete instantiation of this weakly private notion we use is the one introduced by Genkin et al.: we consider protocols that are secure against active adversaries up to additive attacks, meaning the adversary can add an arbitrary value to any of the wires in the computation. This allows us to instantiate the framework with various protocols: GMW, BGW, CCD, Damgård-Ishai, and so forth. And as I said, another technical theorem we show is that the passive protocol for distributed garbling by Beaver et al. in the OT-hybrid model is, in addition, weakly private, so we can plug it into this framework. As I said, our results hold only for functions that are efficiently verifiable, where the complexity of G is significantly smaller than that of F. It doesn't have to be very, very small; it just needs to be an order of magnitude smaller. And then we can apply any of the current concretely efficient actively secure protocols for G.
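The check-before-reveal step above can be sketched as follows. The sharing is standard 2-out-of-2 additive sharing; the function names and the modulus are illustrative choices, and G is evaluated in the clear here purely for illustration, whereas in the actual protocol G runs inside an actively secure computation so the output stays hidden until it is verified:

```python
import random

P = 2**61 - 1  # a prime modulus for the arithmetic sharing (illustrative choice)

def share(v):
    """2-out-of-2 additive secret sharing mod P."""
    s1 = random.randrange(P)
    return s1, (v - s1) % P

def open_if_valid(z1, z2, G, x, y):
    """Check-before-reveal: the weakly private protocol produced shares
    (z1, z2) of a claimed output z; an actively secure protocol for G
    decides whether z = F(x, y) before anything is opened."""
    z = (z1 + z2) % P
    if G(x, y, z):  # G(x, y, z) is true iff z is the correct output
        return z    # only a verified answer is ever revealed
    return None     # abort; the adversary learns at most this one bit
```

A corrupted share changes the reconstructed value, G rejects, and the protocol aborts without revealing anything beyond that single accept/reject bit.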
One application in this paradigm that we have worked out, which doesn't fall exactly into the theorem but is similar in spirit, is the perfect matching use case. There have been several works, at least in the algorithms literature, on algebraic approaches to matching and other matroid problems, and the work of Harvey stands out as one of the most efficient algorithms for perfect matching. In fact, we can cast Harvey's algorithm as a passive protocol where all the parties need is a passively secure implementation of matrix multiplication. Think of it this way: Alice and Bob give shares of two matrices and get back shares of the product. Given this box, you should think of it as replacing the standard oblivious transfer, and then you can run the protocol just like GMW in the OT-hybrid model, only in a matrix multiplication hybrid. This passive protocol has a communication complexity of order n squared, which is interesting because even computing perfect matching in the clear takes order n to the omega, where omega is the matrix multiplication constant. So the passive protocol is order n squared, and checking a perfect matching is order n squared. Applying our paradigm, we get an actively secure protocol for perfect matching with communication complexity order n squared, with one bit of leakage. It doesn't quite fall under the constant-overhead paradigm from before, but this idea of combining a passive protocol with a checker gives us a concretely efficient protocol for perfect matching. The protocol is weakly private, and we can amplify it at the cost of one bit of leakage because we can check the perfect matching at the end of the passive protocol. One thing I skimmed over, of course, is that I gave the complexity in the matrix multiplication hybrid.
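The checker for this use case is straightforward: given a claimed set of matching edges, verifying that it is a perfect matching takes time linear in the number of vertices and claimed edges, far below the cost of finding one. A minimal sketch, with the graph represented as adjacency sets (representation and names are illustrative):

```python
def is_perfect_matching(adj, matching):
    """Check that `matching` (a list of vertex pairs) is a perfect matching
    of the graph `adj` (a dict mapping each vertex to its neighbor set)."""
    matched = set()
    for u, v in matching:
        if v not in adj[u]:
            return False  # claimed edge is not in the graph
        if u in matched or v in matched:
            return False  # a vertex is used by more than one edge
        matched.update((u, v))
    return len(matched) == len(adj)  # every vertex must be covered
```

In the secure protocol, a check of exactly this form is what gets run after the passive, weakly private execution, before the matching is revealed.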
You have to implement this hybrid, and you can use essentially any additively homomorphic encryption scheme to realize the matrix multiplication hybrid with the complexity that you want. Thank you.
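As one illustrative way such a hybrid can be realized, here is a sketch of a single product-sharing call using textbook Paillier encryption with toy parameters (the scheme, parameters, and function names are illustrative, not the instantiation from the paper): Alice sends an encryption of her value, Bob uses the additive homomorphism to produce an encryption of the product plus a random mask, and the two end up with additive shares of the product.

```python
import math
import random

def keygen():
    """Toy Paillier keys; a real deployment uses large random primes."""
    p, q = 10007, 10009
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # since g = n + 1, L(g^lam mod n^2) = lam mod n
    return n, (lam, mu, n)

def enc(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2  # g^m * r^n mod n^2

def dec(sk, c):
    lam, mu, n = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

def product_shares(n, sk, x, y):
    """One product-sharing call: Alice holds x (and sk), Bob holds y; they
    end with additive shares of x*y mod n, and Bob never sees x."""
    cx = enc(n, x)                               # Alice -> Bob: Enc(x)
    r = random.randrange(n)                      # Bob's random mask
    c = pow(cx, y, n * n) * enc(n, r) % (n * n)  # Enc(x*y + r), homomorphically
    return dec(sk, c), (-r) % n                  # Alice's share, Bob's share
```

The two homomorphic identities used are Enc(a) * Enc(b) = Enc(a + b) and Enc(a)^k = Enc(k * a); applying them entrywise to encrypted matrices gives the matrix multiplication functionality the protocol assumes.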