This is joint work with Navneet Agarwal and Sanat Anand, who were undergraduate students at IIT Bombay. I'm talking about MPC, or secure multi-party computation. Let me just start by reminding you of the setting. There are a bunch of parties. They all have inputs, and they may all want to compute some function, so they could all potentially get outputs. To compute this function on their inputs, they run a protocol: they talk to each other. And the requirement is that they should learn nothing other than their own inputs and the outputs of the function. Whatever else they see in the protocol could have been simulated. It's a very fundamental model. Since the 80s, cryptographers have been working on this, and we know a lot about it. One of the foundational questions about MPC is: which functions admit secure multi-party computation protocols? You'd think that by now we should know everything about it. Unfortunately, even in the most basic setting for this problem, things are wide open. What do I mean by the most basic setting? Information-theoretic security, with no restrictions on the number of parties that can be corrupted. We do, of course, know a lot of very important, powerful positive results when you do put restrictions. But I would argue this is the most basic version of the problem, the first thing you would think of, if we didn't have all those positive results. We know a little bit in the two-party setting: for deterministic two-party functions, we do have a full characterization of what's possible and what's not. In this work, we are interested in more than two parties. In this talk I'll mostly talk about passive security, the honest-but-curious adversary setting. We do have results on UC security also, which I'll mention towards the end. To understand the feasibility question for MPC, we also introduce in this work some minimal, or simple, models of MPC for aggregating functionalities.
So what do we mean by aggregating functionalities? There's just one party who has an output, the aggregator, and all the other parties have inputs; they don't have any outputs. Simple models of MPC for these aggregating functionalities have been around for a while. One of the most influential works in this setting is by Feige, Kilian, and Naor from the 90s, who introduced the private simultaneous messages (PSM) model. In this model, all the input players send a single message to the output player, who has to compute the output. But we also want security, so for that they allow a trusted party, a coordinator, who can a priori send some correlated randomness to the parties. They also require security only against corruption of the output party; none of the input parties can be corrupted. More recently, Beimel et al. introduced another model called NIMPC, where they do allow any subset of parties to be corrupt. But they make another relaxation on security: namely, the adversary is allowed to learn not just the output of the function, but the residual function of the inputs of the honest players. That is to say, the adversary is allowed to learn the output of the function on all possible settings of the corrupt parties' inputs, with the inputs of the honest players fixed. So you keep the honest players' inputs fixed and run through all possible inputs of the corrupt players; that is allowed in the ideal world, given as legitimate information. In either of these models, we understand feasibility fully: in fact, every function can be securely computed in these models. So these models are not so much about the feasibility question; they're for asking questions like the computational complexity of these problems. In our case, we are interested in the feasibility question of MPC.
So we do, of course, talk about MPC, but we also introduce a new model called Unassisted NIMPC, or UNIMPC. How does it work? It's like NIMPC, but we don't have the coordinator, and we also don't allow the output party to learn the residual function. So it's just like an MPC protocol, but we do allow a two-phase computation. There's a first phase, before the inputs come in, where the input players can talk to each other. Then, when the inputs come in, they each just send a single message to the output player. That is unassisted NIMPC. Our most minimal model is what we call UNIMPC*, where in the pre-input phase the parties each send a single message to each other over private channels. So it's a single message each; then they get their inputs, and then they send a single message to the output player. That's our basic model. Why are these models relevant to the study of the feasibility question in MPC? A UNIMPC* protocol is automatically a UNIMPC protocol, which is automatically an MPC protocol. The security notion, the communication pattern, and the corruption model here are all consistent with what is allowed in MPC. So a protocol that's secure in either of these models is also an MPC protocol. Now let me actually show you the results we have before I explain them. This is a landscape, in the sense that every point here is supposed to be a function, a functionality. And we have these three sets: the set of functionalities which have MPC protocols, a subset of them which have UNIMPC protocols, and a subset of those which have UNIMPC* protocols. We don't fully understand what these sets are, even after our work, but what we can show is the following.
There is a class, which we'll define in a combinatorial or algebraic way, called CPS. I'll describe what CPS is in a bit. All functionalities with MPC protocols live inside this class. And on the other hand, we can sandwich all these classes from within: we have another class, CPSS, which I'll also describe in a second, also defined combinatorially or algebraically. And we show that the class of CPSS functionalities do actually have UNIMPC* protocols. So the gap is between these two things; there's just that extra, confounding S. Let me first tell you about these two results; we have a few more things in the paper which I'll mention towards the end. OK, but first I should really define CPS and CPSS. These are names we came up with; if you have seen these objects elsewhere, I'd be happy to know. They're very natural objects, I just don't know any standard name for them. So what is an (n, m)-CPS? CPS stands for commuting permutations system. You should think of n as the size of the output alphabet of the functions we'll be describing: 1 to n are the possible outputs of some function. And X_1 to X_m are the input sets of the m players. Each input is actually just a permutation of these n numbers: each element in X_i is a permutation on these n elements. To put it another way, X_i is a subset of the group of all permutations on n elements, the symmetric group S_n. OK, so far there is nothing commuting about it; permutations don't usually commute. The commuting part is the following. Suppose I took m permutations, one from each of these sets, and applied them to 1, where 1 is some designated special element in the output set. I can apply these m permutations one after the other to 1, and I'll get some other number in the range 1 to n.
That number should be the same even if I applied these permutations in a different order. Here ρ is some reordering of these m permutations: I apply them in a different order. Just to illustrate: π_1, π_2, π_3 are three permutations, colored differently because they're coming from different sets X_1, X_2, X_3. You apply them to 1, and it's the same as if you applied them in a different order; in any of the six orders, you'll get the same thing. That's the definition of commuting permutations. It's very simple. Just to point out: the commutativity is required only across the sets X_i, and also only for applications to that one value, 1. So as permutations, they may not commute with each other; but when applied to this one value, the order doesn't matter. That is the definition, and it will hopefully make a little more sense soon. But let me go ahead and explain. Oh, and a CPS is not a function; it's a system, a bunch of permutations. The function we associate with a CPS is the following: the inputs are these m permutations, one from each set, that is, one from each party, and the output of the function is just those m inputs evaluated on 1. That is the function associated with the CPS. And CPSS is just an extra requirement: the second S stands for subgroup system, which means each of these X_i should actually be a subgroup of the symmetric group. The symmetric group has a group structure, with composition as the group operation, and these sets should all be subgroups. That is the requirement for CPSS. OK, so where does this CPS thing come from? Let me explain it; I think it's actually fairly intuitive. Suppose somebody gave you an aggregating function F, and they said it has an MPC protocol, information-theoretically secure against passive corruption. Then let's consider a partition of the set of m players into two parts.
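To make the definition concrete, here is a minimal sketch in Python of a small CPS, using cyclic shifts (which gives the summation function mentioned later in the talk). The names `shift` and `apply_seq`, and the use of index 0 as the designated point instead of the talk's "1", are my own illustrative choices, not from the paper.

```python
from itertools import permutations as orderings

n = 5  # output alphabet size; index 0 plays the role of the talk's designated element "1"

def shift(k):
    """Cyclic permutation j -> (j + k) mod n, written as a tuple: pi[j] = image of j."""
    return tuple((j + k) % n for j in range(n))

def apply_seq(perms, x):
    """Apply a sequence of permutations to the point x, one after the other."""
    for p in perms:
        x = p[x]
    return x

# Three parties, each with input set X_i = all cyclic shifts. The shifts form
# a subgroup of S_n, so this is in fact a CPSS instance; a general CPS need
# not use subgroups.
X = [[shift(k) for k in range(n)] for _ in range(3)]

# Commuting condition: pick one permutation from each set; the value at the
# designated point must not depend on the order of application.
pi = (X[0][2], X[1][4], X[2][1])
vals = {apply_seq(order, 0) for order in orderings(pi)}
assert len(vals) == 1              # all 6 orders agree at the designated point
assert vals == {(2 + 4 + 1) % n}   # the associated function is addition mod n
```

Here the cyclic shifts happen to commute as permutations; the point of the definition is that a general CPS only needs agreement at the designated point, across the different sets.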
One part consists of just the output player, which I'm calling P_0 there, and one of the input players, P_i. The other part, R, is the remaining set of parties. Now think of this as a two-party setting between the set R and the other two players. It's not hard to see that if you have a secure protocol, the only thing the parties in R can effectively do, since they're not supposed to learn anything other than their own inputs, is to send out their residual function to the other two players, so that those two can evaluate the function correctly. So in any protocol in which R learns nothing, this is the only way it can work. But on the other hand, in a secure protocol these two parties, P_0 and P_i, should also learn nothing extra. What that means is that the residual function they have to learn is something they could have learned just from P_i's input and the output of the function: whatever the input and output reveal about the inputs of the set R is all that they should learn, even from the residual function. So the residual function is nothing more than what the input and output reveal to the pair P_0, P_i. And CPS captures exactly this condition. It may not look like that immediately, but if you play with it a little bit, you'll see that CPS exactly captures the condition that the residual function is for free, in the sense that it's already implied by just one input for P_i and the corresponding output, up to relabeling of inputs and outputs. One direction is very easy to see. Suppose I gave you a CPS, and consider the function applied to π_1 up to π_m. By commutativity, I can just apply π_i at the end: I apply all the other permutations first, and I'm calling their composition π_R. So the function is actually just π_i applied to π_R(1).
So π_R(1) is all the information you need to evaluate this function on any input for P_i; that is the residual function. And if I give you π_i(π_R(1)), since π_i is a permutation, I can compute π_R(1). So at least it's very easy to see that if I give you a CPS, it does have this property that the residual function is for free. The other direction takes more work: you need to go over things carefully, but that also works out. So that's one way to think of what CPS is doing: it is exactly that condition. Another way to look at it: there's a class we didn't give a name in the paper, so I'm just calling it NIMPC without leakage. Remember, in NIMPC the output party was allowed to get the leakage, the residual function. You could imagine a model where everything else is as in NIMPC, but this leakage is removed. And then, actually, CPS is an exact characterization of which functions are securely realizable in this model. I'll also mention one connection with another recent work of Halevi et al. from last year's TCC. They defined a notion called best-possible information-theoretic MPC, which I don't have time to go into right now, but the definition is actually very simple: it's the same as NIMPC, with the leakage to the aggregator allowed, but with the trusted coordinator removed. So in this slide we are keeping the trusted coordinator and removing the leakage; if you do the other one, you get this best-possible IT MPC. The reason I mention it is to show the connection here. So, to remind you, this is our landscape, about which I have not yet proven anything; I have just defined CPS and CPSS. And we can actually show that there is a gap between these two combinatorial classes: there are functions which are CPS but not CPSS; in fact, they do not embed into any CPSS.
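The "residual function for free" direction above can be sketched in a few lines. This is my own toy illustration with cyclic shifts on points 0..n-1 (0 standing in for the talk's "1"); the helper names `shift` and `inverse` are illustrative.

```python
n = 5  # output alphabet size; index 0 is the designated point

def shift(k):
    """Cyclic permutation j -> (j + k) mod n."""
    return tuple((j + k) % n for j in range(n))

def inverse(p):
    """Inverse of a permutation given as a tuple pi[j] = image of j."""
    inv = [0] * len(p)
    for j, image in enumerate(p):
        inv[image] = j
    return tuple(inv)

# Party i holds pi_i; the remaining parties' permutations compose (in some
# order, which doesn't matter at the designated point) to pi_R.
pi_i, pi_R = shift(3), shift(4)

output = pi_i[pi_R[0]]   # f(...) = pi_i applied last, by commutativity
residual = pi_R[0]       # pi_R(0) fully determines the residual function

# Knowing its own input and the output, party i recovers the residual,
# since pi_i is invertible: the residual really is "for free".
assert inverse(pi_i)[output] == residual
```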
And the thing about best-possible IT MPC is that inside CPS, the residual function is for free. So there is no distinction between MPC and best-possible IT MPC inside CPS. So if there is a gap between CPSS and MPC, that is also a set of functions which do not have best-possible IT MPC protocols. We don't know if this gap exists or not; it's an open problem whether best-possible IT MPC is possible for all functions. If this gap exists, then it is not. So now let me tell you about CPSS: why is it inside UNIMPC*? That is actually a protocol. We are going to give a UNIMPC* protocol, and I think it's a cute protocol which generalizes the very natural and simple protocol for summation that you might have all seen. So let me tell you what the protocol is. Suppose this is your function, defined in terms of m subgroups of S_n, the symmetric group, and suppose there are three parties, so m equals three. Before getting their inputs, they are going to talk to each other a little bit. Here's what they will do. Each party, in this case P_1, picks three elements, one each from the groups G_1, G_2, and G_3; that's why they're colored differently. They're picked randomly, subject to the condition that 1 is a fixed point of their composition. So if you apply these permutations to 1, in whatever order (that doesn't matter), you'll get back 1 once you've applied all three of them. P_1 picks such permutations in its head, and each of the others, P_2 and P_3, does that independently. They've not communicated yet. What they do now is: all the group-one permutations are sent to P_1, the group-two permutations to P_2, and the group-three permutations to P_3. That's all the communication they do before the inputs come in. Then they get their inputs, π_1, π_2, π_3, and they do a secret sharing as follows, computing these additional permutations σ_{i,0}.
So each P_i computes σ_{i,0} such that, if you composed all of them together, you'd get π_i. Why is this possible? Because it's a group: the blue things all live in a subgroup of S_n, so you'll always be able to find such an element. Each party computes this and sends it to the aggregator, the output party, who does something somewhat strange: it takes the permutations it received, applies them to 1, and outputs whatever comes out. Let me at least tell you why it's correct. The correct thing would have been the following. What you would really like to compute is π_1 up to π_m applied to 1, where each π_i is the composition of the σ's in row i. That's what you'd like to do: compose all the σ's in the bottom row to get π_1, then compose with the next row, and so forth, and apply all of them to 1. What our protocol does instead is this somewhat strange thing: it just applies the received permutations to 1. OK, so let's try to make sense of it. We know a little bit about how these other elements look: each party's triple keeps 1 as a fixed point. So at least we can insert those triples without changing the value at 1, and write it this way. But still, the two expressions don't look the same. So let me show you that they are actually the same, because of the commuting permutations condition. I'm going to focus on these two columns for now. The order you start with is the order in which they are written. What you can do is bubble this green element all the way up, relying on the fact that when applied to 1, the order doesn't matter across groups. So you can bubble up that green element all the way to the other green one, and then, since they live in a subgroup, you can put them together. You can do this repeatedly, bubbling things up appropriately.
And you will get this other order, which is, if you notice, the same as: first do these two, then compose with these two, then compose with these two. Within a group, you don't change the order; but across the groups, you can move elements around. You can work this out, and do it more cleanly; it works out. And it can also be shown to be secure. So that is all about passive security. Let me spend one slide on UC security. You might have heard the myth that, without any setups, there's virtually nothing you can do, no interesting functions. But that is for two parties. If you go beyond two parties, there are interesting functions you can do. Well, there are some limitations: you can only do either aggregating functions, like we've been talking about, or a dual kind, disseminating functions, where one party has an input and all the other parties get outputs. You might have seen that broadcast with abort has a UC-secure protocol; that was observed by Goldwasser and Lindell. And there are also some other special cases of disseminating functions that have shown up. Our first result here is that actually every disseminating function has a UC-secure protocol, not just these few. So that is about disseminating functions. How about aggregating functions? There, our protocol doesn't give UC security for all of CPSS, but it works for something called complete CPSS. Complete means it's like a Latin square, where everybody has n inputs: everybody has n permutations, where n is the output alphabet size. And it's not just functions which are complete CPSS: even functions which embed into a complete CPSS have a UC-secure protocol.
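The UNIMPC* protocol above can be sketched end to end for the summation special case it generalizes, with each group G_i being the cyclic shifts of 0..n-1 (so composing permutations is adding offsets mod n, and "fixing the point" means offsets summing to 0). This is a minimal sketch under those assumptions; variable names like `received` and `sigmas` are mine.

```python
import random

n, m = 5, 3         # output alphabet size, number of input parties
inputs = [2, 4, 1]  # party i's input is the shift by inputs[i]; f = sum mod n here

def shift(k):
    """Cyclic permutation j -> (j + k) mod n; the shifts form a subgroup of S_n."""
    return tuple((j + k) % n for j in range(n))

# Pre-input phase: each party j picks one random group element per party,
# subject to their composition fixing the designated point (for shifts:
# offsets summing to 0 mod n), and sends the i-th one to party i.
received = [[] for _ in range(m)]
for j in range(m):
    offsets = [random.randrange(n) for _ in range(m - 1)]
    offsets.append(-sum(offsets) % n)   # composition of party j's m shifts fixes 0
    for i in range(m):
        received[i].append(offsets[i])

# Input phase: party i computes sigma_i so that composing sigma_i with all the
# shares it received yields its input, then sends only sigma_i to the aggregator.
sigmas = [(inputs[i] - sum(received[i])) % n for i in range(m)]

# Aggregator: apply the m received permutations to the designated point 0.
x = 0
for s in sigmas:
    x = shift(s)[x]
assert x == sum(inputs) % n  # correctness: the output is f(pi_1, ..., pi_m)
```

Each σ_i alone is uniformly random in the group, which is where the secrecy in this toy case comes from; the general protocol replaces the shifts by arbitrary subgroups of S_n and uses the commuting condition for correctness.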
And that requires, for UC security, showing that if you restrict the domain to a subset, you can still get UC security; a new protocol is needed for that. Let me just finish off with this one thing. UC security is a nice notion; you think of it as very strong security. But it has some idiosyncrasies: it doesn't always imply security against passive corruption. So we go ahead and define, and I suppose it's a folklore notion, but I don't think it has a name, what we just call strong security. A strongly secure protocol is one which is UC secure and secure against passive corruption simultaneously. And from what I've said, plus a few small observations, we pretty much know which functions have strong MPC at this point. All disseminating functions have it; for aggregating functions we have a necessary condition and a sufficient condition, but they are not the same condition. And otherwise, if a function is not disseminating or aggregating, it's not possible. So the gap that is left is CPS functions which are not complete CPSS. OK, so to conclude: we have these new algebraic structures. If you look at the paper, you'll find a few more things: some cute examples and some open problems about these algebraic structures. We also, of course, give connections to all these new models of computation, as well as to MPC itself, the standard model. And at the end of all of this, the full characterization, the exact characterization, remains open for the standard MPC model and even for some of the new models we have. All right, that's all. We have time for one short question. OK, so let's thank the speaker again.