Thank you, and thanks to everyone for attending this session. The whole session is about secure computation, so here's a standard first definition slide. Secure computation addresses the problem of several mutually distrusting parties who want to carry out an agreed-upon computation. They each have inputs to the computation, they each get outputs from the computation, and we want to enforce some security guarantees, such as privacy of inputs and independence of inputs, even in the presence of cheating parties. One thing I want to point out is that the computation they carry out could be something simple and natural, like a function evaluation; set intersection is a concrete example. But it could also be something randomized, like a coin toss, or even an interactive computation, like a poker game, where there are inputs and outputs over many rounds. We have good news about secure computation: we have the right framework for talking about the security of secure computation protocols in complicated contexts like the internet. It's the UC model, due to Canetti. But the bad news is that it's a really demanding security model, and in fact we cannot achieve this stringent level of security for most things that we care about. So when life gives cryptographers bad news, we can either give up or change the rules of the game. We usually change the rules of the game. We can't get UC security, so let's tweak the model a little so that we can get something achievable. There are a lot of ways you could consider tweaking the definition of UC security. You could make assumptions about the underlying network, or tweak the computational model of the adversaries in various ways; these are all good options. You could also consider allowing the parties access to a trusted setup: a bit commitment functionality, say, or oblivious transfer, or a common reference string.
So when I say trusted setup, think of one of these examples. This talk will focus on trusted setups: in particular, what happens to the UC framework when we add a trusted setup into the mix? Do we get more, or do we not get anything? The fundamental question is this: you have a trusted setup in mind, and you wonder how useful it is as a setup. It may be that the functionality you have in mind doesn't give you any more than you could have gotten without a setup. That would be a useless setup; it doesn't get you anything, so we haven't tweaked the model enough to get more feasibility results. It's easy to see that a functionality is useless if and only if it already has a UC-secure protocol without setups. On the other extreme, it may be that when you add the trusted setup, you suddenly get secure protocols for everything you can imagine. That's really good, and we'll call a setup complete if that's the case. But these are only two extremes, and it may be that you're thinking about a setup with some intermediate level of power. Who knows? So that's the context we're working in. We already know which two-party setups are useless; that's previous work by myself and Manoj. This talk addresses the question of which two-party setups are complete. In this paper there's an almost complete characterization. It's a little messier, just because completeness is a messier concept. But when you take these two characterizations and look at them side by side, you get a picture that looks like this. This is the universe of all secure computation functionalities. You've got the useless ones at the bottom, and we know exactly which ones those are. You've got the complete ones up at the top. And then there's a small region in between, and it's fuzzy: I don't know what's in there, but it exists.
But I claim that it's a small region where strange creatures live. One more thing I want to say about these characterizations is that both use the same technical framework, and they apply to arbitrary functionalities. I'll put that in contrast with some previous work that drew a qualitatively similar picture, but whose techniques were restricted to a very small class of functionalities, namely deterministic, constant-size ones. In the UC model we can talk about many more functionalities than just deterministic constant-size ones, so that's the technical increment. How do these characterizations work? I have to describe a new game to you, called the splitting game. Take your favorite functionality F; we're going to play a splitting game in which F has some role. There are two parties. There's Z, the environment. And there are two interactions. In the left interaction, it's just the environment talking to a single instance of the functionality, as Alice on one side and as Bob on the other. In the right interaction, we have two independent instances of the functionality. We have the same environment, but we also have this other player, T; think of T as the synchronizer. The instances of F are hooked up in opposite polarities, so that Z is still talking as Alice on one side and as Bob on the other. Think of this as a two-party game: Z, the environment, wants the left and right interactions to look different, and T wants them to look the same. Let's call delta the difference between the two interactions, where the output of an interaction is the output of the environment. So delta is the payoff for the environment: the environment wants delta to be high, because he wants the two interactions to look different.
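To make the structure of the game concrete, here's a toy sketch of my own (not from the talk's slides) for the simplest possible F, a functionality that just forwards Alice's input to Bob. For this F, the synchronizer T has an easy winning strategy: relay the message unchanged, so the two interactions are literally identical.

```python
import os

def F_forward(alice_input: bytes) -> bytes:
    """A toy functionality: Bob simply learns Alice's input."""
    return alice_input

def left_interaction(z_input: bytes) -> bytes:
    # Left picture: Z plays Alice into one instance of F and reads Bob's view.
    return F_forward(z_input)

def right_interaction(z_input: bytes, T) -> bytes:
    # Right picture: two independent instances of F in opposite polarities,
    # with the synchronizer T sitting in the middle.
    mid = F_forward(z_input)   # left instance: Z (as Alice) -> T (as Bob)
    return F_forward(T(mid))   # right instance: T (as Alice) -> Z (as Bob)

# T's winning strategy for this particular F: just relay.
relay_T = lambda m: m

x = os.urandom(8)
assert left_interaction(x) == right_interaction(x, relay_T)
```

So a plain forwarding functionality is splittable in the sense defined next: no environment can distinguish the two pictures.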
With this game in hand, we say that F is splittable if there's a winning strategy for T: a way for T to behave that fools all environments Z into seeing these two pictures as the same. Conversely, I'll say that F is strongly unsplittable if there's a winning strategy for Z: a way for Z to send inputs and examine outputs such that, no matter what T does, Z can tell the difference between the two interactions. You can see the quantifiers there if you really care. It may be that neither player has a winning strategy. It could be like playing the game of who can name the biggest number: played simultaneously, that game has no winning strategy for either player. So you can have weird functionalities with no winning strategy, but for most things, one player or the other has a winning strategy in this game. Notice that I didn't need to say anything special about F to draw these pictures; I didn't have to look inside F. Any F that I can consider in the UC framework can be put into this picture, so these concepts apply to arbitrary functionalities. Just to reinforce our understanding, here's a pop quiz. This functionality takes in x from Alice and gives f(x) to Bob, where f is a one-way function. Is this functionality splittable, or strongly unsplittable? We have to consider the two interactions. I'll help: consider an environment that picks a random x, sends it as Alice, gets back a string, and asks, did I get back f(x)? The environment can know what f is supposed to be. Now think about what T has to do. T wants the two pictures to look the same: he gets f(x) on his left, and he has to feed a preimage of f(x) into the instance on his right. He has to invert f, which we know is not possible; it's a one-way function. So the strategy I described is a winning strategy for Z: on the left-hand side he always says yes, and on the right-hand side he almost always says no.
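The environment's winning strategy in the pop quiz can be sketched the same way (again my own toy code, with SHA-256 standing in for the one-way function):

```python
import hashlib
import os

def f(x: bytes) -> bytes:
    # SHA-256 standing in for a one-way function
    return hashlib.sha256(x).digest()

def z_left() -> bool:
    # Left interaction: Z sends a random x as Alice; as Bob he sees f(x).
    x = os.urandom(16)
    bob_view = f(x)           # the single instance of F computes f(x)
    return bob_view == f(x)   # Z's check always passes here

def z_right(T) -> bool:
    # Right interaction: two instances of F with T in between.
    x = os.urandom(16)
    mid = f(x)                # left instance hands f(x) to T
    x_prime = T(mid)          # T must feed SOME preimage into the right instance
    bob_view = f(x_prime)     # right instance hands f(x') to Z as Bob
    return bob_view == f(x)   # passes only if T managed to invert f

# Any efficient T is stuck; for instance, a guessing T fails almost surely.
guessing_T = lambda mid: os.urandom(16)
```

Here `z_left()` always returns True, while `z_right(guessing_T)` returns False except with negligible probability, which is exactly the "yes on the left, no on the right" behavior described above.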
A winning strategy for Z means F is strongly unsplittable. So we've reinforced our understanding of the concept, and this example will come back a little later. How does this relate to our question about the power of trusted setups in the UC framework? Well, splittability was originally introduced, in the previous work I mentioned, as a complete characterization of useless setups. That's the blue region here: F is useless if and only if it is splittable. The converse statement comes with too much fine print, so I'll restrict myself to saying things that I know are true, because I don't want to lie too much. In this talk, I'll describe this part of the result: if F is strongly unsplittable, then F is complete. I put a star over it because if F is reactive there's some more fine print that I don't want to get into; just know that it's there. We're going to stick to non-reactive Fs, for which the statement is true as stated. I'll also talk about the converse direction a little at the end. The fuzzy region consists of the functionalities for which neither T nor Z has a winning strategy in the game I defined; rather bizarre creatures live in that region. So this is the theorem I want to talk about. How do we show that a trusted setup is complete? Well, by the nice, well-known result of Canetti, Lindell, Ostrovsky, and Sahai (CLOS), it suffices to build a commitment protocol. So I'm going to take a strongly unsplittable F and show that you can always build a commitment protocol out of it, and I'm going to do it using the example I showed earlier; I hope you haven't forgotten what it does. I'll describe the protocol and try to argue some of its security properties. It's a commitment protocol, and the commit phase is just a standalone commitment; this com is a standalone commitment, nothing fancy here. And sigma is the opening of the commitment.
So you commit to something, and you get an opening value that you can use later. The reveal phase is going to be something very bizarre, as you might expect. The sender wants to open his commitment to a bit b, and the receiver does the following. The receiver generates a challenge, a random string x, and sends it to the functionality so that the sender gets f(x); we use the functionality to transfer f(x) to the sender. Then we do this totally bizarre thing: we run a subprotocol for the task I've defined there. Think of that as the code of a task; we'll run a standalone-secure subprotocol for it. Don't worry about that too much. The sender has this value y and gives it to the subprotocol. If the sender can indeed open the standalone commitment to the value he's claiming, then the subprotocol lets y go straight through to the receiver. Otherwise, it applies f to the value and gives the result to the receiver. It's very strange, and the second branch of the if statement is not used in the normal course of things. The honest sender is, well, honest, so he can indeed open the commitment to the value he's claiming; the honest sender always activates this red line. So what the receiver sees at the end is always f(x); that's just what the protocol says. The receiver checks: did I get back f(x) from this subprotocol? And the receiver is happy. What about when the receiver is corrupt? I need to show a simulator for a corrupt receiver. The simulator gets to bypass this instance of F; that's what simulators do in the UC framework, they get to bypass the interface to the functionality. So the simulator gets this value x, and he can feed it in through here. Who cares about the first branch of the if statement? He has x, so he can find a way to get f(x) delivered to the receiver, even if he committed to a dummy value.
In this case the opening of the commitment is never really used, but the simulator can still get f(x) delivered to the receiver, and the receiver is happy. What about a cheating sender? A cheating sender commits to 1 - b and then tries to open the commitment to b; that's the binding property. What happens here? Well, from the properties of the standalone commitment we can argue that the first branch can never be reached: if the commitment protocol is statistically binding, you can never find a sigma that opens it to the other bit. So I've crossed it off. With that first branch crossed off, what's left in the subprotocol is just this: whatever the sender gives as input, we apply f to it and give the result to the receiver. This should be looking familiar. The cheating sender would have to come up with a z that is a preimage of f(x); he'd have to invert the one-way function, and that can't happen. Generalizing from this basic protocol: conceptually we have two instances of F. One is an ideal instance, the top one; the other is simulated inside the subprotocol. The honest sender has a special way to bypass the instance of F inside the subprotocol, so from the receiver's point of view it's as if there's only one instance of F between the two sides. The simulator can do the opposite: simulators can always bypass the ideal instance, though the simulator maybe can't bypass the second instance. That's okay; hopefully those two views look the same. And a cheating sender can't bypass the ideal instance and can't activate the special mode of the subprotocol, so he's stuck between two instances of F. If that looks familiar, I hope it does, because it means I've done my job.
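The whole construction can be sketched end to end. This is again my own toy code, not the paper's protocol: the commitment is a simple hash-based one, the one-way-function functionality is again SHA-256, and the standalone-secure subprotocol is collapsed to a plain function computing the two-branch task.

```python
import hashlib
import os

def f(x: bytes) -> bytes:
    # SHA-256 standing in for the one-way-function functionality F
    return hashlib.sha256(x).digest()

def commit(b: int):
    # Toy standalone commitment: hash of (bit, randomness).
    # sigma = (b, r) is the opening value kept by the sender.
    r = os.urandom(16)
    com = hashlib.sha256(bytes([b]) + r).digest()
    return com, (b, r)

def valid_opening(com: bytes, b: int, sigma) -> bool:
    b0, r = sigma
    return b0 == b and hashlib.sha256(bytes([b]) + r).digest() == com

def subprotocol(com: bytes, b: int, sigma, y: bytes) -> bytes:
    # The task run inside the standalone-secure subprotocol:
    # if the sender can open com to b, y passes straight through (the red line);
    # otherwise f is applied once more.
    return y if valid_opening(com, b, sigma) else f(y)

def reveal(com: bytes, b: int, sigma) -> bool:
    x = os.urandom(16)        # receiver's random challenge
    y = f(x)                  # delivered to the sender via the setup F
    out = subprotocol(com, b, sigma, y)
    return out == f(x)        # receiver's check
```

An honest sender who opens to the committed bit takes the pass-through branch, so the receiver sees f(x) and accepts. A cheating sender who opens to the other bit falls into the second branch, so to make the receiver see f(x) he would have to supply a preimage of f(x), which is exactly the inversion task from the pop quiz.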
This is exactly what strong unsplittability is supposed to guarantee for us: there's a way for the receiver to behave such that he can tell the difference between the first two cases, where only one instance of F was doing the work, and this last case, where a bad guy is stuck between two instances of F. That's exactly the splitting game. And I'll just note that the simulator never has to invoke any rewinding properties of the subprotocol. The subprotocol can be merely standalone secure: the analysis uses its security, but our final simulator is straight-line and never has to rewind it. So all we need is a standalone subprotocol for some complicated task related to F. That gives you a flavor of the final commitment protocol: we use strong unsplittability to get a UC commitment protocol, and that tells us the setup is complete. If you're paying really close attention, you'll notice that I only described a straight-line simulator for a cheating receiver; I didn't describe one for a cheating sender. We have to do some tricks, running the protocol back and forth, to get that to work, but it can be done. That's maybe the less interesting part of the result. I encourage you to look at the paper for the subtleties and the fine print regarding reactive functionalities; that's beyond the scope of this talk. Finally, I showed that strongly unsplittable implies complete, and you might ask about the other direction. Well, I'm legally obligated to tell you that I don't have a proof of the other direction, but I have the next best thing, and it's very, very close. It's just really messy, because completeness is much messier than uselessness. Check the paper if you really care about the exact statement, but it's almost an if-and-only-if.
Finally, if you take anything away, here is my one-sentence summary of the results. I'll say that a functionality is unnatural if neither player has a winning strategy in this game; you can look at the paper for all the weird ways in which that can happen. Everything else, I claim, is natural, and every natural functionality is either useless or complete. So think of that picture with the red and blue regions and a tiny fuzzy area in between; I think that's not too far from the truth, and it's a safe way to think about it. So that concludes my talk. Thanks for your attention.