Yeah, so I'm Joël Alwen, Collusion-Preserving Computation. OK, so the primary goal in this work is that we'd like to come up with a realization notion that bounds the capabilities of deviating coalitions, even in the presence of arbitrary composition. So what do we mean by this? Well, a realization notion: if we have two functionalities, R and F, then R realizes F if there's a way to use R in place of F; that is, there's some protocol in the R-hybrid world that realizes F. What do we mean by capabilities of deviating coalitions? Traditionally, we look at how deviating coalitions can influence honest parties. But in this work, what we're additionally interested in is how they can influence each other. So we want that collaborating dishonest players can do no more, also with respect to each other, when they're in the R-hybrid world than when they're in the F-hybrid world. And by arbitrary composition, we mean that this ability to use R in place of F should hold regardless of whatever other concurrent activities are going on at the same time. So why are we interested in this? Well, for example, you could look at composable game theory. This is an extreme case of considering coalitions of deviating parties: essentially, there are no honest parties in game theory. Everybody's rational, so everybody's doing whatever they want to do. Another case where you could be interested in this kind of notion is if you'd like to perform some sort of collusion-free computation which is robust even in the presence of side channels. And we'll see in a moment that this is actually a real problem. Moreover, another intuitive example where you'd like to bound the capabilities of collaborating dishonest players is if you'd like to capture some notion of incoercibility. So think about it intuitively: incoercibility, what are we asking for here?
We have a setting with potential informers and a coercee C, somebody who's being coerced. Now, if we can bound the amount or the type of communication between the coercee C and the informer, it could be possible that the informer essentially cannot tell whether the coercee C is doing what he was supposed to do or what he actually wants to do. So this gives rise to some kind of notion of incoercibility. Also, we might want auctions where we can avoid bid fixing, maybe via some kind of bounded isolation. Say we want to implement a game of poker or bridge: we want a protocol that doesn't allow people to, say, show their cards to each other, gaining some sort of unfair advantage over the other players. Other goals in this paper: we'd like to give a generic definition that's independent of the realizing resource, because that allows us to better compare constructions. By the way, by construction we don't just mean the protocol; we also mean the resource that you're going to use. So the construction involves defining both of these things. Making this definition independent of the communication resource also allows investigating the minimal properties of this resource that we would need in order to realize a given functionality F. So it allows asking more questions, maybe. We also want a kind of non-triviality here: we want fallback security. In other words, we define a protocol pi that realizes some functionality F, and then we ask: what happens if you run this protocol with a different resource than the one it was designed for? Can we still ask for some kind of security? Again, not necessarily a standard thing to look at, but trust is a rare commodity, and you might want some guarantees if you're not actually using the network that you thought you were using. Another thing we do is explore implications for composable game theory.
And we give a construction for a large class of functionalities F. So, related work. We've been looking at different kinds of realization notions for some time now in cryptography. In the multi-party setting, starting with GMW and BGW, these were the first ideal/real-type definitions. But of course, they were not a priori generally composable, and they didn't tell us anything about how deviating coalitions could influence each other, because, on a technical note, they model the adversary as a single monolithic entity controlling all corrupt parties. In other words, this influence between corrupt parties is given for free, because we're already modeling them as a single entity. Later, we got arbitrary composition, starting with the work of Canetti: frameworks like UC, GUC, and JUC gave rise to realization notions that hold independently of what's happening on the side. On the other hand, we also have some notions that do provide bounds on colluding corrupt parties, namely what's called collusion-free computation. Starting with the works of Lepinski, Micali, and shelat, these have focused on two computational models, ballot boxes and envelopes, and later on the mediated model. So why are we not happy with collusion-freeness? Well, the problem is it's not composable, and it's relatively straightforward to see this. Let's look at the functionality F. It's not a particularly interesting one, but it'll suffice for the example: it's the null functionality. It doesn't do anything; no input, no output. So we're going to try to realize this. In particular, we're going to define a resource R and a protocol pi as follows. There are two parties. First, pi_2, the second player, sends a message of his choosing, of length 2k bits, k being the security parameter, to the resource R.
R responds with a uniform random string r of length k bits, so half the length of the message. Then the first player sends, again, a k-bit string r' to the resource. And now the resource has two options: if r = r', it sends the message m back to the first player; otherwise, it just sends back bottom, a null message. So why is this a collusion-free realization? Well, we need to be able to simulate the two sides independently in the setting where we're given F, in the ideal world. Now, r is a uniform random string, and F doesn't allow any communication between these two adversaries, the first and the second player. But that's not a problem, because in the real world there's also no communication between them, and so the chance that a corrupt first player actually sends r' equal to this r is negligible. In other words, you can always simulate in the ideal world by outputting bottom. So it's easy to simulate. However, now let's see what happens if we compose with a k-bit channel, both in the ideal and in the real world. We'll see that it becomes impossible to simulate in the ideal world. Why is this? Because in the real world, they have the following option: player two can send the string r to player one, and then player one can forward it on to the resource, thereby learning this 2k-bit message. So now we're in a state, in the real world, using this resource R, where both players learn this message m that's 2k bits long. However, in the ideal world, F allows them no communication, so the only thing they have is this composed channel C, which only allows k bits of communication. So in the ideal world, there's no way for these two parties to come up with a 2k-bit message that they both share. The conclusion is: if you compose with this channel, this is no longer a collusion-free realization.
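To make this counterexample concrete, here's a toy Python sketch (the class name and interface are mine, purely illustrative) of the resource R and of why composing with a k-bit channel breaks the collusion-free realization:

```python
import secrets

K = 128  # security parameter, in bits

class ResourceR:
    """Toy model of the counterexample resource R.

    Player 2 submits a 2k-bit message m; R replies with a uniform k-bit
    string r.  Player 1 then submits a k-bit guess r_prime, and R reveals
    m to player 1 iff r_prime == r.
    """
    def __init__(self):
        self.m, self.r = None, None

    def receive_message(self, m: bytes) -> bytes:
        assert len(m) * 8 == 2 * K            # m is 2k bits
        self.m = m
        self.r = secrets.token_bytes(K // 8)  # uniform k-bit string
        return self.r                         # sent back to player 2

    def receive_guess(self, r_prime: bytes):
        # Player 1 learns m only by guessing r exactly.
        return self.m if r_prime == self.r else None

# In isolation this is collusion-free: player 1 never sees r, so he
# guesses it only with probability 2**-K, and the ideal-world simulator
# can always answer with bottom (None).
isolated = ResourceR()
isolated.receive_message(secrets.token_bytes(2 * K // 8))
assert isolated.receive_guess(bytes(K // 8)) is None  # wrong guess, w.h.p.

# Composed with a k-bit channel, player 2 forwards r to player 1, who
# then extracts the full 2k-bit m.  In the ideal world (null F plus the
# same k-bit channel) the two can share at most k bits, so no pair of
# independent simulators can produce a common 2k-bit string.
composed = ResourceR()
m = secrets.token_bytes(2 * K // 8)
r = composed.receive_message(m)  # r goes to player 2
leaked_r = r                     # k-bit side channel: player 2 -> player 1
assert composed.receive_guess(leaked_r) == m  # both now know all 2k bits
```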
So our goal is a composable version of collusion-freeness. And to get composition, well, we do what we usually do, what we've learned from the UC sequence of works: we add an environment to our definition. And we get some immediate results here. The dummy adversary lemma holds pretty much unchanged, and the composition theorem holds essentially unchanged from the UC line of works. So that's good. Moreover, it turns out that this new realization notion strictly generalizes the GUC and UC realization notions, something we show more formally in the paper. OK, so we have our definition. Now let's ask: can we realize it somehow? Of course, if you want to realize the functionality F, one way you can do it is to simply use the functionality F itself. But besides not being particularly interesting, there are some other issues with this. For example, the resource you use in the real world then depends completely on the ideal resource you want to realize, which is not something we're used to from, say, UC. It turns out, as we show, that to some extent this kind of dependency is unavoidable, but at least the dependency is something you can compute with an algorithm; it's not some sort of non-uniform dependency. There's a deterministic way to compute it: if you know the functionality, you know exactly one resource you can use to realize it. Another criticism one might have of this approach is that if R misbehaves, if R is not actually acting as F, you don't get anything anymore; all bets are off. Usually this is not something we care about, but maybe we can do more than this. So in order to capture what else we could do, we define this notion of fallback security. What's fallback security? It's the security that you achieve if you run the protocol with an arbitrary communication resource, not necessarily the one it was designed for.
So let's parse this for a moment. If you read "pi CP-realizes F from R with GUC fallback security", what does that mean? It means that if you run the protocol pi with the resource R, then you've CP-realized F. But if you run this protocol pi with any resource R*, it doesn't matter which one, at the least you still GUC-realize F, OK? So we've reduced the trust you place in the network, because at the very least, whatever the network is, whatever resource you're using, you still get GUC realization. Now this rules out the trivial construction, so we're back to trying to find something that works. So we look at previous works, in particular collusion-freeness, where we've managed to somehow bound what corrupt parties can do with each other. The construction that worked for collusion-freeness in the mediated model is something we can call assisted SFE (secure function evaluation) in the mediator's head. Intuitively, it can be understood as follows. Let F be the functionality we're trying to realize and pi be, say, the GMW protocol for this functionality. Then the resource we're going to use, which we call the mediator (it's from the mediated model), runs this protocol pi in its head. The state of each player in the protocol is secret-shared between the mediator and that player. The way the protocol then progresses is through a sequence of two-party SFEs, each between a player and the mediator, where they compute the next message in the protocol and update the state of the player. So this was the solution idea from ALPSV; call it assisted SFE in the mediator's head. We'd like to adapt this to see if we can get collusion preservation. Of course, we'd like GUC fallback, so one thing we're going to have to change is that we're not going to use the GMW protocol; we'll use a GUC protocol.
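The round structure of assisted SFE in the mediator's head can be sketched roughly as follows (a toy Python sketch; the class, the XOR sharing, and the `next_step` stub standing in for the two-party SFE are my own illustrative choices, not the paper's actual construction):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class Mediator:
    """Toy sketch of 'assisted SFE in the mediator's head'.

    Each player's protocol state is XOR-secret-shared between the player
    and the mediator, so neither alone learns it.  Each round, a
    two-party SFE between player i and the mediator computes player i's
    next protocol message and re-shares the updated state.  Here the SFE
    is stubbed out by a trusted function `next_step`.
    """
    def __init__(self, n_players: int, state_len: int = 8):
        # Mediator's shares of each player's (initially all-zero) state.
        self.shares = {i: bytes(state_len) for i in range(n_players)}

    def assisted_sfe(self, i, player_share, next_step):
        # Conceptually inside the SFE: reconstruct the state, advance
        # the protocol one step, then freshly re-share the new state.
        state = xor(self.shares[i], player_share)
        new_state, message = next_step(state)
        fresh = secrets.token_bytes(len(new_state))
        self.shares[i] = fresh
        return xor(new_state, fresh), message  # player's new share, next msg

# One step for one player: the state is advanced without either side
# ever holding it in the clear outside the SFE.
med = Mediator(n_players=2)
player_share = bytes(8)                               # player 0's share
step = lambda s: (bytes(b ^ 1 for b in s), b"msg")    # dummy protocol step
new_share, msg = med.assisted_sfe(0, player_share, step)
assert xor(new_share, med.shares[0]) == bytes([1] * 8)  # shares reconstruct
assert msg == b"msg"
```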
And of course, we want composition, so let's say these two-party SFEs are now GUC SFEs, OK? Are we done? Does this give us a good construction? The answer is no, because there's actually a new security concern that arises in the setting of collusion preservation that didn't exist before, even for collusion-freeness. So recall the intuitive goal: we'd like to ensure that corrupt colluding parties get no more from R than from F. In other words, that we can do the split simulation; we can simulate them with individual simulators even in the ideal world. Historically, progress towards being able to do this split simulation has come in two major steps. The first, already back in '84 by Simmons, was the realization that we need to remove steganography from the protocol, right? We need to remove the capability of subliminal messages being exchanged in our realizing protocol. The second major step was made in the papers that introduced the notion of collusion-freeness, by Lepinski, Micali, and shelat, which identified this thing called randomness pollution. That's when a protocol produces public randomness in the view of the participants. Think, for example, of trying to realize a single bit flip as your ideal functionality. If you use a CRS to do this in the real world, these adversaries immediately have a very long public random string in their view. But in the ideal world, they can communicate at most a single bit through this bit flip; there's no way for them to arrange that they have the exact same long string in their simulated views, all right? So that's what we call randomness pollution. We identify a new kind of property that we need to mitigate, that we need to remove from our protocol. We call this synchronization pollution, and you can think of it as follows.
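A tiny illustrative sketch of randomness pollution (the variable names and the one-bit encoding are mine): in the real world, a CRS puts one identical long random string into both adversaries' views, while two independent ideal-world simulators, able to agree on at most the single output bit, almost surely produce different strings.

```python
import secrets

K_BYTES = 16  # a "long" public random string, 128 bits

# Real world: a common reference string appears identically in both views.
crs = secrets.token_bytes(K_BYTES)
real_view_1, real_view_2 = crs, crs

# Ideal world: the functionality is a single bit flip, so the two
# simulators can coordinate on at most that one bit; the rest of each
# simulated "CRS" must be sampled independently.
shared_bit = secrets.randbits(1)
sim_view_1 = bytes([shared_bit]) + secrets.token_bytes(K_BYTES - 1)
sim_view_2 = bytes([shared_bit]) + secrets.token_bytes(K_BYTES - 1)

assert real_view_1 == real_view_2  # real views always match
assert sim_view_1 != sim_view_2    # simulated views differ, except w.p. ~2**-120
```

An environment comparing the two views distinguishes real from ideal immediately, which is exactly why a CRS cannot be used to collusion-freely realize a bit flip.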
Adversaries should not obtain more synchronization of events from the resource R than they do from the functionality F, OK? Why is this an issue? Well, because if they have more observable events in R than with F, the adversaries can coordinate more in the real world than they could in the ideal world, say across concurrent executions, for example. Technically, what's the problem? If F doesn't provide the simulators enough synchronization to coordinate their simulations in an online manner (recall we have an environment that's an online distinguisher, who sees the simulations as they're generated), then the simulations might occur in the wrong order and it becomes easy to distinguish, right? Notice that for collusion-freeness this is not an issue, because there the distinguisher is offline; in other words, she only gets to see the simulations once they're complete, once they're all done. So this is something new for collusion preservation that didn't exist in collusion-freeness. So how do we mitigate this? Again, we want GUC fallback, so we're going to use a GUC protocol rho and run it in the head of the resource. The problem is that because we're using a GUC protocol, this is a multi-round protocol with all sorts of publicly visible events, for example the current round the parties are in. So the question arises: what is the minimal amount of synchronization that we need to provide in the ideal world in order to be able to simulate assisting the resource in running this protocol rho in its head? And it turns out the answer is: really, we just need to ensure that output is delivered at the right point. Intuitively, what we need is that output is not delivered too soon in the ideal world.
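This minimal requirement, output not being deliverable too soon, can be sketched as a wrapper around the ideal functionality (a toy Python sketch with a hypothetical interface of my own; the paper's formal model of activations differs):

```python
class OutputSyncWrapper:
    """Toy sketch of output synchronization in the ideal world.

    The wrapped ideal functionality f computes its result as soon as all
    inputs arrive, but the wrapper withholds each party's output until
    that party has been activated at least `rounds` times, roughly the
    number of activations the real protocol rho would have needed.  This
    models the minimal synchronization the split simulators need: output
    must not be deliverable "too soon" in the ideal world.
    """
    def __init__(self, f, n_players, rounds):
        self.f, self.rounds = f, rounds
        self.inputs = {}
        self.activations = {i: 0 for i in range(n_players)}
        self.result = None

    def activate(self, i, x=None):
        self.activations[i] += 1
        if x is not None:
            self.inputs[i] = x
        if self.result is None and len(self.inputs) == len(self.activations):
            self.result = self.f(self.inputs)   # f's actual output
        if self.result is not None and self.activations[i] >= self.rounds:
            return self.result                  # output released
        return None                             # still withheld

# Two parties, three "rounds" of fuel required before output delivery.
w = OutputSyncWrapper(lambda ins: sum(ins.values()), n_players=2, rounds=3)
assert w.activate(0, x=1) is None  # input given, but too early
assert w.activate(1, x=2) is None  # all inputs in; still too early
assert w.activate(0) is None       # party 0: activation 2 of 3
assert w.activate(0) == 3          # party 0 reached 3 activations
assert w.activate(1) is None       # party 1 only at 2
assert w.activate(1) == 3
```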
Parties need to be activated enough, they need to have enough fuel injected, even in the ideal world, that they could have run this protocol rho, right? Think about it: in the ideal world, the moment everybody's been activated once, they give their input to the functionality and output is delivered immediately. In the real world, that's too fast; they wouldn't be able to come up with their outputs that fast. So you at least need output synchronization in the ideal world, and it turns out that's all you really need. Technically, what that implies is that these two-party SFEs between player and resource need to be modified so that they hide not just the information content of the protocol being run in the head, but even the publicly visible events: what round are they in, was a message delivered, who did the message come from, did the state of a party change? All of these things need to be hidden in the two-party SFEs, which was not an issue for collusion-freeness. In collusion-freeness, all that needed to be hidden was the internal state of a party and the message content. So that's how you essentially get a collusion-preserving realization from the collusion-free one in the mediated model. So what results do we have? Well, we show the necessity of several properties of our realizing resource R, and this in fact allows us to rule out most standard resources for realizing practically any interesting functionality F. For example, we can't use broadcast channels, and we can't use insecure channels or even ideally private channels. It kind of makes sense: if you have such a channel between two parties, then they can immediately, arbitrarily influence each other in the real world. So it had better be that the ideal functionality you're trying to realize also allows these parties this type of interaction.
Also, showing the necessity of these properties to some extent shows the minimality of the resource we define in our construction, namely this mediator resource. Paraphrasing very roughly, the theorem we get is that for a very large class of functionalities F, we can provide a resource and a protocol that CP-realize the functionality with GUC fallback. In other words, it's not just an interesting definition; at least under some conditions, it's an achievable one. Now, quickly, the applications to game theory. In order to show how this can be used in a game-theoretic setting, we define a new model of game play: rational, computationally bounded players and concurrent game play. This departs significantly from most standard game-theoretic models and brings it maybe closer to the way the real world works. People are not involved in a single game at a time; in reality, we're playing all sorts of games with different sets of players at the same time. The guiding principle when we define this model is that actions should be local, right? You place a bid in an auction, and that's defined independently of what else you're doing. But your goals, your intentions, and your consequences, in other words your strategy and your utility, those are global. You don't just care about the outcome of this game; you care about the outcome of all the games together. And the outcome of this game may mean different things to you depending on what else happens, right? In this model, we show that we can replace ideal mechanisms with cryptographic games, protocol games, such that the following is true. On the one hand, game theorists can still design and analyze their mechanisms in an ideal, fully trusted setting, the way they're used to. On the other hand, the game will then be played by computers, as a protocol game over a special kind of network, right?
And finally, less trust is placed in the network than was originally placed in the mechanism. There are various directions we can take this work in the future; the only one I'd like to highlight now is that in existing realization notions, if a single process on a machine is corrupted, that whole party is considered corrupt and all bets are off for everything it does. But maybe we can do better. If we look at what happens in the real world, we have sandboxes, virtual machines, chroot jails, restricted UIDs, all these ways of isolating processes. It would be nice if we had a theoretical foundation for this: what are we actually achieving by doing this, and what can we hope for? An interesting first step has been made by Canetti and Vald in their recent paper, UC with local adversaries. Thank you.