The first speaker of the session is Adam Groce. He'll talk about fair computation with rational players, in joint work with Jonathan Katz. OK. So the problem that we're talking about here is two-party computation. So the setting is that two parties have private information, say, their location, and they want to compute some function of that private information, for example, how far apart they are. The ideal situation would be that they had some sort of trusted third party to use, where they could send their inputs and then get back the answer to their function. But unfortunately, in the real world, we don't have ideal trusted third parties. We have some sort of imperfect third party that we don't want to trust so much. So what we want to do is, instead of using some third party, use a protocol. Now, there's a lot you could expect the protocol to deliver, for example, keeping your inputs private, getting the correct answer, et cetera. But the particular topic we're worried about here is fairness. Fairness requires that if either player learns the output, then the other player should as well. The worry is that, say, one player aborts the protocol early, doesn't complete it; they have the answer, but the other player doesn't. Unfortunately, this was proven impossible to achieve in general. So we move on to trying to deal with the impossibility. There's a variety of ways you can do this. One option is to look at fairness for specific functions; there are some functions where you can do it, and that's been done. You can also achieve partial fairness, which is, sparing the technical details, a relaxation of the fairness definition. You can also do it with physical assumptions: if you have envelopes, ballot boxes, et cetera, that are secure, then you can do this. But what we do in this talk is assume rational behavior. This has been done before with other problems, in particular rational secret sharing.
You assume that the parties in question have some sort of well-defined goals that we know about when we design the protocol, and then we design a protocol that achieves the goal when the parties behave rationally. So our results, at a high level: we look at an ideal world where we do have a trusted third party that can evaluate the function, and we look for a game-theoretic equilibrium in that setting. And then what we do is, informally, say that if behaving honestly in that ideal world is a strict Nash equilibrium, then we can give you a real-world protocol where rational players will follow the protocol and their output will be fair. So there's a history of looking at game theory and cryptography together. Both are situations where you're talking about adversarial behavior, so it sort of makes sense that there'd be some overlap here. And there's a variety of ways you can go with this. One is to apply game theory to what are traditionally cryptographic tasks, to try to get around impossibility results or get better efficiency, et cetera. You can also use cryptography to look at what are traditionally game-theory questions: problems where you have some sort of correlated equilibrium, you have some sort of mediator, and what you want to do is remove the mediator, replacing it with a protocol that the parties can run without that third party. So our work is sort of both of these things, right? We are looking at this impossibility result, but since what we're trying to do is remove a mediator in the traditional cryptographic setting, we do both of these things simultaneously. Another thing you can do is look at cryptographic goals in game-theoretic terms. So Asharov, Canetti, and Hazay had a paper here last year talking about this, where they defined what are traditionally cryptographic notions in terms of games. And they did talk about fairness there, and seemed to give a negative answer regarding whether fairness was possible in this setting.
And I'll get back to that in a little bit. So the real-world game that we're defining has parties that are trying to compute some function f. They get inputs from what we assume is a known distribution; the distribution can be joint, there are no restrictions there. And then they run a protocol, using potentially that input, and then they output some answer. You can think of this as equivalent to acting on that answer, doing the right action or not based on what output you saw. And then they get some sort of utility that can depend on both their output and the other party's output, as well as the true answer to the function. So our goal is to design a quote-unquote rational fair protocol for f, meaning one where running the protocol as specified is a computational Nash equilibrium. A computational Nash equilibrium is one where, when we limit the players to polynomial-time strategies, we can show that no player can gain more than a negligible amount of utility by deviating. So we do have a security parameter, and we require that the gain from deviating is negligible in that parameter. There are stronger notions out there whose merits you can debate as you wish, but we're going to leave those for future work. So the ACH paper that I mentioned before, what they look at is basically a special case of our real-world game. Remember, they're coming at it with a different motivation: they're trying to give a definition that will be equivalent to the traditional fairness definition, and then they're trying to show an impossibility result. So they look at a limited setting. They look at uniform independent binary inputs, and the function is essentially XOR, or something isomorphic to it. And then they look at the following specific utility. They assume that if both parties give a right answer, or both parties give a wrong answer, then no utility is gained by either party. What you're trying to do is get the right answer when the other party does not.
If you do that, you gain one unit of utility. If the other person does that, you lose one unit of utility. Now, the results: they show a protocol that has correctness one half. But they also show that you can't get a protocol that has correctness better than one half. And since we like to do better than one-half correctness, this seems like a pretty limiting impossibility result. But if you look at it more closely, you'll see that there are a variety of equilibria to this game. In particular, guessing randomly is an equilibrium. The strategy where each player, knowing only their own input, guesses the other player's input and outputs what the function would give if that were the other player's input, is one which no player will deviate from, and it achieves the same payoff as the protocol in question. In fact, even if you had a trusted third party, even if you had a perfect realization of the solution to this problem, you wouldn't gain any utility from it. So there's no incentive in this setting to actually run the protocol at all. Now we're going to look at a setting where there is an incentive to run a protocol. So we define, first of all, an ideal-world game. This is analogous to the previous game, but with the ideal third party available to the players. In this game, the players receive their inputs the same as before, and then they send their input, or a symbol saying "I don't want to participate," to some sort of ideal functionality. As a result, they then receive the output from the functionality. The functionality just honestly computes the answer to the problem and gives it back to all players simultaneously, or, if any player refused to participate, it gives an error symbol to the players in question. The players then output an answer, and, as before, they get some utility based on the answer they output. Now, what we're going to do is assume that the players are incentivized to play honest strategies in this game. So what do we mean by that?
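As a brief aside on the ACH game described a moment ago: the claim that local guessing matches the protocol's payoff can be checked by exact enumeration. The sketch below is my own illustration, not code from the paper; it uses the utilities as stated (+1 if only you are right, -1 if only the other player is right, 0 otherwise) and the strategy where each player guesses the other's bit uniformly.

```python
from fractions import Fraction
from itertools import product

def expected_utility_and_correctness():
    """Enumerate the ACH XOR game: uniform independent input bits,
    and each player guesses the other's input uniformly, outputting
    the XOR its own input implies."""
    u0 = Fraction(0)          # accumulated utility of player 0
    correct0 = Fraction(0)    # times player 0 outputs the true answer
    outcomes = 0
    # x0, x1: inputs; g0, g1: each player's uniform guess of the other's bit
    for x0, x1, g0, g1 in product((0, 1), repeat=4):
        true = x0 ^ x1
        out0 = x0 ^ g0        # player 0's output under the guessing strategy
        out1 = x1 ^ g1
        r0, r1 = out0 == true, out1 == true
        if r0 and not r1:
            u0 += 1           # only player 0 is right
        elif r1 and not r0:
            u0 -= 1           # only player 1 is right
        correct0 += int(r0)
        outcomes += 1
    return u0 / outcomes, correct0 / outcomes

eu, corr = expected_utility_and_correctness()
print(eu, corr)   # prints: 0 1/2
```

Guessing achieves correctness one half and expected utility zero, exactly matching the best achievable protocol, which is why a trusted third party adds no value in that setting.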
Well, first of all, we define utilities very generally. We say that your utility depends only on whether you are right or wrong about the true output of the function, but other than that it can take on arbitrary real values. We assume that the most preferable case for you is that you're right and the other person is wrong, followed by both of you being correct, followed by both of you being wrong, followed by you being wrong while the other player is correct. But we don't actually require any specific magnitudes or relationships between these, other than that ordering. So the honest strategy that we have in mind involves the players sending their true inputs to the functionality. You're allowed to lie; you're allowed to send some other input to the functionality, but if you're honest, you wouldn't do that. You just send the correct input, and then, once you're given an answer by the functionality, you trust that answer as the real one and output it. Now, anything satisfying the first two bullets here we will consider an honest strategy, but it's worth saying that any strategy has to include some sort of rule for what you would do in any possible situation that might come up, and in particular it must give some way to guess an output when what you get back from the ideal functionality is an error message. Whatever that rule is, it can only depend on your input, because that's all you've seen up to this point. We're going to call that guessing distribution W0; this is for player P0. Now, if everyone plays honestly, that guessing distribution is never used, but it still must exist as part of the equilibrium, right? That's the threat that keeps the other players from deviating from the protocol: somehow, they trust that you're good enough at guessing that it's not in their interest to deviate.
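The preference ordering just described can be written compactly. The names $A_0$ and $B_0$ match the quantities used later in the talk for player $P_0$; $C_0$ and $D_0$ are my own labels for the remaining two outcomes:

```latex
\underbrace{B_0}_{P_0\text{ right, }P_1\text{ wrong}} \;>\;
\underbrace{A_0}_{\text{both right}} \;>\;
\underbrace{C_0}_{\text{both wrong}} \;>\;
\underbrace{D_0}_{P_0\text{ wrong, }P_1\text{ right}}
```

Only this ordering is assumed; the gaps between the four values can be arbitrary.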
Now, for technical reasons I'm not going to go into here, we can assume that this guessing distribution has full support, meaning that any value that can occur as a true output of the function occurs with positive probability in the guessing distribution. So what our result says is that if honest behavior is a strict Nash equilibrium in the ideal world, strict meaning that if you deviate, you lose some utility, then there exists a real-world protocol that is rational fair. This holds both in the fail-stop setting, where you can only deviate by aborting early, and in the Byzantine setting, where you can deviate arbitrarily. There are slightly different versions of the protocol in each of these settings; I'm going to be talking about the fail-stop one here, but they're not that different. The other thing worth pointing out is that the impossibility result I talked about before is in a setting where our precondition does not apply, so there's no contradiction here. Now, our protocol uses ideas that have existed in the fairness literature before, in particular a function called ShareGen that has a somewhat elaborate functionality. What ShareGen does is first choose i* from a geometric distribution with parameter p. The parameter p is going to be set later to make the proof work out, and i is going to index rounds in the later part of our protocol. We choose a special round, i*, and then for each round i we create two values, one for player zero and one for player one, r_i^0 and r_i^1, and we set these values according to whether i is greater than or equal to i* or less than it. If i is greater than or equal to i*, we set these values equal to the desired outputs, the true output of the function that these players want to get.
And if i is less than i*, then these values are chosen according to the guessing distributions that the players would have used in the ideal world were the functionality not to give any output. And then, for each of these values, remember there's one per player per round, we secret-share them and give one share to each player. So what they get as output of ShareGen looks information-theoretically random; there's no information in this output at all. The way our protocol works, then, is to first compute ShareGen. You can do this using a known protocol, because we don't care whether it's fair or not. Then we try to achieve fairness using the output of that function. In each round, the parties exchange shares, and each player learns its r value for that round. If at some point the other player aborts early, then the remaining player simply outputs the last r value they received, or, if they haven't received any, guesses according to their guessing distribution. If the protocol finishes, they just output their final r value. Now, what we want to show is that there's no incentive to abort early. We can make a couple of modifications here. In particular, we can assume that P0 is notified once i* has passed. I should say that we're going to look at P0, but this is completely symmetric across players, so the analysis carries over. This is only additional information; it can only help P0. And importantly, after i* has passed, aborting can't do anything beneficial for the player in question. At that point, everyone's getting the right answer no matter what you do, so it doesn't matter. So let's look at the remaining cases. One option is that P0 doesn't abort at all and follows the protocol through honestly, in which case both players get the right answer. That's utility A0. If P0 aborts early, then we have two cases. If it's in round i*, we're going to assume they get utility B0.
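To make the ShareGen-based protocol concrete, here is a rough fail-stop sketch under simplifying assumptions of my own: the output is a single bit, the shares are plain XOR shares, and details the paper handles (share authentication for the Byzantine case, the exact send order within a round) are omitted. All function and variable names here are mine, not the paper's.

```python
import random

def sample_i_star(p):
    """i* ~ Geometric(p): index of the first success in independent
    Bernoulli(p) trials."""
    i = 1
    while random.random() >= p:
        i += 1
    return i

def sharegen(p, n_rounds, true_out, guess0, guess1):
    """Build both players' per-round values and XOR-share them.
    true_out: the bit f(x0, x1); guess0/guess1: samplers for the
    ideal-world guessing distributions W0, W1."""
    i_star = sample_i_star(p)
    p0_shares, p1_shares = [], []
    for i in range(1, n_rounds + 1):
        # From round i* on, both values are the true output;
        # before i*, they are independent guesses.
        r0 = true_out if i >= i_star else guess0()
        r1 = true_out if i >= i_star else guess1()
        m0, m1 = random.randint(0, 1), random.randint(0, 1)  # random masks
        # P0 holds (m0, r1^m1); P1 holds (r0^m0, m1): each side alone is uniform
        p0_shares.append((m0, r1 ^ m1))
        p1_shares.append((r0 ^ m0, m1))
    return p0_shares, p1_shares

def run_protocol(p0_shares, p1_shares, abort_round=None, guess0=None):
    """Fail-stop execution from P0's perspective: each round the parties
    swap shares and P0 reconstructs its r value. If P1 aborts, P0 outputs
    its last reconstructed value, or a fresh guess if it has none."""
    last_r0 = None
    for i, ((m0, _), (s, _)) in enumerate(zip(p0_shares, p1_shares), start=1):
        if abort_round is not None and i >= abort_round:
            return last_r0 if last_r0 is not None else guess0()
        last_r0 = m0 ^ s                 # P0 reconstructs r_i^0
    return last_r0
```

Note that a small p makes i* large on average, so the expected number of rounds grows roughly as 1/p, which matches the round-complexity discussion at the end of the talk.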
B0 is the best utility possible for P0, which makes it the worst case as far as their incentive to abort. It assumes that P0 gets the right answer while the other player gets the incorrect one; that's not guaranteed to happen, but it's an upper bound, which is all we need. And if they abort before round i*, then we know that they get utility strictly less than A0. This comes from the assumption of an equilibrium in the ideal world: if you could abort when you have no information at all about the output and get utility that was A0 or greater, then we wouldn't have a strict Nash equilibrium in the ideal world. So then we can work through some math. The expected utility if you abort, which is what we're trying to compute, is the utility if this is round i* times the probability that it is round i*, plus the utility if it's before i* times the probability that it is before i*. The first utility is what we upper-bounded with B0 before, right? And the second is the one we know is less than A0, so it's A0 minus some constant. And I should say, when I'm talking about constants, I mean things that are, first of all, not functions of the security parameter, but also not functions of p, the parameter in ShareGen that we're going to set during this calculation. Now, we then have the probability that this is round i*, and this one's a little bit messy. This probability can be rewritten as the probability that you get y, where y is the r value you see in that round, the output you would have if the protocol were to stop there, and that this is i*, over the overall probability that you get y in that round. We can split the numerator into the probability that you get y given that this is i*, times the probability that this is i*.
We can similarly split up the denominator into two cases: it's the probability that you get y if this isn't i*, times the probability that this isn't i*, plus the same thing as the numerator, the probability that you get y if this is i*, times the probability that it is. So the quantity on the right is p. The quantity on the left, well, we don't really know much about it, but what we do know is that it's a constant. It could be zero, it could be positive, but it's a constant; it doesn't depend on those two parameters I mentioned before. Now, the probability that this isn't i*, well, that's just one minus p. And the probability that you get y given that this isn't i*, that's a constant, and it's a constant we know is greater than zero. This is a direct result of the full-support assumption we made on the guessing distribution before; it's exactly the statement that this output must occur with positive probability. Now, what does that mean? Well, we get to choose p. By choosing p to be low, we can make the probability that this is round i* as low as we want. And by making it low, we can plug it back into the expected utility: when we plug in something very low, we can force the overall value to be less than A0. We can make it as close as we want to A0 minus the constant, rather than to B0. And once we get it below A0, it's less than the utility of not aborting, and that's good enough to know that the protocol will be run honestly. If you deviate in any particular way and get utility that's lower than if you were never to deviate at all, then you won't deviate at all. It's possible that, given some earlier deviation, you would then find it in your interest to deviate later, but that's not enough to make you deviate in the first place. So what does that show?
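The calculation just walked through can be written out in symbols. This is my transcription of the argument, with $c > 0$ the constant by which aborting before $i^\star$ falls short of $A_0$, and $\delta > 0$ the full-support lower bound on $\Pr[y \mid i < i^\star]$:

```latex
\begin{align*}
\mathbb{E}[\text{utility of aborting at round } i]
  &= B_0 \cdot \Pr[i = i^\star \mid y]
   + (A_0 - c)\cdot \Pr[i < i^\star \mid y],\\[4pt]
\Pr[i = i^\star \mid y]
  &= \frac{\Pr[y \mid i = i^\star]\; p}
          {\Pr[y \mid i < i^\star]\,(1-p) + \Pr[y \mid i = i^\star]\; p}
   \;\le\; \frac{\Pr[y \mid i = i^\star]\; p}{\delta\,(1-p)}
   \;\xrightarrow{\;p \to 0\;}\; 0 .
\end{align*}
```

Choosing $p$ small enough drives the first term's weight toward zero, so the expectation approaches $A_0 - c < A_0$, the utility of following the protocol honestly, and aborting is never profitable.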
Well, first of all, it shows that rational fairness is indeed possible, as long as there is a strict preference for fairness in the ideal world by at least one of the parties. I haven't mentioned it here, but if you look at the paper, you'll see that we don't actually need a full strict Nash equilibrium in the ideal world. We just need that one of the parties has an incentive to use the trusted third party, and that's enough to make our protocol work. And interestingly, I think, what we get is that the more pronounced the parties' preferences are, the more round-efficient the real-world protocol is. So the more the parties would be happy if a trusted third party existed, the more they want such a protocol to exist, the more efficient the protocol is. Sort of by definition, it can only be inefficient in cases where we don't care about having it in the first place. So there's a variety of places still left to go with this. There's, for example, the multi-party case, where you have more than two parties computing the function. You can also look at more general utilities: instead of assuming all you care about is whether you're right or wrong, you might care about how close you are to the correct answer, which cases you're correct in, et cetera. These we have actually solved: we have recent work with Amos Beimel and Ilan Orlov that will be coming out soon that solves these two problems. But a lot is still open. One interesting question is to look at possibly proving some sort of partial converse of our result. Obviously, in the case where it's not an equilibrium to use the trusted third party at all in the ideal world, you can't hope for a real-world protocol. When there's a strict desire to use the trusted third party, we give a protocol. But the case where you are willing to use the trusted third party but don't have a strict incentive to do so hasn't been characterized yet.
You can also look at stronger notions of equilibrium in the real world, as I mentioned before. And finally, there are a lot of other impossibility results in cryptography, and it's possible that we could address other ones by looking at rational players. Do we have any questions? Yes.

Question: Thanks for the talk. So how small should the p value be? I mean, because if it is negligibly small, you need exponentially many rounds, right?

Answer: Yeah, so if it's very small, the round complexity increases. How small is a function of the utilities in question; the exact expression is in the paper, and it's sort of messy. But generally the idea is that if you are sort of indifferent about the protocol existing, if your incentive to use that third party is very small, then p might be very small. And the round complexity is essentially one over p, so the round complexity gets big very quickly. But in the cases you care about, that shouldn't happen, so it shouldn't be too bad.

Question: How much knowledge is necessary? Does the protocol have to know exactly how strong the preference for fairness is? I mean, if you just know there is a preference, but don't know what it is, could you have a protocol that would do it?

Answer: So I think you need to know something about the preference. Basically, any possible utilities will give you some p value, and it's never bad to set p too low. So if you have some sort of bound on the amount you care about, that's good enough. But I think you would need to know something about it.

Okay, let's thank the speaker again.