I'd like to talk about a fun topic: blockchain-inspired protocols where game theory meets cryptography. This is joint work with Hubert Chan, Kai-Min Chung, and Ting Wen. Let me begin with a motivating example that comes from real life. This is not uncommon in the cryptography community: you write a paper and discover there's a concurrent work with identical results. This has happened to me a couple of times. In fact, it happened to my first ever crypto paper in grad school, and then it happened again in 2013, and this time the two papers even had the same title. Both papers were submitted to Eurocrypt '14, and the PC recommended a merge. And here comes the real challenge: who should go to the conference to present the merged paper? A naive solution to this problem is for me and Shafi, who was a co-author on the other paper, to duel it out. But we can do better: why not run a coin toss protocol? In fact, this is what we actually did. Coin toss was first proposed in Blum's groundbreaking work in 1983. The way it works is the following. In this example we are using a blockchain as a public bulletin board. First, Shafi and I each select a random bit. We both commit to our bits. Then we open our commitments, and these are the opened bits. We compute the XOR of the openings. Let's say Shafi prefers zero and I prefer one; in this case the XOR is one, and therefore I win. And this is also what actually happened: our group won, and in fact my former postdoc went and presented the paper. Now, I may be somewhat concerned here, because Shafi may be malicious and may deviate from the protocol. Because this is a commitment-based protocol, the only possible deviation is essentially aborting — opening your commitment wrongly is the same as aborting. And it turns out that in this case it's not a big issue, because if Shafi aborts, we can just say she automatically forfeits and I win anyway.
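To make the message flow concrete, here is a minimal sketch of this commit-then-open coin toss in Python. The hash-based commitment and the function names are my own illustrative choices, not the exact scheme from the talk.

```python
# A minimal sketch of Blum-style two-party coin toss over a public
# bulletin board, using a hash-based commitment (an assumption for
# illustration; any binding and hiding commitment would do).
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit with a random nonce; returns (commitment, opening)."""
    nonce = secrets.token_bytes(32)
    opening = bytes([bit]) + nonce
    return hashlib.sha256(opening).digest(), opening

def open_commitment(com: bytes, opening: bytes) -> int:
    """Verify an opening; a wrong opening counts the same as aborting."""
    if hashlib.sha256(opening).digest() != com:
        raise ValueError("invalid opening: treated as an abort")
    return opening[0]

def coin_toss(bit_a: int, bit_b: int) -> int:
    """Both parties commit, both open; the outcome is the XOR of the bits."""
    com_a, opening_a = commit(bit_a)
    com_b, opening_b = commit(bit_b)
    return open_commitment(com_a, opening_a) ^ open_commitment(com_b, opening_b)
```

In the story above, an abort (or a bad opening) is simply defined to hand the win to the other party, which is what makes honest play the best response.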
If she aborts, the outcome is defined to be one. Okay, so how do we define a coin toss protocol? There are two requirements. Correctness is defined in the most natural manner: if both parties are honest, we want the outcome to be a random coin. How do we define fairness? The standard line of work on multi-party computation considers a very strong notion of fairness, which I'll call strong fairness in this talk. Strong fairness requires that even if Shafi aborts, I nonetheless output a completely unbiased coin. This is unfortunately known to be impossible in a two-party setting, or in a multi-party setting where half or more of the parties can be corrupt; this was proven in an elegant paper by Cleve back in 1986. Now you may find this strange: since there's an impossibility, how can Blum's protocol work? The key observation is that Blum's protocol actually achieves a strictly weaker notion of fairness, which I'll call game-theoretic fairness. In the protocol we've seen, it's not that Shafi cannot bias the coin — she can indeed bias the coin by aborting — but doing so only ends up hurting herself and benefiting me. Essentially, we are considering rational players who care about maximizing some utility function, and game-theoretic fairness requires that no matter what Shafi does, she cannot benefit herself or hurt me. This also means Shafi's best response is to play honestly, and therefore honest behavior is an equilibrium. Since Blum's protocol achieves game-theoretic fairness in a two-party setting, a very natural question is: can we achieve game-theoretic fairness in multi-party coin toss too? It may not be completely clear what the multi-party formulation is; I'll explain that in a little bit. When we first started working on this problem back in 2018, we were surprised to find that no prior work had considered this question.
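Written in my own notation (not the talk's slides), the two-party contrast between the two notions is roughly the following.

```latex
% Strong fairness: the honest party's output is unbiased no matter what
% the other party does (impossible for two parties -- Cleve 1986):
\Pr[\text{honest party outputs } 1] = \tfrac{1}{2}
  \quad \text{under any strategy of the other party.}

% Game-theoretic fairness: deviating can neither raise the deviator's
% expected utility nor lower the honest party's:
\mathbb{E}\!\left[U_{\mathrm{dev}}\right] \le \mathbb{E}\!\left[U_{\mathrm{dev}}^{\mathrm{honest}}\right],
\qquad
\mathbb{E}\!\left[U_{\mathrm{hon}}\right] \ge \mathbb{E}\!\left[U_{\mathrm{hon}}^{\mathrm{all\text{-}honest}}\right].
```

Blum's protocol satisfies the second pair of conditions (aborting only transfers the win to the honest party) while the first condition is unachievable.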
Actually, the whole line of work on multi-party computation considered the strong fairness notion, even though the very first paper — Blum's coin toss — in fact achieves game-theoretic fairness. Before explaining our results, let me mention that under some strong assumptions there are trivial, immediate solutions, and these are not the settings we are interested in. If we have an honest majority, we can just use honest-majority multi-party computation with fairness and guaranteed output; this can be accomplished in a constant number of rounds. However, in decentralized blockchain settings, honest majority is often not a reasonable assumption. Say you have a smart contract, and people are entering and playing under their pseudonyms; pseudonyms are often cheap to make up, so 90% of the pseudonyms could be controlled by a single entity. Another non-solution is essentially to assume the problem away: suppose there's a trusted setup that can toss the coin for us, or the trusted setup picks some pseudorandom seed, and whenever we need a coin we stretch the seed to get more pseudorandomness out of it. However, in a decentralized setting we often don't want to trust any single entity, and moreover we want the coin to be unpredictable in advance. With this in mind, let's refine the problem we are actually asking: assuming corrupt majority and no setup assumptions — that is, in the plain model — can we achieve game-theoretic fairness in multi-party coin toss? If we establish feasibility, then the next question we may care about is the efficiency of these protocols, and in this talk we'll particularly care about round complexity. So let me try to define the problem more precisely. There are actually two formulations for game-theoretic multi-party coin toss, and they're both very natural. In the first formulation, I want you to think of it as binary roulette: every player bets on either zero or one.
And if their bet matches the outcome, they win. Imagine everyone puts down one ether to enter the game, and all of the winners divide the pot — that can be the utility function. The second formulation is more like a lottery, and it's the same as leader election: we have n players, we want to elect a winner at random, and the winner takes the pot of all bets. For this talk I'm going to focus on the second formulation. I just want to quickly mention that in a couple of other papers we actually gave a complete characterization for the first formulation — essentially, when it is feasible and when it's not. So let's focus on the leader election definition. I want to reiterate the utility function here: as I said, the winner takes all, but we can easily rescale the utility and simply assume the winner has utility one and everyone else has utility zero. This is the utility function we will assume for the rest of the talk. Leader election has quite a lot of applications in blockchain settings. Imagine there's a smart contract looking for workers to provide verifiable computation service or, let's say, VDF service. Whoever provides the service can earn rewards, so everyone's eager to enter — they're competing to become the leader. Before talking about our protocol, let me first give you a simple folklore solution so that you have something concrete in mind. Imagine we have these cryptographers in a room, trying to elect the PC chair of the next Crypto conference. Of course, being PC chair is a lot of work, but say everyone's eager to serve the community, so they all want to get elected. We're going to pair them up, and every pair runs Blum's coin toss to elect a winner. The winner survives to the next round; then the winners get paired up and compete against each other using Blum's protocol, until a final winner is elected.
In general, you can do this in a tree-like fashion, and the whole protocol completes in log n rounds. It's actually interesting to ask how to do better than log n rounds, and that's what we are going to talk about later in the talk. One thing I want to mention is that if anyone aborts — or opens their commitment wrongly, which is the same as aborting — they automatically forfeit and are kicked out. If you look at this tournament-tree protocol, it achieves a very strong notion of game-theoretic fairness. It achieves the following two things. First, no coalition is able to increase its own utility, no matter how it deviates from the protocol — as I mentioned, when you abort you automatically forfeit, so it never makes sense for you to abort. We also want a second property: no coalition can harm any honest individual. The second property is also important. For example, let's say I'm providing verifiable computation service to a smart contract in exchange for rewards. It may be in my interest to monopolize the ecosystem; therefore I may want to drive away the competition and make the smaller players go away. Even if in the short term it costs me something to perform this attack, I'm still interested in harming the smaller players. And if a coalition can harm individual players, it creates a disincentive for small players to join the system. If a protocol satisfies these properties, then honest behavior is an equilibrium: if I join the system and everyone else is behaving honestly, my best response is also to behave honestly. And this is true no matter what my goal is — I may be selfish and profit-seeking, I may be malicious and trying to harm others in an attempt to monopolize the ecosystem, or I may be paranoid and just want to defend myself in the worst possible scenario. No matter what my goal is, I just shouldn't have an incentive to deviate.
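The tournament tree described above can be sketched as follows. Each pairwise match is modelled here simply as a uniformly random bit, since an honest Blum coin toss is unbiased; in the real protocol each match is a commit-then-open toss, and an aborting player forfeits its match.

```python
# A sketch of tournament-tree leader election: pair the players up, let
# each pair play one coin toss, and repeat among the winners. With n
# players this takes ceil(log2 n) rounds of pairwise tosses.
import secrets

def tournament(players: list[str]) -> str:
    """Elect one leader from the list via repeated pairwise coin tosses."""
    survivors = list(players)
    while len(survivors) > 1:
        next_round = []
        for i in range(0, len(survivors) - 1, 2):
            # Honest Blum toss is a fair coin, so the match winner is
            # uniform over the pair.
            next_round.append(survivors[i + secrets.randbelow(2)])
        if len(survivors) % 2 == 1:
            next_round.append(survivors[-1])  # odd player out gets a bye
        survivors = next_round
    return survivors[0]
```

Since the loop halves the field each iteration, the round complexity is logarithmic in the number of players, matching the log n bound in the talk.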
Okay, so we know game-theoretic fairness is possible for leader election in log n rounds; this follows from the tournament-tree protocol. Now the interesting question is: can we asymptotically improve the round complexity to something o(log n)? Again, remember we are assuming corrupt majority and no setup. Also, going forward, whenever I say fair I automatically mean game-theoretically fair. So what's the first thing we ought to try? The most naive idea is to take the tournament-tree protocol and compress it to two rounds. Remember, in the tournament-tree protocol we have log n rounds, and in every round you commit and then you reveal. Now let's just say we commit the coins for all rounds in one shot, then reveal all the coins in one shot, and then compute the winner in the same tree-like fashion. So I want you to think: would something like this work? Well, not surprisingly, this naive approach is completely broken — and that's why the problem is interesting. Here's the attack. Let's say Shafi and Alessandro form a coalition. They have a strategy for winning definitively, and it works like this: they each commit the coins (0, 0) and (1, 1), respectively — two coins each, one coin corresponding to each round of the original protocol. Now they wait until the other bracket opens their coins. At that moment they can choose one of the two coalition members to abort; the other one survives, and they can choose in a way that allows them to win definitively. So this brings us to our results. I'll talk about these results at a high level and then go into details. We have good news and bad news. The bad news is that if we restrict ourselves to protocols that are similar in structure to the tournament-tree protocol, then log n rounds is the best you can do. So what do I mean by similar in structure to the tournament-tree protocol?
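Here is a toy simulation of that attack on the naively compressed four-player protocol, under an assumed match convention (the lower-bracket player wins a match iff the XOR of the pair's coins for that round is 0, and aborting forfeits — both details are my own illustrative choices). Shafi and Alessandro commit (0, 0) and (1, 1), watch the honest bracket resolve, and then abort the member whose round-two coin would lose.

```python
# P0, P1 form the coalition (one bracket); P2, P3 are honest (the other
# bracket). All round-1 and round-2 coins are committed upfront and then
# revealed together, so the coalition can abort adaptively.
def run(honest2: tuple, honest3: tuple) -> str:
    coins = {"P0": (0, 0), "P1": (1, 1), "P2": honest2, "P3": honest3}
    # The honest bracket resolves publicly: P2 wins iff the XOR is 0.
    hw = "P2" if coins["P2"][0] ^ coins["P3"][0] == 0 else "P3"
    c2 = coins[hw][1]  # the honest finalist's round-2 coin, now known
    # Adaptive abort: keep the coalition member whose round-2 coin equals
    # c2, so the final XOR is 0 and the coalition member wins the final.
    survivor = "P0" if c2 == 0 else "P1"
    final = survivor if coins[survivor][1] ^ c2 == 0 else hw
    return final
```

Because the coalition covers both possible round-2 coins, one member always matches the honest finalist, so the coalition wins for every possible choice of honest coins.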
In the tournament-tree protocol, remember, it works by commit-and-immediately-reveal: you commit to something, and in the immediate next round you reveal it, and this repeats log n times. If we restrict ourselves to this commit-and-immediately-reveal model, then it's pretty hopeless, and we have to suffer log n rounds. I want you to think of this lower bound as a sanity check in protocol design rather than a deal breaker, because who says we have to tie our hands like this? The reason we came up with this lower bound is that initially we were trying protocols similar to the tournament tree, but soon enough we realized that this is kind of a dead end. And this brings us to our main upper bound result: we show that if you are willing to make a couple of relaxations, we can indeed overcome this log n barrier. First, we have to relax the fairness notion to approximate fairness — on this slide it says o(1)-fairness; I'll explain what that means in just a little bit. Second, we cannot restrict ourselves to the commit-and-immediately-reveal model; we are now willing to work with standard cryptographic assumptions and use crypto in a general way. In this talk I only have time to talk about the upper bound; I don't have time to show you the lower bound proof. For this result, I first have to explain what approximate fairness is, and then I'll tell you what crypto we need and how the construction works. So first, what do I mean by approximate fairness? It's similar to the notions we saw earlier, but now allowing a small epsilon approximation factor. A protocol is said to be epsilon-fair if the following two things hold — and I'm always going to assume that the players are polynomially bounded in terms of computation power. First, we want that no coalition can increase its expected utility by more than an epsilon factor.
Second, we want to make sure that no coalition of size up to (1 − epsilon) · n can reduce any honest individual's utility by more than epsilon. In our actual theorem, we can work with an epsilon that's, let's say, o(1). For this talk I'm often going to use 1% for epsilon just for simplicity, so this means the coalition can be as large as 99%, and we want to make sure the coalition cannot increase its own utility by more than 1% and cannot harm an honest individual by more than 1%. It's also important to note that this epsilon slack is a multiplicative notion. For example, let's look at the second requirement, that you cannot harm others. Normally, if everyone plays honestly, any single individual's utility is 1/n in expectation. With approximate fairness, what we want is that no coalition of up to 99% in size can reduce any individual's utility by more than 1%; this means any honest individual should have utility at least (1 − epsilon)/n. So you can see it's a multiplicative notion, and this kind of multiplicative notion is the most natural in many applications. For instance, you could be playing some game repeatedly, and in these cases the absolute value of the utility doesn't mean very much, so it's natural to normalize the game and use this multiplicative notion. Again, our philosophy here is to achieve incentive compatibility. Now we have this tiny epsilon slack: if you deviate, maybe you're able to do just a little bit — epsilon — better, but the relative gain is so small that it's just not worth the trouble, because if you deviate you can get caught or exposed. In this talk I'm going to stick with these simple notions, but in our actual paper we define an even better solution concept called epsilon-sequential fairness; I will very briefly mention this at the end of the talk.
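Written out for the leader-election utility (winner gets 1, everyone else 0), the two epsilon-fairness conditions are roughly the following; this is a paraphrase of the talk's informal statements, not the paper's exact definition.

```latex
% (1) No (polynomially bounded) coalition A gains noticeably by deviating:
\mathbb{E}\!\left[U_{\mathcal{A}}^{\mathrm{deviate}}\right]
  \le (1+\epsilon)\,\mathbb{E}\!\left[U_{\mathcal{A}}^{\mathrm{honest}}\right].

% (2) No coalition of size at most (1-\epsilon) n can noticeably harm an
%     honest player i (whose honest expected utility is 1/n):
\mathbb{E}\!\left[U_i\right] \ge (1-\epsilon)\cdot\frac{1}{n}
  \quad \text{for every honest } i.
```

Both bounds are multiplicative in the honest-play baseline, which is why even a tiny additive gain of 1/n counts as significant for a small coalition.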
Now that I've explained what approximate fairness is, before I jump into our construction let me mention that our result is actually parameterized. Here r means the round complexity: for any r that's at least some constant times log log n, we can achieve, roughly speaking, 2^(−r) fairness. One thing to observe is that as the number of rounds r increases, the slack — the approximation factor — drops exponentially fast; it's a sharp curve. I'm going to next tell you how our construction works. At a very high level, we use a combination of extractors and honest-majority multi-party computation. When you first see this, you should be surprised, because didn't I just tell you we are working with corrupt-majority coalitions? So how can honest-majority MPC be useful in this setting? You'll see that in a little bit. Okay, so let's dive into the technical details of the construction. I'll first start with a strawman. The strawman scheme actually relies on a random oracle, and it has a couple of flaws, but we are going to fix these flaws one by one, and we can remove the random oracle at the end. At the very end of the talk I'll quickly mention sequential fairness, and then I'll conclude. So here's the blueprint. We have a large number of players, and first — this is a universe-reduction tactic — we sample a small committee of polylog(n) size. Then we run the tournament-tree protocol among this polylog-size committee, and because the committee is polylog(n) in size, the tournament-tree protocol will complete in O(log log n) rounds. So what remains to be answered is how we do this committee election — and we also want the committee election to complete in a small number of rounds. Okay, so let's see. Here's a strawman approach; it's actually inspired by proof-of-stake protocols like Snow White.
Imagine each player posts a random bit to the blockchain, and the concatenation of all of these bits is fed into a random oracle, let's say SHA-256. The outcome is then used to elect a polylog-size committee. From now on, I will assume that if anyone aborts, their bit is treated as a default bit of zero. Let's think about what this protocol gives us. Imagine that the red players form a coalition. The coalition has some advantage over the honest players, because they can wait until the honest players post their coins, look at the honest coins, and only then decide their own coins. This means they can try different combinations of their own coins by querying the random oracle multiple times, and then pick the combination that helps them the most. Recall that eventually we elect a winner from the committee. So if the coalition wants to get elected, what it wants is to increase its representation in the committee — it should grab as many seats as possible. Let's say this is the coalition's objective. Fortunately, we can prove that a coalition that controls any constant fraction of the players cannot increase its representation in the committee noticeably beyond its fair share. For example, suppose the coalition controls 99% of the players. In this case the committee should also be roughly 99% red, and not too much more. This means that any coalition controlling a constant fraction of the players cannot benefit itself noticeably, and proving this is pretty straightforward. What we can show, for a single random-oracle query, using a standard Chernoff bound, is that the probability that the committee is bad is negligibly small — and here a bad committee means one that is more than 99.1% red. Now, if the coalition is polynomially bounded, it can only make polynomially many random-oracle queries, so we can just take a union bound over these polynomially many queries.
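As a concrete sketch, the strawman committee election might look like this in Python, with SHA-256 standing in for the random oracle. The committee size and the way the digest is stretched into player indices are my own illustrative assumptions, not the talk's exact parameters.

```python
# Strawman committee election: every player posts one bit; the
# concatenation is hashed, and the digest is expanded into a
# pseudorandom committee of player indices (without replacement).
import hashlib

def elect_committee(bits: list[int], committee_size: int) -> list[int]:
    """Deterministically map the posted bits to a small committee."""
    n = len(bits)
    assert committee_size <= n
    seed = hashlib.sha256(bytes(bits)).digest()
    committee: list[int] = []
    counter = 0
    while len(committee) < committee_size:
        # Stretch the seed by hashing it with a counter (an assumed
        # expansion method standing in for the oracle's output).
        block = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(block, "big") % n
        if idx not in committee:
            committee.append(idx)
        counter += 1
    return sorted(committee)
```

Because the output is a deterministic function of all posted bits, the coalition's only lever is to regrind its own bits after seeing the honest bits — which is exactly the multi-query attack discussed next.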
And we can conclude that, except with negligible probability, the committee will be at most 99.1% red. Okay, so that was the good news, but the scheme has a couple of flaws. The first problem is that a large coalition can actually harm a single individual. If you want to harm an honest individual, you can mount the same attack: wait until the honest people post their coins, then try different combinations of your own coins and pick one that excludes this single individual from the committee. So what's the intuition here? It's easy for you to make sure one specific individual is either excluded or included; but if you want to make sure that many honest people are excluded, or many red players are included, that's much harder. For a similar reason, we have the second flaw: a small coalition can actually benefit itself significantly. At first sight you may be shocked, because earlier I said a large coalition cannot benefit itself noticeably, and now I'm telling you a small coalition can benefit itself significantly. It seems counterintuitive, because it should somehow be easier to defend against a small coalition.
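The two-step argument just sketched can be written out roughly as follows; here ρ is the coalition's fraction, δ the slack (0.1% in the running example), s the committee size, and q the number of oracle queries. The Hoeffding-style constants are my own filling-in of "a standard Chernoff bound", not the talk's exact statement.

```latex
% One oracle query: the committee's red fraction concentrates around rho.
\Pr\big[\text{one query yields a committee more than } (\rho+\delta)\text{ red}\big]
  \le e^{-2\delta^2 s} = \mathrm{negl}(n)
  \quad \text{for } s = \mathrm{polylog}(n).

% Union bound over the q = poly(n) queries a bounded coalition can make:
\Pr\big[\text{some query yields a bad committee}\big]
  \le q \cdot e^{-2\delta^2 s} = \mathrm{negl}(n).
```

Note that the second step is exactly where polynomial boundedness is used; once the random oracle is replaced by a sampler, this step has to become a combinatorial argument instead.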
But on the other hand, if you think about it, we have a multiplicative notion of fairness. A single individual's normal utility is just 1/n, so even if it can increase its utility by just a tiny bit — say by 1/n — it's already doing twice as well, and that's significant by our definition, because anything more than a (1 + epsilon) factor is considered significant. So in some sense, in this case, the small coalition is actually harder to defend against. And again, it's the same kind of attack: the small coalition can wait until the honest people post their coins, and then try different combinations of its own coins to increase its representation in the committee. We can fix these problems one by one, and let's first begin with the first problem: a large coalition can harm a single honest individual. To fix this problem, our observation is that if you want to exclude some individual from the committee, you have to know the identity of the victim. So our idea is to make sure you don't know the identity of the victim. How does this work? Every player, instead of using their real identity in the committee election, picks a virtual ID at random. They know their own virtual ID, but other people don't. Now they commit to their virtual IDs — each player commits to its virtual ID separately. And we still run the same protocol; the only difference is that we run this random-oracle-based committee election on the virtual IDs instead. Only when we finish the virtual-ID election do people open their virtual IDs; then we find the reverse mapping, and we know who actually lands in the committee. Then we just run the tournament tree among the committee and elect the final winner. So there is actually a small subtlety here.
If you just do this, it turns out it doesn't work, and here's an attack. The coalition doesn't know honest people's virtual IDs, but they do know their own virtual IDs. So they can pick their virtual IDs adversarially — say every coalition member picks the same virtual ID — and then pick their coins carefully to make sure that specific virtual ID is included in the committee. This way, every red player can become part of the committee, and that's bad. To defeat this attack, we modify the rule just slightly: your virtual ID is only elected into the committee if there's no collision with other virtual IDs. So if you put all your eggs in one basket and then make sure that basket is elected, that's not going to work, because it just causes collisions among yourselves. If we introduce this fix, it almost works, except that we have to make the virtual-ID space a little bit larger to make the collision probability small enough — I won't tell you the detailed parameters here. But essentially, with this modification we can fix the first problem. Now we just have to focus on how to fix the second problem: a small coalition can benefit itself significantly. In fact, earlier I already kind of gave away the solution, because I told you our solution involves honest-majority MPC — and here is where we use it. The reason a coalition can help itself is that every player knows its own virtual ID. So the idea is to make sure players don't choose — and don't even know — their own virtual IDs. To do this, we have every player pick a random unmasked virtual ID, but this is not the final virtual ID. To get the final virtual ID, you have to apply another masking permutation.
The masking permutation is chosen using honest-majority MPC. If the coalition is small — say, less than 1% — then the honest-majority MPC has privacy, so the coalition has no idea what this masking permutation is, and therefore no idea what its own virtual IDs are. That's the idea. Now the question is: what happens with large coalitions? For large coalitions, the honest-majority MPC is completely broken — it doesn't have any security. But that doesn't matter, because we can get fairness from the first part of the construction, the argument we saw earlier. Earlier we showed that a large coalition cannot benefit itself noticeably, and that same argument still applies even with the new modifications. So when we combine these ideas, we fix both of the flaws, and the resulting scheme works. The only problem that remains is that we still have a random oracle, and we want to get rid of it. At a high level, we want to replace the random oracle with a combinatorial object called a sampler, which is kind of equivalent to a seeded extractor. It's not quite that simple — it turns out we have to introduce some other changes to the protocol to make it work. One thing to keep in mind is that earlier, our proof relied on the fact that the coalition can only make polynomially many random-oracle queries. But once we replace the random oracle with a sampler, we'll have to make a combinatorial argument instead, without any regard to the players' computational bounds. Unfortunately, in this talk I won't have time to tell you how to get rid of the random oracle in detail, so let me just skip to the end. At a high level, in order to replace the random oracle with a sampler, we need to adopt a two-phase committee election strategy. First, we run a single iteration of Feige's lightest-bin protocol to elect what's called a preliminary committee.
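Stepping back for a moment, the two virtual-ID fixes from the last few paragraphs can be put together in a toy sketch. This is illustrative only: the ID-space size is a hypothetical parameter, and the honest-majority MPC that chooses the masking permutation is modelled as a trusted local shuffle.

```python
# Fix 1 (collision rule): only virtual IDs that appear exactly once
# survive, so a coalition that piles onto a single ID collides with
# itself. Fix 2 (masking): the final virtual ID is a secret permutation
# applied to each player's randomly chosen unmasked ID, so while the
# permutation is hidden, no player knows its own final ID.
import secrets
from collections import Counter

def mask(unmasked_ids: list[int], id_space: int) -> list[int]:
    """Apply one secret random permutation of the ID space to every ID.
    The Fisher-Yates shuffle stands in for the MPC-chosen permutation."""
    perm = list(range(id_space))
    for i in range(id_space - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return [perm[u] for u in unmasked_ids]

def collision_free(final_ids: list[int]) -> set[int]:
    """Keep only the virtual IDs that appear exactly once (fix 1)."""
    counts = Counter(final_ids)
    return {vid for vid, c in counts.items() if c == 1}
```

Because `mask` is a permutation, distinct unmasked IDs stay distinct, and the collision rule only punishes players who deliberately (or unluckily) chose the same unmasked ID.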
Next, the preliminary committee runs a protocol just like the random-oracle-based protocol we saw, to elect the final committee — but of course now we replace the random oracle with a sampler. Finally, we run the tournament tree among the final committee to elect the final leader. I'm going to ask you to read the paper for the details of the scheme as well as the proofs. Before I conclude, I mentioned sequential fairness; let me just say a few words about it. At a high level, epsilon-fairness — the notion we saw — requires that a priori, before the protocol starts, the coalition has little incentive to deviate. But suppose that with some small but non-negligible probability, say epsilon/2, some bad event happens — say, with small probability Alice steals Bob's secret key. Conditioned on this bad event happening, Alice does have an incentive to deviate, because in that case maybe she can just steal Bob's bitcoins. But because this bad event happens with only epsilon/2 probability, a priori Alice doesn't have a noticeable incentive to deviate. So if we just use the plain epsilon-fairness notion, it would sometimes fail to rule out undesirable protocols like this. That's why we think epsilon-sequential fairness is a better solution concept for approximate fairness. Roughly speaking, the sequential notion requires that, except with negligible probability, at no point in the protocol should you have a noticeable incentive to deviate. I'll just leave it at this — please read the paper for the details. What's interesting here is that this shows that even defining approximate game-theoretic fairness is non-trivial and requires careful consideration.
Finally, I want to conclude by saying that game theory meets cryptography is much needed in decentralized blockchain applications. As with most blockchain ideas, in this space industry is actually ahead of research; in many cases it's not even clear how to model things or how to formulate the problems. I personally think this is an exciting area that needs new scientific foundations, and also interdisciplinary research. Thank you very much.