Good afternoon. My name is Elias Koutsoupias and it's very exciting, a great privilege, to be here and to talk to you. I'm going to talk about blockchain and game theory, and games in general. I'll start with a short introduction on topics like why game theory, and what it has to do with blockchain. If blockchains are going to be the next great revolution, we need to build a strong mathematical theory, and game theory, and more generally mathematical economics, seem to be an indispensable tool in this direction. In my opinion they are also a source of great potential applications. So in order to build a truly universal, robust blockchain, first of all we have to solve formidable technical problems. We need fast protocols, and we need high-quality, very reliable code that meets well-defined specifications, to a very high degree at least. But blockchains, unlike traditional software systems, run on independent nodes, and each independent node has its own objectives. There are other systems with similar issues, most importantly the internet and of course peer-to-peer systems. They also run on independent nodes, but blockchains seem to add something new: much more complicated technical and economic interactions between the participants. So in order to understand this interaction we need to bring in game theory, a field at the intersection of mathematics, economics and, recently, computer science. Game theory studies collaboration and competition. It has some very beautiful questions and some amazing answers, and it draws from many areas of mathematics. It also intersects with other intriguing areas, like common knowledge and learning; most of these concepts were formulated by game theorists back in the 50s, well before computer scientists started thinking about them. And game theory is also a source of great algorithmic and computational complexity problems.
For example, a typical such problem is: given a game, can we compute a Nash equilibrium efficiently? We know that one exists; is it efficiently computable? This is a major question. The object of study of game theory, of course, is games, and a game is defined essentially by two things. The actual definition is a little more complicated, but there are two ingredients. The first is a set of strategies, one set for each player, and then there are the payoff functions, one payoff function for every participant. Given the sets of strategies and the payoff functions, each player selects a strategy out of his or her set and gets a payoff. The important thing here is that the payoff depends on the selections of all players. This makes it much more interesting than simply computing a function: it is a function that depends on the decisions of all participants. So what is a solution to a game? A solution is an equilibrium, and the most prevalent notion of equilibrium is the Nash equilibrium. And what is a Nash equilibrium? It's a set of strategies such that no player wants to unilaterally deviate from it. This was defined in the late 40s by John Nash, when he was around 20 years old; we've seen a similar picture before. Furthermore, not only did he define this notion, he proved his famous theorem that every finite game has a mixed Nash equilibrium. In blockchain technology there are a lot of questions that look like games. So there are a lot of blockchain games, but they have something very unusual with respect to traditional games: they are dynamic. They are not the static, stylized settings of traditional games; in fact they are protocols. Each participant, each player, runs an algorithm. All the algorithms together form a distributed protocol, and that's the solution. Now a Nash equilibrium is a set of protocols from which no participant has an incentive to deviate.
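To make the definition just given concrete, here is a minimal Python sketch of the Nash condition, that no player gains by a unilateral deviation, for a two-player game with two strategies each. The payoff table is made up (a prisoner's-dilemma-style game), not something from the talk:

```python
# payoff[i][s1][s2] is player i's payoff when player 1 plays s1 and
# player 2 plays s2 (strategy 0 = cooperate, strategy 1 = defect).
payoff = [
    [[3, 0],   # player 1's payoffs
     [5, 1]],
    [[3, 5],   # player 2's payoffs
     [0, 1]],
]

def is_nash(s1, s2):
    """A profile is a Nash equilibrium iff no player gains by deviating alone."""
    best1 = all(payoff[0][s1][s2] >= payoff[0][d][s2] for d in range(2))
    best2 = all(payoff[1][s1][s2] >= payoff[1][s1][d] for d in range(2))
    return best1 and best2

# In this payoff table only mutual defection (1, 1) survives the check.
equilibria = [(a, b) for a in range(2) for b in range(2) if is_nash(a, b)]
```

Enumerating the four pure profiles this way is exactly the "no unilateral deviation" definition; finding mixed equilibria efficiently is the hard computational question mentioned above.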
There is something more general here: blockchain games are usually stochastic. And what is a stochastic game? A stochastic game is a generalization of classical games. It has the following attributes, which are so familiar from blockchains. These games have a state; the blockchain itself is a state, and decisions are based on the current state. There is usually an external source of randomness, as in blockchains; traditionally such games are called games against nature, where nature is the player that provides the randomization. These games also have incomplete information: participants don't have complete knowledge of the current state, and they don't know what the other players know. We know these games are very complex and unfortunately we don't understand them very well. In fact we have very few theorems about them, from computational questions to quality questions. These games were defined by Lloyd Shapley in the early 50s, I think in 1952. Shapley and Nash were fellow PhD students at Princeton at the time. It's the time of the movie that Vasili showed you, A Beautiful Mind. There is a famous scene at a bar where students of Princeton are sitting and a group of girls comes in, and they discuss how they plan to hit on them. And then John Nash has an epiphany: he understands the notion of Nash equilibrium. Of course, as in most of these movies, the movie gets it completely wrong; what it shows is exactly the opposite of a Nash equilibrium. Of the two, Shapley is better known for the Shapley value, which is a way to evaluate the power of a voting system. For example, at the UN people calculate the power of the permanent members versus the non-permanent members using the Shapley value. Both of them got a Nobel Prize. They were not exactly friends; Nash was much younger at the time, and Shapley was a war hero.
Nash looked up to him, but Shapley probably didn't like Nash so much. Both got the Nobel Prize much later, at the end of their lives, separately. Shapley had a good life; Nash had a tragic life, up to his accidental death. Okay, so now I'm going to become a little more specific. I'm going to discuss mining games. These are specific games that arise in blockchains, and this is where game theory came into the picture for blockchains, for Bitcoin. So what's a mining game? Just to simplify the discussion, consider Bitcoin, although this has nothing to do with proof of work versus proof of stake; we could consider Ouroboros if you want, but usually these games started life with Bitcoin. Imagine that you have three miners or three block leaders: the almost-red one, the almost-green one and the almost-grey player. They run the protocol of extending, of creating, the blockchain. According to the protocol, they have the following policy: this is the chain so far; they take the last block and they try to extend it. But the green player starts thinking; he is smarter than the others, probably. He asks: what will happen if I press the wrong button, that's one question. The other question is: what will happen if I start mining on the previous block? He is a player that controls 45% of the power, computational power or stake. So with probability 45% this player manages to get the next block before the others. Now in this situation, of course, the green player is going to continue his own chain. The other players may switch to his path, but maybe not; so let's say that they stay on their own path. So what if he is lucky again and creates another block? Now the situation is much better. This happens with another probability of 45%, so the probability of getting two blocks, since these are independent events, is approximately 20%.
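This back-of-the-envelope probability is just the product of two independent events; a one-line check in Python (the variable names are mine):

```python
# A miner with fraction p of the total power wins each of the next two
# blocks independently with probability p, so two in a row has probability p*p.
p = 0.45                  # the green miner's share of power (or stake)
p_two_in_a_row = p * p    # 0.45 ** 2 = 0.2025, roughly the 20% quoted
```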
But then the other miners will switch to the longest path and they will continue from there. So the green player managed, with some significant probability, 20%, to create two blocks and remove the red block from the chain. Potentially this gives the green player more money: if he had continued from the tip, the chain would have one red block and two green blocks; now it has only his two green blocks. So potentially he can get more money; for the reward function of Bitcoin, he does get more money. This is a profitable strategy for a player that has 45% of the power. So this is the game, but we didn't really define the game. Let's try to define it. In such a situation you should always ask: what are the strategies, and what are the payoff functions? Let's start with the strategies. Every miner has the following possibilities. Every miner has to decide where to continue mining: the longest path or some other path. Also, when to reveal a block that he managed to create; he may have reasons not to reveal it immediately. There are other things the miner can consider, for example which transactions to include in a block. That's a crucial part of the strategy in Bitcoin as it is right now; in fact most blockchains would collapse if players started strategizing about which transactions to include in their blocks. There is something else that we're going to discuss probably later on. This looks like the space of strategies, but the space is in fact probably much larger. For example, a miner can create transactions to include in his own block, or not to include in his block but for somebody else to include. So by creating transactions he may change the game completely. This is the full space, but from now on we are going to focus essentially only on where to mine.
Only this subset of strategies: what happens when you decide where to continue mining, like the green miner before. So what is the payoff function? Let's not define it precisely, but intuitively a miner gains if he manages to waste enough effort of the other miners, like the green miner wasted the effort of the red miner in the previous picture when the red block was removed from the blockchain. But notice that this only works because the payoff depends on the surviving blocks. After realizing this, people that design blockchains thought about other reward schemes, like Ouroboros: okay, let's change the reward scheme to prevent this kind of game, this kind of bad Nash equilibria. Are these completely successful? In my opinion not completely, because the strategy space is complicated and we don't really understand it. It's still a major question. So let's consider a simplified game now. The full game is complicated, but I want to show you what happens if we consider only the game where the miner decides where to continue mining. Imagine that we have the green strategizing miner and everybody else follows the longest path. This means that essentially they act like one player, but a player that fixes his strategy to the longest path. So we have the red player that plays longest path and the green player that starts thinking about what is best for him. Let's focus on the green player: what's his best strategy? First of all, notice that if they play this game, the red chain is going to be longer than or equal to the green chain; otherwise the red player would switch to the green one. Say the green path has a blocks and the red path has b blocks. So the question is: in a state with a green blocks on one path and b red blocks on the other, what is the best strategy for the green player?
The answer, the solution to this, is given by this picture here. Imagine that p is something like 0.45; this picture is for something like that. It says the following: for every a and for every b, the dark area says that if the current state is here, with this a and this b, then the green miner should persist with his own strategy and not switch to the longest chain. If everybody plays longest chain, as in Bitcoin, the state is always down here: whenever somebody succeeds, all of them move there, nobody is ahead, and the lengths of both competing forks are zero. So if everybody is honest, the state stays down here. But this picture shows that it pays not to play the longest path here, because notice what happens: if the red player manages to get one block ahead, we are still in the grey area, which means that the green player doesn't switch. So although the other player is one block ahead, the best strategy is not to switch. This needs a lot of calculation; it is not easy to calculate. In fact, it's a process that depends on the future, on what will happen. This is a Markov decision process, and if you make it a little more complicated it becomes a stochastic game, which we don't know how to analyze; that's why I analyze the Markov decision process here. I'm going to skip the details, but let me just say a few more thoughts about mining. Some proof-of-stake systems like Ouroboros have a slight disadvantage, or perhaps a great disadvantage, in this mining game compared to proof-of-work systems, because, as was mentioned before, you know the sequence of leaders in advance. The protocol creates a leader sequence for the whole epoch, so a player knows what is going to happen in the future: who is going to be the next miner.
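The persist-or-switch trade-off can be illustrated with a small Monte Carlo sketch. The model below is a deliberate simplification of the Markov decision process in the talk, under assumptions of my own: blocks are public immediately, the red miners always extend the longest fork, and the green miner forks one block back whenever a red block reaches the tip, persisting until he falls two blocks behind. The numbers are only indicative:

```python
import random

def green_share(p, epochs, seed=1):
    """Estimate the fraction of surviving chain blocks the green miner gets."""
    rng = random.Random(seed)
    green_chain = red_chain = 0
    for _ in range(epochs):
        if rng.random() < p:
            green_chain += 1            # green extends the tip honestly
        else:
            # a red block appears; green forks one back: state (a, b) = (0, 1)
            a, b = 0, 1
            while True:
                if rng.random() < p:
                    a += 1
                else:
                    b += 1
                if a > b:               # green's fork is longer: red switches
                    green_chain += a    # the b red blocks are orphaned
                    break
                if b - a >= 2:          # too far behind: green gives up
                    red_chain += b      # green's a blocks are orphaned
                    break
    return green_chain / (green_chain + red_chain)

share = green_share(0.45, 200_000)
```

With p = 0.45 the estimate comes out slightly above the honest baseline of 0.45, consistent with the claim that persisting pays for such a powerful miner; the exact boundary of the dark region requires solving the Markov decision process itself.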
For example, imagine that a miner creates a transaction that says: I'm going to pay the miner of the next block some amount, and he knows who that next miner is going to be. This gives the next miner an incentive to abandon the longest chain and come collect this reward. So there is some safety when the players don't know the future; in proof of stake they do know the future, and this creates a disadvantage. Also, the community seems to be realizing, little by little, that creating a chain is inefficient; you should be able to create blocks in parallel. FruitChains is one such idea, and the next version of Ouroboros, from what I understand, has this idea. But then the mining game may become much more complicated, and we need to analyze it to be sure that the Nash equilibrium is a good strategy. Okay, but now I'm going to switch to an exciting incentive situation which we are analyzing now with Lars, Duncan and Aggelos, my student Christian Koster, and Aggelos' student Katerina Stouka. Let's discuss delegation games. What's a delegation game? We have a proof-of-stake system and we need the stakeholders to delegate their power to somebody in order to run the nodes. So we need to design a delegation game in which each stakeholder selects a delegate, possibly herself, and gives her stake to that delegate. If we do this, the stakeholders are organized in pools, and then the pool leader, the delegate, runs the node. Ideally we want these pools to have some special properties. Here is the situation. Imagine that these dots represent stakeholders; the larger the circle, the larger the stake. What we want is to create a few pools of them, not too many. So we're going to give them incentives to do something like this: this stakeholder delegates his stake to this one, and then another one delegates his stake to this one.
And another one to this one, and so on and so forth. At the end of the process we have four pools in this example. The central circles are the delegates, and these delegates are going to run the system. So we need to give incentives for this to happen. What we really need to do is take the gains from running the protocol, say transaction fees, or perhaps some money that comes from another source, and distribute them to the stakeholders so as to give them an incentive for this kind of behavior. So they come together and they create pools. Say there is a pool of stake σ_i; this pool will collectively get a reward r(σ_i), where r is a reward function. This is essentially the only thing that we need to design: a good reward function. That's our objective: define a reward function. There are probably other parameters here, but let's focus on the simplest setting. We need to design a reward function r that incentivizes the stakeholders to form pools, and what do we want these pools to satisfy? What is the objective? The objective is that we want a given number of pools. At the beginning we fix a target, say 20 or 100; it depends on the technology. If we have a faster chain, we can go to thousands or perhaps tens of thousands. But we have a fixed target K, and we want the number of pools to be approximately K. It also makes sense to have incentives so that no pool becomes really large and controls a lot of power: we don't want dictators. So is there a solution to this question? Not only that: we need solutions that are robust. At least, we want to design a reward function that works for every number of participants.
We don't know the number of participants; we also want it to work for every distribution of stake (who has a lot of stake, who has a smaller stake, and so on), for every degree of concurrency in the decisions (are they deciding together or one after the other, are they going to wait and change their decisions, and so on), and for myopic players, who just look at the current situation and decide where to delegate, as well as non-myopic players, who think about the future and say: if the others are going to do this, I'm going to do something else. And of course there is the question of whether participants are allowed to, or have an incentive to, split their stake and give half of it to one delegate and half to another. So there are a lot of things that we need to satisfy, and perhaps we cannot get all of them, but after looking at this and running some experiments, we ended up with a very simple, natural solution: this reward function here. It incentivizes small stakeholders to come together and join, because they save on the cost while the total reward is the same as being separate. So it encourages them to group, up to some point, up to one over K; beyond that, joining gives them no advantage, in fact they lose, because it is better for them to run separately. So rewards grow up to one over K, and then there is no more incentive to grow. This is a reasonable reward function that says: join together up to one over K, then stop joining. Intuitively this creates pools of size one over K, and therefore K pools, if the total stake is one. So that's the function; let's see some results about it. I'm going to show you two things. Let's first look at the dynamics of the game, which are as follows. The horizontal axis here is time, and the vertical axis represents the stake of each pool.
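The capped reward function just described can be sketched in a line of Python. The saturation at 1/K is the point of the design; the exact function used in Ouroboros has more parameters, so take the names and values below as illustrative only:

```python
# A pool with stake sigma (as a fraction of total stake 1) is rewarded in
# proportion to its stake, but only up to the saturation point 1/k.
# R is the total reward available; both names are illustrative.
def pool_reward(sigma, k, R=1.0):
    return R * min(sigma, 1.0 / k)

# Growing past 1/k brings no extra reward: with target k = 20, a pool of
# stake 0.05 and a pool of stake 0.30 earn exactly the same.
r_small = pool_reward(0.05, k=20)   # saturated exactly at 1/k
r_big = pool_reward(0.30, k=20)     # capped at the same value
```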
So let's look at the last step, when the process finishes. What we have here is a pool of size about 10%, another pool with approximately 10%, and so on, down to smaller pools. The difference between every two lines is the stake of a pool. So we have approximately 20 pools, almost all of them of the same size; this is ideal. We started with some distribution, and then they grouped together into approximately 20 pools, which was our target, of almost the same size. This assumes that everybody is rational, that there are no adversaries in this process; everybody is rational and tries to maximize their own reward. So that's the end of the process; here is how we start. We start with all the stake in one place: there is one delegate at the beginning, as it is now in the system. Then we allow them to decide whether to stay with the current pool or create another pool. So they start creating more and more pools. For example, at this point here somebody decided to create her own pool; that's why we have one more pool at this point. And then this pool grows and grows, and you get this. That's the process: every time you see a split, it means a new pool, a new delegate, was created. Then this delegate attracts more and more stake, and eventually we end up with this almost ideal picture. Okay, good question, but I was not planning to talk about it. For this picture, the following happens: the pool gets some money according to the reward function; they subtract the cost of running the node; the delegate gets some amount, a fraction in this experiment, though we also experimented with a fixed amount, just for doing the work. So he covers his cost, he gets an extra margin as gain, and the rest is split among the members in proportion to their stake. There are other considerations here.
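The sharing rule just described can be sketched as follows. The function and all parameter values are illustrative, not the exact scheme used in the experiments:

```python
def split_reward(pool_reward, cost, margin, stakes, leader):
    """Split a pool's reward: the delegate recovers the cost, keeps a margin
    of the profit, and the rest is shared in proportion to stake.
    stakes: dict member -> stake; leader: key of the delegate."""
    after_cost = max(pool_reward - cost, 0.0)
    leader_cut = cost + margin * after_cost          # cost refund plus margin
    remainder = after_cost - margin * after_cost     # left for the members
    total_stake = sum(stakes.values())
    shares = {m: remainder * s / total_stake for m, s in stakes.items()}
    shares[leader] += leader_cut                     # the delegate is a member too
    return shares

# Hypothetical pool: reward 10, cost 1, 10% margin, three members.
shares = split_reward(
    pool_reward=10.0, cost=1.0, margin=0.1,
    stakes={"alice": 0.6, "bob": 0.3, "carol": 0.1}, leader="alice",
)
```

The shares always add back up to the pool's reward; only how it is divided between the delegate and the members changes with the cost and margin.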
In fact, there are two or three other natural ways to share the money, the reward. And this is a very interesting question, because it is a question of coalitional game theory; there is typically a chapter or section about coalitional game theory, or cooperative game theory as it is sometimes called, in game theory books. But the goal there is to create one pool, one large pool with everybody in it, and to decide how to share the money out of it. Here we have a completely different objective, which creates other very interesting problems and questions. So here, yes. Okay, good. This is a misleading picture, because the horizontal axis is not exactly time; it counts steps of change. Here is how the process works: we pick a stakeholder at random, he decides whether to delegate or not, then another one, and another one. Every time a change happens, it is recorded in this picture. So it's not exactly time; it's misleading in that respect. You don't see how this can be done? The delegates, first of all, are public and known: they have an ID, they have a name. Other stakeholders may not want to have a name, but in order to delegate they have to go through a transparent process that shows they delegated to exactly one delegate; they cannot use the same stake twice. These are technical issues. Sorry, where were we? Okay, it's not about fairness, it's about technical specifications. We have a system, and this system can run efficiently with K pools, with K delegates; that's why we choose K. We want to choose K as high as possible, to have more democracy, but the system may not be able to run if K is in the billions: you cannot have one billion delegates coming together to decide and run a multi-party protocol. It may not be feasible. So we select K to be the largest value feasible with the current technology, with the current implementation, and probably a little less, just to be safe and secure.
So here is the picture of what happens before and after. There are two histograms here, the almost-yellow one and the almost-blue one, whatever that color is. Here is the histogram before; this axis is the stake. We have chosen a Pareto distribution for this picture, which means that there are rich stakeholders with very large stake, then smaller stakeholders, and so on, and the distribution drops off fast, like this. This is the typical distribution of wealth in almost every country; the parameters are not the same everywhere, but their range is not that large. It also corresponds to the Gini coefficient, which measures the inequality of a society; if we fix one of these parameters, we get such a distribution. So, running this process with such a distribution, the end result, the sizes of the pools, is shown here. This is the histogram of the pools, and it is almost ideal: all the pools have the same size. And that's what we want, right? We want the stake to be collected into exactly that number of pools, all of them of almost the same size. In this particular case there is a default player that has 10% and just sits there; this player is not strategic and never tries to maximize the reward, but it is useful in this game. So this is the ideal picture. If we could have this picture for every distribution, for every parameter, we would be happy. Unfortunately, that's not the case: if we change the parameters, this picture becomes different. We don't get pools of the same size, or we get more pools than K. So there is a lot of work that we need to do to understand how this evolves. Okay, I don't want to take much of your time, but let me say a few words about the applications of blockchains. Blockchains are so exciting today; there is such excitement everywhere. But what's the killer application? We don't have it; let's admit it. We don't have a killer application for blockchains. There are many applications.
There is probably a great future, but we need something more. Let me describe one possible source of applications. I don't think that blockchains are just trusted information-sharing ledgers; they should facilitate cooperation. That's probably the source of applications, because blockchains sit exactly between distributed computation and economics; they have this unique position. So here is one potential application: efficient distributed mechanism design. I'm going to elaborate on this, and to do so I'll take a very simple example and just discuss that. So let's take a simple example. We have some agents that want to collaborate for a common good. Let's make it precise: we have only three agents that want to collaborate to build a bridge. The bridge costs, say, 10 coins; billions, whatever, say 10 coins. We're talking about blockchains, and some of the coins are so expensive that 10 coins may well become billions, hopefully for some of them. If we manage to collect 10 coins, we will build the bridge; otherwise, the bridge will not be built. Okay. Let's consider three points of view of traditional areas of science and see how they deal with this problem: the perspective of distributed computing, of simple game theory, and of mechanism design. Let's start with the distributed-systems perspective. Somebody who works in distributed systems and has this problem to solve thinks the following: the problem is that when these three agents come together, the computation may not work as we expect. Somebody may not show up, somebody may drop out in the middle of the computation. That's the problem that distributed computation tries to solve, for example by solving consensus; that's what blockchains do. So what's the solution in this case? From the distributed point of view, a solution is just three numbers that sum up to at least 10.
Here is a particular solution that is acceptable from the distributed-computation point of view: (1, 2, 8). If this is the outcome, they are going to build the bridge; these numbers sum to more than 10, so the bridge is built. Okay, that's a solution. Here is another solution: (0, 7, 3); the first player does not participate. If you're doing research in distributed systems, you want an algorithm that will come up with such a solution: acceptable solutions are three numbers that sum to 10 or more. Notice that here there is no measure of the quality of the solution. Is (1, 2, 8) a good solution? Is it good for every agent? Is it good for society? That's not a concern of distributed computation, because they had harder problems to solve, namely how to coordinate the agents to come up with three numbers at all; that was their concern in the 70s, the 80s and the 90s. Okay, now let's look at the same situation from the game theory perspective. The game theory perspective says: let's focus on the agents, on their strategic thinking. Agents have utilities that assign a value to every solution. So what is the natural game to define here? First of all, they all want to build the bridge; otherwise it's a disaster for them, their utility becomes minus infinity. And if the bridge is built, they want to pay as little as possible. That's their utility. Once we define this utility, a solution is an equilibrium, say a Nash equilibrium. What are the solutions of this game? Every three numbers that sum up to exactly ten. With less than ten we have no bridge, which is a disaster for everybody; with more than ten, some player has a reason to deviate and decrease their contribution.
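This characterization, that the Nash equilibria are exactly the contribution profiles summing to exactly ten, can be checked in a few lines; the function name is illustrative:

```python
def is_equilibrium(contributions, cost=10):
    """Nash check for the bridge game: build iff contributions cover the cost,
    everyone prefers any built bridge (utility minus infinity otherwise),
    and pays as little as possible."""
    total = sum(contributions)
    if total < cost:
        return False   # no bridge: any player would deviate to get it built
    # with a surplus, any contributing player can pay less and keep the bridge
    return not (total > cost and any(c > 0 for c in contributions))

eq = is_equilibrium([0, 7, 3])      # sums to exactly 10: an equilibrium
not_eq = is_equilibrium([1, 2, 8])  # sums to 11: someone can pay less
```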
So every such triple is a solution of the game, a Nash equilibrium. Now notice the problem here: an abundance of Nash equilibria. Which one is good? Which one is bad? This is not prescribed by the definition of a Nash equilibrium, and there is no interaction between the players that could influence the quality of the equilibrium, because no such quality is even defined. Okay. Now let's look at a more sophisticated version of game theory: the mechanism design perspective. The mechanism design perspective is even stranger. It says: let's have somebody help us do this. We associate the solution with an external entity, usually called an auctioneer, but here let's just say the mechanism designer, the entity that runs the mechanism. So there is an external entity that is going to help the players coordinate. How? This entity asks the three players to report their values: how much each is willing to pay for the bridge. Notice that these are private values, known only to the players, not to the entity. Then, looking at the three numbers it receives, the entity computes how much each one should pay. The problem is that these values are private and the participants have reason to lie. But there has been an ingenious solution to this problem since the 60s: even if they can lie, you can still solve this problem, or almost solve it. There is a catch: we must allow the entity to contribute money to the system or to take money out of it. If we allow the entity to play with money in this way, then there is a solution, and here it is. Each agent reports their value, and the solution for the bridge problem is the following: each one pays as little as possible in order for the bridge to be built, assuming that the others pay their full reported values. Let's make this precise.
Suppose that we have three players with values 3, 4 and 5. Their sum is 12, more than 10, so in such a society we can build the bridge. The mechanism asks the first player to pay only 1, although his value is 3, because the mechanism promises each player that he will pay the minimum amount needed to build the bridge, provided that the others pay their full amounts. If the others pay 4 and 5, you need just 1 to get up to 10; so the first player pays 1. Similarly, the second player pays 10 minus the values of the other players, which is 2, and the third player pays 3. That's the mechanism design solution, and it has some great properties. It's incentive compatible: nobody has a reason to lie. Why? Simple. Look at the first player: he pays 1, calculated by this formula here, and this formula does not involve the number that he reported. So he has no reason to lie: no matter what he reports, he is going to pay 1. That's what incentive compatible means: it is truthful, the player has no incentive to lie. Even better, this is a dominant strategy, a solution like a Nash equilibrium but with much better properties: every player in isolation can decide his best strategy independently of what the others decide. In a distributed system this is a great solution, but few games have dominant strategies; here, we have created a game that has dominant strategies. There are some problems with the solution, of course. We have already observed that they pay amounts of 1, 2 and 3, which sum to 6, less than 10. The difference of 4 comes from the external entity, as if we had a god that pays for everything. Notice also that there is no direct interaction between the players, only interaction between the entity and each participant; they need this intermediary to solve their problem.
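The payment rule of this example can be written down in a few lines; the values follow the example in the talk, the function name is mine:

```python
def bridge_payments(values, cost):
    """Each agent pays the minimum needed to reach the cost, assuming the
    others pay their full reported values."""
    total = sum(values)
    if total < cost:
        return None                        # the bridge is not built
    # cost - (total - v) = cost minus the sum of the OTHERS' values,
    # so a player's payment does not depend on their own report: no reason to lie
    return [max(cost - (total - v), 0) for v in values]

payments = bridge_payments([3, 4, 5], cost=10)   # the 3-4-5 example: pays 1, 2, 3
deficit = 10 - sum(payments)                     # the gap the external entity covers
```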
So that's why blockchains may be a solution here. The problem we just saw is a single-dimensional problem: each player has just one value. Usually, when we want to coordinate, players have many values. For example, in an auction of frequencies for cellular phone companies, the bidders have a lot of information; in fact they don't even know how much they are willing to pay, because it depends on the area, on which frequencies they get, and so on. First of all they don't have this information, and even if they had it, communicating it to the external entity would have a huge cost; and they probably don't even want to reveal this information. So these multi-dimensional situations are more complicated and have very high communication complexity. The question, of course, is: can blockchain technology provide a better solution for such problems? Can it facilitate distributed mechanism design schemes that achieve at least two things? First, better solutions: none of the three approaches we have seen so far, the distributed approach, the game theory approach, the mechanism design approach, gave a satisfactory answer, far from what one would expect of a good solution. So can we use the new technology to get better solutions? And second, can we get a solution that requires less communication between the participants? It seems that blockchains have exactly the right ingredients: they allow decentralized interaction, they now have smart contracts, there is trust, there is money, there is a great community. So let's solve this problem. Thank you.