Let me start with the bottom line, since it is quite simple to state. The result I am going to show you today is the following: finding a (possibly mixed) Nash equilibrium in potential games requires large communication. This result is true for games with few actions and many players, and also for games with few players and many actions, and therefore it is true for any combination of the two as well. The same holds for congestion games with many facilities. This is what I want to show you today. The plan of the talk: I will briefly explain what potential games are, then what congestion games are; I will talk about communication complexity, specifically in the context of game theory, and provide some motivation for it; then I will discuss the related literature; and if time allows, we will see some proof ideas.

So let's start with what a potential game is. A game is a potential game if there exists some potential function that captures the gains and losses of players under unilateral deviations. Formally, the change in the utility of a player when deviating from a_i to a_i' is the same as the change in the potential. Note that this is the same potential for all players. As an example of a potential game, consider the prisoner's dilemma, where the first action corresponds to cooperation and the second to defection. I claim that this function is a potential of the game. How can we see it? The difference between 3 and 4 is 1, meaning player 1 gains 1 by deviating from cooperation to defection, and the potential changes by 1 as well. For player 2's deviation we should look at player 2's utilities: the change between 0 and 1 is 1, and again the potential matches. You can check that this works for every unilateral deviation, so this function is the potential of the prisoner's dilemma.
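The check described above can be written out mechanically. The payoff numbers below follow the talk (3, 4, 0, 1 for the prisoner's dilemma); the explicit potential matrix is my reconstruction from those numbers:

```python
# Verify the potential property for the prisoner's dilemma.
# Rows = player 1's action, columns = player 2's (0 = cooperate, 1 = defect).
u1 = [[3, 0], [4, 1]]     # player 1's utility
u2 = [[3, 4], [0, 1]]     # player 2's utility
phi = [[0, 1], [1, 2]]    # candidate potential; (1, 1) is its global maximum

for a1 in range(2):
    for a2 in range(2):
        # player 1 unilaterally deviates a1 -> b1
        for b1 in range(2):
            assert u1[b1][a2] - u1[a1][a2] == phi[b1][a2] - phi[a1][a2]
        # player 2 unilaterally deviates a2 -> b2
        for b2 in range(2):
            assert u2[a1][b2] - u2[a1][a2] == phi[a1][b2] - phi[a1][a2]
print("phi is an exact potential")
```

Every unilateral deviation changes the deviator's utility and the potential by the same amount, which is exactly the definition above.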
Okay. And there is a one-to-one correspondence between pure Nash equilibria of a potential game and local maxima of the potential. In this example the profile (2, 2) is in fact a global maximum, but even a local maximum would do, where I mean locality with respect to unilateral deviations: every local maximum corresponds to a pure Nash equilibrium, and vice versa.

So now, what are congestion games? Instead of giving you notation, let me explain verbally. We have a set of facilities F; think of these facilities as roads connecting the vertices of a graph. Each player has to choose a subset of facilities: to get from his source to his destination he should take several roads, so the actions of a player are essentially the routes from his source to his destination. Once every player chooses some subset, this defines a congestion on every road: for each road, how many players decided to use it. The congestion on each road is the number of its users, and each facility has a cost function which, in the routing context, says how congested the road will be, namely how much time it will take me to cross the road,
as a function of the number of players that use it. Finally, the utility of each player is essentially the sum of costs over all facilities he has used, which in this case captures the total travel time from source to destination. This class is called congestion games, and a famous result by Monderer and Shapley is that potential games, which I described on the previous slide, are equivalent to congestion games. Namely, every congestion game admits a potential — in fact this direction goes back to Rosenthal, who defined the notion of congestion games — and in the opposite direction, every potential game can be viewed as a congestion game with some set of facilities, where the number of facilities needed to describe a potential game as a congestion game is of the same order as the number of action profiles.

Now, a short introduction to communication complexity, and specifically to communication complexity in game theory. One thing I could tell you is: communication complexity is a nice complexity model, Nash equilibrium is a nice solution concept, so let's just study the communication complexity of Nash equilibrium. But I think in this case there is something interesting to say beyond combining two interesting models, and this is what I want to tell you. Equilibrium is essentially a static notion that, in an indirect way, assumes players can correctly predict the behavior of their opponents: I am best-replying to the action profile of my opponents, namely I know what my opponents are going to do. It is a good question where this assumption comes from. Why is it indeed the case that I can correctly predict the behavior of my opponents?
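As a concrete aside on Rosenthal's direction of the equivalence: every congestion game admits the potential Φ(a) = Σ_f Σ_{k=1}^{n_f(a)} c_f(k), where n_f(a) is the number of users of facility f. A minimal sketch on a toy instance (the facility names, routes, and costs below are mine, purely illustrative):

```python
# Rosenthal's potential on a toy two-player congestion game.
from itertools import product

facility_cost = {                 # cost_f(k) = cost of facility f with k users
    'r1': lambda k: 2 * k,
    'r2': lambda k: k * k,
    'r3': lambda k: 1,
}
actions = [[{'r1'}, {'r2', 'r3'}],      # player 1's two routes
           [{'r1', 'r3'}, {'r2'}]]      # player 2's two routes

def load(profile):
    """Number of users of each facility under an action profile."""
    n = {f: 0 for f in facility_cost}
    for route in profile:
        for f in route:
            n[f] += 1
    return n

def cost(i, profile):
    """Player i's total cost: sum of facility costs along his route."""
    n = load(profile)
    return sum(facility_cost[f](n[f]) for f in profile[i])

def rosenthal(profile):
    """Rosenthal's potential: sum over f of sum_{k=1}^{n_f} cost_f(k)."""
    n = load(profile)
    return sum(facility_cost[f](k) for f in n for k in range(1, n[f] + 1))

# Check: any unilateral deviation changes the deviator's cost and the
# potential by exactly the same amount.
for p in product(range(2), repeat=2):
    prof = [actions[i][p[i]] for i in range(2)]
    for i in range(2):
        for b in range(2):
            dev = list(prof)
            dev[i] = actions[i][b]
            assert cost(i, dev) - cost(i, prof) == rosenthal(dev) - rosenthal(prof)
print("Rosenthal's potential verified")
```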
This raises a question, and Nash, in his seminal paper on Nash equilibria, suggested an explanation: if the game is played repeatedly, then in some scenarios I can predict the behavior of my opponents simply by looking at their past behavior, assuming they will play approximately the same as they did in previous rounds. This was an informal idea — he did not claim it always works — but it was one explanation for this indirect assumption. The economists then asked the question: can players learn to play an equilibrium? If we have a game and we play it again and again and again, can we learn to play an equilibrium? There was an extensive literature on this question, and the bottom line is that it presents quite a few learning dynamics, all of which lead to equilibrium, even in general classes of games without any restriction on the game. So dynamics that lead to equilibrium exist; we know that. Slightly more recently, people have also asked how fast players can learn to play an equilibrium: can we do it in reasonable time or not? Before getting deeper into this question, we should make precise what we mean by learning an equilibrium — which types of dynamics we allow and which we disallow. So here is a dynamic — [in answer to a question:] no, no, this is a general introduction to communication complexity in games, not necessarily potential games.
Yes, there was also extensive work specifically on potential games, but we will get to it. So here is a dynamic that leads to equilibrium: every player sits at home, computes the set of all equilibria, they pick the lexicographically first equilibrium, and from the very first day they play that equilibrium, forever. This is probably not what we mean when we say "learn to play an equilibrium". So the actual question we want to ask is: how fast can players learn to play an equilibrium using reasonable learning rules? What qualifies a learning rule as reasonable? One suggestion, made by Hart and Mas-Colell, which in fact covers most of the learning rules suggested before and after their paper, is the following: a learning rule is uncoupled if my behavior does not depend on the utilities of my opponents. Put differently, it is as if I do not know the utilities of my opponents; I know only my own utility. Note that my behavior may — and in fact should — depend on the realized actions of my opponents; otherwise I learn nothing. It should depend on the opponents' actions, but not on their utilities. This is the notion of uncoupledness, and it is quite an acceptable class of learning rules. So let us rephrase the question: how fast can players learn to play an equilibrium using uncoupled dynamics? The answer to this question turns out to be communication complexity. It is an observation of Conitzer and Sandholm that communication complexity captures, up to a logarithmic factor, exactly the rate of convergence of uncoupled dynamics to equilibrium. Why is that so?
When we think of dynamics: every dynamic defines a communication protocol — instead of playing an action, I communicate to the other players which action I am going to play. And in the other direction, if I want to simulate a communication protocol, then instead of sending the other players a bit, zero or one, I play either my first or my second action. So the correspondence goes both ways, and this is why I think communication complexity is a very interesting topic to study in the context of game theory: it captures precisely the rate of convergence of a very natural class of dynamics.

[In answer to a question about uncoupledness:] Exactly. I agree that the dynamic that follows from a protocol is perhaps not the most natural one — not as elegant as best-reply dynamics or fictitious play. It will be a somewhat strange dynamic that, based on the history, tells a player to play either his first or his second action, and after the players have learned the equilibrium they start to play it. So the dynamic that follows from a protocol is not necessarily a natural one, but it is an uncoupled dynamic, because the distribution of information in the uncoupledness assumption is precisely the one we have in communication complexity. Note also that an upper bound on the communication complexity of every notion that relates to games is the input size, so the interesting question is whether the complexity is polynomial in the input or logarithmic in the input.

So far I have explained what potential games are, what congestion games are, and given some general motivation for communication complexity, not necessarily in the context of potential games. Now let's see what is known specifically in the context of potential games, starting with approximate equilibria.
For approximate equilibria we have a very simple, very natural dynamic that leads to an approximate equilibrium. Let me first restate the uncoupledness assumption in its simplest form: each player knows his own utility function only, and does not know the utility functions of the opponents. This is precisely the communication complexity model we typically study in game theory: the private input of every player is his own utility function. (And obviously, during play I reveal a lot of information about my utility.)

In potential games we have a quite natural and very simple dynamic that leads to an approximate equilibrium. The observation is that when a player deviates to an action that improves his own utility, it also improves the potential by exactly the same amount — this is exactly the potential property. So if we look at a sequence of unilateral deviations, each of which improves the deviator's utility by at least epsilon, then every step increases the potential by at least epsilon, and the number of steps needed to reach an approximate equilibrium is at most the bound on the potential divided by epsilon. In each step I pick one player who can gain more than epsilon by deviating, if such a player exists; if no such player exists, I stop, and at that point we are at an approximate equilibrium. Once we talk about approximate equilibria we typically have to assume some bound on the utilities, so we normalize them to [0, 1]; and in games with utilities in [0, 1] the potential is bounded by n. So this is a very fast procedure, polynomial in the number of players and in the approximation parameter, that leads to an approximate Nash equilibrium.
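The epsilon-improvement dynamic just described can be sketched as follows (the toy game and its [0, 1] normalization are mine, for illustration; in a potential game with utilities in [0, 1] the loop halts within n/epsilon improvement steps):

```python
# Epsilon-better-reply dynamics: while some player can gain more than
# eps by a unilateral deviation, let one such player deviate.
def eps_better_reply(utils, n_actions, eps, start):
    """utils[i](profile) is player i's utility; returns (profile, steps)."""
    profile = list(start)
    steps = 0
    while True:
        for i, u in enumerate(utils):
            base = u(profile)
            better = [a for a in range(n_actions[i])
                      if u(profile[:i] + [a] + profile[i + 1:]) > base + eps]
            if better:
                profile[i] = better[0]   # improving deviation: potential rises > eps
                steps += 1
                break
        else:
            # no player gains more than eps: eps-approximate pure equilibrium
            return tuple(profile), steps

# Prisoner's dilemma with utilities normalized into [0, 1].
u1 = lambda p: [[0.75, 0.0], [1.0, 0.25]][p[0]][p[1]]
u2 = lambda p: [[0.75, 1.0], [0.0, 0.25]][p[0]][p[1]]
print(eps_better_reply([u1, u2], [2, 2], eps=0.2, start=(0, 0)))
# -> ((1, 1), 2): two improvement steps reach the equilibrium
```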
Okay, so approximate Nash equilibrium is, in some sense, solved: we can think of it as easily solvable, and this is true both in the communication model and in the computational model — the protocol above can be executed using very small communication.

Now, what is known about exact equilibria? Let me start with a summary of what we know about computational complexity. In the computational world, we know that computing a pure Nash equilibrium is PLS-complete. What do we know about mixed Nash equilibrium? Essentially nothing on the hardness side — sorry, we do know a positive result: computing a mixed Nash equilibrium belongs to the class CLS, continuous local search. The open problem is to provide any hardness evidence for mixed Nash equilibrium in potential games, or alternatively to provide a polynomial-time algorithm. This is an open problem that we do not know how to solve today.

What about communication complexity, the topic we are talking about today? Finding a pure Nash equilibrium in a potential game requires large communication — in fact, that is a quite recent result. And our result, the one I want to present to you today, is that finding a mixed Nash equilibrium in potential games requires high communication. Let me say a few words about the intuition for why proving hardness for mixed Nash equilibrium is hard. Recall the standard picture: here we have NP, and here we have TFNP — total problems, problems that are guaranteed to have solutions — and within this class we have two very central classes of problems. One is PPAD, which roughly speaking captures total problems whose existence proof goes through Brouwer's fixed-point theorem.
[In answer to a question:] Yes — it is quite a common phenomenon that potential games have mixed Nash equilibria in addition to pure ones. In principle one could also design a game with many equilibria, where all the equilibria, pure and mixed, solve some communicationally hard problem; what we actually do is construct a game with a unique Nash equilibrium, which is pure. But to see the phenomenon, take a simple potential game with two pure Nash equilibria: it also has a mixed Nash equilibrium where I play one action with probability two thirds and the other with probability one third, and the second player does the same. So it is a real phenomenon — in some sense you can say it is the typical case — that potential games have mixed Nash equilibria.

So we have the class PPAD, which roughly speaking captures problems whose existence proof goes through Brouwer's fixed-point theorem, and we have another class, PLS — polynomial local search — which roughly speaking captures total problems whose existence proof is based on the existence of a local maximum in a discrete graph. In their intersection we have another class, CLS, continuous local search, which relies on the existence of a local maximum in a continuous space. Mixed Nash equilibrium in potential games in fact belongs to this class — this is something Daskalakis and Papadimitriou showed in 2011. The intuition for why it belongs to the intersection is very simple.
On the one hand, I can ignore the fact that this is a potential game and just use Brouwer's fixed-point theorem to prove the existence of a mixed Nash equilibrium; this tells us why the problem belongs to PPAD. On the other hand, I can ignore the fact that mixed equilibria are allowed, search for a pure one, and use the potential property to find a pure Nash equilibrium; this is why it belongs to PLS. But when a problem has more than one kind of existence proof, hiding the solution — proving that it is hard to find — becomes trickier. This, roughly speaking, is the difficulty, and as I said, it is a big open problem whether this problem is complete in CLS in the computational world. We show that in the communication analogue it is hard. To be precise, we do not show any CLS-completeness in the communication analogue; we just show that the problem is hard in the communication world.

So let me clarify the notion of communication complexity we look at. When we talk about potential games, the private information of player i is his own utility function; the promise is that the utilities form a potential game; and the output is a mixed Nash equilibrium. The theorem states that the communication complexity of this problem with two players and n actions is high — polynomial in n — and that the problem for games with n players and two actions is also hard: there we do not have a bound exponential in n, but it is exponential in the square root of n.
These are the results for communication complexity. Let me note that I presented this as a promise communication problem, and I prefer total problems; but there is a very close analogue of this communication problem which in addition is total. The analogue is: you are given an arbitrary game, and you must either provide a succinct certificate that the game is not a potential game, or find a pure Nash equilibrium. The fact that a non-potential game has a succinct certificate, and that this certificate can be computed using low communication, is something we proved in another paper, with Noam Nisan and Shahar Dobzinski. So determining whether a game is a potential game — providing evidence that it is not — is a communicationally easy problem; and yet computing even a mixed Nash equilibrium is hard.

What about congestion games? Here I think it is more natural to distribute the information slightly differently than uncoupledness. The version we considered — I will not say it is the most natural, but it is natural — is the following. The action sets, i.e., which subsets of facilities each player can choose, are common knowledge to all players. What should I know in order to play the game? I should know the cost functions of the facilities that I can potentially choose in any of my actions — I should know the consequences of my choices. This is the distribution of information we assume: every player knows the cost functions of all facilities that he can potentially use; the only costs he does not know are those of facilities that do not appear in any of his subsets.
These are the only facilities he does not know. The output is a pure Nash equilibrium, and the results are essentially the same. Let me point out one more thing. In the game that we have defined there are quite a lot of facilities. Ideally — let's focus on the case where the number of actions is 2 and we have n players — we would like the number of facilities to be polynomial in the number of players rather than exponential. Unfortunately, it is impossible to prove such a hardness result. Why is it impossible? Because when the number of facilities is small there exists a very simple communication protocol, even in cases where the number of actions is huge: each player just communicates to all the others the costs of all his facilities, and then they essentially know the whole game and can compute an equilibrium. So the bottleneck in congestion games is the number of facilities, not the number of actions as in potential games.

Let me also say first that this second result, on congestion games, is not a straightforward corollary of the first.
I mentioned on the second slide that congestion games are equivalent to potential games, so you might try to use the reduction of Monderer and Shapley between the two classes. But this reduction does not work for us, because it does not preserve the distribution of information. We need a slightly different reduction, and what we do is utilize the specific structure of our hard instances: we do not give a general reduction from potential games to congestion games, as Monderer and Shapley do, but for our specific case we succeed in producing a corresponding congestion game.

So the really interesting, really deep theorem is the first one, about potential games. Before getting to the proof ideas of our paper, let me describe the general structure of a proof that has recently been implemented in quite a few papers for proving informational lower bounds on mixed equilibria. The main difficulty with mixed Nash equilibria is that, in addition to the pure equilibria, there may arise some very wild mixed Nash equilibria, and we have to rule this situation out; this is the main difficulty in proving hardness for mixed equilibria. The structure, roughly speaking, was the following. We start with some End-of-Line problem, where essentially we know where the line starts, we want to find where the line ends, and we can query it. Proving hardness for this problem in the query model is not very hard; if we are talking about the communication model, we can then use the celebrated recent simulation theorems, which distribute the information in the line between Alice and Bob. So let me focus for simplicity on the case of query complexity.
Say we want to prove a slightly weaker result, on query complexity; in order to lift it to communication complexity we just need the recent results on lifting theorems. So we start with a query-hard End-of-Line problem. Then we embed this line into a continuous Brouwer function: you can think of the line as a path on the n-dimensional discrete hypercube, and we define from it a function from the hypercube to itself — a suitably smooth function with good properties such as locality (I will give more details on the required properties later), whose fixed points all correspond to the end of the line. So all the fixed points of this function appear at the end of the line. Then we get closer to what we want: we define a continuous-action game in which all equilibria are fixed points of this function f — this will appear on the next slide, so I will wait with it. Finally, to complete the reduction, we discretize the action space of the game and hope that everything still works — that the discretization did not hurt much, and that the same arguments that succeeded for continuous actions remain applicable in the discretized game. This is the general structure, and it has worked quite well for different variants of the problem. Why is this structure not good for us?
Let me focus specifically on item number three, which was to define a continuous imitation game all of whose equilibria are fixed points of f. This is in fact a very elegant and quite simple game. Suppose you have a function f and you want all equilibria to be fixed points of f. Define the following game: each player chooses a point in the hypercube. The payoffs are: player 1 tries to imitate player 2 — to be as close as possible to player 2's point in the square of the L2 norm — and player 2 wants to imitate not x, but f(x). The key point is that even if player 2 plays some wild mixed strategy, player 1 has a unique best reply, which is the expectation of y. This is how we avoid wild mixed equilibria: even if the other player is playing something wild, I will play pure, and this argument essentially tells us that any mixed Nash equilibrium of this game is in fact a pure Nash equilibrium. And to see that a pure Nash equilibrium must satisfy x = f(x) is immediate: if x and y are pure, player 1 wants x = y and player 2 wants y = f(x).

What is the problem with this construction in our case? The imitation game is not a potential game, so we cannot use this idea. This was the imitation-game step; let me tell you what we do instead. Instead of thinking of a Brouwer function, think of the goal as finding a local maximum of a potential. Why a local maximum of the potential? We just said that in the computational world the problem of mixed Nash equilibrium is in CLS, so it is very natural to reduce from a problem that tries to find a local maximum of a potential, rather than a fixed point, which is PPAD-flavored. So the game I will define is the following.
It looks similar: the players each choose a point, but instead of just imitating, player 1 gets an additional bonus of φ(x), and player 2, in addition to the imitation term, gets a bonus of φ(y). So this is the idea. First of all, this is a potential game. How can we see it? Identical-interest games — games where all utilities are identical — are potential games; terms that do not depend on the actions of the opponents also form a potential game; and a sum of two potential games is a potential game. So this is a potential game whose potential function is the common imitation term plus φ(x) plus φ(y).

So the good news is that this is a potential game. Moreover, if we scale φ to be sufficiently small with respect to the ‖x − y‖² term, we can approximately run the same argument to show that everything is almost pure. We cannot apply exactly the same argument to say that I play exactly the expectation, but I definitely do not want to be too far from the expectation; so the game still has this property of concentrating all played actions in some small neighborhood, which is something I need to work with. And on the other hand, how can we see that an equilibrium — say a pure one, for intuition — sits at a local maximum of φ? By moving slightly in the direction of the gradient, my gain from the bonus term is of order epsilon times the gradient, while my loss in the imitation term is of order epsilon squared, because the imitation term is quadratic and the bonus is linear. So for very close points I will prefer to gain in the potential and lose in the imitation.
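To write down both constructions explicitly (the notation is mine; the talk describes them verbally — x, y range over the hypercube, f is the Brouwer function, φ the small bonus):

```latex
% Imitation game for a Brouwer function f : [0,1]^n -> [0,1]^n:
\begin{align*}
  u_1(x, y) &= -\lVert x - y \rVert_2^2, &
  u_2(x, y) &= -\lVert y - f(x) \rVert_2^2.
\end{align*}
% Against any mixed strategy Y of player 2,
%   \mathbb{E}\bigl[-\lVert x - Y \rVert^2\bigr]
%     = -\lVert x - \mathbb{E}[Y] \rVert^2 - \mathrm{Var}(Y),
% so player 1's unique best reply is the pure point x = \mathbb{E}[Y];
% hence every equilibrium is pure and satisfies x = f(x).
%
% Potential-game variant: add a small bonus \phi to each player:
\begin{align*}
  u_1(x, y) &= -\lVert x - y \rVert_2^2 + \phi(x), &
  u_2(x, y) &= -\lVert x - y \rVert_2^2 + \phi(y), \\
  \Phi(x, y) &= -\lVert x - y \rVert_2^2 + \phi(x) + \phi(y). &&
\end{align*}
% Check: u_1(x', y) - u_1(x, y) = \Phi(x', y) - \Phi(x, y) for any
% unilateral deviation x -> x' (and symmetrically for player 2),
% so \Phi is an exact potential, as claimed above.
```

The variance identity is the standard bias–variance decomposition, which is exactly the argument in the talk for why best replies are pure.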
So this is the main idea. [In answer to a question:] Let's say we are talking about well-supported equilibria — then you can apply other standard tweaks; but if we talk about Nash equilibria, yes, it is indeed an issue — no, sorry, I take that back: we are talking here about exact equilibria, right? We said that for approximate equilibria everything can be done. In fact our techniques can also be applied to approximate equilibria when the approximation is exponentially small; and then what you said is correct — players can play bad actions, and you need to go through the notion of well-supported equilibria.

So, I remind you that this was the structure before, and now I have changed bullet number three. This also forces me to change the previous steps: what we want now is to embed the line L into a continuous potential function all of whose local maxima are located at the end of the line. For those of you who know it, this is typically the hardest step: the construction of Hirsch, Papadimitriou, and Vavasis is quite complicated — it takes quite a few pages even to define the construction and to prove its correctness. Since the construction was so useful, people have tried to simplify it, and to the best of my knowledge there is still no simple construction that does what Hirsch, Papadimitriou, and Vavasis did. In our case we wanted to embed the line into a potential function, and, not surprisingly, our construction is also non-trivial — I will not say it is trivial. Technically, I think this is the main contribution; it is the longest proposition in the paper.
It essentially says the following: what Hirsch, Papadimitriou, and Vavasis did for embedding a line into a Brouwer function, we have an analogue of for embedding a metered line into a potential. For those of you who know, "metered" means that when I query a vertex, I learn not only whether the line goes through it, but also its value — how many steps along the line I have gone so far. This is roughly speaking the definition of a metered line. So our main technical contribution is to show that a similar construction can be done for the case of a potential. Let me just briefly mention the desired properties, in the spirit of Hirsch, Papadimitriou, and Vavasis. The first is that the unique local maximum of the potential is located at the end of the line. The second is that at every other point the gradient is strictly positive — going back to the game, we want the gradient to be strictly positive so that players will indeed want to move away from the point x, to improve along the gradient. And the last one, which is absolutely crucial, is locality: the definition of the potential near a point does not depend on the global behavior of the line, but only on its local behavior — namely, what the previous vertex was and what the next vertex is, but not more than that. Locality is the crucial point for having a reduction from query complexity to this problem. I will not go over how we did the construction.
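To make the metered-line object a bit more concrete, here is a minimal sketch of it as a query problem (the encoding — a hidden path on a small hypercube with a step-counter answer — is my own illustration, not the paper's construction):

```python
# A "metered" End-of-Line instance as a query problem. The line is
# hidden from the algorithm; a query at a vertex reveals whether the
# line passes through it and, if so, how many steps along the line
# that vertex is (the "meter").
line = ['000', '001', '011', '111']      # hidden path on the 3-cube

def query(vertex):
    """Return (on_line, meter); meter = number of steps from the start."""
    if vertex in line:
        return True, line.index(vertex)
    return False, None

# The start of the line is public; the goal is to find its end.
print(query('011'))   # on the line, 2 steps in
print(query('101'))   # not on the line
```

Roughly speaking, an algorithm that only sees these local answers has to chase the line query by query, which is what makes the problem query-hard; the locality property above is what lets the potential be built from exactly this kind of local information.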
Let me finish with open problems. There are several. One of them, for instance: you saw that our result is 2 to the power square root of n, so it is interesting whether it can be improved to 2 to the power n. My conjecture is that it can, but we have not succeeded yet. But the most interesting open problem, I think — and I will finish with it — is: can these techniques be applied in computational settings? Can we take something from the communication result and apply it in a computational setting, to provide some hardness for mixed Nash equilibrium? From my past experience in different areas, this idea of taking insights from communication complexity and proving something in the computational world has been successful, so I am optimistic about it. Okay, thank you.

[In answer to a question:] Yes, this is a very good open question — to optimize the parameters, like what Aviad Rubinstein and Mika Göös did recently, when they proved that in two-player n-action games the communication complexity is not only polynomial in n but in fact of order n squared, which means that essentially you have to communicate the entire game. Our results so far are not of this type; they give some polynomial, n to some power, which we did not try to optimize — if I remember correctly, our exponent is something like one fourth. This is because we have communication hardness results for, let's say, sparse games, and one should be careful there: if the game is too sparse, you can just report where the non-zeros appear, and then the representation is succinct. But it is definitely a good question.
Yes — another open problem in this context that I think is interesting is to look at one specific class: Lipschitz games. A Lipschitz game is a game where a unilateral deviation of player i does not affect the utility of player j by much — in the Lipschitz sense. Think even of a discrete action set: in a game with many players, if player 1 switches from action one to action two, it affects the utility of player 2 by at most some epsilon. Think of epsilon, for instance, as one over the square root of n, or one over n; if you take epsilon too small the game becomes trivial, but if you take epsilon to be, say, one over the square root of n, then there is a very nice result by Azrieli and Shmaya that these games always have a pure Nash equilibrium. To the best of my knowledge, the complexity of finding this pure Nash equilibrium is not known, and I think communication complexity is a very nice tool to analyze it, because we are talking here about large games that do not have a succinct representation, so hardness evidence might come in the form of a communication complexity model. And I want also to thank the organizers.