So, I am going to talk about a theory of equilibria. Game theory, what does it do? It does analysis and design of systems where interacting, rational — let us call them rational — agents act according to their own selfish goals. Let us take an example, the game of chicken: on a single-lane path, two cars are heading toward each other. Each of them has only two choices, and only one car can pass. Either both go straight and collide, or a car can change course into a field or a side path, in which case it does not get to go where it was heading. Look at the payoffs: of course both of them want to go straight, but then they collide, and an accident is a really bad outcome for both. If both change course, nobody goes forward and nobody is happy. In the other two cases, one of them is happy — whoever goes straight — and the other is not. Immediately you can see that two of the outcomes are undesirable and the other two are acceptable: at least one player is happy. That is the analysis part of a game scenario; a traffic light is a mechanism that achieves one of the two good outcomes, and that is the design part. In particular, I am going to talk about games today. In our games there are players, and players have strategies. Rock–paper–scissors is one example — everybody knows it, or do you want me to explain? Each player plays rock, paper, or scissors, and once both players have chosen their strategies, the payoffs are decided. Say it is a win–lose game: the loser has to give 1 dollar, or 1 rupee, to the winner. If you are playing this game, do you want to choose a strategy deterministically — always rock, or always paper, or always scissors? If you do, I really want to play with you.
So, no — we would like to randomize, right? We do not want our opponent to guess what we are going to play. Randomization is essential not only in this game but in many important games. And Nash proved that once you allow randomization — not only in two-player games but in games with any number of players — a stable state exists. What does that stable state mean? That no individual player wants to change their strategy: nobody can improve by a unilateral deviation. That is the stable state. The way Nash proved this theorem is through Brouwer's fixed-point theorem. The fixed-point theorem says that if you start with a disk — by disk I mean a compact convex set — and a continuous function f from the disk to itself, then there is a point fixed by the function: f(a) = a. The fixed points of a suitably constructed function are exactly the Nash equilibria; that is how the function is constructed. But what about computation? This is a completely existential theorem; it does not tell us how to compute the fixed point. And the computation question is not something CS started asking first: immediately after Nash's theorem, people started asking how to compute Nash equilibria — in fact even before, for special cases of games; Nash proved existence in full generality. And there were many, many algorithms. But these were economists and mathematicians; they never cared about the running time of the algorithm. For the most general games, the algorithms we have go through fixed-point theorems, so they suffer from numerical instability and all such issues. Within CS, when we started asking this question, the first thing we saw was that it does not seem easy. So then we think of complexity classes: can we show hardness of this problem for some complexity class? And the first thing that comes to mind is NP.
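To see concretely what randomization buys, here is a quick numerical sanity check (a sketch in NumPy; the ±1 payoffs are the win–lose stakes just described) that uniform play is a mixed Nash equilibrium of rock–paper–scissors:

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (win = +1, lose = -1).
# The column player's payoff is the negation, so this is a zero-sum game.
R = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

x = np.ones(3) / 3  # uniform mixed strategy for the row player
y = np.ones(3) / 3  # uniform mixed strategy for the column player

# Against uniform y, every pure strategy of the row player earns the same
# expected payoff, so no unilateral deviation helps: (x, y) is an equilibrium.
payoffs_vs_y = R @ y
print(payoffs_vs_y)   # [0. 0. 0.]
print(x @ R @ y)      # equilibrium value 0.0
```

No pure strategy profile has this property: against any deterministic choice, the opponent has a strictly better response, which is exactly why the equilibrium here must be mixed.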
But for NP we ask whether there exists a solution, while for Nash a solution always exists. So that question is irrelevant here, because the answer is always yes. So we are in the regime of total function problems within NP, where the existence of a solution is guaranteed. And Megiddo and Papadimitriou showed that if Nash is NP-hard, then NP = co-NP, because Nash is, in this sense, in both NP and co-NP. Later, Papadimitriou defined the class PPAD and showed that it sits, in a sense, between P and NP. PPAD is short for Polynomial Parity Argument on Directed graphs — although it also resembles "Papadimitriou" rather well. The canonical problem in PPAD is this: you have a directed graph in which every vertex has in-degree at most one and out-degree at most one, so you can imagine the graph as a set of paths and cycles. You are given one endpoint of one of the paths, and you are asked: give me any endpoint of any path. You know one exists, because the path whose start you were given has an end. So a solution is guaranteed — give me any one, and I am fine with that. Now you might say, "I am going to check every vertex" — no: there are exponentially many vertices, and the graph is given succinctly by circuits. There are two circuits, predecessor and successor, which, given a vertex, tell you its predecessor and its successor. So you cannot just enumerate. And Papadimitriou showed that Nash is in PPAD. Now, our focus is two-player games, so let me describe how we represent the game. Our two players are Row and Column — it will become clear shortly why we call them that. When they play, we can think of the row player as choosing a row of a matrix and the column player as choosing a column of that matrix.
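The succinct representation is the whole point, and a toy version makes it tangible. The sketch below (plain Python; the tiny 10-vertex path is my own stand-in for an exponentially large circuit-defined graph) shows the End-of-Line problem and the naive path-following that a succinct instance makes exponential:

```python
# A toy End-of-Line instance: the graph is accessed only through
# successor/predecessor functions, never as an explicit vertex list.
# Here the hidden structure is a single path 0 -> 1 -> ... -> 9.
N = 10

def successor(v):
    return v + 1 if v < N - 1 else None   # the last vertex has no successor

def predecessor(v):
    return v - 1 if v > 0 else None       # the first vertex has no predecessor

def end_of_line(source):
    """Follow successor pointers from the given path start until the path ends.
    Correct, but takes time proportional to the path length -- exponential when
    the vertex set has size 2^n and the two functions are given as circuits."""
    v = source
    while successor(v) is not None:
        v = successor(v)
    return v

print(end_of_line(0))  # 9, the other endpoint of the path
```

In the real PPAD-complete problem the vertices are n-bit strings and the two functions are Boolean circuits, so this walk can take 2^n steps; the question is whether anything fundamentally faster exists.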
So there are two matrices: the corresponding entry in the R matrix is the payoff of the row player, and the corresponding entry in the C matrix is the payoff of the column player. That is the representation of the game — is this clear? I am going to use it again and again throughout the talk. So the problem 2-Nash is: find a Nash equilibrium of a two-player game represented by two matrices R and C. If the numbers of strategies of the players are m and n, these are m-by-n matrices. Now, what about algorithms? From 1964 we have the Lemke–Howson algorithm, a very nice path-following algorithm — similar in spirit to simplex, but not simplex: the pivoting is complementary pivoting. It always finds a Nash equilibrium, but its worst case is exponential. As for complexity, 2-Nash was shown to be PPAD-complete in 2006, and around the same time even 1/poly(n)-approximation was shown to be PPAD-complete. On the positive side, for constant approximation we have a quasi-polynomial-time algorithm. Rank 0 — that is, zero-sum games, where your loss is my win and my loss is your win, so the payoffs of the two players always sum to 0 — can be solved by an LP, which was known by von Neumann even before Nash proved his theorem. Then there has been a lot of work on special cases: the ranks of the individual matrices are constant; the rank of R + C is low, for which we know approximation algorithms; sparse games, which is work by Siddharth; and so on and so forth. There is also a lot of work on query complexity and communication complexity, which Yakov will talk about and which I have not listed here. And we know that the constant-approximation guarantee cannot be beaten, assuming the exponential time hypothesis for PPAD.
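For the rank-0 (zero-sum) case just mentioned, the von Neumann maximin LP can be written down directly. Here is a sketch using `scipy.optimize.linprog` (the helper name and the matching-pennies test matrix are my own illustrative choices, not from the talk):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(R):
    """Row player's maximin strategy in a zero-sum game with row payoffs R:
    maximize v  subject to  (R^T x)_j >= v for every column j,  x in the simplex.
    Variable vector is [x_1, ..., x_m, v]."""
    m, n = R.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                     # linprog minimizes, so minimize -v
    A_ub = np.hstack([-R.T, np.ones((n, 1))])        # v - (R^T x)_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                                # probabilities sum to 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]        # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the optimal strategy is uniform and the game value is 0.
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = solve_zero_sum(R)
print(x, v)   # approximately [0.5 0.5] and 0.0
```

By LP duality, the column player's minimax LP has the same value, which is exactly the minimax theorem; for general-sum (R, C) no such single LP exists, which is where the difficulty starts.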
And similarly, if we try to optimize something over the set of Nash equilibria — give me the best approximate Nash equilibrium in terms of payoff, or give me a Nash equilibrium in which a given strategy is played with non-zero probability, and so on — any such decision problem cannot be solved faster than quasi-polynomial time, assuming the standard exponential time hypothesis. Now, all the algorithms we know are in some sense enumerative: using properties of the game, you restrict the search space and then enumerate. So whenever they solve for one Nash equilibrium, they are essentially also solving the decision problem, whatever it is. So the first question I want to ask is: are there efficient non-enumerative techniques for this problem? And the second question: can we get some kind of hardness that is unconditional? My talk is divided into two parts accordingly: in the first part I will address the first question, and in the second part the second. In the first part I am going to describe a method based on a linear program and a one-dimensional fixed point, which solves the game when the rank of R + C is 1. Remember, rank of R + C equal to 0 is the zero-sum case, which is an LP; here R + C has rank 1. The zero-sum case has a convex set of solutions, because it is an LP, while rank-1 games can have exponentially many disconnected Nash equilibria. So the complexity immediately goes wild, but we can still solve it; rank 2 and beyond is hard. This is the threshold Kavita was also mentioning, going from k = 0 to k = 1 — somehow you can just get it. In the second part I will talk about another algorithmic framework, sum of squares, which I will try to convince you is a pretty powerful framework, and then prove some lower bounds against it. Yes — you can ask that question, and there has been recent, very nice work on finding equivalent games.
So: if you give me a game, can you tell me whether there is a rank-1 game equivalent to it? That is a very valid question — rank is fragile in that sense. But our basic motivation was: we can do rank 0, can we do rank 1? And there is a fundamental difference between rank 0 and rank 1: a convex equilibrium set versus exponentially many disconnected equilibria. So that is a separate, very valid question. Now, the first part. Back to our game: as we just discussed, the players will want to randomize. Say the randomized strategy of the row player is x and that of the column player is y. Here x is an m-dimensional vector whose entries are non-negative and sum to 1, because it is a probability distribution over the rows, and y is a probability distribution over the columns. In this case the expected payoffs of the players are the vector-matrix-vector products x^T R y and x^T C y — you can just see this. And a Nash equilibrium is exactly no unilateral deviation: once they play x and y, neither the row nor the column player individually wants to change their strategy. What does this mean? Fix the column player to play y. The row player thinks: if I play my i-th strategy, what payoff do I get? Remember the column player is playing y, so she gets R_ij — the (i, j) entry of R — with probability y_j, because y_j is the probability that the column player plays column j. So R_ij y_j is the expected payoff from the (i, j) cell, and taking the sum over j gives the expected payoff of the i-th strategy against y — which is just the i-th coordinate of the vector Ry. So the maximum payoff she can achieve from playing an individual strategy is max_i e_i^T R y, where e_i is the unit vector with i-th entry 1.
And with a randomized strategy she cannot achieve more than this maximum over individual strategies: if you put a probability distribution over a set of numbers, you cannot achieve more than the maximum of that set. So the most she can achieve is this maximum, and checking whether x gives the maximum payoff is equivalent to checking that it does at least as well as every individual pure strategy. This happens only if she plays a strategy with non-zero probability exactly when it gives the maximum payoff: the strategies played with non-zero probability must all attain the maximum. You have to put non-zero mass only on the maximum numbers; only then can you achieve the maximum. So this is the characterization, and it is a complementary-slackness-type condition: a variable is non-zero only if the corresponding constraint is tight. Using this, let us define polytopes that capture the maximum payoffs. Let pi_r be a variable capturing the maximum payoff of the row player, and pi_c one capturing the maximum payoff of the column player. In the polytope, pi_r is constrained to be at least the maximum payoff of the row player against any fixed y, and similarly pi_c for the column player. Now suppose I take a point in the product polytope P × Q and look at this objective. What is the first term? The sum of the payoffs of both players. What is the second term? The sum of (at least) their maximum payoffs. So can this quantity be positive? Never, because the second term always dominates the first. So the objective is at most 0, and it is exactly 0 precisely when you are at a Nash equilibrium.
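This no-unilateral-deviation characterization is easy to state in code. Below is a small sketch (NumPy; the chicken payoff numbers are my own illustrative choice, not from the talk) that measures how much each player could gain by deviating to a best pure response:

```python
import numpy as np

def nash_violation(R, C, x, y):
    """How much each player could gain by a unilateral deviation from (x, y).
    (x, y) is an exact Nash equilibrium iff both values are 0, and an
    epsilon-Nash equilibrium iff both are at most epsilon."""
    row_gain = np.max(R @ y) - x @ R @ y      # best pure response minus current payoff
    col_gain = np.max(C.T @ x) - x @ C @ y
    return row_gain, col_gain

# Game of chicken: strategy 0 = go straight, 1 = swerve.
R = np.array([[-10.0, 1.0], [0.0, 0.0]])
C = np.array([[-10.0, 0.0], [1.0, 0.0]])

# The pure profile (straight, swerve) is an equilibrium: both gains are 0.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(nash_violation(R, C, x, y))   # (0.0, 0.0)
```

The same function evaluated at (swerve, swerve) would report a positive `row_gain`, since the row player could switch to going straight, which is exactly why that profile is not an equilibrium.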
So this gives us a nice, simple quadratic program: maximize this objective over the product polytope. And now you see why you get an LP for a zero-sum game: when R + C = 0, the bilinear term — the difficult term — vanishes, everything else is linear, and you get an LP. So this is the problematic term; if the game has rank 1, it again simplifies, to a product of two linear terms. Still, it is a rank-1 quadratic program, which is NP-hard in general, so we cannot just rely on existing techniques for solving QPs. What we do instead is first use the minimum information needed to represent the game: if we have R, u, and v, I can reconstruct the matrix C, because C = u v^T − R, where u v^T is a rank-1 matrix — a column vector multiplied by a row vector. We do this replacement in both places, in the objective and in the polytope. After that, I look not at one game but at a space of games, where I allow any vector in place of u — instead of u I have a star. I do that replacement in the objective and in the constraints. Now this is more complicated, because in some sense there are three blocks of variables — the star is also a variable. So I replace the difficult term with a parameter lambda, and now you see this is a parameterized linear program with lambda as the parameter: once I plug in a value of lambda, it is a linear program. And what can I show for this LP? The solutions of this linear program, over all possible values of lambda, capture exactly — nothing less, nothing more — all the Nash equilibria of all the games in this game space. We show this through complementarity conditions: the complementary slackness conditions of the LP match the complementarity conditions of the Nash equilibrium characterization.
In particular, we can show that if you plug in some lambda and get a solution (x, y) of this LP, then (x, y) is a Nash equilibrium of the game (R, c, v) for every vector c with x^T c = lambda. So instead of finding a Nash equilibrium of one game, by solving one LP I get a Nash equilibrium for a space of games that is (m − 1)-dimensional, because there are m variables here. But I am after the particular game (R, u, v), not an arbitrary (R, c, v): if u belongs to this space, I am done. So now we have a parameterized linear program: plug in lambda, get x and y; if x^T u = lambda, I am done. So what should I do? Put a box around this and think of it as a single-parameter function: it takes lambda, internally solves the LP, computes x and y, and outputs x^T u. If x^T u equals lambda, we are done — that is a fixed point of this function. And fortunately, we know how to compute a one-dimensional fixed point: binary search. If you have a continuous function from [0, 1] to itself and there is no trivial fixed point at 0 or 1, then at 0 the function value lies above the diagonal and at 1 it lies below. This gives you two pivots; between them the function is continuous — I know that, because it comes from the solutions of a parameterized linear program — and then you just do binary search. And we know that the precision of a Nash equilibrium is finite, in fact polynomial in the bit length of the game, so we are done in polynomially many iterations. That is it — that gives us the algorithm. We did apply this technique to Nash equilibrium computation in other games; it would be interesting to see whether it can be applied to quadratic programs coming from other domains. So — the second part is some bad news.
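The binary search itself is elementary. Here is a sketch (plain Python); the inner parameterized LP is replaced by cos(t) as a stand-in for a continuous self-map of [0, 1] with no fixed point at the endpoints:

```python
import math

def fixed_point(f, tol=1e-9):
    """Binary search for an approximate fixed point of a continuous
    f: [0,1] -> [0,1].  If f(0) > 0 and f(1) < 1, then g(t) = f(t) - t
    changes sign on [0,1], so bisection homes in on a point with f(t) = t."""
    lo, hi = 0.0, 1.0
    if f(lo) == lo:
        return lo
    if f(hi) == hi:
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:   # still above the diagonal: fixed point lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# cos maps [0,1] into itself, with cos(0) > 0 and cos(1) < 1.
t = fixed_point(math.cos)
print(round(t, 6))   # 0.739085, the unique solution of cos(t) = t
```

Note this sign-change argument is exactly what makes dimension one special: in two or more dimensions Brouwer fixed points exist but cannot be found by bisection, which is where the PPAD-hardness lives.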
So, I am going to focus on applying the sum-of-squares (SOS) algorithmic framework to Nash equilibrium. What is sum of squares? It is an SDP-based hierarchy method, and many state-of-the-art algorithms for optimization problems use this powerful technique: it has helped break integrality gaps, it has helped obtain state-of-the-art approximation algorithms, and so on, for really classical, important problems. So we were hoping that maybe something could be done using it — we honestly started with the hope of a positive result, not a negative one, but we ended up with a negative result. When we tried to apply this framework to Nash, the first roadblock was NP versus PPAD: so far SOS had been applied to optimization problems, and the formulations reflect that. How do we get rid of the optimization and just solve a search problem? Here is our attempt at defining the algorithmic framework. This is the definition of Nash equilibrium: find vectors x and y satisfying these inequalities — and these are polynomial inequalities, not linear, because of the bilinear term on the left-hand side. And what is an epsilon-approximate Nash? It says a player can deviate and gain, but by no more than epsilon. Now, if we do a linear relaxation of this — we have quadratic terms x_i y_j; if we replace each product x_i y_j by a variable p_ij — we get a linear relaxation of the system, which captures correlated equilibria (if you know what those are; if not, do not worry). You do not get Nash equilibria, but you get correlated equilibria. Now we want more: we want to build a hierarchy on top of this by doing the same relaxation at higher orders.
So let me define a degree-d pseudo-distribution. It is not a distribution; it is a pseudo-distribution in the sense that it is defined by its expectation operator. What do I want from this expectation operator? It is linear, and it has a variable for each higher-order monomial: instead of just replacing x_i y_j by p_ij, I now replace, say, x_i x_j x_k by p_ijk, and I look at all monomials of degree up to d and introduce variables for them. On top of that, I have to make sure everything makes sense. So I introduce all the symmetry constraints; the normalization constraint, that the expectation of 1 is 1; and the non-negativity constraint, that for any polynomial q of degree at most d/2, the expectation of q squared is non-negative — that is where "sum of squares" comes from, in a sense. And the expectation has to satisfy my defining polynomial constraints: if I want p ≥ 0, I introduce the constraints that the expectation of q² · p is non-negative for every q with deg(q² · p) ≤ d. That is how it is defined; it essentially captures the moments up to degree d of the final distribution you are after. A degree-d pseudo-equilibrium is then a degree-d pseudo-distribution satisfying the Nash equilibrium — or epsilon-Nash equilibrium — constraints, whatever we are after. And you can see that if you take d = 2 — sorry, it should be d = 2, not d = 1 — so if you only replace degree-2 monomials, you get the correlated equilibrium polytope, and if you go all the way to degree n and higher, you get back the Nash equilibria.
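For degree 2, the non-negativity condition "E[q²] ≥ 0 for every linear q" is exactly a PSD constraint on the moment matrix. A tiny sketch (NumPy; the two strategy profiles are my own toy choice, and here the pseudo-distribution comes from a genuine distribution, so the constraints hold automatically):

```python
import numpy as np

# For a degree-2 pseudo-distribution over variables z = (x, y), the expectation
# operator is summarized by the moment matrix M = E[(1, z)(1, z)^T].
# "E[q^2] >= 0 for all linear q" is equivalent to "M is positive semidefinite".
# Toy case: uniform over two pure strategy profiles of a 2x2 game.
profiles = np.array([[1.0, 0.0, 0.0, 1.0],    # (x, y) = (e_1, e_2)
                     [0.0, 1.0, 1.0, 0.0]])   # (x, y) = (e_2, e_1)

vecs = np.hstack([np.ones((2, 1)), profiles])        # prepend the constant 1
M = np.mean([np.outer(v, v) for v in vecs], axis=0)  # average of rank-1 moment matrices

print(M[0, 0])                                 # normalization: E[1] = 1.0
print(np.linalg.eigvalsh(M).min() >= -1e-12)   # PSD check: True
```

A pseudo-distribution is any M satisfying these linear and PSD constraints, whether or not an actual distribution with those moments exists; that gap is precisely what the lower bounds later in the talk exploit.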
But you can see that the size of this program grows exponentially with d: for degree d, the size is n^d if n is the number of original variables. So you do not want to go all the way to n. Now, the algorithmic framework we work with is oblivious — information-theoretic. Your system receives a degree-d pseudo-equilibrium but never gets to see the actual game: somebody computes the pseudo-equilibrium and gives it to you, and using it you may take as much time and as much space as you want, with whatever resources you want, to do some kind of rounding — but you can only look at this pseudo-distribution, not at the actual game. After that, you generate a set of candidate solutions — or, say, one solution — and a verifier tells you whether it is actually an equilibrium. That is the sense in which it is information-theoretic. We also allow partial solutions: you do not have to give an entire equilibrium; you can give one player's strategy, and we will complete the other player's strategy — that is fine. And not only one solution, but an array of solutions. So a degree-d, query-Q oblivious-rounding algorithm with a verification oracle takes a degree-d pseudo-equilibrium, does whatever rounding it wants with whatever time and resources it wants, generates Q candidate solutions, and gives them to the verifier; if any of them yields a Nash equilibrium, I am happy. Is the framework clear? Because I am going to prove a lower bound against it. The system never gets to see the game — that is important, because if I gave you the game, this part could generate the solution, since there is an exponential-time algorithm. We observe that many — most — of the algorithms we knew that use SOS fit into this framework, so it is strong enough in that sense.
So first — because we were aiming for some kind of positive result — we could recover the QPTAS within this framework. And we were not the first: we realized that just before us, somebody had used the SOS framework to obtain a QPTAS for Nash, a result we already knew from Lipton, Markakis, and Mehta (LMM), which Tim mentioned — they gave the first QPTAS for Nash. The main theorem we show is: if you are aiming for a 1/poly(n) approximation, then either the degree d must be Omega(n) or the number of queries must be exponential — either way you hit exponential running time. And if you are looking for a constant approximation, then either d must be Omega(log n) or the number of queries must be n^Omega(log n). So using this framework we cannot beat the state of the art. Here, once you get the pseudo-equilibrium, the oblivious rounding algorithm generates not just one solution but, say, Q solutions — that is the query part; you can generate 2^n of them if you wish. These are the final theorems, essentially saying that this particular framework does not allow you to beat the state of the art: a high-degree pseudo-equilibrium gives you no information about the actual equilibria, and that is what we wanted to establish. So let me go to the proof — I have 15 minutes. Yes, we prove it for 1/n^4, but I think we can bring it down to 1/n^2 with more optimization in the analysis. It would be interesting to see whether we can do 1/n^c for every c > 0, because that matches the PPAD-hardness of Nash equilibrium: for any given c > 0, finding a 1/n^c-approximate Nash equilibrium is PPAD-complete. Whether the same holds at degree around log n is open, and super interesting to me — I am trying to look into that. So there is a Q here; the Q is one.
Sorry — yes. The QPTAS we get does not need to enumerate: it just solves the SOS relaxation and then generates the solution, so Q is 1; otherwise you just do LMM. Structured classes — say, rank of R + C constant, or sparse games — are different; this is for general games. So let us go into the proof. The main idea is to construct a family of games — it has to be a family, because we also want hardness against enumeration. We construct a family of games that all share a common pseudo-equilibrium, but whose approximate equilibria are pairwise disjoint — in fact, far apart. That is our goal: the pseudo-equilibrium then contains no information about the Nash equilibria. We know such instances for classical optimization problems like 3-SAT, k-clique, and so on. And another thing we know is that once you have such a problem, if you can map it to another problem via a low-degree mapping of solution sets, the hardness carries forward as well. To be precise: the Nash equilibrium sets of any two games in the family are disjoint, with some gap between them — I am going to restrict them to a nicer set, and those sets are disjoint. The high-degree pseudo-equilibrium, however, is common. No, I do not allow you to pick the pseudo-equilibrium adversarially in my favor — I allow you to choose whatever pseudo-equilibrium you want. And the Nash equilibrium sets are going to be disjoint: it is not that each game has one special equilibrium; all Nash equilibria of any two games are disjoint, in both players' strategies. If you are going to use the SOS framework as a black box, you have to allow any pseudo-equilibrium to be used, unless you do some further tweak. For this game construction, let us first talk about the meta-construction before going into the details of the two theorems.
It has two parts. One is the SOS-hard game. What we want from this game is that it has a degree-d pseudo-equilibrium with very good payoff, but all of its epsilon-Nash equilibria — and the epsilon-Nash equilibria form a bigger set than the exact Nash equilibria — have really poor total payoff. The degree-d pseudo-equilibrium gives payoff delta to each player, while every epsilon-Nash equilibrium gives the two players a total payoff of at most 2·delta − epsilon. The second part is the Enum-hard game: a family of games, indexed by subsets of the strategy set, such that all their Nash equilibrium sets are pairwise disjoint. We make sure that for each subset S of strategies, the Nash equilibria of the corresponding game lie around the uniform distribution over S — U_S stands for the uniform distribution over that set of strategies. All of its epsilon-Nash equilibria are confined to a tau·epsilon ball around this point, and this essentially ensures disjointness. Then you combine these two games: there are two blocks, the first, red block and the second, blue block. The red block carries the information about the pseudo-equilibrium, and the blue block is the Enum-hard game. The way we combine them matters — what you put here is important — because we want the red part to contain no equilibrium: whatever equilibria R and C have there, we want to kill them. We show that the combined game has a degree-d pseudo-equilibrium in the red block, but all of its epsilon-Nash equilibria lie in the blue block. So the red, first block contains no Nash equilibrium and no approximate Nash equilibrium. And the theorem we can then show is: given subsets S_1 through S_Q, from which you construct these Enum-hard games,
and one SOS-hard game, you construct a family of games with a common first block — the SOS-hard game — and blue blocks built from the Enum-hard games. These games differ only in their blue blocks, not in the first, red block; the red block is common to all of them. Then all of these games share a common pseudo-distribution — pseudo-equilibrium — in the red block, but all of their Nash equilibria lie in the blue blocks, and they are far apart. This essentially gives us the hardness, the lower bound, because I am allowed to give you any pseudo-equilibrium: if I give you the common one, it tells you nothing about the blue block. The pseudo-equilibrium essentially tells you to set all strategies corresponding to the blue block to 0 — all their moments are 0 — yet all the equilibria live in the second block; there is no equilibrium in the first, red block. For the exponential lower bound, we construct the SOS-hard game from k-independent-set. There is a reduction from independent set to games by Gilboa and Zemel, and we extend it so that the hardness carries over not only via mapping solutions to Nash equilibria, but also so that an approximate Nash equilibrium yields an independent set, and so on. The property is: if there is an independent set, there is a Nash equilibrium with payoff 1; if there is no independent set, every 1/n²-approximate Nash equilibrium has payoff less than what we wanted. Then we have the Enum-hard family, which we build from matching pennies. We basically have to force the players to play only strategies from the set S: you set really bad payoffs on the rest of the strategies, and then it is essentially an identity matrix on the S part, so the players have to truly randomize, even if they only care about approximate solutions.
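The matching-pennies gadget can be sketched numerically (NumPy; the size, the set S, and the penalty value are my own illustrative choices, not from the paper): uniform play over S is an equilibrium, and the penalties keep play inside S.

```python
import numpy as np

def restricted_matching_pennies(n, S, penalty=-10.0):
    """Row wants to match the column, column wants to mismatch, but only the
    strategies in S are 'live'; everything outside S carries a big penalty."""
    R = np.full((n, n), penalty)
    C = np.full((n, n), penalty)
    for i in S:
        for j in S:
            R[i, j] = 1.0 if i == j else 0.0   # row scores by matching
            C[i, j] = 0.0 if i == j else 1.0   # column scores by mismatching
    return R, C

n, S = 6, [1, 3, 5]
R, C = restricted_matching_pennies(n, S)

u = np.zeros(n)
u[S] = 1.0 / len(S)                            # uniform distribution over S

# No unilateral deviation gains anything at (u, u): it is a Nash equilibrium.
row_gain = np.max(R @ u) - u @ R @ u
col_gain = np.max(C.T @ u) - u @ C @ u
print(row_gain, col_gain)   # both essentially 0
```

Intuitively, any play concentrated on few strategies of S lets the opponent exploit it, so even approximate equilibria must spread mass nearly uniformly over S — which is what makes different subsets S give far-apart equilibrium sets.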
Yes, this one. No — that is exactly the gap: if there is an independent set, the payoff of each player individually is 1, and the other bound is on the total payoff. Thank you. I am rushing through the details, but essentially one part is an extension of the Gilboa–Zemel reduction from independent set to (approximate) Nash equilibrium, and the other is a construction extending the matching pennies game. For the quasi-polynomial lower bound we had to do more work. There was a result — by authors whose first names I cannot pronounce — with Lee and Savani: assuming the strong exponential time hypothesis, an O(1)-approximation to the maximum payoff over Nash equilibria is hard to compute. We extended it to get what we wanted. They start with what they call free games; from there you construct a two-player game with an Omega(log n)-degree pseudo-equilibrium with payoff at least 1 for each player individually, while all actual equilibria have total payoff at most 2(1 − epsilon) for some constant epsilon. Again, the constant is not too great — I think it is around 1/500 — so it is not nice. For the Enum-hard game we extend a construction of Daskalakis and Papadimitriou, which is much more complex and which I am not going to introduce, but essentially we show that we can construct n^{log n} many games such that for every game, its Nash equilibria are restricted to an 8·epsilon ball around the uniform distribution over the set S_i, and for any two games the corresponding uniform distributions are at least 17·epsilon apart. So the two equilibrium sets have to be disjoint, and we combine the pieces to get the final game. So, essentially, I showed you this very nice technique — the first part is an old result, which I actually did here during my PhD — and the second is a relatively new result I am still working on.
So we defined this oblivious framework — SOS with oblivious rounding and a verification oracle — which is information-theoretic in nature. Using it, we could recover a non-enumerative QPTAS, essentially using only the pseudo-equilibrium without the enumeration part, but we showed lower bounds for the actual questions of finding a 1/poly(n) or constant approximation. And as consequences, we extended the results of Gilboa–Zemel and of Daskalakis–Papadimitriou in certain directions. Open questions: for constant approximation we are somehow stuck — still at 0.3393, right? I have not seen any improvement since. We know a polynomial-time algorithm for epsilon = 0.3393 and nothing better. It would be really interesting if some SOS framework or SOS-based algorithm gave a better-than-0.3393-approximate Nash equilibrium. And can we do some kind of beyond-worst-case analysis using this framework, or, for special classes, beat the state-of-the-art algorithms? That is it. Thank you. Questions?