In this talk I present recent work in algorithmic game theory, a very active and impactful area of research at the interface of theoretical computer science and game theory. The story of algorithmic game theory has, so far, largely been a story of impressive bad news: central solution concepts in game theory, like Nash equilibria, have been shown to be computationally hard. This essentially means that for these key concepts, like equilibria, there do not exist polynomial-time algorithms that compute them in general. But this is not the end of the story; we have to address this bad news. After all, equilibria are widely used in game theory. At a high level, games model strategic interactions: for example, games are used to understand strategic interactions like auctions, markets, and traffic networks, and games are also used in the study of things like social or biological systems. Equilibria in games model the outcomes of these strategic interactions. So, say, for the prediction of outcomes of a particular strategic interaction, we need algorithmic tools, tools that address these hardness results, that address these complexity barriers. Motivated by these considerations, in this talk I present positive results. By positive results I mean results that identify settings where things can be done efficiently, as opposed to the negative hardness results that identify limits of efficient computation. In particular, in this talk I will focus on Nash equilibria, one of the most central solution concepts in game theory. I will formally define Nash equilibria in later slides, but let me give you a high-level description of these equilibria to begin with. In games we have players; players are self-interested entities with payoffs, and players pick actions to maximize their own payoffs.
At a high level, these equilibria are distributions from which players cannot benefit by unilateral deviation. They are stable in this sense: if every other player sticks to what it is doing, a player cannot benefit by unilaterally doing something else, by moving to some other distribution over its actions. As I mentioned, there is a lot of application pull for algorithmic results about Nash equilibria, but these are computationally hard to find. A recent line of deep results shows that even in the case of two-player games, where we have only two players, it is computationally hard (PPAD-hard, in complexity-theoretic terms) to find a Nash equilibrium. But this hardness result has to be weighed against the many applications that require us to find Nash equilibria. So, at the core of algorithmic game theory are questions about the computation of Nash equilibria. The goal here is to understand, given the payoffs of the players, under what settings we can efficiently compute Nash equilibria and in what settings we can address these underlying hardness results. This is an important question from an applications perspective, but I have to say that it is an important question for fundamental reasons as well. The point is that equilibria model the behavior, the outcomes, of strategic interactions between human players, or organizations run by human agents. What the hardness results say is that reaching an equilibrium would require human players to have solved a computationally hard problem. This raises a question for the model itself: if the equilibrium requires a computationally hard problem to be solved by a human player, we have to rethink the model. With this in mind, understanding when we can efficiently compute equilibria is important for these fundamental reasons as well.
So now let me move on to positive results that address these complexity barriers. A very natural approach, a very classical idea in computer science, is to relax the output requirement. The hardness results say that computing an exact Nash equilibrium is computationally hard, but intuitively one can relax this output requirement and study the question: is it possible to compute an approximate Nash equilibrium efficiently? Again, I will formally define what I mean by an approximate Nash equilibrium, but the goal here is to understand whether, given the payoffs of the players, there is an efficient algorithm that finds an approximate Nash equilibrium. This is in fact a central open question in algorithmic game theory today: does there exist a polynomial-time algorithm that, even for two-player games, finds an approximate Nash equilibrium? In the first part of the talk I present results that speak to this central open question of computing approximate Nash equilibria. As I said, relaxing the output requirement is a typical approach in computer science. Let me also step back and describe a complementary approach to obtaining positive results, a complementary way to address these negative hardness results, that in fact focuses on the input side of this picture rather than the output. This complementary approach is based on a perspective that is quite prevalent in economics and game theory, but is somewhat unconventional in theoretical computer science. The idea is that in many settings studied in economics we do not observe these payoffs; payoffs are unobserved theoretical constructs. For example, for an economist, the utility of a consumer is not observed; what an economist observes are the choices of the consumer, what the consumer bought, not what the inherent utility is.
Since these payoffs are unobserved, they are not our input. Typically what is given to us is in fact observed behavior: the choices made by the players. Now, in this setup we in fact have some flexibility. What I mean is that a specific observed behavior can potentially be explained by different payoffs, that is, by different payoffs such that the equilibria of these payoffs correspond to the behavior. This is a bit of a subtle point and I will get back to it later in the talk, but just to give you an example: say I observe the behavior of players, the actions they pick. I can explain their behavior by payoffs specified either in rupees or in dollars. At an equilibrium, the strategies, the behavior, do not change with the scale in which I specify the payoffs. So there is some inherent flexibility in the payoff specification. This is a very simple example, and the point is much more subtle than that, but the point remains that for a specific observed behavior I can have essentially different payoffs that explain this observed behavior. In the second part of the talk I present results that leverage this flexibility to again address the hardness results for equilibria. So that is the outline of the talk, two parts. With that clear, let me move on to some technical results. Let me first talk about the computation of approximate Nash equilibria; in particular, consider the case of two-player games, where we have two self-interested players. This is a fairly standard setup, but just to make sure everyone is on the same page: two-player games represent settings where two self-interested entities simultaneously select an action to maximize their own payoffs. To make this model concrete, let me give a very simple example. Typically in these games we list out the payoffs of the players as matrices.
We have two matrices: the first matrix gives the payoffs of the first player, and the second matrix gives the payoffs of the second player. These matrices list out the payoffs of the players in the sense that the actions of the first player index the rows of these matrices and the actions of the second player index the columns. So let us consider the game of rock-paper-scissors. We have two players, and both players have three actions: rock, paper, and scissors. Say the first player plays rock and the second player plays scissors. The first player wins, and we read off its payoff, which is 1. And when the first player is playing rock and the second player is playing scissors, the second player loses and gets a payoff of -1. So for every possible action choice of the two players, we can simply read off their payoffs from these two matrices. That is an example of a two-player game. What is notable about this example is that here there is no stable deterministic strategy. If the first player deterministically decides to always play rock, then the second player can deterministically decide to always play paper and will always defeat the first player. That is not in the best interest of a rational, or smart, first player. So in this setting, to obtain stable outcomes, we have to consider settings where players randomize, that is, where each player picks an action based on some distribution. The first player will pick an action, i.e., one of the rows, based on some distribution, and the second player will pick one of its actions, i.e., one of the columns, based on some distribution. Let me generalize this representation: throughout the talk I use matrices A and B to represent the payoffs of the two players.
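A minimal sketch of this payoff representation (NumPy and the exact layout are my own rendering of the slide): the rock-paper-scissors payoffs can be written down as two matrices, and, as noted later in the talk, the two matrices sum to the all-zeros matrix, which is the zero-sum property.

```python
import numpy as np

# Payoff matrices for rock-paper-scissors.
# Rows index the first player's actions, columns the second player's;
# action order is (rock, paper, scissors); win = 1, loss = -1, tie = 0.
A = np.array([[ 0, -1,  1],   # first player's payoffs
              [ 1,  0, -1],
              [-1,  1,  0]])
B = -A                         # second player's payoffs

# First player plays rock (row 0), second plays scissors (column 2):
# the first player wins and the second player loses.
assert A[0, 2] == 1 and B[0, 2] == -1

# The game is zero-sum: A + B is the all-zeros matrix.
assert np.all(A + B == 0)
```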
So we have two players, and both players have n actions. We have a matrix A that lists out the payoffs of the first player and a matrix B that lists out the payoffs of the second player. If the first player plays action i and the second player plays action j, the payoff of the first player is A_ij and that of the second player is B_ij. As I mentioned before, to understand stable outcomes and strategies, we have to consider probability distributions. I will use x and y to denote probability distributions over the rows and the columns, respectively. So the first player picks an action, a row, based on distribution x, and the second player picks an action, a column, based on distribution y. With this notation in hand, the definition of a Nash equilibrium is quite intuitive: it is a pair of probability distributions from which no player can benefit by unilateral deviation. Let me parse these inequalities. The left-hand side of the first inequality is the expected payoff of the first player: A is the payoff matrix of the first player, both players play according to distributions x and y, and taking the term-by-term product, the expected payoff of the first player is x^T A y. What this inequality says is that the first player cannot benefit by unilaterally moving to some other distribution p: if the second player sticks to distribution y, then the first player cannot increase its expected payoff by playing some other distribution p. Similarly for the second player: if the first player sticks to distribution x, the second player cannot benefit by unilaterally moving to some other distribution p. So, from a technical standpoint, the problem of computing a Nash equilibrium is the following: given two matrices A and B, find two distributions x and y that satisfy these inequalities.
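The inequalities parsed above can be written out explicitly; this is my reconstruction of the slide's notation, where $\Delta^n$ denotes the probability simplex over the n actions:

```latex
(x, y) \text{ is a Nash equilibrium of the bimatrix game } (A, B) \text{ if}
\quad
x^{\top} A\, y \;\ge\; p^{\top} A\, y \;\; \text{for all } p \in \Delta^{n},
\qquad
x^{\top} B\, y \;\ge\; x^{\top} B\, p \;\; \text{for all } p \in \Delta^{n}.
```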
As I mentioned, this is a computationally hard problem: it is unlikely that there exists an efficient, polynomial-time algorithm that finds such distributions. So now let me relax the notion and consider approximate Nash equilibria. The idea is to look at the following definition, which is quite standard in game theory and is a very practical relaxation. We allow for a slack of epsilon: an epsilon-approximate Nash equilibrium is a pair of probability distributions from which the players cannot benefit by more than epsilon by deviating unilaterally. So we add this slack of epsilon on the right-hand side and call such an x and y an approximate Nash equilibrium. The interpretation is that there is some inertia to changing the distribution; for this reason the players are not optimizing to the last bit, and we get an approximate Nash equilibrium. A central open question in algorithmic game theory is to find an efficient algorithm that finds such distributions given matrices A and B. The computation of equilibria is a long and deep area of research. As I mentioned, it is a computationally hard problem, so it is not surprising that the best known algorithm for finding an exact Nash equilibrium runs in exponential time; this is the well-known algorithm of Lemke and Howson. But there are notable classes of classical games where exact Nash equilibria can be computed efficiently. Notable among these are zero-sum games: games where the sum of the two payoff matrices A and B is all zeros. These games model perfect competition: whatever the first player wins is exactly equal to what the second player loses, and so on. The rock-paper-scissors example I gave you was a zero-sum game: summing the two matrices A and B, I get all zeros. And by the celebrated minimax theorem of von Neumann and LP duality, one can find exact Nash equilibria in zero-sum games in polynomial time.
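Because the best unilateral deviation against a fixed opponent strategy can always be taken to be a pure action, the epsilon-Nash condition can be checked by comparing each player's expected payoff against their best pure response. A minimal sketch of such a check (the function name and NumPy usage are my own, not the talk's):

```python
import numpy as np

def is_eps_nash(A, B, x, y, eps):
    """Check whether (x, y) is an eps-approximate Nash equilibrium of (A, B).

    It suffices to compare each player's expected payoff with the payoff of
    their best *pure* deviation, since a best response can be taken pure.
    """
    payoff1 = x @ A @ y            # first player's expected payoff x^T A y
    payoff2 = x @ B @ y            # second player's expected payoff x^T B y
    best_dev1 = np.max(A @ y)      # best pure deviation for player 1
    best_dev2 = np.max(B.T @ x)    # best pure deviation for player 2
    return best_dev1 <= payoff1 + eps and best_dev2 <= payoff2 + eps

# In rock-paper-scissors, uniform play by both players is an exact
# (hence 0-approximate) Nash equilibrium.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
B = -A
u = np.ones(3) / 3
assert is_eps_nash(A, B, u, u, eps=0.0)

# Always playing rock is beaten by "always paper", so it is far from
# equilibrium even for a generous slack.
rock = np.array([1.0, 0.0, 0.0])
assert not is_eps_nash(A, B, rock, rock, eps=0.5)
```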
There are other interesting classes of classical games; for example, work done at IIT Bombay identifies classes of games where exact Nash equilibria can be computed efficiently: if the rank of the sum of the two payoff matrices, A + B, is one, then again you can find an exact Nash equilibrium in polynomial time. Now, moving on to the approximate side of things, the best known algorithm for finding an epsilon-approximate Nash equilibrium runs in time n^{O(log n / ε²)}. This is the famous result of Lipton, Markakis, and Mehta; it gives a quasi-polynomial-time algorithm for computing approximate Nash equilibria in two-player games. Whether this can be reduced to polynomial time is the really important open question, to be clear. And again there are specific classes of games where approximate Nash equilibria can be computed efficiently. There is a deep result showing that if the rank of the sum of the payoff matrices is small, that is, given the payoff matrix of the first player and the payoff matrix of the second player, the rank of the matrix A + B is small, then you can find an approximate Nash equilibrium in polynomial time; I am ignoring the poly(1/ε) terms here for ease of presentation. The result I present today contributes to this line of work and considers a very natural measure of games: the sparsity of the game. I define this measure formally on the next slide, but what I do want to mention, and what distinguishes this result from prior work, is that it interpolates across the entire spectrum of games. For games that are near zero-sum with respect to this sparsity measure, we will in fact get a polynomial-time algorithm for computing approximate Nash equilibria.
And for general games, the running time of the algorithm I will mention matches the best known upper bound, that of the Lipton-Markakis-Mehta algorithm. So, in terms of this sparsity, we will go from the zero-sum, polynomial-time case all the way to the general bound of the Lipton-Markakis-Mehta algorithm. The formal definition of sparsity is quite natural. Look at a game where A and B are the payoff matrices of the two players, and now look at the matrix A + B: define the sparsity to be the maximum number of non-zero entries in any column of this matrix A + B. So I take this matrix, the sum, and if every column has at most s non-zero entries, the sparsity of the game is s. One can think of this as a robust notion of zero-sum. Zero-sum games model perfect competition, where whatever one player loses is exactly equal to what the second player wins; here the idea is that in most settings what the first player loses is exactly what the second player wins, except for a few exceptional, odd cases where both players lose or both players win, and those are the non-zero entries of this matrix A + B. [Audience] With this definition in mind, is it possible that in real-life scenarios the non-zero entries are the only ones we care about? [Speaker] Sure. You can have a setting where every entry is just zero, both are getting zero, and there is one entry where both are getting 10; then they will just play that. [Audience] In light of that, how would you justify this kind of explanation? [Speaker] That is one case, but that is not necessarily what will always happen. Sparsity is just saying that, in most outcomes, money does not fall from the sky, and it gives a general result; for those specific easy cases you can of course do something directly.
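The sparsity measure just defined is straightforward to compute; here is a small sketch (the function name is my own):

```python
import numpy as np

def sparsity(A, B):
    """Maximum number of non-zero entries in any column of A + B."""
    C = A + B
    return int(np.max(np.count_nonzero(C, axis=0)))

# Rock-paper-scissors is zero-sum, so its sparsity is 0.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
assert sparsity(A, -A) == 0

# Perturb one outcome so that both players win there: a single
# "both win" entry appears in A + B and the sparsity becomes 1.
B = -A.copy()
B[0, 0] = 1   # now (A + B)[0, 0] = 1
assert sparsity(A, B) == 1
```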
[Audience] You have talked about sparsity; rank and sparsity are of course related, but is there any comment on that? [Speaker] Rank and sparsity are technically incomparable. I did think about trying to port this result to the rank setting, or something of that sort, but the change of basis involved in going to the spectral view blows up the approximation; it is still ongoing work whether one can connect the two, but in and of themselves the two results are incomparable. What do I mean? Say A + B is the identity, a full-rank matrix. In that case the rank-based running time is out of the question, it is exponentially large, far worse than quasi-polynomial; but for me, if A + B is the identity, the sparsity is one and I am good, I can find an approximate Nash equilibrium in polynomial time. On the flip side, I can give you matrices where A + B is all ones: the rank is one, so you can apply the rank-based result, but the sparsity is n, completely dense. I would also like to mention that I am working with this definition just for ease of presentation; the notion is much more robust. You can have small entries, relative to the dimensions of these matrices, and zero them out, and with that notion of sparsity the result I am about to mention will still hold; note also that sparsity constrains only the sum, not A and B individually. So, with this definition in mind, let me make this concrete. If I look at my rock-paper-scissors example again, this was a zero-sum game, the sum of the two matrices is all zeros, so the sparsity here is zero: all columns are zero. And in general sparsity can be at most n; these are n × n matrices, so the number of non-zero entries in any column is at most n. There is a technical caveat, for me sparsity is always the max of two and this count, but for ease of presentation let us stick with this definition. And here is
the formal theorem statement. What we can show is that in an s-sparse game we can find an epsilon-approximate Nash equilibrium in time n^{O(log s / ε²)}; the dependence on epsilon is as shown here, and, as is typical, the payoffs are normalized to lie between -1 and 1. The useful implications of this result: when s is fixed, the game is near zero-sum, per column I have a fixed number of non-zero entries, log s is fixed, and I get a polynomial-time algorithm for computing approximate Nash equilibria. In general s can be at most n, these are n × n matrices, and in that case the running time of the algorithm matches the best known upper bound, that of the Lipton-Markakis-Mehta algorithm. So it completely interpolates based on this natural sparsity measure. That is the theorem statement I wanted to mention, and I will now spend a couple of slides on the key technical construction that goes into proving this result. It might seem a bit distant at first, but after describing it I will connect it back to approximate Nash equilibria. What is coming up is the key technical takeaway of the talk: if you want one technical takeaway from today's talk, this is the result. It is an approximate version of Carathéodory's theorem. Carathéodory's theorem is a classical result in convex geometry, over a hundred years old. What it says is that if I give you a set of vectors, it does not matter how many, and all these vectors lie in d dimensions, then any vector w in their convex hull can be expressed as a convex combination of at most d + 1 of these v_i's. Pick any w in the convex hull; you can always write w as a convex combination of at most d + 1 of the v_i's, no matter how many v_i's you have. Now consider a very natural approximate version of this theorem. Before that, let me mention that this bound of d + 1 is tight: it is not too hard to construct examples where particular vectors require exactly d + 1 of the v_i's to be expressed exactly. But here is the approximate version: let us pick
a norm p, an arbitrary vector p-norm. What we can show is that for every vector w in the convex hull, while w itself might require d + 1 of the v_i's, no matter where w is in the convex hull there always exists a vector w' that is epsilon-close to w, such that w' can be expressed as a convex combination of at most 4p/ε² of the v_i's. There is some scaling going on that I should mention, because I am looking at relative error. In the formal theorem statement, I am given this set of vectors, and, just for scaling, all these vectors have p-norm at most 1: I give you vectors that lie on or inside the p-norm unit ball. What the theorem says is that for every vector w in the convex hull there is an epsilon-close vector w' that is a convex combination of at most 4p/ε² of the v_i's. What is notable about this result is that it is dimension-free: d, the underlying dimension, does not make an appearance. You can be in a million dimensions, it does not matter; if you allow an error of epsilon, you can find a vector that is epsilon-close and only requires 4p/ε² of the v_i's. The proof I have of this theorem uses the Khintchine inequality; one can also instantiate Maurey's empirical method from functional analysis to prove this result. In the interest of time I will not go into the proof, but I will be happy to talk about it offline; there are many other interesting algorithmic applications that I have and am still working on. So the point is, how is this approximate Carathéodory theorem connected back to Nash equilibria? [Audience] Going back: given w, is it easy to find such a w'? [Speaker] It is; there is a simple randomized algorithm, by sampling. So, how does this connect back to Nash equilibria? Here is the high-level idea of the connection. Let us go back to the definition of approximate Nash and notice that x and y are probability distributions: x is a
probability distribution over the rows, and y is the probability distribution over the columns. So Ay is now a vector that lies in the convex hull of the columns A₁, A₂, ..., Aₙ of the matrix A: these are the columns of the matrix, y is a probability vector, and hence Ay is a vector in this convex hull. That, at a high level, connects Carathéodory to Nash; a number of technical details go into using this approximate Carathéodory theorem to prove the Nash result, and in the interest of time I will just give this high-level picture. If you are interested, I will be very happy to tell you more; the result of this comparison is here, or rather on the next slide. [Audience] Can I actually choose these vectors? If I order them one next to the other and take consecutive ones, is there at least some connection to a sparse approximation to w? [Speaker] That is exactly what I am saying: there is a sparse approximation to w. But the point is, you can pick different convex combinations for different w's; for different w's there are potentially different subsets. There is at least one such subset, there could be many. Given a w, you can find the required vectors; but there need not exist a single set of d + 1 vectors that works for all points. Think of this room: I can partition it into two triangles, and if you are in this triangle I need these vertices, and if you are in that triangle, those. Now, having talked about this result, this combination of approximate Carathéodory and Nash in sparse games, let us consider the problem of computing approximate Nash equilibria in multiplayer games, games with more than two players. In joint work with Yakov Babichenko and Ron Peretz, we present the best known algorithm for finding approximate Nash equilibria in multiplayer games. Specifically, we consider games where we have m players, each with n actions,
where the number of actions is at least 2. We give the state-of-the-art algorithm for approximate Nash equilibria for the case when the number of players is comparable to the number of actions of each player. The running time of our algorithm is N^{O(log N)}, where N is the input size of such a game. I have m players, each player has n actions, so I have m-tuples with n possible choices for each player; the number of action profiles is n^m, and for each action profile I have m payoffs, one for each of the players. So the input size of such a game is N = m·n^m, and what we show is that we can find an approximate Nash equilibrium for these large games in time N^{O(log N)}; here too the dependence on epsilon is really important. This result uses an interesting concentration bound for product distributions. A bit more qualitatively, what we do is exponentially improve on the running time bounds for multiplayer games that follow from the result of Lipton,
Markakis, and Mehta, the result shown right here. And that wraps up the first part of the talk: I talked about computing Nash equilibria, where in the problems I considered I was given the payoffs of the two players, or of multiple players, and asked to compute an approximate Nash equilibrium. Now let me step back and go to the complementary approach I mentioned earlier for positive results. The idea here is that in many settings payoffs are unobserved; what we have, what is given to us as input, is in fact the choices of the players. If you look at it this way, the input is observed behavior, and we have some flexibility in the payoff specification: in particular, for a specific observed behavior you could have multiple payoffs that explain it, in the sense that you can have multiple payoff matrices such that the equilibria of these matrices correspond to the observed behavior. Let me be a bit more formal about it. Let us formalize observed behavior as a collection of probability distributions; these are the observed equilibrium strategies. We have couples of probability distributions: in observation 1, player 1 picks distribution x₁ and player 2 picks distribution y₁; in observation 2, the second observed equilibrium strategy, the first player picks distribution x₂ and the second player picks distribution y₂. We have this collection of probability distributions, and for any such collection, any such observed behavior, we have multiple games such that the equilibria of these games correspond to this given set of probability distributions. Now, some of these games can be computationally hard and some of these games can be computationally easy. What do I mean?
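As a tiny illustration of this flexibility, in the spirit of the earlier rupees-versus-dollars remark (the helper function and NumPy usage are my own sketch, not the talk's): equilibria are invariant under positive rescaling of each player's payoffs, so a single observed behavior is already explained by many different payoff matrices.

```python
import numpy as np

def is_eps_nash(A, B, x, y, eps):
    """eps-Nash test via best pure deviations (a sketch; names are my own)."""
    return (np.max(A @ y) <= x @ A @ y + eps and
            np.max(B.T @ x) <= x @ B @ y + eps)

# Rock-paper-scissors; uniform play is an exact equilibrium.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
B = -A
u = np.ones(3) / 3
assert is_eps_nash(A, B, u, u, eps=0.0)

# "Rupees versus dollars": positively rescaling each player's payoffs
# (even by different constants) leaves the equilibrium condition intact,
# so the same observed behavior is explained by many payoff matrices.
assert is_eps_nash(83 * A, 2 * B, u, u, eps=0.0)
```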
Well, we know that in games in which the rank of the payoff matrices is small, we can find all Nash equilibria in polynomial time. So, given a specific observed behavior, I can have a computationally simple explanation, computationally simple low-rank payoff matrices such that the equilibria of these matrices contain all the observed strategies, or I can have a computationally hard game that explains the same observed behavior, the same given set of probability distributions. Neither is a priori preferable; there is no optimality claim here. An important point: testing, learning, and prediction are the important problems going ahead; all I am saying for now is, here is this set of observed equilibrium strategies, can you give me a structurally simple game, formalized by rank, that explains these observations? [Audience] What are those probability distributions, exactly? [Speaker] They are equilibria, potentially observed equilibrium strategies. [Audience] An equilibrium is defined relative to payoffs, right? So (x₁, y₁) is an equilibrium for some A and B. [Speaker] It is a potential equilibrium; you do not know what A and B are, correct. [Audience] So you want to find A and B such that these are equilibria of the game: unknown A and B, but the x's and y's are known equilibria. [Speaker] Exactly: here I am giving you equilibria, and I am trying to understand whether you can find computationally simple matrices for which these are actually equilibria. [Audience] But there could be a case where you give me distributions such that no game can explain them. [Speaker] We rule out those cases: by assumption these are observed equilibrium strategies, and you want to find a computationally simple game; the premise is that a simple, a computationally simple, explanation exists. [Audience] But these would be linear constraints in the entries of A and B, sure, so you can always test whether there is some matrix that satisfies them; when you add the rank constraint, then you have these
non-linear spaces, intersected with the rank condition, that you have to check. [Speaker] Exactly; those constraints are harder. So, in joint work, what we do is identify properties of this observed behavior, properties of these observed equilibrium strategies, that quantitatively connect to low rank, quantitatively connect to simple games. Let me give you a specific example. What we show is: suppose I give you equilibrium strategies that have small support. You have x₁, y₁, x₂, y₂, and so on, and each of these distributions has small support, say support size at most s. Then you can always find a game of rank 2s + 1 such that the equilibria of this game correspond to, contain, the observed strategies. [Audience] What exactly is this s? [Speaker] Say I have only two observations. x₁ is a distribution over [n], a distribution over the rows, and y₁ is a distribution over the columns, also over [n]; say both x₁ and y₁ have small support, x₁ puts non-zero probability on at most s rows. Now look at the second observation, x₂: it is again a distribution over [n], a completely arbitrary distribution, its overlap with x₁ does not matter, but again in x₂ the player puts non-zero probability on at most s rows. If you have such a generic set of couples of probability distributions, the result says that for such a set of observed equilibrium strategies I can find matrices of rank 2s + 1 such that these observed strategies are equilibria. [Audience] So doesn't small support directly imply some condition on the rank? [Speaker] No, and that is exactly the subtle point: these can be arbitrary distributions, not uniform ones, so it is not obvious, not even for observations on the diagonal. Say I have a set of observations where, in the first observation, both players play action one, no randomization, only one action,
with support size one. In the second observation both players play action two, again no randomization, and so on. If you plot out these observations, they lie on the diagonal, and one explanation is the identity matrix, which is full rank. So it is not obvious that for such a collection there is a low-rank explaining matrix. The result uses polynomials: it connects low-rank explanations to low-degree polynomials, and for non-trivial observations you can construct certain low-degree polynomials and use them to build these matrices; there are quite a few details here. We also have similar results in higher dimensions; let me mention one more point. [Audience] There is a basic assumption that all these observations come from a single pair (A, B), right? If you give me some sequence of couples (x_k, y_k) of support s, it is not clear they actually come from one game; you need the additional assumption that they are observed equilibria of a single game. [Speaker] From a technical standpoint, insisting on a single game makes the result more challenging: if I were allowed different matrices, then for each observation I could come up with a new matrix. [Audience] So if I just give you some (x_k, y_k), all of support s, will you design the A and B, or do you need the condition that there exists an A and B for which these are equilibria? [Speaker] I can construct them: for a single observation I can always use a zero-sum game, and the theorem constructs one game of rank 2s + 1 for the whole collection; if multiple A's and B's were allowed, the job would be easier. [Audience] But for a given game there may be equilibria with support larger than that of x₁ and y₁, containing them, so these need not be the maximal ones. [Speaker] That may be; these observations are not generic in that sense, and for now this is just a testing
Is this just a testing exercise? I mean, fundamentally, for learning, prediction, and all of these tasks, the first step is to understand whether there is a low-rank explanation; you can then add additional constraints, for example that these are the only equilibria. And to be clear: given an A and B with, say, equilibrium supports I and J, I am not claiming that players would play these particular strategies; I am saying that in such a situation players would play a generic element of I and a generic element of J, and for now this is only the first step. There are other structural properties of the data, something in the direction of a chromatic number of the data, that one could exploit as well. The high-level takeaway message is that, in many settings, if your observed behavior is structurally simple, then you can come up with a comparatively simple explanation, a structurally simple game, and hence bypass the complexity barrier for Nash equilibrium computation. We also have results for pure Nash equilibria, where the observations correspond to single action choices by the two players. That is pretty much the end of the technical part of the talk. I presented two results: one on the computation of approximate Nash equilibria, and a second that addresses hardness from a complementary perspective. My focus was on Nash equilibria, and going forward it is natural to ask whether such positive results can be obtained for other types of equilibria and for other settings as well. One direction I want to explore in the immediate future is games over networks. Networks are pervasive today, and a lot of networked interactions are now being studied as games. The idea is that each player is influenced only by its immediate neighbors: you have a network, players play games with their neighbors, and the network structure defines the game.
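As a concrete illustration of the network model just described, here is a minimal sketch (my own toy example, not from the talk) of a graphical game: each player's payoff depends only on its own action and its neighbors' actions, so evaluating a payoff, or checking a deviation, costs time proportional to a player's degree rather than to the whole network.

```python
# A tiny graphical game: a coordination game on the path 0 - 1 - 2.
# The graph, payoff function, and action set are assumptions of this sketch.
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def payoff(player, actions):
    """Player earns 1 for each neighbor playing the same action as itself."""
    return sum(1 for v in neighbors[player] if actions[v] == actions[player])

def is_pure_nash(actions, n_actions=2):
    """No player can gain by unilaterally switching its own action."""
    for p in neighbors:
        current = payoff(p, actions)
        for a in range(n_actions):
            if payoff(p, {**actions, p: a}) > current:
                return False
    return True

everyone_zero = {0: 0, 1: 0, 2: 0}
print(is_pure_nash(everyone_zero))   # True: full coordination is stable
```

The algorithmic question raised in the talk is whether this locality can be exploited to compute (approximate) equilibria efficiently, instead of enumerating profiles as this brute-force check does.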
The goal is to understand whether we can compute approximate Nash equilibria here as well, whether we can come up with algorithms, with positive results, for such settings. From a technical standpoint this is an interesting setting for me because, again, a lot of equilibrium computation problems can be cast as mathematical programs, and the previous results I mentioned also leverage the structure of particular mathematical programs to find equilibria and approximate equilibria. Going forward, I would like to understand whether the techniques and tools I developed for approximate Nash equilibria can be extended or applied to address approximate computation in such network settings. Another context where these mathematical-programming tools naturally show up is market equilibria and price computation in markets. These are two specific examples of domains I want to look at: games over networks, and markets. My overall goal in this domain is to develop a structural understanding of approximation in game theory: classes of games that on one hand are computationally tractable and at the same time are expressive, able to explain large classes of observed behavior. The sparsity result I mentioned is an example of such a result: if the game is sparse, you can find an equilibrium efficiently, and so on. Another direction I am quite interested in working on is optimization over equilibria. I talked about computing a Nash equilibrium, some equilibrium, in certain contexts; but in many settings it is quite relevant to find an equilibrium that is optimal with respect to some objective. For example, it is natural to ask whether you can find an equilibrium that maximizes the sum of the payoffs of the players, or an equilibrium that is Pareto efficient. Again, this is a very rich area of research and I am quite excited to work in it. So far I have focused on my results in game theory.
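To make the optimization-over-equilibria question concrete, here is a brute-force sketch (my own, not from the talk): enumerate the pure profiles of a small bimatrix game and keep the equilibrium maximizing the sum of payoffs. The exponential cost of this naive approach in larger games is exactly why efficient algorithms for the problem are interesting; the stag-hunt payoffs below are an assumed example.

```python
import numpy as np

def best_welfare_pure_nash(A, B):
    """Among pure Nash equilibria of the bimatrix game (A, B), return the one
    maximizing the sum of the two players' payoffs (None if no pure NE)."""
    best, best_welfare = None, -np.inf
    n, m = A.shape
    for i in range(n):
        for j in range(m):
            # (i, j) is a pure NE if neither player has a better deviation.
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                welfare = A[i, j] + B[i, j]
                if welfare > best_welfare:
                    best, best_welfare = (i, j), welfare
    return best, best_welfare

# Stag hunt: both (stag, stag) and (hare, hare) are equilibria,
# but (stag, stag) has the larger total payoff.
A = np.array([[4., 0.], [3., 3.]])
B = A.T
print(best_welfare_pure_nash(A, B))   # ((0, 0), 8.0)
```

Finding a Pareto-efficient equilibrium could be phrased the same way with a different selection rule over the set of equilibria.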
But in general I am interested in approximation and optimization; there are other topics and areas I have worked in, and I am very happy to talk about those as well. In conclusion, I talked about these results today; hopefully they gave you a sense of my research interests, and I look forward to seeing how these results connect to yours.

Question: On the market-equilibrium direction, can you elaborate? Markets are typically given by equilibrium conditions.
Answer: My point is that these problems can be cast as mathematical programs; even the sparsity result I presented used such a program.
Question: From a philosophical viewpoint, in markets you optimize a problem, whereas market equilibria, for example, are given by complementarity conditions; they are not usually seen as game theory. So are you looking at traditional markets, or are you trying to enlarge the notion of markets?
Answer: Right now I am looking at traditional markets. One canonical problem is exchange markets, which are given by market equilibrium conditions, by price discovery; but finding approximate equilibria and so on remains interesting there.
Question: Then why call it game theory, since these settings are not strategic? Are you encompassing markets within game theory, within microeconomics?
Answer: I am not taking a position on these definitions; I am just asking what one can say with these tools.
Question: But market equilibrium conditions are idealized; they are not based on rationality or stability, they are based on idealized price discovery, labour, employment, and so on.
Answer: Settings where we allow the players rationality and stability would be a different computation, and I do not mean prediction markets either; that is something different from what I
mean here, which is the classical, standard setting.
Question: What is the agenda? Is it really just computing, is it to support some computational question that economists have posed, or does it have an independent agenda?
Answer: It is not a single agenda; that is one stream of work, and today I focused on what I wanted to talk about. But fundamentally, a lot of algorithmic game theory comes from engineering and computer science: it brings design ideas, rather than the social-science style of analysis in economics, and that fundamentally distinguishes it.
Question: Is there any core agenda by itself that does not come from economics?
Answer: Designing efficient algorithms. Equilibria were defined by economists, but the algorithmic viewpoint, for example, was not seen by economists before. And beyond algorithms, even in analysis and design there are constructs, certain auction designs, descending-price auctions and so on, that are contributions of this tradition. To go back to what I was saying: the goal that differentiates this area is design. Typically game theory is in the business of understanding, like a physicist: how do you explain this, how do you understand this? What separates algorithmic game theory is the design perspective: now that I have these tools, can I design an auction, can I design a mechanism, can I design incentives that lead to a desired outcome?
Question: In the Indian scenario, do you think this modeling is useful?
Answer: Definitely; there are certain things I think are quite useful in the Indian context. One example is work being done, which I saw in Uganda, on auction platforms, and I think that is a very good scenario for game theory: a platform for pricing, something that is very important.