the Grace Murray Hopper Award, the Kalai Prize in Game Theory and Computer Science, as well as the Gödel Prize. Another thing I wanted to mention about Tim is that he is a great communicator and a teacher. In fact, whenever somebody wants to work with me on game theory, the first thing I tell them is: please go and read Tim's lectures on game theory on the web. He's put up videos, he has lecture notes, and he has these really interesting demonstrations of the phenomenon called Braess's paradox, where he uses various physical devices to show how that paradox actually works. If you haven't seen it, I'd recommend you check it out on YouTube. So yeah, Tim, we're very glad to have you.

Okay, thanks very much, Umang. And thanks to Umang and Sid for the invitation to speak at this workshop. I think it's a great topic, and I completely agree that it would be good to get more complexity theorists interested in algorithmic game theory, so I'll be trying to make that pitch myself. The plan for the talk is to give you a teaser of three different papers, so let me briefly say what you can expect and then we'll dive right in. I should say I haven't been working squarely on complexity theory for a couple of years, so the results are all a little bit old; if some of you know some of them, I apologize. But I hope, especially for the students in the room, most of these will still be new. The first one is joint work with Inbal Talgam-Cohen from the Technion, "Why Prices Need Algorithms". In that part of the talk we'll be looking at what are called Walrasian equilibria. That's a notion of a market-clearing equilibrium when you have multiple indivisible goods. Walrasian equilibria sometimes exist and sometimes don't.
So economists have been super interested in understanding exactly when Walrasian equilibria are guaranteed to exist. And we're going to show, really quite surprisingly, that complexity theory has a lot to say about the existence of this type of equilibrium. Which is odd, because I'm asking a non-computational question, just an existence question: does there exist a Walrasian equilibrium? And yet we'll see that a complexity separation between two different optimization problems in fact implies that you do not have Walrasian equilibria in certain kinds of markets. Then we'll move on to barriers to near-optimal equilibria. Here we'll be looking at the so-called price of anarchy: you look at a game and its Nash equilibria, and you ask to what extent the Nash equilibria approximate an optimal solution. The goal is going to be to prove lower bounds on the price of anarchy — negative results, in particular for quote-unquote simple combinatorial auctions. Here the role complexity theory plays is through communication complexity: we'll see a generic reduction from lower bounds for non-deterministic communication protocols to lower bounds on equilibria of simple multi-item auctions. And then the final paper is joint with Parikshit Gopalan and Noam Nisan, "The Borders of Border's Theorem". Border's theorem is a famous result from economics about single-item auctions, and people like it a lot, so they have tried to extend it beyond single-item auctions to more complex settings. For the most part, they have failed. We're going to see that, again, complexity theory provides an explanation for why: if you could generalize this famous Border's theorem much beyond single-item auctions, the polynomial hierarchy would collapse.
All right, and definitely feel free to jump in at any time with questions; I'll also pause between parts in case you have questions about a given part. But let's jump right in: why prices need algorithms. Like I said, this is going to be about a concept known as Walrasian equilibria, so let me tell you what those are. We're looking at a market. There are agents, or bidders — the people who are going to be getting things — and then there are the items, which is what we want to allocate. n will always denote the number of agents or bidders, m the number of items. These items are indivisible: they cannot be split fractionally across many people, so think of houses or what have you. By an allocation I just mean a partition of the items: you have one copy of each item, you're going to give it to somebody, and once you've made an assignment you've partitioned the items amongst the agents. How does a bidder feel about a given allocation? They look at the items they get, and the question is, what is their valuation for those items? This valuation is a set function for a given bidder i. It's like a lookup table of length 2^m, where m is the number of items, in the bidder's head: for each subset of items they might receive, it gives the maximum willingness to pay of that bidder for that particular bundle. So maybe if you get items 3, 5, and 7, you'd be willing to pay up to 12; if you get items 6 and 8, you'd be willing to pay up to 17; and so on. That's the valuation of a bidder. And a Walrasian equilibrium, like I said, is really just a notion of market-clearing prices, but where you have multiple items, not just one. It comprises two things: an allocation — an assignment of each item to a bidder — plus one price for each item, such that in some sense both sides of the market are happy.
Bidders are happy in the sense that, given the prices, each bidder gets their utility-maximizing bundle, their favorite bundle. Out of the 2^m bundles they could get, in a Walrasian equilibrium they get the one that maximizes their value for those items minus what they have to pay for them, given the item prices. So each bidder is happy in the sense that, given the full choice of whatever bundle they could want, they get their favorite. And the supply side is also happy, in the sense that the market clears: each of the items is taken by exactly one of the bidders. Or, if you like, you can have some unsold items if nobody wants them, but then the price of those items should be zero. So you think of everybody independently picking their favorite bundle at these prices, and magically each item is chosen by exactly one of the agents. That's what a Walrasian equilibrium is.

All right, as I said, these may or may not exist. To show you that they need not exist, and also to give you some practice with the concept, let's look at a very simple market with just two bidders and two items. Let me tell you the valuations of the bidders. The first bidder is what might be called a single-minded bidder: this bidder is only happy if they get both A and B. They don't care about having just one of them, but if you give them both, they'd be willing to pay up to three. Bidder number two is different. They don't want both items; they only want one, and they have value two for either one. If you give them both, they'll throw the second one away, but they're happy to pay up to two for either of the two items. And I claim that already in this market there's no Walrasian equilibrium. You can check that by case analysis; let me just do the most interesting case.
Let's argue that in any allocation where the first bidder gets both items, there's no way to set prices so that it's a Walrasian equilibrium. And this is really the case we care about, because we can only make one of the two bidders happy — we can't make them both happy — and the first one has the higher value, so we'd like to see them get both items. But there's no way to do that at a Walrasian equilibrium. Why? Remember, at equilibrium every bidder should be as happy as possible given the prices. We're considering the allocation where bidder two gets nothing, so bidder two should be happiest, given the prices, getting nothing. That implies the price of each of the two items has to be at least two, because bidder two would be happy to have either item at a price less than two. But now, if each item costs at least two, bidder number one is paying at least four for a bundle for which they have value only three, and bidder one would actually be happier at those prices getting nothing. So there's no way to make them both happy; and the other cases are similar. This shows there's no Walrasian equilibrium in this particular market.

But again, there are lots of other markets where you do have Walrasian equilibria. In fact, there's been a cottage industry in economics and operations research trying to identify which markets have Walrasian equilibria and which ones don't. The most famous result in this literature is a sufficient condition for a market to have a Walrasian equilibrium, called the gross substitutes condition. It's not important for this talk to know exactly what that is; roughly, it says that the valuations are like matroid rank functions. That's a sufficient condition, and it covers various interesting cases of guaranteed existence.
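As a sanity check, the case analysis above can also be done mechanically. Here's a minimal Python sketch (code and names are my own, not from the talk) that enumerates every allocation of the two items and grid-searches candidate item prices, confirming that no (allocation, prices) pair satisfies the Walrasian conditions:

```python
from itertools import combinations, product

items = ("A", "B")
bundles = [frozenset(c) for r in range(3) for c in combinations(items, r)]

def v1(S):  # single-minded: value 3 for getting both items, else 0
    return 3 if S == {"A", "B"} else 0

def v2(S):  # value 2 for any nonempty bundle
    return 2 if S else 0

def cost(S, p):
    return sum(p[j] for j in S)

def demands(v, S, p):
    """True if S is a (weakly) utility-maximizing bundle at prices p."""
    u = v(S) - cost(S, p)
    return all(u >= v(T) - cost(T, p) for T in bundles)

found = False
grid = [k / 4 for k in range(17)]           # candidate prices 0, 0.25, ..., 4
for pA, pB in product(grid, repeat=2):
    p = {"A": pA, "B": pB}
    # each item goes to bidder 1, bidder 2, or stays unsold (owner 0)
    for owner in product((1, 2, 0), repeat=2):
        S1 = frozenset(j for j, o in zip(items, owner) if o == 1)
        S2 = frozenset(j for j, o in zip(items, owner) if o == 2)
        unsold = frozenset(j for j, o in zip(items, owner) if o == 0)
        if any(p[j] > 0 for j in unsold):
            continue                        # unsold items must be priced 0
        if demands(v1, S1, p) and demands(v2, S2, p):
            found = True

print("equilibrium found on price grid:", found)  # False
```

The grid obviously doesn't prove non-existence by itself, but since the case analysis only ever needs prices that are multiples of 1/4 in [0, 4], it corroborates it.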
There are partial converses saying that, basically, if you look at rich enough valuation classes, any violation of gross substitutes implies the existence of a market with no Walrasian equilibria. And in general lots of people, including very recently, have been trying to map out the frontier between when you have guaranteed existence and when you don't. In all of these works, whenever they rule out existence, they do it in exactly the same way we did on the last slide: you give an explicit example and do a case analysis showing that no Walrasian equilibrium exists in that market. What this paper is about is a totally different way to prove non-existence, based on reductions in complexity theory rather than on explicit examples. So here is the main result of this first part. There are some undefined terms, which I'll define on the next slide, but I want to give you the gist. We're going to fix some class of valuations — think, for example, of the class of monotone submodular valuations, or subadditive valuations, or whatever. Just fix some class V of valuation functions that bidders are allowed to have. What we're going to see is that if you have a complexity separation between two optimization problems concerning this valuation class — the utility maximization problem and the welfare maximization problem, which I'll define formally on the next slide — meaning one is strictly harder than the other, then that implies there exist markets, with bidder valuations drawn from this class V, without any Walrasian equilibria. So complexity separations imply non-existence of Walrasian equilibria.
And again, what I think is so surprising here is that the question we're asking does not seem computational; we're just asking about the existence of equilibria. Contrast that with, say, the famous results about the PPAD-hardness of computing Nash equilibria. There, the question being asked is computational — how hard is it to compute a Nash equilibrium? — so of course complexity theory gives you the answer. What we're saying here is that even for seemingly non-computational economic questions, complexity theory has a lot to offer. So this is the main result. Let me define these concepts formally and then give you a sketch of the proof.

All right, so maybe you can already guess what these problems are. Again, fix this class of valuations V, and think of the valuations as succinctly described, so having polynomial description length. The utility maximization problem concerns just one agent: I tell you one agent's valuation, and we posit item prices. So the input is a valuation and item prices, and the optimization problem asks which bundle this bidder likes most at these prices: of the 2^m bundles, find the one that maximizes their value minus their price. I should have said at the outset that for both of these optimization problems, the complexity is going to depend on the class V. If we allow only very simple valuations, both might be polynomial-time solvable; if we allow very complex valuations, both might be NP-hard. But in any case, for any valuation class, that's the utility maximization problem: given a valuation and prices, compute the utility-maximizing bundle. In welfare maximization there's not one agent; you have n agents. I give you n valuations, and the question is to compute the allocation — the partition of the items amongst the bidders — that maximizes the social welfare.
That is, it maximizes the sum over the agents of their values for what they receive. Because welfare maximization has multiple agents and utility maximization has only one, the welfare maximization problem generally can only be harder than utility maximization. So there are really two cases: either these two optimization problems have exactly the same complexity for a given valuation class, or welfare maximization is strictly harder than utility maximization. So what the previous theorem really says is: whenever welfare maximization is strictly harder than utility maximization, that implies non-existence of Walrasian equilibria. And when I say strictly harder, I mean it does not reduce, in the sense of polynomial-time Turing reductions, to the other problem. If welfare maximization does not polynomially reduce to utility maximization, then we have non-existence. Yes — polynomial description length, plus you can answer value queries in polynomial time. You can have other models, but let's fix that model for the talk.

I know that's a little abstract, so let's look at some examples, and let's actually prove something we already know through this complexity-theoretic framework. Suppose we take the class V to be all possible single-minded bidders. Remember, this means there's some subset of items that you want: you have value for bundles containing that subset, and value zero for anything that doesn't include it. I claim that for this class of valuations, assuming P ≠ NP, it is very easy to see that we do in fact have a complexity separation: welfare maximization is strictly harder than utility maximization. If you have one single-minded bidder, who either wants nothing or wants a single bundle, it's very easy to check what their favorite bundle is — there are two possibilities, and you check them both. So utility maximization is trivially polynomial-time solvable.
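To make the two problems concrete, here is a brute-force reference implementation of both, in the explicit set-function model (exponential time, of course — this is just my own illustration of the definitions, not the paper's machinery):

```python
from itertools import combinations, product

def bundles(items):
    """All 2^m subsets of the items."""
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def utility_max(v, prices, items):
    """UTILITY MAXIMIZATION: one bidder's favorite bundle at given prices."""
    return max(bundles(items),
               key=lambda S: v(S) - sum(prices[j] for j in S))

def welfare_max(valuations, items):
    """WELFARE MAXIMIZATION: the partition of the items amongst the
    bidders that maximizes the sum of the bidders' values."""
    n = len(valuations)
    best, best_w = None, float("-inf")
    for owner in product(range(n), repeat=len(items)):
        alloc = [frozenset(j for j, o in zip(items, owner) if o == i)
                 for i in range(n)]
        w = sum(v(S) for v, S in zip(valuations, alloc))
        if w > best_w:
            best, best_w = alloc, w
    return best, best_w

# the two-bidder market from earlier in the talk
v1 = lambda S: 3 if S == {"A", "B"} else 0   # single-minded
v2 = lambda S: 2 if S else 0                 # value 2 for any nonempty bundle
alloc, w = welfare_max([v1, v2], ("A", "B"))  # bidder 1 gets both, welfare 3
```

At prices (2, 2), `utility_max(v1, ...)` returns the empty bundle, which is exactly the tension exploited in the non-existence example.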
But if I give you a bunch of single-minded bidders and ask you to make as many of them happy as possible, that's just a set packing problem; it includes, for example, independent set as a special case. So welfare maximization is very obviously NP-hard. Assuming P ≠ NP, welfare maximization does not reduce to utility maximization, so the theorem applies: there will be markets with single-minded bidders without Walrasian equilibria. There are lots of other examples. One simple one is budgeted additive valuations: here bidders have additive valuations, except there's a cutoff, some maximum value you might have. Anyone with an undergraduate algorithms background can quickly see that utility maximization is pseudo-polynomial-time solvable — it's like the knapsack problem — so if all of the valuations are polynomially bounded, it's polynomial-time solvable. Whereas welfare maximization is very much like the bin packing problem: in particular, it's strongly NP-hard, and remains NP-hard even when the valuations are polynomially bounded. So there again — and this one I don't think had been observed before — for budgeted additive valuations, assuming P ≠ NP, there exist markets without Walrasian equilibria. So those are a couple of examples. The high-level point is just that you can look at different valuation classes, and if you're trained as a theoretical computer scientist, you can very quickly figure out the complexity of these two problems and whether they have the same complexity or not. So at least for people with our training, this is a very easy theorem to apply. Excellent point, excellent point. Because I'm using complexity theory, I'm getting conditional results.
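For budgeted additive bidders, the pseudo-polynomial utility maximization mentioned above really is a knapsack-style dynamic program. A minimal sketch (my own code, assuming integer item values; the bidder's value for a bundle is the sum of item values, capped):

```python
def budgeted_utility_max(values, cap, prices):
    """Utility maximization for one budgeted-additive bidder:
    maximize min(sum of chosen item values, cap) - sum of chosen prices.
    Pseudo-polynomial DP over achievable additive value, like 0/1 knapsack."""
    total = sum(values)
    INF = float("inf")
    # min_cost[w] = cheapest total price of a bundle with additive value w
    min_cost = [INF] * (total + 1)
    min_cost[0] = 0.0
    for v, p in zip(values, prices):
        for w in range(total, v - 1, -1):   # downward: each item used once
            min_cost[w] = min(min_cost[w], min_cost[w - v] + p)
    # best utility over all achievable (value, cheapest-price) pairs
    return max(min(w, cap) - c for w, c in enumerate(min_cost) if c < INF)

# e.g. values [2, 3, 4], cap 5, prices [1, 1, 3]:
# the bundle {2, 3} hits the cap at price 2, utility 5 - 2 = 3
```

The running time is O(m · total value), so it's polynomial whenever the valuations are polynomially bounded — exactly the regime contrasted with the strong NP-hardness of welfare maximization.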
So P ≠ NP implies that there are markets without Walrasian equilibria. It's then natural to take that as a very strong hint that you can write down an explicit example. An analogy I like here: imagine an NP-hard problem and a tractable linear programming relaxation of it. If P ≠ NP, then you know there's an integrality gap — you know there are instances where the only optimal solutions are fractional. But then sometimes you want to actually exhibit an explicit instance where the linear program unconditionally has a gap. So think of this as a quick-and-dirty way to get very strong evidence that there should be an explicit example. Yeah, excellent point though. [Question about exact versus approximate versions of the problems.] Yes — at least the proof I'll show you on the next slide is certainly about the exact versions. I think it's an open question to make this connection more robust with respect to approximation. I'll be talking just about the exact versions of the problems, and you'll see when we do the proof that it goes through the ellipsoid method, so you'd have to keep track of how the approximation gets carried through the ellipsoid and so on. I'm not saying that might not be doable, but we didn't do it; it's an open question.

Okay, so those are a couple of examples of how you would apply the theorem. Why is the theorem true? What did the theorem say? It said that welfare maximization should be as easy as utility maximization, or else you have non-existence. So let's prove the contrapositive. Consider a class of valuations.
Assume that Walrasian equilibria always exist, and let's prove that welfare maximization in fact reduces to utility maximization. This follows from two reasonably well-known facts. Neither fact is trivial, but both are pretty easy to prove if you're solid on linear programming theory. Fact number one, observed by Nisan and Segal, is that the relaxed problem of fractional welfare maximization — meaning bidders can get fractions of bundles: maybe I get 20% of the bundle with items three and five, 30% of the bundle with items five, seven, and nine, and so on — always reduces, for any valuation class, to utility maximization. Why is this true? You can write down a linear program for the fractional welfare maximization problem. It has an exponential number of variables, one per bundle, but only a polynomial number of constraints: one constraint per item and one per bidder. That means if you pass to the dual linear program, you get a linear program with a polynomial number of variables and an exponential number of constraints. Such linear programs can be solved via the ellipsoid method if you have a polynomial-time separation oracle: something that takes an allegedly feasible point and, in polynomial time, either verifies that it is feasible or exhibits a violated inequality. And it turns out that for this dual linear program, the separation oracle is exactly utility maximization. So you apply the ellipsoid method, invoking the utility maximization oracle a polynomial number of times, and you compute an optimal fractional allocation. That's fact one. But of course we didn't care about fractional welfare maximization; we cared about integral welfare maximization.
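In symbols, the linear program just sketched is the standard "configuration LP" (notation mine), with one variable $x_{i,S}$ per bidder–bundle pair:

```latex
% Primal: fractional welfare maximization
\max \; \sum_{i=1}^{n} \sum_{S \subseteq [m]} v_i(S)\, x_{i,S}
\quad \text{s.t.} \quad
\sum_{S} x_{i,S} \le 1 \;\; \forall i, \qquad
\sum_{i} \sum_{S \ni j} x_{i,S} \le 1 \;\; \forall j, \qquad
x \ge 0.

% Dual: one variable u_i per bidder constraint, one price p_j per item constraint
\min \; \sum_{i=1}^{n} u_i + \sum_{j=1}^{m} p_j
\quad \text{s.t.} \quad
u_i \ge v_i(S) - \sum_{j \in S} p_j \;\; \forall i,\; S \subseteq [m], \qquad
u,\, p \ge 0.
```

Checking feasibility of a candidate dual point $(u, p)$ means checking, for each bidder $i$, whether $u_i \ge \max_{S} \bigl[ v_i(S) - \sum_{j \in S} p_j \bigr]$ — and computing that maximum is exactly one utility-maximization query, which is why utility maximization serves as the separation oracle.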
Fact two, due to Bikhchandani and Mamer: market by market, instance by instance, existence of a Walrasian equilibrium is equivalent to this linear programming relaxation being exact — equivalent to there being an optimal fractional solution that is actually integral. Why is this true? You can think of any allocation as a zero-one solution to the linear program, and it turns out you can interpret item prices as a feasible dual solution. Then the Walrasian equilibrium conditions correspond exactly to the complementary slackness conditions for that primal-dual pair, and complementary slackness conditions characterize optimal solutions. If you combine these two facts, we're done: if you have guaranteed existence, then the fractional and integral optima coincide, and completely generically, welfare maximization reduces to utility maximization. In general I won't have time to discuss more than one teaser result from each paper; there are more results in the paper which I won't talk about. But let me pause here for questions before we move on to the second part of the talk.

Exactly. Exactly right. Great. So let's move on to lower bounds on the price of anarchy from communication complexity lower bounds. This paper looks at various different models, but for simplicity, and for consistency with the first part, let me focus on applications to combinatorial auctions. In general, the theme of this paper is the question: can equilibria do things that efficient algorithms or efficient communication protocols cannot? In some ways we know the answer is yes: if you believe that P is different from PPAD, then you believe that Nash equilibria are in some sense more powerful than polynomial-time algorithms.
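Concretely, writing $x_{i,S}$ for the fractional assignment variables and $u_i, p_j$ for the dual variables of bidder $i$'s and item $j$'s constraints in the configuration LP, the complementary slackness conditions read economically (my rendering):

```latex
x_{i,S} > 0 \;\Longrightarrow\; u_i = v_i(S) - \sum_{j \in S} p_j
\qquad \text{(each bidder receives a utility-maximizing bundle),}

p_j > 0 \;\Longrightarrow\; \sum_{i} \sum_{S \ni j} x_{i,S} = 1
\qquad \text{(every positively priced item is fully sold).}
```

For an integral primal solution these are precisely the two Walrasian conditions: bidders demand their bundles, and the market clears with unsold items priced at zero.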
But what we're going to see in this part is that for optimization questions — asking how well you can approximate some optimal solution — equilibria can't do anything that the more familiar objects, efficient algorithms and efficient protocols, cannot. In other words, the lower bounds that we prove for efficient algorithms and efficient protocols carry over to equilibria as well: hardness-of-approximation results get translated into lower bounds on the price of anarchy. The results are quite generic, and two assumptions really drive them. First, we're looking at a class of games and a notion of equilibrium with guaranteed existence — think, for example, of mixed-strategy Nash equilibria of finite games, which are guaranteed to exist. Second, equilibria should be efficiently verifiable: if I hand you an alleged equilibrium as input, you should be able to easily check the best-response conditions — to check that it is in fact an equilibrium.

All right, so as I said, I'm going to focus on combinatorial auctions. In many ways this is exactly the same model as in the first part: we still have n agents or bidders, we still have m items, allocations are still partitions of the items amongst the players, and we still care about maximizing welfare. All else being equal, in a utopian world, we wish we could split the items to make everybody collectively as happy as possible — or at least be able to approximate this. So let me tell you the underlying optimization problem, and then we'll talk about auctions built on top of it and ask to what extent those auctions approximately optimize it. For variety's sake, let's switch from succinct descriptions to a communication model; you could do it either way.
Let's think about number-in-hand communication models. We have n bidders, each with a valuation — again, a list of 2^m values, one for each possible subset. The agents are going to pass messages back and forth, and their aim, in the communication complexity version of the problem, is to settle on an approximately optimal allocation without communicating too much. For us, an efficient communication protocol means that the number of bits passed between agents is polynomial in the number of agents n and in the number of items m. Notice the second part is really quite demanding, because we no longer have succinct descriptions of the valuations: as a bidder, I have 2^m numbers I'd like to tell everybody, but I'm only allowed to communicate polynomially many bits in n and m. So that's the question: do there or do there not exist communication protocols, polynomial in n and m, that do a pretty good job of approximating the objective? The answer will of course depend on the class of valuations you're looking at.

Now, suppose you wanted to design an auction. We don't actually know the bidders' valuations, but we want to solicit bids, make an allocation, and assign prices. One observation: if you've heard of, say, the VCG mechanism — a welfare-maximizing incentive-compatible mechanism — or really any other so-called direct-revelation mechanism, where you just ask bidders for all of their private information, that's going to be totally hopeless in these combinatorial auctions unless m is very small. Because, remember, a bidder has these 2^m private parameters in their mind.
These 2^m valuations, one for each subset they could get — and there's no way you're going to ask everybody for 2^m numbers in any practical auction format if m is at least, say, 10. So what people actually do when they deploy multi-item auctions in the real world is use a non-truthful mechanism, and then hope that at the equilibria of the non-truthful mechanism things work out well. There are lots of different simple non-truthful mechanisms you could look at; for concreteness, let's fix one: simultaneous first-price auctions. The idea is, we're selling m things. If we were only selling one thing, we'd have a pretty good handle on the problem — there are lots of single-item auctions in the world: second-price auctions, first-price auctions, whatever. Let's fix first-price auctions for concreteness. To sell the m items, we just sell them in parallel using separate first-price auctions. As a bidder, you submit one bid per item; each item is awarded to its highest bidder at the price that bidder said they'd be willing to pay. Notice this is not a direct-revelation mechanism, and it's much easier for the bidders: I'm not asking you to submit 2^m numbers, as would be required for your full valuation, only m numbers. Far fewer bidding parameters — that's the sense in which this is a simple mechanism, a simple auction. It's not going to be a truthful auction; that doesn't even make sense, because we haven't given bidders enough of a vocabulary to even write down their valuation. So instead we look at its equilibria: suppose bidders bid strategically to optimize their utility, value minus price — what are the equilibria going to look like?
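The allocation rule just described is only a few lines of code. Here's a minimal sketch (my own; tie-breaking by lowest bidder index is an arbitrary choice I made, not part of the talk):

```python
def simultaneous_first_price(bids):
    """bids[i][j] = bidder i's bid on item j.
    Each item is sold in its own first-price auction: it goes to its
    highest bidder, who pays their own bid; ties go to the lowest index."""
    n, m = len(bids), len(bids[0])
    alloc = {i: [] for i in range(n)}
    prices = {}
    for j in range(m):
        winner = max(range(n), key=lambda i: (bids[i][j], -i))
        alloc[winner].append(j)
        prices[j] = bids[winner][j]
    return alloc, prices

# two bidders, two items: bidder 0 wins item 0 at 2, bidder 1 wins item 1 at 3
alloc, prices = simultaneous_first_price([[2, 0], [1, 3]])
```

The point of the sketch is just how little expressiveness the bidders have: the entire strategy of bidder i is the m-vector `bids[i]`, versus the 2^m numbers a direct-revelation mechanism would demand.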
Hopefully your intuition suggests it would be too much to hope for the equilibria to be optimal. So what's really interesting is to ask: are they at least close to optimal? That is, is the price of anarchy close to one? The price of anarchy of a game is the ratio between the objective function value of the worst equilibrium and that of the optimal outcome. For us, it's the ratio between the minimum social welfare of an equilibrium and the maximum possible social welfare of any allocation. The closer this is to one, the more we can say the auction performs well at equilibrium. These are the kinds of questions we want to understand. The main theorem from this part is basically a black-box translation: it takes as input a hardness result from communication complexity — specifically, for non-deterministic communication protocols — and gives as output an equally good lower bound on the price of anarchy of any simple auction, including, for example, simultaneous first-price auctions. So it reduces lower bounds for equilibria to lower bounds for communication protocols. I'm hiding the precise statement because it's a bit of a mouthful, but you should expect a hardness assumption concerning non-deterministic communication protocols, and a conclusion about the price of anarchy of auctions like simultaneous first-price auctions. Here's the precise statement. Indeed, we assume hardness against non-deterministic protocols: fix a class of valuations, and suppose that for this class you cannot get an approximation factor better than alpha with any sub-exponential-cost communication protocol — sub-exponential in m, the number of items. Suppose that's true.
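Written out, with the talk's convention that the ratio is at most one (closer to one is better):

```latex
\mathrm{PoA} \;=\; \frac{\displaystyle \min_{\sigma \in \mathrm{Eq}} W(\sigma)}
                        {\displaystyle \max_{\text{allocations } A} W(A)}
\;\le\; 1,
```

where $W$ denotes social welfare and, for mixed or Bayes-Nash equilibria $\sigma$, $W(\sigma)$ is the expected welfare at $\sigma$. (Some authors invert the ratio so that the price of anarchy is at least one; either convention carries the same information.)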
Okay, so you can't decide between high welfare and low welfare with a factor-alpha gap. Then, as a consequence, the price of anarchy of any simple mechanism, when bidders have valuations in V, is also no better than the same hardness-of-approximation threshold alpha. Your hardness of approximation for non-deterministic protocols carries over immediately to the equilibria of simple mechanisms like simultaneous first-price auctions. Now, I owe you a definition: what do I mean by "any simple mechanism"? I showed you an example, but for this theorem to make sense I need to actually define it. And for this theorem the definition can be very permissive. Basically, you just have to rule out direct-revelation mechanisms — you have to do something at least a little more communication-efficient than a direct-revelation mechanism like VCG. Precisely, you assume that in whatever game your auction sets up, each bidder has a sub-doubly-exponential number of strategies — again, sub-doubly-exponential in m, the number of items. If you think about it, in something like VCG you report 2^m different numbers, so you have something like 2^(2^m) different strategies. So this is saying you have at least a little bit of compression in your action space compared to something like the VCG mechanism. Certainly simultaneous first-price auctions are an example: there you have only m bidding parameters, so you have basically an exponential number of actions — like a constant raised to m, the number of items. Anything sub-doubly-exponential is not going to be able to overcome such a communication hardness result. So let me show you a couple of applications, and then we'll talk a little bit about the proof.
All right, so that theorem would not be interesting unless we had examples where the hypothesis was satisfied. So your question should be, do we actually have interesting lower bounds against non-deterministic protocols for combinatorial auctions? And yes we do, okay? And actually we have some from the early days of algorithmic game theory. So Noam Nisan a long time ago proved that if you look at, say, general monotone valuations, so no restrictions on the valuations, then it's totally hopeless for communication protocols, including non-deterministic ones: you can't get any constant factor approximation as the number of items grows to infinity if valuations are unrestricted, okay? So if you take Noam's theorem and you chain it together with the theorem I showed you on the previous slide, you conclude immediately that no simple mechanism, simultaneous first price auctions or otherwise, can achieve a constant factor approximation of the maximum possible social welfare, okay? So the price of anarchy is also not constant for any simple mechanism for general valuations. That's an immediate consequence of Noam's theorem plus that translation theorem. You can play the same game with restricted families of valuations. So one example would be, say, sub-additive valuations. Sub-additive means what it sounds like. It means that the value of a bidder for the union of two bundles is at most the sum of their values for the bundles individually. So if bidders are sub-additive, it turns out that the welfare maximization problem gets easier, okay? So it's known that you can get a two-approximation with sub-additive valuations with polynomial communication. And it's also known that you can't do better than a two-approximation with polynomial or even sub-exponential communication with sub-additive bidders. That's a result of Dobzinski, Nisan, and Schapira.
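The sub-additivity property just defined is easy to check by brute force on small examples. Here is a sketch (my own toy illustration, with valuations stored as dicts over bundles, not anything from the talk):

```python
from itertools import combinations

def powerset(items):
    """All subsets of `items`, as frozensets."""
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_subadditive(v, items):
    """Brute-force check that v(S | T) <= v(S) + v(T) for all bundles S, T."""
    bundles = powerset(items)
    return all(v[S | T] <= v[S] + v[T] for S in bundles for T in bundles)

items = ["a", "b"]
# Unit-demand-style valuation: value 1 for any non-empty bundle (sub-additive).
v1 = {S: min(len(S), 1) for S in powerset(items)}
# Strong complementarities: value |S|^2 (NOT sub-additive: v({a,b}) = 4 > 1 + 1).
v2 = {S: len(S) ** 2 for S in powerset(items)}
print(is_subadditive(v1, items))  # True
print(is_subadditive(v2, items))  # False
```

The brute force is exponential in the number of items, which is exactly why these valuation classes are interesting from a communication complexity standpoint.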
So you can again chain this theorem together with the black box translation theorem from a couple of slides ago and deduce that even when bidders have sub-additive valuations, no simple mechanism will have a worst case price of anarchy guarantee better than a factor of two, okay? And what's particularly interesting here is that the simple mechanism I've been using as a running example, simultaneous first-price auctions, is known to have a price of anarchy of in fact at most two. That's a very nice result by Feldman, Fu, Gravin, and Lucier, okay? So this specific simple mechanism matches: it's an upper bound matching the lower bound that holds for any simple mechanism. So this is a precise sense in which, at least for sub-additive bidders, simultaneous first-price auctions are actually an optimal simple mechanism, okay? And that's the kind of statement I have no idea how you'd ever try to prove using, say, traditional economic techniques. I have no idea how you would get that kind of, in my opinion, very interesting conclusion without relying fundamentally on complexity theory. Okay, so those are a couple of applications, yeah? So, is there a converse of your translation theorem, where you can take a sub-exponential protocol and convert it into a simple auction with the same price of anarchy? That's a good question. It's open. I would speculate that if you were willing to tolerate extremely strange mechanisms, with weird action spaces corresponding to communication transcripts rather than to anything natural, the converse might be true. Still, someone should prove that. And then I think what's really interesting is, can you actually have at least semi-natural auctions which match what you get from the communication complexity? And we'll see an open problem on the very last slide about this. Yeah, but basically the converse is open. It's a great question. Yeah?
Yeah, so the question was, are approximate equilibria computable in these games? Not necessarily, but remember, we're proving lower bounds. So my lower bounds are only stronger for using an equilibrium concept that is not necessarily polynomial time computable. So I'm saying, even if you magically gave bidders the ability to find a Nash equilibrium, as opposed to being stuck at some weaker concept, like correlated or coarse correlated equilibria, even if you let them go all the way to a Nash equilibrium, you still wouldn't be beating the lower bounds. So for upper bounds, this is an important question, like, is the equilibrium concept tractable? For lower bounds, it somehow doesn't come up. Yeah? Is there something known about the price of stability as well? Let's see, it's a good question. Certainly not generically like this. And in general, the statement is false for the price of stability. So there are examples where the lower bound you get from the hardness of the optimization problem is actually higher than an upper bound you can prove for best-case equilibria. I don't know of any examples in auctions, but there are examples in congestion games. So you can also use NP-hardness of finding an optimal outcome of a congestion game to prove lower bounds on the price of anarchy. And the price of stability bounds we know for congestion games are actually better than those hardness results. And basically, why would there be a difference? Remember I said that one of the key things driving these results is efficient verifiability: you can recognize the equilibria that you care about. If it's just an arbitrary Nash equilibrium, you just have to check the best response conditions. But if I promised you this was the best equilibrium, how would you ever know? Right, I could be trying to trick you.
So that's the part where the proof breaks down. You do not have efficient verifiability for the best equilibrium. If you looked at any equilibrium selection notion which was efficiently recognizable, then you'd be fine. The lower bound would hold also for those. Yeah, other questions? All right, let me say a little bit about the proof. Here's the statement, remember. We're trying to reduce lower bounds for equilibria to lower bounds for communication protocols. In other words, in contrapositive form, we want to show that if we had a good price of anarchy bound, then we could extract a good non-deterministic protocol. That's what the proof is going to do. So suppose we have a sort of too-good-to-be-true price of anarchy bound. So suppose the price of anarchy is rho, where rho is better than the underlying hardness threshold alpha. Let's exhibit a non-deterministic protocol for deciding the welfare maximization problem. So we're promised an input where the welfare is either high, at least W star, or low, at most W star over alpha. We need to exhibit a non-deterministic protocol. All right, so what is a non-deterministic protocol? One way to think about it is that you have an all-powerful prover. It gets to look at everybody's inputs and then write something on a publicly viewable blackboard. And then all of the agents look at the blackboard and use polynomial communication to decide whether to accept or reject. So the prover gets to look at everybody's valuations and write something on the blackboard. What would be helpful? What would we want the prover to write down? Well, the key idea is that it's going to write down an equilibrium of the game. I'm lying a little bit. I'm skipping a step in the interest of time, which is that what I just said doesn't work, because in general the description length of a Nash equilibrium in the kinds of games we're looking at could be exponential.
Remember, in simultaneous first price auctions, each bidder had an exponential number of strategies. So even just writing down a mixed strategy would take an exponential amount of communication, and that's not okay. But there's an extra technical trick which allows you to sparsify strategies. This is exactly why the epsilon comes in in the theorem. So it's actually not a lower bound for exact Nash equilibria. It's a lower bound for epsilon-Nash equilibria, where epsilon can be super small. I don't know whether the theorem is true with epsilon equal to zero or not. That's an open question. So this is where the epsilon comes in. If you use the sampling trick of Lipton, Markakis, and Mehta, then in fact you're guaranteed the existence of an approximate equilibrium with polynomial description length, and that is what the prover will write on the blackboard to help everybody figure out whether the welfare is high or whether the welfare is low. Why is it helpful to know an approximate equilibrium? What are the players going to do? Well, everybody stares at the equilibrium. Everybody can check: you've written down what everybody is doing, and I know my own valuation, so I can ask, am I only playing best responses in what the prover wrote on the board? And if not, if it's not an equilibrium, we'll reject. So everybody verifies privately that it's an equilibrium. Everybody can also compute their own expected welfare in this equilibrium, because again I know my own valuation. Then we can just communicate all of our individual expected welfares and sum up the result to get the overall expected welfare. Now we look at, is it high or is it low? If the overall welfare of this equilibrium itself is already at least W star over alpha, well, the optimal solution can only be better than that. So we're certainly not in case two; we've got to be in case one. So if the equilibrium has decently high welfare, we're done, we know we're in case one.
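The sparsification step mentioned above can be sketched very simply: sample a small number of pure strategies i.i.d. from the mixed strategy and use the empirical distribution. (This is only a sketch of the sampling idea, with illustrative parameters; the actual guarantee about epsilon-equilibria requires the full Lipton–Markakis–Mehta argument.)

```python
import random

def sparsify(mixed_strategy, k, rng=random.Random(0)):
    """Replace a mixed strategy by the empirical distribution of k i.i.d.
    samples from it, so the support (hence description length) is at most k."""
    strategies = list(mixed_strategy)
    weights = [mixed_strategy[s] for s in strategies]
    samples = rng.choices(strategies, weights=weights, k=k)
    return {s: samples.count(s) / k for s in set(samples)}

# A mixed strategy spread over 1000 pure strategies, sparsified to support <= 10.
mu = {f"s{i}": 1 / 1000 for i in range(1000)}
nu = sparsify(mu, 10)
print(len(nu) <= 10, abs(sum(nu.values()) - 1) < 1e-9)  # True True
```

The point is exactly the one in the talk: the sparsified strategy has a short description even when the original support is exponential, at the cost of only approximately preserving the equilibrium property.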
But that's sort of trivial, right? It's very easy to certify lower bounds on a maximization problem: I just show you a feasible solution. How would I certify an upper bound? How would I convince you that every single allocation is bad? Well, in the presence of a good price of anarchy bound, I can convince you that even the optimal allocation is bad by showing you an equilibrium which is bad. So if in this equilibrium the welfare is in fact at most W star over alpha, and if by assumption our price of anarchy bound is too good to be true, rho better than alpha, then even the optimal solution can only be a rho factor better, and that's going to be strictly less than W star. So we can't be in case one, so we know we have to be in case two. And that's our non-deterministic protocol. The prover writes down a sparsified approximate equilibrium, and whether the welfare is high or the welfare is low, we know which case we're in. So a too-good-to-be-true price of anarchy bound would give a too-good-to-be-true, polynomial cost non-deterministic protocol. Again, there are a bunch of other results in the paper I won't have time for, but let me just pause here and see if there are further questions on this middle part. Then, part three of three: the borders of Border's theorem, okay? So Border's theorem is about single item auctions. Okay, so in many ways what we're going to talk about here is simpler than either of our first two settings, where we had multiple items. So now we have just one item. Okay, so we can even hope to have truthful auctions, for example, like the Vickrey auction, okay? Maybe a good concrete example to have in mind: you could run, say, a second price auction with a reserve price, where you ask everybody to bid whatever their value is. Again, in a single item auction, everybody only has one number that you don't know.
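The case analysis in the protocol above can be written out as a tiny decision rule (a sketch with made-up numbers; `eq_welfare` stands for the summed expected welfares the verifiers compute from the certified equilibrium):

```python
def decide_welfare_case(eq_welfare, W_star, alpha, rho):
    """Verifiers' decision rule, given the welfare of the certified
    (approximate) equilibrium.

    Promise: the optimal welfare OPT is either >= W_star (case one)
    or <= W_star / alpha (case two). The too-good-to-be-true price of
    anarchy bound says OPT <= rho * eq_welfare, with rho < alpha.
    """
    assert rho < alpha
    if eq_welfare >= W_star / alpha:
        # OPT >= eq_welfare >= W_star / alpha, so we cannot be in case two.
        return "case one: high welfare"
    # Otherwise OPT <= rho * eq_welfare < rho * W_star / alpha < W_star,
    # so we cannot be in case one.
    return "case two: low welfare"

print(decide_welfare_case(eq_welfare=8, W_star=10, alpha=2, rho=1.5))
print(decide_welfare_case(eq_welfare=3, W_star=10, alpha=2, rho=1.5))
```

Note how rho < alpha is used exactly once, in the second branch: without it, a low-welfare equilibrium would not rule out a high-welfare optimum.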
If nobody clears the reserve, you could award the item to nobody; otherwise you award it to the highest bidder, at a price equal to the larger of the reserve price and the second highest bid. Okay, so that's one example of a single item auction, which also happens to be a truthful single item auction, okay? And we're going to be interested in finding a revenue maximizing auction, okay? That's the problem that Border's theorem is concerned with. There are different ways you can formalize revenue maximization, but we're going to look at the classic economic approach, which is just average case analysis, okay? So we're going to assume that when we choose our single item auction, we don't know what bidders' valuations are. We don't know exactly what they're willing to pay, but we assume that we know distributions from which bidders' valuations are drawn, okay? So each bidder has their own distribution, and the valuations are independent. And by the revenue maximizing auction, I just mean the single item auction which has the highest expected revenue, where the expectation is over the random draws of bidders' valuations, okay? So just on average, over the assumed input distribution, find the auction with the highest expected revenue, okay? So that's the optimal auction problem, and I want to explore solving the optimal auction problem using optimization, okay? So what I'd like to do is just write down a description of all possible single item auctions, and then optimize over it, hopefully using linear programming, okay? So that's the question I'm posing now: given as input descriptions of the distributions F sub i, could we compute, using a linear program, the revenue maximizing auction for those distributions? Okay. So let me next convince you that if we were willing to tolerate super big linear programs, at least, then the answer would be yes.
Okay, we could solve this problem using linear programming. So I'm not going to write down the gory details, but I hope you'll find this very plausible. So let's think about a linear program with two sets of decision variables. The first set tracks who wins the item, and the second tracks the payments, okay? So for every possible bid profile — imagine everybody has a value which is between, you know, zero and a thousand, something like that — a bid profile just means the bid from each of the n bidders. And in this linear program, we have these X sub i variables, which say who the winner is, okay? So X sub i is going to be equal to one if bidder i is the winning bidder in this bid profile; otherwise it's going to be equal to zero. So for example, if you were running a Vickrey auction where the highest bidder wins, X sub i is going to be one if bidder i is the highest bidder, otherwise zero, okay? P sub i is just what bidder i pays when everybody bids this profile B. So again, in a second price auction, P sub i is going to be zero unless you're the winner, in which case it's going to be the second highest bid, okay? So those are our decision variables. We want to optimize over all choices of who wins and who pays what, okay? What are the constraints? Well, we have incentive constraints, okay? So we're going to look at truthful auctions, where bidders are motivated to bid their true valuation. If you think about it, you can express those as linear inequalities in the X's and the P's, right? Because my utility is just my value times my X, whether or not I win, minus my payment: V times X minus P, okay? So that's linear in X and P. So truthfulness just says my utility when I bid truthfully is at least as large as under any other bid, and those are linear inequalities, okay? Similarly, for individual rationality, I can say that if I bid truthfully, my utility should be non-negative.
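In symbols, a sketch of those two families of constraints (the notation is mine, following the variables just described): writing $x_i(b)$ and $p_i(b)$ for bidder $i$'s allocation and payment under bid profile $b = (b_i, b_{-i})$, truthfulness and individual rationality read:

```latex
% Truthfulness (incentive compatibility): bidding the true value v_i is
% at least as good as any deviation b_i', for every profile b_{-i} of others' bids.
v_i \, x_i(v_i, b_{-i}) - p_i(v_i, b_{-i}) \;\ge\; v_i \, x_i(b_i', b_{-i}) - p_i(b_i', b_{-i})
\qquad \text{for all } i,\ b_i',\ b_{-i},

% Individual rationality: truthful bidding yields non-negative utility.
v_i \, x_i(v_i, b_{-i}) - p_i(v_i, b_{-i}) \;\ge\; 0
\qquad \text{for all } i,\ b_{-i}.
```

Both families are linear in the $x$'s and $p$'s, since each $v_i$ is a constant of the instance — which is exactly why a (huge) linear program suffices.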
Finally, the linear program should have constraints saying you can only sell the item once. Remember, it's a single item auction. So for every bid profile, the sum of the X's should be at most one, okay? So this is a linear program where, if we solved it — if we maximized the sum of the P sub i's, weighted by the probabilities in the distributions — we really would get the optimal, revenue maximizing auction. So this literally writes a constraint for every possible bid profile? It does, yeah, and that's exactly where I'm going. So why don't we declare victory? Why aren't we done? This linear program is huge, okay? I have a pair of decision variables for every possible bid profile, a bid profile being a bid from each of the N agents, okay? So if you're bidding something between zero and a thousand, that's a thousand raised to the N different bid profiles, so that many pairs of variables you're working with, okay? So it's not very useful, computationally or conceptually, okay? But at least in principle, you could write down a linear program to identify the revenue maximizing auction, all right? Let's get a little more interesting. Let's ask, could we reformulate it with a much smaller linear program, a polynomial size linear program, okay? How might we do that? Well, the key idea here is to use what are called interim variables. And for motivation, imagine you're a bidder, and you're trying to decide whether you should bid truthfully or not. You don't really care about the details of exactly who bids what in every scenario, okay? That's not information you need to make this decision. The only information you need is, if I bid 17, with what likelihood will I win, on average over what the other bidders' valuations might be, okay? Maybe I win with probability 10% if I bid 17, maybe I win with probability 13% if I bid 23. I need to know that number.
And I need to know what I expect to pay if I bid 17, what I expect to pay if I bid 23, and so on, okay? So the idea is to have new decision variables that reflect the fact that that's all the information I need to decide whether I should bid truthfully or not. Okay, I should say I'm assuming risk-neutral bidders when I say this, okay? So the y's are just meant to be the probability with which bidder i wins when they submit a particular bid b sub i. And again, the expectation here is over the randomness in the other bidders' valuations. Okay, so that's the semantics of the y's: on average over the other bidders' valuations, how often do you win at a given bid? Similarly, the q's denote your expected payment for a given bid, where again the expectation is over the other bidders' valuations, okay? By design, those incentive constraints can be expressed purely in terms of the y's and q's. We did not need the x's and p's; those were total overkill for the incentive constraints. Oh, I should have said: notice there aren't that many y's and q's, okay? We just have one variable per agent per bid it might make. So before we had 1,000 raised to the n variables; now we have 1,000 times n. Okay, so it's much fewer decision variables, a polynomial, or really pseudo-polynomial, number, okay? So why aren't we done? Well, we're not done because we had that one extra constraint in our super big linear program saying that you can only allocate the item once, okay? If we were willing to allocate the item at most once on average over bidders' valuations, that would be fine. That would be very easy to express in terms of the y's. But we really need to say that with probability one we allocate the item at most once, okay? And it's not clear how to express that in terms of our interim variables, in terms of the y's, okay? And that is what Border's theorem is about, okay?
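The passage above says each y is an expectation over the other bidders' valuations. For small discrete priors, that projection from an ex-post allocation rule to interim win probabilities can be computed by brute-force enumeration — a sketch of the semantics only (my own toy example, not code from the talk):

```python
from itertools import product

def interim_allocations(x, dists):
    """Compute interim win probabilities y_i(b_i) = Pr[i wins | i bids b_i]
    from an ex-post allocation rule, by brute-force enumeration.

    x(profile) returns the winning bidder's index (or None for no sale);
    dists[i] is a dict {value: probability} for bidder i, drawn independently.
    """
    n = len(dists)
    y = [dict.fromkeys(dists[i], 0.0) for i in range(n)]
    for profile in product(*dists):
        prob = 1.0
        for i, b in enumerate(profile):
            prob *= dists[i][b]
        w = x(profile)
        if w is not None:
            # Condition on bidder w's own bid: divide out its prior probability.
            y[w][profile[w]] += prob / dists[w][profile[w]]
    return y

# Two bidders, values uniform on {1, 2}; highest bid wins, ties go to bidder 0.
dists = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]
win_rule = lambda p: 0 if p[0] >= p[1] else 1
print(interim_allocations(win_rule, dists))
# [{1: 0.5, 2: 1.0}, {1: 0.0, 2: 0.5}]
```

Note the blow-up the talk is describing: the loop runs over every bid profile (exponentially many), while the output y has only one number per bidder per bid.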
So when can you express this feasibility constraint purely in terms of the y's? All right, so I'm getting to the statement of Border's theorem now. And the best way to think about it is in terms of certificates, okay? So suppose someone came to you with a collection of y's, a collection of alleged interim allocation variables, okay? And suppose they were claiming that these really were induced by an auction, okay? Let's see how you could falsify that claim very easily, okay? So let's look at a certificate that a given collection of y's is not in fact a valid collection of interim allocation rules, okay? Oh, I should have said: this sounds very abstract, but you can think of very concrete versions of this question, right? So imagine there's some triple of random variables that you're looking at, okay? And suppose someone came and told you an alleged marginal distribution of the first pair of random variables, an alleged marginal distribution of the second pair of random variables, and an alleged marginal distribution of the first and third random variables. And the question is just, is this person lying or not? Does there in fact exist a joint distribution over all three random variables that projects to the marginals in the prescribed way? Okay, that's the nature of this question: given a bunch of marginals, are they consistent? Does there exist a joint distribution agreeing with all of those marginals? All right, so how would I convince you that in fact there is no joint distribution consistent with a collection of y's? Okay, here's how I could do it. So I'm going to designate some of each bidder's bids as special. Okay, so maybe for bidder number one, if it bids three, five, or seven, I'm going to call bidder number one special. Bidder number two, if it bids eight or ten, I'm going to call it special, okay, whatever.
For each bidder, I decide on some subset of its bids that are special bids for that bidder, okay? Fix such a choice, okay? Now let's think about two numbers, okay? First number: the probability that the winning bidder of the auction is a special bidder, okay, meaning it bid one of its special bids, okay? If you think about it, this is a simple linear expression in the y's. If you tell me the y's, I know what this probability is. I just sum over the n bidders; for each bidder, I sum its y's over its special bids, weighted by the probability that that really is its value under the given distribution, okay? So this is a linear function of the y's. Given the y's, I can compute this. On the other hand, forget about who wins. Let's just ask: what's the probability that one of the bidders is special, okay, that is, bids one of its special bids? This does not depend on the y's at all, okay? It depends only on the distributions, only on the F sub i's. Either some bidder draws one of its special bids as its value, or it doesn't. That's just some probability determined by the prior, like point one, okay? Now, if it's the case that, according to your alleged interim variables, the y's, the probability that the winning bidder is special is bigger than the probability that there exists a bidder who is special, then I know you're lying, okay? That cannot be the case, okay? Obviously, a necessary condition for having a special winning bidder is having some bidder who is special. So that's a way I could prove to you that a given collection of y's cannot possibly be induced by an auction. And Border's theorem says that this is a complete set of certificates. Border's theorem says that if, no matter how you designate subsets of bidders' bids as special, the inequality you would hope holds does hold, then in fact there do exist X's.
There exists an auction that is consistent with these alleged interim variables, the y's, okay? So that is Border's theorem, okay? It gives you a nice linear characterization of which interim rules, which y's, really are consistent with some auction, some X's, okay? I'm not going to prove this; you can actually deduce it from the max-flow/min-cut theorem in an exponentially sized graph, if you like, okay? So Border's theorem is just about single item auctions, yeah. Is there a connection between this and Birkhoff–von Neumann, or maybe its generalization by Budish et al. to bi-hierarchy sets of constraints? So the second thing you mentioned I'm not familiar with. It's definitely more complicated than Birkhoff–von Neumann. And Birkhoff–von Neumann I think of more as a statement that a certain fractional polytope has integral vertices, and that's not what I'm saying here. It is similar in the sense that it's giving an explicit linear description of what the polytope looks like, yeah. Okay, good. So it's only about single item auctions, but like I said, it's a famous result, Border's theorem. It's very useful. So people were motivated for many years to try to extend it beyond single item auctions. By resorting to approximation, there were some nice results in computer science that did extend Border's theorem well beyond single item auctions. But if you really wanted an exact Border's theorem, like in the single item case, there were almost no generalizations known. People really seemed stuck. And the final punchline of the talk is that complexity theory will tell us why people have been stuck generalizing Border's theorem. In particular, we're going to have a theorem which says that unless the polynomial hierarchy collapses, there are no significantly more general versions of Border's theorem in the exact case, okay?
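The certificate described two paragraphs back is easy to evaluate for small discrete priors. Here is a sketch of one such check (my own toy example with assumed inputs, not code from the talk): the left side is the linear-in-y probability that the winner bid a special bid, and the right side depends only on the priors.

```python
def border_violated(y, dists, special):
    """Check one of Border's inequalities: the probability that the WINNER
    bid a special bid can never exceed the probability that SOME bidder
    has a special bid.

    y[i][b]    : alleged interim win probability of bidder i at bid b
    dists[i]   : dict {value: probability} for bidder i (independent priors)
    special[i] : set of bidder i's designated 'special' bids
    Returns True if this designation certifies that no auction induces the y's.
    """
    n = len(dists)
    # Left side: Pr[winner is special] -- a linear function of the y's.
    win_special = sum(dists[i][b] * y[i][b] for i in range(n) for b in special[i])
    # Right side: Pr[some bidder is special] -- depends only on the priors.
    none_special = 1.0
    for i in range(n):
        none_special *= 1 - sum(dists[i][b] for b in special[i])
    return win_special > (1 - none_special) + 1e-12

dists = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]
# Valid interim rule (highest bid wins, ties to bidder 0): passes the check.
y_ok = [{1: 0.5, 2: 1.0}, {1: 0.0, 2: 0.5}]
# Bogus claim that every bidder always wins: caught by special bids {2}, {2}.
y_bad = [{1: 1.0, 2: 1.0}, {1: 1.0, 2: 1.0}]
spec = [{2}, {2}]
print(border_violated(y_ok, dists, spec), border_violated(y_bad, dists, spec))
# False True
```

Border's theorem is the converse direction: if no choice of special bids triggers this check, some auction really does induce the y's.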
So again, to make that precise, I need to say what I mean by having a Border's theorem, okay? What does it mean to have an analog of it? And your first thought might be, oh, well, Border's theorem provides this really nice linear description, so maybe it's just about having a linear description of the y's. Then you think for a minute and you're like, no, no, no, that's not strong enough, right? Because you can always write down the x's, okay, our first really big linear program. Its feasible region would be a polytope, and the y's are a projection of that polytope. Projections of polytopes are themselves polytopes, and every polytope has a linear description. So that doesn't work. So really, Border's theorem is about having a computationally useful linear description, okay? So how would you define that? Well, one natural definition would be that you can separate over the polytope, okay? That there's some collection of linear inequalities such that, if one of them is violated, you can compute such a violated inequality in polynomial time. We're actually going to make an even weaker assumption, which, because our results are negative, makes the results stronger. We're just going to assume that you have basically a co-NP oracle for membership, okay? So we just want to assume that if you have something which is not feasible, a collection of y's which is not feasible, there should exist a linear inequality in your characterization that you can write down and efficiently check that, yes, this is one of our valid inequalities, and yes, the alleged point fails to satisfy it, okay? So you should just be able to efficiently recognize points that are not in the polytope. We'll call that a Border-like theorem, okay? And then, what's the claim?
So the claim is that you do not have Border-like theorems in this sense for even very mild generalizations of single item auctions. So for example, if you look at so-called public projects, where you either build a bridge or you don't build a bridge, depending on people's values for it, or if you look at, say, multi-item auctions, even with unit demand bidders, and many other cases that are the natural next steps beyond single item auctions, this theorem holds, okay? So the polynomial hierarchy collapses if you can come up with a generalization of Border's theorem for any of these settings. So how does this proof work? Okay, well, there are two steps. The first step basically says that a Border-like theorem gives you a complexity upper bound on membership, okay? So remember, a Border-like theorem is basically a co-NP oracle for membership in the polytope. And remember that using the ellipsoid method, you can check whether or not the feasible region is empty, say, or compute a point in it, using a polynomial number of invocations of a separation oracle, okay? So a Border-like theorem gives the ellipsoid method its separation oracle. It's a co-NP oracle, so it would give you a P to the NP algorithm for, for example, recognizing feasible interim allocation rules. This part is completely generic, okay: whatever your setting is, this is going to be true. The second part of the proof is more case by case. We literally just brainstormed what the next several generalizations of Border's theorem beyond single item auctions would be that you'd want. And then, case by case, we proved hardness results even above co-NP in each of these settings.
Specifically, we proved #P-hardness of recognizing feasible interim allocation rules for all of the settings we could think of that were a little bit more general than single item auctions. Okay, so you combine these two steps and you get that unless #P is contained in P to the NP, there's no Border-like theorem for any of these settings. Let me wrap up with two very concrete and, I think, very juicy open questions. This is the last slide. Open question number one concerns the second part of the talk, where we were discussing lower bounds on the price of anarchy for simple multi-item auctions. And one of the things we saw there is that for sub-additive bidders, bidders with sub-additive valuations, we actually had matching upper and lower bounds. Simultaneous first price auctions, Feldman et al. proved, have a price of anarchy of two, and the communication complexity lower bounds show that nothing simple does better than two for sub-additive valuations. So that's a very attractive statement, I think. We do not know if the same is true for certain other classes of valuations, and specifically for the class of submodular valuations, which is a very fundamental class in combinatorial auctions, we do not know what the optimal simple auction is. Okay? It might be simultaneous first price auctions. We do know exactly what the price of anarchy of simultaneous first price auctions is: it's one minus one over e, roughly 63%. Unfortunately, it is known that you cannot prove a communication lower bound of one minus one over e for welfare maximization with submodular bidders. That's because of a result of Feige and Vondrák, which gives a protocol achieving one minus one over e plus some constant, around 10 to the minus four. Okay? And what this means is that no matter how you resolve this question, it's going to be super interesting.
Either you're going to prove that simultaneous first price auctions are optimal for submodular bidders, which means you're going to be forced to develop lower bound techniques stronger than what I've told you about today. Or you're going to show that there's something better than simultaneous first price auctions, a better positive result, which would also be super interesting. The second open question concerns the third part of the talk, these Border inequalities, and relates back to the question someone had earlier about why we're proving conditional results for seemingly unconditional statements. And you could make that exact same complaint about the Border's theorem part of the talk. All right? I was proving non-existence of these tractable linear descriptions under a complexity assumption, and we should be able to have unconditional versions of those statements. In particular, there's been tremendous progress on extension complexity lower bounds for polytopes of various optimization problems. And I really think that technology should be powerful enough to prove unconditional impossibility results for Border-like theorems in the settings I talked about here. Okay? So let me stop there. Thanks.