FSTTCS has had a very rich tradition of impressive invited speakers, and that rich tradition is going to be further enriched by Professor Tim Roughgarden, who gives the first invited talk here today. Tim has made seminal and deep contributions to algorithmic game theory, to algorithm design in general, and to its interactions with economics in particular. Tim did his PhD at Cornell, then a postdoc at UC Berkeley, and before moving to Columbia University, where he is a professor now, he spent 15 years at Stanford University. As I said, Tim has made very deep contributions. I remember back in 2002, when I was a beginning graduate student at McGill, for a while I was thinking of working in game theory with Adrian Vetta, and the first paper Adrian asked me to read was a very impressive paper called "The Price of Anarchy Is Independent of the Network Topology", written by Tim. It was really amazing, and it wasn't just me; many far more significant people were impressed. In 2012 Tim, along with Éva Tardos, won the Gödel Prize for basically laying the foundations of algorithmic game theory. He has won numerous awards, including the ACM Grace Murray Hopper Award, the Presidential Early Career Award, the Mathematical Programming Society's Tucker Prize, and of course the Gödel Prize. He was an invited speaker at the 2006 International Congress of Mathematicians, he was the Shapley Lecturer at the 2008 World Congress of the Game Theory Society, and he was a Guggenheim Fellow in 2007. So without further ado, please join me in welcoming Professor Tim Roughgarden.

Thanks very much for the introduction. Good morning everyone, and thanks for coming. I also want to thank the organizers for the invitation to speak; it's really an honor to be here at FSTTCS. So it's been maybe 20 years or so since lots of computer scientists started getting interested in economics and game theory, and the original reason for that, this is back in the late 90s, is that at that time the internet was really exploding and becoming widely commercialized, and a lot of the new computer science applications at that time really cried out for game-theoretic reasoning, because they fundamentally involved interaction between different autonomous parties with conflicting objectives. I'm thinking of everything from competition between internet service providers for routing traffic through a network, to the design of real-time auctions for online advertising, like those that today drive the business models of companies like Google and Facebook. But something interesting has started to happen over the last maybe 10 years or so, which is that we're increasingly seeing ideas flow in the opposite direction, from computer science to economics and game theory. And this talk is meant to be a detailed case study of that point. I want to tell you about a really major auction, involving tens of billions of dollars, that happened in the U.S. a couple of years ago and in which computer science played an absolutely vital role.

So our story begins in 2012. That's when the U.S. Congress authorized the FCC to design and deploy a novel auction for selling licenses for the use of wireless spectrum. Now, it was not a new idea in 2012 to use an auction to sell spectrum licenses; in fact the government had been doing that since roughly the mid-90s. What was different in 2012, and what necessitated a novel auction format, was that for the first time the licenses the government really wanted to sell were, inconveniently, already held by other people.
In this case, over-the-air television broadcasters. So really the point of this auction was to repurpose spectrum: to procure licenses back from the current owners, the over-the-air television broadcasters, and then reallocate them to parties who could get much more value out of that part of the spectrum, primarily telecom companies. So this is really what's called a double auction, meaning there are both buyers and sellers. The sellers are the current owners of licenses, again over-the-air television broadcasters, and the buyers are people who want to use that spectrum for emerging technologies, companies like, say, T-Mobile and Sprint. Accordingly, the auction had two halves. There's a reverse auction, where the government is in buyback mode and is trying to procure licenses from the current owners; that's the part of the auction that was totally new, literally never run before at anything like this scale. And the second half is the forward auction: at this point the government actually has licenses in its hands and is going to sell them to the highest bidder, and that's the type of auction the government had been running since the mid-90s.

All right, so this auction took place a few years ago. It ran for a long time actually, almost a year, from March 2016 to March 2017, but it's finished, the dust has settled, and we can assess how it did. And at least on some metrics it did pretty well. It's true that the government had to shell out ten billion dollars to buy back licenses and take a bunch of television stations off the air, but it was able to sell those exact same licenses in the forward part of the auction for twenty billion dollars. So the auction actually cleared ten billion dollars, which meant that even after recovering the costs of the auction and handling a couple of earmarks, there was still something like seven to eight billion dollars left over, which was applied immediately to reduce the U.S. debt. And that was the plan all along; it's probably one of the reasons the bill didn't have that much trouble passing Congress back in 2012. Of course, another thing that might have helped was the very clever name they chose for the bill. Remember, this is a bill that authorizes the design of an auction, and somehow it got called the Middle Class Tax Relief and Job Creation Act. So I dare any politician to vote against that.

So the plan for the talk is that I'll spend about the first half talking about the reverse auction. Again, this is the part that's totally new, and computer science actually directly influenced what auction got deployed for the reverse auction, as we'll see. But I also want to spend half the talk on the forward auctions, where computer science was too late in the game to really influence their design, but we'll still see how the theoretical computer science toolbox is really the perfect language to explain when forward auctions work and why. All right. So let me tell you how the reverse auction works. It was a format proposed by two economists at Stanford, Paul Milgrom and Ilya Segal. It's something called a descending clock auction, and it can be viewed as an extension of previous formats in both the computer science and economics literatures. Before I tell you how it works, let me just say that maybe the number one design goal they had was to make this auction as trivial as possible for the participants, really easy to participate in.
Because again, remember, in the reverse auction the participants are television broadcasters who currently hold licenses. Many of them are very small, with basically no experience in auctions, so they wanted it to be super simple. A descending clock auction is an iterative auction; it works in rounds. And in each round, each remaining participant, which again is going to be some television station, might be asked a yes or no question of the form: would you or would you not be willing to sell your license to the government for, say, one million dollars? And you can say whatever you want. You can say, no, I would not sell you my license for a million dollars, and that's fine; the consequence is that you will be kicked out of the auction forever. What does that mean? It means you're guaranteed to retain your license, you're guaranteed to stay on the air, but of course you won't be getting any compensation either. Or you could say yes, I would very happily cash a check for a million dollars in exchange for my license. That's fine too, but it doesn't mean that's necessarily what's going to happen, because it's entirely possible that if you say yes today, tomorrow you'll be asked another yes or no question of the same form at a lower price: would you or would you not be willing to sell your license for, say, $950,000? And again, you can either say no and be kicked out forever, or you can say yes and live to see another day. Now, if you're still in the auction when it ends, and I'll explain in a second how it ends, then indeed the government will buy back your license, and the price will be the most recent, which is also the lowest, of the buyout offers you ever accepted. Okay? So the mental model I want you to have of this auction, at least in the middle of it, is that all of these different stations are being offered prices, and these prices are dropping over time. And when the price gets too low for a station, that station refuses, gets kicked out of the auction, and is going to stay on the air.

So I owe you a description of how the auction gets started and how it ends. The answer to the first question is simple: it starts with insanely high initial buyout offers, so high that anybody would be ecstatic to sell their license at those prices. Of course, remember, these prices are going to be descending over time. Just to give you a sense, for WCBS, which is the CBS network affiliate in New York, the opening offer for their license was $900 million. That's almost a billion dollars for a single license, which is kind of crazy. Now, if you operated some small station in the middle of the country, I guarantee you your opening offer wasn't $900 million, but it was something comparably lucrative given the context. Okay? So that's how the auction starts: super high prices. How does the auction finish? Well, I need to say what the auction is tasked with doing. The goal of the auction is to clear a target amount of spectrum. So what does that mean, how do we measure an amount of spectrum? One way you can measure it is in terms of channels, like television channels, each representing a six megahertz block of wireless spectrum. So for example, you might target the stations currently broadcasting on the 14 channels between 38 and 51, and you might say: I'm willing to leave four channels on the air, but I really want to clear 10 of these channels for sale in the forward part of the auction. Okay?
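To make those dynamics concrete before getting to what "clearing" means exactly, here is a minimal Python sketch of a descending clock auction. It is purely illustrative and rests on simplifying assumptions that are mine, not the FCC's: each station is modeled by a private walk-away price, every offer drops by the same fixed percentage each round, and the auction stops once a target number of stations has dropped out rather than when a feasibility checker forces prices to freeze.

```python
# Illustrative sketch only: a bare-bones descending clock (reverse) auction.
# "walk_away" models each station's private lowest acceptable price, and the
# stopping rule (a fixed number of dropouts) stands in for the real clearing
# target and feasibility logic described later in the talk.

def descending_clock(walk_away, opening_offer, target_dropouts, decrement=0.05):
    """walk_away: station -> lowest price it would accept.
    opening_offer: station -> very generous initial buyout offer."""
    offer = dict(opening_offer)      # current offer to each remaining station
    active = set(walk_away)          # stations that have said "yes" so far
    stays_on_air = set()             # stations that declined and exited forever

    while active and len(stays_on_air) < target_dropouts:
        for s in sorted(active):
            lower = offer[s] * (1 - decrement)
            if lower >= walk_away[s]:        # station accepts the lower offer
                offer[s] = lower
            else:                            # station declines: keeps its license
                active.discard(s)
                stays_on_air.add(s)

    # Every station still in at the end sells at the lowest offer it accepted.
    buyouts = {s: offer[s] for s in active}
    return buyouts, stays_on_air


# Example with three hypothetical stations: lower offers until one drops out.
buyouts, stays = descending_clock(
    walk_away={"A": 2.0, "B": 5.0, "C": 9.0},
    opening_offer={"A": 12.0, "B": 12.0, "C": 12.0},
    target_dropouts=1,
)
```

In the actual auction, what stops the price clock for a given station is the repacking feasibility check discussed next, not a simple dropout count.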
And a decision that was made at the very beginning of this process was that clearing a channel should mean clearing it nationwide. So to say that you've cleared channel 51 means that at the end of the auction there is literally not a single station anywhere in the country broadcasting on channel 51. Now, for that to be at all viable, it was crucial that the government used its power to unilaterally reassign stations' channel assignments. So I told you that if you dropped out of the auction, you'd be guaranteed to remain on the air, guaranteed to retain your license. And that is in fact true, but you're not guaranteed to retain your same channel assignment. It's totally possible you were broadcasting on channel 51 before the auction started and you're forced to switch to channel 41 after the auction concludes. Okay? So what this auction is tasked with doing, the constraint that it faces, is clearing some target number of channels, like 10 channels: taking enough stations off the air so that the remaining stations, the stations still on the air, can have channels assigned to them in such a way that they occupy only, say, four different channels. This is called the repacking problem, and it's really important, so let me make sure it's crystal clear with a cartoon. In this figure, each circle represents a television station. The area of the circle represents that station's broadcasting radius, and the color of the circle represents the station's channel assignment. You will note that whenever two circles overlap, they are colored with different colors. That is not an accident: two television stations with overlapping broadcast regions have to be assigned different channels to avoid interference. Moreover, it's easy to see in this picture that you really need three channels for all of these stations to be on the air without conflicts, because, if nothing else, you've got these three mutually overlapping stations on the right, so you need three different channels just to keep them on the air. But remember, this auction is going to be buying back licenses, taking stations off the air. So for example, we might buy back the license of that big gray station in the upper right, making it go away. That doesn't seem to help initially, because we're still using three different channels. But now it is possible to make channel reassignments, i.e., to recolor the circles, so that only two channels are in use, freeing up the brown channel for sale in the forward part of the auction. Okay? So that is what this auction is tasked with doing: making enough of the circles go away so that the circles that remain can be colored with a target number of colors without conflicts.

So, to an audience with this background, you of course all recognize this as an example of a famous problem, the graph coloring problem, a famous NP-complete problem that we teach to our undergraduates. So I'm telling you that this auction fundamentally had to solve an NP-complete problem as part of what it was doing, and you might have some questions about how that worked exactly. Maybe you're even thinking: oh, is that why the auction took a year to run? No, that's not the reason; I'll tell you the reason. So when we teach NP-completeness to undergrads, what do we say? It's not a death sentence. It doesn't say that NP-complete problems are literally unsolvable in practice, but it means you generally have to up your game.
You have to invest more human, computational, and/or financial resources than you would if you just had to compute, say, a shortest path. Okay? So your first thought might be: well, how big were these graph coloring instances anyway? Maybe they're small enough that NP-completeness didn't matter. A representative instance size, if you think of it as graph coloring, would be maybe 2,000 vertices and 15,000 edges. So that's not huge; I would call it a medium-sized instance of an NP-complete problem, but it's not trivial either. Your next thought might be: come on, we have these massive computing clusters, all this computational power; take a day, take a week, take what you need, but just solve your graph coloring problem, it's not that big a deal. And if we only had to solve one instance of graph coloring, you'd be absolutely right. However, this auction has to solve thousands of graph coloring problems every day. Why? Well, think about a round of this auction. For any remaining participant, you have the option of making them a lower buyout offer than last time. But before you do that, before you make a lower buyout offer, you need to be prepared for that station to say no, to turn you down. And remember, if the station declines, that station drops out, and that's one more station you're responsible for keeping on the air. So suppose this was a pivotal station: before it dropped out you could pack all of the stations on four channels, but with this one additional station dropping out, all of a sudden you need five channels. That's unacceptable, because an invariant of the auction is that the stations it's responsible for can always be packed into the target number of channels, like four channels. So if this station dropping out would cause you to violate the feasibility constraint, you have no choice but to freeze its price forevermore. You will not make it a lower buyout offer, because you cannot risk it declining. So that means in every round of this auction, for every remaining participant, you have to do this speculative execution about whether you could tolerate the station dropping out, and each such check is an instance of graph coloring. That's why it's thousands of instances per day.

All right. So in light of that, the FCC gave the team designing this auction a budget of one minute per graph coloring instance, and ideally with the common case being more like a second. And now we're talking about a worthy engineering challenge for this decade: reliably solving medium-sized instances of NP-complete problems in a minute, or ideally even seconds. That's not a trivial thing. So how did they do it? Well, the FCC made a very smart decision: they hired a computer scientist. Kevin Leyton-Brown of the University of British Columbia led the team that was responsible for rapidly solving these graph coloring problems. And the first good idea they had was to approach the problem using satisfiability, using SAT solvers. Two reasons for that. First, as I'm sure is obvious to all of you, it's very easy to encode these graph coloring instances as satisfiability formulas; it's a very straightforward and efficient reduction. And second, again as I think most of you know, there has been an enormous amount of creative work over decades, by hundreds if not thousands of people, designing better and better satisfiability solvers.
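To make the first point concrete, here is roughly what that straightforward reduction could look like: a minimal sketch of encoding a repacking instance with a fixed number of channels as CNF clauses, in the standard textbook way, not the FCC team's actual encoding.

```python
# Illustrative sketch of the textbook reduction from graph coloring to SAT.
# Variable x[v][c] means "station v is assigned channel c". Clauses say each
# station gets at least one channel, and no two interfering stations share a
# channel. The clauses (lists of signed integers, DIMACS style) can be handed
# to any off-the-shelf SAT solver.

def coloring_to_cnf(num_stations, interference_edges, num_channels):
    def var(v, c):                     # 1-based variable index for x[v][c]
        return v * num_channels + c + 1

    clauses = []
    for v in range(num_stations):
        # station v must get at least one of the available channels
        clauses.append([var(v, c) for c in range(num_channels)])
    for (u, v) in interference_edges:
        for c in range(num_channels):
            # interfering stations u and v cannot both broadcast on channel c
            clauses.append([-var(u, c), -var(v, c)])
    return clauses


# Example: three mutually interfering stations need three channels, so with
# only two channels the resulting formula is unsatisfiable.
cnf = coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)], num_channels=2)
```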
So why not stand on the shoulders of all of that work done on SAT solvers and apply it here? Okay? So Leyton-Brown and his team, just to get a baseline, to get calibrated, started by taking the latest and greatest open-source solvers and running them on some graph coloring instances which they thought would be representative. And just with what you could download from the web, it did reasonably well. I forget the exact numbers, but let's say it solved something like 80% of the satisfiability instances within the target of under a minute. So not so bad for an NP-complete problem. But if you think about it, there's actually a tremendous economic incentive to get the success probability as close to 100% as possible. Let's think about why. Suppose this auction is running along. You come to a station, you're deciding whether or not to make it a lower buyout offer, and so you have to check whether you could accommodate this station, plus all the previous ones who dropped out, in, say, four channels. So you ask your SAT solver: if this station dropped out, is there a feasible repacking, is there a satisfying assignment? And imagine your SAT solver thinks for 60 seconds and then times out and says: I don't know, maybe you can repack this extra station and maybe you can't. Again, remember, an invariant of the auction is that you have to be able to repack everyone who's dropped out into a target number of channels, like four channels. So in the absence of a proof that you can repack this additional station, you have no choice but to be conservative and assume that you can't. And that means you will not make them a lower buyout offer; you will just freeze their price forevermore. What that means is that whenever your SAT solver times out on a satisfiable instance, that is literally a huge pile of money left on the table. This is a station you could have offered much less for their license, and the only reason you didn't is that your SAT solver wasn't good enough. So I don't know about you, but I'm not sure I've ever seen any other example with such a direct, linear relationship between the running time of an algorithm and huge piles of money as in this application.

All right, so the Leyton-Brown team did a bunch of bespoke work to get that 80% up to more like 99-point-something percent. A bunch of ideas; let me just mention two of them. One of them is something we usually do when we tackle NP-complete problems in practice, which is to build in domain knowledge. You stop worrying about solving every instance fast and start specializing to the instances of interest. And the team actually knew a lot about what these graph coloring problems were going to look like. They knew in advance what all of the stations in the auction were going to be, they knew in advance all the broadcasting regions and therefore the interference constraints. So really, they knew they were going to be solving graph coloring problems that were all induced subgraphs of some master graph. That's a lot of advance knowledge, which they were able to take advantage of to solve more instances quickly. They also used some very clever caching tricks. Of course, obviously, if you encounter the exact same graph coloring problem again, you want to look up the solution and not re-solve it from scratch.
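As an aside, one way to picture such caching is sketched below. This is my own illustration, not necessarily what the team implemented; it exploits the fact that the repacking problem is monotone (if a set of stations can be packed into the target channels, so can any subset of it), and the `sat_repack` callback is a hypothetical black box, for instance a SAT solver call on an encoding like the one above.

```python
# Illustrative sketch of a feasibility cache around a SAT-based repacking
# check. Monotonicity does the work: a subset of a packable set is packable,
# and a superset of an unpackable set is unpackable.

class FeasibilityCache:
    def __init__(self, sat_repack):
        self.sat_repack = sat_repack   # frozenset of stations -> True / False / None
        self.known_feasible = []       # sets already proven packable
        self.known_infeasible = []     # sets already proven not packable

    def can_repack(self, stations):
        s = frozenset(stations)
        if any(s <= big for big in self.known_feasible):
            return True                # subset of a packable set: packable
        if any(s >= small for small in self.known_infeasible):
            return False               # superset of an unpackable set: not packable
        answer = self.sat_repack(s)    # fall back to the (expensive) solver
        if answer is True:
            self.known_feasible.append(s)
        elif answer is False:
            self.known_infeasible.append(s)
        return answer                  # None means a timeout: be conservative
```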
But more generally, they had tricks so that even for approximately-the-same graph coloring instances they could very quickly deduce the solution, building on the work done for previous instances. So they really were able to get it up to 99-point-something percent; a few timeouts still, but not very many. And I hope the higher-level point of this slide is clear. The high-level point is that without absolutely cutting-edge techniques from computer science for tackling NP-complete problems, the government literally could not have run this auction format. They would have had to go back to the drawing board and design a different auction if they didn't have computer science's contribution to this part of the design.

So let me say one more thing about the reverse auction before I move on to the forward auctions. On this slide I want to temporarily reverse the flow of ideas, back to the original direction from economics to computer science, and argue that this novel auction format actually motivates, I think, a very basic and interesting algorithmic question. To explain, I want you to think about this auction as, in effect, a heuristic algorithm for a hard optimization problem: an optimization problem where the constraint is that you can only have so many stations on the air, say at most four channels' worth, and the objective function is that you'd like to maximize the value of the stations left on the air. All along we've been implicitly assuming, or hoping, that this auction outputs a reasonable feasible solution to that optimization problem. But you could ask about other heuristic algorithms too. And Milgrom and Segal proved a lot of cool theorems about these descending clock auctions. One of their theorems characterizes exactly which algorithms are compatible with the descending clock auction format; some algorithms can be embedded in one of these auctions and some cannot. And the characterization says that the algorithms that work are what I would call reverse greedy algorithms. Let me explain what I mean; it's easiest with an example. Take the minimum spanning tree problem, which we're all familiar with, and recall Kruskal's algorithm, which is what I would call a forward greedy algorithm: you sort the edges, you do one pass from cheapest to most expensive, including an edge in your solution as long as it doesn't destroy feasibility, that is, as long as it doesn't create a cycle. Something we usually don't teach in undergrad algorithms, but which is also natural, would be a reverse version of this: you again sort, you do a single pass through the edges from most expensive to least expensive, this time removing an edge as long as that doesn't destroy feasibility, that is, as long as it doesn't disconnect the graph. And for minimum spanning trees, and more generally for matroid optimization problems, it doesn't matter: the reverse and forward greedy algorithms both output an optimal solution, they do the same thing. However, the plot thickens if you go even just a little bit beyond matroid optimization problems. For many problems that have a very natural forward greedy heuristic with a good approximation guarantee, if you naively run that same heuristic in reverse you get no guarantees whatsoever. And this is already true for a problem as simple as, say, bipartite matching.
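For concreteness, here is the reverse version for spanning trees just described, usually called reverse-delete; a minimal sketch, written for clarity rather than efficiency.

```python
# Illustrative sketch of the "reverse greedy" algorithm for minimum spanning
# tree (reverse-delete): scan edges from most to least expensive and delete an
# edge whenever doing so keeps the graph connected.

def is_connected(nodes, edges):
    adjacency = {v: set() for v in nodes}
    for u, v, _ in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(adjacency[x] - seen)
    return seen == set(nodes)


def reverse_delete_mst(nodes, edges):
    """edges: list of (u, v, weight) tuples; assumes the graph is connected."""
    result = sorted(edges, key=lambda e: e[2], reverse=True)  # most expensive first
    for e in list(result):
        trial = [f for f in result if f != e]
        if is_connected(nodes, trial):   # removing e does not disconnect the graph
            result = trial               # so the reverse greedy rule removes it
    return result


# Example: in a triangle, only the heaviest edge gets deleted.
mst = reverse_delete_mst({"a", "b", "c"},
                         [("a", "b", 1), ("b", "c", 2), ("a", "c", 3)])
```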
So that led me and a couple of former postdocs to systematically study the power and limitations of reverse greedy algorithms. For different optimization problems, if I tie your hands and force you to use a reverse greedy algorithm, what's the best you can do, and can you compete with the state-of-the-art forward greedy algorithms? And the main message from our work is that, at least for the types of problems we looked at, bipartite matching, some scheduling problems, and so on, if you're willing to work a little harder and use more sophisticated reverse greedy algorithms, then generally you can replicate roughly the same approximation guarantees familiar from the forward greedy heuristics. Now, to be clear, it's not like no one had ever thought about reverse greedy algorithms before we did; if you look in the algorithms literature you can definitely find a few scattered examples. But if you read those papers you get the impression that nobody really cared about them, and it's kind of easy to see why: designing reverse greedy algorithms is a little more awkward, a little less natural, than designing forward greedy algorithms, and for most problems they seem to do no better or even worse, so why bother with them? And the point here is that, for the first time, and coming from economics of all places, there's an extrinsic reason to care fundamentally about what reverse greedy algorithms can and cannot do, which I think is a basic and technically interesting algorithmic question.

So that concludes what I wanted to say about the reverse auction; this is a natural time to pause for questions if anyone wants to ask one now. Yes, that's a good question. The question was: isn't it not totally trivial to have your channel changed, since you have to change your transmitter to use different frequencies? So the FCC did give a modest amount of compensation to stations that had their channel reassigned, but it was one to two orders of magnitude less than what they were paying for licenses. So there was compensation, but at a lower level, for those that had their channel reassigned. Another question from the audience: in order to find out when they can or cannot be offered a lower price, wouldn't the stations themselves also need to run a similar algorithm, to know whether they can hold the government to ransom? Isn't it true that both sides need to compute this, to know whether they still have bargaining power or not? Well, okay, so if you're on the station side, it depends whether you want to think about collusion or not, but for the moment just suppose that all you do is own one station. Now remember, I told you the format is designed to be kind of trivial to participate in, and really you're just going to see this descending sequence of prices, and even that sequence is decided up front. The initial offer is decided up front, and every round there was, I think, a five percent decrement, so you know what sequence of prices you're going to see. So the obvious way to behave is: formulate in your mind the minimum offer you're willing to accept, stay in the auction as long as the offer is above that, and drop out as soon as it goes below. And in the absence of collusion, any way you manipulate the auction can only hurt you.
Now, people do ask about collusion, either in the sense that you have multiple different parties trying to strategize together, or maybe a single owner of multiple stations trying to bid in a coordinated way. So a couple of things. First of all, the auction does have some limited collusion resistance, more so than many other auction formats; that was another of the Milgrom-Segal theorems. You might want a stronger version, but it turns out that if you want very strong versions of collusion-proofness, there's really not much you can do; there are impossibility results. So the usual approach applies: you make collusion illegal and use legal channels to limit it. Yeah. Other questions? Yes: so it might happen that at a particular moment you have a choice of many vertices you could remove while still keeping the coloring feasible, and it might not be optimal to just go around lowering the offers in an arbitrary order; presumably you might want to avoid particularly important vertices, or something like that? So you're asking about the order in which you make the offers in a given round, right? Good. So observation number one is that it really is sequential. Within a given round it always matters who's already dropped out, so you really have to order the stations one by one; you can't do it in parallel, because you need to make sure you retain feasibility. And so the question is how you order them. I don't know; I wasn't actually part of the team, but I've talked to everybody on the team, and I actually asked them this question point-blank. They wouldn't tell me, but they promised they didn't do anything that smart. So I don't know if they just used a fixed order every time, I don't know if they re-randomized the order every round, I'm not sure. They did have an idea, which was very nice but didn't make it into production, which was to actually try to solve all of these repacking problems in parallel, and then whichever one is the first to terminate, use that one as the first in your order. So maybe for some future generation of the auction that idea will be used. Right. So you order them in some way, and basically you only freeze a station if you absolutely have to freeze it: so many stations have already dropped out that you can't accommodate this one plus all of the previous ones. That's how you choose to freeze. So once you've fixed the order, it's kind of fixed; you have no choice as far as when to freeze.

Then let's move on to the forward auctions. So at this point the government has licenses in its hands and wants to sell them to the highest bidder. And again, in this part computer science was too late to the game to really influence the design of these auctions, but we'll see that, as far as the analysis of these auctions goes, the theoretical computer science toolbox is really the perfect tool. So first let me just mention that spectrum auction design is actually a pretty stressful occupation. As we've seen, the stakes are very high, and it's really easy to screw up; debacles have happened. A famous one happened a long time ago in a galaxy far, far away, also known as New Zealand in 1990. This was before governments had much experience running auctions.
And so at this time the New Zealand government was creating 10 new television channels. There are the national channels, you know, one, two, and three, and they're creating channels four through 13. So there are 10 items for sale, these nationwide licenses, all basically interchangeable. They decided to sell these via auction, and that in itself is not a bad idea. However, for reasons lost to the sands of time, they made a very peculiar choice as to what auction format to use. It's an auction format I would refer to as simultaneous second-price auctions. Now, if you're only selling one item, then a second-price auction, also known as a Vickrey auction, is a good idea. It's an auction where you collect bids, the winner is the highest bidder, and the selling price is the highest bid by somebody else, so the second-highest bid overall. If you have just one item, these are very practical auctions and they have a lot of beautiful theoretical properties. For example, there's no reason to strategize as a bidder: overbidding or underbidding can never help you in a second-price auction. However, if you take a bunch of items and run a bunch of second-price auctions at the same time, in parallel, then all of those nice properties melt away. Indeed, imagine you were a bidder in this auction. Let's say you were only interested in one license, you had some value for one license, and you wanted to give it a shot. Ask yourself how you would bid, keeping in mind that you can bid up to 10 times, once for each of the licenses, and each license is sold separately to the highest bidder on that license at a price equal to the second-highest bid on that license. So how would you bid in this auction? Well, one defensible strategy would be to put all your eggs in one basket: pick a channel at random, say lucky channel number seven, and bid your maximum willingness to pay on that channel. That's reasonable. It's not the only thing you could do, though, especially if you suspected there weren't that many other bidders in the auction. You might want to go bargain hunting, and maybe submit a quite low bid on a bunch of different licenses, hoping you get one of them for a bargain-basement price. That's also a defensible strategy. And a good rule of thumb in auction design is that if it's ever highly unclear what bidders are supposed to do, that's probably an auction where there's a lot of volatility, a lot of unpredictability in the outcome, and in particular where bad outcomes might actually happen. And certainly that was the case in New Zealand. They were hoping to raise a quarter of a billion dollars from this auction, so twenty-five million for each of the 10 channels; that was the projection. It didn't quite work out: they made not even 15% of that projection, 36 million. And in fact all of the bidding data was made public, that was part of the deal, and if you look at it you can find some extremely cringe-inducing statistics. For example, there was one license where the high bid was a hundred thousand dollars. And you already know this is a complete disaster: they wanted to make 25 million per license, and I'm telling you the high bid was a hundred thousand dollars. The second-highest bid, which was also the selling price, was six. Not six thousand: six. So somebody literally got one of these 10 licenses in this auction for six bucks.
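Just to pin down the mechanics, here is a tiny sketch of simultaneous second-price auctions. The numbers below are invented, but they reproduce the same kind of pathology: with thin competition on a license, the selling price can land absurdly far below what the winner would have been willing to pay.

```python
# Illustrative sketch: each license is sold independently to its highest
# bidder at the second-highest bid on that license (zero if nobody else bid).

def simultaneous_second_price(bids):
    """bids: dict license -> dict bidder -> bid. Returns license -> (winner, price)."""
    outcome = {}
    for lic, lic_bids in bids.items():
        ranked = sorted(lic_bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else 0.0
        outcome[lic] = (winner, price)
    return outcome


# One bidder bets everything on a single license; another goes bargain hunting
# with lowball bids. On a license with no serious competition, the "second
# price" is just somebody's lowball bid.
outcome = simultaneous_second_price({
    "channel_7": {"bidder_A": 100_000, "bidder_B": 6},
    "channel_8": {"bidder_A": 80_000, "bidder_B": 75_000},
})
# channel_7 sells to bidder_A for 6; channel_8 sells to bidder_A for 75,000.
```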
So, like I said, a stressful job. Presumably these days people don't use simultaneous second-price auctions. So what do they use? Well, especially in U.S. spectrum auctions, they use something that's actually not super different, but it's different in an important way: it's an ascending auction rather than a sealed-bid auction. Lots of bells and whistles have been added over the years, but still, at their heart, modern U.S. forward auctions are simultaneous ascending auctions. You all know what a normal ascending auction is for one item; that's what you see in the movies, or if you go to an art sale or an estate sale. There's an auctioneer, they keep naming higher and higher prices, bidders keep their hands up as long as they're willing to pay that price, the auction ends when only one person has their hand up, and the price is the most recent, also the highest, price that the auctioneer announced. That's what I mean by an ascending auction. In a spectrum auction you have multiple items, a bunch of licenses you're trying to sell, but conceptually you can just have one separate auctioneer for each of the licenses, and now as a bidder you're responsible for raising your hand on subsets of the licenses. Each license is sold to the last person with their hand up on that license, at the most recently announced price for that license. So those are simultaneous ascending auctions, and that's the dominant paradigm in U.S. spectrum auctions. They've been used for a couple of decades, so clearly people more or less like them. But no one's going to try to convince you that they're perfect, so let me tell you about a couple of well-known flaws of simultaneous ascending auctions. They're both easiest to describe with an example.

Let's start with example number one, which is demand reduction. Suppose there are just two bidders, bidders A and B, and two licenses, say one for California and one for New York. The two bidders are going to have different preferences. Let's say one of the bidders is me, and I'm kind of a small bidder: I'm willing to pay up to five for either of the two licenses, but I don't want both. Let's say my competitor is willing to pay six for California, six for New York, or 12 for both. So what do we want to see happen? Well, the other bidder has a higher value than me on both licenses, six rather than five, and is also willing to pay for both of them, so we'd like to see the bigger bidder win both. So what happens in a simultaneous ascending auction? A key observation is that I, the small bidder, am going to be pretty pesky: I'm not going to drop out of the simultaneous ascending auctions until the price of each of the licenses has hit five, because, again, I'm happy to get either license for a price less than five. So you may say, okay, big deal: the bigger bidder can just hang around and win the war of attrition. The price will eventually hit five, I'll go away, and the bigger bidder will win both; they'll get a value of 12 and only pay 10, so they'll have a net utility of two. It seems like a win. On the other hand, if you think about it, there's actually something smarter the bigger bidder could do, which is to immediately, at the outset, concede New York to me and never bid on it at all. Because there's no competition, I'll get New York for the minimum price; meanwhile, I only wanted one license in the first place, so I'm not going to bother to bid on California.
The other bidder will then get California basically for free. So it's true their value is now only six instead of 12, but now they're paying essentially nothing as opposed to 10, so their utility actually jumps from two to six. And that is what demand reduction is: you request fewer licenses than you rightfully deserve, in order to get a smaller number of licenses at a much cheaper price. That's demand reduction, and there's ample evidence that it does indeed occur in practical spectrum auctions.

The second problem is known as the exposure problem, and it arises specifically when there are complementarities, or synergies, between the different items. So let's change the example. Let's say I'm exactly the same as before: I'm willing to pay five for either license, and I don't want both. And let's say the bigger bidder now wants to go big or go home: they're willing to pay six if they get both California and New York, and otherwise they have no value; they're not interested in having just one of the two items. If you think about it, we again want the bigger bidder to get both licenses. You can only make one of the two of us happy, and they have the higher value, six versus five, so it may as well be them. But what happens in a simultaneous ascending auction? Again, I'm not going to go away until the price of both licenses has hit five. But at that point the bigger bidder would have to pay 10 for New York and California combined, and their value is only six. So the best strategy for the big bidder in this case is to simply walk away and not even try. There's really just no way for them to express in this auction that they want both items, and as a result they cannot get them at a reasonable price. So that's the exposure problem, which arises specifically when you have these synergies between items.

So the next question I want to ask is: are these just theoretical issues, or do they really matter? Should they really have an effect on real-world auction design? If you ask around, people in the trenches, people who design these auctions, people who consult for bidders in these auctions, you get a pretty good consensus around two folklore beliefs, two rules of thumb. Rule of thumb number one is that while demand reduction definitely happens, and there's ample evidence of it, it's not a big deal: there are some losses of efficiency due to demand reduction, but they're dwarfed by the gains from trade, so it's a total win overall. The second folklore belief is that if you do have a situation with item synergies, item complementarities, then things are different, and the exposure problem might be a big deal. You might get lucky and maybe it doesn't come up, but there will be examples where the exposure problem really leads to a big welfare loss in practice. That's the common belief about the exposure problem.
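Just to pin down the arithmetic behind these beliefs, here is a tiny sketch that re-does the utility calculations in the two toy examples above; the values come straight from the examples, and the code itself is only illustration.

```python
# Utility in these examples is value of the licenses won minus the price paid.

def utility(value_won, price_paid):
    return value_won - price_paid

# Demand reduction: big bidder values CA = 6, NY = 6, both = 12;
# small bidder values either single license at 5.
fight_for_both = utility(12, 10)   # outlast the small bidder at ~5 per license: utility 2
concede_ny = utility(6, 0)         # concede NY, win CA for essentially nothing: utility 6
# Reducing demand is better for the big bidder (6 > 2), but total welfare
# drops from 12 to 6 + 5 = 11: a real, if modest, efficiency loss.

# Exposure problem: big bidder values the PAIR at 6 and single licenses at 0;
# small bidder is unchanged.
compete = utility(6, 10)           # pay ~5 per license to win both: utility -4
walk_away = utility(0, 0)          # do not participate at all: utility 0
# Walking away is better (0 > -4), so the welfare-maximizing outcome is lost.
```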
All right, so these are not new rules of thumb; they've been around at least 20 years or so. And it won't surprise you to hear that auction theory is also a very storied field, over 50 years old. So it may surprise you to hear that, at least as far as I know, there was no theorem in the economics literature that maps directly onto either of these two folklore beliefs. That may surprise you, but I have a theory about it, which is that if you think about what a theorem mapping onto either of these beliefs would look like, I claim it would fundamentally be a theorem about approximation. For example, no one thinks that simultaneous ascending auctions are optimal; everybody knows there's demand reduction, but the belief is that if you don't have item synergies they're approximately optimal in some sense, whereas the second folklore belief says that when you have the exposure problem you are not guaranteed to be approximately optimal in whatever that same sense is. In economics, for whatever reason, approximation has never been part of the culture; that's just not the kind of theorem economists tend to prove in economic theory. Whereas of course in computer science our discipline has grown up entirely under the long shadow cast by NP-completeness, so we have all these real-world problems that we can't solve exactly, but we want to do theory that gives us guidance about how to approach them, and approximation is one of the formalisms we've come up with in order to do this, in order to compare different competing heuristic algorithms for the same NP-hard optimization problem. What we're going to see is that this exact same approximation formalism, which is so familiar to us but alien to economists, is actually perfectly suited to express theorems that map very directly onto these two folklore beliefs. So in my remaining time I want to briefly tell you about one theorem for each of these two cases.

Let's start with the first one. This is going to be a positive result, saying that simple auctions are approximately optimal when you don't have item synergies, so it'll have the flavor of an algorithms result. We want to say that an outcome of an auction is near-optimal. Now, don't forget, bidders are strategic; they're going to bid in their own interest. So when I speak about the outcome of an auction, what I must mean is its equilibria, something in the sense of Nash equilibria. So really, what this first theorem should look like is an approximation guarantee for the equilibria of some game. The good news is that since the dawn of algorithmic game theory, this is one of the main things computer scientists have been thinking about: approximation guarantees for equilibria. The phrase here is the price of anarchy, which just means the approximation ratio achieved by equilibria: the ratio between the objective function value of a worst equilibrium and that of an optimal outcome. Initially, 20 years ago, if you saw me or Éva or Christos give a talk on this, you would have seen a ton of figures of networks in our slides, and that's because at the beginning of algorithmic game theory we were all obsessed with the Internet and games on networks. But I'm happy to report that, fast forward 20 years, at this point we have a very powerful, very user-friendly, and modular toolbox for proving strong price of anarchy bounds even in very complex settings, like the spectrum auctions we're currently talking about.

So at this point I want to state a couple of theorems, and let me first tell you the model to which the theorems apply. This is the standard model for thinking about multi-item auctions, like auctions for spectrum licenses. There's some number of bidders, n bidders; that's like T-Mobile, Sprint, and so on. There's some number of licenses that we're selling, M licenses; M will be an important parameter for us, the number of items. So how do bidders feel about the various licenses?
Well, each bidder, think of them as having a massive lookup table in their brain of length two to the M, with one entry for each of the two to the M subsets of licenses they might receive, and that entry specifies their maximum willingness to pay for that particular subset, like licenses number three, five, and seven. And then what the bidders want to do, they're strategic, is, just as in our examples, maximize their net utility: the value of the licenses they receive minus the prices they have to pay. And at this point, already, with these first three bullet points, we have all of the ingredients necessary to specify a game in the sense of game theory. We have our players, those are the bidders; we have our strategies, those are just the different ways of bidding in whatever auction format you choose; and we have our payoff functions, those are these utilities, value minus price. So the final thing we need in order to talk about the price of anarchy is an objective function. And the usual thing people use in this part of the world is called the social welfare, which just says that in a perfect world, if we had all the information and all the computational power, what we wish we could do is partition the items among the n bidders so as to make the bidders collectively as happy as possible, that is, to maximize the sum of their valuations for the items they receive. So when we speak about an auction outcome being near-optimal, what we mean is that its social welfare is close to this utopian benchmark, and if you're not approximately optimal, that means the social welfare is much smaller than this benchmark.

And I'm happy to report that at this point, in algorithmic game theory and related fields, there are now many, many different formalizations of this first folklore belief, that simple auctions do pretty well if you don't have item synergies. There are a lot of papers listed here, and that's because there are a lot of different simple auctions you could look at, a lot of different ways to formalize the idea of not having item synergies, and even a lot of different equilibrium concepts you could look at. But for pretty much all of the combinations you might think about, at this point we have some very good price of anarchy bounds, and I'll delve into one in detail on the next slide. I'm not really going to have time for proofs, but let me at least mention the most recurring theme throughout this literature. It's something I like to call smoothness arguments, or alternatively extension theorems for smooth games. There are definitely definitions and theorems in the theory of smooth games, but I really think of it as more like a philosophy, kind of a two-step recipe for proving price of anarchy bounds in complex settings. In step one, what you do, as the analyst, is make your life a lot easier with some simplifying assumptions. Normally with Nash equilibria you think about mixed-strategy equilibria, you think about players randomizing; in step one of the recipe you just say, no, no, that's too complicated, I just want to think about players playing deterministically, I just want to think about pure-strategy equilibria. Also, in the context of auctions, you usually have uncertainty in bidders' valuations, so that's another source of randomness; and again, in part one of the recipe, you just say, no, no, too complicated.
I'm just going to assume all the valuations of the bidders are common knowledge and look at full-information, pure-strategy Nash equilibria. Then, under these strong assumptions, you prove some kind of approximation bound with your bare hands, using what's special about the structure of your game. Step two of the recipe is to apply what's called an extension theorem, which lifts whatever approximation guarantee you proved for the simplified setting to the exact same approximation guarantee for the complex setting, for example to mixed-strategy Bayes-Nash equilibria. Now, that should sound too good to be true, right? How can you just automatically take a result for a special case and conclude that the exact same result holds for the general case? And indeed, in general this would be too good to be true. However, and this is really the key point of the smoothness theory, it's possible to impose conditions on the way in which you prove your approximation guarantee in part one, so that as long as your proof conforms to those rules, to that template, your approximation guarantee does in fact carry over, with no degradation, to the general setting. And that is how almost all of these results proceed: you directly handle the full-information, pure-strategy case, you do it using this prescribed template, and then you extend it to the general case.

I know that's all a little high-level and abstract, so let me give you something to hang your hat on and state one of these specific theorems. There are different ways to talk about complement-freeness, or having no item synergies, and this is a relatively permissive one: subadditive valuations. That's exactly what it sounds like: it says your value for the union of two bundles is at most the sum of your values for them individually. Notice that in our second example, where we had the exposure problem, the bigger bidder violated subadditivity: they had value zero for New York, zero for California, but six for both. So this restriction is getting rid of those kinds of valuations. So, in a beautiful result from a little over five years ago, Feldman, Fu, Gravin, and Lucier proved a worst-case approximation guarantee of two on the price of anarchy of simultaneous first-price auctions when bidders have subadditive valuations. And let me emphasize that this theorem is worst case in a double sense. The first sense is probably what you're thinking: the subadditive valuations can be arbitrary, so it's worst case over the bidders' subadditive valuations. But also remember that games generally have multiple equilibria, so this is also worst case over the equilibria of whatever game you're looking at. The result is tight: in the worst case, there are examples of equilibria and subadditive valuations where you're off by a factor of two. Of course, you'd expect it to be better most of the time. And you can prove an even stronger result if you make a stronger assumption: if you strengthen subadditivity to submodularity, then Syrgkanis and Tardos improved the 50 percent to 63 percent, which again is known to be tight in this double worst-case sense.
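Since subadditivity is doing a lot of work in these statements, here is a minimal sketch of what the condition says for a valuation written out explicitly as a table over subsets. Purely illustrative: nobody writes down two-to-the-M entries in a real auction.

```python
# Illustrative check of subadditivity: v(S union T) <= v(S) + v(T) for all
# subsets S, T. The exposure-problem bidder from the example fails the test.

from itertools import combinations

def is_subadditive(valuation, items):
    """valuation: dict frozenset -> value, defined on every subset of items."""
    subsets = [frozenset(c) for r in range(len(items) + 1)
               for c in combinations(sorted(items), r)]
    for s in subsets:
        for t in subsets:
            if valuation[s | t] > valuation[s] + valuation[t] + 1e-9:
                return False
    return True


items = {"CA", "NY"}
v_exposure = {frozenset(): 0, frozenset({"CA"}): 0,
              frozenset({"NY"}): 0, frozenset({"CA", "NY"}): 6}
print(is_subadditive(v_exposure, items))   # False: 6 > 0 + 0
```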
So I offer you this as a very concrete instantiation of that first folklore belief: without strong item synergies, meaning with subadditive valuations, a simple auction format, specifically simultaneous first-price auctions, does pretty well; namely, it guarantees a worst-case approximation factor of two relative to this utopian benchmark of the maximum-possible social welfare. So that's the first of the two results. In my remaining time I want to tell you about a theorem concerning the second folklore belief. The first one was a positive result, with the flavor of algorithms; this is going to be a negative result, an impossibility result, so it's going to have the flavor of, and indeed build on, complexity theory, specifically communication complexity. All right. So suppose we want to prove that when you do have strong item synergies, simple auctions do not necessarily perform well. The first sanity check is just to make sure we can't simply reuse the previous theorem I showed you, the theorem saying that when bidders are subadditive, simultaneous first-price auctions get 50 percent. So the first question is: maybe that exact same auction format is good even if we drop the subadditivity condition? That's the first sanity check to do. Well, Hassidim et al. observed that, in fact, when you do have item synergies, when you have general valuations, simultaneous first-price auctions are a total disaster. Forget about 50 percent: they don't guarantee one percent of the social welfare, or even point one percent, as the number of items grows large. And moreover, the bad examples that Hassidim et al. show are not even pathological; it's literally just that exact same second example, the exposure problem, scaled up to more items, and already there you get no reasonable guarantees. Now, this is an important observation, but it's not totally satisfying, because who said that we have to use simultaneous first-price auctions? Maybe we could switch to second-price auctions, or all-pay auctions. Maybe we really go crazy and allow bidders to bid on pairs of items, not just individual items. Well, it turns out none of those ideas is going to help. It turns out you can prove that no matter what simple auction format you look at, you cannot guarantee any constant factor of the optimal welfare at equilibrium as the number of goods grows large. Now, for this theorem to make sense, I have to tell you what I mean by a simple auction, but actually the theorem is true with a quite permissive notion of a simple auction, and it's easiest to explain in terms of the number of bidding parameters. Each bidder, remember, has this valuation in their mind, two to the M different numbers in their head, values for the different subsets. In, say, simultaneous first-price auctions, you don't ask them for two to the M numbers; you ask them for only M numbers, one bid for each of the M items. If we asked people for bids also on pairs of items, then we'd be asking for M squared numbers from everybody. So by simple, all I mean is that the number of bidding parameters, the number of bids a bidder can make, is sub-exponential in the number of items M. For any such format, with a sub-exponential number of bidding parameters, this theorem is going to hold.
So no matter how smart you are, even if you ask for two to the root M bids and you process them in arbitrarily intelligent ways, it doesn't matter: there will be examples of equilibria where you don't even get one percent of the optimal welfare, not even point one percent. I want to tell you a little bit more about how this theorem is proved, but anyway, I offer you this as a formalization of that second folklore belief: when you do have item synergies, simple auctions are not good enough; you really have to add complexity to your auction if you want good worst-case welfare guarantees.

All right, so the way the theorem works, it's really an instantiation of a kind of black-box translation theorem: a theorem that takes as input a hardness assumption, a communication complexity negative result, and translates it into an equally good negative result for equilibria of auctions. So what you should be expecting is that up here there's some hardness assumption for communication protocols, and specifically, it's important, for non-deterministic communication protocols, and then here we have a conclusion, which is a lower bound against equilibria of simple auctions. Precisely: for the class of valuations you're working with, consider the underlying optimization problem of maximizing welfare in the number-in-hand model, where each bidder starts with their own valuation and they pass bits around to try to come up with a near-optimal allocation. Suppose that communication problem does not admit any sub-exponential-cost non-deterministic protocol guaranteeing a welfare approximation better than alpha; so deciding between some welfare W star and W star over alpha requires exponential communication. Suppose you have a hardness result of that form. Then what this theorem says is that that exact same hardness-of-approximation factor alpha also applies to the equilibria of any simple auction, simple in the same sense of having a number of bidding parameters that is sub-exponential in M. So if you can prove hardness for non-deterministic protocols, you have automatically proved hardness, a negative result, for the price of anarchy of simple auctions. Now, of course, for this to be interesting, we'd better have some examples where the hypothesis is satisfied. So you might be asking: what do we know about the non-deterministic communication complexity of these welfare maximization problems for different classes of valuations? It turns out we know a lot; again, in the early days of algorithmic game theory, this was well studied. For example, Noam Nisan has a quite old result at this point where he studied exactly this question, welfare maximization, number-in-hand model, general monotone valuations, and Noam proved a lower bound saying that no sub-exponential-cost communication protocol can get any kind of constant factor at all. So it can't get one percent, can't get point one percent. Noam was interested in deterministic and randomized protocols, but if you go through the proof it's quite clear that it also holds for the non-deterministic case. So if you chain together Noam's theorem with the black-box translation theorem on the previous slide, you immediately get the theorem that I stated: any simple auction, equally, cannot get one percent, or even point one percent, guaranteed at equilibrium. You can also look at restricted classes of valuations.
So, for example, instead of general valuations we could think about the subadditive valuations that we talked about earlier. Now it turns out that optimization problem gets easier: there is now a polynomial-communication protocol that gets a factor of two, but Dobzinski, Nisan, and Schapira showed that you can't beat a factor of two using sub-exponential communication, again even with a non-deterministic protocol. So if you combine the Dobzinski et al. hardness result with that black-box translation theorem, you find that when bidders have subadditive valuations, no simple auction can guarantee a price of anarchy better than a factor of two. And what's interesting here is that subadditive valuations and a factor of two came up earlier in the talk, a couple of slides ago, when I showed you the Feldman et al. result saying that simultaneous first-price auctions, which certainly are an example of a simple auction (there are only m bidding parameters), actually do achieve a factor-of-two guarantee, worst case, at equilibrium. So with this matching lower bound we have a precise sense in which simultaneous first-price auctions are actually an optimal simple auction when bidders have subadditive valuations. And that's the kind of theorem I have no idea how you would ever prove without using tools from complexity theory. So: simultaneous first-price auctions are an optimal simple auction format for subadditive valuations.

All right, I should wrap up, so I don't have time for the proof. I will say that the proof fits on a single slide, and here's your slide. Those of you who were at the very nice workshop on complexity and game theory yesterday: I talked a little bit about the proof back then. Let me just, for today, tell you the two key things that really drive this result, morally why this result is true. Fundamentally, what this proof has to do is take a too-good-to-be-true price of anarchy bound and extract from it a too-good-to-be-true non-deterministic protocol. That's what the proof is going to look like. And really there are two things that allow this to work. The first is guaranteed existence of equilibria; I'm thinking here about, say, mixed-strategy Nash equilibria in finite games. So there's always a witness, always a certificate, that a prover can write down. Secondly, given a description of an alleged equilibrium, it's easy to check that the best-response conditions are satisfied, because for a given player, if you tell me that something is an equilibrium, I know my own valuation, so I can privately check whether indeed I'm best responding to the strategies of everybody else. So this combination of guaranteed existence and efficient verifiability basically means that equilibria are going to be bound by the same impossibility results that we're used to proving for more familiar objects like communication protocols.
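To make the "efficient verifiability" point concrete, here is a minimal sketch of the private best-response check, with hypothetical function names and interfaces of my own; it is not the paper's construction. The point is that each bidder's check uses only the publicly written-down mixed strategies plus that bidder's private valuation, which is the shape of certificate a nondeterministic protocol can use.

```python
# A prover writes down an alleged mixed-strategy equilibrium over a finite bid
# space; each bidder can then verify the best-response condition privately.
# `auction` is any map from a list of pure bids (one per bidder) to
# (allocation, payments), e.g. the first-price sketch earlier in this document.

from itertools import product

def expected_utility(i, profile, bid_space, auction, valuation):
    """Expected utility of bidder i; profile[k] maps each pure bid (a tuple) to a probability."""
    total = 0.0
    for pure in product(bid_space, repeat=len(profile)):
        prob = 1.0
        for k, bid in enumerate(pure):
            prob *= profile[k].get(bid, 0.0)
        if prob == 0.0:
            continue
        alloc, pay = auction(list(pure))
        total += prob * (valuation(alloc[i]) - pay[i])
    return total

def is_best_responding(i, profile, bid_space, auction, valuation, eps=1e-9):
    """Bidder i's private check: no pure deviation improves on the prescribed strategy."""
    current = expected_utility(i, profile, bid_space, auction, valuation)
    for deviation in bid_space:
        deviated = list(profile)
        deviated[i] = {deviation: 1.0}  # deviate to a single pure bid
        if expected_utility(i, deviated, bid_space, auction, valuation) > current + eps:
            return False
    return True

# The enumeration above is for clarity only; what matters for the argument is
# that the check needs no communication beyond the written-down profile.
```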
So, at the beginning of the talk I asked how computer science informs modern auction design, and I hope in this hour you've learned some satisfying answers. The descending clock auction format used in the reverse auction extends, in part, formats that have been studied in computer science, and of course we saw that cutting-edge techniques for coping with NP-completeness were absolutely vital to the viability of what was actually deployed. On the forward auction side, we saw how the price of anarchy can be used to explain when those auctions work well, and how communication complexity can be used to explain when they don't. So I'm out of time. Thanks very much.

If there are no questions, I have one, which is: if you proved a randomized lower bound, would it show anything? I don't know of any stronger statement that it would imply. Initially I sort of started from randomized lower bounds, but somehow non-deterministic turns out to be the absolutely perfect match for this application, basically because of this TFNP kind of flavor: the guaranteed existence of a witness, and then a prover writing down that witness. This really fell into place once I realized that non-deterministic was the perfect model for it.

About the first part of the talk: these graph coloring problems, are they simple enough that you could do this without SAT solvers? Are there simple cases where you are just using very powerful technology, or were SAT solvers really needed for this? That's a good question. So the question is whether SAT solvers were not just sufficient but also necessary for the graph coloring problems. It's hard to know how to answer that; I don't know of anyone else who has put in as serious an effort to try to solve them. One question people often ask, which I think is a good question, points out that the SAT solver approach basically throws out the geometry inherent in the problem. So I think it's an interesting question: could you leverage the geometry, either as pre-processing for SAT solvers or in a completely combinatorial algorithm, to do a good job on these graph coloring problems? As far as I know, that's an open question.

Is it known whether, with an unbounded number of bids, I mean an unbounded amount of communication, there exist auction designs which reach or approximate the optimal social welfare? Good. Yes, the answer is yes. As long as you have, say, finite precision for the valuations, there's something known as the VCG mechanism, which is sort of a generalization of the second-price auction: you just ask bidders for their entire valuation, you compute the welfare-maximizing allocation, and you charge analogs of the second-highest bid, and that will get you full welfare at equilibrium. So was there a reason why that wasn't used for this auction? Right, so communication is again the reason, because the valuations have two to the m parameters. Even if m were 10, that would be a tall order for most bidders, and in fact m was, I forget exactly, but certainly in the hundreds if not more. So it was really never seriously considered; it was kind of obviously impossible for communication reasons. OK. Thanks very much.
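For concreteness, here is a brute-force sketch of the VCG mechanism described in that last answer; the valuations are hypothetical illustrations of mine. Each bidder's report is, in effect, a value for every one of the two to the m bundles, and the mechanism enumerates all allocations, which is exactly the communication and computation obstacle just mentioned.

```python
# Brute-force VCG sketch: compute the welfare-maximizing allocation, then charge
# each winner the externality they impose on the others (Clarke pivot rule).

from itertools import product

def best_allocation(valuations, items):
    """Enumerate all assignments of items to bidders; return the welfare-maximizing one."""
    n = len(valuations)
    best, best_welfare = None, float("-inf")
    for assignment in product(range(n), repeat=len(items)):
        bundles = [frozenset(it for it, owner in zip(items, assignment) if owner == i)
                   for i in range(n)]
        w = sum(v(b) for v, b in zip(valuations, bundles))
        if w > best_welfare:
            best, best_welfare = bundles, w
    return best, best_welfare

def vcg(valuations, items):
    """Return the efficient allocation and VCG payments."""
    alloc, welfare_all = best_allocation(valuations, items)
    payments = []
    for i in range(len(valuations)):
        others = valuations[:i] + valuations[i + 1:]
        _, welfare_without_i = best_allocation(others, items)
        others_welfare_with_i = welfare_all - valuations[i](alloc[i])
        payments.append(welfare_without_i - others_welfare_with_i)
    return alloc, payments

# Hypothetical two-bidder example over items {a, b}: bidder 0 only wants the
# pair, bidder 1 is additive at 6 per item; bidder 1 wins both and pays 10.
vals = [
    lambda S: 10.0 if S >= {"a", "b"} else 0.0,
    lambda S: 6.0 * len(S),
]
print(vcg(vals, ["a", "b"]))
```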