This video is going to tell you about the revelation principle, which is one of the core ideas that makes mechanism design possible. So let's begin with what the revelation principle says. It says that any social choice function that can be implemented at all, by any mechanism, can also be implemented by a truthful and direct mechanism. Let's recall what it means for a mechanism to be truthful and direct. We say that a mechanism is direct if it works by having all of the agents simply make a single declaration: you can think of it as writing something down on a piece of paper, handing that to the person who runs the mechanism, and having some decision be made. An indirect mechanism, in contrast, might have a whole sequence of actions and information revelation, with subsequent actions that depend on that revealed information, and multiple actions by different agents. An indirect mechanism is essentially any sort of imperfect-information, Bayesian, extensive-form game. In contrast, a direct mechanism is a simultaneous-move Bayesian game; you can think of it as the Bayesian version of a normal-form game. Now, a direct mechanism is truthful if the equilibrium strategy for every agent, when they have to write something down on that piece of paper, is just to write down all of their private information: simply to declare their type. So a truthful mechanism gives agents the easiest strategic problem they can have. The agents are asked to write something down on the piece of paper, and they say, all right, I'll tell you all of my secrets, I'll reveal everything to you on the piece of paper, and you go off and make a decision based on that. And what the revelation principle tells us, somewhat surprisingly, is that if I can implement a social choice function in any complicated way, I can also implement it in this very simple, truthful, direct way. So let's see how this works.
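To make the distinction concrete, a direct mechanism can be thought of as a single function from one declaration per agent straight to an outcome, with no further rounds of interaction. Here's a minimal sketch in Python; the particular outcome rule (a second-price auction) is purely illustrative and isn't something the lecture commits to:

```python
def direct_mechanism(declarations):
    """Each agent makes a single declaration; an outcome is chosen at once.
    Illustrative rule: allocate to the highest declaration, charge the
    second-highest (a second-price auction)."""
    ranked = sorted(range(len(declarations)),
                    key=lambda i: declarations[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return {"winner": winner, "price": declarations[runner_up]}

print(direct_mechanism([10, 7, 3]))  # agent 0 wins and pays 7
```

An indirect mechanism, by contrast, couldn't be written as one such function call: it would need a loop over rounds, with later actions depending on what was revealed earlier.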
I start with an arbitrary, non-truthful mechanism, which indeed may be indirect. So let me draw it here. What happens is that all of the agents have some types, because we're in a Bayesian game setting; then each of them follows some strategy, which is a function of their type and tells them how to behave inside this mechanism. And then some complicated thing happens in here. Maybe we have information revelation; maybe there are repetitions, with agents acting multiple times. And in the end, eventually, something happens: the mechanism decides an outcome. Well, recall that we can look at this whole thing as being a game. In the end, it's just a mapping from agents' types and strategies into some outcome that the agents have utility for. And so I can think about an equilibrium of this game; it's just some Bayesian game. So I can think about these strategies s all being in equilibrium with each other. And that's exactly what I do when I talk about implementation: if I say that a mechanism implements a social choice function, what I'm saying is that in equilibrium, the mapping from types to outcomes is the same as the mapping that would be chosen by the social choice function. Well, here's the punch line. I want to show that I can make a new direct mechanism which is truthful. And here's how I do it. I take the original mechanism and I embed it inside this new mechanism. What I do is say: I'm going to make a new mechanism whose action space is the same as the type space. So I'm going to ask each of the agents, tell me what your type is. And of course, the agents can do whatever they want; they don't necessarily tell me the truth, but they tell me something. And then I take the type that they've declared. First, let me give that a name: I'll call the policy that agent i has for declaring his type to me his strategy s'_i, which is a function of his true type.
So he has some way of deciding, based on his type, what to reveal to me. He'll report some type, which is a function of the real type that he has, but maybe it's not the truth; maybe he'll tell me something else. I'll then take whatever he declares to me, and I'll believe him that that's his type. And I'll stick that into s, the strategy profile that we had before, which was our equilibrium in the original mechanism. So I'll take that declared type, figure out what strategy he would have played in the original mechanism if his type had been what he declared to me, and then I'll simulate the original mechanism. So if it's indirect, inside this new mechanism I'll run a simulation of the original mechanism: I'll figure out what the original mechanism would have done if everybody had played, under their declared types, the strategies that were previously in equilibrium. I'll figure out what the outcome is, and I'll make that the outcome of the new mechanism. Well, I want to claim that this new mechanism is truthful. You can see that it's direct; it's direct by construction. I claim that it's also truthful, and this should be fairly easy to see: it's truthful by exactly the same argument that showed s was in equilibrium. If this new mechanism weren't truthful, that would mean that s wasn't in equilibrium. Why? Because if this mechanism isn't truthful, then I want to declare some type here that isn't my real type. And if I'm declaring a type that isn't my real type, that means I'm changing the strategy that I would have played in the original mechanism. In other words, if I would want to lie to the mechanism here, I would also have preferred to lie to myself in the original mechanism, so that I would effectively be playing a different strategy than I was playing before. And of course, the whole idea of an equilibrium is that nobody can gain by changing their strategy.
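The construction just described can be sketched in a few lines of Python. For simplicity, the "simulation" of the original mechanism is collapsed into a single function from an action profile to an outcome; all names here are illustrative, and the worked example (a two-bidder first-price auction, where bidding half one's value is the symmetric Bayes-Nash equilibrium when values are independently uniform on [0, 1]) is just one standard instance:

```python
def make_revelation_mechanism(original_mechanism, equilibrium_strategies):
    """Build a truthful direct mechanism from an arbitrary mechanism.

    `original_mechanism` maps a profile of actions to an outcome (for an
    indirect mechanism, think of this as running the whole simulation);
    `equilibrium_strategies` is the equilibrium profile s: one function
    per agent, mapping a type to an action."""
    def revelation_mechanism(declared_types):
        # Apply each agent's equilibrium strategy to the type they declared,
        # then simulate the original mechanism on the resulting actions.
        actions = [s_i(t_i) for s_i, t_i
                   in zip(equilibrium_strategies, declared_types)]
        return original_mechanism(actions)
    return revelation_mechanism

# Illustrative use: a two-bidder first-price auction. With values drawn
# independently from uniform [0, 1], bidding half one's value is the
# symmetric Bayes-Nash equilibrium.
def first_price_auction(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return {"winner": winner, "price": bids[winner]}

direct = make_revelation_mechanism(first_price_auction, [lambda v: v / 2] * 2)
print(direct([0.8, 0.6]))  # bids 0.4 and 0.3 on the agents' behalf; agent 0 wins at 0.4
```

Note that the agents never submit bids themselves; they only declare values, and the wrapper computes the equilibrium bids for them.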
So if I wouldn't have wanted to lie to myself in the original mechanism and change my strategy, neither do I want to lie to the new mechanism now, because it's already playing a strategy for me that's in equilibrium for the original mechanism. And, that being the case, it chooses the same outcomes as the original mechanism. So the slogan for the revelation principle is that the agents don't need to lie in this new mechanism, because the mechanism already lies for them. We don't actually feed their real types into the original mechanism; instead, we apply their equilibrium strategies to their declared types, just as they were doing before. And given that the mechanism is going to do such a good job of lying for me, in just the way that I would have wanted to lie, there's no reason for me to lie again. The mechanism is acting as my proxy: it does just what I would like it to do, as long as I tell it the right thing. And so I should tell it the right thing. That's the idea of the revelation principle. There's one technical concern to mention here, which is that the set of equilibria is not always the same in the original mechanism and the revelation mechanism. The revelation mechanism, of course, has the original equilibrium we're interested in; that's just what I've shown you. But it could actually introduce new equilibria that we don't like. It's a bit of a technicality, but it's worth telling you. Now let me give you the high-level point, which is that the revelation principle really is important for mechanism design. So why? What is it good for? Well, there are three ways to see it. It tells us that truthfulness is not a restrictive assumption from the point of view of what can be implemented.
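The "no reason to lie" argument can be checked numerically in the standard two-bidder first-price auction example (values independently uniform on [0, 1], equilibrium bid v/2; this worked example is an addition, not from the lecture itself). In the revelation mechanism built from that auction, an agent who reports r while the opponent reports truthfully wins with probability r and, on winning, pays the bid r/2 that the mechanism places for him, so his expected utility is r * (v - r/2), which is maximized exactly at r = v:

```python
def expected_utility(true_value, report, grid=1000):
    """Expected utility of reporting `report` in the revelation mechanism
    built from a two-bidder first-price auction (values uniform on [0, 1],
    equilibrium bid v/2), when the opponent reports truthfully.
    The opponent's value is averaged over a discretized grid."""
    total = 0.0
    for k in range(grid):
        opponent = (k + 0.5) / grid        # opponent's true (= reported) value
        my_bid, opp_bid = report / 2, opponent / 2
        if my_bid > opp_bid:               # win iff our simulated bid is higher
            total += true_value - my_bid   # pay our own bid (first-price rule)
    return total / grid

v = 0.8
best_report = max((r / 100 for r in range(101)),
                  key=lambda r: expected_utility(v, r))
print(best_report)  # reporting the true value 0.8 maximizes expected utility
```

Searching over all reports on a grid, the best declaration is the agent's true value: the mechanism already lies (that is, shades the bid) optimally on his behalf.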
Likewise, it shows us that direct mechanisms are no less powerful than indirect mechanisms, which is a bit surprising, because it sort of feels like indirect mechanisms are much richer and more complicated and ought to be able to do more. This tells us no: truthful and direct mechanisms are enough. And finally, if you can show that something is true for truthful and direct mechanisms, you've also shown that it's true for all mechanisms, and that's really important for analysis. If I want to prove something about the space of all mechanisms, in principle that might seem really hard to do, because I would have to grind through every possible non-truthful, indirect mechanism; it seems like a hard set to think about. What the revelation principle tells me is that if I want to prove that no mechanism exists with certain properties, or that some given mechanism is the best one with certain properties, it's enough to show this among the set of truthful and direct mechanisms. And that's a much more manageable thing to do. So really, the revelation principle, in telling us that we can restrict ourselves to truthful and direct mechanisms without loss of generality, opens up the analysis of mechanisms. We're going to see in the future that when we prove things about mechanisms, we always begin by saying: without loss of generality, let's restrict ourselves to truthful and direct mechanisms. That step is very important; it's what makes mechanism design feasible in practice. And that's why we're telling you about this so early on. This is a really important foundational principle for mechanism design.