In the previous module, on single-peaked preferences, we looked at an interesting mechanism called the median voting rule, and we saw that this voting rule satisfies certain desirable properties, most prominently strategy-proofness and ontoness. In this module, we are going to add the property of anonymity and characterize these three properties together: strategy-proofness, ontoness, and anonymity. We are going to ask which kind of mechanism satisfies all three, and we will see that it is exactly the median voting rule. This is significant because we have seen, in the unrestricted domain of the Gibbard–Satterthwaite theorem, that even if you only want to satisfy strategy-proofness and ontoness, the only mechanism you can settle for is a dictatorship. Clearly, the median voting rule is not a dictatorship. So this is the power of the restricted domain: once we focus our attention only on domains that are single-peaked, these three properties uniquely characterize the median voting rule. This result is due to Moulin (1980). It says that a social choice function f is strategy-proof, onto, and anonymous if and only if it is a median voting social choice function. In this module we are going to prove at least a part of this, and we will continue the proof in the next module. The first thing we will do is the reverse, or "if", direction: if we are given a median voting social choice function, defined with the use of those phantom peaks, we know it is strategy-proof from the previous theorem. It is certainly anonymous, because it only looks at the peaks, and it does not matter which peak belongs to which agent.
If you interchange or permute the agents, the collection of peaks remains the same; only the agent labels get interchanged. So, as long as those peaks are unchanged, the outcome does not change, and the rule is clearly anonymous. It is onto as well: pick any arbitrary alternative a. We can construct a preference profile as follows: put the peaks of all the players at a. Because there are n player peaks and only n - 1 phantom peaks, the median is always going to be at a, irrespective of the positions of those phantom peaks. Therefore we have actually constructed a preference profile P whose outcome is a, and you can adapt the construction to whichever alternative you pick: if you pick b, put all the agent peaks at b, and so on. So the median voting rule is certainly onto, anonymous, and strategy-proof. The forward direction is what we are going to discuss in detail, because it is the much harder direction. What are we supposed to prove here? We are given a social choice function mapping from the restricted, single-peaked domain to the set of alternatives, and this social choice function is already known to be strategy-proof, anonymous, and onto. Now, a few things we will be using throughout this proof. Let me tell you the broader picture of what we really need to do. We will essentially have to construct the phantom peaks: we are given the player peaks and cannot change them, so we have to pick appropriate phantom peaks such that the social choice function can be written as the median of those phantom peaks together with the player peaks. That is our objective.
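To make the ontoness and anonymity arguments concrete, here is a minimal sketch in Python, assuming the alternatives are points in the interval [0, 1] (standing in for the common ordering); the name `median_voting_rule` is chosen here purely for illustration:

```python
from statistics import median

def median_voting_rule(agent_peaks, phantom_peaks):
    """Median voting rule: the outcome is the median of the n agent peaks
    together with the n-1 fixed phantom peaks (2n - 1 points in total)."""
    assert len(phantom_peaks) == len(agent_peaks) - 1
    return median(list(agent_peaks) + list(phantom_peaks))

# Ontoness: put every agent peak at a. With n agent peaks but only n-1
# phantom peaks, the median is a regardless of where the phantoms sit.
a = 0.3
print(median_voting_rule([a, a, a], [0.0, 1.0]))   # median of {0, .3, .3, .3, 1}
print(median_voting_rule([a, a, a], [0.9, 0.95]))  # still a, phantoms moved

# Anonymity: permuting the agent peaks leaves the outcome unchanged,
# since only the multiset of peaks enters the median.
print(median_voting_rule([0.1, 0.5, 0.9], [0.2, 0.4]) ==
      median_voting_rule([0.9, 0.1, 0.5], [0.2, 0.4]))
```

The point of the two ontoness calls is that moving the phantom peaks cannot dislodge the median once all n agent peaks coincide.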
Based on whether this median is itself a phantom peak or an agent peak, we will have two different cases, and we will handle them one by one. Let us first define the notation P_i^0. This means agent i's preference has its peak at the leftmost position, leftmost with respect to the common ordering over all the alternatives. So the peak is at the leftmost position, and from there onwards it is a single-peaked preference: the preferences go down monotonically as you move right. Here 0 and 1 denote the two extreme points of the domain of alternatives. Similarly, you can define P_i^1, whose peak is at the rightmost position with respect to the same common ordering, with all the other alternatives lying to its left. Now, the proof, as we said, is going to be constructive: we are going to construct the median voting rule, which requires the phantom peaks to be designed so that the outcome of an arbitrary social choice function satisfying these three properties, strategy-proofness, anonymity, and ontoness, matches the outcome of the median social choice function. So here is our construction of the phantom peaks. Consider the j-th phantom peak y_j. We take the first n - j peaks, the ones at the leftmost position, and we use the same social choice function f that has already been given to us: if you feed it a preference profile, it outputs a specific alternative, and that output is what we take as the phantom peak. The preference profile whose image under f is y_j is the following.
The first player's peak is at the leftmost position, the second player's peak is at the leftmost, and so on up to agent n - j; then, from agent n - j + 1 up to n, the remaining j peaks are all at the rightmost position. That is how we define the j-th phantom peak, and as you increase j from 1 to n - 1 you get the whole family. The first one, y_1, has the peaks of agents 1 through n - 1 at the leftmost position and only the last peak at the rightmost position. At the other end, y_{n-1} has only the first agent's peak at the leftmost, with everything else at the rightmost. Sequentially, one by one, you move one of the leftmost peaks over to the rightmost position. That is how y_j is defined; after choosing the preference profile, you apply f to it. Now, f is arbitrary; we do not really know it, and we will have to argue that if the properties we discussed are satisfied, then these y_j's together with the agent peaks yield the same outcome as the median voting rule. The first claim we are going to make is that the y_j's have a monotone non-decreasing relationship: y_j <= y_{j+1}, where the inequality is with respect to the same common ordering over all the alternatives. It says that y_j is either exactly equal to y_{j+1} or lies to its left; these are the only two possibilities. This is not very difficult to see; the whole point is that we have defined the y_j's in such a way that they become non-decreasing. We have already seen what y_j is: up to agent n - j the peaks are leftmost, and after that everything is rightmost.
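The boundary-profile construction of the y_j's can be sketched as follows, with one simplification flagged up front: in the lecture, f takes full preference orderings, whereas here, purely for illustration, we treat f as a black box mapping a list of reported peaks in [0, 1] to an outcome, and use a plain median of the reports as a stand-in f:

```python
from statistics import median

def phantom_peaks(f, n):
    """Build y_1, ..., y_{n-1} by querying f on boundary profiles:
    y_j = f(profile where the first n - j peaks sit at the left
    extreme 0 and the remaining j peaks sit at the right extreme 1)."""
    return [f([0.0] * (n - j) + [1.0] * j) for j in range(1, n)]

# Stand-in f: the plain median of the reported peaks (n = 4).
ys = phantom_peaks(median, 4)
# y_1 = f([0,0,0,1]), y_2 = f([0,0,1,1]), y_3 = f([0,1,1,1])
print(ys)

# The claim below: the y_j's come out monotone non-decreasing.
print(all(a <= b for a, b in zip(ys, ys[1:])))
```

Note that `statistics.median` averages the two middle values on even-length input, which is why the stand-in f returns 0.5 for the balanced profile; any strategy-proof f would do for the construction itself.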
If we go from y_j to y_{j+1}, one more agent flips over: now only the agents before n - j have their peaks at the leftmost position. One thing I forgot to mention: which agents have which peaks does not matter, because we already have anonymity. Without loss of generality, any agent can have any of these peaks; as long as the collection of peaks is the same, we are happy. Now, due to strategy-proofness, and this is an important point that we have used before: notice that for player n - j, the outcome y_j is the outcome when he reports the leftmost preference P_{n-j}^0. If agent n - j truly has this leftmost preference, then that agent should prefer the current outcome y_j, given by the social choice function, over y_{j+1}, because in the profile defining y_{j+1} we can view agent n - j as misreporting something else. Since the social choice function is strategy-proof, this preference relationship must hold. Now we exploit the fact that this preference has a very specific structure: it is single-peaked with the leftmost position as the most preferred alternative, monotonically decreasing from left to right. If, according to this preference, y_j is more preferred than y_{j+1}, then it has to be that y_j lies to the left of y_{j+1}, or coincides with it. This is quite self-explanatory. With that, we have essentially proved the claim: you can repeat this argument for any j from 1 to n - 2, and the relationship y_j <= y_{j+1} will hold. All right.
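The monotonicity argument just given can be written compactly as a two-line derivation (a sketch in the lecture's notation):

```latex
% The profiles defining y_j and y_{j+1} differ only in agent (n-j)'s
% report: P_{n-j}^0 in the first, P_{n-j}^1 in the second.
% Strategy-proofness at the first profile (truth P_{n-j}^0) gives
\[
  y_j \;\succeq_{P_{n-j}^0}\; y_{j+1},
\]
% and since P_{n-j}^0 is single-peaked with its peak at the leftmost
% alternative (so preference decreases left to right), this forces
\[
  y_j \;\le\; y_{j+1} \qquad \text{for all } j = 1, \dots, n-2.
\]
```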
Now we are getting into the actual proof; we will save this result and reuse it whenever needed. Consider an arbitrary preference profile P = (P_1, ..., P_n): the n agents have their own preferences, and we denote their peaks by lowercase p_i. The claim is that if f satisfies strategy-proofness, ontoness, and anonymity, then f(P) should be the median of all the agent peaks and the corresponding phantom peaks. This is the final claim; if we can show it, then we have proved Moulin's theorem. Again, because of anonymity, it does not really matter who has which peak, so we can assume without loss of generality that the agents are indexed in increasing order of their peaks, where this increasing order is with respect to the common ordering over the alternatives. Let us also define a to be the outcome that is the median of all the agent peaks and the phantom peaks. We will have to show that a equals f(P). As we said, there are two cases; first, the median could be a phantom peak, so suppose a = y_j for some j. What does the median mean? In total there are 2n - 1 points: n agent peaks and n - 1 phantom peaks. Out of these 2n - 1 points, the median sits exactly at the center, at the n-th position, so there are n - 1 points on its left and n - 1 points on its right. Now, if y_j is the median, we also know that there are j - 1 phantom peaks before it, because the y_j's are monotonically non-decreasing. Therefore, of the n - 1 points on the left, if j - 1 of them are phantoms, then n - j must be agent peaks.
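The counting around the median can be checked on a small example; the numbers here are chosen purely for illustration, with n = 3 and the median landing on the phantom peak y_2:

```python
from statistics import median

n = 3
agent_peaks = [0.1, 0.5, 0.9]   # sorted without loss of generality (anonymity)
phantoms = [0.2, 0.4]           # y_1 <= y_2: monotone non-decreasing
m = median(agent_peaks + phantoms)   # the n-th smallest of the 2n - 1 points

# Here the median is the phantom peak y_2, i.e. j = 2:
j = phantoms.index(m) + 1

# j - 1 phantom peaks and n - j agent peaks lie strictly to its left;
# n - 1 - j phantom peaks and j agent peaks lie strictly to its right.
assert sum(y < m for y in phantoms) == j - 1
assert sum(p < m for p in agent_peaks) == n - j
assert sum(y > m for y in phantoms) == n - 1 - j
assert sum(p > m for p in agent_peaks) == j
print(m, j)
```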
Similarly, the right-hand side has n - 1 points in total, and we know that n - 1 - j of them are phantom peaks, so there should be exactly j agent peaks. Fair enough. So we can say that this y_j is exactly the a we have to find, and all the peaks are arranged accordingly: since we have already ordered them, the n - j smallest agent peaks lie on the left and the remaining agent peaks lie on the right. Now we are going to use a similar transformation. We know what y_j is: by definition, it is the result of applying f, the social choice function, to the boundary preference profile. Next we look at what happens if we replace the first player's boundary preference with his true preference, the preference P_1 in the profile we actually want to prove the claim for. So we change just one preference in the whole profile to P_1, and let b denote the resulting outcome. Now, what do we know? By strategy-proofness, if player one truly had the boundary preference P_1^0, then y_j should be weakly more preferred than any outcome reachable by deviating, because reporting P_1 would then be a misreport. So y_j has to be preferred to b under P_1^0. And again, by the same argument as before, since the peak of P_1^0 is at the leftmost position, y_j must lie to the left of b, or coincide with it. Now, again by strategy-proofness, we use strategy-proofness at the other preference profile. This might be confusing if you are looking at it for the first time: we are considering what happens if player one actually had the preference P_1; then misreporting some other preference should not be beneficial according to P_1.
So if that player really had P_1^0, what we concluded is that under that preference ordering he would prefer y_j to b. But when his preference actually is P_1, we can say that he should prefer b to y_j under P_1, the true preference of that player. Again by strategy-proofness, b is weakly more preferred than y_j under P_1: if the true preference is P_1, then misreporting P_1^0 is no better than reporting truthfully. Now observe that p_1, as we have already seen (this is why we ordered the agent peaks and established the earlier relationship), satisfies p_1 <= y_j: the peak of P_1 lies on the left-hand side of y_j. That implies, because P_1 is single-peaked, that b <= y_j. Why is that true? The peak is somewhere to the left and y_j lies to its right. If b were on the right-hand side of y_j, then by single-peakedness b would be less preferred than y_j, since preference decreases as you move right of the peak. It can never be the case that b sits to the right of y_j while the peak is on the left of y_j, and still b is more preferred than y_j. Therefore b lies on the left-hand side: it can be at y_j or to its left, but never to its right. And because b <= y_j, while the previous conclusion gave y_j <= b, it must be the case that b is exactly equal to y_j. We can now repeat this argument for the first n - j agents: next we replace P_2^0, the second agent's boundary preference, with P_2, the given preference of that player.
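The two strategy-proofness inequalities in this step form a sandwich argument, which can be summarized as follows (a sketch in the lecture's notation):

```latex
% Let b be the outcome after replacing P_1^0 with P_1 in the boundary
% profile defining y_j.
% Strategy-proofness at P_1^0 (truth gives y_j, deviating to P_1 gives b):
\[
  y_j \;\succeq_{P_1^0}\; b
  \;\Longrightarrow\; y_j \le b
  \quad (\text{the peak of } P_1^0 \text{ is leftmost}).
\]
% Strategy-proofness at P_1 (truth gives b, deviating to P_1^0 gives y_j):
\[
  b \;\succeq_{P_1}\; y_j
  \;\Longrightarrow\; b \le y_j
  \quad (p_1 \le y_j \text{ and } P_1 \text{ is single-peaked}).
\]
% Together: b = y_j.
```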
Because we now know that the outcome after the first replacement is still y_j, we can repeat the same argument to show that after replacing the second agent's preference, the outcome is again y_j. We can keep repeating this argument up to the (n - j)-th agent, and the outcome stays y_j throughout. Now we are going to do the same for the remaining agents, up to n. After we have proved that the outcome with the left block replaced is y_j, we start replacing the preferences from the right-hand side. So we look at P_n, the preference of player n, and replace the boundary preference P_n^1 with it. We apply a very similar argument. By strategy-proofness, y_j has to be weakly more preferred under P_n^1 than the new outcome b. Because this boundary preference has its peak at the rightmost position, the less preferred outcome must lie further to the left: so b has to lie to the left of y_j, or coincide with it, since this is a single-peaked preference. Similarly, using the true preference P_n, whose peak p_n lies on the right-hand side of y_j, strategy-proofness gives that b must be at least as preferred as y_j under P_n. Then it must be that y_j lies to the left of b, or coincides with it, for the same reason: since the peak p_n is on the right-hand side of y_j, b cannot lie strictly to the left of y_j, because single-peakedness would force it to be less preferred than y_j. So y_j <= b, and putting these two inequalities together we conclude that b equals y_j.
So we are done proving that b is exactly equal to y_j, and this is how the proof percolates: next you go to P_{n-1} and keep replacing down to agent n - j + 1, and at the end you have the full profile of the preferences you wanted, whose outcome is exactly y_j, the median of all these peaks, phantom as well as agent peaks. All right. That proves the first part; next we will see the case where the median is an agent peak.