We have seen that we can bypass the Gibbard-Satterthwaite result in a special restricted domain called the single-peaked domain, and we also discussed one example: pick the leftmost peak, the rightmost peak, or the median peak, which gives a mechanism that is onto and strategy-proof but not dictatorial. In this module we are going to look at that specific mechanism in some further detail and show that it is indeed strategy-proof. In fact a whole class of mechanisms is strategy-proof, and the class we are going to talk about is named the median voter social choice functions. So how is it defined? It is a social choice function mapping from the restricted domain of single-peaked preferences to the set of alternatives, defined by a collection B of n minus 1 peaks, which we call phantom peaks. The name "phantom" is because they are not real peaks: no agent actually holds such a peak; rather, they can be chosen arbitrarily, and based on them we define the median. So the social choice function is indexed by B itself: given a specific set B of phantom peaks, we pick the median of those peaks together with the peaks of the players. How can we represent the mechanisms we discussed earlier in this form? For instance, if all the n minus 1 phantom peaks are clustered together at the leftmost position, then the leftmost agent peak is going to be the median of the collection consisting of B together with the peaks of the players. So in some sense, by picking B carefully, we can make the leftmost peak the outcome.
Alternatively, suppose all the phantom peaks are at the rightmost position. By leftmost and rightmost I mean the extreme positions of the domain: no agent peak can lie further left than the leftmost position, and no agent peak can lie further right than the rightmost. If we pick all the phantom peaks to be at the rightmost position, then the rightmost agent peak is going to be the median of the collection, and therefore we can also reproduce the rightmost-peak-picking mechanism using this median voter description. Similarly, if you want to pick the median of the agent peaks, you can place half of the phantom peaks at the extreme left and half at the extreme right (adjusting appropriately depending on whether n is even or odd); that will give you the median of all the agent peaks as the outcome. Note that the outcome of this mechanism need not always be an agent peak: by choosing the phantom peaks arbitrarily, a phantom peak itself can be the median of this collection of peaks. All right. When I say median, the computation is the median with respect to the common ordering over the alternatives. We have already seen that a single-peaked preference is defined with respect to this common ordering, and we are computing the median with respect to that ordering; in this set of examples we are assuming the alternatives are all located on the real line. This is the reason we have introduced phantom voters: so that we can look at all these mechanisms collectively.
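The three special cases above can be sketched in a few lines of Python. This is a minimal illustration, not from the lecture: the function name `median_voter_scf` and the concrete peak values are my own choices, and I assume the alternatives are real numbers so that the common ordering is just the usual order on the line.

```python
import statistics

def median_voter_scf(agent_peaks, phantom_peaks):
    """Median voter social choice function: the median of the n agent
    peaks together with the n-1 phantom peaks (2n-1 values in total,
    so the median is always well defined)."""
    assert len(phantom_peaks) == len(agent_peaks) - 1
    return statistics.median(agent_peaks + phantom_peaks)

peaks = [2, 5, 9]  # three agents' peaks on the real line

# All phantoms at the far left -> outcome is the leftmost agent peak.
print(median_voter_scf(peaks, [0, 0]))    # 2
# All phantoms at the far right -> outcome is the rightmost agent peak.
print(median_voter_scf(peaks, [10, 10]))  # 9
# Phantoms split between the extremes -> outcome is the median agent peak.
print(median_voter_scf(peaks, [0, 10]))   # 5
```

With other phantom placements, say `[3, 3]`, the median can be a phantom peak itself rather than any agent's peak, exactly as noted above.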
So it is not a single social choice function but a collection of social choice functions, with the property that by adjusting the phantom peaks appropriately you can output different kinds of medians, different kinds of peaks, as the final outcome. Now, here is an important result due to Moulin (1980): every median voter social choice function, that is, every social choice function belonging to the class we have just defined, is strategy-proof. It is not very difficult to prove, and I will not do the proof formally, because the argument is very similar to the one in the previous module for the median-peak, leftmost-peak, and rightmost-peak mechanisms: the only way an agent can alter the median is by reporting its peak on the other side of the current median, and because its preference is single-peaked, the resulting outcome is less preferred than the current one. That is essentially the idea of the proof. Notice that this is a very special property of the median; the mean, or other statistical summaries of the peaks, does not have this property. In fact the mean is manipulable, and you can easily construct an example. Let us now go into a little further detail on these mechanisms and look at the structure of Pareto efficient mechanisms, where we have the following claim. Let P_min and P_max be the leftmost and the rightmost agent peaks, again with respect to the common ordering over the alternatives. Then F is Pareto efficient if and only if the outcome of the social choice function always lies between P_min and P_max.
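The contrast between the mean and the median mentioned above can be made concrete. This is a hypothetical numeric example of my own (the peaks and the misreport `-8` are chosen for illustration), assuming single-peaked preferences where an agent prefers outcomes closer to its peak:

```python
import statistics

def mean_rule(peaks):
    """Pick the arithmetic mean of the reported peaks (NOT strategy-proof)."""
    return sum(peaks) / len(peaks)

true_peaks = [2, 5, 9]            # agent 0's true peak is 2
print(mean_rule(true_peaks))      # 16/3, about 5.33

# Agent 0 misreports -8, dragging the mean exactly onto its true peak:
print(mean_rule([-8, 5, 9]))      # 2.0 -- a strict gain, so the mean is manipulable

# The median is immune to the same lie: exaggerating on the agent's own
# side of the median leaves the median unchanged, and crossing to the
# other side could only push the outcome further from the true peak.
print(statistics.median(true_peaks))   # 5
print(statistics.median([-8, 5, 9]))   # still 5
```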
To draw a picture: say P_min is here and P_max somewhere above; then any F(P) must lie somewhere in between. It can be P_max itself, it can be P_min itself, but it will never be outside the interval. Because this is an if-and-only-if claim, we prove both directions. First the forward, or only-if, direction, which we prove via contradiction. Suppose F(P) is not inside the interval, so it lies either above P_max or below P_min. In both cases, since all the agent peaks lie between P_min and P_max, the outcome F(P) is strictly worse for every agent than the nearest extreme peak, so F(P) is a Pareto dominated alternative, a contradiction. Now the other direction, the if direction: if F(P) lies between P_min and P_max, then it must be Pareto efficient. That argument is also quite straightforward: you cannot find an alternative b that every agent strictly prefers to the current outcome F(P). The agents whose peaks lie between P_min and F(P) may each have alternatives they prefer to F(P), but those lie on one side, while for the agents on the other side the preferred alternatives lie on the opposite side; you cannot find one consolidated alternative. Say you pick some b to the right of F(P) and check whether every agent prefers it to F(P): that is not true, because b is strictly worse than F(P) for every agent whose peak lies on the left of F(P). Similarly, if b falls to the left of F(P), then for every agent whose peak lies on the right, b is strictly worse than F(P).
So we will never find an alternative that Pareto dominates F(P), and therefore the condition of Pareto efficiency holds. Okay. In the Gibbard-Satterthwaite setting we discussed certain properties, and we are going to look at the implications of those properties together with strategy-proofness in the current setup of single-peaked preferences as well. The first property we look at is monotonicity. Results similar to those in the Gibbard-Satterthwaite setting hold here too, even though the proofs differ quite a bit. So consider the relationship between strategy-proofness and monotonicity, starting with the easier direction: strategy-proofness implies monotonicity. If you recall the proof we did, we transitioned from one preference profile to another by changing one agent's preference at a time, and the same proof holds in this setup too; there is nothing very interesting in this direction. But the reverse direction, whether a monotone F is strategy-proof, is not so straightforward here: the proof we did earlier will not carry over, because it was a constructive proof. We constructed a profile exploiting a violation of strategy-proofness, meaning that b was more preferred.
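The range condition just proved can be phrased as a small check. This is my own sketch, again assuming alternatives on the real line and distance-based single-peaked preferences for the "dominates" comment; the function name is hypothetical:

```python
def pareto_dominating_boundary(agent_peaks, outcome):
    """If the outcome falls outside [P_min, P_max], return the nearest
    boundary peak, which every agent strictly prefers (it is closer to
    every agent's peak), so it Pareto dominates the outcome.
    Return None if the outcome already lies inside the interval."""
    p_min, p_max = min(agent_peaks), max(agent_peaks)
    if outcome < p_min:
        return p_min
    if outcome > p_max:
        return p_max
    return None  # inside [P_min, P_max]: no uniform improvement exists

peaks = [2, 5, 9]
print(pareto_dominating_boundary(peaks, 12))  # 9 dominates 12 for everyone
print(pareto_dominating_boundary(peaks, 0))   # 2 dominates 0 for everyone
print(pareto_dominating_boundary(peaks, 6))   # None: 6 is Pareto efficient
```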
So b is what the agent gets by manipulating and a is what it gets when it does not manipulate, and b was preferred by some agent i in that setup. We then constructed a different preference profile P'' in which b was at the top for that agent and a directly below it. This was possible in the unrestricted domain of the Gibbard-Satterthwaite setting, but with single-peaked preferences b could be somewhere here and a anywhere else, depending on their relative positions in the common ordering, which, remember, you cannot alter in the single-peaked domain. So you will also have to take care of all the intermediate alternatives lying between a and b; we cannot just place b and a right after each other, because in a single-peaked preference those intermediate alternatives will certainly sit between a and b and we will have to argue about them. What I am trying to suggest is that to show the reverse direction you will have to do a possibly different construction, or else provide a counterexample: maybe the implication is not true at all, in which case you should be able to exhibit an F that is monotone but not strategy-proof. Let me give you the answer: the reverse implication is also true, but the construction is a little different, so I leave that as a homework exercise. The next result, which we have already shown in the Gibbard-Satterthwaite case, that is, the unrestricted domain, concerns ontoness, unanimity, and Pareto efficiency.
As before, Pareto efficiency is the strictest notion, and it implies unanimity and ontoness; that holds here too. But the reverse implications, that ontoness implies unanimity and unanimity implies Pareto efficiency under the condition that the social choice function is strategy-proof, require some amount of work, and that is exactly what we are going to do here. This will also give you an idea of how constructions in the single-peaked domain must take care of certain additional things, because there is a restriction on the domain. Okay, so all we need to show for this equivalence is the reverse direction: we know that Pareto efficiency implies unanimity implies ontoness, so under strategy-proofness it remains to show that ontoness also implies Pareto efficiency. Let us try to prove this via contradiction. Assume that F is strategy-proof and onto but not Pareto efficient. What does not Pareto efficient mean? It means that there exists a profile P and a pair of alternatives (a, b) such that a is strictly preferred over b by every agent, yet the mechanism has chosen the outcome b, which is thus Pareto dominated. Since the preferences are single-peaked and a is strictly preferred over b, there must exist another alternative c, and this is an implication of single-peakedness (think about it), which is strictly preferred over b and is also a neighboring alternative of b.
That is, in the common ordering of the alternatives, c lies immediately next to b. Of course, a could be on either side of b, in which case c would be on that side too, but it does not really matter: c is a neighboring alternative of b that is also strictly preferred over b, and here we are considering only discrete alternatives, not a continuum. In particular, c can be a itself; that is also a possibility if a and b are neighbors, in which case our job becomes a little easier, but in general there always exists such a c which is strictly preferred over b, and not only for some agent i: for every agent, because every agent's preference is single-peaked. All right. We also know, because we have assumed F is strategy-proof and onto, that by ontoness, for this particular c we have just found, there must exist some preference profile P' such that the social choice outcome of P' is c. Now we are ready to construct our P''. What does P'' do? It puts c as the topmost alternative and b as the second alternative for every agent i in N, and this is possible precisely because c and b are neighboring alternatives, so we can pick single-peaked preferences with c at the top and b second.
Now the proof is very simple: we look at two different transitions, from P to P'' and from P' to P'', and we apply monotonicity; here we are using the fact that monotonicity and strategy-proofness are equivalent in this domain too. This is very similar to the previous argument, so I will not spend much time on it. Going from P to P'': the outcome at P was b, and the relative position of b only improves in P'' (you can convince yourself of this), so by monotonicity F(P'') = b. Similarly, going from P' to P'': the outcome at P' was c, and since c moves to the topmost position, its relative position also improves, so by monotonicity F(P'') = c. And here is the contradiction: the first transition says the outcome is b, the second says it is c, and b and c are not the same. So the assumption we started with, that F is not Pareto efficient, is false, and we have proved that the equivalence we had for the unrestricted domain also holds in the single-peaked domain.
Now we are interested in designing non-dictatorial social choice functions, and for that we need to define one additional property called anonymity. What is anonymity? Anonymity says that the outcome is insensitive to agent identities; the name itself is quite self-explanatory: the function does not look at which agent holds which preference, it just looks at the preferences and makes the decision based on them. To define anonymity we need the notion of a permutation of the agents. Let sigma be a permutation that renames the agents: maybe agent 1 is now called agent 3, agent 2 is called agent 1, and so on. We apply the permutation sigma to a preference profile P to construct another profile: the preference of agent i goes to agent sigma(i), the transformed version of that agent in the new profile, and we denote this new profile by P^sigma. Let us look at an example to understand what this means. Suppose we have three agents and sigma maps 1 to 2, 2 to 3, and 3 to 1. Then agent 1's original preference now belongs to agent 2 in the new profile, agent 2's preference has gone to agent 3, and agent 3's has become agent 1's. That is the meaning of this permutation of the preference profile. The idea of anonymity is that the social outcome should not alter due to this renaming: we have not changed anything in the population, the same population remains, we have just renamed the agents, and the social outcome that we are going to
get from this population should not alter due to the renaming. Formally, the social choice function F is anonymous if for every profile P and for every permutation sigma (this is very important: every possible permutation), the social choice outcome at P^sigma is the same as the social choice outcome at the original profile P. Now you can try to answer this question: can you find an example of a non-anonymous social choice function? The hint is that we have actually seen one in the past; think along the lines of Gibbard-Satterthwaite. Suppose we define our social choice function to output the most preferred alternative of player number 1: we are then tying the outcome to a very specific player. If you rename the players using a permutation, the new player 1 may have topmost alternative b while in the original profile the topmost alternative of player 1 was a, so the outcome changes. You can begin to see that a social choice function tied to a very specific player, say player 1 and its top alternative, is not going to be anonymous, and we know a name for that: a dictatorial social choice function. A dictatorial social choice function is not anonymous, and we will see in the next module that by introducing this notion of anonymity and enforcing it, we are actually ruling out those dictatorial outcomes.
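The permutation example and the dictatorship counterexample above can be sketched as follows. The alternative names `a`, `b`, `c`, the concrete preference orders, and the function names are my own illustrative choices (using 0-indexed agents, so the lecture's 1→2, 2→3, 3→1 becomes 0→1, 1→2, 2→0):

```python
def permute_profile(profile, sigma):
    """Build P^sigma: agent i's preference moves to agent sigma[i]."""
    permuted = [None] * len(profile)
    for i, pref in enumerate(profile):
        permuted[sigma[i]] = pref
    return permuted

def dictatorship_of_agent_0(profile):
    """Dictatorial SCF: always pick agent 0's top alternative."""
    return profile[0][0]

profile = [['a', 'b', 'c'], ['b', 'c', 'a'], ['c', 'a', 'b']]
sigma = {0: 1, 1: 2, 2: 0}  # the lecture's 1->2, 2->3, 3->1, 0-indexed

renamed = permute_profile(profile, sigma)
print(renamed)  # [['c', 'a', 'b'], ['a', 'b', 'c'], ['b', 'c', 'a']]

# Same population, same preferences, only the names changed --
# yet the dictatorship's outcome changes, so it is not anonymous:
print(dictatorship_of_agent_0(profile))  # 'a'
print(dictatorship_of_agent_0(renamed))  # 'c'
```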