The Gibbard-Satterthwaite theorem is important and quite significant, but it is a negative result: you cannot have a non-dictatorial social choice function with the two very desirable properties of ontoness and strategy-proofness. Now, is that the end of the story for social choice functions? It turns out that it is not, and that is because of the observation we made in the previous module: the Gibbard-Satterthwaite result holds only when all preferences are admissible, that is, when the domain of the social choice function is the whole of script P. This may not be true for certain applications. We might find cases where we can actually restrict the domain, because not all preference orderings over the alternatives are feasible, and that is where our story of positive results begins. In this module and in the subsequent modules, we will discuss some of these domains which are strict subsets of script P, which we call domain restrictions. That is exactly the area of research where many social choice functions start giving interesting results: functions that are non-dictatorial and also satisfy at least the two properties of ontoness and strategy-proofness. An alternative way to write strategy-proofness is the following. So far we have defined strategy-proofness in terms of non-manipulability: if a social choice function is not manipulable, where we have defined manipulability formally, we call it strategy-proof. A direct definition goes as follows: if the true preference of player i is P_i, then the outcome player i gets by reporting P_i truthfully to the direct mechanism is weakly preferred (under P_i) to the outcome it gets by misreporting some P_i prime.
So the player should prefer the outcome it gets when it does not misreport over the outcome when it misreports. This preference holds for all P_i and P_i prime, and also for all profiles P_minus_i of the other players. In particular, since P_minus_i can be arbitrary, we can use a tilde to denote that the other players might themselves be misreporting their preferences; it does not matter. In that sense, this is equivalent to dominant strategy incentive compatibility, a remark we also made earlier: no matter what the other players are doing, reporting your true preference ordering is weakly best for agent i, and that holds for every agent i. The other possibility is that even after misreporting your preference, you might not be able to change the outcome. If that happens, then nothing has changed, so there is no question of preferring one outcome over the other. If one of these two things happens for every agent and every profile, we say that the social choice function f is strategy-proof. Now, instead of taking a mapping from script P to the n to A, we reduce script P to a subset S, which we call a domain restriction: it reduces the domain of the social choice function. There are some important domain restrictions already well known in the literature, and we will discuss three of them in this course. The first domain restriction is single-peaked preferences, which we start next; later we will see domain restrictions such as divisible goods allocation and quasi-linear preferences. We will discuss the third domain restriction in considerable detail, because it is very applicable to various real-world applications. So, what are single-peaked preferences? Let me start with an example to motivate this problem.
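The direct definition above can be written compactly. A sketch in standard notation (the symbols are mine, not the lecture's; succeq denotes "weakly preferred under P_i", i.e., either strictly preferred or equal):

```latex
% f is strategy-proof if truth-telling is weakly best for every agent:
% for all agents i, all true preferences P_i, all misreports P_i',
% and all profiles P_{-i} of the other players,
f(P_i, P_{-i}) \;\succeq_{P_i}\; f(P_i', P_{-i})
% where x \succeq_{P_i} y means x \mathrel{P_i} y or x = y:
% misreporting either yields a less-preferred outcome
% or leaves the outcome unchanged.
```

The quantification over all P_{-i} is what makes this a dominant-strategy notion: truth-telling is best regardless of what the others report.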
Suppose the temperature of a room's air conditioner has to be set, and there is a bunch of people in the room with different preferences over the temperature. Here is what we are going to assume: each person has a most comfortable temperature, say 25 degrees, and anything hotter or colder than that is less comfortable. If your most preferred temperature is 25 degrees, then you like 24 less, you like 23 even less, then 22, and so on: as the temperature goes down, your preference for those temperatures also goes down. Similarly in the other direction: from 25 to 26 you prefer it less, and even less at 27, and so on. So you can imagine that T_i star is the most preferred temperature for agent i, and as you move away from it in either direction, the preference decreases in a monotone fashion. That is essentially what brings us to this kind of preference, called single-peaked: you have exactly one peak, which you prefer the most, and in both directions from that peak your preferences go down. Somebody else might have a different most comfortable temperature; the other person might like 27 degrees the most, with the same kind of preference structure going down on both sides. The point remains that it has a different peak but the same single-peaked structure. Now, what is common to these two different preferences? The common thing is that both refer to the same temperature scale: the ordering of the temperatures 25, 26, 27, which is a set of integers, is the same for both players.
So there is one common ordering we are going to refer to; here the temperature scale is that common ordering, but the preferences might differ depending on where each agent's peak lies. If your peak is 25 degrees and I ask you to compare 24 degrees with 28 degrees, how you compare them might vary: one person might dislike hotter temperatures less than colder ones, and that is completely feasible under this preference structure. The point is that on one side of the peak you have a monotone decrease in your preferences, and on the other side you also have a monotone decrease. So there is a common order over the alternatives, and each agent's preference is single-peaked with respect to that common order. There are several other examples; temperature is just one of them. Consider facility location: a hospital, school, or post office is to be located on a real line, and you want the school or hospital to be as close to your house as possible. Then you have a single peak: as the facility moves further from your house, you prefer it less. Similarly, think about political ideology: if you have a specific political ideology, anything to its left or right is less attractive to you, and if you are at an extreme, your preference can only go down in one direction; that is also a single-peaked preference. You can construct various other examples, and they all fall under the category of single-peaked preferences. We will denote this natural common ordering with the notation less-than or greater-than, as we do for real numbers.
So we can say that a is less than b, and this denotes the common ordering of the alternatives on a real line. For this discussion we are only talking about one-dimensional single-peakedness: you have a single real line, and on that you define your preferences. The common order does not actually need to be the real numbers; it can be any transitive and anti-symmetric relation. For simplicity of exposition we use the real line as the common ordering, but it need not be so. Transitivity we know: if a is less than b and c is less than a, then c is also less than b. Anti-symmetry says that for a and b you can have either a less than b or b less than a, but not both. This means the common ordering has a strictness to it: you cannot have both together, and that is quite natural, since we want to distinguish different points on this real line, and therefore they should form a complete ordering. Okay, so how is this a domain restriction? Let us discuss a specific example to understand why. Without loss of generality, assume that a is the leftmost position, b is second from the left, and c is the rightmost. If a, b, c were three arbitrary alternatives in the Gibbard-Satterthwaite setting, where we allowed all possible preferences over the alternatives, then you would have 3 factorial = 6 possibilities, which are all listed here. But because a, b, c actually follow this common ordering, single-peakedness rules out certain possibilities.
For instance, you cannot have a preference where a is the most preferred, followed by c, followed by b; or where c is on top, then a, then b. You can already begin to see that such a preference has multiple peaks, and that is not allowed under the domain restriction of single-peakedness. So, assuming this common ordering, we rule out these two orderings, and the other four are feasible: the two monotone orderings, and any preference whose peak is at b; but you cannot have a multi-peaked preference. That is certainly reducing the domain. The full set of all six orderings is script P, but we are now in a subset: this part is script S, the set of single-peaked preferences. So that was the pictorial description; let us make it a little more formal. A preference ordering P_i of agent i, which we assume to be a complete ordering, or linear ordering, over A with no indifferences, is single-peaked with respect to the common order less-than on the alternatives if two conditions hold. Let P_i(1) denote the top alternative of player i, the peak. First, suppose b and c are alternatives both lying on the left-hand side of the peak, with b strictly to the left of c, that is, b < c <= P_i(1). Then, by the definition of single-peakedness, c is preferred to b under P_i: the alternative closer to the peak is more preferred. Similarly on the right-hand side: if b is larger than or equal to P_i(1) and b is smaller than c, that is, P_i(1) <= b < c, then b is preferred to c under P_i.
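The two conditions can be checked mechanically. A sketch (my own code, not from the lecture) that tests a preference ordering against a common order and then enumerates the three-alternative example, confirming that exactly two of the six orderings are ruled out:

```python
from itertools import permutations

# A preference is a sequence of alternatives from most to least preferred;
# `common_order` lists the alternatives in their left-to-right order.
def is_single_peaked(pref, common_order):
    """Check the two conditions from the definition: left of the peak,
    alternatives closer to the peak are more preferred, and likewise
    on the right of the peak."""
    pos = {a: i for i, a in enumerate(common_order)}   # position on the line
    rank = {a: i for i, a in enumerate(pref)}          # 0 = most preferred
    peak = pos[pref[0]]
    for b in pref:
        for c in pref:
            # left side: b < c <= peak  =>  c must be preferred to b
            if pos[b] < pos[c] <= peak and rank[c] > rank[b]:
                return False
            # right side: peak <= b < c  =>  b must be preferred to c
            if peak <= pos[b] < pos[c] and rank[b] > rank[c]:
                return False
    return True

# With common order a < b < c, 4 of the 3! = 6 orderings survive;
# (a, c, b) and (c, a, b) are the two multi-peaked ones that are ruled out.
feasible = [p for p in permutations("abc") if is_single_peaked(p, "abc")]
```

Running this, `feasible` contains the four single-peaked orderings, matching the pictorial description above.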
A P_i satisfying these conditions is a single-peaked preference; that is the more formal way of defining it. As before, we denote by script S the set of all single-peaked preferences. Our social choice function is now a domain-restricted social choice function, mapping from S to the n to A. Now let us look at how this circumvents the GS theorem. If we are saying that the GS theorem is not true on single-peaked preferences, then we must be able to give a mechanism which is non-dictatorial yet onto and strategy-proof. Ontoness and Pareto efficiency I leave for you to verify; I am just going to argue that there exists a strategy-proof as well as non-dictatorial mechanism. And how is that? Let me explain with the example we have already discussed. We had preferences over temperatures that were single-peaked, with different players having different peaks: this is the preference profile of one player, that of another, and similarly a third. Now consider one specific mechanism, which simply collects all the peaks: it asks all the agents to report their peaks and picks the leftmost one. In the air conditioner example, the mechanism says: give me your most favorite temperatures, and I will set the temperature of the AC to the coldest among all the reported favorites. Now, another player might not like it; for instance, the chosen outcome may be worse than this agent's most preferred temperature, and worse still for the magenta player. But can you actually change the outcome by manipulating your preferences, and will that be beneficial for you? The answer is no, because of the only way this temperature can be changed.
Consider the green player, who is trying to change the outcome. The mechanism is designed so that the outcome is always the coldest temperature, that is, the lowest peak among the reported single-peaked preferences. If the green player reports a peak anywhere above the lowest position, the minimum does not change, so the temperature, the outcome, does not change. The only effective misreport is to place its reported peak below the current lowest peak; this is the green player's misreported preference. In that case the outcome does change. But remember, the green player is only misreporting: its true preference over the temperatures remains as before, and by the single-peakedness property the earlier outcome, being closer to its true peak, was more preferred than the outcome it gets now. So the same green player, having misreported its peak downward, gets a temperature it prefers less than the original outcome. Similarly for the magenta player: even though it gets one outcome here, if it misreports, it gets something worse, which it prefers less than its current outcome.
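The misreporting argument can be illustrated numerically. A minimal sketch (peak values are made up for illustration, and I assume symmetric single-peaked preferences, where closer to the true peak is always better; the argument in the lecture holds for all single-peaked preferences):

```python
# Leftmost-peak mechanism: set the temperature to the coldest reported peak.
def leftmost_peak(reported_peaks):
    return min(reported_peaks)

# With symmetric single-peaked preferences, an agent with a given true peak
# prefers outcome x to outcome y exactly when x is closer to the peak.
def prefers(true_peak, x, y):
    return abs(x - true_peak) < abs(y - true_peak)

true_peaks = [22, 25, 27]               # hypothetical favorite temperatures
truthful = leftmost_peak(true_peaks)    # outcome under truth-telling: 22

# The player with true peak 25 (the "green" player) can only change the
# outcome by reporting below 22, say 20 -- but that moves the outcome
# further from its true peak, so the misreport backfires.
misreport = leftmost_peak([22, 20, 27])
```

Here `truthful` is 22 and `misreport` is 20, and `prefers(25, truthful, misreport)` holds: truth-telling was better for the green player.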
So none of the players can misreport and gain, and clearly this is a non-dictatorial mechanism: it is collectively taking the decision of all the players, and it is also strategy-proof, as we have explained intuitively. This mechanism is not unique. You can pick any k-th lowest peak from the left; in particular, you can pick the rightmost peak, which is also feasible. If the mechanism instead says, report your peaks and I will set the temperature to the highest reported one, then you can argue in a very similar way that it is not manipulable by any player. What is more popular, or more common, is to pick the peak in the middle, the median of all the peaks; this mechanism is sometimes called the median peak mechanism, and it is also strategy-proof. It is not very difficult to argue why. So that is one example of how you can circumvent the GS theorem: we can actually find mechanisms that are onto and strategy-proof, and not necessarily dictatorial.
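The median variant can be sketched the same way (peak values again illustrative, not from the lecture):

```python
# Median peak mechanism: set the temperature to the median reported peak.
# With an odd number of agents, no agent can pull the median toward its
# own peak: any report on the same side of the median as your true peak
# leaves the median unchanged, and crossing to the other side can only
# push the outcome further away.
def median_peak(reported_peaks):
    s = sorted(reported_peaks)
    return s[len(s) // 2]               # middle element for an odd count

true_peaks = [22, 25, 27]               # hypothetical favorites
outcome = median_peak(true_peaks)       # median of the truthful reports: 25

# The agent with peak 22 wants a colder outcome, but any report below 25
# leaves the median at 25, and reporting above 25 (say 30) only raises it.
shifted = median_peak([30, 25, 27])
```

Here `outcome` is 25 and `shifted` is 27: the only reports that move the median move it away from the deviator's true peak, which is the intuition behind the median peak mechanism's strategy-proofness.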