Like the single-peaked preferences, we are now going to look at a different domain restriction, one closely related to single-peaked preferences but slightly different. This domain is known as the task allocation domain. As the name suggests, we have one task, divisible in this case, which can be allocated among the n players. Each player i gets a share of the task, which we denote s_i. This share naturally lies in the interval [0, 1], and the shares summed over all players equal 1. So the task is always fully allocated, no part of the job is left unallocated, and every agent gets a non-negative share.

Now, what is an agent's payoff? We assume that each agent has some most preferred share of the work. For instance, one agent may like 50% of the job to be assigned to it, and anything more or less is less preferred; another agent may have a different peak, say 30% of the job, with anything more or less being worse.

How can we motivate this kind of task allocation domain? Imagine that the task carries rewards, say a wage per unit of time. Suppose that if agent i works for t_i units of time, it earns a constant wage w for each unit, so w·t_i is the total wage for working t_i units. At the same time, working is costly for the agent, due to physical tiredness, lost free time, or whatever it may be. Let us assume this cost is quadratic: with a multiplier c_i, the cost is c_i·t_i², so the task gets harder and harder as time progresses. Putting these together in a simple additive way, the net payoff is the wage the agent earns minus the cost it incurs.
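As a quick numerical sketch of this wage-minus-quadratic-cost model (the wage w and cost multiplier c below are illustrative values, and the function names are my own, not from the lecture):

```python
# Net payoff of an agent for working t units of time: wage minus quadratic cost.
def payoff(t, w, c):
    return w * t - c * t * t

# The payoff is concave in t, so it has a unique maximizer t* = w / (2c),
# obtained by setting the derivative w - 2*c*t to zero.
def peak(w, c):
    return w / (2 * c)

w, c = 1.0, 2.0          # illustrative wage and cost multiplier
t_star = peak(w, c)      # 0.25 for these values
# Single-peakedness: moving away from t* in either direction lowers the payoff.
assert payoff(t_star, w, c) > payoff(t_star + 0.1, w, c)
assert payoff(t_star, w, c) > payoff(t_star - 0.1, w, c)
```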
So w·t_i − c_i·t_i² is the net payoff that agent i gets. If we plot the payoff on the y-axis against t_i on the x-axis, we see that the curve is concave and has a unique maximum. This maximum is achieved at t_i* = w/(2c_i). So this preference over the time share of the task is essentially single-peaked: there is exactly one share the agent prefers the most, because it maximizes the payoff, and anything more or less is less attractive to this agent.

One way of depicting pictorially what is happening is to draw the total task divided among the agents. The total task is shown on the y-axis, from share 0 to share 1, and the n agents are laid out along the x-axis. Each agent has a most favorite share of the task, the red dot showing what it prefers the most, and this peak varies across agents. In the end we have to divide the task so that the dotted segments, the shares, sum to 1 over all agents. The segment for agent i is exactly s_i, and the dots are the peaks the agents prefer the most.

Even though this domain is single-peaked, it is not exactly single-peaked in the sense of the definition we used in the previous module. There, single-peakedness meant that all alternatives could be placed in one common ordering, and every preference was single-peaked with respect to that ordering. Here, each player's preference is single-peaked over its own share of the task. To see how these differ, let us look at an example.
Suppose there are three agents and two alternatives: the first gives the shares (0.2, 0.4, 0.4) and the second gives (0.2, 0.6, 0.2) to the three players. Player 1 likes both of them equally, because the share it gets is exactly the same in each. So there is a certain amount of indifference: these two alternatives are indifferent to player 1. This kind of tie, two alternatives being equally preferred, was not allowed under the single-peaked preferences of the previous module. So this domain is single-peaked with respect to each agent's own share, whereas in the earlier case all alternatives could be ordered in such a way that everyone's preference was single-peaked with respect to that common ordering.

We denote this domain of task allocation by T. Our social choice function now maps profiles from this task allocation domain into the set of alternatives. The notation f_i(P) denotes the share that agent i gets at profile P, and the sum of f_i(P) over all i must equal 1. As before, p_i denotes the peak, the most preferred share of the task for player i.

Since we have indifferences here, the notion of Pareto efficiency, which we defined in the previous modules and settings, has to be defined a little more carefully; it is slightly different. The social choice function f is Pareto efficient if there does not exist any other division of the task that is weakly preferred by all of the agents and strictly preferred by at least one.
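Because each agent cares only about its own share, this indifference is mechanical to check. A minimal sketch, under the assumed symmetric single-peaked utility u(s) = −|s − p| (the lecture only requires single-peakedness, not this particular form):

```python
# Assumed symmetric single-peaked utility over one's own share with peak p:
# any single-peaked utility behaves qualitatively the same way.
def utility(share, peak):
    return -abs(share - peak)

p1 = 0.2                  # player 1's peak (illustrative)
a = (0.2, 0.4, 0.4)       # two full divisions of the task
b = (0.2, 0.6, 0.2)
# Player 1's share is 0.2 in both divisions, so it is exactly indifferent.
assert utility(a[0], p1) == utility(b[0], p1)
```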
Remember, in the previous single-peaked setting, Pareto efficiency said that if a specific alternative is dominated by some other alternative for all the players, then that dominated alternative is never picked as the social choice outcome. Here the alternatives are divisions of the task, so we look at a specific division and ask whether we can find another division that is weakly preferred, that is, every agent likes it at least as much as the current one, while some agent strictly prefers it. If such a division exists, we say the social choice function is not Pareto efficient. So the definition of Pareto efficiency means there does not exist any other alternative that is weakly preferred to the current outcome by all the agents and strictly preferred by at least one.

Let us look at some implications of this task allocation domain. How can we make sure an allocation is Pareto efficient? The peaks are arbitrary, so we cannot assume that the sum of the peaks equals one. If it does happen to equal one, the allocation problem is very simple: give each agent its favorite share p_i. That is the unique Pareto efficient allocation, because nobody would like to change it; anything more or less would be less preferred.
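This definition can be made concrete with a small dominance check. A sketch under the assumption of symmetric single-peaked preferences, where an agent prefers shares closer to its peak (the function name and the numbers are illustrative, not from the lecture):

```python
def pareto_dominates(y, x, peaks):
    """True if division y is weakly preferred to x by every agent and strictly
    preferred by at least one, measuring preference as closeness to the peak."""
    weak = all(abs(yi - p) <= abs(xi - p) for yi, xi, p in zip(y, x, peaks))
    strict = any(abs(yi - p) < abs(xi - p) for yi, xi, p in zip(y, x, peaks))
    return weak and strict

peaks = [0.5, 0.3, 0.2]        # peaks happen to sum to 1
give_peaks = [0.5, 0.3, 0.2]   # allocate everyone exactly their peak
other = [0.4, 0.4, 0.2]
# Giving everyone their peak dominates the alternative, and nothing dominates it.
assert pareto_dominates(give_peaks, other, peaks)
assert not pareto_dominates(other, give_peaks, peaks)
```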
Now what happens if the sum of the peaks is more than one? If we tried to allocate the task so that everybody gets their peak, we would be over-allocating: the shares would sum to more than one, which is not possible. So there must be at least one agent whose allocation is less than its peak. Let us ask this question: can there be some agent j whose allocation is more than p_j, more than its favorite share, if the social choice function f is Pareto efficient? The answer is no. We already know there exists at least one agent who gets less than its peak. If some agent j is given more than its peak, it dislikes that excess, and the two agents could instead transfer some of their shares: the under-allocated agent would be better off, because with a little more of the task it moves toward its peak, and since its preference is single-peaked it prefers that to its current allocation; similarly, agent j sheds some of its task and also moves toward its peak p_j, so it is happier too. Hence f cannot be Pareto efficient if both of these things happen, so there cannot be any agent who got more than its peak. We conclude that whenever the sum of the peaks is more than one, every agent's allocated share must be at most p_j; and by the symmetric argument, when the sum is less than one, every agent's share must be at least p_j.

The anonymity property that we discussed before applies here as well, but, as we saw with Pareto efficiency, its definition in this context is also slightly different. The overall idea is that if the agent preferences are permuted, that is, if you are permuting the
names of the agents, meaning that their preferences move with them, then anonymity says that the shares also get permuted accordingly. Look at a specific permuted agent: under the permutation σ, agent j becomes agent σ(j), and P_σ denotes the permuted profile, where the columns of the profile have been exchanged. So the single-peaked preference over the share of the task that agent j held in the original profile is now held by the permuted agent, while agent j holds something else; that is P_σ. Anonymity requires that the share of the permuted agent under the permuted profile be the same as whatever agent j was getting in the original preference profile, before the permutation: f_σ(j)(P_σ) = f_j(P).

It is best to take a look at an example to understand this point. Suppose there are three agents, and under the permutation agent 1 becomes 2, agent 2 becomes 3, and agent 3 becomes 1. The original peaks are 0.7 for player 1, 0.4 for player 2, and 0.3 for player 3. After the permutation, the new agent 1 is the old agent 3, so its peak is the old 0.3; the new agent 2 is the old agent 1, with peak 0.7; and the new agent 3 is the old agent 2, with its peak 0.4. Then what anonymity means is that if you look at player 1's share under the original preference profile (here we have written the right-hand side first and the left-hand side on the other side), that should be equal
to the share of its image under the permutation, agent 2, at the permuted profile: f_1(P) = f_2(P_σ). This should hold for all j; in fact we can write it down for every agent: f_2 of the original preference profile equals f_3 of the permuted profile, and f_3 of the original profile equals f_1 of the permuted profile. So this equality holds for all j.

Let us now look at some candidate social choice functions that are Pareto efficient and also anonymous. The first thing that comes to mind is serial dictatorship. We have already discussed dictatorship, but that was in the context of alternatives, where one specific alternative was chosen and there was no per-agent component; here we are actually dividing shares. Serial dictatorship fixes a predetermined sequence of the agents at the very beginning of the game. The agents come in that sequence, and each picks its most favorite share, its peak, as long as that much of the task is still available; the last agent is given the whole leftover share, for instance when the sum of the peaks is less than one.

Notice that in this discussion we are focusing only on deterministic mechanisms; we are not going to discuss randomized mechanisms. Randomized mechanisms are those where the outcome is random: instead of a deterministic share of the task, or deterministically picking one alternative, you pick outcomes with certain probabilities. We will not discuss them because those analyses, and the characterizations in those domains, are much more difficult than in the deterministic case. Primarily we will be discussing only deterministic mechanisms in this course; at the end we will
give some pointers to some randomized mechanisms. Serial dictatorship is a deterministic mechanism, a deterministic social choice function, and it satisfies Pareto efficiency. Why? Look at the sequence: the first agents each received their peak share, as long as shares were available, and they will never want to exchange anything, since anything more or less is worse for them; no reallocation can improve their position. The last agent has no choice. And the agents who come before the agent that received the leftover share will not want to change their allocations either. So it is Pareto efficient, no matter which order of the serial dictatorship you choose.

It is also strategy-proof, because no agent has any reason to misreport its peak: an agent early in the sequence gets its most favorite share anyway, and an agent that arrives after the task is exhausted, or when only the leftover is pending, has no choice and cannot change its share by misreporting. So the best response of every agent is to report its peak truthfully.

But what you can observe is that this mechanism is not anonymous, because there exists a predetermined sequence, and the shares are tied to the agents' names. If we rename the agents by a permutation, the predetermined sequence still follows the original numbering, while the agents and their peaks have been switched, so they now get different allocations. The allocation does not remain with the permuted agents in the required way, and that is why it is not anonymous. You can think
about it a little more carefully and convince yourself that it is not anonymous.

Another candidate is a proportional allocation. If the sum of the p_i is more or less than one, we can scale everybody's share up or down: in this proportional method we give each agent c times p_i, c times its peak. If the sum of the p_i is less than one, then c is more than one, meaning everybody gets more than their peak; if the sum of the p_i is greater than one, then c is less than one, meaning everybody gets less than their peak share. Now the question is whether this is anonymous, Pareto efficient, and strategy-proof.

You can easily argue that it is anonymous and Pareto efficient. Anonymity comes from the common constant: we are overloading or underloading all the agents in an equivalent way, so even if we permute their preferences along with their names, the allocation does not change; it goes to the permuted agents in the same shares. It is also Pareto efficient because everybody is underloaded, or everybody is overloaded, in the same direction: it never happens that someone gets more than their peak while someone else gets less, which is the only situation where a readjustment could make everybody better off.

But what about strategy-proofness? Suppose the peaks of the three players are 0.2, 0.3, and 0.1. Then c = 1/0.6, which is just one over the sum of the peaks. Player 1 gets c times its peak, which is 0.2/0.6 = 1/3 of the task. But this is more than 0.2 = 1/5, the share it likes the most, so player 1 would prefer a smaller share. Can it change its reported peak to get one? It is very easy to see that it can misrepresent its peak so that c gets altered accordingly: if player 1 reports 0.1 instead of 0.2, then c becomes 1/0.5 and player 1's share becomes c times 0.1 = 0.2, which is exactly its true peak. So by misreporting its peak, agent 1 gets a better share of the task than by reporting truthfully. Even though this mechanism is quite fair, with everybody underloaded or overloaded in the same way, it is not strategy-proof. In the next module we are going to look at a very interesting mechanism which is strategy-proof.
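The two candidate rules discussed above can be sketched in a few lines. This is an illustrative implementation with my own function names, under the simplifying assumption that agents report only their peaks; it reproduces the manipulation example for the proportional rule:

```python
def serial_dictatorship(peaks, order):
    """Agents claim their peaks in the fixed order while task remains;
    the final agent in the order absorbs whatever is left over."""
    shares = [0.0] * len(peaks)
    remaining = 1.0
    for k, i in enumerate(order):
        shares[i] = remaining if k == len(order) - 1 else min(peaks[i], remaining)
        remaining -= shares[i]
    return shares

def proportional(reported_peaks):
    """Scale every reported peak by c = 1 / sum(peaks) so shares sum to 1."""
    c = 1.0 / sum(reported_peaks)
    return [c * p for p in reported_peaks]

peaks = [0.2, 0.3, 0.1]
# Serial dictatorship: the fixed order determines who absorbs the leftover 0.5,
# which is exactly why the rule is not anonymous.
print(serial_dictatorship(peaks, order=[0, 1, 2]))
# Proportional rule: a truthful report gives player 1 the share 0.2/0.6 = 1/3,
# but reporting 0.1 instead yields 0.1/0.5 = 0.2, exactly its true peak.
honest = proportional(peaks)
lie = proportional([0.1, 0.3, 0.1])
print(honest[0], lie[0])
```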