Now that we know the VCG mechanism in some detail, along with its advantages and disadvantages, this is the right time to look at a generalized version of it. The VCG mechanism picks the efficient allocation, the allocation that maximizes the sum of the values of all the agents. One way of generalizing this is to incorporate weights and a function that translates the allocations. We have already seen this affine maximizer allocation rule when we gave examples of allocation rules in a previous module. What is different here is that, unlike the efficient allocation, we now have a weight for every agent i: w_i represents the weight corresponding to agent i, and these weights need not all be equal. We take the weighted sum of the valuations of all the agents at a specific allocation, and we translate that weighted sum by a function of the allocation. This function gives different values to different allocations: some allocations may be more preferred than others, and that is reflected by this function kappa. The whole expression is an affine sum, a weighted sum plus a translation factor, evaluated at a specific alternative or allocation a, and the affine maximizer picks the allocation that maximizes this affine sum, hence the name: a*(theta) is in argmax_a [ sum_i w_i v_i(a, theta_i) + kappa(a) ]. We also assume that the weights are non-negative, of course nobody is going to be given a negative weight, but they cannot all be simultaneously zero; kappa has no restriction on being positive or negative, it can take any real value. With this definition in hand we can easily relate it to the classical VCG mechanism, which finds the efficient allocation.
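The rule described above can be sketched in a few lines. This is a minimal illustration, not from the lecture: the allocation labels, valuations, weights, and kappa values below are hypothetical numbers chosen just to exercise the definition.

```python
def affine_maximizer(valuations, weights, kappa):
    """Return the allocation a maximizing sum_i w_i * v_i(a) + kappa(a).

    valuations: list of dicts, valuations[i][a] = agent i's value for allocation a
    weights:    list of non-negative floats, not all zero
    kappa:      dict mapping each allocation to a real number (any sign)
    """
    def affine_sum(a):
        return sum(w * v[a] for w, v in zip(weights, valuations)) + kappa[a]
    # max over the allocation set (the keys of kappa)
    return max(kappa, key=affine_sum)

# Two agents with unequal weights, and a kappa that favours allocation "B":
vals = [{"A": 4.0, "B": 1.0, "C": 0.0},
        {"A": 0.0, "B": 3.0, "C": 5.0}]
print(affine_maximizer(vals, [2.0, 1.0], {"A": 0.0, "B": 1.0, "C": 0.0}))
# affine sums: A -> 8, B -> 6, C -> 5, so "A" is chosen
```

Note that with unequal weights the high-weight agent's valuation dominates, which is exactly why the rule is a strict generalization of taking the plain sum.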
So there are some mechanisms that we already know of which are special cases of this affine maximizer class. How? If we keep the kappa function identically equal to 0, that is, it is 0 for all allocations, then that term simply disappears; and if in addition the weights are all equal to 1, or identical across agents, then the outcome we get is the efficient outcome, the allocation that the VCG mechanism picks. Similarly, we can have one distinguished agent, called the dictator, whose weight is exactly 1 while all other agents have weight 0. In that case we are just picking the most favourite allocation of that dictator, without looking at the valuations of any other agent. This is the usual dictatorial social choice function. Both of these cases are special cases of the affine maximizer, so the affine maximizer really is a general class, a superclass, of these mechanisms. What we can observe is that, because the w_i can now differ across agents, this mechanism is not anonymous anymore. The efficient allocation was anonymous: if the agents' names are permuted, the new agents get the permuted valuations, but the sum of the valuations remains the same, so just by permuting the agents you do not change the outcome. With the w_i, however, the weights are chosen for each specific agent, so when we permute the agents we are giving different weights to the different valuations, the allocation might change, and therefore the mechanism might not be anonymous anymore. So this is one important difference from the VCG mechanism.
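The two special cases above can be checked directly. The numbers here are again hypothetical; the point is only that particular choices of weights and kappa recover the efficient rule and the dictatorial rule.

```python
def affine_maximizer(valuations, weights, kappa):
    # argmax over allocations of sum_i w_i * v_i(a) + kappa(a)
    return max(kappa, key=lambda a: sum(w * v[a] for w, v in zip(weights, valuations)) + kappa[a])

vals = [{"A": 4.0, "B": 1.0, "C": 0.0},
        {"A": 0.0, "B": 3.0, "C": 5.0}]
zero_kappa = {"A": 0.0, "B": 0.0, "C": 0.0}

# kappa identically 0 and equal weights: the efficient (VCG) allocation.
assert affine_maximizer(vals, [1.0, 1.0], zero_kappa) == "C"   # 0 + 5 beats 4 + 0 and 1 + 3

# Agent 0 as dictator (w_0 = 1, every other weight 0): agent 0's favourite allocation.
assert affine_maximizer(vals, [1.0, 0.0], zero_kappa) == "A"   # agent 0's top choice
```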
So kappa could be a non-constant function (a constant function is of course a special case), and the interpretation, as we said, is that it gives different importance to different allocations: some allocations are given higher importance in the final social choice outcome and some are given lower importance. As we have already said, the affine maximizer is a superclass, at least the allocation part is a superclass, of the VCG mechanisms. We will now ask a characterization question, something very similar to the GS theorem, or to what we saw in restricted domains like single-peaked preferences or task allocation preferences, where we have seen characterization results of the form: if we need these properties, then this is the class of mechanisms. In the case of single-peaked preferences it was the median voter rule; for task allocation it was the uniform rule. We will see a very similar result in the context of mechanisms with transfers in this setup. But in order to state it, we first have to define one additional property, known as independence of non-influential agents. The purpose of this name will become evident when we define it. Suppose there exists some agent whose weight is zero, so it has no importance in making the final decision. There could still be a situation where there are ties, that is, the affine maximizer sum is the same for two different allocations. In that case, we should not make the choice, or break the ties, based on the preferences of this agent, whom we call the non-influential agent, because it has no influence in deciding the affine maximizer sum: its weight is zero.
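A tiny sketch can show why tie-breaking on a zero-weight agent's report is dangerous. Everything here is hypothetical: two allocations, a weight vector (1, 0), kappa identically 0, and a deliberately bad tie-break rule that picks the zero-weight agent's least-preferred tied allocation.

```python
def bad_rule(v0, v1_report):
    # Agent 0 has weight 1, agent 1 has weight 0, kappa = 0,
    # so the affine sum at each allocation is just agent 0's value.
    sums = {"A": v0["A"], "B": v0["B"]}
    tied = [a for a in sums if sums[a] == max(sums.values())]
    if len(tied) == 1:
        return tied[0]
    # Bad idea: break ties using the zero-weight agent's report.
    return min(tied, key=lambda a: v1_report[a])

v0 = {"A": 2.0, "B": 2.0}        # agent 0 ties A and B, so the affine sums tie
v1_true = {"A": 5.0, "B": 0.0}   # agent 1 (weight 0) truly prefers A

assert bad_rule(v0, v1_true) == "B"               # truthful report: outcome B, true utility 0
assert bad_rule(v0, {"A": 0.0, "B": 5.0}) == "A"  # misreport: outcome A, true utility 5
# Since agent 1's weight is 0, its payment is 0 either way, so the
# misreport strictly helps: this rule is not truthful.
```

This is exactly the manipulation that the independence of non-influential agents property rules out.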
So the property says: for a player whose weight is zero, the affine maximizer outcome must remain consistent, that is, no matter what the type of this non-influential agent is, whether it is theta_i or theta_i prime, the outcome of the affine maximizer will be the same. The tie-breaking is not based on the types of this particular non-influential agent. The problem is that if this property were not satisfied, then the affine maximizer mechanism could be manipulated, and here is how. Suppose there is a tie between two allocations A and B, and A is more preferred by agent i, the non-influential agent whose weight is zero. Now, in the preference profile where the rule outputs B, if you break the tie in some way that depends on agent i's report, say you look at the bottom-most alternative of player i, then this agent will always be incentivized to misrepresent its preferences: it will present itself as if it values A less, so that A becomes the chosen outcome, which it truly prefers more than the current outcome B. One can easily construct such an example, so if we break ties by looking at agent i's reported preferences or type, the affine maximizer might not be truthful anymore. That is a very subtle point which is sometimes not highlighted, so let me make it very clear: independence of non-influential agents (INA) demands that for those players whose weights are zero in the affine maximizer sum, the outcome of the affine maximizer must not depend on their types; it must be the same for all types of that agent. If we assume this property, then we can actually show certain good results.

We can show that the affine maximizer rule satisfying INA is implementable in dominant strategies. We have already discussed the tie-breaking: as long as the tie-breaking rule is consistent, the INA property will be satisfied. Then all we need to show is something very similar to the proof for the Groves mechanism, namely how we define the payment. As before, to say that this mechanism in the quasi-linear domain is implementable in dominant strategies means that there exists some payment which implements it in dominant strategies; the theorem does not exhibit that payment, so we will have to construct it. Here is how. The payment under the affine maximizer rule for agent i, at reported types theta_i and theta_{-i}, is a somewhat large expression; let us go over it term by term, starting with the innermost part. It is quite similar to the Groves case, where we took the sum of the valuations of all agents except agent i at the efficient allocation; here the efficient allocation is replaced by the affine maximizer allocation, and the plain sum of valuations is replaced by the weighted sum. Remember what this term was in the VCG mechanism: it was just the sum of the v_j's at a*, the efficient allocation, summed over all j not equal to i. We are doing the same thing here, except that we put in the corresponding weights and also add the function kappa; kappa is just a translation on the allocation, it has no relationship with the agents' valuations, so we can keep it as it is. So we sum the weighted valuations of all agents except i, add kappa, and evaluate this whole affine sum at the affine maximizer outcome. Then, similar to the Groves payment, we have a term h_i(theta_{-i}), which does not depend on agent i's type at all, so it can be an arbitrary function. Finally, the whole thing is divided by w_i, the weight associated with player i. Putting these together, the payment reads p_i(theta_i, theta_{-i}) = (1/w_i) [ h_i(theta_{-i}) - ( sum_{j != i} w_j v_j(a*(theta), theta_j) + kappa(a*(theta)) ) ]. We will soon see that the terms have been chosen in such a way that we can prove the mechanism is strategy-proof, or DSIC, and the proof technique is the same as for the Groves payment rule. This expression is for the case where the weight is positive; the weights can only be non-negative, and if w_i is 0 then the payment is defined to be 0. That is the proposed payment that will make this mechanism DSIC.

Now let us look at the case where w_i is positive, because the w_i = 0 situation is very simple: the outcome cannot possibly be changed by agent i, and in addition we have the INA property. If agent i has weight 0, its payment is 0 according to this payment rule, the affine maximizer outcome does not depend on its valuation at all, so it does not matter what it reports, and if there is a tie, that tie is also not resolved using this agent's type. Together, that agent has no role to play in either the decision or the payment, so the mechanism is trivially truthful in that case. So the interesting case is when the weight is positive. If the weight is positive, we can write down the expression for the utility of player i: it is the valuation at the affine maximizer outcome minus the payment under the affine maximizer payment rule that we have just defined.

Now we can unravel the expressions inside and reorganize the terms. What we find is a factor 1/w_i multiplying a sum which is nothing but the affine maximizer sum: if you reorganize appropriately, the whole expression within the parentheses is exactly that affine sum, evaluated at the chosen allocation. By definition, the affine maximizer maximizes this term, so it is greater than or equal to the same expression with any other allocation b in its place. In particular, pick the allocation that results when agent i misreports theta_i prime while the other agents report whatever they report; apply the affine maximizer to that profile, and the inequality is satisfied for the resulting allocation too. Writing that down, and noticing that h_i(theta_{-i}) is completely independent of player i's reported type, so it remains the same on both sides of the inequality, we can reorganize the right-hand side as the valuation when agent i misreports theta_i prime minus the payment calculated under that misreport. What we have actually shown is that when agent i reports its type truthfully it is weakly better off than when it reports untruthfully; therefore this is a DSIC mechanism. And we have already considered the case of weight equal to zero, where the agent has no role to play in the decision-making, so the mechanism is trivially truthful there.
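The payment rule and the DSIC argument above can be checked by brute force on a tiny example. This is a sketch with hypothetical numbers: the allocation set, weights, and kappa are illustrative, and h_i is taken to be the Clarke-style choice (the maximum of the affine sum without agent i), which is just one valid h_i(theta_{-i}) among many.

```python
from itertools import product

ALLOC = ["A", "B", "C"]
W = [2.0, 1.0]                               # positive weights for both agents
KAPPA = {"A": 0.0, "B": 1.0, "C": 0.0}

def affine_sum(vals, a, skip=None):
    # sum_{j != skip} w_j * v_j(a) + kappa(a)
    return KAPPA[a] + sum(W[j] * v[a] for j, v in enumerate(vals) if j != skip)

def allocation(vals):
    # deterministic tie-break by position in ALLOC, never by a zero-weight report
    return max(ALLOC, key=lambda a: affine_sum(vals, a))

def payment(vals, i):
    if W[i] == 0:
        return 0.0
    a_star = allocation(vals)
    h_i = max(affine_sum(vals, a, skip=i) for a in ALLOC)  # independent of v_i
    return (h_i - affine_sum(vals, a_star, skip=i)) / W[i]

def utility(true_v, reports, i):
    a = allocation(reports)
    return true_v[a] - payment(reports, i)

# Brute-force DSIC check over a small grid of discrete types.
types = [{"A": x, "B": y, "C": z} for x, y, z in product([0.0, 1.0, 3.0], repeat=3)]
for i in (0, 1):
    for true_v in types:
        for other in types:
            reports = [other, other]
            reports[i] = true_v
            truthful = utility(true_v, reports, i)
            for lie in types:
                deviated = list(reports)
                deviated[i] = lie
                assert utility(true_v, deviated, i) <= truthful + 1e-9
print("no profitable misreport found on this grid")
```

The check only covers a finite grid, of course; the general claim is exactly the inequality argument in the proof above.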
Now what we are going to do is similar to the Gibbard-Satterthwaite theorem. Recall that the Gibbard-Satterthwaite theorem says that, under its two properties, only the dictatorial class of mechanisms is truthful; truthfulness, of course, is common to all these settings. Here we are going to consider a very similar setting where the types are unrestricted, in the sense that no additional restriction is placed beyond the fact that the valuations are defined over this allocation space and can take any real value. In this unrestricted space of valuations in the quasi-linear domain, we can actually prove an analog of the GS theorem: we can characterize the class of DSIC mechanisms in this domain, and this result was given by Roberts in 1979. If the set of allocations has at least three elements, and the type space of every agent is unrestricted, then every onto and dominant strategy incentive compatible allocation rule must be an affine maximizer. So it says that over the whole space, whenever we have ontoness together with dominant strategy incentive compatibility, the unique class we can have is the class of affine maximizers. Why is this similar to the GS theorem? Because in the GS theorem we also assumed that the agents can have any preference ordering, an unrestricted preference ordering over all the alternatives, and only then do we get the dictatorial result. Similarly, in this case we assume that the types can be arbitrary, that the valuations can take any values. If we restrict the valuation class, say to submodular valuations when we are talking about object allocation, or to additive valuations, where the valuation of a bundle of objects is just the sum of the valuations of the individual items, then in those situations Roberts' theorem does not hold: clearly there are more mechanisms that are truthful than only the affine maximizer rule. That is essentially what domain restriction means in this setup of quasi-linear preferences. We will not do the proof in this class because the proofs are really long; if you are interested, you can take a look at a very short and simple paper called "Two simplified proofs for Roberts' theorem" by Lavi, Mu'alem, and Nisan, which contains two different simple proofs of Roberts' theorem.