that we know the results from convex analysis, we can now define and prove the celebrated result on single-object allocation called Myerson's lemma. For that, we first define monotonicity of an allocation rule. We say an allocation rule f is non-decreasing if, for every agent i and for every type profile t_{-i} of the other agents, whenever t_i > s_i we have f_i(t_i, t_{-i}) >= f_i(s_i, t_{-i}). In words: if an agent had a type s_i and increases it to t_i, the probability with which it gets the object can only weakly increase. Any allocation rule satisfying this property is called a non-decreasing, or monotone, allocation rule. Now let us come to the result by Myerson, which gives a characterization of all dominant strategy incentive compatible (DSIC) mechanisms in the context of allocating one single indivisible object. Assume that the type set of every agent i is the interval [0, B_i]; the B_i can differ across agents, but everyone's lowest value for the object is 0. The valuations are in product form: if the probability of allocation is f_i and the agent's value for the object is t_i, then its expected valuation under that rule is f_i times t_i. The theorem states: the allocation rule f, which maps a type profile into a (possibly randomized) allocation, together with the payment rule p = (p_1, ..., p_n), is DSIC if and only if two conditions hold. And what are those conditions?
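To make the monotonicity definition concrete, here is a small sketch (our own illustration, not from the lecture): fixing t_{-i}, we check on a grid that the allocation probability weakly increases in the reported type. The helper name `is_monotone` and the grid are our own choices.

```python
# Hypothetical helper (not from the lecture): check the non-decreasing
# property of an allocation rule f_i on a grid of types, holding the
# other agents' types fixed.

def is_monotone(f_i, others, grid):
    """True if f_i(t, others) weakly increases in t over the sorted grid."""
    vals = [f_i(t, others) for t in sorted(grid)]
    return all(a <= b for a, b in zip(vals, vals[1:]))

# Highest-bidder-wins is monotone: allocation jumps from 0 to 1 at the
# highest competing bid and never comes back down.
highest_wins = lambda t, others: 1.0 if t > max(others) else 0.0

print(is_monotone(highest_wins, [5.0], [i * 0.5 for i in range(21)]))  # True
```

The same check would fail for, say, a rule that gives the object to the lowest bidder, since its allocation probability drops as the reported type grows.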
The first condition is that f is non-decreasing, the property we defined at the beginning: if your valuation increases, your probability of getting the object weakly increases. The second condition is that the payments are given by a very specific integral formula:

p_i(t_i, t_{-i}) = p_i(0, t_{-i}) + t_i f_i(t_i, t_{-i}) - \int_0^{t_i} f_i(s_i, t_{-i}) ds_i.

Let us read it term by term. The left-hand side is the payment of agent i when the type profile is (t_i, t_{-i}). The first term on the right, p_i(0, t_{-i}), is a constant from the viewpoint of player i: it may depend on t_{-i}, but since the types of the other agents are held fixed in this context, it is like a constant for that player, and we will typically refer to it as the constant component of the payment function. The second term, t_i f_i(t_i, t_{-i}), is the agent's value: in product form, its type times the probability with which it gets the object. Then there is the negative integral term, which integrates the agent's allocation probability between 0 and t_i. We will see the implications of this integral formula when we discuss examples and prove the result; at this point, it is just a formula. The first remark we make is that this characterization is a little different from those we have seen before. The characterization of truthful mechanisms in Roberts' theorem, the affine-maximizer result by Roberts, gives a very specific functional form. Myerson's result instead gives a more implicit property: it says that if a mechanism is truthful, then a certain monotonicity condition must hold, and the payment must take a certain shape. So, let us now start proving this result. First, we prove the forward direction.
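As a hedged sketch of the integral formula (our own example, taking the constant component p_i(0, t_{-i}) = 0): for the deterministic highest-bidder-wins rule, the allocation is a 0/1 step function, so the integral has a closed form, and Myerson's formula recovers the familiar second-price payment.

```python
# Sketch (our example, not from the lecture): Myerson's payment formula
# applied to the highest-bidder-wins allocation rule, with the constant
# component p_i(0, t_-i) set to 0.

def allocation(bid, others):
    """f_i: probability that agent i gets the object (deterministic here)."""
    return 1.0 if bid > max(others) else 0.0

def myerson_payment(bid, others):
    """p_i(t_i, t_-i) = t_i f_i(t_i) - integral_0^{t_i} f_i(s) ds."""
    m = max(others)
    # integral of the 0/1 step function over [0, bid]: the step is 1 only
    # on (m, bid], which has length max(0, bid - m)
    integral = max(0.0, bid - m)
    return bid * allocation(bid, others) - integral

print(myerson_payment(10.0, [7.0, 4.0]))  # winner pays the second-highest bid: 7.0
print(myerson_payment(5.0, [7.0, 4.0]))   # loser pays 0.0
```

So for this particular monotone rule, the integral formula is exactly the second-price payment, which matches the running example of the lectures.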
That is, we are given that the mechanism (f, p) is DSIC, and we have to show that these two properties hold. How should we go about proving it? First, we write down the utility of agent i when its type is t_i and the types of the other players are t_{-i}:

u_i(t_i, t_{-i}) = t_i f_i(t_i, t_{-i}) - p_i(t_i, t_{-i}),

the valuation that this agent gets minus the payment it makes under that type profile; when the same agent has a different type, the expression changes accordingly. Now, since (f, p) is DSIC, misreporting the type as s_i cannot help, so the following inequality is satisfied:

u_i(t_i, t_{-i}) >= t_i f_i(s_i, t_{-i}) - p_i(s_i, t_{-i}).

We can now add and subtract one quantity, s_i f_i(s_i, t_{-i}), on the right-hand side and rearrange the terms. The added and subtracted terms cancel out, but written this way the right-hand side collects into two pieces: s_i f_i(s_i, t_{-i}) - p_i(s_i, t_{-i}) is nothing but the utility of the same agent when its true type is s_i, as we defined earlier, and the second piece is the difference between the two types, t_i - s_i, multiplied by f_i(s_i, t_{-i}). Hence

u_i(t_i, t_{-i}) >= u_i(s_i, t_{-i}) + (t_i - s_i) f_i(s_i, t_{-i}).    (1)

Now, if we go back to what we defined in the context of convex functions and subgradients, we can define a function g(t_i) to be this utility function; everywhere we simply suppress the t_{-i} that appears, since it is the other agents' types, which are held fixed.
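A quick numeric spot check of inequality (1) (our own construction, using the second-price example that runs through these lectures): with the highest competing bid m fixed, the truthful utility is g(t) = max(0, t - m) and the allocation is phi(t) = 1 if t > m else 0, and the subgradient inequality g(t) >= g(s) + phi(s)(t - s) holds for every pair of types.

```python
# Sketch (our second-price example, not a general proof): verify the
# rearranged DSIC condition, i.e. the subgradient inequality (1), on a
# grid of true types t and misreports s.

m = 5.0                                      # highest competing bid, held fixed
g = lambda t: max(0.0, t - m)                # truthful utility of the agent
phi = lambda t: 1.0 if t > m else 0.0        # allocation probability

for t in (0.0, 2.0, 5.0, 7.0, 10.0):
    for s in (0.0, 3.0, 6.0, 9.0):
        assert g(t) >= g(s) + phi(s) * (t - s) - 1e-12

print("subgradient inequality holds")
```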
So, we look at the utility of player i when its type is t_i and define that as the function g(t_i), and we let phi(t_i) = f_i(t_i, t_{-i}), the allocation, that is, the probability of player i getting the object. We deliberately use the same notation as in the convex analysis lemmas, because we want to show that g is a convex function and that phi is nothing but a subgradient of g. Using this substitution, we can write inequality (1) exactly in the form of the subgradient inequality: g(t_i) >= g(s_i) + phi(s_i)(t_i - s_i). So, all that we are left with is to show that g is a convex function; if we can show that, then we know that phi is a subgradient of g. We will make this conclusion after we have proved that g is convex; the rest of the pieces then fall into place. So, let us now see why g is convex. Pick two arbitrary points x_i and z_i in the type set of player i and a convex combination of them, denoted y_i = lambda x_i + (1 - lambda) z_i with lambda in [0, 1]. From DSIC we obtained inequality (1), which is just the DSIC condition rearranged in terms of g and phi. Applying it with true type x_i and misreport y_i gives g(x_i) >= g(y_i) + phi(y_i)(x_i - y_i), and applying it with true type z_i and misreport y_i gives the very similar g(z_i) >= g(y_i) + phi(y_i)(z_i - y_i). Now, what we are going to do is multiply the first inequality by lambda and the second by 1 - lambda and then add them together.
This does not change the direction of the inequalities, because both lambda and 1 - lambda are non-negative quantities. On the left-hand side we get lambda g(x_i) + (1 - lambda) g(z_i), and on the right-hand side we get g(y_i) plus phi(y_i) times lambda x_i + (1 - lambda) z_i - y_i. But y_i is exactly lambda x_i + (1 - lambda) z_i, so that last factor equals 0 and the phi term disappears. What we are left with is lambda g(x_i) + (1 - lambda) g(z_i) >= g(y_i), which is exactly the condition for convexity, and since we picked arbitrary x_i and z_i and an arbitrary convex combination of them, g is convex. So, we have proved that this function is convex. And what was g? g was nothing but the utility function. This is something we noticed in the very first example, the second-price auction: the utility was convex and its derivative, or subgradient in this case, happened to be the allocation function. That is not just a coincidence; it holds for any truthful mechanism that allocates one single indivisible object. So, now we are going to use the properties of a convex function and its subgradient, and we can apply Lemma 3. Lemma 3 said that a subgradient of a convex function is non-decreasing. Therefore, the function f_i(t_i, t_{-i}) is non-decreasing in t_i, and that is exactly what we wanted to prove: this is part one of Myerson's result. Lemma 4 gave an integral formula, g(t_i) = g(0) + \int_0^{t_i} phi(s_i) ds_i, and we are going to use that to find the payment expression. What can we do? We can just replace these quantities with their actual values.
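The convexity argument above can be spot-checked numerically (again on our second-price example, with the highest competing bid m held fixed): the truthful utility g(t) = max(0, t - m) satisfies the convex-combination inequality g(lambda x + (1 - lambda) z) <= lambda g(x) + (1 - lambda) g(z) for every choice of points and weights on a grid.

```python
# Sketch (our second-price example): numeric check of the convexity
# condition derived above for the truthful utility function.

def utility(t, m):
    """Truthful utility of an agent with value t facing highest other bid m."""
    return max(0.0, t - m)

m = 5.0
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    for x in (0.0, 3.0, 6.0, 9.0):
        for z in (1.0, 4.0, 8.0):
            y = lam * x + (1 - lam) * z
            assert utility(y, m) <= lam * utility(x, m) + (1 - lam) * utility(z, m) + 1e-12

print("convexity check passed")
```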
So, we have u_i(t_i, t_{-i}) = u_i(0, t_{-i}) + \int_0^{t_i} f_i(s_i, t_{-i}) ds_i, since the subgradient is the allocation function of agent i. We expand this out further: the utility is nothing but the expected value for the object minus the payment made, so the left-hand side is t_i f_i(t_i, t_{-i}) - p_i(t_i, t_{-i}). On the right, because the type is 0, the valuation part disappears and only the payment part, -p_i(0, t_{-i}), remains, plus the same integral expression as before. Now we just rearrange and get the payment formula that we wanted to prove:

p_i(t_i, t_{-i}) = p_i(0, t_{-i}) + t_i f_i(t_i, t_{-i}) - \int_0^{t_i} f_i(s_i, t_{-i}) ds_i.

So, this proves the forward direction: whenever we have a DSIC mechanism, it must satisfy these two properties. Now we have to show the reverse direction: if a mechanism satisfies the monotonicity property and the payment formula, then it must be DSIC. This proof is very easy to follow, and I like it because it is entirely by pictures. So, what is given in this case? We are given that the allocation function is non-decreasing, and we have the payment formula, which we know exactly. For simplicity of exposition, let us assume p_i(0, t_{-i}) is exactly 0; using a non-zero constant would only shift the origin and does not matter. With that term equal to 0, all that remains is t_i multiplied by the probability of allocation for that agent, minus the integral term. We also know that f_i is non-decreasing. So, let us plot the function f_i(t_i, t_{-i}), with t_i on the x-axis; since it is non-decreasing, it looks like a rising curve. Now, consider first the case where the agent is not misreporting its type, that is, it is reporting its true type.
So, let us say t_i is its true type. If it reports that, what is the payment it makes? The first term of the payment formula is t_i times f_i(t_i, t_{-i}), which is the area of the entire rectangle of width t_i and height f_i(t_i, t_{-i}). From it we subtract the integral from 0 to t_i of the allocation, which is the green area under the curve. The payment is therefore the yellow region: the rectangle minus the area under the curve. But the utility is t_i f_i(t_i, t_{-i}) minus this payment, so the rectangle term cancels out and what is left is the green part: the area under the curve up to t_i is the utility of this player when it reports its type truthfully. Now, let us look at two situations: one where the agent overstates its type and one where it understates it. Suppose it overstates, reporting some s_i greater than t_i. What changes is its allocation and its payment: the mechanism will treat s_i as the type, and because the payment can only be computed from what is reported, both occurrences of t_i in the payment formula are replaced by s_i. So, the first term becomes the larger rectangle s_i times f_i(s_i, t_{-i}), and the integral now runs from 0 to s_i, the entire area under the curve up to s_i; the payment is the difference between these two. But if we look at the actual utility of this player, it still has the true t_i in the valuation term; only the report changed to s_i, its own type does not change.
So, the valuation part will not change its t_i: it is t_i multiplied by f_i(s_i, t_{-i}), the true type times the larger allocation. From this the agent subtracts the payment, which is computed entirely with s_i in place of t_i, because the mechanism has no way to know the true type; it just calculates everything from the report. So, compared with truthful reporting, the agent does gain the extra strip of area under the curve between t_i and s_i, but its payment includes the full rectangle of width s_i - t_i and height f_i(s_i, t_{-i}), which is at least as large as that strip because f_i is non-decreasing. Eventually, its utility is the area under the curve up to s_i minus that rectangle, which is clearly worse than the truthful utility, the area under the curve up to t_i: the agent is subtracting out more than it gains. Fair enough; that is the case where it misreports its type to be higher than its true type. Now let us look at the case where it misreports to something smaller, s_i less than t_i. As before, the mechanism calculates its payment from the reported s_i, and in the actual utility only the allocation part is replaced, not the t_i, because that is still its true type. So, the valuation is now the smaller rectangle t_i times f_i(s_i, t_{-i}), and subtracting the payment, the agent is left with only a portion of the area under the curve: the part above height f_i(s_i, t_{-i}) between s_i and t_i has been chopped off, and the rest is its utility. But that is certainly smaller than the original truthful utility, the full area under the curve up to t_i, because the agent is losing that chopped-off part.
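The picture argument above can be replayed in numbers. Here is a hedged sketch (our own discretization, with an arbitrary monotone allocation and the constant component set to 0): under the Myerson payment rule, reporting the true type t_i is weakly better than any misreport s_i, whether higher or lower.

```python
# Sketch (our example): for a non-decreasing allocation f and the Myerson
# payment with p_i(0) = 0, truthful reporting weakly dominates misreports.

def f(s):
    """An arbitrary non-decreasing allocation rule on [0, 10]."""
    return min(1.0, 0.1 + 0.09 * s)

def integral_f(upper, steps=20000):
    """Midpoint-rule integral of f over [0, upper]."""
    if upper <= 0:
        return 0.0
    h = upper / steps
    return sum(f((k + 0.5) * h) for k in range(steps)) * h

def payment(report):
    """Myerson payment: report * f(report) - integral of f up to report."""
    return report * f(report) - integral_f(report)

def utility(true_type, report):
    """Valuation uses the true type; allocation and payment use the report."""
    return true_type * f(report) - payment(report)

t = 6.0
truthful = utility(t, t)
for s in (0.0, 2.0, 4.0, 8.0, 10.0):   # both understatements and overstatements
    assert utility(t, s) <= truthful + 1e-6

print("truthful reporting is weakly optimal")
```

Swapping in any other non-decreasing `f` leaves the assertions intact, which is exactly the content of the reverse direction: monotonicity plus the payment formula already force truthfulness.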
So, this also shows that misreporting your type to be something smaller is not beneficial for you. Using the payment formula and the fact that the allocation is monotone, we could show that the mechanism is essentially truthful, that is, dominant strategy incentive compatible. One can show this even more formally, but the proof by pictures is much more intuitive and will perhaps stay longer in your mind. One corollary we can draw is that an allocation rule in the single-object allocation setting is implementable in dominant strategies if and only if it is non-decreasing. Since we already know what the payment formula must be, all that we need is to ensure that the allocation rule is monotone, and with that, the allocation rule is dominant strategy implementable.