Let us now start looking at a very simple yet insightful domain of mechanism design with transfers: the allocation of a single indivisible object. We have already seen one mechanism that is truthful in this domain, namely the second-price auction. Here we will not only characterize which other mechanisms are truthful in the context of single indivisible object allocation; we will also try to maximize the revenue earned by the auctioneer while selling the object. The reason for studying this setting is that it yields a very rich set of elegant results, and we can actually go beyond the limits of deterministic mechanisms and look at some randomized mechanisms as well. The setup is the following. We modify the notation for the type set of each agent slightly so that it falls in line with the standard definitions for a single indivisible object: T_i, the equivalent of the capital Theta_i we have defined elsewhere, is now a subset of the real line. A specific type t_i of agent i denotes the value that this agent gets when she wins the object. It is as simple as that. In this context an allocation a is a vector of length n whose i-th component is the probability that agent i wins the object. This is the first time we are speaking about winning an object probabilistically, or deciding an allocation probabilistically; so far we have discussed only deterministic allocations, but because of the simplicity of this domain we can give some results that are probabilistic in nature as well. Mechanisms in which the final outcome, the allocation, is probabilistic are what we are going to call randomized mechanisms.
We did not discuss randomized mechanisms in the previous settings, such as the Gibbard–Satterthwaite setting, single-peaked preferences, or task allocation, because there it is much harder to obtain a characterization result. Because of the simplicity of this domain we can take a look at them, and we will show that there are some good results. Therefore, the set of allocations in this case is no longer the set A of pure allocations; rather, it is the set of probabilistic allocations Δ(A), the simplex in which each element is a vector of numbers between 0 and 1 that add up to 1. Each such vector is a probability distribution over the n agents: the object is allocated probabilistically to one of these agents, and we are assuming that the object is definitely allocated, so there is no outcome in which the object stays unsold. An allocation rule, as before, maps the Cartesian product of all the type sets to the set of allocations, which is now this simplex Δ(A). The valuation function is defined on such a (possibly randomized) allocation and the agent's own type, and it is just the expected value for that agent: if a_i is the probability with which agent i wins the object, and winning gives her a value of t_i, then the product a_i · t_i is her expected valuation. Similarly, if we apply the allocation rule f to the type reports of all the agents, we get a vector f(t) whose i-th component f_i(t) is the probability that agent i wins the object.
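As a concrete illustration of this setup, here is a minimal sketch in Python (not part of the lecture; all names are illustrative) of a randomized allocation as a point in the simplex and the resulting expected valuation a_i · t_i:

```python
def expected_value(allocation, types, i):
    """Expected valuation of agent i under a randomized allocation: a_i * t_i.

    allocation: list of winning probabilities (a point in the simplex over agents)
    types: list of values t_i each agent gets from winning the object
    Names are illustrative, not from the lecture.
    """
    # A valid randomized allocation lies in the simplex: entries in [0, 1], summing to 1.
    assert all(0.0 <= a <= 1.0 for a in allocation)
    assert abs(sum(allocation) - 1.0) < 1e-9
    return allocation[i] * types[i]

# A deterministic allocation is the special case where one entry is 1.
assert expected_value([0.0, 1.0, 0.0], [5.0, 8.0, 3.0], 1) == 8.0
# A genuinely randomized allocation: agent 0 wins with probability 0.5.
assert expected_value([0.5, 0.5], [4.0, 6.0], 0) == 2.0
```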
Now let us familiarize ourselves with this notation on the much-discussed example of the second-price auction, the Vickrey auction. The types here are the values v_i of the agents. When we look at agent i, define t_{-i}^(2) to be the maximum value among all the agents other than i; the superscript (2) indicates that this is the second-highest bid whenever i is the highest bidder. Agent i wins if her valuation v_i is larger than t_{-i}^(2), that is, if her value exceeds the maximum value of all the other agents and is therefore the maximum in the whole population of bidders; she loses if her valuation is smaller, and we can define a tie-breaking rule for the case of equality. As for the payment, we know that under the second-price auction the winner pays the second-highest bid: if agent i wins, she pays t_{-i}^(2), and in all other cases, where v_i is not the highest bid, her payment is 0 anyway. So we can write the utility of agent i as follows: if v_i ≤ t_{-i}^(2) she gets a utility of 0, and if v_i > t_{-i}^(2) her net utility is the difference v_i − t_{-i}^(2). If we plot this quantity, with the valuation v_i of the agent on the x-axis and the utility on the y-axis, we see that the utility has a very interesting shape: up to the point t_{-i}^(2) the utility remains 0, and after that it increases proportionally with v_i, and since the slope is exactly 1 the curve rises at 45 degrees.
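The Vickrey outcome for a single agent can be sketched in code as follows; this is illustrative, not from the lecture, and breaking ties against agent i is just one arbitrary choice of tie-breaking rule:

```python
def second_price_outcome(values, i):
    """Allocation, payment, and utility of agent i under the Vickrey auction.

    values: reported valuations v_j of all agents (illustrative sketch).
    t_minus_i_2 is max_{j != i} v_j, the highest competing bid.
    """
    t_minus_i_2 = max(v for j, v in enumerate(values) if j != i)
    wins = values[i] > t_minus_i_2        # ties broken against i, for simplicity
    payment = t_minus_i_2 if wins else 0.0  # winner pays the second-highest bid
    utility = values[i] - payment if wins else 0.0
    return wins, payment, utility

# Agent 0 has the highest value, wins, and pays the second-highest bid 7.0.
assert second_price_outcome([10.0, 7.0, 4.0], 0) == (True, 7.0, 3.0)
# Agent 1 loses, pays nothing, and gets utility 0.
assert second_price_outcome([10.0, 7.0, 4.0], 1) == (False, 0.0, 0.0)
```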
If we do another plot for the allocation of this agent, we see that up to t_{-i}^(2) the allocation is 0, and whenever the valuation is larger than that the allocation is exactly 1. We leave the point of exact equality open for the moment: equality means that at least one other agent has the same valuation as this agent, so one can randomly pick any of these agents as the winner, and any probability could be a valid allocation for this agent at that point. We can make a few observations from these two curves. First, the utility curve is convex: its derivative is 0 if v_i is less than or equal to the threshold and 1 if the valuation is larger than it, and the curve is not differentiable at the threshold point t_{-i}^(2). Second, wherever the utility function is differentiable, its derivative coincides with the allocation. These two observations are important: much of our understanding of single object allocation will depend on the properties of convex functions, and therefore in the rest of this module we are going to spend some time discussing some known results from convex analysis. For most of these results we will not provide a proof, because they are clearly out of the scope of this course; if you are interested in the proofs you can take a look at a standard text such as Rockafellar, but wherever necessary we will provide some intuition for why the result is true. The first fact from convex analysis is that convex functions are continuous in the interior of their domain.
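The observation that the derivative of the Vickrey utility coincides with the allocation away from the threshold can be checked numerically; a hypothetical sketch, with the threshold t_{-i}^(2) fixed at 3.0 for illustration:

```python
def vickrey_utility(v, threshold):
    """Utility curve of the lecture: 0 below the threshold, v - threshold above it."""
    return max(0.0, v - threshold)

def vickrey_allocation(v, threshold):
    """Allocation curve: the agent wins (probability 1) iff v exceeds the threshold."""
    return 1.0 if v > threshold else 0.0

# Away from the kink at threshold = 3.0, the numerical derivative of the
# utility matches the allocation (0 below, 1 above).
h = 1e-6
for v in [1.0, 2.0, 4.0, 5.0]:
    du = (vickrey_utility(v + h, 3.0) - vickrey_utility(v - h, 3.0)) / (2 * h)
    assert abs(du - vickrey_allocation(v, 3.0)) < 1e-6
```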
Whenever we are talking about convex functions, there cannot be jumps in the interior of the domain. If a function defined on some interval had a jump inside the interval, we could draw a chord joining two points of its graph that lies below the curve at some point, which contradicts convexity. So whenever there is a jump in the interior, we are certain the function is not convex. This property does not hold at the endpoints, however: at the last point of the domain there can be a jump and the function can still be convex, for instance a function that runs all the way to the endpoint and then jumps up at that final point. So for convex functions it is known that they are continuous in the interior of their domain. The second fact about convex functions is that they are differentiable almost everywhere. What does the term "almost everywhere" mean? It means that the points where the function is not differentiable form a countable set. We saw in the earlier example that the function can fail to be differentiable at exactly one point, and there may be several such points, but you cannot have, for instance, a whole interval of points of non-differentiability. To be more precise, the set of non-differentiable points of a convex function is a set of measure zero: compared to an interval, which has uncountably many points, a finite or countable set of points has measure zero, which is the more formal way of stating it. But let us not worry about those formalities; the statement says that except for a countable set of points, the function is differentiable everywhere.
Now, I am sure the definition of a convex function is known to all of you: a function g is convex if for every pair of points x and y in its domain I, and for any λ in the interval [0, 1], we have g(λx + (1 − λ)y) ≤ λ g(x) + (1 − λ) g(y). In words, whenever we look at a convex combination of any two points: take two points x and y on the x-axis, look at the corresponding values g(x) and g(y) of the function, and join them by a chord. Then the point λ g(x) + (1 − λ) g(y) on the chord always lies above the value of the function at the point λx + (1 − λ)y. The chord connecting any two points of the graph lies above the curve; that is the meaning of a convex function. If the function is differentiable at a point inside the domain, we denote its derivative by g'(x), which is quite standard. But we are going to define something that extends this idea of the gradient, or derivative, and that extension is known as the subgradient of the function. How is a subgradient defined? It is defined formally below, but before going into the formal definition let us look at what it is actually saying. Fix a specific point x, move to a different point z in the domain, and compare the value of the function at z with a linear approximation based at x.
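The chord inequality above can be checked numerically on a grid of sample points; a sketch (a sanity check on samples, not a proof of convexity; names are illustrative):

```python
def is_convex_on_grid(g, xs, lambdas):
    """Check g(l*x + (1-l)*y) <= l*g(x) + (1-l)*g(y) on sampled points.

    Returns False as soon as some chord dips below the curve at a sampled
    convex combination; True means no violation was found on the grid.
    """
    for x in xs:
        for y in xs:
            for l in lambdas:
                z = l * x + (1 - l) * y
                if g(z) > l * g(x) + (1 - l) * g(y) + 1e-12:
                    return False
    return True

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
lams = [0.25, 0.5, 0.75]
assert is_convex_on_grid(lambda x: x * x, grid, lams)      # x^2 is convex
assert not is_convex_on_grid(lambda x: x ** 3, grid, lams)  # x^3 is not, on [-1, 1]
```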
So, suppose I have a point x and I want to bound the value of the function at a different point z. I take the difference z − x, multiply it by a constant x*, and add that on top of g(x). Here g(x) is the height of the graph at x, x* is the slope of a line through that point, and the quantity g(x) + x*(z − x) is the value of that line at z. If this quantity is always below the actual value of the function at z, that is, if g(z) ≥ g(x) + x*(z − x) for all z in the domain, then we call x* a subgradient of g at the point x. I have only shown the positive direction in the picture, but the inequality must hold for all z, so it must hold in the other direction as well. In some sense you are putting down a line through the point (x, g(x)) whose slope is chosen so that the entire convex curve lies above it. Fair enough; so what does this give us? Even though we cannot define a gradient at a point where the function is not differentiable, for instance the kink in our previous example of a convex function, we can definitely define something like a subgradient there. Naturally, the subgradient need not be unique; in particular it is not unique at the points of non-differentiability, where one can have a whole bunch of subgradients. One x* is a subgradient because it keeps the entire curve above its line, but you can think of another slope that also keeps the curve above it, so there can actually be many possible subgradients.
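The subgradient inequality can be written directly as a check over sample points; an illustrative sketch using g(x) = |x|, whose kink at 0 admits many subgradients:

```python
def is_subgradient(g, x, x_star, zs):
    """Check the subgradient inequality g(z) >= g(x) + x_star * (z - x)
    at the sampled points zs (a numerical illustration, not a proof)."""
    return all(g(z) >= g(x) + x_star * (z - x) - 1e-12 for z in zs)

zs = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
# At the kink of |x| at x = 0, every slope in [-1, 1] is a subgradient...
assert is_subgradient(abs, 0.0, 0.5, zs)
assert is_subgradient(abs, 0.0, -1.0, zs)
# ...but a slope outside that interval cuts through the curve.
assert not is_subgradient(abs, 0.0, 2.0, zs)
```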
One observation we would like to make concerns the points where the function is actually differentiable, say a point like this red-colored x. Can we find multiple subgradients there? The answer is no, because if we try to use a slope that is not exactly equal to the gradient at that point, we will end up with some points of the curve lying below the line, so the subgradient inequality will be violated. The empirical observation, then, is that whenever the convex function is differentiable at a certain point, the subgradient equals the gradient, and in that case the subgradient is unique. We now state this formally; these are standard results, and you can refer to any standard text on convex analysis. The first result (Lemma 1): suppose g is a convex function, x is in the interior of I, and g is differentiable at x; then g'(x) is the unique subgradient of g at x. We have already given the intuition, and it is not very difficult to show formally: all you need to do is apply the subgradient inequality to points above x, say x + ε, and to points below x, say x − ε', and then use the fundamental definition of the derivative. One can show that any subgradient x* is sandwiched between the left derivative and the right derivative, and at a point of differentiability the left and right derivatives are the same, so x* must equal the derivative. That is the proof of this lemma. Lemma 2 again concerns a convex function, and it says that such a function always has a subgradient at all points of I.
This includes the endpoints: the earlier result was only for interior points, but this result says the subgradient exists at all points of I. The next fact we will be using says that if you collect all the points where g is differentiable and denote that set by I', then the set of points where it is non-differentiable, I \ I', has measure zero, and in addition the set of subgradients at a point forms a convex set. So if we look at a bunch of points: at the points where the function is differentiable, which as we have seen is almost everywhere, the derivative is exactly equal to the unique subgradient; at the points where it is not differentiable, which form a set of measure zero, there may be many subgradients, but they form a convex set. It is an interval within which all the subgradients live. You can think of the previous example: the subgradients range from the slope on the left of the kink, which is the left derivative at that point, say a negative slope, to the slope on the right, which is the right derivative, say a positive one, and every slope between the left derivative and the right derivative is a subgradient of the function at that point. That is exactly what we state as Fact 4: if we look at the left derivative g'_-(x) and the right derivative g'_+(x) of the function g at a point x, then the set of subgradients at a non-differentiable point x lies within the interval from g'_-(x) to g'_+(x). We will use the shorthand notation ∂g(x) to denote the set of all subgradients of g at the point x.
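Fact 4 can be illustrated numerically: one-sided difference quotients approximate the left and right derivatives that bound the subgradient interval at a kink. A sketch using the Vickrey-style utility max(0, v − c), with c = 3.0 as an illustrative threshold:

```python
def one_sided_derivatives(g, x, h=1e-7):
    """Numerical left and right derivatives of g at x; for a convex g the
    subgradients at x lie in the interval [left, right] (Fact 4)."""
    left = (g(x) - g(x - h)) / h
    right = (g(x + h) - g(x)) / h
    return left, right

g = lambda v: max(0.0, v - 3.0)   # utility curve with a kink at the threshold 3.0
left, right = one_sided_derivatives(g, 3.0)
# At the kink: left derivative 0, right derivative 1, so ∂g(3.0) = [0, 1].
assert abs(left - 0.0) < 1e-6 and abs(right - 1.0) < 1e-6
```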
By Lemma 1 we already know that at the points where the function is differentiable this set ∂g(x) is a singleton; at all other points it is non-empty. The next lemma, which will be very useful for us, is a monotonicity result. We know that if g is a convex function that is differentiable everywhere, then its derivative is monotonically non-decreasing; this comes from the fundamental property of convex functions. We can extend that idea to subgradients. Define a function φ by selecting one subgradient at every point: φ(z) ∈ ∂g(z) for all z in I. At the points of I', where g is differentiable, we already know this set is a singleton, so there is no choice to make; at the points of I \ I', where it is non-differentiable, we just arbitrarily pick one subgradient from the set and define the function that way. Even in that situation, the lemma says, if x > y then φ(x) ≥ φ(y): the selection φ is non-decreasing. This is a monotonicity condition on the subgradient function, and you can visualize it. In our example, all the points on the left-hand side of the kink have subgradient −1, say, and on the right the gradient is +1. As we move from left to right, we see that the function φ starts at −1.
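The monotone selection can be made concrete for g(x) = |x|, whose subdifferential at the kink is the interval [−1, 1]; a sketch (the value chosen at the kink is arbitrary, as the lecture says):

```python
def phi(z, at_kink=0.3):
    """A selection from the subdifferential of g(x) = |x|:
    phi(z) = -1 for z < 0, +1 for z > 0, and an arbitrary value in
    [-1, 1] at the kink z = 0 (0.3 is just one illustrative choice)."""
    if z < 0:
        return -1.0
    if z > 0:
        return 1.0
    return at_kink

# Whatever value in [-1, 1] we pick at the kink, the selection is non-decreasing.
zs = [-2.0, -1.0, 0.0, 1.0, 2.0]
vals = [phi(z) for z in zs]
assert vals == [-1.0, -1.0, 0.3, 1.0, 1.0]
assert all(a <= b for a, b in zip(vals, vals[1:]))
```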
So this is the φ function: it starts at −1, at the kink you can arbitrarily pick any number between −1 and 1, and from then on it is +1 for the rest of the points. The change-over happens at that point x, and from then on the function only increases; φ is always non-decreasing, and that is the conclusion for a convex function. The last result we state here is an integral formula involving the subgradients. For any pair of points x and y in the domain of the convex function, we can find the value of the function at x starting from the value at y: g(x) = g(y) + the integral from y to x of the subgradient selection φ. What is it actually saying? It is similar to the fundamental theorem of calculus with the derivative: the subgradient is the gradient except at the few points where the function is not differentiable, but we already know that those points form a countable set of measure zero, so the integral simply ignores them. Of course this needs to be proved rigorously, but the point is that the integral over a set of measure zero is zero, and at all the remaining points we are just integrating the derivative. Therefore, when we want to find g(x) starting from g(y), we can use this integral formula to get the value. This particular lemma will be very useful when we characterize the payments of single object auction mechanisms in the next modules.
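The integral formula can be verified numerically for g(x) = |x| with the selection φ from the kink example; a sketch using a simple midpoint Riemann sum (illustrative, not from the lecture):

```python
def integrate(phi, y, x, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of phi from y to x."""
    h = (x - y) / n
    return sum(phi(y + (k + 0.5) * h) for k in range(n)) * h

g = abs
phi = lambda z: -1.0 if z < 0 else 1.0   # one subgradient selection for |x|

# g(x) = g(y) + integral of phi from y to x; the single kink at 0 has
# measure zero and does not affect the integral.
y, x = -1.0, 2.0
approx = g(y) + integrate(phi, y, x)
assert abs(approx - g(x)) < 1e-3          # recovers g(2) = 2 from g(-1) = 1
```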