Let us now start discussing one of the most popular areas of mechanism design, which is called mechanism design with transfers. This is a domain restriction, as in the two cases we discussed previously, but this specific domain restriction gives us the opportunity to use something called transferable utility. So, whenever we say transfers we mean that utility can be transferred from one player to another, and the medium through which utility is transferred is what is known as money. This is the first time we are going to use money to transfer utilities between players. So, let us look at what the social choice function is and how we can design its different components. The first thing is that the social choice function F maps the type profile, the Cartesian product of all the type sets, into what is known as script X, the space of all outcomes. We have deliberately used the uppercase F to denote the social choice function because we will be using the lowercase f for a different purpose. So, what is this space of all outcomes? An outcome x has two components: the first is the allocation, which we will generally denote by lowercase a, and the second is the payment vector, which we will denote by pi and which gives a payment for each of the players. We will shortly discuss examples which will give you an idea of what these payments mean. Each payment is just a real number, so you can imagine that each agent is asked to pay a certain real-valued amount; that is exactly what these transfers are, and that is the meaning of money in this context. So, what are examples of allocations? The allocation could be something like a public decision. Let us say a municipal corporation is trying to decide whether to build a bridge, a park or a museum.
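As a minimal sketch of the setup just described, an outcome x = (a, pi) bundles an allocation with one payment per player. The names below are illustrative, not from the lecture:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """An outcome x = (a, pi): an allocation plus one payment per player."""
    allocation: object     # lowercase a: an element of the allocation set A
    payments: List[float]  # pi = (pi_1, ..., pi_n), one real number per player

# A public decision ("park") with payments charged to three players:
x = Outcome(allocation="park", payments=[10.0, 0.0, 5.0])
assert len(x.payments) == 3
```

A social choice function F would then map a type profile to such an `Outcome`.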
In that case we are going to denote the set of all possible allocations in a specific mechanism design setup by capital A, and every lowercase a lives in that set; in this example A could be {bridge, park, museum}. Similarly, the allocation could be that of a divisible good. For instance, spectrum is a divisible good and can be shared among different players. In that case, we define a as a vector (a1, ..., an) where each ai is a real number in the interval [0, 1] and together they sum to 1. That means the entire spectrum is of unit size and each agent gets some fraction of that spectrum; that is what an allocation means in this case. Or you can think about allocating a single indivisible object. So far we have discussed only divisible allocations, as in the previous example, but suppose the object is indivisible: either you give the whole object to an agent or you cannot give that object to the agent at all. In that case, the vector remains the same; the only difference is that each ai can take integral values, either 0 or 1, and as usual the sum of the ai's should be less than or equal to 1, which keeps open the possibility that the item is not allocated at all. The fourth example is that you can partition multiple indivisible objects. Instead of a single indivisible object, there could be multiple indivisible objects; let us say S is the set of all such objects, maybe a mobile phone, a laptop and so on, all of them indivisible. Now you are planning to partition these indivisible objects among n players. What will be the set of all alternatives in that case? Each alternative will be a partition. So, what is a partition? Let us look at the sets A1 to An.
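The two vector-valued allocation examples above can be checked mechanically. Here is a hedged sketch (function names are mine, not the lecture's) of what "valid allocation" means in each case:

```python
def is_valid_divisible(a, tol=1e-9):
    """Divisible good: fractions a_i in [0, 1] that together sum to 1."""
    return all(0.0 <= ai <= 1.0 for ai in a) and abs(sum(a) - 1.0) <= tol

def is_valid_single_indivisible(a):
    """Single indivisible object: each a_i in {0, 1}, sum <= 1
    (so the item may also remain unallocated)."""
    return all(ai in (0, 1) for ai in a) and sum(a) <= 1

# Spectrum shared among three players:
assert is_valid_divisible([0.5, 0.3, 0.2])
# Whole object to player 2, or to nobody, but never to two players:
assert is_valid_single_indivisible([0, 1, 0])
assert is_valid_single_indivisible([0, 0, 0])
assert not is_valid_single_indivisible([1, 1, 0])
```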
Here Ai is the subset of these indivisible objects that goes to player i. Each of these sets is disjoint from the others, so no two of them have any common item: if agent i gets an item, then agent j cannot get that same item. The collection of all such partitions of the whole set constitutes the set of all possible allocations in this context. So, you can already see that in different examples or different contexts, there are different kinds of allocation sets and different possible allocations. Now, we are going to denote the type of an agent i by lowercase theta_i, which belongs to capital Theta_i. This is a little abstract in the current setup, but we are going to discuss what this private information, the type of an agent, means. We are going to assume that this type is private information, and that is what brings us to the idea of truthfulness or incentive compatibility that we have discussed. Now, how are we going to define the benefit of an agent? Since we are trying to design incentives for the agents, we have to model the benefit component: if an agent has a specific type and a specific allocation decision has been taken, how much satisfaction or happiness does this agent get? That is captured using something called the valuation function. So, what is this function? Pick a specific allocation, as in the previous examples, and suppose the agent has a specific type. The function vi takes as input the allocation a and the type theta_i of that player, and vi(a, theta_i) is a real number which we call the valuation of that agent for the allocation a when its type is theta_i.
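The disjointness condition on the sets A1 to An can be sketched as a small check; this is an illustrative helper under my naming, not part of the lecture:

```python
def is_partition(sets, universe):
    """Check that the sets A_1..A_n are pairwise disjoint
    and together cover the whole set of objects."""
    union = set()
    for s in sets:
        if union & s:      # an item appearing twice violates disjointness
            return False
        union |= s
    return union == universe

objects = {"phone", "laptop", "tablet"}
# Phone to player 1, laptop and tablet to player 2: a valid partition.
assert is_partition([{"phone"}, {"laptop", "tablet"}], objects)
# The laptop cannot go to two players at once:
assert not is_partition([{"phone", "laptop"}, {"laptop", "tablet"}], objects)
```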
This number denotes how much satisfaction or benefit the agent gets when this particular allocation is chosen and its type is theta_i. As we discussed at the beginning of mechanism design, this valuation depends only on the type of that agent alone and not on the types of the other agents. This is exactly what is known as independent private values, and we will be discussing only independent private values in this course. Let us look at an example, to give you a feeling for what these types mean. Maybe agent i is an environment saver; let us say it cares about the environment a lot. That particular information is its private information, captured by theta_i = environment; that is the type of this player. If the allocations can be either to build a bridge or to build a park, then possibly its valuation for the bridge is less than its valuation when the park is chosen, because the park is more environment friendly and is therefore a better choice for this player when its type is environment saver. But if its type changes to, let us say, business friendly or transportation friendly, then depending on its type, maybe now building a bridge is better than building a park. That is one way of capturing it. Depending on the context, on how we are modeling a specific mechanism design problem in this mechanism design with transfers setup, the type will change, and we will define it accordingly. Let us now come to the payment part. So, what is a payment? We have defined pi_i, the payment charged to player i, which lies on the real line, and the payment vector is (pi_1, ..., pi_n).
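The bridge-versus-park example can be written out with hypothetical numbers; the valuations below are made up purely to illustrate that the same agent ranks the alternatives differently under different types:

```python
# Hypothetical valuations (not from the lecture): the same agent values
# the alternatives differently depending on its private type theta_i.
valuation = {
    ("bridge", "environment"): 2.0,
    ("park",   "environment"): 8.0,  # the park is more environment friendly
    ("bridge", "business"):    9.0,  # a bridge helps transportation/business
    ("park",   "business"):    3.0,
}

def v_i(a, theta_i):
    """v_i(a, theta_i): the agent's valuation for allocation a given its type."""
    return valuation[(a, theta_i)]

assert v_i("park", "environment") > v_i("bridge", "environment")
assert v_i("bridge", "business") > v_i("park", "business")
```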
Now, with this payment as well as the allocation and the player's type, we can define the complete utility of player i. Remember that the allocation and the payment together constitute the total outcome: (a, pi) is the same as the x that we defined in the beginning, so x belongs to script X, and our social choice function outputs this outcome, which is essentially the allocation and the payment decision. The utility of player i when this outcome is chosen and its type is theta_i is given by combining the two functions: ui(x, theta_i) = vi(a, theta_i) - pi_i. The valuation function we have already defined depends only on the first component of the outcome, the allocation component, and the second part is just the payment itself, the second component of the outcome. You can already begin to see that this utility function has a form which we are going to call quasi-linear. Quasi-linear means that the utility could potentially be non-linear in the allocation component, but it is linear in the payment component, and that is where the name quasi-linear comes from. Notice that even in mechanisms with money, the utility does not necessarily have to be quasi-linear, but quasi-linearity is very natural. You can think of it as buying a certain object, say a painting or a piece of electronic equipment, for which you have a certain valuation; the payment you make is pi_i, and the difference is your net utility. This quasi-linear modeling of the utility fits a lot of examples, and that is why the quasi-linear model is quite popular.
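A minimal sketch of the quasi-linear form ui((a, pi), theta_i) = vi(a, theta_i) - pi_i, using a deliberately non-linear hypothetical valuation to show where the "quasi" comes from:

```python
def quasi_linear_utility(v, a, theta_i, pi_i):
    """u_i((a, pi), theta_i) = v(a, theta_i) - pi_i:
    possibly non-linear in the allocation a, but linear in the payment pi_i."""
    return v(a, theta_i) - pi_i

# Hypothetical valuation: quadratic (non-linear) in a divisible share a_i,
# scaled by a numeric type parameter theta_i.
def v_quadratic(a_i, theta_i):
    return theta_i * a_i ** 2

# Receiving a 0.5 share at a payment of 1.0 when the type parameter is 10:
u = quasi_linear_utility(v_quadratic, 0.5, 10.0, 1.0)
assert u == 10.0 * 0.25 - 1.0  # valuation 2.5 minus payment 1.0 = 1.5
```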
So, we are going to talk about mechanism design with transfers, where the utilities are of this quasi-linear form. Now, why is this quasi-linear payoff a domain restriction? In the case of single-peaked preferences, and also in the task allocation domain with the uniform rule, we have talked about why those settings restrict the domain; now we have to argue why this one does too. For that, consider two outcomes x1 = (a, pi) and x2 = (a, pi'). You see that the allocation component does not change between these two outcomes; all that changes is the payment component. Focus on player i and suppose that pi_i' is strictly less than pi_i. Then, no matter what quasi-linear utility you choose, that is, no matter which arbitrary valuation function you pick, as soon as you plug in the corresponding outcomes you get ui(x1, theta_i) = vi(a, theta_i) - pi_i and ui(x2, theta_i) = vi(a, theta_i) - pi_i'. Because the allocation is the same in both outcomes, the first component of the utility remains the same in both cases, and because pi_i' is strictly less than pi_i, the inequality ui(x2, theta_i) > ui(x1, theta_i) always holds. So, player i will always prefer x2 over x1, and there is no instance, no choice of valuation function, under which x1 is preferred to x2.
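The argument above can be exercised numerically: with the allocation held fixed, the valuation term cancels, so x2 (the lower payment) wins for every valuation we try. The sampling of "arbitrary" valuations below is of course only a sketch of the universally quantified claim:

```python
import random

def u(v, a, theta, pi):
    """Quasi-linear utility: u_i((a, pi), theta_i) = v(a, theta_i) - pi_i."""
    return v(a, theta) - pi

# Two outcomes sharing the same allocation, with pi_prime < pi:
a, pi, pi_prime = "bridge", 5.0, 3.0

# Try many arbitrary valuation levels: x2 = (a, pi_prime) always wins,
# so no quasi-linear preference can place x1 = (a, pi) above x2.
random.seed(0)
for _ in range(1000):
    c = random.uniform(-100.0, 100.0)
    v = lambda a, theta, c=c: c   # the valuation value is arbitrary,
                                  # but identical for x1 and x2
    assert u(v, a, "theta", pi_prime) > u(v, a, "theta", pi)
```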
So, this is a domain restriction, because in the unrestricted domain we assumed that, given any two alternatives, one can always come up with preference orderings in which x1 is above x2 and also ones in which x2 is above x1, and that is not the case in this domain: no matter how you design the utilities, x1 will always be worse than x2 for player i. That is certainly a domain restriction. Even though this domain restriction is pretty simple and the point is subtle, it actually opens up the opportunity for a lot of social choice functions to satisfy interesting properties. And that is what we are going to explore in the next few lectures, in fact, till the end of this course.