So, last time we started talking about games with communication, and what I want to do eventually is move to a very general model of games with communication. But before I do that, let us recollect the two types of issues that we had to discuss. In the previous class we discussed one of them, the issue of moral hazard. Moral hazard referred to a situation with two players: a principal, who is like a manager or a firm owner, and an agent, who is the worker. The manager wants to get work done from the worker; he basically wants to delegate a task to the worker. The problem for the manager was to come up with a contract that would make the worker work for him, but what we saw was that the agent need not, in general, choose an action that is aligned with the interests of the manager. That is essentially the issue of moral hazard. I will just quickly go over it again. In the simplest case, suppose there is an output X that the manager is interested in, and this output is the sum of two things: the effort A made by the agent, and noise ε. So X = A + ε: the actual output depends both on the effort of the agent and on what I described last time as his luck. The principal derives a utility from the output minus whatever he pays to the agent; call that payment ψ. So ψ is the payment made by the principal to the agent, and what the principal wants to do is come up with a payment so as to maximize this utility. Now, this payment had some constraints.
There were some basic constraints on the nature of communication between the agent and the principal. In particular, what we had seen was that the principal cannot observe the actual action of the agent; the principal can only observe the output. So ψ has to be a function of X, and we write ψ ∈ σ(X) to mean that ψ is a function of X. There was also another constraint, which, if you remember, is what we called the participation constraint. The participation constraint basically said that the payment cannot be such that the agent goes into a form of slavery: the agent has the choice to not take up the job, or to leave it, if the payment is not good enough. So the agent had what we described as a reservation utility; call it Ū_A. The agent wants to choose an action A such that his expected utility from the payment, minus the cost of taking that action, is maximized; call the value of this problem V_A = max over A of E[U_A(ψ) − c(A)]. You can enforce the participation constraint in the following way: you can ask that, when the agent acts optimally, it is in his interest to stay in the job, that is, V_A ≥ Ū_A. This V_A is the optimization problem faced by the agent. The full problem for the principal — call its value V_P — is then to maximize E[U_P(X − ψ)] over ψ ∈ σ(X), subject to the participation constraint V_A ≥ Ū_A. This is what the principal has to solve.
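To make the agent's problem concrete, here is a minimal numerical sketch. The specific functional forms — a linear contract ψ(X) = a + bX, a quadratic effort cost c(A) = A²/2, a risk-neutral agent, and a zero reservation utility — are illustrative assumptions, not from the lecture:

```python
# Illustrative principal-agent sketch (assumed functional forms):
# output X = A + eps with E[eps] = 0, linear contract psi(X) = a + b*X,
# effort cost c(A) = A^2 / 2, risk-neutral agent, reservation utility 0.

def agent_best_effort(b):
    # Agent solves max_A E[a + b*(A + eps)] - A^2/2, which gives A* = b.
    return b

def agent_value(a, b, reservation=0.0):
    # V_A at the agent's optimum, plus the participation check V_A >= U_bar.
    A = agent_best_effort(b)
    VA = a + b * A - A ** 2 / 2
    return VA, VA >= reservation

def principal_value(a, b):
    # Principal's expected payoff E[X - psi(X)] given the agent's response.
    A = agent_best_effort(b)
    return A - (a + b * A)

VA, participates = agent_value(a=0.1, b=0.5)
print(VA, participates)               # V_A ~ 0.225, participation holds
print(principal_value(a=0.1, b=0.5))  # principal's value ~ 0.15
```

The point of the sketch is the order of moves: the principal fixes (a, b), and only then does the agent pick A to suit himself.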
The ψ that you get from solving this is the optimal contract: you get ψ as a function of X, and that function defines the contract. Now, one way of capturing what moral hazard is all about is the following. You can write out another problem for the principal. In the problem above, A is being chosen by the agent to maximize his own utility, and the only thing the principal asks of it is that the participation constraint V_A ≥ Ū_A is met. The principal could instead pose another type of problem, in which he does not worry about the agent maximizing his own utility at all. Here is a hypothetical other way in which the principal could think about what is ideally in his interest: he just says, there should be some choice for the agent by which the agent remains in the job, and among those actions that the agent could choose, I am going to pick the action that is most beneficial to me. This becomes another problem — call its value V_P^FB, and I will tell you what FB stands for — in which you maximize over both ψ and A the utility that the principal would get, subject only to the participation constraint being satisfied.
Now, you see what is happening here: the principal is basically saying, I am going to choose a payment, and I am also going to choose what you are going to do, so long as you do not leave your job — so long as your participation constraint is still satisfied. I am going to give you a payment and I am also going to tell you what to do, essentially micromanage exactly what you do, and I am going to choose both in a way that maximizes my utility. It turns out that this is better for the principal, but it is not implementable by the principal. So you will always find that V_P^FB ≥ V_P. V_P is the value that the principal gets by delegating; V_P^FB is the value that he gets by micromanaging. FB is a term used in management and economics: it stands for the first best, and V_P is what is called the second best. The first best is basically what you would get if, instead of delegating, you dictated exactly what the agent does (or went and did the job yourself); with delegation, what you get is V_P. There is always this inequality, and when there is a gap between the two we say that there is moral hazard: there is a difference between the principal choosing what the agent should be doing and the principal delegating. The reason there is a gap is that, once the contract is fixed, there is nothing binding the agent to act optimally for the principal — nothing binding the agent to solve the V_P^FB problem. He is basically going to maximize his own utility.
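A small numerical sketch of the first-best versus second-best comparison, under the same kind of assumed linear-contract setup (X = A + ε, cost A²/2, reservation utility 0), with one extra illustrative assumption: a limited-liability floor a ≥ 0 on the fixed payment, added here only so that the two values actually differ (with a fully risk-neutral agent and unrestricted transfers they would coincide):

```python
# First best: the principal dictates the effort A and pays exactly its
# cost c(A) = A^2/2, so he solves max_A A - A^2/2 (on a coarse grid).
def first_best():
    return max(A / 100 - (A / 100) ** 2 / 2 for A in range(201))

# Second best: contract psi = a + b*X with a >= 0 (assumed limited
# liability); the agent responds with A* = b, participation holds with
# a = 0, so the principal solves max_b b - b^2 (on the same grid).
def second_best():
    return max(b / 100 - (b / 100) ** 2 for b in range(201))

v_fb, v_sb = first_best(), second_best()
print(v_fb, v_sb, v_fb - v_sb)   # 0.5 0.25 0.25
assert v_fb >= v_sb              # V_P^FB >= V_P always
```

On this grid the gap V_P^FB − V_P comes out to 0.25: micromanaging would be strictly better for the principal, but he cannot enforce it.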
The gap between these two also has a name: V_P^FB − V_P is what is called the information rent. This is even more stark when the noise in the output is realized during game play and is observable by the agent but not by the principal; then, based on how the noise evolves, the agent can actually make suboptimal effort. Suppose, for example, the agent is a worker on a farm and the principal is the owner of the farm. If there is good rainfall — if his luck is good — the worker does not need to put in as much effort, and he can essentially enjoy his payment as a result of luck rather than as a result of his effort, whereas had he put in even more effort, that would have been even better for the farm. But he does not do that. So this moral hazard problem is an issue that arises during game play: once the contract is signed, there is nothing you can do to enforce a particular action, because the contract is a function of the output rather than of the action itself, and the action is unobservable. At its core — note that I will use this term, and the reason for using it will become clearer — moral hazard is an issue that relates to obedience. The principal wants the agent to do something, but it may not be in the agent's interest to do that thing. The principal can instruct the agent to go and solve the V_P^FB problem, but it is not in the agent's interest to solve it; he instead solves something else. So the obedience of instructions is where the issue is. Now I will explain the reason I brought this term up.
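The farm story can be made concrete with a toy simulation. The target-output contract and all the numbers below are made up for illustration:

```python
# Toy version of the farm example (all numbers are illustrative).
# Contract: the worker is paid whenever the output X = A + eps reaches
# the target q = 1.0. The worker observes the rainfall eps BEFORE
# choosing his effort A; the owner only ever sees X.

TARGET = 1.0

def effort_given_luck(eps):
    # The worker supplies just enough effort to hit the target:
    # good luck (large eps) is converted into shirking, not extra output.
    return max(0.0, TARGET - eps)

for eps in (0.0, 0.4, 0.8):
    A = effort_given_luck(eps)
    print(eps, A, A + eps)   # output stays at the target while effort shrinks
```

Since the owner sees only X = 1.0 in every case, the contract cannot distinguish effort from luck — exactly the obedience problem described above.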
There is another type of phenomenon that comes up, which I will discuss now, and then what we will do is work towards understanding all this in a more general, more system-theoretic sort of way. Moral hazard, as I said, is an issue that comes up during game play because of the unobservability of certain things by the principal. There can be another type of issue altogether, which is what is called adverse selection. You will hear this term a lot in the context of insurance and many other types of interactions where risk is involved. Now, what is adverse selection referring to? In adverse selection, the question is not about actions being unobservable, but rather about the capability of the agent not being known. In moral hazard the issue was that you cannot observe actions, and there is noise, and so there is imperfect information; adverse selection refers to an incomplete information setting: you do not know what type of agent you are dealing with. For instance, suppose there are two types of agents, efficient and inefficient. For the inefficient agent, the cost of producing an output q is θ̄q; if he is efficient, the cost of producing output q is θ̲q, with θ̄ > θ̲. So for the efficient one, it costs less to produce the same output q. Now, the principal has to interact with this agent, but the type of the agent is known only to the agent: the principal does not know whether he is the efficient one or the inefficient one. So, what is the way to design a contract, given that you do not know which type of agent you are
dealing with? It turns out that in this case an incentive has to be offered in order to get the person to reveal his type. The issue here is that there is private information that exists even before the game begins: this is an incomplete-information type of setting, where there is a private type known to the agent even before the game play begins, and you have to offer an incentive in order to differentiate between the types. This means that there is essentially a cost, on the part of the principal, to discovering the type of the agent itself. Let me tell you roughly one mechanism that does this — what the principal could offer. When the agents maximize their own individual utilities, they produce outputs q̲ and q̄: q̲ is the output produced by the efficient one, and q̄ is the output produced by the inefficient one. In other words, if the efficient agent were employed, the output you would get is q̲; if the inefficient agent were employed, you would get the output q̄. But you do not know which is the efficient one and which is the inefficient one. So how does the principal delegate a task like this, knowing that he does not know who is efficient and who is inefficient? He has to make a differential payment. What he does is offer a menu of contracts — not one contract but a menu of contracts — and this is one way of doing it; I will tell you the more general framework later. In the menu he
says: if you produce an output q̲, you will be given a payment p̲, and if you produce an output q̄, you will be given a payment p̄. Now you have two more variables here, p̄ and p̲, and using these variables you have to ensure that the menu is such that it picks the right person. First, you need to ensure that you get someone to do the job at all, so you need a participation constraint. The participation constraint essentially says that both types find it worthwhile to actually do the job — what each gets paid, minus the cost each incurs, is nonnegative: p̄ − θ̄q̄ ≥ 0 and p̲ − θ̲q̲ ≥ 0. This just ensures that neither of them will leave the job. In addition to this, you want to ensure that the right incentive goes to the right person. The way you enforce that is through the following constraint. Consider p̄ − θ̄q̄, the profit that the inefficient agent would get from the contract designed for him. Now suppose this inefficient agent instead chooses the contract designed for the efficient one. He would get the payment p̲, but he would still incur cost at rate θ̄ — the principal is saying, I am going to buy an output quantity q̲ from you at a payment p̲, but this agent will now incur a cost θ̄q̲ — so he would get p̲ − θ̄q̲. So we require p̄ − θ̄q̄ ≥ p̲ − θ̄q̲. What is this inequality saying? The inefficient agent is better off choosing the inefficient contract than switching to
the efficient contract. Likewise, I can impose the other requirement, p̲ − θ̲q̲ ≥ p̄ − θ̲q̄: the efficient agent is also better off choosing the efficient contract than the inefficient one. Now, what does this mean? You can think about it in the following way. If the p's and q's are chosen in such a way that these inequalities hold, then the principal can do the following: he presents the agents this menu of contracts and then simply goes and asks them, what is your type — are you efficient or inefficient? Because the p's and q's are chosen so that these inequalities hold, (a) both of them will take the job, and (b) each will choose the contract that is appropriate for him, because it is in his interest to stick to the contract pertaining to his true type. These constraints are what are called incentive compatibility constraints. Incentive compatibility, in this case, refers to the following: it is in an agent's interest to reveal his true type. Now, this appears very broadly in many different games, particularly in games that involve security and so on. Remember, when I told you about incomplete information, the example I gave you was that you are securing an airport, and you do not know whether the passenger coming in is a terrorist, an innocent passenger, or a drug peddler, and you want to distinguish between them. What you are going to do after that matters: the scheme that you are going to use to act on each of these categories should be such that it actually incentivizes the person to reveal his true type. At least the innocent one should be incentivized to
reveal — to say that he is innocent — and the others should have no incentive to say that they are innocent. So this is basically a framework for designing graded penalties, graded procedures and so on, which ensures that, when you do not have complete information, it is in the interest of the person with the information to reveal his true type; then the incomplete information disappears, and from there the game becomes one of complete information. This also comes up in auction design, for example. You want the object to go to the person who values it the most, but nobody wants to reveal how much they value it, because if they reveal their true value they will end up paying a lot more. So people want to underquote their value, but at the same time they also want to get the item. Put these two together, and how to design an auction so that the truth actually comes out becomes a question in itself. The design parameters are who wins and how much he pays: the rule by which you decide the winner, and the rule for the price. The winner rule is usually trivial — the highest bidder wins — but how much he pays is a very important question. For the longest time, mankind actually conducted auctions by asking people, how much do you want to pay, and then charging the winner whatever he quoted: if someone quotes 100 rupees for this pen, I say, okay, take it and pay 100. It turned out that that mechanism is not incentive compatible — there is a problem with that mechanism.
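Going back to the menu of contracts for a moment: the participation and incentive compatibility constraints above can be checked mechanically. Here is a minimal sketch, where all the numerical values of θ, q and p are made up for illustration:

```python
# Two-type screening menu: theta_ineff > theta_eff, cost of output q is
# theta * q, and the menu offers one (q, p) pair intended for each type.

def check_menu(theta_eff, theta_ineff, q_eff, p_eff, q_ineff, p_ineff):
    profit = lambda theta, q, p: p - theta * q
    # Participation: each type earns at least his outside option (0 here).
    ir_eff = profit(theta_eff, q_eff, p_eff) >= 0
    ir_ineff = profit(theta_ineff, q_ineff, p_ineff) >= 0
    # Incentive compatibility: each type prefers his own contract to the
    # contract designed for the other type.
    ic_eff = profit(theta_eff, q_eff, p_eff) >= profit(theta_eff, q_ineff, p_ineff)
    ic_ineff = profit(theta_ineff, q_ineff, p_ineff) >= profit(theta_ineff, q_eff, p_eff)
    return ir_eff and ir_ineff and ic_eff and ic_ineff

# An illustrative menu that separates the types:
print(check_menu(theta_eff=1.0, theta_ineff=2.0,
                 q_eff=2.0, p_eff=4.0,
                 q_ineff=1.0, p_ineff=2.0))   # True

# Raising the efficient type's payment tempts the inefficient type to lie:
print(check_menu(theta_eff=1.0, theta_ineff=2.0,
                 q_eff=2.0, p_eff=5.0,
                 q_ineff=1.0, p_ineff=2.0))   # False
```

Notice that in the passing menu the efficient type earns 4 − 1·2 = 2 > 0: a strictly positive profit he must be conceded so that truth-telling stays in his interest — an information rent again.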
The correct mechanism, which preserves incentive compatibility and makes people reveal their true type — do you know what that is? The winner pays the second-highest bid, not the highest. This is what is called the Vickrey auction. In a Vickrey auction, the winner is the one who bids highest, but the price he pays is equal to the second-highest bid — not his own bid, the second-highest bid. It turns out that in this case it is a dominant strategy to reveal your true valuation. It is just incredibly elegant how this whole thing works out: the beautiful insight is that you have to take away this difference between the first and the second bid, and then everything falls into place. Vickrey actually got a Nobel Prize for this, and today eBay and similar platforms run on forms of Vickrey's auction; in fact, the recently concluded 5G auctions are also one form of Vickrey auction. So essentially, the two issues that we found were these. One is what happens during game play: obedience to a contract, obedience to whatever the principal is saying. The other is that, in order for the principal to tell you what to do, he needs to know who you are — whether you are efficient or inefficient, and so on — and that is about revelation of types. These two things we can put together into one beautiful, elegant framework involving communication, in which we allow for communication of various kinds between the players. That is what I will talk about now, and until the end of the course that is what I want to focus on.
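The second-price rule is easy to state in code. Here is a minimal sketch of a sealed-bid Vickrey auction, with a brute-force check (over illustrative bids) that truthful bidding is weakly dominant for one bidder against fixed rival bids:

```python
# Vickrey (second-price, sealed-bid) auction: the highest bidder wins
# but pays the second-highest bid.

def vickrey(bids):
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]   # (winner index, price paid)

def utility(value, bid, rival_bids):
    # Payoff to bidder 0 with the given valuation when bidding `bid`.
    winner, price = vickrey([bid] + rival_bids)
    return value - price if winner == 0 else 0.0

value, rivals = 7.0, [5.0, 3.0]
truthful = utility(value, value, rivals)   # wins, pays 5.0 -> utility 2.0
# No deviation on this bid grid does better than bidding the true value:
assert all(utility(value, b, rivals) <= truthful for b in range(12))
print(truthful)   # 2.0
```

Overbidding never lowers the price (the price is set by the rivals), and underbidding only risks losing a profitable win — which is exactly why the bid can safely equal the true valuation.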