Suppose, in this game, neither effort nor output can be observed. Then what kind of wage can the principal offer? It would have to be a constant, because no performance-based incentive is available: you cannot pay anybody more as a function of the effort they make or the output they produce. So it is a constant wage. And if it is a constant wage, what constant wage should the principal offer? You still have to satisfy the participation constraint. So here is the situation: the principal offers some wage, and that wage is enough to ensure participation. Now that the participation constraint is satisfied — the wage is good enough for the agent to stay in the job — what should the agent do? Maximize utility. By doing what? By making zero effort. And knowing this, what should the principal do? Reduce the salary. Effectively that is what happens: you come down to some base minimum salary at which some epsilon of effort still meets the participation constraint. And even at that salary, once the participation constraint is met, the agent makes zero effort.
So if neither output nor effort can be observed, we arrive at the following situation. First let me write that only constant wages can be offered. What this means is that the utility u(e, w) has a wage component that is constant in effort, and therefore the agent's optimal effort is zero. In this situation you have arrived at what is called moral hazard. What does moral hazard refer to? You can think of it this way: once the contract is offered, there is nothing the principal can do to ensure that the agent does in fact act the way the contract expected him to act at the time it was designed. The reason moral hazard arises here is the observability constraints: some information is simply not available to the principal. Once the contract is offered, the only thing constraining the agent from making zero effort is essentially his morality; rationally, the optimal effort for him is zero. The reason the term "moral hazard" is used is that this kind of situation poses a hazard to his morals — it challenges his morals: now that he has the contract, should he make any effort at all? Of course I know you understand; I am just telling you where the term actually comes from. Papers in behavioral economics have used this term quite a bit, and the economists who developed the formal theory of contracts under moral hazard were later awarded Nobel prizes.
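The zero-effort best response under a constant wage can be sketched numerically. The utility form u(e, w) = w − e² below is purely an illustrative assumption, not something from the lecture:

```python
import numpy as np

# Illustrative sketch: under a constant wage, the agent's utility
# u(e, w) = w - c(e) (with an assumed convex effort cost c(e) = e**2)
# is strictly decreasing in effort, so the best response is zero effort.

def agent_utility(effort, wage):
    return wage - effort ** 2  # the wage does not depend on effort

efforts = np.linspace(0.0, 1.0, 101)
wage = 1.0  # a constant wage that satisfies participation

best_effort = efforts[np.argmax([agent_utility(e, wage) for e in efforts])]
print(best_effort)  # → 0.0
```

Whatever the constant wage is, the arg max is the same: zero effort, which is exactly the degenerate outcome described above.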
For that I will have to write out the formal model properly; I will come to that in a moment — exactly how the participation constraint is to be enforced. In the previous case the participation constraint was enforced for every effort level. You can also ask for the participation constraint to be enforced only at the optimum; this is actually related to what you were asking. So here is another way you can solve this game — because this is a kind of degenerate case it may not be a very interesting answer, but the correct way to solve it is the following. The agent maximizes his utility by choosing an effort e. You then choose a contract such that, at the optimal e, the participation constraint is satisfied: it is in his benefit to be in the contract if he is making full effort. Now, about q: do not think of q as such a big deal. q is essentially your transfer function; it translates effort into a level of output. For example, e could be how much time the sales agent spends trying to convince people to buy his toothpaste, while q(e) is the number of units actually sold. It typically has nothing to do with w — eventually there could be multiple principals and so on, but typically not — although e* itself depends on w. As I said, because e is observed here, there is a trivial solution, which is to make w independent of e. And no — if w can be minus infinity, you are done, nothing more needs to be said, because then the participation constraint is simply absent. If the constraint is there, the problem changes; that is the point.
The slavery issue arises when this constraint is not there; the constraint is there precisely to prevent degeneracy — I was just motivating why this constraint is important. Now, as I said, a third possibility is that you can have w as a function of the output. This leads to another game: here the wage is a function of output, and the most general version is where the situation actually gets really tricky. The most general one is this: you cannot observe effort. You do not know how much time the farmer is spending in the field; when your sales agent goes into the field to sell, you do not know whether he is actually working hard and selling or just passing the time. So you cannot observe the effort. What you can observe is the output, but the output itself is a function of two things: the effort the agent makes, and his luck. It may be that he does nothing, but he finds some customer who buys 200 tubes of toothpaste, and he is done for the day. So output is a function of e and of theta, where theta is chosen by nature. In that case the problem becomes the following kind of situation: the principal wants to maximize an expectation by choosing a function w.
What have I put in here? This is the expectation of the utility the principal would get: the utility comes from the value of the effort (the sales) minus whatever wages he pays, and the wages are a function of the output. The output itself is a function of effort and of theta, which is noise. In symbols, the principal chooses the function w to maximize E[q(e*, theta) − w(q(e*, theta))]. Here e* is the effort the agent makes: e* is just the arg max of the expected utility the agent would get, subject to the participation constraint that at e*, the optimal effort, the agent should be willing to participate. This is a less demanding condition: you are saying that if you do your best, then it is in your interest to be in the job — not for every effort, but at the best effort. The reason we cannot require this to hold with equality is that w cannot be designed as a function of e alone; e is not observable, so you cannot impose the equality and solve — that is just not possible. This condition is called the participation constraint; it is also sometimes called individual rationality. So this is the most general sort of problem. What do we have here? A principal who declares a contract; the agent responds by choosing an optimal effort level; the effort level gets transmitted to the principal through the transform q and some noise theta. What the principal observes is q(e*, theta), and the contract is a function of that. Is this clear?
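This two-level structure — the principal optimizing over contracts, the agent best-responding with e*, participation checked at e* — can be sketched with a grid search. All the functional forms below are assumptions for illustration: output q = e + theta with E[theta] = 0, linear contracts w(q) = alpha + beta·q, a risk-neutral agent with effort cost e², and reservation utility 0:

```python
import numpy as np

# Numerical sketch of the principal's problem under assumed forms:
# q = e + theta (E[theta] = 0), w(q) = alpha + beta*q, agent cost e**2.
efforts = np.linspace(0.0, 1.0, 101)

def agent_eu(e, alpha, beta):
    # E[w(q(e, theta))] - e**2 = alpha + beta*e - e**2, since E[theta] = 0
    return alpha + beta * e - e ** 2

def best_effort(alpha, beta):
    # The agent's best response e* to the announced contract
    return efforts[np.argmax([agent_eu(e, alpha, beta) for e in efforts])]

best = None
for alpha in np.linspace(-0.5, 0.5, 21):
    for beta in np.linspace(0.0, 1.0, 21):
        e_star = best_effort(alpha, beta)
        if agent_eu(e_star, alpha, beta) < 0.0:
            continue  # participation constraint fails at e*
        principal_eu = e_star - (alpha + beta * e_star)  # E[q - w(q)]
        if best is None or principal_eu > best[0]:
            best = (principal_eu, alpha, beta, e_star)

principal_eu, alpha, beta, e_star = best
print(round(beta, 2), round(e_star, 2), round(principal_eu, 2))  # → 1.0 0.5 0.25
```

Under these toy assumptions the search lands on the full piece rate beta = 1 with a negative fixed part alpha that makes the participation constraint bind at e* — the agent "buys the output" and internalizes the incentive, which is exactly what noise and risk aversion make harder in the general problem.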
Now, can you tell me when theta is realized? Its distribution is known — common knowledge — but is the value realized after e is chosen, before e is chosen, or does it matter? Let us take the cases. Theta is of course realized after the contract is chosen; the contract is not chosen knowing theta, naturally. After the contract is chosen, there can be two possible sequences: first, theta gets realized and then the effort is made; or the effort is made and then theta gets realized. If theta comes first, then rather than "realized" the real question is: does the agent know theta? What we are assuming here is that the agent does not know the value of theta — if the agent knew theta, he could choose an effort as a function of theta. So the agent makes his effort, and after that, based on his luck, a certain output gets realized. It would be a very different problem if theta were realized first and the agent then had to make an effort to, say, compensate for whatever luck has not given him. And from the point of view of the principal, he would probably have no way of telling which of the two has actually occurred. Let me give you a concrete example, something I am working on right now. We have these problems in renewable energy. One of the main issues is how to compensate someone whose energy is basically produced randomly: wind comes and goes, and whether you produce a lot or a little really depends on how windy the conditions are at that time. So it is essentially a function of your luck. Now, how should you compensate the renewable energy generator, given that his output depends a lot on his luck? There are two things here: his output will of course depend on his luck, but he can also make an effort to estimate how much he is going to produce, and perhaps procure the shortfall from somewhere else. Given this, from the point of view of the company buying electricity from the generator, in trying to design the contract the company has no way of distinguishing whether the output you are seeing is low because the wind was actually low, or because this guy has not planned well enough. You do not have a way of distinguishing between effort and luck. As a result, there can be scenarios with tremendous windfalls: you try to incentivize information gathering and estimation — the generator should make an effort to prepare for the conditions coming tomorrow — and in the process you end up overcompensating him, or you end up undercompensating him and penalizing him for his bad luck. Exactly what the form of such a contract should be, when you cannot distinguish between effort and luck, is a concrete open question. Related to this is also the basic problem of estimation: tomorrow may be windy, but the generator may say, "well, my estimate was that it was not going to be windy, so that is why I did not prepare"; or tomorrow may not be very windy, but he may say, "my estimate was that it would be, and therefore I planned for something else."
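The indistinguishability of effort and luck from a single observation can be made concrete with a small simulation. The additive Gaussian model below (output = effort + Normal(0, 1) noise) is an assumed toy model, not the lecture's:

```python
import numpy as np

# Sketch of why one output sample cannot separate effort from luck.
# Assumed model: output q = e + theta, theta ~ Normal(0, 1).
rng = np.random.default_rng(42)

def outputs(effort, n):
    return effort + rng.normal(0.0, 1.0, size=n)

high = outputs(1.0, 100_000)  # an agent making full effort
low = outputs(0.0, 100_000)   # an agent making zero effort, same luck model

# A large fraction of zero-effort days still beat the *median*
# full-effort day, so a single observed output is consistent
# with either "bad planning" or "bad wind".
overlap = np.mean(low > np.median(high))
print(round(overlap, 2))  # roughly 0.16 under these assumptions
```

With unit-variance luck and a one-unit effort gap, about one day in six of pure luck looks better than a typical hard-working day — which is why the principal cannot infer effort from one realization.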
The point is, you have no way of knowing whether the effort in estimation has been sufficient from just the one estimate he produces: a single sample does not tell you whether the effort was good enough. You will only know whether this guy is making enough effort by looking at lots of samples — but by the time you have looked at lots of samples, a lot of damage may already have happened. So the question is: if the output, or the effort, is not a number but a probability distribution, what should the form of the contract be, when the relevant space is a space of distributions and a distribution can only be judged from samples? The reason this matters is that the electricity case is a little more complicated: if the generator does not produce enough, you will need to compensate for it, because there is a chain of contracts — the generator has a contract with some intermediate company, and the intermediate company has contracts with the final consumers. Because of this chain of contracts, if the generator does not make his effort and do his bit, you then have to go and compensate for whatever he is not producing, and that creates a whole bunch of other effects. The other thing is that this effort at estimation is important in other respects too, because it lets you plan for other problems. So you want to actually elicit a true estimate.
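The "one sample tells you nothing, many samples tell you a lot" point can be illustrated with a proper scoring rule. The Gaussian forecasts and the log score below are illustrative choices, not anything specified in the lecture:

```python
import math
import random

# Sketch: a reported forecast distribution can only be judged from many
# samples. The data truly come from N(0, 1); we compare an honest
# forecast N(0, 1) against a careless one N(2, 1) via the average
# log score (log-likelihood of the realized samples).

def normal_logpdf(x, mu, sigma=1.0):
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]

# A single sample can easily favor the wrong forecast...
print(normal_logpdf(1.5, 0.0) < normal_logpdf(1.5, 2.0))  # → True

# ...but the average over many samples reliably favors the truth.
honest = sum(normal_logpdf(x, 0.0) for x in data) / len(data)
careless = sum(normal_logpdf(x, 2.0) for x in data) / len(data)
print(honest > careless)  # → True
```

The first print is exactly the moral hazard in estimation: the realized value 1.5 is more likely under the careless forecast, so one realization cannot convict or exonerate the estimator; only the long-run average score can.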
More generally, this becomes a problem of distributed state estimation: you want to estimate the state, but you are not happy with just a point estimate — you want a distribution, and the reported distribution should reflect the true distribution that will come about. I can tell you more offline, writing out the problem, but essentially it is a classic case of moral hazard there as well: there is no way for anyone to verify that you made your effort at estimating something, because your estimates, whether made with the best effort or the worst effort, could go equally wrong. And yes, you then need a large horizon — but those are practical issues that come up. Nobody can really design a contract that says, "I will pay you this much over the next 365 days based on how those 365 days go"; that kind of contract is not possible to design. Fundamentally — and this goes into another type of problem — the issue in the problem I am talking about is that the principal does not know the distribution exactly. What he knows is something else: the theta in that problem is some background state; conditioned on that background state, an actual distribution gets realized, and that actual distribution is visible only to the agent. You want the agent to estimate that distribution and report it to you accurately — but he can only report an output, not a distribution; or even if he reports a distribution, what gets realized is only an output.
Essentially, that becomes a slightly more complex version of this problem, where through samples you want to enforce what should happen on the entire sample space. So let me tell you a few problem classes that come up as a result of this. As I said, the assumption here is that the agent does not know theta; if the agent knows theta, the problem is again of a different nature. Let me write out some of these problem classes. The first is what is called moral hazard with hidden action. Here the principal offers a contract; the agent can accept or reject it; the agent makes an effort; and after he makes the effort, nature plays and you get an output — say nature plays high or low, giving high or low output. This is moral hazard where the actions are not observable, the output is observable, and you cannot tell how much of the output is due to effort and how much is due to nature. Now there is another variation, as I said: the problem called post-contractual hidden knowledge. This is the case where the principal gives a contract; the agent can accept or reject; and if he accepts, then after the contract is signed the agent gets to know what nature plays. The principal does not know what nature is going to be, but the agent does — there is some knowledge the agent has which is hidden from the principal. So here nature plays first, after the contract, and the agent's action, high or low, is a function of what nature played.
Now there is another version — again it depends on when nature plays. This one I have not talked about; we will talk about it next time. It is what is called adverse selection. In adverse selection, nature plays first — say nature plays high or low. The principal cannot distinguish between these two: nature plays something, the principal cannot observe it, and he has to come up with a contract. This is a classic case of incomplete information. In adverse selection the type of the agent is chosen by nature but is not observable to the principal. The agent is either the high type or the low type; the principal cannot observe which, and without knowing this he has to come up with a contract. The agent's problem is to accept or reject, and the game can continue further down after this. The classic example is insurance: you want to offer an insurance policy, but you do not know whether this person is healthy or unhealthy; you may not know whether he is a smoker or a non-smoker, and so on. The policy has to be formulated in such a way that the right type of agent finds the right type of policy attractive. We will discuss more about this in the next class. What I want to do for the remaining part of the course is discuss two more models. One is what is called signaling. In signaling you again have nature playing first — say high or low.
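The insurance example can be sketched as a tiny self-selection computation. All the numbers below are made up for illustration, and the customer is taken as risk-neutral purely to keep the arithmetic simple:

```python
# Toy adverse-selection sketch: the insurer cannot observe the
# customer's type, so it offers a menu and lets each type self-select
# the policy it prefers. All premiums and probabilities are invented.

types = {"healthy": 0.1, "unhealthy": 0.4}  # probability of a loss
loss = 100.0

# Menu entries: (premium, fraction of the loss covered)
menu = {"basic": (8.0, 0.5), "full": (25.0, 1.0)}

def expected_cost(p_loss, premium, coverage):
    # Customer's expected out-of-pocket cost under a policy
    return premium + p_loss * loss * (1.0 - coverage)

choices = {t: min(menu, key=lambda name: expected_cost(p, *menu[name]))
           for t, p in types.items()}
print(choices)  # → {'healthy': 'basic', 'unhealthy': 'full'}
```

With these prices, the cheap partial policy is attractive only to the low-risk type, while the high-risk type prefers full coverage — the menu separates the types even though the insurer never observes them, which is the design goal stated above.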
The agent observes this and sends a signal to the principal, based on which the principal takes an action — the simplest version is where the principal just takes an action, but you could also have a situation where the principal offers a contract and it is for the agent to decide whether to accept or reject it. The last model is screening. In screening it is a similar information structure, but it is the principal who commits first. In signaling, there is some information the agent knows, and the agent uses it to shape the principal's information, after which the principal offers a contract — so it is the agent who is the leader; the agent shapes the information of the principal. In screening, again there is some information the agent knows and the principal wants to know, but it is the principal who moves first: the principal wants to separate, or screen, agents by coming up with some framework or contract. So here nature has played — I will write it like this — but the principal does not know it: nature picks a type for the agent; the principal does not get to see the type; he offers a contract; and the agent has to accept or reject. Next class we will go through adverse selection, and then with whatever time remains we will spend some time on these two. Both of these — particularly signaling — are very actively researched topics; you will see lots of papers in the transactions and so on. So we will go through these two in the remaining part of the course. Screening, again, refers to the same information structure: the agent knows his type, the principal does not, but the principal commits first.
This is poorly drawn, actually — think of it as essentially the same information structure, but with the principal committing first. The distinction between signaling and screening is this: in signaling, the sender — the agent — attempts to influence the receiver by shaping its information; it takes the lead in shaping the information in order to get certain ends met. In screening, it is the receiver who decides: he commits first, by declaring a contract or some strategy that constrains what the agent can actually report. These are two different models, and there could be further variations on top of them: here there is just an action of accept or reject, but you could have a further chain on top of this — effort and so on, a full longer contract — which I have not written here. The point is that the distinction between the two is who commits first. In both cases it is the type of the agent that is chosen by nature; who plays first is the difference.