So, let us go to that now. In order to talk about agreement we need to extend our model a little by introducing beliefs: the Aumann model of incomplete information with beliefs. In the model we had earlier, players had partitions, and we discussed events and whether a player knows an event or not. Now, in addition, we introduce one more element: a probability P, where P(ω) is taken to be positive for every state ω. This is the common prior of the players. So again we have n players, they have their own partitions, and again we will be discussing events; but in addition, the players have a belief about the probability with which the various states of the world occur. Earlier we were referring to a situation in which a state of the world was fixed, and we asked what players know about a certain event in that state. Now, in addition to everything they had before, the players have a prior belief about the probability with which the various states occur: the probability with which it is going to rain, the probability with which my sensor is going to malfunction, the probability with which this customer is going to commit fraud, and so on. Why is this called a prior? Because it is the belief the players hold before they get any information; it is the probability distribution with which nature is going to choose the true state of the world.
Now nature chooses a true state of the world, and players get limited observations about it, based on their partitions. They cannot actually see the true state, because they are limited by their partitions, but they can say what their belief now is about what the true state of the world actually is. A common situation, for instance: suppose you are an investor with a certain belief that a company is going to do well, and then you get indications about it. You get some news, say that the company has won a certain number of orders, or that it has appointed a new CEO; based on this you update your belief about what you think the future earnings of the company are going to be. That updated belief is what is called the posterior belief; the belief you had before you got any information is the prior belief. We will go on to describe how players update their beliefs, but what we are going to assume throughout is that all players have this common prior belief. If you want to allow for priors that differ across players, the problem changes and leads to a different type of model; you have to generalize this model even further. We are going to stick to the situation where everyone has a common prior belief. All right. Now let us consider this: there are four states of the world, ω1, ω2, ω3 and ω4. For the sake of illustration I am going to take some numbers here, and later I will modify them to make my point a little clearer.
So let us take these as, for example, 1/4, 1/4, 1/3 and 1/6. This is the prior belief the players have; the numbers denote the prior. Everyone starts off believing that ω1 is going to occur with probability 1/4, ω2 with probability 1/4, ω3 with probability 1/3 and ω4 with probability 1/6. Now there are two players, player 1 and player 2. Player 1's partition is {ω1, ω2}, {ω3, ω4}: he cannot distinguish between ω1 and ω2, and he cannot distinguish between ω3 and ω4. Player 2's partition is {ω1, ω3}, {ω2, ω4}: he cannot distinguish between ω1 and ω3, and he cannot distinguish between ω2 and ω4. Now let us see how these players update their beliefs when they get some information. I will write out player 1's updated, or posterior, belief. The new belief depends on the information he gets, and remember, the belief is again a probability distribution on Y. So suppose player 1 learns that one of ω1 or ω2 has occurred; that is, the true state of the world is either ω1 or ω2. How is he going to compute his updated, or posterior, belief about the various events? Now that he knows that either ω1 or ω2 has occurred, he knows for sure that ω3 and ω4 could not have occurred, so those have to get probability 0. What about ω1 and ω2?
In this case, he should logically say that ω1 and ω2 are equally likely, because to begin with they were equally likely. What I am describing is essentially that player 1 has to apply Bayes' rule. He gets the information that either ω1 or ω2 has occurred, he has a certain prior on the probabilities with which the states occur, and based on that he updates his prior to get his posterior belief on ω1 through ω4. Let me write this out. How would player 1 compute the probability of ω_k given that ω1 or ω2 has occurred? He computes the prior probability of {ω_k} intersected with {ω1, ω2}, divided by the prior probability of {ω1, ω2}. More generally, for any event whose posterior probability he wants when his information is {ω1, ω2}, he takes the prior probability of the intersection of that event with his information, divided by the prior probability of the information. A player who revises his beliefs based on new information in this way is what is called Bayesian rational. Earlier we said a rational player is someone who maximizes utility given a certain probability distribution over the various possibilities; but if his information changes, then in his mind that probability distribution should change too, and how exactly to update it requires some axiom. A player who updates his probabilities in this particular way is called a Bayesian rational player.
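The update rule just described can be written out compactly as follows (my formalization of the rule stated in the lecture, with E an arbitrary event and I = {ω1, ω2} the information received):

```latex
% Bayes update of the common prior P given information I.
P(E \mid I) = \frac{P(E \cap I)}{P(I)},
\qquad
P(\{\omega_k\} \mid \{\omega_1, \omega_2\}) =
\begin{cases}
\dfrac{P(\omega_k)}{P(\omega_1) + P(\omega_2)}, & k \in \{1, 2\},\\[1ex]
0, & k \in \{3, 4\}.
\end{cases}
```

The second formula is just the first applied to the singleton events, which is all we need to write down the full posterior distribution.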
So, basically, his new probability of an event is the posterior, or conditional, probability of that event given the new information. He could have done many other things. For example, he could have said: I get some information; based on that, let me do MLE, maximum likelihood estimation, of the true state of the world. In that case he would ask whether ω1 or ω2 is more likely, and the one with the higher probability would become his belief about the state of the world; that would be doing maximum likelihood estimation. That is another axiom, and it turns out it leads to a different theory altogether. Our axiom here is that he is Bayesian rational, and Bayesian reasoning has been established to be consistent with all the other axioms of rationality: if you are rational in a holistic way, you basically have to be applying Bayes' rule. This is also related to why we use regularization in machine learning and so on; I can go along that tangent on another day, but essentially the same logic applies there. So what the player does is compute the posterior probability, the conditional probability of the event given his information. All right. Based on this, let us quickly write out his posterior distribution. When his information is {ω1, ω2}: the probability of ω1 is 1/2, the probability of ω2 is also 1/2, and ω3 and ω4 each get probability 0.
What if his information was {ω3, ω4}? Then ω1 and ω2 could not have occurred, because their intersection with the information is now empty, so those two get probability 0. The updated probability of ω3 is (1/3)/(1/3 + 1/6) = 2/3; so with probability 2/3 it is ω3 and with probability 1/3 it is ω4. Clear? Let us also write out player 2's posterior. Player 2's posterior has to be written in terms of the information that player 2 has. We wrote player 1's information the way we did because player 1 could not distinguish ω1 from ω2, or ω3 from ω4. Player 2 has a different information structure: he cannot distinguish ω1 from ω3. So when his information is {ω1, ω3}, what are the probabilities? ω1 occurs with probability 3/7, ω2 with probability 0, ω3 with probability 4/7, and ω4 with probability 0. And if his information is {ω2, ω4}, then ω1 occurs with probability 0, ω2 with probability 3/5, ω3 with probability 0, and ω4 with probability 2/5. Is this clear? These are the players' posterior beliefs. So the way things happen, the chain is like this: the players have a common prior about the probability with which nature is going to pick a true state of the world.
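All of these posteriors can be checked with a short script. This is a sketch in Python, not from the lecture; the state names `w1`–`w4` and the function name `posterior` are my own labels.

```python
from fractions import Fraction as F

# Prior from the lecture: P(w1)=1/4, P(w2)=1/4, P(w3)=1/3, P(w4)=1/6.
prior = {"w1": F(1, 4), "w2": F(1, 4), "w3": F(1, 3), "w4": F(1, 6)}

def posterior(prior, info):
    """Bayes update: P(w | info) = P(w) / P(info) for w in info, else 0."""
    p_info = sum(prior[w] for w in info)
    return {w: (prior[w] / p_info if w in info else F(0)) for w in prior}

# Player 1's partition elements: {w1, w2} and {w3, w4}.
print(posterior(prior, {"w1", "w2"}))  # w1: 1/2, w2: 1/2, rest 0
print(posterior(prior, {"w3", "w4"}))  # w3: 2/3, w4: 1/3, rest 0

# Player 2's partition elements: {w1, w3} and {w2, w4}.
print(posterior(prior, {"w1", "w3"}))  # w1: 3/7, w3: 4/7, rest 0
print(posterior(prior, {"w2", "w4"}))  # w2: 3/5, w4: 2/5, rest 0
```

Using exact fractions rather than floats makes the computed posteriors match the lecture's numbers exactly, with no rounding.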
The true state of the world is actually chosen, but the players do not know which one it is; what they know is just some partial information about it, and based on that partial information they update their prior to get a posterior belief about the true state of the world. That is the chain. Now, as I said, I will change the numbers a little, because there is an interesting coincidence I want to show you, and this coincidence turns out to be a very nice and deep theorem. So let me change the prior: 1/6 for ω1, 1/3 for ω2, 1/3 for ω3, and 1/6 again for ω4. And now suppose the true state of the world is ω* = ω1. Of course, the players do not know it is ω1; they just get their information, and the partitions are as before: {ω1, ω2} is one element of player 1's partition, {ω3, ω4} is another element, and for player 2 we have the "vertical" ones, {ω1, ω3} and {ω2, ω4}. Now let us look at an event A; I am going to take the event A = {ω2, ω3}. The true state of the world is ω* = ω1. What is the posterior belief that these players have? The players will not know that the true state is ω1; each player only knows the element of his partition. So player i knows that F_i(ω*) has occurred.
So player i knows F_i(ω*); that is all that player i knows. Now that he knows this, he has to update his belief. Earlier his belief about A was just the prior P(A) = P(ω2) + P(ω3); now his belief, the posterior belief, is conditional on his information. For player 1 it is P(A | F_1(ω*)), and for player 2 it is P(A | F_2(ω*)). The players are being Bayesian rational here: each says, this is the information I have, and based on it my new belief is the conditional probability of A given this information. So what is P(A | F_1(ω*))? F_1(ω*) is the upper red set in the figure, which is just {ω1, ω2}. Let us compute: P(A ∩ F_1(ω*)) divided by P(F_1(ω*)). Let me also draw A here in green; A is {ω2, ω3}. So A ∩ F_1(ω*) is just {ω2} (the red sets are player 1's partition, the blue ones are player 2's). So this is P(ω2)/(P(ω1) + P(ω2)) = (1/3)/(1/6 + 1/3) = 2/3.
All right, what about player 2? Again, P(A ∩ F_2(ω*))/P(F_2(ω*)). Here F_2(ω*) = {ω1, ω3}, so A ∩ F_2(ω*) = {ω3}, and we get P(ω3)/(P(ω1) + P(ω3)) = (1/3)/(1/6 + 1/3) = 2/3 again. These two numbers have turned out to be the same, and here is the interesting thing. Suppose I ask you about the following set, which I will write in red: the set of all states of the world ω in which player 1 ascribes probability 2/3 to the event A. These are the states of the world in which player 1 would believe that A occurs with probability 2/3. What kind of set is this? It is itself an event: it is a subset of the states of the world. And because it is an event, player 2 can have a belief about it; that is, player 2 can have a belief about the belief that player 1 has about a certain event. With this we can start recursing: just as we had a knowledge hierarchy, in which we could talk of what player 1 knows about what player 2 knows and so on, we can now have a belief hierarchy, in which we talk of what player 1 believes about what player 2 believes about what player 3 believes, and so on.
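The two posterior computations above can be replicated in a few lines. Again a sketch in Python with my own state labels `w1`–`w4`; `F1` and `F2` stand for the partition elements F_1(ω*) and F_2(ω*) containing the true state ω* = ω1.

```python
from fractions import Fraction as F

# Modified prior from the lecture: 1/6, 1/3, 1/3, 1/6.
prior = {"w1": F(1, 6), "w2": F(1, 3), "w3": F(1, 3), "w4": F(1, 6)}

A = {"w2", "w3"}    # the event under discussion
F1 = {"w1", "w2"}   # player 1's partition element containing w* = w1
F2 = {"w1", "w3"}   # player 2's partition element containing w* = w1

def cond_prob(prior, event, info):
    """P(event | info) = P(event & info) / P(info)."""
    return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

print(cond_prob(prior, A, F1))  # 2/3
print(cond_prob(prior, A, F2))  # 2/3
```

Both players condition the same prior on different information sets, yet both arrive at posterior 2/3; this is the coincidence the lecture is building on.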
So, this is now an event, and we can talk of the probability that player 2 ascribes to it; that would be the belief that player 2 has about the fact that player 1 holds belief 2/3 about the event A. Is that clear? I will not go into formally describing what a belief hierarchy is, but you can see that to generate one, you take all possible events and you also need to put in all possible numbers: the value between 0 and 1 is itself part of describing the belief. With knowledge it was a yes-or-no matter, you either have the knowledge or you do not, it was always about whether a player knows something; now he has a belief, and the question is a belief of how much. So when you generate a belief hierarchy, it is not just about yes-or-no questions: you take all possible events, attach all possible probabilities, and talk of statements like "player 1 has the belief that player 2 has the belief that player 3 has belief equal to something". Well, I do not know exactly what you mean, but essentially this is now just an event; I do not need anything more. I can treat it like any other event and talk of a player's posterior belief about it; it is just like any other set. So, in fact, can you tell me what this set is: what are the states of the world ω in which player 1 ascribes probability 2/3 to the event A?
He ascribed probability 2/3 to event A when his information was F_1(ω*) = {ω1, ω2}; so he ascribes probability 2/3 in state ω1 and in state ω2 as well. But if you actually check, you can do the calculation again, it turns out that, because of the symmetry in these numbers, he also ascribes probability 2/3 to A in states ω3 and ω4: with information {ω3, ω4}, the posterior of A is P(ω3)/(P(ω3) + P(ω4)) = (1/3)/(1/2) = 2/3. In other words, the set of states of the world in which he ascribes probability 2/3 to event A is actually equal to Y. Which means that in every state of the world, player 1 ascribes probability 2/3 to the event A. As I said, there are some coincidences here; these numbers have been crafted in this particular way to make the point. And what is the point? The set of states in which player 1 ascribes probability 2/3 to A is the entire set, which means that regardless of what has actually happened, he is going to ascribe probability 2/3 to A. What about the same thing for player 2? What are the states of the world in which player 2 gives probability 2/3 to A? It turns out, you can do the calculation, that this is also Y. Now here is the interesting thing, and this is where the agreement theorem comes up. We do not have time for a proof today, but let me state what the theorem actually says.
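The claim that both of these sets equal Y can be verified mechanically. The sketch below (Python; the names `partition1`, `partition2`, `info_set`, and `posterior_of_A` are mine, not the lecture's) computes, for each player, the event "this player ascribes probability 2/3 to A" and checks that it is all of Y.

```python
from fractions import Fraction as F

# Modified prior from the lecture: 1/6, 1/3, 1/3, 1/6.
prior = {"w1": F(1, 6), "w2": F(1, 3), "w3": F(1, 3), "w4": F(1, 6)}
Y = set(prior)
A = {"w2", "w3"}

# Information partitions of the two players.
partition1 = [{"w1", "w2"}, {"w3", "w4"}]
partition2 = [{"w1", "w3"}, {"w2", "w4"}]

def info_set(partition, w):
    """F_i(w): the element of player i's partition containing w."""
    return next(S for S in partition if w in S)

def posterior_of_A(partition, w):
    """P(A | F_i(w)): the player's posterior of A in state w."""
    info = info_set(partition, w)
    return sum(prior[x] for x in A & info) / sum(prior[x] for x in info)

# The event {w : player i ascribes probability 2/3 to A}, for each player.
E1 = {w for w in Y if posterior_of_A(partition1, w) == F(2, 3)}
E2 = {w for w in Y if posterior_of_A(partition2, w) == F(2, 3)}
print(E1 == Y, E2 == Y)  # True True
```

Since E1 = E2 = Y, both events hold in every state of the world, which is exactly what makes them common knowledge in the sense used next.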
Now, these two events equal Y, which means they are actually common knowledge: both players are in complete agreement that the other player gives probability 2/3 to the event A. So there is a state of the world ω and there is an event A; the event that player 1 ascribes probability 2/3 to A is common knowledge, and the event that player 2 ascribes probability 2/3 to A is common knowledge. Can it ever happen that these two numbers are not 2/3 and 2/3, but, say, 1/3 and 2/3? These two events are common knowledge, which means the players are in complete agreement over the fact that they each ascribe a certain probability to some reference event A. They are in complete agreement on the process by which they arrived at their new probabilities: there is agreement over the application of Bayesian rationality, the application of Bayes' rule, whatever the process is for updating beliefs. They also started off with a common prior, so they started with a common belief; there is agreement over the process by which they update; and they are referring to the same event. Could they then possibly have come up with two different beliefs? They refer to a particular event, they start from a common prior, and they update their beliefs through a process they completely agree on. Could they have come up with two different answers? That is the theorem: the players have to agree. It cannot happen that players agree to disagree, essentially.
They cannot. If they agree on the prior, and they agree on the process, meaning that the fact that each of them applies Bayesian rationality is common knowledge, and if these two events are common knowledge, the event that player 1 ascribes probability p1 to event A and the event that player 2 ascribes probability p2 to event A, then p1 and p2 have to be equal. They could not have come up with two different answers. Now, what does this mean? Let me give you a quick interpretation, in finance, for example. Suppose two people observe the same piece of information, something publicly announced, like the results of a company or the resignation of its CEO, and there is complete agreement between them on the process by which they update their beliefs. Then they should both end up with the same posterior belief about, say, the event of the stock price rising. It cannot be that both players start off bullish, both get the same information, both agree on the process, and yet after getting the information one turns bearish while the other remains bullish; they have to have the same beliefs afterwards as well. But remember, the agreement in the theorem is on the process, not on the information: the players need not be getting the same information, and that is why this theorem is not trivial.
It is not that both are seeing the exact same information; they just agree on the process by which they update their posteriors. They see the world through two different perspectives, but they update their posteriors in a manner that is common knowledge to both, and in that case the posterior probabilities themselves have to coincide. This has immense implications for automation; there is so much automation in anything involving uncertainty. Say you want to predict whether there will be traffic on the road tomorrow. We all have a certain prior belief: we look at Google Maps today, and it gives us a prior that this is the likely travel time. Then we each get some pieces of information about the likely road conditions, and we are in complete agreement about how we are going to update our beliefs. Could we now end up with different probabilities of there being traffic on the road? (This probability will decide what time we leave, and so on.) We started with the same prior and agreed on the process, but of course we got two different pieces of information. Could we have ended up with two different probabilities, given that those posteriors are common knowledge? The answer is no. So what this theorem basically tells you, and I will state it formally next time, is that disagreements can occur essentially for one of two reasons: either there is no agreement on the process of updating, or you did not agree to begin with, there was no common prior, people had some biases. It is not that trivial.
See, the information, or rather the information structure, is what contributes to the posteriors becoming common knowledge. Once they are common knowledge, it does not matter that the players are getting different information. I will just write the theorem and we will end the class here. The theorem is this. Let A be an event and let ω* ∈ Y be a state of the world. If the event that player 1 ascribes probability q1 to A and the event that player 2 ascribes probability q2 to A are both common knowledge at ω*, then q1 = q2. That is the theorem. We will prove this next time.
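For reference, here is the same statement written out in the notation of this lecture (my rendering, with F_i denoting player i's partition and P the common prior, both as defined above):

```latex
% Agreement theorem, in the lecture's notation.
\begin{theorem}
Let $A \subseteq Y$ be an event and $\omega^* \in Y$ a state of the world.
For $i = 1, 2$, let
$E_i = \{\, \omega \in Y : P(A \mid F_i(\omega)) = q_i \,\}$.
If $E_1$ and $E_2$ are common knowledge at $\omega^*$, then $q_1 = q_2$.
\end{theorem}
```

In the worked example, E_1 and E_2 were both all of Y with q_1 = q_2 = 2/3, which is consistent with the theorem's conclusion.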