Now, let us talk about identifying visits to states. When we have a Markov chain, it keeps moving through different states, and I may be interested in questions like: when is the first time it visits a particular state, and on average how much time does it spend there? For example, on average how many times does my share price cross a certain level, and what is the distribution of such counts? For that we have the notions of hitting times and recurrence. Take a state i and a state j; starting from state i, I am interested in the first time the chain hits state j. I am going to define f_ij(n) as the probability that X_1 ≠ j, X_2 ≠ j, all the way up to X_{n-1} ≠ j, and exactly X_n = j, given X_0 = i. So what I have done here: starting from i, I am interested in hitting state j only at the n-th step and not before; I am assuming i and j are different for now. I could set n to be anything: n = 1 means I start from state i and go to state j in one step, or n can be any positive integer. This expression gives the probability that I reach state j for the first time in exactly n steps. Compare this with p_ij(n): that is the probability that, starting from i, the chain is in state j after n steps. The difference is that p_ij(n) allows the chain to visit j in between, while f_ij(n) does not. So f_ij(n) captures, starting from i, my first visit to state j at step n.
Now, let T_j be the time of the first visit to state j. Starting from i, the chain may reach state j in one step, or in two steps, or in three, and so on; you do not know in how many. So what is the probability that T_j is finite? I can express this in terms of the f_ij's: the chain reaches j for the first time in exactly one step, or exactly two steps, or exactly three, and these events are disjoint. Equivalently, this is the probability that there exists some n such that X_n = j, given X_0 = i; that is, starting from i, I eventually hit state j, though this does not say at which step — it could be step 1, 2, 3, 4, 5, anything. So P(T_j < ∞ | X_0 = i) is nothing but the sum of all these f_ij(n) over n ≥ 1. I am going to denote this quantity by f_ij = Σ_{n≥1} f_ij(n). Is this quantity less than 1 or equal to 1? I just showed that it is the probability of some event happening, so it is necessarily less than or equal to 1. Now suppose f_ij is strictly less than 1; what does this mean? It means there is a positive probability that, starting from state i, I will never hit state j. That is exactly why we call this topic hitting times and recurrence — we want to know whether this happens or not.
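The quantities just defined can be estimated by simulation. The sketch below is illustrative only — the three-state transition matrix in it is an arbitrary choice, not the chain from the lecture. It simulates runs of the chain, records the first step at which the target state is hit, and uses the fraction of runs that ever hit it as an estimate of f_ij = P(T_j < ∞ | X_0 = i).

```python
import random

def first_passage_time(P, i, j, max_steps=10_000):
    """Simulate one run of the chain from state i and return the first
    step n >= 1 with X_n = j, or None if j is not hit within max_steps."""
    state = i
    for n in range(1, max_steps + 1):
        # draw the next state from row `state` of the transition matrix
        state = random.choices(range(len(P)), weights=P[state])[0]
        if state == j:
            return n
    return None

# Arbitrary illustration chain: 0 and 1 have self-loops, 2 is absorbing.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
random.seed(0)

# From 0 the chain eventually hits 1, so the estimate of f_01 is near 1.
hits = [first_passage_time(P, 0, 1) for _ in range(2000)]
f_01_estimate = sum(t is not None for t in hits) / len(hits)

# From 1 there is no path back to 0, so the estimate of f_10 is 0:
# an example of f_ij strictly less than 1 (here, equal to 0).
misses = [first_passage_time(P, 1, 0, max_steps=300) for _ in range(200)]
f_10_estimate = sum(t is not None for t in misses) / len(misses)
```

Here the returned `None` runs are exactly the sample paths on which T_j exceeded the simulation horizon, which is how the case f_ij < 1 shows up in practice.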
Suppose f_ij < 1; we understand that there is a positive probability that, starting from state i, I will never hit state j. And if f_ij = 1, what does this mean? Starting from i, I will hit state j in finitely many steps with probability 1. Now consider the case f_ij = 1: does {f_ij(n)} form a probability distribution on n? We know each f_ij(n) ≥ 0 — this quantity cannot be negative, because it is a probability — and if f_ij = 1, then this (infinite-dimensional) vector of values sums to 1. So it is a probability distribution on n = 1, 2, 3, ...: the probability of the value 1 is f_ij(1), of the value 2 is f_ij(2), and of the value n is f_ij(n). Let me write this down: we call it the first passage time distribution. By the way, when f_ij < 1 we say that state j is transient starting from i; transient means it is possible that the chain never reaches that state. And when f_ij = 1, I know that starting from state i I will surely reach state j after some time, whatever time it takes. Now, instead of starting from state i and looking at hitting some other state j, let us take i = j: starting from state j, I am interested in visiting that same state again.
So in that case, if f_jj < 1, then starting from state j there is a possibility that I never come back to state j again, and if f_jj = 1, then starting from state j I will surely come back. For example, suppose the share price of a particular stock starts from a base price of 100 rupees. If that state has f_jj = 1, the price will hit 100 rupees again with probability 1; if f_jj < 1, then with some positive probability it may never hit 100 rupees again — it may, but there is also a chance that it will not. Now, when f_jj = 1 the values f_jj(n) form a distribution, and in that case I am going to define a further quantity, the expected value ν_jj = Σ_{n≥1} n · f_jj(n). Since f_jj(n) is the probability that the first time I hit state j again is at step n, ν_jj is the expected time to return to state j. So we call f_jj the probability of first return to state j — I am calling it "return" because I start from state j and look at coming back to it — and ν_jj the mean return time. As I said, take a state j; we can classify that state based on these observations. Either my f_jj = 1 or my f_jj < 1, and when f_jj = 1 two things can happen: my ν_jj can be finite, or my ν_jj can be infinite.
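The mean return time ν_jj can be computed numerically once the f_jj(n) are known. As a sketch, take a two-state chain (an assumed example, not from the lecture) that leaves state 0 with probability a and leaves state 1 with probability b. By first-step reasoning, the first return to 0 happens at step 1 by staying put, or at step n ≥ 2 by leaving, lingering at 1, and coming back:

```python
import numpy as np

# Assumed two-state chain: leave 0 with prob a, leave 1 with prob b.
a, b = 0.3, 0.5

# First-return probabilities to state 0:
#   f_00(1) = 1 - a                      (self-loop at 0)
#   f_00(n) = a * (1-b)**(n-2) * b,  n >= 2  (leave, linger at 1, return)
n = np.arange(2, 2000)
f00 = np.concatenate(([1 - a], a * (1 - b) ** (n - 2) * b))
steps = np.arange(1, 2000)

f_00 = f00.sum()              # total return probability, here 1
nu_00 = (steps * f00).sum()   # mean return time nu_00 = sum_n n * f_00(n)
```

With these numbers the series is effectively summed to convergence: f_00 comes out as 1 (the state is certain to be revisited) and ν_00 is finite, so the two quantities in the classification above can both be read off directly.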
As I said, when f_jj happens to be 1 we call state j recurrent, because the chain comes back to it with probability 1; and when f_jj < 1 it is transient — it may come back a few times and then never again. Now, when a state is recurrent, we know it is going to come back. The question is whether it will come back soon with high probability, or come back only after a long, long time. So you may be interested in the frequency of returns: does it return quite often, or does it return only rarely? If it is coming back frequently, what do you expect the mean return time to be? It will not be large: most of the mass of the first return distribution sits on small values of n, and the large values of n carry small probabilities. On the other hand, if the chain returns very rarely, the values of n carrying positive mass are very large, and because of that ν_jj may be large, or even unbounded. That is the distinction we make here: when a recurrent state returns only rarely, after a long, long time on average — ν_jj infinite — we call it null recurrent, and when ν_jj is finite we call it positive recurrent. Let us take an example: assume you are in the insurance business, and you start with some capital.
Now, how do insurance companies work? They charge you a premium, and whenever you get into trouble you go back and ask for coverage, and they pay you from the collected premiums. What insurers would love is that you pay the premium regularly and never come back to claim anything; but anyway, they are obliged to pay you when you are in trouble. What if some tragedy happens and a lot of claims come in? Then the insurance company has to reimburse a lot of money, and its balance may even go negative. So the company wants to make sure that at any given time its capital does not fall below some threshold. As you keep paying premiums, the capital keeps accumulating; small claims here and there are fine — everybody pays premiums, some people claim some coverage, and the capital may still be growing. But if at some point large claims arrive, the capital may start falling. What insurance companies want is not to go bankrupt: they do not want their capital to fall below some level — if it happens once or twice it may be fine, but if it happens often they cannot survive in that business. So the company can define states: if my capital is at this much, that is one state; at that much, another state; and one particular state it will call the danger level. If the capital falls to the danger level, the company is in danger. Now, how would the company like that state to behave? It would like visits to that state to happen rarely, if ever.
So suppose you are a planner who wants to get into this business: you have some capital, you model all of this, you come up with these distributions, and you are able to compute the expectations. Then the question is: should I enter? If the danger state turns out to be null recurrent, entering may be acceptable; but if you find that the state of getting into danger is positive recurrent, the business looks very risky and you may not want to get into it. Calamities you cannot avoid: if some big tragedy happens today, you cannot say that a tragedy will never happen again — it is not in your control. That is what f_jj = 1 means here: if a calamity happens today, you cannot guarantee that a calamity never happens in the future; the chain will visit that state again. But f_jj = 1 only says that something bad will happen again eventually; it says nothing about when. What matters to you is the rate at which it happens. If the bad events keep coming back very frequently, that is the dangerous case. Notice that the roles are flipped compared to before: here, large return times are the good case.
So when the return time n is large, the bad event happens only much later; that means null recurrence is actually good for you here, and frequent returns are the bad case — the other way around from before. And if the danger state is transient, you are in an even better position: depending on the value of f_jj, it may even say that with some positive probability — say 0.3 — you never see the bad case again. As a modeler, though, you may not want to assume that with positive probability nothing bad ever happens in the future; you should account for it: something may happen, just not that frequently. These are all design choices — how risky or how risk-averse you want to be determines how you set these values. One more thing before we compute the other distribution. Take f_ij(n); we know this is not p_ij(n), but can I compute this quantity using only my transition probability matrix (TPM)? What I need is the probability that, starting from X_0 = i, I do not visit state j until the n-th step and hit it exactly then. Verify for yourself that yes, we can compute f_ij(n) using only the transition probability matrix. So this whole characterization — whether a state is transient or recurrent, and further positive recurrent or null recurrent — can be stated explicitly just by knowing your transition probability matrix. Before moving on, let us look at an example. Say I have a simple Markov chain with three states, and I express its transition probabilities in terms of a transition diagram.
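The claim that f_ij(n) needs nothing beyond the TPM can be sketched with a first-step recursion: in the first step go to some state k ≠ j, then reach j for the first time from k in the remaining n − 1 steps. This is a minimal implementation sketch (the function name and the example matrix are my own, not from the lecture):

```python
import numpy as np

def first_passage_probs(P, n_max):
    """Compute f_ij(n) for n = 1..n_max using only the transition
    matrix P, via the first-step recursion
        f_ij(1) = p_ij,
        f_ij(n) = sum over k != j of  p_ik * f_kj(n-1),  n >= 2.
    Returns an array f with f[n-1] holding the matrix (f_ij(n))."""
    S = P.shape[0]
    f = np.zeros((n_max, S, S))
    f[0] = P                                  # f_ij(1) = p_ij
    for n in range(1, n_max):
        for j in range(S):
            k = np.arange(S) != j             # exclude j: no early visit
            f[n][:, j] = P[:, k] @ f[n - 1][k, j]
    return f

# Example with an arbitrary two-state chain: f_01(n) = 0.7**(n-1) * 0.3,
# so the partial sums of f_01(n) approach f_01 = 1.
P = np.array([[0.7, 0.3],
              [0.5, 0.5]])
f = first_passage_probs(P, 200)
```

Once the f_ij(n) are available, summing them gives f_ij, and summing n · f_jj(n) gives the mean return time ν_jj, so the whole transient/recurrent and positive/null classification is indeed computable from the TPM alone.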
So I have written it as a transition diagram; you can also express the same thing as a transition probability matrix. Now, can we compute f_01(1)? What does it say? I start from state 0 and hit state 1 in one step; that probability is 1 − p, and this is the only path. If I want to do it in two steps, it is p · (1 − p) — stay at 0 for one step, then move — and in n steps it is p^(n−1) · (1 − p). Now can you give me f_01, the sum of all of these? There are infinitely many terms: it is (1 − p) · Σ_{n=0}^∞ p^n, and if you work this out it equals 1. What does this say? If I start from state 0, I will hit state 1 with probability 1 at some point; so starting from 0, state 1 is a recurrent one. Similarly, let us now try state 2 starting from 0. What is f_02(1), the probability of going from 0 to 2 in one step? It is 0 — the only route from 0 to 2 passes through state 1, so it cannot happen in one step. What is f_02(2)? The only two-step path is 0 → 1 → 2, for the first time. And what is f_02(3)? I want to go from 0 to 2 in three steps. One possibility is to go to 1, stay there, then go to 2, with probability (1 − p) · p · (1 − p); the other is to stay at 0 in the first round and then go 0 → 1 → 2, with probability p · (1 − p) · (1 − p). Is it possible to write this down in general for n steps? Remember, we are counting first visits, so we have to work out all the possibilities.
So one possibility is to go from 0 to 1 in the first step, stay at 1 for the remaining n − 2 steps, and then move to 2; another is to spend the first n − 2 steps at 0 itself and then go 0 → 1 → 2; but there are many other combinations — spend one step at 0, move to 1, spend another step there, and so on — all of which you can write down. As you see, even for simple chains this becomes quite a combinatorial exercise: you have to look at all possible paths from one state to another that reach it for the first time. Now let us compute something else. One thing is the number of steps required to hit a particular state; another is, as the chain keeps running, how many times it revisits — or returns to — that state. Suppose I have a state j and I am interested in that. Just to clarify: the way we defined recurrence and transience, we said you start from a state and look at visits back to that same state, but in the example we took an initial state different from the target state. We found f_01 = 1, which means that starting from state 0, state 1 is recurrent in that sense. Instead of this you can also compute f_11: start from state 1 and look at coming back to it in one step, two steps, three steps, whatever. Is it easy to check? Let us compute f_00. If you start from 0, what is the probability that you come back to that state in one step? It is p. And the probability that you come back for the first time in two steps? It is 0.
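Rather than enumerating paths by hand, the first-step recursion checks these combinatorial formulas numerically. The sketch below assumes the diagram is the chain the computed probabilities suggest — self-loop p at states 0 and 1, moves 0 → 1 and 1 → 2 with probability 1 − p, state 2 absorbing — which is a reconstruction, so treat the matrix as an assumption. Counting the n − 1 ways to place the two jumps gives the closed form f_02(n) = (n − 1) · p^(n−2) · (1 − p)^2:

```python
import numpy as np

p = 0.6  # self-loop probability, chosen arbitrarily for the check

# Assumed transition matrix for the lecture's 3-state example chain.
P = np.array([[p, 1 - p, 0.0],
              [0.0, p, 1 - p],
              [0.0, 0.0, 1.0]])

n_max, S = 50, 3
f = np.zeros((n_max + 1, S, S))   # f[n] holds f_ij(n); index 0 unused
f[1] = P                          # f_ij(1) = p_ij
for n in range(2, n_max + 1):
    for j in range(S):
        k = np.arange(S) != j     # first-step recursion, avoiding j
        f[n][:, j] = P[:, k] @ f[n - 1][k, j]

# f_01(n) matches the closed form p^(n-1) * (1-p)
for n in range(1, 10):
    assert abs(f[n, 0, 1] - p**(n - 1) * (1 - p)) < 1e-12

# f_02(n) matches (n-1) * p^(n-2) * (1-p)^2
for n in range(2, 10):
    assert abs(f[n, 0, 2] - (n - 1) * p**(n - 2) * (1 - p)**2) < 1e-12

# f_00(1) = p and f_00(n) = 0 for n >= 2: once you leave 0 you never return
assert abs(f[1, 0, 0] - p) < 1e-12 and abs(f[2:, 0, 0].sum()) < 1e-12
```

So the machinery confirms what the path-counting argument gives, including the f_00 computation discussed next.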
So in that case all the higher terms are 0: you can return at step one via the self-loop, but once you leave state 0 there is no path back, so you will never return to it. Hence f_00 is simply p. Now, if p < 1, what is state 0? Transient — that is obvious: once you move away from that state, you may never come back to it again. And if p = 1, then state 0 is recurrent: you will always be there. In this way you can compute these quantities for all the states and, based on that, classify which ones are transient and which recurrent, and further classify the recurrent ones as null recurrent or positive recurrent. To compute all of this, all you need is the transition diagram, or equivalently the transition probability matrix. Now look at the next quantity. The indicator 1{X_n = j} is 1 if the n-th state is j, so summing these indicators over n counts the number of times the chain hits state j over the entire process. That is why M_j = Σ_{n≥1} 1{X_n = j} is called the number of returns to state j, and I may be interested in computing the expected value of M_j starting from a particular state i. What is this going to be? Can anybody see? Notice that M_j is a limit: assuming the limit exists, M_j = lim_{N→∞} Σ_{n=1}^N 1{X_n = j}, so I can write the expectation as E[lim_{N→∞} Σ_{n=1}^N 1{X_n = j} | X_0 = i]. Now I am faced with a question: I want to interchange this expectation and this limit — can I do so here, in this example? Let us call the partial sum Y_N.
So I have this sequence of Y_N's and its limit as N → ∞, and I want to interchange limit and expectation. I do not know in advance in which sense, if any, the sequence Y_N converges; all I want to check is whether the interchange is valid, and if so, by which result. Can we apply the monotone convergence theorem here? To interchange limit and expectation, the results we have studied for a sequence of random variables are the bounded convergence theorem, the dominated convergence theorem, and the monotone convergence theorem. To apply the bounded and dominated convergence theorems we needed certain hypotheses, and in particular we needed the sequence to converge to some limit in probability — we needed to know that it converges in some sense. What about the last one, the monotone convergence theorem? To apply it we do not need to know the limit of the sequence; the hypothesis is that the sequence is non-negative and, for each sample point, monotonically increasing. Is that happening here? Yes: Y_N is a sum of indicators, and indicators are non-negative, so for a given ω the partial sums can only increase as N increases. So I can apply the monotone convergence theorem and interchange limit and expectation. But then each Y_N is only a finite sum, so I can further interchange the expectation with the finite sum — given X_0 = i — to get Σ_{n=1}^N E[1{X_n = j} | X_0 = i]. And what is the expectation of the indicator that X_n = j? It is P(X_n = j | X_0 = i), which by our definition is p_ij(n). Taking the limit, I can write the whole thing as Σ_{n=1}^∞ p_ij(n).
So my mean number of visits to state j starting from i can be expressed simply as Σ_{n=1}^∞ p_ij(n). Let us stop here for this class. What we have just done is write down the expected value of M_j; in the next class we will also try to derive the distribution of M_j itself. M_j is a random variable here, taking values 0, 1, 2, 3, ... up to infinity; how to find its distribution we will see in the next class.
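The formula E[M_j | X_0 = i] = Σ_{n≥1} p_ij(n) can be approximated directly, since p_ij(n) is the (i, j) entry of the matrix power P^n. A minimal sketch (function name and example chain are my own choices):

```python
import numpy as np

def expected_visits(P, i, j, n_max):
    """Approximate E[M_j | X_0 = i] = sum_{n>=1} p_ij(n) by summing
    the n-step transition probabilities up to n_max."""
    total = 0.0
    Pn = np.eye(P.shape[0])
    for _ in range(n_max):
        Pn = Pn @ P        # after this line, Pn = P^n
        total += Pn[i, j]  # p_ij(n), the (i, j) entry of P^n
    return total

# Arbitrary example: self-loop 0.5 at state 0, state 1 absorbing.
# From 0, the number of later visits to 0 is geometric and the series
# sums to 1; the visits to the absorbing state 1 grow without bound.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])
```

Whether this series converges or diverges is exactly the transient/recurrent distinction: for a transient state the expected number of visits is finite, while for a recurrent state the sum Σ_n p_jj(n) diverges.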