So, last time, you recall, we were talking about the master equation for Markov processes. Just to refresh your memory: consider the conditional probability $P(k, t \mid j)$ that the system is in some state $k$ at time $t$, given that it was in state $j$ at time $0$. This set of quantities obeys a coupled set of first-order differential equations of the form

$$\frac{d}{dt} P(k, t \mid j) = \sum_{\substack{l=1 \\ l \neq k}}^{n} \big[\, w_{kl}\, P(l, t \mid j) - w_{lk}\, P(k, t \mid j) \,\big],$$

where $w_{kl}$ is the transition rate to go from state $l$ to state $k$, and the constraint $l \neq k$ excludes the diagonal term. This is what I call the master equation. The initial condition could be anything you care to specify, but since we are talking about conditional probabilities it is $P(k, 0 \mid j) = \delta_{kj}$. So the task is to solve this set of equations given that initial condition, and then you have the whole family of probabilities.

Now, notice that I rewrote this in vector form. Let $P(t)$ be the column vector with elements $P(1, t \mid j), \ldots, P(n, t \mid j)$. Then the equation becomes

$$\frac{dP(t)}{dt} = W\, P(t),$$

where, as you can easily verify, the matrix $W$ has off-diagonal elements $W_{kj} = w_{kj}$ for all $k \neq j$, and each diagonal element is minus the sum of all the other elements in that column:

$$W_{kk} = -\sum_{\substack{l=1 \\ l \neq k}}^{n} w_{lk}.$$

For $k = 1$, for example, $W_{11} = -(w_{21} + w_{31} + \cdots + w_{n1})$. So we have the picture of a matrix in which each column adds up to $0$, and therefore the determinant of $W$ is $0$. Moreover, $W$ has real elements: all the off-diagonal elements are either $0$ or positive, and the diagonal elements are all negative.

To see this, just go back to the master equation and write it out for $k = 1$, suppressing the index $j$, which is not necessary here:

$$\frac{d}{dt} P(1, t) = w_{12}\, P(2, t) + \cdots + w_{1n}\, P(n, t) - (w_{21} + w_{31} + \cdots + w_{n1})\, P(1, t).$$

The coefficient of $P(1, t)$ here is precisely $W_{11}$. After all, what does the vector equation imply? Its first row says $dP(1,t)/dt = W_{11} P(1,t) + W_{12} P(2,t) + \cdots$, and similarly for the other rows. Comparing, the off-diagonal elements of the big matrix $W$ are precisely $w_{12}, w_{13}, \ldots$ multiplying $P(2,t), P(3,t), \ldots$, and $W_{11}$ is minus the sum of all the rates out of state $1$, just writing this out explicitly. So the important thing to remember is that the off-diagonal elements of $W$ are the transition rates, and they are non-negative: either $0$ or positive. If two states are not directly connected, then that particular rate happens to be $0$.
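As a concrete illustration of this structure, here is a minimal Python sketch, with a purely hypothetical three-state rate table chosen only for illustration, that builds $W$ from the rates $w_{kl}$ and checks that each column sums to zero and that $\det W = 0$:

```python
import numpy as np

def build_W(w):
    """Build the relaxation matrix W from a rate table w, where
    w[k][l] is the transition rate from state l to state k (k != l).
    Off-diagonal elements of W are the rates themselves; each diagonal
    element is minus the sum of the other elements in its column."""
    W = np.array(w, dtype=float)
    np.fill_diagonal(W, 0.0)                 # ignore any diagonal entries in the table
    np.fill_diagonal(W, -W.sum(axis=0))      # W[k,k] = -sum_{l != k} w[l,k]
    return W

# A hypothetical 3-state rate table (numbers chosen only for illustration).
w = [[0.0, 1.0, 0.5],
     [2.0, 0.0, 1.5],
     [0.3, 0.7, 0.0]]
W = build_W(w)
print(W.sum(axis=0))       # each column sums to 0
print(np.linalg.det(W))    # so det W = 0 (up to round-off)
```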
We are making some technical assumptions here about the nature of this $W$ matrix which I have not spelled out; we will talk about them later, when I give you special cases. We are talking about the generic case: a Markov process in which you can reach any state from any state, no matter where you start, and there are no subsets of states which are closed among themselves. So we take the general case where all the $w$'s exist and transitions are possible. Now, you must remember it is entirely possible that you cannot go from, say, state 6 to state 8 directly, because that transition rate is zero; but given enough time you may go from 6 to some other state 5, and then from 5 to 8, and so on. That possibility exists, and that is really what happens most of the time, because we will look at cases where you may be able to jump only from a given state to the neighboring states on either side, and yet over a period of time the system, wherever it starts, will reach any other state.

So we have this $W$ matrix, and we wrote down a formal solution, which is just the exponential of this matrix: $dP(t)/dt = W P(t)$ implies

$$P(t) = e^{Wt}\, P(0),$$

whatever $P(0)$ is. In the case we are talking about, $P(0)$ is a column vector whose elements are all $0$ except for the $j$th row, which is $1$, because you start in state $j$. So the task reduces to finding $e^{Wt}$. The question was: what sort of time dependence can we generically expect? The answer is that you expect things to decay in time to some equilibrium distribution, and we already have some idea of what that distribution is. Remember that $\det W = 0$ implies that $\lambda = 0$ is an eigenvalue of $W$. You can actually prove more than this: generically, $0$ is a simple eigenvalue, not a repeated one; moreover it has a right eigenvector which is the stationary probability distribution, and in the simplest cases we are looking at it is a unique vector. What is important, when you want to identify the elements of this eigenvector as probabilities, is to make sure that the elements cannot be negative.
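To make the formal solution concrete, here is a short sketch, reusing the hypothetical three-state $W$ from the previous snippet, that propagates $P(t) = e^{Wt} P(0)$ with SciPy's matrix exponential; you can watch the distribution relax while staying normalized:

```python
import numpy as np
from scipy.linalg import expm

# The hypothetical 3-state W built in the previous sketch.
W = np.array([[-2.3,  1.0,  0.5],
              [ 2.0, -1.7,  1.5],
              [ 0.3,  0.7, -2.0]])

p0 = np.array([1.0, 0.0, 0.0])   # start in state j = 1: the delta_{kj} initial condition
for t in (0.0, 0.5, 5.0, 50.0):
    p = expm(W * t) @ p0         # formal solution p(t) = exp(W t) p(0)
    print(t, p, p.sum())         # p(t) stays normalized and relaxes to a limit
```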
So we need a guarantee that this eigenvector has elements which are not only real but also non-negative, and there are theorems which assure us of that. One of them was the statement I made that the eigenvalues of $W$ can never have positive real part, and the reason is the famous Gershgorin disc (or circle) theorem. What it says is: give me an $n \times n$ matrix $M$; take its diagonal elements $M_{kk}$ and mark those points in the complex plane; for each row, take the sum of the moduli of all the off-diagonal elements in that row and draw a circle about the corresponding diagonal point with that radius. Then the eigenvalues are guaranteed to lie inside or on these discs.

This is not a hard theorem to prove; here is a simple way of doing it. Suppose $\lambda$ is an eigenvalue, so $M u = \lambda u$ for some eigenvector $u$ with components $u_1, u_2, \ldots, u_n$, and let us suppose the $k$th component is the largest in magnitude of all of them. Writing out the $k$th component of the eigenvalue equation,

$$\sum_{j=1}^{n} M_{kj}\, u_j = \lambda\, u_k.$$

Take out the $j = k$ term and put it on the right-hand side:

$$\sum_{j \neq k} M_{kj}\, u_j = (\lambda - M_{kk})\, u_k.$$

Now take the modulus on both sides and divide by $|u_k|$:

$$|\lambda - M_{kk}| = \Big|\sum_{j \neq k} M_{kj}\, \frac{u_j}{u_k}\Big| \leq \sum_{j \neq k} |M_{kj}|\, \Big|\frac{u_j}{u_k}\Big| \leq \sum_{j \neq k} |M_{kj}|,$$

where we took the modulus inside the sum, which can only increase it, and the ratios $|u_j/u_k|$ are guaranteed to be less than or equal to $1$ because we took $u_k$ to be the largest. That is the disc: the eigenvalue lies inside or on the circle centred at $M_{kk}$ whose radius is given by this sum. So all the eigenvalues of $M$ lie in or on the union of all these Gershgorin discs. There are further refinements to this: for example, if one of the discs is disjoint from the others, you can show that it definitely contains at least one eigenvalue, and so on.
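Here is a small numerical check of the theorem, a sketch using an arbitrary made-up matrix: compute the disc centres and radii from the rows of $M$ and verify that every eigenvalue lands in the union of the discs:

```python
import numpy as np

def gershgorin_discs(M):
    """(center, radius) of each Gershgorin disc: center M[k,k], radius
    the sum of the moduli of the off-diagonal elements in row k."""
    centers = np.diag(M)
    radii = np.abs(M).sum(axis=1) - np.abs(centers)
    return list(zip(centers, radii))

M = np.array([[ 4.0,  1.0, 0.5],
              [ 0.2, -3.0, 0.1],
              [ 1.0,  0.0, 2.0]])   # an arbitrary test matrix
discs = gershgorin_discs(M)
for lam in np.linalg.eigvals(M):
    # every eigenvalue lies inside or on at least one disc
    assert any(abs(lam - c) <= r + 1e-12 for c, r in discs)
```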
Now, what we need is just one part of this theorem, applied to our matrix $W$. (Since $W$ and its transpose have the same eigenvalues, we are free to apply the theorem to the columns of $W$ rather than its rows.) Remember that the off-diagonal elements $W_{jk} = w_{jk}$ are all non-negative, and the diagonal element is $W_{kk} = -\sum_{j \neq k} w_{jk}$. That immediately tells us that the centre of each disc sits on the negative real axis at $-|W_{kk}|$, and the radius of the disc is $\sum_{j \neq k} w_{jk} = |W_{kk}|$, so each disc touches the origin and lies entirely in the left half of the eigenvalue plane. Therefore $\mathrm{Re}\,\lambda \leq 0$ for every eigenvalue. Each eigenvalue mode goes like $e^{\lambda t}$, which can at most decay to zero, except for the zero eigenvalue, where there is no $t$ dependence at all. Therefore $W$ is a relaxation matrix: things relax to the equilibrium state.

And what is that given by? The stationary distribution corresponding to $\lambda = 0$ is not difficult to write down: it satisfies $dP^{\mathrm{st}}/dt = 0 = W P^{\mathrm{st}}$, so it is the right eigenvector corresponding to the zero eigenvalue of $W$. Since each column of $W$ adds up to $0$, it is immediately clear that the uniform row vector $(1, 1, \ldots, 1)$ is a left eigenvector. But this is not a symmetric matrix, so the left and right eigenvectors need not be the same in general, and it is the right eigenvector that is the stationary distribution. Is that clear?

Now, how do we find it? Write $P^{\mathrm{st}} = (p_1, p_2, \ldots, p_n)^{T}$; it is stationary, so I will not put in a $t$ index at all, since it is the single-time probability, independent of $t$. The condition $W P^{\mathrm{st}} = 0$ reads

$$\sum_{\substack{l=1 \\ l \neq k}}^{n} \big[\, w_{kl}\, p_l - w_{lk}\, p_k \,\big] = 0 \quad \text{for each } k.$$

Again, it is a coupled set of simultaneous equations, linear in all these quantities. Is there a guarantee that this has a non-trivial solution? It is a set of homogeneous equations, and a set of simultaneous homogeneous equations has a non-trivial solution when the determinant is $0$, which indeed it is: we know $\det W = 0$. So it has a solution, in fact a unique one, and you find it by solving this set of equations, whatever it may be. So this problem reduces to an algebraic problem, nothing more.

Now, there are some important cases where you can write the solution down explicitly. Remember, it is the sum over $l$ that is $0$. But you could ask: what happens if each term inside the sum is separately $0$? Then you get a very special kind of distribution, corresponding to saying that the sum vanishes term by term, and that is called detailed balance. Since it is so important in physical applications, let me write it down. Detailed balance means

$$w_{kl}\, p_l = w_{lk}\, p_k \quad \text{for every pair } (k, l),$$

pairwise. And of course, if that condition holds, then it immediately gives you the stationary distribution without doing much solving at all.
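Before specializing to detailed balance, here is what the general algebraic problem looks like numerically, a sketch reusing the same hypothetical three-state $W$: extract the right eigenvector of $W$ for the eigenvalue closest to $0$ and normalize it to a probability distribution:

```python
import numpy as np

# The same hypothetical 3-state W as before.
W = np.array([[-2.3,  1.0,  0.5],
              [ 2.0, -1.7,  1.5],
              [ 0.3,  0.7, -2.0]])

vals, vecs = np.linalg.eig(W)
k = np.argmin(np.abs(vals))      # the eigenvalue closest to 0
p_st = np.real(vecs[:, k])
p_st = p_st / p_st.sum()         # normalize the right eigenvector to sum to 1
assert np.all(p_st >= -1e-9)     # elements come out non-negative, as promised
print(p_st, W @ p_st)            # W p_st vanishes up to round-off
```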
What is detailed balance actually saying? It says: wait a long time, get to the stationary distribution, and let $p_k$ be the probability that you are in state $k$. Then $p_k$ multiplied by the transition rate from $k$ to $l$ must equal the same thing in the reverse order: here is the initial state, here is the final state, and if you interchange the two you get exactly the other side. So the weighted flow of probability between each pair of states is exactly the same both ways. Now, this would be just a curiosity, except for the fact that physical systems in thermodynamic equilibrium satisfy detailed balance, provided the underlying dynamics has a property called time-reversal invariance. I will not get further into the details of this at the moment, but it is a very, very important special case: there are lots of physical systems, under very general conditions, which satisfy detailed balance, in which case this relation is immediately true.

And what does it give us? It gives us the ratios of all these probabilities. So you choose any one of them, say $p_1$; then you can find $p_2, p_3, \ldots$ in terms of $p_1$. And how do you determine $p_1$ itself? You normalize the probability; remember, you definitely need

$$\sum_{k=1}^{n} p_k = 1.$$

So we know all the ratios $p_2/p_1, p_3/p_1, \ldots, p_n/p_1$, and we know $p_1$ itself from that set of equations, provided you have the normalization condition: there has got to be one inhomogeneous condition, which is what this is, and with it you can find all of them. I leave it to you as an exercise to show that in the general case, for arbitrary $n$, when the detailed balance condition is valid, the solution is

$$p_k = \Bigg[\, 1 + \sum_{\substack{l=1 \\ l \neq k}}^{n} \frac{w_{lk}}{w_{kl}} \,\Bigg]^{-1},$$

a well-known form, worth memorizing. So there is a very simple, essentially algebraic formula for the stationary probability distribution of a stationary Markov process in the case when detailed balance is valid.

What happens if the rates are all the same, so the rate from $l$ to $k$ is the same as the rate from $k$ to $l$? What do you think would happen? Now you are saying not only detailed balance but also that each forward rate equals the corresponding backward rate, and that tells us the distribution must be uniform: there are $n$ states, you are in the steady state, and each of them is equally probable, so the probability of any one of them is $1/n$. And indeed that is what the formula says: if the rates are all equal, each ratio cancels to give $1$; the sum runs over every index except $k$, so it contributes $n - 1$; add it to $1$ and you get $1/n$. But when you do not have that extra condition of equal transition rates, then of course you have a non-trivial solution in terms of all the transition rates.

Now let us look at the simplest case, the simplest possible case: just $2$ states. This is such a famous case, and it occurs in so many places, that it has got a special name to itself: the (stationary) dichotomous Markov process, corresponding to $n = 2$, just two possible states.
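As a sketch of that exercise, the closed form can be checked numerically. Since generic rates will not satisfy detailed balance, the snippet below manufactures rates that do, by picking a target distribution $p$ and a symmetric matrix $s$ and setting $w_{kl} = s_{kl}\, p_k$; all the numbers are hypothetical. The formula then recovers the target:

```python
import numpy as np

def p_detailed_balance(w):
    """Stationary distribution under detailed balance:
    p_k = 1 / (1 + sum_{l != k} w[l,k] / w[k,l])."""
    n = w.shape[0]
    return np.array([1.0 / (1.0 + sum(w[l, k] / w[k, l]
                                      for l in range(n) if l != k))
                     for k in range(n)])

rng = np.random.default_rng(0)
p_target = np.array([0.5, 0.3, 0.2])
s = rng.uniform(0.5, 2.0, (3, 3))
s = (s + s.T) / 2                  # symmetric base rates
w = s * p_target[:, None]          # w[k,l] = s[k,l] p_k  =>  w[k,l] p_l = w[l,k] p_k
print(p_detailed_balance(w))       # recovers p_target
```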
We can write down the solution in this case, because the stationary distribution is utterly trivial: the sum over $l$, for any given $k$, runs over only one other value. If I set $k = 1$ and sum over $l$, there is only the value $2$, and I get

$$w_{12}\, p_2 - w_{21}\, p_1 = 0,$$

because there is nothing else to sum over; for $k = 2$ you get essentially the same equation and nothing more. Together with the normalization $p_1 + p_2 = 1$, so that $p_2 = 1 - p_1$, this gives explicit solutions. It says $(w_{21}/w_{12} + 1)\, p_1 = 1$, or

$$p_1 = \frac{w_{12}}{w_{12} + w_{21}}, \qquad p_2 = \frac{w_{21}}{w_{12} + w_{21}},$$

where $w_{12}$ is the transition rate to go from state $2$ to state $1$, and the second formula follows by symmetry. That is it: those are exact solutions for the stationary distribution. Of course, we still have to write down the time-dependent solution $P(t)$; so far we have only solved $W P^{\mathrm{st}} = 0$.

But now think about it a little, and you see immediately how physical this solution is, because what we have is a process which, as a function of time, takes on two possible states. We need a symbol for the values of this random variable, which can take two possible values: let us call those values the constants $c_1$ and $c_2$, say with $c_1$ sitting at one level and $c_2$ at another. Then the process starts in, say, state $1$, goes on for a long time, so that you can put the origin of time at any arbitrary instant, then abruptly makes a jump to $c_2$, goes on for a while, jumps back to $c_1$, and so on.
So this is $c_1$ and that is $c_2$: it is a two-state process, in each state the value of the variable is either $c_1$ or $c_2$, and it keeps switching back and forth randomly between these two states. You can easily imagine how many possible situations this will model: anything which has got two possible states, on and off, a passive state and an active state, you name it. A huge number of applications where this simplest possible model will work.

Now, what is the average rate at which it switches from one state to the other? We have already said that $w_{12}$ is the rate at which switching from $2$ to $1$ happens; let us call it $\lambda_2$, the rate at which the system, if it is in state $2$, switches to state $1$. Similarly, $w_{21} \equiv \lambda_1$ is the rate at which it switches from $1$ to $2$. Then, according to what we have,

$$p_1 = \frac{\lambda_2}{\lambda_1 + \lambda_2}, \qquad p_2 = \frac{\lambda_1}{\lambda_1 + \lambda_2}.$$

Now, what is the physical meaning of $\lambda_1$ and $\lambda_2$? If you look at the picture, you see that the process, having started somewhere, stays for a random time interval in state $1$, then another random time interval on its next visit, and so on, and the average over all these is the mean residence time in state $1$. So ask: each time it reaches state $1$, what is the average time it spends there? Call it $\tau_1$, the mean residence time in state $1$; similarly, let the average duration of a stay in state $2$ be $\tau_2$. Then clearly the switching rate is just the reciprocal of the residence time:

$$\tau_1 = \frac{1}{\lambda_1}, \qquad \tau_2 = \frac{1}{\lambda_2},$$

and therefore

$$p_1 = \frac{\tau_1}{\tau_1 + \tau_2}, \qquad p_2 = \frac{\tau_2}{\tau_1 + \tau_2}.$$

And that is exactly what you would expect physically, because if you look at the process as a function of time, put your finger on one particular instant, and ask for the stationary probability that it is in state $1$ or state $2$, then the probability that it is in state $1$ is the fraction of time it spends in state $1$ over a long period, and similarly for state $2$; and these fractions are precisely the mean residence time in each state divided by the sum of the residence times. So we have a simple physical interpretation of what is meant by the equilibrium distribution of a dichotomous Markov process: it is just the mean residence time in one of the states divided by the sum of the residence times in both states. So it is physically very clear what is happening. We still have to solve the time-dependent problem, which we have not dealt with, but the stationary distribution is completely clear in this particular case.
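The residence-time interpretation is easy to test by simulation. Here is a minimal sketch, with arbitrary illustrative rates $\lambda_1 = 2$ and $\lambda_2 = 0.5$, that alternates exponentially distributed stays and compares the empirical occupancy of state 1 with $\tau_1/(\tau_1 + \tau_2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2 = 2.0, 0.5              # switching rates out of states 1 and 2 (illustrative)
tau1, tau2 = 1/lam1, 1/lam2        # mean residence times

T, state = 0.0, 1
time_in = {1: 0.0, 2: 0.0}
while T < 1e5:
    rate = lam1 if state == 1 else lam2
    dt = rng.exponential(1/rate)   # random residence time in the current state
    time_in[state] += dt
    T += dt
    state = 2 if state == 1 else 1 # switch to the other state

print(time_in[1] / T, tau1 / (tau1 + tau2))   # these should agree closely
```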
Now, even if you have a three-state process, the formulas you write down in general, without detailed balance, are not so trivial; they are quite involved, and they get algebraically more and more complicated, though with something like detailed balance they simplify enormously. But the most popular model is the dichotomous Markov process, and what remains is to ask what happens as a function of time, when you put in the time dependences everywhere. We need to go back and solve the time-dependent problem: the solution is $P(t) = e^{Wt} P(0)$, and we need to know $W$. Here

$$W = \begin{pmatrix} -\lambda_1 & \lambda_2 \\ \lambda_1 & -\lambda_2 \end{pmatrix},$$

since $W_{12}$ is the rate to go from $2$ to $1$, which is $\lambda_2$, similarly $W_{21} = \lambda_1$, and the diagonal elements are minus those rates. That is the transition matrix, and we need to find its exponential: you square it, you cube it, and so on, and work out the exponential. But matters are made much simpler by noticing that

$$W^2 = \begin{pmatrix} \lambda_1(\lambda_1 + \lambda_2) & -\lambda_2(\lambda_1 + \lambda_2) \\ -\lambda_1(\lambda_1 + \lambda_2) & \lambda_2(\lambda_1 + \lambda_2) \end{pmatrix} = -(\lambda_1 + \lambda_2)\, W,$$

because each entry carries a common factor of $(\lambda_1 + \lambda_2)$, and what is left is just minus the matrix we started with. Now, the rate to go from $1$ to $2$ is $\lambda_1$, the rate from $2$ to $1$ is $\lambda_2$, and the average rate is half the sum, so let us define the mean switching rate $\lambda = (\lambda_1 + \lambda_2)/2$. Then

$$W^2 = -2\lambda\, W,$$

and that of course immediately makes the problem of finding the exponential trivial, because $W^3$ is proportional to $W$ once again, and so on. So

$$e^{Wt} = I + Wt + \frac{t^2}{2!} W^2 + \frac{t^3}{3!} W^3 + \cdots = I + W \left[ t - \frac{2\lambda\, t^2}{2!} + \frac{(2\lambda)^2 t^3}{3!} - \cdots \right] = I + W\, \frac{1 - e^{-2\lambda t}}{2\lambda},$$

where in the last step we took out a factor of $-1/(2\lambda)$ so that the bracket becomes the series for $e^{-2\lambda t} - 1$, and the overall minus sign cancels. Now we can write the full probability distribution down completely; I leave that to you as an exercise. Putting in the values we have for this process, you should be able to write down $P(c_1, t \mid c_1)$ and $P(c_2, t \mid c_1)$, given that you started in $c_1$.
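That closed form is easy to verify against a direct matrix exponential; a sketch with the same illustrative rates as before:

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 2.0, 0.5
W = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])
two_lam = lam1 + lam2          # this is 2*lambda, twice the mean switching rate

for t in (0.1, 1.0, 10.0):
    closed_form = np.eye(2) + W * (1 - np.exp(-two_lam * t)) / two_lam
    assert np.allclose(closed_form, expm(W * t))
```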
How would you find these? The column vector $\big(P(c_1, t \mid c_1),\, P(c_2, t \mid c_1)\big)^{T}$ is $e^{Wt}$ acting on the initial vector, and if you start in $c_1$ the initial vector is $(1, 0)^{T}$. So all you have to do is apply this $2 \times 2$ matrix, since we know $W$, to that vector and read off the two numbers. Similarly, $\big(P(c_1, t \mid c_2),\, P(c_2, t \mid c_2)\big)^{T} = e^{Wt}\, (0, 1)^{T}$, and you can write the full matrix out.

By the way, just as a check: if you let $t \to \infty$, we had better recover the values we already had for the equilibrium distribution. What happens to $P(t)$? It becomes the stationary distribution $(p_1, p_2)^{T}$, so $(p_1, p_2)^{T}$ should be the limit of $e^{Wt}$ acting on the initial state, whatever that state is; whether it is $1$ or $2$ we do not care. And what is the limit of $e^{Wt}$? You can read it off from the formula:

$$\lim_{t \to \infty} e^{Wt} = I + \frac{W}{2\lambda},$$

because $e^{-2\lambda t}$ goes away. Now, what is $W$ and what is $1/(2\lambda)$? We have

$$I + \frac{W}{2\lambda} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \frac{1}{\lambda_1 + \lambda_2} \begin{pmatrix} -\lambda_1 & \lambda_2 \\ \lambda_1 & -\lambda_2 \end{pmatrix} = \frac{1}{\lambda_1 + \lambda_2} \begin{pmatrix} \lambda_2 & \lambda_2 \\ \lambda_1 & \lambda_1 \end{pmatrix},$$

acting on whatever initial state you care to put down. Now, one of our articles of faith is that as time increases the initial state should not matter: whatever the initial distribution, I should still get the same equilibrium or stationary distribution, whether I apply this to $(0, 1)^{T}$ or $(1, 0)^{T}$; it should not matter at all, otherwise I am in trouble. In fact, I could have had a mixture: suppose the initial state had been $(a, b)^{T}$, non-negative numbers with $a + b = 1$. Apply the limit matrix to it: the first component is $(\lambda_2 a + \lambda_2 b)/(\lambda_1 + \lambda_2)$, the $\lambda_2$ comes out, and $a + b = 1$ goes away, so this indeed gives

$$p_1 = \frac{\lambda_2}{\lambda_1 + \lambda_2}, \qquad p_2 = \frac{\lambda_1}{\lambda_1 + \lambda_2},$$

independent of $a$ (and of $b = 1 - a$). That corroborates the requirement that the stationary distribution should not depend on the initial distribution at all; it does cancel out, which is why the same element appears in both places along each row, and this is exactly the equilibrium distribution we discovered earlier.

But you can also write the time-dependent solution out explicitly; I leave you to do that little bit of algebra. What does the decay to equilibrium go like? There is a constant part in these probabilities, which gives you the stationary part, and then there is a part which decays. There is only one other eigenvalue, and what is it? It is $-2\lambda$. So you expect the correlation time of this process to be $(2\lambda)^{-1}$, and that corroborates what we are going to find when I define the correlation time explicitly in a minute.
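And the $t \to \infty$ check is a one-liner; a self-contained sketch with the same illustrative rates, showing that the limit matrix $I + W/(2\lambda)$ has identical columns, so any normalized initial mixture $(a, 1-a)$ is sent to the same stationary vector:

```python
import numpy as np

lam1, lam2 = 2.0, 0.5
W = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])
two_lam = lam1 + lam2

limit = np.eye(2) + W / two_lam        # t -> infinity limit of exp(W t)
print(limit)                           # both columns are (lam2, lam1)/(lam1 + lam2)

a = 0.7                                # an arbitrary initial mixture (a, 1-a)
print(limit @ np.array([a, 1 - a]))    # result is independent of a
```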
You see, it is this time scale that is the correlation time in a very specific way: we will see that the autocorrelation dies down with this exponent. And what is $(2\lambda)^{-1}$ equal to? It is $1/(\lambda_1 + \lambda_2)$, which in turn, remembering that $\lambda_1 = 1/\tau_1$ and $\lambda_2 = 1/\tau_2$, is

$$\frac{1}{2\lambda} = \frac{\tau_1 \tau_2}{\tau_1 + \tau_2}.$$

And what sort of mean is that? Up to a factor of $2$, it is the harmonic mean of the individual residence times. So we have this very simple relationship: in a dichotomous Markov process with mean residence times $\tau_1$ and $\tau_2$ in the two states, the correlation time of the process, the time over which it loses memory in some sense, is (half) the harmonic mean of these two individual times.

Now let us define correlations. Some things can be written down immediately, and then, if time permits, we will do the rest, or I will come back to it a little later; at least in equilibrium we can write the answers down. So I ask: what is the mean value of this process? Let us call the process $X$. The sample space of $X$ comprises the two values $c_1$ and $c_2$, and the system switches randomly back and forth between these two values with mean residence times $\tau_1$ and $\tau_2$ respectively. So what do you expect the average value of $X$ to be? With respect to the stationary distribution, by definition,

$$\langle X \rangle_{\mathrm{st}} = c_1\, p_1 + c_2\, p_2,$$

because the stationary, or $t \to \infty$, average has got to be this: $p_1, p_2$ are the stationary probabilities, $c_1, c_2$ the corresponding values, and this is the weighted average over a normalized probability. Not surprisingly, then,

$$\langle X \rangle_{\mathrm{st}} = \frac{c_1 \tau_1 + c_2 \tau_2}{\tau_1 + \tau_2} = \frac{c_1 \lambda_2 + c_2 \lambda_1}{\lambda_1 + \lambda_2}.$$

What is the mean square going to be? The same expression, but with squares:

$$\langle X^2 \rangle_{\mathrm{st}} = \frac{c_1^2\, \tau_1 + c_2^2\, \tau_2}{\tau_1 + \tau_2}.$$

And what is the variance going to be? With $\delta X = X - \langle X \rangle_{\mathrm{st}}$,

$$\langle (\delta X)^2 \rangle_{\mathrm{st}} = \langle X^2 \rangle_{\mathrm{st}} - \langle X \rangle_{\mathrm{st}}^2 = \frac{(c_1^2 \tau_1 + c_2^2 \tau_2)(\tau_1 + \tau_2) - (c_1 \tau_1 + c_2 \tau_2)^2}{(\tau_1 + \tau_2)^2}.$$

Some terms definitely cancel: the $c_1^2 \tau_1^2$ cancels, the $c_2^2 \tau_2^2$ cancels, and you are left with $c_1^2 \tau_1 \tau_2 + c_2^2 \tau_1 \tau_2 - 2 c_1 c_2 \tau_1 \tau_2$ in the numerator, so we can write down a very simple formula:

$$\langle (\delta X)^2 \rangle_{\mathrm{st}} = \frac{(c_1 - c_2)^2\, \tau_1 \tau_2}{(\tau_1 + \tau_2)^2}.$$

That is it. And what is $\tau_1 \tau_2 / (\tau_1 + \tau_2)$? It is the correlation time, so there is a direct connection between the two. What happens if $c_2 = -c_1$, so that the process switches between, say, $+1$ and $-1$ or something like that? Then the formula simplifies; that is called the symmetric dichotomous process, and we will talk about it a little while later.
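A quick numerical sanity check of the variance formula, a sketch with hypothetical values $c_1, c_2$ and illustrative residence times:

```python
import numpy as np

c1, c2 = 1.0, -0.3                 # hypothetical values of the two states
tau1, tau2 = 0.5, 2.0              # illustrative mean residence times

mean = (c1*tau1 + c2*tau2) / (tau1 + tau2)
msq  = (c1**2*tau1 + c2**2*tau2) / (tau1 + tau2)
print(msq - mean**2)                                   # variance from first principles
print((c1 - c2)**2 * tau1 * tau2 / (tau1 + tau2)**2)   # the closed form: they agree
```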
But what happens if there is time dependence? So the next question to ask is about the autocorrelation, the generalization of the variance, and I will do that tomorrow. We need to generalize this to

$$\langle \delta X(t_1)\, \delta X(t_2) \rangle,$$

for any two instants $t_1$ and $t_2$; that generalizes the variance. But this is a stationary process, so this quantity must be a function of $t_2 - t_1$ alone, so I might as well take $\delta X$ at time $0$ and call the other time just $t$. What would this be? If $t = 0$ it should reduce to the variance, so it is the variance times something or other, and you expect the memory to drop exponentially as a function of time, with the correlation time $(2\lambda)^{-1}$. So what do you expect the formula to be in general? It is

$$\langle \delta X(0)\, \delta X(t) \rangle = \langle (\delta X)^2 \rangle_{\mathrm{st}}\; e^{-2\lambda t}.$$

So the dichotomous Markov process is exponentially correlated. We will prove this; we have to define the correlation time and the autocorrelation more precisely, and we will do that and take it from this point tomorrow.
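The proof is for tomorrow, but the claimed exponential decay can already be checked numerically. Here is a sketch, with all parameter values illustrative, that samples a long dichotomous trajectory on a uniform grid and compares the empirical autocorrelation at a few lags with $\langle (\delta X)^2 \rangle\, e^{-2\lambda t}$:

```python
import numpy as np

rng = np.random.default_rng(2)
c = {1: 1.0, 2: -0.3}              # hypothetical state values
lam = {1: 2.0, 2: 0.5}             # illustrative switching rates
dt, n = 0.02, 500_000              # grid spacing and number of samples

# Sample X on a uniform time grid by simulating exponential residence times.
x = np.empty(n)
state = 1
t_next = rng.exponential(1/lam[state])
for i in range(n):
    while i * dt >= t_next:        # process may switch between grid points
        state = 2 if state == 1 else 1
        t_next += rng.exponential(1/lam[state])
    x[i] = c[state]

dx = x - x.mean()
two_lam = lam[1] + lam[2]
for k in (0, 10, 20, 50):          # lags t = k*dt
    empirical = np.mean(dx[:n - k] * dx[k:])
    predicted = dx.var() * np.exp(-two_lam * k * dt)
    print(k * dt, empirical, predicted)   # empirical ~ variance * exp(-2 lambda t)
```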