Right, so we started looking at the dichotomous Markov process last time, and I mentioned that the process is exponentially correlated. This is a two-state stationary Markov process in which the system flips between two values or states, with some mean rate λ1 to go from 1 to 2 and λ2 to go from 2 to 1, and we discovered that in general we could write down explicit expressions for the conditional probabilities themselves, for the system starting in either state 1 or 2 and ending in state 1 or 2. So if you recall, in this dichotomous process, which we call the DMP for short, the system flips between the values c1 and c2; from c1 to c2 the rate is λ1, and from c2 to c1 the rate is λ2. We discovered that in general there is a correlation time (2λ)^(-1) = 1/(λ1 + λ2), where 2λ stands for λ1 + λ2; this is just half the harmonic mean of the mean residence times in state 1 and state 2. I also mentioned that there are a huge number of potential applications of this very simple model of a random process, and we will come across one or two specific applications as we go along. One, for example, could be the following: consider a particle diffusing along a straight line, in one dimension, which moves with velocity +c in one direction or −c in the other; those could be the two states, and the particle flips randomly back and forth between them. How far does it go? What are its mean displacement and mean squared displacement like, and so on? Those are valid questions with practical applications; they are models for various kinds of random processes. Now, I mentioned something about the autocorrelation function, so let me explain what it is in general. I said it is the generalization of the variance of a stationary random process, so let us see what it actually tells us: the autocorrelation function of a random process.
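The question just raised, about a particle flipping between velocities +c and −c, can be explored numerically. Below is a minimal Python sketch of the symmetric case, with flips at mean rate λ. All parameter values are invented for illustration, and the long-time diffusive benchmark ⟨x²⟩ ≈ c²T/λ used in the comments is an assumed standard result for this process, not something derived in the lecture; the sketch just shows the mean displacement staying near zero while the mean squared displacement grows linearly in time.

```python
import random

# Particle with velocity +c or -c, flipping at mean rate lam (symmetric case).
# All parameter values are illustrative; the long-time diffusive benchmark
# <x^2> ~ c^2 T / lam is an assumed result, not derived in the lecture.
random.seed(0)
c, lam, T, trials = 1.0, 1.0, 50.0, 20_000

xs = []
for _ in range(trials):
    x, t = 0.0, 0.0
    v = c if random.random() < 0.5 else -c   # equally likely initial velocity
    while True:
        dt = random.expovariate(lam)         # exponential waiting time to the next flip
        if t + dt >= T:
            x += v * (T - t)                 # run out the clock at the current velocity
            break
        x += v * dt
        t += dt
        v = -v                               # the dichotomous flip

    xs.append(x)

mean_x = sum(xs) / trials                    # no bias, so this should be ~0
msd = sum(x * x for x in xs) / trials        # should be close to c*c*T/lam = 50
```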
So let us suppose you have a random process which takes values x1, x2, x3, and so on; the typical value is some x_j, and I call this state j. Then I ask for ⟨x(0) x(t)⟩, the average over all possible realizations of this random process. By definition, if the process starts with some value x_j and the state at time t is k, you sum over all those possibilities: you sum over j and over k, with x(0) = x_j and x(t) = x_k, and you multiply by the probability that you are in state k at time t given that you started in state j at time 0, multiplied of course by the a priori probability that you are in state j. That is, C(t) = ⟨x(0) x(t)⟩ = Σ_j Σ_k x_j x_k P(k, t | j) p(j). That is the definition of this average, and that is what the autocorrelation function is: C(t) for the variable x is defined to be this quantity. At t = 0 it of course reduces to the mean squared value. So this generalizes the variance: it says, on average, what degree of memory is implicit in this random variable. I expect it to start at some finite value, the mean squared value, and decay to zero as t goes to infinity, either monotonically or in an oscillatory fashion; this is what one would generally expect. This is provided the mean value is zero; as you know, in the definition of the variance you also have to subtract the square of the mean.
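As a concrete illustration of this double-sum definition, here is a minimal Python sketch for a hypothetical two-state discrete-time chain with values +1 and −1 (all parameter values are invented for illustration). The conditional probabilities P(k, n | j) are just entries of the n-th power of the one-step transition matrix, so the sum can be evaluated directly; for this symmetric chain the mean is zero and C(n) decays geometrically from the mean square.

```python
import numpy as np

# Hypothetical two-state chain, values +1 and -1 (illustrative numbers):
# C(n) = sum_{j,k} x_j x_k [P^n]_{j,k} p_j, with P the one-step transition
# matrix and p the stationary distribution.
p_stay = 0.8
P = np.array([[p_stay, 1 - p_stay],
              [1 - p_stay, p_stay]])     # symmetric chain, stationary dist (1/2, 1/2)
x = np.array([1.0, -1.0])                # state values; the mean is zero by symmetry
p_stat = np.array([0.5, 0.5])

def C(n):
    Pn = np.linalg.matrix_power(P, n)    # conditional probabilities after n steps
    return sum(x[j] * x[k] * Pn[j, k] * p_stat[j]
               for j in range(2) for k in range(2))

# For this chain C(n) = (2 p_stay - 1)^n: it starts at the mean square (1)
# and decays monotonically to zero, as the lecture anticipates.
vals = [C(n) for n in range(6)]
```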
So a more general thing would be to ask for ⟨δx(0) δx(t)⟩, where δx stands for the deviation of x from its mean value, at t = 0 and at time t. Since the mean value is independent of time, because it is a stationary random process, this turns out to be exactly the quantity we had, Σ_j Σ_k x_j x_k P(k, t | j) p(j), minus the square of the average value; that is not hard to show. And what is the average value? Remember it is just Σ_j x_j p(j). So this is what the autocorrelation is, and the claim is that this quantity tells us a great deal about the random process. For a stationary random process it depends only on the conditional density: what started off as a two-time conditional density has been reduced to a single time argument because of stationarity. It does not say anything about the higher joint probabilities or the higher conditional probabilities, but in a Markov process we have already seen that the conditional density says everything there is to be said about the process. So in that sense, for a Markov process the autocorrelation function gives a great deal of information, and one generally uses it as the first characterizer of a random process, to measure how much memory it has, and so on; we will see examples of this.
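This centred formula can be checked against the dichotomous process from the start of the lecture. Below is a minimal Python sketch; the rates and values are invented, and the two-state conditional probability used, P(k, t | j) = p_k + (δ_kj − p_k) e^(−(λ1+λ2)t), is the standard single-relaxation-rate solution for a two-state Markov process. Summing the four terms reproduces the single-exponential closed form quoted for the DMP.

```python
import numpy as np

# Dichotomous Markov process: rates lam1 (state 1 -> 2), lam2 (2 -> 1),
# values c1, c2 (all numbers invented for illustration).
lam1, lam2, c1, c2 = 1.3, 0.7, 2.0, -1.0
lam = lam1 + lam2                        # "2 lambda" in the lecture's notation
p_stat = {1: lam2 / lam, 2: lam1 / lam}  # stationary occupation probabilities
vals = {1: c1, 2: c2}
mean = sum(p_stat[j] * vals[j] for j in (1, 2))

def P_cond(k, j, t):
    # standard two-state conditional probability, single relaxation rate lam
    return p_stat[k] + ((1.0 if k == j else 0.0) - p_stat[k]) * np.exp(-lam * t)

def C(t):
    # centred autocorrelation <dx(0) dx(t)>, summed over both states
    return sum((vals[j] - mean) * (vals[k] - mean) * P_cond(k, j, t) * p_stat[j]
               for j in (1, 2) for k in (1, 2))

def C_closed(t):
    # the single-exponential closed form quoted for the DMP
    return lam1 * lam2 * (c1 - c2) ** 2 * np.exp(-lam * t) / lam ** 2

max_err = max(abs(C(t) - C_closed(t)) for t in np.linspace(0.0, 5.0, 11))
```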
Now, in the case of the dichotomous Markov process this quantity turned out to be a pure exponential, a single exponential. In general, of course, you might have more than one relaxation time in a process; it need not even be exponential, it could be a power-law decay or something like that. But very typically for this process you have an exponential decay, and we know the conditional probabilities explicitly for the dichotomous process, so we can put them in, do the calculation, and work out this autocorrelation explicitly. I gave a simple argument to show that C(t) has got to be of this form: for the dichotomous Markov process, C(t) = λ1 λ2 (c1 − c2)² e^(−2λt) / (λ1 + λ2)², where 2λ = λ1 + λ2. So it is exponentially correlated, and this is the reason for saying (2λ)^(-1) is the correlation time; in this case it is quite straightforward to identify this quantity with the correlation time of the process. Of course, once things get a little more complicated, evaluating a correlation time becomes less trivial, and eventually we will talk not about this quantity itself but about its Fourier transform: it is interesting to decompose it into Fourier components and ask how much of the amplitude of this Fourier transform lies in a given frequency window. That will lead us to the concept of the power spectral density of a random process; we will come back to that aspect. Right now, what I would like to do is go on to another process, a very famous one which occurs everywhere: the so-called Poisson process. It is a process in continuous time, with an integer-valued random variable, because the Poisson
distribution, as you know, refers to a non-negative integer-valued random variable, and here is a simple example. You are waiting for a bus, buses appear at random, and you ask for the distribution of the number of buses in a given time interval; it is Poisson distributed if there is no correlation between different events. Similarly, you have a sufficiently large sample of a radioactive nucleus, you start the clock now, and you ask: what is the probability that n decays occur in a given time interval t? This turns out to be Poisson distributed too. So let us ask what we know about the process, and how to derive this expression. Let me denote by p(n, t), in continuous time, the probability that n events have occurred in the time interval t. We are discussing a stationary Poisson process; in other words, the statistical properties are again independent of time altogether. I would like a certain equation for this in continuous time, and I argue as follows. I choose a sufficiently small increment of time Δt, small enough that in that increment either one event occurs or no event occurs at all. If the events occur with some mean rate λ (so λ^(-1) is a time scale), then in a time interval Δt, on average, λΔt events occur: either a single event occurs, with probability λΔt, or no event occurs, with probability 1 − λΔt. These are mutually exclusive events, and there are no other possibilities: for a given finite value of λ it is clear I can choose my Δt small enough that only one event, or no event at all, occurs in this interval Δt.
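The bookkeeping behind "either one event or none" can be made quantitative. Here is a small Python check (the rate value is invented), taking for granted the Poisson counting form quoted above for the number of events in an interval: the probability of two or more events in Δt is genuinely of order Δt², so it drops out of the balance equation as Δt → 0.

```python
import math

# Number of events in an interval dt is Poisson with mean lam*dt (form quoted
# in the lecture).  Then P(0) = 1 - lam dt + O(dt^2), P(1) = lam dt + O(dt^2),
# and P(>=2) = O(dt^2), which justifies keeping only the one-event terms.
lam = 2.0
rest_over_dt = []
for dt in (1e-2, 1e-3, 1e-4):
    mu = lam * dt
    p0 = math.exp(-mu)                          # no event in dt
    p1 = mu * math.exp(-mu)                     # exactly one event in dt
    rest_over_dt.append((1.0 - p0 - p1) / dt)   # ~ lam^2 dt / 2, vanishes with dt

# successive entries shrink by ~10x, confirming P(>=2) is O(dt^2)
ratio = rest_over_dt[1] / rest_over_dt[0]
```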
That is essentially the only assumption you need; there is one more, which is that the events are not correlated with each other: they are completely independent. So if you took the time axis and put a cross every time an event occurred, these crosses would occur completely at random, in an uncorrelated fashion, such that the mean gap equals λ^(-1). That is the meaning of saying the mean rate is λ: the mean time elapsed between successive events is 1/λ. Then we need to write down an equation for p(n, t), and the way one does it is to look at p(n, t + Δt). There are only two possibilities. I start at t = 0, time flows in this direction, and here is t and here is t + Δt; I ask for the probability that n events have occurred in the time interval from 0 to t + Δt. Either all n have occurred by time t and nothing happens in between, or n − 1 have occurred by time t and one of them happens in this infinitesimal time interval.
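This picture of uncorrelated crosses on the time axis can be simulated directly. A minimal Python sketch (rate, window, and trial count are invented): gaps between events are drawn as independent exponentials with mean 1/λ, and the number of events landing in a window of length T should then be Poisson, so its mean and variance should both come out close to λT.

```python
import random

# Events generated by independent exponential gaps with mean 1/lam
# (illustrative parameters).  The count in [0, T] should be Poisson with
# mean lam*T, so its mean and variance should agree.
random.seed(1)
lam, T, trials = 1.5, 4.0, 200_000

counts = []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)    # waiting time to the next event
        if t > T:
            break
        n += 1
    counts.append(n)

mean_n = sum(counts) / trials
var_n = sum((n - mean_n) ** 2 for n in counts) / trials
# both should be close to lam * T = 6.0
```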
So the first possibility is that n − 1 events have occurred up to time t and the last one appears in the interval Δt, whose probability is λΔt; the other possibility is that all n have occurred in the time interval t and nothing happens in the remaining time. There are no other possibilities, so p(n, t + Δt) = p(n − 1, t) λΔt + p(n, t)(1 − λΔt). Now, of course, the immediate thing to do is to move p(n, t) to the left-hand side and divide through by Δt, which immediately tells us that dp(n, t)/dt = λ [p(n − 1, t) − p(n, t)]; λ is a rate, so this has the right dimensions. What is the initial condition we need to impose? No events have occurred at t = 0, which is when we start. And for what values of n is this equation valid? From one upwards: n is the number of events, so n ≥ 1. But you also need an equation for the probability that no events at all have occurred in a given interval, so you certainly need an equation for that. What is p(0, t + Δt)? It means nothing has happened right up to t + Δt, which means nothing happens until time t and nothing happens in the remaining interval either: p(0, t + Δt) = p(0, t)(1 − λΔt), where 1 − λΔt is the probability that nothing happens in the interval Δt. So this says dp(0, t)/dt = −λ p(0, t), whereas the earlier equation is true for all n ≥ 1. It is clear that this probability can only decrease: wait for a sufficiently long time and something is going to
happen, so the probability that nothing happens decreases steadily with time, and the solution is immediate: p(0, t) = e^(−λt), since the initial condition is p(0, 0) = 1. So this decreases exponentially, but the rest of the probabilities satisfy this set of coupled differential equations, all the way up to infinity. How does one solve it? There are many ways: we could do a Laplace transform, or better still find a generating function, which is the obvious thing to do. Incidentally, what is the random variable in this problem? It is n(t), and n(t) cannot decrease as a function of time: it stays constant for a while and then increases by 1 each time an event occurs. It is a birth process, because it only increases; there is no death involved here. So if I plot n(t) as a function of time, it starts at 0, jumps to 1, stays flat, jumps again, and so on, increasing by 1 at each jump: an irregular staircase, the simplest example of what is called a birth process. So let us define a generating function f(z, t) = Σ_{n=0 to ∞} p(n, t) z^n. All we need to do is multiply the equation for each p(n, t) by z^n, include the n = 0 equation (multiplied by z^0, which is 1), and add everything up. On the left-hand side the differential equations give ∂f(z, t)/∂t, a partial derivative because f has another variable as well; on the right-hand side, the sum of z^n p(n, t) over all n is going to
give you f itself, multiplied by −λ; and the gain term, λ z^n p(n − 1, t), summed over n from 1 to infinity: take out one factor of z, leaving z^(n−1), shift the summation index by 1 so that the sum runs from 0 to infinity, and a factor of z comes out in front of f. So it is immediately clear that ∂f/∂t = λ(z − 1) f(z, t); that follows just by multiplying the whole set of equations by z^n and summing it up. What is the initial condition on this? Well, we know the initial condition on p is δ_{n,0}, so f(z, 0) must equal 1, because z^0 is 1. We need to solve with that initial condition, and the solution is immediate: f(z, t) = e^(λt(z − 1)). So that is the generating function, and the probability p(n, t) is the coefficient of z^n when this exponential is expanded, which at once gives p(n, t) = e^(−λt) (λt)^n / n!. That is the Poisson process: a Poisson distribution with mean value ⟨n(t)⟩ = λt. Once you have specified the mean value of a Poisson process you have said everything there is to be said: the variance is λt, all the cumulants are λt, so it is completely characterized by this single parameter λ. Now, what is the crucial physical assumption that went into this whole business? That successive events are completely uncorrelated, independent of each other. So given that an event has occurred at some point on the time axis, the probability that another event occurs here, or here, or
here, and so on, is completely independent of it. So the stationary Poisson process is characteristic of something completely uncorrelated; if there is any degree of memory, anything other than this exponential form for p(0, t), then immediately you have some memory in the problem, and we will see examples of that. But otherwise this is certainly the simplest form you can think of. Now I ask the following question. Suppose this process has been going on for a long time: I have this radioactive sample, the decay has been going on for a long time, and on the time axis every decay produces a click, which I mark with a cross. What is the mean time between successive decays? λ^(-1), that is the mean time. But I could ask another question. Suppose this is what is called an equilibrium process, a renewal process that has been going on for a long time, and I come along and start my experiment at some instant of time: I start my clock at t = 0, I start looking at the system at this instant, and I record all the subsequent decays. I could ask: what is the mean forward recurrence time, namely the mean time from my arbitrary starting instant to the next event? I could similarly ask for the mean time elapsed since the previous event; that is called the mean backward recurrence time. What would you say? How about this argument: the mean value of the gap between crosses is τ, which is λ^(-1), and now I am putting a bar somewhere in between two crosses and asking for the mean time from the cross on the left to the bar, and from the bar to the cross on the right. The naive guess would be to say half
of τ each, split equally, and that would be wrong. The correct answer is that each of these is again τ, because, in some crude sense, this system is completely uncorrelated: it does not care whether it is a bar or a cross; you could regard your arrival as just another event. So no matter where you start, the mean forward recurrence time equals the mean backward recurrence time equals the mean recurrence time itself: all of these are τ once again. Probability theory is full of these little apparent paradoxes and subtleties, but in this case it is very straightforward to actually show. The forward recurrence time is worth remembering, because if you recall the old simple derivation of the conductivity, the mobility of charge carriers in a metallic system, one of the quantities that appears in the formula for the mobility is the mean time between collisions, and what actually appears is the mean forward recurrence time; that is the same as the mean recurrence time, and otherwise you would miss a factor of 2 in the formula. So much for the Poisson process. We will look at other processes which are governed by a Poisson process; in fact, we are going to look at a whole family of random walks where something happens at instants of time given by a Poisson process, so the Poisson process, if you like, drives what happens next. So let us look at an example of that, and let us do this in several steps, because it is a very basic process. The simplest of these is the so-called simple random walk, and let us begin with an even simpler problem. Imagine an infinite linear lattice, with sites distributed at regular intervals on the x-axis; call this site 0, this site 1, this site 2, and so on, and site −1, −2 on the other side: an infinite linear lattice. The rule of the game is the following: I take a
coin, I toss it, and if it comes up heads I move one step to the right; if it comes up tails, one step to the left. I do this at the end of every second, every time unit; it is in discrete time. And I ask: where am I likely to be after n steps? So time, in discrete steps, is the number of steps n, which is given to me; I take, say, 100 steps or 200 steps or whatever. I label the general site by the index j, where j is any integer, and just to be specific I start at a site which I call 0. As long as this is an infinite lattice it does not matter which site you call the starting site 0: there is translation invariance. All these assumptions have to be re-examined under other conditions; for example, in a finite medium with boundaries it makes a difference whether I start close to a boundary or in the middle. But on an infinite lattice it does not matter. So I ask for the following quantity: P(j, n), the probability that I am at site j after n steps. And remember, let me call the probability of jumping to the right α and of jumping to the left β, with α + β = 1, so I even include a bias in this walk. The problem is clear: with probability α I get heads, with probability β I get tails, a biased coin; I start the walk at 0 steps at the point labeled 0, and I ask for the probability of being at j. It is immediately clear that I can write a difference equation for this at once: at the (n − 1)-th step I must have reached either j − 1 or j + 1, for only then can I reach j at the n-th step. The probability that I have reached j − 1 in n − 1
steps is P(j − 1, n − 1), and then I need to jump to the right to reach the point j, with probability α; or, the mutually exclusive event, I reach j + 1 at step n − 1 and then take a step to the left, with probability β. So the difference equation is P(j, n) = α P(j − 1, n − 1) + β P(j + 1, n − 1). Remember that j takes all integer values, positive, negative and zero; n takes positive values, starting at 0 and going 1, 2, 3, 4, and so on; and j is the random variable whose distribution we are asking for, as a function of j for a given n. Now there are many ways of solving this: you could write a generating function and so on, but actually we can write the solution down by inspection. The first thing to notice is that j must run from −n to +n, because there is no way you are going to reach a point of magnitude greater than n on either side in n steps; so |j| ≤ n, bounded on either side by the number of steps. The other point is that n − j must be even, so j must have the same parity as n: if n is odd you can only end up at an odd-numbered site, and if n is even, only at an even-numbered site. For instance, you cannot reach site 3 from 0 in an even number of steps: you go 0 to 1, 1 to 2, 2 to 3 in an odd number of steps, or you overshoot to 4 and come back to 3, which is again odd. Then the way to write the solution down is to argue as follows. Take some site j, with j positive, say; starting from the origin I have to take at least j steps to the right to reach that point, and that leaves n − j steps, which have to cancel out in pairs: I overshoot, undershoot, and cancel them out in pairs. So the number of steps to the right that I take has got to
be j + (n − j)/2 right steps, assuming j is positive (we will come back to j negative), and the remaining (n − j)/2 steps are to the left. But j + (n − j)/2 is just (n + j)/2. So the probability P(j, n) has got to contain α^((n+j)/2), since the steps are all independent of each other and the probability of each right step is α, and then the rest have to be β^((n−j)/2). And of course you do not care in what order you take these steps; any sequence will get you to the point j. So P(j, n) = C(n, (n + j)/2) α^((n+j)/2) β^((n−j)/2), where C(n, k) is the binomial coefficient; this holds for |j| ≤ n with n − j even, and in all other cases the answer is 0. That is the solution, and I leave it to you to check that it actually satisfies the difference equation: the two binomial coefficients add up to give the right coefficient for any α + β = 1. I also leave it to you as an exercise to convince yourselves that if j is negative, exactly the same formula is still applicable; it does not really matter. Now, what is the sample space of j? Here we have to be a little cautious: n is given, and for a given n the least value j can have is −n. Can it be −n + 1? No, because for a given n it must have the same parity as n; so the sample space is {−n, −n + 2, ..., n − 2, n}. Those are the only possibilities for a given n, whether it is even or odd. So it is almost like a binomial distribution, if you like. What should we do to make it a binomial distribution?
You see, to make it literally a binomial distribution we would like to change the variable so that it runs over all integers from 0 to n. The way to do it is to call a step to the right a success and a step to the left a failure, for instance; then I am counting the number of successes, which is (n + j)/2. So introduce a random variable K = (n + j)/2; this is an integer because n + j is even, and the sample space of K runs from 0 to n. Then the probability of K successes in n steps is P(K, n) = C(n, K) α^K β^(n−K), which is of course the binomial distribution. So the simple biased random walk on a linear lattice essentially has a binomial distribution. It did not look like it when I used the site index j, where you have to be careful about the parity and so on, but once I write it in that form it is exactly the binomial distribution. Now, what is the average value of j? We can compute it: you can put a j in the sum, or you can use a generating function; it is essentially the same thing. What do you think the generating function is? Remember that j runs from −n to +n, always in steps of 2, so you could again say: I take z^j and sum it up; you get a factor z every time you move to the right and a factor 1/z every time you move to the left, so it is of the form (αz + β/z)^n. So what do you think the average value is? It is a function of n, since it is an n-step random walk. Well, first of all, if α equals β you expect it is not going to go
anywhere: the average value is going to be 0. But now suppose α is not equal to β: if α is bigger than β you expect the average to be positive, and if it is less than β you expect a drift to the left. So you would naively expect this average to be proportional to α − β, and it essentially is, because the difference α − β acts like a drift velocity: in every time step you have a net chance α − β of moving to one side or the other, and distance is time multiplied by the average velocity, where time here is n. Check this out using the generating function. By the way, you can write f(z) = Σ_{j=−n to n} P(j, n) z^j = (αz + β z^(−1))^n, with all those restrictions, because I already said that unless j satisfies the parity conditions, P is 0; formally you can just write it like this. Note that f(1) = 1: the generating function is normalized. When you differentiate once and set z = 1, you get the average value, ⟨j⟩ = n(α − β): differentiating brings the power down to n − 1, the bracket becomes α + β = 1 there, and the derivative of the bracket at z = 1 gives the factor α − β. So this α − β acts like a drift velocity, and this has significance, because when we do diffusion in the presence of an external force, as in sedimentation or something like that, you will see that it is precisely modeled by the continuum version of this; or if I have a charge carrier in an electric field and I ask for the mobility, this drift velocity is going to depend on the bias. What is going to be the variance of j as a function of n? There is a drift, that is very clear, so the mean is increasing with time, and if you square the mean it is going like the square of the time,
the square of n: that is almost ballistic motion. On the other hand, when you ask for the variance you are subtracting this systematic drift out, and then what do you expect? You expect it to behave like an unbiased random walk, and in an unbiased walk, what is the mean squared displacement proportional to as a function of time? It is linear in time; that is the famous diffusive behavior. This variance will turn out to be 4nαβ, because if α = β = 1/2, unbiased, you expect it to be exactly n: the mean squared displacement grows linearly in time. So check these out; again you can do that by taking the generating function, differentiating, and so on. Verify that this is true. What we would like to do next is to take this walk and do it in continuous time, and now the rule is different: I am still on this lattice, I still toss a coin and move right with probability α or left with probability β, but I do so at random instants governed by a Poisson process; every time there is a cross, I toss my coin. So the instants of time at which I take the jumps are completely random. The next step after that will be to make even the space continuous, a continuous diffusion process; this we will see when we go to the complete continuum limit. But right now, having started with a lattice discrete in both space and time, I am making time continuous, saying that I have a mean rate of jumps governed by a Poisson process, and I want to know the probability that I am at site j at some instant of time. That is now a birth-and-death process, because j can now decrease as well as increase: you can regard a step down in j as a death and a step up as a birth. What is the equation going to look like? I have P(j, t), and I ask: what is P(j, t + Δt)? What are all the possibilities? Let us draw a picture: here is the site j, this is the neighboring site j + 1,
and that is the site J - 1. To be at J at time t + delta t, with a mean rate of jumps lambda, it is clear that I should have been at J - 1 or at J + 1 at time t, and then in the interval delta t I jump to the right or to the left respectively. So what is the equation? P(J, t + delta t) = lambda alpha delta t P(J - 1, t), remembering that alpha is the a priori probability of getting a head in the coin toss, plus lambda beta delta t P(J + 1, t), with alpha + beta = 1. But we are not done yet: you might already have reached the point J at time t and done nothing in the interval delta t, so you must add (1 - lambda delta t) P(J, t). That is the equation, the master equation if you like, for the probability P(J, t), and all possibilities have been covered. Again I take this last term over to the left-hand side, divide by delta t, and take the limit, to get dP(J, t)/dt = lambda [alpha P(J - 1, t) + beta P(J + 1, t) - P(J, t)]. That is the equation. What is the initial condition? Well, I tell you I start at the origin at t = 0, so the initial condition is a Kronecker delta: P(J, 0) = delta of J, 0.
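As a sanity check on the master equation just written down, here is a minimal numerical sketch: integrate it forward in time by the Euler method from the Kronecker-delta initial condition and watch the normalization, the mean, and the variance. The parameter values, the lattice truncation, and the step size are all illustrative choices, not from the lecture.

```python
import numpy as np

# Euler integration of the master equation
#   dP(J,t)/dt = lam * (alpha*P(J-1,t) + beta*P(J+1,t) - P(J,t)),
# starting from the Kronecker delta P(J,0) = delta_{J,0}.
lam, alpha, beta = 1.0, 0.6, 0.4
jmax = 50                              # keep sites J = -jmax, ..., jmax
P = np.zeros(2 * jmax + 1)
P[jmax] = 1.0                          # array index jmax is the site J = 0

dt, t_final = 1e-3, 2.0
for _ in range(int(round(t_final / dt))):
    # np.roll(P, 1)[i] equals P[i-1], i.e. the probability at site J-1
    gain = alpha * np.roll(P, 1) + beta * np.roll(P, -1)
    P += lam * dt * (gain - P)

J = np.arange(-jmax, jmax + 1)
mean = (J * P).sum()
var = (J**2 * P).sum() - mean**2
print(P.sum())   # normalization is preserved
print(mean)      # drifts like lam*(alpha - beta)*t
print(var)       # grows like lam*t
```

The boundary is handled by periodic wrap-around, which is harmless here because essentially no probability reaches |J| = jmax by t = 2.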
So there are these two gain terms, but there is also this loss term. The sample space of J is all the integers, from minus infinity to infinity. Again I write a generating function and solve this equation as before, but now the generating function is a sum over all integers: define F(z, t) = sum over J from minus infinity to infinity of P(J, t) z^J. So I multiply the master equation by z^J on both sides and sum over all J. On the left-hand side I get dF(z, t)/dt. On the right-hand side, the loss term multiplied by z^J just gives me minus lambda F straight away. The gain term with alpha has a J - 1 in it, so I pull out one factor of z; whether the sum runs over J or over J - 1 makes no difference, you still get F. Similarly the beta term has a J + 1, so I divide by z. It immediately says dF/dt = lambda (alpha z + beta/z - 1) F. And what is the solution to this equation? There is no explicit t-dependence anywhere in the coefficient, so it is just the exponential of that coefficient times t, multiplied by the initial condition. But what is F(z, 0)? It is 1, because P(J, 0) = delta of J, 0. So F(z, t) = e^(-lambda t) e^(lambda t (alpha z + beta/z)), and that is the exact solution for the generating function. What we need is the coefficient of z^J when you expand this in powers of z. But now remember that J runs from minus infinity to infinity, so it is not a power series
in non-negative powers of z; it is a Laurent series. But we know by experience now that this is the generating function for something: the modified Bessel functions. Recall the famous formula, which keeps appearing over and over again: e^((t/2)(z + 1/z)) = sum over J from minus infinity to infinity of I_J(t) z^J, where I_J is the modified Bessel function of the first kind. It has a lot of fascinating properties: like the exponential, it is an entire function, a nice smooth analytic function of its argument, and we know its behaviour for large values of the argument and so on; we will write some of those things down. But it is the generating function, and therefore you can read off what P(J, t) is. The factor e^(-lambda t) remains. To match the formula, the argument of the exponential has to be of the form something times (u + 1/u), so the trick is to take out the square root of alpha beta: write lambda t (alpha z + beta/z) as lambda t root(alpha beta) times (z root(alpha/beta) + (1/z) root(beta/alpha)). This immediately gives P(J, t) = e^(-lambda t) (alpha/beta)^(J/2) I_J(2 lambda t root(alpha beta)). That is the solution. It is a rather unusual distribution: it is not a binomial, it is not a negative binomial. Does it remind you of anything? It is the Skellam distribution, the distribution of the difference of two Poisson random variables. And indeed it is, because we are now saying that you have a Poisson process with rate lambda at whose events the jumps occur, but the jumps are completely uncorrelated with each other. So you really have two Poisson processes mixed into one: jumps to the right with mean rate lambda alpha, and jumps to the left with mean rate lambda beta. And if you ask to be at the point J, it is the number of steps to the right minus the number of steps to the left,
so this J is the difference of two Poisson-distributed counts with different means, and we know what that distribution looks like. If you have a single Poisson random variable with mean value mu, the generating function is e^(mu(z - 1)). But now I have two independent processes with means mu and nu, and I want the difference between the two counts, so the generating function is e^(mu(z - 1)) e^(nu(1/z - 1)); the second factor has 1/z rather than z because the steps to the left contribute z to the power minus n. All we have to do is put mu = lambda alpha t and nu = lambda beta t, and that is it. So the generating function F(z, t) in this case has got to be e^(lambda alpha t (z - 1) + lambda beta t (1/z - 1)). But alpha + beta = 1, so this is precisely e^(-lambda t) e^(lambda t (alpha z + beta/z)), exactly the quantity we had, and then of course you use the generating-function formula for the modified Bessel function to write the solution down explicitly. I urge you to go back and check that this solution actually satisfies the original differential-difference equation; it is an interesting exercise, and the solution is unique in this case. Now we can ask interesting questions. We can either work with that Bessel function, which is a little messy unless you know all its properties, or, better still, work with the generating function, because if you want to find mean values and so on it is sufficient to work with the generating function.
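The exercise suggested above, checking that the solution satisfies the differential-difference equation, can also be done numerically. The sketch below builds P(J, t) directly as the difference-of-two-Poissons (Skellam) sum, so it needs nothing beyond the standard library, and compares a numerical time derivative against the right-hand side of the master equation; the parameter values are illustrative.

```python
import math

# Skellam form of the solution:
#   P(J,t) = sum over n of e^(-mu-nu) * mu^n/n! * nu^(n-J)/(n-J)!,
# with mu = lam*alpha*t (right jumps) and nu = lam*beta*t (left jumps).
lam, alpha, beta = 1.3, 0.7, 0.3

def P(J, t):
    mu, nu = lam * alpha * t, lam * beta * t
    # J = (number of right jumps n) - (number of left jumps n - J);
    # the sum is truncated where the terms are negligibly small
    return sum(math.exp(-mu - nu) * mu**n / math.factorial(n)
               * nu**(n - J) / math.factorial(n - J)
               for n in range(max(0, J), 60))

# Check dP(J,t)/dt = lam*(alpha*P(J-1,t) + beta*P(J+1,t) - P(J,t))
t, h = 1.5, 1e-5
for J in range(-3, 4):
    lhs = (P(J, t + h) - P(J, t - h)) / (2 * h)   # numerical dP/dt
    rhs = lam * (alpha * P(J - 1, t) + beta * P(J + 1, t) - P(J, t))
    assert abs(lhs - rhs) < 1e-6
print("master equation satisfied at all sites tested")
```

The same function, summed over J, also confirms that the distribution is normalized.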
So compute now what the mean value of J is; it is a function of t. And then compute the variance. You can do so very easily from the generating function: take successive derivatives with respect to z at z = 1, and confirm that indeed the mean value, lambda (alpha - beta) t, depends on alpha minus beta, and that the variance, lambda t, is linear in the time, again characteristic of diffusion. So this is an example of a birth-and-death process. It is a very simple one, because we have said that the rate is exactly the same everywhere and that only nearest-neighbour jumps are allowed. You may instead have a random walk in which you jump not just to nearest neighbours but, with some probability, beyond them, over a whole range of step lengths with decreasing probabilities. Those are much harder problems to solve, because you no longer get difference equations with constant coefficients; the coefficients become functions of position, and those are much trickier. But this is the simplest random walk. What have we done so far? We looked at the discrete-space, discrete-time random walk, the simple biased random walk, and got an exact solution for the probability. Then we looked at discrete space but continuous time, and got another exact solution. It remains to make space continuous as well. All we have to do is to put in a lattice constant a and then let a go to 0 in such a way that we get a finite limit; time has already become continuous. So the step length is going to become infinitesimal, and the rate of taking steps is going to become infinite, or equivalently the mean time between steps goes to 0: we are going to take the limit lambda tending to infinity and the lattice constant a tending to 0 in such a way that this goes over into the continuous diffusion process, and then we will work backwards and show that it satisfies the diffusion equation.
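One way to confirm the moments read off from the generating function is brute-force simulation: draw the number of right jumps and left jumps as independent Poisson variates with means lambda alpha t and lambda beta t, and look at the sample mean and variance of their difference. Everything here, the parameters, the sample size, and the Knuth sampler, is an illustrative choice.

```python
import math
import random

# Monte-Carlo check of the moments: mean of J = lam*(alpha-beta)*t,
# variance of J = lam*t (the Skellam variance is mu + nu = lam*t).
random.seed(0)
lam, alpha, beta, t, trials = 2.0, 0.6, 0.4, 3.0, 200_000

def poisson(mean):
    # Knuth's multiplication method; fine for the small means used here
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

samples = [poisson(lam * alpha * t) - poisson(lam * beta * t)
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)   # close to lam*(alpha-beta)*t = 1.2 and lam*t = 6.0
```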
Now, what sort of limit would you expect this to be? Would you say lambda times a should be finite? No, because you see it is a diffusive process: in some very general sense the square of the distance goes like the time. So you have to be a little careful: you want lambda going to infinity and a going to 0 such that lambda times a squared is finite, and then you get a finite answer, the diffusion constant. Because what is happening otherwise is this: if the square of the distance is proportional to the time, it means that for an infinitesimal time delta t and the corresponding infinitesimal distance delta x, the ratio delta x over delta t must be infinite, because it is (delta x) squared divided by delta t that stays finite. That is equivalent to saying that a diffusing particle formally has an infinite velocity, and we will see what is meant by the statement that Brownian motion has infinite velocity; we will look at it from various angles, but those things are already emerging here. So although this is a very simple process, it has very intricate properties, and we will look at some of them as we go along. But before that we will look at other random processes related to this one. Let me stop here, and we will take it up on Monday.
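As a numerical footnote to the limit just described, the sketch below takes the unbiased lattice solution with lambda a^2 held fixed and compares the resulting density P(J, t)/a at x = J a with the Gaussian solution of the diffusion equation. One convention choice is made here that the lecture leaves open: the diffusion constant is taken to be D = lambda a^2 / 2, the factor 1/2 being needed so that the variance lambda t a^2 equals 2 D t.

```python
import math

# Unbiased walk (alpha = beta = 1/2) with lattice constant a and rate lam,
# keeping D = lam*a^2/2 fixed; compare the lattice density P(J,t)/a at
# x = J*a with the diffusion-equation density exp(-x^2/(4Dt))/sqrt(4*pi*D*t).
D, a, t = 0.5, 0.02, 1.0
lam = 2 * D / a**2            # so that lam * a^2 / 2 = D
mu = lam * t / 2              # mean number of jumps in each direction

def P(J):
    # Skellam form of the lattice solution, terms summed in log space
    top = int(2 * mu + 20 * math.sqrt(mu))
    return sum(math.exp(-2 * mu + (2 * n - J) * math.log(mu)
                        - math.lgamma(n + 1) - math.lgamma(n - J + 1))
               for n in range(max(0, J), top))

for J in (0, 25, 50):
    x = J * a
    gauss = math.exp(-x**2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
    print(x, P(J) / a, gauss)   # lattice density vs Gaussian density
```

Shrinking a further (with D fixed) makes the agreement tighter, which is the content of the continuum limit.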