Alright, let us start today by trying to establish a little more carefully this problem in random walks about recurrence. I have mentioned throughout that in one dimension, on a line for instance, the first passage from any point to any other point under normal diffusion is a sure event, and I have also said that in the unbiased case the mean first-passage time is nevertheless infinite; we also saw what happens when you have bias: first passage to a given point, or return to the origin on a linear lattice, is then no longer a sure event. I have also often mentioned that in one and two dimensions walks are recurrent, in the sense that if there is no bias, then starting anywhere you are guaranteed to reach any other point, and to return to your starting point any number of times, possibly infinitely many times, although the mean time between such returns is infinite. We need to establish this, so we will do it in one particular context, and the result can then be generalized. After that I want to go back to what we had started looking at, namely non-Markov walks, and put that in a slightly more general framework, because there are many physical processes which cannot be described by Markov processes or Markov chains and for which you need some generalization; one such generalization is called a renewal process, and we will talk a little bit about renewal theory. So, first, the problem of recurrence. Let me do this in the context of random walks on lattices which are linear, square, cubic, or hypercubic in d dimensions; the physical phenomenon of recurrence depends on the dimensionality, and not on the actual lattice structure.
So if a walk is recurrent for, say, a simple cubic lattice, it is also recurrent for a face-centred cubic lattice, and so on. There could be minor changes of detail depending on the lattice structure: for instance, in three dimensions it will turn out that an unbiased walk is transient, not recurrent, so the total probability of return to the origin is less than one; but the actual number between 0 and 1 will depend on the lattice structure. So let us put this in a slightly general context. I have in mind a d-dimensional lattice, so the vector R is a site in this lattice; call its coordinates (j_1, j_2, ..., j_d), just to remind ourselves that these are integers running over all the integers, so that R is a lattice point. Now I want to ask the question: if I start at the origin, what is P(R, t | 0, 0), the probability that you are at the point R at time t (or, in discrete time, at step n), given that you started at the origin at t = 0?
This is the quantity we need to compute, which we can do fairly straightforwardly if it is a Markov process. Remember how the problem of recurrence came about. In the one-dimensional case we had explicitly the probability P(j, n | 0), and the generating function for this probability, the sum from n = 1 to infinity of P(j, n | 0) z^n, which I denoted Pi_{j0}(z). Then F(j, n | 0) was the probability that, starting at 0, you hit the site j for the first time at step n, and its generating function, the sum from n = 1 to infinity of F(j, n | 0) z^n, I called Phi_{j0}(z). The statement was that Phi_{00}(z) = Pi_{00}(z) / [1 + Pi_{00}(z)]; that was the relation between the generating functions which we got by using the renewal equation. Just to refresh your memory: if Phi_{00}(1) = 1, then return to the origin is a sure event, because it says the sum over all the first-return probabilities, without the z, is 1. Now the only way that can happen is for Pi_{00}(z) to diverge as z tends to 1; if that quantity tends to infinity, you are sure that return to the origin is a sure event. And on an infinite lattice this is true in general for Markov chains.
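As a quick numerical sketch of the renewal relation just recalled (plain Python; the function name `Pi_00` is mine, and I use the known 1D return probabilities P(0, 2m | 0) = C(2m, m)/4^m):

```python
from math import comb, sqrt

def Pi_00(z, n_max=400):
    # Generating function of the return probabilities of the unbiased 1D walk.
    # Returns to the origin are possible only at even steps, with
    # P(0, 2m | 0) = C(2m, m) / 4^m.
    return sum(comb(2 * m, m) / 4**m * z**(2 * m) for m in range(1, n_max + 1))

z = 0.9
Pi = Pi_00(z)
Phi = Pi / (1 + Pi)                       # the renewal relation Phi = Pi/(1+Pi)
exact = 1 - sqrt(1 - z**2)                # closed form for Phi_00(z) in 1D
assert abs(Phi - exact) < 1e-6
```

For |z| < 1 the truncated series converges rapidly, so the renewal relation can be checked to high accuracy; as z approaches 1, Pi_00(z) grows without bound and Phi_00(z) approaches 1, which is the recurrence statement.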
But now do this on a lattice, and talk about the random walk problem in that context. For the random walk on a linear lattice, where the sites are labelled by j, it is clear by translational invariance that Pi_{jj}(z) = Pi_{00}(z): this probability does not depend on where you start, so you can choose any point as the origin. Similarly, Phi_{jj}(z) = Phi_{00}(z); although I spoke about return to the origin, I could have chosen any point and asked about return to that point, and it is exactly the same thing. So we want this quantity to diverge: in other words, the sum from n = 1 to infinity of P(j, n | j) being infinite implies recurrence. That is what we want to check in higher dimensions, in the general context of a Markovian random walk on a d-dimensional lattice, say a hypercubic lattice: starting at any site, say the origin, does the probability of being back at that site at time n, summed over n, diverge or not? If it does, the walk is recurrent. I would like to prove specifically that in 1 and 2 dimensions this quantity diverges, but in 3 and higher dimensions it converges, so that the walk is not recurrent. So let us see how to do that; we will not spell out all the details, but we will write out the basic equation. What is P(R, n | 0) equal to? Well, in any lattice, the point R has a certain number of nearest neighbours; in a hypercubic lattice, for instance, there are neighbours on this side, on this side, and
front and back, so in all three dimensions there are neighbours, and we need unit vectors to denote them; in a cubic lattice in 3 dimensions these would be e_x, e_y, e_z. So suppose R = (j_1, ..., j_d), and you have unit vectors e_1, ..., e_d along the Cartesian axes in d-dimensional space. Now, because it is a Markovian walk, at the previous step you must have been at one of the nearest neighbours, and from any of them you jump into R with probability 1/(2d), because the number of nearest neighbours in a hypercubic lattice in d dimensions is 2d. So the equation is P(R, n | 0) = (1/2d) times the sum over i from 1 to d of [P(R + e_i, n − 1 | 0) + P(R − e_i, n − 1 | 0)]; you need both R + e_i and R − e_i because along each axis you can jump into R from either neighbour, making 2d nearest neighbours in all. Subtracting P(R, n − 1 | 0) from both sides puts it in the form P(R, n | 0) − P(R, n − 1 | 0) = (1/2d) times the sum over i of [P(R + e_i, n − 1 | 0) + P(R − e_i, n − 1 | 0) − 2 P(R, n − 1 | 0)], and the right-hand side is the discrete Laplacian; you can go to the continuum limit if you like. Just check the bookkeeping: the sum over i runs d times, so the subtracted 2P terms give (1/2d) times 2d times P(R, n − 1 | 0), which is exactly the P(R, n − 1 | 0) we moved to the left. So this is the usual Markovian walk; but now how do you solve a thing like this?
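Before solving it by transform methods, the difference equation itself can be iterated directly; here is a minimal sketch (plain Python, dictionary-based; the helper name `step` is mine), checking probability conservation and the known square-lattice return probability P(0, 2n | 0) = [C(2n, n)/4^n]^2 at 2n = 4:

```python
from collections import defaultdict
from math import comb

def step(p, d):
    # One step of P(R, n) = (1/2d) * sum_i [P(R + e_i, n-1) + P(R - e_i, n-1)]
    q = defaultdict(float)
    for site, w in p.items():
        for i in range(d):
            for s in (+1, -1):
                nb = site[:i] + (site[i] + s,) + site[i + 1:]
                q[nb] += w / (2 * d)
    return dict(q)

d = 2
p = {(0,) * d: 1.0}                     # delta distribution at the origin
for _ in range(4):
    p = step(p, d)

assert abs(sum(p.values()) - 1.0) < 1e-12          # probability is conserved
assert abs(p[(0, 0)] - (comb(4, 2) / 4**2) ** 2) < 1e-12   # = 36/256
```

The dictionary grows with each step, which is why the Fourier solution below is the practical route; but the direct iteration makes the structure of the equation concrete.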
The solution is obvious: we should do a Fourier transform in the space variables. So define P(R, n | 0) = (1/(2 pi)^d) times the d-dimensional integral of d^d q, e^{i q . R}, P-tilde(q, n); I use the symbol k very often for a lattice point, so let us call the conjugate momentum variable q. Since this P lives on a lattice, it is supported only on the lattice points, and P-tilde is periodic in q, so you integrate over the fundamental period, minus pi to pi, for each component of q. Formally one can write a representation like this. Now what happens if I plug this into the equation? Doing the Fourier transform, you immediately see what is going to happen: inside the integral you get q . (R plus or minus e_i), and e^{i q . (R + e_i)} = e^{i q . R} e^{i q_i}, because e_i is a unit vector. So the R + e_i term gives an extra phase factor e^{i q_i}, the R − e_i term gives e^{−i q_i}, and adding the two gives 2 cos q_i; that happens for each one of these terms. So this finally gives you the factor (1/d) times the sum over i from 1 to d of cos q_i, the 2 being absorbed in the definition of the cosine.
So I got e^{i q_i} + e^{−i q_i} = 2 cos q_i, the 2 cancels against the 1/(2d), and this factor (1/d) times the sum of cos q_i multiplies P-tilde(q, n − 1), which is the same transform one step earlier. So that is a very simple solution: it is just a recursion relation in which the Fourier transform at time n is a q-dependent constant times the Fourier transform at time n − 1. This implies P-tilde(q, n) = [(1/d) sum over i from 1 to d of cos q_i]^n times P-tilde(q, 0). But what is P-tilde(q, 0)? It is the Fourier transform of the initial distribution. We started with the assumption P(R, 0 | 0) = a d-dimensional delta function at the origin: I start from the origin. Its Fourier transform, with this normalization, is just 1.
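The recursion and its inversion can be sanity-checked numerically in one dimension, where the structure function is just cos q and the inverse transform (1/2 pi) times the integral of cos^n q must reproduce the binomial return probabilities. A minimal sketch (plain Python; the function name `p0` and the quadrature size are mine), using the midpoint rule, which is essentially exact for these trigonometric polynomials:

```python
from math import comb, cos, pi

def p0(n, M=4096):
    # P(0, n | 0) in d = 1 via the inverse Fourier transform:
    # (1/2pi) * integral over [-pi, pi] of cos(q)^n dq, midpoint rule
    h = 2 * pi / M
    return sum(cos(-pi + (k + 0.5) * h) ** n for k in range(M)) * h / (2 * pi)

for n in (2, 4, 6, 8):
    assert abs(p0(n) - comb(n, n // 2) / 2**n) < 1e-12
assert abs(p0(5)) < 1e-12      # odd n: cannot be back at the origin
```

The same check works in higher d with a product of cosines; only the bookkeeping grows.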
So that solves it: this quantity is equal to 1, and you immediately see that all you have to do is to take P-tilde(q, n) = [(1/d) sum over i of cos q_i]^n; if you can do the inverse Fourier transform of that, you have the probability distribution formally. But now we can ask: what is Pi_{00}(1), the sum over n of P(0, n | 0)? Whether the sum starts at n = 0 or n = 1 does not matter, since we only want to know whether it diverges. It is the sum over n of (1/(2 pi)^d) times the integral from minus pi to pi of d^d q, e^{i q . R}, [(1/d) sum of cos q_i]^n, but with R = 0, the origin, so the exponential factor goes away. That sum over n is a geometric series: the cosines are at most 1 in magnitude, and there is a d in the denominator, so for q away from the corners the ratio is less than 1 in magnitude and you can sum the series, and it gives Pi_{00}(1) = (1/(2 pi)^d) times the integral from minus pi to pi of d^d q divided by [1 − (1/d)(cos q_1 + ... + cos q_d)]. If this integral diverges, the walk is recurrent and return to the origin is sure; if it converges, the probability of return to the origin is less than 1. Now how would you do this integral? The q_i run over a cube, so the way to examine it is in spherical polar coordinates. This is called a Watson integral; it appears very often in lattice dynamics, when you study the normal modes of vibration of lattices. The precise denominator depends on the fact that we started with a hypercubic lattice; for other lattice systems the unit vectors are different and you would get different factors, but something similar. Now, can it diverge at all? Well,
if it diverges, it has to do so at the origin, q = 0; you have to see what the behaviour of the integrand is there. If the vector q = 0, each component is 0, all the cosines become 1, the (1/d) cancels against the d, the 1 cancels, and you get a zero in the denominator; whether that actually produces a divergence depends on what is happening upstairs, so we look at what goes on near the origin. In spherical polar coordinates, expanding for small q_i, the leading term beyond the 1 is quadratic: the 1 cancels, and the denominator vanishes like q squared, where q squared stands for q_1 squared + ... + q_d squared, the square of the magnitude. Upstairs you have the phase-space factor, the volume element q^{d−1} dq. So near the origin, if it diverges at all, the integral behaves like the integral from 0 of q^{d−1} dq over q squared. For d = 1 you have the integral from 0 of dq over q squared: this tends to infinity, no doubt about it. For d = 2, the area element is q dq d theta, just as in two dimensions the area element is r dr d theta, so there is a q dq divided by q squared, which is like dq over q: this also tends to infinity, logarithmically. So in both 1 and 2 dimensions this is how you show that the walk is definitely recurrent. In d = 3 you can see the integral becomes finite: you have q squared dq divided by q squared near the origin, which is finite. So this is the reason why return to the origin is sure in one and two dimensions, where the unbiased walk is recurrent, while in three and higher dimensions it is transient.
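The dimension dependence can also be seen directly by simulation; here is a rough Monte Carlo sketch (plain Python, seeded; the function name `return_frac` and all cutoffs are mine) estimating the fraction of walks on Z^d that revisit the origin within a finite horizon. In d = 1 this fraction is already close to 1, in d = 2 it creeps towards 1 only logarithmically, and in d = 3 it saturates near the simple-cubic return probability of about 0.34:

```python
import random

def return_frac(d, n_steps=1000, n_walks=2000, seed=1):
    # fraction of unbiased walks on Z^d that revisit the origin within n_steps
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        pos = [0] * d
        for _ in range(n_steps):
            i = rng.randrange(d)                 # pick an axis
            pos[i] += rng.choice((-1, 1))        # unbiased step along it
            if not any(pos):                     # back at the origin
                hits += 1
                break
    return hits / n_walks

f1, f2, f3 = return_frac(1), return_frac(2), return_frac(3)
assert f1 > 0.95        # d = 1: recurrent, return is (slowly) sure
assert f2 > f3          # d = 2 recurrent but log-slow; d = 3 transient
assert f3 < 0.5         # consistent with the cubic-lattice value ~0.34
```

The finite horizon understates the true return probabilities in d = 1 and 2, which is exactly the "infinite mean recurrence time" caveat from the start of the lecture.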
For the actual total probability of return, or when you must compute first-passage times and so on, you must compute this integral, compute Pi_{00}(z), and then extract the coefficients of z, or put z = 1 to get the total probability of return. Remember very carefully that the quantity itself, the sum from n = 1 to infinity of P(R, n | 0), is not a sum over probabilities of mutually exclusive events; it is not bounded by 1, and can be as large as you please, which is why it can diverge. On the other hand, the corresponding sum of F(R, n | 0) cannot diverge; it cannot be bigger than 1. If it equals 1, you know the event of first return is a sure event; otherwise it must be less than 1, by its very structure. In the general case we saw a relation like Phi_{kj}(z) = Pi_{kj}(z) / [1 + Pi_{kk}(z)], where Pi_{kj}(z) is the generating function, the sum from n = 1 to infinity of P(k, n | j) z^n. Put z = 1: the left-hand side, the sum over first-passage probabilities, cannot diverge, whereas Pi may well be divergent, and that is what we have shown here for 1 and 2 dimensions. If instead it is convergent, then we need that ratio to find out what the first-passage or return-time distribution, or its generating function, is like. I hope this is clear: Pi is not a probability, while Phi at z = 1 is a sum of first-return probabilities, and that total cannot exceed 1; we have been careful about the distinction. Computing the one requires computing the other, and the behaviour of Pi dictates whether the walk recurs or not. Alright, so much for the return probability. In the one-dimensional case we actually saw what the
return distribution was: for return to the origin in 1d, with alpha = beta = one half, we found Phi_{00}(z) = 1 − sqrt(1 − z²); that is the result we got for the generating function of return to the origin. Now of course you can expand this in a power series. It is clearly singular at z = 1, a square-root branch point, but straight away you can see that putting z = 1 gives 1, which means the walk is recurrent in the unbiased case. The coefficient of z^k gives the probability that the first return occurs at time k. The fact that this is a function of z² tells you that only even powers of z can appear; that is a reflection of the fact that, starting from the origin on a linear lattice, you can get back to the origin only in an even number of steps, so the first-return probability is 0 for odd n. Expanding by the binomial series: 1 − (1 − z²)^{1/2} = z²/2 + z⁴/8 + ..., where the z⁴ coefficient comes from (1/2)(1/2 − 1)/2! = −1/8, with the overall minus sign making it +1/8. All the terms are positive; they have to be positive, since they are probabilities, and at z = 1 they sum to 1.
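The expansion coefficients can be generated exactly with rational arithmetic; a small sketch (plain Python; the function name `f_first_return` is mine), using the generalized binomial coefficient (1/2 choose m):

```python
from fractions import Fraction

def f_first_return(m):
    # Coefficient of z^(2m) in Phi_00(z) = 1 - (1 - z^2)^(1/2), valid for m >= 1:
    # the probability that the first return to the origin occurs at step 2m.
    b = Fraction(1)
    for k in range(m):                      # build binom(1/2, m)
        b = b * (Fraction(1, 2) - k) / (k + 1)
    return -b * (-1) ** m

assert f_first_return(1) == Fraction(1, 2)
assert f_first_return(2) == Fraction(1, 8)
assert f_first_return(3) == Fraction(1, 16)
# positive coefficients whose partial sums crawl towards 1 (recurrence):
s = float(sum(f_first_return(m) for m in range(1, 60)))
assert 0.9 < s < 1.0
```

The slow approach of the partial sums to 1 (the tail falls off only like 1/sqrt(m)) is the generating-function counterpart of the infinite mean first-return time.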
Now, the series as it stands, this binomial series, converges for |z| less than or equal to 1; it has the square-root singularity at z = 1, but is quite finite at that point. And now you can check the coefficients directly. The probability that you first come back to the origin at the second step is the coefficient of z², namely one half, and that is completely trivial to establish by enumeration: on the lattice, with site 0 here, 1 here, and −1 here, there are only two ways of returning at the second step. One is to go to 1 and come back, and the probability of that is one fourth, because you have a factor one half to go there and one half to come back instead of going off the other way; the other is to go to −1 and come back, another one fourth; and one fourth plus one fourth is one half. So that is immediate enumeration. Similarly, F(0, 4 | 0) = 1/8. What are the possible ways of doing this? With sites 0, ±1, ±2, and so on, we want the first return to the origin on the 4th step. Clearly you cannot just go out and back twice, because then you are already back at the second step. So you have to go to 1, then to 2, then back to 1, then back to 0; each of these 4 steps carries a probability factor one half, so that path has probability 1/16. The only other way is the mirror image through −1 and −2, another 1/16; add the two and you get 1/8. The next coefficient is going to have non-trivial factors; there is going to be a 3 factorial in the denominator and things like that, because there is more than one way of doing it: you could go out to distance 3 and come straight back in 6 steps, or you could go out to 2, dip back, and go out again, so you have to allow for all
those factors; in other words, you have to enumerate all these walks. But the generating function is doing this automatically: once you write the binomial expansion, the matter is over, because the coefficient is automatically an enumeration of walks; you have counted all distinct paths such that at the end you are back at 0 for the first time. To come back in 4 steps: after step 1 you are at 1, you cannot come straight back, so you must go to 2 and then return; there is no other possibility, so there are only 2 such walks. But the moment you go to 6 steps and beyond, you have more possibilities, and in general, for 2n steps, the number of these walks increases. Incidentally, that also brings me to the following, and this is a good place to do it, because it leads to the concept of renormalization of random walks. We saw already, in the case of a linear lattice, that on average it takes 4 times as long to go twice as far. We put barriers at the two ends, at plus and minus j, and I said the mean first-passage time from 0 to a barrier is just j² on this lattice; we solved a set of difference equations to get that. Recall the setup: here is 0, here is 1, up to j, and on the other side −1 down to −j, with a barrier at each end, and starting from 0 the mean time to reach either barrier, a distance j away, came out to be j². This means that if I increase j to 2j, the answer goes from j² to 4j², four times as long; that is the reason the walk dimension was 2 in this problem. But you
need not have done it that way. You could just have taken a lattice starting at 0 on one side, asked for the time to go here, then asked for the time to go twice as far, and exploited the fact that this structure is in some sense self-similar. I can make it a self-similar one as follows, just as with the Sierpinski gasket: instead of considering lattices with one site, then two sites, three sites, four sites, and so on, I can rescale it. I start with sites 0, 1, 2, pretend there are bonds between them, and I decorate it by putting an extra site in the middle of each bond and then blowing the whole thing up; so the next step is to double the size, putting a site here and a site here and doubling the lattice. Do the same thing again: how many sites will there be? There are 4 more to add to the existing 5, so it is going to be 9 next time, and the time after that 17; it is 2^n + 1, the plus 1 for the site in the middle, and the 2^n because I am doubling each time. So this structure is now self-similar: although the original lattice is a regular Euclidean lattice, if I consider lattices with 3 sites, then 5, then 9, and so on, 2^n + 1 sites, that is a hierarchical structure, and you can play the same game on it and discover that the passage time scales the same way at each stage. In fact, once I know that, I can do it even more cheaply. I start at 0 and ask: what is the mean time to go to 1? It is 1 unit. Now I ask: what is the mean time to go to 2, and what answer should I get? I should get 4 for the average. How does that happen? In the first case, to go from 0 to 1 there is just one walk: I step there with probability 1 and I am finished, so the mean time is just the deterministic time, 1. The next time around I
can start at 0, go to 1, come back, any number of times, and then finally go on to 2; so I should be able to prove, by summing over all the time steps and all the walks, that I end up with the answer 4. To reach 2 I can only do so in an even number of steps; I cannot do it in 0 steps or in any odd number of steps, so you have to sum over 2-, 4-, 6-, ..., 2k-step random walks, and for each one of them find the time taken, multiply by its probability, and calculate the average; the answer should be 4. For instance, how many 4-step walks are there to reach 2? I go to 1, and since I must take 4 steps I cannot go on to 2 yet; I must come back, that is the second step, then out again, third step, and arrive, fourth step. There is only one such walk, and it is weighted by its duration, 4, since the number of steps is the time. How many 6-step walks are there? I have to go 1, 2, 3, 4, 5, 6, dipping back twice: again just one such walk. So each count is trivial: in a 2k-step walk the last 2 steps must be 1 then 2, finished, which leaves 2k − 2 steps, that is k − 1 pairs of going back and forth, and that is all. And each time you are at site 1, remember, the probability of jumping either way is one half; you have to be careful about that. So I leave it to you as an exercise to show that the weighted average, the mean time to reach 2, is 4: you must count all the walks carefully and weight them with the appropriate probability factors. Now, if you did the same thing on the two-sided lattice, it takes us to the problem we want. I have 0, 1, and −1, I start at the origin with barriers at plus and minus 1, and I ask: what is the mean time to hit a barrier? The answer is 1, because whether I go right or whether I go left I
hit a barrier, at unit distance, and that is it. So in this case t_0, the mean time to hit the trap, is 1. Now from this I go to the doubled lattice: 0, then ±1, then ±2, with the traps at ±2. What is t_0 now? It should be 4; this is what we have been saying all along. You can get this either by writing down the difference equation, as we did for a Markov walk, or, better still, by actually counting all the walks with their appropriate probabilities. I want to do the counting for a specific reason: I want to generalize to non-Markov walks, so I do not want to use the Markovian backward Kolmogorov equation with its nice difference equations; what we are going to do applies to a more general kind of random walk. So how do I figure out when I am going to hit a trap, and what are all the possible walks? Again, remember, to hit +2 or −2 you can only do so in an even number of steps. So suppose you do so in 2r steps. The last 2 steps must be straight out, 0 to ±1 to ±2; that is the only way you can get there for the first time from the origin. So clearly what is happening is that you have 2(r − 1) steps consisting of excursions, 0 to +1 and back, or 0 to −1 and back, interchanging between these two, and then at the end you jump straight out. So to count the walks from the origin that first hit +2 or −2 at the 2r-th step, you have to enumerate all such walks: the last 2 steps are fixed, depending on which trap you hit, while the preceding steps are excursions to +1 and back to 0 or to −1 and back to 0, because you must not hit ±2 before step 2r; you want to hit it for the first time at the 2r-th step. Adding up all such walks, to a given trap there are 2^{r−1} of them, because there are r minus
1 pairs, and each pair is determined by a binary choice, right or left; therefore you have 2^{r−1} such walks. So we will use this fact; I will come back to it to show how a walk on a linear lattice can be renormalized even in the more general case of non-Markov walks. Now let us switch gears and ask what a non-Markovian walk is, and we will do this in a specific framework called continuous-time random walks. This is a technical term used in the physics literature; it does not just refer to the fact that time is continuous, because we have already looked at Markovian walks where time was a continuous variable. What is denoted by a continuous-time random walk is a very specific non-Markovian generalization of a Markovian random walk in continuous time. The problem is like this. Go back to the original one-dimensional random walk problem; we did this in the Markov case in several ways. First we took a one-dimensional lattice, discrete space, and discrete time as well, and asked: what is the probability that, starting at the origin, you are at site j at time step n? We found the solution: given n time steps, the random variable is j, the site where you end up, and the distribution of j is the binomial distribution, C(n, (n + j)/2) times (1/2)^n in the unbiased case, which we take for simplicity. And what are the conditions here, given n, for this probability? What should j be? It is an integer, of course, but are there restrictions on it? It should be less than or equal to
n in magnitude, so |j| ≤ n; and n and j must have the same parity, because after an even number of time steps you can only be at an even site, not an odd one, and vice versa; P is equal to 0 otherwise, and equal to this binomial distribution when the conditions hold. So that was one way of doing it. Then I said: let us look at this problem with space still discrete, but in continuous time, and I wrote equations with a time derivative on the left-hand side. So we had dP(j, t)/dt = (lambda/2)[P(j − 1, t) + P(j + 1, t)] − lambda P(j, t): you had to jump in from the site j − 1 or the site j + 1, each with mean rate lambda/2, and those are the gain terms; but you can also jump out of j, with total rate lambda, and that is the loss term. So I had a set of differential-difference equations of this kind, with t continuous, and the statement was that on the time axis these jumps occur completely at random, dictated by a Poisson process with mean rate lambda, where lambda/2 is the mean rate of jumps to the right and lambda/2 the mean rate of jumps to the left, because it was an unbiased walk. So what is the solution to this? In the discrete case we got the explicit binomial distribution; what is it here? There is no time step now, so there is no question of even or odd numbers of steps; time is continuous. I solved this by finding a generating function: G(z, t) = the sum of P(j, t) z^j from j = −infinity to infinity. It is not a power series in positive powers of z, because j runs from −infinity to infinity; it is a Laurent series, but that does not matter. And what did this
give you finally we got a simple equation for this guy for this g and then I write out the coefficient in fact the answer I got for this was e to the minus lambda t e to the power lambda t z plus 1 over z that is the answer I got for that and from there I said okay I now look at the coefficient of z in it and what was the solution this got solved and p of j, t starting from the origin of course was equal to e to the minus lambda t the modified Bessel function of index j and i minus j is equal to i plus j because this is completely symmetric with j to minus j unbiased walk if I had a bias then I had an alpha over beta to the power j over 2 sitting there and then this was lambda t is twice lambda t square root of alpha beta but in the unbiased case you had this. So discrete space discrete time discrete space continuous time again a Markov process that was a Markov chain there is a Markov process and then I went to the continuum limit in the space variable by putting a lattice constant and putting in limit. So I said j times a tends to x this is a lattice constant and I said lambda a squared over 2 tends to a finite number D. So lambda tends to infinity a tends to 0 such that this fellow tends to D and I got the diffusion equation. So the solution for the probability density function in x was the famous Gaussian e to the minus x squared over 4 d t and so on. This equation this thing here went over into delta p of x t over delta t d to p of x t and I found the Gaussian solution to this. So what really happened was that from this if you know went over to continuous time by saying that these jumps are happening at random instance of time guided by some process uncorrelated jumps then you end up with a Markov process whose solution is this Bessel function here and if you now went to a continuous space limit as well with this particular limit you had the diffusion equation and its solution was the Gaussian solution. 
So with the same initial condition, it told you that

P(x, t) = (4πDt)^{−1/2} e^{−x²/4Dt}.

That is a continuous Markov process; it is essentially Brownian motion. x(t) is a stochastic process, a Wiener process, which is a non-stationary Markov process, and there is a fundamental Gaussian solution for the probability density.

Now, at what stage can I make this process non-Markovian? One way to do this is the following. Let P_n(j) be the probability that you are at site j at the nth time step, always starting from the origin (I use the subscript n so that the idea is clear, and to distinguish this binomial quantity from the function P(j, t)). But now I say: look, in a given interval of time I might have made any number of jumps. I do not have to jump at uniform one-second intervals or at a fixed time step; the jumps might occur at random, governed by some underlying random process. Then the quantity P(j, t) becomes a summation over n, the number of jumps:

P(j, t) = Σ_{n=0}^{∞} P_n(j) w(n, t),

where w(n, t) is the probability of making exactly n jumps, n lattice steps, in time t. What is it that we are doing? The same random walk in discrete space, where at the end of every time step you flip a coin and move right or left, gave us P_n(j), the probability of being at the point j in n steps; that was the binomial expression C(n, (n+j)/2)(1/2)^n. But now any number of steps could have been taken in a given continuous time t, because the steps are being taken randomly. All I have to do is say: the probability of reaching the geometrical point j in n steps is that combinatorial factor with its probability factor (because at each step I can go right or left), multiplied by the probability that you take exactly n steps in the given time t; then you sum over all n, over all possibilities, and you are guaranteed to get the probability of being at site j at time t, because you could have reached it in any number of steps, going back and forth. All the constraints, |j| ≤ n, the parity condition and so on, are included in P_n(j); w(n, t) is quite independent of them.

Under what circumstances, for what kind of w, do you get back the Markov process? A Poisson distribution, of course, because that is the whole point: the jumps happen in a completely uncorrelated way, and the only way the walk can be a Markov process is if the number of jumps in time t is Poisson distributed with the same mean rate λ. That is,

w(n, t) = [(λt)^n / n!] e^{−λt}

if and only if the walk is Markovian. Whatever the choice of w, it must be normalized: at any given time t you must certainly have Σ_{n=0}^{∞} w(n, t) = 1 for any positive t, because at any given time you must have made either 0 jumps, or 1 jump, or 2 jumps, and so on. So for each given t this is a probability distribution in the random variable n, which takes on non-negative integer values. When it is a Poisson distribution with mean rate λ, that is exactly equivalent to saying that the random walk in continuous time is a Markov process satisfying the differential equation above. Any other choice of this probability distribution is going to give you jumps which are correlated with each other: the sequence of events on the time axis is no longer uncorrelated, there is some memory. So any functional form other than the Poisson form for w(n, t) gives a non-Markovian random walk, and this is what is called a continuous time random walk.

Now, what kind of process do we want here? In principle you can put in anything you like, but we would like something which is a generalization of this Markovian walk. What characterized that walk? The fact that in any time interval dt you had a probability λ dt that a jump occurred in it and a probability 1 − λ dt that none did, while the probability of two jumps in an infinitesimal interval was of order dt² and higher. That was the whole point about the Poisson process. Now the basic point is: what is the waiting-time distribution for a jump, for an event, to happen? That was the crucial quantity, because everything got generated from it. So the question is: starting the clock at 0, what is the waiting-time distribution such that the first jump occurs between t and t + dt? For the Poisson case we know the answer. What is the probability that nothing happens up to time t? It is e^{−λt}. The rate of change of that probability, with a minus sign, is the probability density for a transition, exactly as minus the time derivative of the survival probability gave the first-passage-time distribution. This waiting time, also called the holding time in renewal theory, is some function ψ(t) which must satisfy the following properties. First of all, it is a probability density, so it cannot be negative, and it must satisfy

∫_0^∞ ψ(t) dt = 1,

which means that if you wait long enough there has to be a jump; it is normalized. In the Markov case it must be minus the time derivative of e^{−λt}, so

ψ(t) = λ e^{−λt},

and you can see that this immediately satisfies the normalization condition. This is the waiting-time density for a Markovian random walk or, more generally, for a Markovian process. Of course, once you say that one event occurs (we saw how to generate the Poisson sequence from the zero-event probability), you can find the probability that one event occurs by multiplying by λ dt and integrating, and so on, and you generate the rest of the Poisson sequence.

So the general statement is this: if you give me an arbitrary non-negative ψ(t) which satisfies this normalization condition, I have a non-Markovian walk in general, but a very special kind of walk, in the sense that it is the same waiting-time density for all the events. Even that need not be true: the waiting time for the first step could be different from the waiting time for the second step or the third step, and then I lose translational invariance in time. But if I say there is a common ψ(t), and the waiting time, independent of the step number, is drawn from this same common distribution, then the problem is much simpler; it is called a renewal process, and it is a generalization of a Markov process. So a non-exponential waiting-time density (a density, because integrated it gives total probability 1) implies a non-Markovian walk.

But what is the great advantage of choosing this common ψ(t)? What will the waiting-time density for n such events be? ψ_1(t) is identically ψ(t), by definition. What is ψ_2(t)? It is clear: in the interval (0, t) I want 2 of these events, and they could happen anywhere. The first happens at some time t′ with density ψ(t′), which leaves time t − t′ for the second to happen, and we already said that the waiting-time density for one event is precisely ψ. So

ψ_2(t) = ∫_0^t dt′ ψ(t′) ψ(t − t′).

And what is this in the form of? A convolution. So you immediately see that we do not have a simple expression for this in the time domain, but we do have one in Laplace space:

w̃(n, s) = [ψ̃(s)]^n.

That is great, because it tells us that we can work in terms of Laplace transforms all the time and only at the end take an inverse transform; it really tells you that we can solve this problem. Look at what happens to P(j, t) = Σ_n P_n(j) w(n, t): taking the Laplace transform,

P̃(j, s) = Σ_n P_n(j) [ψ̃(s)]^n,

which is like the generating function for P_n(j), with z replaced by the function ψ̃(s).
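The convolution structure is easy to check numerically. Here is a small plain-Python sketch (names mine) that evaluates ψ_2(t) = ∫_0^t ψ(t′) ψ(t − t′) dt′ by the trapezoidal rule for the exponential density and compares it with the known closed form λ² t e^{−λt}, the Gamma density for the time of the second event; in Laplace space this is just ψ̃(s)² = [λ/(λ+s)]².

```python
import math

lam = 2.0  # mean jump rate, an arbitrary choice for the check

def psi(t):
    """Exponential (Markovian) waiting-time density: lambda * e^{-lambda t}."""
    return lam * math.exp(-lam * t)

def psi2(t, steps=2000):
    """psi_2(t): convolution of psi with itself over [0, t], by the trapezoidal rule."""
    h = t / steps
    total = 0.5 * (psi(0.0) * psi(t) + psi(t) * psi(0.0))
    for k in range(1, steps):
        tp = k * h
        total += psi(tp) * psi(t - tp)
    return total * h

t = 1.3
print(psi2(t), lam ** 2 * t * math.exp(-lam * t))  # numerical vs closed form
```

The two printed numbers agree to quadrature accuracy, which is the content of the statement ψ̃_2(s) = ψ̃(s)².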
So if I can write down the generating function and substitute ψ̃(s), then in principle I can invert the transform and find P(j, t) without going through any master equations at all; we will do this tomorrow. So you see the strategy, and the idea of how you generalize from a Markov process. Of course, you check all the time by putting ψ(t) = λ e^{−λt} and verifying that the Markov results are recovered. What is ψ̃(s) in the Markov case, when ψ(t) = λ e^{−λt}? It is λ/(λ + s). Remember that ψ̃(0) must be equal to 1, because of the normalization. You can also ask for the mean waiting time: that is ∫_0^∞ t ψ(t) dt, which is minus the first derivative of ψ̃(s) evaluated at s = 0. So once you give me ψ(t), I do not even have to go back to the time domain: I work entirely in terms of Laplace transforms, and I can find any moment I like from ψ̃(s). The thing to do now is to generalize this to higher dimensions. I will show you how to solve this on a general lattice in arbitrary dimensions, by exploiting the fact that for a renewal process the Laplace-space probability of n jumps becomes a power of the transform of a single waiting-time density. That is enough to determine the long-time behaviour of the walk: long times in t-space correspond to s → 0 in Laplace-transform space. We are going to use asymptotic theorems of this kind to discover what happens at long times, whether the walk is diffusive, subdiffusive, or whatever. So this is the general strategy; we will do this tomorrow.
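To make the last point concrete, here is a minimal sketch (plain Python, names mine; the exponential density and a simple truncated trapezoidal quadrature are my own assumptions for the check) that computes ψ̃(s) = ∫_0^∞ e^{−st} ψ(t) dt numerically and verifies the three facts just quoted: ψ̃(0) = 1, ψ̃(s) = λ/(λ+s) in the Markov case, and mean waiting time −ψ̃′(0) = 1/λ.

```python
import math

lam = 2.0  # mean jump rate, an arbitrary choice

def psi(t):
    """Exponential waiting-time density of the Markov case."""
    return lam * math.exp(-lam * t)

def psi_tilde(s, T=60.0, steps=100_000):
    """Laplace transform of psi, truncated at T (the tail beyond T is negligible here)."""
    h = T / steps
    total = 0.5 * (psi(0.0) + math.exp(-s * T) * psi(T))
    for k in range(1, steps):
        t = k * h
        total += math.exp(-s * t) * psi(t)
    return total * h

print(psi_tilde(0.0))                      # normalization: should be 1
print(psi_tilde(1.0), lam / (lam + 1.0))   # compare with lambda / (lambda + s)
eps = 1e-3                                 # mean waiting time = -d psi_tilde/ds at s = 0
print((psi_tilde(-eps) - psi_tilde(eps)) / (2 * eps), 1.0 / lam)
```

The same machinery works unchanged for any normalized ψ(t); only the closed-form comparison lines are specific to the exponential case.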