Last time we looked at some properties of first passage times, and I would like to elaborate on this a little. I mentioned the so-called backward Kolmogorov equation; let me show you explicitly what it is and how you can get information on first passage times and first passage time distributions using it. Now, there are several ways of looking at this problem of finding first passage times, and the canonical problem we looked at was first passage from the origin to some point x for a particle undergoing Brownian motion on the x axis. We already found the distribution of the time at which it hits the point x for the first time: it was a Lévy distribution. I want to show you a little more about how you attack first passage time problems in general; in particular, we would like to look at them on fractals, which I mentioned earlier. But before that, let me show you a couple of ways of finding first passage time distributions. The first has to do with what is called a renewal equation for the first passage time, and it goes like this. It is most comfortably illustrated using a discrete-valued Markov process. Remember that for a discrete-valued Markov process we label the states i, j, k and so on, and we will assume the process to be stationary. The quantity of interest is the conditional probability P that you are in some state k at time t, given that you started from some state j at time 0; k, j, etc. label the states of the system. In particular, if you have in mind a lattice on which diffusion or a random walk is taking place, so that the states are ordered in some sense, then there is a simple equation which does the following. 
Suppose state j is here and state k is out there, with j < k, and we would like to know, for a system starting in state j, the first passage time distribution to reach some state i in between. In this geometry there is no way the system can go from j to k without passing through the state i. So the probability that it is at state k at time t, given that it started at j at time 0, must equal an integral over intermediate times t' from 0 to t of the probability that it propagates to k in the time interval t minus t' from the intermediate state i, times the probability that it hits the state i for the first time at time t', given that it started from j at time 0 (as you know, our notation omits the initial time when it is 0). This is called a renewal equation. It says: the probability that you start in state j at t = 0 and reach the state k at time t equals the probability that you hit the state i for the first time at some time t' between 0 and t, times the probability that you spend the rest of the time propagating from i to the final state k. Note that one factor is a first-passage probability, whereas the other says nothing of the kind: in between, the path could have gone past k, either way, any number of times, and then ended up at state k; but the first-passage factor says you did not hit i before t'. The process renews itself: each time, it is as if the whole thing starts afresh from the point i and propagates onward. It is an example of a renewal equation, satisfied by a class of processes even more extensive than Markov processes, and it immediately tells you how to compute the first-passage distribution, because it is in the form of a convolution. 
So if I define Laplace transforms with respect to time, the convolution immediately implies that p̃(k, s | j) = p̃(k, s | i) q̃(i, s | j), where s is the transform variable: the Laplace transforms multiply each other because the original relation is a convolution. This implies, of course, that q̃(i, s | j) = p̃(k, s | j) / p̃(k, s | i). Notice that the left-hand side has no dependence on the state k at all; k could have been any state to the right of i and the relation would still be valid. You might ask how it is that the k dependence on the right cancels out. There is a theorem for Markov processes which says that if the process is stationary in time, this cancellation is guaranteed, in the sense that the Laplace transform p̃(k, s | j) is the product of a function of (j, s) and another function of (k, s); the function of (k, s) cancels between numerator and denominator. I will not prove that theorem; it is not very hard to establish, but it can be shown that there is no k dependence. So you find p by solving the master equation, or the Fokker-Planck equation for instance, and then that ratio is guaranteed to give you the first passage time distribution: the inverse Laplace transform of this quantity gives q(i, t | j) for any given j. That is a very common, simple and straightforward method of finding the first passage time distribution. Now, some properties of this first passage time distribution should become very clear. Remember that q̃(i, s | j) is the integral from 0 to infinity of dt e^(-st) q(i, t | j). 
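The renewal structure can be checked directly in a simple discrete-time setting. The sketch below (my own illustration, not from the lecture) takes the unbiased nearest-neighbour walk on the integers, with j = 0, i = 2, k = 4, and verifies in exact rational arithmetic that P(k, t | j) equals the convolution of the first-passage probabilities Q(i, t' | j) with the free propagator from i to k:

```python
import math
from fractions import Fraction

def P(d, t):
    """Free propagator of the unbiased walk: probability of displacement d after t steps."""
    if (t + d) % 2 or abs(d) > t:
        return Fraction(0)
    return Fraction(math.comb(t, (t + d) // 2), 2 ** t)

def first_passage(i, tmax):
    """Q(i, t | 0): probability of hitting site i > 0 for the first time at step t,
    computed by evolving the walk with an absorber at i."""
    dist = {0: Fraction(1)}
    Q = [Fraction(0)] * (tmax + 1)
    half = Fraction(1, 2)
    for t in range(1, tmax + 1):
        new = {}
        for x, p in dist.items():
            for nx in (x - 1, x + 1):
                if nx == i:
                    Q[t] += p * half          # first arrival at i exactly at step t
                else:
                    new[nx] = new.get(nx, Fraction(0)) + p * half
        dist = new
    return Q

i, k, tmax = 2, 4, 12
Q = first_passage(i, tmax)
for t in range(1, tmax + 1):
    lhs = P(k, t)                                                    # P(k, t | 0)
    rhs = sum(Q[tp] * P(k - i, t - tp) for tp in range(1, t + 1))    # renewal convolution
    assert lhs == rhs
print("renewal equation holds exactly up to t =", tmax)
```

Because the identity holds term by term in exact fractions, this is the discrete analogue of the statement that the Laplace (here: generating-function) transforms multiply.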
So that is the meaning of the Laplace transform. What should the value of q̃ be at s = 0? It should be equal to 1: setting s = 0 integrates over all time the probability that you hit i for the first time at time t, and if that integral is 1, it means that first passage from the state j to the state i is a sure event. So the way to check that first passage occurs at all, that it is a sure event, is to find q̃ and see whether it equals unity at s = 0. If it is less than 1, then first passage is not a sure event; the total probability lies between 0 and 1 once you have integrated over all t and added up these probabilities. Notice one thing: this t is the time of first passage from j to i, so different values of t are mutually exclusive events; first passage at 10 seconds excludes first passage at any other time. Therefore it makes sense to add them up, and you get the total probability that at some time or other you go from j to i; that is what is normalized there. I will give you a very simple example of a case where it turns out to be less than 1. Similarly, you could ask for the time of return to the origin: that is a recurrence time, intimately connected with first passage, and we will come to it a little later. What is the mean time of first passage from j to i, and how is it defined in terms of this q? You multiply by t and integrate over all t. 
So by definition, t(j → i) is the integral from 0 to infinity of dt t q(i, t | j); that is the definition, assuming the distribution is normalized to unity. Can I extract that from the Laplace transform? A derivative with respect to s pulls down a factor of minus t, so I put in another minus sign and then set s = 0. So it is clear that this is also equal to −d/ds of q̃(i, s | j), evaluated at s = 0. That is a very convenient way of doing things: you work entirely in terms of Laplace transforms, find q̃ from the ratio above, differentiate once with respect to s, set s = 0, put in a minus sign, and you have your mean time. Higher moments can be found similarly: each differentiation pulls down another factor of minus t, so this gives you the higher moments as well. Here is an example of a case where the event of first passage from one point to another is not a sure event. We can do this on a line, or on a linear lattice; let us do it on the lattice itself. 
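As a quick illustration of this recipe in a case where the mean is finite, take diffusion with a constant drift v toward a target a distance a away. The Laplace transform of the first-passage density is then the standard inverse-Gaussian result q̃(s) = exp[a(v − √(v² + 4Ds))/(2D)]; this closed form is quoted from the general literature, not derived in the lecture, and the parameter values below are purely illustrative. A numerical derivative at s = 0 recovers the mean a/v:

```python
import math

D, v, a = 1.0, 0.5, 2.0   # diffusion constant, drift toward target, distance (illustrative)

def q_tilde(s):
    """Laplace transform of the first-passage density for diffusion with drift
    toward the target (standard inverse-Gaussian result)."""
    return math.exp(a * (v - math.sqrt(v * v + 4.0 * D * s)) / (2.0 * D))

assert q_tilde(0.0) == 1.0                    # q~(0) = 1: first passage is a sure event

h = 1e-6
mean_fpt = -(q_tilde(h) - q_tilde(0.0)) / h   # -d q~/ds at s = 0
print(f"mean first-passage time ~ {mean_fpt:.4f}  (exact: a/v = {a / v})")
```

Exactly as stated above: the value at s = 0 checks normalization, and minus the first derivative at s = 0 gives the mean.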
So here is the state j, and here is the state i to the right of it. There is a random walker who jumps one step to the right with probability α (I have been calling it alpha) and one step to the left with probability β: a biased walker. The lattice extends infinitely on both sides; the side beyond i does not matter, but the left side extends all the way to infinity. Then it can be proved that if α ≥ β, first passage from j to i is a sure event; even in the unbiased case α = β it is still sure. But if α < β, it is not a sure event: the probability of going from the state j to a state i to the right of it, against the bias, is less than one. (On an infinite lattice, 1 − |α − β| is the recurrence probability to the origin; the first passage probability is very closely related to it.) But if I put a reflecting barrier somewhere on the left, so that the system does not escape to minus infinity but stays in the region to the right of the barrier, then even with a bias to the left, first passage is still a sure event, because the system is then ergodic to the right of the barrier and will visit all points. The catch is that when α = β, the unbiased case, the mean time to go from any j to any i is infinite. So the interesting thing is that the event is sure, but the average time it takes to happen diverges. I will show you where this divergence comes from when we look at recurrences a little later. What does it imply? It implies that the derivative −dq̃/ds at s = 0 diverges, and you can sort of tell what is 
going to happen. The integral of t q(t) over all t therefore diverges, while the integral of q itself does not: remove the factor of t and integrate, and you get a finite answer, the statement that the event is sure. Put a t in, and because of the range going up to infinity the first moment diverges, and with it that derivative. What does this mean in general? Suppose q̃ is analytic at s = 0 and expand it in powers of s: the first term is the value at s = 0, which we know is 1 if first passage is a sure event; the next term, for an analytic function, is s times dq̃/ds at s = 0, plus higher-order terms. Now, what does it mean to say this derivative is infinite; what does it imply for the function q̃? If, for example, q̃ goes like 1 plus a term of order √s rather than s, then differentiating √s gives 1/√s, which blows up as s → 0. So the symptom of an infinite mean first passage time is that the small-s behaviour of q̃ is not analytic in the neighbourhood of s = 0: if it were analytic, there would have to be a term linear in s, by Taylor's theorem, with a finite coefficient; but the moment the behaviour goes like 1 plus some positive power of s less than 1, you know the derivative is going to blow up, and there is no finite mean first passage time. 
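To make this concrete: for free diffusion the transform of the first-passage density is q̃(s) = exp(−|x − x₀|√(s/D)); this standard closed form is quoted here without derivation, and the parameter values are illustrative. The sketch below shows numerically that (1 − q̃)/√s tends to a constant (the √s coefficient) while (1 − q̃)/s, which would give a finite mean, blows up as s → 0:

```python
import math

D, dx = 1.0, 1.0   # diffusion constant and distance |x - x0| (illustrative values)

def q_tilde(s):
    """Laplace transform of the free-diffusion first-passage density (standard result)."""
    return math.exp(-dx * math.sqrt(s / D))

for s in (1e-2, 1e-4, 1e-6, 1e-8):
    sqrt_coeff = (1.0 - q_tilde(s)) / math.sqrt(s)   # tends to dx/sqrt(D): sqrt(s) coefficient
    would_be_mean = (1.0 - q_tilde(s)) / s           # diverges: no finite mean
    print(f"s={s:.0e}   (1-q)/sqrt(s)={sqrt_coeff:.5f}   (1-q)/s={would_be_mean:.3e}")
```

The first ratio settles down to dx/√D while the second grows without bound, which is exactly the "1 plus a power of s less than 1" behaviour just described.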
We will see what these things look like in specific instances, and similarly for the higher moments; so keep in mind that the Laplace transform is going to be a very useful way of deducing the first passage time distribution itself. The other method, which is very common for continuous Markov processes, is the following. But first, just to be sure we understand what we are talking about, we can write down the first passage time distribution in some cases we have already looked at, like the biased random walk on a line, or diffusion on a line. Take diffusion on the x axis: without loss of generality I start at some point x₀ and want to hit some point x for the first time, with diffusion occurring on the x axis extending infinitely to the left. Since the process is continuous, we now have a first passage time density: q(x, t | x₀) dt is the probability that you hit the point x for the first time between t and t + dt, starting from x₀; it is the probability density of the random variable t. What is it for normal diffusion? It is the Lévy distribution we have been talking about. 
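The density in question is, explicitly, q(t) = |x − x₀| exp(−(x − x₀)²/4Dt) / √(4πDt³). A small numerical sketch (illustrative values; the closed-form cumulative probability erfc(|x − x₀|/√(4Dt)) is the standard result, quoted without derivation) confirms the two facts discussed next: the total hitting probability tends to 1, while the truncated first moment grows like √T:

```python
import math

D, dx = 1.0, 1.0   # diffusion constant and distance |x - x0| (illustrative)

def q(t):
    """Levy first-passage density for free diffusion (stable law, exponent 1/2)."""
    return dx / math.sqrt(4.0 * math.pi * D * t ** 3) * math.exp(-dx * dx / (4.0 * D * t))

# total probability of hitting by time t: standard closed form erfc(dx / sqrt(4 D t)) -> 1
for t in (1e2, 1e4, 1e6):
    print(f"P(hit by t = {t:.0e}) = {math.erfc(dx / math.sqrt(4.0 * D * t)):.5f}")

def truncated_mean(T, n=100000):
    """Midpoint-rule integral of t * q(t) from 0 to T."""
    h = T / n
    return sum((k + 0.5) * h * q((k + 0.5) * h) * h for k in range(n))

for T in (1e2, 1e4):
    print(f"integral of t q(t) up to T = {T:.0e}: {truncated_mean(T):.2f}"
          f"   (~0.56 sqrt(T) = {0.5642 * math.sqrt(T):.2f})")
```

First passage is sure, but the mean hitting time, truncated at T, keeps growing like √T instead of converging.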
You have the usual diffusion propagator: exp(−(x − x₀)²/4Dt) divided by √(4πDt). But you also have a factor (x − x₀)/t, and that is how you get a t^(3/2) in the denominator; that gives you a Lévy distribution in t, a stable distribution with exponent 1/2, as I have been saying throughout. So the density is just the propagator multiplied by this factor, which looks almost like a velocity: the distance you have to go over the time. What is the mean time in this case? Infinity. How do we see that immediately? You multiply the density by t and integrate up to infinity; that is going to be your mean time to go from x₀ to x. (By the way, if x had been to the left of x₀, with an infinite expanse on the right, the factor is |x − x₀|; there is no bias in this walk, so it does not matter which way.) What is this integral going to be? Look first at t → 0: the exponential factor goes to zero very strongly, like e to the minus a positive number over t, while the prefactor blows up like 1/t^(3/2); but e^(−1/t) goes to zero much faster, so the product goes to zero, no problem there. At infinity, the exponential goes to unity but the density falls off like 1/t^(3/2), so with the extra factor of t for the first moment you are stuck with an integral of dt/t^(1/2) up to infinity, which diverges like the square root of the upper limit. So this is a first passage problem in which you have a sure event (it is possible to check that this is a normalized Lévy distribution, with integral equal to 1), but at the same 
time, the mean time is infinite. In this instance, if you actually computed the Laplace transform of the density, you would discover that it is not analytic at s = 0: it starts with 1, and the next term is of order √s, so differentiating pulls out a 1/√s which blows up at s = 0, exactly as you can see directly from the integral. Now suppose you did this on a lattice, in discrete space: we can write down the first passage time distribution there too, no problem. Say you start at some state j and here is the target state; without loss of generality put the starting point at the origin, j = 0, and ask for the mean time to reach the target for the first time. So as not to confuse the state label with the square root of minus 1, let us call the target site k rather than i. First I need the probability distribution itself: if there is a bias α to the right and β to the left, the probability of being at the state k at time t, given that you started at the origin at time 0, is a quantity we found explicitly. We are in continuous time but discrete space, with jumps happening at some rate λ; the displacement follows a Skellam-type distribution, but we need to know what the exact damping factor in front is. 
So if you recall, this is equal to e^(−λt) I_k(2λt√(αβ)) times (α/β)^(k/2); that was the distribution. Quite right, it is related to a Skellam distribution: the walk is the difference of two Poisson processes, a right-jump process with rate λα and a left-jump process with rate λβ. Exactly as in the continuous case, it turns out that the corresponding Q(k, t | 0) equals (k/t) times this same quantity; it is almost like distance over time again. Now things are a little interesting, because when α ≠ β there is a damping factor coming from the exponential. What you need to do is integrate this over t and find out whether the integral equals 1 or not, and my claim is that if α ≥ β the integral is 1, but if α < β the integral is less than 1. So if you have a forward bias you are sure to get there; with no bias you are still sure to get there; but with a bias in the other direction, the probability of reaching that point on the right is less than 1. The first passage time is then not a proper random variable, since its distribution is not normalized to unity. You can still find the probability that the walker ever reaches the point, and take the mean time to reach it over those realizations that do; that is what makes sense to do, but it is a separate exercise. What happens when there is no bias at all? We saw in the continuous case that first passage was sure but the mean time was infinite. Going to a discrete lattice should not change that property, and indeed it does not, as we can see directly: when α = β = 1/2, the factor of 2 in the Bessel argument cancels and you get I_k(λt), which asymptotically goes like e^(λt) over 
the square root of λt (up to a constant factor). The factor e^(λt) cancels against e^(−λt) and you are left with 1/√t; putting this into Q = (k/t) times the distribution gives 1/t^(3/2), and multiplying by t for the mean you are back to 1/t^(1/2), which diverges in exactly the same way. So whether space is discrete or continuous does not matter: the mean is still divergent. Now, for continuous processes there is another way of finding the first passage time, and it has to do with the backward Kolmogorov equation; let me state what that is and show how it is related to first passage. Recall that we introduced the forward Kolmogorov equation, or Fokker-Planck equation, by saying: if you have a random variable satisfying a stochastic differential equation of the form ẋ = f(x) + g(x) η(t), where η(t) is a Gaussian white noise, then the conditional probability density p(x, t | x₀, t₀) satisfies a partial differential equation in x and t. If the process is stationary, a necessary condition is that f and g be independent of time, but, as we know, that is not sufficient; let me do this for a stationary process, although you can write the same thing down in general. The density satisfies the forward equation: ∂p/∂t equals minus ∂/∂x of f(x) p, plus a second-derivative (diffusion) term in x involving g², and the task is to solve it with some initial condition and some boundary conditions. This is the forward Kolmogorov equation, which we also call the Fokker-Planck equation: a general diffusion process specified by this stochastic differential equation corresponds to a continuous Markov process, which corresponds to this Fokker-Planck equation for the conditional probability density. 
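Before turning to the backward equation, the lattice claim above, that against the bias the integral of Q is less than 1, can be checked by direct simulation. In the sketch below the closed form (α/β)^k for the probability of ever reaching site k > 0 when α < β is the standard gambler's-ruin result, quoted without derivation, and all parameter values are illustrative:

```python
import random

random.seed(1)
lam, k = 1.0, 2            # jump rate and target site (illustrative)

def hits(alpha, T=200.0):
    """One realization: right jumps with probability alpha, left with 1 - alpha,
    jump times Poisson with rate lam; True if site +k is reached before time T."""
    t, x = 0.0, 0
    while t < T:
        t += random.expovariate(lam)
        x += 1 if random.random() < alpha else -1
        if x == k:
            return True
    return False

N = 10000
p_against = sum(hits(0.3) for _ in range(N)) / N    # biased away from the target
p_with = sum(hits(0.7) for _ in range(N)) / N       # biased toward the target
print(f"alpha=0.3: P(ever hit +{k}) ~ {p_against:.3f}   (alpha/beta)^k = {(0.3 / 0.7) ** k:.3f}")
print(f"alpha=0.7: P(ever hit +{k}) ~ {p_with:.3f}   (sure event)")
```

With the bias toward the target the estimated probability is 1, as the renewal analysis says; against the bias it saturates well below 1.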
Now, it turns out that you can also show, from this same equation, that p satisfies another equation in which the initial position and time are regarded as the variables, rather than the final ones. That equation reads: minus ∂/∂t₀ of p(x, t | x₀, t₀) equals the adjoint of the forward operator acting on the x₀ dependence, namely f(x₀) times ∂p/∂x₀, plus a term with g²(x₀) times the second derivative ∂²p/∂x₀² (with the same numerical factor as in the forward equation). So for given x and t, this is the equation satisfied with x₀ and t₀ as the independent variables; it is called the backward Kolmogorov equation, and now you will see how it helps us find first passage time distributions for going from some x₀ to some point x. Notice, in particular, that the Ornstein-Uhlenbeck process is included here: for the Ornstein-Uhlenbeck process g is a constant and f is just a linear function of x, a constant times x with a minus sign. 
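Written out (a sketch consistent with the transcript's notation, for the SDE ẋ = f(x) + g(x)η(t) with Gaussian white noise η; the diffusion term conventionally carries a factor 1/2, though the transcript does not fix the convention), the pair of equations being described is:

```latex
% Forward Kolmogorov (Fokker-Planck) equation: x, t are the variables
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[f(x)\,p\bigr]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[g^{2}(x)\,p\bigr],
\qquad p = p(x,t \mid x_{0}, t_{0}).

% Backward Kolmogorov equation: the adjoint operator, in the initial variables x_0, t_0
-\frac{\partial p}{\partial t_{0}}
  = f(x_{0})\,\frac{\partial p}{\partial x_{0}}
    + \frac{1}{2}\,g^{2}(x_{0})\,\frac{\partial^{2} p}{\partial x_{0}^{2}}.
```

Note that in the backward equation the coefficients sit outside the derivatives: that is what "the adjoint of this operator" means here.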
Now, the way it works is like this. You have x₀ here and x there, and I would like to find the time of first passage from x₀ to x. What I have is the probability density p(x, t | x₀), but that only says you are in an interval dx about the point x at time t; the path could have gone back and forth in between, so this is not the first passage time density, just the probability density function. If, on the other hand, you declare that the walk ends the moment it hits x, then what you have to do is put an absorbing barrier there and ask for the first time the path hits the absorbing barrier: that is the first passage time we want. So you need to solve the diffusion problem with an absorber at x: the original Fokker-Planck equation has to be solved with the boundary condition that P vanishes at x, and not a free boundary condition saying it vanishes at x = plus infinity or anything like that. Let us call that solution the absorbing solution, and write a subscript "abs", just to make sure we know what we are talking about. The equation by itself has many solutions; to make the solution unique you need to put boundary conditions, and the boundary condition of relevance to me is an absorber at x. 
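As a numerical sketch of where this is heading (illustrative values: D = 1, start at x₀ = 0, absorber at x = 1, and a far boundary at −8 standing in for minus infinity): solve the diffusion equation with the absorbing condition by explicit finite differences, and the rate at which total probability is lost reproduces the Lévy first-passage density:

```python
import math

D, x0, xa, L = 1.0, 0.0, 1.0, -8.0   # diffusion const, start, absorber, far boundary
dx, dt = 0.05, 0.001                 # grid; explicit scheme needs dt <= dx^2 / (2 D)
n = int(round((xa - L) / dx)) + 1
p = [0.0] * n
p[int(round((x0 - L) / dx))] = 1.0 / dx   # delta-function initial condition

def step(p):
    """One explicit Euler step of dp/dt = D d2p/dx2 with p = 0 at both ends."""
    r = D * dt / dx ** 2
    q = p[:]
    for i in range(1, n - 1):
        q[i] = p[i] + r * (p[i + 1] - 2.0 * p[i] + p[i - 1])
    q[0] = q[-1] = 0.0               # absorber at xa; far boundary effectively at -inf
    return q

t = 0.0
while t < 0.5 - 1e-12:               # evolve the absorbed solution to t = 0.5
    p = step(p)
    t += dt

S = sum(p) * dx                      # survival probability S(t)
S_next = sum(step(p)) * dx
q_num = (S - S_next) / dt            # first-passage density q = -dS/dt

a = abs(xa - x0)
q_levy = a / math.sqrt(4.0 * math.pi * D * t ** 3) * math.exp(-a * a / (4.0 * D * t))
print(f"q from -dS/dt: {q_num:.4f}   Levy density at t = 0.5: {q_levy:.4f}")
```

The survival probability S(t) used here is exactly the integrated absorbing solution discussed next, and minus its time derivative is the first-passage density.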
Let me assume I can solve this equation in principle; then the rate at which the particle hits this point is going to be my first passage time density. I want to find q(x, t | x₀) explicitly, and I argue in the following way. Take the backward equation for the absorbing solution. If the process is stationary, everything is a function of t − t₀, so I can write ∂/∂t₀ as minus ∂/∂t; the left side becomes ∂/∂t of P_abs(x′, t | x₀), where I now call the final point x′ and have set t₀ = 0 (it is completely stationary, so t₀ is gone). The right side is f(x₀) times ∂/∂x₀ of P_abs(x′, t | x₀), plus the second-derivative term in x₀. Now let us integrate both sides with respect to x′ from minus infinity up to x, the position of the absorber; remember the walk ends once it hits x, so P_abs vanishes at x and beyond. The integral of dx′ P_abs(x′, t | x₀) from minus infinity to x can be taken inside the derivatives, because the derivatives are with respect to t and x₀ while the integration is over x′ alone. So I end up with an equation for that integrated quantity; x′ is finished, it is integrated over, and the answer is a function of t, of x₀ of course, and of x, where the barrier is. What is the physical meaning of that quantity? It says you start at x₀ at t = 0, and you are integrating the probability density over the entire range up to x, 
to the left of x. So what is this equal to? Integrating the density over x′ up to x gives the probability that your random variable x(t) is less than or equal to x, given that you started with x(0) = x₀. It is the probability that you started here and at time t you are still in this range: the survival probability, the probability that you have survived in the safe region to the left of the trap up to time t, that the walk has not yet been absorbed. Call it S(t); it is a function of t, of x₀ and, of course, of x. If this is the survival probability at time t, what is the rate at which probability is disappearing? 1 − S(t) is the total probability of having been absorbed by time t, so the density we want is its derivative: q(x, t | x₀) = d/dt of [1 − S(t)] = −dS/dt; put a minus sign on the derivative of S, and you are guaranteed that this quantity equals q(x, t | x₀). So you have an equation for this object. It is convenient to use Laplace transforms; things become easier, you get an ordinary differential equation in x₀, which you solve with suitable boundary conditions, and you have your first passage time distribution. This is how the backward Kolmogorov equation tells you about first passages, not just for the ordinary diffusion problem but for anything with a stochastic differential equation of this kind, in particular things like the 
Ornstein-Uhlenbeck process. It tells you when any point is hit for the first time, and what the distribution of that first passage time is; it does not tell you in advance that first passage is sure or anything like that: you have to solve the equation to discover what the first passage time distribution looks like. I did this just to show you explicitly how it works; you assume a Markov process here, and the whole construction works. Yesterday we saw on a linear lattice how this machinery gave us the mean first passage time, which we could compute easily; today we have seen explicit ways of finding the full distribution. The problem we looked at yesterday was a walker starting at the origin and hitting either +j or −j, barriers at the two ends; for that you solve the diffusion problem with two absorbing barriers, putting two boundary conditions saying that the walk ends if it hits either +j or −j, and one can write down explicit solutions. But I would like to do a little more. We need to do fractals, so I need to show you how the walk dimensionality changes, and before that let me introduce the idea of a random walk dimension, because we are going to do this on other structures as well, and when we do anomalous diffusion this will become crucial. Yesterday we found that on a linear lattice, if you start at zero and take nearest-neighbour steps to plus or minus one at every time step, then the mean time to reach a site +j or −j for the first time was equal to j². We showed this explicitly starting from zero. So to travel a distance 
j in an unbiased walk, the mean time taken is the square of that distance. In particular, to go twice as far you need four times as long, and you can see that right from the beginning. If I start at 0, with 1 here and −1 there, the mean time to reach ±1 is one step, because in one time step you go to one or the other and the matter is over. But to reach ±2 is a different story, because you can go back and forth any number of times before finally hitting ±2 for the first time; using the general formula j², the answer is 4. So to go twice as far takes four times as long: that is the moral of the story. The question is how to define a walk dimension from this, and the way I do it is as follows. You say that to go a distance r takes a time t with r² ∼ t^(2/d_w). Then for two different distances, r₁²/r₂² = (t₁/t₂)^(2/d_w), and this d_w is called the random walk dimension. If I go twice as far, r₂ = 2r₁, this implies 1/4 = (t₁/t₂)^(2/d_w), which says 2 log 2 = (2/d_w) log(t₂/t₁), or d_w = log(t₂/t₁)/log 2. So we have a general definition of the walk dimensionality: take two distances, one twice the other, find the mean times taken to reach them, take the log of the ratio, and divide by log 2; that is the walk dimension. In the case we just looked at, t₂/t₁ was 4, so you get 2 log 2 over the 2 over d_w factor, and d_w = 2, which we always knew, because r² goes like t to the power one. So for normal 
random walks, the standard diffusive processes, d_walk is equal to 2, always, and this is independent of the dimensionality of the space in which things happen: for Brownian motion the random walk dimension is 2. But now we are going to look at structures where this is not 2; it is going to be greater than 2, and the way that happens is that the structure has little complications, nooks and crannies where the particle can get lost for a long time, and that changes the dimensionality itself. We will look at that explicitly, but to set the stage we need to go back to what these structures look like; they are called fractals, or hierarchical structures. Let me first ask whether you are already familiar with the idea of fractals; if you do know what a fractal is, I will go ahead and use it, but let me give a quick introduction to this business anyway. Very roughly speaking (there are many ways of defining it), a fractal is obtained by starting with some geometrical construction and iterating it over and over again, in such a manner that the figure you get is self-similar: with suitable magnification or de-magnification, depending on how you look at it, it is exactly the same as the previous figure. There are also cases, random fractals, where the self-similarity is statistical rather than exact. It is a little like saying that a sufficiently irregular landscape, looked at more closely, is equally irregular; if the statistical similarity of the landscape remains the same under magnification, you would say it is a fractal of some kind. The more precise definition, which follows, takes into account the fact that these fractal objects are generally not smooth 
objects because of the manner of construction they would be irregular objects they would be curves if they are curves they are not differentiable if their surfaces they are not smooth they are rough on all scales and so on and the simplest of these is the so-called Koch curve the triadic Koch curve and the construction proceeds as follows so you start with a line of unit length okay and then you put a kink on it so you take one third of this and then you put a kink on it so this is still 0 and 1 but you put a little kink on it each of these is of size one third so here is one third here is one third here is one third here is one third in length but now you got a curve of length four thirds okay and the next step you do the same thing you take this little segment and you put a kink on it that segment you put a kink on it and so on this is still 0 to 1 but now the length of this curve has increased of course because this portion is 1 over 9 the original portion as you can see and there is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 of them so the total length is now 16 over 9 and you keep doing this and it is quite clear that between 0 and 1 you will now have an extremely kinky object which will not be differentiable anywhere in the limit and whose actual length is infinite actual physical length will diverge and the question is how does it diverge with the scale factor now it is clear that this fellow is one third the original guy so what has happened is that the original length unit interval has been broken up into four pieces each which is each of which is one third the original piece so the scaling down is by a factor one third but the number of pieces has increased and it has become four and not three but remain three then of course nothing there going on right. 
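The generation-by-generation bookkeeping of the triadic Koch construction can be checked with a short sketch (illustrative Python, not part of the lecture; the function name is my own): at generation n there are 4^n segments, each of length 3^(-n), so the total length is (4/3)^n.

```python
from fractions import Fraction

# Triadic Koch construction: at each generation, every segment is
# replaced by 4 segments, each 1/3 as long as before.
def koch_generation(n):
    pieces = 4 ** n                     # number of segments
    size = Fraction(1, 3) ** n          # length of each segment
    return pieces, size, pieces * size  # total length = (4/3)^n

print(koch_generation(1))  # (4, Fraction(1, 3), Fraction(4, 3))
print(koch_generation(2))  # (16, Fraction(1, 9), Fraction(16, 9))
```

The total length grows by a factor 4/3 at every generation and so diverges in the limit, exactly as described above.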
So one can ask: in what way does this length diverge as I keep increasing the generation? That is measured exactly as I did for the walk dimension, again by taking ratios. At any stage, if epsilon is your scale factor, you compute the logarithm of the number of pieces of size epsilon, divided by the logarithm of 1/epsilon; epsilon is less than one, so 1/epsilon has a positive logarithm. This tells you precisely how the length diverges as the generation increases. In this case the limit is very trivial; I can take it at any stage, and the reason is that this is a regular fractal: each generation's relation to the previous generation is exactly the same as the next generation's relation to the present one. Nothing changes at all, so these ratios do not depend on the generation at all; it is a regular fractal. This quantity is called the fractal dimension, written d_fractal, and what is it equal to in the present case? It is log 4 divided by log 3, which is greater than one. This curve (it is also called a snowflake curve if you do the same construction starting from an equilateral triangle) has a name, the triadic Koch curve, and it has a fractal dimension of log 4 / log 3. Now, what is meant by this? This object, in the limit, is a continuous curve; it lies in a bounded region of space, it does not go very far at all, but its length is infinite. Its formal length, if you went along this curve over every nook and corner, is actually infinite, right.
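This ratio of logarithms can be computed directly. A minimal sketch (illustrative Python; the function name is my own) that also shows how the length measured at resolution epsilon scales as epsilon^(1 - d_f):

```python
import math

# Fractal (box-counting) dimension from N pieces, each of size eps:
#   d_f = log N / log(1/eps)
def fractal_dimension(n_pieces, eps):
    return math.log(n_pieces) / math.log(1 / eps)

# Triadic Koch curve: 4 pieces, each scaled down by a factor 1/3
d_koch = fractal_dimension(4, 1 / 3)
print(round(d_koch, 4))  # log 4 / log 3 = 1.2619...

# Apparent length at ruler resolution eps scales as eps^(1 - d_f):
# eps = 1/3 gives 4/3, eps = 1/9 gives 16/9, diverging as eps -> 0.
for eps in (1.0, 1 / 3, 1 / 9):
    print(round(eps ** (1 - d_koch), 4))
```

This epsilon^(1 - d_f) scaling is exactly why a finer foot rule measures a longer curve.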
So what is happening is this. If you took a gross scale, a foot rule with a least count of one unit, for example, then one unit fits here; I put it on this curve, and at that resolution I still see a length of one. But if I have a foot rule with a better resolution, at least one-third, then of course I resolve the kinks, and now I say the length is four-thirds. With an even more refined foot rule, the next time around it is going to be four-thirds squared, sixteen over nine, and so on: the length is actually diverging with the resolution. So these fractal curves have the property that depending on how fine your measurement resolution is, you get a different answer, and this in any units whatsoever; it is not just a question of choice of units. If the original length was one meter, you really see four-thirds of a meter here, and then sixteen-ninths of a meter there, and so on, but you need a finer and finer resolution to see it. For a smooth curve this property is not there at all: no matter what resolution you use, you are still going to see exactly the same length. The topological dimension of such a curve is one, and its fractal dimension is also one. But in our case the topological dimension is one while the fractal dimension is greater than one. This dimensionality has another name in this simple instance: it is called the box-counting dimension. If you want to measure a volume, for instance, what would you do? Take small cubes and see how many cubes fit into the volume; if the cubes are sufficiently fine, all the nooks and crannies will be covered and you will get a certain answer. The statement here is that counting how many such boxes you need tells you what the dimensionality is, and here the box-counting dimension is greater than the topological dimension; it turns out to be log 4 over log 3 in this case.

So this is the idea behind what a fractal is, enough for our purposes: it is a geometrical object for which there is a hierarchical way of constructing it, and the topological dimension of the object is in general smaller than its box-counting dimension. But the box-counting dimension is in all cases less than or equal to the dimension of the Euclidean space in which the object is embedded. Here I have drawn it on a plane; the plane's dimension is 2, so the fractal dimension of this object cannot be greater than 2. It is on a plane, and that is the end of the story. So the fractal dimension is bounded from above by the dimension of the Euclidean space into which you embed the object, and it is bounded from below by the topological dimension of the object itself, and it lies somewhere in between; it could be a fraction or an irrational number of this kind. There are also cases when you have a fractal whose fractal dimension approaches 2: there are constructions of the so-called Peano curves, or space-filling curves, for which the box-counting dimension approaches 2 in the limit. That is the maximum it can be, because these are planar curves; so in that case the fractal dimension happens to be an integer. Brownian motion does that: if I set a particle in Brownian motion on this plane, it does this crazy thing, and I can ask, what is the fractal dimension of this Brownian trail, of this Wiener process? It turns out to be 2, so it is actually space-filling, and that is because you are guaranteed that, with probability 1, every point will be visited, and visited infinitely often; the whole space is filled up, given enough time. This is an example of a statistical fractal. Nobody is saying it is a regular fractal; you cannot do a neat construction of this kind for it, but this object is again a fractal nevertheless, and the fact that the walk dimensionality turned out to be 2 is closely linked to the fact that the box-counting dimension of Brownian motion is 2.

Now, what we intend to do is to look at the problem of a random walk on another fractal structure, one that is a little more interesting than this. The Koch curve is not very interesting in this sense, because it is still a linear object, a one-dimensional object topologically: to go from here to there you have to go along this curve at every stage. We would like something with nooks and crannies and some ramification, more than one way of getting from one point to another, and then we will see whether interesting things happen; if you had loops and things like that, then things become more complex and interesting. So we will look at one of those fractals, called the Sierpinski gasket, as a kind of case study of a non-trivial fractal in which you can get answers that depart from the usual Euclidean-space ones. By the way, just as I defined this fractal curve, you can also define far more complicated objects, and the Sierpinski gasket is one of them; we will then generalize it to any number of dimensions, see what those fractals look like, and do a random walk on them to see what happens. But essential to the whole approach is the fact that this object is self-similar: if you look at this piece and magnify it, it is the same; look at that piece and magnify it, it is the same; or demagnify, go in the other direction, and you get exactly the same thing over and over again, repeated. This self-similarity is the defining characteristic of a fractal object; all the other properties may or may not exist in equal measure, but the self-similarity, either regular self-similarity or statistical self-similarity in some sense, is completely crucial. I will say more about this as we go on. Okay, so let me stop here today and take it from this point.
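As a pointer to the Sierpinski gasket case study announced above: the gasket is built hierarchically by replacing a triangle with 3 half-size copies of itself at every generation, so the same ratio-of-logarithms recipe gives its fractal dimension. A minimal illustrative sketch (Python, my own naming, not from the lecture):

```python
import math

# Sierpinski gasket: each generation replaces every triangle by
# 3 copies, each scaled down by a factor of 2.
def gasket_generation(n):
    pieces = 3 ** n   # number of small triangles
    eps = 2.0 ** (-n) # side of each triangle relative to the whole
    return pieces, eps

# d_f = log N / log(1/eps) is generation-independent for a regular fractal
for n in (1, 4):
    pieces, eps = gasket_generation(n)
    print(round(math.log(pieces) / math.log(1 / eps), 4))
    # log 3 / log 2 ~ 1.585 at every generation
```

The gasket's dimension log 3 / log 2 lies strictly between 1 and 2, and on such a ramified structure the walk dimension turns out to exceed 2, which is the point of the upcoming case study.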