We now move from the central limit theorem to a random walk problem. This is akin to moving from a statistical description of phenomena to a stochastic description. So far, we discussed random processes via statistics: we discussed distributions and sampling, a sample consisting of n elements. The n elements in the sample were not ordered, which is why we called it a statistical description. Now we order them, in such a way that the events are labelled by an increasing sequence of numbers. For example, the coin-tossing process will be designated as a sequence, a discrete sequence in increasing order. When we do that, the very same description that we followed takes on the interpretation of a stochastic process. In fact, much of the mathematics that we use in stochastic descriptions is almost the same as that in statistical descriptions. But sequencing the events in a specific order gives a new dimension to the same set of descriptions, and a better connection to real-world phenomena; that is where the importance of the stochastic description lies. As an example of this connection between the statistical description, which we discussed via the central limit theorem, and the stochastic description, which we are going to discuss henceforth, we start with a simple example: the random walk. This is a very well studied topic, and we take the simplest variety of it, the so-called one-dimensional random walk, further qualified as a symmetric, or unbiased, random walk. So, what is a random walk? First of all, the random walk has a very fascinating history. Around the time Einstein was writing his 1905 paper on Brownian movement, to explain the random motion of colloidal particles, Karl Pearson published a work on the random walk.
In fact, it was on "random flights", in 1905. Later it was pointed out by Lord Rayleigh that work he himself had done earlier is completely mappable onto the random walk problem that Pearson had published. The physical interpretation given by Einstein to the same random walk problem was in terms of the diffusivity of particles, the whole diffusion process. This subject is therefore an underlying topic connecting several phenomena. So, the model of the random walk. What is a random walk? The model, which we will call RW for brevity, is stated as follows. A random walker is a person who has no clue in which direction his house lies on a lane. In fact, some authors call it a drunkard's walk, but that is not necessary; he is simply a person who has somehow lost the memory of which direction his house is in. He knows only that it is on the lane. So he starts taking steps, one to the right, say, another to the left, and every time he takes a decision he still does not know which direction to go. So he randomly decides to take a step either to the left or to the right, and so on and so forth; the process goes on ad infinitum, let us say. Then the question we have to ask is: what statements can we make about where he is going to be after he takes n such steps? We can restate this somewhat idealized problem as a random walk on a lattice. Suppose his lane is equivalent to a lattice and his starting point is 0; then he randomly takes a step either to the right or to the left, with a fixed step length. So we can label all integer distances along the line, the real line, including the negative side, in units of the fixed step length l. This model could, for example, describe an atom which jumps from one lattice position to another, gets adsorbed, and gets re-emitted randomly once again.
So, it jumps either to the left or to the right and proceeds. The problem thus has a close connection with certain physical phenomena as well. He starts from the origin 0, so his initial distance is d_0 = 0, where d is the distance measured from the origin. We can now further specify how to conduct this random experiment in a lab setting. Originally the walker is at 0; then he tosses a fair coin. If the coin falls heads up, he takes a +l step; if the coin falls tails up, he takes a -l step. This is the first step, n = 1. The same thing proceeds again: he tosses the coin in the second step and follows the same rule. We can actually construct the various paths that he will execute over the course of the steps he takes. Consider, for example, the lattice sites 0, 1, 2, 3, 4, 5 (I am omitting the l here because everything is in units of l), and similarly -1, -2, -3, -4, -5. Suppose that in one sequence of 5 tosses he gets H, T, H, H, T; call this path 1. In a second sequence he gets T, T, H, H, H; call this path 2. Mapping out path 1: in the first step he gets a head, so he is at +1. In the second he gets a tail, so he moves by -1 and comes back to the origin. In the third he again gets a head, so at the third step he is at +1; in the fourth, another head, so he moves to +2 (let us keep the dotted convention for the path); and in the fifth he gets a tail, so he moves by -1.
So, at the fifth step he comes back to +1. In another realization, path 2, he gets a tail to begin with, so from 0 he moves to -1. Then again a tail in the second step, taking him to -2. In the third step he gets a head, so he moves by +1 to -1; in the fourth, another head, bringing him back to 0; and in the fifth, a head again, taking him to +1. So various realizations are possible: this is the first, this is the second, and each of these realizations is called a path, or a sample path. Eventually, in order to interpret this statistical phenomenon, we have to take averages, and these averages are called averages over the sample paths, or, in the jargon of stochastic phenomena, ensemble averages. We have to construct a very large ensemble of sample paths to be able to obtain reasonably reliable results. This random experiment also brings us, for the first time, to a new concept of probability called the transition probability. The simple random experiment of tossing a coin, in which there was an equal probability of 1/2 of taking a step left or right, now assumes the meaning of a transition probability when it is indexed by an ordered number. Each time we toss the coin and decide which way to move, it is called a step; hence a transition occurs from the first step to the second step, from the second step to the third, and so on. The transition probability, which we denote by an indexed notation P, is the probability of transiting from an earlier state (the state at the previous step) to a new state. We will discuss this in more quantitative, rigorous terms later.
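The construction of sample paths described above can be sketched in code. This is an illustrative sketch, not part of the lecture: the helper `sample_path` is a name of my choosing, the step length is taken as 1 (that is, in units of l), and path 1 from the text is also traced by hand.

```python
import random

def sample_path(n_steps, step_length=1, seed=None):
    """One realization (sample path) of the symmetric 1-D random walk.

    Each step is decided by a fair coin: heads -> +l, tails -> -l.
    Returns the positions d_0, d_1, ..., d_n, with d_0 = 0.
    """
    rng = random.Random(seed)
    positions = [0]
    for _ in range(n_steps):
        step = step_length if rng.random() < 0.5 else -step_length
        positions.append(positions[-1] + step)
    return positions

# Path 1 from the text (H, T, H, H, T), traced by hand in units of l:
path1 = [0]
for toss in ["H", "T", "H", "H", "T"]:
    path1.append(path1[-1] + (1 if toss == "H" else -1))
print(path1)  # [0, 1, 0, 1, 2, 1]
```

Calling `sample_path` repeatedly with different seeds produces different realizations, which is exactly the ensemble over which averages are later taken.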
Now, let us get back to the problem of how we describe this walk. If you ask what distance the person will have traversed after 10 steps, or in general n steps, we quickly realize that the question does not have a definite answer. We can only say that the minimum distance from the origin would be 0: he could have got the sequence H, T, H, T, H, T, ..., in which case after every even number of steps he would be back at 0. This is the minimum, one extreme situation. Similarly, the maximum distance he could have travelled would happen by a conspiracy of all the steps falling in one direction: the rare event of all n steps being taken in the +l direction or all in the -l direction. So the maximum distance after n steps is either -nl to the left or +nl to the right. The so-called transition probability P is 1/2 for jumping +l and also 1/2 for jumping -l; the total probability of jumping either +l or -l is 1. The length covered at each step can be denoted by a random variable, the jump-length random variable, which we denote by sigma. It can take either the value +l or the value -l. So each time a coin is tossed, the experiment produces one of the values sigma_1, sigma_2, ..., sigma_n, each of them either +l or -l; there are only 2 states. I can now introduce the probability of occurrence of a value of sigma, constrained to be either +l or -l, as P(sigma) = (1/2) delta(sigma, l) + (1/2) delta(sigma, -l). Here delta is the Kronecker delta, not the delta function; we have already defined it as delta(n, m) = 1 if n = m and 0 if n is not equal to m.
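As a quick check of the two extremes just described, one can enumerate all 2^n equally likely toss sequences for a small n. This is a sketch of my own, with n = 4 and the step length l set to 1:

```python
from itertools import product

n = 4  # number of steps (even, so a return to the origin is possible)

# All 2^n equally likely toss sequences; each step is +1 or -1 (units of l).
finals = [sum(steps) for steps in product((+1, -1), repeat=n)]

print(max(finals))      # +n: all steps to the right (one path out of 2^n)
print(min(finals))      # -n: all steps to the left
print(finals.count(0))  # paths that end back at the origin
```

Only one of the 16 paths reaches +4 and only one reaches -4, which is why the lecturer calls the extremes a "conspiracy" of all the steps.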
So, this equation captures the statement that at every step the random walker jumps a length of either -l or +l, and only that: he does not pause (each time there is the possibility he could have paused, but we are not allowing that), nor does he take a longer step, say of length 2l. This random variable sigma has simple but very useful characteristics. With the probability function P(sigma) = (1/2) delta(sigma, l) + (1/2) delta(sigma, -l), we can show the following. First, normalization: the sum over sigma of P(sigma) is 1, because sigma takes only the 2 allowed values +l and -l, each with probability 1/2, so the sum becomes 1. Second, the mean: sigma-bar is defined as the sum over sigma of sigma P(sigma). Written out explicitly over the two values, this is (-l)(1/2) + (+l)(1/2), because the Kronecker deltas pick out only sigma = +l and sigma = -l; the two halves cancel, and the net is 0. So this is a mean-zero process. Third, the variance: sigma-squared-bar minus sigma-bar-squared, which is just sigma-squared-bar since sigma-bar is 0. This is the sum over the two values of sigma^2 P(sigma), namely (-l)^2 (1/2) + (+l)^2 (1/2), which is simply l^2. So we have the variance of sigma equal to l^2.
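The three properties just derived (normalization, zero mean, variance l^2) can be checked directly from the probability function. A small sketch, with l = 2 as an arbitrary choice:

```python
l = 2.0
values = (+l, -l)  # the only two states of the jump variable sigma

def P(sigma):
    """P(sigma) = (1/2) delta(sigma, l) + (1/2) delta(sigma, -l)."""
    return 0.5 * (sigma == +l) + 0.5 * (sigma == -l)

norm = sum(P(s) for s in values)                   # normalization -> 1
mean = sum(s * P(s) for s in values)               # sigma-bar -> 0
var = sum(s**2 * P(s) for s in values) - mean**2   # variance -> l^2

print(norm, mean, var)  # 1.0 0.0 4.0
```

The boolean comparisons play the role of the Kronecker delta: `(sigma == +l)` is 1 when the argument matches and 0 otherwise.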
I can now ask for the correlation between 2 different sigmas, say sigma_i and sigma_j, and I want to obtain their average, which I write as the angle-bracket average of sigma_i sigma_j. Since the mean is 0, the correlation is just this quantity, and it can be computed by summing over sigma_i and sigma_j, with i not equal to j (if i = j it becomes sigma^2, and we have already considered that case): the average of sigma_i sigma_j is the sum over sigma_i and sigma_j of sigma_i sigma_j P(sigma_i) P(sigma_j). The most important assumption at this point is that the coin toss at one step is completely independent of that at the previous step. In other words, the joint probability of obtaining one sigma value at one step and another sigma value at the next step is just the product of the individual probabilities; that is why we could separate it and write it this way. We then see that the double sum is simply sigma_i-bar times sigma_j-bar, and each of them is 0, so it is 0. So we have a third important result, which I can write in terms of the Kronecker delta: the average of sigma_i sigma_j equals l^2 delta(i, j), that is, l^2 if i = j and 0 otherwise. Now let us come to the question of how far the random walker has moved in n steps. I define a distance function d_n, which is nothing but the linear sum of the lengths traversed in each of the n steps: d_n = sigma_1 + sigma_2 + ... + sigma_n. If I want to estimate the average distance travelled per step in a particular realization, as one always does with a sample mean, I divide the total length by n: d_n / n is an estimate of the average step. Obviously this is not the population mean but a sample mean, a sample mean of the distance. We can then calculate very easily that the average of d_n is the sum of the averages of sigma_1 through sigma_n, and each of them is 0.
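The independence assumption and the result that the average of sigma_i sigma_j equals l^2 delta(i, j) can be checked by a Monte Carlo estimate over many independent walks. A sketch (the function name and sample sizes are my choices, not from the lecture):

```python
import random

def corr(i, j, n_steps, n_samples, l=1.0, seed=0):
    """Monte Carlo estimate of <sigma_i sigma_j> over independent walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        steps = [+l if rng.random() < 0.5 else -l for _ in range(n_steps)]
        total += steps[i] * steps[j]
    return total / n_samples

same = corr(2, 2, n_steps=10, n_samples=20000)  # i == j: exactly l^2
diff = corr(2, 7, n_steps=10, n_samples=20000)  # i != j: fluctuates near 0
print(same, diff)
```

For i = j every product is sigma^2 = l^2, so the estimate is exact; for i not equal to j the estimate only approaches 0 as the ensemble grows, with fluctuations of order 1/sqrt(n_samples).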
So, the average of d_n is 0, which is as expected, because on average the walker should be at the origin where he started: averaged over all the ensembles, he should finally be where he was. Now let us calculate another interesting quantity: what about the mean of d_n squared? Since the mean of d_n is 0, it will be a measure of the variance, the dispersion of his position with respect to the mean. We can write d_n squared as the square of the sum over i = 1 to n of sigma_i, and this square can be written as the double sum over i = 1 to n and j = 1 to n of sigma_i sigma_j. We continue in the next lecture.
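As a numerical preview of where this double sum leads, one can estimate both moments by ensemble averaging over sample paths. Given the correlation result above, only the i = j terms of the double sum survive, so the mean of d_n squared should come out close to n times l^2. This is a sketch of my own (helper name and sample counts are my choices), with l = 1:

```python
import random

def ensemble_moments(n_steps, n_samples, l=1.0, seed=1):
    """Estimate <d_n> and <d_n^2> by averaging over many sample paths."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n_samples):
        d = sum(+l if rng.random() < 0.5 else -l for _ in range(n_steps))
        s1 += d
        s2 += d * d
    return s1 / n_samples, s2 / n_samples

mean_d, mean_d2 = ensemble_moments(n_steps=100, n_samples=50000)
print(mean_d, mean_d2)  # mean_d near 0; mean_d2 near n * l^2 = 100
```

The first moment fluctuates around 0 as expected, while the second moment grows linearly with the number of steps, which is the diffusive behaviour Einstein's interpretation points to.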