So, the mean square displacement, or ensemble-averaged square displacement, denoted <d_n^2>, is the expectation of (sum_{i=1..n} sigma_i)^2. When you square a sum you can always write it as a double sum, so <d_n^2> = <sum_{i=1..n} sum_{j=1..n} sigma_i sigma_j>, and the expectation can be taken inside the sums. If you expand the double sum there are n terms where sigma_i equals sigma_j, that is, i = j. One can then write the i = j terms as sum_{i=1..n} <sigma_i^2>, and the remaining terms stay as a double sum subject to the constraint i ≠ j, with both i and j running from 1 to n, taking care not to include the same terms: sum_{i≠j} <sigma_i sigma_j>. From the property we have already seen, <sigma_i^2> = l^2, and there are n such terms, so the first sum contributes n l^2. The cross-correlations vanish, <sigma_i sigma_j> = 0 for i ≠ j, because of mutual independence, that is, the i.i.d. property of the process. Hence the mean square displacement is <d_n^2> = n l^2, and if we define d_rms, the root mean square distance from the origin in n steps, as the square root of <d_n^2>, we get d_rms = l sqrt(n). If we divide by n for the mean, one can say that the standard deviation of the mean distance for a given realization is d_rms / n, which is l / sqrt(n).
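The result <d_n^2> = n l^2 is easy to check numerically. Below is a minimal sketch (the function name and parameter values are illustrative, not from the lecture) that simulates many independent ±l walkers and estimates d_rms:

```python
import random

def rms_displacement(n_steps, n_walkers, l=1.0, seed=0):
    """Estimate the RMS displacement of an unbiased 1-D random walk.

    Each step is +l or -l with equal probability; the ensemble average
    of d_n^2 over many independent walkers should approach n * l^2.
    """
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walkers):
        d = sum(l if rng.random() < 0.5 else -l for _ in range(n_steps))
        total_sq += d * d
    return (total_sq / n_walkers) ** 0.5

# For n = 100 steps and l = 1 the theory predicts d_rms = l * sqrt(n) = 10.
print(rms_displacement(100, 20000))
```

For n = 100 and l = 1 the estimate comes out close to l sqrt(n) = 10, with the residual scatter shrinking as the number of walkers grows.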
So, we have obtained a very important result: although on average the random walker's displacement from the origin is zero, his root mean square distance, the measure of the dispersion of where he walks, increases with the number of steps, but it grows as the square root of the number of steps. So as he takes more and more steps, even though his motion is random, there is a significant chance that he will come within one step of the nearest house; and if the person living in that house recognizes him, he could be taken inside and reach his destination, although he actually had no clue which way his house was. We now connect this to the CLT property that we saw. In the CLT connection, each of the sigma_i that we were taking maps to a sample value x_i, the realization of a particular random variable X; here sigma_i is a realization of a particular random variable sigma. And d_n, which we defined as the sum of the sigma_i, corresponds to the sum of the x_i, so d_n / n corresponds to (sum x_i) / n, which we had denoted y, the so-called sample mean. From the property of the sample mean, which follows from the central limit theorem, we had established the probability density of obtaining a value y when the absolute, or universal, mean is mu; in this case mu = 0. That density is P(y) = 1 / (sqrt(2 pi) SE) exp(-y^2 / (2 SE^2)), since mu = 0, where SE is the standard error sigma / sqrt(n). Accordingly, by this mapping we can go from the distribution of y to the distribution of d_n, because y = d_n / n.
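Written out explicitly, the step from the sample-mean density to the displacement density is a standard change of variables (with the step standard deviation sigma equal to l here):

```latex
% Density of the sample mean (from the CLT), with sigma = l:
P(y) = \frac{1}{\sqrt{2\pi}\,\mathrm{SE}}\, e^{-y^2/(2\,\mathrm{SE}^2)},
\qquad \mathrm{SE} = \frac{l}{\sqrt{n}}.

% Change of variables y = d_n/n, so |dy/dd_n| = 1/n:
P(d_n) = P(y)\left|\frac{dy}{dd_n}\right|
       = \frac{1}{n}\cdot\frac{\sqrt{n}}{\sqrt{2\pi}\,l}\,
         \exp\!\left(-\frac{(d_n/n)^2\, n}{2 l^2}\right)
       = \frac{1}{\sqrt{2\pi\, n\, l^2}}\, e^{-d_n^2/(2 n l^2)}.
```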
So, the distribution function of y can be mapped onto the distribution of a quantity that is y multiplied by the constant n. I can thus obtain the distribution function for the displacement in n steps: P(d_n) = 1 / sqrt(2 pi n l^2) exp(-d_n^2 / (2 n l^2)). So, without having done an explicit detailed study of this random walk phenomenon, by mapping the problem of a stochastically marching process onto a statistical process, we arrive at the details of the distribution of the probability that the person will be at a distance d_n after n steps, and of how this probability distribution changes with increasing number of steps. There is a very deep physical connection between this random walk in steps and the classical diffusion problem, say of a Brownian particle, and that connection to physics and engineering is very interesting. Suppose now we try to connect the steps to time: suppose he took each step at regular intervals of time tau, so that times tau, 2 tau, and so on correspond to n = 1, 2, etcetera. Hence, if the time he has spent is t, then the number of steps he has taken is n = t / tau, and one can relate the steps to the time in terms of the mean time per step this way. We can therefore replace n with t / tau, which means our RMS displacement, which was d_rms = l sqrt(n), or, taking l inside, sqrt(n l^2), becomes sqrt((l^2 / tau) t). Now, there is a convention in physics to define a quantity called the diffusion coefficient, denoted D: it is a microscopic quantity, independent of the time taken or the position of the particle, and it is defined as D = l^2 / (2 tau).
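One quick consistency check of this Gaussian form: for an even number of steps, the exact probability that the walker is back at the origin is a binomial coefficient, and it should approach the Gaussian density at d_n = 0 multiplied by the spacing 2l between sites reachable after n steps. A small sketch (function names are illustrative):

```python
import math

def exact_p_origin(n):
    """Exact probability that an n-step +/-1 walk returns to the origin (n even)."""
    return math.comb(n, n // 2) / 2 ** n

def gaussian_p_origin(n, l=1.0):
    """Gaussian approximation: density 1/sqrt(2*pi*n*l^2) at d_n = 0, times the
    lattice spacing 2*l (reachable sites are 2*l apart after n steps)."""
    return 2 * l / math.sqrt(2 * math.pi * n * l * l)

for n in (10, 100, 1000):
    print(n, exact_p_origin(n), gaussian_p_origin(n))
```

The two numbers agree better and better as n grows, which is exactly the CLT at work.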
So, if we define D = l^2 / (2 tau), we get d_rms = sqrt(2 D t), where D is now a constant of the system expressed in terms of the microscopic parameters, the jump length l and the mean time tau taken to perform a jump, through their ratio. We then obtain a very famous law, first discovered by Einstein, regarding the mean square displacement: the square-root law, also called Einstein's relation. Using this law, Einstein proposed that experiments conducted with Brownian particles should be interpreted carefully: instead of looking for a law for the mean displacement, which on average should be zero, one should calculate the RMS displacement at each time and plot it as a function of sqrt(t), which should give a straight line; or one can plot the square of the RMS distance as a function of time, which would also be a straight line, and the slope of that straight line is a measure of the diffusion coefficient of the system. This is a very practical connection, established by a very simple random walk model: starting with coin tossing, giving it a direction and a sequence, putting it on a lattice, and then interpreting the steps via time. It has of course been validated by a large number of experiments and is a cornerstone of modern non-equilibrium physics. However, coin tossing as a means of performing a random walk has very limited possibilities, since a coin has only two sides and is supposed to be fair, so each outcome can only have probability one half; most stochastic phenomena have different probabilities for the different values their jumps, or random fluctuations in the variable, can take. So, in order to perform this experiment, one must have a very satisfactory way of generating probabilities other than one half, which cannot be done with a coin.
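Einstein's prescription, fitting the mean square displacement against time and reading D off the slope, can be sketched as follows; the function name and its default values are illustrative choices, not part of the lecture:

```python
import random

def estimate_diffusion_coefficient(n_steps=200, n_walkers=5000, l=1.0, tau=1.0, seed=1):
    """Estimate D from the slope of the mean square displacement vs. time.

    Einstein's relation predicts <d^2(t)> = 2*D*t with D = l^2 / (2*tau),
    so a least-squares fit of MSD against t should have slope ~ 2*D.
    """
    rng = random.Random(seed)
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = 0.0
        for step in range(1, n_steps + 1):
            x += l if rng.random() < 0.5 else -l
            msd[step] += x * x
    times = [step * tau for step in range(n_steps + 1)]
    msd = [m / n_walkers for m in msd]
    # Least-squares slope of a line through the origin: sum(t*msd) / sum(t^2).
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    return slope / 2.0

print(estimate_diffusion_coefficient())
```

With l = 1 and tau = 1 the theoretical value is D = l^2 / (2 tau) = 0.5, and the fitted value should land close to it.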
If you use a die, for example, you can have unequal probabilities, say one sixth, one third, one half, or two thirds, and other multiples of one sixth, but you cannot have any number that you want. So generating a required probability, say a transition probability other than one half, can in general best be done by a random number generator, sometimes called a pseudo-random number generator, because most software packages have algorithms by which sequences of random numbers can be generated. However, there is a limitation: after a very large number of cycles they may show some periodicity, and that is why, for theoretical reasons, they are called pseudo-random number generators; but for all practical purposes, when we do simulations with a not very large number of steps, the distribution is more or less random. This is based on the concept of the uniform distribution on the interval 0 to 1. With a random number generator, like the Random command in Mathematica, the moment you execute the command you get a random number, and the probability of that number lying anywhere in the interval 0 to 1 is uniform; there is an equal probability of getting any other number between 0 and 1 the next time you execute the command. So if I want a stochastic process to which I assign a probability of, say, not one half but 0.3, with 1 - 0.3 = 0.7 for the converse process, then I execute this process as follows: if the random number generated lies between 0 and 0.3, the action corresponding to that probability is executed; if not, the counter-action is executed.
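The recipe, comparing a uniform random number against the desired probability, looks like this in Python (Python's random module plays the role of Mathematica's Random here; the helper name is an illustrative choice):

```python
import random

def biased_event(p, rng):
    """Return True with probability p, using a uniform [0, 1) random number."""
    return rng.random() < p

rng = random.Random(42)
trials = 100000
hits = sum(biased_event(0.3, rng) for _ in range(trials))
print(hits / trials)  # close to 0.3
```

Over many trials the observed frequency converges to the assigned probability 0.3, even though each individual call is unpredictable.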
So, if I have to execute a random walk in which the probability of a forward jump is 0.3, that is, the probability p of taking a step +l is 0.3 and the probability of taking -l is 0.7, then all I have to do is generate a random number: if that number lies anywhere between 0 and 0.3, I ask the random walker to take a +l jump; otherwise I ask him to take a -l jump. In this way we can assign arbitrary probabilities to the random walk process at each step. We now discuss the transition probability in a little more detail, now that we have learnt how to assign different probability values. In fact, the transition probability sets the rule for propagating the stochastic process; it is the cornerstone, the probability we assign every time to march the random walk, or the stochastic phenomenon, forward. That is the importance of the transition probability. So how exactly does this probability differ from the other probability concepts that enter the domain of stochastic processes? To understand that, we first have to differentiate it from, say, the occupancy probability, the conditional probability, and the joint probability; other concepts of probability we will take up as and when we are faced with them. We note one thing: the transition probability is perhaps closest to the concept of conditional probability, because they have very similar connotations. We say the conditional probability of an event A given event B is the probability of occurrence of A given that B has occurred, and one often denotes it exactly that way.
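A sketch of this biased walk (names are illustrative): with p = 0.3 for the +l jump, the mean displacement over many realizations should show the drift n (2p - 1) l rather than zero, in contrast with the fair-coin walk.

```python
import random

def biased_walk(n_steps, p_forward=0.3, l=1.0, rng=None):
    """One realization of a biased walk: +l with probability p_forward, else -l."""
    rng = rng or random.Random()
    return sum(l if rng.random() < p_forward else -l for _ in range(n_steps))

rng = random.Random(7)
n, walkers = 100, 10000
mean_d = sum(biased_walk(n, rng=rng) for _ in range(walkers)) / walkers
print(mean_d)  # drift: n * (2*p - 1) * l = -40 for p = 0.3
```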
In this notation, the previous event, the condition under which the probability is defined, is written to the right of the quantity to which the probability is assigned; P(A | B) is to be read as the probability of A given B. In the same way, when we speak of the transition probability, say of taking up a step length sigma_1 in the next step, it is also conditional in the sense that it is defined as the probability of the system being in the next state at the next step, given the present state. So we can say: the probability of being in a certain state A given that it was in state B previously; here, however, there is a clear-cut step sequencing. One actually means the probability that the system takes a value a_{n+1} at the (n+1)-th step, subject to the fact that it had the value a_n at the n-th step. If the random variable is denoted by uppercase A and its realization by lowercase a, this is the probability that A acquires the value a_{n+1} given that it had the value a_n in the previous step. This therefore has a very close analogy with the concept of conditional probability in statistics. We will understand in a little more detail how it is connected to the other probabilities by taking up the following example, which will illustrate the whole concept. Let us take the natural numbers divisible by 1, which I call the multiples of 1: M1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}, the multiples of unity up to 20. Now let us ask about the multiples of 2: M2 consists of all the even numbers, the very simple set of even numbers, and these constitute the set of multiples of 2. Next we ask for those numbers which are multiples of 3.
There will be far fewer of them: the first is 3 itself, then 6, 9, 12, 15, and 18. These are the multiples of 3. So we can record that the probability of getting a multiple of 1 is p(1) = 20/20 = 1: if I randomly pick a number between 1 and 20, the probability that it is a multiple of 1 is 1. The probability that a randomly selected number is a multiple of 2: the even numbers are 10 in total, so it is p(2) = 10/20 = 0.5. Similarly, the probability that a randomly selected number is a multiple of 3: there are 6 such numbers, so p(3) = 6/20 = 0.3. Now let us collect those numbers which are multiples of both 2 and 3, which we call M(2 and 3). We note that 6 is one such number, the very next is 12, and the third is 18; these are the only 3 numbers available. Hence the joint probability of getting a multiple of both 2 and 3, which we can also write p(2, 3), is 3/20 = 0.15, a 15 percent chance that the number is divisible by both. Now I ask a question about a conditional probability: the probability that a number which is divisible by 3 is also divisible by 2, that is, p(2 | 3). This conditional probability is defined as p(2 and 3) divided by p(3). So the conditional probability of obtaining a number divisible by 2, given that it is divisible by 3, is basically computed on a contracted sample space, the joint distribution normalized with respect to the probability of 3, and this is 0.15 / 0.3 = 0.5.
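The counting in this example can be reproduced in a few lines (the helper name prob is an illustrative choice):

```python
def prob(predicate, space):
    """Probability that a uniformly drawn element of `space` satisfies predicate."""
    return sum(1 for x in space if predicate(x)) / len(space)

space = range(1, 21)  # the numbers 1..20 from the worked example
p2 = prob(lambda x: x % 2 == 0, space)   # multiples of 2
p3 = prob(lambda x: x % 3 == 0, space)   # multiples of 3
p23 = prob(lambda x: x % 6 == 0, space)  # multiples of both 2 and 3
print(p2, p3, p23)  # 0.5 0.3 0.15
print(p23 / p3)     # P(2 | 3) = 0.5
```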
However, the probability of obtaining a number divisible by 3, given that it was divisible by 2, will again be given by p(2 and 3), but now divided by p(2), the probability of obtaining an even number: the joint probability was 0.15 and the even-number probability is 0.5, so it is 0.3, which is different. In other words, p(A | B) is different from p(B | A), obviously, because the probability spaces in the denominators are different. This therefore brings us to the general definition of conditional probability: p(A | B) = p(A, B) / p(B). This definition is very useful for relating the transition probability to the joint probabilities of the system: if B is a state, the probability of occupying that state, the occupancy probability, is p(B); the joint distribution for finding both A and B is p(A, B), or p(A and B); and the conditional probability of finding A given B is p(A | B), which is similar to the transition probability. From this background we now move to an increasing level of sophistication in the theory of stochastic phenomena. Among the very wide and vast range of stochastic phenomena, a narrow subset are tractable at times, and such processes, which are relatively simpler to follow and formulate, are called Markov processes. In the next lecture we will see what these Markov processes are, what the examples are, and what could perhaps be counter-examples to Markov processes. Thank you.