The physical meaning of saying that there is a finite survival probability when the walker has a slight positive bias, as opposed to zero survival when there is a slight negative bias, can be given by a nice, simple example. Consider a pair of ions as our random walkers. Take an x axis with a conducting plate at the origin, and a second plate very far away, with a small voltage applied between them so that a weak electric field E acts in the positive x direction. The plate at the origin is also an absorber: if an ion strikes it, it is lost. Suppose a short pulse of x-rays creates an ion pair at a point x0, a short distance from the absorber, producing one positive and one negative ion. The field is weak but points in the positive direction, so the positive ion tends to drift away from the plate while the negative ion tends to drift toward it; the other plate is far enough away that its influence is not felt at x0. As time proceeds, the negative ion, which is also undergoing a random walk, will eventually be trapped by the plate with certainty, whereas the positive ion has a finite probability of escaping even though it too executes a random walk. This is the physical meaning of saying that the probability of a particle escaping absorption, when there is a favorable electric field or force, is given by the expression we saw: 1 - e^(-u x0 / D).
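As a quick check on this picture, here is a minimal Monte Carlo sketch (my own illustration, not from the lecture) using the discrete-lattice analogue: for a walker stepping right with probability p > 1/2 and left with probability q = 1 - p, the chance of never reaching the absorber at site 0 from site x0 is 1 - (q/p)^x0, the lattice counterpart of 1 - e^(-u x0/D). Walks that are still alive after many steps are counted as escaped, a good approximation because surviving walkers drift far from the absorber.

```python
import random

def escape_probability(x0, p, trials=4000, max_steps=1000, seed=1):
    """Monte Carlo estimate of the chance that a biased lattice walker
    starting at site x0 > 0 never reaches the absorbing site 0.
    Walks still alive after max_steps count as escaped, a good
    approximation when the bias p > 1/2 carries survivors far away."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        x = x0
        for _ in range(max_steps):
            x += 1 if rng.random() < p else -1  # right w.p. p, left w.p. q
            if x == 0:
                break                           # absorbed at the plate
        else:
            survived += 1
    return survived / trials

p, x0 = 0.6, 2
q = 1 - p
analytic = 1 - (q / p) ** x0   # lattice analogue of 1 - exp(-u*x0/D)
est = escape_probability(x0, p)
print(est, analytic)
```

The estimate should land close to the analytic value; the `trials` and `max_steps` values here are arbitrary choices balancing accuracy against runtime.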
We can illustrate this with a graph of the survival probability S(t) as a function of time. Starting from unity at t = 0, for a bias u < 0 the survival probability decays to zero, whereas for u > 0 it saturates at a finite value, the escape probability 1 - e^(-u x0 / D). The graph thus illustrates the result of finite survival when the drift velocity is positive. From the survival probability we can obtain the first-contact time, or first-passage time, distribution, which, as we saw in the last lecture, is also the same as the probability current: we call it f(t), and f(t) = j(t) = -dS/dt. We can evaluate -dS/dt by differentiating the expression derived on the previous slide. Since S(t) involves error functions, differentiating an error function gives a Gaussian, but there is a further differentiation with respect to t that brings in power-law factors; combining all the terms is a bit of tedious algebra, so we merely write down the result, which turns out to be a neat-looking expression:

f(t) = [x0 / sqrt(4 pi D t^3)] exp[ -(x0 + u t)^2 / (4 D t) ].

As mentioned, this first-contact time distribution is equivalently the current, or probability current to be specific. It is the distribution of times elapsed before a first contact occurs. Absorption is equivalent to first contact because no second chance is given, and hence the flux impinging on the absorber is itself the first-contact time distribution.
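The relation f(t) = -dS/dt can be checked numerically. The sketch below is my own check, assuming one standard image-method form of S(t) for an absorber at 0, a start at x0, and a bias u pointing away from the absorber; it differentiates S by central differences and compares with the closed-form density.

```python
import math

def S(t, x0=1.0, u=0.5, D=1.0):
    """Survival probability for drift-diffusion with an absorber at 0,
    start at x0 > 0, bias u away from the absorber (image-method form)."""
    s = math.sqrt(4.0 * D * t)
    return (0.5 * math.erfc(-(x0 + u * t) / s)
            - 0.5 * math.exp(-u * x0 / D) * math.erfc((x0 - u * t) / s))

def f(t, x0=1.0, u=0.5, D=1.0):
    """First-passage density f(t) = x0/sqrt(4 pi D t^3) exp(-(x0+ut)^2/(4Dt))."""
    return (x0 / math.sqrt(4.0 * math.pi * D * t**3)
            * math.exp(-(x0 + u * t) ** 2 / (4.0 * D * t)))

# f(t) should match -dS/dt at any time (central-difference check)
for t in (0.3, 1.0, 3.0):
    h = 1e-5
    dSdt = (S(t + h) - S(t - h)) / (2 * h)
    print(t, f(t), -dSdt)
```

The agreement at several times, together with S(t) starting at 1 for small t, confirms the differentiation result without grinding through the error-function algebra by hand.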
If we plot this distribution, we can see that it must have a peak. At t = 0 it vanishes, because the factor e^(-x0^2 / 4Dt) dominates everything else, and as t goes to infinity it again goes to zero, with a t^(-3/2) power-law tail. Since f(t) is a positive quantity that vanishes at both ends, it must peak somewhere in between. If u is negative the curve takes one form; when u is positive the curve will very likely sit lower, because contacts become rarer in the positive case: the walker most likely survives in the half-space and only on fewer occasions contacts the absorber. For u < 0, on the other hand, the walker has a definite chance of eventually contacting the absorber; indeed the entire area under the curve is unity, because we know that every particle must finally be absorbed when u < 0. From f(t) we can now obtain many features, such as the question of whether a mean first-passage time exists. With this, for the moment, we come to the end of the single-absorber problem. Continuing our one-dimensional exploration, we now move to the case of two absorbers on a one-dimensional lattice. The two-absorber problem is more realistic in the sense that when there is an electric field, as in the previous example, there must be two plates; one cannot really have a single-plate situation. In the two-absorber problem the lattice runs from the point 0 to a maximum point L, with sites k = 0, 1, 2, and so on up to L, and the site L, like the site 0, is also an absorber. This problem is of considerable importance.
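The normalization statements can also be checked numerically: the area under f(t) is 1 for u < 0 (drift toward the absorber), while for u > 0 the area is the hitting probability e^(-u x0/D), the complement of the escape probability. A rough midpoint-rule sketch (my own check; grid size and cutoff are arbitrary choices that suffice because the tail is cut off exponentially when u is nonzero):

```python
import math

def f(t, x0, u, D=1.0):
    """First-passage density to the absorber at 0 from x0 with bias u."""
    return (x0 / math.sqrt(4.0 * math.pi * D * t**3)
            * math.exp(-(x0 + u * t) ** 2 / (4.0 * D * t)))

def total_hit_probability(x0, u, D=1.0, T=400.0, n=200000):
    """Midpoint-rule estimate of the area under f on (0, T]; midpoints
    avoid the (harmless, essentially-zero) endpoint t = 0."""
    dt = T / n
    return sum(f(dt * (i + 0.5), x0, u, D) for i in range(n)) * dt

area_toward = total_hit_probability(1.0, -0.5)   # drift toward absorber
area_away = total_hit_probability(1.0, +0.5)     # drift away from absorber
print(area_toward, area_away, math.exp(-0.5))
```

The first area should come out near 1, and the second near e^(-u x0/D) = e^(-0.5), consistent with the finite escape probability for positive bias.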
In fact, although we have called it an absorber problem, traditionally this is called the gambler's ruin problem. It is quite interesting to see how one mathematical situation can be mapped onto different types of physical problems, and gambler's ruin is one outstanding example of this feasibility of mapping. Suppose a gambler starts with some initial fortune, say 1 dollar, and keeps gambling, losing a dollar or gaining a dollar at each successive play. The gambling proceeds in successive steps; it could be just the tossing of a coin. If the coin is fair, he loses a dollar or gains a dollar with equal probability. But he has a rule: if at any point the amount he has accumulated in this process of gaining and losing becomes 0, he has to quit, because he is left with no more capital; the game is over for him. Similarly, if he accumulates a total amount L, he immediately quits, having decided in advance to stop the game on reaching L dollars. Therefore both the point 0 and the point L are where the game stops, and hence they are like absorbers, because the process is truncated there. In the language of our random walk, he is a random walker on the money lattice, with single jump lengths delta_i of plus or minus 1 dollar; the quantity of interest is the total he has accumulated in n steps, the statistical quantity R_n = sum over i = 1 to n of delta_i. If R_n becomes 0, the gambler is ruined and the gambling stops.
On the other hand, if R_n reaches L, some prefixed amount (he may say, "if I ever reach 100 dollars I will quit the game"), then the gambler quits. Of course, it is not necessary that he start with 1 dollar; he can start with any k dollars. Each win or loss we take to be 1 dollar, since one can always change units to make it so. Now it can happen that a gambler has a winning streak even though the coin is fair. By that, what is meant is that although the coin is fair, it is possible that by good luck a succession of tosses comes up heads only, so that he moves only to the right, gaining at every step; if such a conspiracy of probabilities carries him all the way to L dollars in one unbroken sequence of tosses, he is said to have had a winning streak, and he has won. That is one way of looking at it. Alternatively, there can be a bias in each trial itself: if the probability of winning a trial is p and of losing is q, then for p greater than q he tends toward a winning streak, and for p less than q toward a losing streak. In any case, his random walk in money space can be treated as a biased random walk. The question we now ask concerns the probabilities of his hitting either 0 or L. This is a very important problem in physics as well; there are many applications in which this question must be answered, and we will illustrate how simple analytical solutions can be obtained. It is also a restricted boundary-value problem. So we generalize: the lattice runs from 0 dollars to a maximum of L dollars, the gambler starts with k dollars, and in each successive game he either gains or loses 1 dollar; the sites are 0, 1, and so on up to L.
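Before solving anything analytically, the ruin question is easy to explore by simulation. The sketch below is my own illustration; the closed-form fair-coin answer 1 - k/L is quoted only as a check on the simulation, not derived here.

```python
import random

def ruin_probability(k, L, p=0.5, trials=20000, seed=7):
    """Monte Carlo estimate that a gambler starting with k dollars hits 0
    before hitting L, winning each round with probability p."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = k
        while 0 < x < L:               # play until one absorber is hit
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

k, L = 3, 10
est = ruin_probability(k, L)
print(est, 1 - k / L)   # fair coin: known answer is 1 - k/L
```

Note that unlike the single-absorber case, no step cutoff is needed: with two absorbers the game terminates with probability 1, so the loop always exits.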
If the walker starts from k, what ultimately is the probability of his hitting 0 without hitting L? That is, what is the probability that the gambler hits 0, that he gets ruined, without ever reaching L (once he hits L, the game stops)? One can similarly ask the converse: what is the probability of the gambler hitting L without going broke? Once he is broke he is not supposed to restart; the game is over, and anything further would be another game altogether. We can try to solve this like our regular problems, defining occupancy probabilities W_n(m), with m running from 0 to L, at the various steps. For example, the probability that he occupies site m at step n + 1 is given by the probability that he was at m - 1 in the previous step and gained, jumping to the right with probability p, plus the probability that he was at site m + 1, holding a higher amount, and lost a dollar in the next step, with probability q:

W_{n+1}(m) = p W_n(m - 1) + q W_n(m + 1).

This looks like any of our random-walk problems, but there is a difficulty: our space of m is now limited to 0 through L. If we go back to our earlier method, we cannot construct a generating function, because as the game proceeds we cannot sum from minus infinity to infinity, or even from 0 to infinity; so all our generating-function methods fail. The equation has the initial condition W_0(m) = delta_{mk}, since he starts from some site k, and the boundary conditions W_n(0) = 0 and W_n(L) = 0, which express the fact that once the end points are struck, the walker is no longer available: the game stops. As we can see, the generating-function methods we adopted will not work because of the finiteness of the space involved. So how does one solve the problem?
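Even though the generating-function machinery fails, the occupancy-probability recurrence can still be iterated numerically, tracking separately the probability that flows onto the two absorbing sites. A small sketch (my own illustration; the lattice size, start site, and step count are arbitrary choices):

```python
def evolve(L=10, k=3, p=0.5, steps=500):
    """Iterate W_{n+1}(m) = p*W_n(m-1) + q*W_n(m+1) on sites 0..L with
    absorbing boundaries, accumulating the probability absorbed at each end."""
    q = 1.0 - p
    W = [0.0] * (L + 1)
    W[k] = 1.0                      # initial condition W_0(m) = delta_{mk}
    absorbed0 = absorbedL = 0.0
    for _ in range(steps):
        new = [0.0] * (L + 1)       # boundary sites stay at 0 (absorbing)
        for m in range(1, L):
            new[m] = p * W[m - 1] + q * W[m + 1]
        absorbed0 += q * W[1]       # mass stepping left from site 1
        absorbedL += p * W[L - 1]   # mass stepping right from site L-1
        W = new
    return W, absorbed0, absorbedL

W, a0, aL = evolve()
print(a0, aL, a0 + aL + sum(W[1:-1]))   # total probability stays 1
```

After many steps almost all the probability has been absorbed, and the split between the two ends gives the ruin and winning probabilities numerically; for the fair case with k = 3 and L = 10 the ruin share converges to 0.7.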
So we now say: we are not interested in following the story over a finite number of steps. Let us instead frame the question as: eventually, what is the probability that he strikes 0 without striking L, the game continuing until one of the two is struck, with no concern for the number of steps at which this happens? We ask the most general question: what is the probability that he strikes 0 when an infinite number of steps is allowed? Of course, since this is a finite-space problem, he must eventually strike either 0 or L, and the total probability is 1; so we are really asking how this probability is partitioned between 0 and L. This problem can be addressed in a simpler way by an out-of-the-box approach, which we now proceed to enunciate. Let us introduce a new kind of probability: the probability that ultimately the gambler contacts 0 for the first time, without, of course, ever having contacted L. It is something like an ultimate ruin probability starting from k, and similarly there is an ultimate winning probability starting from k. These are not occupancy probabilities; they are probabilities of ultimately contacting one of the end points. Correspondingly, we introduce new probability concepts as follows: we define f_k^0 as the probability that the gambler gets ruined, that is, reaches 0 before ever reaching L. The word "eventually" here means that we are not bothered about the number of steps taken: the game goes on and on until he is ruined, starting from some point k, which is why k is kept as a label. One can correspondingly define f_k^L in the same way: the probability of making it to L, hitting L for the first time, starting from k, with the same definition but with L in place of 0.
Now, there is no limit to the number of times he plays; in our language it is the number of steps, in the player's language the number of games played. Since he must either win or lose, wherever he starts between 0 and L, the total probability is 1: f_k^0 + f_k^L = 1. So if we know one of them we know the other, and we can formulate the problem in terms of either. Let us take up the case of f_k^L, which conceptually carries more of a sense of happiness, since it is about winning. So what is f_k^L, to recapitulate? It is the probability that the player starts from k and hits L before ever having touched 0: he has not been ruined, he has walked straight into success in the game, regardless of the number of times he has played. Let us break this probability into parts and formulate a recurrence-type equation for f_k^L. It can be done as follows. Take our region of interest, 0 to L, with its discrete sites, and suppose the gambler is currently at some site k; treat that as the starting point, so that he starts with an amount k. The event of his eventually succeeding and reaching L from k can happen in only two ways, since only nearest-neighbour transitions are allowed: he tosses once more, and in that toss he either wins or loses. If he wins, he gains 1 dollar, moves to k + 1, and from there eventually wins.
So the probability of winning from k to L can be decomposed, because one cannot jump straight from k to L; it must go through intermediate steps. Either he wins the next toss and from k + 1 eventually reaches L, or he loses the next toss, his amount drops to k - 1, and from there he eventually wins. To avoid confusion we keep just these two paths from k. Combining them under our Markovian assumption (probabilities depend only on the present state, not on the history), we can write down a jump-type equation. Completing the notation, the two onward winning probabilities are f_{k+1}^L and f_{k-1}^L, so the winning probability from site k satisfies

f_k^L = q f_{k-1}^L + p f_{k+1}^L,

that is, with probability q he lost and jumped from k to k - 1, from where his probability of winning is f_{k-1}^L (k being just an index), or with probability p he won the gamble at the kth step, arrived at k + 1, and succeeded from there. This equation holds for all interior k. Exactly the same logic can be used to develop an equation for its complementary quantity f_k^0. One must distinguish this equation from the random-walk jump equation: please note that in the random-walk jump equation the k - 1 term was multiplied by p and the k + 1 term by q, whereas here it is vice versa.
So although it looks deceptively similar to our random-walk jump equation, it is structurally different, and the way to understand it is the way we worded it: the first term gives the probability that the walker jumped left and from there reached the target L, and the second the probability that he jumped right and from there reached the target L, the local jump probabilities being p for jumping right and q for jumping left. The equation carries boundary conditions, as follows. Suppose he starts from 0 itself: he really has no money with him, there is no chance he can bet, and hence no chance he could win. In other words, for k = 0, necessarily f_0^L = 0, because f^L represents the winning probability and with no cash there is no way to bet, hence no chance of winning. Similarly, if he already holds the amount L (if L is, say, 100 dollars and he already has 100 dollars), then the probability of winning is always 1: f_L^L = 1. These are the boundary conditions we assign. The problem now reduces to solving this difference equation under these boundary conditions. Even that is not really quite simple; it requires some ingenuity, and we shall study a nice way of solving such difference equations. This technique, first introduced a long time ago by the classical statisticians, has been used in quite a few different disciplines and has become a standard method of solving such equations. Learning it will go a long way in handling many such difference equations in the future. We discuss this in our next lecture. Thank you.
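Even before learning the analytic technique, the difference equation with its two boundary conditions can be solved numerically by simple relaxation: start from any profile satisfying f_0 = 0 and f_L = 1 and sweep the interior update until it stops changing. A sketch (my own illustration; the closed form (1 - (q/p)^k)/(1 - (q/p)^L) appearing in the comment is quoted only as a consistency check, not derived here):

```python
def win_probability(L, p, sweeps=5000):
    """Solve f_k = p*f_{k+1} + q*f_{k-1} with f_0 = 0, f_L = 1
    by Gauss-Seidel-style relaxation sweeps."""
    q = 1.0 - p
    f = [k / L for k in range(L + 1)]   # any guess matching the boundaries
    for _ in range(sweeps):
        for k in range(1, L):
            f[k] = p * f[k + 1] + q * f[k - 1]
    return f

fair = win_probability(10, 0.5)
biased = win_probability(10, 0.6)
# Fair coin: f_k = k/L.  Biased: f_k = (1-(q/p)^k)/(1-(q/p)^L) (quoted check).
print(fair[3], biased[3])
```

For the fair coin the relaxed solution is the linear profile k/L, and for p = 0.6 it matches the quoted biased formula, so the numerical route agrees with the boundary-value formulation before any algebra is done.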