I would encourage people to shout a question if something is unclear — that's the best I can do, because normally I'd see your faces and know what's happening. So the title of this course is "A kinetic view of statistical physics." I would call myself a working-class type of physicist, in the sense that I'm not into formalism; I'm into just being able to solve example problems. And so the point of this course is to show you how to solve a bunch of paradigmatic problems using every trick I know in the mathematical-physics book, and hopefully from this we gain some wisdom.

So let me begin by introducing something called the master equation. Most generally, I view the state of a system as being described by a set of points — this is the state space. (Sorry, can you see the blackboard now that I'm not standing in the way? Let me first figure out the boundaries of where I can write. Can you see this line that I just drew? And the line here on the right-hand side? OK, so these are my boundaries; I will not write outside them.)

So again: the master equation. (And if you're not talking, please turn off your mic.) The state space is basically a collection of points, so I'm thinking of describing the state of the system by discrete points. In a classical mechanical system
you could think of a continuous state space, but for most purposes it's generally much better to think of a discrete set of points. So there's a point i, described by a probability P_i of being at that point; in the canonical ensemble of equilibrium statistical physics, P_i is proportional to e^{-beta E_i}, where E_i is the energy of state i. And in general there will be transitions between states: there is a rate W_ij that describes the rate of transition from state i to state j. So in equilibrium you are given the probabilities of being at each site; in a non-equilibrium situation one is normally given the transition rates W_ij, and the goal is to understand the evolution of the system in state space. That's the program I'm going to illustrate for a bunch of paradigmatic examples: given the transition rates, how to calculate the evolution of the probability distribution, and perhaps understand the equilibrium properties, but also the non-equilibrium relaxation properties. So that's the basic goal.

As I said, this is an example-driven approach to lecturing, so let me begin by working out some examples. (I have to say I'm finding this extremely awkward, because I feel nothing from the virtual world here.) The first example is a random walk (RW) on a triangle. Imagine an extremely simple state space of three points — 1, 2, 3 — with transitions 1↔2, 1↔3, and 2↔3, all with the same transition rate. And because the overall transition rate doesn't play any role,
let's just take the transition rates equal to one. What I want to do is compute the evolution of the probability distribution on this triangle. And I realize I forgot something more fundamental here, which is the master equation itself. So first I need to go back one step and write down the master equation for an arbitrary set of points with arbitrary hopping rates. In general the master equation has the generic form

dP_i/dt = (gain term) − (loss term),

where the overdot (or d/dt) denotes the time derivative. There is a gain of probability, because you could be at some other site and hop to i, and there is a loss, because you could be at site i and hop out of it at some rate W_ij. These two terms describe the evolution of the probability in state space. One thing to notice about the master equation as written is that it is first order in time, which means it breaks time-reversal invariance. So in general, in non-equilibrium statistical mechanics,
we are dealing with approaches to an equilibrium state. Let me write this a little more explicitly. The gain term is a sum over all sites j that are nearest neighbors of i: sum_j W_ji P_j. If you're at a site j and there's a rate W_ji of hopping from j to i, the total probability flux from j to i is the rate times the probability, and I sum over all neighbors of i. Then there's the loss term, again summed over all neighbors of i: sum_j W_ij P_i. With rate W_ij you hop from i to some neighboring site; summing over all neighboring sites gives the flux leaving site i. So the master equation reads

dP_i/dt = sum_j W_ji P_j − sum_j W_ij P_i.

In equilibrium this time derivative is zero, and then you can study the equilibrium properties of the system. More generally, as we're going to see, by taking a non-equilibrium perspective it sometimes turns out to be easier to solve the full time-dependent problem and extract equilibrium as the long-time limit of the solution.

So now, given this, let us turn to our example of the random walk on the triangle. We have P1, the probability of being at site 1, then P2 and P3, all time-dependent probabilities. The most general version of the problem is to compute P_i(t) at all times t, given some initial condition; because this is a first-order equation in time, all we need to do is specify the initial condition and nothing else. So let me write down the master equations for this process.
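As an aside, the general master equation above is easy to integrate numerically. Here is a minimal sketch — the explicit-Euler scheme, step size, and the triangle rate matrix below are illustrative choices, not anything from the lecture:

```python
import numpy as np

def master_step(p, W, dt):
    """One explicit-Euler step of dP_i/dt = sum_j (W_ji P_j - W_ij P_i),
    where W[i, j] is the hopping rate from site i to site j."""
    gain = W.T @ p               # sum_j W_ji P_j
    loss = W.sum(axis=1) * p     # P_i sum_j W_ij
    return p + dt * (gain - loss)

# The triangle: all rates equal to 1, no self-transitions.
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
p = np.array([1.0, 0.0, 0.0])    # particle starts at site 1
dt = 1e-3
for _ in range(10000):           # integrate up to t = 10
    p = master_step(p, W, dt)
print(p)                         # all three entries end up close to 1/3
```

Note that the gain and loss terms cancel in the sum over i, so the scheme conserves total probability exactly, step by step.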
Let's first write dP1/dt as a function of t. There is a gain term because I could be at site 2 and hop to 1, or at site 3 and hop to 1; and there are loss terms because I could be at 1 and hop to either 2 or 3, so there are two of those. Putting it together:

dP1/dt = P2 + P3 − 2 P1.

And similarly for P2 — by the way, as I go along, when arguments are obvious I won't bother writing them; it's just less writing to do — which you can obtain immediately by cyclic permutation:

dP2/dt = P3 + P1 − 2 P2,
dP3/dt = P1 + P2 − 2 P3.

Then we have to supplement these equations with an initial condition. One can choose anything, but let me choose for simplicity P_i(t = 0) = delta_{i,1}; that is, I start with a particle at site 1, and I want to ask how the probability distribution evolves as a function of time.

So we have three coupled linear differential equations. We know that first-order linear differential equations have exponential solutions, so you could assume an exponential and work it all out. But to develop some toolkit, a much more elegant way of solving the same problem is the Laplace-transform method, so I'm going to solve this using Laplace transforms. (Here is where having the students present would be very helpful: I would ask how many people have seen the Laplace-transform method, and some would say never, some would say they're bored with it. I'm assuming most of you have seen it.) The Laplace transform is defined as

P_i(s) = integral from 0 to infinity of e^{−st} P_i(t) dt,

and the reason for doing a Laplace transform — or, in general, any kind of spectral analysis on a system of differential equations — is that it usually turns partial differential
equations into ordinary ones, and ordinary differential equations into algebraic equations. You should view it exactly like a Fourier transform in space: it takes differential equations and turns them into algebraic equations, so it's a very useful way of simplifying things.

(From the organizers: general information for everybody — there is a mic icon at the bottom left of your screen that you can turn on and off. If you want to ask a question, turn it on and ask; but if you don't want the noise from your room to be heard by everybody else, turn it off afterwards. — OK, thanks.)

All right. So anyway, these sorts of spectral analyses — Laplace transform, Fourier transform, Mellin transform, generating function — are all the same; once you've seen one of them, the principle behind the generating function is the same as behind the Laplace transform. So let's take this definition and see what happens here. On the left-hand side we have d/dt: if I take the first equation, multiply by e^{−st}, and integrate over all time, then on the right-hand side I just get the Laplace transforms of these functions, while on the left-hand side I have the derivative — and you can see that taking the derivative essentially brings down a factor of s. There's a simple step involving integration by parts that lets you take these differential equations and transform them into algebraic equations. Here I'm skipping steps — and in general through these lectures I will skip steps that are entirely straightforward but prone to error on the blackboard. But if you introduce the Laplace transform here,
what you find is s P1(s). Now, I should say that textbooks often put a tilde over this to emphasize that it's a Laplace transform. I like minimalist notation: when it's clear I'm talking about the Laplace transform, I don't bother with the tilde — the argument s is already redundant with it — and once we know what we're talking about, I often don't even bother with the argument either. So anyway: when you take the time derivative of P, integrate by parts, and you get two terms — one is just s times the Laplace transform, and one is minus the initial condition, −P1(t = 0). Notice: one is the Laplace transform, the other is the probability at t = 0, and with our chosen initial condition that quantity is just equal to 1. On the right-hand side I have P2 + P3 − 2 P1, all functions of s. So:

s P1(s) − 1 = P2 + P3 − 2 P1,
s P2 = P3 + P1 − 2 P2,
s P3 = P1 + P2 − 2 P3,

where now I've stopped writing the arguments. So one sees that by introducing the Laplace transform, one has reduced a system of linear differential equations to a system of linear algebraic equations. This is now just grade-school algebra to solve — precisely the kind of thing I'm very bad at on the blackboard.
So I'm not going to do it; I'll just tell you the answer, and you can verify for yourself that it's correct:

P1(s) = (s + 1) / [s (s + 3)],
P2 = P3 = 1 / [s (s + 3)].

Ultimately we want to solve for this as a function of time, and I will do that momentarily. But the point is that, just as in Fourier analysis there are relationships between a function and its Fourier transform, here there are nice relations between a function and its Laplace transform that you can infer without doing any serious calculation. In general we're interested in the long-time properties of this probability distribution: if I start with a particle here, how does it evolve as a function of time? So the question is, what does the large-time limit correspond to in the Laplace domain? You can see — because s and t appear as reciprocals of each other — that long time corresponds to the s → 0 limit of the Laplace transform. So if we examine what happens to these functions as s → 0, we can already infer something without doing any work. As s → 0 we can forget the s compared to the 1 in the numerator, so that's a 1; then we have a 1/s.
So this is diverging like 1/s; and in the factor s + 3, as s → 0, we forget the s compared to the 3, so we get P1 → 1/(3s). For P2 and P3 likewise: forget the s compared to 3, and P2 = P3 → 1/(3s) as well. So the first thing we can infer is that in the long-time limit all the Laplace transforms approach a common limit 1/(3s), which suggests that in the real-time domain the concentrations at all three sites become the same — and they actually do. The other thing is that there are simple rules for computing the inverse Laplace transform, very much like the inverse Fourier transform, and the very first lesson you learn is that the inverse Laplace transform of 1/s is just a constant. So this tells us that for t → infinity,

P1(t) = P2(t) = P3(t) → 1/3.

In the long-time limit all the concentrations go to 1/3. Another thing you can infer with no work — again in the limit s → 0, in the Laplace domain — is what happens if you add up the concentrations, P1 + P2 + P3: the numerator is (s + 1) + 1 + 1 = s + 3, divided by s (s + 3); the factors of (s + 3) cancel, and you get 1/s. So sum_i P_i(s) = 1/s, which is the Laplace transform of the constant 1. This shows that the probabilities started off normalized
and they stay normalized: I started with one unit of probability on site 1, and at all times the sum of the probabilities adds to one. Nothing has disappeared and nothing has been created, which is what we expect.

The last thing is to ask for the behavior as a function of time, so we have to do the inverse Laplace transform. Let's just do one term, because it's simple, and the other one we can infer from normalization. Again, just as there are rules for inverting the Fourier transform, there are rules for inverting the Laplace transform. One basic one — and I'm running out of space, so let me put it over here — is that 1/(s + a) in the Laplace domain corresponds to e^{−at} in the time domain. The way you can verify this is to plug e^{−at} into the definition: you're integrating e^{−(s + a)t} dt, and clearly, integrating from 0 to infinity, you get 1/(s + a). (— We can't see that. — The camera angle is slightly too high. — OK, can you see this line here? — Yes, definitely. — Then let me put it over here: 1/(s + a) corresponds to e^{−at}. Can you see that now? OK.) So anyway, the point is that if I plug e^{−at} into this definition, I have e^{−st} e^{−at} integrated from 0 to infinity, and I just get 1/(s + a). So what I can do here, to invert this Laplace transform with little work, is — and again, shout if you can't see this — look at P2. It doesn't look like just a simple linear thing, but you can do a partial-fraction decomposition.
This is equal to — let me think for a second —

1/[s (s + 3)] = (1/3) [1/s − 1/(s + 3)]:

if you recombine the two terms, you get an s + 3 in the numerator, so after multiplying by one third this is indeed the same thing. That's in the Laplace domain. In the time domain we know the inverse Laplace transform of 1/s is a constant, and of 1/(s + 3) it's e^{−3t}, so

P2(t) = 1/3 − (1/3) e^{−3t},

and that's also the same as P3(t). Finally, P1(t) you can write as 1 − 2 P2(t), because everything is normalized and P2 = P3, so 1 − P2 − P3 = P1. Doing that subtraction — which is not obvious in one's head —

P1(t) = 1/3 + (2/3) e^{−3t}.

You see that at t = 0 this is 1, which matches the initial condition, and in the long-time limit all the probabilities decay exponentially in time to 1/3.

So that's a very basic example. But from it one can study the same type of random walk on larger and larger graphs and build some intuition for the relaxation. This number 3 here — we can think of its inverse as a time: there is a relaxation time 1/3, and it tells us how quickly the system relaxes to the microcanonical ensemble of equal probabilities at each site. One can now play with this example: study four sites, a four-site complete graph, any graph you want, and develop intuition for how the probability relaxes on simple networks. But one reason for showing this example is that it's paradigmatic: it's very simple, and it begins to introduce the Laplace-transform technique as a natural way of solving these sorts of problems. So I'm hoping this is accessible.
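The closed-form answer can be checked against a direct numerical integration of the three master equations. A minimal sketch (the explicit-Euler scheme, step size, and final time are arbitrary choices):

```python
import math

# Analytic solution obtained above by Laplace inversion
def p_analytic(t):
    p1 = 1/3 + (2/3) * math.exp(-3*t)
    p2 = 1/3 - (1/3) * math.exp(-3*t)
    return p1, p2

# Independent check: explicit Euler on the three coupled ODEs
p1, p2, p3 = 1.0, 0.0, 0.0       # initial condition P_i(0) = delta_{i,1}
dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    d1 = p2 + p3 - 2*p1
    d2 = p3 + p1 - 2*p2
    d3 = p1 + p2 - 2*p3
    p1, p2, p3 = p1 + dt*d1, p2 + dt*d2, p3 + dt*d3

a1, a2 = p_analytic(T)
print(p1 - a1, p2 - a2)          # both differences are tiny
```

Since d1 + d2 + d3 = 0 identically, the numerical scheme also conserves total probability, mirroring the sum_i P_i(s) = 1/s check above.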
I'm hoping it's at the right level, and I'm crossing my fingers; if nobody shouts into the microphone, I'm going to continue. (By the way, what time did I start? OK, very good.)

So what I want to do now is solve exactly the same problem — everything the same — but using the generating-function technique, because I want to show that, line by line, the two solutions are exactly identical: everything conceptual is the same, and only the details are a little more complicated. In fact, this is an important philosophical point, because students are often first exposed to the random walk in the context of discrete time and discrete space: you say that in each time step you make a single hop to one side or the other. Conceptually the discrete-time random walk is somewhat simpler; but on the other hand, calculus was invented to turn discrete problems into simpler continuum problems, and you're going to see that the technology for solving the discrete random walk is actually more complicated than for the continuum random walk. Still, it's good to show it, because I'm going to use generating functions throughout this set of lectures.

So with much regret I'm going to erase some of this — if I had all the blackboard space in the world I would show everything, and every equation would look exactly identical, but I don't have room, so please excuse that. Let's now do the discrete solution. Again I have my triangle, 1, 2, 3 — I know I'm standing in front of what I'm writing; I'll get out of the way in a second — and I want to write down the master equations for this process. Bear with me a second while I find my notes. OK, let me write down the discrete master equations.
Here the basic quantity — let me write it over here — will be P_i(n), the probability that the random walk is at site i at the n-th step. It's just the analog of P_i(t), except that now the time argument is discrete and advances by one every time I take a step. So let's write down P1(n). How can I be at site 1 at the n-th step? The only way is that I was at site 2 at the previous step and hopped over — and if the hopping rates on all the links are equal, there's a term P2(n − 1) — plus another term, because I could have been at site 3: P3(n − 1). And one more thing: if I'm sitting at site 2, then in a single time step I can hop either to 1 or to 3. For reasons of convenience — it doesn't really matter at all — I'll just say half the probability went one way and half the other, so the whole equation should be multiplied by an overall factor of one half:

P1(n) = (1/2) [P2(n − 1) + P3(n − 1)].

(Is somebody asking a question? — OK, very good; let me try that, though it's very unnatural for me. Anyway.) Similarly, by cyclic permutation,

P2(n) = (1/2) [P3(n − 1) + P1(n − 1)],
P3(n) = (1/2) [P1(n − 1) + P2(n − 1)].

I also need an initial condition, and again let me choose the initial condition that the walk starts at site 1.
That is, P_i(n = 0) = delta_{i,1}: when I start the system, there is a particle sitting at site 1. (I forgot to mention this earlier: this symbol is the Kronecker delta, which is 1 if i = 1 and 0 otherwise; it's just shorthand for writing that initial condition.)

OK, so notice the difference from the master equations that I unfortunately just erased: those were differential equations, and these are difference equations. And again, difference equations are conceptually simpler — you sort of see them in grade school — but they're harder to actually solve. So how do we solve this set of difference equations? I'm going to introduce something called the generating function, and it's the exact analog of the Laplace transform. Let me define

P_i(z) = sum from n = 0 to infinity of P_i(n) z^n.

(At least this time I didn't have to erase first.) You see it looks quite similar to the Laplace transform: instead of a continuum integral one has a discrete sum, and instead of e^{−st} — well, you can think of z as e^{−s}, and then this would be exactly the same thing, just a discrete sum versus a continuous integral; it simply turns out to be more convenient to write it this way. So now that you see that the generating function is conceptually identical to the Laplace transform, I need to erase this so I have room to work. And the idea is the same as before: I take each of these equations, multiply by z^n, and sum — on the left I'm going to sum from n = 1, not 0, to infinity — and I do this for all three equations; let's see what happens when we do that. Just as before, what we're going to see is that we're
going to transform a set of difference equations into a set of algebraic equations that can be solved right away. So let's look at the first equation. On the left we have the sum from n = 1 to infinity of P1(n) z^n; but the generating function is summed from n = 0, so the left-hand side is nothing more than the generating function minus the n = 0 term: P1(z) − P1(n = 0) — and we already know that term is equal to 1. On the right-hand side we have one half times the sum from n = 1 of z^n P2(n − 1). What we can do is factor out one overall power of z; then, together with the overall factor of 1/2, I'm left with the sum from n = 1 of P2(n − 1) z^{n−1}, which — shifting the index — is the same as the sum from n = 0 of P2(n) z^n. And that is nothing more than the generating function again, so we just get P2(z), plus the same thing for P3(z), and then plus cyclic permutations:

P1(z) − 1 = (z/2) [P2(z) + P3(z)].
For the second equation we would have P2(z) minus its initial condition — but we started with the particle at site 1, so there's no initial-condition term here, and likewise for the third. So we get

P2(z) = (z/2) [P3 + P1],
P3(z) = (z/2) [P1 + P2],

where again I'm being lazy about writing the arguments. Now, this is where it's very difficult for me to do it in real time on the blackboard, but as a simple exercise one can solve these three algebraic equations, and let me write down the answer:

P1(z) = (1 − z/2) / [(1 − z)(1 + z/2)],
P2 = P3 = (z/2) / [(1 − z)(1 + z/2)].

Unfortunately, because the Laplace transforms are erased, you can't see that this looks exactly the same in form as the Laplace-transform result. But let's try to extract some useful information from this without doing any calculation — ultimately we want to invert these generating functions and compute P_i(n), but let's first see what we can learn without doing any work. The first point to make: we saw that in the Laplace transform the limit s → 0 corresponds to the long-time limit. Here what we're interested in is the large-n limit — many steps: what is the form of the probability distribution after many steps? And just as t → infinity corresponded to s → 0, here n → infinity corresponds to z → 1. So there is this correspondence: n → infinity in real time is the same as z → 1 in the generating function. One can see it from the Laplace domain: t → infinity was like s → 0, and e^{−s} goes to 1 as s goes to 0.
So that's why: z, which is e^{−s}, goes to 1 as s goes to 0. If we look at the z → 1 limit of these functions, we can learn something about the long-time limit. Let's do the same things we did for the continuous case. First, look at the sum of the P_i(z): if we add up P1 + P2 + P3, the numerator is (1 − z/2) + z/2 + z/2 = 1 + z/2, which cancels the (1 + z/2) in the denominator, and we just get 1/(1 − z). And if you expand that as a power series in z, it's nothing more than 1 + z + z² + z³ + …, and the crucial point is that the coefficient in front of every power of z is equal to 1. It says that at any n, the total probability of being at any site is always equal to 1 — we just have conservation of probability.

Now let's look at the z → 1 limit of P2. One sees that the 1/(1 − z) factor is blowing up as z → 1, and that factor contains the asymptotic behavior; for everything else we can just plug in z = 1. So in the numerator we have one half, and here in the denominator the (1 + z/2) gives three halves — so we have one half divided by three halves.
So that's one third. For example, as z → 1,

P2(z) → (1/3) · 1/(1 − z),

and if we expand this in a power series — which is trivial — we see that every coefficient is 1/3. It shows that in the long-time limit the probability of being at site 2 goes to 1/3: we start with the particle at site 1, and because of the mixing of this discrete random walk, ultimately it goes to 1/3.

Now, inverting the generating function is in general a bit more involved than inverting the Laplace transform, and let me just tell you how to do it formally — though this example is so simple that we won't need all the formal tricks. The point is that I now treat z as a complex variable. The generating function is a power series in z, and if I want to compute, say, the n-th term of that power series, here is how I do it. To compute the n-th term, I take my generating function P_i(z) and divide by z^{n+1}. If I do that, then on the right-hand side I now have a power series containing negative powers of z, which is called a Laurent series, and the n-th term of the original Taylor series has become the coefficient of 1/z. So to extract it, you take P_i(z)/z^{n+1}, perform a contour integral around the origin, with a factor 1/(2 pi i); the coefficient P_i(n) is just the residue of this integral:

P_i(n) = (1/(2 pi i)) contour-integral of P_i(z) / z^{n+1} dz.

That's the formal way one should invert the generating function. But it turns out, again, that things here are sufficiently simple that we don't have to worry about all that: in this particular case we can again do a partial-fraction decomposition. So let's look at this guy over here.
There's a z/2 sitting out in front; and let me fiddle for a moment — actually, I'm not going to do it in my head; I'm going to cheat and look at my notes, because I can't remember. It turns out to be

1 / [(1 − z)(1 + z/2)] = (2/3) · 1/(1 − z) + (1/3) · 1/(1 + z/2):

you take two thirds of this and one third of that, put them over a common denominator, and you get the same thing as before. But the point about this is that the Taylor-series expansion of a simple pole is trivial, so this thing equals

P2(z) = (z/2) [ (2/3)(1 + z + z² + …) + (1/3)(1 − z/2 + (z/2)² − …) ],

and from this you can compute P_i(n) as the n-th term in the power-series representation of this function. I'm skipping the trivial step of extracting it, but what you'll find, for example, is that P2(n) is the following series: it starts off at 0, then 1/2, then 1/4, then 3/8, then 5/16, and so on down the road; and as n → infinity this converges to the value 1/3. So, more or less line by line, everything in the generating-function solution is the same as in the original Laplace-transform solution — but I wanted to show these examples side by side because it's a useful pedagogical exercise.

— Sorry, a comment about the writing on the blackboard: the camera is kind of blurry, so little things like i's and pluses and minuses — maybe you could pay attention to them, or make them a little bit bigger?
— It could help. — OK, I'll try, and hopefully by tomorrow I'll have a better camera. But also feel free to interrupt if something is too small — just say so; you don't have to wait until the end. OK, thanks. All right.

So, anyway, that is the story about random walks on small graphs. What I want to turn my attention to now is the class of hopping problems known as the 1d random walk. Some of the things I'm going to show you now are perhaps familiar to some of you, and hopefully some will be brand new; I want to describe, again using tools like the generating function and the Laplace transform, how to look at 1d random walks in a very unified framework.

So let's now talk about 1d random walks. In the most general case, I have a set of points along an infinite one-dimensional line, and at site n there is a hopping rate r_n to the right and a hopping rate l_n to the left. What I'm interested in is P_n(t), the probability of being at site n at time t — again, maybe I'll write it capitalized — and the master equation tells me how this probability changes as a function of time. For this one-dimensional hopping process there are, in general, two ways P_n can change: hopping into site n and hopping out of site n. Hopping in: I could be at n − 1 and hop to the right, giving a term r_{n−1} P_{n−1}; or I could be one step to the right, at n + 1, and hop to the left, giving l_{n+1} P_{n+1}. And then I could be at n and hop out, with total hopping rate r_n + l_n, giving −(r_n + l_n) P_n. So:

dP_n/dt = r_{n−1} P_{n−1} + l_{n+1} P_{n+1} − (r_n + l_n) P_n.

That is the most general master equation, and now let's work out some very specific examples.
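The general 1d master equation above can also be integrated directly. A minimal sketch on a finite chain — the chain length, reflecting boundaries, and uniform rates r_n = l_n = 1 are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def step(p, r, l, dt):
    """Euler step of dP_n/dt = r_{n-1} P_{n-1} + l_{n+1} P_{n+1} - (r_n + l_n) P_n
    on a finite chain; boundary rates are zeroed so no probability leaks off the ends."""
    gain_right = np.roll(r * p, 1)     # r_{n-1} P_{n-1}
    gain_right[0] = 0.0                # undo the wrap-around from np.roll
    gain_left = np.roll(l * p, -1)     # l_{n+1} P_{n+1}
    gain_left[-1] = 0.0
    return p + dt * (gain_right + gain_left - (r + l) * p)

N = 101
r = np.ones(N)
l = np.ones(N)
r[-1] = 0.0                            # no hopping off the right end
l[0] = 0.0                             # no hopping off the left end
p = np.zeros(N)
p[N // 2] = 1.0                        # start in the middle of the chain
for _ in range(2000):                  # integrate up to t = 2
    p = step(p, r, l, 1e-3)

n = np.arange(N)
var = (n**2 * p).sum() - ((n * p).sum())**2
print(p.sum(), var)                    # probability is conserved; var grows as (r+l) t
```

With symmetric rates the mean position stays put while the variance grows linearly, the hallmark of diffusion that later parts of such a course make quantitative.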
So the most simple example I can think of is the so-called Poisson process. In the Poisson process we say there's no hopping to the left, so l_n = 0, and all the hopping rates to the right are equal, r_n = r. One more thing: I will also assume that the walk starts at n = 0, and I chop the system there, so I start at the left end of the system and only hop to the right; I don't have to worry about the negative axis at all. So let's study this. This is a classic process that you first meet in a first course in probability theory, but let's just write it out. p_0-dot: I can't hop in from the negative axis, so that gain term doesn't appear, and there's no l_n, so there is just minus r p_0. If I'm sitting at site zero, at the end of my system, all that can happen is that I hop to the right; there's no way I can gain probability, so there is only the loss term. Then I can write p_1-dot = r (p_0 - p_1), and p_2-dot = r (p_1 - p_2), and so on. There are two ways one can solve this. One can just solve the equations one by one: clearly the equation for p_0 gives p_0(t) = e^{-rt}; one can then plug that into the next equation and get a solvable equation for p_1, then plug that in to get a solvable equation for p_2, and so on, developing the solution one term at a time. Let me instead do it a slightly different way, by introducing the Laplace transform; I leave it as a simple exercise for any student to solve these one by one and recover the same solution. So again I define the Laplace transform p_n(s) as the integral from zero to infinity of e^{-st} p_n(t) dt. If you plug that into these equations, they again change into purely algebraic equations. And I realize I
forgot one thing when I was talking about the Poisson process: I need to define an initial condition, where the particle starts. In the simplest, most natural case it starts at n = 0, so p_n(t = 0) = delta_{n,0}: I'm starting the particle at the origin. So if you apply the Laplace transform method, the differential equations become algebraic. The time derivative gives s p_0 minus the initial condition, which is one, equal to minus r p_0. This is pretty easy; even I can solve it: p_0(s) = 1/(s + r), which tells us that p_0(t) = e^{-rt}, which you also could have gotten directly, as over here. But let's continue the game: s p_1 = r p_0 - r p_1, so p_1 = r p_0/(s + r), which is r/(s + r)^2. That's simple, and now for p_2: looking at the next equation, you get r p_1/(s + r), which is r^2/(s + r)^3, and so on, giving p_n(s) = r^n/(s + r)^{n+1}. So at least algebraically, in the Laplace domain, you can see the solution without doing a heck of a lot of work. Now we have to invert the Laplace transform, and this is one of the first lessons in Laplace-transform technology: what is the inverse Laplace transform of this? I'm not going to tell you how to do that; I'll leave it as something for you to think about. But the answer is the following: p_n(t) = e^{-rt} (rt)^n / n!. This is the Poisson distribution, the very famous Poisson distribution. One of the main points is the average position of the particle: the average of n is nothing more than the summation over all n of n p_n(t), from n = 0 to infinity.
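The chain of Poisson equations above is easy to check numerically. Here is a sketch of my own (the rate, truncation level, and step size are arbitrary choices) that Euler-integrates the truncated hierarchy and compares with p_n(t) = e^{-rt}(rt)^n/n!.

```python
import math

def poisson_master(r, t, nmax=60, dt=1e-3):
    """Euler-integrate p0' = -r p0 and pn' = r (p_{n-1} - p_n),
    starting from p_n(0) = delta_{n,0}, truncated at n = nmax."""
    p = [0.0] * (nmax + 1)
    p[0] = 1.0
    for _ in range(int(round(t / dt))):
        new = [p[0] - dt * r * p[0]]
        for n in range(1, nmax + 1):
            new.append(p[n] + dt * r * (p[n - 1] - p[n]))
        p = new
    return p

r, t = 1.0, 3.0
p = poisson_master(r, t)
exact = [math.exp(-r * t) * (r * t) ** n / math.factorial(n)
         for n in range(len(p))]
```

The mean of the numerical solution should sit at rt, in line with the moment computed next in the lecture.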
Where is the average position of the particle? It is just growing like rt, and that stands to reason: if I'm hopping to the right at rate r, then after time t I've taken about rt steps to the right, so I should be at rt. And a nice feature of the Poisson distribution is that in the limit n goes to infinity it goes over into a Gaussian distribution whose mean value is rt and whose width grows like the square root of rt. Okay, so that is more or less everything I wanted to say about the Poisson distribution. Are there any questions? I look over this empty room with three people who are now getting infected by me. Okay. So let me now turn to the classic nearest-neighbor random walk. Presumably all of you have seen this in the context of discrete time and discrete space; what I want to show you is how to solve it in continuous time. So, once again with regret, I'm going to erase this. Let's now do the nearest-neighbor random walk. I'm assuming here that I have an infinite one-dimensional line; there's no longer the constraint at n = 0. We have sites n, n + 1, n - 1, and the hopping rates are the same everywhere, say rate one each way. So my equation of motion is p_n-dot = p_{n-1} + p_{n+1} - 2 p_n: I can hop from n - 1 to n at rate one, I can hop from n + 1 to n at rate one, and if I'm at n I can hop either right or left, each at rate one, which gives the minus 2 p_n. This is the equation to solve, and again I'm going to assume that p_n at t = 0 equals delta_{n,0}: I start with a particle at the origin and ask how the probability distribution evolves in time. We know what happens in discrete time and discrete space.
It's just the binomial distribution, and we know that in the continuum limit it becomes the Gaussian distribution. But let's do an explicit calculation to see how that all comes about. In this case I'm going to use the Fourier transform method, and in fact I'm going to solve this example in two different ways: first by Fourier transform and then by Laplace transform. So let me now introduce the Fourier transform p_k(t), equal to the summation from n = minus infinity to plus infinity of p_n(t) e^{ikn}. Hopefully the Fourier transform is generally more familiar to everyone. The point is that if you take this equation, multiply by e^{ikn}, and sum from minus infinity to plus infinity, then on the left-hand side you just get the time derivative of the Fourier transform, while a term like p_{n-1} e^{ikn} can be written as p_{n-1} e^{ik(n-1)} times e^{ik}, and e^{ik(n-1)} p_{n-1} summed over all n is just the Fourier transform again, multiplied by a phase factor. So once again, the point of doing these spectral analyses is that you transform a difference equation into an algebraic equation, which is easily solved. If I do this Fourier transform, I get on the left-hand side p_k(t)-dot, and on the right e^{ik} plus e^{-ik} minus two, all times p_k(t), which I can write as 2(cos k - 1) p_k(t). (Oh, sorry, thank you: the two should go outside. Correct.) So you see that I've transformed a differential-difference equation into nothing more than a simple ordinary differential equation, whose solution is p_k(t) = p_k(t = 0) e^{2(cos k - 1) t}. Now, what is p_k at t = 0?
So here is where my initial condition comes in: if I Fourier transform it, taking it, multiplying by e^{ikn} and summing over all n, there's only one term in the series, at n = 0, and e^{ik·0} is one, so this object is just one. That is the solution in the Fourier domain. Now, this part is somewhat more advanced and I don't expect anyone to know it, but it turns out this Fourier representation can be inverted, though not by any elementary means, so I'm just going to tell you the answer. What you see here is e^{-2t} times e^{2t cos k}, and if you expand that in a power series, it involves the Bessel functions. The answer is p_n(t) = I_n(2t) e^{-2t}. It requires a little more work to actually show this, but if you look in Abramowitz and Stegun, that beautiful blue book that you young people who aren't in this room with me probably don't know about, but us old farts all have our dog-eared copies of, and we love it because it tells us so many useful things, one of them is exactly this inversion. Here I_n(2t) is the modified Bessel function of the first kind of order n. The thing is that it takes even a little more work to extract the asymptotic behavior of this function, but it turns out that in the limit t going to infinity and n going to infinity, such that n^2/t is finite, neither blowing up nor going to zero, this thing is just the good old Gaussian distribution: p_n(t) is approximately 1/sqrt(4 pi t) times e^{-n^2/4t}. So this is the good old Gaussian that you're probably all familiar with, but you're seeing it derived in kind of a crazy way. Okay. So let me now solve exactly the same problem again, because I want to use this as sort of a
warm-up for the techniques I'll use throughout this set of lectures: let me solve exactly the same problem using the Laplace transform method. So again, with great regret, I'm going to erase this; I'd like to do it side by side, but I have to, so please excuse me. Sorry, go ahead and ask. What is the justification of n^2/t being finite in that limit? Well, if I were giving a course in mathematical physics I would show you how to take the limit: the Bessel function is a special function of mathematical physics with interesting asymptotic properties for large and small n and t, and it just turns out that in this particular limit, and only in this particular limit, it goes over into the Gaussian distribution. I haven't shown it to you, and if I had another lecture I would. Okay, thanks. Sorry for the non-answer. Okay, anyway, let's now solve it by the Laplace transform method. Our equation was, and I guess I don't need the picture anymore, p_n-dot = p_{n-1} + p_{n+1} - 2 p_n. To do the Laplace transform I take this equation, multiply by e^{-st}, and integrate from zero to infinity over t, and let's see what comes out. We get s p_n minus p_n at t = 0, where I'm writing the argument so we know we're talking about the zero-time value of the original function, equal to p_{n-1} + p_{n+1} - 2 p_n. It's only for n = 0 that we have a special term: there we have s p_0 - 1 = p_{-1} + p_1 - 2 p_0. And again, since we start with the particle at the origin, the probability distribution is clearly symmetric, so p_1 and p_{-1} are the same.
So the right-hand side just goes over into 2 p_1 - 2 p_0. That is true for n = 0. For n not equal to zero we just have s p_n, and allow me now to move the minus 2 p_n over to the left: (s + 2) p_n = p_{n-1} + p_{n+1}. So once again we have transformed a differential-difference equation into just a difference equation, which we can solve. To make this simpler, let me write it as p_n = a (p_{n-1} + p_{n+1}), where I define the constant a = 1/(s + 2). So it's a two-term recurrence relation. And by the way, the technique I'm showing you here could also be used to solve something like the Fibonacci sequence. So how do we solve this equation? It's a second-order linear difference equation, and the calculus of differences is just the same as the calculus of differential equations, only clunkier: we just assume an exponential solution, exactly as you do for a second-order differential equation.
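As an aside on the Fibonacci remark: the same exponential-ansatz trick handles F_n = F_{n-1} + F_{n-2}. A small sketch of my own (not from the lecture): the characteristic equation lambda^2 = lambda + 1 has roots (1 ± sqrt 5)/2, and fixing the two amplitudes from F_0 = 0, F_1 = 1 gives Binet's formula.

```python
import math

def fib_closed(n):
    """Binet's formula: solve F_n = F_{n-1} + F_{n-2} with the ansatz
    F_n ~ lam**n; the two roots of lam**2 = lam + 1 are combined with
    amplitudes fixed by the initial conditions F_0 = 0, F_1 = 1."""
    s5 = math.sqrt(5.0)
    lam_plus = (1 + s5) / 2
    lam_minus = (1 - s5) / 2
    # rounding removes the tiny floating-point error for moderate n
    return round((lam_plus ** n - lam_minus ** n) / s5)

fib = [fib_closed(n) for n in range(12)]
```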
Let's assume that p_n, which is a function of s, is of the form lambda^n, where we don't know lambda yet but we're going to figure it out, and lambda will be some function of s. Plugging this in, we get lambda^n = a (lambda^{n-1} + lambda^{n+1}). Factoring out the common factor lambda^{n-1}, we get a quadratic: a lambda^2 - lambda + a = 0. Because this is a second-order equation there should be two linearly independent solutions, and indeed there are two roots: lambda_{±} = 1/(2a) ± sqrt(1/(4a^2) - 1). So we're almost done, because now we know how the solution depends on n: notice that it is exponential in n. Oh, I forgot one small thing: we have a linear equation, so we don't know the amplitude. I need to put some amplitude out in front; I don't know what it is yet, we'll figure it out, but the point is that the amplitude doesn't appear in the recurrence, so we don't have to worry about it yet. One more thing: this function a is 1/(s + 2), and s runs between zero and infinity, which says that a runs between one half and zero. And you can see right away that lambda_+ is bigger than one and lambda_- is less than one. Lambda_+ bigger than one is a bad solution, because it blows up exponentially, while we know that the probability has to stay less than one altogether.
So we have to reject that solution. In principle we have two linearly independent solutions, A lambda_+^n plus B lambda_-^n, but there's only one admissible piece: p_n(s) = A lambda_-^n. The way we figure out what A is, is by plugging into the one boundary condition, the special n = 0 equation that's different from everybody else, this equation here. So we determine A by plugging in: s p_0 - 1 = 2 p_1 - 2 p_0, where p_0 is just A and p_1 is A lambda_-. We can now solve for A, and again, because I'm really bad at doing even simple algebra on the blackboard, I'm just going to cheat and tell you the answer, which you can check: A = 1/(s + 2 - 2 lambda_-). So we've now solved the problem in the Laplace domain, but you might say: what does this look like? Where can we graph this function? In general, for arbitrary s, it's not necessarily easy, but what we're interested in is the limit of long time, which corresponds to the s going to zero limit of the Laplace transform. Let's look. Yes, please? Can you just remind me, what is p_{-1}? Which equation, when you take n equal to zero? Ah, the point is that I start with a particle at the origin, and clearly if I start at the origin the probability distribution spreads symmetrically, so p_{-n} is the same as p_{+n} for any n. In particular p_{-1} is the same as p_1: this guy is equal to that guy. Okay. All right.
So again, we have to look at the limit s going to zero, and this is one of those things where, if you're not used to this sort of asymptotic analysis, it can feel like you're standing on quicksand, but once you get used to it, it turns out to always be the easiest way of getting asymptotic information. We want the s going to zero limit because that tells us everything we want to know about the long-time, large-n behavior of the probability distribution. So let's figure out what the s going to zero limit is. First of all, a was 1/(s + 2); as s goes to zero this is one half, and I can write it as (1/2) times 1/(1 + s/2), which for small s is (1/2)(1 - s/2). You might say: don't I have to keep more terms? But if we're interested in the long-time limit, we don't need to worry about additional terms; only the first correction is relevant. Okay, so that's a, and 1/(2a) is approximately 1 + s/2. Then lambda_- = 1/(2a) - sqrt(1/(4a^2) - 1). (I was a little bothered for a moment and had to check my notes: it's the whole 1/(2a), squared, that sits inside the square root.) Squaring 1 + s/2 gives, to this order, 1 + s, so inside the square root I have 1 + s - 1, which is just s, and the square root is sqrt(s); it's even simpler. And asymptotically sqrt(s) is much bigger than s/2 as s goes to zero, so I don't have to worry about that term.
So lambda_- is roughly 1 - sqrt(s). Okay, and now let's see what A is: A = 1/(s + 2 - 2 lambda_-) = 1/(s + 2 - 2(1 - sqrt(s))), so the twos cancel and the denominator is s + 2 sqrt(s). And again I can forget about s compared with 2 sqrt(s), so this is just 1/(2 sqrt(s)). So finally I get that p_n(s), which equals A lambda_-^n, asymptotically goes like 1/(2 sqrt(s)) times (1 - sqrt(s))^n, and (1 - sqrt(s))^n I can write as e^{-n sqrt(s)}. Now, if you look in your handy-dandy table of inverse Laplace transforms, the inverse Laplace transform of this is very simple: p_n(t) = 1/sqrt(4 pi t) times e^{-n^2/4t}. So here is the Gaussian distribution, now derived in a very unconventional fashion. Anyway, the point of this discussion was just to give you a feeling for how you can use the Laplace transform and the generating function to solve very simple problems in a very clean way, and hopefully you can see the unity of the different methods of solution; we will be using these techniques throughout the class. Students, are you tired? Do you need a five-minute break? I think it's fine. Okay, I'll keep going, not a problem. Okay, so what I want to do now, where are my notes, is an interlude. Sorry, yes? A question about the computation of capital A: in the denominator, don't you have a two square root of s? Because the expression was 1/(s + 2 - 2 lambda_-), right? Say that again? Here's where the lack of blackboard space really kills me: A was 1/(s + 2 - 2 lambda_-), correct.
That's what it was. Yes, ask away. Maybe there was a two square root of s: in the expression on the left you wrote 2 + s, then minus 2, plus 2 sqrt(s), maybe? Oh, yes, sorry: two square root of s. So the denominator is s + 2 sqrt(s), and that's fine. Thank you, I appreciate that. Okay, thank you very much. So I want to do a little interlude. This is just a small thing about random walks, which some of you may be aware of and some of you may not, but it's a very beautiful result. Everything so far was based on a hopping rate of one; in general one thinks about the continuum diffusion equation, dp/dt = D d^2p/dx^2, which is the continuum version of what I just solved, where D is called the diffusion coefficient: it measures the rate at which the random walk, the diffusing particle, spreads. For a delta-function initial condition the solution is p(x, t) = 1/sqrt(4 pi D t) times e^{-x^2/4Dt}, and I want to tell you some amazing properties of this very simple, innocuous function, which go under the general term of first-passage processes. So let's look at this solution. This is the solution for a random walk spreading in an infinite one-dimensional space, but let's ask the following question. We start on a semi-infinite line, at some position x_0, and we release our particles.
They're going to make a Gaussian probability distribution that gradually spreads out, but at x = 0 there is a cliff, and if a random walker hits the cliff, he falls off and dies and is gone forever. The questions I want to ask are these: what is the probability that a random walker who starts here eventually dies, if I wait forever? And if he does die, how long does it take him to die? These two questions go under the term of first-passage processes, and it turns out one can understand them with a beautiful solution that comes from elementary electrostatics. Because what we want to solve now is the diffusion equation on a semi-infinite interval; the formula above is the solution on the infinite interval, with no boundary conditions. So how do we solve the problem here? The problem we want to solve is p_t = D p_xx, where subscripts mean partial differentiation, with p(x, t = 0) = delta(x - x_0), starting the particle at x_0, and we want to impose the boundary condition p(0, t) = 0 for all t: if I hit the cliff, by definition I die and I'm no longer part of the probability distribution, and that is imposed by saying the concentration at the origin is zero. So how do we solve this problem? Let me write down the answer, which turns out to be very beautiful and very simple. Well, we start with a particle at x_0.
So let's first include that contribution: 1/sqrt(4 pi D t) times e^{-(x - x_0)^2/4Dt}. That would be the whole answer if there were no boundary condition; it's the free solution started at x_0 rather than at the origin. Now we need to do something to satisfy the boundary condition. For those of you who have taken a course in electrostatics: if you put a point charge near a grounded plane, an image charge comes up because the system is grounded, and this can be reproduced by placing an image charge of exactly the same magnitude and opposite sign at the mirror location. So all we need to do to satisfy the boundary condition is put an anti-Gaussian at minus x_0 whose amplitude is the same as the Gaussian's; the Gaussian and the anti-Gaussian then conspire symmetrically to ensure that the concentration at x = 0 is always zero. So I put here, with a minus sign because it's the anti-Gaussian, e^{-(x + x_0)^2/4Dt}, and that's the answer. Very little work. If you want to be a masochist about it, you can actually solve it directly by the Laplace transform method, and I encourage you, if you want to see it, to try: just formally solve the differential equation, and you'll find you get this same answer. But now what I really want is something I'll call F(x_0, t): the probability of first hitting the origin at time t, starting from x_0 (thank you for the question). By construction, if you hit the cliff you fall off and die, so first of all, how do you compute this hitting probability? Well, it's just the random walkers falling off the cliff, so it's nothing more than the diffusive flux over the cliff.
So this is nothing more than D dp/dx evaluated at x = 0, the diffusive flux at the cliff. I'm not going to go through the boring details of differentiating with respect to x and evaluating at x = 0; it's simple to calculate, and it turns out to give a very beautiful answer: F(x_0, t) = x_0/sqrt(4 pi D t^3) times e^{-x_0^2/4Dt}. This object is called the first-passage probability, and if you stare at it you'll notice a number of very amazing features. The first point is that as t goes to infinity the exponential factor goes to one, so there is a t to the minus three-halves power-law tail. It means there is a very long-time tail of straggling random walkers falling to their deaths at very late times. In fact, there are two important properties. Here's important property one: the integral from zero to infinity of dt F(x_0, t), the time integral of the first-passage probability. What is this quantity? It is the fraction of random walkers that ever fall off the cliff. And an amazing result is that this integral equals one: all walkers are sure to die. No matter what, if you wait forever, everybody dies. That property is called recurrence. But the second property comes from asking: how long does it take to die?
So if I look at the integral of dt times t times F(x_0, t), the average time to fall off the cliff, and since this is now a properly normalized probability distribution I don't have to divide by anything, this integral is infinite. So even though you're sure to die, it takes an infinitely long time, on average, to die. The reason it's infinite is that at long times we're integrating dt times t divided by t^{3/2}, which from a scaling perspective certainly diverges. This dichotomy, that you're sure to die but it takes infinitely long to die, is what keeps random walks a vital field even after 140 years of serious effort; it underlies all kinds of strange properties of 1d diffusion that are still not understood even today. Okay, so I do have more to present, but I should stop at 12:30 more or less, 12:35. How are the students going to eat lunch if we can't gather? Are we going to have food delivered to our rooms by guys in hazmat suits? Then I'm dead, because I'm a vegetarian; I can graze the grass. Okay, so I'm not going to finish this topic today, but I want to introduce to you a very beautiful type of random walk known as the birth-death process. Part of the reason for introducing it, right at the start, is that the birth-death process can be thought of as a one-dimensional random walk, but, as you're going to see, it underlies so many non-equilibrium processes.
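Before moving on, the two first-passage properties just stated, certain absorption but a divergent mean absorption time, can be illustrated by a quick Monte Carlo. In this sketch of mine a discrete symmetric walk stands in for continuum diffusion, and the starting point, time horizon, and sample count are arbitrary choices.

```python
import random

def first_passage_time(x0, max_steps):
    """Symmetric +-1 walk started at x0 > 0; return the step at which it
    first hits 0, or None if it survives past max_steps."""
    x = x0
    for t in range(1, max_steps + 1):
        x += 1 if random.random() < 0.5 else -1
        if x == 0:
            return t
    return None

random.seed(2)
times = [first_passage_time(3, 30_000) for _ in range(1000)]
absorbed = sorted(t for t in times if t is not None)
frac_absorbed = len(absorbed) / len(times)  # creeps toward 1 as the horizon grows
```

The heavy t^{-3/2} tail shows up as a sample mean of the absorption times sitting far above the sample median.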
It's just everywhere in biological physics; it's worth knowing about. So here is the birth-death process. You have organisms, like bacteria, and they give birth to their children at some rate, call it one just to make life simple, and they also die at some rate. Let p_n(t) be the probability that there exist n organisms at time t, and let's write down the master equation for the evolution of this p_n(t). So, p_n-dot: how does this change? Well, if I have n - 1 organisms and one of them gives birth, then there are n, which increases the probability of having n particles in the system; and since every organism gives birth at the same rate, the collective rate of giving birth is n - 1, so there's a term (n - 1) p_{n-1}. It's also possible that you have n + 1 organisms and one of them happens to die; if they die at the same rate as they give birth, the total collective rate of death is n + 1, so there's a term (n + 1) p_{n+1}. And then, if I have n particles in the system, any one of them could give birth or could die; the rate of birth is one and the rate of death is one, so the total rate of one organism doing something is two, and with n particles the total rate of anything happening is 2n, giving a loss term minus 2n p_n. So the master equation for the birth-death process, in the limit where the birth and death rates are the same, is p_n-dot = (n - 1) p_{n-1} + (n + 1) p_{n+1} - 2n p_n. Now you might ask: what happens if the death rate is bigger than the birth rate, or vice versa?
Well, if the birth rate is bigger than the death rate, then the system explodes exponentially, and that is not terribly relevant, because we are not completely engulfed by bacteria. If the death rate wins out over the birth rate, then the particles die out and there's nothing left to talk about. So the equal-rates case turns out to be the interesting limit. Another perspective on this is that it is nothing more than a diffusion-like process, except that at site n things move faster and faster: it's a spatially dependent random walk that becomes jumpier and jumpier as you go further and further to the right. And once again, the problem is defined only on a semi-infinite interval. If I start with no particles, nothing will happen: you can't give birth if you're already dead. So the natural initial condition is to start with one particle: p_n(t = 0) = delta_{n,1}. I start here, and way out at large n the hopping rate is huge, so let me draw that with a really big arrow, whereas over here the hopping rate is small. Now we want to solve for the probability distribution of this random walk. And well, how do we do it?
So once again, there are lots of baby steps you can take before trying to solve for the whole thing. In general, whenever you're faced with a master equation you don't know how to solve, you might say: well, I don't know how to solve the full master equation, but why not look at some moment of the distribution? As an exercise, you could try solving for the time derivative of the average of n, where the average of n is nothing more than the summation of n p_n from n = 1 to infinity. But here's the problem: this is a symmetric hopping process. At any point n, the rate of giving birth and the rate of dying are the same, so clearly the average n does not change, and the first moment provides you no information. You might then try looking at something like n squared, and there you will in fact get useful information. So if you feel intimidated by this equation and don't know how to solve it, try looking at moments; you'll get something. Once again, I would give this as an exercise to the students: compute the second moment and see what you get. But what I want to do now, and I guess I'm not going to finish it today, but let me tantalize you by setting up the problem: how do we actually solve this differential equation? Let's just stare at it for a moment. I would ask the students: I've spent time presenting the generating function, the Laplace transform, the Fourier transform; how would I solve this equation? What should I do? Somebody shout something out. Someone said generating function: you get an A. Okay, so let us solve by the generating function.
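The moment exercise just suggested can be checked numerically before doing the generating function. Here is a sketch of my own (truncation level and step size are arbitrary) that Euler-integrates the birth-death master equation from one organism; for the critical process d<n>/dt = 0 while d<n^2>/dt = 2<n>, so the second moment should grow as 1 + 2t, which is the useful information hiding in it.

```python
def birth_death(t_final, nmax=80, dt=1e-3):
    """Euler-integrate p_n' = (n-1) p_{n-1} + (n+1) p_{n+1} - 2n p_n
    (critical birth-death process), starting from p_n(0) = delta_{n,1},
    truncated at n = nmax."""
    p = [0.0] * (nmax + 1)
    p[1] = 1.0
    for _ in range(int(round(t_final / dt))):
        new = p[:]
        for n in range(nmax + 1):
            gain = ((n - 1) * p[n - 1] if n >= 2 else 0.0) + \
                   ((n + 1) * p[n + 1] if n < nmax else 0.0)
            new[n] = p[n] + dt * (gain - 2 * n * p[n])
        p = new
    return p

p = birth_death(1.0)
mean = sum(n * p[n] for n in range(len(p)))        # should stay at 1
second = sum(n * n * p[n] for n in range(len(p)))  # should grow as 1 + 2t
```

As a bonus, p[0] approaches the known extinction probability t/(1 + t) of the critical process, i.e. one half at t = 1.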
So let me define the generating function. Sorry, let me use the same notation as in my notes, because I just don't want to confuse myself. For reasons that are purely historical, let me call the generating function g(z), g for generating function:

g(z, t) = Σ_{n=0}^∞ p_n(t) z^n.

Here I have to be a little bit careful about the lower limit. We saw that if there are zero particles in the system, nothing happens; but the absorbing state with no particles is actually dynamically interesting, so it is useful to start the sum at n = 0. If you didn't know better and started at n = 1, you would quickly find that you need the n = 0 term, so with the wrong limits you would figure it out pretty fast.

So let's take this generating function, plug it into the master equation, and see what we can learn. I take the equation, multiply by z^n, and sum from n = 0 to infinity. On the left-hand side I just get the time derivative of the generating function itself, ġ; and in fact, let me be a little more careful: since g is a function of both z and t, this is the partial derivative ∂g/∂t. If I had more blackboard space I'd work out every term, but let's just look at one term to give a flavor of what happens: the loss term Σ_{n=0}^∞ 2n p_n z^n. What is this?
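In code, the generating function of a finite distribution is just a polynomial in z; a tiny sketch (the example numbers are hypothetical, not from the lecture):

```python
import numpy as np

def g(p, z):
    """Generating function g(z) = sum_n p_n z^n for probabilities p_0..p_N."""
    n = np.arange(len(p))
    return float(np.sum(p * z**n))

# Hypothetical example: one particle present with probability q = 0.25.
q = 0.25
p = np.array([1.0 - q, q])   # p_0 = 1 - q, p_1 = q

norm = g(p, 1.0)             # g(1) = sum_n p_n: normalization, equals 1
p_zero = g(p, 0.0)           # g(0) = p_0: probability of the empty state
```

Two handles worth remembering: g(1, t) = 1 expresses normalization, and g(0, t) = p_0(t) is exactly the empty-state probability that comes up at the end of the lecture.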
Well, if you stare at this for a second, you see that a derivative with respect to z brings down a factor of n and leaves z^{n−1}; so multiplying by z after differentiating, that is, applying z d/dz, reproduces everything in the sum. So this thing is nothing more than 2z (d/dz) Σ_{n=0}^∞ p_n z^n, which is just 2z ∂g/∂z; and it enters with a minus sign, −2z ∂g/∂z.

Let's do another term, and maybe I'll wait until the latest possible moment to erase anything. Look at the term Σ_{n=0}^∞ (n+1) p_{n+1} z^n. What can I do with this? This is the same as (d/dz) Σ_{n=0}^∞ p_{n+1} z^{n+1}: when I take d/dz, down comes the factor n+1 and I lose one power of z. And the remaining sum is basically the generating function except for the n = 0 term, which plays no role once I differentiate, because it's a constant. So this should be nothing more than ∂g/∂z. Sorry, let me be 100 percent sure before I make a fool of myself... it looks right, but I'm a little suspicious right here.
I'm not sure of myself, but let me just leave it for the moment; I'll do the other term and then I'll know for sure. So then I have another term, Σ (n−1) p_{n−1} z^n. And by the way, notice that if I sum from n = 0, it looks like there's a p_{−1} appearing in the n = 0 equation. But if I have minus one particles I can't give birth, so this term is absent from the very first equation: even though I'm formally summing from n = 0 to infinity, this term really starts at n = 1. So it's Σ_{n=1}^∞ (n−1) p_{n−1} z^n. And let's see what we can do with this.

Hmm, I feel like I've done something really stupid, but I'm not sure what. "What about this one?" Yeah... oh, yes. Thank you, thank you, thank you. You solved my problem for me. The point is that the object up there is not quite the generating function, because the power of z is shifted by one. But all you do is shift the index: define a new index m = n + 1, so the sum becomes Σ_{m=1}^∞ p_m z^m; and I can even include the m = 0 term, because it disappears when I differentiate, since it doesn't depend on z. Or I could just say that when I sum from n = 0, I simply don't include that term. "And the n = 0 term?" The n = 0 term won't contribute, because it carries a factor of n, which is zero there. So this equation is kosher as long as I don't include this term.
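For the record, the three index shifts worked out above can be collected in one place (this fills in the bookkeeping the clock ran out on; the notation is the same as on the board):

```latex
% With g(z,t) = \sum_{n\ge 0} p_n(t)\, z^n :
\sum_{n\ge 0} 2n\, p_n z^n \;=\; 2z\,\frac{\partial g}{\partial z},
\qquad
\sum_{n\ge 0} (n+1)\, p_{n+1} z^n \;=\; \frac{\partial g}{\partial z},
\qquad
\sum_{n\ge 1} (n-1)\, p_{n-1} z^n \;=\; z^2\,\frac{\partial g}{\partial z}.
```

Adding the two gain terms and subtracting the loss term gives ∂g/∂t = (1 − 2z + z²) ∂g/∂z = (z − 1)² ∂g/∂z, which matches the equation quoted at the end of the lecture.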
So, hmm, I've done something not quite right somewhere, and also it's 12:33, so let me stop here; I will finish this at the start of the lecture tomorrow. But let me tell you what the answer is, because I know what the answer is: putting the terms together gives

∂g/∂t = (z − 1)² ∂g/∂z.

I will derive this properly next time, with all the steps correct. But the point is this: it looks like a hard differential equation, but it turns out that whenever you're dealing with linear stochastic processes of this type, where there is birth, death, and change in the number of particles, introducing the generating function always, always, always gives you a first-order partial differential equation. In fact, this is nothing more than the wave equation in disguise; or, if you're more of a mathematical expert, you can solve it by the method of characteristics. In any case, there is a standard toolkit for solving these sorts of differential equations, and we'll see very quickly that this equation can be solved really simply. One gets a very simple expression for the generating function, it turns out to be easy to invert, and one gets the full probability distribution of the number of particles. That's how I'll finish this first segment of the course tomorrow.

"Before you end: what is p_0? Can we get it cheaply from the generating function at z = 0?" Yes: p_0 is the probability that there are no particles left in the system, and p_0(t) = g(z = 0, t). In fact, as we're going to see, I can pose this as a question for the audience: if I start with one particle and let the system evolve for a long time, what is the probability that the system goes extinct? Does it go extinct for sure, or does it go extinct with probability less than one?
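The extinction question is left open until next time, but one can peek numerically. This sketch (hypothetical code, not from the lecture, using the same rate convention as before: each particle branches and dies at unit rate) estimates the fraction of single-ancestor trajectories that have hit n = 0 by time t:

```python
import random

def extinct_by(t_max, seed):
    """True if a trajectory started from one particle reaches the
    absorbing state n = 0 before time t_max."""
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n > 0:
        t += rng.expovariate(2 * n)   # total event rate is 2n
        if t > t_max:
            return False
        n += 1 if rng.random() < 0.5 else -1
    return True

samples = 20000
fracs = {t: sum(extinct_by(t, s) for s in range(samples)) / samples
         for t in (1.0, 4.0, 9.0)}
```

The estimated extinction fraction keeps creeping upward as the time horizon grows, hinting at the answer; the closed form follows once the generating-function equation is solved next time.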
So p_0 is nothing more than the extinction probability, and in biological or ecological problems the extinction probability is crucial. So p_0 is really the fundamental quantity that we're typically interested in when solving these sorts of birth-death processes. That's what I will do next time, and I thank you, wherever you are, for your attention. I wish you were here, because this is impossible.