I wanted to talk a little bit about the Ito calculus today, but on looking it over I realized that it is rather formal, and it is not clear to me that it really fits into physical applications of stochastic processes; I will try to get back to it. I am trying to write it out, to say it in as simple a form as possible, and I have not quite succeeded. Since I do not want to get into a whole lot of formalism in this class, we will take a rain check on it; I will come back to it a little later, because I would like to see what is the simplest way of saying it explicitly. I mentioned a few things yesterday about the fact that it makes a difference whether you have multiplicative noise as opposed to additive noise, and it is a question of how you interpret and handle the quantity dW, the increment of a Wiener process. You certainly cannot write dW as (dW/dt) dt, because W is not differentiable; that is the root of the problem, and I am still playing with how to present this in a simple enough form. So we will take a rain check on it. Meanwhile, there is one topic of practical importance in the study of noise and stochastic processes which occurs all the time, is simple enough once you make a sufficient number of assumptions, and has, as I said, a lot of practical applications. It has to do with the following question: given a random process in time — and we are going to assume for this purpose a sufficiently smooth random process — how often does this process cross some prescribed threshold? So there is some level, and you would like to know the statistics of the level crossings of the process. That is a very, very general statement, and we will try to make it specific. So: level-crossing statistics.
So let us look again at a process x(t). I am going to assume, number one, that this is a continuous process, and secondly that it is not as irregular as Brownian motion, to which this formalism is not applicable, for reasons which will become clear. If you plot a typical realization of this process as a function of time, we will assume it is an ongoing process. We do not have to assume that it is a stationary process — that is very important, because in practice the following can happen. You might, for instance, record the rainfall, the precipitation, continuously as a function of time, and this is going to be irregular, it is going to fluctuate, and so on. And you would like to know when the precipitation per hour exceeds a certain threshold value; you would like the statistics of such events. Now this could change slowly with the seasons, so it is definitely not a stationary process. So whatever we write down will be applicable even to non-stationary processes, with some provisos; but if the process is stationary, then things become simpler, as you will see. So it is some ongoing random process x(t), some irregular curve that goes up and down. And then you ask: I would like to know the statistics of the instants when it crosses some prescribed threshold — it could even be the origin, it does not matter. So we prescribe a threshold, some value a, and I would like to know the statistics of the points where the process crosses over to the other side.
For that purpose, the immediate thing to do is to define another random process. Let us call this process u(t), equal to 0 if x is below a and equal to 1 if x is above a — that is, u(t) = θ(x(t) − a), the unit step function of x(t) − a. If I plot u schematically on the same graph, then over each interval where x is above a, u sits at unity, and it is 0 elsewhere. So the problem has become much simpler: we do not care about the variation of x itself; we only want to know whether it is above the threshold or not — in other words, when u is 0 and when it is 1. Now, what we are getting at is the statistics of these instants of time, so it is convenient to look not at u but at u̇(t): one can imagine formally differentiating this process. Of course these are all singular quantities, but you can differentiate a step function to get a delta function, so u̇(t) = ẋ(t) δ(x(t) − a): there is a δ(x(t) − a), but it comes with an ẋ(t), because you have to differentiate the argument of the step function as well. Now what does u̇ actually look like? It is going to fire whenever x(t) hits a. Suppose it hits a at some particular time t_i; then, converting the delta function in x to a delta function in time, a typical spike looks like δ(t − t_i) divided by the modulus of ẋ at that instant t_i — that is what the delta function δ(x(t) − a) does when you convert it to a delta function in t. But there is an ẋ sitting on top, and ẋ/|ẋ| is +1 if ẋ is positive and −1 if ẋ is negative.
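As a concrete illustration of this construction — my own sketch, not from the lecture — one can sample a smooth path x(t) on a grid, form the indicator u(t) = θ(x(t) − a), and read off the ±1 pulses of u̇ as the jumps of u. The particular path and threshold here are arbitrary choices:

```python
import numpy as np

# One smooth sample path on a time grid; a fixed sum of incommensurate
# sinusoids stands in for a realization of a smooth random process.
t = np.linspace(0.0, 20.0, 20001)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t + 1.0)

a = 0.3                                # prescribed threshold (arbitrary)
u = (x > a).astype(int)                # indicator process u(t) = theta(x(t) - a)

# np.diff(u) is the discrete analogue of u_dot: a +1 pulse at each
# up-crossing, a -1 pulse at each down-crossing, and 0 elsewhere.
du = np.diff(u)
up_times = t[1:][du == +1]             # instants where x crosses a going up
down_times = t[1:][du == -1]           # instants where x crosses a going down

print("up-crossings:", len(up_times), " down-crossings:", len(down_times))
```

Since u only takes the values 0 and 1, consecutive pulses necessarily alternate in sign, so the numbers of up- and down-crossings on any stretch can differ by at most one.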
So what we have is a delta function of unit strength pointing up at each up-crossing and one of unit strength in the opposite direction at each down-crossing, and so on. This process u̇ is just a pulse sequence, with weight +1 where x crosses upward and −1 where it crosses downward. The matter is now very straightforward. To find, for instance, the number of threshold crossings in a given time interval, say t1 to t2 — call it N(a; t1, t2) — you just have to count all these delta functions: integrate u̇ over t, making sure you take the modulus so that each crossing counts as +1. So N(a; t1, t2) = ∫_{t1}^{t2} dt |ẋ(t)| δ(x(t) − a), and that counts one for each crossing, because of the modulus. Now this is a random variable, because the instants t_i are random — x itself varies randomly with time. So now we can proceed to look at the statistics of this number N: over all realizations of x(t) in the given time interval t1 to t2, what is the average number of level crossings? That average, ⟨N⟩, will involve, first, an integral over t from t1 to t2, and then averages of random variables: you can see that it depends not only on the random process x but also on ẋ, and we have not assumed stationarity or anything like that.
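To make the ensemble average concrete, here is a sketch (my own construction, with arbitrary parameters) that estimates ⟨N(a; t1, t2)⟩ by generating many independent realizations of a smooth stationary process — sums of sinusoids with random frequencies and phases — and averaging the crossing counts:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 50001)      # the interval (t1, t2) = (0, 50)

def sample_path(rng, t, n_modes=40):
    """One realization of a smooth, zero-mean, stationary random process:
    a sum of sinusoids with random frequencies and phases (approximately
    Gaussian for many modes; unit variance by construction)."""
    x = np.zeros_like(t)
    omegas = rng.uniform(0.5, 2.0, n_modes)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    for w, ph in zip(omegas, phases):
        x += np.cos(w * t + ph)
    return x * np.sqrt(2.0 / n_modes)

def count_crossings(x, a):
    """Discrete analogue of N = integral of |x_dot| * delta(x - a) dt:
    count the sign changes of x - a along the sampled path."""
    u = (x > a).astype(int)
    return int(np.abs(np.diff(u)).sum())

a = 0.5
counts = [count_crossings(sample_path(rng, t), a) for _ in range(200)]
print("mean number of crossings of a =", a, ":", np.mean(counts))
```

Each realization gives a different integer N, as it must — the crossing instants are random — and the sample mean stabilizes as the number of realizations grows.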
So these probabilities will be time dependent, and what you need here is the joint distribution, or density function, not only of x but of x and ẋ together. We have looked earlier, for a diffusing particle, at the joint distribution of the position and the velocity; you need a thing like that in this general situation — some p(x, ẋ; t), because this probability density could be time dependent. You have to integrate over all values, ∫ dx ∫ dẋ from −∞ to ∞, with the factor |ẋ| and the delta function δ(x − a) inside. So the formal expression for the expected or average number of crossings of the threshold a is

⟨N(a; t1, t2)⟩ = ∫_{t1}^{t2} dt ∫_{−∞}^{∞} dẋ |ẋ| p(a, ẋ; t),

where the delta function has been used to get rid of the integration over x, leaving the density evaluated at x = a. If you set a = 0, you get the average number of zero crossings. So the matter is not quite trivial: you need this joint density, you have to set x equal to whatever the threshold value is, and then you have to do this integral over ẋ.

What would the variance of this number be like? That gets harder. The variance formally requires the mean square, ⟨N²(a; t1, t2)⟩, which at this formal level is

∫_{t1}^{t2} dt ∫_{t1}^{t2} dt′ ∫_{−∞}^{∞} dẋ₁ ∫_{−∞}^{∞} dẋ₂ |ẋ₁| |ẋ₂| p(a, ẋ₁, t; a, ẋ₂, t′),

so this two-time joint density is also required, and formally you have to do this double integral. In general you would require the moment generating function of N, which becomes fairly complicated; but that is the formal expression as it stands.

Very often you would like to know the up crossings and the down crossings separately: the statistics of the points where the process crosses upwards, and of those where it crosses downwards. So call the number of up crossings N₊(a; t1, t2), and let us find its average, which is what we are interested in in general. What should I do if I want just the up crossings? The integral over x is trivial, as before; and at an up crossing x is increasing, so ẋ is positive, the ẋ-integral runs from 0 to ∞, and there is no need for the modulus:

⟨N₊(a; t1, t2)⟩ = ∫_{t1}^{t2} dt ∫_{0}^{∞} dẋ ẋ p(a, ẋ; t).

And what about the down crossings, N₋(a; t1, t2)? I want to count the number of times the process crosses from above a to below it; clearly ẋ is negative, because x is decreasing, so the integral runs from −∞ to 0 and I keep |ẋ| — which in that range is the same as −ẋ:

⟨N₋(a; t1, t2)⟩ = ∫_{t1}^{t2} dt ∫_{−∞}^{0} dẋ |ẋ| p(a, ẋ; t).

Now, this expected number of crossings in the time interval is the mean rate of crossings integrated over the interval, and the rate may depend on time — that is the reason the t appears explicitly — because the process is not stationary in general; the rate itself might change with time. So the integrand defines the mean rate of level crossings at time t, which we could call r(a; t), and there are obvious analogous definitions for the mean rate of up crossings, the mean rate of down crossings, and so on. So if you give me the joint density in x and ẋ, then whether the process is stationary or not I can formally write down — assuming, of course, that the process is differentiable and so on — a formula for the average number of crossings, which is very straightforward.

Now, if the process should turn out to be stationary — if x is stationary, then ẋ is of course stationary too; differentiation does not change that — then these joint densities are the steady-state densities, and if you know that joint density one can actually compute what this rate is. So let us do it in the simplest case we have available: a Gaussian process, for which we actually know what the density looks like. But we need it for both x and ẋ, so we need a process in which x is stationary and ẋ is also stationary. For the normal diffusion of a free particle we know that ẋ is stationary if you use the Langevin model, but x is not. Can you think of an example, again involving one-dimensional motion, where both x and v are stationary? We need to look for a process with no long-range diffusion, because otherwise the variance of x diverges with time and x is not stationary.

The Ornstein-Uhlenbeck process is suggested — but there is a little problem with the Ornstein-Uhlenbeck process as we applied it to the velocity; that is an exercise for you, and we are going to come back to it. What we have there is the conditional probability density for the velocity; we do not have the density with the acceleration included — we have not put that in at all; for the velocity process we have not done that yet, so it is a little tricky. We are not talking here about conditional densities at all. What we need are joint probability densities in a variable and its time derivative. The Ornstein-Uhlenbeck solution for the velocity actually gave you p(v, t | v₀), which is not what we are interested in at all; what we want is a probability density in v and v̇, possibly as a function of t. Since this example has been mentioned, look at the particle which obeys the free Langevin equation, v̇ = −γ v + (√Γ/m) η(t), with η a Gaussian white noise. We could in principle try to compute the joint density of v and v̇, but remember what is going to happen: v̇ involves the white noise, which is not differentiable, not smooth — there are singularities in this problem. So we cannot talk about things in which a bare white noise of this kind appears; somehow the white noise has to be ameliorated, made mild, made harmless, and you must have a stationary distribution for both the variable and its time derivative. And even this is not enough: when we want to find the mean square and so on, we need the two-time joint distribution of v₁ at t₁ and v₂ at t₂, and that is not so trivial.

So what is the other example — an Ornstein-Uhlenbeck-like process, but not connected with the velocity? What was the other problem we did, where you had a stationary distribution in both position and velocity? We put in an external potential — the harmonic oscillator potential. For the harmonically bound particle we found that this object has a stationary distribution; in fact we found the stationary distribution. And we are not talking about the conditional density at all: we just want the probability density p(x, ẋ), which in this case is stationary — there is no t in it at all — because the system is in equilibrium, and both the position and the velocity have Gaussian distributions. What is this joint distribution of the position and the velocity for a harmonically bound particle? Each of them is a Gaussian, and you know this from equilibrium statistical mechanics: it is just the Maxwell-Boltzmann distribution, e^(−energy/k_BT), normalized. For the velocity it is (m/2πk_BT)^(1/2) e^(−mv²/2k_BT), and it is multiplied by the distribution in the position, (mω₀²/2πk_BT)^(1/2) e^(−mω₀²x²/2k_BT). That is all we need: you plug that in, with v playing the role of ẋ, and that is the end of the matter.

So let us see what this rate looks like for such a process; let us write down in general what the rate looks like in this Gaussian case. We have

p(x, ẋ) = (1/(2π σ_x σ_ẋ)) e^(−x²/2σ_x²) e^(−ẋ²/2σ_ẋ²),

where σ_x² is the variance of x and σ_ẋ² the variance of ẋ. Now let the threshold in x be some specified number a. Then the mean rate r(a) — at any time t, but it is exactly the same at all times, since the process is stationary — at which the oscillator crosses the point a on the x axis, both upwards and downwards, is

r(a) = (1/(2π σ_x σ_ẋ)) e^(−a²/2σ_x²) ∫_{−∞}^{∞} dẋ |ẋ| e^(−ẋ²/2σ_ẋ²),

because all I have to do is set x = a in the density and do this integral over ẋ. That is not difficult, because there is an ẋ sitting in the integrand: first write it as twice the integral from 0 to ∞ and get rid of the modulus; then put u = ẋ²/2σ_ẋ², so that ẋ dẋ = σ_ẋ² du, and the integral becomes 2σ_ẋ² ∫₀^∞ du e^(−u) = 2σ_ẋ². Therefore

r(a) = (σ_ẋ/(π σ_x)) e^(−a²/2σ_x²).

This is the general formula when you have a Gaussian process and both x and ẋ are stationary. Of course we could also have computed the up-crossing and down-crossing rates in this case: each would be half of this, since everything is completely symmetric between up and down. In particular, notice that the rate of zero crossings is

r(0) = σ_ẋ/(π σ_x),

since at a = 0 the exponential factor becomes unity. Now let us see whether this is physically reasonable in the case that we know. In the oscillator problem, σ_x = (k_BT/mω₀²)^(1/2) and σ_ẋ = (k_BT/m)^(1/2). So what does the ratio become? σ_ẋ/σ_x = ω₀, and r(0) = ω₀/π. What is the time period of the unperturbed oscillator? 2π/ω₀. So r(0) = 2/T: on the average, once every half period the oscillator crosses the axis x = 0 — which is of course exactly what it does. So this whole rigmarole, in this trivial case, has given the right answer, which checks the formula; but you now have an explicit formula for the rate at which any value a is crossed.

How come this oscillator does not have any fixed amplitude — why is there a probability of finding it arbitrarily far from the origin? The variance of this oscillator in the stationary state is finite, but there is a probability of finding it anywhere. Why is that? It is in contact with a heat bath at temperature T, so its energy is not fixed, and as you know the energy determines the amplitude. The energy can be arbitrarily large; of course it becomes less and less probable that the particle has moved far away in one direction or the other — it is going to stay near the origin most of the time; it is Gaussian, after all, peaked about the origin — but the fact is that there is a non-zero probability of its being arbitrarily far from the origin, and r(a) is the rate at which that threshold is crossed. It gets much smaller as a becomes significant, for a given temperature; and as the temperature is increased, σ_x² also increases, so a can become correspondingly larger — exactly as you would expect. So even the zero crossings are taken care of in this problem. The difficulty, of course, is that you need these variances to be finite.

You could now ask a slightly more complicated question: what about the maxima and minima of this random variable? We have said it has nice realizations — the sample paths are nice and smooth and so on — so what about the maxima and minima? Let us see what happens there; it is related to this problem. So: the distribution of extrema. We have in mind a sample path, and I want the statistics of these extremal points.
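Before moving on, the crossing rate derived above can be checked numerically. The following sketch (my own, with arbitrary parameters) builds one long realization of a stationary, approximately Gaussian process out of many random-phase sinusoids, measures its crossing rate of a level a, and compares with the formula r(a) = (σ_ẋ/πσ_x) e^(−a²/2σ_x²):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000.0
t = np.linspace(0.0, T, 100001)

# Long realization: many random-phase sinusoids -> approximately Gaussian
# and stationary, with sigma_x = 1 by construction and sigma_xdot fixed
# by the drawn frequencies.
n_modes = 200
omegas = rng.uniform(0.5, 2.0, n_modes)
phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
x = np.zeros_like(t)
for w, ph in zip(omegas, phases):
    x += np.cos(w * t + ph)
x *= np.sqrt(2.0 / n_modes)

sigma_x = 1.0                          # variance fixed by the normalization
sigma_xdot = np.sqrt(np.mean(omegas ** 2))

def measured_rate(x, t, a):
    """Crossings of level a per unit time, counted as sign changes of x - a."""
    u = (x > a).astype(int)
    return np.abs(np.diff(u)).sum() / (t[-1] - t[0])

for a in (0.0, 1.0):
    rice = (sigma_xdot / (np.pi * sigma_x)) * np.exp(-a ** 2 / (2 * sigma_x ** 2))
    print(f"a = {a}: measured rate {measured_rate(x, t, a):.4f}, formula {rice:.4f}")
```

With these parameters the measured rates agree closely with the formula; the residual discrepancy comes from the finite observation time and the finitely many modes.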
So I do the same thing as before. From a minimum to a maximum the slope is positive, and the slope is 0 at the extrema; so I now construct u(t) = θ(ẋ(t)), the step function of the derivative. In the segments between a minimum and a maximum, where ẋ is positive, this process is 1; in the segments between a maximum and a minimum, where ẋ is negative, it is 0. And as before, if I calculate u̇, the jumps up correspond to unit delta functions with weight +1 and the jumps down to weight −1: so the maxima correspond to weight −1 and the minima to weight +1, and

u̇(t) = ẍ(t) δ(ẋ(t)).

So now I can ask about the mean number of these extrema in some time interval. Taking the modulus so that each one counts as +1, the number of extrema in (t1, t2) is ∫_{t1}^{t2} dt |ẍ(t)| δ(ẋ(t)), and the mean number over all realizations is the expectation of this. What should I integrate over? There is definitely an ẍ and a δ(ẋ) sitting there, so there are integrals over all these variables, ∫ dx ∫ dẋ ∫ dẍ from −∞ to ∞, because the process itself is x(t) and I have to integrate over the joint density of x, ẋ and ẍ — at time t, if the process is not stationary. In any case, I need the joint density of not only the variable but also its first and second derivatives; so this makes sense only if the process is twice differentiable in some precise sense — generally, mean-square differentiable; that is how differentiability is put into a stochastic process. You need sufficient smoothness to do this. As usual, you can get rid of the ẋ integral using the delta function, and this becomes

∫_{t1}^{t2} dt ∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dẍ |ẍ| p(x, 0, ẍ; t).

This full integral gives you the mean number, and the inner integrals the rate, at which the extrema occur; and as before, if you keep track of the sign of ẍ, you can tell the maxima from the minima — all you have to do is ask when ẍ is negative (maxima) or positive (minima) and separate the two. What if I say I am not interested in all the maxima and minima, but only in those above some threshold a? Then put in another theta function, θ(x − a), which says x is bigger than a, and go through the same procedure as before; that is a small extension of these formulas. You can always impose an arbitrary threshold and say: I am not interested in extrema of small values, only in those beyond a certain level.

Now what can we say about this for a Gaussian process? Suppose I tell you that you have a stationary Gaussian process in which x, ẋ and ẍ are all Gaussian, all stationary random variables. Then what does this p look like? We wrote it down for the case of two Gaussian variables; what we need to do is generalize that to three, and there is a little wrinkle that appears here which you have to be careful about. I have not discussed multivariate Gaussians in any detail in this class, but what happens is the following. Let me write the general formula down. The moment you have a multivariate Gaussian, you need to define a covariance matrix; call it Γ. And I need a little notation: let ξ stand for the column vector (x, ẋ, ẍ). Then, when x, ẋ, ẍ are all Gaussian and stationary,

p(x, ẋ, ẍ) = (2π)^(−3/2) (det Γ)^(−1/2) e^(−(1/2) ξᵀ Γ⁻¹ ξ),

with the usual, standard factor of one-half sitting in the exponent. The covariance matrix Γ has σ_x², σ_ẋ², σ_ẍ² in the diagonal elements; the (x, ẋ) and (ẋ, ẍ) entries are 0; but the (x, ẍ) entry is −σ_ẋ²:

Γ = ( σ_x²    0      −σ_ẋ²
      0       σ_ẋ²    0
     −σ_ẋ²    0       σ_ẍ² ).

That −σ_ẋ² is the crucial thing that sits there. You need to find the inverse of Γ and plug it in here — any multivariate Gaussian has this form, as you know. You plug this into the expression above and do the integral; in the stationary case the time integration is trivial — it just gives (t2 − t1) times a rate — and you can compute the rate at which the extrema occur.

Now, the real interest comes when you have a noise of some kind about which you do not know much — perhaps you can make a Gaussian assumption, but you do not know very much more — and we would like to interpret what all these things are really trying to tell you. For this we need the concept of the power spectrum of the noise. I have not talked about this at all yet, but this is now the time to introduce it, because we are going to talk about noise processes of various kinds — not necessarily Markovian, not necessarily described by master equations — for a while now.

So let me first define the power spectrum, or power spectral density, in the simplest case: a one-dimensional process ξ(t) which is stationary. You know that a considerable amount of information is carried by the autocorrelation function ⟨ξ(0)ξ(t)⟩; you can write down various properties of it, and we expect that as t becomes very large this decays to 0 from some finite mean-square value. The power spectral density is defined very simply as the Fourier transform of this:

S_ξ(ω) = (1/2π) ∫_{−∞}^{∞} dt e^(iωt) ⟨ξ(0)ξ(t)⟩.

We need to choose a Fourier transform convention, so I have chosen one. Now, in physical terms, what does it do? If ξ is some kind of noise or random process, then once you measure it, S_ξ(ω) measures the strength of the fluctuations in a frequency window: it tells you the intensity between ω and ω + dω. That is what this quantity does. By the way, all the rest — the plus sign in the exponent, the 1/2π, and so on — are matters of convention; engineers normally define this as twice the Fourier transform without the 1/2π factor. But we will stick to this one convention.

Now what is the import of this whole business? It will turn out — and we will see this explicitly — that when you have some noise driving another variable, which also becomes noisy as a consequence, then the power spectra of the input and output variables are related to each other; in fact the response of the system is measured by what is called a transfer function between the two, which relates the power spectra of the input and output variables. And there is a theorem called the Wiener-Khinchin theorem, which I am going to talk about and which we will exploit, that quantifies this relationship between input and output. So this is basically what the power spectrum does.

Now, in the simplest cases we have looked at, we can write down what this quantity is, and then we will come back and look at its significance in greater detail. First, the Gaussian white noise that we had: a stationary, zero-mean process η with ⟨η(t)η(t′)⟩ = δ(t − t′). (I am assuming the mean is 0 here; otherwise one uses the autocorrelation of the deviation from the mean.) The correlation is just a delta function, so the Fourier transform just gives 1, and S_η(ω) = 1/2π, a constant. That is another way of defining white noise: when you Fourier transform, there is equal intensity at all frequencies. Of course it is unphysical — certainly some energy is involved in producing this noise, and it is clear that you cannot have arbitrarily high frequencies at the same intensity — so it is a mathematical idealization.

By the way, what is S_v(ω) in the case of the Langevin particle? Remember that this process was defined by v̇ = −γv + (√Γ/m) η(t). What we need to do here is to put in the value of the correlation function of the stationary velocity process. It is exponentially correlated: the velocity correlation time is γ⁻¹, it was a Markov process, and the correlation died down in one correlation time; and its strength is the mean square value, k_BT/m. So

S_v(ω) = (1/2π) (k_BT/m) ∫_{−∞}^{∞} dt e^(iωt) e^(−γ|t|),

with the modulus of t, since the correlation dies down on both sides. That is a trivial integral to do. Break e^(iωt) into cosine and sine: the sine part vanishes, because the rest of the integrand is an even function of t; so I am left with twice the integral from 0 to ∞ of cos(ωt) e^(−γt). And what is that integral? I am sure you have done ∫₀^∞ e^(−ax) cos(bx) dx and the corresponding sine integral — these are elementary integrals; what you do is write e^(−ax + ibx), which is trivial to integrate, and then take real and imaginary parts. Is the answer for the cosine a/(a² + b²) or b/(a² + b²)? An even simpler way to decide: put b = 0. Then the cosine integrand just becomes e^(−ax), giving 1/a, while the sine integral becomes 0; so the sine must have the b on top, and the cosine integral is a/(a² + b²). So here we get γ/(γ² + ω²), and

S_v(ω) = (k_BT/m) (γ/π) 1/(γ² + ω²).

The crucial part is the shape: it is a Lorentzian. So now you see what is happening: the fact that there is inertia in this problem tells you that you cannot shake this particle at arbitrarily high frequencies with equal amplitude. What does the power spectrum do as a function of ω? When ω becomes large, it dies down like 1/ω², whereas the noise that drives it has equal power at all frequencies. Because there is inertia in the problem, and damping, the response does not follow the stimulus — it is sluggish — and dies down like 1/ω² for large ω. We will see more about this; the more interesting cases are those where the power spectrum dies down like a power of ω that lies between about 0.8 and 1.2 — that is called 1/f noise, and we will say a little bit about that. So let me stop here today, but we will take up the idea of the power spectrum and what it does in greater detail.
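As a closing check on the Lorentzian spectrum just derived, here is a small sketch (the values of γ and k_BT/m are arbitrary choices of mine) that evaluates the cosine transform of the exponential velocity correlation numerically and compares it with (k_BT/m)(γ/π)/(γ² + ω²):

```python
import numpy as np

gamma = 0.7                       # damping rate (arbitrary choice)
kT_over_m = 1.3                   # mean square velocity k_B T / m (arbitrary)

# Grid for the half-line integral; e^{-gamma t} is negligible beyond t = 60.
ts = np.linspace(0.0, 60.0, 600001)
dt = ts[1] - ts[0]

def S_v(omega):
    """S(omega) = (1/pi) * integral_0^inf C(t) cos(omega t) dt for the even
    correlation function C(t) = (k_B T / m) e^{-gamma |t|}."""
    f = kT_over_m * np.exp(-gamma * ts) * np.cos(omega * ts)
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return (dt * (f.sum() - 0.5 * (f[0] + f[-1]))) / np.pi

for omega in (0.0, 0.5, 2.0):
    lorentzian = (kT_over_m / np.pi) * gamma / (gamma ** 2 + omega ** 2)
    print(f"omega = {omega}: numerical {S_v(omega):.6f}, Lorentzian {lorentzian:.6f}")
```

At large ω the value falls off like 1/ω², as discussed: the white-noise input has flat power, but the inertia of the particle filters it into a Lorentzian.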