Last time I went over the stationary phase approximation, basically an exercise in mathematics: it's an approximation for an integral with a rapidly oscillating integrand. Here we're thinking of the parameter kappa as being small, which is what makes the integrand oscillate rapidly. I did the one-dimensional case in some detail, but for the multi-dimensional case I was pretty sketchy, so I want to start today by filling in more of the details of this multi-dimensional result for the stationary phase approximation.

The variable x here is shorthand for x_1 through x_n, so it's an n-dimensional integral, as you can see. The basic idea of the approximation is that the integral is dominated by the regions of the integration variable x, the regions of x-space, around the points where the phase phi is stationary. A stationary phase point is defined simply as one where the gradient of phi is zero; we call it x bar. And so the basic idea is to expand the exponent around the stationary phase point, keep terms through second order, and turn the integral into a Gaussian integral.

To do it in somewhat more detail, let me write it like this. Take a point x near the stationary phase point and write it as x bar plus a correction y. Then we expand the phase phi(x) in powers of y, which we think of as small. It comes out to phi(x bar), the value at the stationary phase point; plus a first correction, which is the sum over i of y_i times the derivative of phi with respect to x_i, evaluated at x bar; plus a second correction, which, doing the multi-dimensional Taylor series, is one half the sum over i, j of y_i y_j times the second derivative of phi with respect to x_i and x_j, again evaluated at x bar; and then there are cubic and higher terms, which I won't write down. So this is an expansion of the phase in the neighborhood of the stationary phase point. The first-order term vanishes, because that's the definition of the stationary phase point: the gradient of phi is zero there. And that's why I go on to the second-order term; it's the first non-vanishing term after the value of phi at the stationary phase point itself.

So now I take this phi(x) and plug it into the exponent here. You see the first term is just going to give a constant, and the second term goes into the integral. If I call this integral I, just to give it a name, the result is that I equals an integral d^n x; the phi(x bar) just gives e to the i phi(x bar) over kappa; and then there's the correction from second order. To make this a little easier to write, let me take the second derivative matrix of the phase, evaluated at the stationary phase point, and define it to be M_ij, just to save writing. Likewise, let me write the entire second-order expression as one half of y transpose times M times y, where y is viewed as an n-dimensional vector sandwiched around the matrix M; that's just what this whole expression means. So if I do this, there's another factor in this integral, which is e to the i over 2 kappa times y transpose M y. And this is now an approximation, because we've truncated the expansion of the exponent at second order. But you can see that as far as the first factor is concerned, phi evaluated at the stationary phase point is just a constant; it can be taken out of the integral.
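Collecting the steps just described in one place (this is only a summary of what is on the board, with M denoting the second derivative matrix at the stationary point):

```latex
\begin{gather*}
I=\int d^{n}x\; e^{\,i\varphi(x)/\kappa},\qquad
x=\bar{x}+y,\qquad \nabla\varphi(\bar{x})=0,\qquad
M_{ij}\equiv\frac{\partial^{2}\varphi}{\partial x_{i}\,\partial x_{j}}\bigg|_{\bar{x}},\\[4pt]
\varphi(x)=\varphi(\bar{x})+\tfrac{1}{2}\,y^{T}M\,y+O(y^{3})
\quad\Longrightarrow\quad
I\;\approx\;e^{\,i\varphi(\bar{x})/\kappa}\int d^{n}y\; e^{\,\frac{i}{2\kappa}\,y^{T}M\,y}.
\end{gather*}
```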
And as far as the rest of it is concerned, it's a Gaussian integral in n dimensions, with the matrix M in the exponent. The exponent is purely imaginary, but it's basically a Gaussian integral. The volume element d^n x is the same thing as d^n y, because x and y just differ by a shift, and shifting the origin doesn't change the volume element. So the essential integral becomes the integral over d^n y of this Gaussian exponent.

Now, to proceed with analyzing this, let y be equal to R times z, where y is the old set of coordinates, z is a new one, and R is an orthogonal matrix. If we make this substitution, the exponent, which is y transpose times our matrix M times y, becomes z transpose times R transpose times M times R times z: the new coordinates z sandwiched around a new matrix, R transpose M R. Now M is a symmetric matrix, in fact a real symmetric matrix, because it's just the matrix of second derivatives, so we can choose this orthogonal matrix R to diagonalize it. Let's do that, and call the result Lambda, which is diagonal; Lambda contains the eigenvalues of the original matrix M. Let's also note something else: the volume element d^n y is the same thing as d^n z, because the orthogonal matrix that occurs in the coordinate transformation has a determinant whose absolute value is 1.

And so, writing down the remaining integral: first of all there is the constant factor e to the i phi(x bar) over kappa, and then the remaining integral is the integral d^n z of e to the i over 2 kappa times the quadratic form, which now turns into the sum over j of the eigenvalues lambda_j times z_j squared. The point of this is that the resulting integral over z factorizes into a bunch of one-dimensional Gaussian integrals, and in effect reduces the multi-dimensional case to the one-dimensional case.

Now I'm sort of short on room here because I've got so much on the board, so let me do this remaining integral up on this board. It turns into a product of one-dimensional integrals. So our I is equal to the prefactor e to the i phi(x bar) over kappa, the phase evaluated at the stationary phase point, times the remaining integral, which is a product over j of one-dimensional integrals. And each of those one-dimensional integrals, as I said (I derived the one-dimensional stationary phase approximation last time), turns into e to the i pi over 4 times the sign of the eigenvalue lambda_j, times the square root of 2 pi kappa divided by the absolute value of lambda_j. That's the result of doing the integral.

Now, when we take the product over j, over all the variables, the product of the lambda_j's down here in the denominator is the product of the eigenvalues. That's, of course, the same as the determinant of the matrix. So the product of the lambda_j's is the determinant of this matrix here, which is the same thing as the determinant of the second derivative matrix, evaluated at the stationary phase point. And that explains the determinant under the square root that you see down here in the answer; that's where that comes from.
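Written out, the change of variables and the factorization into one-dimensional Gaussians read (again just a summary of the board work):

```latex
\begin{gather*}
y=Rz,\qquad R^{T}MR=\Lambda=\mathrm{diag}(\lambda_{1},\dots,\lambda_{n}),\qquad d^{n}y=d^{n}z,\\[4pt]
\int d^{n}y\; e^{\,\frac{i}{2\kappa}\,y^{T}M\,y}
=\prod_{j=1}^{n}\int dz_{j}\; e^{\,\frac{i}{2\kappa}\,\lambda_{j}z_{j}^{2}}
=\prod_{j=1}^{n}\sqrt{\frac{2\pi\kappa}{|\lambda_{j}|}}\;
e^{\,i\frac{\pi}{4}\,\mathrm{sgn}\,\lambda_{j}}.
\end{gather*}
```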
As for the product of the square roots of 2 pi kappa: since there are n of them, one for each variable, this gives (2 pi kappa) to the n over 2, and that explains this factor here under the square root, right there. And as for the product of the signs of the eigenvalues: the eigenvalues are either positive or negative (we will assume this matrix is non-singular, so none of them is zero), and the product of the factors e to the i pi over 4 times the sign of lambda_j is just e to the i pi over 4 times the number of positive eigenvalues minus the number of negative eigenvalues. That's exactly what this factor here is: e to the i nu pi over 4, where nu is defined as nu_plus minus nu_minus, and nu_plus and nu_minus are the numbers of positive and negative eigenvalues of the second derivative matrix. So that's the derivation of the stationary phase approximation for multi-dimensional integrals. I'll put a box around that; that's the main result.

Now, last time I was in the process of applying this to the path integral. The general idea is that in the discretized version of the path integral, the sum which occurs in the exponent is a discretized version of an action integral as it occurs in classical mechanics, the integral of the Lagrangian along the path. And so this suggests that the path integral is connected with the Lagrangian formulation of classical mechanics. In fact, you know from Hamilton's principle that the paths that make the action stationary are the classical orbits, and so this suggests that the path integral is connected in a semi-classical way with Hamilton's principle in classical mechanics. So if we think of h bar as being small, in order to study the semi-classical limit of the propagator, it suggests that the h bar here should be identified with the kappa in this general formula; that's the small parameter. And the integrand is purely a phase factor, so it's a rapidly oscillating integrand when h bar is small.

So the first thing we do, in trying to fit the path integral to this general formula you see at the top of the board, is to replace kappa by h bar. And then as far as the phase function is concerned, it's everything else in this exponent except for the h bar: it's epsilon times that sum. That's the phi that you see up there, and it's in fact the discretized approximation to the action integral; in other words, as N goes to infinity, it turns into the action integral along the path. All right. And then, for reference, I'm taking derivatives of phi with respect to the x's.

Oh, here's another thing to note: in this general formula I said the x's go from 1 to lowercase n, but in the path integral the x's go from 1 to capital N minus 1. Remember that x_0 and x_N (which is the same as x) are just fixed parameters of the propagator; they're not variables of integration. Only the intermediate points are integrated over, and those are 1 through N minus 1. So another thing I should put up here is that we also take n to N minus 1 when we convert this general formula into the application to the path integral. All right. So anyway, here are the derivatives, which I just worked out; I'm taking derivatives of the phase function which appears in the discretized version of the path integral. Now I'm applying the stationary phase approximation, and the steps involved are basically these.
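So the boxed result, restated compactly:

```latex
\begin{gather*}
\int d^{n}x\; e^{\,i\varphi(x)/\kappa}
\;\approx\;
e^{\,i\nu\pi/4}\,
\frac{(2\pi\kappa)^{n/2}}{\sqrt{\,\bigl|\det M\bigr|\,}}\;
e^{\,i\varphi(\bar{x})/\kappa},
\qquad
\nu=\nu_{+}-\nu_{-},
\end{gather*}
```

with nu_plus and nu_minus the numbers of positive and negative eigenvalues of M, the second derivative matrix of phi at the stationary phase point. In the path integral application one then sets kappa equal to h bar and n equal to N minus 1.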
The first thing is, you need to find the stationary phase points, the x bars indicated here. Those are the roots of the equation gradient of phi equals 0, the places where the phase function is stationary; that's why they're called stationary phase points. So in the application to the path integral, all we need to do is take these first derivatives, which are written out here in general form, and set them to zero. If we take the derivative of phi with respect to x_i and set it to zero (and I showed this last time), it means the stuff in the square brackets goes to zero. This is actually a discretized version of Newton's law. In other words, it turns into m times (x_{i+1} minus twice x_i plus x_{i-1}) over epsilon squared, which is the discretized version of the acceleration, equals minus V'(x_i), which is the force. And so in the limit in which N goes to infinity, this turns into a differential equation, which we can write this way: m times d squared x(tau) / d tau squared equals minus V'(x(tau)). This is just Newton's law for the path x(tau). So it's a classical path: in the limit in which N goes to infinity, the stationary phase points are the classical paths. They're "points" because they're points of path space. And to follow the notation down here, where I was using a bar for the stationary phase point, perhaps I should do that here also: I should put a bar on this x to distinguish the paths which are solutions of Newton's equations, the classical paths, from any old path that can be put into the action functional. In any case, this is just a repetition of the usual derivation that stationary action leads to the classical path, as is usually done in classical mechanics courses.

All right, so when we determine the stationary phase points, that is to say paths, they are the classical paths. That's the first step. And that already allows us to interpret the exponent which occurs here in the stationary phase approximation. What is the meaning of the exponent? It's the phase evaluated on the classical path. So let's go over here and write down some of this. Our propagator K(x, x0, t) is going to be equal to a whole lot of stuff that has to do with all of these prefactors here, which I'll come back to in a minute; but just looking at the final phase, it's going to be a bunch of stuff on this board times e to the i over h bar times the phase function evaluated on the classical path, which I'm calling x bar. That's the same thing as A[x bar(tau)], the action evaluated on the classical path. But the phase function evaluated on the classical path is also called Hamilton's principal function; this is the definition of Hamilton's principal function. For simplicity let's assume there's only one classical path connecting the given endpoints in the given time. In that case Hamilton's principal function depends on the endpoints and the time, as I've indicated here; it's usually called S. It's just the integral of the Lagrangian, now along the classical path, not along some arbitrary path. And then, as I mentioned last time, Hamilton showed that this function satisfies certain differential equations. If you differentiate with respect to the parameters x, x0, and t that S depends on, you get the final momentum when you differentiate with respect to x, minus the initial momentum with respect to x0, and minus the energy, or Hamiltonian, with respect to time.
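In equations, the two statements just made (the stationary phase condition turning into Newton's law, and the phase on the classical path turning into Hamilton's principal function with its generating relations) are:

```latex
\begin{gather*}
\frac{\partial\varphi}{\partial x_{i}}=0
\;\;\Longrightarrow\;\;
m\,\frac{x_{i+1}-2x_{i}+x_{i-1}}{\epsilon^{2}}=-V'(x_{i})
\;\;\xrightarrow[\;N\to\infty\;]{}\;\;
m\,\frac{d^{2}\bar{x}(\tau)}{d\tau^{2}}=-V'\!\bigl(\bar{x}(\tau)\bigr),\\[6pt]
S(x,x_{0},t)=\int_{0}^{t}L\bigl(\bar{x},\dot{\bar{x}}\bigr)\,d\tau,
\qquad
\frac{\partial S}{\partial x}=p,\qquad
\frac{\partial S}{\partial x_{0}}=-p_{0},\qquad
\frac{\partial S}{\partial t}=-H.
\end{gather*}
```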
I didn't prove these, but they are well-known results, called the generating function relations, which are well known in classical mechanics. In any case, the result of this is that the phase of the propagator can now be written as e to the i over h bar times S(x, x0, t), Hamilton's principal function of the endpoints and the elapsed time. And now what we need to worry about is the rest of this stuff, which is connected with all of these prefactors here, and phases too: there's a constant, there's a real prefactor, and then there's a phase.

Now, there are a lot of trees here in this forest, so I don't want to lose sight of the forest before I get into all the trees. The basic intuitive idea is that, as I say, the integral is dominated by the regions of x-space that surround the stationary phase points. Here x-space is going to be path space, which means the path integral is dominated by the regions of path space that surround the classical paths. You might say it's classical paths plus small fluctuations around them; that's the language it's common to use for this. How far do you have to move away from the classical path? The idea is you move far enough away that there's enough dephasing that the oscillations of the integrand start killing things off, because they become just plus and minus signs that cancel each other out. There's some central region around the stationary phase point in which the integrand is in phase, the same phase as it has at the stationary phase point, and that's where the contribution to the integral is coming from. So what all these prefactors do is tell you how far you can move away from the stationary phase point (in the application to the path integral, how far you can move away from the classical path) and still have the action function be approximately in phase with the value it had on the classical path. In other words, this takes into account the fluctuations around the classical path.

Well, all right, so that's the intuitive idea, and now I'll get into more of the details. It's clear that in order to examine this, we're going to need the second derivatives of phi, and the value written here just comes from differentiating this form of the first derivative once more: the second derivative comes out as m over epsilon times something in the square brackets here. To give the thing in the square brackets a name (since I have a little room here, let me drill down a little bit), let me call it Q_ij; it's a matrix Q. In effect, the second derivative of phi is equal to a constant, m over epsilon, times Q_ij. Now, we'll worry about Q_ij in a minute, but right now let me concentrate on the m over epsilon.

So, I'm sorry to keep erasing things; I wish I had more boards here. In between the K and the phase factor there are going to be prefactors, and that's what I want to work on right now. So let's take a look at what those prefactors are. In the first place, there is this normalization constant here, which comes from the discretized version of the path integral. This has the imaginary unit downstairs; a square root of i is interpreted as e to the i pi over 4, and we have one over the square root of i for each factor, raised to the Nth power. So let me take care of the prefactors first.
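For reference, the two ingredients just named, the second derivative of the phase and the normalization constant of the discretized path integral, are (as written on the board):

```latex
\begin{gather*}
\frac{\partial^{2}\varphi}{\partial x_{i}\,\partial x_{j}}=\frac{m}{\epsilon}\,Q_{ij},
\qquad
\Bigl(\frac{m}{2\pi i\epsilon\hbar}\Bigr)^{N/2}
= e^{-iN\pi/4}\Bigl(\frac{m}{2\pi\epsilon\hbar}\Bigr)^{N/2},
\qquad \epsilon=\frac{t}{N}.
\end{gather*}
```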
So the prefactors are going to be equal to e to the minus i N pi over 4, which takes care of that phase, and then I'll copy everything else: m over 2 pi epsilon h bar, to the N over 2. That's just another way of rewriting this prefactor right here. But in addition, from doing the stationary phase approximation, we're going to get another phase here, e to the i nu pi over 4, where nu is the difference between the numbers of positive and negative eigenvalues of this second derivative matrix, times this whole square root. Notice that the square root has the determinant of the second derivative of phi in it, but the second derivative of phi is m over epsilon times the matrix Q, and the kappa here is the same as h bar. So (I've covered that up, unfortunately) this is going to turn into the square root of (2 pi h bar) to the N minus 1 power, divided by the determinant of the second derivative matrix. But that is an (N minus 1) by (N minus 1) dimensional matrix, so we get a factor of (m over epsilon) to the N minus 1 power, and then what's left over is the determinant of Q, which I'm going to put absolute value signs on. So those are the prefactors.

The prefactor that came from the path integral itself is right here, and you can see it diverges rather badly as N gets large, because it's got the epsilon in the denominator. The epsilon, the delta t, is t over N, so it goes to zero, while the exponent is getting big; it doesn't just go to infinity like a power of N, because the exponent itself grows with N, so it's a really bad divergence. But it has to be compensated, and it will be compensated, by the rest of the integral, as you'll see. In fact, if you look over here at the factor that came from the stationary phase approximation, I've got 2 pi h bar divided by m over epsilon, which is essentially the reciprocal of those factors; I should say it's to the (N minus 1) over 2 power, because of the square root. Anyway, it combines with that to leave just a single one-half power. And so this turns into this. Oh, I also forgot the e to the i nu pi over 4 that's supposed to be in the stationary phase approximation. So these prefactors now turn into e to the i (nu minus N) pi over 4 (that's the phase from here and from here), and then for the m's and the 2 pi epsilon h bar's: taking that and that and this, I get just m over 2 pi epsilon h bar to a single one-half power, so most of the divergence is already gone, and then 1 over the square root of the absolute value of the determinant of Q.

Okay, so this is what's left of the prefactors; it's starting to simplify some. Now as far as the determinant of Q is concerned, here's Q, it's the stuff in the square brackets here. This is an (N minus 1) by (N minus 1) dimensional matrix, and in the limit that N goes to infinity it's going to turn into an infinite-dimensional matrix. So we have to find the determinant of an infinite-dimensional matrix, which sounds like a tall order. In fact, in dealing with path integrals, these matrices, these effective volume factors in path space, are often the most difficult part. I used to give a homework problem on how to evaluate this thing, but I won't inflict it on you. What I'll do instead is just try to outline in general terms how one derives it, and how it can ultimately be evaluated in simple terms.
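The bookkeeping of the prefactors just described, collected in one line:

```latex
\begin{gather*}
\Bigl(\frac{m}{2\pi i\epsilon\hbar}\Bigr)^{N/2}
\times
e^{\,i\nu\pi/4}\,
\frac{(2\pi\hbar)^{(N-1)/2}}{(m/\epsilon)^{(N-1)/2}\,\sqrt{|\det Q|}}
\;=\;
e^{\,i(\nu-N)\pi/4}\,
\Bigl(\frac{m}{2\pi\epsilon\hbar}\Bigr)^{1/2}
\frac{1}{\sqrt{|\det Q|}}\,.
\end{gather*}
```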
So again, I hope you don't lose sight of the forest for the trees, because the real meaning of all of this is just to give you an indication of how far you can move away from the classical path in path space before the dephasing kills the contributions.

So, let me now work on this determinant of Q. The first thing to notice is that the answer ought to be finite as capital N goes to infinity, and there's still a remaining epsilon in here, so you should expect that in the limit N goes to infinity the determinant of Q must go as 1 over epsilon; it's going to have to do that to cancel this epsilon here. In fact, I'll show that this is actually true. It's also true that as N goes to infinity, the relevant part of nu minus N approaches a nice finite value, which it has to, in order to give a definite phase for the answer.

All right. Now, let's just take this matrix Q and write it out. Here is Q_ij; it's the stuff in the square brackets in the middle of the board. Look at the first terms: these are Kronecker deltas. The first term is just 2's on the diagonal, and these terms here with minus signs are minus 1's on the upper and lower off-diagonal elements. So if I just take those first terms and write Q out as a big matrix (and it is indeed big), it's got 2's on the diagonal going down like this, all the way down to the 2 at the lower right, and it's got minus 1's on the off-diagonals, both the upper and the lower one, going down to the minus 1 there. So this is actually a tridiagonal matrix, with 0's everywhere else, and that helps a lot in finding the determinant. This tridiagonal matrix that I've written down so far, if that were all there was to it, would be a discretized version of the second derivative operator; that's what it turns into in the limit, like our discretized acceleration. In any case, there's an additional correction here involving the potential. To save writing, call c_k equal to epsilon squared times V''(x_k) over m; this multiplies the diagonal elements, delta_ij, so I just subtract c_k from the diagonal elements. So I get 2 minus c_1 here, 2 minus c_2 there, down to 2 minus c_{N-1}.

All right. So how do we find the determinant of that thing? Actually, to find the determinant in closed form for finite N would be kind of hard, but in the limit N goes to infinity it's actually simple. It works like this. Let's take the first k by k block of this thing, and define D_k to be the determinant of that first k by k block. And remember, I'm just going to outline how one does this, not go into all the details. If you expand the determinant of that block by cofactors, which is pretty easy to do because the rows and columns have a lot of zeros in them, you see it's pretty easy to derive a recursion relation for the D_k's, and it ends up looking like this: m times (D_{k+1} minus twice D_k plus D_{k-1}) divided by epsilon squared equals minus V'' evaluated at x_{k+1} times D_k. You can derive this for yourself in a few minutes; it's easy to do. But you see, on the left-hand side there's again a discretized version of an acceleration, and on the right-hand side you've got the second derivative of the potential; and this is actually going to carry an x bar, because we're only interested in evaluating this on the classical path.
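Here is a minimal numerical sketch of this recursion, just as a check that it reproduces the determinant of the tridiagonal matrix. The potential V''(x), the values standing in for the classical path x bar, and the numbers m, epsilon, N are all placeholders chosen only to have something concrete to run; they are not values from the lecture.

```python
import numpy as np

# Check the minor recursion for the tridiagonal matrix Q described above.
# m, eps, N, Vpp and xbar are placeholder choices, not values from the lecture.
m, eps, N = 1.0, 0.01, 200
Vpp = lambda x: np.cos(x)             # stand-in for V''(x)
xbar = np.linspace(0.0, 1.0, N + 1)   # stand-in for the classical path points xbar_k

c = eps**2 * Vpp(xbar[1:N]) / m       # c_k = eps^2 V''(xbar_k) / m,  k = 1 .. N-1
Q = (np.diag(2.0 - c)
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1))   # (N-1) x (N-1) tridiagonal matrix

# Recursion for the leading principal minors D_k (from cofactor expansion):
#   m (D_{k+1} - 2 D_k + D_{k-1}) / eps^2 = -V''(xbar_{k+1}) D_k
D_prev, D_curr = 1.0, 2.0 - c[0]      # D_0 = 1 (empty block), D_1 = 2 - c_1
for k in range(1, N - 1):
    D_prev, D_curr = D_curr, (2.0 - c[k]) * D_curr - D_prev

print("recursion     :", D_curr)            # D_{N-1} = det Q
print("direct det    :", np.linalg.det(Q))  # same quantity, computed directly
print("eps * det Q   :", eps * D_curr)      # the combination that stays finite
```

The two determinant values are the same quantity computed two ways, so they should agree to rounding error; the last line is the combination epsilon times det Q that, as argued above, stays finite as N grows.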
The stationary phase path, in other words. And so this carries an x bar; it's assumed to be a known function, of k in the discrete case, but in the continuous case it becomes a definite function of tau, where tau is the continuous variable that runs between the initial time 0 and the final time t. In any case, this turns into a differential equation in the limit N goes to infinity. But before I do that, let me take into account the fact that the determinant of Q is going to go as 1 over epsilon; we expect that. So let's make a simple change of notation, D_k equals 1 over epsilon times f_k, just to give another name, f_k, and scale out the epsilon, that's all. If we do this then, because the recursion is linear in the D's, we get the same equation: m times (f_{k+1} minus twice f_k plus f_{k-1}) divided by epsilon squared equals minus V''(x bar_{k+1}) times f_k. Same thing, same equation. And then in the limit N goes to infinity this turns into m times d squared f / d tau squared, where f is now a function of the continuous variable tau, equals minus V'' evaluated at x bar(tau), the classical orbit, times f(tau). So the recursion that defines the determinant becomes, in the limit, a differential equation, and one that actually has an interesting interpretation in classical mechanics regarding nearby orbits, as I'll explain in just a moment.

But before I do that, let me add that one can also work out what the boundary conditions on f are. You have f(0) equal to 0, and you have f'(0), or let's call it f dot of 0, the time derivative at 0, equal to 1. Those are the conditions that come out. You get these conditions by just looking at the first couple of blocks, the determinants of the first one or two blocks as epsilon goes to 0; you take the limit and you get the value and the derivative. All right, so this is a definite differential equation with definite boundary conditions, and so there's a definite solution; and f(t) is then interpreted as epsilon times the determinant of this Q, in the limit that N goes to infinity.

Now let's examine this equation here. It turns out this equation is closely related to the problem of nearby orbits in classical mechanics. Here's a place where such a problem arises. You send a spacecraft to Mars: you fire the rockets, the spacecraft is launched onto a ballistic trajectory, and after it gets out into space the rocket engines are turned off. Then you make measurements of the trajectory and you find that it's not actually right, because there was some bad weather while passing through the atmosphere, say, and so you need to make some corrections to the trajectory. This is typical, as a matter of fact. So you've got a trajectory; let me draw a trajectory in phase space. There's an initial point x0 and p0, and the trajectory ends up at a final point x(t) and p(t) at some final time t, evolving like this; say this end is time 0 and that end is time t. So this is your spacecraft after it's been launched, and you find that the final position is not what you want; you need a different final position. So this gives rise to the following problem in classical mechanics.
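So, to summarize the limit just taken: the determinant is obtained by solving an initial value problem,

```latex
\begin{gather*}
D_{k}=\frac{1}{\epsilon}\,f_{k},\qquad
m\,\frac{d^{2}f(\tau)}{d\tau^{2}}=-V''\!\bigl(\bar{x}(\tau)\bigr)\,f(\tau),\qquad
f(0)=0,\quad \dot f(0)=1,\\[4pt]
f(t)=\lim_{N\to\infty}\;\epsilon\,\det Q .
\end{gather*}
```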
Suppose I make a small perturbation of the initial conditions, call it delta x0 and delta p0, like this. This will give rise to a perturbed orbit, which stays near the original orbit, and at the final time it has perturbations of the final position and momentum, delta x(t) and delta p(t). Now, what we might want to do in this Mars problem is the reverse: we know what delta x(t) should be, because we want to correct for the error, and we'd like to find out what perturbation of the initial conditions we need to impose in order to get that delta x(t). Well, in general terms, solving the equations of motion gives the final position x as a function of the initial position, the initial momentum, and the time, and likewise the final momentum is also a function of the initial position, the initial momentum, and the time. This is just the general solution of the equations of motion. And so, by differentiating this, using the chain rule, what you get is that delta x and delta p, the deviations in the orbit at the final time, are given by the derivatives of the final position with respect to the initial position, the final position with respect to the initial momentum, the final momentum with respect to the initial position, and the final momentum with respect to the initial momentum; there's a 2 by 2 matrix like this, and it multiplies delta x0 and delta p0. This is just using the chain rule to get the small perturbation of the final conditions as a function of the small perturbation of the initial conditions. All right, so to solve this problem, we need to know this matrix.

All right, now I'm running out of space, so let's see if I can do it here in the space that's left. Let's go back to this equation. The x bar here was our classical orbit, and so it satisfies m times d squared x bar(tau) / d tau squared equals minus V'(x bar(tau)). This is just Newton's law for the classical orbit. Now let's consider a nearby classical orbit: let's take x bar and replace it by x bar plus delta x. We're interested here in an orbit which is still a classical orbit (it's still supposed to satisfy the equations of motion), but which is close to the original classical orbit. So if I plug this in here and linearize the equation, that is, work to first order in delta x, what we get is m times d squared delta x(tau) / d tau squared equals the right-hand side expanded to first order, which is going to be minus V''(x bar(tau)) times delta x(tau). So this is the equation for the deviations delta x. And notice it's exactly the same equation as the differential equation for the determinant of the matrix, with those two initial conditions.

So what is the solution? The solution is given by a matrix equation like that one, in terms of the initial conditions. So the idea is this: I want to take delta x0 equal to zero; we're going to set that equal to zero. As for delta p0, that's the same thing as the mass times delta x dot at time 0. I want to identify the f here with delta x, because it satisfies the same equation; f dot of 0 is one, so the delta p0 is going to be just m. And so the result is that f at the final time, and the mass times f dot at the final time (which is like the final momentum), are equal to this same matrix, dx/dx0, dx/dp0, dp/dx0, dp/dp0, applied to 0 and m, like that.
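As equations, the linearized orbit equation and the matrix statement just written are:

```latex
\begin{gather*}
m\,\frac{d^{2}\,\delta x(\tau)}{d\tau^{2}}=-V''\!\bigl(\bar{x}(\tau)\bigr)\,\delta x(\tau),
\qquad
\begin{pmatrix}\delta x(t)\\ \delta p(t)\end{pmatrix}
=
\begin{pmatrix}
\dfrac{\partial x}{\partial x_{0}} & \dfrac{\partial x}{\partial p_{0}}\\[10pt]
\dfrac{\partial p}{\partial x_{0}} & \dfrac{\partial p}{\partial p_{0}}
\end{pmatrix}
\begin{pmatrix}\delta x_{0}\\ \delta p_{0}\end{pmatrix},
\qquad
\begin{pmatrix}f(t)\\ m\dot f(t)\end{pmatrix}
=
\begin{pmatrix}
\dfrac{\partial x}{\partial x_{0}} & \dfrac{\partial x}{\partial p_{0}}\\[10pt]
\dfrac{\partial p}{\partial x_{0}} & \dfrac{\partial p}{\partial p_{0}}
\end{pmatrix}
\begin{pmatrix}0\\ m\end{pmatrix}.
\end{gather*}
```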
And all we're really interested in is the f(t) part. So this says that f(t) is equal to this component of the matrix here, the partial of x with respect to p0, times the mass m. And that's the solution we want: the solution of this differential equation with those boundary conditions is directly related to the derivative of the final position with respect to the initial momentum, multiplied by the mass. So let me erase this now, because I don't need it anymore, and try to finish up here. So f(t), to copy it again, is equal to the partial of x with respect to p0, times m. Now the partial of x with respect to p0 is the derivative, from the equations I wrote, of the final x with respect to the initial momentum, holding the time and the initial position x0 fixed. Let me be explicit about that and write the x0 here: holding x0 fixed.

Now let me transform this. It turns out this can be written in terms of Hamilton's principal function, and it works like this. Here's the mass, and we write the derivative upside down: it's the partial of p0 with respect to x, at constant x0, with an inverse on it. Turning the derivative upside down and putting an inverse on it is okay here. Now, again, I need three boards at the same time, which I don't have. But here's Hamilton's principal function; it's a function of the initial position, the final position, and the time. And here are Hamilton's generating function relations; the one we need is that the derivative of S with respect to x0 is equal to minus p0, where S is the function of x, x0, t. And the result (this is the main idea, I think) is that this can also be written as minus m times the second derivative of S with respect to x and x0, to the minus 1 power. And this implies that... gosh, what have I done here? I've probably erased what I need. The determinant of Q was going to be 1 over epsilon times f. And so the absolute value of the determinant of Q, which is the determinant that's needed in the main formula, is m over epsilon times the absolute value of the second derivative of S with respect to x0 and x, to the minus 1 power. And this is the limit, as N goes to infinity, of the determinant of Q. And where was that determinant appearing? In the prefactor, because it's right there. And so you see, very nicely, the m over epsilon that appears there cancels the m over epsilon over there, so the result is finite, and the determinant of this infinite-dimensional matrix turns into just a second derivative of Hamilton's principal function.

So let me bring this down. I no longer need this over here, so let me erase it. And so, dealing with the prefactors, this thing turns into the following (I'll get to the phases in a minute). As far as these square roots are concerned, what's left is just 1 over the square root of 2 pi h bar, and then this determinant of Q turns into the second derivative of Hamilton's principal function with respect to the initial and final positions, to the one-half power. It was to the minus 1, so it was in the denominator, and I've turned it over into the numerator, with the square root still there.

The phases of this part here can be analyzed this way. nu is equal to nu_plus minus nu_minus, where nu_plus and nu_minus are the numbers of positive and negative eigenvalues of this matrix Q, which in the limit is turning into an infinite-dimensional matrix. So one of these numbers, nu_plus or nu_minus, has to be infinite.
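In equations, the chain of substitutions just made for the real part of the prefactor is:

```latex
\begin{gather*}
f(t)=m\,\frac{\partial x}{\partial p_{0}}\bigg|_{x_{0},\,t}
=m\left(\frac{\partial p_{0}}{\partial x}\bigg|_{x_{0},\,t}\right)^{-1}
=-\,m\left(\frac{\partial^{2}S}{\partial x\,\partial x_{0}}\right)^{-1},
\qquad
|\det Q|\;\xrightarrow[\;N\to\infty\;]{}\;\frac{|f(t)|}{\epsilon}
=\frac{m}{\epsilon}\left|\frac{\partial^{2}S}{\partial x\,\partial x_{0}}\right|^{-1},\\[6pt]
\Bigl(\frac{m}{2\pi\epsilon\hbar}\Bigr)^{1/2}\frac{1}{\sqrt{|\det Q|}}
\;\longrightarrow\;
\left(\frac{1}{2\pi\hbar}\left|\frac{\partial^{2}S}{\partial x\,\partial x_{0}}\right|\right)^{1/2}.
\end{gather*}
```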
One of them is going to approach infinity. On the other hand, the sum of the numbers of positive and negative eigenvalues is the total number of eigenvalues of the matrix, which is equal to N minus 1. Now I want nu minus N, so let me subtract that from that, and I get: nu minus N plus 1, the nu_plus cancels, and you get minus twice nu_minus. Or, if you allow me to move that 1 over to the other side: nu minus N is minus twice nu_minus, minus 1. So the minus 1 gives me an e to the minus i pi over 4; let me bring that down into the numerator here, and it actually puts an i down there under the square root. And then for the nu_minus, let me do this: let me just give it another name. Let's say mu is the same thing as nu_minus; it's easier to write. This is the number of negative eigenvalues. And it's multiplied by 2, and the 2 times pi over 4 gives pi over 2.

Okay. So I don't expect you to follow all these details; if you had unlimited time to study, which you don't, you could easily follow them. But I just want to give you at least an outline of how this is all derived. In any case, the result is the following. It is that the propagator K(x, x0, t) in the stationary phase approximation is equal to e to the minus i mu pi over 2, divided by the square root of 2 pi i h bar, times the absolute value of the second derivative of Hamilton's principal function with respect to the initial and final positions, to the one-half power; and then it's e to the i over h bar times the principal function S(x, x0, t). And this is the stationary phase approximation to the propagator. It provides the principal link between classical mechanics and time-dependent quantum mechanics.

All right. Now, about this mu: what is this mu? The mu is the number of negative eigenvalues of that matrix Q, which as N goes to infinity is becoming an infinite-dimensional matrix. In the limit that N goes to infinity it turns into the operator that I spoke of earlier, in the last lecture, that's connected with the second variation of the action; call it V hat. Perhaps you remember that the second variation of the action could be written this way, as delta x sandwiched around an operator V hat, where, if we think of delta x as being a function of tau, with tau running from 0 up to t, then V hat is equal to minus m over 2 times d squared / d tau squared, minus one half of V'' evaluated at x(tau); this is really x bar(tau), the classical orbit. So this has to do with fluctuations around the classical orbit and whether they cause the action to increase or decrease. And so this mu which occurs here is equal to the number of negative eigenvalues of V hat.
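Collected in one place (the overall factor of one half in V hat is a normalization convention I am inferring from the free-particle case below; it does not affect the count of negative eigenvalues):

```latex
\begin{gather*}
K(x,x_{0},t)\;\approx\;
e^{-i\mu\pi/2}
\left(\frac{1}{2\pi i\hbar}\left|\frac{\partial^{2}S}{\partial x\,\partial x_{0}}\right|\right)^{1/2}
e^{\,\frac{i}{\hbar}\,S(x,x_{0},t)},
\qquad
\hat{V}=-\frac{1}{2}\Bigl[\,m\,\frac{d^{2}}{d\tau^{2}}+V''\!\bigl(\bar{x}(\tau)\bigr)\Bigr],
\end{gather*}
```

with mu the number of negative eigenvalues of V hat.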
Perhaps you'll recall that if V hat has no negative eigenvalues, so that all the eigenvalues are positive, then this quadratic expression, this expectation value, is positive definite; it means that V hat is a positive definite operator, and therefore it means that if you make any perturbation around the classical path, it only increases the action. If that's the case, then the action really is a minimum along the classical path, which is what the books tend to say. But if V hat has negative eigenvalues, then the classical action is not a minimum, and the number of negative eigenvalues is this mu here, which appears as a phase in the quantum propagator. In classical mechanics nobody cares whether it's a maximum or a minimum, because you don't care about the second variation of the action; all you want in classical mechanics is to get the equations of motion, which are the Euler-Lagrange equations, and those come from just the first variation. But in quantum mechanics you need the second variation to get the phase right. Now, many people in quantum mechanics blow off phases; they say, oh, I don't care about that. Well, if you take that point of view, then you ignore this; it's kind of complicated to compute anyway.

All right. Anyway, this is the formula, and it is called the Van Vleck formula. Van Vleck didn't write it in exactly this form; this is a more modern point of view on his formula, which dates back to the 1920s, actually.

All right. Now, I don't expect you to follow all the details of that derivation, but that's more or less how it works. What I will do, however, is show you how to use this formula, and probably the first example to look at is just the case of a free particle. So let's work that out; this is really our first actual evaluation of a path integral, which we're doing in the stationary phase approximation.
So take the case of a free particle. Most of this calculation is classical, because the S is the action along the classical path, the integral of the Lagrangian along the path; so let's do that. The Lagrangian for a free particle is m over 2 times x dot squared. Now, we're thinking of going from the initial position x0 to the final position x, and we do this in time t; the classical path is of course a straight line with constant velocity. And so the action S, which is the integral of L d tau from 0 to t, is easy to evaluate in this case, because the Lagrangian is the same as the kinetic energy, which is constant on the classical path. So S is just equal to the Lagrangian times the elapsed time; that's all there is to it. That's the same thing as m over 2 times x dot squared, times the elapsed time. However, Hamilton's principal function is supposed to be expressed as a function of the initial position, the final position, and the time, and here we've got it expressed in terms of the velocity. Well, we get rid of the velocity by saying the velocity is equal to (x minus x0) over t; that's really of course the average velocity, but for a free particle it's the same as the velocity, which is constant. Putting that in, this turns into S equals m over 2 times (x minus x0) squared over t, for the free particle.

Now, this propagator, let me call it K0, where the subscript 0 means free particle, we worked out by using exact wave mechanics a couple of lectures ago, and the answer is that it's m divided by 2 pi i h bar t, the square root of all of that, times e to the i over h bar (we write it this way) times m (x minus x0) squared over 2t. What you see now is that this is Hamilton's principal function; this is it, right here. The integral of the Lagrangian over the classical path is exactly the phase that comes out of doing the exact wave mechanics. In fact, this really is the e to the i over h bar S(x, x0, t) which appears in the Van Vleck formula; it's the same as that one right there.

Now what about the prefactors here? To get them, since we've already got S, we just have to differentiate it twice, with respect to x and x0. But before we do that, as long as we're differentiating S, why don't we verify Hamilton's generating function relations. Let me do that here in this space. For the free particle, as I said, S is m over 2 times (x minus x0) squared over t. If we differentiate S with respect to x, you can easily see what you get: you get m times (x minus x0) over t. But (x minus x0) over t is the velocity, so it's the mass times the velocity, and it is therefore equal to the momentum at the final endpoint, which is just what Hamilton says. Next let's take the derivative of S with respect to x0, the initial position; now there's a minus sign that comes in, minus m (x minus x0) over t, which is equal to minus m v, which is minus the momentum at the initial point. And then finally, if I differentiate with respect to time, dS/dt, I'll put a minus sign on it here, so it should be plus the Hamiltonian; you can see what that does. We differentiate the 1 over t and get minus m (x minus x0) squared over 2 t squared, so minus dS/dt is m over 2 times v squared, which is the energy of the particle, which is the same as the Hamiltonian. So all of Hamilton's generating function relations are verified, live, in this simple example. However, what we need is the second derivative, which is right here. So let's compute the second derivative of S with respect to x and x0: differentiate this dS/dx with respect to x0, which is easy to do, and it becomes minus m over t.
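A quick symbolic check of these four derivatives (a sketch using sympy; the symbol v below is just shorthand for (x minus x0)/t, the constant velocity of the straight-line path):

```python
import sympy as sp

# Free-particle Hamilton principal function S = m (x - x0)^2 / (2 t),
# checked against Hamilton's generating-function relations.
m, x, x0, t = sp.symbols('m x x0 t', positive=True)
S = m * (x - x0)**2 / (2 * t)
v = (x - x0) / t                     # velocity of the classical straight-line path

print(sp.simplify(sp.diff(S, x) - m * v))          # dS/dx - p       -> 0
print(sp.simplify(sp.diff(S, x0) + m * v))         # dS/dx0 + p0     -> 0
print(sp.simplify(-sp.diff(S, t) - m * v**2 / 2))  # -dS/dt - E      -> 0
print(sp.diff(S, x, x0))                           # d2S/dx dx0 = -m/t
```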
All right. And so, for the case of the free particle (let's see, it's over here on the other board), this factor here turns into the square root of m over t, and if we combine it with the square root of 2 pi i h bar down here, you see we get exactly this prefactor, the square root of m over 2 pi i h bar t. The only thing that's left is this mu here, which has to do with the number of negative eigenvalues of the second variation. Well, the answer had better be 0: if mu is 0, it means there are no negative eigenvalues, and for the free particle the action really is a minimum on the classical orbit. But how can we see that? In this case there's no potential, so the operator is just this minus m over 2 times d squared / d tau squared, and in this case delta x sandwiched around V hat is therefore minus m over 2 times the integral from 0 to t of delta x times the second derivative of delta x, d tau. But if we integrate this by parts (and remember, delta x vanishes at the endpoints), this becomes plus m over 2 times the integral of (d delta x / d tau) squared, and this, as you see, is positive, or at worst equal to 0. And so the result is that the operator V hat is positive definite, and therefore it doesn't have any negative eigenvalues, so mu is 0, and therefore the action really is a minimum. You could go further; it's not that hard to find the actual eigenfunctions and the actual eigenvalues; but this is the easier argument for showing that it's positive definite.

So, all right, that's the evaluation for the free particle, and as you see it agrees with the exact answer. Let me just make one more comment, which is that the answer is not merely approximate, it is exact, for the free particle, and the main reason for that is that the free-particle Lagrangian doesn't involve the positions at all. The Van Vleck formula is a Gaussian integration over second-order fluctuations around the classical orbit; it expands out to second order. But if the Lagrangian is at most second order in the positions and velocities, this expansion is exact, and so this infinite-dimensional Gaussian integral we just did has to give the right answer. More generally, we get the right answer whenever the Lagrangian is a quadratic function of the positions and velocities. This would include harmonic oscillators, free particles, particles in uniform gravitational fields, and it also includes particles in uniform magnetic fields as well. Those are all cases for which the Van Vleck formula gives you the exact answer for the propagator.
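In equations, the free-particle check just described reads:

```latex
\begin{gather*}
\langle\delta x|\hat{V}|\delta x\rangle
=-\frac{m}{2}\int_{0}^{t}\delta x\,\frac{d^{2}\delta x}{d\tau^{2}}\,d\tau
=+\frac{m}{2}\int_{0}^{t}\Bigl(\frac{d\,\delta x}{d\tau}\Bigr)^{2}d\tau\;\ge\;0
\qquad\bigl(\delta x(0)=\delta x(t)=0\bigr),\\[6pt]
\mu=0,\qquad
\frac{\partial^{2}S}{\partial x\,\partial x_{0}}=-\frac{m}{t},\qquad
K_{0}(x,x_{0},t)=\sqrt{\frac{m}{2\pi i\hbar t}}\;
e^{\,\frac{i}{\hbar}\,\frac{m(x-x_{0})^{2}}{2t}}\,.
\end{gather*}
```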