Last time we derived the discretized version of the path integral, which I'll circle here, put in a box; it's represented as a limit as N goes to infinity. That integral is an expression for the propagator, otherwise called K, which is the matrix element in position space of the unitary time-evolution operator. Think of x0 as an initial position and x as a final position; the time ranges from 0 to a final time which is just called t here — in other words, t is the last time. The epsilon that appears in here can be thought of as a delta t: it's the fixed time interval divided by capital N, a parameter which is allowed to go to infinity. So epsilon is a kind of delta t which goes to zero, splitting up the finite time interval into a large number of very small increments. And for each one of those, there's an insertion of a resolution of the identity, which is where these integrals come from. Last time we also explained how this integral, in its discretized version, can be interpreted as an integral over path space; in the discretized version these are discretized paths. But the exponent that appears here, formally at least, is a Riemann sum, an approximation to a Riemann integral, and we can write it in a more compact form as an action A[x(τ)]. Tau here is a time-like variable that just runs between 0 and t; I'm using the plain t to stand for the fixed final time, so tau is the variable time that runs between the initial and final times like this. And the exponent, formally at least in the limit that N goes to infinity, takes the form of a Riemann integral, which is what appears in A[x(τ)].
It is in fact the integral of the Lagrangian over tau between the two time limits along the path x(τ) — the integral of L dt, basically, or L dτ in the more abbreviated language. All right, so I think that takes us up to where we were. What I'm really presenting here is two versions of the path integral: the discretized version, with its explicit limit, and a more compact version, which is easier to remember and to write down. The D[x(τ)] that appears here is just notation for the, quote unquote, volume element in path space; it really represents the product of all these dx's. We don't have any integrals over x0 or xN, of course, the initial and final x's, because those are fixed parameters of the path integral; x0 and xN are not variables of integration. That's a way of saying the path has fixed endpoints at the two fixed times, zero and t. But in between those fixed endpoints, the path can take on any position — any position at all. All right. Now, the fact that the exponent of the path integral appears as an integral of a Lagrangian over time — something I'm here calling the action functional — is striking, because the action functional plays a prominent role in classical mechanics, in particular in the Lagrangian formulation of classical mechanics, and in Hamilton's principle. I assume that you've seen some of this in your undergraduate course in classical mechanics, at least in sketchy form; I don't think they usually go into it very carefully. So I'm going to say a few more words about it, go into a little more detail than you've probably seen before, in order ultimately to connect it with the path integral. So what I'd like to do in the next few minutes is put the path integral on hold and just review some aspects of the Lagrangian formulation of classical mechanics, and the role that the action functional plays in it.
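To make the Riemann-sum reading of the discretized exponent concrete, here's a small numerical sketch of my own (not from the lecture — the sample potential V(x) = x²/2, the endpoints, and the grid sizes are all illustrative choices): for a straight-line path, the discretized sum Σ_j ε[(m/2)((x_{j+1}−x_j)/ε)² − V(x_j)] approaches the Riemann integral of L dτ as N grows.

```python
import numpy as np

def discretized_action(path, eps, m=1.0, V=lambda x: 0.0):
    """Riemann-sum exponent: sum_j eps * [ (m/2)((x_{j+1}-x_j)/eps)^2 - V(x_j) ]."""
    x = np.asarray(path, dtype=float)
    v = np.diff(x) / eps                  # finite-difference velocity on each slice
    return float(np.sum(eps * (0.5 * m * v**2 - V(x[:-1]))))

# Straight-line path from x0=0 to x1=2 in total time t=1, with m=1 and
# a sample potential V(x) = x^2/2.  The exact Riemann integral is
# ∫₀¹ [ (1/2)(2)² − (1/2)(2τ)² ] dτ = 2 − 2/3 = 4/3.
x0, x1, t = 0.0, 2.0, 1.0
V = lambda x: 0.5 * x**2

for N in (10, 100, 1000):
    eps = t / N
    path = np.linspace(x0, x1, N + 1)     # N slices, N+1 points
    print(N, discretized_action(path, eps, V=V))   # approaches 4/3 as N grows
```

For a free particle (V = 0) on a straight line the sum is exact for any N, since the finite-difference velocity is already constant; the potential term is what converges like a left-endpoint Riemann sum.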
So now let's go over to classical mechanics. The story of classical mechanics begins with a space of paths, which I can sketch in a spacetime diagram like the one I drew last time for the path integral. Let's draw x and t axes like this, with two times, call them t0 and t1, an initial and a final time, and two positions x0 and x1. This is a one-dimensional problem, by the way, just for simplicity; we'll work in 1D. By drawing lines for the initial and final positions and the initial and final times, we get a rectangle in the spacetime plane. The lower left corner is (x0, t0), which we think of as the initial position and initial time, and the upper right corner is (x1, t1), which is the final position and time. Now we define a path as a function x(t) that passes through these given positions at the given times. So it satisfies the conditions that x evaluated at t0 is equal to x0, and x evaluated at t1 is equal to x1. On the diagram, a path is a curve that comes through here like this; I'll call it x(t). It's a curve that passes from the initial to the final position and time, which are just fixed — these four numbers x0, t0, x1, t1 are fixed forever in this problem. So that's what we mean by a path here. Even though this is classical mechanics, there's no requirement that this path satisfy the classical equations of motion, Newton's laws, F = ma. It can be any path, although we will require it to be continuous, and actually smooth, because that's what you usually want in classical mechanics. All right, so this includes lots of crazy paths that have nothing whatsoever to do with the classical motion. They may go off in the wrong direction, they may linger at some point for a long time, and then swing over to the final position to get there at just the right time.
There are all kinds of crazy paths in there. Now for each one of these paths we define a functional — a number — called the action functional, and it's defined as the integral from t0 to t1 of the Lagrangian function evaluated on the path, L(x(t), ẋ(t)) dt; it's the integral over time of the Lagrangian taken along the path. The Lagrangian is an ordinary function of x and ẋ. It may also depend explicitly on time, but for simplicity let's just say x and ẋ here. In the problems we're interested in, this is (m/2) ẋ² minus V(x); we're just talking about simple one-dimensional kinetic-energy, potential-energy problems — it's kinetic minus potential energy — so that's the Lagrangian we'll use. Anyway, this thing is called the action functional, and it's defined on all paths, whether or not they're physical from the classical standpoint. All right, now there is, however, something that characterizes the physical paths. To say that x(t) is physical — and I should stop here, because this is called Hamilton's principle, so I'll write that out: Hamilton's principle, which was enunciated about a hundred years before quantum mechanics — to say that x(t) is physical, meaning that it satisfies Newton's laws, is to say that the functional derivative of the action with respect to the path x(t) is equal to zero. Now, you may not have seen functional derivatives before, but I'm assuming you've seen at least a sketchy account of Hamilton's principle in your classical mechanics course. So let me elaborate on what this means, and I'll show you how it works out. Let's take a path x(t), which may not be a physical path, and let's consider another path which is nearby. So I'll sketch it here. It's supposed to go through the same endpoints at the same times as the original path. So let's call the nearby path x(t) + δx(t). It's important to understand this notation δx(t).
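Hamilton's principle, as just stated, can be illustrated numerically. In this sketch of my own (the harmonic-oscillator Lagrangian, the particular paths, and the grid sizes are all illustrative choices, not from the lecture), perturbing a classical path by η·δx changes the action only at order η², while perturbing a non-classical path changes it at order η.

```python
import numpy as np

m, omega = 1.0, 1.0
t0, t1 = 0.0, 1.0
t = np.linspace(t0, t1, 20001)
dt = t[1] - t[0]

def action(x):
    """A[x] = ∫ [(m/2) ẋ² − (m ω²/2) x²] dt, via finite differences + trapezoid rule."""
    xdot = np.gradient(x, dt)
    L = 0.5 * m * xdot**2 - 0.5 * m * omega**2 * x**2
    return float(np.sum(0.5 * (L[:-1] + L[1:])) * dt)

# Classical harmonic-oscillator path with x(t0)=0, x(t1)=1: x_cl = sin(ωt)/sin(ωt1).
x_cl = np.sin(omega * t) / np.sin(omega * t1)
# A non-classical path with the same endpoints:
x_bad = t**3

# A variation δx that vanishes at both endpoints, as required:
bump = np.sin(np.pi * (t - t0) / (t1 - t0))

for eta in (1e-2, 1e-3):
    dA_cl = action(x_cl + eta * bump) - action(x_cl)
    dA_bad = action(x_bad + eta * bump) - action(x_bad)
    print(eta, dA_cl, dA_bad)   # dA_cl shrinks like η² (stationary), dA_bad like η
```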
δx(t) is merely the difference between two paths that satisfy the same endpoint and end-time conditions. The delta is not an operator; δx is a function of time. However, the delta is a reminder that it's supposed to be small, because we're talking about two paths that are near one another. So this function δx(t) has to satisfy the conditions that δx(t0) and δx(t1) are equal to zero, because otherwise the modified path wouldn't pass through the given endpoints at the given times; those are the conditions imposed on the difference. Now, let's take this action, and instead of evaluating it on the original path, let's evaluate it on the modified path. So what should we get? Well, the action along the original path, when we write it out, is the integral from t0 to t1 of the mass over two times ẋ(t), quantity squared, minus the potential energy evaluated at x(t), and that whole thing integrated over time. This is just what we mean by integrating the Lagrangian: we take our path x(t), differentiate it to get ẋ, plug into this formula, integrate over time because it's a function of time, a number comes out, and that's what we call the action along the path. But remember, it does not have to be a physical path. All right, now let's evaluate the action on the modified path. This is the integral from t0 to t1 of the mass over two times (ẋ + δẋ), quantity squared, minus the potential evaluated at x + δx — those are both functions of time — and the whole thing integrated over time, dt. All right: the action along the modified path. Now let's expand out the right-hand side, assuming that δx is small. So we can expand this out.
You see, the quadratic term expands into ẋ² plus twice ẋ δẋ plus δẋ². And the V here expands into V(x) plus V′(x) times δx plus one-half V″(x) times δx², plus higher-order terms. So we expand out in powers of δx, and then let's collect the terms by order in δx. Let's just call the terms T0 plus T1 plus T2 plus dot dot dot, where T stands for "term" — that's the mnemonic here. These are the terms that occur at different orders in powers of δx, which is like a perturbation around the original path. And it's pretty easy to collect formulas for what these various terms look like. T0 — what is that? That's just ignoring δx altogether, so it's the integral from t0 to t1 of m over two ẋ² minus V(x), integrated dt, and that of course is the original action along the original path x(t). As for T1, it's the integral from t0 to t1 of the first-order terms: there's a 2 ẋ δẋ times m over two, so I get the mass times ẋ times δẋ, and then for the potential energy I get minus V′(x) times δx; that whole thing is integrated dt. And then T2, the second-order term, is the integral from t0 to t1 of the quadratic terms — the δẋ² here and the δx² there — so it's m over two times δẋ, quantity squared, minus one-half V″(x) times δx², dt. Et cetera: we'll just ignore the higher-order terms. Now, remember the context: we're thinking of x(t) as a given path, a definite function of time — just pick some path and draw it — and δx is a small variation around it.
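The decomposition into T0, T1, T2 can be checked numerically. In this illustrative sketch of mine (the quartic potential and the particular paths are arbitrary choices, not from the lecture), the exact change A[x+δx] − A[x] agrees with T1 + T2 up to third order in δx.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 40001)
dt = t[1] - t[0]
m = 1.0
V   = lambda x: 0.25 * x**4     # an arbitrary non-quadratic potential
Vp  = lambda x: x**3            # V'
Vpp = lambda x: 3.0 * x**2      # V''

def integrate(f):
    return float(np.sum(0.5 * (f[:-1] + f[1:])) * dt)   # trapezoid rule

def ddt(f):
    return np.gradient(f, dt)

x  = t**2                        # an arbitrary (non-classical) reference path
dx = 0.01 * np.sin(np.pi * t)    # small variation vanishing at the endpoints

A = lambda p: integrate(0.5 * m * ddt(p)**2 - V(p))

T1 = integrate(m * ddt(x) * ddt(dx) - Vp(x) * dx)            # first order in δx
T2 = integrate(0.5 * m * ddt(dx)**2 - 0.5 * Vpp(x) * dx**2)  # second order in δx

full = A(x + dx) - A(x)
print(full, T1 + T2)   # agree up to O(δx³)
```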
So the x that appears here is the original path, and by plugging it in, these things — V(x), V′(x), V″(x) and so on — just become functions of time which get integrated. All right, anyway, those are the terms. All I've done is taken the action along an arbitrary, usually non-physical path and expanded it out to first order and second order in the variations around the path. Now, let's take a look at T1, which is the first-order correction right here. Looking particularly at the term ẋ δẋ, what I want to do is integrate by parts: that means I'll integrate the δẋ, differentiate the ẋ, change the sign, and pick up a boundary term. And so if you do this, T1 becomes equal to the mass times ẋ times δx, evaluated between t0 and t1, plus the remaining integral from t0 to t1, where with the change of sign it's minus the mass times ẍ times δx; and then the potential energy term is the same as before, minus V′(x) times δx, all integrated dt. All right? Now, however, δx vanishes at the endpoints — that's so the modified path has the same endpoint and end-time conditions as the original path — and so this boundary term goes to zero, because δx is zero at both the upper and lower limits. As far as the remaining integral is concerned, there's a common factor of δx, which I'll take out, and so it turns into the integral from t0 to t1 of minus the quantity m ẍ plus V′(x), multiplied by δx — which is a function of time, remember — times dt. And now you can see Hamilton's principle emerging, because this first-order term vanishes if the original path is a physical path — a path which satisfies Newton's laws. Newton's law says that the mass times the acceleration — and this should have been a minus sign here.
So the mass times the acceleration is the force, which is minus the derivative of the potential — that's Newton. And so if the original path we started with was a physical path, then this T1 term vanishes. Conversely, if this T1 term vanishes for all possible choices of δx — in other words, if we have a path such that when you make variations around it, the action suffers no change at first order — then that path is physical, because if this integral vanishes for all choices of δx, then the integrand has to vanish. That's what Hamilton's principle is, and this notation here is just a notation for what I said in words. In fact, this quantity here — minus the quantity m ẍ plus V′(x) — actually is the functional derivative δA/δx(t); that's just the definition of the functional derivative. It's that thing. So Hamilton's principle is equivalent to Newton's laws. All right. That's how the functional derivative works out in the case of this kinetic-minus-potential Lagrangian. For a general Lagrangian, it's minus d/dt of the partial of L with respect to ẋ, plus the partial of L with respect to x, and that gets set to zero. These are the famous Euler–Lagrange equations, which you've all seen, I should guess. — Quick question: in evaluating T1, what was the reason the first boundary term vanished? — Here? Yes. Because we have two paths. We're starting with one path x(t) in our path space that passes through the given endpoints at the given times, which are just parameters of this problem: x0, t0, x1, t1. Those are fixed. Now we want to take a nearby path, but it's supposed to satisfy the same boundary conditions — that's just the rules of the game; we're only going to look at paths satisfying the boundary conditions in position and time. And since x(t) itself satisfies the boundary conditions, the δx has to go to zero at the boundary.
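The statement that the functional derivative −(m ẍ + V′(x)) vanishes pointwise along a physical path can also be checked directly. Here's a small sketch of my own (harmonic oscillator with illustrative parameters, finite-difference derivatives; the interior trim just avoids one-sided edge errors):

```python
import numpy as np

m, omega = 1.0, 2.0
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

# Harmonic oscillator: V(x) = (m ω²/2) x², so V'(x) = m ω² x.
# A classical solution of m ẍ = −V'(x):
x_cl = 0.7 * np.cos(omega * t) + 0.3 * np.sin(omega * t)
# A path that is NOT a solution:
x_bad = t**2

def el_residual(x):
    """Pointwise Euler–Lagrange residual  −(m ẍ + V'(x))  =  δA/δx(t)."""
    xdot = np.gradient(x, dt)
    xddot = np.gradient(xdot, dt)
    return -(m * xddot + m * omega**2 * x)

print(np.max(np.abs(el_residual(x_cl)[10:-10])))   # ~0, up to finite-difference error
print(np.max(np.abs(el_residual(x_bad)[10:-10])))  # order one: not a physical path
```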
That's what leads to the vanishing of this boundary term here. Okay? Right. Now, this is an illustration of what's meant by the principle. Let me say it in more pictorial language: the classical paths are characterized by the fact that if you make first-order variations in the path around a classically allowed path, the action suffers only second-order variations. That is to say, T1 is zero, and T2 may be nonzero — T2 is the second variation of the action — and it's the T1 term that vanishes. This is equivalent to the functional derivative δA/δx(t) equaling zero. Now, Hamilton's principle is stated in many books by saying that the action is minimal along the classical motion. In fact — and I'll repeat this — that's not really correct. A more proper way of saying it is that the action is stationary on the classical path. All we've done is shown that the first-order variations vanish around the classical path. It's just like an ordinary function: the derivative being zero doesn't mean you're at a minimum; you could be at a maximum, or in many dimensions at a saddle point. There are all kinds of possibilities. — Is it possible for there to be several extremal paths? — Absolutely. In fact, this is another error which is made in many books: they say there's a unique classical path connecting these endpoints at the given times, and in general there's not. — Then which action do they mean? — Every classical path that exists satisfies this condition. For all of them it's true — it's like a function that may have many extrema, here and here and here; for each one of them, the first derivative vanishes. Likewise in path space: this is like the first derivative in path space, and you may find more than one stationary point. Here's an example: a particle in a box with hard walls.
Suppose I start off at this wall here at t equals zero, and I want to get back to the same wall at final time t, so the initial x and final x are the same. One way to do it is just to have zero velocity and stay there; you certainly get back to the final position. Another way to do it is to give yourself enough velocity so that in time t you go and hit the other wall, bounce, and come back. That's another classical orbit. In this case you see there's an infinite number of classical orbits that satisfy the initial and final conditions at the initial and final times. — So to differentiate, do you want to say something about the initial and final conditions? — Yes; it's interesting, because the functional derivative leads to a differential equation. The usual thing you do in classical mechanics is solve that differential equation given an initial position and an initial velocity. Here, the formulation is in terms of an initial position and a final position, which is a kind of boundary value problem in classical mechanics. All right. So, as I said, this is not really a principle of least action; it's a principle of stationary action. However, it's interesting to ask whether or not the action really is a minimum on the classical path. It's just an academic question at this point, but if you want to answer it, you should go on to second order. So let's take a look at T2, because it turns out that although this is an academic question in classical mechanics, it actually has an effect in quantum mechanics. So let's look at the second variation T2, and once again I'd like to take the first term and integrate by parts. That's the δẋ times δẋ term, if you'll follow my finger: integrate one of them, differentiate the other, get a δx times δẍ, and there's a boundary term.
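The bouncing-particle example can be made quantitative with a little arithmetic (my own labels and parameters, just for illustration): the orbit that bounces off the far wall n times covers total distance 2nL at constant speed, so each n gives a distinct classical orbit with its own action.

```python
# Particle of mass m in a box of width L (hard walls, V = 0 inside):
# classical round trips from one wall back to itself in time t.
# The n-bounce orbit travels total distance 2 n L at constant speed,
# so v_n = 2 n L / t and the action is purely kinetic: (m/2) v_n² t = 2 m n² L² / t.
m, L, t = 1.0, 1.0, 2.0

def bounce_action(n):
    """Action of the orbit that bounces off the far wall n times (n = 0: stay put)."""
    v = 2.0 * n * L / t
    return 0.5 * m * v**2 * t      # = 2 m n² L² / t

for n in range(4):
    print(n, bounce_action(n))     # 0.0, 1.0, 4.0, 9.0 for these parameters
```

Every one of these orbits is a stationary point of the action with the same endpoint data, which is exactly the non-uniqueness just discussed.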
So this becomes m over two times δẋ times δx, evaluated between the limits t0 and t1, plus an integral from t0 to t1 where, with the change of sign, we've got minus m over two times δẍ times δx; and then the potential energy term I'll just copy down — you see the one-half up there; it's just the Taylor expansion — so it's minus one-half V″(x) times δx². That's T2. And once again you see this first, boundary, term goes to zero because of the boundary conditions on δx, and we're left with this integral. I just remembered something I wanted to say before I got launched into T2, so let me put T2 on hold and go back up to the previous board. T0 is the action along the unperturbed path, and, as I just explained, if the path is physical then T1 is equal to zero. You can evaluate the action along any path, whether it's physical or not; however, if the path is a physical path — a solution of Newton's laws — then T0 gives us a number which is the action evaluated on that classical path. And there's a question about whether that number, the action on the classical path, has any particular meaning in classical mechanics. This is a question that Hamilton asked himself — in the 1830s, I think it was — and it turns out that it does have an interesting physical role in classical mechanics. Let's make the following definition. Let's take the action and evaluate it along what I'll call x_cl(t), meaning a classical path, a solution of Newton's laws. Well, if it's a classical path, then it also has to satisfy these boundary conditions at (x0, t0) and (x1, t1), so it's a rather special kind of path. And the number that comes out depends on x0, t0, x1, and t1 — let's write it this way: S(x0, t0, x1, t1), defined to be the action evaluated on the classical path. This is not a functional; it's an ordinary function of these
four parameters. Now, to the questioner who just asked whether the classical path is unique: there may be more than one, and then you can evaluate this along the different classical paths. So let's introduce a branch index, b equals 1, 2, 3 and so on, labeling the classical solutions, and this S depends on b: when I put a b index on it, I've indicated which classical path I'm talking about. But anyway, this is the function that's created in this way, and it's called Hamilton's principal function. Hamilton's principal function is a function of these four parameters. Now, one of the things Hamilton showed is that this function satisfies interesting differential equations: in particular, ∂S/∂x1 is equal to the momentum at the endpoint of the path, and ∂S/∂x0 is equal to minus the momentum at the beginning of the path. Likewise, ∂S/∂t1 is equal, with a minus sign, to the energy — the Hamiltonian — at the end of the path, and ∂S/∂t0, with a plus sign, is equal to the Hamiltonian at the beginning of the path. I won't prove these in class here, although I did prove them in the appendix I wrote on classical mechanics, so if you want to see how they're derived, look there. Anyway, this is Hamilton's principal function: the action evaluated on the classical path. Now, back to T2 — that's what we're really thinking about here. We're evaluating the action on the classical path and considering variations; the first variation T1 is zero, in accordance with Hamilton's principle, but we're going to look at T2 to find out whether the action is really a minimum or not. So here's T2; by integration by parts I've re-expressed it in terms of this integral. Now, here's some convenient notation. Let f(t), g(t), et cetera be functions — real functions, in fact — defined on the interval from t0 to t
less than or equal to t less than or equal to t1 — that is, on the time interval t0 ≤ t ≤ t1 of this classical problem. And let's define a scalar product, which I'll use Dirac notation for: ⟨f|g⟩ is equal to the integral from t0 to t1 of f(t) times g(t) dt. It looks just like the scalar product of wave functions in quantum mechanics, except I didn't put a star in, and the only reason I didn't is that these are real functions. Another difference, of course, is that the integral is over time instead of over space — this is a classical problem here — but it formally looks like a scalar product in quantum mechanics; in fact, it defines a vector space of such functions. Now then, allow me to take this integral here and rewrite it in a slightly different form: the integral from t0 to t1, with the δx(t) on one side like this, then in the middle minus m over two d²/dt² minus one-half V″(x(t)) — x here being the classical path — and then a δx(t) on the other side, integrated over time, dt. This is just rewriting the integral. Let me call the thing in the square brackets D. It's an operator — an operator that acts on functions of time — and with that notation you can see that we can write T2 as δx sandwiched around D: it looks just like the expectation value of an operator in quantum mechanics. So now, if we want to know whether the action is a minimum along the classical path — the first variation having vanished — what we need to show is that the second variation is positive for all choices of δx, all nonzero choices (for δx identically zero, of course, the variation T2 would be zero). For the action to be a minimum would mean that if you make any variation around the classical path, the action can only increase; that's the condition for a minimum. But that in turn is equivalent to saying this operator D is positive definite. So if you want to find out whether the action is
really a minimum, you take this operator and find its eigenvalues, and if they're all positive, then the action really was a minimum and what the books say about least action is actually correct. The answer? It depends on the potential, and it also depends on the time interval: sometimes it is a minimum and sometimes it isn't. So, in general, Hamilton's principle is not a principle of least action; it's a principle of stationary action. In any case, this is what you need to do if you want to find out: examine the eigenvalues of D. Now, I think that's all I want to say about Hamilton's principle in classical mechanics — it's a good deal more than you probably saw in your classical mechanics course — and allow me now to go back to the path integral. So where we stand at this point is that we've noticed that the exponent is formally similar to the action integral that occurs in classical mechanics. The path x(τ) satisfies the same type of boundary conditions — it passes through given positions at given times, as in a classical problem. The paths themselves are rather different — they're these Brownian-motion or white-noise type paths that I described last time, definitely not smooth — but apart from that it's very similar. And so the suggestion is raised that the path integral provides some kind of connection between Hamilton's principle in classical mechanics and the quantum mechanics of the propagator. By the way, Hamilton's principle was enunciated a hundred years before quantum mechanics; the fact that this functional derivative vanishes on the physical paths was long known, but no one knew why it was true, and the real explanation of Hamilton's principle only came with Feynman's path integral in the 1940s. All right. So in any case, here's the idea: the limit in which ℏ goes to zero should go over to the classical problem — we should see classical mechanics emerging. However, if ℏ goes to zero, it means
that this exponent here gets to be large, and more exactly it becomes rapidly oscillating as the path varies. And what this means — because of the rapid variation as ℏ gets close to zero — is that typical paths in path space interfere destructively with their neighbors, the nearby paths, because of the rapid oscillations. The exception, however, is paths in path space for which the action is stationary, because then nearby paths in path space are nearly in phase with the given path. Well, by Hamilton's principle, those are the classical paths. So the classical paths are privileged: they're the paths in path space that are in phase with their neighbors, and the result is that this integral — an integral over all of path space, including a lot of crazy paths like white noise — should be dominated by the regions of path space that lie around the classical paths. And that is, in fact, how classical mechanics emerges from the path integral. Now what I want to do is go into some more detail and show you how this comes about more explicitly. This idea — that if you have an integral with a rapidly oscillating exponent, all the contributions come from the places where the phase is stationary — is called the principle of stationary phase, and it is an approximation that is frequently used for approximating integrals. So let me switch over now to pure mathematics and give you some examples of what is called the stationary phase approximation. Let's take a one-dimensional integral over a variable x, and let's say there is a phase φ(x), and I will divide it by a quantity κ; what we are going to think about is κ going to zero. So κ is a small number, and as κ goes to zero, the integrand, which is e to the i φ(x) over κ, oscillates more and more rapidly. The amplitude, of course, is equal to one — we are thinking of φ here as a real function. So if you plot the integrand on
the x-axis, it looks something like this for some value of κ; and if I take κ divided by two, I get the same amplitude but oscillations that are twice as fast, like this. When you look at these kinds of rapid oscillations, what happens is that the area of one positive lobe nearly cancels the area of the subsequent negative lobe, so you're adding alternating-sign, nearly canceling numbers, and the result is that this integral goes to zero as κ goes to zero — it just oscillates itself to death. A finer question is how it goes to zero as κ goes to zero: how does the answer depend on κ for κ small? And the answer to that question depends on whether or not the function φ has any of what we call critical points, or stationary phase points. So here is what I mean by a critical point — that's the mathematical terminology. Call it x̄: x̄ is a root of the equation φ′(x̄) equal to zero. In other words, x̄ is just a place where the derivative of the phase is equal to zero; that's what's called a critical point. I'm going to call it a stationary phase point, because that's more pictorial for our applications: it's a point of the variable x — the variable of integration here — at which the phase is locally stationary, its first derivative vanishing. Another way to say this is that at a stationary phase point, first-order variations in x around x̄ produce only second-order variations in the phase. This should remind you of Hamilton's principle in path space. The significance of a stationary phase point is that the integrand stays in phase over a larger range of the variable x than it does at an ordinary point. Instead of getting oscillations like this, what you find is that there's one central lobe around x̄ which is in fact not cancelled by its neighbors, and it has a width which is proportional to the square root of
kappa, and so it contributes a total integral which is proportional to the square root of κ. So the answer to the question up here is this: it turns out the integral goes as the square root of κ if there exists a stationary phase point; but if there are no stationary phase points, it goes to zero faster than any power of κ — exponentially fast, in fact. So it makes a big difference whether there are stationary phase points. The approximation itself is straightforward. Let's suppose there is a critical point, a stationary phase point, x̄, and we'll take φ(x) and approximate it in the neighborhood of x̄. Let y equal x minus x̄ — the deviation from the stationary phase point — and then, doing the Taylor series expansion, we have φ(x̄) plus y times φ′(x̄) plus one-half y² times φ″(x̄) plus et cetera, except that the linear term goes away, because x̄ is a stationary phase point; it's the first-order term that vanishes. And if I call this integral I, then I should be approximately equal to the integral over dx of e to the i over κ times this expansion: e to the i φ(x̄) over κ, times e to the i over 2κ times φ″(x̄) y². By the way, you can see the first term is just a constant phase, independent of the variable of integration, which you can take out front; and the second term is a Gaussian integral, which you can do. And so what you get is that the integral I is approximately equal to — first there's the constant phase, which I'll write out here on the right, e to the i φ(x̄) over κ — times, for the Gaussian integral, if you do it you'll find the answer is the square root of 2 π i κ divided by φ″(x̄). All right, this is called the stationary phase approximation to the original integral. Now there are two things I need to fix up
about this answer. The first is that it involves the square root of i, and there are two square roots of anything, so you need to say which one you mean. If you do it right, the square root of i here is e^{iπ/4}. However, if φ″, the second derivative, is negative, then you can transfer the sign of φ″ over to the i, and you get the square root of −i, which is e^{−iπ/4}. So the phase is either +iπ/4 or −iπ/4, depending on the sign of φ″. Let's do this: let ν equal the sign of φ″ at the stationary phase point. Then we can rewrite the answer as

I ≈ e^{iνπ/4} √(2πκ / |φ″(x̄)|) e^{(i/κ) φ(x̄)},

which is a better version, because the phase is now unambiguous. Now there is one more change, not so cosmetic, which, as was pointed out, is that there may be more than one root of the stationary phase condition. Let's label the roots by a branch index, call it b = 1, 2, 3, and so on; typically there is a discrete set of roots. If we do this, then the answer has to be summed over the branches: the x̄'s depend on the branch, and ν depends on the branch, because it depends on the sign of φ″ at that branch's root. Having made those changes, this is now the stationary phase approximation for one-dimensional integrals with rapidly oscillating integrands; it really amounts to a Gaussian approximation to the integral around each stationary phase point. Now let's do the N-dimensional generalization, almost just quoting answers in this case.
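As an aside not in the lecture, the one-dimensional formula is easy to check numerically. Here is a minimal Python sketch; the phase φ(x) = cos x, the interval [1, 5], the value κ = 0.01, and the helper name `stationary_phase_1d` are my own illustrative choices, not anything from the course notes. The comparison integral is done by brute-force trapezoid quadrature on a grid fine enough to resolve the oscillations.

```python
import numpy as np

def stationary_phase_1d(phi, d2phi, xbars, kappa):
    """Stationary phase approximation to I = integral of exp(i phi(x)/kappa) dx,
    summed over the stationary points xbars (the roots of phi')."""
    total = 0.0 + 0.0j
    for xb in xbars:
        nu = np.sign(d2phi(xb))          # sign of phi'' fixes the e^{+-i pi/4} phase
        amp = np.sqrt(2 * np.pi * kappa / abs(d2phi(xb)))
        total += np.exp(1j * nu * np.pi / 4) * amp * np.exp(1j * phi(xb) / kappa)
    return total

# Illustrative example: phi(x) = cos x on [1, 5]; the only stationary point
# is xbar = pi, where phi''(pi) = +1, so nu = +1.
phi   = np.cos
d2phi = lambda x: -np.cos(x)
kappa = 0.01

I_spa = stationary_phase_1d(phi, d2phi, [np.pi], kappa)

# Brute-force quadrature of the same integral (trapezoid rule, fine grid).
x = np.linspace(1.0, 5.0, 400001)
f = np.exp(1j * phi(x) / kappa)
I_num = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(x))

print(abs(I_num - I_spa) / abs(I_spa))   # small; endpoint corrections are O(kappa)
```

The residual discrepancy comes from the interval endpoints, whose contribution is of order κ rather than √κ, so the relative error shrinks as κ is made smaller.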
Let x now stand for the N-tuple (x₁, …, x_N), and let x̄ = (x̄₁, …, x̄_N) be the stationary phase point, which is to say a root of the equations ∂φ/∂x_i evaluated at x̄ equals 0, for all i from 1 up to N. By the way, the integral we're interested in here, call it I, is an integral over N-dimensional space,

I = ∫ d^N x e^{(i/κ) φ(x)},

the multidimensional generalization of the one above. In other words, the stationary phase point is just a place where the gradient of the phase, the multidimensional gradient, vanishes. And now we do the same thing: we take φ and expand it out to second order. The first-order terms vanish at the stationary phase point, because that's the definition of the stationary phase point, so we get a constant term and a quadratic term. The quadratic term is a multidimensional Gaussian integral involving a symmetric matrix, the matrix of second derivatives of φ, which can be diagonalized and thereby converted into a bunch of one-dimensional integrals just like the one we did. The details are contained in the notes; I'm just going to quote the answer for you. The integral I turns into

I ≈ e^{iνπ/4} √((2πκ)^N / |det(∂²φ/∂x_i∂x_j)(x̄)|) e^{(i/κ) φ(x̄)}.

Now this is not quite done yet, because I need to say what ν is, the generalization of the sign of the second derivative. In the multidimensional case you need to look at the eigenvalues of the second-derivative matrix: some of them are positive and some are negative. The positive ones each give you a phase of +π/4, and the negative ones −π/4. So the ν here is defined as ν = ν₊ − ν₋, where ν₊ and ν₋ are equal to the
numbers of positive and negative eigenvalues of the second-derivative matrix ∂²φ/∂x_i∂x_j evaluated at the stationary phase point. It's a symmetric matrix, so its eigenvalues are real. This just accounts for the ±π/4 phase factors that come from the N different Gaussians once the matrix has been diagonalized, and the determinant, which is the product of the eigenvalues, is really the product of the φ″ factors for each of the diagonalized dimensions. Then, finally, one last change: in case there are multiple roots, we sum over the branches. Again it's x̄ that is labeled by the branch index, so the index appears in the amplitude and in the phase, and it also appears in ν, because ν is computed from the eigenvalues at x̄_b. This final boxed formula is the multidimensional version of the stationary phase approximation. Now let me pull this down so you can see it, and let me go back to the path integral and say what we're going to do next. We're going to apply the multidimensional version of the stationary phase approximation to the path integral. The idea is that what I called κ up there is to be replaced by ħ, which we'll think of as being small, leading to a rapidly oscillating exponent, which is the classical action functional along the path. What we'll do is find the stationary paths, which are the analogs of the stationary phase points; those are the classical paths. And we'll carry the expansion out to second order, because that's what you need to do to get the Gaussian integrals of the stationary phase approximation. This will mean we'll be considering paths in path space that include not only the classical path but also variations around it, like a little tube in path space around the classical path. Then, by expanding the action to second order, we have in effect a multidimensional Gaussian integral, which is doable.
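Before moving on to the path integral, the multidimensional formula quoted above can be spot-checked numerically. A minimal sketch, with my own illustrative choices (the separable phase φ(x, y) = cos x + cos y on the box [1, 5]², κ = 0.005, and the helper name `spa_nd` are assumptions for the example, not from the lecture); the separability is exploited only to keep the brute-force quadrature cheap, while the approximation side uses the genuinely multidimensional ingredients: the Hessian determinant and the index ν = ν₊ − ν₋.

```python
import numpy as np

def spa_nd(phi_at, hess_at, xbar, kappa):
    """Multidimensional stationary phase approximation for
    I = integral d^N x exp(i phi(x)/kappa), around one stationary point xbar."""
    H = hess_at(xbar)                            # symmetric second-derivative matrix
    evals = np.linalg.eigvalsh(H)
    nu = np.sum(evals > 0) - np.sum(evals < 0)   # nu = nu_plus - nu_minus
    N = len(xbar)
    amp = np.sqrt((2 * np.pi * kappa) ** N / abs(np.linalg.det(H)))
    return np.exp(1j * nu * np.pi / 4) * amp * np.exp(1j * phi_at(xbar) / kappa)

# Test case: phi(x, y) = cos x + cos y on [1, 5]^2, stationary point (pi, pi),
# where the Hessian is the identity, so det = 1 and nu = 2.
kappa = 0.005
phi_at  = lambda p: np.cos(p[0]) + np.cos(p[1])
hess_at = lambda p: np.diag([-np.cos(p[0]), -np.cos(p[1])])

I_spa = spa_nd(phi_at, hess_at, np.array([np.pi, np.pi]), kappa)

# Brute-force comparison: the phase separates, so the 2D integral is the
# square of a 1D integral, which keeps the quadrature affordable.
x = np.linspace(1.0, 5.0, 80001)
f = np.exp(1j * np.cos(x) / kappa)
I1 = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(x))
I_num = I1 * I1

print(abs(I_num - I_spa) / abs(I_spa))   # small; shrinks as kappa -> 0
```

Note that with ν = 2 the overall phase factor is e^{iπ/2}, the product of the two e^{iπ/4} factors from the two diagonalized directions, exactly the bookkeeping described above.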
It gives us an approximation, actually a semiclassical approximation, of the propagator, one built around the structure of the classical paths connecting the initial and final positions. Okay, so there's a fair amount of algebra in doing this, but I want to be clear, before I get into it, about what the general strategy is. In fact, I'm not going to go through all the details in lecture; most of them are in the notes. I really think the essential ideas here are more important than all the details. All right, so take a look at the discretized version of the path integral. As you see, it's a capital-N-minus-1-dimensional integral, and we're going to let N go to infinity. As for the phase: if I identify the ħ here with the κ I was using before, the phase is everything in the exponent except the i over ħ, so it's ε times that sum, and ε times the sum is an approximation to the action integral, the integral of the classical Lagrangian. And now you see why we want the second order in the expansion of the classical action functional: the second-order terms are going to give us the Gaussian integration in the stationary phase approximation. This, by the way, is why it makes a difference in quantum mechanics whether the classical orbit minimizes the action or not, because this is the effective phase of the propagator. Probably the most amazing thing about all of this is the fact that the path integral gives an explanation for Hamilton's principle. So, although I have to cover this up, let me follow my notes, using this notation: take the κ in the formulas up there and replace it by ħ, and the integration variables, what I called x = (x₁, …, x_N) up there, are now the intermediate positions x₁ up to x_{N−1}. Our discretized exponent, apart from the i over ħ, is then

φ = ε Σ_{k=0}^{N−1} [ (m/2) (x_{k+1} − x_k)² / ε² − V(x_k) ].

The first term here is the
discretized kinetic energy, (m/2)(Δx/Δt)², and the second is the potential energy evaluated at x_k. So the first thing we need to do is find the critical points, which means differentiating this with respect to, let's say, x_k. If you do this, what you find, since x_k appears in two of the kinetic terms and one potential term, is

∂φ/∂x_k = ε [ m (−x_{k+1} + 2x_k − x_{k−1}) / ε² − V′(x_k) ].

That's what you get. To find the critical, or stationary phase, points we have to set this equal to zero, and setting it equal to zero just means the thing in the square brackets is zero. Let me rewrite that in a slightly different form, bringing the potential term over to the other side:

m (x_{k+1} − 2x_k + x_{k−1}) / ε² = −V′(x_k).

Now, the ε is interpreted as Δt, and what's in the numerator on the left is a discretized version of the second time derivative of x. So what you've got here is a discretized version of the mass times the acceleration, and on the right-hand side you've got the force evaluated at x_k. This is a discretized version of the differential equation, which is what people do on computers when they solve such equations approximately, and it becomes exact in the limit that N goes to infinity. In that limit this turns into the continuous differential equation

m d²x(τ)/dτ² = −V′(x(τ)),

which is just Newton's equation. So what we have found, and this is Hamilton's principle, is that the stationary phase points of the action functional are precisely the classical paths.
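The discrete stationarity condition above, solved forward for x_{k+1}, is exactly the Stormer-Verlet integrator, one standard way of solving Newton's equation on a computer. A minimal sketch (the harmonic potential, the parameter values, and the function name `verlet` are my own illustrative choices): for V(x) = ½mω²x² with x(0) = 1, ẋ(0) = 0, the exact classical path is cos(ωτ), and the discrete stationary path converges to it as N → ∞.

```python
import numpy as np

# The discrete stationarity condition
#   m (x_{k+1} - 2 x_k + x_{k-1}) / eps^2 = -V'(x_k)
# solved forward for x_{k+1} is the Stormer-Verlet integrator.
def verlet(x0, v0, Vprime, m, eps, nsteps):
    xs = np.empty(nsteps + 1)
    xs[0] = x0
    # Taylor start-up step to encode the initial velocity:
    xs[1] = x0 + eps * v0 - 0.5 * eps**2 * Vprime(x0) / m
    for k in range(1, nsteps):
        xs[k + 1] = 2 * xs[k] - xs[k - 1] - eps**2 * Vprime(xs[k]) / m
    return xs

# Harmonic oscillator V(x) = 1/2 m omega^2 x^2: exact path x(tau) = cos(omega tau).
m, omega = 1.0, 1.0
Vprime = lambda x: m * omega**2 * x
T, N = 10.0, 100000
eps = T / N
xs = verlet(x0=1.0, v0=0.0, Vprime=Vprime, m=m, eps=eps, nsteps=N)

tau = np.linspace(0.0, T, N + 1)
err = np.max(np.abs(xs - np.cos(omega * tau)))
print(err)   # goes to zero as N -> infinity
```

The lecture's point appears here concretely: the same finite-difference expression is both the stationary phase condition of the discretized action and a numerical scheme for the classical equation of motion.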
In the discretized version of the path integral they appear in discretized form, and taking the limit gives us the continuous classical paths; so the x(τ) that satisfies this equation is the classical path. What's that? Oh, okay, right. Well, that's all then, so we'll take this up next time and finish the stationary phase approximation.
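As a final numerical aside, not part of the lecture, one can also check the claim that ε times the sum really is a Riemann-sum approximation to the action integral ∫ L dτ. A minimal sketch with my own illustrative choices (harmonic potential, the classical path x(τ) = cos τ with m = ω = 1 on [0, 1], and the helper name `discrete_action`); for that path the exact action is S = −¼ sin 2T, against which the discretized sum can be compared.

```python
import numpy as np

def discrete_action(xs, eps, m, V):
    """eps * sum_k [ (m/2) ((x_{k+1}-x_k)/eps)^2 - V(x_k) ]:
    the discretized exponent of the path integral, a Riemann sum
    for the action integral of L along the sampled path."""
    dx = np.diff(xs)
    return eps * np.sum(0.5 * m * (dx / eps) ** 2 - V(xs[:-1]))

# Harmonic oscillator, classical path x(tau) = cos(tau) on [0, T].
# Exact action: S = integral of (1/2 xdot^2 - 1/2 x^2) dtau = -(1/4) sin(2T).
m = 1.0
V = lambda x: 0.5 * x**2
T = 1.0
S_exact = -0.25 * np.sin(2 * T)

for N in (100, 1000, 10000):
    tau = np.linspace(0.0, T, N + 1)
    S_N = discrete_action(np.cos(tau), T / N, m, V)
    print(N, S_N, S_N - S_exact)   # error shrinks as N grows
```

The forward difference in the kinetic term and the left-endpoint sampling of the potential both contribute errors of order ε, so the discrete action converges to the continuum action linearly in 1/N.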