Let me begin by reviewing where we were at the end of last lecture. The idea is that we're interested in evolving a coherent state of the harmonic oscillator in time, partly as a way of understanding its relationship to classical mechanics. A coherent state has the minimum value of the product of the dispersions of x and p, so in some sense it represents a quantum state that is as close as possible to a classical state. In the present case we're thinking of a coherent state that is centered, as far as its expectation values go, on a position x0 and a momentum p0, that is, on a point in phase space; picture a blob centered around that point. We identify phase space with the complex plane by introducing the complex coordinate z = (x + ip)/√2, so the real and imaginary parts of z are, apart from the factor of √2, the same things as the position and momentum of the particle. We then define a coherent state |z0⟩ centered at that point; it is the displacement operator acting on the ground state. And we're going to be interested in the exact time evolution: we apply the time-evolution operator U(t) to this initial state |z0⟩, just to see what happens and how it compares with the classical motion. Last time, by using some operator relations, we succeeded in expanding this coherent state |z0⟩ as a linear combination of energy eigenstates. This is a preliminary step toward computing the time evolution: as you know, you frequently do this by expanding the initial condition as a linear combination of energy eigenstates. It's an interesting expansion because it gives us an explicit formula for the coefficients, which, apart from normalization, were just z0^n over the square root of n factorial.
We could, of course, also obtain the coefficients by integrating the Gaussian wave packet that is this initial coherent state against the harmonic oscillator eigenfunctions, which have Hermite polynomials in them. That would be a hard way of finding the coefficients; by operator methods, without too much trouble, we got the coefficients z0^n over the square root of n factorial. In any case, that's most of the work toward finding the time evolution. The reason is that U(t) is just e^{−itH}; remember, we're setting ℏ = m = ω = 1, so everything is in dimensionless units. The Hamiltonian acting on its own energy eigenstate brings out its eigenvalue, which is n + 1/2 in these units. So it's easy to apply U(t) to both sides of the expansion. On the left-hand side we get U(t) acting on |z0⟩, which is what we want. On the right-hand side, U(t) comes through and acts on the eigenstate |n⟩. So we get the same sum, n = 0 to infinity, with the same coefficients z0^n over the square root of n factorial, but U(t) acting on |n⟩ brings out the phase factor e^{−it(n + 1/2)} multiplying the energy eigenstate |n⟩. Now, there are two parts to this phase: the n part and the 1/2 part. The n part depends on the index of summation and the 1/2 part doesn't, so I can take the 1/2 part out of the sum. What's left over is the n part, e^{−itn}, multiplying z0^n. This suggests a combination, which I'll call z(t), defined as z(t) = z0 e^{−it}.
Now, this is a definition that's convenient for the quantum mechanical calculation, but, as I showed last time, it is actually also the same thing as the classical evolution in phase space in these complex coordinates. If I think of x0, p0, or equivalently z0, as an initial condition, the classical orbit is a circle; I'll try to draw it without being too lopsided. At some final time we're at z(t) down here, with z(t) = e^{−it} z0; t is the angle, and this is just a clockwise rotation of the complex plane. In other words, the classical solution is coming out of this quantum problem. And we can write the answer like this: there's the e^{−it/2}, the constant phase factor that came out of the sum, and then we have the sum n = 0 to infinity of z(t)^n divided by the square root of n factorial, multiplying the energy eigenstate |n⟩. This is the same sum we got earlier in expanding the initial state, except that instead of z0 we've got z(t). So this sum is itself a coherent state. The right way to read this is: e^{−it/2} times a coherent state centered at the location z(t) in phase space. That is, U(t) applied to the initial state |z0⟩ is equal to this. So this is the time-evolved state in quantum mechanics, and what you see is that it is proportional to a coherent state — a phase factor times a coherent state. That means the minimum uncertainty condition, Δx = Δp = 1/√2, which is our definition of a coherent state for the purposes of these last couple of lectures, is preserved: a coherent state remains a coherent state, and these dispersions don't change.
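As a numerical illustration (my own sketch, not from the lecture), we can check both claims in a truncated Fock basis, in the dimensionless units ℏ = m = ω = 1 used here: the center of the evolved state should follow the classical orbit z(t) = e^{−it} z0, and the dispersions should stay pinned at 1/√2.

```python
import numpy as np
from math import factorial

N = 80                       # Fock-space truncation (plenty for this z0)
z0 = 1.2 + 0.5j              # initial phase-space point z0 = (x0 + i p0)/sqrt(2)

# coherent-state coefficients: e^{-|z0|^2/2} z0^n / sqrt(n!)
n = np.arange(N)
c = np.exp(-abs(z0)**2 / 2) * z0**n / np.sqrt([float(factorial(k)) for k in n])

# ladder and quadrature operators in the truncated basis
a = np.diag(np.sqrt(np.arange(1, N)), 1)
x = (a + a.T) / np.sqrt(2)
p = (a - a.T) / (1j * np.sqrt(2))

def stats(psi):
    ex = np.vdot(psi, x @ psi).real
    ep = np.vdot(psi, p @ psi).real
    dx = np.sqrt(np.vdot(psi, x @ x @ psi).real - ex**2)
    dp = np.sqrt(np.vdot(psi, p @ p @ psi).real - ep**2)
    return ex, ep, dx, dp

for t in [0.0, 0.7, 2.0]:
    psi = np.exp(-1j * t * (n + 0.5)) * c     # U(t) applied in the energy basis
    zt = np.exp(-1j * t) * z0                 # classical orbit in the complex plane
    ex, ep, dx, dp = stats(psi)
    print(ex - np.sqrt(2) * zt.real, dx)      # center error, and the dispersion
```

The printed center error is at machine-precision level, and the dispersion stays at 1/√2 ≈ 0.7071 for every t, which is the statement that a coherent state remains a coherent state riding on the classical orbit.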
So the wave packet, in this respect and every other, keeps the same Δx and Δp, but its center — its expectation values — moves and exactly follows the classical motion. Now, the fact that the expectation values follow the classical motion is a consequence of the Ehrenfest relations and the fact that the Hamiltonian is quadratic, so we kind of knew that already. But this gives us further information: it tells us that a coherent state stays a coherent state, and moreover that there's an overall phase factor, this e^{−it/2}. So this is the main result: an explicit demonstration of how the wave packet of a coherent state follows the classical motion and remains a minimum uncertainty wave packet in the course of time. This result, in a different form, was first described by Schrödinger in his 1926 papers on quantum mechanics; he solved this problem, working with the Hermite-polynomial eigenfunctions rather than the operator methods we used. All right. So the lesson of this is the relationship between the classical and the quantum problem, which is particularly close for the harmonic oscillator. Let me mention one more thing. Suppose we had chosen an initial state in which the dispersions were not equal like this, but which was still a minimum uncertainty wave packet. Suppose, for example, Δx = 1/2 and Δp = 1; then the product Δx Δp still has its minimum value, 1/2. So this is a minimum uncertainty wave packet, all right, but the Δx is now smaller than it was for the coherent state. This is what's called a squeezed state. Squeezed states have been of interest lately because you can actually prepare them experimentally in quantum optics; it requires nonlinear optical elements to create them. And you can picture a squeezed state as something in phase space.
You see, the Δx has been squished by a factor of the square root of two, but the Δp has been stretched by the same factor, so the product remains invariant. You can picture this in phase space: instead of the circular blob I drew before, think of something that's been squished horizontally, so the ellipse sticks out vertically. If you go through a similar analysis — it's more challenging now, for the squeezed state, to work out the time evolution — you'll find that the dispersions Δx and Δp breathe in and out periodically, not with the frequency of the oscillator but with twice the frequency of the oscillator. So there is spreading and contracting of the wave packet, one might say, but it returns to a squeezed state with minimum uncertainty within each cycle. If it weren't so long to do, I'd give it as a homework problem; I think it's too much formalism relative to the value you'd get out of it, but it's not that bad a calculation to show that this is what happens to the dispersions in the case of the squeezed state. All right. That's essentially all I'm going to say for now about the harmonic oscillator, so let me ask if there are any questions about this general study before I move on. If not, I'm going to start on our next topic, which is propagators and path integrals. First, let me say a few words about path integrals. The path integral is a formulation of quantum mechanics that is, in some respects, an alternative to the usual one, which is based on the Schrödinger equation and Hamiltonians. The path integral is based instead on the Lagrangian — the classical Lagrangian, as a matter of fact.
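A quick numerical sketch of the breathing dispersions (my own, assuming the Heisenberg-picture solution x(t) = x cos t + p sin t in these units, from which the cross terms drop out for a squeezed state aligned with the x and p axes, giving Δx²(t) = Δx₀² cos²t + Δp₀² sin²t):

```python
import numpy as np

dx0, dp0 = 0.5, 1.0        # squeezed state from the lecture: dx0 * dp0 = 1/2 still

def dx(t):
    # Heisenberg picture: x(t) = x cos t + p sin t, so for a squeezed state
    # aligned with the axes the cross terms vanish and
    # dx(t)^2 = dx0^2 cos^2 t + dp0^2 sin^2 t
    return np.sqrt(dx0**2 * np.cos(t)**2 + dp0**2 * np.sin(t)**2)

for t in np.linspace(0, np.pi, 5):
    print(round(float(dx(t)), 4))
# dx breathes between 0.5 and 1.0 with period pi -- twice the oscillator
# frequency -- and the state is again minimum-uncertainty at multiples of pi/2
```

At t = π/2 the squeezing has rotated onto the other quadrature (Δx = 1, Δp = 1/2), and at t = π the original squeezed state returns, consistent with the doubled breathing frequency.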
So it turns out that, unlike the Hamiltonian of the usual formulation, which is an operator, the Lagrangian that appears in the path integral is, in a sense, the classical Lagrangian, and the result is a genuinely different formulation of quantum mechanics. Historically, the path integral came after the Schrödinger–Hamiltonian formulation of quantum mechanics. It was worked out by Feynman in the 1940s, and it's particularly valuable for relativistic problems where one requires covariance under Lorentz transformations; the path integral formulation is well suited for that. In any case, all I'm going to give you here today, and probably in Wednesday's lecture also, is a brief introduction to path integrals, but it contains the main ideas, and it's actually quite an interesting approach to quantum mechanics. By the way, Feynman's development of quantum mechanics was based on some earlier, rather interesting ideas of Dirac, which Feynman then put together and made very good use of in the early days of quantum electrodynamics. In those days one of the problems was to find covariant formulations of quantum mechanics, and Feynman did this primarily through the use of the path integral. All right, in any case, the path integral concerns the propagator, and what is the propagator? The propagator is basically a matrix element of the time-evolution operator. To be specific, in this presentation I'm going to keep it simple and talk about a one-dimensional problem where the Hamiltonian is kinetic plus potential energy, H = T + V. Then, assuming the potential energy is independent of time, the time-evolution operator is U(t) = e^{−itH/ℏ}; I'll go back to ordinary units now. And the propagator is basically the x-space matrix element of the time-evolution operator. I'll put this in quotes and call it the propagator.
One reason I put it in quotes is that this isn't exactly — there are slight variations on this in what you will find in the literature as the definition of the propagator. But I'm trying to keep the discussion simple, so for us this will do; later in the course, when we talk about Green's functions, I'll be more careful about it. In any case, this is basically the x-space matrix element of the time-evolution operator. This quantity, the propagator — let's give it a name, capital K — is a function of two positions, x and x0, and an elapsed time t: K(x, x0, t) = ⟨x|U(t)|x0⟩. You think of x0 as being, in some sense, an initial position; the reason we call it that will be apparent later on. I'm thinking of this x over here as a final position. You can also think of initial and final times: t = 0 is the initial time, and the parameter t here is the final time. Of course, |x0⟩ stands for a singular quantum state in which the particle is concentrated in an infinitesimal region around the point x0. You can't really realize such a state in practice, because it's not normalizable, but it's an idealization, a limit of realistic states. Then if we apply U(t) to it, it's like saying: we knew the particle was at position x0 at the initial time, we let it evolve in time, and when we take the scalar product with ⟨x|, we're finding the amplitude to find the particle at a particular final position at a later time. In a sense, it gives us a probability amplitude, or a probability density, for finding the particle at some variable final position, if we think of x as variable and x0 as fixed. This is a way of visualizing it, and of attaching language to it: an amplitude to find the particle at a final position, given that it was at some initial position at the initial time.
The propagator can be used to advance wave functions in time. In ket language, U(t) maps the initial state into the final state: |ψ(t)⟩ = U(t)|ψ(0)⟩. If we define a time-dependent wave function ψ(x, t) = ⟨x|ψ(t)⟩, then this ket equation becomes, in wave-function language, ψ(x, t) = ∫ dx0 ⟨x|U(t)|x0⟩ ψ(x0, 0). This just comes from multiplying both sides by ⟨x| and inserting the resolution of the identity between U(t) and |ψ(0)⟩, with x0 the variable of integration. So you obtain this prescription for evolving wave functions forward in time, from the initial wave function to the final one, and this part of the integrand, ⟨x|U(t)|x0⟩, is called the kernel of the integral transform. That kernel is, by this definition, the same as the function K(x, x0, t): the propagator is that kernel. I should also mention a pictorial way of thinking about this, related to Huygens' principle, which goes back several centuries in optics. The final wave function at the final time is a superposition — that's what this integral is. If here is some region where the initial wave function ψ(x0, 0) is nonzero, you can think of each point of that region as radiating a wave — think of it as a kind of circular wave coming out, as one often draws it. And what is that wave?
The wave is the propagator: the propagator radiated from position x0, evaluated at the final position and time, x and t. What the integral is saying is that the final wave function is the linear superposition of all these waves coming out, each weighted by the value of the initial wave function at the point which is the source of the radiated wave. So this is a pictorial way of thinking of the propagator, in relation to Huygens' principle. That's a little background on the propagator and its x-space matrix elements. One thing I'd like to do right away is to work out an example of the propagator, and I'd like to do it for the free particle, partly because we'll refer back to the result quite a few times later on. So let's do it now; it's just a calculation. For the free particle, of course, the Hamiltonian is p̂²/2m — I'll put a hat on it just to emphasize that it's an operator. So the propagator K(x, x0, t) is equal to the matrix element between ⟨x| and |x0⟩ of the time-evolution operator, which is e^{−(i/ℏ)(p̂²/2m)t}. It's a position-space matrix element. We'll evaluate it by inserting a resolution of the identity in momentum space, ∫dp |p⟩⟨p|. The result is that this can be written as an integral over p of ⟨x| e^{−(i/ℏ)(p̂²/2m)t} |p⟩⟨p|x0⟩. Now, because this operator, the time-evolution operator, is a function only of the momentum operator, and it's acting on an eigenstate of momentum, it just replaces the operator by its eigenvalue. The exponential becomes a number — a c-number — which we can
take out of the matrix element. So what we get is ∫dp e^{−(it/ℏ) p²/2m} times the product of matrix elements ⟨x|p⟩⟨p|x0⟩. These are standard position–momentum matrix elements, and if you multiply the two together, the product is e^{ip(x − x0)/ℏ} divided by 2πℏ. — Yes, a question: what happens to the x0? — x and x0: think of these as the fixed initial and final positions; they are the two sides of the matrix element which constitutes the propagator. If you were going to use it to evolve a wave function, you would then integrate over x0; right now I'm just calculating K. OK, so this is a Gaussian integral: an integral over momentum with a p² in the exponent and a term linear in p in the exponent. I'll skip the details of doing the Gaussian integral because it's straightforward, and just quote the answer. It is this — let me write it down in full: K(x, x0, t) = √(m/2πiℏt) · e^{(i/ℏ) m(x − x0)²/2t}. And let me box that, because that's the main result for the propagator of the free particle. There aren't very many systems for which one can calculate the propagator explicitly, but the free particle is one of them, and it appears quite often in practice; this is a result we'll come back to a number of times in the course. Now, before I leave this to move on to the path integral, let me mention one thing: the quantity that appears in the exponent here, multiplying i/ℏ, has an interesting interpretation in classical mechanics. It's called Hamilton's principal function, and I'll say more about that later on.
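The boxed result can be checked numerically (a sketch of my own, with ℏ = m = 1): applying the kernel to the Gaussian packet ψ0(x) = π^{−1/4} e^{−x²/2} should reproduce the known spreading Gaussian π^{−1/4}(1 + it)^{−1/2} e^{−x²/2(1+it)}, which is what one gets by evolving in momentum space.

```python
import numpy as np

hbar = m = 1.0
t = 1.5

def K(x, x0, t):
    # free-particle propagator sqrt(m/(2 pi i hbar t)) e^{i m (x-x0)^2 / (2 hbar t)}
    return np.sqrt(m / (2j * np.pi * hbar * t)) * \
        np.exp(1j * m * (x - x0)**2 / (2 * hbar * t))

x0 = np.linspace(-15, 15, 6001)              # integration grid for x0
psi0 = np.pi**-0.25 * np.exp(-x0**2 / 2)     # initial Gaussian wave packet

xf = 0.8                                     # final position to evaluate at
num = np.sum(K(xf, x0, t) * psi0) * (x0[1] - x0[0])   # psi(xf, t) via the kernel
exact = np.pi**-0.25 / np.sqrt(1 + 1j * t) * np.exp(-xf**2 / (2 * (1 + 1j * t)))
print(abs(num - exact))                      # agreement to quadrature precision
```

The oscillatory kernel is tamed here by the Gaussian decay of ψ0, so a plain Riemann sum converges; the printed discrepancy is far below 10⁻⁶.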
But it is a function which plays an important role in classical mechanics, and it's interesting that it's appearing here in this quantum problem, in the propagator. Do I hear a question? Yes — there's a 2t in the denominator there? Yes, a 2t up there. All right. Now let me turn to the more general case in which the Hamiltonian also has a potential energy. In that case we can't generally write down an explicit formula for the propagator, but there is a version of a general formula for it, which is the path integral — the main objective I want to get to here today. So let H now be p̂²/2m, the kinetic energy, plus V(x̂). I emphasize again that these are operators: the kinetic-plus-potential Hamiltonian. Now, we're going to be interested ultimately in the matrix elements, but let's just talk about the time-evolution operator itself right now. It's a simple exponential, U(t) = e^{−itĤ/ℏ}. What we're going to do is take the time interval t and write it as t = Nε, where the psychology here is that N is going to infinity. We'll take the limit later on, but right now N is a large number — as large as you like. This means that ε = t/N, and we think of t itself, the final time, as fixed. That's the content of these definitions. The idea is that we're going to split this finite time interval t into a large number of small intervals of duration ε; so ε is like a small Δt. It then follows that U(t) is the same thing as U(ε) raised to the Nth power.
Therefore the propagator, K(x, x0, t), which is the matrix element of U(t) sandwiched between ⟨x| and |x0⟩, can be written as ⟨x| U(ε) U(ε) ⋯ U(ε) |x0⟩, where there are N factors in total. In other words, we're switching over, in effect, to a large number of small-time propagations, and the question is: can we approximate the small-time propagator in some useful way? The answer is yes, and it works like this. Let's take the matrix element between two points — x and, I'll call it x′ now instead of x0 — of the small-time propagator U(ε). This is ⟨x| e^{−(i/ℏ)ε(T + V)} |x′⟩: the kinetic energy plus the potential energy in the exponent. As far as the operator in the matrix element is concerned, ε is now small, so let's expand in a Taylor series and write it as 1 − (iε/ℏ)(T + V) + O(ε²) for the quadratic term. Compare that to this: it can also be written as [1 − (iε/ℏ)T + higher-order terms] × [1 − (iε/ℏ)V + higher-order terms]. Now, if you multiply these two series out to first order in ε, it reproduces the line above, so the two expressions agree to order ε — and by these I mean the two exponential series, e^{−(iε/ℏ)(T+V)} and the product e^{−(iε/ℏ)T} e^{−(iε/ℏ)V}. But if you carry it out to second order, you'll find it does not agree with the ε² term above. The reason for that is that if you have operators, e^A e^B is in general not equal to e^{A+B}.
We saw an example of this already with Glauber's theorem, where in some cases you can correct for it exactly. In other words, operators don't obey the rules of ordinary numbers when they don't commute, and in this case the kinetic and potential energies certainly don't commute, because one depends on the momentum and the other on the position. However, in the small-ε expansion, the two exponential series do agree through first order. So if I put an order-ε² correction in, this is actually a correct statement, and I can rewrite it as e^{−(iε/ℏ)(T + V)} = e^{−(iε/ℏ)T} e^{−(iε/ℏ)V} + O(ε²). It's an approximate factorization, valid for small times, into the kinetic and potential factors. And so the matrix element now looks like this: ⟨x| e^{−(iε/ℏ) p̂²/2m} e^{−(iε/ℏ) V(x̂)} |x′⟩ + O(ε²), where that last term is the error I'm committing. Now, for this matrix element, allow me to once again introduce a momentum resolution of the identity between the two factors, very similar to what was done in the case of the free particle above. This turns it into an integral ∫dp ⟨x| e^{−(iε/ℏ) p̂²/2m} |p⟩ ⟨p| e^{−(iε/ℏ) V(x̂)} |x′⟩ + O(ε²).
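The O(ε²) claim for the factorization can be verified numerically (a sketch with an assumed toy discretization of my own — a finite-difference kinetic matrix and an oscillator potential on a grid, not anything from the lecture): halving ε should quarter the error.

```python
import numpy as np

# toy 1D grid: kinetic energy T as a finite-difference Laplacian, potential V = x^2/2
M = 40
xg = np.linspace(-4, 4, M)
h = xg[1] - xg[0]
T = (2 * np.eye(M) - np.diag(np.ones(M - 1), 1) - np.diag(np.ones(M - 1), -1)) / (2 * h**2)
V = np.diag(xg**2 / 2)
H = T + V

def U(A, eps):
    # e^{-i eps A} for a Hermitian matrix A, via eigendecomposition
    w, P = np.linalg.eigh(A)
    return (P * np.exp(-1j * eps * w)) @ P.conj().T

def err(eps):
    # norm of e^{-i eps H} - e^{-i eps T} e^{-i eps V}
    return np.linalg.norm(U(H, eps) - U(T, eps) @ U(V, eps))

e1, e2 = err(0.002), err(0.001)
print(e1 / e2)   # close to 4: halving eps quarters the error, so the error is O(eps^2)
```

The leading discrepancy is −(ε²/2)[T, V], which is why the ratio approaches 4 as ε shrinks; this is exactly the error that the N-fold subdivision below keeps under control.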
And now these matrix elements are easy to do, because the momentum-dependent kinetic energy operator is next to the momentum eigenket, and the potential energy operator is next to the position eigenket. So I just replace the operators by their eigenvalues, x′ here and p there; those become numbers. This becomes ∫dp e^{−(iε/ℏ) p²/2m} — no hat on that; this p is the momentum eigenvalue, a variable of integration — times the remaining matrix element ⟨x|p⟩; then e^{−(iε/ℏ) V(x′)}, where x̂ has been replaced by x′; and then the remaining matrix element ⟨p|x′⟩. If you take the two remaining matrix elements together, they're the same thing as e^{ip(x − x′)/ℏ}, with the whole thing divided by 2πℏ. Let me clean this up into a single integral so it's easier to read. The whole thing becomes ∫ (dp/2πℏ) exp[ −(iε/ℏ) p²/2m + (i/ℏ) p (x − x′) − (iε/ℏ) V(x′) ]. Now, the momentum integral to be done here is exactly the same as in the case of the free particle above — a Gaussian integral. The only thing that's different is the extra phase, which wasn't there before, that depends on the potential evaluated at x′; the variable of integration is p, so that's just a constant as far as the integral is concerned. And the result is that we get an answer which is essentially the free-particle propagator above, except that t is replaced by the small time ε, and we have this phase factor for the potential energy, plus the order-ε² correction.
So let me just say dot dot dot here and fill in the result on the board. What we get is that the short-time propagator is ⟨x| U(ε) |x′⟩ = √(m/2πiℏε) · exp{ (i/ℏ)[ m(x − x′)²/2ε − ε V(x′) ] } + O(ε²). That's the short-time expression for the propagator. Now, to go back to the object we actually want: this is the propagator for some finite time, ⟨x| U(t) |x0⟩, where t is not necessarily small. But U(t) is U(ε) raised to the Nth power, so this is the same thing as ⟨x| times a string of factors of U(ε), ending with |x0⟩ — and you can see that there are capital N factors there. So what we do now is insert a resolution of the identity between each pair of factors. This time it will be in position space: an x-space resolution of the identity. Since there are N factors, there are N − 1 slots between them. We'll call the variable of integration for the slot on the right x1, then x2, and so on, down to x_{N−1} for the last one — a series of resolutions of the identity. And just to make the notation more symmetrical, let's take the final position x and define x_N to be, by definition, the same thing as that final position. If we do this, the whole thing becomes a product: there's an integral over the intermediate x's, dx1 dx2 ⋯ dx_{N−1}, and then ⟨x_N| U(ε) |x_{N−1}⟩ ⟨x_{N−1}| U(ε) |x_{N−2}⟩ ⋯ ⟨x1| U(ε) |x0⟩. So it becomes a product of short-time matrix elements — of short-time propagators.
This stage is exact. Now we get to use our short-time propagator, which I set apart in a box because it's the main result of that part of the calculation. Let's take the short-time propagator and plug it in for each of these factors. A typical factor is ⟨x_{j+1}| U(ε) |x_j⟩, where the index j ranges between 0 and N − 1; so we replace the x, x′ variables in the short-time expression by x_{j+1} and x_j. If we do this, you can see what happens to these prefactors: the square root gets raised to the Nth power, because there are N of them, and becomes (m/2πiℏε)^{N/2}. Then we've got the integral dx1 ⋯ dx_{N−1} — excuse me — and then a product of exponentials of this form. The product of exponentials, of course, is the exponential of the sum. As far as this exponential goes, it's perhaps easier if I factor out the i/ℏ; I'm going to do that, and put square brackets around the rest, taking the i/ℏ outside — this is all contained in the notes. Then what I need to do is sum the exponent over terms involving x_j and x_{j+1}, where j runs from 0 to N − 1. So what's left in the integrand is the exponential of (i/ℏ) times the sum from j = 0 to N − 1 of [ m(x_{j+1} − x_j)²/2ε − ε V(x_j) ]. That's the sum, and that's the whole thing that appears in the exponent. Except there's one problem, which is that I still have to take into account the order-ε² errors; this is only an approximation.
I'm putting this approximation into each of the factors, of which there are N. And because each factor carries an error of order ε², when I put this into the product you can roughly see that the error gets multiplied by a factor of N. But remember, N is something we're going to take to infinity, so ε is of order 1/N, with t itself fixed. So N times ε² is the same thing as order ε, and the result is that although the error gets bigger, it's still only order ε. For a finite subdivision of this time interval, then, we make an error of order ε. The way to get rid of it is to take the limit N → ∞, in which we divide the time interval into an infinite number of infinitesimally small steps. If we do this, the number of x-space resolutions of the identity that are inserted also goes to infinity, and it becomes an integral over an infinite-dimensional space of x's. In any case, let me write out the result of doing that. The propagator ⟨x| U(t) |x0⟩ is equal to the limit as N → ∞ of (m/2πiℏε)^{N/2} times the integral dx1 ⋯ dx_{N−1} — I barely have room to write this down — of the exponential of i/ℏ times the following. Let me modify the expression slightly by factoring out a factor of ε: I'll take an ε out of the first term, which puts an ε² in its denominator. So the exponent becomes (i/ℏ) ε times the sum from j = 0 to N − 1 of [ (m/2)((x_{j+1} − x_j)/ε)² − V(x_j) ]. That square bracket is what's being summed, and the whole formula goes like that.
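For the free particle (V = 0), the discretized expression can be checked directly (my own sketch; I give the time step a small negative imaginary part — a standard regularization choice of mine, not from the lecture — so that the oscillatory Fresnel integrals become absolutely convergent on a finite grid): composing N short-time kernels numerically should reproduce the exact propagator at time Nε.

```python
import numpy as np

# grid and a slightly complex time step to damp the Fresnel oscillations
L, M = 6.0, 1201
xg = np.linspace(-L, L, M)
dx = xg[1] - xg[0]
eps, N = 0.2 - 0.05j, 5            # N short steps of (regularized) duration eps

def K(xa, xb, tau):
    # free-particle kernel sqrt(1/(2 pi i tau)) e^{i (xa - xb)^2 / (2 tau)}, hbar = m = 1
    return np.sqrt(1 / (2j * np.pi * tau)) * np.exp(1j * (xa - xb)**2 / (2 * tau))

A = K(xg[:, None], xg[None, :], eps)               # short-time kernel as a matrix
Knum = np.linalg.matrix_power(A, N) * dx**(N - 1)  # N-fold composition = discretized path integral

i, j = 650, 550                                    # grid points x = 0.5, x0 = -0.5
exact = K(xg[i], xg[j], N * eps)                   # exact propagator at total time N*eps
print(abs(Knum[i, j] - exact) / abs(exact))        # small relative error
```

Each matrix multiplication is one of the intermediate dx integrals; for V = 0 the factorization is exact at any N, so the only error here is the quadrature on the grid, and the relative discrepancy comes out tiny.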
And allow me to box this, because this is regarded as the discretized version of the path integral in configuration space. I'll tell you in just a moment why it's considered a path integral, but that's what this is considered to be. In some sense this is the first major step in the result: it expresses the propagator in terms of an infinite-dimensional integral. Now, before we visualize it, let me just make one remark. This business I'm doing here with order ε² and order ε would make a mathematician pull their hair out, because it's so unrigorous. The best you can say is that this is an outline of a suggestion that some expression like this should be a representation of the propagator. In fact, it isn't at all easy to make a path integral rigorous, that is, to understand this limit; people worked on it for a number of years. Especially when you have an imaginary exponent, as appears here. The imaginary exponent is characteristic of the Feynman path integral. There are similar types of infinite-dimensional integrals that occur in statistics or statistical mechanics in which the exponent is real; it's a Boltzmann factor with a minus sign, a Boltzmann kind of thing. Those are much easier to make rigorous. In fact, there's a version whose history goes back before the Feynman integral, called the Wiener integral, and I'll say something more about that later on when we talk about the partition function in statistical mechanics. But anyway, for now, let's not worry about it any more. Let's take this as a formal expression; it's certainly a limit that can be calculated, and for a sensible physical problem the limit exists and does give you the propagator. All right.
Now, the next thing to do, I think, is to try to visualize what's going on with this exponent and the nature of this integration. So let me make a space-time diagram, trying to draw it big, in which time goes off to the right and space goes up vertically. This is just one-dimensional now. We have two positions: there's x_0, and there's x, the final position. These are just parameters of the matrix element up here, so they're fixed. But let me extend them as horizontal lines going across this space-time diagram. We also have two times. One is t = 0, the beginning time, so let me draw a vertical line there. The other is the final time t, which we can also call t_N if you want. In fact, let me define t_j = jε, so that t_0 is the beginning time and t_N is the last time. Then there are intermediate times t_1, t_2, t_3, and so on, which are just multiples of ε, running up through t_{N−2}, t_{N−1}. For each of these intermediate times, let me draw a dotted vertical line, parallel to the x-axis, like this. The line at t_N I'll draw solid, because that time is fixed. Now, this is an integral, and all of these x's, x_1 through x_{N−1}, are the intermediate x's. The initial and final x's are fixed; this x is the same thing as x_N. But the intermediate x's are the variables of integration that came in through the insertions of the resolution of the identity. So let's take some particular values of x_1 through x_{N−1}, in order to evaluate the integrand at some point in the range of integration. So let's say maybe x_1 is right there, and x_2 is there, and x_3 is there, and x_{N−1} is here, and x_{N−2} is up there maybe. They don't have to be inside the interval from x_0 to x_N.
They can be outside; they're allowed to be. These x integrals, I didn't say so, but they all go from minus infinity to plus infinity, because they came from a resolution of the identity. Okay. So now let's do this. Take this point here, which is the point x_0 at time t = 0; we regard this as the initial point. Take this point here, which is x_N, that is x, at the final time t = t_N; we regard that as the fixed final point. The initial and final points and times are both fixed; that's the given data of the problem. But the intermediate x's are variables of integration. As I say, let's take specific values of the intermediate variables, and now let's connect them together by straight line segments on these intervals. If you do, for specific values of the intermediate x's, you get a jagged line like this; the segments connect together in between, going up and down in whatever way they do. So for a particular set of intermediate x values, what we have here is a discretized version of a path in configuration space that starts at the given initial x at the given initial time and ends at the given final x at the given final time. But in between, the path is allowed to do anything it wants, because these intermediate x variables are variables of integration running from minus infinity to plus infinity. And so for a fixed value of capital N, we're really integrating over all discretized paths that have fixed initial and final endpoints and times. And in conception at least, when we let N go to infinity, we're integrating over all possible paths connecting those two points. So in some sense the variable of integration here, though it's being represented as a bunch of intermediate x's, is really a path in configuration space with fixed initial and final positions and times.
But in between, the path can do anything. And so this leads to a change in notation, a more abbreviated notation for this path integral. One thing we can do is replace this dx_1 up to dx_{N−1} and just write it as D[x(τ)], like this. I'm using τ now for the variable intermediate time; I don't want to call it t, because t is our fixed final time. So τ is just something that runs between τ = 0 and τ = t. Anyway, the idea is that this represents, in effect, the volume element in path space. That's the measure. What about the exponent up here? What we've got there, as you can see, is a sum. Let's change notation a little bit. Instead of ε, let's write Δt; that's kind of what it is, a small time increment into which this time interval has been split. And let's take x_{j+1} − x_j and call it Δx_j, like that. Now I'm running out of room, so let's do this. The exponent then takes on an interesting form: it turns into the integral of the classical Lagrangian. That sum, which is up there in the upper right corner of the board, turns, with this change of notation, into the sum from j = 0 to N−1 of Δt times [m/2 times (Δx_j/Δt)² minus the potential V evaluated at x_j]. In the limit that defines this thing, this sum is what you'd call a Riemann approximation to a Riemann integral, which would look like this: the integral from 0 to t of dτ (with dτ replacing Δt) times [m/2 times ẋ(τ)² minus V(x(τ))]. And this is the same thing as the integral from 0 to t of dτ of the classical Lagrangian evaluated at x(τ) and ẋ(τ).
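The Riemann-sum form of the exponent is easy to compute directly. Here is a minimal sketch (names and the test path are my own choices, not from the lecture) that evaluates the discretized action S = Σ_j Δt [ (m/2)(Δx_j/Δt)² − V(x_j) ] for a sampled path:

```python
def discretized_action(xs, dt, m=1.0, V=lambda x: 0.0):
    """Riemann-sum approximation to the action
    S = integral_0^t dτ [ (m/2) x_dot(τ)^2 − V(x(τ)) ]
    for a discretized path xs = [x_0, x_1, ..., x_N] with time step dt."""
    S = 0.0
    for j in range(len(xs) - 1):
        dx = xs[j + 1] - xs[j]          # Δx_j
        S += dt * (0.5 * m * (dx / dt) ** 2 - V(xs[j]))
    return S

# Sanity check on a free particle (V = 0) moving on the straight-line
# path from x_0 = 0 to x = 1 in total time t = 1: the continuum action
# is (m/2)(Δx)²/t = 0.5, and the Riemann sum reproduces it here.
N = 1000
dt = 1.0 / N
xs = [j * dt for j in range(N + 1)]     # x(τ) = τ
print(discretized_action(xs, dt))       # → 0.5 up to floating-point rounding
```

For a jagged path with Δx_j of order √Δt (the typical paths discussed below), the kinetic term in each slice is of order 1, so the sum stays finite even though the velocity diverges.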
The Lagrangian is a function of the x's and ẋ's of the system, and for a kinetic-plus-potential Hamiltonian, the Lagrangian is kinetic minus potential, with the change of sign. And that's what you're seeing here: m over 2 times ẋ squared for the kinetic energy, and then minus the potential energy. So this appears, as I say, to be a Riemann sum approximation of this symbol, which is the time integral of the Lagrangian, that is, the integral of the Lagrangian along the path x(τ) in configuration space. And because of these fairly simple associations, there's a more compact way of writing the path integral, like this. It says that ⟨x| U(t) |x_0⟩, our propagator, is equal to a normalization constant that I'll just call capital C; that's really the same thing as that (m / 2πiℏε)^{N/2} factor and all that stuff, but let's just call it C. And then we write the integral, and instead of dx_1 up to dx_{N−1}, we write D[x(τ)], which means the volume element in path space. And then for the exponent, we get e to the i/ℏ times the integral from 0 to t of the Lagrangian evaluated on the path, at x(τ) and ẋ(τ). This is just notation, but it's a more compact notation than the one above, and it's actually quite an interesting one to play around with; you can play interesting games with this expression. Now, however, it is just notation, and there are some things you need to be aware of. Let's go back to this picture of the discretized path. I drew it kind of giant, as you see. Recall, the idea was that we pick some intermediate variables of integration, x_1 through x_{N−1}, put down dots, and then connect them with straight lines. However, each of those intermediate variables of integration — let me bring the board down so I can point at it here.
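In symbols, the compact notation just described reads:

```latex
\langle x \,|\, U(t) \,|\, x_0 \rangle
  \;=\; C \int \mathcal{D}[x(\tau)]\,
    \exp\!\left\{ \frac{i}{\hbar} \int_0^t d\tau\,
      L\big(x(\tau), \dot x(\tau)\big) \right\},
\qquad
L(x, \dot x) \;=\; \frac{m}{2}\,\dot x^2 \;-\; V(x),
```

where C stands for the $(m/2\pi i \hbar \epsilon)^{N/2}$ prefactor absorbed into the definition of the measure, and the integral runs over all paths with $x(0) = x_0$ and $x(t) = x$.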
Each of those intermediate variables of integration actually ranges from minus infinity to plus infinity. So let's suppose we hold all of these x's fixed except for one of them, maybe x_3 here; x_3 lives on the vertical line at t_3. Then the jagged curve looks just as I drew it, except that this one point is allowed to go from minus infinity all the way to plus infinity, which is necessary in doing the integral. But what you see is that you've got a path in which the Δx, the jump from x_2 to x_3, ranges all the way from minus infinity to plus infinity. And it does so in a Δt, which is this ε, which is going to zero. And as ε goes to zero, this Δx does not go to zero; it ranges all the way between minus infinity and plus infinity on these intervals. So what it looks like is that the typical path that occurs in path space is one for which, as Δt goes to zero, Δx goes to infinity. That's what it seems like for most of the paths, anyway. Well, if this were true, then you'd have a path that you'd say is not continuous, because a continuous path is one for which Δx goes to zero as Δt goes to zero. That would mean these paths are really crazy; they're not like ordinary smooth paths in configuration space. But it turns out this is actually not quite the correct conclusion. It is true that the variable Δx goes to infinity in the integral. But as it turns out, the major contributions to the integral occur only over a limited range of Δx, and instead one should think of Δx as being of order the square root of Δt. Not because the range of integration is limited, but because, as it turns out (I'll show you this in the next lecture), the dominant contributions to the integral come only from this more limited range of Δx.
And thus, for the paths that matter here, we do find that as Δt goes to zero, that is, as we take the limit N goes to infinity, Δx also goes to zero. And thus the typical paths that really make a contribution to the path integral actually are continuous. However, it's still kind of strange, because if we compute the velocity, which is Δx over Δt, and take the limit N goes to infinity: since Δx is limited to a range of order the square root of Δt, the velocity goes as one over the square root of Δt, and thus it goes to infinity as N goes to infinity. And so what we have is that the typical paths contributing to the path integral are paths which are continuous but not differentiable. They in fact have infinite velocity everywhere; they jump up and down with infinite velocity, but they do so in such a way that they are in fact continuous. One way of visualizing this is to think of Brownian motion, in which a Brownian particle executes a random walk, making very small steps and, over a long period of time, wandering randomly through space. I hope you know that for a random walk, Δx goes as the square root of Δt; in fact, it's the same scaling law as we're seeing here. And so one way of visualizing these paths is that a typical path looks like the path of a Brownian particle undergoing Brownian motion. Maybe another way of visualizing it is to say that it's like white noise on an oscilloscope. White noise has frequency components all the way up to infinity, and in a sense a white-noise trace is a curve which is continuous but not differentiable. So those are the typical paths that go into this integral. Well, I'd probably better stop here; we'll pick this up next time.
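The Δx ∼ √Δt scaling of a random walk, invoked in the Brownian-motion analogy above, is easy to check numerically. A minimal sketch (the function name and parameters are my own, for illustration only): a walker takes steps of size √Δt, so its root-mean-square displacement after a fixed total time T should be about √T, no matter how finely the time is sliced.

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

def random_walk_rms(total_time, n_steps, n_samples=2000):
    """RMS displacement of random walks of duration total_time,
    split into n_steps steps of size sqrt(dt) each."""
    dt = total_time / n_steps
    step = math.sqrt(dt)
    total_sq = 0.0
    for _ in range(n_samples):
        x = 0.0
        for _ in range(n_steps):
            x += step if random.random() < 0.5 else -step
        total_sq += x * x
    return math.sqrt(total_sq / n_samples)

# Each step contributes variance dt, so after n_steps the variance is
# n_steps * dt = total_time, independent of the slicing: the RMS
# displacement scales as sqrt(T), i.e. Δx ~ sqrt(Δt) at every scale.
for T in (1.0, 4.0):
    print(T, random_walk_rms(T, n_steps=100))  # close to sqrt(T)
```

This is the same scaling that keeps the kinetic term ε·(Δx/ε)² of order 1 per time slice in the path integral, even though Δx/ε itself diverges as ε → 0.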