I want to begin today with the Ehrenfest relations, which are one of the ways of understanding the classical limit of quantum mechanics. To be specific, we'll deal with a Hamiltonian in three dimensions of the common kinetic-plus-potential form, H = p²/2m + V(x). And I'd like to begin by writing down the Heisenberg equations of motion for this Hamiltonian. The equations of motion are that x dot is (−i/ħ) times the commutator of x with the Hamiltonian. Similarly, the time derivative of the momentum operator is (−i/ħ) times the commutator of momentum with the Hamiltonian. These are the Heisenberg equations of motion, and it's understood that these operators — x and p, and the Hamiltonian also — are all in the Heisenberg picture, although I didn't put subscripts for that in the formulas. I'll remind you that in general there's another term involving an explicit time dependence, like partial of x with respect to t, but since neither x nor p has any explicit time dependence, those extra terms don't appear. And that's all there is to the Heisenberg equations of motion. Now, to get the equations of motion explicitly, we need to evaluate the commutators on the right-hand side. Notice that the kinetic energy is a function only of the momentum, whereas the potential energy is a function only of position. So the position operator commutes with the potential but not with the kinetic energy, and conversely, the momentum operator commutes with the kinetic energy but not with the potential. To evaluate these commutators, there are some general commutator results that are useful, which I'll cite for you here. If we take the commutator of one of the components of position, x_i, with some function of momentum, which I'll call f(p), the result is iħ times the derivative of f with respect to p_i. Likewise, we take the commutator of one of the components of momentum, p_i, with another function, which I'll call g, of position.
The answer is minus iħ times the partial of g with respect to x_i. These are results which I won't prove; I'll leave them as an exercise for you. I'll just remark that you can prove them by going back to the definition of a function of an operator, which was presented in the first set of notes in the mathematical section. In any case, using these general commutators, the specific ones we need are easy to work out. They turn into this: for x dot, the −i/ħ cancels against the +iħ, and we end up with x dot equal to the momentum p divided by the mass. And p dot is equal to minus the gradient of the potential, evaluated at the position operator x. So these are the Heisenberg equations of motion for a particle moving in a potential in three dimensions. What's striking about these equations is that they have exactly the same form as Hamilton's equations in classical mechanics, just with a reinterpretation of the symbols. Hamilton's classical equations are that x dot is the derivative of H with respect to momentum, and p dot is minus the derivative of H with respect to position, which for this particular Hamiltonian gives you p/m for x dot and minus the gradient of V, a function of x, for p dot. So they are exactly the same equations. Well, they're not really the same, because the Heisenberg equations are operator equations — these are time-dependent operators in the Heisenberg picture, things that act on an infinite-dimensional ket space — whereas the classical equations involve numbers. The classical ones are what you might call c-number equations: the vectors there are vectors of ordinary numbers, not operators. The Heisenberg equations can be regarded as q-number equations, where I'm using Dirac's terminology distinguishing between q-numbers and c-numbers.
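These commutator identities are easy to check numerically. Here is a minimal sketch (my own illustration, not from the lecture; ħ = 1, truncated harmonic-oscillator ladder matrices) verifying the special case [x, p²] = 2iħp of the rule [x_i, f(p)] = iħ ∂f/∂p_i:

```python
import numpy as np

# Represent x and p by truncated oscillator ladder matrices (hbar = m = omega = 1):
#   x = (a + a†)/sqrt(2),  p = i(a† - a)/sqrt(2).
# Then check [x, p^2] = 2i*hbar*p, i.e. i*hbar * d(p^2)/dp with hbar = 1.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, truncated
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

lhs = x @ p @ p - p @ p @ x                  # the commutator [x, p^2]
rhs = 2j * p                                 # i*hbar * d(p^2)/dp
# Truncation spoils only the bottom-right corner, so compare the safe block.
err = np.max(np.abs((lhs - rhs)[:N-3, :N-3]))
print(err)                                   # tiny (floating-point roundoff)
```

The truncation of the infinite ladder matrices only corrupts entries near the cutoff, which is why the comparison excludes the last few rows and columns.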
But still, it's striking that they have exactly the same form: just a reinterpretation of the symbols takes the classical equations into the Heisenberg quantum equations. All right. Now, we can take these q-number, or Heisenberg, equations and convert them into c-number equations just by taking expectation values with respect to some state. So let's say we have a state, call it psi; I'll put an H subscript on it because it's in the Heisenberg picture. Let's take these Heisenberg equations and sandwich them between psi_H on both sides in order to get expectation values. If we do this, let's start with the x dot equation at the top of the Heisenberg equations. We have psi_H on the left, x dot in the middle — let me write it as d/dt of x to make it more explicit, and I'll put an H on the x to remind us that we're in the Heisenberg picture — and psi_H on the right. So for the x dot equation, this is the expectation value we get. Now, the Heisenberg bras and kets are independent of time, so the d/dt can be taken outside, and this becomes d/dt of the expectation value, in the Heisenberg picture, of just the x operator. However, I'll remind you that expectation values are the same in the Schrödinger and the Heisenberg pictures. So this is the same thing as d/dt of psi_S, x_S, psi_S — entirely in the Schrödinger picture, in which now, of course, the time dependence is shifted over to the kets and the bras, and the x is time-independent. Let's just abbreviate this by writing d/dt of the expectation value of x, without being specific about which picture it is, because it doesn't matter which picture it is. All right, so let's do this then: take the psi_H bra and ket and sandwich them around both sides of both of these Heisenberg equations of motion. Doing this turns them into c-number equations, so let me write those out here.
So the first one becomes d/dt of the expectation value of x equals the expectation value of momentum divided by the mass. The second one becomes d/dt of the expectation value of momentum equals minus the expectation value of the gradient of the potential as a function of position. These are the expectation-value equations we get. So by this procedure, we've converted the q-number equations into c-number equations. Are these new equations the same as the classical equations now? The answer is no, and it has to do with the potential-energy term. The reason is that the expectation value of the gradient of the potential, ⟨∇V(x)⟩, is in general not the same thing as the gradient of the potential evaluated at the expectation value of x, ∇V(⟨x⟩). Those aren't the same thing. If they were the same thing, then with the classical x's and p's replaced by expectation values — which are vectors of c-numbers — we would have exactly the same equations, and then we could say that the expectation values in the quantum problem follow the classical motion. But because these two terms are not equal in general, that's not true. You'll hear people say that the Ehrenfest relations say expectation values follow the classical motion, and that's actually not true in general, for the reason I just pointed out. Nevertheless, it is approximately true in some circumstances, which I'll now explain. To see the circumstances, let me sketch things — excuse me, let me do it this way. What appears here is the gradient of the potential, so it's a vector with three components; classically, of course, that's the force. Let me take a function of x, which I'll call f(x), which could be any of the components of the force. And let's say it's got some shape like this.
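Before turning to the force term, the two expectation-value equations themselves can be checked numerically. Below is a sketch (my own illustration, not from the lecture; ħ = m = 1, potential and packet chosen arbitrarily) that evolves a wave packet in an anharmonic well and verifies the first Ehrenfest relation d⟨x⟩/dt = ⟨p⟩/m, which holds exactly even when ⟨x⟩ does not follow a classical orbit:

```python
import numpy as np

# Split-step FFT evolution in the anharmonic well V = x^4/4 (hbar = m = 1).
# Compare a centered difference of <x>(t) against <p>/m at the midpoint.
N, L, dt = 1024, 40.0, 0.0005
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1] - x[0])
eV = np.exp(-0.5j*dt*(0.25*x**4))            # half step in the potential
eT = np.exp(-1j*dt*(0.5*k**2))               # full step in kinetic energy
step = lambda p: eV * np.fft.ifft(eT * np.fft.fft(eV * p))

def mean_x(p):
    rho = np.abs(p)**2
    return (rho*x).sum() / rho.sum()

psi = np.exp(-(x - 1.0)**2).astype(complex)  # a displaced Gaussian packet
for _ in range(400):                          # evolve for a while first
    psi = step(psi)

prev, mid = psi, step(psi)
nxt = step(mid)
dxdt = (mean_x(nxt) - mean_x(prev)) / (2*dt)  # d<x>/dt by centered difference
pmid = np.fft.ifft(k * np.fft.fft(mid))       # (p psi)(x): momentum via FFT
mean_p = (np.conj(mid) * pmid).sum().real / (np.abs(mid)**2).sum()
print(abs(dxdt - mean_p))                     # ~0: d<x>/dt = <p>/m
```

The agreement is limited only by the time-step error of the integrator, not by any approximation in the Ehrenfest relation itself.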
All right. So this is the function f(x); I'll sketch it in one dimension, although the equations here are three-dimensional. Now let's suppose the wave function psi(x) is a wave packet which is localized, and the way I've drawn it, the spatial extent of the wave packet is small compared to the scale of variation of the function f. The expectation value of the position operator x is just about the center of the wave packet; let's call it x0. So x0 is the same thing as the expectation value of x, which is an integral dx of the square of the wave function |psi|² times x itself. Now, over the extent of the wave packet, f is almost constant, because the packet is narrow. So it's clear from the diagram that the average value of f(x) is approximately equal to f evaluated at the average value of x, under the circumstances that I've sketched here. That's just from the graph. But we can be more quantitative about this. Take the expectation value of f(x): this is the integral over x of |psi|² times f(x). Since we're only interested in a small range of x's around the center of the wave packet, let's expand f(x) about that point x0 and replace it by f(x0) plus (x − x0) times f′(x0) plus second-order terms, which I won't write out. Then, to do the integral, we integrate over this series expansion of the slowly varying function f. The first term, f(x0), is a constant, so when you do the integral you can take f(x0) out of the integral; what's left is just the integral of |psi|², which is 1. So the first term is just f(x0) itself. As far as the second term is concerned, again f′(x0) and x0 are constants and can be taken out of the integral.
As for the x itself, you've got the integral of |psi|² times x — that's the same thing as x0, the expectation value of x. So on doing the integral, x gets replaced by x0; x0 minus x0 is 0, and the entire first-order term vanishes. It goes away. So even taking into account the slope of the function, I still get zero at first order. The first nonzero corrections are at quadratic order, which I won't write out, but there are quadratic corrections there. So to summarize what we see: the average value of f(x) is f evaluated at the average value of x, plus quadratic corrections. This equality is actually a good approximation if the wave function — let's call it a wave packet, because it's localized — has a spatial extent small compared to the scale of variation of the potential. In that case, we can approximately replace that expectation value with this one, and the result is that, approximately, the expectation values in the quantum problem do follow the classical orbits. At least, this is an approximation. Now, a further remark here is that there are some circumstances in which the approximation is exact, and it's easy to see what they are. I sketched the function f here as being a kind of curve, but if the function f were a straight line, that would mean its Taylor series expansion terminates at first order. There wouldn't be any quadratic corrections, and the calculation we went through would be exact. And so for a linear function of x, this relationship is exact. Now, the function f that we're talking about here is interpreted, on the board above, as one of the components of the gradient of V — that's to say, a force. So if the force is a linear function of x, then this becomes an equality here.
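The expansion above can be illustrated directly. Here is a sketch (my own numerical example, not from the lecture) comparing ⟨f(x)⟩ with f(⟨x⟩) and with the quadratic correction f(x0) + ½f″(x0)σ² for a narrow Gaussian packet of width σ:

```python
import numpy as np

# Narrow Gaussian |psi|^2 with mean x0 and width sigma; f chosen smooth.
f = np.cos                               # a smooth "force" profile, for illustration
x0, sigma = 1.3, 0.05                    # sigma << scale of variation of f
x = np.linspace(x0 - 10*sigma, x0 + 10*sigma, 4001)
w = np.exp(-(x - x0)**2 / (2*sigma**2))
w /= w.sum()                             # discretized, normalized |psi|^2

mean_f = (w * f(x)).sum()                # <f(x)>
err0 = abs(mean_f - f(x0))               # f(<x>) alone: the first-order term vanishes
err2 = abs(mean_f - (f(x0) - 0.5*np.cos(x0)*sigma**2))  # add 1/2 f'' sigma^2 (f'' = -cos)
print(err0, err2)                        # err2 is far smaller than err0
```

The zeroth-order estimate is already good to order σ², exactly as the vanishing of the first-order term predicts, and including the quadratic correction improves it to order σ⁴.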
So let me go back to the earlier board and say that this becomes an equals sign under certain conditions: the conditions are that the components of the force are linear functions of position. This is a three-dimensional problem, and if the force is a linear function of position, it means the potential is a quadratic function of position. So this will be true if the potential V(x) — in three dimensions now — has the following form. Let me write down the general quadratic function of the position coordinates x, y, and z: a sum over i and j of some matrix of coefficients, a_ij x_i x_j — that's the quadratic term — plus a linear term, a sum over i of some coefficients d_i x_i, plus a constant term c. So if the potential has this form — a general quadratic polynomial in the coordinates x, y, and z — then the Ehrenfest relations are exact: the expectation values of the quantum wave function exactly follow the classical motion. And by the way, I almost had a verbal slip there, because it doesn't even have to be a wave packet. The Taylor expansion terminates at linear order regardless, so this condition holds for any wave function, not just a localized packet. And so for these types of potentials, expectation values always follow the classical motion. Now, this includes several problems of interest. In the first place, if the potential is zero, you've got a free particle, and for a free particle this is true: expectation values follow the classical motion. If we have just the linear term, that includes the case of a particle in a uniform gravitational field, or a charged particle in a uniform electric field. And if we have the quadratic term, that includes harmonic oscillators of all kinds — because we've got three dimensions and the matrix can be anything here, not just simple harmonic oscillators.
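The exactness for quadratic potentials can be seen numerically. Below is a sketch (my own check, not from the lecture; ħ = m = ω = 1, initial packet chosen arbitrarily) that evolves a displaced Gaussian in the harmonic potential V = x²/2 and compares ⟨x⟩(t) with the classical trajectory x0 cos t:

```python
import numpy as np

# Split-step FFT evolution in V = x^2/2 with hbar = m = omega = 1.
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1] - x[0])
x0, dt, steps = 2.0, 0.001, 1000               # evolve to t = 1

psi = np.exp(-(x - x0)**2 / 2).astype(complex)  # ground state displaced to x0
expV = np.exp(-0.5j*dt*(0.5*x**2))              # half step in the potential
expT = np.exp(-1j*dt*(0.5*k**2))                # full step in kinetic energy
for _ in range(steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

rho = np.abs(psi)**2
mean_x = (rho*x).sum() / rho.sum()              # <x> at t = 1
err = abs(mean_x - x0*np.cos(dt*steps))
print(err)                                      # small: <x> tracks x0*cos(t)
```

For this quadratic potential the packet's center follows the classical orbit to within the integrator's accuracy, with no wave-packet-narrowness assumption needed.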
I'll mention one more thing, which I won't prove: it can be shown that the Ehrenfest relations — in the sense that the expectation values follow classical motion — hold in the more general case where the entire Hamiltonian is an arbitrary quadratic function of positions and momenta, momenta included, where even cross terms like p times x are allowed. That includes the important case of a particle in a uniform magnetic field. Although I won't prove it, it's true that there, too, the Ehrenfest relations are exact. So what are the Ehrenfest relations? Maybe I'm using the term in a sloppy way, because I can refer to these expectation-value equations as the Ehrenfest relations, and those are exact statements of quantum mechanics. They do not, however, as I just explained, say that the expectation values follow the classical motion — unless the Hamiltonian has the quadratic form, which it does in these cases. All right. Okay, so that's all I wanted to say about the Ehrenfest theorem, or Ehrenfest relations. These are, as I say, one of the ways of understanding the connection between classical mechanics and the quantum evolution of expectation values. Next, I'd like to say some things about particles in magnetic fields. Let me begin with the classical case of a particle in a magnetic field. For a particle of charge q in a magnetic field, the classical equation of motion is the Lorentz force law: force equals mass times acceleration, which is the charge q multiplied by the electric field plus the velocity over c crossed into the magnetic field. The electric and magnetic fields, of course, are in general functions of space and time. So that's the classical equation of motion. These equations can be derived from the classical Hamiltonian. The classical Hamiltonian is this.
It's 1/2m times the quantity momentum p minus q/c times the vector potential A — which in general is a function of position and time — that whole thing squared, plus the charge q times the scalar potential, which I'll call capital Phi, and which in general is also a function of position and time. Anyway, this is the classical Hamiltonian for the motion of a charged particle in given — we might say external — electric and magnetic fields. The electric and magnetic fields are expressed in terms of the potentials by E = −∇Φ − (1/c) ∂A/∂t — a standard result of electromagnetic theory — and B equal to the curl of A. One of the strange features of this classical Hamiltonian is that it's expressed in terms of the potentials. Maybe that doesn't surprise you for the scalar potential, because you're used to potential energies, but now that you've got a magnetic field, you need the vector potential also. This is in contrast to Newton's equations of motion, which involve the electric and magnetic fields directly, not the potentials, the force being given by the Lorentz law. In classical mechanics you have to use the potentials if you want to write down a Hamiltonian. Here's another thing to pay attention to. Let's take Hamilton's equations of motion — this whole board is classical up to this point. x dot, according to Hamilton's equations, is ∂H/∂p, and if you do the derivative here, what we get is 1/m times (p − (q/c)A), where A is the vector potential; and x dot, of course, is the velocity.
So if we take this equation and solve for the momentum as a function of the velocity, what we get is that p is equal to the mass times the velocity plus q/c times the vector potential, which in general depends on space and time. And so you can see right away that the momentum which appears in the classical Hamiltonian is not the kinetic momentum, which is just the first term, mv; it's rather the canonical momentum — the whole thing. So pay attention to the difference between the kinetic momentum and the canonical momentum when you're dealing with problems where there's a vector potential. If there's no magnetic field around — if B is equal to zero — most people would say, oh, let's take A equal to zero also; then we can forget about this whole vector-potential business. As we'll see in a couple of lectures, however, there's an interesting physical situation, the Aharonov–Bohm effect, in which we have the magnetic field equal to zero but the vector potential not equal to zero. So it's actually necessary to deal with that case. In any case, the momentum here is the canonical momentum. By the way, once you recognize that p is the canonical momentum and the velocity is given by this expression, there's another immediate conclusion, which is that this entire complicated first term of the Hamiltonian, written in terms of the velocity instead of the momentum, is just (1/2)mv². The point is that this entire expression is a complicated way of writing the kinetic energy. The kinetic energy is not p²/2m when there's a vector potential around; it's this whole expression. That's its physical meaning.
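The distinction between kinetic and canonical momentum can be verified symbolically. Here is a one-dimensional sketch (my own illustration; symbol names are mine) applying Hamilton's equation x dot = ∂H/∂p to the Hamiltonian above:

```python
import sympy as sp

# H = (p - qA/c)^2/(2m) + q*Phi; Hamilton's equation gives the velocity,
# and inverting it shows p = m*v + qA/c (canonical, not kinetic, momentum).
p, m, q, c, A, Phi, v = sp.symbols('p m q c A Phi v')
H = (p - q*A/c)**2/(2*m) + q*Phi
xdot = sp.diff(H, p)                        # Hamilton's equation for xdot
p_of_v = sp.solve(sp.Eq(v, xdot), p)[0]     # invert: momentum in terms of velocity
print(sp.simplify(xdot - (p - q*A/c)/m))    # 0: v = (p - qA/c)/m
print(sp.simplify(p_of_v - (m*v + q*A/c)))  # 0: p = m*v + (q/c)*A
```

Both differences reduce to zero, confirming that the canonical momentum picks up the vector-potential term.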
All right, now there's another point that I need to make, which is that the potentials A and Φ are not unique. If we pick a function of position and time — think of it as an arbitrary function, which I'll call the gauge scalar, f — then using the gauge scalar it's possible to convert old potentials into new ones. The rules are these: A′, the new vector potential, is equal to A plus the gradient of f, and Φ′, the new scalar potential, is equal to Φ, the old scalar potential, minus (1/c) ∂f/∂t. These are the expressions for a gauge transformation in electromagnetic theory. If you make these changes — from the unprimed to the primed, the old to the new potentials — what you find is that the electric and magnetic fields are invariant: the primed and unprimed versions of them are the same. Well, the electric and magnetic fields are directly measurable; in that sense they're physical fields, and you can measure them by looking at the force on some charged particle. So what's physical does not change under a gauge transformation, but the potentials do change, and so the potentials have what we might call a non-physical element. In fact, I think it's useful to think about the potentials this way: they contain both a physical element and a non-physical one. They contain the physical element because by taking derivatives of them you get the measurable fields, which are E and B; but they contain a non-physical element, and that's what changes when you do a gauge transformation. So the potentials are a redundant mathematical description of the same physical reality, which is the E and B fields. And as I said, it's an interesting fact that the Hamiltonian requires the use of the potentials. Another way of saying this is that there's no such thing as a Φ-meter or an A-meter — there's no way to measure Φ or A with some kind of meter. Well, you might say: oh, I've got a voltmeter, I can certainly
measure the potential with a voltmeter, can't I? But a voltmeter has two connections — there's no voltmeter with just one wire — so you don't actually measure Φ at one place; you measure the difference in Φ between two different places. And by the way, that only works for static, or at least slowly varying, fields; it's really only valid in the static approximation. So I come back to the same statement: there really is no such thing as a Φ-meter or an A-meter. All right, so that's some review of classical electromagnetism and the motion of charged particles. Now, what we're going to do in quantum mechanics is take this classical Hamiltonian and, just as we did with the kinetic-plus-potential Hamiltonian, provisionally adopt it as a quantum Hamiltonian by reinterpretation of the symbols. So without changing the formulas, just by reinterpreting the symbols, this now becomes a quantum Hamiltonian. When we do this, of course, we have to ask whether it's physically correct. I explained last time that we're talking here about the process of quantization — going from classical mechanics to quantum mechanics. There's new physics in quantum mechanics, and so whether the answer is the right one for quantum mechanics is something you have to determine by experiment. The fact is that if you carry this Hamiltonian over into quantum mechanics, in some cases it's actually not so good. For example, for an electron in a magnetic field, you'll find that the energy levels are not even qualitatively correct if you use this Hamiltonian, and the reason is spin, which is of course a non-classical effect. For a spinless particle it works better. So let's just say that adopting this Hamiltonian into quantum mechanics is a provisional step; sometimes it's all right, and later on we'll consider the physical effects that really need to be taken into account in various situations. All right, so in quantum mechanics, then,
this is our Hamiltonian, and in the configuration representation, as a differential equation, the Schrödinger equation H psi = iħ ∂psi/∂t takes a more explicit form: it becomes 1/2m times the quantity (−iħ∇ − (q/c) A(x,t)) — that whole thing squared — plus q times the scalar potential Φ(x,t), all acting on psi(x,t), set equal to iħ ∂/∂t of psi(x,t). This is the Schrödinger equation we get by adopting this Hamiltonian for a charged particle moving in three dimensions in given — let's say external — electric and magnetic fields. One of the questions that arises right away is: what's the role of the choice of gauge in the quantum problem? Again, the Hamiltonian is expressed in terms of the potentials, not in terms of the fields, and the potentials are not unique. So what happens if you change the potentials from A and Φ to A′ and Φ′? Is the Schrödinger equation invariant — is it the same Schrödinger equation in the new gauge? The answer is that it isn't exactly invariant, but it is covariant, which means it has the same form in the new gauge as it had in the old gauge. But in order to achieve this covariance — in other words, to preserve the form of the equation — it's necessary not only to change the potentials in terms of the gauge scalar; it's also necessary to change the wave function. In fact, here's how the wave function transforms under a gauge transformation: the old wave function psi(x,t) is equal to the new wave function, call it psi′(x,t), times e to the i times the charge q times the gauge scalar f(x,t), divided by ħc. This is the gauge transformation of the wave function, which needs to be coupled with those transformations of the potentials. Now, you just did a homework problem in which one of the questions was, what is the meaning
of the phase of the wave function. Look at what the gauge transformation does: it changes the phase of the wave function in a spatially dependent manner — in other words, it changes the relative phase of the wave function between two different points in space. I hope you know by now, from that homework problem, that you can only measure phase differences of the wave function between two different points in space; the overall phase of the wave function is non-physical. What you now see is that the relative phase of the wave function between two different spatial points actually depends on the gauge — on the gauge convention for the vector potential that you choose to represent the magnetic field. This is true for a charged particle, where q is nonzero. Well, this seems crazy, but in fact it's true. In that homework problem, there was no magnetic field — in fact, it was really free particles — and so the gauge was never mentioned in the problem; the implied gauge was A equal to zero. And the measurement process, if you work out the right way to do it, is a double-hole experiment, looking at interference. That double-hole experiment is going to give you the phase of the wave function under this gauge convention, for a vanishing magnetic field. Anyway, I think I'm going to skip the algebra, because it's entirely straightforward. I'll just say that if you take this expression for psi, substitute it in here and everywhere else you see psi in this equation, on both sides, and grind through the derivatives — there's a time derivative and a spatial derivative — what you'll find is that there's a common phase factor that can be removed from the Schrödinger equation, and when you're done you get a new equation in which the vector and scalar potentials are primed, and so is the wave function. Everything goes into the new equation like this. So this is what I
mean by covariance: the Schrödinger equation maintains the same form. The algebra is contained in the notes, and it's completely straightforward. A student asks: how is this possible? Any gauge choice should be equally valid, so how can the phase of the wave function depend on a gauge choice that I made just to make my math easier? That's a good question. Think about how you measure the phase of the wave function. In that homework problem, you measured it by interference: we had two holes in a screen, and we wanted to know the relative phase between two points on that screen. The way to do it is to punch holes in the screen at those two points, so you've got waves radiating out, and then you look at the interference pattern down on the lower screen. By measuring the shifts of the interference pattern, you can get the difference in the phases. But in order to do this, you need to count the phase difference — the number of wavelengths along this segment and along that segment, subtracting, until you get the phase difference between the two points. This is standard interference between two points. But, as I say, you have to know how the waves propagate in this region. In that homework problem, the implied gauge convention was A equal to zero, because we didn't have any magnetic fields — so why not take A equal to zero? If you do this, the waves are ordinary free-particle waves, and you get the elapsed phase just by counting the distance and dividing by the wavelength. Sure — but you might think the result should always come out the same. It's not: you'll find the phase is different if I put in a
non-zero vector potential. If I put in a non-zero vector potential A — not equal to zero, but still with the magnetic field equal to zero, which implies that A is the gradient of some gauge scalar — then in that case you'll find you have to take into account the elapsed variation of the gauge scalar between these two points. Another student asks: how would you perform an experiment to tell which gauge is right? You can't. The experiment is not just a measurement of something; it also involves a theory about how the wave propagates from the holes down to the lower screen, and you can't interpret it without having a theory of propagation. This is one of the ways in which measuring wave functions in quantum mechanics is completely different from measuring a classical field, where all you do is just measure it. If you want to find the phase of an electromagnetic wave in classical E&M, you don't need to solve Maxwell's equations; you just go measure the electric and magnetic fields. This is different, and that is one of the points of that homework problem: to emphasize that wave functions are not what people often think they are — mostly they don't think about it very much, but the quantum wave function is not just some complex field out in space that you go measure with a psi-meter or something. It's not like that at all. That's all for particles in electric and magnetic fields for now; we'll come back to them and deal with them in more detail a few lectures down the line. But right now I want to address a few other issues. One of them is some simple matters that arise when you solve the Schrödinger equation in one-dimensional problems. Let's take simple 1D problems; I want to make just a few comments about the Schrödinger equation. In fact, this is going to be the time-independent Schrödinger equation: kinetic energy plus potential energy, acting on psi,
equals the total energy E times psi — the energy eigenvalue problem for the eigenfunctions in one dimension. The first question I'd like to address is the question of degeneracies: when do we have degeneracies in one-dimensional problems like this? To begin with, let's suppose we have a solution psi of the equation I just wrote down, and let's suppose we have another solution, which I'll call phi, of the same equation with the same energy. If these two solutions are linearly independent, then we have a degeneracy. To analyze when this is possible, let's take the first equation multiplied by phi and the second equation multiplied by psi, and then subtract the two. If we do this, we get phi times psi and psi times phi in both the potential-energy and the total-energy terms, so those terms drop out, and all that's left are the second-derivative terms, with a common factor of −ħ²/2m. Doing this subtraction, what you get is that phi times psi double prime minus psi times phi double prime is equal to zero. This, however, is also equal to the derivative with respect to x of a quantity called W. W is given a name: it's called the Wronskian of the equation, and it's equal to phi times psi prime minus psi times phi prime. If you take the derivative of this, the cross terms with single derivatives cancel, and you're left with exactly the second-derivative combination above; it's an exact derivative. The Wronskian can also, by the way, be written as a determinant, with phi and psi in the first row and phi prime and psi prime in the second row. Anyway, the point of this is that the Wronskian has a derivative equal to zero, so the Wronskian, as a function of x, is constant. This is always true for any two solutions of the one-dimensional Schrödinger equation with the same energy. Now let's suppose there exists some x0 such that psi(x0) and phi(x0) are both equal to zero — in other words, let's suppose that the solutions vanish at the same
point x zero. Where this happens is typically due to boundary conditions. If you have, for example, a hard wall, then the wave functions psi and phi both have to go to zero at the same point x zero, which is the wall. Another place where this might happen is if you have a potential energy V of x which is a well, with some energy level inside it; the wave function rises and oscillates inside the well, and outside the classical region it decays exponentially. So this is a place where the wave function has to go to zero: both psi and phi go to zero as x goes to plus or minus infinity. The point x zero could be at infinity; it doesn't matter, as long as the condition holds at the same point x zero. This implies that the Wronskian is equal to zero, because when psi and phi both vanish at one point, the Wronskian vanishes at that point, and since it's constant, it vanishes everywhere. So this implies that W is equal to zero. But since W is the quantity here, that implies that phi psi prime is equal to psi phi prime, which implies that psi prime over psi is equal to phi prime over phi. That implies that the logarithm of psi is equal to the logarithm of phi plus a constant, and that implies, taking exponentials, that psi is equal to c times phi. So what you see is that under this condition, which I'll now box, there is no degeneracy, because any two solutions are necessarily proportional to one another. In particular, if you've got a hard-wall problem, or a problem where the wave function is bound so that it dies off exponentially outside the well, then the solutions are necessarily non-degenerate in one dimension. In fact, the wave function doesn't have to die off to zero at both infinities; dying off at one infinity is enough to make it non-degenerate. To give you another example, suppose I've got a potential energy that looks like this. Here's x, and here's the potential V
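As a numerical illustration of the constancy of the Wronskian (a sketch, not from the lecture, with hbar = m = 1 and an illustrative harmonic well; the function names and the energy value are my choices): integrate the time-independent Schrödinger equation twice at the same energy with different initial data, and check that W = phi psi' − psi phi' comes out constant in x.

```python
import numpy as np

# Sketch (hbar = m = 1, illustrative harmonic well): integrate
# psi'' = 2*(V - E)*psi for two independent initial conditions at the
# same energy E, then check that W = phi*psi' - psi*phi' is constant.
def integrate(E, V, x, y0, dy0):
    """RK4 march of the 1D time-independent Schrodinger equation."""
    f = lambda xi, yi, dyi: (dyi, 2.0 * (V(xi) - E) * yi)
    h = x[1] - x[0]
    y, dy = np.empty_like(x), np.empty_like(x)
    y[0], dy[0] = y0, dy0
    for i in range(len(x) - 1):
        k1 = f(x[i], y[i], dy[i])
        k2 = f(x[i] + h/2, y[i] + h/2 * k1[0], dy[i] + h/2 * k1[1])
        k3 = f(x[i] + h/2, y[i] + h/2 * k2[0], dy[i] + h/2 * k2[1])
        k4 = f(x[i] + h, y[i] + h * k3[0], dy[i] + h * k3[1])
        y[i + 1] = y[i] + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dy[i + 1] = dy[i] + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y, dy

V = lambda xx: 0.5 * xx**2                  # illustrative potential
x = np.linspace(-1.0, 1.0, 2001)
psi, dpsi = integrate(0.7, V, x, 1.0, 0.0)  # one set of initial data
phi, dphi = integrate(0.7, V, x, 0.0, 1.0)  # a second, independent set
W = phi * dpsi - psi * dphi
print(abs(W - W[0]).max())                  # ~0: the Wronskian is constant
```

Since these two solutions have a nonvanishing Wronskian, they are linearly independent; the theorem above says they therefore cannot both satisfy zero boundary conditions at a common point.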
of x. Let's suppose the potential is mostly zero for negative x and then rises up like this, so it's a mountain, and the particle is being scattered against it, coming in with an energy like this. In this case the wave oscillates, comes up towards the turning point, and then dies off exponentially in this direction. Here the wave function goes to zero as x goes to plus infinity. That's still enough: it still means there's no degeneracy. So the result is true for bound states, but it may also be true for unbound states, as long as the wave function dies off at one infinity or there's some hard wall. On the other hand, suppose the wave function has no boundary condition forcing it to go to zero. Is there a degeneracy? The answer is yes. Let's just take the case of a free particle, which oscillates forever. For a free particle there are two solutions, e to the ikx and e to the minus ikx. These have the same energy, but they are linearly independent, so you do have a degeneracy. So whether the theorem applies or not depends on the boundary conditions. Unbound wave functions that share an energy, such as in the case of the free particle, don't die off, and they're not normalizable. But the scattering wave function here is also an unbound wave function, and it's non-degenerate, because it dies off at one infinity. All right, here's another little theorem: a non-degenerate wave function psi in one dimension, which is most of what we're talking about up here, can be chosen to be real. You can always, of course, choose the overall phase of the wave function. What the theorem means is that by choosing the overall phase, if some solution psi came out to be complex, you could eliminate the complex numbers just by multiplying by an overall phase factor. This is related to the earlier result here. What one needs to show is that both psi and psi star, if there's no degeneracy, satisfy
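Connecting the free-particle example back to the Wronskian: for the degenerate pair e^{ikx} and e^{-ikx}, the Wronskian is a nonzero constant, consistent with the theorem, since it never vanishes and the two solutions are linearly independent. A minimal numerical check (a sketch; the value of k is an arbitrary choice of mine):

```python
import numpy as np

# The free-particle solutions e^{ikx} and e^{-ikx}: their Wronskian is
# the nonzero constant -2ik, so they are linearly independent, and the
# non-degeneracy theorem does not apply (k = 1.3 is an arbitrary choice).
k = 1.3
x = np.linspace(0.0, 10.0, 500)
phi, psi = np.exp(1j * k * x), np.exp(-1j * k * x)
dphi, dpsi = 1j * k * phi, -1j * k * psi
W = phi * dpsi - psi * dphi
print(W[0])   # -2ik: constant and nonzero everywhere
```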
the same Schrödinger equation, because the Schrödinger equation is real. If they're non-degenerate, then they're proportional to one another, and one can show, by factoring out a phase factor, how to make them real. I'm not going to go through the details, because, number one, they're in the notes, in the discussion of the reality of wave functions when systems are invariant under time reversal, and this is something we'll delve into in more generality later in the semester. Right now I just want to mention the result. You see, by the way, that in the degenerate case here the wave functions are not real, they're in fact complex, although you can make linear combinations to get real wave functions if you want. The theorem does apply, however, when you've got non-degenerate wave functions: those can be chosen to be real. So I'll state it this way: for wave functions in one-dimensional wells, you can always make psi real. Those are the main things I wanted to say about these topics in one-dimensional wave mechanics. Next, in the time that's left this hour, I'd like to make a beginning on WKB theory, which will occupy us for some lectures to come. So let's talk about WKB theory. WKB theory, for our purposes in quantum mechanics, really has two roles. One of them is to supply a conceptual framework for understanding the classical limit of quantum mechanics, like the Ehrenfest relations which I described earlier in the hour, another way of understanding the classical limit in terms of the evolution of wave packets following classical orbits in some approximation. WKB theory is actually deeper; it's a more sophisticated theory, and it gives us more insight into the relation between classical and quantum mechanics. There's a second purpose of WKB theory, which is that it allows you to approximate the solutions of a quantum problem in terms of the classical solutions. This is what we'll see now. These are what you might call the practical applications of WKB theory. These practical applications are usually
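The phase-choice statement can be illustrated with a small sketch (the harmonic-oscillator ground-state profile and the phase value 0.7 are illustrative choices of mine, with hbar = m = omega = 1): multiplying a non-degenerate bound state by any overall phase gives an equally good solution, and dividing that phase back out, measured at any point where the wave function is nonzero, leaves a real function.

```python
import numpy as np

# Sketch: a non-degenerate bound state can be made real by an overall
# phase. Start from a real wave function (harmonic-oscillator ground
# state profile, hbar = m = omega = 1), hide it behind an arbitrary
# global phase, then remove the phase measured where psi != 0.
x = np.linspace(-5.0, 5.0, 1001)
psi_real = np.exp(-x**2 / 2)            # real ground-state profile
psi = np.exp(1j * 0.7) * psi_real       # same state, complex overall phase
mid = len(x) // 2
phase = psi[mid] / abs(psi[mid])        # unit-modulus phase factor
psi_fixed = psi / phase                 # overall phase divided out
print(np.abs(psi_fixed.imag).max())     # ~0: the wave function is real
```

This only works because the state is non-degenerate: for the free-particle pair above, e^{ikx} is genuinely complex, and no single overall phase makes it real.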
restricted to one-dimensional problems. Multi-dimensional problems are more sophisticated, so we won't go into them in this course. I'm going to base the lectures primarily on the one-dimensional case, with just a few mentions of the three-dimensional case; the notes have a little more on the three-dimensional case. First of all, let me describe for you, in pictorial, qualitative terms, when the WKB approximation is valid. To begin with, let's suppose we have a potential energy which is a function of x, and by the way, it will be understood that we're solving the one-dimensional time-independent Schrödinger equation. Let's suppose the potential energy roughly looks something like this. Then, under many circumstances, the wave function psi, sketched on the same x-axis, looks something like this: there's first of all an envelope, which I'll draw as a kind of dotted line (I can't draw this very well), and inside the envelope there are oscillations that go back and forth and fill in the envelope. There are many circumstances in one-dimensional quantum mechanics when you get solutions that look like this. I'm thinking here of the time-independent case, but the same picture arises in the time-dependent case as well. Now let's say that the potential energy has some particular scale length, a length over which the potential varies significantly; let's call L the scale length of the potential. The wave function has these oscillations, which have their own wavelength, which I'll call lambda, and that wavelength is small compared to the scale length L, the length over which the potential varies. In the circumstance I'm sketching here, the wave function has the form of a modulated wave, kind of like an FM radio signal, and the envelope of the wave has the same
scale length as the potential itself, just the way I've sketched it, while the wavelength of the individual oscillations is on a much shorter scale than I've drawn; that's lambda. Now, by the de Broglie relation, lambda is the same thing as 2 pi h bar over an associated momentum p. The de Broglie relation is usually thought of in terms of plane waves, or free-particle solutions, which is not what we're talking about here. Here the wavelength actually changes as you go from one part of the envelope to another; in fact the wavelength lambda varies on the same scale as L, that is, it changes slowly on the scale of L. What this means is that the momentum in the de Broglie relation is actually a function of x. So this implies that there is some momentum function, p of x, which is associated with a wave function of this type. One of the things we want to do is to understand the meaning of this momentum as a function of position; we'll see that come out in just a moment. All right, the next thing we'd like to do is to write down what we call the WKB ansatz, which is a representation of a wave function that looks like this. We write psi of x in terms of an amplitude, which I'll call A of x, which we understand to be slowly varying (slowly varying means on the scale of capital L), multiplied by a rapidly varying phase, which I'll write as the exponential of i times a function S of x over h bar. Now let me explain why I put this phase factor, a rapidly varying phase, there in the first place. The WKB approximation is going to be valid when the de Broglie wavelength is much less than the scale length of the potential. That is the condition of validity of the WKB approximation. This is of course the same thing as saying 2 pi h bar divided by the momentum p is much less than L, or equivalently, that 2 pi h bar is much less than p times L. Now sometimes
the way people state this is to say the WKB approximation is valid if h bar is quote-unquote small. I put this in quotes because h bar is not a dimensionless number; its numerical value depends on the units you use. If you use typical macroscopic units like grams, centimeters, and seconds, then h bar does have a small numerical value, but you can invent other units in which it's got a large value, so there is no absolute meaning to small. However, there is a meaningful way to compare things to h bar: h bar has the dimensions of action, such as a momentum times a length. If you think of the momentum and the length as classical quantities (this length is a classical quantity, because it's the scale length of the potential), then the condition of validity of WKB theory is that the actions that occur in the classical problem are much larger than h bar. And in fact this is normally what you'd expect in the classical limit: for ordinary classical systems, typically described in terms of grams and centimeters and so on, the small value of h bar means that classical actions are large compared to h bar. So the small h bar limit is equivalent to a classical limit. Now let me proceed a little further and explain why this phase factor represents the rapidly varying phase that we see in the picture over here. Let's consider the change in the phase over a single wavelength lambda. That change has to be 2 pi, because the phase advances by 2 pi over a single wavelength. So let's say this holds when delta x is equal to lambda. If we take S of x and expand it, keeping just the first derivative, the left-hand side is approximately S prime of x times lambda divided by h bar, and that's equal to 2 pi. The result is that S prime of x is equal to 2 pi h bar over lambda, and you'll see that's exactly the same thing as the momentum function p of x which emerged from using the de Broglie relation on this wave. So how
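The phase-advance argument can be checked numerically (a sketch with hbar = m = 1; the quadratic potential and the energy are illustrative choices of mine): build S(x) as the integral of p(x) = sqrt(2(E − V)) and verify that S advances by very nearly 2 pi over one local de Broglie wavelength lambda = 2 pi / p.

```python
import numpy as np

# Sketch (hbar = m = 1, illustrative well and energy): define the phase
# S(x) = integral of p(x') dx' with p = sqrt(2*(E - V)), then check that
# S advances by ~2*pi over one local de Broglie wavelength 2*pi/p.
E = 50.0
V = lambda xx: 0.5 * xx**2          # classically allowed for |x| < 10
x = np.linspace(-2.0, 2.0, 400001)
p = np.sqrt(2.0 * (E - V(x)))       # the momentum function p(x)
dS = 0.5 * (p[1:] + p[:-1]) * np.diff(x)
S = np.concatenate(([0.0], np.cumsum(dS)))   # S(x) by trapezoid rule

i0 = np.searchsorted(x, 0.0)
lam = 2.0 * np.pi / p[i0]           # local wavelength at x = 0
i1 = np.searchsorted(x, x[i0] + lam)
print((S[i1] - S[i0]) / (2.0 * np.pi))   # ~1: one full cycle of phase
```

The ratio is close to but not exactly 1, because p(x) varies slightly across the wavelength; that small deviation is exactly the slow variation on the scale L that the first-derivative expansion in the argument above neglects.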
does this connect? The interpretation of the function S of x is that its derivative is the momentum function, which we still need to interpret; but this connects the momentum function to the phase of the wave function. OK, that's all for today; we'll continue with this next time.