Well, the hardy are here. I hope you survive. I hope to have the solutions posted soon, and I'll bring them as soon as I can. All right. So, let's continue. There is a problem set out, and you'd best try to get started on it a little bit, or at least look at it, so that we can have a productive problem session tomorrow. And as you remember, there is the room scheduling conflict. So I will introduce the problems and discuss them a little bit for 10 to 15 minutes, and then a lot of you can hang out and I'll come around and talk to you. Hope to see you there.

All right. So, last lecture we discussed that there are two different so-called pictures that one typically uses in different contexts to study dynamics in quantum mechanics. One is the Schrödinger picture. In the Schrödinger picture, what we do is dynamically evolve the state. The state of the system evolves as a function of time according to the Schrödinger equation; for a pure state it's written like this, or more generally, for a mixed state, like that. The observables, the Hermitian operators that we use to calculate physical quantities, are constant operators, unless they are functions of some classical parameters that are themselves explicit functions of time. But that's kind of an aside.

In the Heisenberg picture we flip things: we say the state is fixed and the observables evolve in time, and they evolve according to the Heisenberg equations of motion. The Heisenberg picture is useful in two important contexts. One: if we're only interested in the evolution of the expectation values of a few observables, it's typically easier to use the Heisenberg picture, because you don't have to know the whole state; you just have to know how that particular expectation value evolves as a function of time. Two: if I'm thinking about things like multi-time correlation functions, which are also expectation values, then one typically works in the Heisenberg picture. So in quantum field theory and in condensed matter physics, it's almost exclusively the Heisenberg picture that's used, because, frankly, it's very hard to measure individual quantum systems in those contexts. It's very hard to look at the probability of some particular event happening; you're typically looking at average values anyway.

The connection between the pictures is through the unitary time evolution; that's general. Whether I'm talking about the Heisenberg or the Schrödinger picture, there's a time evolution operator U(t). And we fix conventions so that whatever the Heisenberg operator is at time t = 0 is the constant Schrödinger-picture operator, and whatever the Schrödinger-picture state is at time t = 0 is the fixed Heisenberg state. So the pictures are connected at t = 0, and then they evolve in time: the state evolves as |ψ(t)⟩ = U(t)|ψ(0)⟩, and in the Heisenberg picture the operator evolves by the unitary transformation A_H(t) = U†(t) A U(t). By this connection, any expectation value is the same whether I calculate it in the Schrödinger picture or the Heisenberg picture: plug in for the state one way or for the operator the other way, and they'll agree. That's why the pictures are physically equivalent. It's just whether you shove the time dependence onto the operators or onto the states.
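To spell out that last statement as a formula (my rendering of the board's notation, with U(t) the time evolution operator):

\[
\langle A \rangle(t) = \langle \psi(t)|\,A\,|\psi(t)\rangle
= \langle \psi(0)|\,U^\dagger(t)\,A\,U(t)\,|\psi(0)\rangle
= \langle \psi_H|\,A_H(t)\,|\psi_H\rangle ,
\]

where the first expression is the Schrödinger picture with |ψ(t)⟩ = U(t)|ψ(0)⟩, and the last is the Heisenberg picture with A_H(t) = U†(t) A U(t) and |ψ_H⟩ = |ψ(0)⟩.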
They have conceptual differences, but at the end of the day it's matrix elements that carry the physical predictions, and so it doesn't matter which picture we use. All right? And one of the things about the Heisenberg picture is that it is much more similar to classical dynamics. Although there is a Schrödinger-like picture of classical dynamics, we almost never use it; it's the Liouville equation, which is like a Schrödinger picture for classical mechanics. We can talk about that at some point if you're interested. But typically we think of classical mechanics this way: there are positions and momenta, which are the analogs of the observables, and they evolve according to Hamilton's equations of motion. (And I got this backwards last lecture; I had the ordering in the bracket flipped.) So this is the Poisson bracket, and Hamilton's equations of motion say that if I want the time evolution of some observable A, I take its Poisson bracket with the Hamiltonian: dA/dt = {A, H}, with the Poisson bracket defined here. And it looks just like a Heisenberg equation of motion, except there's no iħ in it. Okay? In particular, if I look at how position and momentum evolve as functions of time for a particle moving in one spatial dimension in the presence of a potential, I get the usual equations: dx/dt = p/m, so momentum is related to velocity, and dp/dt = −∂V/∂x, so the change of momentum is given by the force.

So what we want to discuss now is how to treat this particular problem, the problem of mechanics, quantumly. Let's do quantum mechanics. Okay? We haven't done that yet. So we're going to really put the mechanics back in quantum mechanics. As we discussed, our hook for doing that is symmetries. Okay? And what we discussed is that momentum is the generator of translations in position. That follows from Noether's theorem, which says that if we have a translationally invariant system, meaning there's no difference in the physics as a function of position in space, then momentum is conserved. Okay? And therefore momentum is the generator of translations in position.

So what that means is that, in the quantum world, x and p go from being coordinates to being observables, x̂ and p̂. These are Hermitian operators. And there is a unitary representation of translations in space, that is, in position. For the moment, I'm going to restrict my attention to one spatial dimension; we'll generalize a little later and deal with the subtleties involved in three dimensions or more. So let's just think about 1D here. The translations in position form a Lie group over the real line. Okay? And the unitary representation is the following. Let T(x₀) be a unitary operator. To say that it represents translation by x₀ is to say that if I conjugate the position operator by it, I translate position by that amount: T†(x₀) x̂ T(x₀) = x̂ + x₀, where x₀ on the right really means x₀ times the identity. So this is saying that on this side I do a unitary transformation on the position operator, and its action is to translate position by that amount. Okay? That's what we want. And since momentum is the generator of that action, we can write down what this translation operator is using the same technique we used when thinking about time translations.
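Before deriving the generator, it may help to collect what we're demanding of T (my summary; the conjugation rule is the one from the board, and the composition law is just what it means for T to represent the translation group):

\[
T(x_0)\,T(x_1) = T(x_0 + x_1), \qquad T(0) = \hat{1}, \qquad
T^\dagger(x_0)\,\hat{x}\,T(x_0) = \hat{x} + x_0\,\hat{1}.
\]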
And the way we did that was to think about translations near the identity, just a differential translation. So, for a near-identity translation by a little differential dx: it's the identity plus a small correction, and we said that the generator has to be anti-Hermitian. So we factor out a minus i times some Hermitian operator, call it K. Actually, let me just write it out, because we know it's proportional to momentum; momentum is the generator of space translations. So my near-identity translation operator is the identity, plus a small amount proportional to dx, times an anti-Hermitian operator. The anti-Hermitian factor has got to be plus or minus i; that's just convention, take it minus i; times the momentum operator; and then there's some proportionality constant out front. Now, this whole thing has to be dimensionless. It's just an operator: if the first term is the identity operator, which has no dimensions, then the correction term must have no dimensions either. That means the units of the proportionality constant are one over the units of position times momentum. And position times momentum is action; it's another way of writing the units of action. And there is a constant of physics with those units, and it's the same constant: ħ. What a great choice. So our near-identity translation operator is thus T(dx) = 1 − (i/ħ) p̂ dx. I mean, we can't really derive that it's ħ; it's just the consistent choice, and we'll come to a little later why ħ is the unit that forever comes into quantum mechanics. It's, in some sense, a way of translating physical stuff into distinguishable states. It tells me, what's the dimension of Hilbert space, in some sense; it's the way in which we coarse-grain phase space.

But with all of that said, let's just plug it in. If this is the near-identity translation, and I use it to unitarily transform the position operator, it should translate position by dx. So plug it in: T†(dx) x̂ T(dx) = (1 + (i/ħ) p̂ dx) x̂ (1 − (i/ħ) p̂ dx), where the dagger flipped the sign of the i because p̂ is a Hermitian operator. Now, dx is a differential, so we keep only things to first order; dx² is zero as a differential. So keeping only those terms, what do I get? I get x̂ − (i/ħ)(x̂ p̂ − p̂ x̂) dx. Et voilà! I love saying that. What we see over here is that, in order for this to equal x̂ + dx times the identity, the coefficient of dx has to be the identity. Right? Because this equals that. And thus we see that, in order for momentum to be the generator of translations in position, it must be the case that the Lie bracket, which here is the commutator, satisfies the following: [x̂, p̂] = x̂ p̂ − p̂ x̂ = ħ/(−i), which is iħ. So this commutator is nothing but the statement that momentum is the generator of translations in position. If it's the generator, it has to have that commutator; if they commuted, momentum would do nothing to position. Cool.

So now we can also write down the translation operator in position, not just for a differential, but for an arbitrary finite translation. So if I translate by an amount, say, x₀, I can break that up into a limit of little differentials: I write the near-identity operator with x₀/n in place of dx, that is, 1 − (i/ħ) p̂ (x₀/n), and then I apply it n times.
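As a quick aside that's not from the lecture: the commutator we just derived can be sanity-checked numerically by representing p̂ = −iħ d/dx with finite differences on a grid. This is a minimal sketch; the grid size, the ħ = 1 units, and the Gaussian test function are arbitrary choices of mine.

```python
import numpy as np

# Check that (x p - p x) psi = i*hbar*psi for a smooth test function,
# with p = -i*hbar d/dx approximated by central finite differences.
hbar = 1.0
N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

psi = np.exp(-x**2)  # smooth, localized test function

def p_op(f):
    """Momentum operator p = -i*hbar d/dx (second-order central differences)."""
    return -1j * hbar * np.gradient(f, dx)

commutator = x * p_op(psi) - p_op(x * psi)  # [x, p] acting on psi

# Away from the grid edges this should equal i*hbar*psi.
interior = slice(N // 4, 3 * N // 4)
print(np.allclose(commutator[interior], 1j * hbar * psi[interior], atol=1e-3))
```

Running it prints True: on the grid, x̂ and the discretized p̂ fail to commute by exactly iħ, up to finite-difference error.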
Each factor is my near-identity translation, because n is very, very big; it goes to infinity, and in the limit this is the exponential: T(x₀) = e^{−(i/ħ) p̂ x₀}. So that's the translation by a finite amount x₀. And if x₀ becomes a differential, when I expand this as a power series, I get the near-identity form back.

Now, just to be pedantic, because that's my nature, let's show that this is all self-consistent, using Baker-Campbell-Hausdorff, or some version of it. So let's look explicitly at T†(x₀) x̂ T(x₀) = e^{+(i/ħ) p̂ x₀} x̂ e^{−(i/ħ) p̂ x₀}. And what we wrote down last time is that we can do this transformation with multiple commutators, right? So, as an aside up here, to remember: e^{B} A e^{−B} is equal to the sum from n = 0 to infinity of 1/n! times the multiple commutator of B with A, n times. What that means is: zero times is just A; one time is [B, A]; two times is [B, [B, A]]; three times is [B, [B, [B, A]]]; etc. So let's do it. What's A and what's B in this case? Here A is x̂ and B is (i/ħ) p̂ x₀. And thus the first term is x̂. The second term, whose 1/1! is just 1, is the commutator of this with that. The next term would be one half the commutator of B with that again, and so on. So what is this first commutator? Well, there's a constant; bring it out: (i x₀/ħ) times [p̂, x̂]. (There's an x₀ in there too; thank you, let's keep it over here.) And the commutator of p̂ with x̂ is −iħ. So (i/ħ)(−iħ) = 1, and this term is x₀ times the identity. Now, as was suggested, the rest of these terms I can throw out. Why? Because the magic of these particular operators is that their commutator is a constant, or really a constant times the identity operator, and a constant times the identity commutes with everything. So all the higher multiple commutators are zero. Et voilà, once again: this is equal to x̂ + x₀, which is what we wanted. So it works.

So let's now work out the Heisenberg equations of motion for x̂ and p̂ when I have a Hamiltonian of the form I wrote down over there for classical mechanics: a particle moving in 1D in a potential V. The Hamiltonian operator is the kinetic energy operator plus the potential operator, which of course is the potential as a function of x̂: Ĥ = p̂²/2m + V(x̂). We know now what this means; it's a function of an operator, and we know what that means. So now let us write the Heisenberg equations of motion. Let's look at the derivative of x̂ with respect to time: dx̂/dt = −(i/ħ)[x̂, Ĥ], of the form we have on the front board. Okay. So what is this equal to? Well, plug it in: I get −(i/ħ) times the commutator of x̂ with the kinetic energy term, plus the commutator with the potential energy term. Of course, the potential piece is zero, because V(x̂) is a function of x̂, and x̂ commutes with functions of itself. And now what about the kinetic piece? Well, we've got to use the good old product rule in that case. This is a little commutator algebra; let me just remind ourselves how it goes, with a little side note over here: the commutator of A with the product of B and C is [A, BC] = [A, B] C + B [A, C].
We can prove that at some point, or just remember it; we'd better start remembering it. Okay. So now we use it. This gives two terms: dx̂/dt = −(i/2mħ) ([x̂, p̂] p̂ + p̂ [x̂, p̂]), because in this case both B and C are the same p̂; I just applied the rule with p̂ twice. Right? And both of those commutators are equal to iħ. So this is thus equal to p̂/m. Yay. So that's the first equation. And similarly, you can do this for homework: the time derivative of the momentum operator is dp̂/dt = −(i/ħ)[p̂, Ĥ], which is −(i/ħ) times the commutator of p̂ with the potential operator. If you go through those details, you will find that this is equal to −V′(x̂). So this looks, I mean, it looks exactly like classical mechanics, doesn't it? These are Hamilton's equations, with hats on.

So what does this say? This is often stated as Ehrenfest's theorem. Okay. Ehrenfest's theorem says, loosely, that mean values in quantum mechanics follow the classical trajectories. This is how it's often stated. That is to say, if I let x̄ equal the expectation value of x̂ and p̄ equal the expectation value of p̂, then dx̄/dt = p̄/m and dp̄/dt = −V′(x̄), so it looks like I just recover classical dynamics as long as I follow the center of the wave packet, the mean value. Now, this is wrong. Why? Why is it wrong? I mean, it looks like it's true. Let's actually take the expectation values of these equations. Okay. From the first one we get d⟨x̂⟩/dt = ⟨p̂⟩/m, which implies, well, that's cool; that's exactly the classical equation with values replaced by mean values. That one's fine. However, look at the second equation. If I take the expectation value of both sides, putting the hats inside the brackets, I get that the rate of change of the mean momentum is d⟨p̂⟩/dt = −⟨V′(x̂)⟩. But ⟨V′(x̂)⟩ does not generally equal V′(⟨x̂⟩). Sometimes it does, and we'll see when, but in general, no. Suppose we have a quartic potential, a potential going as the fourth power of x. Then the expected value of the force on the particle involves ⟨x̂³⟩, and ⟨x̂³⟩ does not equal ⟨x̂⟩³. (The little numerical sketch below makes this concrete.) It would only be approximately true if the particle were extremely well localized, such that the fluctuations were negligible. So, only if Δx, the uncertainty, the fluctuation either way, is much, much less than the length scale over which the potential varies. If the particle is extremely well localized compared to the way in which the potential changes, then it doesn't notice the difference. But if the particle is delocalized and samples different parts of the potential, it experiences different forces, in some sense; really, it experiences coherent superpositions of different forces. So it is not correct to say that the mean value of a wave packet follows the classical trajectory in general.

Now, there are some cases where that's true. Suppose I had a quadratic potential, a harmonic oscillator. Then V′ is linear in x̂, and the mean value of a linear function of x̂ is the same function of the mean value, so in that case this is true. So if I have a quadratic potential, or a linear potential, then yes: in a harmonic oscillator, the mean value of the wave packet follows the classical trajectory. But in other potentials, it doesn't. And in fact, if the system is classically chaotic, then the wave packet spreads incredibly fast, and that means that even if at the initial time I had the particle very well localized, at very short times it becomes extremely delocalized. There's what we call a quantum break time.
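Here is the quartic example as a small numerical sketch (my addition, not from the lecture; the potential V = x⁴, the packet center x₀ = 1, and the widths are illustrative choices). For a Gaussian packet of width s centered at x₀, ⟨x̂³⟩ = x₀³ + 3x₀s², so ⟨V′(x̂)⟩ and V′(⟨x̂⟩) only agree as s → 0:

```python
import numpy as np

# Compare <V'(x)> with V'(<x>) for V(x) = x^4, i.e. <4 x^3> vs 4 <x>^3,
# for Gaussian packets of decreasing width s centered at x0 = 1.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def gaussian(x0, s):
    psi = np.exp(-(x - x0)**2 / (4 * s**2))
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

for s in [2.0, 0.5, 0.05]:
    prob = np.abs(gaussian(1.0, s))**2 * dx
    mean_x = np.sum(x * prob)
    mean_force_term = np.sum(4 * x**3 * prob)  # <V'(x)>
    print(f"s={s}: <4x^3>={mean_force_term:.3f}, 4<x>^3={4 * mean_x**3:.3f}")
```

For s = 2 the two numbers differ by more than a factor of ten; for s = 0.05 they agree to better than one percent, which is Ehrenfest's theorem holding only in the well-localized limit.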
The quantum break time is the time after which the particle no longer follows the classical trajectory, and it's very short if the system is chaotic. In fact, there's an interesting paradox that was posed by Wojciech Zurek in thinking about this, about one of the moons of Saturn, Hyperion, whose rotation is chaotic. You ask the question: how long would it take before the trajectory of this moon is no longer described by Newton's equations? Even though it's a damn moon, which is as macroscopic as it gets, the answer is actually a very short time; I tried to look up the number. Someone asks whether I mean Mercury's orbit. No, I'm talking about one of the moons of Saturn in this case, and anyway, I don't think Mercury's orbit is chaotic; Mercury's orbit precesses around the Sun, but it's not chaotic, and this is about chaos. Now, that's a little bit of a paradox: we don't expect to need quantum dynamics to describe the trajectory of a moon. So there is this question about the emergence of classical phenomena. The resolution of that paradox is, again, that the moon, in this case the moon of Saturn, is not a closed quantum system. It's an open quantum system, constantly being interacted with, and so the interference between the different trajectories that would give us quantum behavior doesn't happen anymore; they decohere. Decoherence rescues the moon. Very good.

So let's talk about things from the point of view of the Schrödinger picture, about the state space. We have position and momentum operators, so we have position and momentum representations. x̂ and p̂ are Hermitian operators, thus we can look for their eigenvectors, and because we're representing this continuous translation symmetry, the eigenvalues are real numbers that are continuous: x̂|x⟩ = x|x⟩ and p̂|p⟩ = p|p⟩, where these are the eigenvectors and those are the eigenvalues. And these eigenvectors form a resolution of the identity. Now, because the eigenvalues are continuous, we don't sum, we integrate. Okay? So the resolution of the identity becomes the following: integrate over all the position eigenstates, with a dummy variable, let's call it x₀, from minus infinity to infinity: ∫ dx₀ |x₀⟩⟨x₀| = 1. And similarly for momentum.

Now, things get quite messy here mathematically; complicated, isn't it? There's a whole lot of rigor here that we'll only touch on; we're going to sweep a good part of it under the rug. But let me point out one piece of it here. Firstly, if we look at this expression: x₀ has units, units of length, and dx₀ carries those dimensions. Similarly here, p has units of momentum, mass times velocity, and dp carries those dimensions. Which means that these kets and bras have units, in this case. Okay? The combination |x₀⟩⟨x₀| has the dimensions of one over length: it's a density, a linear density. And |p₀⟩⟨p₀| has the dimensions of one over momentum: it's a momentum density.

Well, given that, we can write down representations. Say we have a state; let's talk about pure states here, in other words, state vectors. We can write down a position representation. How do we do that? Well, if we have a basis and a resolution of the identity, you know how to do it: we just expand the state, and this, ⟨x|ψ⟩, is our expansion coefficient of the state in that basis. But now it's a continuous-variable basis, and this coefficient is called the wave function. So the wave function ψ(x) = ⟨x|ψ⟩ is nothing but the representation of the state in the position basis. What are the possible allowed wave functions in the Hilbert space? Well, it must be the case that this thing can be normalized: if its square is going to be a probability distribution, it must be normalizable, and typically we will set the normalization equal to one.
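Collecting the continuum-basis conventions we've just set up in one place (my rendering; the delta-function normalization of the eigenkets is the continuum analog of orthonormality, and we'll meet it explicitly at the end of the lecture):

\[
\hat{x}\,|x\rangle = x\,|x\rangle, \qquad \langle x|x'\rangle = \delta(x - x'), \qquad
\int_{-\infty}^{\infty} dx_0\, |x_0\rangle\langle x_0| = \hat{1},
\]
\[
\psi(x) \equiv \langle x|\psi\rangle, \qquad
|\psi\rangle = \int_{-\infty}^{\infty} dx\,\psi(x)\,|x\rangle,
\]

and the same with x replaced by p for the momentum basis.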
So, if this were a discrete basis, we would write the normalization as a sum of squared magnitudes of the coefficients, and that of course is equal to ⟨ψ|ψ⟩. But now we have this continuous basis, so let's plug in the resolution of the identity, and the norm becomes ⟨ψ|ψ⟩ = ∫ dx |ψ(x)|², and we want that equal to one. So the allowed states in the Hilbert space are the ones such that when I square their magnitude and integrate over the whole real line, I get a finite number, which I can then set equal to one. This Hilbert space has a technical name: it's called L²(R). That Hilbert space is the space of square-normalizable functions over the real line.

And of course, how do we interpret the wave function? Well, one thing we see just from our general theory of measurement. What we have here in the resolution of the identity is really a positive operator valued measure; and here it really is a measure, in the sense of an integration measure: |x⟩⟨x| dx is the projector onto the range x to x + dx, and we sum up all those little slices. Okay? What that means, by the Born rule, with this as my projector and this as my state, is that dx ψ*(x)ψ(x) is the probability, in a projective measurement, to find the particle in the range x to x + dx. It's the magnitude of the probability amplitude in the position representation, squared. Okay? ψ*ψ is what we call the probability density. When the random variable we're measuring is a continuous variable, we can't talk about the probability to be at a point, because a point is a set of measure zero; we can only talk about the probability to be within some differential range. Okay? So this is the probability density.

Okay, moreover, what else can we say about the position representation? We want to know how the translation operator acts. So let's say we have some |ψ⟩, and we look at ψ(x), the wave function. Okay? And let's suppose, though it doesn't really matter, that this ψ(x) is a wave packet localized at the origin, centered at the origin. Now let's ask what the translation operator should do. Let's call |ψ′⟩ the state translated by an amount x₀: |ψ′⟩ = T(x₀)|ψ⟩. And what is the wave function of this state? Well, it's just the original packet translated by x₀; it should be the same shape, shifted. So imagine that this is ψ(x); what is the new one? It's ψ′(x) = ψ(x − x₀). Right? Because its value at x₀ is the same as the original's value at the origin. So ⟨x|T(x₀)|ψ⟩ = ψ(x − x₀), which means that the bra ⟨x|T(x₀) has to be ⟨x − x₀|. So the translation operator acting to the left takes ⟨x| to ⟨x − x₀|. Now I can take the adjoint of this whole thing, and that becomes T†(x₀)|x⟩ = |x − x₀⟩. And this is unitary, so T† is T⁻¹, and the inverse is just translation by minus x₀. So this is a very long-winded way of getting the action of the translation operator on the kets: T(x₀)|x⟩ = |x + x₀⟩.

Now, of course, we also have another representation, because we have position and momentum. There's the momentum representation: if I wanted to, I could write the resolution of the identity in terms of the momentum basis, in which case I have a representation which I'll call ψ̃: the momentum-space wave function ψ̃(p) = ⟨p|ψ⟩ is nothing more than the probability amplitude in the momentum eigenbasis.
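Since T(x₀) = e^{−(i/ħ) p̂ x₀} is diagonal in the momentum basis, translating a wave packet is just multiplying its momentum-space representation by a phase. Here's a minimal numerical sketch of that fact (my addition; the FFT plays the role of the change of basis, and the grid parameters are arbitrary choices):

```python
import numpy as np

# Apply T(x0) = exp(-i p x0 / hbar) in the momentum basis via FFT,
# and check the result is the same packet shifted by x0. (The FFT grid
# is periodic, so the packet must stay far from the boundary.)
hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=L / N)  # momentum grid

psi = np.exp(-x**2)  # packet centered at the origin
x0 = 5.0
psi_shifted = np.fft.ifft(np.exp(-1j * p * x0 / hbar) * np.fft.fft(psi))

print(np.allclose(psi_shifted.real, np.exp(-(x - x0)**2), atol=1e-6))
```

This prints True: multiplying ψ̃(p) by e^{−ipx₀/ħ} and transforming back gives ψ(x − x₀), which is exactly the translation action in wave-function language.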
And we can equally well look at the normalization, the norm squared of the state, in the momentum basis: the momentum-space wave function is normalized in the same way, ∫ dp |ψ̃(p)|² = 1, and the interpretation is the same: |ψ̃(p)|² is the momentum-space probability density. That is to say, the probability to find the particle's momentum in the range p to p + dp is |ψ̃(p)|² dp.

Now, what we have been studying all along is that we can always transform between the representation in one basis and the representation in another basis via a unitary transformation. And that unitary transformation here has a name: it's called a Fourier transform. But it's just an example of what we've been doing, a change of basis. So let's do it. If I have the position-space representation, I can get to the momentum-space representation. Say I want the momentum-space wave function. How do I do it? I insert a resolution of the identity in the other basis, in this case position: ψ̃(p) = ⟨p|ψ⟩ = ∫ dx ⟨p|x⟩ ψ(x). This is just another way of writing a unitary transformation: ψ(x) is like my column vector, and ⟨p|x⟩ is like the elements of my unitary matrix, taking this column vector to that column vector. But now it's not matrix multiplication, it's an integral, because the indices are not discrete. So ⟨p|x⟩ is my change-of-basis unitary. Similarly, we can do the inverse; we can go from momentum space to position space in exactly the same way. Now it's the inverse of this unitary, and the inverse of a unitary is the adjoint, and the adjoint of a matrix element is the conjugate: ⟨x|p⟩ = ⟨p|x⟩*.

So now, what is the value of ⟨p|x⟩? Let's calculate it. Consider what we expressed before: |x⟩ is what you get by translating from x = 0 by an amount x, so |x⟩ = T(x)|x = 0⟩. And the translation operator, which has now been erased, was the exponential: T(x) = e^{−(i/ħ) p̂ x}. So I ask you: what is the action of ⟨p| on this? We can just replace the operator p̂ by the eigenvalue p, because ⟨p| is an eigenvector, and it doesn't matter that p̂ is in the exponent. So that becomes a number: ⟨p|x⟩ = e^{−(i/ħ) p x} ⟨p|x = 0⟩. And ⟨p|x = 0⟩ is just some number, call it N; we can always manage the overall phase, so take it real and positive. We don't know yet what this N is, but let's pin it down.

To do that, let's again consider the norm squared of the state, which we can write as the integral over the momentum-space probability density: 1 = ∫ dp |ψ̃(p)|². Plug in what we just derived. Since it's a magnitude squared, I multiply the integral by its complex conjugate, so I have two integrals over position, and they're two different integrals, so I need two different variables, x and x′, with an |N|² out front: 1 = |N|² ∫ dp ∫ dx ∫ dx′ e^{−(i/ħ) p (x − x′)} ψ(x) ψ*(x′). Now reverse the order of integration and do the p integral first, which, being physicists, we always do unless forced to worry about it. So what do we do with that p integral? Well, it's something I'm sure you've seen. If I ask you: what is the integral from minus infinity to infinity of dk e^{ik(x − x′)}, what is that?
It's a delta function, right? With some factors in it. If we integrate over some variable k, then ∫ dk e^{ik(x−x′)} = 2π δ(x − x′), where δ is the Dirac delta function. Now, our integral is over p, not k; if I divide by ħ over here and multiply by ħ over there, I can call p/ħ = k, and I get ∫ dp e^{−(i/ħ)p(x−x′)} = 2πħ δ(x − x′). When I then integrate over the delta function, everywhere I see x′ it gets replaced by x. So this becomes 1 = 2πħ |N|² ∫ dx |ψ(x)|², and that last integral is equal to 1, because all the way back this was supposed to be a normalized wave function. So that tells me how I should normalize: it tells me to choose N = 1/√(2πħ).

So we now have the relationship between the position- and momentum-space wave functions, and of course it's nothing but the Fourier transform. And you should always think of a Fourier transform as nothing but a change of basis; that's how you should think about it: I have two linear representations, one in terms of position and one in terms of momentum, or one in time and one in frequency, and it's just a change of basis. So the transform says

\[
\tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} dx\, e^{-ipx/\hbar}\, \psi(x),
\qquad
\psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} dp\, e^{+ipx/\hbar}\, \tilde\psi(p).
\]

To get the momentum-space wave function, I project the state onto the momentum eigenstates; that's what the first equation is saying, expressed through the position-space wave function. And to get the position-space wave function, I project onto the position eigenstates, expressed through the momentum-space wave function.

Now, there is one important fact here that we kind of slid over: the position and momentum eigenvectors themselves are not in L², not in the Hilbert space. What do I mean by that? Well, suppose a position eigenstate, say |x₀⟩, really were a state in the Hilbert space. Then it would have to be the case that its momentum-space wave function, for example, is normalized, right? Because if it's in the space, it's normalized. Well, according to what we just derived, its momentum-space wave function is ⟨p|x₀⟩ = e^{−ipx₀/ħ}/√(2πħ). So let's plug that in and compute the norm squared. What's the magnitude squared of this? It's 1/(2πħ), a constant, and its integral over all p is infinity. So that cannot be; and the same is true for momentum eigenstates. These things are not square-normalizable; they are not in the Hilbert space. So there's a whole sophisticated structure here that we're just touching our toes into, about Hilbert spaces of infinite dimension, about allowed states and non-allowed states, bounded operators and unbounded operators. For the most part, we get away with this in physics by dealing with things like delta functions; and this delta function, of course, is not a real function, it's what's called a tempered distribution. But formally speaking, the |x⟩'s and |p⟩'s are not in the Hilbert space. Their duals exist, in the sense that I can talk about the bras, because a bra gives me a perfectly fine linear functional on the states, but the kets don't.
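Written out, the divergence we just found (my rendering):

\[
\langle x_0 | x_0 \rangle = \int_{-\infty}^{\infty} dp\,\bigl|\langle p|x_0\rangle\bigr|^2
= \int_{-\infty}^{\infty} \frac{dp}{2\pi\hbar} = \infty,
\]

and the identical computation, with the roles of x and p exchanged, shows a momentum eigenket has infinite norm too.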
The last thing I'll say about this is what it means physically: position and momentum eigenvectors are unphysical. There is no preparation that can put the system in an eigenstate of position or an eigenstate of momentum, because that would take an infinite amount of energy. If you have a position eigenstate, its momentum is completely uncertain, spread over all possible momenta, so it has an infinite amount of energy. You can never localize something to a point; you can localize it as close to a point as you want, but not to a point. So it's no worry, mathematically, that these things are not in the Hilbert space; and the same goes for momentum. Each is an unphysical state, which is why we can never ask the question, what is the probability to find the particle at position x, exactly: to do so at a point would take infinite precision. We can ask to within some digits of precision, some small range, as small as we like, but never at a point.

So, someone asks, what do the wave functions of the x and p eigenstates actually look like? Right, so let's ask that question. Suppose the state of the system, if I write it formally, is localized at some position x₀, that is, it's |x₀⟩, and I ask: what is the wave function associated with this? Well, the wave function is ⟨x|x₀⟩, and we can write it down formally as a delta function localized at x₀: ⟨x|x₀⟩ = δ(x − x₀). But it's not a good function: first of all, it blows up when its argument is 0, and moreover, if you try to square it and integrate it, you get infinity. It's not normalizable; it's an unphysical state. There's no such state we can prepare. We can prepare something that approaches it: we can make a Gaussian wave packet with a width that is extremely narrow, which limits to a delta function, but the width is never exactly zero. We will continue this, and work problems, next time.
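As a worked version of that last remark (my parameterization of the packet): the normalized Gaussian

\[
\psi_\sigma(x) = \frac{1}{(2\pi\sigma^2)^{1/4}}\, e^{-(x - x_0)^2/4\sigma^2}
\]

satisfies ∫ dx |ψ_σ(x)|² = 1 for every width σ > 0, and |ψ_σ(x)|² → δ(x − x₀) as σ → 0. But it is a minimum-uncertainty state with Δp = ħ/2σ, so its kinetic energy ⟨p̂²⟩/2m = ħ²/8mσ² blows up in that limit: that's the infinite-energy obstruction in formulas.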