 It oscillates at a frequency, as has been brought up earlier this week, given by its spring constant and its mass. And in classical mechanics, we can make the following statement: if we're provided with the initial conditions, we can then write down the full time evolution very simply. So CM here means classical mechanics. And the result would just be that the position of this object as a function of time is related to its initial position and subsequent cosine-like oscillations, and also to the initial momentum and subsequent sinusoidal oscillations. And likewise, the momentum as a function of time is set by those same initial conditions. So this is one way of saying why it's regarded as simple. In the quantum mechanical description of the lone oscillator, the same idea holds: I can write down the time evolution of the quantum oscillator very simply if I'm just given its initial state. So if you give me that initial state, psi of 0, I simply expand it in the basis of energy eigenstates, and each of those oscillates. And those energy eigenstates are also things that we know pretty well. They are just Hermite-Gauss functions, where the constant x0 sets the size of the ground state wave function. Now, if we add driving to this system, not much really changes. At least, things don't get much less trivial. If the force that's doing the driving is harmonic in time, so if it's just an oscillatory force, let me call that f sub omega for an oscillatory force at frequency omega, then the resulting motion due to that force is just given by its amplitude times a factor that sort of asks whether or not the force is being applied at or near resonance. And then the motion is at the same frequency as the applied drive. Sometimes we tidy this up a bit by taking that quantity in the parentheses and defining it as the susceptibility of the oscillator at frequency omega. 
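To make the susceptibility concrete, here is a minimal numerical sketch in Python. The parameter values are purely illustrative, and I've included a small damping rate gamma, which the lecture hasn't introduced yet, just so the on-resonance response stays finite:

```python
import numpy as np

def susceptibility(omega, omega0=1.0, m=1.0, gamma=0.01):
    """chi(omega) = x(omega) / F(omega) for a weakly damped classical
    harmonic oscillator; all parameter values here are illustrative."""
    return 1.0 / (m * (omega0**2 - omega**2 - 1j * gamma * omega))

# The steady-state motion is at the drive frequency:
#   x(t) = Re[ chi(omega) * F_omega * exp(-1j * omega * t) ]
chi_res = susceptibility(1.0)   # on resonance: large response, set by damping
chi_off = susceptibility(2.0)   # far from resonance: much smaller response
```

On resonance the magnitude of the response is limited only by the damping, which is the sense in which the factor in parentheses "asks whether the force is applied at or near resonance."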
And the linearity of the oscillator allows us also to treat time-dependent forces that aren't just oscillatory, but have pretty much arbitrary form. So if the force as a function of time can be written as a sum over a whole bunch of forces, each applied at some frequency omega sub n, then the resulting motion, x of t, is just the sum of the responses to these individual Fourier components. This thing here, x of omega n, is just what we wrote over there. So even an arbitrary linear drive applied to the oscillator doesn't make it much more complicated in the classical picture. In the quantum mechanical picture, let me not go into details, but just say that pretty much the same result applies, which is that for an arbitrary time-dependent force, the time evolution of the state vector is pretty simple. I also mentioned coupling. So when we have a whole bunch of oscillators and couple them, so we have not just one oscillator but some arbitrary collection of them, all coupled to each other, still via linear springs and the like, with different springs. So this could be the spring coupling mass 1 to mass 2, and this the spring coupling mass 2 to mass 3. And this object here has a coordinate x1. And this object's coordinate is x2. And this object's coordinate is x3. Then even though the equations of motion, I'll use the abbreviation EOM, the equations of motion for these coordinates might involve a lot of couplings and simultaneous solutions and the like, there is another set of coordinates, which we could call psi 1, psi 2, et cetera, all the way up to the same number, which fully describe the system in terms of a set of uncoupled oscillators. So these are what we would call the normal modes. And this coordinate is normal mode 1. 
This coordinate is normal mode 2, and so on and so forth, each of them having some resonant frequency, omega 1, omega 2, omega 3. And the transformation between the two is linear in the original coordinates and is found by solving some kind of eigenvalue problem, like the diagonalization of a matrix, which gives us the normal coordinates and the frequencies of the normal modes. And this is the classical version. In the quantum mechanical version, coupled oscillators are really still pretty trivial. The reason being that the usual approach in quantum mechanics is to take the classical description of the physical system and then quantize it by taking the classical coordinates and converting them into operators, denoted by hats and the like. And those operators, since they are just linear functions of the original x's and p's, will obey the same commutation relations. So this is normal coordinate i, and this is the momentum of the i-th normal coordinate. And the last point that I wanted to make was about the introduction of dissipation. In a classical system of oscillators, usually the assumption is that once dissipation is introduced, not much qualitatively changes. The normal modes still oscillate, but they do so with ever-decreasing energy. And you can find statements to this effect in classical mechanics books by Goldstein, Landau and Lifshitz, and others. Again, this is regarding classical oscillators. Regarding quantum mechanical oscillators, the statement would seem even stronger: in a quantum mechanical system of oscillators, the introduction of dissipation just causes them to gradually lose energy. But if anything, it's even worse, because most of the interesting quantum mechanical effects dissipate even more quickly than the energy does. So things like superpositions decay even faster than the energy decay rate. 
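As a concrete sketch of that diagonalization, here are three equal masses in a chain. For definiteness I've also tied the end masses to walls with the same spring constant, an assumption beyond the board drawing, so that all three normal-mode frequencies come out nonzero; all values are illustrative:

```python
import numpy as np

m, k = 1.0, 1.0
# Stiffness matrix for three equal masses in a chain: springs k between
# neighbours, and from the end masses to fixed walls.
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

# Diagonalizing K/m gives the squared normal-mode frequencies, and the
# eigenvectors are the linear transformation from (x1, x2, x3) to the
# normal coordinates (psi1, psi2, psi3).
omega_sq, modes = np.linalg.eigh(K / m)
omega = np.sqrt(omega_sq)
```

In the normal coordinates the equations of motion decouple into three independent oscillators at these frequencies.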
So hopefully there are not too many big surprises here, but I just wanted to put out front what some of the conventional wisdom is about systems of coupled oscillators, which is most of what we study in optomechanics. So all of this being said, this is like a giant un-advertisement for the field. Where is the interest? Well, the interest can come about from non-linearity, which is what I've manifestly left out of here. And one point to make is that non-linearity doesn't have to be very strong in order to begin to dominate the system and be interesting. Loosely speaking, it just has to be stronger than the dissipation, so that a drive applied to the system results in its moving with an amplitude that's large enough for even a small non-linear coefficient to begin to affect its motion. So, experimentalists in the audience: if you have a really high-Q mechanical oscillator, you know that to see a nice Lorentzian resonance, you have to drive it very, very weakly. If you have a really high-Q oscillator and you drive it strongly, you'll see all the Duffing-like non-linear behavior. And I know you heard a lot about interesting non-linear oscillator physics earlier this week. I'll just add that if the scale for the non-linearity to set in is on the order of a few quanta of energy, then the non-linear physics isn't just accessible in the classical regime, but also in the quantum mechanical regime. So another point of interest might be in the actual physical realization of the oscillators. So instead of just an abstract mass on a spring or whatever, suppose, for example, one of my oscillators is a mass on a spring, some sort of mechanical oscillator, and the other humble oscillator, to which it is linearly coupled, is a mode of the electromagnetic field that oscillates at 10 to the 15 hertz or something like that. So here's a real mechanical oscillator at 1 megahertz. 
And if the thing that it's coupled to is, well, I'm trying to draw the physical system, if it's, say, a mode of an optical cavity that might oscillate at 1 petahertz, then interesting things begin to happen as they're coupled to each other, and the normal modes begin to be some sort of mixture of this hot, slow, heavy degree of freedom and this very fast, very cold, very quantum, very measurable degree of freedom. And that's what drives a lot of interest in optomechanics. So for example, if this frequency is high enough, it can be placed in a pure quantum state. And if the coupling to this mechanical oscillator, which normally is at some high temperature, like a kelvin or so, is sufficiently strong, the purity of this quantum state can be transferred to this normally impure thermal state. This is the beam splitter interaction that I think you guys heard about earlier this week. So this last point that I just made is a powerful route to quantum optomechanics, which is still a study of oscillators, but perhaps intriguing because this type of coupling allows the quantum properties of a system that might normally never display quantum properties to be made detectable. So that's a lot of what drives optomechanics as I see it. So that's the end of my introduction to the field. And now we'll start talking about some specific problems. But before we do, I would be happy to answer questions about what we've talked about so far. And also, just in general, I encourage you to ask questions while I'm talking. I definitely would rather slow down a bit and have you understand more than cover a bit extra and have less of it be understood. So feel free to stop me at any time. So the first thing that I want to talk about is the question of distinguishing quantum behavior from classical behavior in oscillator systems. So this has been an important issue in optomechanics. 
And it's one that's maybe a little bit more subtle than in a lot of other fields, as a consequence of the fact that the physical systems that we study are harmonic oscillators. So in trying to determine whether or not a phenomenon that you're observing, in an experiment that you're carrying out, has to be explained by quantum mechanics and can't be explained by classical mechanics, it's important to make sure that you're addressing a question with your experiment that is well posed in both theories. That isn't always the case. If you insist on asking which slit did the particle really go through, these are words that don't translate very simply into the equations of quantum mechanics. So it's important to define questions that are well posed in both theories. So in order to make sure that we're describing oscillator systems in a way that makes sense in both classical mechanics and quantum mechanics, let me just start with a summary in classical mechanics. So the first point is that a state of the system is specified by two real numbers, which are the oscillator's position and momentum. So if you like, the state of the system can be drawn in the phase space of the oscillator as a point. So there's a state of a classical harmonic oscillator. In this description, each state has a definite energy, which is just the potential energy, 1 half k x squared, plus the kinetic energy, p squared over 2m. There is a state with the lowest energy: it's the state with x equal to 0 and p equal to 0, and obviously it has energy equal to 0. And the last point that I want to make concerns what happens if the harmonic oscillator is weakly coupled to an ideal thermal bath that's at some temperature T; then measurements of x and p change character. First, though: if it's not coupled to any kind of bath, then x and p evolve according to F equals ma, if you like, and the time evolution is fully specified by the initial conditions, as we said at the outset. 
But if instead the harmonic oscillator is coupled to a bath at temperature T, then measurements of x and p will be random variables drawn from a specific distribution, which is the Boltzmann distribution. And it's at least proportional to e to the minus the energy of x and p over kT. And for the particular form of our energy just over here, this will be a Gaussian in x and p. So this will be e to the minus 1 half k x squared over kT, times e to the minus p squared over 2m over kT, which, if I try to make a sort of three-dimensional plot where this is the plane of phase space, and on this axis I plot the probability distribution, will look like a two-dimensional Gaussian. So that's what I wanted to say about the classical oscillator. And the reason that I phrased all those statements in that fashion is because I think I can make very nearly parallel statements about the corresponding quantum mechanical system. So here we would have: a state is any square normalizable function, which is to say it's equivalent to a ray in Hilbert space, though often we just call them vectors. Every one of these states, so every wave function that I can imagine, has a well-defined expectation value of the energy. This is the expectation value of the Hamiltonian. But only a discrete set are actual energy eigenstates. And those are the well-known harmonic oscillator energy eigenstates, with energy eigenvalues h bar omega times n plus 1 half. So if the oscillator is not coupled to a bath, measurements of x and p will be random variables that are real valued, drawn from probability distribution functions that are related to the wave function in the following way, where the subscript here means that this is the wave function in the x basis. Questions? And I guess it's just worth mentioning that these two functions, the momentum space wave function and the real space wave function, are Fourier transforms of each other. 
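As a quick numerical check of that factorization, here is a sketch that samples (x, p) from this Boltzmann distribution. Since the energy is quadratic, the distribution is just a product of two independent Gaussians; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k_spring, kT = 1.0, 1.0, 2.0
n_samples = 200_000

# P(x, p) ∝ exp(-E/kT) with E = k x^2/2 + p^2/2m factorizes,
# so x and p can be sampled independently:
x = rng.normal(0.0, np.sqrt(kT / k_spring), n_samples)  # var(x) = kT / k
p = rng.normal(0.0, np.sqrt(m * kT), n_samples)         # var(p) = m kT
```

A scatter plot of these samples over the phase-space plane reproduces the two-dimensional Gaussian described above.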
But it's important to note that the physical state isn't specified by the outcome of measurements. So the physical state, the variables that define the state of the system, aren't real numbers. And they aren't specifically the outcomes of measurements. They're operator valued quantities, x hat and p hat. These are the things whose time evolution we calculate using the equation of motion. And if we know these, then we know the state of the system. And the operator valuedness of these quantities is particularly important because they don't commute with each other. And the usual interpretation of this non-commutation is that there is no value of the position and the momentum at a given time, or that a measurement of x will disturb the outcomes of subsequent measurements of p. The last thing I wanted to add to this list was that for energy eigenstates, which in many ways might seem like the closest analogy to the classical states since they have perfectly well-defined energy, the actual distribution from which measurements of the position would be drawn is related to the Hermite-Gauss polynomials, as I mentioned before. So in order to see how close we can make the connection between the properties of the classical oscillators and the properties of the quantum oscillators, let me take the particular case of the ground state of the oscillator. And in that case, the probability of finding the particle at a given position, x, is just the ground state wave function squared. So it's a Gaussian of width x naught. Sometimes this is called the zero point motion. If I ask what's the probability for outcomes of measurements of the momentum, that's also a Gaussian, whose width is likewise set by x naught. And if I ask about the probability of a measurement of any other variable, q sub theta, which is a linear combination of x and p, it will have its results drawn from a distribution with the same form. 
I guess we could define q sub theta as cosine of some angle theta times x plus sine of that angle times p. So a linear combination of x and p. So I'm really just plotting the square of the ground state wave function for the harmonic oscillator. And these particular curves, though, have an important consequence for measurements trying to look at specifically quantum mechanical features of the harmonic oscillator, which I'll draw for you next. So let me put a little subscript here to match my original notes. So there's a subscript with a zero ket in here, which says that this is the probability distribution associated with the ground state for measurements of x, the probability distribution function associated with the ground state for measurements of p, and the probability distribution function for the same state for any linear combination of them. Because of their form, we can say that we can regard them as residuals (what a statistics text would call marginals; I'll explain in a minute what this means if it isn't a familiar term) of a joint probability function that I'll call the w function associated with this state, which is a function of x and p. So now I'm going to define what I mean by a joint probability distribution: this is the probability that x is equal to some specific value and that p is equal to some specific value. And the reason that we can make this definition is that the residual of a joint probability is the probability that x is equal to some specific value regardless of what p happens to be equal to. So this should sound a tiny bit fishy, because of the whole point from before; well, let's have a look. Here are our residuals. So in case these words don't mean a ton to you, let me draw a picture of what these terms really mean. What these words mean is that as a function of x and p, for each value, there's a quantity w that has a certain value. And I'm just going to draw what it is here, and we'll argue that it gives us the right values. 
So it is just a two-dimensional Gaussian. And the definition of a residual, if we take it here, being the probability that x equals something regardless of what p is, is obtained by saying, OK, I want to know what's the chance that x equals this value here. So let me then integrate up the probability of x being equal to that and p being equal to whatever it might happen to be. So the probability of having x equal to x naught, regardless of what p is equal to, is the area under this curve here. And so if I plot that as a function of different choices of x naught, and this is sort of at the limit of my perspective drawing skills, but I'm going to show you a computer generated picture in a minute, then the area under such a slice is big when the slice passes through a lot of this shape and small when it passes through less of it. And so this is the probability of getting some x regardless of what p equals. And likewise, if I want to calculate the probability of getting a certain value of p regardless of what x is, I do the same thing but take the slice the other way. So if I want to know the probability of having a certain value p naught, I find that by summing up all the ways in which I can get p naught, which is to say over all the different values that x might be. And I plot that as a function of my choice for p naught, which is here, and again, I get a large answer for p's that are near the origin and a small answer elsewhere. And these curves look very much like the actual ground state wave functions, whose square modulus we plotted over there. And you can check by explicit calculation or substitution or whatever you like that this is the w function that gives as residuals our P sub x and P sub p and all of the other distributions, because you can see that the rotational symmetry of this thing is such that if I project along this axis onto a screen over here for some quantity q sub theta, I'll get the same shape. 
That joint probability distribution is equal to, or at least proportional to, e to the minus x over x naught, quantity squared, over 2, times e to the minus p over m omega x naught, quantity squared, over 2. So even though we know that x and p don't commute, so they're not supposed to have values at the same time, we can explain all of the measurements on a harmonic oscillator that's in its ground state as though they did. As though, in fact, the system were described as having a position and a momentum equal to this with a probability that's equal to the height of this function w at that value. So it's perhaps a little bit surprising that this width of the zero point motion, which in some ways can be attributed to the non-commutation of x and p, can still be described by assuming that x and p do actually have independent real values, which are just drawn from a random distribution. So in the data, there's no explicit signature of the fact that x and p are incompatible variables. Not only is it possible to construct such a function, it isn't such an outlandish function. We've already seen it once in this lecture. It isn't just any function. So this ground state w function, which is called the Wigner function, is exactly the same as the Boltzmann distribution for this oscillator, assuming that its temperature happens to be exactly equal to h bar omega over 2 kb. So this analytic form of the Wigner function is definitely the same as the analytic form of the Boltzmann distribution, just the distribution of the Brownian motion of a harmonic oscillator. That's interesting enough, but it's even worse from the point of view of an experimentalist hoping to distinguish some tellingly quantum mechanical feature in the data: not just the form, but the actual details, its size, is exactly the same as if that oscillator were just described by classical mechanics and coupled to a bath whose temperature happened to have this numerical value. 
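Here is a small numerical version of that statement, in dimensionless units (hbar = m = omega = 1, so x naught = 1; that normalization is my own choice for the sketch). The ground-state w function is a positive 2D Gaussian, and integrating out p recovers the position distribution:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 401)
p = np.linspace(-5.0, 5.0, 401)
X, P = np.meshgrid(x, p, indexing="ij")

# Ground-state Wigner function in dimensionless units: a positive Gaussian.
W0 = np.exp(-X**2 - P**2) / np.pi

# The residual (marginal) over p: sum along the p axis.
dp = p[1] - p[0]
marginal_x = W0.sum(axis=1) * dp

# Compare with the squared ground-state wave function.
psi0_sq = np.exp(-x**2) / np.sqrt(np.pi)
```

The marginal matches the square of the ground-state wave function, and by the rotational symmetry of W0 the same curve appears when projecting along any direction q sub theta.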
So this is an important result. Is it the end of the story, though? I mean, is this always going to be true for harmonic oscillators in their various states? Are there states that I could prepare for which the measurements of x and p would not be consistent with an underlying joint distribution of random real valued quantities? To check that out, or to illustrate that fact, let's do exactly the same thing for the first excited state of the oscillator. So in that case, the wave function of that state in the x basis looks like this. The wave function of that state in the p basis looks exactly the same. And the wave function in any other combination of these variables looks the same. And this fact is a consequence of the fact that the Hermite-Gauss polynomials, these eigenfunctions, are themselves eigenfunctions of the Fourier transform operator. So they're the functions that, when you Fourier transform them, give you the same thing right back again. That's why they keep looking the same. So now let's ask whether or not we can construct a classical probability distribution function that would give us these wave functions, or their square modulus, as residuals. So what I want to know is, if I have x and p here, can I construct a function w up above that plane such that when I project it onto a screen over here, I get the mod squared of one of these functions, which is to say I just get a double lobe. And when I project it over here, I should get a double lobe. And when I project it this way, or in fact any way, I'll always get the double lobe. And to answer this question, there's a nice bit of intuition around these joint distributions and their residuals. You can write down the equations and try to puzzle it out. You could read up about the inverse Radon transform, which would give you the rigorous answers. But a residual you can think of as being the shadow cast by an object whose opacity is given by the w function. 
So when we had that Gaussian here, each of those single lobed bumps was the shadow, the darkness of the shadow of light passing through an object that was rather transparent far from the origin and rather absorbing near the origin. That's why there was a shadow with one big lump in it. So now we want to find an object whose opacity is chosen such that the shadow that it casts this way is double humped. I could imagine such a thing. It's got some opacity here and there. And we want to find one whose shadow, when it's cast by light coming this way, gives a double lump. Well, that's also easy. I just have to have two absorbing things here. And I want it to cast a double lumped shadow when I shine light this way, which means that it should have some weight here and there. But now I've backed myself into a corner. Because when I had to put these two dark lobes here, that's going to make this become shadowy, not bright. So this axis is the darkness of the shadow. So based on this kind of reasoning, you can convince yourself that there is in fact no such object. There's no object that casts a shadow that is two dark lobes regardless of the direction of illumination. So that doesn't leave us completely high and dry, because there is still a function whose residuals will result in this, but it's not a function that is everywhere positive. So it's a function that has to have, and I will try to draw it, but then I'll show you the nice computer graphics just in case. So this looks like a volcanic island from the movies. So the function that accomplishes this is a volcano with a very deep crater, so deep, in fact, that it becomes negative, which is necessary to make this integral coming along here balance out to zero, even though you had to go through some opaque parts. So the reason that that's interesting is that this function does exist, but it is not everywhere positive. So whatever it is, it is not a probability distribution. 
No, so in order to get zero here, which is what I get when I integrate this w curve along this axis here, my function goes up here, so it had better go negative in order for this area to be zero, which is what's required right here on the axis. OK, so I will put us all out of our misery and just show this with nice computer graphics over here. So this is the quantum ground state, its position and momentum distributions. This is the classical oscillator's Boltzmann distribution in x and p. It's all Gaussians. And they can be ascribed to a, here I called it p, but a positive valued function whose projections give you the x and the p distributions that you'd like. So this is the situation that works for the ground state. For the first excited state, this is what the probability distribution of position measurements looks like. That's what the probability distribution of momentum measurements looks like. And the only two-dimensional shape that will give you these as shadows, so to speak, has to go negative. And so it can't be interpreted as a probability of having an instantaneous value of x and p. If I want to say, well, what's the probability of having such and such value of x and such and such value of p, and you tell me minus 0.03, then we're not talking about statistics anymore. So this negativity of w is a clear signature of something that doesn't exist classically. And it's a direct manifestation of the incompatibility of x and p. And things like this can be measured in a variety of ways in a lot of quantum optical systems. But as far as I know, this direct measurement of this quantity hasn't been carried out for mechanical systems. And it might seem like, if anything, this was good news for the experimentalists, right? Because when does this not work? When do you not get exciting quantum physics? Only when you cool all the way to the ground state. 
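The same sketch for the first excited state, in the same dimensionless units as before, makes the negativity explicit. The standard closed form of this Wigner function dips below zero at the origin, even though its marginal in every direction is a legitimate double-lobed probability distribution:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 401)
p = np.linspace(-5.0, 5.0, 401)
X, P = np.meshgrid(x, p, indexing="ij")
R2 = X**2 + P**2

# Wigner function of the first excited state: the "volcano" whose crater
# dips negative at the origin, W1(0, 0) = -1/pi.
W1 = (2.0 * R2 - 1.0) * np.exp(-R2) / np.pi

dp = p[1] - p[0]
marginal_x = W1.sum(axis=1) * dp                        # the shadow along p
psi1_sq = 2.0 * x**2 * np.exp(-x**2) / np.sqrt(np.pi)   # double lobe, zero at x = 0
```

So whatever W1 is, it is not a probability distribution, yet its projections are exactly the double-lobed curves drawn on the board.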
So this might seem like the whole epic of cooling to the ground state in optomechanics was a giant waste of time, because that's the state that isn't interesting. I should mention that the w functions for all energy eigenstates have negativity, except for the ground state. So really, you'd just have to stop cooling, and then you'd get to access this exciting physics. You could just stop at that first excited state. So it doesn't work that way. And to explain why that is, I want to introduce a type of state called the coherent state. So the coherent state has an index, just the way the energy eigenstates have an integer index; the coherent states are indexed by a complex number: alpha is an element of the set of complex numbers. And it's defined in the following way. It's defined by starting with the ground state and then shifting its position, which we do by acting on it with e to the i p hat times a number, which in this case is x0 times the real part of alpha. And then we shift it in momentum, which we do by acting on it with e to the i x hat times p0 times the imaginary part of alpha. And if this looks a little arcane, all this second operator is doing is shifting the mean value of the momentum by an amount proportional to the imaginary part of the number alpha, and the first operator is just shifting the mean location by an amount proportional to the real part of alpha. So this is just the ground state: you just pick it up and you move it over to the side by a certain distance, and you give it some momentum. And one thing to note is that the coherent state which is labeled by the number 0 is, as a result of this definition, the same thing as the energy eigenstate with quantum number 0. But that's the only instance in which an energy eigenstate is also a coherent state. A couple of other things to note about coherent states. 
And, I'm assuming this is at least somewhat familiar: of course, they can be represented in the basis of energy eigenstates. That's a complete basis. But it requires an infinite sum to do so. And the other interesting thing about these states is that they are eigenstates of the lowering operator, and the eigenvalue is the index alpha. But really what I want to call your attention to is this definition over here: that a coherent state is nothing other than a displaced ground state. So let me start by redrawing the Wigner function of the ground state, now just as a contour plot; here's p and x. The Wigner function is non-zero at the origin and then sort of dies off rather quickly away from the origin. And it has an area that's approximately equal to h bar, which is to say it's a minimum uncertainty wave packet. The fluctuations in x and p of the ground state are as small as quantum mechanics allows you to make them for any state. As a result of these definitions, I can also draw the Wigner function of a coherent state. So here's a contour map of the Wigner function of the state alpha. It's exactly the same as the ground state's; it's just been moved over by an amount that's set by the real and imaginary parts of the index alpha. So it also just looks like a 2D Gaussian of area h bar. So it is also a minimum uncertainty wave packet. So that's what I wanted to say about the statics of coherent states. Now I want to turn to their dynamics. So in order to describe the time evolution of a system that's initially prepared in one of these coherent states, let me use the Heisenberg equation of motion to find the time evolution of the operator that sort of defines the coherent states anyway, which is the lowering operator a. So to get the dynamics, I solve the following equation. Since the Hamiltonian is h bar omega times a dagger a plus 1 half, and the 1 half doesn't matter for the commutator, this becomes just a dot equals minus i omega a. 
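A quick way to check the eigenstate property numerically is to build the coherent state's expansion in the energy eigenbasis, the infinite sum mentioned above, truncated at a Fock level N chosen large enough that the tail is negligible (all values here are illustrative):

```python
import numpy as np
from math import factorial

N = 60                      # Fock-space truncation (illustrative)
alpha = 1.5 + 0.5j          # the complex index of the coherent state

# |alpha> = exp(-|alpha|^2 / 2) * sum_n alpha^n / sqrt(n!) |n>
n = np.arange(N)
norms = np.sqrt(np.array([float(factorial(j)) for j in n]))
coeffs = np.exp(-abs(alpha)**2 / 2.0) * alpha**n / norms

# Lowering operator in the Fock basis: a|n> = sqrt(n) |n-1>.
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
```

Applying the lowering operator to the coefficient vector reproduces alpha times the same vector, up to the exponentially small truncation error: the coherent state is an eigenstate of a with eigenvalue alpha.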
This is a differential equation that you know from just the classical harmonic oscillator. And it has solutions that are just the initial value of a times e to the minus i omega t. So what this means is that if I consider the action of the time dependent operator on a coherent state, well, I can just copy from the line above. This is a of 0 times e to the minus i omega t acting on the coherent state. The exponential is just a number; I can pull it out front. The Heisenberg operator at t equals 0 is just the regular Schrodinger operator, and its action on a coherent state is just to pop out that factor alpha. So if I look at the equation in this way, I can see that the coherent state is an eigenstate of the time dependent lowering operator, which means that in the Schrodinger picture, the time evolution of a state that starts in a coherent state is still a coherent state. It's just a slightly different one: it's the one whose eigenvalue is that. So now we can ask, what's the time evolution of this Wigner function? So if it starts out with some alpha that gives us a minimum uncertainty blob over here, well, all that happens is it remains a coherent state, just a coherent state whose alpha gets multiplied by an e to the minus i omega t, which means that it doesn't change at all, except it just goes spinning around in phase space. Again, this is x and this is p. And at every instant in time, it's just a coherent state centered at one place or another on this loop as it goes around. So this is an important result, because this is something that we can also state in classical mechanics. In both, we can say, imagine the following. Prepare the system in its ground state. We defined what we meant by the classical mechanical ground state: it's just the pendulum sitting still at the bottom. In both cases, we can displace it by a certain amount x0 in position, which is to say, we take the pendulum and we start it here. 
We can also displace it in momentum by saying it starts with a certain amount of velocity. You can do that in classical mechanics or quantum mechanics. And then we can make the following statement, which is that the subsequent evolution of the outcomes of position measurements is that the mean of those measurements will be equal to the initial mean times cosine omega t plus a term proportional to the initial momentum times sine omega t. And the average value of measurements of the momentum will be set by those same initial values of the averages, subject to cosine and sinusoidal oscillations. Now, usually in classical mechanics, we wouldn't bother to talk about the mean of the outcomes of measurements, because we're at least entitled to imagine that they would always be the same. But there's still a mean for that distribution. And the means in both the quantum calculation and the classical calculation are in agreement. I could make an even stronger statement, though. There does seem to be a difference, which is that in the quantum case, my measurements won't always be exactly those numbers. They'll be drawn from a distribution that's set by the size of this blob with area h bar. But as we just discussed, that blob is exactly what you would get for the Boltzmann distribution of the oscillator at a certain temperature. So another statement that I could make that would be true in both the classical and quantum descriptions of this system is that measurements of x and p will be drawn randomly from a distribution that is a 2D Gaussian centered at the point specified by the classical equations of motion and having a shape that's given by the Wigner function that we talked about a few minutes ago, or what is exactly the same function, the Boltzmann distribution, assuming a temperature that just happens to be equal to h bar omega over 2 kb.
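The agreement between the quantum means and the classical trajectory can be checked directly. Here is a small sketch of my own, in units where the means are just the real and imaginary parts of alpha (i.e., measured in units of x0 and the corresponding momentum scale):

```python
import numpy as np

omega = 1.3
alpha0 = 0.7 - 0.4j  # arbitrary initial displacement in x and p
t = np.linspace(0.0, 10.0, 500)

# Quantum side: the means follow the rotating label alpha(t).
alpha_t = alpha0 * np.exp(-1j * omega * t)
x_mean, p_mean = alpha_t.real, alpha_t.imag  # in units of x0 and p0

# Classical side: the familiar cosine/sine solution with the same
# initial conditions, in the same scaled units.
x_cl = alpha0.real * np.cos(omega * t) + alpha0.imag * np.sin(omega * t)
p_cl = alpha0.imag * np.cos(omega * t) - alpha0.real * np.sin(omega * t)

assert np.allclose(x_mean, x_cl)
assert np.allclose(p_mean, p_cl)
```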
So coherent states, like the ground state, act exactly the same as a classical system at a finite temperature. But why have I dragged you through this? Who cares about coherent states? I just pulled them out of nowhere and then subjected you to this lengthy description of all of their boring properties. Let's just skip these. Remember, what we said was interesting was the n equals 1 energy eigenstate. So why not just create it? And we know how to prepare a state. If I give you a system with a certain spectrum, and this is the n equals 0 ground state, and this is n equals 1 and n equals 2, let's say we know how to cool to the ground state, which we now do in a lot of experiments. So the system is here, and I want to put it in the first excited state. We learned about this in high school chemistry: you just apply an oscillating drive at the frequency that matches this transition. So just drive it at frequency omega. The problem is, it doesn't work quite that way in the harmonic oscillator. So in order to illustrate why that is, let me now solve for you the problem of the driven oscillator, where it's subject to a completely arbitrary time-dependent drive. High school chemistry reasoning about atomic transitions might lead you to suspect that we should apply a sinusoidal drive at this frequency, but let's solve the problem in full generality. I'm going to allow you to apply any time-dependent force you want to try and drive the system from the ground state to the first excited state. So the equation of motion will be the Heisenberg equation of motion, and we're interested in the time evolution, just as we did before, of the lowering operator. And now our Hamiltonian is going to include some driving. So I'm going to say that the Hamiltonian is just the oscillator itself, a dagger a plus a half, plus a completely arbitrary time-dependent force, f of t, which I can put in the Hamiltonian just by appending it to an x operator here.
The force enters the Hamiltonian multiplied by x; technically it's the potential you get by integrating the force with respect to x, but we're going to assume that f of t does not depend on x. It only depends on time. And in order to tidy up the notation a tiny bit, I'm going to redefine the force just by scaling it by some constants. I'll denote that as little f of t, so that I can rewrite x in terms of raising and lowering operators. So if you like, little f of t is just capital F of t, which really has units of newtons, times the size of the ground-state wave function. So now that I have the Hamiltonian written just in terms of a's and a daggers, it's easy to find its commutator with a. And that is just the regular old undriven term plus a driving term. It's still just a first-order differential equation, so we should expect to be able to solve it. And a nice way to solve a first-order differential equation is to try and put one side of the equation in terms of a total derivative and then just integrate both sides. So we can do that if I bring this term over to the other side. And if I multiply everything by a factor of e to the i omega t, the only reason for doing that is that this quantity here is equal to the total derivative with respect to time of a e to the i omega t. And over here, I still have the same old expression. And the reason that this is a nice thing to do is that I can take the dt from down here, pop it up over here, and integrate both sides. We'll make this a definite integral from time t equals 0 to time t equals capital T. And since over here we're integrating with respect to a different variable, let me just make that explicit in the limits of integration. And the integral on the left-hand side is easy to do; this is the whole point of putting things in terms of a total derivative. It's just a at t times e to the i omega t minus a at 0.
And on the right-hand side, we really don't have much simplifying that we can do without knowing the form of f of t, which at this point we haven't specified. So let me just simplify things a tiny bit by isolating this term, which is sort of what we want (this is the time evolution of the operator a), and putting everything else on the other side. And this is really as much simplifying as I think we can do without knowing some specific functional form for the time-dependent force. But we can still learn about how coherent states evolve under such a force by acting with this operator on a coherent state. So let's now act with the operator that we just found on a coherent state and find out how it evolves as a function of time. So, just copying from over there, the operator is the zero-time operator, which is just the Schrodinger operator, and then there's this integral, and this whole thing is acting on a coherent state. And even though this thing in the brackets is quite a lengthy expression, it's all numbers except for one operator, which is the Schrodinger-picture lowering operator. And we know how that operator acts on the state: it just pops out a factor of alpha. Everything else is just a number, which just multiplies the state and leaves it unaffected. So this whole thing is evidently just a number times the coherent state. But comparing these two expressions, and remembering that this thing here is the operator that we calculated, this is a at t, and its action on a state is to give us back that state times what is still just a number, even though it's a lengthy expression. So this thing in brackets must be the eigenvalue of a of t. So this is the big result. This tells us that if you start with a coherent state, alpha, and you apply a completely arbitrary time-dependent force to it, it will still be a coherent state. Which coherent state? This one, because this is the number that you get back when you act on it with a lowering operator. That's how you know which coherent state it is.
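Under one consistent convention (my own sketch, with the scaled drive f real, hbar absorbed, and a-dot = -i omega a - i f(t)), the bracketed eigenvalue is alpha(T) = e^{-i omega T} [alpha - i integral from 0 to T of f(t) e^{i omega t} dt]. Here is a numerical sanity check of that closed form against direct step-by-step integration of the equation of motion; the pulse shape is an arbitrary choice:

```python
import numpy as np

omega, alpha0, T, n = 1.0, 0.5 + 0.0j, 6.0, 20000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
f = 0.3 * np.cos(2.5 * t) * np.exp(-((t - 3.0)) ** 2)  # arbitrary real pulse

# Closed form: alpha(T) = e^{-i w T} (alpha0 - i \int_0^T f(t') e^{i w t'} dt')
g = f * np.exp(1j * omega * t)
integral = np.sum(0.5 * (g[1:] + g[:-1])) * dt  # trapezoid rule
alpha_closed = np.exp(-1j * omega * T) * (alpha0 - 1j * integral)

# Direct midpoint (RK2) integration of alpha' = -i omega alpha - i f(t)
alpha = alpha0
for k in range(n):
    k1 = -1j * omega * alpha - 1j * f[k]
    a_mid = alpha + 0.5 * dt * k1
    k2 = -1j * omega * a_mid - 1j * 0.5 * (f[k] + f[k + 1])
    alpha = alpha + dt * k2

assert abs(alpha - alpha_closed) < 1e-5
```

Whatever pulse you substitute for f, the two answers agree: the drive only displaces the coherent-state label, it never changes the character of the state.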
So this means we're not going to get the n equals 1 energy eigenstate, which is bad news: we're stuck in a coherent state, and the outcomes of all experiments will look exactly like a classical oscillator subject to a bit of thermal fluctuation. And actually, the news is even worse than that, because if I ask which coherent state it is that we get, which of these many boring coherent states, the answer is the most boring one. It's the one that you would get just by calculating the system's dynamics classically. So it isn't just a coherent state, which has a lot of classical properties; it's the coherent state that you would calculate from classical dynamics. And to show you that it's the one that we would get from f equals ma, let me solve the same problem in classical mechanics by solving Newton's second law. So here's the exact same problem, just written in terms of f equals ma. It's an inhomogeneous differential equation, so I know that my solutions are going to consist of a homogeneous part, which is independent of the drive, and an inhomogeneous part. The homogeneous part is, well, we already know what an oscillator subject to no force does; I've already written this a few times. And the evolution of p in the homogeneous solution, I realize now that you can't see what I'm writing, but it's OK because I've already written it a whole bunch of times. It's just the same free evolution of a harmonic oscillator. So that's going to be the homogeneous part of the solution. Then we need to find an inhomogeneous solution. And for this, I'm going to make use of the linearity of the system that I mentioned at the very beginning of my lecture, where I said that if your force as a function of time can be written as a sum over a bunch of functions of time, then the motion is the sum of the responses. At the start of the lecture, we considered cosines and sines, but this is true for any functions, owing to the linearity of the equations of motion.
Then, if the force can be written that way, the corresponding motion is just the sum over the solutions to each of those individual forces. And as a particular way of disassembling whatever crazy force as a function of time you give me into manageable units, let me do the following. Let me write whatever time-dependent force you give me as a sum over a whole bunch of impulses. So I'm going to take that nice continuous function and approximate it by a series of delta function pulses, each one having a given strength and occurring at a certain time. And then let me just take the limit of a whole lot of these little pulses, so much so that I can turn the sum into an integral. And you can see that this equality holds just from the properties of the Dirac delta function. Now, this may seem like an odd thing to do, to take your force as a function of time, which you may have specified to be something nice and smooth, and to say, no, I'm going to approximate it as a whole bunch of little kicks. But the reason to do that is that the solution, all of these x sub i, the evolution that results from a tiny little kick, is really easy to evaluate. So those little x sub i's I'll just write down without deriving them. If I give an impulsive force to an oscillator, that's the same thing as just altering its momentum instantaneously. And I guess I should say that this is equivalent to a Green's function. So the subsequent motion of an oscillator that has been subject to an impulse at time tau is that it just oscillates sinusoidally, as though it had been given an impulsive kick, and the size of that oscillation is set by the magnitude of that impulse. The subsequent evolution of the momentum after such an impulse is very similar, but it oscillates cosinusoidally. So I guess by that clock we have two or three minutes left, which I think will be OK. Does that clock seem accurate? Are people ending at 7 o'clock sharp? Pretty close? OK. So where did this come from?
Well, you know that if I apply an impulsive force here at some time tau, the oscillator will not have been moving beforehand, and then it will start to ring. And likewise, if I apply that force at time tau, that will take the momentum, which was always 0, and boost it, at which point it will just begin to oscillate. So these are the Green's functions that I've written here. And now I just go back to this statement here, that the full solution is just gotten by summing over, or integrating if it's a continuum, all the little sub-forces. That means the inhomogeneous solution is just given by this, and the inhomogeneous solution's momentum is almost exactly the same; there's a cosine here. OK, so adding it to the homogeneous solution, which I'll just keep from a second ago, gives us the full solution, and correspondingly for the momentum. So this is the generic driven harmonic oscillator, no assumptions about the form of the driving force. And now, to compare with the exact same thing that we calculated quantum mechanically, we just have to note the following identities, which I should have mentioned earlier: the expectation value of the position in a coherent state is just x0 times the real part of alpha. I sort of did mention this, that the real part of alpha just tells you by how much you're taking the ground state wave function and displacing it in space. And the imaginary part tells you by how much you're displacing the ground state wave function in momentum. So if you just remember these two things and then apply them to our result from a couple of minutes ago about what alpha is as a function of time when it's subject to a completely arbitrary time-dependent force, well, I guess I will write it. But the point is that it's exactly equal to what I wrote over there for the classical position, except that the initial conditions are replaced by the expectation values of the initial conditions.
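The two calculations can be put side by side numerically. This is my own sketch under one specific set of conventions (hbar = m = 1, the drive entering the Hamiltonian as +F(t)x so the classical force is -F, position scale sqrt(2/omega) playing the role of x0, and an arbitrarily chosen pulse): the classical Green's-function superposition and the quantum expectation value x0 times Re alpha(t) land on the same trajectory.

```python
import numpy as np

# Conventions (one consistent choice, not necessarily the lecture's):
# H = omega (a^dag a + 1/2) + F(t) x,  x = (a + a^dag) / sqrt(2 omega),
# so <x> = sqrt(2/omega) Re(alpha) and the scaled drive is f = F / sqrt(2 omega).
omega, T, n = 1.5, 8.0, 4000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
F = np.sin(0.7 * t) ** 2   # arbitrary drive; start from rest in the ground state
alpha0 = 0.0 + 0.0j

# Quantum side: alpha(t) = e^{-i w t} (alpha0 - i \int_0^t f(t') e^{i w t'} dt')
f = F / np.sqrt(2 * omega)
g = f * np.exp(1j * omega * t)
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))
alpha_t = np.exp(-1j * omega * t) * (alpha0 - 1j * cumint)
x_quantum = np.sqrt(2.0 / omega) * alpha_t.real

# Classical side: Green's-function (impulse-response) superposition for
# x'' = -omega^2 x - F(t):  x(t) = -\int_0^t F(tau) sin(omega (t-tau)) / omega dtau
x_classical = np.empty_like(t)
for i in range(n + 1):
    h = -F[: i + 1] * np.sin(omega * (t[i] - t[: i + 1])) / omega
    x_classical[i] = np.sum(0.5 * (h[1:] + h[:-1])) * dt

assert np.max(np.abs(x_quantum - x_classical)) < 1e-3
```

Swapping in any other drive F(t) leaves the agreement intact, which is the point of the lecture's conclusion: the mean of the quantum evolution is the classical trajectory, kick for kick.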
Those dots mean exactly the same thing as in the classical result. So this is the big conclusion of what I wanted to say: if I have a harmonic oscillator and I subject it to an arbitrary time-dependent force, classically it will start at some location in phase space, and as a result of my really complicated force, it will have some evolution, which we just calculated. If I do the same thing quantum mechanically and start the system in a coherent state located at the same spot, which just means there's a 2D Gaussian blob of uncertainty around that spot, its subsequent evolution will exactly follow, at every moment in time, the phase space trajectory of the classical oscillator, albeit with a certain amount of uncertainty. How much? The absolute smallest amount allowed by the laws of quantum mechanics. So I guess that's a good stopping point. And it illustrates why it is so fiendishly difficult to prepare the one thing that we wanted, which was the first excited energy eigenstate. You can't do it by applying a linear drive. So at the start of my first lecture tomorrow, I'll give a couple minutes of comments about how you could go beyond this, but apparently you do need to do something other than employing the usual tricks. So we'll stop here and, I guess, pick up tomorrow morning.