Thanks. OK, so welcome back. Good morning. Thank you for coming again. I just wanted to pick up a little bit back at the start of the lectures. I rewrote this list that I meant to write at the beginning of the first lecture, but the chalkboard is a different shape than my notebook, and so it all came out rather differently. But here's the list that I meant to put up about conventional wisdom about harmonic oscillators: the oscillator by itself, and its classical and quantum description. What happens when we apply an arbitrary drive? What happens when we couple a bunch of oscillators? And what happens when we damp them? In the last lecture, we really tackled this. So this was lecture one. And my point was to tell you something that is sort of sad or boring, which is that, by and large, the harmonic oscillator, which is what we study in optomechanics, even when it's quantum mechanical, really tends to mimic classical physics as precisely as any quantum system can. What we found was that, at least for a broad class of states, the coherent states, even in the presence of an arbitrary linear drive, the physical state of the system will correspond exactly to the evolution of the classical system plus the smallest amount of fuzz, the smallest amount of fluctuations allowed by the laws of quantum physics. So this was lecture one. And I say it's sort of a sad story, but I think it's a crucial thing for experimentalists to know. Even though this was pure calculation, and we didn't talk about any experiments, I think this is an idea that is really central to the field of optomechanics. You have to know this if you're going to think about how to interpret your measurements and how to look for quantum effects in macroscopic, massive objects. So that's why it's an important thing to see. And I wanted to follow up a tiny bit and give you some intuition for why this was. We wrote down equations, we solved them, we made plots, but there's really a very simple picture for this, which is the following. If I have the harmonic oscillator here, and these are its energy eigenstates, 0, 1, 2, 3: one of the first things that we notice about the harmonic oscillator, even in freshman quantum mechanics, the first time you see it, is that these are all spaced by exactly the same amount. And that's something you don't find in any other physical system. As far as I know, this is unique to the parabolic well. And that has a consequence, which is that when you apply a drive at this frequency, which, as you know from atomic physics or wherever, should drive this transition, indeed it does drive that transition. It does take you from the ground state up to the first excited state. That's what we wanted. We wanted to make this state. This is the one place where we found really interesting quantum physics. But because this spacing is exactly the same, you don't apply the drive just to these two levels. You apply the drive to the entire system. So as soon as there's a bit of wave function up here, you also start getting transitions like that. And as soon as there's a bit of wave function up here, the same drive that you're applying also makes transitions like that. And in fact, there are also transitions down, and down again, and back up, and so on. With the result that applying this drive doesn't just give you the excited state. It gives you some superposition of all of the energy eigenstates. And not just any superposition: I can tell you exactly which one without doing any fancy math over here.
Because in the last lecture, we derived the fact that applying a drive to the ground state gives you a coherent state. And here's the coherent state written out as a sum over energy eigenstates. And the reason that this happens, the reason that applying this drive spreads you out in this sort of hopeless way over all these states, is this evenly spaced spectrum. That's one way to see what the problem with the harmonic oscillator is and why you can't just drive into one of these interesting energy eigenstates. So hopefully this provides some intuition. It also provides us with a clue for how to solve this problem.

One way to solve the problem would be to tweak the Hamiltonian a bit, not even the drive, but just tweak the confining potential so that instead of a perfect parabola, it's a little bit distorted. At that point, maybe I leave the ground state at the same energy, but maybe I shift the first excited state down by a bit, and maybe this perturbed potential shifts the second state up a bit and the next one down. It could do anything. But as long as it does something, then when I apply a drive at this frequency, call it the new frequency, that drive, because these levels have been moved around a bit, is not resonant with the next excited state. So now we have exactly what we want. We apply a drive at some frequency. It will take the state from the ground state to the first excited state, and then it will not take the wave function up to the next excited state. And then I'll really have a two-level system. It really will behave like an atom. I can drive from here up to there and, if I want to, put the entire wave function up here. Or, if I don't want to, just let it Rabi oscillate back and forth and get all the rich quantum physics associated with discrete levels. So this would be mechanical nonlinearity at the few-phonon level. Now, of course, everything is nonlinear. If I have a diving board out over a swimming pool and I get it oscillating at large amplitude, that's a nonlinear oscillator. But the key point is that we don't want the level spacing to become uneven somewhere astronomically high. We want the level spacing to become uneven when we have a sort of manageable number of quantum states. So this is one goal for the field. I don't think we're there yet, but people are certainly making progress.

Another approach would be to make the drive nonlinear. Let me remind you first what the linear drive looks like, which is the one that we considered. It was just a time-dependent force, which turns into a Hamiltonian of that time-dependent force times the position operator, which is proportional to a-dagger plus a. So this is a term in the Hamiltonian that can raise you up by a level or lower you down by a level. And that's why applying it pops you from here up to here and then here and then here and down again and fills up all the rungs of this ladder. But if instead we had what's called a nonlinear drive, which really just means a force that is a function of time, yes, but also of position, then that will give us terms in the Hamiltonian that involve a-dagger a-dagger and a a. Any function of x you can expand in a Taylor series in x, and the lowest-order terms would give you a-dagger a-dagger and a a terms in the Hamiltonian.
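In standard notation (mine, not necessarily what was on the board), the coherent state the linear drive produces, and the two-phonon terms a nonlinear drive adds, look like this:

```latex
|\alpha\rangle \;=\; e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\,|n\rangle,
\qquad
H_{\mathrm{lin}}(t) \;=\; F(t)\,\hat{x} \;\propto\; F(t)\,(\hat{a}^{\dagger}+\hat{a}),

\hat{x}^{2} \;\propto\; (\hat{a}^{\dagger}+\hat{a})^{2}
\;=\; \hat{a}^{\dagger}\hat{a}^{\dagger} + \hat{a}\hat{a} + 2\hat{a}^{\dagger}\hat{a} + 1 .
```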
And that would mean that if you started in the ground state, even with a perfectly linear oscillator, if you apply this nonlinear drive, it only has matrix elements, or at least it has some extra matrix elements, to take you from the ground state to the second excited state, and from the second excited state to the fourth one, and so on. It will still spread you out all through this ladder, but it won't spread you out in precisely this fashion. So it won't recapitulate the classical oscillator's dynamics. It'll give you something else. And at least in the simplest case, it will give you a squeezed state, but it can be more dramatically non-classical even than that. So these are two routes to defeating the tyranny of the harmonic oscillator and getting some interesting quantum physics out of it.

The last one that I'll mention is strong measurement. We didn't talk much about this, but say I have a linear oscillator, and I've applied a linear drive to it, and I'm stuck in this situation of having a coherent state, so I have a certain amount of wave function amplitude in all of these levels. So here's some wave function, here's a bit, here's a bit, here's a bit. Now, if I have an instrument that performs a strong projective measurement of the system's energy, if I have a detector that's going to give me one click if the system is here and two clicks if it's here and three clicks if it's here and so on, then the result of that measurement will be that the system is left in the state corresponding to the outcome of my measurement. So if I do the measurement and I get three clicks, click, click, click, that means at the end of the day the system is just in that state. This is the notion of a projective measurement. And based on our discussion, you can see that even though you might not think of measurement as an inherently nonlinear process, that's exactly what it is, because it takes a coherent state and, bang, turns it into an energy eigenstate. So that's another way to produce an energy eigenstate. At present we don't have a great way of doing this with mechanical oscillators. There are ways of doing this with photons, for example, with a number-resolving photon detector. But there's also a lot of progress on realizing this with mechanical oscillators. So last lecture was all about the difficulty of getting into these really interesting quantum states. Here are three quick descriptions of possible routes to do just that.

Sorry, say that again? So, for mechanical oscillators that are bigger than single atoms, I think no one has made any Fock states. For individual ions, people have demonstrated Fock states, n equals 1, 2, though I don't know the literature well. But I think they've seen at least the n equals 1 Fock state for ions. For microfabricated, nanofabricated things, nothing yet, though there's a lot of progress on all these fronts. So that's the end of lecture one, a little bit late, but that's the topic. I'd be happy to take more questions.

Absolutely. So these transition rates you might calculate using Fermi's golden rule. Fermi's golden rule, which is first-order perturbation theory, requires that the frequency of your drive match the energy difference, for energy conservation. And then it also says that the rate is going to be proportional to a matrix element. And if the symmetry of the wave functions and the symmetry of the drive is such that that matrix element is 0, then that's what's called a forbidden transition, and you won't drive it even if you apply the right frequency.
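You can see that selection rule at a glance by writing the ladder operators as matrices in a truncated Fock basis. A minimal numpy sketch (my own illustration, not from the lecture):

```python
import numpy as np

nmax = 6                                          # truncated Fock basis |0> ... |5>
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
x = a + a.T                                       # position ~ (a + a-dagger), real here

print(np.round(x, 2))
# Only the first off-diagonals are nonzero: <m|x|n> = 0 unless m = n +/- 1,
# so a linear drive climbs the ladder one rung at a time.
```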
In mechanical systems, if your drive is of this form, it will couple the nth state with the (n+1)th state, and the nth state with the (n-1)th state, to first order in perturbation theory, which gives you all these little arrows I was drawing before. At higher orders, there would be, I think, two-phonon transitions. Does that sound right? Let's say I have not shown that, that's for sure. My guess is that there are some similarities between applying a nonlinear drive to the quantum system and applying a nonlinear parametric drive to the classical system. Mark Dykman would be the expert on that. So I think that the tyranny is not as strict once your drive is nonlinear. Other questions?

OK, so then we'll move on to Lecture 2. And really, the point of Lecture 2 and Lecture 3 is to address this claim here. Specifically, I want to show you that, rather than what is conventionally asserted in Goldstein, in Landau and Lifshitz, and in the literature in general, the statement is quite wrong: the introduction of damping into systems of coupled oscillators leads to really qualitative changes, including, most dramatically, non-trivial topology. Now, at any conference that is not specifically dedicated to topological physics, you can hear lunchtime conversations of people bemoaning the overuse of the word topological, and a sort of audience exhaustion around it. I will give you a very clear description of what I mean by non-trivial topology appearing in systems of coupled oscillators. So this is going to be the goal of Lectures 2 and 3. And I should say that Lecture 1 is stuff that anyone working with quantum optics or harmonic oscillators should learn at some point and is very well known in the field. What I'm talking about in Lectures 2 and 3 is a little bit less widely appreciated. I also wanted to say at the outset that I learned the topic of Lecture 1 from Steve Girvin, who explained all this stuff to me. Today's lecture is going to come from this little paper, which is exactly two pages long, in the American Journal of Physics, and which is incredibly helpful. It's very dense, though, so today's lecture is really going to expand it out. But I think everything I talk about is more or less covered there. And for the third lecture, I'm thankful for a collaboration with Nick Read, who's a condensed matter theorist at Yale.

OK, so I claimed that, contrary to what we're usually taught, the introduction of damping to a system of coupled oscillators results in some really surprising, qualitative, and even potentially quite powerful changes to how we describe the system. So how is that possible? From where does this emerge? To answer that question, I think I need two lectures. And in the first of them, what I'm going to do is revisit the apparatus of the normal mode description for damped systems. And to illustrate what's so different from what you've learned previously, I'm going to go all the way back to the beginning and talk a bit about how we use linear algebra in this simple kind of physics. Usually, at least the way I learned linear algebra as a physicist, it was in the context of quantum mechanics and Schrodinger's equation. And I want to emphasize that what we will do in this lecture is not at all quantum mechanics.
In fact, it's a description of the linear algebra that I never got to learn, because I learned linear algebra through quantum mechanics. But to make sure that we're all starting at the same place, I'll start with this. Usually, we start with the time-dependent Schrodinger equation. And given the form of this equation, we assume that the time dependence of the state vector is given by some time-independent part times a harmonic time dependence at some frequency, where this omega is as yet unknown and this state vector is also unknown. If we plug this expression back into the time-dependent Schrodinger equation, taking a time derivative gives us a minus i omega, e to the minus i omega t, psi for the left-hand side, and on the right-hand side, minus i over h-bar, e to the minus i omega t, psi still. And then a tiny bit of reorganizing leaves this as... there's something important missing here. So definitely, yeah, OK: if I make glaring errors of fact or judgment or taste, definitely raise your hand and let me know. There's supposed to be the Hamiltonian in here, and also here, and also here. So now we do a tiny bit of reorganizing, and that means that the Hamiltonian acting on this time-independent constant vector is h-bar times that unknown frequency omega, times the vector. And usually we write this by combining the h-bar and the omega into a single constant, the energy. And since the Hamiltonian was all along an operator, and this is a number, this is what we refer to as the eigenvalue problem. It's a funny equation, because there are actually two unknowns: a vector unknown and a scalar unknown.

Oh, so this isn't a harmonic oscillator, and omega isn't the oscillator frequency of the classical system. This is just the unknown eigenvalue. But yeah, fair enough: I will agree with you that there will be a lot of different solutions. There will be a lot of different psis that solve this equation, each one with a different omega, and we'll pair them up. So it's a funny kind of equation, because there are actually two unknowns and only one equation. In some literature, this is called the simple eigenvalue problem, and we'll see where this term comes from later. OK, so this is where we wanted to start. And now, just to emphasize that we're not going to be talking specifically about quantum mechanics, let me stop writing Hamiltonians as the operators and psis as the vectors, and just rewrite this problem as some matrix A, which we'll take to be finite, N by N, times a vector x, which is an N-dimensional vector, equal to some number, little a, times that same vector. And again, the unknowns are the x vector and the constant a. So this is the equation we have to solve. We can start by just putting everything over on one side and factoring out the x vector. And in order to combine this N-by-N matrix A with this scalar little a, let me multiply little a by the N-by-N identity matrix. That gives everything the same form, times the x vector, equals 0. Now, this equation has solutions whenever the x vector is 0. In linear algebra, these are called the trivial solutions. And in a lot of physics problems, where we want to restrict ourselves to normalizable vectors, we are not interested in those. The ones that are of interest are the non-trivial solutions. And those exist, I definitely won't derive this, but you would learn it in a linear algebra class, exactly when the determinant of the thing in the parentheses is 0.
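Collecting the steps on the board into one place, in my notation (with the ansatz e to the minus i omega t):

```latex
i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle,
\qquad |\psi(t)\rangle = e^{-i\omega t}|\psi\rangle
\;\;\Rightarrow\;\; H|\psi\rangle = \hbar\omega\,|\psi\rangle,

A\,\vec{x} = a\,\vec{x}
\;\;\Longleftrightarrow\;\;
(A - a\,\mathbb{1})\,\vec{x} = 0,
\qquad
\det(A - a\,\mathbb{1}) = 0 \;\;\text{for nontrivial } \vec{x}.
```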
Now, what's the determinant of an N-by-N matrix? The determinant of an N-by-N matrix is an Nth-order polynomial, called the characteristic polynomial of the matrix. So you could just write this out as an N-by-N square matrix and use your favorite algorithm for calculating its determinant, and you will find that it's an Nth-order polynomial in this constant little a. It will look like some coefficient times little a to the N, plus some other coefficient times little a to the N minus 1, and so on and so forth, all the way down to the last one. And it's when that polynomial equals 0 that we get our non-trivial solutions, so you have to hunt around for the values of a such that this polynomial equals 0, which is to say you have to find the roots of this Nth-order polynomial. So there's another thing that I won't derive, which is the fundamental theorem of algebra. That says that if all these coefficients are complex numbers, then there are exactly capital-N roots, which is to say values of a that solve that equation, among the complex numbers. Strictly speaking, that holds if we count degenerate roots according to their degeneracy. So any Nth-order polynomial has N roots in the complex plane. We'll make use of this at various points. But as long as A is not just any matrix, as long as A is Hermitian, then all the roots are real. So I guess we'll call those roots a sub n, and they're all real. The theorem still holds, but if the matrix A has this certain restricted form, then all of those roots end up lying on the real axis.

So we've now solved half of this funny equation with two unknowns. We found all the values that little a can take on: they are the roots of this characteristic polynomial. That's half of what we wanted, but we also wanted to know about those unknown vectors x, the eigenvectors. And one way to get them is to say, well, now we have all of our roots; let's just take one of them and plug it back into our original equation. So let me take the nth root, a sub n. And then this equation here with a star on it, I'll just copy over, having plugged in my nth eigenvalue. And it becomes this. And now I have one equation with only one unknown. This is one of my known roots from that polynomial; here's my one unknown. And I can just go ahead and solve for this vector. That vector will certainly depend on which root I plugged in here, so we call it the eigenvector that's associated with this eigenvalue. And then we take the next eigenvalue, plug it into this equation, solve for the eigenvector that's associated with it, and repeat, and repeat, and repeat. And we will get N eigenvectors. So this is how the eigenvalue problem usually proceeds in quantum mechanics.

We can derive a few quick results about these eigenvectors. The first is their orthogonality. What I would like to do is take one of these eigenvectors, x sub m, and dot it into another one, x sub n, and see what I get. In quantum mechanics bra-ket notation, this would be taking the mth eigenvector and dotting it into the nth one and asking what we get. So how can I evaluate this expression? At this point, the only thing that I know about these eigenvectors is what I've written so far.
So the only thing that I know about x sub n, say, is that if I act on x sub n with the operator A (OK, I guess we're going to use some bra-ket notation here), I get out the same vector times the eigenvalue associated with it. The only thing that I know about vector m is the equivalent thing: if I act on it with capital A, I get out the same vector times the mth eigenvalue. So let me now rewrite this equation for m. Somehow I want to take these two equations, put them together, and calculate this thing here. In order to do that, let me take the equation for m and consider the operation from the other side, so to speak, and get this expression here. And then let me take the inner product of both sides of that equation with the nth eigenvector. That means that on the left-hand side, I have x sub m, A, x sub n, equals a sub m times the inner product of x sub m with x sub n, which is, after all, the quantity that we're after. But I still don't have any easy way of evaluating that further. So let me do the same thing with the nth eigenvector: let me take this expression here and dot it from the other side with x sub m. So I'm going to take the inner product of this equation with x sub m, and that gives me this. Now, if I look at this equation and I look at this equation and I subtract them from each other, the left-hand side minus the left-hand side is 0; they share the same left-hand sides. And on the right-hand side, they're almost the same. They both have this inner product. But one of them (whoops, this was an a sub m) has a sub m, the mth eigenvalue, and the other one has a sub n, the nth eigenvalue, both multiplying this inner product. So for this equation to hold, if the eigenvalues aren't degenerate, we have to have that these two eigenvectors are orthogonal. So we just derived the orthogonality of eigenvectors, and we're reminded of the fact that if your two states happen to be degenerate, their eigenvectors don't actually have to be orthogonal. So this is the orthogonality of eigenvectors, derived from this kind of eigenproblem. And the reason that I derive it for you is that we use this an awful lot. Experimentalists use this. It's nice to know that the eigenfunctions are orthogonal, or that the eigenmodes are orthogonal.

The other property that we use a lot is the completeness of these eigenvectors, which is to say that any other vector can be expanded in terms of them. And this is also pretty quick to show. Let me rewrite our last result by saying that the inner product of x sub m, the mth eigenvector, with the nth one is the Kronecker delta in m and n, at least, again, as long as they're not degenerate; we'll leave that as a special case for the time being. If that's true, then I can do the following. I can take both sides, sum over the index n, and multiply from the right by the bra of the nth eigenvector. And the reason that I do that will become a little clearer when we do the same thing to the right-hand side, because summing over a Kronecker delta is always easy; it amounts more or less to a change of variables, or something like it. Now, on the left-hand side, note that this mth eigenvector doesn't care about the summation index, so I can pull it out front, and the left-hand side looks like this. And on the right-hand side, well, I can just do this sum. So the point being that on the left-hand side, I have the bra of the mth eigenvector and a bunch of stuff, and on the right-hand side, I just have the bra of the mth eigenvector, which tells me that all of this stuff must be the identity operator.
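So, in one line each (my notation), the two results we'll keep reusing are:

```latex
\langle x_m \,|\, x_n\rangle = \delta_{mn}
\quad\text{(orthonormality, non-degenerate case)},
\qquad
\sum_{n=1}^{N} |x_n\rangle\langle x_n| = \mathbb{1}
\quad\text{(completeness)}.
```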
And this is another thing that we use a lot in calculation. Sometimes it's helpful to simplify a calculation just by sticking in an identity operator realized as a sum over eigenstates.

Let's now stop talking about quantum mechanics and start talking about coupled oscillator systems, having reminded ourselves of these results. Because I've sort of always had the sense that finding the normal modes of coupled oscillators was an awful lot like reapplying this apparatus here. To begin at the beginning: I have a bunch of oscillators coupled by springs, and for the time being, let's assume all their masses are the same, but not necessarily their springs. So this is object 1; it has coordinate x1. This object has coordinate x2. This object has coordinate x3. And they're coupled by springs: spring 1-2, spring 2-3. And we don't have to restrict ourselves to nearest neighbors. We have spring 3-4 here. We could also have (this looks a little funny, but it's easy to imagine what it is) a spring that connects mass 1 with mass 4, and so on and so forth: springs between all of these different masses. And if I write down Newton's second law for this, let me start with the first object. Its mass times its acceleration is equal to the forces on it. And the forces on it are just given by the compression of every spring to which it is attached. So there's a force on it due to this spring, and the compression of that spring is x1 minus x2. And there's also a force on this guy due to the compression of this other spring. And let me just imagine that, in fact, there are springs everywhere there possibly could be, a completely arbitrary network of springs between these things. So there's also a spring between mass 1 and mass 3, whose compression is this, et cetera. And then I write down the equation of motion for the second mass over here. And again, the forces acting on it are just the compressions of all the springs to which it is attached, the first one being the spring from mass 2 to mass 1, the next one being the spring from mass 2 to mass 3, and so on, all the way down to the very last one.

So this is a messy bunch of coupled equations. The one thing that we can make use of, though, is that all of the degrees of freedom appear only linearly. They appear over and over again, but always just to the first power. And that means I can rewrite this using matrix notation, by saying that what I have here, all these entries, is the second derivative of a vector called x. They're all being multiplied by the same mass, so let me just put an m here. And on the other side, I have all the different x's in some combination. But because it's a linear combination, I can write that as some matrix here, K. The elements of this capital-K matrix are not quite these spring constants, but almost. You can go through and figure out what the actual coefficients are, and they're certainly determined by all of these springs. Now, if these are real springs, or anything even kind of like them, this K matrix will be Hermitian. It's not impossible to violate that, and we can talk about it later, but with conventional mechanical apparatus, you're very likely to get a Hermitian matrix here. Furthermore, its entries are all going to be real numbers. So the matrix K is Hermitian, and actually it's in an even more restrictive class: it's a real Hermitian matrix. But for now, its Hermiticity is enough.
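As a concrete sketch of what that K matrix looks like, here's a small made-up spring network in numpy, with the sign convention m x'' = -K x (my example, not the one on the board):

```python
import numpy as np

# A small made-up spring network between N equal masses.
N = 4
k = np.zeros((N, N))
k[0, 1] = k[1, 0] = 1.0   # spring between masses 1 and 2
k[1, 2] = k[2, 1] = 2.0   # spring between masses 2 and 3
k[2, 3] = k[3, 2] = 1.5   # spring between masses 3 and 4
k[0, 3] = k[3, 0] = 0.7   # the funny long-range spring, mass 1 to mass 4

# The diagonal entry for each mass collects every spring attached to it,
# and the off-diagonal entries are minus the pairwise spring constants.
K = np.diag(k.sum(axis=1)) - k

print(np.allclose(K, K.conj().T))   # True: K is Hermitian (here, real symmetric)
```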
So to make progress, we will again assume (and you could derive this, for example, from the method of Frobenius, but for now we'll just take it) that x of t is equal to some constant vector times e to the i omega t. Again, both the vector coefficient and this frequency are unknowns. This is going to turn out to be our eigenvector when we solve for it, and this is going to turn out to be the associated eigenvalue. So plugging this into that gives us what looks like a real eigenvalue problem. And in order to gather terms together a little bit, let me do this. On the left-hand side, just to make this as clear as possible, let me leave the minus sign, the 1 over m, and the K matrix all acting on the x vector, and on the right-hand side have omega squared acting on the x vector. And then let me just incorporate this minus 1 over m into a new matrix, which I'll call kappa. Obviously, it has the same properties as K; it's just being multiplied by a number. But I really want to reduce this to a familiar-looking form. And let me define lambda as being equal to omega squared, so that now I really have exactly the same eigenvalue problem as before. And again, this is what's called the simple eigenvalue problem, for reasons that will become clear later. So again, it's the same story. I just rewrite this as the matrix kappa minus lambda times the identity matrix, acting on the vector x, has to be 0. And this has non-trivial solutions when the determinant of the thing in the parentheses is 0, which again is an Nth-order polynomial in lambda. It has N roots, which we'll call the lambda sub n, and which again are all real, because this is still a Hermitian matrix.

So is this just like the time-independent Schrodinger equation? Well, it definitely looks like it. There's one important difference, though, which is that our unknown of interest wasn't lambda. It was omega. And we just got values for lambda, which is omega squared, which means that we don't have N unique eigenvalues for omega. We have 2N. That's because for each lambda sub n, and there are definitely capital-N of those, omega sub n can be either the positive square root or the negative square root. So we should label our eigenvalues in the following way: there are capital-N values of little n, but there's a positive and a negative version of each of those. So we actually have 2N eigenfrequencies, rather than the N eigenvalues of quantum mechanics. Does this matter? Maybe not. But before we talk about whether it matters, let's remember why this happened. It happened because we had an omega squared as our eigenvalue. And why did we have an omega squared when we made the same harmonic assumption? It's because we had a double time derivative in our original equation of motion. The time-dependent Schrodinger equation is first order in time, so taking the time derivative only popped out one power of the eigenvalue. But a second-order differential equation pops out two powers. That's why we ended up with the square of the frequency as our eigenvalue. But OK, maybe this is not so important. One reason to say that it's not so important is what happens when we go to find the eigenvectors. Remember what we do: we go back to our eigen-equation here, and we plug in one of our roots. So let me try, let's say, the n-plus root.
So if one of these eigenfrequencies was 17 hertz, that means I have 17 hertz and minus 17 hertz. Let me try the one at 17 hertz. And then I'm supposed to solve this equation for the eigenvector that's associated with the positive frequency. And unlike the Schrodinger equation, I'm also going to have an eigenvector solution for the negative frequency. And that is going to give me, maybe, a different eigenvector. But it actually gives me exactly the same one. A more succinct way of saying this is that in the eigenvalue problem, the only thing that appears is the square of the eigenfrequency. So it doesn't matter whether I take the negative or the positive; they give me the exact same thing. When I put the 17-hertz eigenvalue in there, I solve for a certain eigenvector. When I put the minus-17-hertz eigenvalue in there, I get exactly the same equation, so I have to get the exact same eigenvector. So let's say that there are 2N eigenvalues, but they come in positive and negative pairs, and they give rise to N unique eigenvectors. In some subtle sense, that's hiding the fact that there are actually 2N eigenvectors; they just happen to come in identical pairs. And this is why it's safe to say that if I have a system of N coupled oscillators, I have N eigenmodes. Because when you do the math, you actually get 2N, but they come in identical pairs, so there are N unique eigenmodes.

So next, let's make this a tiny bit more sophisticated by allowing our masses to be unequal. We're going to do the same problem. And writing out the equations of motion, the only difference is that I now have to give each mass its own value. It's going to look exactly the same, except over here in the leftmost column, each mass has its own subscript. So again, I write this in terms of a matrix: I have a big vector of accelerations equals a matrix of spring constants times a vector of displacements. But now I can't just say that this vector of accelerations has a single coefficient. It has to be multiplied by something that's actually a matrix. In this particular case, that matrix happens to be diagonal, but in a more general setup, or with a different choice of coordinates, this could be a non-diagonal matrix. So its matrix character really is relevant. And now, if I want to play the same series of tricks, it starts to get a tiny bit thornier, basically because I can't do this: I can't just divide through by little m. So let me take the first baby step. Again, we make the harmonic assumption; it looks like this. And if I wanted to, I could get this into my simple eigenvalue problem by multiplying through by the inverse of the matrix M. And that would give me this. And you say, well, OK, I could define this as some new matrix. We could even bring in the minus sign. And we could call this, say, the gamma matrix. And this now looks exactly like our old eigenvalue problem. The issue is that this matrix gamma is not guaranteed to be Hermitian. The only way that you get a Hermitian matrix by multiplying one Hermitian matrix by the inverse of another is if those two matrices commute. And even though the mass matrix is definitely going to be Hermitian, and the K matrix is definitely going to be Hermitian, they are not guaranteed to commute. So if I try to combine them in this way, I'm very likely to get a non-Hermitian matrix. And then a lot of what we know about eigenvalue problems isn't applicable anymore. So let's not take that route for now.
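That claim is easy to check numerically; a quick sanity check with made-up unequal masses:

```python
import numpy as np

M = np.diag([1.0, 3.0])              # unequal masses
K = np.array([[ 2.0, -1.0],
              [-1.0,  1.5]])         # Hermitian (real symmetric) stiffness matrix

G = np.linalg.inv(M) @ K             # the candidate "gamma" matrix
print(np.allclose(G, G.T))           # False: M^{-1} K is not Hermitian,
                                     # because M and K do not commute
```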
So let me not just merge those matrices together in a desperate attempt to get a familiar-looking equation. Let's leave them separate for the time being and see if we can make any headway regardless. Let's go back to this equation and do our best. And our best would be to take the K matrix, and minus omega squared times the M matrix, and note that they're both multiplying the x vector and giving us 0. This looks a lot like our old problem, except that instead of having the identity here, we have some arbitrary Hermitian matrix. The results from linear algebra still hold: this thing still has non-trivial solutions when the determinant of the matrix in parentheses is 0. It's just going to be a tiny bit more complicated than our old eigenvalue problem. This is what's known as the linear eigenvalue problem, primarily in the engineering literature. It's still the same order of equation; instead of having the coefficient 1, so to speak, on this term, you have the coefficient M. But it's still the same story. We still get an Nth-order polynomial in lambda. We still get capital-N roots that we can call lambda sub n. We still get 2N eigenvalues that come in pairs. And we still get the same story about the eigenvectors, which is that in some sense there are 2N of them, but they really come in identical twins.

And with all these properties in hand, we can repeat our exercise about orthogonality and do exactly the same math. Actually, this is an interesting enough result that I'll put in some of the steps. What we know is that K minus omega n squared M, acting on the nth eigenvector, is 0. So let me take the dot product of that with the mth eigenvector; that's still going to be 0. And let me also do the same thing with m and n reversed, which is to note that this operator, with omega m squared, acting on the mth eigenvector is 0, and I can take its dot product with the nth eigenvector. And again, let me subtract these equations from each other. Then I will get x_m K x_n, and, minus the corresponding term in the other equation, x_n K x_m. And then the other terms are x_m M x_n times omega n squared, with an m here and an n there, and, from the other equation, minus x_n M x_m times omega m squared. I think there's a sign at stake somewhere, but it doesn't matter very much. So these are the four terms, and all of this is supposed to equal 0. The first thing that we can do is look at these two terms here. Because of the Hermiticity of K, their difference is 0: basically, the mn-th matrix element of K and the nm-th matrix element of K are equal, so when I subtract them from each other, I get 0. The other two terms over here I can write as omega n squared minus omega m squared, all multiplying the same thing. Here I have x_m M x_n; here I have the opposite ordering. But again, because of the Hermiticity of the M matrix, that's really the same thing. So this whole thing has to equal 0, which, as regards orthogonality, tells me something very similar to the simple eigenvalue problem, but not exactly the same. If I ask what has to be true to make that equation hold for non-degenerate modes, there are two differences. One is that what we have here is the difference between the squares of the eigenvalues, rather than, as in the quantum case, the eigenvalues themselves. So I don't require that the eigenvalues be unequal; I require that they be unequal in magnitude, the absolute value of omega n not equal to the absolute value of omega m. And then what I get is that the mth eigenvector, dotted through the mass matrix into the nth eigenvector, equals 0.
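Numerically, this generalized orthogonality falls right out of a generalized eigensolver; a sketch with made-up masses and springs:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

M = np.diag([1.0, 2.0, 0.5, 3.0])               # unequal masses
k = np.triu(rng.uniform(0.5, 2.0, (4, 4)), 1)
k = k + k.T                                      # random symmetric spring couplings
K = np.diag(k.sum(axis=1)) - k                   # Hermitian stiffness matrix

lam, X = eigh(K, M)                              # solves K x = lambda M x, with lambda = omega^2

print(np.round(X.T @ X, 3))                      # NOT the identity: no ordinary orthogonality
print(np.round(X.T @ M @ X, 3))                  # the identity: orthonormal w.r.t. the mass matrix
```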
And this is not conventional orthogonality. So the first result, which is a little bit surprising, is that when I have unequal masses, my eigenvectors are not orthogonal. We don't have to panic, because this is, in fact, a generalized notion of orthogonality. This is what's called orthogonality with respect to a certain measure. If you like, this is setting the metric that we should use in calculating the dot product of these vectors. So the eigenvectors are still orthogonal, with respect to the mass matrix M. And this is, I think, very well known in a lot of fields of wave physics. It was first pointed out to me by Charles Brown, a graduate student from my group who's here, who noted it in the orthogonality conditions for electromagnetic waves, where the index of refraction plays the same role as the mass. If you're finding the normal modes of an electromagnetic resonator, and you calculate their overlap and integrate it over space, you will not get 0 unless the index of refraction happens to be uniform throughout. What really is 0 is the overlap weighted by the index. And I kept saying, no, this can't be right, this can't be right, and eventually Charles explained it to me. OK, so this is a common feature of coupled oscillators that aren't all nearly identical: their eigenmodes are orthogonal with respect to a measure that is defined by the nonuniformity of the mass within the system. But it's still a little bit of a surprise. And if we were to go through the completeness exercise, and this I really will skip, we would find that the identity matrix is realized, again, in a slightly different form. Here would be the usual identity matrix, but instead it has to have the mass matrix in it. So if you want to expand some arbitrary excitation of your system of oscillators, this is the identity operator that you need to insert to realize that expansion. So this is maybe just a curiosity at this point.
What I really want to come back to is this doubling of the eigenvectors and the eigenvalues. What are they there for? And how important are they? As we said, they are there because we're solving a second-order equation of motion. And one way to think about having 2N eigenvectors is that this gives us a doubling in the number of degrees of freedom, which allows us to specify not just N initial conditions, which is what you would need to specify, say, an initial quantum state (you would say the amplitude of the state in each of the N levels is such and such), but 2N of those numbers. The reason is that a second-order differential equation requires twice as many initial conditions: an initial position and an initial momentum. So that's what the doubling is there for, in some sense: the initial position of the ith oscillator, and, what you don't need to specify in quantum mechanics, the initial momentum of that ith oscillator. But it is a little strange that our configuration space, the space spanned by all the coordinates of the oscillators, is N-dimensional, and for some reason we're cramming twice as many vectors as we need into that space. It might jibe a little bit more with our intuition from the simple eigenvalue problem of quantum mechanics if instead we had a 2N-dimensional space for all of these eigenvectors. Then it would be very natural: of course, you have 34 degrees of freedom, you have 34 normal modes, 34 eigenvectors. Of course they make a nice, complete basis. There's no issue of their being over-complete or anything like that.

But in order to do that, we need a first-order equation of motion. And of course, everything that I'm describing to you happens naturally in undergraduate mechanics when we switch from, say, Newton's laws to Hamilton's equations. Hamilton's equations take the second-order differential equation of Newton and turn it into two first-order differential equations for twice as many coordinates: x and p, instead of just x. So they involve not just the N position coordinates but also the N momentum coordinates. They describe the system in what we call phase space. So if we take this approach, then we'll have a situation in which the meaning of these 2N eigenvectors makes perfect sense, given the dimensionality of the space. I have 17 oscillators; they have, for some reason, 34 normal frequencies and 34 normal modes. But if I describe them with Hamilton's equations, that makes perfect sense, because they actually live in a 34-dimensional space.

But there's an important point about this space, which is that 2N-dimensional phase space is not the same thing as a 2N-dimensional configuration space, for the following reason. If I have two oscillators with coordinates x1 and x2, it's totally fine if I vary x1 and x2 evolves in some different way. They're basically completely independent of each other, except as coupled by the equation of motion. Phase space is different, though. When I have an oscillator coordinate and its momentum, the time evolution of the coordinate is not even a little bit independent of the time evolution of the momentum, because the momentum is nothing other than the rate of change of x. This is one of the points of Hamiltonian mechanics: yes, there are two degrees of freedom in some sense, but they are not at all independent of each other. So in this sense, phase space is not just a Euclidean space. It's not just a space of another bunch of variables that are completely independent from the first bunch. And this lack of independence between the position coordinates and the momentum coordinates gives rise to what's called the symplectic structure of phase space. I won't have much to say about that in general, but it has a nice illustration in terms of simple harmonic oscillators. So let me give you a concrete example. Suppose that I just have two coupled oscillators, and it's as simple as could be: they have equal springs and equal masses. We know what the normal modes for these oscillators are. There's the one where they both do this, which happens at frequency omega naught, and there's the one where they do this, which happens, I think, at twice that frequency, but it doesn't matter much. So the normal modes in configuration space, which is how we would normally write them, where coordinate x1 and coordinate x2 combine together to make a vector, would be either (1, 1), which is the symmetric mode doing this, or (1, -1), which is the anti-symmetric mode doing this. I'm going to call this mode here q1, and I'm going to call this mode here q2. And these modes definitely obey orthogonality, as we would expect.
q1 dotted into q2 is zero, as you can see by direct computation. But this is a little too glib, because really this mode has two frequencies, plus and minus, and this mode has two frequencies, plus and minus. And if I take the q1 mode that oscillates at plus omega naught and dot it with the q1 mode that oscillates at minus omega naught, well, that's definitely not zero, because they're the same vector. So those two eigenvectors are not orthogonal. This is what we've been saying all along: they're distinguished only by the sign of their frequency. But the point of putting this whole system into a higher-dimensional space is to leave enough dimensions for all 2N of these eigenvectors to actually be orthogonal in the usual way.

And this happens in the following way. Because the momentum of the nth normal degree of freedom is just the mass times the time derivative of the nth position, for a harmonic oscillator the momentum coordinates are very simply related to the position coordinates. So let me translate this into the phase space of these two oscillators, which is four-dimensional. The x coordinates of this mode, let's say the symmetric one, oscillating at positive omega naught, are one and one. Those are the amplitudes of motion. The other information I need to give you is the amplitude of the momentum oscillations. And the amplitude of the momentum oscillations is not independent of the amplitude of motion. In fact, it's determined just in this fashion here. So the momentum p1 is just minus i omega naught times the first entry, which is one. And the momentum of the second oscillator is the same thing: minus i times omega naught times this entry, which is one. So this is what I would call q1-plus. There's going to be another vector, which is the symmetric motion at negative frequency. So it's still this motion, one and one, but now at minus omega naught. So this is normal mode one at negative frequency. And then there are the anti-symmetric modes, which have one and minus one, and at positive frequency the momenta will be minus i times two omega naught, and then plus i times two omega naught, because the momentum of object two has a different sign from the momentum of object one, because their motion has a different sign. So this is phase-space eigenvector number two-plus. And then its negative-frequency twin brother is no longer identical. This is what happens when we allow these modes to live in the four-dimensional phase space, where our whole notion of normal modes is more naturally at home, because we have the same number of eigenvectors as the dimensionality of our space. And so in general, the way that we would write a normal mode in phase space is that we would take the normal mode in configuration space, which has capital-N entries, and then the next capital-N entries would be very simply related to them. [A brief interruption to sort out the microphone.] Okay, so it would be awfully nice if these vectors were all orthogonal and really filled up this four-dimensional space. Note that they are not, though.
So if I take one of these, like q1-plus, and I dot it into just about any other one, like q2-minus, they definitely don't equal zero. Furthermore, there's a serious units problem just with taking the dot product, because these entries have different units than these entries here. But this is what I meant by saying that phase space isn't just Euclidean space. The problem is addressed by defining the inner product in this space as being taken with respect to a metric. I take my phase-space vector, I want to take its inner product with some other phase-space vector, and I do that with respect to a measure that has the following form: it's related to the mass matrix, and it has this blocked, off-diagonal structure. This is how you take a dot product in this non-Euclidean space, in this symplectic phase space. And if you do that, then you can see by direct computation that, in fact, all of these eigenvectors are orthogonal to each other. I have that written out in my notes, but in the interest of time we'll skip it, except to say that if I take one of these phase-space vectors, the one associated with the mth configuration-space mode at either the positive or the negative frequency, and I take its dot product, with respect to this funny matrix here, which we can call S, with another phase-space eigenvector, then it's just a couple of lines of algebra (and it's in that paper I mentioned), and it's indeed zero unless we're talking about the exact same mode. So that's the orthogonality. And then the completeness: going through a very similar exercise, we would find that the identity matrix in this 2N-dimensional phase space is realized by the symplectic S matrix together with a sum over the phase-space eigenvectors, taken not as the bra-ket (inner) product but as the ket-bra (outer) product; that combination is what gives back the identity.
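Here's a numerical version of that statement for the two-oscillator example, with unit masses, the lecturer's factor of two between the mode frequencies (the exact ratio doesn't matter), and my own guess at the block structure of S, so treat the convention, not the conclusion, as an assumption:

```python
import numpy as np

w1, w2 = 1.0, 2.0      # symmetric and anti-symmetric mode frequencies

def phase_space_mode(xvec, omega):
    """Stack configuration amplitudes with momentum amplitudes p = -i*omega*x (mass = 1)."""
    xvec = np.asarray(xvec, dtype=complex)
    return np.concatenate([xvec, -1j * omega * xvec])

Q = np.column_stack([
    phase_space_mode([1, 1],  +w1),   # q1, positive frequency
    phase_space_mode([1, 1],  -w1),   # q1, negative frequency
    phase_space_mode([1, -1], +w2),   # q2, positive frequency
    phase_space_mode([1, -1], -w2),   # q2, negative frequency
])

I2 = np.eye(2)
S = np.block([[np.zeros((2, 2)), I2],
              [-I2, np.zeros((2, 2))]])   # symplectic metric (mass matrix = identity here)

print(np.round(Q.conj().T @ S @ Q, 3))    # diagonal: all four vectors mutually S-orthogonal
```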
Okay, so we only have about five minutes left, so let me just start on our discussion of what we wanted to get to, which is all of this with damping. This has hopefully been at least a little bit interesting and surprising, but the real utility of it comes about when we ask how to describe systems of damped oscillators, and it's really important to have laid all of this groundwork in order to do that. OK, so now with damping. The physical system is again going to be a bunch of oscillators. We'll start with full generality: a bunch of different masses, all linked by springs, and not just nearest neighbors but any pairwise combination we want to imagine. And they're also all linked by damping elements. I'll draw those with the standard notation, what's called a dashpot, which I think is just meant to be a gas-filled piston that motion is forced to compress and expand, resulting in dissipative forces. And even though I've drawn them all the same, we can give each of these a different value. So this spring is k12; this damping rate, I'm going to call it little d12; and this is damping rate d23, and this is spring k23, and so on and so forth. Having drawn that picture, we can follow the exact same steps and see if there are any big surprises waiting for us. The first step is to write out the equation of motion. I'll skip the intermediate steps and just let you know that it can still all be written using matrix and vector notation: you'll have a mass matrix as before, a spring constant matrix as before, and now a damping matrix multiplying the velocity vector.

And again, if we combine all of this into a more eigenvalue-ish looking arrangement and make the harmonic assumption, which is not much of an assumption, we get the following, where again we have an unknown normal mode and an unknown normal frequency. And now, for the first time, the fact that omega appears quadratically, which it has been doing all along, can't be swept under the rug by saying, no, I just want to call that lambda, and later on I'll take the square root of it, because now omega appears both to the second power and to the first power. So it really is, definitely, quadratic. And so we see that all along, the coupled harmonic oscillator problem was an example of what's known in some of the engineering literature as a quadratic eigenvalue problem, and it was only when the damping was zero that we were able to temporarily sweep that under the rug, by defining a lambda that was just omega squared and pretending it was a linear eigenvalue problem. OK, so a couple more reminders. Not reminders, a couple of other things. When we take the determinant: this will have solutions when the determinant of K minus i omega D minus omega squared M is zero. These are all N-by-N matrices, so this whole thing is an N-by-N matrix, and the solutions are the roots of its characteristic polynomial. But each element of this N-by-N matrix will, in general, contain omega to the zeroth, the first, and the second power, so the characteristic polynomial will be a polynomial in omega of order 2N. And this was always happening. We just pretended that the 2Nth-order polynomial in omega was an Nth-order polynomial in omega squared, which was OK because the characteristic polynomial only involved even powers of omega, which is to say integer powers of that thing we called lambda. We can no longer make use of that, and now we really have all 2N orders of omega. So the fundamental theorem of algebra tells us that we will get 2N truly distinct values of the frequency, scattered throughout the complex plane.

The very last thing I'll mention is that, while that's true, if all of the matrix-valued coefficients in this second-order polynomial, whose eigenvalues we want to find, are Hermitian, which, again, for any reasonable set of springs, masses, and dampers, they probably will be, then these eigenvalues aren't scattered randomly throughout the complex plane. They still come in pairs, and specifically they come in pairs where, if you tell me one frequency, I can tell you that there's going to be another one at minus that frequency's complex conjugate. When you solved the single damped harmonic oscillator, you saw this: if you have an oscillator that's going to run at one kilohertz with a damping rate of one hertz, when you solve Newton's equations, you find that it has eigenvalues at plus a kilohertz and minus a kilohertz, and both of those are damped at one hertz. So that's what this pairing means. Your frequencies have real parts that come in positive and negative pairs, but their imaginary parts are always the same size, which makes an awful lot of sense for reasonable physical parameters, because what it tells you is that, yes, everything oscillates at a kilohertz, but everything damps. If their imaginary parts came with opposite signs, some of these modes would grow exponentially as a function of time. But that's not what happens for any reasonable set of physical parameters.
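To see those paired roots concretely, you can linearize the quadratic eigenvalue problem into an ordinary 2N-by-2N one; a sketch with made-up masses, springs, and dampers:

```python
import numpy as np

# Damped two-mass system, made-up numbers: M x'' = -K x - D x'
M = np.diag([1.0, 2.0])
K = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])
D = 0.05 * K                          # light damping, chosen arbitrarily for illustration

# Quadratic eigenvalue problem det(K - i w D - w^2 M) = 0 (ansatz x ~ e^{-i w t}),
# linearized to a 2N x 2N first-order problem in s = -i w (so x ~ e^{s t}):
#   d/dt [x, x'] = A [x, x']
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ D]])

s = np.linalg.eigvals(A)              # 2N complex rates s_n = -i w_n
w = 1j * s                            # back to frequencies
print(np.round(np.sort_complex(w), 4))
# Real parts come in +/- pairs; imaginary parts share one sign (everything decays).
```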
Okay, so this is a good stopping place. We'll pick this up tomorrow and start to see how this leads to interesting topology. So, until tomorrow. Okay, thanks.