Okay, so on Friday we began looking at operators — the connection between observables and operators. The observable is the starting point of our discussion. An observable has a spectrum; in other words, there are possible values you can get when you measure this observable, so an observable is something you can measure. So it has possible answers, and to each answer there is at least one state in which you are certain of getting that answer — a state where there is no ambiguity, no question, nothing probabilistic about the result of that measurement. Out of those states and those numbers we construct an operator, this animal here, and one useful aspect of this operator is that if you squeeze it between the ket — the state of your system — and the associated bra, you get out the expectation value of the observable q when we're in this state. So when there is uncertainty and the result of the measurement is probabilistic, which will normally be the case for most states, this simple algebraic formula — we showed this last time, I think that's where we finished — leads to the expectation value of that measurement. That's one way in which this operator q is useful, and you'll find as we go along that there are many other ways. For the moment the operator is going to have a hat to distinguish it from the observable q, which is a physical, conceptual thing — the operator is just some mathematical fiction which we're going to get used to. Gradually the distinction will blur, but I hope when you need to, you can distinguish between the physical thing — so energy is the physical thing — and its operator, which at the moment would be called E hat. Well, actually we did introduce that: the operator E hat is for historical reasons called H, and of course it is constructed as the sum over the states of well-defined energy, H = Σ_i E_i |E_i⟩⟨E_i|. 
So these are the states of well-defined energy, these are the corresponding energies, and this is the Hamiltonian, in honor of the Irish mathematician who introduced the corresponding object into classical physics. Okay, I hope you will recognize from Professor Essler's lectures that given a basis — any old basis — every operator can be turned into a matrix. Given any state |ψ⟩, it can be written as a linear combination of basis vectors, |ψ⟩ = Σ_i a_i |i⟩. If we use any operator Q on |ψ⟩, we're going to get some other animal |φ⟩, and we can expand |φ⟩ too: |φ⟩ = Σ_i b_i |i⟩, and then Q|ψ⟩ becomes Q operating on Σ_j a_j |j⟩ — that's just substituting in here; the summation index is a dummy index, I can call it anything I like. If I want to find out what b_i is, I pick it out of the sum over all the b_k's by bra-ing through with ⟨i|: in this sum we get a ⟨i|k⟩, which is nothing except when k equals i. So I get b_i = Σ_j ⟨i|Q|j⟩ a_j — each a_j is a complex number, so when we bra through with ⟨i| it doesn't get in the way, because ⟨i| is a linear function on the kets. We can write this as b_i = Σ_j Q_ij a_j, where Q_ij is by definition the complex number you get by taking the j-th basis vector, operating on it with the operator Q, and then taking the dot product as it were — bra-ing through with ⟨i|. So every operator can be represented by a matrix of complex numbers, and of course any one of 
those numbers is called a matrix element, and a lot of quantum mechanics — a lot of physics — revolves around calculating matrix elements, so it's a word that's often used. So it's a matrix made out of matrix elements, and these matrix elements are complex numbers. Now, another point: suppose the basis |i⟩ is a basis of eigenvectors of Q. I think on Friday we saw — I forgot to mention it just now — that with this definition of Q, it turned out that |q_i⟩ is an eigenket of Q and q_i is an eigenvalue; that was a consequence. So these physically important states become, as a consequence of this definition, eigenkets of the operator Q, and these numbers become the eigenvalues. So now we can say something different: we can say Q is constructed out of its eigenkets and its eigenvalues in this manner, whereas previously we had a physical statement, that the operator Q was constructed out of the states in which there's no ambiguity as to the measurement, and the possible results of the measurement. So if we use the eigenkets |q_i⟩ as our basis vectors, then this matrix becomes very simple: Q_ij is going to be ⟨q_i|Q|q_j⟩, but Q on |q_j⟩ is necessarily q_j |q_j⟩, so this becomes q_j ⟨q_i|q_j⟩, and ⟨q_i|q_j⟩ is δ_ij, so this becomes q_j δ_ij. These matrix elements vanish unless j is equal to i, and when j is equal to i we get the number q_j. In other words, in this basis Q is represented by a diagonal matrix: Q looks like q_1, q_2, q_3, all these numbers down the diagonal and nothing everywhere else, and so on until we're bored — or, more to the point, run out of possible states in which Q has a well-defined value. Okay. 
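The construction just described is easy to play with numerically. Here is a small sketch (not from the lecture — the spectrum and the rotation angle are my own made-up example) that builds Q = Σ_i q_i |q_i⟩⟨q_i| and confirms that its matrix is diagonal in the eigenbasis but generally not in any other basis:

```python
import numpy as np

q_vals = np.array([1.0, 2.0, 3.0])   # the spectrum: possible measurement results
eigenkets = np.eye(3)                # eigenkets as columns of the identity matrix

# Q = sum_i q_i |q_i><q_i|
Q = sum(q * np.outer(k, k.conj()) for q, k in zip(q_vals, eigenkets.T))

# In the eigenbasis, Q_ij = q_j * delta_ij: a diagonal matrix.
assert np.allclose(Q, np.diag(q_vals))

# In a rotated orthonormal basis the same operator is not diagonal.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])      # columns = the new basis kets
Q_new = U.conj().T @ Q @ U           # matrix elements <i'|Q|j'>
assert abs(Q_new[0, 1]) > 0.1        # off-diagonal elements have appeared
```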
As a result of that, let's look at the Hermitian adjoint — I'm going to take it that you remember this from Professor Essler's lectures — the Hermitian adjoint of the matrix Q. Now, we've got three things, which is a bit confusing: we've got a physical quantity q like the energy, we've got an operator q hat, and we've got a matrix which, in one particular set of basis vectors, represents the operator. So I'm a little bit short of notation. I've got a q and a q hat, but I'm tempted to write Q_ij, which sometimes means the particular complex number that you will find in the i-th row and the j-th column of the matrix Q, and sometimes implies the whole matrix that represents Q. Do you see that there's a slight overloading of notation here? It's universal in theoretical physics; nobody has a natty way of distinguishing between the matrix and the matrix elements. So let me just write the matrix Q. The Hermitian adjoint of the matrix Q is Q dagger, and it is defined so that the ij-th element of it is the complex conjugate of the ji-th element of the matrix Q: (Q†)_ij = (Q_ji)*. So to form the Hermitian adjoint you swap rows and columns and take the complex conjugate — that's what happens with the individual elements. So let's see what happens here. This property doesn't depend on what basis we look at it in, so let's look at it in the particular basis of the eigenvectors of Q. What does this statement become? It becomes that (Q†)_ij is equal to the complex conjugate of Q_ji, and we figured out what Q_ji is: up there we found it to be q_i δ_ji — that's Q_ij in this basis with i and j swapped over; I hope I've swapped it over. Now I take the complex conjugate, and if q_i is real, this becomes q_i δ_ij, which is equal to Q_ij. 
So the Hermitian adjoint of Q will be Q itself if all the elements in its spectrum are real. And traditionally people have said it's obvious that an observable is a real number. I remember as an undergraduate thinking: hang on a moment, that's ridiculous. The impedance of a circuit is something that I have to measure — you might have done it last year in some of the electronics practicals, measure the impedance of this circuit at this frequency — and it's clearly a complex number. So it's nonsense to say that observables have to be real; of course they don't have to be real. But if they are real, then the observable will be represented by an Hermitian matrix. So: if the spectrum is all real, then q hat is Hermitian. In the great majority of treatments this is all back to front. People say that every observable is going to be represented by, or associated with, a Hermitian operator. They then use a well-known theorem, which I'm sure you've met, that every Hermitian operator has real eigenvalues and orthogonal eigenkets, and therefore they say the eigenkets of these things are orthogonal. That's not the way the flow of logic from the real physical world into the mathematical world actually works. The real argument is that the states in which q has a well-defined value have to be mutually orthogonal — because why? Because ⟨q_j|q_i⟩, this complex number, is the amplitude to get q_j given q_i, and if you know that the result of the measurement is going to be q_i, this amplitude has to vanish for any q_j not equal to q_i. So this orthogonality comes in as a physical requirement of the way we want to use the theory. Then, if the eigenvalues — the spectrum, the possible results — are all real, you end up with Hermitian matrices, right? 
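To make that point concrete, here is a sketch (my own example, with made-up spectra): an operator built from orthonormal eigenkets is Hermitian precisely when its spectrum is real; give it a complex, impedance-like spectrum and Hermiticity is lost, while the orthogonality of the eigenkets — the physically essential part — survives by construction.

```python
import numpy as np

# A random orthonormal set of eigenkets (columns of the Q factor of a QR).
kets = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))[0]

def build(spectrum):
    """Build sum_i q_i |q_i><q_i| from a spectrum and the fixed eigenkets."""
    return sum(q * np.outer(k, k.conj()) for q, k in zip(spectrum, kets.T))

Q_real = build([1.0, 2.0, 3.0])        # real spectrum
Q_cplx = build([1 + 2j, 2 - 1j, 3.0])  # complex, "impedance-like" spectrum

assert np.allclose(Q_real, Q_real.conj().T)      # real spectrum -> Hermitian
assert not np.allclose(Q_cplx, Q_cplx.conj().T)  # complex spectrum -> not Hermitian

# Either way the eigenkets are mutually orthogonal -- that was the input,
# the physical requirement; Hermiticity is the bonus for a real spectrum.
assert np.allclose(kets.conj().T @ kets, np.eye(3))
```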
But there's no need to be working with Hermitian matrices if you want to work with the complex impedance as your observable; that's not required. What you do need is this orthogonality result — that's a logical necessity of the way we want to interpret the mathematics. OK, now we can of course multiply operators together. So: we've got two operators, R and Q, and we define this animal RQ by the rule that this multiplied object operating on any state |ψ⟩ is simply the result of using the operators in the sequence given. That is to say, you use Q on |ψ⟩ first, which makes you some ket, which you then use R on, et cetera. And if we ask, what's the matrix of RQ in some basis — any basis now — it's going to be ⟨i|RQ|j⟩. Into here we can stick one of our identity operators, the sum over m of |m⟩⟨m| — we saw on Friday that this sum is the identity operator, and you can stick an identity operator anywhere into a product. Then this becomes the sum over m of ⟨i|R hat|m⟩⟨m|Q hat|j⟩. And what is that? This is R_im, this is Q_mj — so this is just the usual rule for a matrix product: (RQ)_ij = Σ_m R_im Q_mj. And we will want to know what the Hermitian adjoint of this thing is — what (RQ)†_ij is. Do I want to do this? I think I probably don't; I think you've seen this done in the math physics lectures this year. So I'll just remind you that (RQ)† is Q hat dagger R hat dagger: when you take the Hermitian adjoint of a product of operators, you reverse the order of the things in the product and dagger the individual bits. I hope you've seen the demonstration. You'll find the demonstration in the book. 
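Both facts — that (RQ)_ij is the ordinary matrix product Σ_m R_im Q_mj, and that the adjoint of a product reverses the order — can be checked in a few lines (a sketch, with random complex matrices standing in for the operators):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# (RQ)_ij = sum_m R_im Q_mj -- the identity sum_m |m><m| inserted in the middle.
RQ = np.einsum('im,mj->ij', R, Q)
assert np.allclose(RQ, R @ Q)

# Hermitian adjoint of a product: (RQ)^dagger = Q^dagger R^dagger.
dag = lambda M: M.conj().T
assert np.allclose(dag(R @ Q), dag(Q) @ dag(R))
```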
If you haven't seen it, or you don't recall the demonstration from Professor Essler's lectures, then this is all a bit dry and boring, isn't it? OK, one thing you may not have seen is functions of operators. In particular, for example, x — the position down the x-axis — is going to become an operator, and we are going to want to evaluate functions of x, like the potential energy at the position x, which depends upon x and therefore is a function of x. So in classical physics there is a potential function V(x) that tells you the potential energy at the location x, and since x is going to become an operator, V is going to become an operator which is obtained by taking a function of an operator. So we need to know what it means to take a function of an operator. Another example: there's going to be an operator associated with momentum. The kinetic energy of a particle in classical physics is p²/2m — the momentum squared over twice the mass, because that's a half mv² in classical physics. So p² is a function of p, a very simple one, and again we need to know what it means to take a function of an operator. When you do statistical mechanics there is a quantity, a density operator, and you calculate the entropy of a system from the logarithm of the density operator — so you need to be able to take the logarithm of something. So we need to be able to take functions of operators. Let's decide what this means. We're going to imagine we're given f(x). At the moment this is just a boring function of a boring number — an ordinary function, a complex-valued function of a complex-valued number, say. And let's imagine that we can Taylor expand this. 
So we can write this as f_0, the value that f takes at naught, plus f_1 x, where f_1 is the first derivative, plus f_2 x²/2!, where f_2 is the second derivative, plus f_3 x³/3!, et cetera. So we're going to imagine that our function can be Taylor-series expanded. In detail it might not be possible to expand it around the origin, but then we can expand it around some other place, in some little neighborhood. Physicists always assume they can expand their functions, and sometimes that leads to major disasters — there are important bits of physics which happen only because you can't actually Taylor-series expand everything in life — but it's a good starting point. So we're given this function; now we want to know what f(Q) is. The answer is this: it's the sum over i of f(q_i) |q_i⟩⟨q_i|. This is the definition — when we say a function of an operator, this is what we mean. So what is it? This is an operator which has the same eigenkets as its argument: a function takes an argument, the argument here is an operator, this operator has eigenkets, so a function of an operator has the same eigenkets by construction, but the eigenvalues are the given function of the old eigenvalues. And can you see that this is guaranteed to work? Because we started with a real-valued function of a real variable, f(q_i) is just going to be some real number, so this is a perfectly well-defined thing — but actually it would all work perfectly fine with complex-valued functions of a complex argument. So this is what we mean by a function of an operator. I'm leaving it as a problem — on some problem set it's a problem to show that this definition is the same as f(Q) = f_0 times the identity, plus f_1 times Q, plus f_2 over 2! times Q times Q, plus, right? 
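Here is a numerical sketch of that problem (my own example, with f = exp and a made-up spectrum) comparing the two definitions — the spectral one, Σ_i f(q_i)|q_i⟩⟨q_i|, against the Taylor series f_0 I + f_1 Q + (f_2/2!)Q² + …:

```python
import numpy as np

kets = np.linalg.qr(np.random.default_rng(2).normal(size=(3, 3)))[0]
q_vals = np.array([0.5, -0.2, 0.1])
Q = sum(q * np.outer(k, k) for q, k in zip(q_vals, kets.T))

# Spectral definition: same eigenkets, eigenvalues f(q_i) = exp(q_i).
f_spec = sum(np.exp(q) * np.outer(k, k) for q, k in zip(q_vals, kets.T))

# Taylor definition: sum_n Q^n / n!, truncated once the terms are negligible.
f_taylor = np.zeros_like(Q)
term = np.eye(3)
for n in range(1, 30):
    f_taylor = f_taylor + term
    term = term @ Q / n          # next term Q^n / n!

assert np.allclose(f_spec, f_taylor)   # the two definitions coincide
```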
So if you've got the Taylor series expansion, then you know what this stuff means, because we know what it is to multiply an operator by itself. We may not know what it is to take the logarithm of an operator, but we do know what it is to multiply an operator by itself as many times as we want, because we've defined multiplication of operators. So this right-hand side has a well-defined meaning, and it's an exercise — not desperately difficult — to prove that this animal on the right has as eigenvectors these animals, and as eigenvalues these animals, and therefore the two definitions coincide. But the first is the more general definition, because it doesn't assume we can do any Taylor series expanding, and this one does. When you can do a Taylor series expansion — or somehow express f in terms of algebra which has meaning for operators, which is to say only multiplication; you can't divide one operator by another operator, that doesn't necessarily mean anything, but you can multiply them together — then this definition is the same as that one, and that's an exercise I would encourage you to do. But we'll not take time to do it now, because we're setting up these mathematical operators and I'm sure you're all dying to do a bit of physics, and I am too. But we do have to cover a couple of little things here: commutators. Perhaps it's time I moved over here. Okay, so in some sense the big news with operators is that A hat B hat is not necessarily equal to B hat A hat. You know this already, inasmuch as you know that matrix multiplication doesn't commute generally. So when you're multiplying matrices together, you don't expect the product this way and the product that way to agree, and we've agreed that operators, once we take a particular system of basis vectors, can be represented by matrices. 
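A two-line illustration of that non-commutativity (the Pauli spin matrices are a standard example — my choice here, not something introduced in the lecture yet):

```python
import numpy as np

# Pauli matrices: sigma_x sigma_y = i sigma_z, but sigma_y sigma_x = -i sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert not np.allclose(sx @ sy, sy @ sx)          # the products disagree
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)    # [sigma_x, sigma_y] = 2i sigma_z
```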
So it's not surprising that there is this non-commutativity. The elementary texts claim this is the key thing about quantum mechanics; I claim it is not the key thing — non-commuting things occur also in classical physics, and we'll see that concretely as we go down the line. However, it is a fact that these operators do not commute, and we spend a great deal of time calculating this animal, [A, B], which is AB minus BA. That's the definition: A comma B in a square bracket means just this. Now, we have some obvious results. We have that [A, B + C], the commutator of A with the result of adding B to C, is — it clearly follows from this definition — just the sum [A, B] + [A, C]. Oh yes, and we have this obvious result that AB is equal to BA plus [A, B]. One of the reasons why we need to know the value of a commutator, as you will see, is that we often want to swap the order in which operators occur, and the way to do it is to write that AB is BA plus this commutator, which is obviously true — the way I think of it is that this adds in the thing that I should have had and takes away the thing that I've put in that I'm not entitled to have. But it's obvious, right? And now finally a less obvious result, which is that [AB, C] — the product AB commuted with C — is equal to [A, C] with B standing by on the outside of the commutator, plus A with [B, C]: [AB, C] = [A, C]B + A[B, C]. It's easy to prove this — I encourage you to prove it, I'm not going to take time to do it: all you have to do is write down what this is from that definition, then insert two extra terms which cancel each other, and you'll find you can arrange it like this. (Yes, it should be B comma C in the second term — you're absolutely right, thank you very much for that. The other one I got right, yep.) 
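All three commutator rules can be verified numerically, with random matrices standing in for the operators — a quick sketch:

```python
import numpy as np

def comm(X, Y):
    """The commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

# [A, B + C] = [A, B] + [A, C]  -- linearity
assert np.allclose(comm(A, B + C), comm(A, B) + comm(A, C))
# AB = BA + [A, B]  -- the swap rule
assert np.allclose(A @ B, B @ A + comm(A, B))
# [AB, C] = [A, C]B + A[B, C]  -- the product (Leibniz) rule
assert np.allclose(comm(A @ B, C), comm(A, C) @ B + A @ comm(B, C))
```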
Okay, so what is this analogous to? It's analogous to d/dC of (AB): if I have to differentiate a product with respect to C, that is equal to (dA/dC)B + A(dB/dC), right? This is the rule for differentiating a product — can you see the mirror there? The idea is that taking the commutator of something with C is analogous to taking the derivative of that something with respect to C, and this is no accident: for a mathematician, in certain contexts, this is called a Lie derivative. The rule we are familiar with here is that if you have a product, you get the result by having this operation happen on the first thing while the second stands idly by, and then you let the first one stand idly by while you work on the second one. Here we have the same: you work on the first one with the second standing idly by, and then you work on the second one with the first one standing idly by. The only material difference between these formulae is that this formula is left invariant if I move B over here or A over there — changing the order makes no difference, because these are ordinary boring multiplications of complex numbers — but here it does make a difference: this [A, C] is an operator (it's the difference of two operators, so it's an operator), and therefore it isn't clear that I can swap the order of this operator and that operator; the order in which you write those things down is important. 
So you should make sure you understand where these rules come from, and you should memorize them. Broadly speaking, once you've got these three rules on board you never need to look inside a commutator and use this relationship here — it's bad practice, by and large, when you're doing computations, to expand commutators to see what's inside them. In the same way, this product rule of course comes from looking at AB evaluated at C + δC, minus AB evaluated at C, all over δC, in the limit; using all that stuff you can prove this. But once you've got the rules of calculus you don't do this expanding stuff anymore — you know that's what lies underneath it, that's the justification, but you don't go back to it every time you have to do a calculation. Every time you have to differentiate x cubed, you do not write that this is (x + δx)³ minus x³, all over δx, and come to the conclusion that it's 3x² — do you? 
So please resist the temptation to expand out a commutator — to write the contents of a commutator out. There are times when ultimately you have to do that, but most of the time you don't, and you should try to avoid doing it by using these rules here. Okay. I'm going to need one result which combines these statements and those statements; we're going to need it very shortly, to calculate what [f(B), A] is. Concretely, f is going to be V(x) and I'm going to want to take the commutator with the momentum operator — these all need hats, I suppose, and those things up there need hats, but imagine them on. So I'm going to want to calculate something like this; let's see what it comes to. In order to see, I'm going to imagine that I can expand f in this manner, so I can write this as f_0 times the identity, plus f_1 times B, plus f_2 over 2! times B², plus blah blah — the Taylor series expansion of f around the origin — commuted with A. Now I can use that second rule there to do the commutator of this sum with A: f_0 is a boring number, and it multiplies the identity operator, and the identity operator obviously commutes with everybody, because I times A is A, same as A times I is A. So the second rule says the commutator of this sum with A is the sum of the commutators, and the commutator of the f_0 term with A vanishes. So that's going to be f_1 [B hat, A hat] — the f_1 comes outside the commutator; maybe I should have added that to the rule list there, because it's a boring number, but I think it's an obvious principle — plus f_2 over 2! times [B hat squared, A], plus f_3 over 3! times [B hat cubed, A], plus plus plus, till you're bored. So that's the middle rule used; now we use the last rule to say that this is 
f_1 [B hat, A hat] — well, that term is just a repeat — but this B² is B times B, so I can expand this into f_2 over 2! times ([B hat, A hat] B hat plus B hat [B hat, A hat]): it was BB commuted with A, so I worked on the first B while the second B stood idly by, and then I put down the first B standing idly by and had the second B worked on by A. Plus dot dot dot, plus the f_3 term, et cetera, which is going to involve three terms, because it'll be BBB commuted with A, so there'll be three things to consider. And this is as far as I can go in general. But there is an important case: if [B hat, A hat] commutes with B — this commutator is an operator, because it's the difference of two operators — so if this operator commutes with B hat, then this [B, A] and this [B, A] and this one can all be taken outside, and I have, under this condition, that [f(B hat), A hat] is equal to [B hat, A hat] times (f_1 plus f_2 B hat plus f_3 over 2! times B hat squared plus ...) — can you see it'll be f_3 over 2 factorial, because the f_3 would have been over 3 factorial but we would have had three terms. So this is what it will all reduce to, which can be more conveniently written as df/dB: that series, f_1 + f_2 x + (f_3/2!) x² + ..., is the Taylor series for df/dx, so I can write this stuff here as df/dB, and then here is my [B, A]. I was momentarily panicked about having written this in front of that, but it doesn't matter which order I put them in: we've agreed that this operator commutes with B — that was the condition under which we were making this further development — and if it commutes with B, it commutes with every function of B, in particular with df/dB, which is a function of B (which means it has the same eigenkets). So that's a result we're going to want. And there's one other thing that needs to be discussed, which is the 
physical implications of A commuting with B. So if [A hat, B hat] = 0 — we say commuting observables — then the mathematicians assure us we have a theorem, and the theorem is that in this case there is a complete set of mutual eigenkets. We'll call these mutual eigenkets just |i⟩; that is to say, for each and every one of these it is true that A hat on |i⟩ is equal to a_i |i⟩, and simultaneously B hat on |i⟩ is equal to some number b_i times |i⟩. When two operators commute, there's a theorem that says this. What does that mean for the real physical world? It says there is a complete set of states in which the result of making a measurement of A is definitely known and, simultaneously, the result of making a measurement of B is certainly known — a complete set of states in which there is no ambiguity, nothing probabilistic, about the result of measuring either of these quantities. It's very important to bear in mind that word complete: we're not merely saying that there is a state, or ten states, with this property; there are enough states with this property that any state can be written as a linear combination, Σ_i g_i |i⟩, of these objects. They're complete — that's what completeness means. So there is a complete set of states in which there is absolute certainty. It does not mean that the fact that there is no uncertainty in the value that B takes implies that there is no uncertainty in the value that A takes — that does not follow from the commuting of A and B. As we will see, it may well be the case that there are states in which B definitely has a value but A is uncertain. So the result of two observables' operators commuting is slightly technical, because it involves this completeness statement: it is that there is a complete set of states in which the outcomes of measurements of both observables are certain. Okay. Now, if [A, B] is not equal to zero, what does this mean? All it means is that there is at 
least one ket on which [A, B] does not vanish. There may be an infinite number of kets on which [A, B] operates and produces nothing, but there is at least one on which it doesn't: if you say that these operators don't commute, you're asserting that there is at least one ket on which the commutator, operating, doesn't produce nothing. So what does this imply? It implies that there is no complete set — it's a very weak, not emotionally striking result — that there just isn't a complete set of states in which they both have definite values. There may still be a very large number of states in which they do have definite values simultaneously. So it is not a statement that you can't know the value of this simultaneously with the value of that — we'll come across a counterexample next term, I guess, a very important counterexample. So don't run away with what is a very, very widely held misconception: that if two operators don't commute, you can't know the value of the one and the value of the other. That's just not true. The statement is that there isn't a complete set of states with that nice property. We've just got time to start on the next really important section, which is about time evolution — maybe it's time to move over here. Okay. So physics is about prophecy — prophecy that works. It's about predicting the future; that's what it's about, and therefore at its core are equations of motion. Newtonian mechanics we usually think of as being to do with F = ma: it makes a statement of what the acceleration is, and when you can calculate the acceleration and you know the initial position and velocity, you can predict where your missile is going to be at some future time, where your planet is going to be at some future time, and so on. That's what it's all about. So at the core of quantum mechanics sits its time evolution equation, and I'm not going to immediately justify it, I'm just going to write it down — the time-dependent equation; this is the core of the subject, this is where the physics sits, 
and it's iℏ d|ψ⟩/dt = H|ψ⟩. This is why the Hamiltonian matters: because it appears in this central, crucial, vital equation. Its status in life is unique, because it uniquely tells you about the future, and that's what physics is about. And this holds for the state of any system — it's completely non-negotiable: a state which purports to describe a real physical object has to satisfy this equation. It tells you how the state evolves in time. It's of course a very abstract object at the moment, and at the moment I can't connect it, for most of you, to classical mechanics — we will be connecting it very shortly; those of you who did the S7 short option will recognise this, perhaps, just a little bit as having something to do with Hamilton's equations. The physical justification will come by and by, but ultimately there's no way this can be derived from anything you already know. It cannot be derived out of classical physics; classical physics can be derived out of this, because classical physics provides an approximation to this. The assertion is that nature evolves things according to this equation, and whether that's true or not can only be determined by experiment — it's got nothing to do with mathematics, and it cannot be justified on the basis of classical physics, ultimately. But if this is a valid statement, it should produce the right Newtonian equations of motion, and I will show you that it does, because Newtonian mechanics is an approximation to quantum mechanics. Okay. Now, this is kind of a scary equation, so let's try and find some circumstances in which we can solve it. Suppose our system has well-defined energy — in other words, the state |ψ⟩ is equal to |E⟩, where H|E⟩ = E|E⟩, right? A state of 
well-defined energy has to be an eigenfunction of the energy operator H with eigenvalue E; that's what it is. So let's suppose our system happens to have well-defined energy. Then it will have to solve this equation, and we'll have iℏ d|E⟩/dt = H|E⟩ = E|E⟩. So the rate of change of |E⟩ is simply proportional to |E⟩, and we know how to solve that equation; we spot it just from ordinary old-fashioned calculus. We spot that this implies that |E⟩ at time t is equal to e^(−iEt/ℏ) times |E⟩ at zero. I feel entitled to write this down on the basis of just boring classical mathematics, which says that if I know that dx/dt, where x is some variable, is equal to ax, that implies that x(t) = x(0) e^(at). So this familiar result inspires me to write that down, and I can now trivially check, by differentiating the right-hand side, that it satisfies this differential equation, because when I differentiate the right-hand side, |E⟩ at zero is not a function of time: it's the value that the state of well-defined energy takes at time t = 0, so it has no time derivative. The time derivative comes merely from the exponential, which is a totally boring exponential of a bunch of real numbers, well, apart from the i. So it's easy to evaluate the time derivative, and it's trivial to check that |E⟩ then satisfies this equation. So what does this tell us?
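The "trivial check by differentiating" can be done numerically too. Here is a minimal sketch (working in hypothetical natural units with ℏ = 1 and an arbitrary energy E, neither taken from the lecture) that verifies iℏ dψ/dt = Eψ for the candidate solution ψ(t) = e^(−iEt/ℏ) ψ(0) using a centred finite difference:

```python
import numpy as np

hbar = 1.0          # natural units (an assumption for this sketch)
E = 2.5             # some arbitrary definite energy
psi0 = 1.0 + 0.0j   # amplitude of the energy eigenstate at t = 0

def psi(t):
    # candidate solution: psi(t) = exp(-i E t / hbar) * psi(0)
    return np.exp(-1j * E * t / hbar) * psi0

# check i*hbar*(dpsi/dt) == E*psi at some time t via a finite difference
t, dt = 0.7, 1e-6
lhs = 1j * hbar * (psi(t + dt) - psi(t - dt)) / (2 * dt)
rhs = E * psi(t)
print(abs(lhs - rhs) < 1e-6)  # True: the exponential solves the equation
```

The only time dependence is in the phase factor, which is the whole point of the result that follows.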
This is a very important result. It tells us that the time evolution of states of well-defined energy is really dead trivial: they basically don't change. All that happens is that their phase goes around at the constant rate E/ℏ, which is of course incredibly large for typical systems, because ℏ is so small and it's on the bottom there. So this frequency is stupendous for an object like that: this thing has some energy and its wave function is zooming around at some hysterical rate. That's all that's happening. The beautiful thing is that this enables us to solve the general problem, because if I've got some system that's not in a state of well-defined energy, and we'll see that real systems never are in states of well-defined energy, then I can surely write it as a linear combination, with coefficients that depend on time, of states of well-defined energy: ψ(t) = Σ_n a_n(t) |E_n, t⟩. These are a complete set of states; we've been through this, this is just boring. So I simply put this ansatz, this expression, this expansion, into both sides of my time-dependent Schrödinger equation, and we discover that iℏ dψ/dt = iℏ Σ_n ( ȧ_n |E_n⟩ + a_n d|E_n⟩/dt ). What's that equal to? On the other side, that's equal to H into Σ_n a_n |E_n⟩. I've missed out a sum over n? Indeed I have, thank you, just about here. I'm kind of conscious of that horrible clock, but okay, why don't we just carry this on and write this as Σ_n a_n H|E_n⟩. Now iℏ d|E_n⟩/dt = H|E_n⟩, so this term here cancels this term here. So when I look at this stuff equals this stuff, those terms all cancel: the right side now says nothing, and the left side has the terms with a dot, so I've got the
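To put a number on "stupendous frequency": a quick back-of-envelope calculation (my own illustrative figure, assuming a macroscopic energy of one joule, which the lecture does not specify) shows how fast the phase E/ℏ winds for an everyday object:

```python
hbar = 1.054_571_817e-34   # reduced Planck constant in J s (CODATA value)
E = 1.0                    # a macroscopic energy of one joule (illustrative)

# phase winds at angular frequency omega = E / hbar
omega = E / hbar
print(f"{omega:.2e} rad/s")   # of order 1e34 rad/s
```

Even for one joule the phase completes of order 10^33 cycles per second, which is why this rotation is utterly unobservable for classical objects.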
conclusion that Σ_n ȧ_n |E_n, t⟩ = 0. Bra through with ⟨E_i, t|, and that leads to the conclusion that ȧ_i = 0. So the a's, the a_i, are constant. So we have a solution; this enables us to write down the solution to the general problem. We have that ψ(t) = Σ_n a_n |E_n, t⟩, with constants a_n which you can determine from the initial conditions. But I can explicitly write that out, because I know how this thing evolves in time: this is Σ_n a_n(0) e^(−iE_n t/ℏ) |E_n, 0⟩. So this is a fabulously important equation; this part of it needs to be burnt into the back of the retina, and it's the key to everything, because what it tells us is: once we know what these states of well-defined energy are, and the allowed energies, we can trivially evolve in time the dynamical state of our system and predict the future. We have everything; that's it. So a large part, a huge part, of this subject revolves around finding what these states of well-defined energy are, because they have this enormous predictive power. They are a miracle, they're a sort of wonder drug: they solve the problem, they do it. So we'll talk some more about them tomorrow.
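The general-solution formula above can be sketched in a few lines of code. This toy example (my own, with hypothetical energies E_n = 1 and 3 in natural units ℏ = 1, not values from the lecture) evolves a two-state system and checks that the norm and the probabilities |a_n|² stay constant, exactly as the constancy of the a_n demands:

```python
import numpy as np

hbar = 1.0                                 # natural units (assumption)
energies = np.array([1.0, 3.0])            # hypothetical eigenvalues E_n
a0 = np.array([0.6, 0.8], dtype=complex)   # initial amplitudes a_n(0), normalised

def psi(t):
    # psi(t) = sum_n a_n(0) * exp(-i E_n t / hbar) |E_n, 0>,
    # written here as the vector of amplitudes in the energy basis
    return a0 * np.exp(-1j * energies * t / hbar)

for t in (0.0, 1.3, 10.0):
    # the norm <psi|psi> is conserved under this evolution
    assert np.isclose(np.vdot(psi(t), psi(t)).real, 1.0)
    # each |a_n|^2 is constant: only the phases move
    assert np.allclose(np.abs(psi(t))**2, np.abs(a0)**2)
print("phases wind, probabilities stay put")
```

Everything dynamical is in the phase factors; this is why finding the E_n and |E_n⟩ solves the prediction problem outright.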