All right, so we have some homework for you today. Please put it in Gopri's mailbox, with all the team mailboxes — it's under this wonderful name, and you'll have to be able to pronounce it before you can pass the class. There will be another problem set posted a little later today; I'll let you know when it's available. And because we didn't have office hours this week, I'll hold a special office hour today at one o'clock in my office — if you have any final questions, you can come by and discuss with me. We'll also set up a weekly office hour with Gopri on Tuesdays. All right. Okey-dokey.

So let's see. Last time we were talking about the important problem of looking for eigenvectors and eigenvalues of operators. "Eigen" is German for "characteristic," so eigenvectors are the characteristic vectors of the map: they are vectors that are mapped back to themselves, up to an overall scale factor, and that scale factor is called the eigenvalue. We are ignoring the trivial case: the null vector, the vector of all zeros, trivially always satisfies this condition, but we don't include it in our class of eigenvectors. And really what we're talking about are rays in the Hilbert space, because if we rescale an eigenvector by any scalar factor, the result is also an eigenvector with the same eigenvalue. So two eigenvectors count as different only if they are not related to one another by just a scale factor.

Okay. We also stated — though we didn't prove — an important mathematical theorem known as the spectral theorem. The set of eigenvalues of an operator is often known as the spectrum of the operator, for reasons related to Fourier transforms that we'll remind ourselves of later. The spectral theorem says that if the operator is a normal operator, meaning that it commutes with its adjoint — examples of such operators are unitary operators and Hermitian operators — then there exists an orthonormal basis of the Hilbert space consisting entirely of eigenvectors of that normal operator.

Okay. So how do we actually find the eigenvectors and eigenvalues? Remind ourselves: solving the eigenvalue equation A|u⟩ = λ|u⟩ is equivalent to solving the characteristic equation det(A − λ·1) = 0. That is to say, the eigenvalues are the roots of that polynomial. Right now, as I said, we're restricting ourselves to finite-dimensional Hilbert spaces; soon enough we'll get to the important case of infinite dimensions and the challenges and subtleties associated with it, but for the moment, finite dimensions. Then these operators are matrices, det(A − λ·1) is just the determinant of a matrix, and it's a polynomial of order d with d roots. Those d roots are the eigenvalues. And then to find each eigenvector, we plug the eigenvalue back in and find the vector that solves the eigenvalue equation.

All right. Now, the process of finding the eigenvalues and eigenvectors of an operator, as you know, is often called diagonalization. That jargon comes from the following. If we have a normal operator, we said that its eigenvectors form an orthonormal basis — of course, generally we have to normalize them if we want them to be normalized. What that means is that, since such a set of eigenvectors exists by the theorem, that orthonormal basis forms a resolution of the identity. So the set of eigenvectors forms a resolution of the identity, by the spectral theorem.
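To make the finite-dimensional recipe above concrete — eigenvalues as roots of det(A − λ·1) = 0 — here is a small numerical sketch (an illustrative, made-up example, not from the lecture):

```python
import numpy as np

# A small, made-up Hermitian (real symmetric) matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigenvalues and orthonormal eigenvectors from the standard routine.
eigvals, eigvecs = np.linalg.eigh(A)

# The same eigenvalues as roots of the characteristic polynomial det(lam*1 - A).
char_coeffs = np.poly(A)
roots = np.sort(np.roots(char_coeffs).real)
print(np.allclose(np.sort(eigvals), roots))      # True

# Verify A u = lam u for each eigenpair (columns of eigvecs are the eigenvectors).
for lam, u in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ u, lam * u))           # True
```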
And because the eigenvectors form a resolution of the identity, if I write a representation of the operator in the basis of its own eigenvectors — we do that by inserting a complete set on each side, a resolution of the identity — I get a sum over two dummy indices:

A = Σ_{λ,λ′} |u_λ⟩⟨u_λ| A |u_λ′⟩⟨u_λ′|.

Now |u_λ′⟩ is an eigenvector, so A acting on it gives λ′ times that eigenvector (λ′, pardon me, not λ). And since the eigenvectors are orthonormal, the inner product ⟨u_λ|u_λ′⟩ is 0 if λ ≠ λ′ and 1 if the two indices are equal. So we can do the sum over λ′, replace every λ′ by λ, and write the familiar representation of the operator in the basis of its own eigenvectors,

A = Σ_λ λ |u_λ⟩⟨u_λ|,

which, as a matrix, has the eigenvalues down the diagonal and zeros everywhere else. So, represented in the basis of its own eigenvectors, the operator is a diagonal matrix whose diagonal entries are the eigenvalues.

Now let's prove some things about the nature of the eigenvalues. Suppose we're looking at a unitary operator. A unitary operator is a normal operator and thus can be diagonalized — it has eigenvalues and eigenvectors. Look at some particular eigenvector: U|u_λ⟩ = λ|u_λ⟩. If I take the adjoint of this, it becomes the bra: ⟨u_λ|U† = λ*⟨u_λ|. Now take the inner product of those two vectors, ⟨u_λ|U†U|u_λ⟩: on one side that's λ*λ = |λ|² times ⟨u_λ|u_λ⟩, and on the other side U†U is of course the identity. So for a unitary operator, the magnitude squared of the eigenvalue is 1. The eigenvalues are unit-magnitude complex numbers; they lie on the unit circle in the complex plane. That means every eigenvalue of a unitary operator is of the form λ = e^{iθ}: unitary operators have eigenvalues that are phases. And as I said, you should think about unitary operators kind of like that — they're the operator equivalent of phases. That's one reason to think about them that way.

What about Hermitian operators? That's another example of a normal operator. Let's say I have a Hermitian operator called A, with eigenvalue a: A|a⟩ = a|a⟩. I'm going to switch notation around a little bit willy-nilly here: sometimes we denote the ket just by its eigenvalue. You're familiar with that — for example, in the harmonic oscillator you write the ket as |n⟩ for the number state. Sometimes we'll write |u_n⟩ and sometimes we'll just write |n⟩, understanding that it's a label for the particular eigenvector. So A is our Hermitian operator, meaning it is self-adjoint. [Student: is that the eigenvalue?] No — the eigenvalue here just denotes which eigenvector we mean; the whole ket is the eigenvector. Sorry if I'm confusing you. What I'm trying to say is that a common notation is to label an eigenvector by its particular eigenvalue.

Right. So consider the following inner product: ⟨a|A|a⟩. That's equal to a⟨a|a⟩, and if the eigenvector is normalized, that's just a. Fine. Now take the complex conjugate: that's a*. But the complex conjugate of this matrix element is what I get by taking the adjoint of the operator and swapping the bra and the ket, ⟨a|A†|a⟩, and A is self-adjoint, so A† = A. So this is again equal to a.
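Backing up to the unitary case for a moment, here's a quick numerical check (a made-up example) that unitary matrices indeed have eigenvalues of unit modulus:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# A 4x4 unitary matrix, obtained from the QR decomposition of a random complex matrix.
U, _ = np.linalg.qr(M)
print(np.allclose(U.conj().T @ U, np.eye(4)))    # U†U = 1

# Its eigenvalues all have unit magnitude, i.e. they are phases e^{i*theta}.
lam = np.linalg.eigvals(U)
print(np.allclose(np.abs(lam), 1.0))             # True
```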
So, putting those two lines together: for a Hermitian operator, each eigenvalue is equal to its own complex conjugate — in other words, it's real. The eigenvalues of Hermitian operators are real numbers. Next, I claim that the eigenvectors associated with different eigenvalues of a Hermitian operator must be orthogonal.

How do we show that? Well, instead of ⟨a|A|a⟩, consider the inner product ⟨a′|A|a⟩, where a and a′ are different eigenvalues. Now |a⟩ is an eigenvector, so the action of the operator on it just replaces the operator by the eigenvalue: ⟨a′|A|a⟩ = a⟨a′|a⟩. (And a is not equal to a′ — I meant "not equal to" but wrote "equal to" on the board a moment ago.)

What about the action of A to the left, on the bra ⟨a′|? This is something to be careful about. Generally, the way to see how an operator acts on a bra is to first see how its adjoint acts on the corresponding ket, and then take the adjoint of the whole thing — because in some sense we only know, in the first instance, how operators act on kets. Now A is self-adjoint, so its adjoint is itself, and A|a′⟩ = a′|a′⟩. Taking the adjoint gives ⟨a′|A = a′*⟨a′| = a′⟨a′|, since a′ is a real number — we just showed that eigenvalues of a Hermitian operator are real. So for a Hermitian operator, whether the eigenvector sits on the right or on the left, the action is the same: acting to the left gives a′⟨a′|a⟩. (If the operator weren't Hermitian, this would come with a complex conjugate.)

So, subtracting one expression from the other, (a − a′)⟨a′|a⟩ = 0. Now, I assumed these were eigenvectors with different eigenvalues — that is to say, they are non-degenerate; eigenvectors associated with different eigenvalues are called non-degenerate. Since a ≠ a′, it must be that ⟨a′|a⟩ = 0. Everyone loves writing QED. So if we have non-degenerate eigenvectors — different eigenvectors that do not share an eigenvalue — then they must be orthogonal.

Now, what about the case of degeneracies? What do I mean by that? Let me say this more generally, and try to stay a little consistent with the notation. Suppose there is a set of eigenvectors that I'll denote |u_λ^(i)⟩, where i is an index that runs from 1 up to some number g_λ for that particular λ; so the set has that many elements. This g_λ is called the degeneracy. What do I mean by degenerate? I mean that every one of the vectors in this set has the same eigenvalue λ. These are eigenvectors that are different, meaning they're not parallel to one another — remember, when I talk about different eigenvectors, they can't just be rescalings of one another — so they form a linearly independent set, but nonetheless they all have the same eigenvalue. Such sets are sets of degenerate eigenvectors. All right. Well, what can I say about such a set?
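Before answering that, here is a small numerical illustration of both facts at once (a made-up matrix): eigenvectors belonging to different eigenvalues come out orthogonal, and a repeated root of the characteristic polynomial signals a degenerate set.

```python
import numpy as np

# A made-up Hermitian matrix with a doubly degenerate eigenvalue.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])

eigvals, V = np.linalg.eigh(A)
print(eigvals)                          # [2. 2. 4.] -> the eigenvalue 2 is doubly degenerate

# Eigenvectors for different eigenvalues are orthogonal (the result just proved);
# the two vectors sharing the eigenvalue 2 form a degenerate set spanning a 2D subspace.
u_deg1, u_deg2, u_4 = V.T               # eigh returns eigenvalues in ascending order
print(np.dot(u_deg1, u_4), np.dot(u_deg2, u_4))   # ~0, ~0
```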
Well, first of all, it is not guaranteed that eigenvectors within this set are orthogonal to one another — even for a Hermitian operator — because the proof we just gave relied on the eigenvalues being different. It's perfectly possible for (a − a′)⟨a′|a⟩ to vanish simply because the eigenvalues are the same, without the inner product being zero.

What is true is the following. The degenerate eigenvectors are not parallel to one another, and if I take any superposition of them with some coefficients, that vector is also an eigenvector. Is that clear? How do we know that the superposition is an eigenvector, and with the same eigenvalue? We can just check it quickly: if I act the operator on the superposition, every member of the degenerate set is an eigenvector, and I can factor out that λ and get back the same superposition, because λ is the same for every term in the sum and thus can be pulled outside the sum. So every superposition of vectors in this set is also an eigenvector with the same eigenvalue. That says the set spans a subspace of the Hilbert space — call it H_λ — such that every vector in that subspace is an eigenvector with eigenvalue λ.

What that further means is that, since this is a subspace, we can pick an orthonormal basis for it. Every subspace of our Hilbert space is itself a Hilbert space, and there exists an orthonormal basis for it — we don't have to settle for an arbitrary basis. The dimension of this subspace is the degeneracy. So there exists a set of vectors forming an orthonormal basis for the subspace; this, of course, goes back to the spectral theorem and the existence of an orthonormal basis that diagonalizes the whole matrix.

So what can we say? We can decompose the Hilbert space into a set of subspaces: for a Hermitian operator, the total Hilbert space decomposes into a subspace associated with each eigenvalue of A,

H = H_{a1} ⊕ H_{a2} ⊕ ⋯,

where the eigenvalues are a1, a2, and so on. The symbol ⊕ is what we call the direct sum, and each of these is a subspace of the total Hilbert space. [Student: what does the backwards E mean?] "There exists." The symbol ∀ means "for all" and ∃ means "there exists." So the a's are the eigenvalues, and each a has a degeneracy g_a; the dimension of the subspace H_a is the degeneracy. Now, the degeneracy might be one — there could be only one eigenvector associated with that eigenvalue — in which case each of these would be a one-dimensional subspace, and there would be d such one-dimensional subspaces associated with the d distinct eigenvalues of a d-dimensional Hilbert space. Or it might be that this one is one-dimensional, that one three-dimensional, that one one-dimensional — they can all have different degeneracies; there's no rule here.

Now what does "direct sum" mean here? It means that each subspace is orthogonal to the others: every vector in one subspace is orthogonal to all the vectors in another. What's an example of this that we're more familiar with? Ordinary three-dimensional space.
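Before the example, here are the two statements just made in symbols — any superposition of degenerate eigenvectors is again an eigenvector with the same eigenvalue, and the whole space splits into eigen-subspaces:

```latex
A \Big( \sum_{i=1}^{g_\lambda} c_i \, |u_\lambda^{(i)}\rangle \Big)
  = \sum_{i=1}^{g_\lambda} c_i \, A\, |u_\lambda^{(i)}\rangle
  = \lambda \sum_{i=1}^{g_\lambda} c_i \, |u_\lambda^{(i)}\rangle ,
\qquad
\mathcal{H} = \bigoplus_{a} \mathcal{H}_a ,
\quad \dim \mathcal{H}_a = g_a ,
\quad \sum_a g_a = d .
```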
So, the familiar example: three-dimensional space is a direct sum of the vectors along the z-axis with the vectors in the x-y plane. Here's the x-y plane, here's the z-axis. Any vector in three-dimensional space is the sum of a vector along the z-axis and a vector in the plane. The x-y plane is a two-dimensional subspace; the z-axis is a one-dimensional subspace. You can think of the direct sum loosely as a union of the sets, but with this additional structure on the inner products and such.

How do we know this is true — that these subspaces are orthogonal? Any guesses, any ideas, any notions? [Student: the eigenvector proof?] Yeah, exactly. We just proved that two vectors which are eigenvectors of a Hermitian operator associated with different eigenvalues must be orthogonal. That's what's going on here: everything in one subspace is an eigenvector of the observable A with one eigenvalue, and everything in the other subspace is an eigenvector with a different eigenvalue, so by the proof over there, they must be orthogonal.

So, in general, we have a basis for the total Hilbert space, which we'll denote |e_{a,i}⟩, where a runs over the eigenvalues from a1 up to whatever the last eigenvalue is, and i goes from 1 up to the degeneracy g_a. That is, there exists an orthonormal basis such that two basis vectors associated with different eigenvalues are orthogonal, and vectors belonging to the same eigenvalue — different vectors within one of the subspaces — can always be chosen to be orthogonal as well.

And since this is a basis, I can write a resolution of the identity for the total Hilbert space of the system:

1 = Σ_a Σ_{i=1}^{g_a} |e_{a,i}⟩⟨e_{a,i}|,

where I sum over all the possible eigenvalues, and for each one I sum over all the degenerate eigenvectors within that subspace. That is a way of writing a resolution of the identity. The operator obtained by summing these outer products over all the eigenvectors for a given eigenvalue is what I'm going to call the projection operator onto the subspace associated with eigenvalue a:

P_a = Σ_{i=1}^{g_a} |e_{a,i}⟩⟨e_{a,i}|,

the projection operator onto the subspace H_a.

Why is that a projection operator onto that space? Look back at the simple picture. Here's some arbitrary position vector r in three-dimensional space, and suppose I want to project it onto the x-y plane — call the result r_xy. How do I obtain that vector? Its components are the x coordinate of r and the y coordinate of r; the z coordinate belongs to the other piece. To get it, I project r onto e_x and onto e_y:

r_xy = e_x (e_x · r) + e_y (e_y · r) = (e_x e_x + e_y e_y) · r.

I can write the coefficient on either side because it's just a number — it doesn't make a difference. And e_x e_x + e_y e_y we recognize as a dyadic, which we saw in the first lecture — a projection operator that projects onto the x-y plane (with the double arrow on it, if I want to mark it as a dyadic). So a sum of these outer products is a projection onto a subspace; that subspace in this case was the x-y plane. In general, the subspace is the span of the eigenvectors that all have eigenvalue a, and its dimensionality depends on the degeneracy. If there is no degeneracy — if every eigenvalue has degeneracy one — then there's no inner sum to do, and this reduces to the more familiar expression. So we can resolve the identity as a sum of projectors onto orthogonal subspaces.
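Here is that three-dimensional picture as a small numpy sketch (my own illustrative example): build the projector onto the x-y plane from outer products of the Cartesian unit vectors and check its defining properties.

```python
import numpy as np

ex, ey, ez = np.eye(3)                        # the three Cartesian unit vectors (rows of the identity)

# Projector onto the x-y plane as a sum of dyadics (outer products), and onto the z-axis.
P_xy = np.outer(ex, ex) + np.outer(ey, ey)
P_z = np.outer(ez, ez)

r = np.array([1.0, 2.0, 3.0])                 # some arbitrary vector

print(P_xy @ r)                               # [1. 2. 0.] -> the component in the plane
print(np.allclose(P_xy @ P_xy, P_xy))         # a projector squared is itself
print(np.allclose(P_xy @ P_z, 0.0))           # projectors onto orthogonal subspaces multiply to zero
print(np.allclose(P_xy + P_z, np.eye(3)))     # together they resolve the identity
```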
The most fine-grained resolution would be a projection onto the z-axis, plus one onto the x-axis, plus one onto the y-axis. What I can say then is that, in this orthonormal basis, the diagonal representation of the observable — of the operator A — looks like this. Say the first eigenvalue is a1. Here's the matrix: it's a diagonal matrix with zeros everywhere I didn't write a number, but I've written it in terms of blocks. The first block is associated with the subspace for the eigenvalue a1 — a one-dimensional block, because a1 is a non-degenerate eigenvalue. Then suppose the next eigenvalue is four-fold degenerate; in that case its block is a 4-by-4 block, which is just saying the degeneracy of that eigenvalue is four. And then there's a block associated with H_{a3} — let's say that one is 2-by-2; there's nothing special about these particular numbers, the degeneracies are whatever they happen to be.

So one of the things you have to be careful about when diagonalizing a matrix is that, if you have degenerate eigenvalues, you're not guaranteed that the eigenvectors you find are orthogonal. Doing the characteristic-polynomial procedure and plugging the eigenvalues back in does not, just by itself, necessarily give orthogonal eigenvectors. If it doesn't, then after the fact you have to take the subspace of degenerate eigenvectors — they are at least linearly independent — and orthogonalize them by something like the Gram–Schmidt orthogonalization procedure. Okay? Okie dokie.

Now, the last topic I want to cover in this mathematical foundation is the question of compatible observables. (Thank you — I'm never sure about that spelling.) Compatible observables: commuting operators and mutual sets of eigenvectors. So suppose we have another Hermitian operator — suppose that A and B are both Hermitian and that A and B commute. Remember, the commutator is the operator you get by multiplying them together and then subtracting the product taken in the other order: [A, B] = AB − BA. Let's just suppose that this is zero.

And let |a⟩ be an eigenvector of the operator Â with eigenvalue a. Then I claim the vector I obtain by acting B on it, B|a⟩, is an eigenvector of Â with the same eigenvalue. That is to say, Â(B|a⟩) = a(B|a⟩). Is that obvious? Sure, it's obvious because they commute: ÂB = BÂ, and since |a⟩ is an eigenvector of Â with eigenvalue a,

Â(B|a⟩) = B(Â|a⟩) = B(a|a⟩) = a(B|a⟩).

Now suppose the eigenvalue a is non-degenerate. What does that mean? It means that B|a⟩ must be proportional to |a⟩, because, as we said, eigenvectors are really rays in Hilbert space — everything along the same ray is the same eigenvector — and that's the only possibility when a is non-degenerate. So B|a⟩ = b|a⟩ for some number b. That is to say, this vector is really labeled by two indices: it is also an eigenvector of B. So if we have two operators that commute, and a vector that is a non-degenerate eigenvector of one of them, it must also be an eigenvector of the other. It doesn't have to have the same eigenvalue — generally it's a different eigenvalue — but it must be an eigenvector, and customarily we write both eigenvalues in the ket: |a, b⟩. But what can we say if |a⟩ is one of a degenerate set of eigenvectors of Â?
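Before taking up that question, here is a quick numerical check of the non-degenerate statement just proved (made-up matrices, constructed to commute by giving them the same eigenbasis):

```python
import numpy as np

# Build two commuting Hermitian matrices that share an eigenbasis but are not diagonal:
# rotate two diagonal matrices by the same (made-up) orthogonal matrix R.
rng = np.random.default_rng(1)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = R @ np.diag([1.0, 2.0, 3.0]) @ R.T        # non-degenerate spectrum
B = R @ np.diag([5.0, 4.0, 4.0]) @ R.T
print(np.allclose(A @ B, B @ A))              # True: [A, B] = 0

# Each eigenvector of A (all non-degenerate) is automatically an eigenvector of B too.
a_vals, V = np.linalg.eigh(A)
for u in V.T:
    Bu = B @ u
    b = u @ Bu                                 # its B eigenvalue, b = <u|B|u>
    print(np.allclose(Bu, b * u))              # True: B|u> = b|u>
```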
So suppose |a⟩ is one of a degenerate set. Then what we know for sure is that B acting on this vector is still an eigenvector of the operator Â — capital A hat — with the same eigenvalue, but we can't say anything more than that. In the non-degenerate case it was the non-degeneracy that forced B|a⟩ to be proportional to |a⟩; if the eigenvalue is degenerate we can't draw that conclusion. What we can say for sure is that B|a, i⟩ is a member of the subspace H_a: B maps the eigenvector somewhere within that subspace, because every vector in that subspace — every vector carrying that index a — is an eigenvector of Â.

What that means is that, given another eigenvector with eigenvalue a′, where a′ is not degenerate with a, what can I say about the matrix element ⟨a′, j|B|a, i⟩? B|a, i⟩ is in the subspace H_a; |a′, j⟩ is not. So what must that matrix element be? Zero. It has to be, because we said that eigenvectors in subspaces associated with different eigenvalues are orthogonal; whatever the vector B|a, i⟩ is, it has to be orthogonal to |a′, j⟩.

So suppose I take the |e_{a,i}⟩ as an orthonormal basis. What can I then say for sure about the matrix representation of the operator B? What I can say is that B is block-diagonal. Taking our earlier example, B will look like blocks B1, B2, B3: B1 is the 1-by-1 block, B2 is a 4-by-4 block, B3 is 2-by-2, and so on. I can't say a priori that these blocks are diagonal matrices — B2 is just some 4-by-4 matrix, B1 is of course just a number, B3 is some 2-by-2 matrix — but all the matrix elements between different blocks are necessarily zero, because of what we just showed. So we have what we call block-diagonalized B; we haven't fully diagonalized it yet.

Another way of thinking about this is the following. Because we have this basis, we said there is a resolution of the identity as a sum over projections onto the different subspaces associated with the different eigenvalues of A — that's what we wrote over there. Let me also add one property of projection operators that I forgot to write. P_a is the projection operator onto the subspace H_a. These projection operators satisfy the following: if the two subspaces are different, the product of the two projections is zero, because the subspaces are orthogonal. And what happens when a equals a′? I project again — but if I've already projected once, projecting a second time doesn't do anything new. So a projection operator squared is itself: P_a P_{a′} = δ_{aa′} P_a.

Now let's look at the operator B in this space — I'm sorry, in this representation:

B = 1·B·1 = Σ_a Σ_{a′} P_a B P_{a′}.

However, because of what we showed — no matrix elements of B between different subspaces — if a ≠ a′ those terms are all zero. The only non-zero terms are the ones where both projections are onto the same subspace, so this equals Σ_a P_a B P_a. These P_a B P_a are exactly the blocks, and the matrix elements within those blocks need not be diagonal in general. [Student: why do you go from a sum over two variables to a sum over one?]
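The step being asked about, written out explicitly in the same notation:

```latex
P_a = \sum_{i=1}^{g_a} |e_{a,i}\rangle\langle e_{a,i}| ,
\qquad
P_a P_{a'} = \delta_{a a'}\, P_a ,
\qquad
\sum_a P_a = 1 ,
\qquad
B = \Big(\sum_a P_a\Big) B \Big(\sum_{a'} P_{a'}\Big)
  = \sum_{a,a'} P_a B P_{a'}
  = \sum_a P_a B P_a ,
```

where the cross terms with a ≠ a′ drop out because B has no matrix elements between different eigen-subspaces of A.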
Yeah, good point. That followed from what we established before: we said that when B commutes with A, there can be no matrix elements of B between vectors associated with different eigenvalues of A. Okay.

However, although these block matrices need not be diagonal, they are Hermitian matrices — remember, I took A and B to be two Hermitian operators, so this block is, say, a Hermitian 4-by-4 matrix — and therefore we can diagonalize it; we can always find a set of orthonormal eigenvectors within each block. So the operators B_a — that is, the operator B projected into the eigen-subspaces of A — are Hermitian, which implies we can diagonalize them.

Now, as we said, if the eigenvalues of each B_a are non-degenerate, the eigenvectors are automatically orthogonal, and we have found a mutual set of orthonormal eigenvectors that simultaneously diagonalizes the operators A and B. However, maybe there are degenerate eigenvalues of B as well. Then, when we diagonalize these blocks, some of them have sub-blocks in which there are degeneracies, and within those degenerate subspaces it may be the case that two eigenvectors are not orthogonal — of course we can always choose them to be, but it's not automatic. The upshot is that if the eigenvalues of B_a are degenerate, the eigenvectors aren't guaranteed to be orthogonal.

So suppose there is a third operator: let C also commute with A and with B (and A and B commute with one another). Then we have these new sets of eigenvectors, labeled now by both a and b, and each of those labels can still be degenerate. The matrix C then has a certain block-diagonal structure in this basis. Let me explain what I'm trying to say with all this notation. Just for the heck of it, say that when we diagonalized the 4-by-4 block of B, one eigenvalue of B was non-degenerate, a second eigenvalue was doubly degenerate, and a third was non-degenerate. When I express C in this basis, it is block-diagonal with respect to these finer subspaces: over the non-degenerate eigenvalues of B the blocks are just one-dimensional — just numbers — while over the doubly degenerate eigenvalue of B the block is a 2-by-2 matrix, and I just keep going.

And that leads us to the final result, which says the following: a so-called complete set of mutually commuting Hermitian operators A, B, C — or however many there are, such that every operator in the set commutes with every other — completely specifies a vector in our space. What do I mean by that? If I have such a complete set, a mutual eigenvector is uniquely specified by listing all of its eigenvalues. If I don't have the complete set, then specifying a vector merely as an eigenvector of one of those observables does not uniquely specify the state — the vector — in our space.
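As a numerical sketch of the block-by-block procedure just described (illustrative code only — the matrices are made up, exact commutation is assumed, and the tolerance for grouping degenerate eigenvalues is a hypothetical choice): diagonalize A first, then diagonalize B restricted to each degenerate eigenspace of A; the resulting basis diagonalizes both.

```python
import numpy as np

def simultaneously_diagonalize(A, B, tol=1e-9):
    """Common orthonormal eigenbasis of two commuting Hermitian matrices.

    Step 1: diagonalize A.  Step 2: within each (possibly degenerate)
    eigenspace of A, diagonalize the projected block of B.
    """
    a_vals, V = np.linalg.eigh(A)
    basis = np.array(V, copy=True)
    start = 0
    while start < len(a_vals):
        end = start
        while end < len(a_vals) and abs(a_vals[end] - a_vals[start]) < tol:
            end += 1                               # columns [start, end) span one eigenspace H_a
        block = V[:, start:end]
        B_a = block.conj().T @ B @ block           # Hermitian block P_a B P_a
        _, W = np.linalg.eigh(B_a)
        basis[:, start:end] = block @ W            # rotate within H_a to diagonalize B
        start = end
    return basis

# Made-up commuting Hermitian matrices; A has a doubly degenerate eigenvalue.
A = np.diag([1.0, 2.0, 2.0])
B = np.array([[3.0, 0.0, 0.0],
              [0.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
print(np.allclose(A @ B, B @ A))                   # True: they commute
V = simultaneously_diagonalize(A, B)
for M in (A, B):
    D = V.conj().T @ M @ V
    print(np.allclose(D, np.diag(np.diag(D))))     # True: both are diagonal in this basis
```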
Let's make this more physical for a moment, just to remind you of something you know about angular momentum. Consider the angular momentum operator, position cross momentum, L = r × p, and look at the magnitude squared of that vector, L². I can look at the eigenvectors of that operator; for example, we know there are eigenvectors of L² labeled by l, where l could be 0, 1, 2, 3, and so on. However, specifying a vector as an eigenvector of L² does not tell me uniquely which vector I'm talking about, because there is degeneracy: we know that for a given l, the degeneracy is 2l + 1 — there are 2l + 1 different vectors in Hilbert space which all have the same eigenvalue of L². I can completely specify which vector I'm talking about by specifying the eigenvalue of another operator — not just L², but another operator that commutes with L². In fact, a complete set of mutually commuting operators in this case — the standard set — is L² together with the projection of the angular momentum along the z-axis, L_z. [Student: does that work for any type of angular momentum — spin?] Yes, any type, whether orbital or spin, indeed. In that case we write the eigenvectors as |l, m⟩: now we have two eigenvalues, l and m, one for L² and one for L_z. So this is an example of a set of observables that have a simultaneous set of eigenvectors, and I need to specify all of the eigenvalues in order to uniquely specify the state. This will have important physical implications that we'll get to soon. We'll stop there — we're done now with the basic mathematical formalism we need — and we'll continue on next time.
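For reference, the standard angular-momentum relations invoked in that last example:

```latex
[L^2, L_z] = 0 , \qquad
L^2 \,|l, m\rangle = \hbar^2\, l(l+1)\, |l, m\rangle , \qquad
L_z \,|l, m\rangle = \hbar\, m\, |l, m\rangle ,
\qquad m = -l, -l+1, \dots, +l ,
```

so each eigenvalue of L² is (2l + 1)-fold degenerate, and the pair (l, m) uniquely labels the basis state.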