We will have our first problem set out tomorrow. We have a holiday on Monday, so we're off our usual rhythm: we won't have our office hour at the usual time, but we'll be available after class on Tuesday. I've heard from most people about availability on Tuesday, and we'll schedule office hours accordingly, so please send me that email; we'll do our best to accommodate everybody.

All right, so last time we were talking about linear operators. Operators are maps on vectors, and linear operators satisfy the property that the map of a linear combination is the same linear combination of the maps of the individual pieces. And there's an intimate relationship between linear operators and matrices: they're one and the same object, really. In particular, we can write, as we saw, representations of an operator, particularly on a finite-dimensional Hilbert space. We'll get into the question of infinite dimensions, which will be very important for the study of quantum mechanics and how you deal with it mathematically, a little bit later; for the moment, let's restrict our attention to finite dimensions.

If we have a finite-dimensional space and an orthonormal basis {|e_i⟩} for that space, then as we know we can write a resolution of the identity, I = Σ_i |e_i⟩⟨e_i|, and use it to quickly write a representation of the operator as a combination of transition operators |e_i⟩⟨e_j| between the basis elements, weighted by complex numbers. Those complex numbers, M_ij = ⟨e_i|M|e_j⟩, form an array: a matrix. So this is the matrix representation of the operator in a basis, and these numbers are the matrix elements, the elements of the matrix. The first index is the row and the second index is the column.

There are many ways to remember or think about the matrix representation in a useful way. One is that each column of the matrix is just the vector one obtains by acting with the operator on the basis vector associated with that column, represented in the basis of interest. So the first column is the vector M|e_1⟩, with coordinates obtained by projecting it onto each basis vector. Is that clear? We'll see an example in a moment. This has to be clear if you want to follow what comes next.

Right. Okay, so we talked about basic matrix operations, like the inverse, and the adjoint: the matrix elements of the adjoint operator are given by the complex-conjugate transpose, (M†)_ij = (M_ji)*. And we talked about special classes of operators. Hermitian operators, also called self-adjoint, satisfy M† = M. And we also have anti-Hermitian, also called skew-Hermitian, operators, which satisfy M† = −M. Is i times a Hermitian operator anti-Hermitian? Always. As we said last time, a Hermitian operator is kind of the moral equivalent of a real number, and an anti-Hermitian operator is kind of the moral equivalent of an imaginary number; an imaginary number is i times a real number. I'm sorry, I didn't hear the question. The question was: is every anti-Hermitian operator the number i times a Hermitian operator? The answer is yes, for this very reason: if I divide an anti-Hermitian operator by i, then it's Hermitian.
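To make the column picture concrete, here is a minimal numerical sketch, not from the lecture: numpy is assumed, and the operator used here is an arbitrary stand-in matrix. Each column of the representation is the image of a basis vector, and the same matrix comes out of the bra-ket formula M_ij = ⟨e_i|M|e_j⟩.

```python
import numpy as np

d = 3
basis = np.eye(d)                      # orthonormal basis e_1..e_d as columns

def M_action(v):
    # an arbitrary linear map, stood in for by a fixed complex matrix
    A = np.array([[1, 2j, 0], [0, 1, 1], [1j, 0, 2]])
    return A @ v

# Each column j of the representation is M|e_j> written in the basis:
M_rep = np.column_stack([M_action(basis[:, j]) for j in range(d)])

# Matrix elements M_ij = <e_i| M |e_j> (np.vdot conjugates the bra):
M_ij = np.array([[np.vdot(basis[:, i], M_action(basis[:, j]))
                  for j in range(d)] for i in range(d)])
assert np.allclose(M_rep, M_ij)
```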
And the other very important class of operators that we talked about briefly are unitary operators. Unitary operators are almost always called U; by the way, Hermitian operators are generally called H, or we might call a Hermitian operator A. There are some notational conventions we use all the time and some we don't; we almost always call a unitary operator U. A unitary operator is one whose inverse is its adjoint, which is to say U†U = UU† = I, the identity. These have the important property that they preserve the inner product: if I map every vector in the Hilbert space by U, then whatever the inner product was before between two vectors, it stays that inner product, ⟨Uu|Uv⟩ = ⟨u|v⟩. We proved that last time. You might say that's really the defining property of a unitary operator, from which all the other properties follow, depending on what you want to call the definition.

And if we have an orthonormal basis {|e_i⟩} for the space, and we define the set |ẽ_i⟩ = U|e_i⟩, then this set is also orthonormal. So another thing to keep in mind about unitary operators is that they can be thought of as maps that take orthonormal bases to other orthonormal bases. So I can write U, if I like, as a transition operator, U = Σ_i |ẽ_i⟩⟨e_i|, that takes each basis vector to its image under the map.

So what we said then is that if I want to look at a change of basis, a change of representation: this matrix is one representation of the operator in one basis, but I can choose a different representation of the same operator with respect to a different basis, and those two representations are related to one another through a change of basis that is a unitary transformation. Just to emphasize that fact once again, let's look at the matrix elements in the new, tilded basis: M̃_ij = ⟨ẽ_i|M|ẽ_j⟩. If I want to find out how these matrix elements are related to the old ones, what I do is insert a resolution of the identity on each side of M, with respect to the basis I want to represent the operator in. Writing it all out, and being careful to use different dummy indices (we can't reuse i and j; that's always the trick):

M̃_ij = Σ_{l,k} ⟨ẽ_i|e_l⟩ ⟨e_l|M|e_k⟩ ⟨e_k|ẽ_j⟩.

The middle factors ⟨e_l|M|e_k⟩ are the matrix elements in the old basis. And what are the outer factors? Well, ⟨e_k|ẽ_j⟩ = ⟨e_k|U|e_j⟩ = U_kj is the matrix element of the U that takes me between the old basis and the new basis, and ⟨ẽ_i|e_l⟩, as we showed last time, is the adjoint matrix, (U†)_il. So M̃ = U†MU: a unitary transformation takes me from one basis to the other. This is an example of what we call a unitary transformation. Similarly, we can change the representation of a vector, not just of a matrix; it's in the notes, check it out for yourself. All right. Any questions so far? All right.
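Here is a small sketch you can run to confirm the change-of-basis rule; this is an illustration under my own assumptions, not anything from the lecture: numpy is assumed, the unitary is manufactured from a QR decomposition, and the matrices are random stand-ins.

```python
import numpy as np

# A random unitary maps an orthonormal basis to an orthonormal basis,
# and the change-of-basis rule for operator representations is M~ = U† M U.
rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

assert np.allclose(U.conj().T @ U, np.eye(d))   # U is unitary
E_new = U                                        # columns are |e~_j> = U|e_j>
M_new = E_new.conj().T @ M @ E_new               # matrix elements <e~_i|M|e~_j>
assert np.allclose(M_new, U.conj().T @ M @ U)

# a unitary change of basis leaves the spectrum alone:
vals_new = np.linalg.eigvals(M_new)
for w in np.linalg.eigvals(M):
    assert np.min(np.abs(vals_new - w)) < 1e-8
```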
So far, so good. Let's just quickly work through an example. A particular set of operators that you're familiar with, and which play a pretty important role in lots of the physics we'll talk about, are the Pauli operators. The Pauli operators are familiar from the theory of spin 1/2, but as we will see, they're useful more generally than just for thinking about the problem of spin. We'll see that.

So what are the definitions? If I have a spin-1/2 particle, then we know that I can form a basis for the space that describes the spin state out of two possible spin states: spin up along z and spin down along z, which I'll call |+⟩ and |−⟩. Clear? It will be in a moment. So in that basis, where |+⟩ is spin up along z and |−⟩ is spin down, the operators are

σx = |+⟩⟨−| + |−⟩⟨+|,
σy = −i|+⟩⟨−| + i|−⟩⟨+|,
σz = |+⟩⟨+| − |−⟩⟨−|.

Those are the operators written in this kind of notation; there are lots of notations we can use to express operators, but this is one of them. Now, although I've written these with respect to this basis, these are really basis-independent expressions, because I could change the basis, but this object equals that object. It's not just that it's "represented by"; it's actually the same object. But we can also write a matrix representation of these operators in this basis. So how do you do that? As I emphasized at the start: the matrix elements are, say for σx, the numbers ⟨s|σx|s′⟩, where s and s′ run over plus and minus.

So if I want to write σx as a matrix, I have to compute those matrix elements. What you see first of all is the diagonal matrix elements. What are they? They're zero. Because if I look at, for example, ⟨+|σx|+⟩, I split the sandwich: ⟨+|−⟩ is zero, because they're orthogonal, and ⟨−|+⟩ is zero, because it's orthogonal. So both of those diagonal matrix elements are zero. Clear? What about the off-diagonal elements? Let's look at ⟨+|σx|−⟩; that's row plus, column minus. The first term gives ⟨+|+⟩⟨−|−⟩, and what is that? One. And the second term gives ⟨+|−⟩⟨+|−⟩. What is that? Zero. So this matrix element is one. And similarly the other off-diagonal element, using the same kind of computation. So

σx ≐ [[0, 1], [1, 0]]

is the matrix representation of σx in the basis of spin up and down along z. Okay.

What about σy? Can you just tell me now what its matrix representation is? It looks just like σx, except instead of ones off the diagonal it has these plus and minus i's. So firstly, what are its diagonal matrix elements? Zero, right? What about the off-diagonal ones? Well, one of them is i and the other one is −i. Which one is which? Remember: row, column. And we're doing it in this order. That's important: when you write down a matrix, it's always with respect to a particular order of the basis, and you have to know what order I'm talking about. I didn't actually tell you: I'm writing this in the order plus, minus. I didn't have to; we could write it any way we like.
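If you like to check this kind of thing numerically, here is a minimal sketch, with numpy assumed and the variable names my own: it builds the Pauli matrices from the outer-product expressions above, in the order plus, minus.

```python
import numpy as np

# Build the Pauli matrices as outer products in the |+>, |-> basis
# (ordered plus, minus) and compare to the standard matrix forms.
plus  = np.array([1, 0], dtype=complex)   # spin up along z
minus = np.array([0, 1], dtype=complex)   # spin down along z

sx = np.outer(plus, minus) + np.outer(minus, plus)            # |+><-| + |-><+|
sy = -1j * np.outer(plus, minus) + 1j * np.outer(minus, plus)
sz = np.outer(plus, plus) - np.outer(minus, minus)            # |+><+| - |-><-|

assert np.allclose(sx, [[0, 1], [1, 0]])
assert np.allclose(sy, [[0, -1j], [1j, 0]])
assert np.allclose(sz, [[1, 0], [0, -1]])
```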
When you write a matrix, you are always specifying the order in which you list the elements of the basis. If you don't know what that order is, the matrix is meaningless. So in our order, with this column plus, this column minus, and likewise for the rows, because it's row-column:

σy ≐ [[0, −i], [i, 0]].

And what about σz? Let's just think about that for a second. Are its diagonal elements zero? Not this time, right? Because if I compute ⟨+|σz|+⟩, the first term gives one and the second gives zero, so ⟨+|σz|+⟩ is one, and ⟨−|σz|−⟩ is negative one. And what about the off-diagonal matrix elements? They're zero this time. So this one is a diagonal matrix:

σz ≐ [[1, 0], [0, −1]].

Obviously, this is an example of something we'll talk about more later this lecture: these basis vectors are the eigenvectors of this operator, and a matrix or an operator represented in the basis of its own eigenvectors is diagonal, with the diagonal elements being the eigenvalues. And that's why I'm calling these things plus and minus: σz acting on |±⟩ gives ±1 times |±⟩. These are the eigenvectors, and those are the eigenvalues.

Yes? [Student: in the homework we have the same things, but with halves as coefficients instead of one.] Those are the spin operators; remember, the spin operator is ħ/2 times the Pauli operator, S = (ħ/2)σ. [Student: but isn't ħ just one?] Yes, I know, but I like to keep ħ around; we'll see why. It's important to keep ħ around sometimes. All right, cool.

Now, there are some interesting properties of the Pauli matrices, or the Pauli operators, however you like to look at them. You can look at them as matrices; remember, these are of course representations, and these are the standard representation. When someone says "write down the Pauli matrices," they always mean the Pauli operators represented in this basis. All right.

So, with that said, what are some other interesting things you can say about these matrices, or these operators? Firstly, the Pauli operators are Hermitian. Is that clear? How do we see it? Just to get a feel for, and remind ourselves about, the different kinds of manipulations we do, let's take the adjoints of these operators. Let's do σy. How do you take the adjoint? Well, the way you take an adjoint is you complex-conjugate the coefficients, so −i goes to +i, and you switch the bras and the kets. So (−i|+⟩⟨−|)† = +i|−⟩⟨+|, and (+i|−⟩⟨+|)† = −i|+⟩⟨−|, and adding these up, you get back σy itself. So all of these are Hermitian operators.

They have another interesting property. All of them, if I look at the square of them, it just happens to be, and this is a very special property of the Pauli operators, that they all square to the identity. Let me do this one by matrix multiplication; sometimes it's easier. To some degree it's a matter of taste, whatever you're more comfortable with: play around with the bras and kets, or play around with the matrices. I can look at σy² as being represented by the matrix multiplication of [[0, −i], [i, 0]] with itself. I want you to do it out, and you get [[1, 0], [0, 1]]. So this is the identity. And that's true for all of them; you can do that check.
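A quick sketch confirming both properties at once, Hermiticity and squaring to the identity; numpy is assumed, and this is just a check, not anything the lecture depends on.

```python
import numpy as np

# Check that each Pauli matrix is Hermitian and squares to the identity.
paulis = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]]),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}
for name, s in paulis.items():
    assert np.allclose(s, s.conj().T), f"sigma_{name} should be Hermitian"
    assert np.allclose(s @ s, np.eye(2)), f"sigma_{name} should square to I"
```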
So an interesting fact here is that the Pauli matrices, or Pauli operators, are not only Hermitian, they're also unitary. Right? Because if I look at one of these operators times its adjoint, that is the same thing as the operator times itself, because it's self-adjoint, and that, we just said, is the identity: σ†σ = σ² = I. So there's an interesting fact about these: all the Pauli operators happen to be unitary and Hermitian at the same time. Let me grab my notes, pardon me. There are lots of other properties of the Pauli matrices; we'll see some of them today and we'll get to some more later. But let's hold on.

What I want to talk about now, and we'll spend the rest of this lecture and part of next week on it, is more operator algebra. For example, the composition AB has a meaning, because AB acting on a vector is just what I get when first I act with B on the vector, which gives some other vector, and then I act with A on that. That's just the definition. Of course, in a representation, we get it by matrix multiplication.

So I can ask the question: what is the adjoint of the composition? The answer, which you can easily prove, and we'll do it quickly, is (AB)† = B†A†. That's to say, the adjoint of the product is the product of the adjoints, in reverse order. So how do we prove that? Well, let's call B|v⟩ = |w⟩. Then AB|v⟩ = A|w⟩, and the bra corresponding to A|w⟩ is ⟨w|A†. But the bra corresponding to |w⟩ = B|v⟩ is ⟨w| = ⟨v|B†. Putting those together, the bra corresponding to AB|v⟩ is ⟨v|B†A†, and thus (AB)† = B†A†. So the adjoint of a product is the product of the adjoints.

Now, why do we care about the order? Because generally the operators don't commute. Composition of operators satisfies a few properties. Firstly, we do have associativity: it's easy to see, by just sequential mapping, that if we have a string of operators it doesn't matter how we group the products, (AB)C = A(BC). However, commutativity is not necessarily the case: generally the composition AB does not equal the composition BA. They don't commute, generally. That is to say, AB is equal to BA plus a remaining piece, the commutator: AB = BA + [A, B], where the commutator, which as you know plays an important role in the mathematics we're using, is just the difference between the products in the two different orders, [A, B] = AB − BA.

So what are some of the properties of the commutator? Well, firstly, you can see immediately that it's antisymmetric: if I take the commutator in the other order, it's the same thing but with a minus sign, [B, A] = −[A, B]. And it's linear, in the following sense: if I commute A with a linear combination of operators with scalar coefficients, that's the same linear combination of the commutators.
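Here is a minimal numerical sanity check of these algebraic facts; numpy is assumed, the matrices are random stand-ins, and nothing here is specific to quantum mechanics.

```python
import numpy as np

# Check (AB)† = B†A†, AB = BA + [A, B], and antisymmetry of the commutator.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

dag = lambda M: M.conj().T
comm = lambda X, Y: X @ Y - Y @ X

assert np.allclose(dag(A @ B), dag(B) @ dag(A))   # adjoint reverses the order
assert np.allclose(A @ B, B @ A + comm(A, B))     # AB = BA + [A, B]
assert np.allclose(comm(B, A), -comm(A, B))       # antisymmetry
```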
Another important rule that we use all the time is the product rule for commutators, which you can check for yourself quickly: if I have the commutator of A with the product of two operators, [A, BC] = [A, B]C + B[A, C]. You bring one factor out and commute A with the other, but be careful: you've got to put things in the right order, because generally they don't commute.

Conjugation: if I look at the operator I get by commuting A with B, and I take the adjoint, well, I conjugate each piece, and of course that switches the orders: (AB)† becomes B†A†, and (BA)† becomes A†B†. Thus, because the orders are reversed, [A, B]† = −[A†, B†]. One thing we can say from this is that if A and B are Hermitian operators, then the commutator between them is anti-Hermitian, because then the daggers on the right do nothing, so the dagger of the commutator is minus the commutator. You know an example: if you have x and p, the commutator is iħ, which is indeed i times something Hermitian.

And finally there's one other property, which, to be honest, we'll hardly ever use, the Jacobi identity: if I take the commutator of A with the commutator of B and C, and then cyclically permute, the three terms add up to zero: [A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0. Not such an important property for us, but worth one quick note.

One more quick note: we sometimes talk about the anticommutator, which is written, not universally but often, with a curly bracket instead of a square bracket, and is defined with a plus sign instead of a minus sign: {A, B} = AB + BA. The reason it's called the anticommutator is that it says AB = −BA + {A, B}: when you reverse the order you put in a minus sign and then add the anticommutator, whereas with the commutator you just reverse the order and add the commutator. The anticommutator, as we'll see next semester, is important in the theory of fermions. Very good.

So let's go back to our example over there, the Pauli operators, and let's look at the commutation relations and anticommutation relations. Remember the Pauli operators σ_i, with i = 1, 2, 3 or x, y, z: σ_i is the spin component divided by ħ/2, that is, S_i = (ħ/2)σ_i, where S is the spin angular momentum. If you recall, the components of spin angular momentum satisfy the following commutation relations: [S_i, S_j] = iħ ε_ijk S_k. This i out front is not an index like that i; it's the actual square root of minus one, multiplying ħ. ε_ijk is the Levi-Civita symbol, the totally antisymmetric tensor, and I'm using the Einstein summation convention, where repeated indices are summed over. So what that tells me, substituting S_i = (ħ/2)σ_i and dividing through, is the commutation relation between two Pauli matrices: [σ_i, σ_j] = 2i ε_ijk σ_k.

So let's check that. We've got our Pauli operators over there. Take σx and σy, and let's do it in terms of matrices, because I think it's easier to multiply matrices than to deal with all the bras and kets here. σx σy is the matrix product, and that is equal to [[i, 0], [0, −i]]. σy σx is equal to [[−i, 0], [0, i]]. So that tells me that the commutator of these two is, in this representation, this minus that, which is equal to 2i [[1, 0], [0, −1]], which is equal to 2i σz, just as the relation says, since ε_xyz = 1.
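A sketch that checks the commutation relation for every index pair, not just (x, y); numpy is assumed, and the eps helper is a standard closed form for the Levi-Civita symbol on indices 0, 1, 2, my own convenience, not from the lecture.

```python
import numpy as np

# Verify [σ_i, σ_j] = 2i ε_ijk σ_k for all index pairs.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

def eps(i, j, k):
    # Levi-Civita symbol: sign of the permutation (i, j, k) of (0, 1, 2)
    return (j - i) * (k - i) * (k - j) / 2

for i in range(3):
    for j in range(3):
        lhs = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        rhs = sum(2j * eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(lhs, rhs)
```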
So that works pretty well. There's another property, though, that's interesting, and perhaps changes the way we're thinking about these: the anticommutator of the Pauli matrices is {σ_i, σ_j} = 2δ_ij I. So, for example, what's the anticommutator of σx with itself? That's σx σx + σx σx, and each Pauli matrix squares to one, so this is 2 times the identity. And the anticommutator of σx with σy? That's to say, if i isn't j, this should be zero. Well, that's σx σy + σy σx; we just computed those two products, and adding them together gives the zero matrix. So different Pauli matrices anticommute. Okay, that's an interesting property. The Pauli matrices have all kinds of fun properties, which we'll use throughout the year.

All right, so now the next thing we want to talk about, maybe the most important thing, is to remind ourselves about the theory of eigenvectors and eigenvalues. We said we have a linear operator that maps some vector to some other vector. However, there are special, characteristic vectors of the map; "eigen" is German for "characteristic". These are vectors, labeled by a number λ, such that if I map one of them by M, the action of the map is just to scale the vector by that number: M|λ⟩ = λ|λ⟩. Such vectors are called eigenvectors, or characteristic vectors, of the map or operator, and the scale factor for that particular vector is the characteristic value, or eigenvalue.

By that we're not talking about, we ignore, the trivial case where |λ⟩ is the null vector, the column vector with all zeros in it. That's always an "eigenvector," because the operator on it gives the null vector back, so we're not including it; it's a trivial case. Note that the eigenvalue itself can be zero, as you'll see; it's the null vector as an eigenvector that we exclude.

Another thing to note is that if |λ⟩ is an eigenvector with some eigenvalue, then I can scale that vector by any scalar and it is also an eigenvector, with the same eigenvalue. But these are not considered distinct: really a whole ray, a whole ray in Hilbert space, is the same eigenvector. c|λ⟩ is not a distinct eigenvector from |λ⟩; it's just scaled. So typically we will, at least to some degree, specify the eigenvector by specifying the length of the vector: we fix the normalization such that it's a unit vector. It's still not completely defined, because we can multiply by a number of magnitude one, a phase, and it would still have norm one; what phase we choose is a matter of convention.
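Before the example that follows, here is a small sketch of these definitions in action; numpy is assumed, and σz is used as a stand-in operator. It shows the eigenvalue equation, the freedom to rescale an eigenvector, and the leftover phase freedom after normalization.

```python
import numpy as np

M = np.array([[1, 0], [0, -1]], dtype=complex)   # σ_z as a concrete example
vals, vecs = np.linalg.eig(M)

v, lam = vecs[:, 0], vals[0]
assert np.allclose(M @ v, lam * v)                    # eigenvalue equation
assert np.allclose(M @ (3.7 * v), lam * (3.7 * v))    # any rescaling works too

w = np.exp(0.5j) * v / np.linalg.norm(v)              # unit norm, arbitrary phase
assert np.isclose(np.linalg.norm(w), 1.0)
assert np.allclose(M @ w, lam * w)                    # still the "same" eigenvector
```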
Now, there's a theorem here, which I'm not going to prove, that states the following. Actually, before I get to that, I wanted to show you an example, so let me do that first. Let's consider a rotation in three-dimensional real space about the z-axis; so my Hilbert space here is a real, three-dimensional Hilbert space. That's fine. So I'll have an operator R which is a rotation about the z-axis by some angle θ. Here's the z-axis, here's the x-y plane, with unit vectors x̂, ŷ, ẑ, hats on these guys, what the heck, and I rotate about the z-axis by the angle θ. The action of my operator on the x basis vector is the rotated basis vector, which has cos θ along x and sin θ along y: R x̂ = cos θ x̂ + sin θ ŷ. The action of this rotation operator on the y basis vector is minus sin θ along x and cos θ along y, R ŷ = −sin θ x̂ + cos θ ŷ, and the action of this on ẑ is to do nothing: R ẑ = ẑ; the z-axis is left alone. Okay.

So now I ask you: what is the representation of this operator in this basis, in this order? How do I do it? What I want to emphasize here is one of the points I made at the very beginning of the lecture: I can immediately write it down without doing all the bras and kets, although I could do the bras and kets; it's kind of up to you, whatever you feel comfortable with. If I know how the operator acts on each of the basis vectors, then I just have to write down each image as a column. So the first column is (cos θ, sin θ, 0); that is the representation of the operator acting on the first guy. The second column is (−sin θ, cos θ, 0); that is the representation of the image of ŷ as a column vector. And the third column is (0, 0, 1). Again, you can do the bras and kets, row-column, row-column, and you get the same thing:

R ≐ [ cos θ   −sin θ   0 ]
    [ sin θ    cos θ   0 ]
    [   0        0     1 ]

So this is the rotation matrix. We can see immediately here that ẑ is an eigenvector. And what's the eigenvalue? Plus one. So geometrically, what you see is that the operation of this map on this basis vector, this eigenvector, was to not change its direction: it left it alone. That's what an eigenvector must do. But in addition, in this case, it also left its magnitude the same; that's why the eigenvalue is plus one. Okay.

All right, so now to the theorem. How do we generally find all the eigenvectors and eigenvalues of an operator, or a matrix? Here I want to invoke a mathematical theorem which says the following: an operator M on a finite-dimensional Hilbert space of dimension d has d orthonormal eigenvectors if and only if M commutes with its adjoint, [M, M†] = 0. Operators that satisfy this property, that they commute with their adjoint, are called normal operators: a normal operator is one that commutes with its adjoint.

Now, our favorite operators in quantum mechanics are normal operators. For example, Hermitian operators are normal, because for a Hermitian operator M = M†, and everything commutes with itself. So that's cool. But unitary operators are also normal. Why do we know that? Well, we know that U†U is one, and UU† is also one, so the commutator of U with its adjoint is zero. U is not generally equal to its adjoint, but it does commute with it. So what that means is that both Hermitian operators and unitary operators, if they act on a d-dimensional space, have d orthonormal eigenvectors. That's not true of all operators, but it's true if they're normal; if they're normal, then it's true. Okay.

All right, so what can we say in our case? Well, the rotation operator that we were just talking about is unitary. It's actually orthogonal in this case, but orthogonal matrices are also unitary. How do we see that? Well, the inverse of this operator is just rotating in the other direction, R(θ)⁻¹ = R(−θ); that's easy to see geometrically. And if I write that out, with the minus sign in it, sine of minus θ is minus sine θ, so R(−θ) is just the transpose of the matrix. But since all the elements are real, the transpose is also the same thing as the conjugate transpose, the adjoint, because everything is real. So R⁻¹ = Rᵀ = R†, and this is unitary. It's also orthogonal: an orthogonal matrix is one whose inverse is its transpose, and if it's a real matrix, that's the same as unitary. So this matrix, or this operator, has three orthonormal eigenvectors.
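A minimal check of all of this; numpy is assumed, and the angle θ = 0.7 is an arbitrary choice of mine.

```python
import numpy as np

# Build the z-axis rotation matrix, check it is orthogonal (hence real
# unitary and normal), and that ẑ is an eigenvector with eigenvalue +1.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])

assert np.allclose(R.T @ R, np.eye(3))        # orthogonal: inverse = transpose
assert np.allclose(R @ R.T - R.T @ R, 0)      # normal: [R, R†] = 0
zhat = np.array([0.0, 0.0, 1.0])
assert np.allclose(R @ zhat, 1.0 * zhat)      # ẑ has eigenvalue +1
```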
The other eigenvectors, can you imagine what they are? I mean, what we said is that the property of such a vector is that when you map it, it stays in the same direction. For a rotation, no real direction other than the z-axis does that, so they have to be complex in some way. So let's see what they are; you'll recognize them. Let's do it out.

Let's remind ourselves how we find eigenvectors and eigenvalues of a matrix. The defining equation is written over here: generally, an eigenvector satisfies M|λ⟩ = λ|λ⟩, it stays the same up to multiplication by the eigenvalue. Which means that the combination (M − λI) acting on |λ⟩ gives me the null vector, and since we are ruling out the null vector as an eigenvector, this operator cannot be invertible, because if it were, then |λ⟩ would have to equal the null vector, and it doesn't. And since (M − λI) is a singular operator, a singular matrix, that means it has a determinant of zero. So we find the eigenvalues from what's known as the characteristic equation, or eigenvalue equation, which says that the determinant det(M − λI) = 0. This object, det(M − λI), is called the characteristic polynomial; for a d-dimensional Hilbert space it has d roots, and those d roots are the eigenvalues.

Now, we could say a few things just in terms of definitions. If any two eigenvalues are the same, they are said to be degenerate, and correspondingly for the eigenvectors: degenerate eigenvalues and degenerate eigenvectors. Okay, we'll get to that soon.

What is the nature of the eigenvalues? Well, it depends on the roots here. If this characteristic polynomial is real, and it doesn't have to be, because we might be looking at complex matrices, then what can you tell me about the roots? The roots of a real polynomial are not always real, but if they're complex, they come in complex-conjugate pairs. Right? So if you have x² + 1 = 0, the roots are ±i. So if the polynomial is real, that means that if any root, any eigenvalue, is complex, then its conjugate is also an eigenvalue.

Let's take a look at our specific example and find the roots of the characteristic polynomial for this rotation matrix. Okay, so let's write this down. The characteristic determinant is

det [ cos θ − λ   −sin θ      0    ]
    [  sin θ     cos θ − λ    0    ] = 0.
    [    0           0      1 − λ  ]

You take the determinant of that matrix, and you get [(cos θ − λ)² + sin²θ](1 − λ) = 0; the off-diagonal product contributes −(−sin θ)(sin θ), and if it's minus a minus, it's a plus. So there's one root from the (1 − λ) factor, which is λ = 1, and then the other roots are the roots of the quadratic (cos θ − λ)² + sin²θ = 0, and I'll leave it to you to quickly show that those roots are λ = cos θ ± i sin θ = e^{±iθ}. So these are the eigenvalues.

Now, I'm running out of time, so I'll finish this up next time; I just want to say one last thing. What are the eigenvectors? We'll remind ourselves next time how to find them systematically, but let me just write them down, because I think it's worth seeing: besides ẑ, they are ê± = (x̂ ± i ŷ)/√2, and acting with the rotation operator gives R ê± = e^{∓iθ} ê±. These vectors you recognize, maybe, as the vectors we talked about in electromagnetism: they were related to circular polarization. Plus and minus circular polarization are the eigenstates of rotation about that axis, with eigenvalues whose phases are given by the rotation angle.
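Here is a sketch verifying both the eigenvalues and the claimed eigenvectors; numpy is assumed and θ is arbitrary.

```python
import numpy as np

# The eigenvalues of R_z(θ) are {1, e^{iθ}, e^{-iθ}}, and
# ê± = (x̂ ± i ŷ)/√2 are eigenvectors with eigenvalues e^{∓iθ}.
theta = 1.1
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])

vals = np.linalg.eigvals(R)
for w in (1.0, np.exp(1j * theta), np.exp(-1j * theta)):
    assert np.min(np.abs(vals - w)) < 1e-10   # each expected root is present

e_plus  = np.array([1,  1j, 0]) / np.sqrt(2)
e_minus = np.array([1, -1j, 0]) / np.sqrt(2)
assert np.allclose(R @ e_plus,  np.exp(-1j * theta) * e_plus)
assert np.allclose(R @ e_minus, np.exp(+1j * theta) * e_minus)
```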
This basis, this complex basis, will play a very important role next semester. It's sometimes known as the spherical basis; not that the vectors are spherical, it has to do with spherical symmetry. It's the basis that will play a very important role in the theory of irreducible tensor operators and the Wigner-Eckart theorem, all that good stuff that we'll see next semester. But anyway, it is a basis, and is it orthonormal? You can show that these vectors are mutually orthogonal, and they're obviously normalized. I will post the notes, and if there's anything you need, especially something you need for the homework, I'm sure we can cover it. All right.
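And one last sketch, checking that the spherical basis really is orthonormal; numpy is assumed, and remember that the first slot of the complex inner product is conjugated.

```python
import numpy as np

# Check that {ê_+, ê_-, ẑ} is an orthonormal basis of C^3.
e_plus  = np.array([1,  1j, 0]) / np.sqrt(2)
e_minus = np.array([1, -1j, 0]) / np.sqrt(2)
zhat    = np.array([0,  0,  1], dtype=complex)

B = np.column_stack([e_plus, e_minus, zhat])
gram = B.conj().T @ B            # Gram matrix of all pairwise inner products
assert np.allclose(gram, np.eye(3))
```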