Let's see, a few announcements. We are scheduled for our problem session for the course on Fridays, 10:30 to noon, in 8.90. I know that a few of you, having discussed it with me, have a seminar that conflicts with that time; hopefully we'll be able to work something out with those of you who have that conflict. My office hours are a little different from what I said in my email; it turns out that time conflicted for some people, especially people who couldn't make it to this, so I want to make sure that if you can't make the problem session, you can at least make it to my office hours. Those will be on Monday. Now, homework will be due on Tuesdays, and I've decided, to give people some space, that it will be due at 5 o'clock on Tuesdays. So we have the problem session on Friday and my office hours on Monday; it might be good to look at what your schedule is like, and I'll sort out the remaining details with everybody over email, OK? So what this means is that the homework will be distributed on Tuesday. I'll post it on the web and let you know when it's available. You should work on it, try it, so that you come prepared for the problem session, where we will spend the time making sure everyone is on track to do the problem set. There will be additional problems, and we'll be working on the homework during the problem session. So get the problem set, get to work on it Wednesday and Thursday so that on Friday we can have a productive problem session, and then you can crank on it over the weekend and ask any remaining questions on Monday. All right? That's the plan. And the first assignment will be available soon.

Right, so let's see. Last time, we were laying out this mathematical foundation, reviewing it in some ways, and hopefully making it a little bit more sophisticated as we go. The key idea is that, because of the principle of superposition, the mathematical structure that allows us to deal with quantum mechanics is linear vector spaces; in particular, linear vector spaces with an inner product over the field of complex numbers. The notation we use to deal with that in quantum mechanics was laid out by Paul Dirac, and Dirac notation is now universally used as the way of manipulating amplitudes, operators, et cetera. The space we generally call a Hilbert space: this complex vector space with an inner product, which in principle could be infinite dimensional. The vectors in the space are the kets, and the dual vectors are the bras. The dual vectors are there so that we can take inner products. That inner product is generally some complex number, and it gets complex conjugated when you switch the vector and its dual. A very important concept is the concept of a basis for the space. A basis is a set which spans the space, and in particular we're interested in bases that are orthonormal. Orthonormal means that different basis vectors are orthogonal, and the norm of each vector itself is one. That's written compactly in terms of the Kronecker delta: ⟨e_i|e_j⟩ = δ_ij.
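A minimal numpy sketch of that orthonormality condition, assuming a three-dimensional space and its standard basis (the dimension and the basis choice are purely illustrative, not something fixed in the lecture):

```python
import numpy as np

d = 3
e = np.eye(d, dtype=complex)   # rows e[i] are the standard basis vectors of C^d

# <e_i|e_j> = delta_ij: orthogonal for i != j, norm one for i == j.
gram = np.array([[np.vdot(e[i], e[j]) for j in range(d)] for i in range(d)])
assert np.allclose(gram, np.eye(d))
```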
Once we have a basis, that allows us to expand a vector as a linear combination of the basis vectors, which means that, for a given basis, there is an isomorphism between the vector and the set of coordinates of the vector in that basis. Those coordinates are the projections of the vector onto the basis directions, v_i = ⟨e_i|v⟩, and that set of complex numbers forms what we call a representation of the vector: we are representing the vector in the basis. Generically, we write that collection of coordinates as a column vector. The inner product, then, if you write out a vector and its dual using this rule, is given by the product of the coordinates, ⟨u|v⟩ = Σ_i u_i* v_i, which is equally well written in terms of the rules of matrix multiplication, where you multiply a row into a column. So that tells us we can think about the dual vector, in this representation, as a row vector: the transpose conjugate, which is the adjoint operation. That's the dual vector. There are lots of ways to think about representations. One thing I want to make clear: if I look at the representation of the basis vectors e_i in their own basis, this is what you might call the standard basis. So e_1 is represented by the column (1, 0, 0, ...), e_2 by the column (0, 1, 0, ...), and so on. A vector v is then represented by a certain amount v_1 of the first basis vector, plus v_2 of the second basis vector, et cetera. We go back and forth in our minds between representations, which are collections of numbers, and the abstract, geometric objects. Of course, here the basis vectors themselves, the geometric objects, are represented in their own basis.

So we have our vector space. The second important ingredient in thinking about the space is operators. Operators are maps on the vector space: they take objects in the vector space and map them to objects in the vector space, and we put little hats on them. OK? In this notation the map is written, say, M̂|u⟩ = |v⟩. Now, in particular, we're interested in what are called linear operators. Linear operators are the class of these maps that satisfy the following. Suppose |w⟩ = α|u_1⟩ + β|u_2⟩ is itself a linear combination, and suppose the operator maps |u_1⟩ to |v_1⟩ and |u_2⟩ to |v_2⟩. Then, if it's a linear operator, M̂ acting on |w⟩, which is M̂ acting on this linear combination of u_1 and u_2, is equal to α|v_1⟩ + β|v_2⟩, for all α and β and all vectors u_1 and u_2. If the map obeys that property, that the map of a linear combination is the same linear combination of the maps of the states, then we say that it is a linear operator.
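A quick numerical sketch of that linearity property, assuming the operator is given by an arbitrary complex matrix acting by matrix-vector multiplication (the matrix and vectors are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # a generic operator
u1 = rng.normal(size=d) + 1j * rng.normal(size=d)
u2 = rng.normal(size=d) + 1j * rng.normal(size=d)
alpha, beta = 1.5 - 2j, 0.25 + 1j

# Linearity: M(alpha u1 + beta u2) = alpha (M u1) + beta (M u2)
assert np.allclose(M @ (alpha * u1 + beta * u2),
                   alpha * (M @ u1) + beta * (M @ u2))
```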
OK, so examples of linear operators. Let's restrict for the moment to rotations in three-dimensional real space; for the moment I'll restrict my attention to a vector space with real coefficients, which I'm allowed to do. So I have vectors in my three-dimensional space, x, y, z. Here's a vector u_1, here's a vector u_2, and here's some vector w which is some linear combination of those. Now suppose I have a rotation operator that rotates each one of those vectors in a consistent way. I'm still bad at drawing this kind of perspective, but I hope you'll bear with me: I rotate u_1 by some amount, rotate u_2 by the same amount, and the sum of those is the same thing as rotating w. That is to say, the rotation acting on w is the same as adding together the rotated versions of the two pieces. So clearly, this is a linear operator.

Another example, in bra-ket notation: the outer product. Suppose I define an operator as the outer product of two vectors, |v⟩⟨w|. This is an operator, right? It takes vectors to vectors: if I operate it on a vector |ψ⟩, I get |v⟩⟨w|ψ⟩, and ⟨w|ψ⟩ is some number α, so the result is α|v⟩. So it is an operator. Is it a linear operator? Yes, it is. And how do we know that? Well, let's act with it on a linear combination, say α|ψ_1⟩ + β|ψ_2⟩, which is a bit awkward to write out. We get |v⟩ times the inner product of ⟨w| with that combination. But the inner product itself is a linear operation, so this is equal to α⟨w|ψ_1⟩ plus β⟨w|ψ_2⟩, each multiplying |v⟩, which is the same thing as α times the operator acting on |ψ_1⟩ plus β times the operator acting on |ψ_2⟩. So indeed, I can distribute the operator linearly through the sum and get the same thing, and this outer product is a linear operator.

So now we have operators. And just as we have abstract objects, vectors, kets and bras, which can be written in representations as column vectors and row vectors, we can represent linear operators as matrices. So let's write down in a representation what we said: M maps some vector |u⟩ into some vector |v⟩. Now, |u⟩ itself can be expressed as a linear combination of basis vectors, |u⟩ = Σ_i u_i|e_i⟩, so I can just plug that in. Because M is a linear operator, when it acts on |u⟩ I can distribute it through the sum and see its action on each of these guys. So now what? Well, I want to look at this by projecting. I want to look at the relationship between the column vector of u and the column vector of v, and show that the way those column vectors are mapped into one another is through a matrix. So what do I do? If I want the column vector of v, I have to look at the representation of v, and the way I get the representation of v is to project v onto the basis vectors. Now, this is where I have to be careful. I've already used the letter i as a dummy index; I summed over it, and nothing here depends on i. I could have called it k, I could have called it ξ, I could have called it ε; it makes no difference. Choose your favorite letter, but once you've used it, it's used, so pick a different letter. So let's look at v_j, the projection onto the j-th basis vector. That is ⟨e_j| acting on the sum over i of u_i M|e_i⟩, and again, since the inner product is linear, ⟨e_j| can come inside the sum: v_j = Σ_i ⟨e_j|M|e_i⟩ u_i. So what did we do? We took the vector M|u⟩ and looked at its projection onto each e_j.
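A small numpy sketch of this step, assuming a random matrix and vector in three dimensions: the matrix elements are obtained by sandwiching the operator between basis vectors, and the components of v then follow from the sum just written.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
u = rng.normal(size=d) + 1j * rng.normal(size=d)
e = np.eye(d, dtype=complex)          # rows e[j] are the standard basis vectors

# Matrix elements M_ji = <e_j| M |e_i>, obtained by "sandwiching".
M_elems = np.array([[np.vdot(e[j], M @ e[i]) for i in range(d)] for j in range(d)])
assert np.allclose(M_elems, M)

# v_j = sum_i M_ji u_i: the representation of |v> = M|u>, component by component.
v = np.array([sum(M_elems[j, i] * u[i] for i in range(d)) for j in range(d)])
assert np.allclose(v, M @ u)
```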
Now, what kind of object is ⟨e_j|M|e_i⟩? Is it a vector? Is it a tensor? Is it an operator? Is it a number? It's just a scalar. It's a number, a complex number. It's indexed by two things, but it's a complex number; generically, it has a real and an imaginary part. And since it's a complex number, and u_i is a complex number, it doesn't matter what order I write them in. This is sometimes confusing because, of course, these are elements of a matrix, and there are questions about the order in which you write things, because of the commutation issues we'll talk about. But written out as these sums, they are sums over numbers, so those numbers can be written in any order, and it's convenient to put factors with the same index next to one another: v_j = Σ_i M_ji u_i. So what we have here is that the action of M on u, which gives v, when written in a representation, is nothing more than matrix multiplication. That is to say, the column vector (v_1, v_2, ...) representing v in that basis is given by the standard rules of matrix multiplication, where these are the rows and these are the columns: the first row is M_11, M_12, up through M_1d; then the second row, first column; second row, second column; up through second row, d-th column; and so on down to the d-th row, first column; d-th row, second column; and so on. So once we have a basis, and once we decompose our vectors in that basis, a map between those representations by a linear operator is represented by a matrix. This column is the representation of v, this array is the representation of M, and this column is the representation of u. The object ⟨e_j|M|e_i⟩ = M_ji is known as a matrix element; it's an element of the matrix. The matrix itself is the array of all the matrix elements arranged in rows and columns according to the rule we worked out. The matrix is a representation: it's isomorphic to the operator, but it depends on the choice of basis.

Now, in the same way, here we have an identity, and here I'm being careful about the semantics I'm using. The statement |u⟩ = Σ_i u_i|e_i⟩ says this object is the same kind of thing as this object; they're absolutely equal; it's just that I can express this guy as a linear combination of these guys. Whereas the collection of numbers is not equal to the vector; it's just a representation of it. Similarly, this matrix is not the same thing as the operator; it's a representation of it. Now, in the same way that I can express a vector as a sum of the basis vectors weighted by its representation, I want to do the same thing with operators. That is to say, I want to write the operator as a linear combination of other operators, weighted by its representation, its matrix elements. One way to get at this, as we saw explicitly, is to use the resolution of the identity. If I want, without thinking at all, to decompose a vector in terms of a basis, one way to do it is to say: OK, I know that the identity operator is equal to a sum, as we saw last lecture, of projection operators, 1 = Σ_i |e_i⟩⟨e_i|. The projection operator is a particular example of the kind of outer product we saw as an example of a linear operator, but where it's the outer product of a vector with its own dual. The projection operator acting on a vector projects it onto that direction, so that |u⟩ = 1|u⟩ = Σ_i |e_i⟩⟨e_i|u⟩ = Σ_i u_i|e_i⟩: as we discussed a little last lecture, I can write down the representation quickly by just inserting a complete set, a resolution of the identity.
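Here is a brief numpy sketch, with an arbitrary vector in three dimensions chosen just for illustration, of the resolution of the identity and of how inserting it decomposes a vector:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
e = np.eye(d, dtype=complex)
u = rng.normal(size=d) + 1j * rng.normal(size=d)

# Resolution of the identity: sum_i |e_i><e_i| = 1.
identity = sum(np.outer(e[i], e[i].conj()) for i in range(d))
assert np.allclose(identity, np.eye(d))

# Inserting it decomposes |u>: u = sum_i <e_i|u> e_i.
u_rebuilt = sum(np.vdot(e[i], u) * e[i] for i in range(d))
assert np.allclose(u_rebuilt, u)
```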
Well, let's do the same thing now with our operator. Consider the operator M; we want to write it as a linear combination of basis operators. The way we do that is to insert a resolution of the identity in the basis we care about, with the identity written in terms of the sum of projection operators. So let's do it: M = Σ_i |e_i⟩⟨e_i| M. And now I want to do it again, on the other side, in the same basis; however, I had best not use that dummy index again, so now I'll use j. This is a linear operator, so everything moves through the sums, and I can bring both sums outside: M = Σ_i Σ_j |e_i⟩⟨e_i|M|e_j⟩⟨e_j| = Σ_ij M_ij |e_i⟩⟨e_j|. So my operator M itself is a sum of the matrix elements times these outer products. And this is an identity; these are equal. In other words, this is not just a representation; no matter what, these things are equal. I didn't do anything, I just inserted the identity. I can think about these d² objects |e_i⟩⟨e_j| as a kind of basis of operators. That is to say, every operator is a superposition of these operators weighted by these complex numbers, in the same way that every vector is a linear combination of the basis vectors weighted by complex numbers.

Now, for fun, because I know this is so much fun: what is the representation of one of these operators |e_i⟩⟨e_j| as a matrix? Because every operator can be represented as a matrix in a basis, including the basis built from its own vectors. So let us consider the representation of this set of outer products in that same basis. Let's think about this a little. What does it mean? We have to be careful with notation; let me put the i and j up top, as labels, because they're not matrix-element indices, they're just labeling which operator I'm talking about. There's no notion of upper and lower indices here, so don't think about your favorite covariant and contravariant indices from GR; we're just labeling the operators, all right? So I want to find a representation. How do I do that? I want to find the matrix elements, so I want the matrix element of this operator with respect to two basis vectors. Now, I've already used i and j, so I'll use l and k. As we just said, the way you do it is you sandwich; that's what the matrix element is. So it's ⟨e_l|e_i⟩⟨e_j|e_k⟩. What is that? Well, we just said these are inner products between orthonormal basis vectors, and thus they are Kronecker deltas: δ_li δ_jk. So these matrix elements are zero everywhere, except when the row index l is equal to i and the column index k is equal to j. So what do these matrices look like? Zeros everywhere, with a one in the i-th row and the j-th column. Exactly. So this matrix is a matrix of all zeros, except for a one in the i-th row and the j-th column. And what this is saying is that the matrix for M, with entries M_11, M_12, up through M_1d, then M_21, M_22, through M_2d, and so on, is M_11 times the basis matrix with a one in the (1,1) slot and zeros everywhere else, plus M_12 times the basis matrix with a one in the (1,2) slot, et cetera.
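A short numpy sketch of this decomposition, using a random three-dimensional matrix as the operator (an illustrative choice): the outer products of basis vectors are matrices with a single 1, and weighting them by the matrix elements rebuilds the operator.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
e = np.eye(d, dtype=complex)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# Each |e_i><e_j| is the matrix with a single 1 in row i, column j.
E_02 = np.outer(e[0], e[2].conj())
assert E_02[0, 2] == 1 and np.count_nonzero(E_02) == 1

# M = sum_ij M_ij |e_i><e_j|: the d^2 outer products form a basis of operators.
M_rebuilt = sum(M[i, j] * np.outer(e[i], e[j].conj())
                for i in range(d) for j in range(d))
assert np.allclose(M_rebuilt, M)
```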
This is the exact equivalent of what we wrote over here for vectors, where we said v was this amount of this basis vector, plus that amount of that basis vector, and so on. Now what we're saying is that the matrix is this amount of that basis matrix, plus that amount of that basis matrix, et cetera, et cetera. So when we write our expression M = Σ_ij M_ij |e_i⟩⟨e_j|, this on the left is the operator, these outer products are the basis of operators, and the matrix elements tell us how to assemble it.

Continuing on, the resolution of the identity is a particular example of this. So what are the matrix elements of the identity operator? They're just ⟨e_i|e_j⟩, which is the Kronecker delta. So the matrix is zeros everywhere except on the diagonal, where the elements are all ones. That's the identity matrix; that's the representation of the identity operator. And of course, the identity itself is the resolution of the identity in terms of the basis vectors, and each one of those projectors is a matrix of the sort we just described: the projection operator |e_1⟩⟨e_1| has a one in the first diagonal element and zeros everywhere else, |e_2⟩⟨e_2| has a one in the second diagonal element, and the identity operator is the sum of all of these.

I guess the last thing I want to say about the matrix representation: we talked about the inner product in terms of matrix multiplication; let me use the same board over here and talk about the outer product. So let's say we have the outer product of u with v, |u⟩⟨v|. If I write this in the representation in the basis we defined, the ket, we said, is a column vector, and the bra is the conjugate row vector. Now, that object is in the reverse order: it's not a row multiplying a column, it's the other way around, a column multiplying a row. So by definition, when we multiply them together, this thing is not a number; it's a matrix. When we write out this outer product, what we find is that it is the matrix whose elements are given by u_i v_j*. There are all these rules we're getting at which tell us how to relate our abstract objects to collections of numbers in different cases. Isn't that just the same matrix multiplication rule as before, if you just follow the rules for the first row and the first column? Yeah, indeed, it is a form of matrix multiplication; it's just not one that leads to a sum. It's a column, which is a d-by-1 matrix, times a row, which is a 1-by-d matrix, so each entry is a sum over a single term. Because there's only one term, there's no real summing, but it really is the same rule of matrix multiplication. OK, good. So we have inner products, we have outer products, we have representations. Very good.
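A small numpy sketch of the outer product in a representation, with random three-dimensional vectors chosen purely for illustration: a column times a conjugated row gives a matrix with elements u_i v_j*, and it acts on a ket the way the lecture describes.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
u = rng.normal(size=d) + 1j * rng.normal(size=d)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)

# |u><v| is a column (d x 1) times a conjugated row (1 x d): a matrix, not a number,
# with elements (|u><v|)_ij = u_i v_j*.
A = u.reshape(d, 1) @ v.conj().reshape(1, d)
assert np.allclose(A, np.outer(u, v.conj()))
assert np.isclose(A[1, 2], u[1] * np.conj(v[2]))

# Acting on |psi>, it gives <v|psi> times |u>.
assert np.allclose(A @ psi, np.vdot(v, psi) * u)
```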
So let's now remind ourselves about some general operations on matrices, matrix manipulations we're familiar with. One is the inverse operation. If I have a map or an operator, I can define its inverse. In this case we're talking about maps on the whole space; we assume we map vectors into the same vector space, so these are square matrices, as we've drawn them, and the inverse acts on the right and on the left to give the identity. Then we have the transpose. The transpose is defined such that the matrix elements of the transpose are the elements of the original matrix with rows and columns switched: (Mᵀ)_ij = M_ji. And then, of course, we have the adjoint. The adjoint is defined in the following way. Say that M acting on |u⟩ gives me the vector |v⟩. Then I can equivalently obtain the dual vector ⟨v| by acting with the adjoint on the dual vector ⟨u|: ⟨u|M† = ⟨v|, with the adjoint acting in this direction on the dual vector. Now we can quickly see what the representation of the adjoint is in terms of the representation of the original matrix, using the rules we've already established. Let M acting on |v⟩ be called |w⟩. Then ⟨u|w⟩ = ⟨u|M|v⟩. If I take the complex conjugate of that, I get ⟨w|u⟩, by the definition of the inner product, and that, by this definition of the adjoint, is ⟨v|M†|u⟩. So if I look at the matrix element of the adjoint operator, taking u and v to be basis vectors, what we just showed says that (M†)_ij = (M_ji)*: the matrix element of the adjoint matrix is the complex conjugate of the matrix element of the original matrix with row and column switched, that is, the complex conjugate of the transpose. So this is the rule that relates the elements of the adjoint to the elements of the original matrix.

Now, in the theory of operators and matrices, we can think of the adjoint as a kind of complex conjugation. It's not literally complex conjugation, because we also have to transpose, but for all intents and purposes, everything we think about in terms of complex conjugation of numbers, you should think about as the adjoint of operators, for reasons that will become more apparent as we go. So, for example, a matrix which is self-adjoint, which is also called Hermitian, is like a real number; it's the matrix or operator equivalent of a real number. A real number is something which is equal to its complex conjugate, so you should think about Hermitian operators in that way. We can also have what are called anti-Hermitian operators, ones that give back minus themselves when you take the adjoint; that's sometimes called skew-Hermitian. These, I claim, are like pure imaginary numbers, because if a number is imaginary, then when you take its complex conjugate you get minus it; that's the definition of an imaginary number. What is also true is that any operator can be decomposed into a Hermitian part and an anti-Hermitian part, in the same way that a complex number can be expressed in terms of real and imaginary parts. And how do you find those parts? Indeed, in the same way as with complex numbers: the Hermitian part of the operator is what you get when you add it to its adjoint (and divide by two), which gets rid of the anti-Hermitian part, and the anti-Hermitian part is what you get when you subtract its adjoint (and divide by two). That can be done for any operator.
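A brief numpy sketch of the adjoint and of this decomposition, for a random complex matrix chosen only as an example:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

M_dag = M.conj().T                  # adjoint: (M†)_ij = (M_ji)*

H = (M + M_dag) / 2                 # Hermitian ("real") part
A = (M - M_dag) / 2                 # anti-Hermitian ("imaginary") part

assert np.allclose(H, H.conj().T)   # H is self-adjoint
assert np.allclose(A, -A.conj().T)  # A is anti-Hermitian
assert np.allclose(H + A, M)        # every operator decomposes this way
```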
Another class of operators that are very important, as you know, are the unitary operators. A unitary operator is defined such that its adjoint is its inverse; that is to say, U†U is the same thing as UU†, and both equal the identity. Now, unitary operators, as we know, are extremely important in quantum mechanics, and the reason is that they have a particular geometric property, and this is the important thing about unitary operators: unitary operators preserve the inner product. What that means is that if I call |ũ⟩ what I get when I act with the unitary on |u⟩, and |ṽ⟩ what I get acting on |v⟩, then the inner product of ũ with ṽ equals the inner product of u with v, because ⟨ũ|ṽ⟩ = ⟨u|U†U|v⟩ = ⟨u|v⟩, with U†U being the identity. So what I mean by the statement I've underlined is that if all vectors are mapped by the same unitary operator, then the inner products between them are unchanged; it preserves the inner product. That, as you will see, is a very important property of unitary operators, which will have important implications in quantum mechanics.

So, given this property that U†U is equal to the identity: we said Hermitian operators were like real numbers, and anti-Hermitian operators were like imaginary numbers. What about unitary operators? Well, I say you should think about the analogous kind of complex number: this condition is analogous to z*z = 1, the magnitude of z squared being one. So if Hermitian operators are like real numbers and anti-Hermitian operators are like imaginary numbers, unitary operators are like numbers whose magnitude is one: the unit circle. That means z is something with magnitude one; it's something like a phase. So you should think about unitary operators as things that are like phase factors. When you multiply complex numbers by a phase, if everything gets multiplied by the same phase, you don't change the product of one number with the conjugate of another. So unitary operators are like phase factors. And what does a phase do in the complex plane? If I have a complex number in the complex plane and I multiply it by a phase, it rotates it. So I can think of this as the simplest case, like a one-by-one unitary matrix, a rotation of the complex plane, and a unitary operator acts as a rotation operator in Hilbert space. Just as we saw for the 3D rotation, where a rotation operator in three dimensions preserves the inner product between vectors, a rotation in Hilbert space does the same thing. So that's how you should think about it.

All right. Given that, as we've seen so many times, to get a deeper understanding of any of these things it's nice to look in a representation. So in particular, how do we think about what a unitary does to a set of basis vectors? Consider an orthonormal basis, and define |ẽ_i⟩ as what happens to the basis vector after the map acts on it, |ẽ_i⟩ = U|e_i⟩. What can you tell me about this set of vectors? They're orthonormal. Why? Because we preserve the inner product. So this is also an orthonormal basis. A unitary operator takes an orthonormal basis to an orthonormal basis. OK? And so one way of expressing any unitary operator is in terms of outer products that take this basis vector to that basis vector: U = Σ_i |ẽ_i⟩⟨e_i|. These are not projection operators, because |ẽ_i⟩ is not the same vector as |e_i⟩; it's the image of that basis vector under the map. But it's a perfectly good way of expressing the unitary map.
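A short numpy sketch of these properties, where a random unitary is generated by QR-factorizing a random complex matrix (a standard numerical trick, used here only to produce an example, not something from the lecture):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 3
# A random unitary from the QR decomposition of a random complex matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

u = rng.normal(size=d) + 1j * rng.normal(size=d)
v = rng.normal(size=d) + 1j * rng.normal(size=d)

assert np.allclose(U.conj().T @ U, np.eye(d))             # U†U = 1
assert np.isclose(np.vdot(U @ u, U @ v), np.vdot(u, v))   # <Uu|Uv> = <u|v>

# The images of an orthonormal basis are again orthonormal.
e_new = [U @ np.eye(d)[:, i] for i in range(d)]
gram = np.array([[np.vdot(a, b) for b in e_new] for a in e_new])
assert np.allclose(gram, np.eye(d))
```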
Now, what about the matrix elements of U in the original basis? Well, that just picks off one term from the sum: U_ij = ⟨e_i|U|e_j⟩ = ⟨e_i|ẽ_j⟩. So if I write U out as a matrix, what you see is that each column is the representation of a new basis vector in the original basis. And because those new vectors form an orthonormal basis, the columns of a unitary matrix are orthonormal. Similarly, the rows are orthonormal, because each row is related to the representation of an original basis vector in the new basis. So that's another important property of unitary matrices: their rows and columns are orthonormal sets.

Now, the final thing I want to say, in my obligatory running five minutes over in the lecture, is how we use unitary matrices to change representation, that is to say, to change basis. What I have here, in the partially unerased writing over here, is a representation of an operator as a matrix in a particular basis. I can have a representation in a different basis, and I want to know how those two representations are related to one another. So let M_ij be those matrix elements, the array of which is the matrix representation in the original basis, and let M̃_ij = ⟨ẽ_i|M|ẽ_j⟩ be the representation in the new basis. I now want to ask: how are these two sets of numbers related to one another? They're related by a change of basis. The way to do it, whenever we want to change a basis, is to insert a complete set. So we stick in the identity, written in terms of the basis we want, in this case the original one. Of course, I have to use new dummy indices, so I'll sum over k and l: M̃_ij = Σ_kl ⟨ẽ_i|e_k⟩⟨e_k|M|e_l⟩⟨e_l|ẽ_j⟩. The middle factor is the matrix element M_kl. The last factor, ⟨e_l|ẽ_j⟩, is an element of the unitary matrix that we just wrote down, U_lj, the one that takes me between these bases. And what is the first factor? Well, ⟨ẽ_i|e_k⟩ is the complex conjugate of ⟨e_k|ẽ_i⟩, which is U_ki*, which is the transpose conjugate, (U†)_ik. So the end result is that the matrix elements in the new basis are related to the matrix elements in the old basis by a unitary transformation, M̃_ij = Σ_kl (U†)_ik M_kl U_lj, where I multiply, summing over rows and columns, in exactly the way we know. It tells me that in order to relate the matrix elements of one representation to another, I use the unitary matrix related to how the basis vectors transform. But I don't have to remember any of that; that's the beauty of Dirac notation: just shove in a complete set and let the math do it. This is an example of what's called a similarity transformation, which in general involves a matrix and its inverse; that would allow us to transform between any two bases. But if we're transforming from an orthonormal basis to an orthonormal basis, then that transformation is unitary, and its inverse is simply its dagger. Very good. All right, we'll end there, and we will complete our mathematical tour and review next time, talking about eigenvectors, degeneracies, commutation relations, and the like.
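Finally, a numpy sketch of the change-of-basis rule above, again using a QR-generated unitary and a random matrix purely for illustration: sandwiching the operator between the new basis vectors reproduces the similarity transform U†MU.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))  # unitary
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# Columns and rows of a unitary matrix are orthonormal.
assert np.allclose(U.conj().T @ U, np.eye(d))
assert np.allclose(U @ U.conj().T, np.eye(d))

# New basis vectors |e~_j> = U|e_j> are the columns of U; matrix elements in the
# new basis, M~_ij = <e~_i|M|e~_j>, are the similarity transform U† M U.
e_new = [U[:, j] for j in range(d)]
M_new = np.array([[np.vdot(e_new[i], M @ e_new[j]) for j in range(d)]
                  for i in range(d)])
assert np.allclose(M_new, U.conj().T @ M @ U)
```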