Today we want to begin with the problem of addition of angular momentum, such as arises in the coupling of spin and orbital angular momentum for an electron, as in the spin-orbit interaction I described last time. The context I'm going to use here is rather general: we have two spaces, E1 and E2, upon which angular momentum vectors J1 and J2 are defined. These are assumed to satisfy the standard angular momentum commutation relations, and by exponentiation you get corresponding rotation operators on these two spaces. In our spin-orbit example, E1 could be a space of orbital wave functions in which only the magnetic quantum number m is variable, so you have a (2l+1)-dimensional space; the other quantum numbers n and l are fixed. And E2 would be the space of the spin of the particle, which of course is also just a simple angular momentum space. Since both spaces have rotation operators defined on them, they also possess standard angular momentum bases, which in these two cases I'll write |j1 m1⟩ and |j2 m2⟩. Here j1 and j2 are fixed numbers that characterize the spaces; in particular, the dimensions of the spaces are 2j1+1 and 2j2+1. Only m1 and m2 are variable in the basis vectors. The third space is the tensor product of the first two, E = E1 ⊗ E2, and the angular momentum on that space is the sum of the other two, J = J1 + J2. You can easily check that if J1 and J2 satisfy the standard angular momentum commutation relations, then so does the total J. Part of the proof is that the cross commutators between J1 and J2 vanish: J1 and J2 commute with each other, that is, all three components of one operator commute with all three components of the other. The reason is that they act in different spaces, one in this space and the other in that one.
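These statements are easy to verify numerically. Here is a quick sketch (not from the lecture) that builds the angular momentum matrices for each factor, promotes them to the product space with Kronecker products, and checks both that the cross commutators vanish and that the total J obeys the standard commutation relation. The helper `jmat` is a hypothetical name for illustration.

```python
import numpy as np

def jmat(j):
    """Return (Jx, Jy, Jz) for spin j in the basis |j, m>, m = j, ..., -j."""
    d = int(round(2*j)) + 1
    m = j - np.arange(d)                 # m values down the diagonal
    jz = np.diag(m).astype(complex)
    # <m+1|J+|m> = sqrt(j(j+1) - m(m+1)); superdiagonal in this ordering
    jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), k=1).astype(complex)
    jm_ = jp.conj().T
    return (jp + jm_)/2, (jp - jm_)/(2*1j), jz

j1, j2 = 2.5, 1.0                        # the lecture's example: j1 = 5/2, j2 = 1
X1 = [np.kron(a, np.eye(3)) for a in jmat(j1)]   # J1 acting on E1 ⊗ E2
X2 = [np.kron(np.eye(6), b) for b in jmat(j2)]   # J2 acting on E1 ⊗ E2

# cross commutators vanish: J1 and J2 act in different tensor factors
for A in X1:
    for B in X2:
        assert np.allclose(A @ B - B @ A, 0)

# total J = J1 + J2 still satisfies [Jx, Jy] = i Jz
J = [a + b for a, b in zip(X1, X2)]
assert np.allclose(J[0] @ J[1] - J[1] @ J[0], 1j * J[2])
```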
The dimensionality of the product space is the product of the dimensions, and a basis in the product space is given by the products of the basis vectors of the constituent spaces. This particular product basis is what we call the uncoupled basis, to distinguish it from the coupled basis, which is coming up in a moment. Anyway, this is the setup, effectively the linear algebra of the problem of addition of angular momentum. Now, the main goal here is to find the standard angular momentum basis on the product space; by definition, that is a simultaneous eigenbasis of the operators J² and Jz. [A student asks about the scheme: in the spin-orbit example, l and the angular momentum are fixed and only m is variable, so what about the rest of the orbital space?] Yes. The space of orbital wave functions is of course much bigger than the space of spin wave functions, because it's infinite-dimensional. But it does have a standard angular momentum basis; if we were talking about hydrogen, we'd write it as |nlm⟩. So in a sense the real problem for hydrogen is not the product of a (2l+1)-dimensional space with the spin space, but rather the whole orbital space tensored with the spin space. However, the orbital space can be decomposed into orthogonal subspaces characterized by n and l. These are what I've been calling irreducible subspaces: each is a space of dimension 2l+1 in which the vectors are related by raising and lowering operators. So you can divide and conquer: split the bigger space up into irreducible subspaces and treat them one at a time. That's actually sufficient.
All right, so to repeat — and thank you for that question, by the way; that was a point that really needed to be made — we want the standard angular momentum basis in the product space. As I said, this will consist of eigenstates of J² and Jz. Let's call the quantum numbers of those j and m, without any 1 or 2 subscripts such as you see in the other spaces. A priori, we don't know whether these simultaneous eigenstates of the two operators might be degenerate. If they were, then in addition to the quantum numbers j and m we would need an additional index, which I've been calling γ in the general context; γ would be identified with the n which occurs in the case of the hydrogen atom. But as it turns out, we will see that we do not need the extra index γ, so I won't bother to carry it around: the simultaneous eigenstates of these two operators are in fact nondegenerate. So this gives us a second basis, the standard angular momentum basis, which I'll also call the coupled basis, because it's the angular momentum basis of the combined angular momentum of the two subsystems. So there are actually two bases in this space: an uncoupled basis and a coupled basis. All right. Now, to find the coupled basis, we need to find the simultaneous eigenvectors of J² and Jz. Jz is the simpler operator, so let's start with that first. The first thing to say is that all of the vectors of the uncoupled basis are automatically eigenvectors of Jz. That's easy to show. Let Jz act on the vector |j1 j2 m1 m2⟩ — that is just an abbreviation. First of all, what does Jz mean? Jz means J1z + J2z, the z-component of the total angular momentum. And the uncoupled basis vector is the tensor product of the basis vectors |j1 m1⟩ and |j2 m2⟩.
That's the definition of this more compact notation for the uncoupled basis vectors. In the resulting expression, J1z acts on the |j1 m1⟩ ket and J2z acts on the |j2 m2⟩ ket, since they act in different spaces. The first brings out the eigenvalue m1 and the second brings out the eigenvalue m2 (I'll set ħ = 1 here to save writing). So the result is (m1 + m2) times the product state |j1 m1⟩|j2 m2⟩. Let's write this as simply m times |j1 j2 m1 m2⟩, going back to our other notation for the vectors of the uncoupled basis, where m = m1 + m2. This shows why I claimed that all the vectors of the uncoupled basis are automatically eigenvectors of Jz, with an eigenvalue that is the sum of the two m quantum numbers, namely the total magnetic quantum number. Jz is what we call an additive quantum number: it just adds up when we take products of states like this. All right, now to proceed, I think it helps to have an example. Let me take the example in which j1 = 5/2 and j2 = 1. This means that 2j1+1, the dimension of the first space, is 6, and 2j2+1, the dimension of the second space, is 3. So the dimension of the product space, which I'm calling E, is 6 × 3 = 18; we're dealing with an 18-dimensional space now. The vectors of the uncoupled basis are indexed by m1 and m2, where m1 goes from −j1 to +j1 and m2 goes from −j2 to +j2. To visualize those vectors, allow me to make a plot of them in the (m1, m2) plane, because that's what they're labeled by. m1 goes from −5/2 to +5/2 along the m1 axis, and m2 goes from −1 to +1.
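The additivity of Jz can be seen directly in matrix form. In the sketch below (my own illustration, not the lecture's), the total Jz on the 18-dimensional product space is built from the diagonal Jz matrices of the two factors; it comes out diagonal in the uncoupled basis, with each diagonal entry equal to m1 + m2.

```python
import numpy as np

m1 = 2.5 - np.arange(6)   # m1 = 5/2, 3/2, ..., -5/2
m2 = 1.0 - np.arange(3)   # m2 = 1, 0, -1
Jz = np.kron(np.diag(m1), np.eye(3)) + np.kron(np.eye(6), np.diag(m2))

# Jz is diagonal in the uncoupled basis: every product vector is an eigenvector
assert np.allclose(Jz, np.diag(np.diag(Jz)))

# the eigenvalue of |j1 m1> ⊗ |j2 m2> is m1 + m2
sums = np.add.outer(m1, m2).ravel()
assert np.allclose(np.diag(Jz), sums)
```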
So let's make dots at m2 = +1, 0, −1 for m1 = 5/2, and likewise at m1 = 3/2, then 1/2, then −1/2, then −3/2, and finally −5/2. The result is a rectangular array of spots in the (m1, m2) plane, each of which represents a vector of the uncoupled basis. The vector in the upper right-hand corner is what I'll call the doubly stretched state: it's the product of the stretched state of subspace 1 with the stretched state of subspace 2, that is, |5/2, 5/2⟩ ⊗ |1, 1⟩. The one in the opposite corner is stretched in the opposite direction, where the m's go to −5/2 and −1. So you have this array. Now, each spot of the array, as I said, corresponds to a vector of the uncoupled basis, and each of those vectors is an eigenvector of Jz, as you see up here, with eigenvalue m1 + m2, which we're calling just m, no subscript. If we draw the contour lines of m in the (m1, m2) plane, they are simply straight lines at 45 degrees. In particular, the line through the doubly stretched state is the line on which m = 7/2. If I move down by 1, there's another line through two states on which m = 5/2; another through three states with m = 3/2; another through three states with m = 1/2; one through three with m = −1/2; one through three with m = −3/2; one through two with m = −5/2; and finally one through the opposite, anti-stretched state with m = −7/2.
So you can see that the allowed m values range from −7/2 to +7/2; these are the eigenvalues of Jz. You can also see that some of these eigenvalues are degenerate: for m = 5/2, for example, there are two vectors with that same eigenvalue of Jz. In fact, allow me to make a table here. From the plot, the total m values run 7/2, 5/2, 3/2, 1/2, −1/2, −3/2, −5/2, −7/2. Next to this, let's tabulate what I'll call g(m), the degeneracy of the Jz quantum number. For 7/2 it's 1, because there's only one state; for 5/2 there are two states; for 3/2 there are three; and it continues 3, 3, 3, 2, 1. Those are the degeneracies of the different total magnetic quantum number values. If you add these numbers up, you get 18, as you must, because the array is a 6 × 3 array of spots: (2j1+1)(2j2+1) = 6 × 3 = 18. All right. Now, this gives us the eigenstates of Jz. What about the simultaneous eigenstates of Jz and J²? That's what we really want. To analyze that question, let's start with the doubly stretched state. The doubly stretched state is |5/2, 5/2⟩ ⊗ |1, 1⟩, and when Jz acts on it, it brings out the eigenvalue 7/2. Now, I want to remind you of a theorem that we discussed at the beginning of the semester; I think it was Theorem 1.5.
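The degeneracy table g(m) can be generated by simple enumeration. This sketch (an illustration of mine, not part of the lecture) counts how many (m1, m2) pairs give each total m for j1 = 5/2, j2 = 1, reproducing the counts 1, 2, 3, 3, 3, 3, 2, 1.

```python
from collections import Counter
from fractions import Fraction

j1, j2 = Fraction(5, 2), Fraction(1)
m1s = [j1 - k for k in range(int(2*j1) + 1)]   # 5/2, 3/2, ..., -5/2
m2s = [j2 - k for k in range(int(2*j2) + 1)]   # 1, 0, -1

# g(m): number of uncoupled basis vectors with Jz eigenvalue m = m1 + m2
g = Counter(m1 + m2 for m1 in m1s for m2 in m2s)

table = [g[m] for m in sorted(g, reverse=True)]   # m = 7/2 down to -7/2
assert table == [1, 2, 3, 3, 3, 3, 2, 1]
assert sum(g.values()) == 18                      # (2j1+1)(2j2+1) = 6 * 3
```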
It said that if you have a nondegenerate eigenstate of one observable, and that observable commutes with a second observable, then the state is automatically an eigenstate of the second observable. This applies in the present case, because we have a nondegenerate eigenstate of Jz, and Jz of course commutes with J². So this nondegenerate eigenstate of Jz must also be an eigenstate of J². Therefore this state, |5/2, 5/2⟩ ⊗ |1, 1⟩, must be expressible in the form |j, 7/2⟩ for some j. And then J² acting on the state |j, 7/2⟩ will of course bring out j(j+1) times |j, 7/2⟩. Now, the theorem doesn't tell us what the eigenvalue is; it just tells us that the state is an eigenvector of that operator. So what is the value of j here? We can get the answer in the following way. We know that the magnetic quantum number has a maximum value of j, so 7/2 ≤ j: whatever j is, it must be greater than or equal to 7/2 by the rules for magnetic quantum numbers. Could we try, say, j = 9/2? Could this doubly stretched state be the state |9/2, 7/2⟩? Is that possible? Well, the answer is no. Because if I applied a raising operator to it, it would be converted, within a constant factor, into a state with j = 9/2 and m = 9/2. In other words, it would raise the m value by 1, and we would have a state with total m = 9/2. But there are no states with m = 9/2 — 7/2 is the maximum, as you can see from the table and the diagram. So that is impossible, and so is j = 9/2.
The result is that j cannot be greater than 7/2, and so in fact j = 7/2. So we can write the doubly stretched state as |5/2, 5/2⟩ ⊗ |1, 1⟩ = |7/2, 7/2⟩, where the quantum numbers on the left are (j1, m1) and (j2, m2) and on the right are (j, m). What you see now is that we have one vector of the standard angular momentum basis on the product space, which is the doubly stretched state itself. All right. So let me record in the table that j = 7/2 occurs, with a 1 in the m = 7/2 slot, corresponding to the doubly stretched state. Now, if I take that doubly stretched state and apply the lowering operator J−, I can create all the states from |7/2, 7/2⟩ down to |7/2, −7/2⟩, stepping the magnetic quantum number down by one each time. That corresponds in the table to one state in each row; if you add them up, that's a total of 8 states. OK. So let me erase this space here. Now, let's take a look at the next eigenspace of Jz, the one with total m = 5/2. This is a two-dimensional space, spanned by the two vectors of the uncoupled basis that I've circled: the states |5/2, 3/2⟩ ⊗ |1, 1⟩ and |5/2, 5/2⟩ ⊗ |1, 0⟩. Let me draw a schematic diagram of this two-dimensional space. Now, when we took the stretched state |7/2, 7/2⟩ and applied J− to it, that gave us the state |7/2, 5/2⟩, with m value equal to 5/2.
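The lowering step can be checked explicitly. In this sketch (mine, not the lecture's), the total J− is applied to the doubly stretched state; normalizing the result gives |7/2, 5/2⟩, whose components on the two uncoupled vectors in the m = 5/2 plane are sqrt(5/7) and sqrt(2/7) — which are in fact Clebsch-Gordan coefficients.

```python
import numpy as np

def jminus(j):
    """Matrix of J- in the |j, m> basis, m = j, ..., -j (descending)."""
    d = int(round(2*j)) + 1
    m = j - np.arange(d)
    # <m-1|J-|m> = sqrt(j(j+1) - m(m-1)); subdiagonal in this ordering
    return np.diag(np.sqrt(j*(j + 1) - m[:-1]*(m[:-1] - 1)), k=-1)

# total J- on the 18-dimensional product space for j1 = 5/2, j2 = 1
Jm = np.kron(jminus(2.5), np.eye(3)) + np.kron(np.eye(6), jminus(1.0))

stretched = np.zeros(18)
stretched[0] = 1.0                  # |5/2, 5/2> ⊗ |1, 1> = |7/2, 7/2>

v = Jm @ stretched                  # proportional to |7/2, 5/2>
v /= np.linalg.norm(v)

# flat index i1*3 + i2, with i1, i2 counting m1, m2 downward from the top:
# |5/2, 3/2> ⊗ |1, 1> is index 3; |5/2, 5/2> ⊗ |1, 0> is index 1
assert np.isclose(v[3], np.sqrt(5/7))
assert np.isclose(v[1], np.sqrt(2/7))
```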
That will also be a vector of the standard angular momentum basis, and since it has m = 5/2, it lies inside this two-dimensional space and must be a linear combination of those two vectors. So in the schematic, here is the vector |7/2, 5/2⟩. Now, allow me to define a vector orthogonal to it; let's just call it |x⟩, because it's unknown for the time being. So what do we know about it? First of all, it's orthogonal to |7/2, 5/2⟩. Secondly, since |x⟩ lies in the m = 5/2 eigenspace, it's an eigenstate of Jz: Jz|x⟩ = (5/2)|x⟩. We know that also. All right. Now, it turns out that this |x⟩ is also an eigenstate of J². You see it this way. Consider J²|x⟩ as a vector, and let Jz act on it. Since Jz commutes with J², I can bring it past the J²; it acts on |x⟩ and brings out the value 5/2. So this becomes (5/2) times J²|x⟩, and thus J²|x⟩ is another vector which is an eigenvector of Jz with eigenvalue 5/2. Therefore it must lie inside this plane, and so J²|x⟩ must be a linear combination of any pair of vectors spanning it. Let me choose |x⟩ and |7/2, 5/2⟩ as the pair: J²|x⟩ = a|x⟩ + b|7/2, 5/2⟩. Now, allow me to take this final equation and multiply it on the left by the bra ⟨7/2, 5/2|. On the left-hand side, J² can act to the left on this state, which we know is an eigenstate of J², bringing out (7/2)(7/2 + 1); what's left over is the scalar product ⟨7/2, 5/2|x⟩, but that is in fact 0. So the left-hand side gives me 0.
On the right-hand side, the first term is the coefficient a times ⟨7/2, 5/2|x⟩, which again gives 0. And then finally I get b times the squared norm of |7/2, 5/2⟩, which is 1, assuming it's normalized. So we conclude that the coefficient b is 0, and that term goes away. The important point here is that J²|x⟩ is a multiple of |x⟩, and therefore |x⟩ is an eigenvector of J². This is actually a simple generalization of the theorem that was proved in the first set of notes. All right. So this vector we've been calling |x⟩ has Jz eigenvalue 5/2, and it's also an eigenstate of J². So let's write it, if I can find some room to do it, as |x⟩ = |j, 5/2⟩ for some value of j. First of all, what is that value? Well, j must be greater than or equal to 5/2, because j is the maximum value of m. Could j be equal to, say, 7/2, or 9/2, or anything larger? Let's look at 7/2, some number greater than 5/2. Well, if this |x⟩ were a state with j = 7/2, then you see it would have the same quantum numbers as the state we got by lowering. We'd have two linearly independent states with the same quantum numbers, and you'd need an extra index to resolve the degeneracy if that were the case. But let's suppose for the sake of argument that both states had the same quantum numbers under J² and Jz. Then by applying raising operators, I would convert them into a pair of linearly independent vectors with j = 7/2, m = 7/2, up in the m = 7/2 eigenspace. But that's a one-dimensional space, not a two-dimensional space; there's only one vector like that.
So we can't have two linearly independent vectors with j = 7/2, and this turns out to be impossible. In fact, the same argument eliminates all values of j greater than 5/2, and the result is that j must equal 5/2. So our vector |x⟩ I can now write as |5/2, 5/2⟩: it's the stretched state of another irreducible subspace, now with a j value of 5/2 instead of 7/2. So our table now has another value, j = 5/2, which occurs with one vector in this two-dimensional eigenspace; and then by lowering it down to m = −5/2, we get a set of 6 vectors. Now we move on to the third eigenspace down, the one with m = 3/2, which is three-dimensional. By a similar argument, we find that there is a single vector in this space orthogonal to the other two, and it has j = 3/2; in fact, it's the stretched state of that multiplet. Lowering it all the way down with lowering operators gives a grand total of 4 states. At this point, all the dimensions have been used up, as you see: there's no room left in any row. So that's the end of it. In fact, if you add up the dimensions, you get 8 + 6 + 4 = 18. The result, in this case, is that the product of these two spaces with these values of j1 and j2 consists of three irreducible subspaces under rotations, with angular momentum values 3/2, 5/2, and 7/2. Moreover, each occurs only once, so there's no need for an extra index to resolve a degeneracy. What this means is that we now have a complete specification of the coupled basis, at least in the numerical example I've given up here. This is sometimes written in a shorthand notation: we say that if we take 5/2 and form the tensor product with 1, what we get is 3/2, direct sum 5/2, direct sum 7/2.
This is a shorthand notation for the tensor product of the corresponding ket spaces; only the j values are indicated. So the left side is the E1 ⊗ E2 which is over here on this board. As for the direct sum, it refers to the decomposition of the space into orthogonal subspaces — that's how we're using it. These correspond to the three different j values: there are three orthogonal subspaces, of dimensions 8, 6, and 4, inside the product space, and that's what the right-hand side means. This notation for the decomposition of the product space is consistent with the dimension count, which is 6 × 3 = 4 + 6 + 8, if you check the dimensions. The argument I've just been through for these specific values of j1 and j2 can be generalized to arbitrary values. If you do, what you find is that the total j goes from a minimum value, which is |j1 − j2|, in integer steps up to a maximum value, which is j1 + j2. For each one of the j values in this list, the magnetic quantum number runs from −j to +j — a complete irreducible subspace. What this means is that if I do the sum over j from the minimum |j1 − j2| up to the maximum j1 + j2, and inside it an m sum from −j to +j, then since the m sum gives 2j + 1, the dimensionality of each j subspace, the total must equal the product: the sum over j of (2j + 1) equals (2j1 + 1)(2j2 + 1). This is an algebraic relation which you can check by ordinary algebra, and it's a check on the dimension count of these rules — the rules for the combination of angular momenta, telling you which j's come out. The case up here, with the numbers we were looking at, is a specific example of it. All right. So that's the idea.
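The general rule and its dimension check are easy to verify. This small sketch (mine, with a hypothetical helper `couplings`) lists the allowed total j values and confirms that the subspace dimensions add up to (2j1+1)(2j2+1), both for the lecture's example and in general.

```python
from fractions import Fraction

def couplings(j1, j2):
    """Allowed total-j values: |j1 - j2|, ..., j1 + j2 in integer steps."""
    j = abs(j1 - j2)
    out = []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

j1, j2 = Fraction(5, 2), Fraction(1)
js = couplings(j1, j2)
assert js == [Fraction(3, 2), Fraction(5, 2), Fraction(7, 2)]  # 5/2 ⊗ 1 = 3/2 ⊕ 5/2 ⊕ 7/2

# dimension check: sum of (2j+1) over allowed j equals (2j1+1)(2j2+1)
assert sum(2*j + 1 for j in js) == (2*j1 + 1)*(2*j2 + 1)       # 4 + 6 + 8 = 18
```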
And it gives the right dimension count. The result is that we have two bases on the product space: the uncoupled basis, which I'm writing |j1 j2 m1 m2⟩, shorthand for the product |j1 m1⟩ ⊗ |j2 m2⟩; and what I'll call the coupled basis, |j m⟩. By the way, in the uncoupled basis j1 and j2 are fixed, and m1 and m2 range from −j1 to +j1 and from −j2 to +j2. Whereas in the coupled basis, j is not fixed: j has the range given up there, and for each value of j, m runs from −j to +j. In any case, these are two different bases, and the number of vectors in each is the same as the dimension of the space — 18 in the example we were looking at. And so there is a unitary transformation that connects one basis to the other. The components of the unitary matrix that carries out the transformation are just the scalar products of the basis vectors of one basis with the basis vectors of the other, ⟨j1 j2 m1 m2 | j m⟩. The scalar products going in the other direction, ⟨j m | j1 j2 m1 m2⟩, give the Hermitian conjugate of the unitary matrix. These matrix elements are called the Clebsch-Gordan coefficients. Now actually, Clebsch and Gordan were mathematicians who worked on problems of invariant theory several decades before the advent of quantum mechanics, so they had nothing to do with the use of these coefficients in quantum mechanics. In fact, attaching their names to these coefficients is a little misleading in terms of what they actually did; the most important thing they did was to work out these rules here, for how angular momenta combine when you take product spaces. But anyway, it's common to call these Clebsch-Gordan coefficients; people who are fastidious about credit sometimes call them vector coupling coefficients instead, and refer to the decomposition rule as the Clebsch-Gordan series.
It's probably more accurate historically to do that. In any case, the Clebsch-Gordan coefficients are merely the components of the unitary matrix connecting the two bases. Now, I will assume that you've had experience calculating Clebsch-Gordan coefficients by means of raising and lowering operators. If you haven't, I suggest you practice on 1/2 ⊗ 1 or something like that, but I'm not going to lecture on that. I will, however, say some things about the properties of the Clebsch-Gordan coefficients. So here they are. The first one is that they're real. The fact that they're real is not automatic; it follows from certain phase conventions that are used in defining the Clebsch-Gordan coefficients. The coupled states being constructed here are only defined as eigenstates, and eigenstates of any set of operators are only defined to within a phase. But there are reasonable phase conventions one can come up with. The standard ones are called the Condon-Shortley phase conventions, and they guarantee that the Clebsch-Gordan coefficients are real. In consequence, the two versions of the coefficient, where one is a priori the complex conjugate of the other, are in fact equal. The second property I want to mention is orthonormality, and this just has to do with the fact that the coefficients connect two orthonormal bases. There are really two orthonormality relations here. In the first, we sum over the basis vectors of the uncoupled basis: the sum over m1 and m2 of ⟨j m | j1 j2 m1 m2⟩ ⟨j1 j2 m1 m2 | j′ m′⟩. You can see that this is a resolution of the identity in which the uncoupled basis has been inserted between the scalar product of two basis vectors of the coupled basis.
And so, by the orthonormality of the coupled basis, it's just a product of deltas: δ(j, j′) δ(m, m′). Conversely, there's a sum the other way: the sum over j and m of ⟨j1 j2 m1 m2 | j m⟩ ⟨j m | j1 j2 m1′ m2′⟩, where j ranges over the allowed values given up there, and m goes from −j to +j. Well, you can see this is just a resolution of the identity using the coupled basis, and what's left over is the scalar product of two vectors of the uncoupled basis. So this is a Kronecker delta in m1 and m1′ times a Kronecker delta in m2 and m2′. These formulas are really very obvious if you think about insertions of the resolution of the identity. All right, so there we have orthonormality. The third property I want to mention is the selection rule, and the selection rule really comes from the fact that Jz is an additive quantum number, as I wrote up there. The selection rule is that the Clebsch-Gordan coefficient ⟨j1 j2 m1 m2 | j m⟩ equals 0 unless m = m1 + m2. And that is, as I say, simply because Jz is an additive quantum number: the total m value must equal the sum of the m values of the two factors, or else the coefficient is 0. Those are the three main properties of the Clebsch-Gordan coefficients that will be important for us. All right, now let me show you some things you can do with Clebsch-Gordan coefficients. Let's take a vector of the uncoupled basis, |j1 j2 m1 m2⟩, and let's expand it as a linear combination of vectors of the coupled basis, so I need a sum over j and m. Essentially, you do this just by inserting a resolution of the identity immediately in front of the vector in question: it becomes the sum over j and m of |j m⟩⟨j m| acting on the original vector |j1 j2 m1 m2⟩.
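These three properties can be checked with sympy's Clebsch-Gordan machinery (my illustration, using `sympy.physics.quantum.cg.CG`; the helper `ms` is a name of my own). The sketch verifies the selection rule and the first orthonormality relation for the lecture's example j1 = 5/2, j2 = 1.

```python
from sympy import Rational, simplify
from sympy.physics.quantum.cg import CG

j1, j2 = Rational(5, 2), 1
js = [Rational(3, 2), Rational(5, 2), Rational(7, 2)]   # allowed total j

def ms(j):
    """m values j, j-1, ..., -j."""
    return [j - k for k in range(int(2*j) + 1)]

# selection rule: <j1 j2 m1 m2 | j m> vanishes unless m = m1 + m2
assert CG(j1, Rational(5, 2), j2, 0, Rational(7, 2), Rational(7, 2)).doit() == 0

# first orthonormality relation: summing over the uncoupled basis
# gives delta_{j j'} delta_{m m'} (checked here with m = m' = 1/2)
m = Rational(1, 2)
for j in js:
    for jp in js:
        s = sum(CG(j1, m1, j2, m2, j, m).doit() * CG(j1, m1, j2, m2, jp, m).doit()
                for m1 in ms(j1) for m2 in ms(j2))
        assert simplify(s) == (1 if j == jp else 0)
```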
So you see, the vectors of one basis are linear combinations of the vectors of the other basis, with Clebsch-Gordan coefficients as the coefficients of the expansion. Now, I'd like to take this formula — I could do it the other way, too, expanding the coupled basis vectors in terms of the uncoupled ones, but this order leads where I want to go — and apply a rotation to both sides of this equation. In fact, there are three rotation operators here, all with the same axis and angle. Let me call the first one U1(n̂, θ); this is the same thing as e^(−(i/ħ) θ n̂·J1). Let me write U2(n̂, θ) for the same thing with 1 replaced by 2; this acts on the number-two ket space. And let's talk about an overall rotation operator U(n̂, θ) that acts on both spaces, which can be defined as e^(−(i/ħ) θ n̂·J) with the total J. In each space, you use the angular momentum vector that's appropriate for the given space. Now, the total J is the sum J1 + J2, so this final expression looks like e^(X+Y), an exponential of a sum of operators. Moreover, the operators X and Y commute with each other, because J1 and J2 act on different spaces. And if that's true, then an exponential like this behaves just as it would by the rules of ordinary numbers: it's the same thing as e^X times e^Y. So because J = J1 + J2, the exponential factors, and U(n̂, θ) = U1(n̂, θ) U2(n̂, θ). The overall rotation operator, in other words, is just the product of the rotations on the constituent spaces of the product space. All right.
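The factorization U = U1 ⊗ U2 is exactly the matrix identity e^(X+Y) = e^X e^Y for commuting X and Y. A numerical sketch (mine, not the lecture's; `jvec` is a hypothetical helper) checks it with matrix exponentials for an arbitrary axis and angle:

```python
import numpy as np
from scipy.linalg import expm

def jvec(j):
    """(Jx, Jy, Jz) for spin j in the basis m = j, ..., -j."""
    d = int(round(2*j)) + 1
    m = j - np.arange(d)
    jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), k=1)
    return ((jp + jp.T)/2, (jp - jp.T)/(2*1j), np.diag(m).astype(complex))

theta = 0.7
n = np.array([1.0, 2.0, 2.0])/3.0        # a unit rotation axis

J1, J2 = jvec(2.5), jvec(1.0)
U1 = expm(-1j*theta*sum(ni*Ji for ni, Ji in zip(n, J1)))
U2 = expm(-1j*theta*sum(ni*Ji for ni, Ji in zip(n, J2)))

# total J on the product space, and the overall rotation built from it
Jtot = [np.kron(a, np.eye(3)) + np.kron(np.eye(6), b) for a, b in zip(J1, J2)]
U = expm(-1j*theta*sum(ni*Ji for ni, Ji in zip(n, Jtot)))

# because [J1, J2] = 0, the exponential factors: U = U1 ⊗ U2
assert np.allclose(U, np.kron(U1, U2))
```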
For short, we'll just call these U1, U2, and U, but it's understood that they all have the same axis and angle. All right. Now, the equation in the box up here is a vector equation in the product space, so the rotation operator we have to use is U. I'd like to apply U to both sides of this equation: on the left side I'll use it in the form U1U2, and on the right side I'll just take it as U. So let's apply these rotations to the two sides. Allow me to start with the left side: U1U2 acting on |j1 m1⟩ ⊗ |j2 m2⟩, where I've rewritten the product vector in terms of the factors it's built from. Now, the U1 acts on the first factor and the U2 acts on the second factor, and except for an exchange of subscripts 1 and 2, it's really the same calculation, so let's do U1 acting on |j1 m1⟩. What does that equal? We can answer this by inserting a resolution of the identity right before the U. Let's write it as the sum over m1′ of |j1 m1′⟩ ⟨j1 m1′| U1 |j1 m1⟩. And I don't need a sum over j1, because j1 is just a fixed label of the basis, and U1 is diagonal in j. The matrix element that occurs here is a D-matrix, so this can be written as the sum over m1′ of |j1 m1′⟩ times the D-matrix D^j1 with m1′, m1 downstairs. And it will be understood that the D-matrix is parameterized by the same axis and angle as the U. All right? Of course, something similar holds for U2 acting on |j2 m2⟩. So applying U1U2 to the left-hand side — let me find a place where I can write the whole thing out — we get the sum over m1′ and m2′ of |j1 j2 m1′ m2′⟩ D^j1_{m1′ m1} D^j2_{m2′ m2}, which equals the rotated version of the right-hand side.
Well, on the right-hand side, we have U acting on the standard angular momentum basis vectors |j m> of the product space. And the calculation I just went through applies again, except with one subscript dropped: U acting on |j m> gives me a sum on an index m-prime and another D-matrix. So the right-hand side, written out fully, becomes a sum on j, m, and m-prime of the basis vectors |j m-prime>, the D-matrix D, j upstairs, m-prime m downstairs, and then the coefficient from the original expansion, <j m|j1 j2 m1 m2>. OK? So this is rotating both sides of the expansion. Now, I'd like to pick out the term that involves the product of the two D-matrices, which means I want to get rid of the ket that appears on the left. So let's multiply through by the bra <j1 j2 m1-prime m2-prime>, which picks out a single term in the sum on the left-hand side. And if we do, then we get this result, which expresses the product of two D-matrices as a linear combination of single D-matrices. It looks like this: D, j1 upstairs, m1-prime m1 downstairs, times D, j2 upstairs, m2-prime m2 downstairs, is equal to the sum on j, m, and m-prime of the Clebsch-Gordan coefficient <j1 j2 m1-prime m2-prime|j m-prime>, times the D-matrix D, j upstairs, m-prime m downstairs, times the Clebsch-Gordan coefficient <j m|j1 j2 m1 m2>. So this shows that the product of two D-matrices can be represented as a linear combination of other D-matrices, in which the j values that occur are determined by the rules for adding the angular momenta j1 and j2. By the way, it's possible to invert this to get a single D-matrix as a linear combination of products of D-matrices with other j's, and you can set it up so that those products involve lower j's. You can use this, as a matter of fact, to build up the D-matrices for higher values of j once you know them for lower j's.
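This Clebsch-Gordan series for the product of two D-matrices can be verified numerically. The following sketch (not from the lecture) takes j1 = j2 = 1/2, builds the d-matrices by exponentiating Jy (hbar = 1), and uses sympy's clebsch_gordan for the coefficients; it checks all sixteen index combinations:

```python
import numpy as np
from scipy.linalg import expm
from sympy import Rational
from sympy.physics.wigner import clebsch_gordan

def jy(j):
    """Jy in the standard basis m = j, j-1, ..., -j (hbar = 1)."""
    dim = int(round(2*j)) + 1
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        m = j - k
        jp[k-1, k] = np.sqrt(j*(j+1) - m*(m+1))   # <m+1|J+|m>
    return (jp - jp.T) / 2j

def dmat(j, beta):
    """Wigner d-matrix, d^j_{m'm}(beta) = <j m'| e^{-i beta Jy} |j m>."""
    return expm(-1j * beta * jy(j))

beta = 0.9                         # arbitrary rotation angle
half = Rational(1, 2)
d_half = dmat(0.5, beta)
dj = {0: dmat(0, beta), 1: dmat(1, beta)}
idx = lambda j, m: int(j - m)      # row/column index of m in the basis

ms = (half, -half)
for m1p in ms:
  for m2p in ms:
    for m1 in ms:
      for m2 in ms:
        lhs = d_half[idx(half, m1p), idx(half, m1)] * \
              d_half[idx(half, m2p), idx(half, m2)]
        mp, m = m1p + m2p, m1 + m2
        rhs = 0.0
        for j in (0, 1):           # j runs over |j1-j2| .. j1+j2
            if abs(mp) <= j and abs(m) <= j:
                rhs += float(clebsch_gordan(half, half, j, m1p, m2p, mp)) \
                       * dj[j][idx(j, mp), idx(j, m)] \
                       * float(clebsch_gordan(half, half, j, m1, m2, m))
        assert abs(lhs - rhs) < 1e-12
```

Only the terms with m' = m1' + m2' and m = m1 + m2 survive, so the triple sum in the formula collapses to a single sum on j here, exactly as the selection rules dictate.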
For example, if j1 and j2 are one-half, we know what the D-matrices are, and you can use this plus a table of Clebsch-Gordan coefficients to get the D-matrices for j equals 1, and so on. This is one of the ways of building up the D-matrices. In any case, for my purposes today, I want to use it in this form. I'm heading somewhere with this formula, which is that I'd like to connect it with the Ylm's. Let me remind you that Ylm of theta and phi is equal to the square root of 2l plus 1 over 4 pi, times the D-matrix D, l upstairs, m 0 downstairs, of the Euler angles phi, theta, 0, complex conjugated; that's the relation between the two. So I'm going to convert this formula into a formula involving Ylm's, and to do that I need the second index of each D-matrix to be 0. I hope you'll forgive me for doing this by erasing at the board, but it saves writing, and it will be done properly in the notes. First, I'm going to go through this equation and swap the m's and the m-primes, just because it's slightly more convenient: every m becomes an m-prime and vice versa, so the first indices of the D-matrices become unprimed and the second indices become primed. Now, the next thing I'm going to do with my fingers is to set the special case m1-prime equal to m2-prime equal to 0, because I want to get 0's into those second indices so it will look like Ylm's. So I set m1-prime and m2-prime equal to 0 here and here, and then the corresponding entries in this Clebsch-Gordan coefficient become 0 and 0. Now, however, by the selection rules of the Clebsch-Gordan coefficients, if m1-prime and m2-prime are equal to 0, then the total m-prime has to be 0 also, or else the Clebsch-Gordan coefficient vanishes. So the sum on m-prime only involves a single term, m-prime equal to 0, and I can put a 0 there and a 0 here.
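The quoted relation between Ylm and the D-matrix can itself be spot-checked numerically. This sketch is my own; it assumes the convention used here, D with l upstairs and m 0 downstairs evaluated at Euler angles (phi, theta, 0), equal to e^{-i m phi} times the small-d matrix element, and compares against the associated Legendre form of Ylm:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.special import lpmv

def jy(j):
    """Jy in the standard basis m = j, j-1, ..., -j (hbar = 1)."""
    dim = int(round(2*j)) + 1
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        m = j - k
        jp[k-1, k] = np.sqrt(j*(j+1) - m*(m+1))
    return (jp - jp.T) / 2j

l, m = 2, 1
theta, phi = 0.7, 1.3   # arbitrary sample point on the sphere

# D^l_{m0}(phi, theta, 0) = e^{-i m phi} d^l_{m0}(theta)
d = expm(-1j * theta * jy(l))
D_m0 = np.exp(-1j * m * phi) * d[l - m, l]

# Ylm from the associated Legendre function (scipy's lpmv includes
# the Condon-Shortley phase)
ylm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - m) / factorial(l + m)) \
      * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

# Ylm = sqrt((2l+1)/4pi) * conj(D^l_{m0}(phi, theta, 0))
assert abs(np.sqrt((2*l + 1) / (4*np.pi)) * np.conj(D_m0) - ylm) < 1e-12
```

Phase conventions for the D-matrix vary between textbooks, so treat the sign pattern here as one consistent choice rather than the unique one.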
And I'm going to drop that sum on m-prime. Now, next, as long as I'm using my fingers to erase things, let's turn the j's into l's, so it will look like Ylm's. I'll just uniformly replace them: j1, j2, and j become l1, l2, and l throughout. So we see we're getting there. Let me complex conjugate both sides; since the Clebsch-Gordan coefficients are real, I don't need to do anything to them. And now, apart from the square root of 2l plus 1 over 4 pi factors, I've got what I need to express this in terms of Ylm's. So I'll show you what you get when you do this. You get: Y of l1 m1 at some position theta, phi, times Y of l2 m2 at the same theta, phi, equals a sum on l and m. In fact, l ranges from the absolute value of l1 minus l2 up to l1 plus l2, and m runs as usual from minus l to plus l. First there's a square root factor, which I'll write down: the square root of 2l1 plus 1 times 2l2 plus 1, divided by 4 pi times 2l plus 1. Then there's the first Clebsch-Gordan coefficient, <l1 l2 m1 m2|l m>. Then there's Ylm of theta, phi. And then there's the final Clebsch-Gordan coefficient, <l1 l2 0 0|l 0>. OK? So we get this formula. Now, the comment about this formula is that, in the first place, the Ylm's form a complete orthonormal set of functions on the sphere, so you can expand any function on the sphere as a linear combination of Ylm's. And if you take the product of two Ylm's, of course you have a function on the sphere, which can be expanded. What this shows you explicitly is what the expansion coefficients are.
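A numerical spot-check of this product formula (not from the lecture): spherical harmonics are built from scipy's associated Legendre function, and the Clebsch-Gordan coefficients come from sympy. The particular l's, m's, and sample point are arbitrary choices:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv
from sympy.physics.wigner import clebsch_gordan

def ylm(l, m, theta, phi):
    """Y_lm(theta, phi); scipy's lpmv includes the Condon-Shortley phase."""
    if m < 0:
        # Y_l^{-|m|} = (-1)^{|m|} conj(Y_l^{|m|})
        return (-1)**(-m) * np.conj(ylm(l, -m, theta, phi))
    norm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

theta, phi = 0.7, 1.3
l1, m1, l2, m2 = 1, 1, 2, -1
m = m1 + m2                     # only m = m1 + m2 survives in the sum

lhs = ylm(l1, m1, theta, phi) * ylm(l2, m2, theta, phi)
rhs = 0.0
for l in range(abs(l1 - l2), l1 + l2 + 1):
    rhs += np.sqrt((2*l1 + 1) * (2*l2 + 1) / (4*np.pi * (2*l + 1))) \
           * float(clebsch_gordan(l1, l2, l, m1, m2, m)) \
           * float(clebsch_gordan(l1, l2, l, 0, 0, 0)) \
           * ylm(l, m, theta, phi)
assert abs(lhs - rhs) < 1e-12
```

Notice that the <l1 l2 0 0|l 0> coefficient vanishes unless l1 + l2 + l is even, so in this example the l = 2 term drops out automatically.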
There are some square root factors, and then the product of two Clebsch-Gordan coefficients. This turns out to be very useful in applications, which is why I'm deriving it now. To put this in a slightly different form, let's multiply both sides by the complex conjugate of a Ylm and integrate over the sphere. If we do that, we get an integral over solid angle on the left-hand side, and on the right-hand side the orthonormality of the Ylm's picks out a single term in the sum on l and m, so the Ylm drops out, and so on. And so you get this formula, which is essentially the same formula: the integral over solid angle of Ylm star times Y of l1 m1 times Y of l2 m2 is equal to the same square root factor as above, times the Clebsch-Gordan coefficient <l1 l2 m1 m2|l m>, times the Clebsch-Gordan coefficient <l1 l2 0 0|l 0>. This is sometimes called the three-Ylm formula; it allows you to do the integral of a product of three Ylm's, getting the answer in terms of Clebsch-Gordan coefficients. This is all I want to say about the problem of addition of angular momentum, and I'd like to turn now to a new topic, which is the transformation properties of operators under rotations; let's make a start on it. Again, let's suppose we have a ket space for some physical system, let a state psi belong to it, and let's suppose that we have rotation operators that act on this ket space. Then it's possible to define the rotated state psi-prime, which is the rotation operator U of R acting on psi, where R is the classical rotation. I'll say again that we're only talking about proper rotations for now; parity is an improper rotation, and we'll deal with that later. I'll remind you also that for half-integer angular momentum, this operator is only defined to within a sign; there are actually two such operators.
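The three-Ylm integral can be checked against sympy's gaunt function, which computes the integral of three unconjugated spherical harmonics; the conjugated Ylm star is handled with the identity Y*_{lm} = (-1)^m Y_{l,-m}. A short sketch (mine, not the lecture's):

```python
from math import sqrt, pi
from sympy.physics.wigner import gaunt, clebsch_gordan

l1, m1, l2, m2 = 1, 1, 2, -1
m = m1 + m2   # the integral vanishes unless m = m1 + m2

for l in range(abs(l1 - l2), l1 + l2 + 1):
    # LHS: integral over solid angle of Y*_{lm} Y_{l1 m1} Y_{l2 m2},
    # via gaunt (all three unconjugated) and Y*_{lm} = (-1)^m Y_{l,-m}
    lhs = float((-1)**m * gaunt(l1, l2, l, m1, m2, -m))
    # RHS: square-root factor times the two Clebsch-Gordan coefficients
    rhs = sqrt((2*l1 + 1) * (2*l2 + 1) / (4*pi*(2*l + 1))) \
          * float(clebsch_gordan(l1, l2, l, m1, m2, m)) \
          * float(clebsch_gordan(l1, l2, l, 0, 0, 0))
    assert abs(lhs - rhs) < 1e-12
```

For l = 2 both sides are zero by parity, since l1 + l2 + l is odd there; the other two l values give nonzero matching values.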
Now, in addition to saying how states transform under rotations, we'd like to have a way of talking about how operators transform under rotations. So if A is the old operator and A-prime is the rotated operator, the question is: what should be the definition of the rotated operator? The definition that we will adopt is that the expectation value of the rotated operator in the rotated state should be equal to the expectation value of the original operator in the original state. In other words, we require that psi-prime sandwiched around A-prime be equal to psi sandwiched around A. This is going to be our definition of A-prime. However, since psi-prime is U times psi, the left-hand side is psi, U-dagger, A-prime, U, psi. And since these have to be equal for all choices of states, this implies that U-dagger A-prime U is equal to A. Or, moving the U's over to the other side, we get the answer to our question, the definition of the rotated operator: A-prime is equal to U of R, times A, times U of R dagger, like this. So this is the definition of the rotated operator, and this is where I'll have to take off until next time.
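The defining property, that the rotated operator has the same expectation value in the rotated state as the original operator in the original state, can be illustrated with a short numerical check (my own, not the lecture's) on a spin-1/2 system:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A random Hermitian "observable" A and a random normalized state psi
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Rotation U = exp(-i theta n.S), with S = sigma/2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
n = np.array([1.0, 2.0, 2.0]) / 3.0   # a unit vector (arbitrary)
theta = 0.8
U = expm(-1j * theta * (n[0]*sx + n[1]*sy + n[2]*sz) / 2)

psi_rot = U @ psi
A_rot = U @ A @ U.conj().T            # A' = U A U-dagger

# <psi'|A'|psi'> equals <psi|A|psi>
lhs = psi_rot.conj() @ A_rot @ psi_rot
rhs = psi.conj() @ A @ psi
assert abs(lhs - rhs) < 1e-12
```

The check works for any state and any unitary U, which is the point: the definition A' = U A U-dagger is forced by requiring the equality for all states.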